Search results for: G-θ method
895 Mechanism of Veneer Colouring for Production of Multilaminar Veneer from Plantation-Grown Eucalyptus globulus
Authors: Ngoc Nguyen
Abstract:
Large plantations of Eucalyptus globulus have been established to produce pulpwood. This resource is not well suited to the production of decorative products, principally due to low wood grades and a “dull” appearance, but many trials have already been undertaken for the production of veneer and veneer-based engineered wood products, such as plywood and laminated veneer lumber (LVL). The manufacture of veneer-based products has recently been identified as an unprecedented opportunity to promote higher-value utilisation of plantation resources. However, many uncertainties remain regarding the impacts of the inferior wood quality of young plantation trees on product recovery and value, and with respect to optimal processing techniques. Moreover, the quality of veneer and veneer-based products is far from optimal, as the trees are young and have small diameters, and the veneers show significant colour variation, which reduces the added value of the final products. Developing production methods that enhance the appearance of low-quality veneer would provide great potential for the production of high-value wood products such as furniture, joinery, flooring and other appearance products. One method of enhancing the appearance of low-quality veneer, developed in Italy, involves the production of multilaminar veneer, also named “reconstructed veneer”. An important stage of multilaminar production is colouring the veneer, which can be achieved by dyeing it with dyes of different colours depending on the type of appearance product, its design and market demand. Although veneer dyeing technology is well advanced in Italy, it has focused on plantation-grown poplar veneer, whose wood is characterized by low density, even colour, few defects and high permeability. Conversely, the majority of plantation eucalypts have medium to high density, many defects, uneven colour and low permeability. Therefore, a detailed study is required to develop dyeing methods suitable for colouring eucalypt veneers. A brown reactive dye is used for the veneer colouring process. Veneers from sapwood and heartwood at two moisture content levels are used in the colouring experiments: green veneer and veneer dried to 12% MC. Prior to dyeing, all samples are treated. Both soaking (dipping) and vacuum-pressure methods are used in the study to compare the results and select the most efficient method for veneer dyeing. To date, colour measurements in the CIELAB colour system have shown significant differences in the colour of undyed veneers produced from heartwood. According to the colour measurements, the colour became moderately darker with increasing sodium chloride concentration, compared to the control samples. It is difficult to identify the most suitable dye solution at this stage, as trials varying dye concentration, dyeing temperature and dyeing time have not yet been completed. Once all trials are completed, the dye will be applied with and without UV absorbent using the optimal veneer colouring parameters.
Keywords: Eucalyptus globulus, veneer colouring/dyeing, multilaminar veneer, reactive dye
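The colour comparisons above rely on CIELAB coordinates; a minimal sketch of the kind of colour-difference computation involved is given below. The ΔE*ab (CIE76) formula is standard, but the sample values are hypothetical, not measurements from the study:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference between two CIELAB colours (L*, a*, b*)."""
    dL, da, db = (c1 - c2 for c1, c2 in zip(lab1, lab2))
    return math.sqrt(dL**2 + da**2 + db**2)

# Hypothetical readings: an undyed control veneer vs. a dyed heartwood sample.
control = (62.0, 8.5, 21.0)
dyed = (48.5, 10.2, 19.4)
print(f"dE*ab = {delta_e_ab(control, dyed):.1f}")  # larger = more distinct colour difference
```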
894 Evaluation of Rheological Properties, Anisotropic Shrinkage, and Heterogeneous Densification of Ceramic Materials during Liquid Phase Sintering by Numerical-Experimental Procedure
Authors: Hamed Yaghoubi, Esmaeil Salahi, Fateme Taati
Abstract:
The effective shear and bulk viscosity, as well as the dynamic viscosity, describe the rheological properties of a ceramic body during the liquid phase sintering process. The rheological parameters depend on the physical and thermomechanical characteristics of the material, such as relative density, temperature, grain size, diffusion coefficient and activation energy. The main goal of this research is to acquire a comprehensive understanding of the response of an incompressible viscous ceramic material during the liquid phase sintering process, including stress-strain relations, sintering and hydrostatic stress, and the prediction of anisotropic shrinkage and heterogeneous densification as a function of sintering time, while including the simultaneous influence of the gravity field and frictional forces. After raw materials analysis, a standard hard porcelain mixture was designed and prepared as the ceramic body. Three different experimental configurations were designed: midpoint deflection, sinter bending, and free sintering samples. The numerical method for the ceramic specimens during the liquid phase sintering process is implemented in the CREEP user subroutine in ABAQUS. The numerical-experimental procedure reveals the anisotropic behavior, clear differences in spatial displacement along the three directions, and the incompressibility of the ceramic samples during the sintering process. An anisotropic shrinkage factor has been proposed to investigate the shrinkage anisotropy. It has been shown that the shrinkage along the axis normal to the casting direction is about 1.5 times larger than that along the casting direction, and that the gravitational force in pyroplastic deformation intensifies the shrinkage anisotropy more than in the free sintering sample. The lowest and greatest equivalent creep strains occur at the intermediate zone and around the central line of the midpoint distorted sample, respectively. In the sinter bending test sample, the equivalent creep strain approaches its maximum near the contact area with the refractory support. The inhomogeneity in von Mises, pressure, and principal stresses intensifies the relative density non-uniformity in all samples except the free sintering one. The symmetrical distribution of stress around the center of the free sintering sample hinders pyroplastic deformation. Densification results confirmed that the effective bulk viscosity was well defined by relative density values. The stress analysis confirmed that the sintering stress exceeds the hydrostatic stress from the start to the end of the sintering time so that, from both theoretical and experimental points of view, the sintering process runs to completion.
Keywords: anisotropic shrinkage, ceramic material, liquid phase sintering process, rheological properties, numerical-experimental procedure
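The abstract quantifies anisotropy as the ratio of shrinkage normal to the casting direction to shrinkage along it (about 1.5). A minimal sketch of that calculation follows; the assumption that the paper's anisotropic shrinkage factor is this simple strain ratio, and the dimensions used, are illustrative only:

```python
def linear_shrinkage(initial, final):
    """Linear shrinkage strain of one dimension after sintering."""
    return (initial - final) / initial

# Hypothetical green vs. sintered dimensions (mm) of a cast bar.
casting_dir = linear_shrinkage(initial=100.0, final=92.0)  # along casting direction
normal_dir = linear_shrinkage(initial=20.0, final=17.6)    # normal to casting direction

anisotropic_shrinkage_factor = normal_dir / casting_dir
print(f"k = {anisotropic_shrinkage_factor:.2f}")  # ~1.5 indicates strong anisotropy
```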
893 Blackcurrant-Associated Rhabdovirus: New Pathogen for Blackcurrants in the Baltic Sea Region
Authors: Gunta Resevica, Nikita Zrelovs, Ivars Silamikelis, Ieva Kalnciema, Helvijs Niedra, Gunārs Lācis, Toms Bartulsons, Inga Moročko-Bičevska, Arturs Stalažs, Kristīne Drevinska, Andris Zeltins, Ina Balke
Abstract:
Newly discovered viruses provide novel knowledge for basic phytovirus research, serve as tools for biotechnology, and can be helpful in the identification of epidemic outbreaks. Blackcurrant-associated rhabdovirus (BCaRV) was discovered in USA germplasm collection samples originating from Russia and France. As it was reported in one accession originating from France, it is unclear whether the material was already infected when it entered the USA or became infected while in the collection in the USA. For this reason, BCaRV was defined as a non-EU virus. According to the ICTV classification, BCaRV is the representative of the species Blackcurrant betanucleorhabdovirus in the genus Betanucleorhabdovirus (family Rhabdoviridae). Nevertheless, the impact of BCaRV on its host, its transmission mechanisms and its vectors are still unknown. In an RNA-seq data pool from a high-throughput sequencing (HTS) study of resistance genes in Ribes plants, we observed differences between the gene transcript heat maps of sample groups. Additional analysis of the whole data pool (393,660,492 150-bp read pairs in total) with rnaSPAdes v3.13.1 yielded a 14,424-base contig with an average coverage of 684x that shared 99.5% identity with the previously reported first complete genome of BCaRV (MF543022.1) according to an EMBOSS Needle alignment. This finding proved the presence of BCaRV in the EU and indicated that it might be a relevant pathogen. In this study, leaf tissue from twelve asymptomatic blackcurrant cv. Mara Eglite plants (which tested negative for blackcurrant reversion virus (BRV)) from Dobele, Latvia (56°36'31.9"N, 23°18'13.6"E) was collected and used for total RNA isolation with the RNeasy Plant Mini Kit with minor modifications, followed by plant rRNA removal with a RiboMinus Plant Kit for RNA-Seq. HTS libraries were prepared using the MGI Easy RNA Directional Library Prep Set for 16 reactions to obtain 150-bp paired-end reads. Libraries were pooled, circularized, cleaned, and sequenced on a DNBSEQ-G400 using a PE150 flow cell. Additionally, all samples were tested by RT-PCR, and the amplicons were directly sequenced by a Sanger-based method. The contig representing the genome of BCaRV isolate Mara Eglite was deposited at the European Nucleotide Archive under accession number OU015520. These findings constitute the second evidence of the presence of this particular virus in the EU, and further research on BCaRV prevalence in Ribes from other geographical areas should be performed. As there is no information on the impact of BCaRV on its host, this should be investigated, particularly given that mixed infections with BRV and nucleorhabdoviruses have been reported.
Keywords: BCaRV, Betanucleorhabdovirus, Ribes, RNA-seq
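The 99.5% identity figure above comes from a global (Needleman-Wunsch) alignment; a minimal sketch of how percent identity is typically computed from an already-aligned sequence pair is shown below. The toy sequences are illustrative, not the BCaRV data:

```python
def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Identity over all alignment columns (gaps count as non-identical),
    similar in spirit to the identity reported by EMBOSS Needle."""
    assert len(aligned_a) == len(aligned_b)
    matches = sum(a == b and a != "-" for a, b in zip(aligned_a, aligned_b))
    return 100.0 * matches / len(aligned_a)

# Toy example of two gapped, globally aligned fragments.
a = "ATGCTAGC-TAGCTAAC"
b = "ATGCTAGCATAGCTAGC"
print(f"{percent_identity(a, b):.1f}% identity")
```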
892 Effects of Prescribed Surface Perturbation on NACA 0012 at Low Reynolds Number
Authors: Diego F. Camacho, Cristian J. Mejia, Carlos Duque-Daza
Abstract:
The recent widespread use of Unmanned Aerial Vehicles (UAVs) has fueled a renewed interest in the efficiency and performance of airfoils, particularly for applications at the low and moderate Reynolds numbers typical of this kind of vehicle. Most previous efforts in the aeronautical industry regarding aerodynamic efficiency have focused on high Reynolds number applications, typical of commercial airliners and large aircraft. However, in order to increase efficiency and boost the performance of these UAVs, it is necessary to explore new alternatives in terms of airfoil design and the application of drag reduction techniques. The objective of the present work is to analyze and compare the performance of a standard NACA 0012 profile against one featuring a wall protuberance or surface perturbation. A computational model based on the finite volume method is employed to evaluate the effect of the presence of geometrical distortions on the wall. The performance evaluation is carried out in terms of variations in the drag and lift coefficients for the given profile. In particular, the aerodynamic performance of the new design, i.e. the airfoil with a surface perturbation, is examined under conditions of incompressible and subsonic flow in the transient state. The perturbation considered is a shaped protrusion prescribed as a small surface deformation on the top wall of the aerodynamic profile. The ultimate goal of including such a controlled, smooth artificial roughness was to alter the turbulent boundary layer. It is shown in the present work that such a modification has a dramatic impact on the aerodynamic characteristics of the airfoil and, if properly adjusted, a positive one. The computational model was implemented using the unstructured, FVM-based open source C++ platform OpenFOAM. A number of numerical experiments were carried out at a Reynolds number of 5x10⁴, based on the chord length and the free-stream velocity, and at angles of attack of 6° and 12°. A Large Eddy Simulation (LES) approach was used, together with the dynamic Smagorinsky approach as the subgrid scale (SGS) model, in order to account for the effect of the small turbulent scales. The impact of the surface perturbation on the performance of the airfoil is judged in terms of changes in the drag and lift coefficients, as well as in terms of alterations of the main characteristics of the turbulent boundary layer on the upper wall. A dramatic change in the overall performance can be appreciated, including an arguably large increase in the lift-to-drag ratio at all angles, and a size reduction of the laminar separation bubble (LSB) at the 12° angle of attack.
Keywords: CFD, LES, lift-to-drag ratio, LSB, NACA 0012 airfoil
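For orientation, the chord-based Reynolds number and the lift-to-drag metric used to judge the two profiles can be sketched as below; the air properties and force coefficients here are illustrative placeholders, not results from the simulations:

```python
def chord_reynolds(u_inf, chord, nu):
    """Reynolds number based on free-stream velocity and chord length."""
    return u_inf * chord / nu

def lift_to_drag(cl, cd):
    """Aerodynamic efficiency from lift and drag coefficients."""
    return cl / cd

# Illustrative values: chord and velocity chosen so that Re ~ 5e4 in air.
nu_air = 1.5e-5  # kinematic viscosity of air, m^2/s
print(f"Re = {chord_reynolds(u_inf=7.5, chord=0.10, nu=nu_air):.0f}")

# Hypothetical coefficients for baseline vs. perturbed airfoil at 6 deg.
print(f"baseline  L/D = {lift_to_drag(0.62, 0.045):.1f}")
print(f"perturbed L/D = {lift_to_drag(0.66, 0.040):.1f}")
```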
891 A Method Intensive Top-down Approach for Generating Guidelines for an Energy-Efficient Neighbourhood: A Case of Amaravati, Andhra Pradesh, India
Authors: Rituparna Pal, Faiz Ahmed
Abstract:
Neighbourhood energy efficiency is a recently emerged term addressing the quality of the urban stratum of the built environment in terms of various covariates of sustainability. The sustainability paradigm in developed nations has encouraged policymakers in developing cities to envision plans under the aegis of urban-scale sustainability. The importance of neighbourhood energy efficiency has been realized only lately, just as the cities, towns and other areas comprising this massive global urban stratum have started facing strong blows from climate change, the energy crisis, rising costs, and an alarming shortfall in the equity that urban areas require. This step towards urban sustainability can therefore more easily be described as a 'retrofit action', intended to cover the already affected urban structure. Even if we pursue energy efficiency for existing cities and urban areas, the initial layer remains, for which a complete model of urban sustainability still lacks definition. Urban sustainability is a broadly used term, with countless parameters and policies through which the loop can be closed. Neighbourhood energy efficiency can be an integral part of it, in which neighbourhood-scale indicators, block-level indicators and building physics parameters can be understood, analyzed and synthesized to derive guidelines for urban-scale sustainability. The future of neighbourhood energy efficiency lies not only in energy efficiency itself but also in important parameters such as quality of life, access to green space, access to daylight, outdoor comfort, and natural ventilation. Apart from designing less energy-hungry buildings, it is necessary to create a built environment that places less stress on buildings to consume energy. A great deal of literature analysis has been done in Western countries, prominently in Spain, Paris and Hong Kong, leaving a distinct gap in the Indian scenario in exploring sustainability at the urban stratum. The study site has been selected in the upcoming capital city of Amaravati, and the approach can be replicated for similar neighbourhood typologies in the area. The paper proposes a methodical intent to quantify energy and sustainability indices in detail, involving several macro-, meso- and micro-level covariates and parameters. Several iterations have been made at both macro and micro levels and have been subjected to simulation, computation and mathematical models, and finally to comparative analysis. Parameters at all levels are analyzed to identify best-case scenarios, which are in turn extrapolated to the macro level, finally yielding a proposed model for an energy-efficient neighbourhood and worked-out guidelines with derived significances and correlations.
Keywords: energy quantification, macro scale parameters, meso scale parameters, micro scale parameters
890 Photoemission Momentum Microscopy of Graphene on Ir (111)
Authors: Anna V. Zaporozhchenko, Dmytro Kutnyakhov, Katherina Medjanik, Christian Tusche, Hans-Joachim Elmers, Olena Fedchenko, Sergey Chernov, Martin Ellguth, Sergej A. Nepijko, Gerd Schoenhense
Abstract:
Graphene reveals a unique electronic structure that predetermines many intriguing properties, such as massless charge carriers, optical transparency and a high velocity of fermions at the Fermi level, opening a wide horizon of future applications. Hence, a detailed investigation of the electronic structure of graphene is crucial. The method of choice is angle-resolved photoelectron spectroscopy (ARPES). Here we present experiments using time-of-flight (ToF) momentum microscopy, an alternative form of ARPES using full-field imaging of the whole Brillouin zone (BZ) and simultaneous acquisition of up to several hundred energy slices. Unlike conventional ARPES, k-microscopy is not limited in simultaneous k-space access. We have recorded the whole first BZ of graphene on Ir(111), including all six Dirac cones. As the excitation source we used synchrotron radiation from BESSY II (Berlin) at the U125-2 NIM, providing linearly polarized (both p- and s-polarized) VUV radiation. The instrument uses a delay-line detector for single-particle detection up to the 5 Mcps range and parallel energy detection via ToF recording. In this way, we gather a 3D data stack I(E, kx, ky) of the full valence electronic structure in approx. 20 minutes. Band dispersion stacks were measured in the energy range from 14 eV up to 23 eV in steps of 1 eV. The linearly dispersing graphene bands at all six K and K' points were recorded simultaneously. We find clear features of hybridization with the substrate, in particular in the linear dichroism in the angular distribution (LDAD). Recording the whole Brillouin zone of graphene/Ir(111) revealed new features. First, the intensity differences (i.e. the LDAD) are very sensitive to the interaction of the graphene bands with substrate bands. Second, the dark corridors were investigated in detail for both p- and s-polarized radiation. They appear as local distortions of the photoelectron current distribution and are induced by quantum mechanical interference of the graphene sublattices. The dark corridors are located in different areas of the six Dirac cones and show chiral behaviour with a mirror plane along the vertical axis. Moreover, two out of six have an oval shape while the rest are more circular, clearly indicating an orientation dependence with respect to the E vector of the incident light. Third, a pattern of faint but very sharp lines is visible at energies around 22 eV that is strongly reminiscent of Kikuchi lines in diffraction. In conclusion, the simultaneous study of all six Dirac cones is crucial for a complete understanding of the dichroism phenomena and the dark corridor.
Keywords: band structure, graphene, momentum microscopy, LDAD
889 Interacting with Multi-Scale Structures of Online Political Debates by Visualizing Phylomemies
Authors: Quentin Lobbe, David Chavalarias, Alexandre Delanoe
Abstract:
The ICT revolution has given birth to an unprecedented world of digital traces and has impacted a wide range of knowledge-driven domains such as science, education and policy making. Nowadays, we are fed daily by unlimited flows of articles, blogs, messages, tweets, etc. The internet itself can thus be considered an unsteady hyper-textual environment where websites emerge and expand every day. But there are structures inside knowledge. A given text can always be studied in relation to others or in light of a specific socio-cultural context. By way of their textual traces, human beings are calling out to each other: hypertext citations, retweets, vocabulary similarity, etc. We are in fact the architects of a giant web of elements of knowledge whose structures and shapes convey their own information. The global shapes of these digital traces represent a source of collective knowledge, and the question of their visualization remains an open challenge. How can we explore, browse and interact with such shapes? In order to navigate across these growing constellations of words and texts, interdisciplinary innovations are emerging at the crossroads of the social and computational sciences. In particular, complex systems approaches now make it possible to reconstruct the hidden structures of textual knowledge by means of multi-scale objects of research such as semantic maps and phylomemies. Phylomemy reconstruction is a generic method related to the co-word analysis framework. Phylomemies aim to reveal the temporal dynamics of large corpora of textual content by performing inter-temporal matching on extracted knowledge domains in order to identify their conceptual lineages. This study addresses the question of visualizing the global shapes of the online political discussions related to the French presidential and legislative elections of 2017. We build phylomemies on top of a dedicated collection of thousands of French political tweets enriched with archived contemporary news web articles. Our goal is to reconstruct the temporal evolution of the online debates fueled by each political community during the elections. To that end, we introduce an iterative data exploration methodology implemented and tested within the free software Gargantext. There, we combine synchronic and diachronic axes of visualization to reveal the dynamics of our corpora of tweets and web pages, as well as their inner syntagmatic and paradigmatic relationships. In doing so, we aim to provide researchers with innovative methodological means to explore online semantic landscapes in a collaborative and reflective way.
Keywords: online political debate, French election, hyper-text, phylomemy
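The inter-temporal matching step mentioned above links knowledge domains across time slices to form lineages; a minimal sketch of one common way to do this (Jaccard similarity between term clusters of consecutive periods, with a threshold) is given below. The actual matching used in Gargantext may differ, and the clusters and threshold here are hypothetical:

```python
def jaccard(a: set, b: set) -> float:
    """Overlap of two term clusters."""
    return len(a & b) / len(a | b)

def match_lineages(period_t, period_t1, threshold=0.3):
    """Link each cluster at time t to similar-enough clusters at t+1."""
    links = []
    for i, c1 in enumerate(period_t):
        for j, c2 in enumerate(period_t1):
            s = jaccard(c1, c2)
            if s >= threshold:
                links.append((i, j, round(s, 2)))
    return links

# Hypothetical term clusters extracted from two consecutive time slices.
t0 = [{"immigration", "border", "security"}, {"labour", "reform", "unions"}]
t1 = [{"immigration", "asylum", "border"}, {"labour", "reform", "strike"}]
print(match_lineages(t0, t1))  # [(0, 0, 0.5), (1, 1, 0.5)]
```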
888 A Finite Element Analysis of Hexagonal Double-Arrowhead Auxetic Structure with Enhanced Energy Absorption Characteristics and Stiffness
Abstract:
Auxetic materials, an emerging class of artificially designed metamaterials, have attracted growing attention due to their promising negative Poisson's ratio behavior and tunable properties. Conventional auxetic lattice structures, in which the deformation process is governed by a bending-dominated mechanism, have faced the limitation of poor mechanical performance in many potential engineering applications. Recently, both load-bearing and energy absorption capabilities have become crucial considerations in auxetic structure design. This study reports the finite element analysis of a class of hexagonal double-arrowhead auxetic structures with enhanced stiffness and energy absorption performance. The structural design was developed by extending the traditional double-arrowhead honeycomb to a hexagonal frame; the stretching-dominated deformation mechanism was established according to Maxwell's stability criterion. The finite element (FE) models of the 2D lattice structures, established with stainless steel material, were analyzed in ABAQUS/Standard to predict the in-plane structural deformation mechanism, failure process, and compressive elastic properties. Based on the computational simulations, a parametric analysis was carried out to investigate the effect of the structural parameters on Poisson's ratio and the mechanical properties. A geometrical optimization was then implemented to achieve the optimal Poisson's ratio for maximum specific energy absorption. In addition, the optimized 2D lattice structure was converted into a corresponding 3D geometric configuration using the orthogonal splicing method. The numerical results for the 2D and 3D structures under compressive quasi-static loading conditions were compared separately with the traditional double-arrowhead re-entrant honeycomb in terms of specific Young's modulus, Poisson's ratio, and specific energy absorption. As a result, the energy absorption capability and stiffness are significantly reinforced over a wide range of Poisson's ratios compared to the traditional double-arrowhead re-entrant honeycomb. The auxetic behavior, energy absorption capability, and yield strength of the proposed structure are adjustable through different combinations of joint angle, strut thickness, and the length-width ratio of the representative unit cell. The numerical predictions in this study suggest that the proposed hexagonal double-arrowhead structure could be a suitable candidate for energy absorption applications with a constant requirement for load-bearing capacity. For future research, experimental analysis is required to validate the numerical simulation.
Keywords: auxetic, energy absorption capacity, finite element analysis, negative Poisson's ratio, re-entrant hexagonal honeycomb
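Maxwell's stability criterion, used above to establish the stretching-dominated mechanism, can be sketched for a 2D pin-jointed frame as follows; the strut/joint counts below are placeholders, not the actual unit-cell topology of the paper:

```python
def maxwell_2d(b: int, j: int) -> int:
    """Maxwell number M = b - 2j + 3 for a 2D pin-jointed frame
    (b struts, j joints, 3 rigid-body constraints).
    M < 0: a mechanism exists -> bending-dominated once joints are rigid.
    M >= 0: no mechanism -> candidate stretching-dominated lattice."""
    return b - 2 * j + 3

# Hypothetical unit-cell topologies.
print(maxwell_2d(b=11, j=7))  # 0  -> just rigid, stretching-dominated
print(maxwell_2d(b=9, j=7))   # -2 -> under-constrained, bending-dominated
```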
887 The Effect of Aerobics and Yogic Exercise on Selected Physiological and Psychological Variables of Middle-Aged Women
Authors: A. Pallavi, N. Vijay Mohan
Abstract:
A nation can be economically progressive only when its citizens have sufficient capacity to work efficiently and increase productivity. Good health must therefore be regarded as a primary need of the community. It supports the growth and development of body and mind, which in turn leads to the progress and prosperity of the nation. Optimum growth is a necessity for efficient existence in a biologically adverse and economically competitive world, and for the execution of daily routine work. Yoga is a method, or a system, for the complete development of the personality of a human being; it can be further described as the all-round development of the body, mind, morality, intellect and soul. Sri Aurobindo defines yoga as 'a methodical effort towards self-perfection by the development of the potentialities in the individual.' Aerobic exercise is any activity that uses large muscle groups, can be maintained continuously, and is rhythmic in nature; it is a type of exercise that overloads the heart and lungs and causes them to work harder than at rest. The important idea behind aerobic exercise today is to get up and get moving: there are more activities than ever to choose from, whether a new activity or an old one; find something you enjoy doing that keeps your heart rate elevated for a continuous period, and get moving towards a healthier life. Middle-aged participants were selected and served as the subjects of this study. The selected subjects were in the age group of 30 to 40 years. By reviewing the literature and consulting experts in yoga and aerobic training, the investigator chose the variables specifically relevant to this age group. The selected physiological variables are pulse rate, diastolic blood pressure, systolic blood pressure, percent body fat and vital capacity. The selected psychological variables are job anxiety and occupational stress. The study was formulated as a random group design consisting of an aerobic exercise group and a yogic exercise group. The subjects (N=60) were randomly divided into three equal groups of twenty each. The groups were designated as follows: 1. experimental group I, aerobic exercise group; 2. experimental group II, yogic exercise group; 3. control group. All groups were given a pre-test prior to the experimental treatment. The experimental groups participated in their respective training for a duration of twenty-four weeks, six days a week, throughout the study. The tests were administered prior to training (pre-test), after the twelfth week (second test) and after the twenty-fourth week (post-test) of the training schedule.
Keywords: pulse rate, diastolic blood pressure, systolic blood pressure, percent body fat and vital capacity, psychological variables, job anxiety, occupational stress, aerobic exercise, yogic exercise
886 Optimization of Ultrasound-Assisted Extraction of Oil from Spent Coffee Grounds Using a Central Composite Rotatable Design
Authors: Malek Miladi, Miguel Vegara, Maria Perez-Infantes, Khaled Mohamed Ramadan, Antonio Ruiz-Canales, Damaris Nunez-Gomez
Abstract:
Coffee is the second most consumed commodity worldwide, yet it also generates colossal waste. Proper management of coffee waste can be achieved by converting it into products with higher added value, in order to achieve sustainability of the economic and ecological footprint and protect the environment. On this basis, studies looking at the recovery of coffee waste have become more relevant in recent decades. Spent coffee grounds (SCGs), resulting from brewing coffee, represent the major waste produced by the coffee industry. The fact that SCGs have no economic value, are abundant in nature and industry, do not compete with agriculture and, above all, have a high oil content (between 7-15% of total dry matter weight, depending on the coffee variety, Arabica or Robusta) encourages their use as a sustainable feedstock for bio-oil production. Bio-oil extraction is a crucial step towards biodiesel production via the transesterification process. However, the conventional methods used for oil extraction are not recommended due to their high consumption of energy and time and their generation of toxic volatile organic solvents. Thus, finding a sustainable, economical and efficient extraction technique is crucial to scaling up the process and ensuring more environmentally friendly production. From this perspective, the aim of this work was a statistical study to establish an efficient strategy for oil extraction with n-hexane using indirect sonication. Mixed Arabica and Robusta coffee waste was used in this work. The effects of temperature, sonication time and solvent-to-solid ratio on the oil yield were statistically investigated as independent variables in a Central Composite Rotatable Design (CCRD) 2³. The results were analyzed using STATISTICA 7 StatSoft software. The CCRD showed the significance of all the variables tested (P < 0.05) on the process output. Validation of the model by analysis of variance (ANOVA) showed a good fit of the results obtained for a 95% confidence interval, and the plot of predicted versus experimental values confirmed the satisfactory correlation of the model results. In addition, the identification of the optimum experimental conditions was based on the study of the response surface graphs (2-D and 3-D) and the critical statistical values. Based on the CCRD results, 29 °C, 56.6 min, and a solvent-to-solid ratio of 16 were the best experimental conditions defined statistically for coffee waste oil extraction using n-hexane as solvent. Under these conditions, the oil yield was >9% in all cases. The results confirmed the efficiency of using an ultrasound bath for extracting oil as a more economical, greener and more efficient approach compared to the Soxhlet method.
Keywords: coffee waste, optimization, oil yield, statistical planning
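For readers unfamiliar with the design, a minimal sketch of how the run matrix of a three-factor central composite rotatable design is generated (in coded units, with axial distance α = (2³)^(1/4) ≈ 1.682) is shown below; the number of center points is a typical choice, not necessarily the one used in the study:

```python
from itertools import product

def ccrd_3_factors(n_center=6):
    """Coded run matrix of a rotatable central composite design, k = 3."""
    alpha = (2 ** 3) ** 0.25  # rotatability condition, ~1.682
    factorial = [list(p) for p in product((-1, 1), repeat=3)]  # 8 cube points
    axial = []
    for axis in range(3):  # 6 star points
        for sign in (-alpha, alpha):
            point = [0.0, 0.0, 0.0]
            point[axis] = sign
            axial.append(point)
    center = [[0.0, 0.0, 0.0]] * n_center
    return factorial + axial + center

runs = ccrd_3_factors()
print(len(runs))                          # 20 runs: 8 factorial + 6 axial + 6 center
print([round(v, 3) for v in runs[8]])     # first axial point: [-1.682, 0.0, 0.0]
```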
885 A Survey of Digital Health Companies: Opportunities and Business Model Challenges
Authors: Iris Xiaohong Quan
Abstract:
The global digital health market reached 175 billion U.S. dollars in 2019 and is expected to grow at about 25% CAGR to over 650 billion USD by 2025. Different terms such as digital health, e-health, mHealth and telehealth have been used in the field, which can sometimes cause confusion. The term digital health was originally introduced to refer specifically to the use of interactive media, tools, platforms, applications and solutions that are connected to the Internet to address the health concerns of providers as well as consumers. While mHealth emphasizes the use of mobile phones in healthcare, telehealth means using technology to remotely deliver clinical health services to patients. According to the FDA, “the broad scope of digital health includes categories such as mobile health (mHealth), health information technology (IT), wearable devices, telehealth and telemedicine, and personalized medicine.” Some researchers believe that digital health is nothing other than the cultural transformation healthcare has been undergoing in the 21st century because of digital health technologies that provide data to both patients and medical professionals. As digital health is burgeoning but research in the area is still inadequate, our paper aims to clear up the definitional confusion and provide an overall picture of digital health companies. We further investigate how business models are designed and differentiated in the emerging digital health sector. Both quantitative and qualitative methods are adopted in the research. For the quantitative analysis, our research data came from two databases, Crunchbase and CBInsights, which are well-recognized information sources for researchers, entrepreneurs, managers and investors. We searched a few keywords in the Crunchbase database based on companies' self-descriptions: digital health, e-health, and telehealth. A search for “digital health” returned 941 unique results, “e-health” returned 167 companies, and “telehealth” 427. We also searched the CBInsights database for similar information. After merging, removing duplicates and cleaning up the database, we arrived at a list of 1,464 digital health companies. A qualitative method is used to complement the quantitative analysis: we conduct an in-depth case analysis of three successful unicorn digital health companies to understand how business models evolve, and discuss the challenges faced in this sector. Our research returned some interesting findings. For instance, we found that 86% of the digital health startups were founded in the last decade, since 2010; 75% of the digital health companies have fewer than 50 employees, and almost 50% fewer than 10. This shows that digital health companies are relatively young and small in scale. On the business model analysis, while traditional healthcare businesses emphasize the so-called “3Ps” (patient, physicians, and payer), digital health companies extend this to “5Ps” by adding patents, a result of technology requirements (such as the development of artificial intelligence models), and platforms, an effective value creation approach that brings the stakeholders together. Our case analysis details the 5P framework and contributes to the extant knowledge on business models in the healthcare industry.
Keywords: digital health, business models, entrepreneurship opportunities, healthcare
884 A Conceptual Framework of Integrated Evaluation Methodology for Aquaculture Lakes
Authors: Robby Y. Tallar, Nikodemus L., Yuri S., Jian P. Suen
Abstract:
Research on ecological water resources management is full of non-trivial questions, and it seems today to be one branch of science that can strongly contribute to the study of complexity (physical, biological, ecological, socio-economic, environmental, and other aspects). Literature exists on the different facets of these studies, but much of it is technical and targeted at specific users. This study offers a combination of all these aspects in an evaluation methodology for aquaculture lakes, with a paradigm that refers to hierarchical theory and to the effects of the specific spatial arrangement of an object within a space or local area. Therefore, the process of developing a conceptual framework draws the more integrated and applicable concepts from grounded theory. A design for an integrated evaluation methodology for aquaculture lakes is presented. The method is based on the identification of a series of attributes that can be used to describe the status of aquaculture lakes using indicators from the aquaculture water quality index (AWQI), the aesthetic aquaculture lake index (AALI) and the rapid appraisal for fisheries index (RAPFISH). The preliminary preparation was accomplished as follows: first, the study area was characterized at different spatial scales; second, an inventory of core data resources was compiled, including the city master plan, water quality reports from the environmental agency, and related government regulations; third, a ground-checking survey was completed to validate the on-site condition of the study area. To design the integrated evaluation methodology for aquaculture lakes, we finally integrated and developed a rating score system called the Integrated Aquaculture Lake Index (IALI). The development of the IALI reflects a compromise among all aspects, and it responds to the need for concise information about the current status of aquaculture lakes through a comprehensive approach. The IALI was elaborated as a decision-aid tool for stakeholders to evaluate the impact and contribution of anthropogenic activities on the aquaculture lake environment. The conclusion: while there is no denying the fact that aquaculture lakes are under great threat from the pressure of increasing human activities, one must realize that no evaluation methodology for aquaculture lakes can succeed by insisting on pristine conditions. The IALI developed in this work can be used as an effective, low-cost evaluation methodology of aquaculture lakes for developing countries, because the IALI emphasizes simplicity and understandability, as it must communicate to decision-makers and experts. Moreover, stakeholders need help in perceiving their lakes so that sites can be accepted and valued by local people. For lake development, the accessibility and planning designation of the site are of decisive importance: local people want to know whether the lake's condition is safe and whether it can be used.
Keywords: aesthetic value, AHP, aquaculture lakes, integrated lakes, RAPFISH
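The keywords mention AHP, which suggests the IALI aggregates its sub-indices with weights derived from pairwise comparisons. A minimal sketch of such a weighted aggregation is given below; the weights and sub-index scores are hypothetical, and the actual IALI formula may differ:

```python
def composite_index(scores, weights):
    """Weighted aggregation of normalized sub-indices (each on a 0-100 scale)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[k] * weights[k] for k in scores)

# Hypothetical AHP-derived weights and sub-index scores for one lake.
weights = {"AWQI": 0.5, "AALI": 0.2, "RAPFISH": 0.3}
scores = {"AWQI": 68.0, "AALI": 74.0, "RAPFISH": 55.0}
print(f"IALI = {composite_index(scores, weights):.1f}")  # 65.3
```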
883 Ways to Prevent Increased Wear of the Drive Box Parts and the Central Drive of the Civil Aviation Turbo Engine Based on Tribology
Authors: Liudmila Shabalinskaya, Victor Golovanov, Liudmila Milinis, Sergey Loponos, Alexander Maslov, D. O. Frolov
Abstract:
The work is devoted to the rapid laboratory diagnosis of the condition of aircraft friction units, based on the application of a nondestructive testing method that analyzes the parameters of wear particles, or tribodiagnostics. The most important task of tribodiagnostics is to develop recommendations for the selection of more advanced designs, materials and lubricants, based on data about wear processes, in order to increase the service life and ensure the operational safety of machines and mechanisms. The objects of tribodiagnostics in this work are the tooth gears of the central drive and the gearboxes of the PS-90A civil aviation gas turbine engine, in which rolling friction and sliding friction with slip occur. The main criterion for evaluating the technical state of the lubricated friction units of a gas turbine engine is the intensity and rate of wear of the friction surfaces of the friction unit parts. While the engine is running, oil samples are taken and the state of the friction surfaces is evaluated according to the parameters of the wear particles contained in the oil sample, which carry important and detailed information about the wear processes in the engine transmission units. The parameters carrying this information include the concentration of wear particles and metals in the oil, the dispersion composition, the shape, the size ratio and the number of particles, the state of their surfaces, and the presence in the oil of various mechanical impurities of non-metallic origin. Such morphological analysis of wear particles has been introduced into the routine for monitoring the status and diagnostics of various aircraft engines, including gas turbine engines, since the type of wear characteristic of the central drive and the drive box is surface fatigue wear, and the onset of its development, accompanied by the formation of microcracks, leads to the formation of spherical particles up to 10 μm in size and subsequently of flake-like particles measuring 20-200 μm. Tribodiagnostics using the morphological analysis of wear particles includes the following techniques: ferrography, filtering, and computer-aided classification and counting of wear particles. Based on the analysis of several series of oil samples taken from the engine drive box over their operating time, a study of the kinetics of wear processes was carried out. Based on the results of the study, and by comparing the series of tribodiagnostic criteria, wear state ratings and the statistics of the morphological analysis results, norms for the normal operating regime were developed. The study made it possible to define wear state levels for the friction surfaces of the gearing and a 10-point rating system for estimating the likelihood of the occurrence of an increased wear mode and, accordingly, preventing engine failures in flight.
Keywords: aviation, box of drives, morphological analysis, tribodiagnostics, tribology, ferrography, filtering, wear particle
882 Coastal Modelling Studies for Jumeirah First Beach Stabilization
Authors: Zongyan Yang, Gagan K. Jena, Sankar B. Karanam, Noora M. A. Hokal
Abstract:
Jumeirah First beach, a 1.5 km segment of coastline, is one of the popular public beaches in Dubai, UAE. The stability of the beach has been affected by several coastal development projects, including The World, Island 2 and La Mer. A comprehensive stabilization scheme comprising two composite groynes (of lengths 90 m and 125 m), modification of the northern breakwater of Jumeirah Fishing Harbour, and beach re-nourishment was implemented by Dubai Municipality in 2012. However, the performance of the implemented stabilization scheme has been compromised by the La Mer project (built in 2016), which modified the wave climate at Jumeirah First beach. The objective of the coastal modelling studies is to establish the design basis for further beach stabilization scheme(s). Comprehensive coastal modelling studies were conducted to establish the nearshore wave climate, equilibrium beach orientations and stable beach plan forms. Based on the outcomes of the modelling studies, a recommendation was made to extend the composite groynes to stabilize Jumeirah First beach. Wave transformation was performed following an interpolation approach, with wave transformation matrices derived from simulations of the possible range of wave conditions in the region. The Dubai coastal wave model was developed with MIKE21 SW. The offshore wave conditions were determined from PERGOS wave data at four offshore locations, with consideration of the spatial variation. The lateral boundary conditions corresponding to the offshore conditions, at the Dubai/Abu Dhabi and Dubai/Sharjah borders, were derived with the LitDrift 1D wave transformation module. The Dubai coastal wave model was calibrated with wave records at monitoring stations operated by Dubai Municipality. The wave transformation matrix approach was validated against nearshore wave measurements at a Dubai Municipality monitoring station in the vicinity of Jumeirah First beach. A typical one-year wave time series was transformed to seven locations in front of the beach to account for the variation in wave conditions, which are affected by adjacent and offshore developments. Equilibrium beach orientations were estimated with LitDrift by finding the beach orientations with zero annual littoral transport at the seven selected locations. The littoral transport calculations were compared with the beach erosion/accretion quantities estimated from the beach monitoring program (twice a year, including bathymetric and topographical surveys). An innovative integral method was developed to outline the stable beach plan forms from the estimated equilibrium beach orientations, with a predetermined minimum beach width. The optimal lengths for the composite groyne extensions were recommended based on the stable beach plan forms.
Keywords: composite groyne, equilibrium beach orientation, stable beach plan form, wave transformation matrix
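The wave transformation matrix approach replaces repeated model runs with a lookup-and-interpolate step: nearshore wave parameters are precomputed on a grid of offshore conditions, then interpolated for each record of the offshore time series. A minimal sketch of that idea is given below; the grid resolution and coefficient values are hypothetical, not those of the Dubai model:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical precomputed transformation matrix: nearshore wave height
# coefficient (Hs_nearshore / Hs_offshore) on a grid of offshore Hs and direction.
hs_grid = np.array([0.5, 1.0, 2.0, 3.0])    # offshore Hs, m
dir_grid = np.array([300.0, 330.0, 360.0])  # offshore direction, deg
coeff = np.array([[0.82, 0.88, 0.85],
                  [0.80, 0.86, 0.83],
                  [0.76, 0.82, 0.79],
                  [0.72, 0.78, 0.75]])      # from prior model runs

transform = RegularGridInterpolator((hs_grid, dir_grid), coeff)

# Interpolate the nearshore height for one offshore record (Hs=1.4 m, 345 deg).
hs_off, direction = 1.4, 345.0
hs_near = hs_off * transform([[hs_off, direction]])[0]
print(f"nearshore Hs ~ {hs_near:.2f} m")
```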
881 Spatial Climate Changes in the Province of Macerata, Central Italy, Analyzed by GIS Software
Authors: Matteo Gentilucci, Marco Materazzi, Gilberto Pambianchi
Abstract:
Climate change is an increasingly central issue in the world because it affects many human activities. In this context, regional studies are of great importance because they sometimes differ from the general trend. This research focuses on a small area of central Italy overlooking the Adriatic Sea, the province of Macerata. The aim is to analyze spatial climate changes, for precipitation and temperature, over the last three climatological standard normals (1961-1990; 1971-2000; 1981-2010) using GIS software. The data collected from 30 weather stations for temperature and 61 rain gauges for precipitation were subjected to quality controls: validation and homogenization. These data were fundamental for the spatialization of the variables (temperature and precipitation) through geostatistical techniques. To identify the best geostatistical interpolation technique, cross-validation results were used. The co-kriging method with altitude as the independent variable produced the best cross-validation results for all time periods among the methods analysed, with the 'root mean square error standardized' close to 1, the 'mean standardized error' close to 0, and the 'average standard error' and 'root mean square error' showing similar values. The maps resulting from the analysis were compared by subtraction between rasters, producing three maps of annual variation and three further maps for each month of the year (1961/1990-1971/2000; 1971/2000-1981/2010; 1961/1990-1981/2010). The results show an increase in mean annual temperature of about 0.1 °C between 1961-1990 and 1971-2000 and of 0.6 °C between 1961-1990 and 1981-2010. Annual precipitation instead shows an opposite trend, with an average difference from 1961-1990 to 1971-2000 of about 35 mm and from 1961-1990 to 1981-2010 of about 60 mm. Furthermore, the differences across the areas were highlighted with area graphs and summarized in several tables as descriptive analysis. For temperature between 1961-1990 and 1971-2000, the most areally represented frequency is 0.08 °C (77.04 km² of a total of about 2800 km²), with a kurtosis of 3.95 and a skewness of 2.19. The differences in temperature from 1961-1990 to 1981-2010 instead show a most areally represented frequency of 0.83 °C, with a kurtosis of -0.45 and a skewness of 0.92 (36.9 km²). It can therefore be said that the distribution is more peaked for 1961/1990-1971/2000 and smoother, but with stronger growth, for 1961/1990-1981/2010. In contrast, precipitation shows a very similar distribution shape, although with different intensities, for both variation periods (1961/1990-1971/2000 and 1961/1990-1981/2010), with similar values of kurtosis (1st = 1.93; 2nd = 1.34), skewness (1st = 1.81; 2nd = 1.62) and area of the most represented frequency (1st = 60.72 km²; 2nd = 52.80 km²). In conclusion, this analysis methodology allows the assessment of small-scale climate change for each month of the year and could be further investigated in relation to regional atmospheric dynamics.
Keywords: climate change, GIS, interpolation, co-kriging
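The interpolation quality criteria quoted above (mean standardized error near 0, root mean square error standardized near 1, average standard error close to the root mean square error) are standard kriging cross-validation diagnostics. A minimal sketch of how they are computed from leave-one-out predictions is shown below; the arrays are illustrative placeholders:

```python
import numpy as np

def crossval_diagnostics(observed, predicted, std_err):
    """Standard kriging cross-validation statistics."""
    err = predicted - observed
    return {
        "RMSE": np.sqrt(np.mean(err**2)),
        "mean_standardized_error": np.mean(err / std_err),           # ideally ~0
        "RMSE_standardized": np.sqrt(np.mean((err / std_err)**2)),   # ideally ~1
        "average_standard_error": np.mean(std_err),                  # ideally ~RMSE
    }

# Illustrative leave-one-out results at a few stations.
obs = np.array([12.1, 10.4, 13.0, 11.2, 9.8])
pred = np.array([12.3, 10.1, 12.8, 11.5, 10.0])
se = np.array([0.30, 0.28, 0.31, 0.27, 0.29])
for name, value in crossval_diagnostics(obs, pred, se).items():
    print(f"{name}: {value:.3f}")
```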
880 Urban Open Source: Synthesis of a Citizen-Centric Framework to Design Densifying Cities
Authors: Shaurya Chauhan, Sagar Gupta
Abstract:
Prominent urbanizing centres across the globe, such as Delhi, Dhaka or Manila, have shown that development often struggles to bridge the gap between the top-down collective requirements of the city and the bottom-up individual aspirations of an ever-diversifying population. When this exclusion is intertwined with rapid urbanization and a diversifying urban demography, unplanned sprawl, poor planning and low-density development emerge as automatic responses. In parallel, new ideas and methods of densification and public participation are being widely adopted as sustainable alternatives for the future of urban development. This research advocates a collaborative design method for future development: one that allows rapid application through its prototypical nature and an inclusive approach with mediation between the 'user' and the 'urban', purely with the use of empirical tools. Building upon the concepts and principles of 'open-sourcing' in design, the research establishes a design framework that serves current user requirements while allowing for future citizen-driven modifications. This is synthesized as a three-tiered model: user needs – design ideology – adaptive details. The research culminates in a context-responsive 'open source project development framework' (hereinafter referred to as OSPDF) that can be used for on-ground field applications. To be specific, the research looks at a 300-acre redevelopment in the core of a rapidly urbanizing city as a case encompassing extreme physical, demographic and economic diversity. The suggested measures also integrate the region's cultural identity and social character with the diverse citizen aspirations, using architecture and urban design tools and references from recognized literature. This framework, based on a vision – feedback – execution loop, is used for a hypothetical development at the five prevalent scales of design: master planning, urban design, architecture, tectonics, and modularity, in chronological order. At each of these scales, the possible approaches and avenues for open-sourcing are identified and validated through trial and error, and subsequently recorded. The research attempts to recalibrate the architectural design process and make it more responsive and people-centric. Analytical tools such as Space, Event, and Movement by Bernard Tschumi and the Five-Point Mental Map by Kevin Lynch, among others, are deeply rooted in the research process. Over the five-part OSPDF, a two-part subsidiary process is also suggested after each cycle of application, for continued appraisal and refinement of the framework and the urban fabric over time. The research is an exploration of the possibilities for an architect to adopt the new role of a 'mediator' in the development of contemporary urbanity.
Keywords: open source, public participation, urbanization, urban development
879 Facial Recognition and Landmark Detection in Fitness Assessment and Performance Improvement
Authors: Brittany Richardson, Ying Wang
Abstract:
For physical therapy, exercise prescription, athlete training and regular fitness training, it is crucial to perform health or fitness assessments periodically. An accurate assessment is propitious for tracking recovery progress, preventing potential injury and making long-range training plans. Assessments include the necessary measurements (height, weight, blood pressure, heart rate, body fat, etc.) and advanced evaluations (muscle group strength, stability-mobility, movement evaluation, etc.). In current standard assessment procedures, the accuracy of assessments, especially advanced evaluations, largely depends on the experience of physicians, coaches and personal trainers, and it is challenging to track clients' progress. Unlike the traditional assessment, in this paper we present a deep learning based facial recognition algorithm for accurate, comprehensive and trackable assessment. Based on the results of our assessment, physicians, coaches and personal trainers are able to adjust the training targets and methods. The system categorizes the difficulty level of the current activity for the client or user, and furthermore makes more comprehensive assessments based on tracking muscle groups over time using a designed landmark detection method. The system also includes the functions of grading and correcting the clients' form during exercise. Experienced coaches and personal trainers can tell a client's limit based on their facial expression and muscle group movements, even during the first few sessions. Similarly, using a convolutional neural network, the system is trained on people's facial expressions to differentiate challenge levels for clients. It uses landmark detection for subtle changes in muscle group movements. It measures the proximal mobility of the hips and thoracic spine, the proximal stability of the scapulothoracic region and the distal mobility of the glenohumeral joint, as well as distal mobility in general and its effect on the kinetic chain. This system integrates data from other fitness assistant devices, including but not limited to the Apple Watch and Fitbit, for improved training and testing performance. The system itself does not require historical data for an individual client, but a client's historical data can be used to create a more effective exercise plan. In order to validate the performance of the proposed work, an experimental design is presented. The results show that the proposed work contributes towards improving the quality of exercise planning, execution, progress tracking, and performance.
Keywords: exercise prescription, facial recognition, landmark detection, fitness assessments
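One building block of the landmark-based movement evaluation described above is computing a joint angle from three detected landmarks (for example hip-knee-ankle during a squat). A minimal sketch of that geometry follows; the coordinates are hypothetical detector outputs, and the actual pipeline in the paper may differ:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at landmark b (degrees) formed by segments b->a and b->c."""
    ba, bc = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos_angle = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Hypothetical 2D landmark coordinates (pixels) from a pose detector.
hip, knee, ankle = (310, 220), (330, 330), (325, 450)
print(f"knee angle: {joint_angle(hip, knee, ankle):.1f} deg")  # ~180 = straight leg
```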
878 Women’s Experience of Managing Pre-Existing Lymphoedema during Pregnancy and the Early Postnatal Period
Authors: Kim Toyer, Belinda Thompson, Louise Koelmeyer
Abstract:
Lymphoedema is a chronic condition caused by dysfunction of the lymphatic system, which limits the drainage of fluid and tissue waste from the interstitial space of the affected body part. The normal physiological changes in pregnancy place an increased load on a normal lymphatic system, which can result in a transient lymphatic overload (oedema). The interaction between lymphoedema and pregnancy oedema is unclear. Women with pre-existing lymphoedema require accurate information and additional strategies to manage their lymphoedema during pregnancy. Currently, no resources are available to guide women or their healthcare providers with accurate advice and additional management strategies for coping with lymphoedema during pregnancy until they have recovered postnatally. This study explored the experiences of Australian women with pre-existing lymphoedema during recent pregnancy and the early postnatal period, to determine how their usual lymphoedema management strategies were adapted and what their additional or unmet needs were. Interactions with their obstetric care providers, the hospital maternity services, and usual lymphoedema therapy services were detailed. Participants were sourced from several Australian lymphoedema community groups, including therapist networks. Opportunistic sampling is appropriate for exploring this topic in a small target population, as lymphoedema in women of childbearing age is uncommon and prevalence data are unavailable. Inclusion criteria were: aged over 18 years, diagnosed with primary or secondary lymphoedema of the arm or leg, pregnant within the preceding ten years (since 2012), and having had their pregnancy and postnatal care in Australia. Exclusion criteria were a diagnosis of lipedema and being unable to read or understand a reasonable level of English. A mixed-method qualitative design was used in two phases: an online survey (REDCap platform) of the participants, followed by online semi-structured interviews or focus groups, which provided the transcript data for inductive thematic analysis to gain an in-depth understanding of the issues raised. Women with well-managed pre-existing lymphoedema coped well with the additional oedema load of pregnancy; however, those with limited access to quality conservative care prior to pregnancy were found to be significantly impacted by pregnancy, with many reporting deterioration of their chronic lymphoedema. Misinformation and a lack of support increased fear and apprehension in planning and enjoying their pregnancy experience. Collaboration between maternity and lymphoedema therapy services did not happen, despite study participants suggesting it. Helpful resources and unmet needs were identified in the recent Australian context to inform further research and the development of resources to assist women with lymphoedema who are considering pregnancy or are pregnant, and their supporters, including healthcare providers.
Keywords: lymphoedema, management strategies, pregnancy, qualitative
877 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest
Authors: Peter Baji
Abstract:
In recent years, a growing body of literature has emphasized the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although there are different public-sector responses for decreasing traffic congestion in urban regions, the most effective public intervention is congestion charging. Because travel is an economic asset, its consumption can be controlled effectively by extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with different methods has a long history in the transport sciences, but until recently there was not sufficient data for evaluating road traffic flow patterns at the scale of the entire road system of a larger urban area. The European cities in which congestion charges have already been introduced (e.g., London, Stockholm, Milan) designated a particular downtown zone for charging, but this protects only the users and inhabitants of the CBD (Central Business District) area. By using Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a solution for a fairer and smarter congestion pricing method in cities. The case study area of the research comprises three bordering districts of Budapest, which are linked by one main road. The first district (5th) is the original downtown, which is affected by the congestion charge plans of the city. The second district (13th) lies in the transition zone and has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data of the research were collected with the help of Google's Distance Matrix API (Application Programming Interface), which provides estimated future traffic data via travel times between freely fixed coordinate pairs. From the difference between free-flow and congested travel time data, the daily congestion patterns and hot spots are detectable on all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; however, there are frequently congested areas that lie outside the downtown, and their inhabitants also need some protection. The conclusion of this case study is that cities can develop a real-time and place-based congestion charge system that encourages car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer solution for decreasing the negative environmental effects of urban road transportation than protecting a very limited downtown area.
Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study
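The free-flow vs. congested comparison described above can be sketched against the public Distance Matrix endpoint as below: requesting a departure_time makes the response include duration_in_traffic alongside the baseline duration, and their ratio serves as a simple congestion index. The coordinates and threshold are hypothetical, and a valid API key is required:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = "https://maps.googleapis.com/maps/api/distancematrix/json"

def congestion_index(origin, destination):
    """Ratio of in-traffic travel time to baseline travel time (>1 = congested)."""
    params = {
        "origins": origin,
        "destinations": destination,
        "departure_time": "now",  # required for duration_in_traffic
        "key": API_KEY,
    }
    element = requests.get(URL, params=params).json()["rows"][0]["elements"][0]
    return element["duration_in_traffic"]["value"] / element["duration"]["value"]

# Hypothetical coordinate pair on the main road linking the three districts.
ratio = congestion_index("47.5136,19.0622", "47.5615,19.0750")
print(f"congestion index: {ratio:.2f}", "-> hot spot" if ratio > 1.5 else "")
```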
Procedia PDF Downloads 195
876 Integrating Multiple Types of Value in Natural Capital Accounting Systems: Environmental Value Functions
Authors: Pirta Palola, Richard Bailey, Lisa Wedding
Abstract:
Societies and economies worldwide fundamentally depend on natural capital. Alarmingly, natural capital assets are quickly depreciating, posing an existential challenge for humanity. The development of robust natural capital accounting systems is essential for transitioning towards sustainable economic systems and ensuring sound management of capital assets. However, the accurate, equitable and comprehensive estimation of natural capital asset stocks and their accounting values still faces multiple challenges. In particular, the representation of socio-cultural values held by groups or communities has arguably been limited, as to date, the valuation of natural capital assets has primarily been based on monetary valuation methods and assumptions of individual rationality. People relate to and value the natural environment in multiple ways, and no single valuation method can provide a sufficiently comprehensive image of the range of values associated with the environment. Indeed, calls have been made to improve the representation of multiple types of value (instrumental, intrinsic, and relational) and diverse ontological and epistemological perspectives in environmental valuation. This study addresses this need by establishing a novel valuation framework, Environmental Value Functions (EVF), that allows for the integration of multiple types of value in natural capital accounting systems. The EVF framework is based on the estimation and application of value functions, each of which describes the relationship between the value and quantity (or quality) of an ecosystem component of interest. In this framework, values are estimated in terms of change relative to the current level instead of calculating absolute values. Furthermore, EVF was developed to also support non-marginalist conceptualizations of value: it is likely that some environmental values cannot be conceptualized in terms of marginal changes. For example, ecological resilience value may, in some cases, be best understood as a binary: it either exists (1) or is lost (0). In such cases, a logistic value function may be used as the discriminator. Uncertainty in the value function parameterization can be considered through, for example, Monte Carlo sampling analysis. The use of EVF is illustrated with two conceptual examples. For the first time, EVF offers a clear framework and concrete methodology for the representation of multiple types of value in natural capital accounting systems, simultaneously enabling 1) the complementary use and integration of multiple valuation methods (monetary and non-monetary); 2) the synthesis of information from diverse knowledge systems; 3) the recognition of value incommensurability; and 4) marginalist and non-marginalist value analysis. Furthermore, with this advancement, the coupling of EVF and ecosystem modeling can offer novel insights into the study of spatial-temporal dynamics in natural capital asset values. For example, value time series can be produced, allowing for the prediction and analysis of volatility, long-term trends, and temporal trade-offs. This approach can provide essential information to help guide the transition to a sustainable economy.
Keywords: economics of biodiversity, environmental valuation, natural capital, value function
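To make the logistic, non-marginalist case concrete, a minimal sketch follows; the ecosystem quantity, threshold distribution, and steepness below are illustrative assumptions, not values from the study:

```python
import numpy as np

def logistic_value(quantity, threshold, steepness):
    """Logistic value function: value is ~0 below the threshold and ~1 above it,
    approximating the binary 'resilience exists (1) / is lost (0)' case."""
    return 1.0 / (1.0 + np.exp(-steepness * (quantity - threshold)))

# Monte Carlo treatment of uncertainty in the parameterization: sample the
# (uncertain) threshold and propagate it to the value estimate.
rng = np.random.default_rng(42)
thresholds = rng.normal(loc=50.0, scale=5.0, size=10_000)  # uncertain tipping point
values = logistic_value(60.0, thresholds, steepness=0.8)   # value at quantity q = 60
print(f"expected value: {values.mean():.3f} +/- {values.std():.3f}")
```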
Procedia PDF Downloads 194
875 Recovery of Food Waste: Production of Dog Food
Authors: K. Nazan Turhan, Tuğçe Ersan
Abstract:
The population of the world is approximately 8 billion and continues to grow rapidly, leading to an increase in consumption. This situation causes crucial problems, and food waste is one of them. The Food and Agriculture Organization of the United Nations (FAO) defines food waste as the discarding or alternative utilization of food that is safe and nutritious for human consumption, along the entire food supply chain from primary production to the end household consumer. In addition, according to FAO estimates, one-third of all food produced for human consumption is lost or wasted worldwide every year. Wasting food endangers natural resources and causes hunger; for instance, excessive amounts of food waste cause greenhouse gas emissions, contributing to global warming. Therefore, waste management has been gaining significance in the last few decades at both local and global levels due to the expected scarcity of resources for the increasing population of the world. There are several ways to recover food waste. According to the United States Environmental Protection Agency’s Food Recovery Hierarchy, the food waste recovery options are, from most preferred to least preferred: source reduction, feeding hungry people, feeding animals, industrial uses, composting, and landfill/incineration. Bioethanol, biodiesel, biogas, agricultural fertilizer and animal feed can all be obtained from food waste generated by different food industries. In this project, feeding animals was selected as the food waste recovery method, and the food waste of a single plant was used to provide ingredient uniformity. Grasshoppers were used as a protein source. The collected food waste and purchased grasshoppers were sterilized, dried and pulverized; they were then mixed with 60 g of agar-agar solution (4% w/v). Three different aromas were added separately to the samples to enhance flavour quality. Since the required nutrient amounts differ between dog species, fulfilling all nutritional needs is one of the challenges: there is a wide range of nutritional needs in terms of carbohydrates, protein, fat, sodium, calcium, and so on, and the requirements differ depending on age, gender, weight, height, and species. Therefore, the product that was developed contains average amounts of each substance so as not to cause any deficiency or surplus. On the other hand, it contains more protein than similar products on the market. The product was evaluated in terms of contamination and nutritional content. For contamination risk, E. coli and Salmonella detection tests were performed, and the results were negative. For nutritional value, protein content analysis was done; the protein contents of the samples ranged from 26.07% to 33.68%. In addition, water activity analysis was performed, and the water activity (aw) values of the samples ranged between 0.2456 and 0.4145.
Keywords: food waste, dog food, animal nutrition, food waste recovery
Procedia PDF Downloads 63
874 The Effect of Vibration Amplitude on Tissue Temperature and Lesion Size When Using a Vibrating Cardiac Catheter
Authors: Kaihong Yu, Tetsui Yamashita, Shigeaki Shingyochi, Kazuo Matsumoto, Makoto Ohta
Abstract:
During cardiac ablation, high power delivery for deeper lesion formation is limited by overheating of the electrode-tissue interface, which can cause serious complications such as thrombus. To prevent this overheating, temperature control and open irrigation are often used. In temperature control, the radiofrequency generator is adjusted to deliver the maximum output power that maintains the electrode temperature at a target temperature (commonly 55°C or 60°C), which also limits the electrode-tissue interface temperature. The electrode temperature is the result of heating from the contacted tissue and cooling from the surrounding blood. Because the cooling from blood is decreased under conditions of low blood flow, the generator needs to decrease the output power; thus, temperature control cannot deliver high power under conditions of low blood flow. In open irrigation, saline at room temperature is flushed through holes arranged in the electrode. The electrode-tissue interface is cooled by this environmental cooling, and high power delivery can be achieved even under conditions of low blood flow. However, the large amount of saline infused during irrigation (approximately 1500 ml) can cause other serious complications. When open irrigation cannot be used under conditions of low blood flow, a new overheating prevention method may be required. The authors have proposed a new electrode cooling method that makes the catheter vibrate. Previous work has shown that the vibration has a cooling effect on the electrode, which may result from the vibration increasing the flow velocity around the catheter, and that increasing the vibration frequency increases this cooling. However, the effect of the vibration amplitude is still unknown. Thus, the present study investigated the effect of vibration amplitude on tissue temperature and lesion size. An agar phantom model was used as a tissue-equivalent material for measuring tissue temperature; thermocouples were inserted into the agar to measure the internal temperature. Porcine myocardium was used for lesion size measurement. A normal ablation catheter was set perpendicular to the tissue (agar or porcine myocardium) with 10 gf contact force in 37°C saline without flow. Vibration amplitudes of ±0.5, ±0.75, and ±1.0 mm with a constant frequency (31 or 63 Hz) were used. A temperature control protocol (45°C for the agar phantom, 60°C for the porcine myocardium) was used for the radiofrequency applications. Larger amplitudes produced larger lesion sizes, and higher tissue temperatures in the agar phantom were also observed with larger amplitudes. At the same frequency, a larger amplitude gives a higher vibration speed, which increases the flow velocity around the electrode more and leads to a larger electrode temperature decrease. To maintain the electrode at the target temperature, the generator has to increase the output power; with higher output power over the same duration, the delivered energy also increases. Consequently, the tissue temperature rises, leading to larger lesion sizes.
Keywords: cardiac ablation, electrode cooling, lesion size, tissue temperature
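To make the amplitude-speed relationship explicit: assuming sinusoidal motion x(t) = A·sin(2πft) (the abstract does not state the waveform, so this is an assumption), the peak vibration speed is v_peak = 2πfA. A short sketch tabulates it for the amplitudes and frequencies tested:

```python
import math

def peak_vibration_speed(amplitude_mm, frequency_hz):
    """Peak speed of sinusoidal vibration x(t) = A*sin(2*pi*f*t): v_peak = 2*pi*f*A."""
    return 2 * math.pi * frequency_hz * (amplitude_mm / 1000.0)  # in m/s

for amplitude in (0.5, 0.75, 1.0):   # amplitudes tested in the study, in mm
    for frequency in (31, 63):       # frequencies tested in the study, in Hz
        v = peak_vibration_speed(amplitude, frequency)
        print(f"A = ±{amplitude} mm, f = {frequency} Hz -> v_peak = {v:.3f} m/s")
```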
Procedia PDF Downloads 371
873 Study Habits and Level of Difficulty Encountered by Maltese Students Studying Biology Advanced Level Topics
Authors: Marthese Azzopardi, Liberato Camilleri
Abstract:
This research was performed to investigate the study habits and the level of difficulty perceived by post-secondary students in Advanced-level Biology topics after completing their first year of study. At the end of a two-year ‘sixth form’ course, Maltese students sit for the Matriculation and Secondary Education Certificate (MATSEC) Advanced-level Biology exam as a requirement to pursue science-related studies at the University of Malta. The sample was composed of 23 students (16 taking Chemistry and seven taking some ‘other’ subject at Advanced level). The cohort comprised seven males and 16 females. A questionnaire constructed by the authors was answered anonymously during the last lecture at the end of the first year of study, in May 2016. The chi-square test revealed that gender has no effect on the various study habits (χ²(6) = 5.873, p = 0.438). ‘Reading both notes and textbooks’ was the method most commonly adopted by males (71.4%), whereas ‘Writing notes on each topic’ was that most used by females (81.3%). The Mann-Whitney U test showed no significant relationship between students' study habits and the mean assessment mark obtained at the end of the first-year course (p = 0.231). A statistical difference was found with the one-way ANOVA test when comparing the mean assessment mark obtained at the end of the first-year course with students clustered by their Secondary Education Certificate (SEC) grade (p < 0.001). Those with a SEC grade of 2 or 3 obtained the highest mean assessment marks of 68.33% and 66.9%, respectively [SEC grading is 1-7, where 1 is the highest]. The Friedman test was used to compare the mean difficulty rating scores for each topic. The mean difficulty rating score ranges from 1 to 4, where a larger mean rating score indicates higher difficulty. Considering the whole group of students, nine topics out of 21 were perceived as significantly more difficult than the others: Protein Synthesis, DNA Replication and Biomolecules were the most difficult, in that order. The Mann-Whitney U test revealed that the perceived level of difficulty in comprehending Biomolecules is significantly lower for students taking Chemistry than for those not taking the subject (p = 0.018). Protein Synthesis was reported as the most difficult topic by Chemistry students and Biomolecules by those not studying Chemistry; DNA Replication was the second most difficult topic for both groups. The Mann-Whitney U test was also used to examine the effect of gender on the perceived level of difficulty in comprehending the various topics. Females were found to have significantly more difficulty comprehending Biomolecules than males (p = 0.039). Protein Synthesis was perceived as the most difficult topic by males (mean difficulty rating score = 3.14), while Biomolecules, DNA Replication and Protein Synthesis were of equal difficulty for females (mean difficulty rating score = 3.00). Males and females perceived DNA Replication as equally difficult (mean difficulty rating score = 3.00). Discovering students’ study habits and the perceived level of difficulty of specific topics is vital for the lecturer to offer guidance that leads to higher academic achievement.
Keywords: biology, perceived difficulty, post-secondary, study habits
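For reference, the battery of tests reported above can be reproduced with standard statistical software; a minimal sketch on synthetic data (the numbers are illustrative, not the study's data) follows:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Chi-square test of independence: gender vs. study habit (synthetic counts)
table = np.array([[3, 2, 2],      # males per habit category
                  [5, 8, 3]])     # females per habit category
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)

# Mann-Whitney U: assessment marks of two independent groups
u, p_u = stats.mannwhitneyu(rng.normal(65, 8, 10), rng.normal(60, 8, 13))

# One-way ANOVA: assessment marks clustered by SEC grade
f, p_f = stats.f_oneway(rng.normal(68, 5, 8),
                        rng.normal(66, 5, 8),
                        rng.normal(55, 5, 7))

# Friedman test: repeated difficulty ratings (1-4) of three topics by the same students
fr, p_fr = stats.friedmanchisquare(rng.integers(1, 5, 23),
                                   rng.integers(1, 5, 23),
                                   rng.integers(1, 5, 23))
print(p_chi2, p_u, p_f, p_fr)
```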
Procedia PDF Downloads 188
872 Microbial Contamination of Cell Phones of Health Care Workers: Case Study in Mampong Municipal Government Hospital, Ghana
Authors: Francis Gyapong, Denis Yar
Abstract:
The use of cell phones has become indispensable in hospital settings. Cell phones are used in hospitals without restrictions, regardless of their unknown microbial load. However, the indiscriminate use of mobile devices, especially at health facilities, can act as a vehicle for transmitting pathogenic bacteria and other microorganisms. These potential pathogens become exogenous sources of infection for patients and are also a potential health hazard for the users themselves as well as their family members. This is a growing problem in many health care institutions. Innovations in mobile communication have led to better patient care in diabetes and asthma, and to increased vaccine uptake via SMS. Notwithstanding, the use of cell phones can be a great potential source of nosocomial infections. Many studies have reported heavy microbial contamination of cell phones among healthcare workers and communities; however, limited studies have been reported in our region on bacterial contamination of cell phones among healthcare workers. This study assessed the microbial contamination of cell phones of health care workers (HCWs) at the Mampong Municipal Government Hospital (MMGH), Ghana. A cross-sectional design was used to characterize the bacterial microflora on cell phones of HCWs at the MMGH. A total of thirty-five (35) swab samples of cell phones of HCWs at the Laboratory, Dental Unit, Children’s Ward, Theater and Male Ward were randomly collected for laboratory examination. Each swab sample suspension was streaked on blood and MacConkey agar and incubated at 37°C for 48 hours. Bacterial isolates were identified using appropriate laboratory and biochemical tests. The Kirby-Bauer disc diffusion method was used to determine the antimicrobial sensitivity of the isolates. Data analysis was performed using SPSS version 16. All mobile phones sampled were contaminated with one or more bacterial isolates. Cell phones from the Male Ward, Dental Unit, Laboratory, Theater and Children’s Ward had at least three different bacterial isolates: 85.7%, 71.4%, 57.1%, and 28.6% for both the Theater and the Children’s Ward, respectively. The bacterial contaminants identified were Staphylococcus epidermidis (37%), Staphylococcus aureus (26%), E. coli (20%), Bacillus spp. (11%) and Klebsiella spp. (6%). Except for the Children’s Ward, E. coli was isolated at all study sites and was predominant (42.9%) at the Dental Unit, while Klebsiella spp. (28.6%) was only isolated at the Children’s Ward. Antibiotic sensitivity testing of Staphylococcus aureus indicated that the isolates were highly sensitive to cephalexin (89%), tetracycline (80%), gentamycin (75%), lincomycin (70%) and ciprofloxacin (67%), and highly resistant to ampicillin (75%). Some of the bacteria isolated are potential pathogens, and their presence on the cell phones of HCWs could lead to transmission to patients and their families. Hence, strict hand washing before and after every contact with a patient or phone should be enforced to reduce the risk of nosocomial infections.
Keywords: mobile phones, bacterial contamination, patients, MMGH
Procedia PDF Downloads 103
871 Tailorability of Poly(Aspartic Acid)/BSA Complex by Self-Assembling in Aqueous Solutions
Authors: Loredana E. Nita, Aurica P. Chiriac, Elena Stoleru, Alina Diaconu, Tudorachi Nita
Abstract:
Self-assembly is an attractive method for forming new and complex structures between macromolecular compounds for specific applications. In this context, intramolecular and intermolecular bonds play a key role in self-assembly processes during the preparation of carrier systems for bioactive substances. Polyelectrolyte complexes (PECs) are formed through electrostatic interactions; although these are significantly weaker than covalent linkages, the complexes are sufficiently stable owing to the association processes. The relative ease of PEC formation makes them a versatile tool for preparing various materials, with properties that can be tuned by adjusting several parameters, such as the chemical composition and structure of the polyelectrolytes, the pH and ionic strength of the solutions, the temperature, and post-treatment procedures. For example, protein-polyelectrolyte complexes (PPCs) play an important role in various chemical and biological processes, such as protein separation, enzyme stabilization and polymeric drug delivery systems. The present investigation is focused on the evaluation of PPC formation between a synthetic polypeptide (poly(aspartic acid), PAS) and a natural protein (bovine serum albumin, BSA). PPCs obtained from PAS and BSA in different ratios were investigated by corroborating various characterization techniques, such as spectroscopy, microscopy, thermogravimetric analysis, DLS and zeta potential determination, with measurements performed in static and/or dynamic conditions. The static contact angle of the sample films was also determined in order to evaluate the changes brought to the surface free energy of the prepared PPCs in interdependence with the complexes’ composition. The evolution of the hydrodynamic diameter and zeta potential of the PPC, recorded in situ, confirms changes in the conformation of both partners, a 1/1 ratio between protein and polyelectrolyte being beneficial for the preparation of a stable PPC. The study also evidenced the dependence of PPC formation on the preparation temperature: at low temperatures, the PPC forms with a compact structure and small dimensions, with a hydrodynamic diameter close to that of BSA. The thermal behavior of the prepared PPCs is in agreement with the composition of the complexes. The contact angle determinations indicate an increase in the cohesion of the PPC films, which is higher than that of BSA films. A higher hydrophobicity also corresponds to the new PPC films, denoting good adhesion of red blood cells onto the surface of the PAS/BSA interpenetrated systems. The SEM investigation likewise evidenced the specific internal structure of the PPCs, concretized in phases of different sizes and shapes in interdependence with the interpolymer mixture composition.
Keywords: polyelectrolyte – protein complex, bovine serum albumin, poly(aspartic acid), self-assembly
Procedia PDF Downloads 245
870 Analysis on the Converged Method of Korean Scientific and Mathematical Fields and Liberal Arts Programme: Focusing on the Intervention Patterns in Liberal Arts
Authors: Jinhui Bak, Bumjin Kim
Abstract:
The purpose of this study is to analyze how the scientific and mathematical fields (STEM) and the liberal arts (A) work together in STEAM programs. In STEAM programs designed and developed in the future, the humanities should act not just as a 'tool' for science, technology and mathematics, but as 'core' content with equivalent status. STEAM was first introduced to the Republic of Korea in 2011, when the Ministry of Education emphasized fostering creative convergence talent. Many programs have since been developed under the name STEAM, but with the majority focusing on technology education, the arts and humanities are treated as secondary. As a result, the arts are most likely to be regarded as an option that can be excluded by the teachers who run STEAM programs. If what we ultimately pursue through STEAM education is fostering STEAM literacy, we should no longer reduce the arts to a tooling area for STEM. Based on this premise, this study analyzed over 160 STEAM programs in middle and high schools, produced and distributed by the Ministry of Education and the Korea Science and Technology Foundation from 2012 to 2017. The framework of analysis referenced two criteria presented in related prior studies: normative convergence and technological convergence. In addition, we divided the arts into fine arts and liberal arts, focused on the Korean language course (which belongs to the liberal arts), and analyzed which curriculum standards were selected and through what kind of process the Korean language department participated in teaching and learning. To ensure the reliability of the analysis results, the two researchers' individual analysis results were cross-checked and accepted only when consistent. We also conducted a reliability check on the analysis results with three middle and high school teachers involved in STEAM education programs: for 10 programs selected randomly from the analyzed programs, a Cronbach's α of .853 indicated a reliable level of agreement. The results of this study are summarized as follows. First, the convergence ratio of the liberal arts was lowest in moral education, at 14.58%. Second, normative convergence, at 28.19%, was lower than technological convergence. Third, the Korean language achievement criteria selected for the programs were limited to functional areas such as listening, speaking, reading and writing. This means that the convergence of Korean language departments occurs only through the tools necessary to communicate opinions or promote scientific products. We intend to compare these results with STEAM programs in the United States and abroad to explore what elements or key concepts are required in the achievement criteria for the Korean language curriculum. This is meaningful in that the humanities field (A), including Korean, provides basic data that can be fused into 'equivalent qualifications' with science (S), technology and engineering (TE) and mathematics (M).
Keywords: Korean STEAM Programme, liberal arts, STEAM curriculum, STEAM Literacy, STEM
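A minimal sketch of the inter-rater reliability check described above, computing Cronbach's α for a subjects-by-raters score matrix (the scores and rating scale are synthetic assumptions; only the α formula is standard):

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (n_subjects x k_raters) matrix:
    alpha = k/(k-1) * (1 - sum of per-rater variances / variance of total scores)."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Three teachers each rating the same 10 programs on a 5-point scale (synthetic data)
rng = np.random.default_rng(1)
base = rng.integers(1, 6, size=(10, 1))                        # underlying program quality
scores = np.clip(base + rng.integers(-1, 2, size=(10, 3)), 1, 5)
print(round(cronbach_alpha(scores), 3))
```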
Procedia PDF Downloads 157
869 Technology Management for Early Stage Technologies
Authors: Ming Zhou, Taeho Park
Abstract:
Early stage technologies are particularly challenging to manage due to their numerous and deep uncertainties. Most results coming directly out of a research lab are at an early, if not infant, stage, and a long, uncertain commercialization process awaits them. The majority of such lab technologies go nowhere and never get commercialized for various reasons, so any effort or financial resources put into managing them turn fruitless. The high stakes naturally call for better results, which makes the patenting decision harder to make; a good, well-protected patent goes a long way toward commercialization of a technology. Our preliminary research showed that there was no simple yet productive procedure for such valuation: most existing studies are theoretical and overly comprehensive, with practical suggestions non-existent. Hence, we attempted to develop a simple and highly implementable procedure for efficient and scalable valuation. We thoroughly reviewed the existing research, interviewed practitioners in the Silicon Valley area, and surveyed university technology offices. Instead of presenting another theoretical and exhaustive study, we aimed at developing practical guidance that a government agency and/or university office could easily deploy to get things moving toward the later steps of managing early stage technologies. We provide a procedure to value technologies thriftily and make the patenting decision. A patenting index was developed using survey data and expert opinions. We identified the most important factors to be used in the patenting decision using survey ratings; the ratings then assisted us in generating good relative weights for the subsequent scoring and weighted averaging step. More importantly, we validated our procedure by testing it with our practitioner contacts, whose inputs produced a general yet highly practical cut-off schedule. Such a schedule of realistic practices had not been seen in research prior to ours. Although a technology office may choose to deviate from our cut-offs, what we offer here at least provides a simple and meaningful starting point. This research contributes to the current understanding and practice of managing early stage technologies by instating a heuristically simple yet theoretically solid method for the patenting decision. Our findings generated the top decision factors, decision processes and decision thresholds of key parameters, offering a more practical perspective that further completes the extant knowledge. Our results could be impacted by our sample size and may be somewhat biased by our focus on the Silicon Valley area. Future research, blessed with a bigger data size and more insights, may want to further train and validate our parameter values in order to obtain more consistent results and to analyze our decision factors for different industries.
Keywords: technology management, early stage technology, patent, decision
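A minimal sketch of the scoring and weighted-averaging step described above; the factor names, weights, scores, and cut-off threshold are hypothetical illustrations, not the study's actual parameter values:

```python
def patenting_index(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of factor scores (each factor rated on a 1-5 scale)."""
    total_weight = sum(weights.values())
    return sum(scores[f] * w for f, w in weights.items()) / total_weight

# Hypothetical decision factors and survey-derived relative weights
weights = {"market_potential": 0.30, "novelty": 0.25,
           "enforceability": 0.25, "development_cost": 0.20}
scores = {"market_potential": 4, "novelty": 5,
          "enforceability": 3, "development_cost": 2}

index = patenting_index(scores, weights)
decision = "file a patent" if index >= 3.5 else "defer / do not file"  # example cut-off
print(f"index = {index:.2f} -> {decision}")
```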
Procedia PDF Downloads 342
868 Timely Screening for Palliative Needs in Ambulatory Oncology
Authors: Jaci Mastrandrea
Abstract:
Background: The National Comprehensive Cancer Network (NCCN) recommends that healthcare institutions have established processes for integrating palliative care (PC) into cancer treatment and that all cancer patients be screened for PC needs upon initial diagnosis as well as throughout the entire continuum of care (National Comprehensive Cancer Network, 2021). Early PC screening is directly correlated with improved patient outcomes. The Sky Lakes Cancer Treatment Center (SLCTC) is an institution that has access to PC services yet does not have protocols in place for identifying patients with palliative needs or a standardized referral process. The aim of this quality improvement project was to improve early access to PC services by establishing a standardized screening and referral process for outpatient oncology patients. Method: The sample population included all adult patients with an oncology diagnosis who presented to the SLCTC for treatment during the project timeline from March 15th, 2022, to April 29th, 2022. The “Palliative and Supportive Needs Assessment” (PSNA) screening tool was developed from validated, evidence-based PC referral criteria. The tool was initially implemented using paper forms and was later integrated into the Epic-Beacon EHR system. Patients were screened by registered nurses on the SLCTC treatment team; nurses responsible for screening patients received an educational in-service prior to implementation. Patients with a PSNA score of three or higher were considered a positive screen. Scores of five or higher triggered a PC referral order in the patient’s EHR for the oncologist to review and approve. All patients with a positive screen received an educational handout on the topic of PC, and the EHR was flagged for follow-up. Results: Prior to implementation of the PSNA screening tool, the SLCTC had zero referrals to PC in the preceding year, excluding referrals to hospice. Data were collected from the first 100 patient screenings completed within the eight-week data collection period. Seventy-three percent of patients met the criteria for PC referral with a score greater than or equal to three. Of those patients, 53.4% (39 patients) were referred for a palliative and supportive care consultation. Patients who met the criteria but were not referred to PC were flagged in the EHR for re-screening within one to three months. Patients with lung cancer, chronic hematologic malignancies, breast cancer, and gastrointestinal malignancy most frequently met the criteria for PC referral and scored highest overall on the 0-12 scale. Conclusion: The implementation of a standardized PC screening tool at the SLCTC significantly increased awareness of PC needs among cancer patients in the outpatient setting. Additionally, data derived from this quality improvement project support the national recommendation for PC to be an integral component of cancer treatment across the entire continuum of care.
Keywords: oncology, palliative care, symptom management, symptom screening, ambulatory oncology, cancer, supportive care
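A minimal sketch of the PSNA decision thresholds described above (score ≥ 3 → positive screen; score ≥ 5 → referral order); the action labels are paraphrases of the workflow, not the tool's actual field names:

```python
def psna_actions(score: int) -> list[str]:
    """Map a PSNA score (0-12 scale) to the screening workflow steps."""
    if score < 3:
        return ["negative screen"]
    actions = ["positive screen",
               "give palliative care educational handout",
               "flag EHR for follow-up"]
    if score >= 5:
        actions.append("generate PC referral order for oncologist review")
    return actions

print(psna_actions(4))   # positive screen, handout, EHR flag
print(psna_actions(7))   # ... plus the referral order
```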
Procedia PDF Downloads 76
867 Eco-Nanofiltration Membranes: Nanofiltration Membrane Technology Utilization-Based Fiber Pineapple Leaves Waste as Solutions for Industrial Rubber Liquid Waste Processing and Fertilizer Crisis in Indonesia
Authors: Andi Setiawan, Annisa Ulfah Pristya
Abstract:
Indonesia's rubber plantation area has reached 2.9 million hectares, with production reaching 1.38 million tons. High rubber productivity is directly proportional to the amount of waste produced by the rubber processing industry, which has a negative environmental impact in the form of pollution caused by waste that has not been treated optimally. Rubber industry wastewater contains high levels of nitrogen compounds (nitrate and ammonia) and phosphate compounds, which cause water pollution and odor problems due to the high ammonia content. On the other hand, demand for NPK fertilizer in Indonesia continues to increase from year to year and requires ammonia and phosphate as raw materials. Domestic demand amounts to 400,000 tons of ammonia a year, and Indonesia imports 200,000 tons of ammonia per year valued at IDR 4.2 trillion. Likewise, a shortage of phosphoric acid means that as much as 225 thousand tons per year must be imported from Jordan, Morocco, South Africa, the Philippines, and India. Until now, wastewater treatment in the rubber industry has generally been done by holding the waste in tanks, where it is precipitated and filtered, with the rest released into the environment. However, this method is inefficient and incurs high energy costs because it requires many stages before producing clean water that can be discharged into rivers. On the other hand, Indonesia has pineapple fruit that can be harvested throughout the year across the country. In 2010, pineapple production in Indonesia reached 1,406,445 tons, or about 9.36 percent of total fruit production in Indonesia. Increased productivity is directly proportional to the amount of pineapple leaf waste, which is generated continuously and is usually just dumped on the ground or disposed of with other waste at final disposal sites. Eco-nanofiltration membranes based on pineapple-leaf fibre waste could therefore solve these environmental problems efficiently. Nanofiltration is a pressure-driven process in which the transport of each molecule can occur by either convection or diffusion. Nanofiltration membranes can filter water at the nano scale, separating from the treated waste residues of economic value with higher N and P contents, which can serve as raw materials for the manufacture of NPK fertilizer to overcome the crisis in Indonesia. The raw material used to manufacture the eco-nanofiltration membrane is cellulose from pineapple fibre, processed into cellulose acetate, which is biodegradable and requires a membrane change only every six months. The expected output is a green eco-technology: nanofiltration membranes not only treat rubber industry waste effectively, efficiently and in an environmentally friendly manner, but also lower the cost of waste treatment compared to conventional methods.
Keywords: biodegradable, cellulose diacetate, fertilizers, pineapple, rubber
Procedia PDF Downloads 446
866 Antimicrobial Value of Olax subscorpioidea and Bridelia ferruginea on Micro-Organism Isolates of Dental Infection
Authors: I. C. Orabueze, A. A. Amudalat, S. A. Adesegun, A. A. Usman
Abstract:
Dental and associated oral diseases increasingly affect a considerable portion of the population and are considered among the major causes of tooth loss, discomfort, mouth odor and loss of confidence. This study comprised an ethnobotanical survey of medicinal plants used in oral therapy and an evaluation of the antimicrobial activities of methanolic extracts of two plants selected from the survey for their efficacy against dental microorganisms. The ethnobotanical survey was carried out in six herbal markets in Lagos State, Nigeria, through oral interviews and information obtained from an old, manually compiled family herbal medication book. Methanolic extracts of Olax subscorpioidea (stem bark) and Bridelia ferruginea (stem bark) were assayed for their antimicrobial activities against clinical oral isolates (Aspergillus fumigatus, Candida albicans, Streptococcus spp., Staphylococcus aureus, Lactobacillus acidophilus and Pseudomonas aeruginosa). In vitro microbial techniques (the agar well diffusion method and the minimum inhibitory concentration (MIC) assay) were employed, with chlorhexidine gluconate used as the reference drug for comparison with the extract results. Preliminary phytochemical screening of the constituents of the plants was also done. The ethnobotanical survey produced 28 plants of diverse families. Different plant parts (seed, fruit, leaf, root, bark) were mentioned, but 60% of mentions were of either the stem or the bark. O. subscorpioidea showed considerable antifungal activity against Aspergillus fumigatus, with zones of inhibition ranging from 2.000 to 2.650 cm, but no such encouraging inhibitory activity was observed against the other assayed organisms. B. ferruginea showed antibacterial sensitivity against Streptococcus spp., Staphylococcus aureus, Lactobacillus acidophilus and Pseudomonas aeruginosa, with zones of inhibition ranging from 2.500 to 3.400, 1.600 to 2.250, 1.950 to 2.700, and 1.525 to 2.225 cm, respectively. The minimum inhibitory concentration of O. subscorpioidea against Aspergillus fumigatus was 51.2 mg ml-1, while that of B. ferruginea against Streptococcus spp. was 0.1 mg ml-1, and against Staphylococcus aureus, Lactobacillus acidophilus and Pseudomonas aeruginosa it was 25.6 mg ml-1. Phytochemical analysis revealed the presence of alkaloids, saponins, cardiac glycosides, tannins, phenols and terpenoids in both plants, with steroids only in B. ferruginea. No toxicity was observed among mice given the two methanolic extracts (1000 mg kg-1) after 21 days. The barks of both plants exhibited antimicrobial properties against the assayed organisms causing periodontal diseases, thus upholding their folkloric use in the management of oral disorders. Further research could examine these extracts as a combination therapy, checking for possible synergistic value in toothpaste and oral rinse formulations for reducing the oral bacterial and fungal load.
Keywords: antimicrobial activities, Bridelia ferruginea, dental disinfection, methanolic extract, Olax subscorpioidea, ethnobotanical survey
Procedia PDF Downloads 244