566 Intertemporal Individual Preferences for Climate Change Intergenerational Investments – Estimating the Social Discount Rate for Poland
Authors: Monika Foltyn-Zarychta
Abstract:
Climate change mitigation investments are extremely extended in time. The project cycle does not last merely for decades – sometimes it stretches over hundreds of years, and the project outcomes affect several generations. The longevity of those activities raises multiple problems in the appraisal procedure. One of the pivotal issues is the choice of the discount rate, which tremendously affects the net present value criterion. The paper aims at estimating the value of the social discount rate for intergenerational investment projects in Poland based on individual intertemporal preferences. The analysis is based on a questionnaire surveying Polish citizens and designed as a contingent valuation study. The analysis aimed at answering two questions: 1) whether the value of the individual discount rate declines with increased time of delay, and 2) whether the value of the individual discount rate changes with increased spatial distance toward the gainers of the project. The valuation questions were designed to identify the respondent’s indifference point between lives saved today and in the future due to a hypothetical project mitigating climate change. Several delays of the project’s effects (10, 30, 90 and 150 years) were used to test the decline in value with time. The variability with regard to distance was tested by asking respondents to estimate their indifference point separately for gainers in Poland and in Latvia. The results show that as the time delay increases, the average discount rate decreases from 15.32% for a 10-year delay to 2.75% for a 150-year delay. Similar values were estimated for Latvian beneficiaries. It should also be noted that the average volatility, measured by standard deviation, decreased with time delay. However, the results did not show any statistically significant difference in discount rate values for Polish and Latvian gainers.
The decline of the discount rate with time supports the possible economic efficiency of the intergenerational effects of climate change mitigation projects and may support the assumption of altruistic behavior of the present generation toward future people. Furthermore, this is backed up by the same discount rate level declared by Polish respondents for the spatially distant Latvian gainers. Climate change activities usually need significant outlays, and the payback period is extremely long. The more precise the variables in the appraisal are, the more trustworthy and rational the investment decision is. The discount rate estimations for Poland add to the vivid discussion concerning climate change and intergenerational justice.
Keywords: climate change, social discount rate, investment appraisal, intergenerational justice
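As a rough illustration of why the elicited decline matters for appraisal, the sketch below applies standard exponential discounting at the two average rates reported above (15.32% for a 10-year delay, 2.75% for a 150-year delay); the benefit of 100 lives saved is an arbitrary illustrative figure, not a value from the survey:

```python
def present_value(benefit, annual_rate, delay_years):
    """Exponentially discount a future benefit back to the present."""
    return benefit / (1.0 + annual_rate) ** delay_years

# Average individual discount rates elicited in the survey
pv_10 = present_value(100.0, 0.1532, 10)    # 100 lives saved in 10 years, valued today
pv_150 = present_value(100.0, 0.0275, 150)  # 100 lives saved in 150 years, valued today
```

Had respondents applied the 10-year rate over the full 150 years, the same benefit would be worth practically zero today; the observed decline in the rate with delay is what keeps very distant benefits economically non-negligible.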
Procedia PDF Downloads 237
565 Stuttering Persistence in Children: Effectiveness of the Psicodizione Method in a Small Italian Cohort
Authors: Corinna Zeli, Silvia Calati, Marco Simeoni, Chiara Comastri
Abstract:
Developmental stuttering affects about 10% of preschool children; despite the high percentage of natural recovery, a quarter of them will become adults who stutter. An effective early intervention should help those children with a high persistence risk. The Psicodizione method for early stuttering is an Italian indirect behavioral treatment for preschool children who stutter, in which parents act as good guides for communication, modeling their own fluency. In this study, we provide a preliminary measure of the long-term effectiveness of the Psicodizione method on stuttering preschool children with a high persistence risk. Among all Italian children treated with the Psicodizione method between 2018 and 2019, we selected 8 kids with at least 3 high-risk persistence factors from the Illinois Prediction Criteria proposed by Yairi and Seery. The factors chosen for the selection were: one parent who stutters (1 pt mother; 1.5 pt father), male gender, ≥ 4 years old at onset, and ≥ 12 months from onset of symptoms before treatment. For this study, the families were contacted after an average period of 14.7 months (range 3–26 months). Parental reports were gathered with a standard online questionnaire in order to obtain data reflecting fluency across a wide range of the children’s life situations. The minimum worthwhile outcome was set at "mild evidence" on a 5-point Likert scale (1 = mild evidence, 5 = high-severity evidence). A second group of 6 children, among those treated with the Psicodizione method, was selected as having high potential for spontaneous remission (low persistence risk). The children in this group had to fulfill all the following criteria: female gender, symptoms for less than 12 months (before treatment), age of onset < 4 years old, and neither parent with persistent stuttering. At the time of this follow-up, the children were aged 6–9 years, with a mean of 15 months post-treatment.
Among the children in the high persistence risk group, 2 (25%) no longer stuttered, and 3 (37.5%) stuttered mildly based on parental reports. In the low persistence risk group, the children were aged 4–6 years, with a mean of 14 months post-treatment, and 5 (83%) no longer stuttered (for the past 16 months on average). Thus, 62.5% of children at high risk of persistence showed at most mild evidence of stuttering after Psicodizione treatment, and 75% of parents reported better fluency than before the treatment. The low persistence risk group seemed to be representative of spontaneous recovery. This study’s design could help to better evaluate the success of the proposed interventions for stuttering preschool children and provides a preliminary measure of the effectiveness of the Psicodizione method on children at high persistence risk.
Keywords: early treatment, fluency, preschool children, stuttering
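The selection rule described above (at least 3 high-risk factors, with parental stuttering weighted 1 pt for the mother and 1.5 pt for the father) can be sketched as a small scoring function; treating each of the remaining binary factors as one point is our assumption for illustration:

```python
def persistence_risk_score(factors):
    """Illinois-style persistence risk score for the selection criteria above.
    Parental weights follow the abstract (mother 1 pt, father 1.5 pt);
    counting the other binary factors as 1 pt each is an assumption."""
    score = 0.0
    if factors.get("mother_stutters"):
        score += 1.0
    if factors.get("father_stutters"):
        score += 1.5
    for key in ("male", "onset_age_4_or_older", "symptoms_over_12_months"):
        if factors.get(key):
            score += 1.0
    return score

# A boy with a father who stutters and late onset reaches the cut-off of 3
high_risk = persistence_risk_score(
    {"father_stutters": True, "male": True, "onset_age_4_or_older": True})
```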
Procedia PDF Downloads 215
564 Data Analysis Tool for Predicting Water Scarcity in Industry
Authors: Tassadit Issaadi Hamitouche, Nicolas Gillard, Jean Petit, Valerie Lavaste, Celine Mayousse
Abstract:
Water is a fundamental resource for industry. It is taken from the environment, either from municipal distribution networks or from various natural water sources such as the sea, ocean, rivers, aquifers, etc. Once used, water is discharged into the environment or reprocessed at the plant or at treatment plants. These withdrawals and discharges have a direct impact on natural water resources. The impacts can concern the quantity of water available or the quality of the water used, or be more complex to measure and less direct, such as the health of the population downstream of the watercourse. Based on the analysis of data (meteorological data, river characteristics, physicochemical substances), we wish to predict water stress episodes and anticipate prefectural decrees, which can impact the performance of plants; propose improvement solutions; help industrialists choose the location of a new plant; visualize possible interactions between companies to optimize exchanges and encourage the pooling of water treatment solutions; and set up circular economies around the issue of water. The development of a system for the collection, processing, and use of data related to water resources requires the functional constraints specific to such data to be made explicit. Thus, the system must be able to store a large amount of data from sensors (the main type of data in plants and their environment). In addition, manufacturers need 'near-real-time' processing of information in order to make the best decisions (to be rapidly notified of an event that would have a significant impact on water resources). Finally, the visualization of data must be adapted to its temporal and geographical dimensions.
In this study, we set up an infrastructure centered on the TICK application stack (Telegraf, InfluxDB, Chronograf, and Kapacitor), a set of loosely coupled but tightly integrated open-source projects designed to manage huge amounts of time-stamped information. The software architecture is coupled with the cross-industry standard process for data mining (CRISP-DM) methodology. The robust architecture and the methodology used have demonstrated their effectiveness on the case study of learning the level of a river with a 7-day horizon. The management of water and of the activities within the plants -which depend on this resource- should be considerably improved thanks, on the one hand, to the learning that allows the anticipation of periods of water stress, and, on the other hand, to the information system that is able to warn decision-makers with alerts created from the formalization of prefectural decrees.
Keywords: data mining, industry, machine learning, shortage, water resources
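The abstract does not specify the learning model beyond the CRISP-DM workflow, so as a minimal stand-in the sketch below extrapolates a least-squares linear trend of daily river levels to a 7-day horizon:

```python
def forecast_level(series, horizon=7):
    """Fit a least-squares straight-line trend to daily levels and
    extrapolate `horizon` days beyond the last observation."""
    n = len(series)
    t_mean = (n - 1) / 2.0
    y_mean = sum(series) / n
    cov = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    var = sum((t - t_mean) ** 2 for t in range(n))
    slope = cov / var                      # trend per day
    intercept = y_mean - slope * t_mean
    return intercept + slope * (n - 1 + horizon)
```

In the actual pipeline, Telegraf would feed sensor readings into InfluxDB, the model would be retrained on the stored series, and Kapacitor would raise an alert when the forecast crosses a water-stress threshold.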
Procedia PDF Downloads 121
563 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements
Authors: Alexander Buhr, Klaus Ehrenfried
Abstract:
Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower compared to full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to obtain a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been carried out in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness as well as the momentum thickness and the form factor are calculated along the train model.
Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the train wake. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by larger roughness elements, especially when these are applied at heights close to the measuring plane. The roughness elements also cause high fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements
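The integral boundary-layer parameters named above can be computed directly from a measured velocity profile; the sketch below uses a 1/7-power-law profile as a stand-in for PIV data, with the boundary-layer thickness value assumed for illustration:

```python
def trapezoid(f, x):
    """Trapezoidal integration of sampled values f over the grid x."""
    return sum((f[i] + f[i + 1]) * (x[i + 1] - x[i]) / 2.0
               for i in range(len(x) - 1))

def bl_parameters(y, u, u_inf):
    """Displacement thickness, momentum thickness and form factor H
    from a wall-normal velocity profile u(y)."""
    r = [ui / u_inf for ui in u]
    delta_star = trapezoid([1.0 - ri for ri in r], y)    # displacement thickness
    theta = trapezoid([ri * (1.0 - ri) for ri in r], y)  # momentum thickness
    return delta_star, theta, delta_star / theta

# 1/7-power-law profile as a stand-in for PIV data; delta is an assumed value
delta = 0.05                                   # boundary-layer thickness [m]
y = [delta * i / 4000.0 for i in range(4001)]  # wall-normal grid
u = [(yi / delta) ** (1.0 / 7.0) for yi in y]  # u / u_inf
d_star, theta, H = bl_parameters(y, u, 1.0)
# analytic values for this profile: d_star = delta/8, theta = 7*delta/72, H = 9/7
```

The form factor H is the quantity whose approach to a constant value signals the equilibrium state discussed above (H ≈ 1.3 is typical of a turbulent flat-plate layer).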
Procedia PDF Downloads 305
562 Microfluidic Plasmonic Bio-Sensing of Exosomes by Using a Gold Nano-Island Platform
Authors: Srinivas Bathini, Duraichelvan Raju, Simona Badilescu, Muthukumaran Packirisamy
Abstract:
A bio-sensing method, based on the plasmonic properties of gold nano-islands, has been developed for the detection of exosomes in a clinical setting. The position of the gold plasmon band in the UV-visible spectrum depends on the size and shape of the gold nanoparticles as well as on the surrounding environment. When various chemical entities are adsorbed or bound, the gold plasmon band shifts toward longer wavelengths, and the shift is proportional to their concentration. Exosomes transport cargoes of molecules and genetic materials to proximal and distal cells. Presently, the standard method for their isolation and quantification from body fluids is ultracentrifugation, which is not a practical method to implement in a clinical setting. Thus, a versatile and cutting-edge platform is required to selectively detect and isolate exosomes for further analysis at the clinical level. The new sensing protocol, instead of antibodies, makes use of a specially synthesized polypeptide (Vn96) to capture and quantify the exosomes from different media by binding the heat shock proteins of exosomes. The protocol has been established and optimized on a glass substrate in order to facilitate the next stage, namely the transfer of the protocol to a microfluidic environment. After each step of the protocol, the UV-Vis spectrum was recorded and the position of the gold Localized Surface Plasmon Resonance (LSPR) band was measured. The sensing process was modelled, taking into account the characteristics of the nano-island structure, prepared by thermal convection and annealing. The optimal molar ratios of the most important chemical entities involved in the detection of exosomes were calculated as well. Indeed, it was found that the results of the sensing process depend on two major steps: the molar ratio of streptavidin to biotin-PEG-Vn96 and, in the final step, the capture of exosomes by the biotin-PEG-Vn96 complex.
The microfluidic device designed for the sensing of exosomes consists of a glass substrate sealed by a PDMS layer that contains the channel and a collecting chamber. In the device, the solutions of linker, cross-linker, etc., are pumped over the gold nano-islands, and an Ocean Optics spectrometer is used to measure the position of the Au plasmon band at each step of the sensing. The experiments have shown that the shift of the Au LSPR band is proportional to the concentration of exosomes, and exosomes can thereby be accurately quantified. An important advantage of the method is the ability to discriminate between exosomes of different origins.
Keywords: exosomes, gold nano-islands, microfluidics, plasmonic biosensing
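Since the LSPR band shift is reported to be proportional to exosome concentration, quantification amounts to a linear calibration; the sketch below uses made-up concentration and shift values for illustration:

```python
def fit_line(x, y):
    """Least-squares calibration line: shift = m * concentration + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return m, my - m * mx

def concentration_from_shift(shift_nm, m, b):
    """Invert the calibration line to quantify an unknown sample."""
    return (shift_nm - b) / m

# Hypothetical calibration points: concentration (arbitrary units) vs band shift (nm)
conc = [0.0, 1.0, 2.0, 4.0]
shift = [0.2, 2.7, 5.2, 10.2]
m, b = fit_line(conc, shift)
unknown = concentration_from_shift(7.7, m, b)  # falls on the fitted line at 3.0
```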
Procedia PDF Downloads 172
561 Thermal and Visual Comfort Assessment in Office Buildings in Relation to Space Depth
Authors: Elham Soltani Dehnavi
Abstract:
In today’s compact cities, bringing daylight and fresh air into buildings is a significant challenge, but it also presents opportunities to reduce energy consumption by reducing the need for artificial lighting and mechanical systems. Simple adjustments to building form can contribute to efficiency. This paper examines how the relationship between the width and depth of rooms in office buildings affects visual and thermal comfort, and consequently energy savings. Based on these evaluations, we can determine the best location for sedentary areas in a room. We can also propose improvements to the occupant experience and minimize the difference between predicted and measured building performance by changing other design parameters, such as natural ventilation strategies, glazing properties, and shading. This study investigates spatial daylighting and thermal comfort conditions for a range of room configurations using computer simulations, then suggests the best depth for optimizing both daylighting and thermal comfort, and consequently energy performance, for each room type. The Window-to-Wall Ratio (WWR) is 40%, with a 0.8 m window sill and a 0.4 m window head. Other parameters are fixed according to building codes and standards, and the simulations are carried out for Seattle, USA. The simulation results are presented as evaluation grids using thresholds for different metrics: Daylight Autonomy (DA), spatial Daylight Autonomy (sDA), Annual Sunlight Exposure (ASE), and Daylight Glare Probability (DGP) for visual comfort; and Predicted Mean Vote (PMV), Predicted Percentage of Dissatisfied (PPD), occupied Thermal Comfort Percentage (occTCP), over-heated percent, under-heated percent, and Standard Effective Temperature (SET) for thermal comfort, all extracted from Grasshopper scripts. The simulation tools are Grasshopper plugins such as Ladybug, Honeybee, and EnergyPlus.
According to the results, some metrics do not change much along the room depth, while others change significantly. We can therefore overlap these grids in order to determine the comfort zone. The overlapped grids contain 8 metrics, and the pixels that meet all 8 thresholds define the comfort zone. With these overlapped maps, we can determine the comfort zones inside rooms and locate sedentary areas there. Other parts can be used for tasks that are not performed permanently or that need lower or higher amounts of daylight, and for which thermal comfort is less critical to the user experience. The results can be compiled in a table to be used as a guideline by designers in the early stages of the design process.
Keywords: occupant experience, office buildings, space depth, thermal comfort, visual comfort
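The grid-overlap step described above is essentially a per-pixel logical AND across the metric grids; a minimal sketch with hypothetical thresholds for two of the eight metrics:

```python
def comfort_zone(grids, predicates):
    """Overlay per-metric evaluation grids: a cell belongs to the comfort zone
    only if every metric's predicate holds there."""
    rows, cols = len(grids[0]), len(grids[0][0])
    return [[all(pred(grid[r][c]) for grid, pred in zip(grids, predicates))
             for c in range(cols)]
            for r in range(rows)]

# Hypothetical 2x2 evaluation grids for two metrics: sDA >= 50% and ASE <= 10%
sda = [[62, 48], [55, 30]]
ase = [[8, 6], [12, 4]]
zone = comfort_zone([sda, ase], [lambda v: v >= 50, lambda v: v <= 10])
# only the cell meeting both thresholds remains: [[True, False], [False, False]]
```

The full analysis would pass all eight metric grids and thresholds; the principle is unchanged.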
Procedia PDF Downloads 183
560 Automatic Aggregation and Embedding of Microservices for Optimized Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have a similar scalability behavior. Embedding addresses communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendors' local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, while a2 and b2 are deployed on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B.
Aggregation and embedding techniques are complex since different microservices might have incompatible runtime dependencies which forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the security concern, and thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance.
Keywords: aggregation, deployment, embedding, resource allocation
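The a1-b1/a2-b2 example above amounts to pairing the i-th instances of two communicating services onto a shared machine; a minimal sketch of that embedding rule (instance and machine names are illustrative):

```python
def embed_pairs(service_a, service_b):
    """Co-locate the i-th instances of two communicating microservices on one
    machine, so each pair talks over localhost rather than a load balancer."""
    return {f"m{i}": [a, b]
            for i, (a, b) in enumerate(zip(service_a, service_b), start=1)}

deployment = embed_pairs(["a1", "a2"], ["b1", "b2"])
# deployment -> {"m1": ["a1", "b1"], "m2": ["a2", "b2"]}, matching the example above
```

A real optimizer would also have to check runtime-dependency compatibility (or rely on containers, as the paragraph above notes) before co-locating two instances.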
Procedia PDF Downloads 203
559 A Content Analysis of the Introduction to the Philosophy of Religion Literature Published in the West between 1950-2010 in Terms of Definition, Method and Subjects
Authors: Fatih Topaloğlu
Abstract:
Although philosophy is inherently a theoretical and intellectual activity, it should not be denied that environmental conditions influence the formation and shaping of philosophical thought. In this context, it should be noted that the Philosophy of Religion has been influential in debates in the West, especially since the beginning of the 20th century, and that this influence has dimensions that cannot be limited to academic or intellectual fields. The issues and problems that fall within the field of the Philosophy of Religion are followed with interest by a significant proportion of society through popular publications. The Philosophy of Religion has its share in many social, economic, cultural, scientific, political, and ethical developments. The Philosophy of Religion, in the most general sense, can be defined as a philosophical approach to religion, or a philosophical way of thinking and discussing religion. It tries to explain the epistemological foundations of concepts such as belief and faith that shape religious life by revealing their meaning for the individual. Thus, it tries to evaluate the effect of beliefs on the individual's values, judgments, and behaviours with a comprehensive and critical eye. The Philosophy of Religion, which tries to create new solutions and perspectives by applying the methods of philosophy to religious problems, tries to solve these problems not by referring to the holy book or religious teachings but by logical proofs obtained through reason and evidence filtered through criticism. Although there is no standard method for doing Philosophy of Religion, an approach that can be expressed as thinking about religion in a rational, objective, and consistent way is generally accepted. The evaluations made within the scope of the Philosophy of Religion have two stages: the first is the definition stage, and the second is the evaluation stage.
In the first stage, the data of different scientific disciplines, especially the other religious sciences, are utilized to define the issues objectively. In the second stage, philosophical evaluations are made on this foundation. In these evaluations, the question of how the relationship between religion and philosophy should be established is extremely sensitive. The main thesis of this paper is that the Philosophy of Religion, as a branch of philosophy, has been affected by the conditions of the historical experience through which it has passed and, under the influence of these conditions, has differentiated its subjects and methods over time. This study will attempt to evaluate the validity of this thesis based on the "Introduction to Philosophy of Religion" literature, which we assume reflects this differentiation. This examination aims to reach some factual conclusions about the nature of both philosophical and religious thought, to determine the phases that the Philosophy of Religion has gone through as a discipline since it emerged, and to investigate the possibilities of a holistic view of the field.
Keywords: content analysis, culture, history, philosophy of religion, method
Procedia PDF Downloads 57
558 Preparation of Allyl BODIPY for the Click Reaction with Thioglycolic Acid
Authors: Chrislaura Carmo, Luca Deiana, Mafalda Laranjo, Abilio Sobral, Armando Cordova
Abstract:
Photodynamic therapy (PDT) is currently used for the treatment of malignancies and premalignant tumors. It is based on the uptake of a photosensitizing molecule (PS) which, when excited by light of a certain wavelength, reacts with oxygen and generates oxidizing species (radicals, singlet oxygen, triplet species) in target tissues, leading to cell death. BODIPY (4,4-difluoro-4-bora-3a,4a-diaza-s-indacene) derivatives are emerging as important candidate photosensitizers for photodynamic therapy of cancer cells due to their high triplet quantum yield. Today these dyes are relevant molecules in photovoltaic materials and fluorescent sensors. In this study, we demonstrate that BODIPY can be covalently linked to thioglycolic acid through a click reaction. Thiol-ene click chemistry has become a powerful synthesis method in materials science and surface modification. The design of biobased allyl-terminated precursors with high renewable carbon content for the construction of thiol-ene polymer networks is essential for sustainable development and green chemistry. The work aims to synthesize the BODIPY (10-(4-(allyloxy) phenyl)-2,8-diethyl-5,5-difluoro-1,3,7,9-tetramethyl-5H-dipyrrolo[1,2-c:2',1'-f] [1,3,2] diazaborinin-4-ium-5-uide) and to perform the click reaction with thioglycolic acid. The BODIPY was synthesized by the condensation reaction between the aldehyde and pyrrole in dichloromethane, followed by in situ complexation with BF3·OEt2 in the presence of a base. It was then functionalized with allyl bromide to introduce the double bond and thus enable the click reaction. The thiol-ene click was performed using DMPA (2,2-dimethoxy-2-phenylacetophenone) as a photo-initiator in the presence of UV light (320–500 nm) in DMF at room temperature for 24 hours. Compounds were characterized by standard analytical techniques, including UV-Vis spectroscopy, 1H, 13C, and 19F NMR, and mass spectrometry.
The results of this study will be important for linking BODIPY to polymers through the thiol group, offering a diversity of applications and functionalizations. This new molecule can be tested among third-generation photosensitizers, in which the dye is targeted to cells, mainly cancer cells, by antibodies or nanocarriers, for PDT and Photodynamic Antimicrobial Chemotherapy (PACT). According to our studies, it was possible to observe a click reaction between allyl BODIPY and thioglycolic acid. Our team will also test the reaction with other thiol groups for comparison. Further, we will perform the click reaction of BODIPY with a natural polymer bearing a thiol group. The resulting compounds will be tested in PDT assays on various lung cancer cell lines.
Keywords: bodipy, click reaction, thioglycolic acid, allyl, thiol-ene click
Procedia PDF Downloads 132
557 Application of Mesenchymal Stem Cells in Diabetic Therapy
Authors: K. J. Keerthi, Vasundhara Kamineni, A. Ravi Shanker, T. Rammurthy, A. Vijaya Lakshmi, Q. Hasan
Abstract:
Pancreatic β-cells are the predominant insulin-producing cell type within the Islets of Langerhans, and insulin is the primary hormone that regulates carbohydrate and fat metabolism. Apoptosis of β-cells or insufficient insulin production leads to Diabetes Mellitus (DM). Current therapy for diabetes consists of either medical management or insulin replacement and regular monitoring. Replacement of β-cells is an attractive treatment option for both Type-1 and Type-2 DM in view of recent work indicating that β-cell apoptosis is a common underlying cause of both types of DM. With the development of the Edmonton protocol, pancreatic β-cell allo-transplantation became possible, but it is still not considered standard of care due to the subsequent requirement of lifelong immunosuppression and the scarcity of suitable healthy organs from which to retrieve pancreatic β-cells. Fetal pancreatic cells from abortuses were developed as a possible therapeutic option for diabetes; however, this posed several ethical issues. Hence, in the present study, mesenchymal stem cells (MSCs) isolated from human umbilical cord (HUC) tissue were differentiated into insulin-producing cells. MSCs have already made their mark in the growing field of regenerative medicine, and their therapeutic worth has been validated for a number of conditions. HUC samples were collected with prior informed consent as approved by the institutional ethics committee. HUC samples (n=26) were processed using a combination of mechanical and enzymatic (collagenase-II, 100 U/ml, Gibco) methods to obtain MSCs, which were cultured in vitro in L-DMEM (low-glucose Dulbecco's Modified Eagle's Medium, Sigma, 4.5 mM glucose/L) with 10% FBS in a 5% CO2 incubator at 37°C. After reaching 80-90% confluency, MSCs were characterized by flow cytometry and immunocytochemistry for specific cell surface antigens. The cells expressed CD90+, CD73+, CD105+, CD34-, CD45-, HLA-DR-/low, and vimentin+.
These cells were differentiated to β-cells using H-DMEM (high-glucose Dulbecco's Modified Eagle's Medium, 25 mM glucose/L, Gibco), β-mercaptoethanol (0.1 mM, Hi-Media), basic fibroblast growth factor (10 µg/L, Gibco), and nicotinamide (10 mmol/L, Hi-Media). Pancreatic β-cells were confirmed by positive dithizone staining and were found to be functionally active, as they released 8 IU/ml insulin on glucose stimulation. Isolating MSCs from the usually discarded, abundantly available HUC tissue, then expanding and differentiating them to β-cells, may be the most feasible cell therapy option for the millions of people suffering from DM globally.
Keywords: diabetes mellitus, human umbilical cord, mesenchymal stem cells, differentiation
Procedia PDF Downloads 259
556 Flood Risk Management in the Semi-Arid Regions of Lebanon - Case Study “Semi Arid Catchments, Ras Baalbeck and Fekha”
Authors: Essam Gooda, Chadi Abdallah, Hamdi Seif, Safaa Baydoun, Rouya Hdeib, Hilal Obeid
Abstract:
Floods are a common natural disaster in the semi-arid regions of Lebanon, resulting in damage to human life and deterioration of the environment. Despite their destructive nature and their immense impact on the socio-economy of the region, flash floods have not received adequate attention from policy and decision makers. This is mainly because of poor understanding of the processes involved and of the measures needed to manage the problem. The current understanding of flash floods remains at the level of general concepts; most policy makers have yet to recognize that flash floods are distinctly different from normal riverine floods in terms of causes, propagation, intensity, impacts, predictability, and management. Flash floods are generally not investigated as a separate class of event but are rather reported as part of the overall seasonal flood situation. As a result, Lebanon generally lacks policies, strategies, and plans relating specifically to flash floods. The main objective of this research is to improve flash flood prediction by providing new knowledge and better understanding of the hydrological processes governing flash floods in the eastern catchments of the El Assi River. This includes developing rainstorm time distribution curves that are unique to this type of study region, and analyzing, investigating, and developing a relationship between arid watershed characteristics (including urbanization) and the flood frequency affecting nearby villages in Ras Baalbeck and Fekha. This paper discusses different levels of integration approaches between GIS and hydrological models (HEC-HMS & HEC-RAS) and presents a case study covering all the tasks of creating model input, editing data, running the model, and displaying output results. The study area corresponds to the East Basin (Ras Baalbeck & Fekha), comprising nearly 350 km² and situated in the Bekaa Valley of Lebanon.
The case study presented in this paper uses a database derived from Lebanese Army topographic maps of the region. ArcMap was used to digitize the contour lines, streams, and other features from the topographic maps, and a digital elevation model (DEM) grid was derived for the study area. The next steps in this research are to incorporate rainfall time series data from the Arsal, Fekha, and Deir El Ahmar stations to build a hydrologic data model within a GIS environment, and to combine ArcGIS/ArcMap, HEC-HMS, and HEC-RAS models in order to produce a spatial-temporal model for floodplain analysis at a regional scale. In this study, HEC-HMS and the SCS method were chosen to build the hydrologic model of the watershed. The model was then calibrated using a flood event that occurred between the 7th and 9th of May 2014, which is considered exceptionally extreme because of the length of time the flows lasted (15 hours) and the fact that it covered both the Arsal and Ras Baalbeck watersheds; the strongest reported flood in recent times lasted for only 7 hours and covered only one watershed. The calibrated hydrologic model is then used to build the hydraulic model and to assess flood hazard maps for the region. The HEC-RAS model is used for this purpose, and field trips were made to the catchments in order to calibrate both the hydrologic and hydraulic models. The presented models are flexible procedures for an ungauged watershed: for some storm events they deliver good results, while for others no parameter vectors can be found. In order to have a general methodology based on these ideas, further calibration and reconciliation of results on the dependence of flood event parameters and catchment properties is required. Keywords: flood risk management, flash flood, semi-arid region, El Assi River, hazard maps
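The runoff-depth step of the SCS curve-number method mentioned above can be sketched in a few lines. This is a minimal illustration of the standard relation as implemented in tools like HEC-HMS, not the study's calibrated model; the curve number and storm depth in the example are assumed values, not figures from the Ras Baalbeck/Fekha catchments.

```python
def scs_runoff_mm(p_mm: float, cn: float) -> float:
    """Direct runoff depth (mm) for storm rainfall p_mm and curve number cn,
    using the standard SCS relations with the conventional Ia = 0.2*S."""
    s = 25400.0 / cn - 254.0   # potential maximum retention, in mm
    ia = 0.2 * s               # initial abstraction before runoff begins
    if p_mm <= ia:
        return 0.0             # all rainfall absorbed; no direct runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

For example, an assumed 60 mm storm on a catchment with an assumed CN of 85 yields roughly 27 mm of direct runoff; raising CN (less pervious terrain) raises the runoff for the same storm.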
Procedia PDF Downloads 478555 Modeling Diel Trends of Dissolved Oxygen for Estimating the Metabolism in Pristine Streams in the Brazilian Cerrado
Authors: Wesley A. Saltarelli, Nicolas R. Finkler, Adriana C. P. Miwa, Maria C. Calijuri, Davi G. F. Cunha
Abstract:
The metabolism of streams is an indicator of ecosystem disturbance due to the influence of the catchment on the structure of water bodies. The study of respiration and photosynthesis allows the estimation of energy fluxes through food webs and the analysis of autotrophic and heterotrophic processes. We aimed at evaluating the metabolism of streams located in the Brazilian savannah, Cerrado (Sao Carlos, SP), by determining and modeling the daily changes of dissolved oxygen (DO) in the water over one year. Three water bodies with minimal anthropogenic interference in their surroundings were selected: Espraiado (ES), Broa (BR), and Canchim (CA). Every two months, water temperature, pH, and conductivity are measured with a multiparameter probe. Nitrogen and phosphorus forms are determined according to standard methods. Canopy cover percentages are also estimated in situ with a spherical densiometer. Stream flows are quantified through the conservative tracer (NaCl) method. For the metabolism study, DO (PME MiniDOT) and light (Odyssey Photosynthetic Active Radiation) sensors log data every ten minutes for at least three consecutive days. The reaeration coefficient (k2) is estimated through the tracer gas (SF6) method. Finally, we model the variations in DO concentrations and calculate the rates of gross and net primary production (GPP and NPP) and respiration based on the one-station method described in the literature. Three sampling campaigns were carried out, in October and December 2015 and February 2016 (the next will be in April, June, and August 2016); the results from the first two periods are already available. The mean water temperatures in the streams were 20.0 ± 0.8 °C (Oct) and 20.7 ± 0.5 °C (Dec). In general, electrical conductivity values were low (ES: 20.5 ± 3.5 µS/cm; BR: 5.5 ± 0.7 µS/cm; CA: 33 ± 1.4 µS/cm). The mean pH values were 5.0 (BR), 5.7 (ES), and 6.4 (CA). 
The mean concentrations of total phosphorus were 8.0 µg/L (BR), 66.6 µg/L (ES), and 51.5 µg/L (CA), whereas soluble reactive phosphorus concentrations were always below 21.0 µg/L. The BR stream had the lowest concentration of total nitrogen (0.55 mg/L) compared to CA (0.77 mg/L) and ES (1.57 mg/L). The average discharges were 8.8 ± 6 L/s (ES), 11.4 ± 3 L/s (BR), and 2.4 ± 0.5 L/s (CA). The average percentages of canopy cover were 72% (ES), 75% (BR), and 79% (CA). Significant daily changes were observed in the DO concentrations, reflecting predominantly heterotrophic conditions (respiration exceeded gross primary production, with negative net primary production). GPP varied from 0-0.4 g/m².d (in Oct and Dec), and R varied from 0.9-22.7 g/m².d (Oct) and from 0.9-7 g/m².d (Dec). The predominance of heterotrophic conditions suggests increased vulnerability of the ecosystems to artificial inputs of organic matter that would demand oxygen. The investigation of metabolism in pristine streams can help define natural reference conditions of trophic state. Keywords: low-order streams, metabolism, net primary production, trophic state
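The one-station diel-DO calculation referenced above can be sketched as follows. This is a deliberately simplified illustration, assuming a known, constant reaeration coefficient k2 and saturation DO, and a clean day/night split; the function and the synthetic example in the usage note are ours, not the study's actual data or code.

```python
def stream_metabolism(do, do_sat, is_day, k2, dt_days):
    """One-station estimates of ecosystem respiration (ER) and gross primary
    production (GPP) from a diel dissolved-oxygen series.
    do, do_sat : DO and saturation DO (mg/L); do has one more point than is_day.
    is_day     : one boolean per measurement interval.
    k2         : reaeration coefficient (1/day); dt_days: interval length (days).
    Returns (ER rate in mg O2/L/d, daily GPP in mg O2/L)."""
    # rate of DO change over each interval, corrected for the reaeration flux
    net = [(do[i + 1] - do[i]) / dt_days - k2 * (do_sat[i] - do[i])
           for i in range(len(is_day))]
    # at night there is no photosynthesis, so the night net rate equals -ER
    night = [r for r, day in zip(net, is_day) if not day]
    er = -sum(night) / len(night)
    # by day the net rate is GPP - ER; add ER back and integrate over daylight
    gpp = sum(r + er for r, day in zip(net, is_day) if day) * dt_days
    return er, gpp
```

Applied to a synthetic 10-minute series generated with known rates, the estimator recovers the respiration rate and the daily GPP exactly, which is a useful sanity check before using field sensor data.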
Procedia PDF Downloads 258554 Instruction Program for Human Factors in Maintenance, Addressed to the People Working in Colombian Air Force Aeronautical Maintenance Area to Strengthen Operational Safety
Authors: Rafael Andres Rincon Barrera
Abstract:
Safety plays a preponderant role in global aviation: organizations seek to avoid accidents in order to preserve their most precious assets (the people and the machines). Human-factors-based programs have been shown to be effective in managing human-generated risks. The importance of training on human factors in maintenance has not been indifferent to the Colombian Air Force (COLAF). This research, which has a mixed quantitative, qualitative, and descriptive approach, addresses the absence of a structured instruction program on human factors in aeronautical maintenance that could serve as a tool to improve operational safety in the military air units of the COLAF. The research shows the trends and evolution of human factors programs in aeronautical maintenance through the analysis of a data matrix of 33 sources, taken from different databases, concerning the incorporation of these types of programs in the aeronautical industry over the last 20 years, as well as the improvements in operational safety observed after their implementation. Likewise, it compiles the normative guides in force from world aeronautical authorities for training in these programs, establishing a matrix of methodologies that may be applicable to develop a training program on human factors in maintenance. Subsequently, it illustrates the design, validation, and application of an instrument for measuring knowledge of human factors in maintenance at the COLAF, covering human factors (HF), the safety management system (SMS), and aeronautical maintenance regulations at the COLAF. With the information obtained, a statistical analysis was performed, showing the weak and strong aspects of staff knowledge to be addressed in the preparation of the instruction program. 
Triangulating the data from the applicable methods and the weakest aspects found among maintenance personnel yields a color-coded cross-matrix indicating the contents of a training program on human factors in aeronautical maintenance, adjusted to the competencies expected of the staff in a curricular format established by the COLAF. Among the most important findings are the following: the different authors dealing with human factors in maintenance agree that there is no standard model for its instruction and implementation, and that it must be adapted to the needs of the organization; the safety culture in companies that incorporated programs on human factors in maintenance increased; the data obtained with the knowledge-measurement instrument show that the level of knowledge of human factors in maintenance is MEDIUM-LOW, with a score of 61.79%; and, finally, there is an opportunity to improve operational safety for the COLAF through the implementation of the training program on human factors in maintenance for the technicians working in this area. Keywords: Colombian air force, human factors, safety culture, safety management system, triangulation
Procedia PDF Downloads 134553 A Framework for Incorporating Non-Linear Degradation of Conductive Adhesive in Environmental Testing
Authors: Kedar Hardikar, Joe Varghese
Abstract:
Conductive adhesives have found wide-ranging applications in the electronics industry, from fixing a defective conductor on a printed circuit board (PCB), to attaching an electronic component in an assembly, to protecting electronic components by forming a “Faraday cage.” The reliability requirements for a conductive adhesive vary widely depending on the application and expected product lifetime. While the conductive adhesive is required to maintain structural integrity, the electrical performance of the associated sub-assembly can be affected by degradation of the adhesive, and that degradation depends on the highly varied use case. The conventional approach to assessing the reliability of the sub-assembly involves subjecting it to standard environmental test conditions such as high temperature/high humidity, thermal cycling, and high-temperature exposure, to name a few. In order to project test data and observed failures to field performance, systematic development of an acceleration factor between the test conditions and field conditions is crucial. Common acceleration factor models, such as the Arrhenius model, are based on rate kinetics and typically rely on an assumption of linear degradation in time for a given condition and test duration. The application of interest in this work involves a conductive adhesive used in the electronic circuit of a capacitive sensor. The degradation of the conductive adhesive in a high-temperature, high-humidity environment is quantified by capacitance values. Under such conditions, the use of established models such as the Hallberg-Peck model or the Eyring model to predict time to failure in the field typically relies on a linear degradation rate. In this particular case, the degradation is seen to be nonlinear in time, exhibiting a square-root-of-time (√t) dependence. 
It is also shown that, for the mechanism of interest, the presence of moisture is essential, and the dominant mechanism driving the degradation is the diffusion of moisture. In this work, a framework is developed to incorporate nonlinear degradation of the conductive adhesive into the development of an acceleration factor. The method can be extended to applications where nonlinearity in the degradation rate can be adequately characterized in tests. It is shown that, depending on the expected product lifetime, the conventional linear degradation approach can overestimate or underestimate field performance. This work provides guidelines on the suitability of the linear degradation approximation for such varied applications. Keywords: conductive adhesives, nonlinear degradation, physics of failure, acceleration factor model
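The consequence of a √t dependence for acceleration factors can be illustrated with a short sketch. The Peck-form humidity model and the parameter values below are generic illustrations, not the fitted constants from this work; the point is only that when degradation grows as k·tᵖ, the time-to-failure acceleration factor is the rate ratio raised to the power 1/p, so a √t mechanism (p = 0.5) squares the factor that a linear model (p = 1) would give.

```python
import math

KB_EV = 8.617e-5  # Boltzmann constant in eV/K

def peck_rate(rh_pct: float, temp_c: float, ea_ev: float = 0.8,
              n: float = 2.7) -> float:
    """Relative moisture-driven degradation rate constant, Peck form:
    k proportional to RH**n * exp(-Ea / (kB*T)). Ea and n are illustrative
    assumptions, not values fitted to the capacitive-sensor adhesive."""
    return rh_pct ** n * math.exp(-ea_ev / (KB_EV * (temp_c + 273.15)))

def time_acceleration_factor(rh_test, tc_test, rh_field, tc_field,
                             p: float = 0.5) -> float:
    """Time-to-failure acceleration factor when degradation D(t) = k * t**p.
    Failure at a fixed threshold Dc gives t_f = (Dc / k)**(1/p), hence
    AF = t_field / t_test = (k_test / k_field)**(1/p).
    p = 1 recovers the conventional linear-degradation AF."""
    rate_ratio = peck_rate(rh_test, tc_test) / peck_rate(rh_field, tc_field)
    return rate_ratio ** (1.0 / p)
```

For an 85 °C / 85% RH test against a milder assumed field condition, the √t model yields the square of the linear-model factor, which is exactly why an unexamined linear assumption can badly over- or underestimate field life.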
Procedia PDF Downloads 135552 Effect of 8-OH-DPAT on the Behavioral Indicators of Stress and on the Number of Astrocytes after Exposure to Chronic Stress
Authors: Ivette Gonzalez-Rivera, Diana B. Paz-Trejo, Oscar Galicia-Castillo, David N. Velazquez-Martinez, Hugo Sanchez-Castillo
Abstract:
Prolonged exposure to stress can cause disorders related to dysfunction in the prefrontal cortex, such as generalized anxiety and depression. These disorders involve alterations in neurotransmitter systems; the serotonergic system, a target of the drugs commonly used to treat these disorders, is one of them. Recent studies suggest that 5-HT1A receptors play a pivotal role in the regulation of the serotonergic system and in stress responses. Likewise, there is increasing evidence that astrocytes are involved in the pathophysiology of stress. The aim of this study was to examine the effects of 8-OH-DPAT, a selective agonist of 5-HT1A receptors, on the behavioral signs of anxiety and anhedonia, as well as on the number of astrocytes in the medial prefrontal cortex (mPFC), after exposure to chronic stress. We used 50 male Wistar rats of 250-350 grams, housed in standard laboratory conditions and treated in accordance with the ethical standards of use and care of laboratory animals. A protocol of chronic unpredictable stress was applied for 10 consecutive days, during which stressors such as movement restriction, water deprivation, and wet bedding, among others, were presented. 40 rats were subjected to the stress protocol and then divided into 4 groups of 10 rats each, which were administered 8-OH-DPAT (Tocris, USA) intraperitoneally, with saline as vehicle, at doses of 0.0, 0.3, 1.0, and 2.0 mg/kg, respectively. Another 10 rats were subjected to neither the stress protocol nor the drug. Subsequently, all the rats were assessed in an open field test, a forced swimming test, a sucrose consumption test, and a zero maze test. At the end of this procedure, the animals were sacrificed, the brains were removed, and the tissue of the mPFC (Bregma: 4.20, 3.70, 2.70, 2.20) was processed with immunofluorescence staining for astrocytes (anti-GFAP antibody, an astrocyte marker; ABCAM). 
Statistically significant differences were found in the behavioral tests across all groups, showing that the stressed group given saline had more indicators of anxiety and anhedonia than the control group and the groups given 8-OH-DPAT. A dose-dependent effect of 8-OH-DPAT was also found on the number of astrocytes in the mPFC. The results show that 8-OH-DPAT can modulate the effects of stress at both the behavioral and anatomical levels. They also indicate that 5-HT1A receptors and astrocytes play an important role in the stress response and may modulate the therapeutic effect of serotonergic drugs, so they should be explored as a fundamental part of the treatment of symptoms of stress and of the understanding of the mechanisms of stress responses. Keywords: anxiety, prefrontal cortex, serotonergic system, stress
Procedia PDF Downloads 325551 Development Programmes Requirements for Managing and Supporting the Ever-Dynamic Job Roles of Middle Managers in Higher Education Institutions: The Espousal Demanded from Human Resources Department; Case Studies of a New University in United Kingdom
Authors: Mohamed Sameer Mughal, Andrew D. Ross, Damian J. Fearon
Abstract:
Background: The fast-paced, changing landscape of UK Higher Education Institutions (HEIs) is marked by changes and challenges affecting Middle Managers (MM) in their job roles. MM contribute to the success of HEIs by maintaining equilibrium, translating organizational strategies from senior staff into operational directives for junior staff. However, the data analyzed during the semi-structured interviews in this study show that the MM job role is becoming more complex due to changes and challenges, creating colossal pressures and workloads in day-to-day working. Current development programme provision by Human Resources (HR) departments in such HEIs is not feasible or applicable and does not match the true requirements of MM, who report that the programmes offered by HR are too generic to suit their precise needs and that tailor-made support is required for them to work effectively in their pertinent job roles. Methodologies: This study aims to capture the Development Needs (DN) of MM by means of a conceptual model, as the conclusive part of research that is divided into two phases. Phase 1 was initiated by carrying out two pilot interviews, with a retired Emeritus-status professor and an HR programme development coordinator. Key themes from the pilots and the literature review fed into the formulation of a set of 22 questions (Kvale and Brinkmann) forming the interview schedule used during qualitative data collection. The data collection strategy consisted of purposeful sampling of 12 semi-structured interviews (n=12), each lasting approximately an hour. The MM interviewed were at faculty and departmental levels and included deans (n=2), heads of departments (n=4), subject leaders (n=2), and programme leaders (n=4). Participant recruitment was carried out via emails and the snowballing technique. The interview data were transcribed (verbatim) and managed using Computer Assisted Qualitative Data Analysis with NVivo ver. 11 software. 
Data were meticulously analyzed using Miles and Huberman's inductive approach of positivistic-style grounded theory, whereby key themes and categories emerged from the rich data collected. The data were coded and classified into case studies (Robert Yin), with a main case study, sub-cases (the 4 classes of MM), and embedded cases (the 12 individual MMs). Major Findings: An interim conceptual model emerged from analyzing the data, with main concepts that included key performance indicators (KPIs), HEI effectiveness and outlook, practices, processes and procedures, support mechanisms, student events, rules, regulations and policies, career progression, reporting/accountability, changes and challenges, and lastly skills and attributes. Conclusion: The dynamic elements affecting MM, including increases in government pressures and student numbers, irrelevant development programmes, bureaucratic structures, transparency and accountability, organizational policies, and skill sets, can only be confronted by structured development programmes originated by HR rather than provided generically. Future Work: Stage 2 (quantitative method) of the study plans to validate the interim conceptual model externally through a fully completed online survey questionnaire (Bram Oppenheim) from external HEIs (n=150); the total sample targeted is 1,500 MM. The authors' contribution focuses on enhancing management theory and narrowing the gap between HR development programme provision and MM needs. Keywords: development needs (DN), higher education institutions (HEIs), human resources (HR), middle managers (MM)
Procedia PDF Downloads 232550 Development of Perovskite Quantum Dots Light Emitting Diode by Dual-Source Evaporation
Authors: Antoine Dumont, Weiji Hong, Zheng-Hong Lu
Abstract:
Light emitting diodes (LEDs) are steadily becoming the new standard for luminescent display devices because of their energy efficiency, relatively low cost, and the purity of the light they emit. Our research focuses on the optical properties of the lead halide perovskite CsPbBr₃ and its family, which are showing steadily improving performance in LEDs and solar cells. The objective of this work is to investigate CsPbBr₃ as an emitting layer made by physical vapor deposition instead of the usual solution-processed perovskites, for use in LEDs. Deposition in vacuum eliminates any risk of contaminants, as well as the need for chemical ligands in the synthesis of quantum dots. Initial results show the versatility of the dual-source evaporation method, which allowed us to create different phases in bulk form by altering the mole ratio or deposition rates of CsBr and PbBr₂. The distinct phases Cs₄PbBr₆, CsPbBr₃, and CsPb₂Br₅, confirmed through XPS (X-ray photoelectron spectroscopy) and X-ray diffraction analysis, have different optical properties and morphologies that can be used for specific applications in optoelectronics. We are particularly focused on the blue shift expected from quantum dots (QDs) and the stability of the perovskite in this form. We have already obtained proof of the formation of QDs through our dual-source evaporation method, with electron microscope imaging and photoluminescence testing, which we understand is a first in the community. We have also incorporated the QDs in an LED structure to test the electroluminescence and the effect on performance, and have already observed a significant wavelength shift; the goal is to reach 480 nm, shifting from the original 528 nm bulk emission. The hole transport layer (HTL) material onto which the CsPbBr₃ is evaporated is a critical part of this study, as the surface energy interaction dictates the behaviour of the QD growth. A thorough study to determine the optimal HTL is in progress. 
A strong blue shift for a typically green-emitting material like CsPbBr₃ would eliminate the need for blue-emitting Cl-based perovskite compounds and could prove more stable in a QD structure. The final aim is to make a perovskite QD LED with strong blue luminescence, fabricated through a dual-source evaporation technique that could be scaled to industry level, making this device a viable and cost-effective alternative to current commercial LEDs. Keywords: material physics, perovskite, light emitting diode, quantum dots, high vacuum deposition, thin film processing
Procedia PDF Downloads 161549 Medication Side Effects: Implications on the Mental Health and Adherence Behaviour of Patients with Hypertension
Authors: Irene Kretchy, Frances Owusu-Daaku, Samuel Danquah
Abstract:
Hypertension is the leading risk factor for cardiovascular diseases and a major cause of death and disability worldwide. This study examined whether psychosocial variables influenced patients’ perception and experience of side effects of their medicines, how they coped with these experiences, and the impact on mental health and adherence to conventional hypertension therapies. Methods: A hospital-based mixed-methods study, using quantitative and qualitative approaches, was conducted on hypertensive patients. Participants were asked about side effects, medication adherence, common psychological symptoms, and coping mechanisms with the aid of standard questionnaires. Information from the quantitative phase was analyzed with the Statistical Package for the Social Sciences (SPSS) version 20. The interviews from the qualitative study were recorded with a digital audio recorder, manually transcribed, and analyzed using thematic content analysis; the themes originated from the participant interviews a posteriori. Results: The experiences of side effects, such as palpitations, frequent urination, recurrent bouts of hunger, erectile dysfunction, dizziness, cough, and physical exhaustion, were categorized as no/low (39.75%), moderate (53.0%), and high (7.25%). Significant relationships between depression (χ² = 24.21, p < 0.0001), anxiety (χ² = 42.33, p < 0.0001), stress (χ² = 39.73, p < 0.0001) and side effects were observed. Adjusted results from a logistic regression model for these associations are reported: depression [OR = 1.9 (1.03 – 3.57), p = 0.04], anxiety [OR = 1.5 (1.22 – 1.77), p < 0.001], and stress [OR = 1.3 (1.02 – 1.71), p = 0.04]. Side effects significantly increased the probability of individuals being non-adherent [OR = 4.84 (95% CI 1.07 – 1.85), p = 0.04], with social factors, media influences, and the attitudes of primary caregivers further explaining this relationship. 
The personal adoption of medication-modifying strategies, the use of complementary and alternative treatments, and interventions made by clinicians were the main forms of coping with side effects. Conclusions: Results from this study show that, contrary to a purely biomedical approach, the experience of side effects has biological, social, and psychological interrelations. The results offer further support for a multi-disciplinary approach to healthcare, in which all forms of expertise are incorporated into health provision and patient care. Additionally, medication side effects should be considered a possible cause of non-adherence among hypertensive patients; addressing this problem from a biopsychosocial perspective in any intervention may improve adherence and ultimately control blood pressure. Keywords: biopsychosocial, hypertension, medication adherence, psychological disorders
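The odds ratios with confidence intervals reported above come from adjusted logistic regression; the unadjusted analogue for a single binary exposure can be computed directly from a 2×2 table, which is a useful check on such results. This is a generic sketch with made-up counts, not the study's data or its adjusted model.

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and 95% Wald confidence interval from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

For example, hypothetical counts of 50 non-adherent and 30 adherent patients with side effects versus 20 non-adherent and 40 adherent without them give OR = (50·40)/(30·20) ≈ 3.33, with the Wald interval straddling that point estimate.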
Procedia PDF Downloads 371548 Transgenerational Impact of Intrauterine Hyperglycaemia to F2 Offspring without Pre-Diabetic Exposure on F1 Male Offspring
Authors: Jun Ren, Zhen-Hua Ming, He-Feng Huang, Jian-Zhong Sheng
Abstract:
An adverse intrauterine stimulus during critical or sensitive periods in early life may lead to health risks not only in the later life span but also in further generations. Intrauterine hyperglycaemia, a major feature of gestational diabetes mellitus (GDM), is a typical adverse environment for the development of both the F1 fetus and F1 gamete cells. However, there is scarce information on the phenotypic differences of metabolic memory between somatic cells and germ cells exposed to intrauterine hyperglycaemia, and the direct transmission effect of intrauterine hyperglycaemia per se has not been assessed either. In this study, we built a GDM mouse model and selected male GDM offspring without a pre-diabetic phenotype as our founders, to exclude postnatal diabetic influence on gametes and thereby investigate the direct transmission effect of intrauterine hyperglycaemia exposure on F2 offspring; we further compared the metabolic differences between affected F1-GDM male offspring and F2 offspring. A GDM mouse model of intrauterine hyperglycaemia was established by intraperitoneal injection of streptozotocin after pregnancy. Pups of GDM mothers were fostered by normal control mothers, and all mice were fed standard food. Male GDM offspring without a metabolic dysfunction phenotype were crossed with normal female mice to obtain F2 offspring. Body weight, glucose tolerance test, insulin tolerance test, and the homeostasis model assessment of insulin resistance (HOMA-IR) index were measured in both generations at 8 weeks of age. Some of the F1-GDM male mice showed impaired glucose tolerance (p < 0.001), but none showed impaired insulin sensitivity, and their body weight did not differ significantly from that of control mice. Some of the F2-GDM offspring exhibited impaired glucose tolerance (p < 0.001), and all the F2-GDM offspring exhibited a higher HOMA-IR index (p < 0.01 for normal-glucose-tolerance individuals vs. control, p < 0.05 for glucose-intolerant individuals vs. control). 
All the F2-GDM offspring exhibited a higher ITT curve than controls (p < 0.001 for normal-glucose-tolerance individuals, p < 0.05 for glucose-intolerant individuals, vs. control), and they had higher body weight than control mice (p < 0.001 for both normal-glucose-tolerance and glucose-intolerant individuals, vs. control). While glucose intolerance is the only phenotype that F1-GDM male mice may exhibit, the F2 male generation of healthy F1-GDM fathers showed insulin resistance, increased body weight, and/or impaired glucose tolerance. These findings imply that intrauterine hyperglycaemia exposure affects germ cells and somatic cells differently, so that F1 and F2 offspring demonstrate distinct metabolic dysfunction phenotypes, and that intrauterine hyperglycaemia exposure per se has a strong influence on the F2 generation, independent of postnatal metabolic dysfunction exposure. Keywords: inheritance, insulin resistance, intrauterine hyperglycaemia, offspring
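The HOMA-IR index used above is computed from fasting glucose and insulin by the standard homeostasis-model formula; a minimal sketch follows. The mg/dL variant is our illustrative addition (the constant 405 simply absorbs the mmol/L-to-mg/dL conversion factor of 18).

```python
def homa_ir(glucose_mmol_per_l: float, insulin_uu_per_ml: float) -> float:
    """HOMA-IR = fasting glucose (mmol/L) * fasting insulin (uU/mL) / 22.5."""
    return glucose_mmol_per_l * insulin_uu_per_ml / 22.5

def homa_ir_from_mg_dl(glucose_mg_per_dl: float,
                       insulin_uu_per_ml: float) -> float:
    """Same index with glucose in mg/dL; 22.5 * 18 = 405 absorbs the units."""
    return glucose_mg_per_dl * insulin_uu_per_ml / 405.0
```

Both forms agree for the same sample: 5.0 mmol/L (= 90 mg/dL) glucose with 10 µU/mL insulin gives an index of about 2.22 either way.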
Procedia PDF Downloads 238547 Sustainable Development and Modern Challenges of Higher Educational Institutions in the Regions of Georgia
Authors: Natia Tsiklashvili, Tamari Poladashvili
Abstract:
Education is one of the fundamental factors of economic prosperity in all respects. It is impossible to talk about the sustainable economic development of a country without substantial investment in human capital and in higher educational institutions. Education improves the standard of living of the population and expands the opportunities to receive more benefits, which are equally important for the individual and for society as a whole. Initiatives such as entrepreneurship and technological development grow among educated people, and, at the same time, the distribution of income between population groups improves. The present paper discusses the scientific literature in the field of sustainable development through higher educational institutions. Scholars of economic theory emphasize a few major aspects that show the role of higher education in economic growth: a) alongside education, human capital gradually increases, which leads to increased competitiveness of the labor force, not only in the national but also in the international labor market (neoclassical growth theory); b) a high level of education can increase the efficiency of the economy, and investment in human capital, innovation, and knowledge are significant contributors to economic growth; hence, endogenous growth theory focuses on the positive externalities and spillover effects of a knowledge-based economy, which lead to economic development; c) education can facilitate the diffusion and transfer of knowledge, supporting macroeconomic sustainability and the microeconomic conditions of individuals. While discussing the economic importance of education, we consider education as the intellectual development of the person, which advances general skills, builds a profession, and improves living conditions. Scholars agree that human capital is not only money, liquid assets, and stocks, but also competitive knowledge. 
The latter is the main lever for increasing human competitiveness and productivity. To address local issues, the present article researched ten educational institutions across Georgia, including state and private HEIs. Qualitative research was done by analyzing in-depth interviews with representatives from each institution; the respondents were rectors, vice-rectors, or heads of the quality assurance service at the institution. The results show a number of challenges that institutions face in maintaining sustainable development and acting as a strong link between education and the labor market, mostly connected with bureaucracy, the insufficient finances they receive, and local challenges that differ across the regions. Keywords: higher education, higher educational institutions, sustainable development, regions, Georgia
Procedia PDF Downloads 85546 Performance Estimation of Small Scale Wind Turbine Rotor for Very Low Wind Regime Condition
Authors: Vilas Warudkar, Dinkar Janghel, Siraj Ahmed
Abstract:
The rapid development experienced by India requires a huge amount of energy, and actual supply capacity additions have been consistently lower than the targets set by the government; according to the World Bank, 40% of residences are without electricity. In the 12th Five-Year Plan, 30 GW of grid-interactive renewable capacity is planned, of which 17 GW is wind, 10 GW is solar, and 2.1 GW is from small hydro projects, with the rest made up by biogas. Renewable energy (RE) and energy efficiency (EE) not only meet environmental and energy security objectives but can also play a crucial role in reducing chronic power shortages. In remote areas or areas with a weak grid, wind energy can be used for charging batteries or can be combined with a diesel engine to save fuel whenever wind is available. India, according to IEC 61400-1, belongs to class IV wind conditions, so it is not feasible to set up large-scale wind turbines everywhere; the best choice is a small-scale wind turbine at lower height that still achieves good annual energy production (AEP). Based on the wind characteristics available at MANIT Bhopal, a rotor for a small-scale wind turbine is designed. Various airfoil data are reviewed for the selection of the airfoil in the blade profile; an airfoil suited to low wind conditions, i.e., at low Reynolds number, is selected based on the coefficients of lift and drag and the angle of attack. For the design of the rotor blade, standard Blade Element Momentum (BEM) theory is implemented. The performance of the blade is estimated using BEM theory, in which the axial and angular induction factors are optimized using an iterative technique. Rotor performance is estimated for the designed blade specifically for low wind conditions, and the power production of the rotor is determined at different wind speeds for a particular pitch angle of the blade. At a pitch of 15° and a velocity of 5 m/s, the design gives a good cut-in speed of 2 m/s and produces around 350 W. 
The tip speed ratio of the blade is taken as 6.5, for which the coefficient of performance of the rotor is calculated as 0.35, an acceptable value for a small-scale wind turbine. The Simple Load Model (SLM, IEC 61400-2) is also discussed to improve the structural strength of the rotor. In the SLM, the edgewise and flapwise moments, which cause bending stress at the root of the blade, are considered. The various load cases specified in IEC 61400-2 are calculated and checked against the partial safety factors for the wind turbine blade.
Keywords: annual energy production, Blade Element Momentum Theory, low wind conditions, selection of airfoil
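The power figures above follow from the standard actuator-disc relation P = ½ρAv³Cp. As a rough cross-check, a minimal sketch in Python; note that the rotor radius is an assumed value for illustration, not a parameter reported in the abstract, while Cp = 0.35 is the reported coefficient of performance:

```python
import math

def rotor_power_w(v_ms, radius_m=2.0, cp=0.35, rho=1.225):
    """Rotor power P = 0.5 * rho * A * v^3 * Cp, in watts.

    radius_m is an assumption for illustration; cp = 0.35 is the
    coefficient of performance reported in the abstract.
    """
    area = math.pi * radius_m ** 2          # swept area (m^2)
    return 0.5 * rho * area * v_ms ** 3 * cp

for v in (2, 3, 4, 5, 6):
    print(f"{v} m/s -> {rotor_power_w(v):.0f} W")
```

With the assumed 2 m radius, the relation gives roughly 340 W at 5 m/s, in line with the ~350 W the abstract reports.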
Procedia PDF Downloads 337
545 Development and Psychometric Validation of the Hospitalised Older Adults Dignity Scale for Measuring Dignity during Acute Hospital Admissions
Authors: Abdul-Ganiyu Fuseini, Bernice Redley, Helen Rawson, Lenore Lay, Debra Kerr
Abstract:
Aim: The study aimed to develop and validate a culturally appropriate patient-reported outcome measure for measuring dignity for older adults during acute hospital admissions. Design: A three-phased mixed-method sequential exploratory design was used. Methods: Concept elicitation and generation of items for the scale was informed by older adults’ perspectives about dignity during acute hospitalization and a literature review. Content validity evaluation and pre-testing were undertaken using standard instrument development techniques. A cross-sectional survey design was conducted involving 270 hospitalized older adults for evaluation of construct and convergent validity, internal consistency reliability, and test–retest reliability of the scale. Analysis was performed using Statistical Package for the Social Sciences, version 25. Reporting of the study was guided by the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist. Results: We established the 15-item Hospitalized Older Adults’ Dignity Scale that has a 5-factor structure: Shared Decision-Making (3 items); Healthcare Professional-Patient Communication (3 items); Patient Autonomy (4 items); Patient Privacy (2 items); and Respectful Care (3 items). Excellent content validity, adequate construct and convergent validity, acceptable internal consistency reliability, and good test-retest reliability were demonstrated. Conclusion: We established the Hospitalized Older Adults Dignity Scale as a valid and reliable scale to measure dignity for older adults during acute hospital admissions. Future studies using confirmatory factor analysis are needed to corroborate the dimensionality of the factor structure and external validity of the scale. Routine use of the scale may provide information that informs the development of strategies to improve dignity-related care in the future. 
Impact: The development and validation of the Hospitalized Older Adults Dignity Scale will provide healthcare professionals with a feasible and reliable scale for measuring older adults' dignity during acute hospitalization. Routine use of the scale may enable the capture and incorporation of older patients' perspectives about their healthcare experience and provide information that informs the development of strategies to improve dignity-related care in the future.
Keywords: dignity, older adults, hospitalisation, scale, patients, dignified care, acute care
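The internal consistency reliability reported for multi-item scales like this one is commonly summarised with Cronbach's alpha. A minimal sketch of the computation; the respondent data below are made up for illustration, since the abstract does not publish item-level scores:

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha: rows are respondents, columns are item scores."""
    k = len(scores[0])                                 # number of items
    item_vars = sum(pvariance(col) for col in zip(*scores))
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-point responses from four respondents to three items.
responses = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2]]
print(round(cronbach_alpha(responses), 3))  # high consistency for these made-up scores
```

An alpha of 0.7 or above is conventionally read as acceptable internal consistency for a new scale.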
Procedia PDF Downloads 90
544 Retrospective Demographic Analysis of Patients Lost to Follow-Up from Antiretroviral Therapy in Mulanje Mission Hospital, Malawi
Authors: Silas Webb, Joseph Hartland
Abstract:
Background: Long-term retention of patients on ART has become a major health challenge in Sub-Saharan Africa (SSA). In 2010, a systematic review of 39 papers found that 30% of patients were no longer taking their ARTs two years after starting treatment. In the same review, it was noted that there was a paucity of data as to why patients become lost to follow-up (LTFU) in SSA. This project was performed in Mulanje Mission Hospital (MMH) in Malawi as part of Swindon Academy's Global Health eSSC. The HIV prevalence for Malawi is 10.3%, one of the highest rates in the world; prevalence soars to 18% in Mulanje. It is therefore essential that patients at risk of being LTFU are identified early and managed appropriately to help them continue to participate in the service. Methodology: All patients on adult antiretroviral formulations at MMH who were classified as 'defaulters' (patients missing a scheduled follow-up visit by more than two months) over the last 12 months were included in the study. Demographic variables were collected from Mastercards for data analysis. A comparison group of patients not lost to follow-up was created using all patients who attended the HIV clinic between 18th-22nd July 2016 and had never defaulted from ART. Data were analysed using the chi-squared (χ²) test, as the data collected were categorical, with the alpha level set at 0.05. Results: Overall, 136 patients had defaulted from ART over the past 12 months at MMH. Of these, 43 patients had missing Mastercards, so 93 defaulter datasets were analysed. In the comparison group, 93 datasets were also analysed using chi-squared testing. A higher proportion of men was noted in the defaulting group (χ² test, p = 0.034), and defaulters tended to be younger (χ² test, p = 0.052). 94.6% of patients who defaulted were taking Tenofovir, Lamivudine and Efavirenz, the standard first-line ART therapy in Malawi. 
The mean length of time on ART was 39.0 months (RR: -22.4-100.4) in the defaulters group and 47.3 months (RR: -19.71-114.23) in the control group, a mean difference of 8.3 fewer months in the defaulters group (χ² test, p = 0.056). Discussion: The findings in this study echo the literature, but this review expands on that and shows that the demographic at most risk of defaulting and being LTFU is a young male who has missed more than 4 doses of ART and is within his first year of treatment. For the hospital, this data is important as it identifies significant areas for public health focus. For instance, fear of disclosure and stigma may be disproportionately affecting younger men, so interventions can be aimed specifically at them to improve their health outcomes. The mean length of time on medication was 8.3 months less in the defaulters group, with a p-value of 0.056, emphasising the need for more intensive follow-up in the early stages of treatment, when patients are at the highest risk of defaulting.
Keywords: anti-retroviral therapy, ART, HIV, lost to follow up, Malawi
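For 2x2 comparisons such as sex by defaulter status, the Pearson chi-squared statistic has a simple closed form. A sketch with hypothetical counts (the abstract reports the resulting p-values, not the underlying contingency table, so the numbers below are illustrative only):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-squared statistic for the 2x2 table [[a, b], [c, d]]:
    chi2 = n * (a*d - b*c)^2 / ((a+b)(c+d)(a+c)(b+d)).
    """
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: men vs. women among 93 defaulters and 93 controls.
stat = chi_square_2x2(55, 38, 40, 53)
print(f"chi2 = {stat:.2f}, significant at alpha = 0.05: {stat > 3.841}")
```

3.841 is the critical chi-squared value at one degree of freedom and alpha = 0.05, the threshold the study used.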
Procedia PDF Downloads 186
543 Non-Melanoma Skin Cancer of Cephalic Extremity – Clinical and Histological Aspects
Authors: Razvan Mercut, Mihaela Ionescu, Vlad Parvanescu, Razvan Ghita, Tudor-Gabriel Caragea, Cristina Simionescu, Marius-Eugen Ciurea
Abstract:
Introduction: Over the past years, the incidence of non-melanoma skin cancer (NMSC) has continuously increased, making it one of the most commonly diagnosed carcinomas of the cephalic extremity. NMSC regroups basal cell carcinoma (BCC), squamous cell carcinoma (SCC), Merkel cell carcinoma, cutaneous lymphoma, and sarcoma. The most common forms are BCC and SCC, both still implying a significant level of morbidity due to local invasion (especially BCC), even if the overall death rates are declining. The objective of our study was the evaluation of clinical and histological aspects of NMSC for a group of patients with BCC and SCC from Craiova, a major city in south-western Romania. Material and method: Our study lot comprised 65 patients, with an almost equal distribution of sexes, and ages between 23-91 years old (mean value ± standard deviation: 62.61 ± 16.67), all treated within the Clinic of Plastic Surgery and Reconstructive Microsurgery, Clinical Emergency County Hospital Craiova, Romania, between 2019-2020. In order to determine the main morphological characteristics of both studied cancers, we used paraffin embedding techniques with various staining methods: hematoxylin-eosin, Masson's trichrome stain with aniline blue, and periodic acid-Schiff/Alcian blue. The statistical study was completed using Microsoft Excel (Microsoft Corp., Redmond, WA, USA) with XLSTAT (Addinsoft SARL, Paris, France). Results: The overall results of our study indicate that BCC accounts for 67.69% of all NMSC forms; SCC covers 27.69%, while 4.62% are represented by other forms. The most frequent site for BCC is the nose (27.69%, 18 patients), followed by the preauricular regions, forehead, and periorbital areas. For patients with SCC, tumors were mainly located at the level of the lips (66.67%, 12 patients). The analysis of NMSC histological forms indicated that nodular BCC is predominant (45.45%, 20 patients), as well as ulcero-vegetant SCC (38.89%, 7 patients). 
We have not identified any topographic characteristics or NMSC forms significantly related to age or sex. Conclusions: The most frequent NMSC form identified in our study lot was BCC, with the nose as its preferred location. For SCC, the oral cavity was the most frequent anatomical site, especially the lips. Nodular BCC and ulcero-vegetant SCC were the most commonly identified histological types. Our findings emphasize the need for periodic screening in order to improve prevention and early treatment of these malignancies.
Keywords: non-melanoma skin cancer, basal cell carcinoma, squamous cell carcinoma, histological
Procedia PDF Downloads 189
542 Prenatal Genetic Screening and Counselling Competency Challenges of Nurse-Midwife
Authors: Girija Madhavanprabhakaran, Frincy Franacis, Sheeba Elizabeth John
Abstract:
Introduction: A wide range of prenatal genetic screening has been introduced, with increasing incidences of congenital anomalies even in low-risk pregnancies, and is an emerging standard of care. As frontline caretakers, the role and responsibilities of nurses and midwives are critical, as they work alongside couples to provide evidence-based, supportive, educative care. The increase in genetic disorders and the advances in prenatal genetic screening, combined with limited genetic counselling facilities, urge nurses and midwives to acquire the essential competencies to help couples make informed decisions. Objective: This integrative literature review aimed to explore nurse-midwives' knowledge and role in prenatal screening and genetic counselling competency, and the challenges they face in catering to all pregnant women, empowering their autonomy in decision making and ensuring psychological comfort. Method: An electronic search using the keywords prenatal screening, genetic counselling, prenatal counselling, nurse midwife, nursing education, genetics, and genomics was performed in PubMed, Scopus, Medline, and Google Scholar. Finally, based on the inclusion criteria, 8 relevant articles were included. Results: The main review results suggest that nurses and midwives lack the essential support, knowledge, or confidence to provide genetic counselling and to help couples ethically in a way that ensures client autonomy and decision making. The majority of nurses and midwives reported inadequate levels of knowledge of genetic screening and of their roles in obtaining family history, constructing pedigrees, and providing genetic information to an affected client or high-risk families. A deficiency of well-recognized and influential clinical academic midwives in midwifery practice is also reported. The evidence recommends updating and providing sound educational training to improve nurse-midwife competence and confidence. 
Conclusion: Overcoming the challenges to achieving informed choices about fetal anomaly screening globally is a major concern. Lack of adequate knowledge and counselling competency, insufficient communication, and the need for education and policy are the major areas to address. Prenatal nurses' and midwives' knowledge of prenatal genetic screening and essential counselling competencies can ensure services that help the majority of pregnant women around the globe become better-informed decision-makers, enhance their autonomy, and reduce ethical dilemmas.
Keywords: challenges, genetic counselling, prenatal screening, prenatal counselling
Procedia PDF Downloads 199
541 The Diagnostic Utility and Sensitivity of the Xpert® MTB/RIF Assay in Diagnosing Mycobacterium tuberculosis in Bone Marrow Aspirate Specimens
Authors: Nadhiya N. Subramony, Jenifer Vaughan, Lesley E. Scott
Abstract:
In South Africa, the World Health Organisation estimated 454,000 new cases of Mycobacterium tuberculosis (M.tb) infection (MTB) in 2015. Disseminated tuberculosis arises from the haematogenous spread and seeding of the bacilli in extrapulmonary sites. The gold standard for the detection of MTB in bone marrow is TB culture, which has an average turnaround time of 6 weeks. Histological examination of trephine biopsies to diagnose MTB also involves a time delay, owing mainly to the 5-7 day processing period prior to microscopic examination. Adding to the diagnostic delay is the non-specific nature of granulomatous inflammation, which is the hallmark of MTB involvement of the bone marrow. A Ziehl-Neelsen stain (which highlights acid-fast bacilli) is therefore mandatory to confirm the diagnosis but can take up to 3 days for processing and evaluation. Owing to this delay in diagnosis, many patients are lost to follow-up or remain untreated whilst results are awaited, thus encouraging the spread of undiagnosed TB. The Xpert® MTB/RIF (Cepheid, Sunnyvale, CA) is the molecular test used in the South African national TB programme as the initial diagnostic test for pulmonary TB. This study investigates the optimisation and performance of the Xpert® MTB/RIF on bone marrow aspirate (BMA) specimens, a first for the assay in the diagnosis of extrapulmonary TB. BMA received for immunophenotypic analysis, as part of the investigation into disseminated MTB or in the evaluation of cytopenias in immunocompromised patients, were used. Processing of BMA on the Xpert® MTB/RIF was optimised to ensure that bone marrow in EDTA and heparin did not inhibit the PCR reaction. Inactivated M.tb was spiked into the clinical bone marrow specimen and into distilled water (as a control). A volume of 500 µl and an incubation time of 15 minutes with sample reagent were investigated as the processing protocol. 
A total of 135 BMA specimens had sufficient residual volume for Xpert® MTB/RIF testing; however, 22 specimens (16.3%) were not included in the final statistical analysis, as an adequate trephine biopsy and/or TB culture was not available. Xpert® MTB/RIF testing was not affected by BMA material in the presence of heparin or EDTA, but the overall detection of MTB in BMA was low compared to histology and culture. Sensitivity of the Xpert® MTB/RIF compared to both histology and culture was 8.7% (95% confidence interval (CI): 1.07-28.04%), and sensitivity compared to histology only was 11.1% (95% CI: 1.38-34.7%). Specificity of the Xpert® MTB/RIF was 98.9% (95% CI: 93.9-99.7%). Although the Xpert® MTB/RIF generates a faster result than histology and TB culture and is less expensive than culture and drug susceptibility testing, the low sensitivity of the Xpert® MTB/RIF precludes its use for the diagnosis of MTB in bone marrow aspirate specimens and warrants alternative/additional testing to optimise the assay.
Keywords: bone marrow aspirate, extrapulmonary TB, low sensitivity, Xpert® MTB/RIF
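Sensitivity and specificity here are simple proportions with binomial confidence intervals. The abstract's intervals appear to be exact binomial limits; the sketch below uses the Wilson score interval instead, a common stdlib-friendly approximation, and the 2-of-23 count is inferred from the reported 8.7%, not stated in the abstract:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# A sensitivity of 8.7% corresponds to roughly 2 true positives in 23 cases.
sens = 2 / 23
lo, hi = wilson_ci(2, 23)
print(f"sensitivity {sens:.1%}, Wilson 95% CI {lo:.1%}-{hi:.1%}")
```

For small counts like these, the Wilson interval is wider than the point estimate suggests, which is exactly why the abstract's broad CIs matter for interpreting the 8.7% figure.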
Procedia PDF Downloads 170
540 Biosorption of Nickel by Penicillium simplicissimum SAU203 Isolated from Indian Metalliferous Mining Overburden
Authors: Suchhanda Ghosh, A. K. Paul
Abstract:
Nickel, an industrially important metal, is not mined in India due to the lack of primary mining resources. However, the chromite deposits occurring in the Sukinda and Baula-Nuasahi regions of Odisha, India, are reported to contain around 0.99% nickel entrapped in the goethite matrix of the lateritic, iron-rich ore. Weathering of the dumped chromite mining overburden often leads to contamination of the ground as well as the surface water with toxic nickel. Microbes inherent to this metal-contaminated environment are reported to be capable of the removal as well as detoxification of various metals, including nickel. Nickel-resistant fungal isolates obtained in pure form from the metal-rich overburden were evaluated for their potential to biosorb nickel using their dried biomass. Penicillium simplicissimum SAU203 was the best nickel biosorbent among the 20 fungi tested and was capable of sorbing 16.85 mg Ni/g biomass from a solution containing 50 mg/l of Ni. The identity of the isolate was confirmed using 18S rRNA gene analysis. The sorption capacity of the isolate was further characterised using the Langmuir and Freundlich adsorption isotherm models, and the results reflected energy-efficient sorption. Fourier-transform infrared spectroscopy studies comparing the nickel-loaded and control biomass revealed the involvement of hydroxyl, amine and carboxylic groups in Ni binding. The sorption process was also optimised for several standard parameters, such as initial metal ion concentration, initial sorbent concentration, incubation temperature and pH, presence of additional cations, and pre-treatment of the biomass with different chemicals. Optimisation led to significant improvements in nickel biosorption onto the fungal biomass. P. simplicissimum SAU203 could sorb 54.73 mg Ni/g biomass with an initial Ni concentration of 200 mg/l in solution, and 21.8 mg Ni/g biomass with an initial biomass concentration of 1 g/l of solution. 
The optimum temperature and pH for biosorption were recorded as 30°C and pH 6.5, respectively. The presence of Zn and Fe ions improved the sorption of Ni(II), whereas cobalt had a negative impact. Pre-treatment of the biomass with various chemical and physical agents affected the proficiency of Ni sorption by P. simplicissimum SAU203 biomass: autoclaving as well as treatment of the biomass with 0.5 M sulfuric acid or acetic acid reduced sorption compared to the untreated biomass, whereas biomass treated with NaOH, Na₂CO₃ or Tween 80 (0.5 M) showed augmented metal sorption. Hence, on the basis of the present study, it can be concluded that P. simplicissimum SAU203 has the potential for the removal as well as detoxification of nickel from contaminated environments in general, and particularly from the chromite mining areas of Odisha, India.
Keywords: nickel, fungal biosorption, Penicillium simplicissimum SAU203, Indian chromite mines, mining overburden
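Fitting the Langmuir isotherm mentioned above is often done via its linearised form Ce/qe = Ce/qmax + 1/(KL·qmax), followed by ordinary least squares. A sketch using synthetic equilibrium data; the qmax and KL constants below are illustrative, not the study's fitted values:

```python
def fit_langmuir(ce, qe):
    """Least-squares fit of the linearised Langmuir isotherm.

    Ce/qe = Ce/qmax + 1/(KL*qmax); returns (qmax, KL).
    """
    y = [c / q for c, q in zip(ce, qe)]
    n = len(ce)
    mx, my = sum(ce) / n, sum(y) / n
    slope = (sum((x - mx) * (yi - my) for x, yi in zip(ce, y))
             / sum((x - mx) ** 2 for x in ce))
    intercept = my - slope * mx
    return 1 / slope, slope / intercept      # (qmax, KL)

# Synthetic data generated from assumed qmax = 60 mg/g, KL = 0.05 l/mg.
ce = [10.0, 25.0, 50.0, 100.0, 200.0]
qe = [60 * 0.05 * c / (1 + 0.05 * c) for c in ce]
qmax, kl = fit_langmuir(ce, qe)
print(f"qmax = {qmax:.1f} mg/g, KL = {kl:.3f} l/mg")
```

Because the synthetic points lie exactly on a Langmuir curve, the fit recovers the generating constants; real sorption data would scatter around the line, and the regression R² then indicates how well the Langmuir model describes the system.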
Procedia PDF Downloads 191
539 The Development of Traffic Devices Using Natural Rubber in Thailand
Authors: Weeradej Cheewapattananuwong, Keeree Srivichian, Godchamon Somchai, Wasin Phusanong, Nontawat Yoddamnern
Abstract:
Natural rubber used for traffic devices in Thailand has been developed and researched for several years. Compared with Dry Rubber Content (DRC) rubber, the quality of Ribbed Smoked Sheet (RSS) is better; however, the cost of admixtures, especially CaCO₃ and sulphur, is higher than the cost of the RSS itself. In this research, flexible guideposts and Rubber Fender Barriers (RFB) are taken into consideration. For flexible guideposts, both RSS and DRC60% are used, but for RFB only RSS is used, owing to the controlled performance tests. The objective of the flexible guideposts and RFB is to decrease the number of accidents, fatalities, and serious injuries. The function of both devices is to protect road users and vehicles by absorbing impact forces, so as to decrease the severity of road accidents; this provides mitigation that reduces motorists' injuries from severe to moderate. The aim is to find the best practice for traffic devices using natural rubber under engineering concepts. In addition, material properties such as tensile strength and durability are measured, and the modulus of elasticity is calculated. In the laboratory, crash simulation, finite element analysis of the materials, LRFD, and concrete technology methods are taken into account. After calculation, trial compositions of the materials are mixed and tested in the laboratory. Tensile, compressive, and weathering (durability) tests follow, based on ASTM standards. Furthermore, a cycle-repetition test of the flexible guideposts is taken into consideration. The final step is to fabricate all materials and build a real test section in the field. In the RFB test, there will be 13 crash tests: 7 pickup truck tests and 6 motorcycle tests. 
Vehicular crash testing of this kind is taking place for the first time in Thailand, applying trial and error methods; for example, the road crash test originally planned under the NCHRP TL-3 standard (100 kph) has been changed to MASH 2016, owing to the fact that MASH 2016 is more demanding than NCHRP in terms of the speed, types, and weight of vehicles and the angle of crash. In the MASH procedure, Test Level 6 (TL-6), which comprises a 2,270 kg pickup truck at 100 kph and a 25° crash angle, is selected. The final real crash test will be done, and the whole system will be evaluated again in Korea. The researchers hope that the number of road accidents will decrease and that Thailand will no longer be in the top ten worldwide for road accidents.
Keywords: LRFD, load and resistance factor design; ASTM, American Society for Testing and Materials; NCHRP, National Cooperative Highway Research Program; MASH, Manual for Assessing Safety Hardware
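The lateral kinetic energy a barrier must absorb in such a test is often summarised as the impact severity, IS = ½m(v·sinθ)². A sketch for the test parameters quoted above (2,270 kg pickup, 100 kph, 25° angle); the formula is a standard roadside-safety quantity, not one stated in the abstract:

```python
import math

def impact_severity_kj(mass_kg, speed_kph, angle_deg):
    """Impact severity IS = 0.5 * m * (v * sin(theta))^2, in kJ."""
    v = speed_kph / 3.6                                  # km/h -> m/s
    v_lateral = v * math.sin(math.radians(angle_deg))    # velocity into barrier
    return 0.5 * mass_kg * v_lateral ** 2 / 1000.0

print(f"{impact_severity_kj(2270, 100, 25):.0f} kJ")
```

The result, roughly 156 kJ of lateral energy, is the order of magnitude the rubber fender barrier would have to absorb and redirect in the pickup truck tests.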
Procedia PDF Downloads 128
538 Interrelationship between Quadriceps' Activation and Inhibition as a Function of Knee-Joint Angle and Muscle Length: A Torque and Electro and Mechanomyographic Investigation
Authors: Ronald Croce, Timothy Quinn, John Miller
Abstract:
Incomplete activation, or activation failure, of motor units during maximal voluntary contractions is often referred to as muscle inhibition (MI) and is defined as the inability of the central nervous system to maximally drive a muscle during a voluntary contraction. The purpose of the present study was to assess the interrelationship amongst peak torque (PT), muscle inhibition (MI; incomplete activation of motor units), and voluntary muscle activation (VMA) of the quadriceps muscle group as a function of knee angle and muscle length during maximal voluntary isometric contractions (MVICs). Nine young adult males (mean ± standard deviation: age: 21.58 ± 1.30 years; height: 180.07 ± 4.99 cm; weight: 89.07 ± 7.55 kg) performed MVICs in random order with the knee at 15, 55, and 95° of flexion. MI was assessed using the interpolated twitch technique and was estimated by the amount of additional knee extensor PT evoked by the superimposed twitch during MVICs. VMA was estimated by root mean square amplitude electromyography (EMGrms) and mechanomyography (MMGrms) of the agonist (vastus medialis [VM], vastus lateralis [VL], and rectus femoris [RF]) and antagonist (biceps femoris [BF]) muscles during MVICs. Data were analyzed using separate repeated measures analyses of variance. Results revealed a strong dependency of quadriceps PT (p < 0.001), MI (p < 0.001) and VMA (p < 0.01) on knee joint position: PT was smallest at the most shortened muscle position (15°) and greatest at mid-position (55°); MI and VMA were smallest at the most shortened muscle position (15°) and greatest at the most lengthened position (95°), with the RF showing the greatest change in VMA. 
It is hypothesized that the ability to more fully activate the quadriceps at short compared to longer muscle lengths (96% activated at 15°; 91% at 55°; 90% at 95°) might partly compensate for the unfavorable force-length mechanics at the more extended position and the consequent declines in VMA (decreases in EMGrms and MMGrms muscle amplitude during MVICs) and force production (PT = 111 N·m at 15°, 217 N·m at 55°, 199 N·m at 95°). Biceps femoris EMG and MMG data showed no statistical differences (p = 0.11 and 0.12, respectively) at the joint angles tested, although values were greater at the extended position. Increased BF muscle amplitude at this position could be a mechanism by which the anterior shear and tibial rotation induced by high quadriceps activity are countered. Measuring and understanding the degree of MI and VMA in the quadriceps femoris has particular clinical relevance because different knee-joint disorders, such as ligament injuries or osteoarthritis, increase the levels of MI observed and markedly reduce the capability of full VMA.
Keywords: electromyography, interpolated twitch technique, mechanomyography, muscle activation, muscle inhibition
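The interpolated twitch technique used above estimates voluntary activation as VA% = (1 − superimposed twitch / resting twitch) × 100. A minimal sketch; the torque values below are hypothetical, chosen only to illustrate how a 96% activation figure arises:

```python
def voluntary_activation_pct(superimposed_twitch_nm, resting_twitch_nm):
    """Interpolated-twitch estimate of voluntary activation (%).

    A smaller twitch evoked on top of an MVIC means fewer motor units
    were left unrecruited, i.e. higher voluntary activation.
    """
    return (1 - superimposed_twitch_nm / resting_twitch_nm) * 100

# Hypothetical torques: a 2 N.m superimposed twitch on a 50 N.m resting twitch.
print(voluntary_activation_pct(2.0, 50.0))   # ~96% activation
```

By the same token, muscle inhibition is the complement of this value: the residual superimposed twitch quantifies the drive the central nervous system failed to supply.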
Procedia PDF Downloads 347
537 Production of Bricks Using Mill Waste and Tyre Crumbs at a Low Temperature by Alkali-Activation
Authors: Zipeng Zhang, Yat C. Wong, Arul Arulrajah
Abstract:
Since automobiles became widely popular around the early 20th century, end-of-life tyres have been one of the major types of waste humans encounter; every minute, considerable quantities of tyres are disposed of around the world. Most end-of-life tyres are simply landfilled or stockpiled rather than recycled. To address the potential issues caused by tyre waste, incorporating it into construction materials is one possibility. This research investigated the viability of manufacturing bricks from mill waste and tyre crumbs by alkali-activation at a relatively low temperature. The mill waste was extracted from a brick factory located in Melbourne, Australia, and the tyre crumbs were supplied by a local recycling company. As the main precursor, the mill waste was activated by an alkaline solution comprising sodium hydroxide (8 M) and sodium silicate (liquid). The dosage of alkaline solution (relative to the solid weight) and the weight ratio between sodium hydroxide and sodium silicate were fixed at 20 wt.% and 1:1, respectively. Tyre crumbs were introduced to substitute part of the mill waste at four ratios by weight, namely 0, 5, 10 and 15%. The mixture of mill waste and tyre crumbs was first dry-mixed for 2 min to ensure homogeneity, followed by 2.5 min of wet mixing after adding the solution. The mixture was subsequently press-moulded into blocks 109 mm long, 112.5 mm wide and 76 mm high. The blocks were cured at 50°C and 95% relative humidity for 2 days, followed by oven-curing at 110°C for 1 day. All the samples were then kept under ambient conditions until testing at the ages of 7 and 28 days. A series of tests was conducted to evaluate the linear shrinkage, compressive strength and water absorption of the samples. In addition, the microstructure of the samples was examined via scanning electron microscopy (SEM). 
The results showed that the highest compressive strength, 17.6 MPa, was found in the 28-day-old group using 5 wt.% tyre crumbs. This strength satisfies the requirements of ASTM C67. However, increasing the addition of tyre crumbs weakened the compressive strength of the samples. Apart from strength, the linear shrinkage and water absorption of all the groups met the requirements of the standard. It is worth noting that the use of tyre crumbs tended to decrease the shrinkage and even caused expansion when the tyre content reached 15 wt.%. The research also found a significant reduction in compressive strength for the samples after the water absorption tests. In conclusion, tyre crumbs have the potential to be used as a filler material in brick manufacturing, but more research needs to be done to tackle the durability problem in the future.
Keywords: bricks, mill waste, tyre crumbs, waste recycling
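The water-absorption figure reported for each group is a simple mass ratio: under ASTM C67 the cold-water absorption is (Ws − Wd)/Wd × 100, where Wd is the oven-dry mass and Ws the mass after immersion. A sketch with hypothetical brick masses (the abstract does not report the measured masses):

```python
def water_absorption_pct(dry_mass_g, saturated_mass_g):
    """Cold-water absorption per ASTM C67: (Ws - Wd) / Wd * 100 (%)."""
    return (saturated_mass_g - dry_mass_g) / dry_mass_g * 100

# Hypothetical masses for one block before and after 24 h immersion.
print(f"{water_absorption_pct(1850.0, 2130.0):.1f}% absorption")
```

The same measurement also explains the post-absorption strength loss noted above: the absorbed water occupies and softens the binder matrix, so blocks tested wet carry less load.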
Procedia PDF Downloads 122