Search results for: speed ratio
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7136

1736 Hybrid Reusable Launch Vehicle for Space Application: A Naval Approach

Authors: Rajasekar Elangopandian, Anand Shanmugam

Abstract:

To reduce the cost of launching satellites and payloads to orbit, this project envisages a combination of several technologies. The concept comprises four elements, the first of which is the flight mission profile, which defines how the mission is conducted. Conventional magnetic levitation provides the initial thrust. As the name reusable launch vehicle indicates, the vehicle is designed for repeated use. It carries a miniature rocket that produces the required thrust and two JATO (jet-assisted takeoff) boosters that provide the initial boost. The vehicle resembles an airplane in design and is positioned on a superconducting rail track. When a high-power electric current is applied to the track, the vehicle levitates according to the principle of magnetic levitation. Once the vehicle covers the required takeoff distance, the two boosters ignite, each delivering 48 kN of thrust, and the vehicle follows a vertical path to the edge of the atmosphere. As soon as it attains the necessary speed, the boosters cut off. On reaching space, the onboard spacecraft places the satellite in the desired orbit. When its work is complete, the apogee motors provide an initial 22 N impulse to return the vehicle into the Earth's atmosphere, after which it descends automatically in free fall under gravity. After the re-entry phase it enters a spiral flight mode and lands where the superconducting levitated rail track is located; the track captures and holds the vehicle by reversing the magnet poles and varying the current. The initial cost of building such a vehicle may be high, but with frequent use the launch cost is expected to be roughly half that of present-day technology. 
The incorporation of such a mechanism makes the vehicle 'hybrid', and its reusability makes it a 'reusable launch vehicle'; together these give the hybrid reusable launch vehicle.

Keywords: JATO (jet-assisted takeoff) boosters, magnetic levitation, reusable launch vehicle, thrust

Procedia PDF Downloads 413
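As a rough illustration of the boosted takeoff phase described above, the 2 × 48 kN JATO thrust can be turned into a thrust-to-weight check; the vehicle mass used here is a hypothetical figure chosen only for the sketch, not a value from the abstract:

```python
# Back-of-envelope thrust-to-weight check for the boosted ascent phase.
# The 2 x 48 kN JATO thrust comes from the abstract; the 5,000 kg
# vehicle mass is a hypothetical figure used only for illustration.
G = 9.81                           # m/s^2, standard gravity

booster_thrust = 2 * 48_000        # N, two JATO boosters at 48 kN each
vehicle_mass = 5_000               # kg, assumed for this sketch
weight = vehicle_mass * G          # N

thrust_to_weight = booster_thrust / weight
net_acceleration = (booster_thrust - weight) / vehicle_mass  # m/s^2, vertical

print(f"T/W = {thrust_to_weight:.2f}, net acceleration = {net_acceleration:.1f} m/s^2")
```

A thrust-to-weight ratio above 1 is the minimum condition for a purely vertical boosted climb.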
1735 The Feasibility of Anaerobic Digestion at 45°C

Authors: Nuruol S. Mohd, Safia Ahmed, Rumana Riffat, Baoqiang Li

Abstract:

Anaerobic digestion at mesophilic and thermophilic temperatures has been widely studied and evaluated by numerous researchers. Little extensive research has been conducted on anaerobic digestion in the intermediate zone of 45°C, mainly due to the notion that limited microbial activity occurs within this zone. The objectives of this research were to evaluate the performance and capability of anaerobic digestion at 45°C in producing Class A biosolids, in comparison to mesophilic and thermophilic anaerobic digestion systems operated at 35°C and 55°C, respectively. In addition, possible inhibition factors affecting the performance of the digestion system at this temperature were investigated. The 45°C anaerobic digestion systems were not able to achieve methane yields and effluent quality comparable to the mesophilic system, even though they produced biogas with about 62-67% methane. The 45°C digesters suffered from high acetate accumulation, but sufficient buffering capacity was observed, as the pH, alkalinity and volatile fatty acids (VFA)-to-alkalinity ratio were within recommended values. The acetate accumulation observed in the 45°C systems was presumably due to the high temperature, which contributed to a high hydrolysis rate. Consequently, a large amount of toxic salts was produced, which combined with the substrate, making it not readily available for consumption by methanogens. Acetate accumulation, even though it contributed to a 52-71% reduction in the acetate degradation process, could not be considered completely inhibitory. Additionally, at 45°C no ammonia inhibition was observed, and the digesters achieved a volatile solids (VS) reduction of 47.94±4.17%. Pathogen counts were less than 1,000 MPN/g total solids, thus producing Class A biosolids.

Keywords: 45°C anaerobic digestion, acetate accumulation, class A biosolids, salt toxicity

Procedia PDF Downloads 290
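The stability indicators cited above (a VFA-to-alkalinity ratio within recommended values, pathogen counts below 1,000 MPN/g total solids for Class A) can be sketched as a simple screening calculation; all readings below are hypothetical, and the ~0.3 upper bound on the ratio is a common rule of thumb rather than a value from the abstract:

```python
# Quick screen of digester stability and the Class A pathogen criterion
# mentioned in the abstract. Sample readings are hypothetical; the 0.3
# VFA/alkalinity upper bound is a commonly cited rule of thumb, and the
# Class A limit is < 1000 MPN/g total solids.
vfa = 450.0          # mg/L as acetic acid (hypothetical reading)
alkalinity = 2500.0  # mg/L as CaCO3 (hypothetical reading)
pathogens = 800.0    # MPN/g total solids (hypothetical reading)

vfa_alk_ratio = vfa / alkalinity
stable_buffering = vfa_alk_ratio <= 0.3
class_a = pathogens < 1000

print(f"VFA/alkalinity = {vfa_alk_ratio:.2f}, stable: {stable_buffering}, Class A: {class_a}")
```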
1734 Toxic Ingredients Contained in Our Cosmetics

Authors: El Alia Boularas, H. Bekkar, H. Larachi, H. Rezk-kallah

Abstract:

Introduction: Although cosmetics are used in everyday life, these products are not all innocuous and harmless, as they may contain ingredients responsible for allergic reactions and, possibly, other health problems. Additionally, environmental pollution should be taken into account. Thus, it is time to investigate what is 'hidden behind beauty'. Aims: 1. To investigate the prevalence in cosmetics of 13 chemical ingredients of concern, which Algerians use regularly. 2. To profile the consumers questioned and describe their opinion of cosmetics. Methods: The survey was carried out in 2013 over a period of 3 months among Algerian Internet users having an e-mail address or a Facebook account. The study investigated 13 chemical agents posing health and environmental problems, selected after analysis of recent studies published on the subject, the lists of national and international regulatory references on chemical hazards, and queries of the Skin Deep database maintained by the Environmental Working Group. Results: 300 people distributed across the Algerian territory participated in the survey, providing information about 731 cosmetics; 86% were aged from 20 to 39 years, with a sex ratio of 0.27. Of the analyzed cosmetics, 43% contained at least one of the 13 toxic ingredients. The targeted ingredient most frequently reported was 'perfume', followed by parabens and PEG. 85% of the participants declared that cosmetics 'can contain toxic substances', 27% asserted that they regularly check the list of ingredients when buying cosmetics, and 61% said that they try to avoid the toxic ingredients, among whom 24% were especially vigilant about the presence of parabens; 95% were in favour of strengthening the Algerian laws on cosmetics. Conclusion: The results of the survey indicate a widespread presence of toxic chemical ingredients in the personal care products that Algerians use daily.

Keywords: Algerians consumers, cosmetics, survey, toxic ingredients

Procedia PDF Downloads 264
1733 Design and Development of an Innovative MR Damper Based on Intelligent Active Suspension Control of a Malaysian Model Vehicle

Authors: L. Wei Sheng, M. T. Noor Syazwanee, C. J. Carolyna, M. Amiruddin, M. Pauziah

Abstract:

This paper presents an alternative to the classical passive suspension system, revised to improve comfort and handling performance. An active magnetorheological (MR) suspension system is proposed to explore active suspension, given its freedom to independently specify the characteristics of load carrying, handling, and ride quality. A Malaysian quarter-car model with two degrees of freedom (2DOF) is designed and constructed to simulate the actions of an active vehicle suspension system. The structure of a conventional twin-tube shock absorber is modified both internally and externally to suit the active suspension system. The peripheral structure of the shock absorber is altered to enable assembly and disassembly of the damper through a non-permanent joint, and the stress in the designed joint is simulated using Finite Element Analysis. Simulation of the internal part, where an electrified copper coil of 24 AWG is wound, is done using Finite Element Method Magnetics to measure the magnetic flux density inside the MR damper. The primary purpose of this approach is to reduce the vibration transmitted from road surface irregularities while maintaining solid manoeuvrability. The aim of this research is to develop an intelligent control system for a continuously adjustable damping automotive suspension. Ride quality is improved by reducing the vertical body acceleration experienced by the car body when it is disturbed by speed bumps and random road roughness. Findings from this research are expected to enhance ride quality, which in turn can prevent the deteriorating effect of vibration on the vehicle's condition as well as the passengers' well-being.

Keywords: active suspension, FEA, magneto rheological damper, Malaysian quarter car model, vibration control

Procedia PDF Downloads 197
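A minimal sketch of the 2DOF quarter-car model the study above is built around, here with a fixed passive damper driven over a step bump and integrated with semi-implicit Euler. All parameter values are illustrative assumptions, not the paper's Malaysian model data:

```python
# Minimal 2-DOF quarter-car sketch (sprung body + unsprung wheel) driven
# over a step bump. Semi-implicit Euler integration; all parameter values
# are illustrative assumptions, not the paper's data.
m_s, m_u = 250.0, 40.0           # kg: sprung (body) and unsprung (wheel) masses
k_s, k_t = 15_000.0, 150_000.0   # N/m: suspension and tyre stiffness
c = 1_000.0                      # N*s/m: fixed damper coefficient (passive case)

dt, t_end = 0.001, 5.0
x_s = v_s = x_u = v_u = 0.0      # displacements (m) and velocities (m/s)
peak_body = 0.0

t = 0.0
while t < t_end:
    x_r = 0.05 if 0.5 < t < 0.7 else 0.0   # 5 cm bump under the tyre
    f_susp = k_s * (x_s - x_u) + c * (v_s - v_u)   # suspension force on body
    a_s = -f_susp / m_s
    a_u = (f_susp - k_t * (x_u - x_r)) / m_u
    v_s += a_s * dt; v_u += a_u * dt       # update velocities first...
    x_s += v_s * dt; x_u += v_u * dt       # ...then positions (semi-implicit)
    peak_body = max(peak_body, abs(x_s))
    t += dt

print(f"peak body displacement: {peak_body*100:.1f} cm, final: {x_s*1000:.2f} mm")
```

In a semi-active MR setup, the constant `c` would be replaced by a controller-commanded damping coefficient updated each time step.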
1732 A Statistical Analysis on the Comparison of First and Second Waves of COVID-19 and Importance of Early Actions in Public Health for Third Wave in India

Authors: Maitri Dave

Abstract:

Coronaviruses (CoV) are infectious viruses that have had a severe global impact, causing serious respiratory problems and lung damage. India reported its first case of COVID-19 in January 2020. The first wave of COVID-19 lasted from April to September 2020. A second peak followed in March 2021, which proved more dangerous owing to a lack of medical equipment: it created a resource deficiency globally, and specifically in India, where necessary life-saving equipment such as ventilators and oxygenators was insufficient to meet demand. In response to this situation, India began its vaccination programme in January 2021 and has successfully administered 254,671,259 doses of vaccines so far, covering only 15.5% of the total population, while only 3.6% of the total population is fully vaccinated. India has authorized the British Oxford-AstraZeneca vaccine (Covishield), the Indian BBV152 vaccine (Covaxin), and the Russian Sputnik V vaccine for emergency use. In the present study, we collected state-wise data for both the first and second waves and analyzed them using MS Excel Version 2019 and SPSS Statistics Version 26. Following the trends, we predicted the characteristics of the upcoming third wave of COVID-19 and recommended strategies, early actions, and measures that the public health system in India can take to combat the third wave more effectively.

Keywords: COVID-19, vaccination, Covishield, Coronavirus

Procedia PDF Downloads 198
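The vaccination figures quoted above can be cross-checked with a little arithmetic, assuming "fully vaccinated" means two doses and is a subset of the 15.5% with at least one dose:

```python
# Consistency check on the vaccination figures quoted in the abstract:
# 254,671,259 doses, 15.5% of the population with at least one dose and
# 3.6% fully (two-dose) vaccinated. Backing out the implied population:
doses = 254_671_259
at_least_one = 0.155      # fraction with >= 1 dose
fully = 0.036             # fraction with 2 doses (assumed subset of the above)

# doses = pop * (one-dose share) + 2 * pop * (two-dose share)
#       = pop * (at_least_one - fully) + 2 * pop * fully
#       = pop * (at_least_one + fully)
implied_population = doses / (at_least_one + fully)
print(f"implied population: {implied_population/1e9:.2f} billion")
```

The implied population of roughly 1.33 billion is consistent with India's, so the quoted percentages and dose count hang together.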
1731 Exploring Emerging Viruses From a Protected Reserve

Authors: Nemat Sokhandan Bashir

Abstract:

Threats from viruses to agricultural crops could be even larger than the losses caused by other pathogens because, in many cases, viral infection is latent yet crucial from an epidemic point of view. Wild vegetation can be a source of many viruses that eventually end up in crop plants. Although often asymptomatic in wild plants due to adaptation, they can potentially cause serious losses in crops. Therefore, exploring viruses in wild vegetation is very important. Recently, omics approaches have been quite useful for exploring plant viruses from various plant sources, especially wild vegetation. For instance, we have discovered viruses such as Ambrosia asymptomatic virus 1 (AAV-1) through the application of metagenomics in the Oklahoma Prairie Reserve. Accordingly, extracts from randomly sampled plants are subjected to high-speed centrifugation and ultracentrifugation to separate virus-like particles (VLPs); nucleic acids in the form of DNA or RNA are then extracted from the VLPs by treatment with phenol-chloroform and subsequent precipitation with ethanol. The nucleic acid preparations are separately treated with RNase or DNase in order to determine the genome composition of the VLPs. In the case of RNA, complementary cDNA is synthesized before submission for DNA sequencing; for VLPs with DNA contents, the procedure is relatively straightforward, without making cDNA. Because the lengths of the nucleic acid contents of VLPs can differ, various strategies are employed to achieve sequencing; techniques similar to so-called "chromosome walking" may be used to obtain the sequences of long segments. Once the nucleotide sequence data were obtained, they were subjected to BLAST analysis to determine the most closely related previously reported virus sequences. In one case, we determined that the novel virus was AAV-1 because the sequence comparison and analysis revealed that the reads were closest to Indian citrus ringspot virus (ICRSV). 
AAV-1 has an RNA genome of 7,408 nucleotides containing six open reading frames (ORFs). Based on phylogenies inferred from the replicase and coat protein ORFs of the virus, it was placed in the genus Mandarivirus.

Keywords: wild, plant, novel, metagenomics

Procedia PDF Downloads 59
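The annotation step described above (identifying ORFs in an assembled genome, as with the six ORFs found on the 7,408 nt AAV-1 genome) can be sketched with a toy ORF scanner; the sequence below is invented, and only the three forward frames are scanned:

```python
# Toy ORF scan of the kind used to annotate an assembled viral genome.
# The sequence is a short made-up example, not AAV-1 data; only forward
# reading frames are considered.
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=3):
    """Return (start, end) of ATG-to-stop ORFs in the three forward frames."""
    orfs = []
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i+3] == "ATG":
                j = i + 3
                while j + 3 <= len(seq) and seq[j:j+3] not in STOPS:
                    j += 3
                if j + 3 <= len(seq) and (j - i) // 3 >= min_codons:
                    orfs.append((i, j + 3))   # end index includes the stop codon
                    i = j + 3
                    continue
            i += 3
    return orfs

toy = "ATGAAATTTGGGTAACCATGCCCGGGAAATAG"
orfs = find_orfs(toy)
print(orfs)
```

A real pipeline would also scan the reverse complement and apply a much larger minimum ORF length before BLAST annotation.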
1730 Effect of L-Dopa on Performance and Carcass Characteristics in Broiler Chickens

Authors: B. R. O. Omidiwura, A. F. Agboola, E. A. Iyayi

Abstract:

The pure form of L-Dopa is used in humans to enhance muscular development and fat breakdown and to suppress Parkinson's disease. However, the L-Dopa in mucuna seed, when present with other antinutritional factors, causes nutritional disorders in monogastric animals. Information on the utilisation of pure L-Dopa in monogastric animals is scarce. Therefore, the effect of L-Dopa on growth performance and carcass characteristics in broiler chickens was investigated. Two hundred and forty one-day-old chicks were allotted to six treatments, which consisted of a positive control (PC) with standard energy (3100 Kcal/Kg) and a negative control (NC) with high energy (3500 Kcal/Kg). The remaining four diets were NC+0.1, NC+0.2, NC+0.3 and NC+0.4% L-Dopa, respectively. All treatments had 4 replicates in a completely randomized design. Body weight gain, final weight, feed intake, dressed weight and carcass characteristics were determined. Body weight gain and final weight of birds fed the PC diet were 1791.0 and 1830.0 g, those fed NC+0.1% L-Dopa were 1827.7 and 1866.7 g, and those fed NC+0.2% L-Dopa were 1871.9 and 1910.9 g, respectively; these, and the feed intake of the PC diet (3231.5 g), were better than the other treatments. The dressed weights of 1375.0 g and 1357.1 g for birds fed NC+0.1% and NC+0.2% L-Dopa, respectively, were similar to each other but better than the other treatments. Also, the thighs (202.5 g and 194.9 g) and the breast meat (413.8 g and 410.8 g) of birds fed NC+0.1% and NC+0.2% L-Dopa, respectively, were similar but better than those of birds fed the other treatments. The drumstick of birds fed NC+0.1% L-Dopa (220.5 g) was better than that of birds on the other diets. Meat-to-bone ratio and relative organ weights were not affected across treatments. L-Dopa, at the levels tested, had no detrimental effect on broilers; rather, better bird performance and carcass characteristics were observed, especially at the 0.1% and 0.2% inclusion rates. Therefore, 0.2% inclusion is recommended in broiler chicken diets for improved performance and carcass characteristics.

Keywords: broilers, carcass characteristics, l-dopa, performance

Procedia PDF Downloads 297
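The completely randomized design described above (six diets, four replicates) is typically summarised with treatment means and a one-way ANOVA; the replicate weight gains below are hypothetical, chosen only to mirror the reported treatment ranking:

```python
# Sketch of summarising a completely randomised design (six diets, four
# replicates) with treatment means and a one-way ANOVA F statistic.
# The replicate weight gains (g) below are hypothetical, not the study's data.
data = {
    "PC":       [1780, 1795, 1802, 1787],
    "NC":       [1701, 1688, 1710, 1695],
    "NC+0.1%":  [1820, 1835, 1828, 1827],
    "NC+0.2%":  [1865, 1878, 1870, 1875],
    "NC+0.3%":  [1760, 1752, 1748, 1756],
    "NC+0.4%":  [1705, 1698, 1712, 1701],
}

means = {d: sum(v) / len(v) for d, v in data.items()}
n_total = sum(len(v) for v in data.values())
grand = sum(sum(v) for v in data.values()) / n_total

# One-way ANOVA sums of squares for the CRD
ss_between = sum(len(v) * (means[d] - grand) ** 2 for d, v in data.items())
ss_within = sum((x - means[d]) ** 2 for d, v in data.items() for x in v)
df_b, df_w = len(data) - 1, n_total - len(data)
f_stat = (ss_between / df_b) / (ss_within / df_w)

best = max(means, key=means.get)
print(f"best diet: {best} ({means[best]:.0f} g), F({df_b},{df_w}) = {f_stat:.1f}")
```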
1729 An Investigation into Why Liquefaction Charts Work: A Necessary Step toward Integrating the States of Art and Practice

Authors: Tarek Abdoun, Ricardo Dobry

Abstract:

This paper is a systematic effort to clarify why field liquefaction charts based on Seed and Idriss' Simplified Procedure work so well. This is a necessary step toward integrating the states of the art (SOA) and practice (SOP) for evaluating liquefaction and its effects. The SOA relies mostly on laboratory measurements and correlations with the void ratio and relative density of the sand. The SOP is based on field measurements of penetration resistance and shear wave velocity coupled with empirical or semi-empirical correlations. This gap slows down further progress in both SOP and SOA. The paper accomplishes its objective through: a literature review of relevant aspects of the SOA, including factors influencing the threshold shear strain and pore pressure buildup during cyclic strain-controlled tests; a discussion of factors influencing field penetration resistance and shear wave velocity; and a discussion of the meaning of the curves in the liquefaction charts separating liquefaction from no liquefaction, aided by recent full-scale and centrifuge results. It is concluded that the charts are curves of constant cyclic strain at the lower end (Vs1 < 160 m/s), with this strain being about 0.03 to 0.05% for earthquake magnitude Mw ≈ 7. It is also concluded, more speculatively, that the curves at the upper end probably correspond to a variable increasing cyclic strain and Ko, with this upper end controlled by overconsolidated and preshaken sands, and with the cyclic strains needed to cause liquefaction being as high as 0.1 to 0.3%. These conclusions are validated by application to case histories corresponding to Mw ≈ 7, mostly in the San Francisco Bay Area of California during the 1989 Loma Prieta earthquake.

Keywords: permeability, lateral spreading, liquefaction, centrifuge modeling, shear wave velocity charts

Procedia PDF Downloads 279
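The Simplified Procedure underlying the charts discussed above reduces the seismic demand to a cyclic stress ratio, CSR = 0.65 (a_max/g)(σv/σ'v) rd; a sketch with illustrative inputs (not the paper's case-history data):

```python
# Cyclic stress ratio from the Seed-Idriss Simplified Procedure on which
# field liquefaction charts are based. Input values are illustrative,
# not taken from the paper's case histories.
def csr_simplified(a_max_g, sigma_v, sigma_v_eff, r_d):
    """CSR = 0.65 * (a_max/g) * (total/effective vertical stress) * r_d."""
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * r_d

# e.g. a_max = 0.25 g, total stress 100 kPa, effective stress 60 kPa,
# stress-reduction coefficient r_d = 0.95 at shallow depth
csr = csr_simplified(0.25, 100.0, 60.0, 0.95)
print(f"CSR = {csr:.3f}")
```

In chart form, this demand is compared against the cyclic resistance ratio read off at the site's normalized penetration resistance or Vs1.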
1728 Cyclic Behaviour of Wide Beam-Column Joints with Shear Strength Ratios of 1.0 and 1.7

Authors: Roy Y. C. Huang, J. S. Kuang, Hamdolah Behnam

Abstract:

Beam-column connections play an important role in the reinforced concrete moment-resisting frame (RCMRF), which is one of the most commonly used structural systems around the world. The premature failure of such connections would severely limit the seismic performance and increase the vulnerability of RCMRFs. In the past decades, researchers primarily focused on investigating the structural behaviour and failure mechanisms of conventional beam-column joints, in which the beam width is either smaller than or equal to the column width, while studies on wide beam-column joints were scarce. This paper presents the preliminary experimental results of two full-scale exterior wide beam-column connections, designed and detailed mainly according to ACI 318-14 and ACI 352R-02, under reversed cyclic loading. The ratios of the design shear force to the nominal shear strength of these specimens are 1.0 and 1.7, respectively, so as to probe the differences between the joint shear strength observed experimentally and that predicted by design codes of practice. Flexural failure dominated in the specimen with a ratio of 1.0, in which full-width plastic hinges were observed, while both beam hinges and post-peak joint shear failure occurred in the other specimen. No sign of premature joint shear failure was found, which is inconsistent with the ACI codes' prediction. Finally, a modification of the current codes of practice is proposed to accurately predict the joint shear strength of wide beam-column joints.

Keywords: joint shear strength, reversed cyclic loading, seismic vulnerability, wide beam-column joints

Procedia PDF Downloads 310
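The design-to-nominal shear ratio used to characterise the specimens above can be sketched with an ACI 352R-type nominal strength, Vn = γ√f'c bj h (psi units); the γ value, loads and dimensions below are illustrative assumptions, not the actual specimen properties (the code's γ depends on how many faces confine the joint):

```python
# Sketch of the design-to-nominal joint shear ratio. ACI 352R-type nominal
# strength in psi units: Vn = gamma * sqrt(f'c) * b_j * h, where gamma
# depends on joint confinement. All numbers below are illustrative.
import math

def joint_shear_ratio(v_design, gamma, fc_psi, bj_in, h_in):
    v_nominal = gamma * math.sqrt(fc_psi) * bj_in * h_in  # lb
    return v_design / v_nominal

# gamma = 12 assumed here for a lightly confined joint; f'c = 4000 psi;
# 20 in x 20 in effective joint area; design shear chosen near Vn
ratio = joint_shear_ratio(v_design=303_600.0, gamma=12, fc_psi=4000.0,
                          bj_in=20.0, h_in=20.0)
print(f"Vu/Vn = {ratio:.2f}")
```

Scaling the design shear force against this Vn is how target ratios such as the specimens' 1.0 and 1.7 are set.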
1727 An Impairment of Spatiotemporal Gait Adaptation in Huntington's Disease when Navigating around Obstacles

Authors: Naznine Anwar, Kim Cornish, Izelle Labuschagne, Nellie Georgiou-Karistianis

Abstract:

Falls and subsequent injuries are common in individuals with symptomatic Huntington's disease (symp-HD). As part of daily walking, navigating around obstacles may incur a greater risk of falls in symp-HD. We designed an obstacle-crossing experiment to examine adaptive gait dynamics and to identify the underlying spatiotemporal gait characteristics that could increase the risk of falling in symp-HD. The experiment involved navigating around one or two ground-based obstacles under two conditions (walking while navigating around one obstacle, and walking while navigating around two obstacles). A total of 32 participants were included: 16 symp-HD and 16 age- and sex-matched healthy controls (HC). We used a GAITRite electronic walkway to examine the spatiotemporal gait characteristics and inter-trial gait variability while participants walked at their preferred speed. A minimum of six trials was completed for the baseline free walk and for each condition while navigating around the obstacles. For analysis, we separated all walking steps into three phases: approach steps, navigating steps and recovery steps. The mean and inter-trial variability (within-participant standard deviation) of each step gait variable were calculated across the six trials. We found that symp-HD individuals significantly decreased their gait velocity and step length and increased their step duration variability during the navigating and recovery steps compared with the approach steps. In contrast, HC individuals showed less difference from baseline in gait velocity, step time and step length variability in both conditions and across all three phases. These findings indicate that increased spatiotemporal gait variability may be a compensatory strategy adopted by symp-HD individuals to navigate obstacles effectively during walking. 
Such findings may benefit clinicians in developing strategies for HD individuals to improve functional outcomes in home- and hospital-based rehabilitation programs.

Keywords: Huntington’s disease, gait variables, navigating around obstacle, basal ganglia dysfunction

Procedia PDF Downloads 430
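The inter-trial variability measure used above (the within-participant standard deviation of a gait variable across the six trials) can be sketched as follows, with invented step-length readings:

```python
# Inter-trial variability as defined in the abstract: the within-participant
# standard deviation of a gait variable across the six trials. The step-length
# readings (cm) below are invented for illustration.
import statistics

trials_step_length = [62.1, 60.8, 63.0, 61.5, 59.9, 62.4]  # cm, six trials

mean_len = statistics.mean(trials_step_length)
sd_len = statistics.stdev(trials_step_length)   # within-participant SD
cv_percent = 100 * sd_len / mean_len            # coefficient of variation

print(f"mean = {mean_len:.1f} cm, SD = {sd_len:.2f} cm, CV = {cv_percent:.1f}%")
```

The same calculation would be repeated per participant, per phase (approach, navigating, recovery), before comparing groups.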
1726 Preparation of Carbon Nanofiber Reinforced HDPE Using Dialkylimidazolium as a Dispersing Agent: Effect on Thermal and Rheological Properties

Authors: J. Samuel, S. Al-Enezi, A. Al-Banna

Abstract:

High-density polyethylene reinforced with carbon nanofibers (HDPE/CNF) has been prepared via melt processing using dialkylimidazolium tetrafluoroborate (an ionic liquid) as a dispersion agent. The prepared samples were characterized by thermogravimetric (TGA) and differential scanning calorimetric (DSC) analyses. The samples blended with the imidazolium ionic liquid exhibit higher thermal stability. DSC analysis showed clear miscibility of the ionic liquid in the HDPE matrix, with a single endothermic peak. The melt rheological analysis of the HDPE/CNF composites was performed using an oscillatory rheometer. The influence of CNF and ionic liquid concentration (0, 0.5, and 1 wt%) on the viscoelastic parameters was investigated at 200 °C over an angular frequency range of 0.1 to 100 rad/s. The rheological analysis shows shear-thinning behavior for the composites. An improvement in the viscoelastic properties was observed as the nanofiber concentration increased. The increase in modulus values was attributed to the structural rigidity imparted by the high-aspect-ratio CNF. The modulus values and complex viscosity of the composites increased significantly at low frequencies. Composites blended with the ionic liquid exhibit slightly lower complex viscosity and modulus values than the corresponding HDPE/CNF compositions. This reduction in melt viscosity, arising from the wetting effect of polymer-ionic liquid combinations, is an additional benefit for polymer composite processing.

Keywords: high-density polyethylene, carbon nanofibers, ionic liquid, complex viscosity

Procedia PDF Downloads 111
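The shear-thinning behaviour reported above is commonly summarised by a power law, |η*|(ω) = K ω^(n−1), with n < 1 indicating shear thinning; a sketch fitting synthetic, noise-free data by linear regression in log-log space (the K and n values are illustrative, not measured):

```python
# Power-law summary of shear thinning: |eta*|(omega) = K * omega**(n-1),
# n < 1 means shear thinning. Synthetic data with known K and n are fitted
# by least squares in log-log space; values are illustrative.
import math

K_true, n_true = 5000.0, 0.4
omegas = [0.1, 1.0, 10.0, 100.0]                     # rad/s
etas = [K_true * w ** (n_true - 1) for w in omegas]  # Pa*s, noise-free

xs = [math.log(w) for w in omegas]
ys = [math.log(e) for e in etas]
N = len(xs)
slope = (N * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
        (N * sum(x * x for x in xs) - sum(xs) ** 2)
n_fit = slope + 1                                    # slope of log-log line = n - 1
K_fit = math.exp((sum(ys) - slope * sum(xs)) / N)

print(f"fitted n = {n_fit:.2f}, K = {K_fit:.0f} Pa*s")
```

On real rheometer data the fit would be restricted to the frequency window where the power law actually holds.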
1725 Comparison of Soil Test Extractants for Determination of Available Soil Phosphorus

Authors: Violina Angelova, Stefan Krustev

Abstract:

The aim of this work was to evaluate the effectiveness of different soil test extractants for the determination of available soil phosphorus in five internationally certified standard soils, sludge and clay (NCS DC 85104, NCS DC 85106, ISE 859, ISE 952, ISE 998). The certified samples were extracted with the following methods/extractants: CaCl₂, CaCl₂ and DTPA (CAT), double lactate (DL), ammonium lactate (AL), calcium acetate lactate (CAL), Olsen, Mehlich 3, Bray and Kurtz I, and Morgan, which are commonly used in soil testing laboratories. The phosphorus in the soil extracts was measured colorimetrically using a Spectroquant Pharo 100 spectrometer. The methods used in the study were evaluated according to the recovery of available phosphorus, ease of application and rapidity of performance. The relationships between the methods were examined statistically. Good agreement of the results from the different soil tests was established for all certified samples. In general, the P values extracted by the nine extraction methods correlated significantly with each other. When grouping the soils according to pH, organic carbon content and clay content, the weaker extraction methods showed analogous trends; common tendencies were also found among the stronger extraction methods. Other factors influencing the extraction force of the different methods include the soil-to-solution ratio, as well as the duration and intensity of shaking the samples. The mean extractable P in the certified samples was found to be in the order CaCl₂ < CAT < Morgan < Bray and Kurtz I < Olsen < CAL < DL < Mehlich 3 < AL. Although the nine methods extracted different amounts of P from the certified samples, the values of P extracted by the different methods were strongly correlated among themselves. Acknowledgment: The financial support by the Bulgarian National Science Fund Projects DFNI Н04/9 and DFNI Н06/21 is greatly appreciated.

Keywords: available soil phosphorus, certified samples, determination, soil test extractants

Procedia PDF Downloads 134
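The statistical core of the comparison above is the correlation between P values extracted by different methods on the same samples; a Pearson-correlation sketch with hypothetical readings for two of the methods:

```python
# Pearson correlation between two extraction methods' P values on the same
# samples, the kind of pairwise check behind the abstract's method comparison.
# The mg/kg readings below are hypothetical, not the certified-sample data.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

olsen   = [12.0, 35.0, 8.0, 50.0, 22.0]    # hypothetical samples, method A
mehlich = [18.0, 52.0, 11.0, 76.0, 30.0]   # hypothetical samples, method B

r = pearson(olsen, mehlich)
print(f"r = {r:.3f}")
```

In the full study this would be computed for all 36 method pairs to build a correlation matrix.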
1724 Validation Study of Radial Aircraft Engine Model

Authors: Lukasz Grabowski, Tytus Tulwin, Michal Geca, P. Karpinski

Abstract:

This paper presents a radial aircraft engine model created in the AVL Boost software. The model is a one-dimensional physical model of the engine which enables investigation of the impact of ignition system design on engine performance (power, torque, fuel consumption). In addition, the model allows research under variable environmental conditions to reflect varied flight conditions (altitude, humidity, cruising speed). Before the simulation research, the model parameters were identified and the model was validated. In order to verify the feasibility of the takeoff power of the gasoline radial aircraft engine model, a validation study was carried out. The first stage of the identification was completed with reference to the technical documentation provided by the engine manufacturer and to experiments on a test stand with the real engine. The second stage involved a comparison of simulation results with the results of engine stand tests performed on a WSK 'PZL-Kalisz' engine, loaded by a propeller in a special test bench. Identifying the model parameters involved comparing the test results to the simulation in terms of: pressure behind the throttles, pressure in the inlet pipe, the time course of pressure in the first inlet pipe, power, and specific fuel consumption. Accordingly, the required coefficients and the error of the simulation calculation relative to the real-object experiments were determined. The obtained time course of pressure and its values agree with the experimental results. Additionally, the engine power and specific fuel consumption agree closely with the bench tests. The mapping error does not exceed 1.5%, which positively verifies the combustion model and allows engine performance to be predicted if the combustion process is modified. Subsequent tests verified the model completely. 
The maximum mapping error for the pressure behind the throttles and the inlet pipe pressure is 4%, which proves the model of the inlet duct in the engine with the charging compressor to be correct.

Keywords: 1D-model, aircraft engine, performance, validation

Procedia PDF Downloads 323
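The validation criterion above is a relative mapping error between simulated and measured quantities, with a 1.5% threshold quoted for power; a sketch with hypothetical readings (arbitrary units):

```python
# Relative mapping error between simulated and measured quantities, the
# validation criterion used in the abstract. Readings are hypothetical
# (arbitrary units); the 1.5% threshold is the value quoted in the text.
def mapping_error(simulated, measured):
    return abs(simulated - measured) / measured * 100.0   # percent

power_err = mapping_error(simulated=735.0, measured=745.0)
print(f"power mapping error = {power_err:.2f}% -> model OK: {power_err <= 1.5}")
```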
1723 Community Forest Management and Ecological and Economic Sustainability: A Two-Way Street

Authors: Sony Baral, Harald Vacik

Abstract:

This study analyzes the sustainability of community forest management in two community forests in the Terai and Hills of Nepal, representing four forest types: 1) Shorea robusta, 2) Terai hardwood, 3) Schima-Castanopsis, and 4) other Hill forests. The sustainability goals for this region include maintaining and enhancing the forest stocks. Considering this, we analysed changes in species composition, stand density, growing stock volume, and the growth-to-removal ratio at 3-5 year intervals from 2005-2016 within 109 permanent forest plots (57 in the Terai and 52 in the Hills). To complement the inventory data, forest users, forest committee members, and forest officials were consulted. The results indicate that the relative representation of economically valuable tree species has increased. Based on trends in stand density, both forests are being sustainably managed. Pole-sized trees dominated the diameter distribution, however, with a limited number of mature trees and declining regeneration. The forests were over-harvested until 2013 but under-harvested in the recent period in the Hills; in contrast, both forest types were under-harvested throughout the inventory period in the Terai. We found that the ecological dimension of sustainable forest management is strongly achieved while the economic dimension lags behind the current potential. Thus, we conclude that maintaining a large number of trees in the forest does not necessarily ensure both ecological and economic sustainability. Instead, priority should be given to a rational estimation of the annual harvest rates to enhance forest resource conditions together with regular benefits to the local communities.

Keywords: community forests, diversity, growing stock, forest management, sustainability, Nepal

Procedia PDF Downloads 80
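The growth-to-removal ratio used above as a harvest-sustainability indicator can be sketched directly; the increment and removal volumes are hypothetical:

```python
# Growth-to-removal ratio as a sustainability indicator: a ratio above 1
# means less volume is removed than grows back. Volumes (m^3/ha/yr) are
# hypothetical, not the study's inventory data.
growth = 4.2       # periodic annual increment, hypothetical
removal = 2.8      # harvest plus mortality removals, hypothetical

ratio = growth / removal
print(f"growth-to-removal ratio = {ratio:.2f} -> under-harvested: {ratio > 1}")
```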
1722 Understanding Mathematics Achievements among U. S. Middle School Students: A Bayesian Multilevel Modeling Analysis with Informative Priors

Authors: Jing Yuan, Hongwei Yang

Abstract:

This paper aims to understand U.S. middle school students' mathematics achievement by examining relevant student- and school-level predictors. Through a variance component analysis, the study first identifies evidence supporting the use of multilevel modeling. Then, a multilevel analysis is performed under Bayesian statistical inference, where prior information is incorporated into the modeling process. During the analysis, independent variables are entered sequentially in the order of theoretical importance to create a hierarchy of models. By evaluating each model using Bayesian fit indices, a best-fitting and most parsimonious model is selected, on which Bayesian statistical inference is performed for the purpose of result interpretation and discussion. The primary dataset for the Bayesian modeling is derived from the 2012 Program for International Student Assessment (PISA), with a secondary PISA dataset from 2003 analyzed under the traditional ordinary least squares method to provide the information needed to specify informative priors for a subset of the model parameters. The dependent variable is a composite measure of mathematics literacy, calculated from an exploratory factor analysis of all five PISA 2012 mathematics achievement plausible values, for which multiple pieces of evidence support data unidimensionality. The independent variables include demographic variables and content-specific variables: mathematics efficacy, teacher-student ratio, proportion of girls in the school, etc. Finally, the entire analysis is performed using the MCMCpack and MCMCglmm packages in R.

Keywords: Bayesian multilevel modeling, mathematics education, PISA, multilevel

Procedia PDF Downloads 315
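The variance-component step above amounts to estimating the intraclass correlation (ICC), the share of achievement variance lying between schools, which justifies a multilevel model when it is non-negligible; a method-of-moments sketch on synthetic balanced data (not PISA):

```python
# Method-of-moments ICC estimate for balanced groups, the quantity a
# variance component analysis inspects before committing to a multilevel
# model. Data are synthetic (random school effects), not PISA.
import random

random.seed(42)
n_schools, n_students = 30, 20
tau, sigma = 25.0, 60.0            # true between- and within-school SDs

schools = []
for _ in range(n_schools):
    u = random.gauss(0, tau)       # shared school effect
    schools.append([500 + u + random.gauss(0, sigma) for _ in range(n_students)])

school_means = [sum(s) / n_students for s in schools]
grand = sum(school_means) / n_schools
msb = n_students * sum((m - grand) ** 2 for m in school_means) / (n_schools - 1)
msw = sum((x - m) ** 2 for s, m in zip(schools, school_means) for x in s) / \
      (n_schools * (n_students - 1))

# Balanced one-way random-effects ANOVA estimator of the ICC
icc = (msb - msw) / (msb + (n_students - 1) * msw)
print(f"estimated ICC = {icc:.2f}")
```

Here the true ICC is τ²/(τ²+σ²) ≈ 0.15; an estimate well above zero is the usual signal that ordinary single-level regression would understate standard errors.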
1721 Effect of Helical Flow on Separation Delay in the Aortic Arch for Different Mechanical Heart Valve Prostheses by Time-Resolved Particle Image Velocimetry

Authors: Qianhui Li, Christoph H. Bruecker

Abstract:

Atherosclerotic plaques are typically found where flow separation and variations of shear stress occur. Although helical flow patterns and flow separations have both been recorded in the aorta, their relationship has not been clearly established, especially in the presence of artificial heart valve prostheses. Therefore, an experimental study was performed to investigate the hemodynamic performance of different mechanical heart valves (MHVs), i.e., the SJM Regent bileaflet mechanical heart valve (BMHV) and the Lapeyre-Triflo FURTIVA trileaflet mechanical heart valve (TMHV), in a transparent model of the human aorta under a physiological pulsatile right-hand helical flow condition. A typical systolic flow profile is applied in the pulse duplicator to generate a physiological pulsatile flow, which then passes an axial turbine blade structure to imitate the right-hand helical flow induced in the left ventricle. High-speed particle image velocimetry (PIV) measurements are used to map the flow evolution. A circular open-orifice nozzle inserted in the valve plane initially replaces the valve under investigation as a reference configuration, in order to understand the hemodynamic effects of the incoming helical flow structure on the flow evolution in the aortic arch. Flow field analysis of the open-orifice nozzle configuration shows that the helical flow effectively delays flow separation at the inner radius wall of the aortic arch. Comparison of the flow evolution for the different MHVs shows that the BMHV acts like a flow straightener, re-configuring the helical flow pattern into three parallel jets (two side-orifice jets and the central-orifice jet), while the TMHV preserves the helical flow structure and therefore prevents flow separation at the inner radius wall of the aortic arch. The TMHV thus offers better hemodynamic performance and reduces the pressure loss.

Keywords: flow separation, helical aortic flow, mechanical heart valve, particle image velocimetry

Procedia PDF Downloads 162
1720 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments

Authors: Rahul Paul, Peter Mctaggart, Luke Skinner

Abstract:

Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above-ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection and Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine learning are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It fits 3D catenary models for individual clusters to the captured LiDAR data points using a least squares method. An iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
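The catenary-fitting step named above can be sketched as a least squares problem. The conductor points below are synthetic (the real input is a clustered LiDAR point cloud), and the parameter values and noise level are assumptions chosen only to illustrate the fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def catenary(x, a, x0, c):
    """Height of a hanging conductor: z = c + a*cosh((x - x0)/a)."""
    return c + a * np.cosh((x - x0) / a)

# Synthetic span: 100 m between poles, a few metres of sag, LiDAR-like noise.
rng = np.random.default_rng(42)
x = np.linspace(0, 100, 200)
z_true = catenary(x, a=400.0, x0=50.0, c=-380.0)
z_noisy = z_true + rng.normal(0, 0.05, x.size)

# Least squares fit of the catenary parameters to the noisy points,
# analogous to the reconstruction step in the abstract.
popt, _ = curve_fit(catenary, x, z_noisy, p0=[300.0, 40.0, -280.0])
a_fit, x0_fit, c_fit = popt
```

In practice the initial guess `p0` would come from the pole-pair geometry identified by the clustering step.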

Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry

Procedia PDF Downloads 84
1719 Self-Healing Phenomenon Evaluation in Cementitious Matrix with Different Water/Cement Ratios and Crack Opening Age

Authors: V. G. Cappellesso, D. M. G. da Silva, J. A. Arndt, N. dos Santos Petry, A. B. Masuero, D. C. C. Dal Molin

Abstract:

Concrete elements are subject to cracking, which can be an access point for deleterious agents that trigger pathological manifestations, reducing the service life of these structures. Finding ways to minimize or eliminate the effects of these aggressive agents’ penetration, such as sealing the cracks, contributes to the durability of these structures. The cementitious self-healing phenomenon can be classified into two different processes. Autogenous self-healing is a natural process in which the cracks seal without the stimulation of external agents, that is, without different materials being added to the mixture; autonomous self-healing, on the other hand, depends on the insertion of a specific engineered material into the cement matrix in order to promote its recovery. This work aims to evaluate the autogenous self-healing of concretes produced with different water/cement ratios and exposed to wet/dry cycles, considering two ages of crack opening, 3 days and 28 days. The self-healing phenomenon was evaluated using two techniques: crack healing measurement using ultrasonic waves and image analysis performed with an optical microscope. Both methods made it possible to observe the self-healing of the cracks. For young crack-opening ages and lower water/cement ratios, the self-healing capacity is higher than for advanced crack-opening ages and higher water/cement ratios. Regardless of the crack-opening age, these concretes were found to stabilize the self-healing processes after 80 to 90 days.

Keywords: self-healing, autogenous, water/cement ratio, curing cycles, test methods

Procedia PDF Downloads 144
1718 Clinicians’ Experiences with IT Systems in a UK District General Hospital: A Qualitative Analysis

Authors: Sunny Deo, Eve Barnes, Peter Arnold-Smith

Abstract:

Introduction: Healthcare technology is a rapidly expanding field, with enthusiasts suggesting a revolution in the quality and efficiency of healthcare delivery based on better e-healthcare, including the move to paperless healthcare. The role and use of computers and programmes in healthcare have been increasing over the past 50 years. Despite this, there is no standardised method of assessing the quality of hardware and software used by frontline healthcare workers. Methods and subjects: Based on standard Patient Reported Outcome Measures, a questionnaire was devised with the aim of providing quantitative and qualitative data on clinicians’ perspectives of their hospital’s Information Technology (IT). The survey was distributed via the institution’s intranet to all contracted doctors, and its qualitative results were analysed. Qualitative opinions were grouped as positive, neutral, or negative and further sub-grouped into speed/usability, software/hardware, integration, IT staffing, clinical risk, and wellbeing. Analysis was undertaken by doctor seniority and by specialty. Results: There were 196 responses, with 51% from senior doctors (consultant grades) and the rest from junior grades; the largest group of respondents (52%) came from medicine specialties. Differences in the proportions of principal and sub-groups were noted by seniority and specialty. Negative themes were by far the commonest opinion type, occurring in almost two-thirds of responses (63%), while positive comments occurred in fewer than 1 in 10 (8%). Conclusions: This survey confirms strongly negative attitudes to the current state of electronic documentation and IT in a large single-centre cohort of hospital-based frontline physicians, after two decades of so-called progress towards a paperless healthcare system. Wider use of the survey would provide further insights and could help focus development and delivery on improving the quality and effectiveness of IT for clinicians and their patients.

Keywords: information technology, electronic patient records, digitisation, paperless healthcare

Procedia PDF Downloads 69
1717 Ecological-Economics Evaluation of Water Treatment Systems

Authors: Hwasuk Jung, Seoi Lee, Dongchoon Ryou, Pyungjong Yoo, Seokmo Lee

Abstract:

The Nakdong River, used as the drinking water source for the Busan metropolitan area, suffers from vulnerable water management because industrial areas are located along its upper reaches. Most citizens of Busan think that the water quality of the Nakdong River is poor, so they boil tap water or use home filters to drink it, which imposes unnecessary individual costs on them. Intake sources need to be diversified to reduce these costs and to strengthen the weak water source. Against this background, this study carried out environmental accounting of a Namgang Dam water treatment system compared to the Nakdong River water treatment system, using the emergy analysis method to support reasonable decision-making. Emergy analysis evaluates both the natural environment and human economic activities quantitatively in an equal unit of measure. The emergy transformity of Namgang Dam water was 1.16 times larger than that of Nakdong River water; Namgang Dam water shows a larger transformity owing to its better water quality. The emergy used in producing 1 m³ of tap water from the Namgang Dam system was 1.26 times larger than that of the Nakdong River system, the larger emergy input reflecting the construction cost of a new pipeline for taking in Namgang Dam water. If the Won used in producing 1 m³ of tap water from the Nakdong River system is taken as 1, the Namgang Dam system used 1.66; if the Em-won used is taken as 1, the Namgang Dam system used 1.26. The cost-benefit ratio in Em-won was therefore smaller than that in Won. When emergy analysis is used, which accounts for the benefits of the natural environment such as the good water quality of Namgang Dam, the Namgang Dam water treatment system could be a good alternative for diversifying the intake source.
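The comparison at the heart of the abstract reduces to simple ratio arithmetic. The sketch below uses only the normalised ratios reported in the text (Nakdong system = 1), to show why the Namgang system's cost premium shrinks once emergy accounting credits its better water quality:

```python
# Cost per m^3 of tap water, normalised so the Nakdong River system = 1.
# Values are the ratios reported in the abstract, not raw measurements.
won_cost = {"nakdong": 1.00, "namgang": 1.66}    # monetary (Won) basis
emwon_cost = {"nakdong": 1.00, "namgang": 1.26}  # emergy (Em-won) basis

# Relative premium of the Namgang system on each accounting basis.
won_premium = won_cost["namgang"] / won_cost["nakdong"] - 1      # 66% more Won
emwon_premium = emwon_cost["namgang"] / emwon_cost["nakdong"] - 1  # 26% more Em-won
print(f"monetary premium {won_premium:.0%}, emergy premium {emwon_premium:.0%}")
```

The gap between the two premiums is the environmental benefit (better source water quality) that a purely monetary comparison ignores.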

Keywords: emergy, emergy transformity, Em-won, water treatment system

Procedia PDF Downloads 284
1716 MAGE-A3 and PRAME Gene Expression and EGFR Mutation Status in Non-Small-Cell Lung Cancer

Authors: Renata Checiches, Thierry Coche, Nicolas F. Delahaye, Albert Linder, Fernando Ulloa Montoya, Olivier Gruselle, Karen Langfeld, An de Creus, Bart Spiessens, Vincent G. Brichard, Jamila Louahed, Frédéric F. Lehmann

Abstract:

Background: The RNA-expression levels of the cancer-testis antigens MAGE-A3 and PRAME were determined in resected tissue from patients with primary non-small-cell lung cancer (NSCLC) and related to clinical outcome. EGFR, KRAS and BRAF mutation status was determined in a subset to investigate associations with MAGE-A3 and PRAME expression. Methods: We conducted a single-centre, uncontrolled, retrospective study of 1260 tissue-bank samples from stage IA-III resected NSCLC. The prognostic value of antigen expression (qRT-PCR) was determined by hazard ratios and Kaplan-Meier curves. Results: Thirty-seven percent (314/844) of tumours expressed MAGE-A3, 66% (723/1092) expressed PRAME and 31% (239/839) expressed both. Respective frequencies in squamous-cell tumours and adenocarcinomas were 43%/30% for MAGE-A3 and 80%/44% for PRAME. No correlation with stage, tumour size or patient age was found. Overall, no prognostic value was identified for either antigen. A trend towards poorer overall survival was associated with MAGE-A3 in stage IIIB and with PRAME in stage IB. EGFR and KRAS mutations were found in 10.1% (28/311) and 33.8% (97/311) of tumours, respectively. EGFR (but not KRAS) mutation status was negatively associated with PRAME expression. Conclusion: No clear prognostic value for either PRAME or MAGE-A3 was observed in the overall population, although some observed trends may warrant further investigation.
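The Kaplan-Meier analysis named above can be sketched with a small, self-contained estimator. The survival times and event flags below are hypothetical, not the study's patient data; they only demonstrate the product-limit calculation:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    events[i] = 1 for a death at times[i], 0 for a censored observation.
    Returns a list of (time, survival probability) at each death time.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = censored = 0
        while i < len(order) and times[order[i]] == t:  # group ties at time t
            if events[order[i]]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            surv *= 1 - deaths / at_risk   # product-limit update
            curve.append((t, surv))
        at_risk -= deaths + censored
    return curve

# Hypothetical follow-up data: months to death (1) or censoring (0).
times = [5, 8, 8, 12, 15, 20, 22, 30]
events = [1, 1, 0, 1, 0, 1, 0, 1]
km = kaplan_meier(times, events)
```

Comparing such curves between antigen-positive and antigen-negative groups (e.g., with a log-rank test) is how a prognostic effect would be assessed.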

Keywords: MAGE A3, PRAME, cancer-testis gene, NSCLC, survival, EGFR

Procedia PDF Downloads 369
1715 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The application of magnetocardiography signals to detect cardiac electrical function is a technology developed in recent years. The magnetocardiography (MCG) signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). Extracting the MCG signal, which is buried in noise, is difficult, and this is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the total variation (TV) regularization method is proposed to denoise the MCG signal. The approach transforms the denoising problem into a minimization problem, and the majorization-minimization algorithm is applied to solve it iteratively. However, the traditional TV regularization method tends to cause a step (staircase) effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement consists of three parts. First, higher-order TV is applied to reduce the step effect, with the corresponding second-derivative matrix substituted for the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined from the peak positions detected by a detection window. Finally, adaptive constraint parameters are defined to eliminate noise while preserving the signal's peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
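The baseline method the paper improves on can be sketched compactly: 1-D TV denoising solved by majorization-minimization, where each iteration reweights the difference penalty and solves a linear system. The signal below is a synthetic piecewise-constant trace, not MCG data, and the dense-matrix solve is a simplification for small signals:

```python
import numpy as np

def tv_denoise_mm(y, lam=2.0, n_iter=50, eps=1e-8):
    """Minimise 0.5*||y - x||^2 + lam*||Dx||_1 by majorization-minimization."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # first-order difference matrix
    x = y.copy()
    for _ in range(n_iter):
        # Quadratic majorizer of |t| at the current iterate gives weights 1/|Dx|.
        w = 1.0 / (np.abs(D @ x) + eps)
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)           # closed-form minimiser of the majorizer
    return x

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(50), np.full(50, 2.0)])  # one sharp step
noisy = clean + rng.normal(0, 0.3, clean.size)
denoised = tv_denoise_mm(noisy, lam=2.0)
```

The paper's improvement replaces `D` with a second-derivative matrix whose active rows and weights are set adaptively from detected peak positions.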

Keywords: constraint parameters, derivative matrix, magnetocardiography, regularization term, total variation

Procedia PDF Downloads 140
1714 The Bayesian Premium Under Entropy Loss

Authors: Farouk Metiri, Halim Zeghdoudi, Mohamed Riad Remita

Abstract:

Credibility theory is an experience rating technique in actuarial science and one of the quantitative tools that allow insurers to adjust future premiums based on past experience. It is commonly used in automobile insurance, workers' compensation premiums, and IBNR (incurred but not reported) claims, where credibility theory can be used to estimate the claim size. In this study, we focus on a popular tool in credibility theory, the Bayesian premium estimator, considering the Lindley distribution as the claim distribution. We derive this estimator under the entropy loss, which is asymmetric, and the squared error loss, which is symmetric, with both informative and non-informative priors. In a purely Bayesian setting, the prior distribution represents the insurer's belief about the insured's risk level before the insured's data are collected at the end of the period. However, the explicit form of the Bayesian premium when the prior is not a member of the exponential family can be quite difficult to obtain, as it involves integrals that are not analytically solvable. The paper addresses this problem by deriving the estimator using the Lindley approximation, a numerical approximation method well suited to such problems: it treats the ratio of the integrals as a whole and produces a single numerical result. A simulation study using the Monte Carlo method is then performed to evaluate this estimator, and the mean squared error criterion is used to compare the Bayesian premium estimator under the above loss functions.
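The quantity being approximated is a ratio of integrals over the posterior. A Monte Carlo sketch under squared error loss (not the paper's Lindley approximation) makes this concrete; the claim data, the Gamma prior on the Lindley parameter, and the sample sizes below are all hypothetical choices for illustration:

```python
import math
import random

def lindley_pdf(x, theta):
    """Lindley density: f(x|theta) = theta^2/(theta+1) * (1+x) * exp(-theta*x)."""
    return theta ** 2 / (theta + 1) * (1 + x) * math.exp(-theta * x)

def lindley_mean(theta):
    """Mean of a Lindley(theta) claim: (theta+2) / (theta*(theta+1))."""
    return (theta + 2) / (theta * (theta + 1))

random.seed(7)
claims = [0.8, 1.2, 0.5, 2.0, 1.1]  # hypothetical observed claim sizes

# Bayesian premium under squared error loss = posterior mean of the claim mean,
# estimated by self-normalised importance sampling with draws from the prior.
num = den = 0.0
for _ in range(20000):
    theta = random.gammavariate(2.0, 1.0)   # assumed Gamma(2, 1) prior draw
    like = math.prod(lindley_pdf(x, theta) for x in claims)
    num += lindley_mean(theta) * like
    den += like
premium = num / den
```

The Lindley approximation in the paper replaces this sampling with a deterministic expansion of the same ratio of integrals.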

Keywords: Bayesian estimator, credibility theory, entropy loss, Monte Carlo simulation

Procedia PDF Downloads 318
1713 Particle Swarm Optimization Based Vibration Suppression of a Piezoelectric Actuator Using Adaptive Fuzzy Sliding Mode Controller

Authors: Jin-Siang Shaw, Patricia Moya Caceres, Sheng-Xiang Xu

Abstract:

This paper integrates the particle swarm optimization (PSO) method with an adaptive fuzzy sliding mode controller (AFSMC) to achieve vibration attenuation in a piezoelectric actuator subject to base excitation. The piezoelectric actuator is a complicated system made of ferroelectric materials, and its performance can be affected by a nonlinear hysteresis loop, unknown system parameters, and external disturbances. In this study, an adaptive fuzzy sliding mode controller is proposed for vibration control of the system: the fuzzy sliding mode controller is designed to handle the unknown parameters and external disturbances, while the adaptive algorithm fine-tunes the controller so that the error converges. The particle swarm optimization method is used to find the optimal controller parameters for the piezoelectric actuator. PSO starts with a population of random candidate solutions, called particles. The particles move through the search space with dynamically adjusted speed and direction that change according to their historical behavior, allowing them to converge quickly towards the best solutions for the problem at hand. In this paper, an initial set of controller parameters is applied to the piezoelectric actuator, which is subject to resonant base excitation with large-amplitude vibration; the resulting vibration suppression is about 50%. PSO is then applied to search for an optimal controller in the neighborhood of this initial controller, and the optimal fuzzy sliding mode controller found by PSO improves vibration attenuation to 97.8%. Finally, the adaptive version of the fuzzy sliding mode controller is adopted to improve vibration suppression further, with simulation verifying 99.98% vibration reduction. In other words, the vibration of the piezoelectric actuator subject to resonant base excitation can be almost completely eliminated using this PSO-based adaptive fuzzy sliding mode controller.
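The PSO search described above can be sketched in a few dozen lines. The objective here is a stand-in quadratic "vibration cost" with an assumed optimum; the real study evaluates AFSMC gains in a closed-loop simulation of the actuator:

```python
import random

def pso(cost, dim, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    """Minimise cost over R^dim with a standard global-best particle swarm."""
    random.seed(3)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_cost = [cost(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_cost[i])][:]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + pull towards personal best + pull towards global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < cost(gbest):
                    gbest = pos[i][:]
    return gbest

# Stand-in objective: distance of two controller gains from an assumed optimum
# at (1, -2); in the paper this would be a vibration-amplitude measure.
best = pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2, dim=2)
```

Seeding the swarm around the initial hand-tuned controller, as the paper does, restricts this search to the neighborhood of a known-feasible design.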

Keywords: adaptive fuzzy sliding mode controller, particle swarm optimization, piezoelectric actuator, vibration suppression

Procedia PDF Downloads 131
1712 Characterisation of Fractions Extracted from Sorghum Byproducts

Authors: Prima Luna, Afroditi Chatzifragkou, Dimitris Charalampopoulos

Abstract:

Sorghum byproducts, namely bran, stalk, and panicle, are examples of lignocellulosic biomass. These raw materials contain large amounts of polysaccharides (hemicelluloses and celluloses) together with lignins, which, if efficiently extracted, can be utilised for the development of a range of added-value products with potential applications in the agriculture and food packaging sectors. The aim of this study was to characterise fractions extracted from sorghum bran and stalk with regard to the physicochemical properties that could determine their applicability as food-packaging materials. A sequential alkaline extraction was applied for the isolation of cellulosic, hemicellulosic and lignin fractions from sorghum stalk and bran. Lignin content, phenolic content and antioxidant capacity were also investigated in the case of the lignin fraction. Thermal analysis using differential scanning calorimetry (DSC) and X-ray diffraction (XRD) revealed that the glass transition temperature (Tg) of the stalk cellulose fraction was ~78.33 °C, at an amorphous content of ~65% and a water content of ~5%. For hemicellulose, the Tg value of the stalk fraction was slightly lower than that of the bran fraction, at an amorphous content of ~54% and a lower water content (~2%). Hemicelluloses generally showed lower thermal stability than cellulose, probably due to their lack of crystallinity. Additionally, bran had a higher arabinose-to-xylose ratio (0.82) than the stalk, indicating its low crystallinity. Furthermore, the lignin fraction had a Tg value of ~93 °C at an amorphous content of ~11%. The stalk-derived lignin fraction contained more phenolic compounds (mainly p-coumaric and ferulic acid) and had a higher lignin content and antioxidant capacity than the bran-derived lignin fraction.

Keywords: alkaline extraction, bran, cellulose, hemicellulose, lignin, stalk

Procedia PDF Downloads 284
1711 Numerical Modeling of Geogrid Reinforced Soil Bed under Strip Footings Using Finite Element Analysis

Authors: Ahmed M. Gamal, Adel M. Belal, S. A. Elsoud

Abstract:

This article studies the effect of reinforcement inclusions (geogrids) on the bearing capacity of sand dunes under strip footings. An experimental physical model was built to study the effect of the depth of the first geogrid layer (u/B), the spacing between reinforcement layers (h/B), and the reinforcement extension relative to the footing dimension (L/B) on the mobilized bearing capacity. This paper presents numerical modeling using the commercial finite element package PLAXIS version 8.2 to simulate the laboratory physical model, studying the same parameters handled in the experimental work (u/B, L/B and h/B) for the purpose of validation. The soil, the geogrid, the interface element, and the boundary conditions are discussed together with a set of finite element results and their validation. The validated FEM was then used to study real materials and dimensions of strip foundations. Based on the experimental and numerical investigations, a significant increase in the bearing capacity of footings occurs with an appropriate location of the inclusions in sand. The optimum embedment depth of the first reinforcement layer is u/B = 0.25. The optimum spacing between successive reinforcement layers is h/B = 0.75. The optimum length of the reinforcement layers is L/B = 7.5. The optimum number of reinforcement layers is 4. The study showed a directly proportional relation between the number of reinforcement layers and the bearing capacity ratio (BCR), and an inversely proportional relation between the footing width and the BCR.

Keywords: reinforced soil, geogrid, sand dunes, bearing capacity

Procedia PDF Downloads 397
1710 Big Data Analytics and Public Policy: A Study in Rural India

Authors: Vasantha Gouri Prathapagiri

Abstract:

Innovations in the ICT sector facilitate a better quality of life for citizens across the globe. Countries that adopt new ICT techniques, such as big data analytics, find it easier to fulfil the needs of their citizens. Big data is characterised by its volume, variety, and velocity; analytics involves processing it in a cost-effective way in order to draw conclusions for useful application. Big data also draws on machine learning and artificial intelligence, leading to accurate data presentation useful for public policy making. Hence, using data analytics in public policy making is a sound way to move towards the all-round development of any country: data-driven insights can help a government take important strategic decisions regarding the socio-economic development of the country. Developed nations like the UK and the USA are already far ahead on the path of digitization with the support of big data analytics. India is a huge country currently undergoing massive digitization, realised through the Digital India Mission, with internet connections per household rising every year. This translates into a massive data set that has the potential to turn the public services delivery system into an effective service mechanism for Indian citizens; yet, compared to developed nations, this capacity is underutilized in India, particularly in the administrative system in rural areas. The present paper focuses on the need to adopt big data analytics in Indian rural administration and its contribution towards faster development of the country. The results of the research point to the need for increased awareness and serious capacity building, with regard to big data analytics and its utility, among government personnel working in rural development. Multiple public policies are framed and implemented for rural development, yet the results are not as effective as they should be; big data has a major role to play in this context, as it can assist in improving both policy making and implementation aimed at the all-round development of the country.

Keywords: Digital India Mission, public service delivery system, public policy, Indian administration

Procedia PDF Downloads 143
1709 Surface Nanostructure Developed by Ultrasonic Shot Peening and Its Effect on Low Cycle Fatigue Life of the IN718 Superalloy

Authors: Sanjeev Kumar, Vikas Kumar

Abstract:

Inconel 718 (IN718) is a high-strength nickel-based superalloy designed for high-temperature applications up to 650 °C. It is widely used in the gas turbines of jet engines and related aerospace applications because of its good mechanical properties and structural stability at elevated temperatures. Because of its good performance and excellent process capability, this alloy is used predominantly for aero-engine components such as compressor discs and compressor blades. The main precipitates that contribute to the high-temperature strength of IN718 are γʹ Ni₃(Al, Ti) and, mainly, γʹʹ (Ni₃Nb). Various processes have been used to modify the surface of components, such as laser shock peening (LSP), conventional shot peening (SP), and ultrasonic shot peening (USP), to induce compressive residual stress (CRS) and develop a fine-grained structure in the surface region. Producing a surface nanostructure by ultrasonic shot peening is a novel methodology of surface modification to improve the overall performance of structural components. A surface nanostructure was developed on the peak-aged IN718 superalloy using USP, and its effect on low cycle fatigue (LCF) life was studied. A nanostructure of ~49 to 73 nm was developed in the surface region of the alloy by USP. The gage section of the LCF samples was USPed for 5 minutes at a constant frequency of 20 kHz using a StressVoyager to modify the surface. Strain-controlled cyclic tests were performed on non-USPed and USPed samples at ±Δεt/2 from ±0.50% to ±1.0% at a strain rate (ė) of 1×10⁻³ s⁻¹ under fully reversed loading (R = ‒1) at room temperature. The fatigue life of the USPed specimens was found to exceed that of the non-USPed ones; at Δεt/2 = ±0.50%, the LCF life of the USPed specimen was more than twice that of the non-USPed specimen.

Keywords: IN718 superalloy, nanostructure, USP, LCF life

Procedia PDF Downloads 99
1708 Assessing the NYC's Single-Family Housing Typology for Urban Heat Vulnerability and Occupants’ Health Risk under the Climate Change Emergency

Authors: Eleni Stefania Kalapoda

Abstract:

Recurring heat waves due to the global climate change emergency pose continuous risks to human health and urban resources. Local and state decision-makers use Heat Vulnerability Indices (HVIs) to quantify and map the relative impact on human health in emergencies. These maps enable government officials to identify the highest-risk districts and to concentrate emergency planning efforts and available resources accordingly (e.g., to re-evaluate the location and number of heat-relief centers). Even though the framework for constructing an HVI is unique to each municipality, its accuracy in assessing heat risk is limited; to resolve this, varied housing-related metrics should be included. This paper quantifies and classifies NYC’s single-family detached housing typology within highly vulnerable NYC districts using detailed energy simulations and post-processing calculations. The results show that the variation in indoor heat risk depends significantly on a dwelling’s design and operation characteristics, with poorly ventilated dwellings being the most vulnerable. The study also confirms that when building-level determinants of exposure are excluded from the assessment, the HVI fails to capture important components of heat vulnerability. Lastly, the overall vulnerability ratio of the housing units was calculated to range from 0.11 to 1.6 indoor heat degrees, depending on ventilation and shading capacity, insulation level, and other building attributes.

Keywords: heat vulnerability index, energy efficiency, urban heat, resiliency to heat, climate adaptation, climate mitigation, building energy

Procedia PDF Downloads 62
1707 Reducing Waiting Time in Outpatient Services: Six Sigma and Technological Approach

Authors: Omkar More, Isha Saini, Gracy Mathai

Abstract:

Aim: To study whether there is any clinical correlation between pterygium and dry eye and to evaluate the status of the tear film in patients with pterygium. Methods: 100 eyes with pterygium were compared with 100 control eyes without pterygium. Patients between 20 and 70 years of age were included in the study. A detailed history was taken, and Schirmer’s test and tear break-up time (TBUT) were performed on all eyes to evaluate dry eye status. A Schirmer’s test < 10 mm and a TBUT < 10 seconds were considered abnormal. Results: The maximum number (52) of patients affected by dry eye in both groups were in the 31-40 years age group, showing age to be a statistically significant factor of association for both pterygium and dry eye (P < 0.01). Schirmer’s test was slightly reduced in patients with pterygium (18.73±5.69 mm). TBUT was significantly reduced in the case group (12.26±2.24 sec) and decreased most in the 51-60 years age group (13.00±2.77 sec) with pterygium, indicating tear film instability. On comparison of pterygia and controls with normal and abnormal tear films, the odds ratio was 1.14, showing the risk of dry eye in pterygium patients to be 1.14 times that of controls. Conclusion: Whether tear dysfunction is a precursor to pterygium growth or pterygium causes tear dysfunction is still not clear; research and clinical evidence, however, suggest that there is a relationship between the two. This study was therefore undertaken to investigate the correlation between pterygium and dry eye, comparing patients with pterygia against normals to evaluate their dryness status. A close relationship exists between ocular irritation symptoms and functional evidence of tear instability. Schirmer’s test and TBUT should routinely be used in the outpatient department to diagnose dry eye in patients with pterygium, and these patients should be promptly treated to prevent any sight-threatening complications.
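The odds ratio reported above comes from a 2x2 exposure-outcome table. The cell counts below are hypothetical, chosen only so that the formula reproduces a value near the reported OR of 1.14; the abstract does not give the underlying counts:

```python
def odds_ratio(exposed_cases, exposed_noncases, control_cases, control_noncases):
    """OR = (a/b) / (c/d) for a 2x2 table: exposure rows, outcome columns."""
    return (exposed_cases / exposed_noncases) / (control_cases / control_noncases)

# e.g. 40/100 pterygium eyes vs 37/100 control eyes with an abnormal tear film
# (illustrative counts, not the study's data).
or_value = odds_ratio(40, 60, 37, 63)
print(f"odds ratio = {or_value:.2f}")
```

An OR above 1 indicates higher odds of dry eye among pterygium patients; a confidence interval would be needed to judge whether the excess is statistically significant.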

Keywords: footfall, nursing assessment, quality improvement, six sigma

Procedia PDF Downloads 344