Search results for: single step anodization
1565 Clinical and Etiological Particularities of Infectious Uveitis in HIV+ and HIV- Patients in the Internal Medicine Department
Authors: N. Jait, M. Maamar, H. Khibri, H. Harmouche, N. Mouatssim, W. Ammouri, Z. Tazimezaelek, M. Adnaoui
Abstract:
Introduction: Uveitis is an intraocular inflammation involving the uvea, of heterogeneous etiology and presentation. The objective of our study is to describe the clinical and therapeutic characteristics of infectious uveitis in HIV+ and HIV- patients. Patients and Methods: This is a retrospective study conducted at the internal medicine department of CHU Ibn Sina in Rabat over a period of 12 years (2010–2021), collecting 42 cases of infectious uveitis. Results: 42 patients were identified. 34% (14 cases) had acquired immunosuppression (9 cases: 22% had HIV infection and 12% were on chemotherapy), and 66% were immunocompetent. The M/F sex ratio was 1.1. The average age was 39 years. Uveitis revealed HIV in a single case; 8/9 patients were already being followed, with an average viral load of 3.4 log and an average CD4 count of 356/mm³. The revealing functional signs were: ocular redness (27%), decreased visual acuity (63%), visual blurring (40%), ocular pain (18%), scotoma (13%), and headaches (4%). The site of uveitis was anterior (30%), intermediate (6%), posterior (32%), and pan-uveitis (32%); it was unilateral in 80% of patients and bilateral in 20%. The etiologies of uveitis in HIV+ patients were: 3 cases of CMV, 2 cases of toxoplasmosis, 1 case of tuberculosis, 1 case of HSV, 1 case of VZV, and 1 case of syphilis. The etiologies in immunocompetent patients were: tuberculosis (41%), toxoplasmosis (18%), syphilis (15%), CMV infection (4 cases: 10%), HSV infection (4 cases: 10%), lepromatous uveitis (1 case: 2%), VZV infection (1 case: 2%), a locoregional infectious cause such as dental abscess (1 case: 2%), and one case of borreliosis (3%). 50% of tuberculous uveitis was of the pan-uveitis type, and 75% of toxoplasmic uveitis was of the posterior type. Uveitis was associated with other pathologies in 2 seropositive cases (cerebral vasculitis, multifocal tuberculosis). A specific treatment was prescribed in all patients. The initial evolution was favorable in 67%, including 12% HIV+. 11% presented relapses at the same site, in uveitis of the toxoplasmic, tuberculous, and herpetic types. 47% presented complications, of whom 4 patients were HIV+: 3 retinal detachments, 7 retinal hemorrhages, and 6 cases of unilateral blindness (including 2 HIV+ patients). Conclusion: In our series, the etiologies of infectious uveitis differ between HIV+ and HIV- patients. HIV+ patients most often had toxoplasmosis and CMV, while HIV- patients mainly presented with tuberculosis and toxoplasmosis. The association between HIV and uveitis remains undetermined, but HIV infection was an independent risk factor for uveitis. Keywords: uveitis, HIV, immunosuppression, infection
Procedia PDF Downloads 92
1564 Impact Evaluation and Technical Efficiency in Ethiopia: Correcting for Selectivity Bias in Stochastic Frontier Analysis
Authors: Tefera Kebede Leyu
Abstract:
The purpose of this study was to estimate the impact of LIVES project participation on the level of technical efficiency of farm households in three regions of Ethiopia. We used household-level data gathered by IRLI between February and April 2014 for the year 2013 (retroactive). Data on 1,905 sample households (754 intervention and 1,151 control group) were analyzed using the STATA software package version 14. Efforts were made to combine stochastic frontier modeling with impact evaluation methodology using the Heckman (1979) two-stage model to deal with possible selectivity bias arising from unobservable characteristics in the stochastic frontier model. Results indicate that farmers in the two groups are not efficient and operate below their potential frontiers, i.e., there is potential to increase crop productivity through efficiency improvements in both groups. In addition, the empirical results revealed selection bias in both groups of farmers, confirming the justification for the use of the selection-bias-corrected stochastic frontier model. It was also found that intervention farmers achieved higher technical efficiency scores than the control group of farmers. Furthermore, the selectivity-bias-corrected model showed a different technical efficiency score for the intervention farmers, while it remained more or less the same for the control group farmers. However, the control group of farmers shows a higher dispersion, as measured by the coefficient of variation, compared to their intervention counterparts. Among the explanatory variables, the study found that farmer's age (a proxy for farm experience), land certification, frequency of visits to the improved seed center, farmer's education, and row planting are important contributing factors for participation decisions and hence the technical efficiency of farmers in the study areas. We recommend that policies targeting the design of development intervention programs in the agricultural sector focus more on providing farmers with on-farm visits by extension workers, provision of credit services, establishment of farmers' training centers, and adoption of modern farm technologies. Finally, we recommend further research on this kind of methodological framework using a panel data set to test whether technical efficiency starts to increase or decrease with the length of time that farmers participate in development programs. Keywords: impact evaluation, efficiency analysis and selection bias, stochastic frontier model, Heckman-two step
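For readers unfamiliar with the selection-correction step, the sketch below illustrates the Heckman (1979) two-step logic in Python: a probit participation equation yields an inverse Mills ratio that is added as a regressor in the second-stage production regression. It is a simplified stand-in for the STATA-based stochastic frontier estimation; all variable names and the synthetic data are assumptions.

```python
# Minimal sketch of the Heckman (1979) two-step selectivity correction applied to a
# production regression. Variable names are illustrative, not the LIVES survey fields;
# a full stochastic frontier would replace the second-stage OLS with ML frontier estimation.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "participates": rng.binomial(1, 0.4, 500),        # LIVES participation (0/1)
    "log_output":   rng.normal(7.0, 0.5, 500),        # log crop output
    "log_land":     rng.normal(0.5, 0.3, 500),
    "log_labor":    rng.normal(4.0, 0.4, 500),
    "age":          rng.normal(45, 10, 500),
    "education":    rng.normal(4, 2, 500),
})

# Step 1: probit participation equation -> inverse Mills ratio (IMR)
Z = sm.add_constant(df[["age", "education"]])
probit = sm.Probit(df["participates"], Z).fit(disp=False)
xb = Z @ probit.params
df["imr"] = norm.pdf(xb) / norm.cdf(xb)               # selection-correction term

# Step 2: outcome equation for participants, with the IMR added as a regressor
part = df[df["participates"] == 1]
X = sm.add_constant(part[["log_land", "log_labor", "imr"]])
outcome = sm.OLS(part["log_output"], X).fit()
print(outcome.summary())   # a significant IMR coefficient signals selection bias
```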
Procedia PDF Downloads 74
1563 Decision-Making Process Based on Game Theory in the Process of Urban Transformation
Authors: Cemil Akcay, Goksun Yerlikaya
Abstract:
Buildings are the living spaces of people and play an active role in every aspect of life in today's world. While some structures have survived from the early ages, most buildings that completed their lifetime have not been carried forward to the present day. Nowadays, buildings that do not meet the social, economic, and safety requirements of the age are returned to life through a transformation process. This transformation is called urban transformation. Urban transformation is the renewal of areas at risk of disaster, together with the technological infrastructure required by the structures. The transformation aims to prevent damage from earthquakes and other disasters by rebuilding buildings that are not earthquake-resistant and have completed their economic life. It is essential to decide on the issues related to conversion and transformation in places where most of the building stock must be transformed and which lie in the first-degree earthquake belt, such as Istanbul. In urban transformation, the property owners, the local authority, and the contractor must reach a deal at a common point. Considering that hundreds of thousands of property owners are sometimes involved in the transformation areas, it is evident how difficult it is to make the deal and decide. For the optimization of these decisions, the use of game theory is foreseen. The main problem addressed in this study is whether the urban transformation is carried out in place or the building or buildings are transferred to a different location. The urban transformation planned for the Istanbul University Cerrahpaşa Medical Faculty Campus, which involves many stakeholders, was used as the case to which game theory applications were applied. The decisions given on this real urban transformation project were analyzed, and the logical suitability of decisions taken without the use of game theory was also examined using game theory. In each step of this study, the many decision-makers are classified according to a specific logical sequence, and in the game trees that emerged from this classification, Nash equilibria were sought and optimum decisions were determined. All decisions taken for this project were subjected to two significantly differentiated comparisons using game theory and as decisions taken without the use of game theory, and according to the results, solutions for the decision phase of the urban transformation process are introduced. The game theory model was developed from the beginning to the end of the urban transformation process, particularly as a solution to the difficulty of making rational decisions in large-scale projects with many participants in the decision-making process. The use of such a decision-making mechanism can provide an optimum answer to the demands of the stakeholders. For the construction sector in today's world, it is also seen that game theory addresses the most critical issues of planning and making the right decisions in the years ahead. Keywords: urban transformation, game theory, decision making, multi-actor project
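As an illustration of the kind of optimization the abstract describes, the following sketch finds pure-strategy Nash equilibria in a toy two-player game between property owners and a contractor. The strategies and payoffs are hypothetical and are not taken from the Cerrahpaşa campus project.

```python
# Illustrative sketch: pure-strategy Nash equilibria in a small two-player game
# between property owners and the contractor. Strategies and payoffs are placeholders.
import numpy as np

owner_strategies = ["accept on-site rebuild", "demand relocation"]
contractor_strategies = ["rebuild in place", "build at new location"]

# owner_payoff[i, j] / contractor_payoff[i, j] for strategy pair (i, j)
owner_payoff = np.array([[6, 2],
                         [3, 5]])
contractor_payoff = np.array([[5, 1],
                              [2, 4]])

def pure_nash(a, b):
    """Return strategy pairs where neither player can gain by deviating alone."""
    equilibria = []
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            best_for_row = a[i, j] >= a[:, j].max()   # owner cannot improve
            best_for_col = b[i, j] >= b[i, :].max()   # contractor cannot improve
            if best_for_row and best_for_col:
                equilibria.append((i, j))
    return equilibria

for i, j in pure_nash(owner_payoff, contractor_payoff):
    print(f"Nash equilibrium: ({owner_strategies[i]}, {contractor_strategies[j]})")
```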
Procedia PDF Downloads 140
1562 Optimization Based Design of Decelerating Duct for Pumpjets
Authors: Mustafa Sengul, Enes Sahin, Sertac Arslan
Abstract:
Pumpjets are one of the marine propulsion systems frequently used in underwater vehicles nowadays. The reasons for the frequent use of the pumpjet as a propulsion system are that it has higher relative efficiency at high speeds and better cavitation and acoustic performance than its rivals. Pumpjets are composed of a rotor, stator, and duct, and there are two different types of pumpjet configurations depending on the desired hydrodynamic characteristic: with an accelerating duct or with a decelerating duct. A pumpjet with an accelerating duct is used on cargo ships, where it works at low speeds and high loading conditions. The working principle of this type of pumpjet is to maximize the thrust by reducing the pressure of the fluid through the duct and ejecting the fluid from the duct with high momentum. On the other hand, for decelerating ducted pumpjets, the main consideration is to prevent the occurrence of the cavitation phenomenon by increasing the pressure of the fluid around the rotor region. By postponing cavitation, acoustic noise naturally falls, so decelerating ducted systems are used on noise-sensitive vehicles where acoustic performance is vital. Therefore, duct design becomes a crucial step during pumpjet design. In this study, the aim is to optimize the duct geometry of a decelerating ducted pumpjet for a high-speed underwater vehicle by using proper optimization tools. The target output of this optimization process is a duct design that maximizes the fluid pressure around the rotor region to prevent cavitation and minimizes the drag force. There are two main optimization techniques that could be utilized for this process, which are parameter-based optimization and gradient-based optimization. While the parameter-based algorithm offers larger changes in the geometry of interest, which helps the user get close to the desired geometry, the gradient-based algorithm deals with minor local changes in the geometry. In parameter-based optimization, the geometry should be parameterized first. Then, by defining upper and lower limits for these parameters, the design space is created. Finally, with a proper optimization code and analysis, the optimum geometry is obtained from this design space. For this duct optimization study, a commercially coded parameter-based optimization algorithm is used. To parameterize the geometry, the duct is represented with B-spline curves and control points. These control points have limits on their x and y coordinates. By regarding these limits, the design space is generated. Keywords: pumpjet, decelerating duct design, optimization, underwater vehicles, cavitation, drag minimization
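A minimal sketch of the parameterization step described above is given below: the duct profile is represented by a B-spline through a handful of control points whose coordinate bounds span the design space. The control-point values and tolerances are illustrative assumptions, not the study's actual duct geometry.

```python
# Sketch of duct parameterization: the duct inner profile is a B-spline whose
# control-point radii, bounded above and below, define the design space explored
# by the optimizer. All numbers are illustrative placeholders.
import numpy as np
from scipy.interpolate import BSpline

degree = 3
# (x, y) control points of a notional duct profile (chord-wise position, radius)
ctrl = np.array([[0.00, 0.120],
                 [0.25, 0.115],
                 [0.50, 0.105],
                 [0.75, 0.100],
                 [1.00, 0.098]])
lower = ctrl[:, 1] - 0.01   # lower bound on each control-point radius
upper = ctrl[:, 1] + 0.01   # upper bound -> together they bound the design space

# clamped knot vector so the curve passes through the first and last control points
n = len(ctrl)
knots = np.concatenate(([0.0] * degree, np.linspace(0, 1, n - degree + 1), [1.0] * degree))
spline = BSpline(knots, ctrl, degree)

s = np.linspace(0, 1, 50)
profile = spline(s)          # sampled duct profile handed to the CFD / optimization loop
print(profile[:3], "design-space bounds:", lower.round(3), upper.round(3))
```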
Procedia PDF Downloads 207
1561 Integration Process and Analytic Interface of different Environmental Open Data Sets with Java/Oracle and R
Authors: Pavel H. Llamocca, Victoria Lopez
Abstract:
The main objective of our work is the comparative analysis of environmental data from Open Data bases belonging to different governments. This means that data from various different sources have to be integrated. Nowadays, many governments have the intention of publishing thousands of data sets for people and organizations to use. In this way, the quantity of applications based on Open Data is increasing. However, each government has its own procedures for publishing its data, and this causes a variety of data set formats because there are no international standards specifying the formats of the data sets from Open Data bases. Due to this variety of formats, we must build a data integration process that is able to put together all kinds of formats. There are some software tools developed in order to give support to the integration process, e.g., Data Tamer and Data Wrangler. The problem with these tools is that they need data scientist interaction as a final step of the integration process. In our case we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve an automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speeds, etc. For the past two years, the government of Madrid has been publishing its Open Data bases relating to environmental indicators in real time. In the same way, other governments have published Open Data sets relative to the environment (such as Andalucia or Bilbao). But all of those data sets have different formats, and our solution is able to integrate all of them; furthermore, it allows the user to perform and visualize analyses over the real-time data. Once the integration task is done, all the data from any government have the same format and the analysis process can be initiated in a computationally better way. So the tool presented in this work has two goals: 1. an integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). However, in order to open our software tool, as a second approach we also developed an implementation with the R language as a mature open-source technology. R is a really powerful open-source programming language that allows us to process and analyze a huge amount of data with high performance. There are also some R libraries for building a graphic interface, such as Shiny. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides an official real-time integrated data set about environmental data in Spain to any developer so that they can build their own applications. Keywords: open data, R language, data integration, environmental data
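A minimal sketch of the integration idea follows: each source gets its own loader that maps its native format onto a common schema, after which the data sets can be combined and analyzed uniformly. Column names, file formats, and units are assumptions for illustration, not the actual Madrid or Bilbao schemas.

```python
# Minimal sketch: two open data sources publishing the same environmental indicator in
# different formats are mapped onto one common schema. File and column names are hypothetical.
import pandas as pd

COMMON_COLUMNS = ["timestamp", "station", "pollutant", "value", "unit", "source"]

def load_madrid(path: str) -> pd.DataFrame:
    raw = pd.read_csv(path, sep=";")
    return pd.DataFrame({
        "timestamp": pd.to_datetime(raw["FECHA_HORA"], dayfirst=True),
        "station":   raw["ESTACION"].astype(str),
        "pollutant": raw["MAGNITUD"],
        "value":     raw["VALOR"].astype(float),
        "unit":      "ug/m3",
        "source":    "madrid",
    })[COMMON_COLUMNS]

def load_bilbao(path: str) -> pd.DataFrame:
    raw = pd.read_json(path)
    return pd.DataFrame({
        "timestamp": pd.to_datetime(raw["date"]),
        "station":   raw["station_id"].astype(str),
        "pollutant": raw["parameter"],
        "value":     raw["measurement"].astype(float),
        "unit":      raw["units"],
        "source":    "bilbao",
    })[COMMON_COLUMNS]

# After each source-specific loader, every data set shares one schema and can be
# concatenated and analyzed (or fed to the R/Shiny interface) uniformly, e.g.:
# integrated = pd.concat([load_madrid("madrid.csv"), load_bilbao("bilbao.json")])
```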
Procedia PDF Downloads 314
1560 Tryptophan and Its Derivative Oxidation by Heme-Dioxygenase Enzyme
Authors: Ali Bahri Lubis
Abstract:
Tryptophan oxidation by the heme-dioxygenase enzyme is an initial important step in the kynurenine pathway, which is implicated in several severe diseases such as Parkinson's disease, Huntington's disease, poliomyelitis, and cataract. It is crucial to comprehend the oxidation mechanism in the hope of finding decent treatments for the abovementioned diseases. The mechanism has been debatable, since no one has yet proven it conclusively. In this research we have attempted to prove the mechanistic steps of tryptophan oxidation via human indoleamine dioxygenase (h-IDO) using various substrates: L-tryptophan, L-tryptophan (indole-ring-2-13C), L-fully-labelled-13C-tryptophan, L-N-methyl-tryptophan, L-tryptophan, and 2-amino-3-(benzo(b)thiophene-3-yl) propanoic acid. All enzyme assay experiments were measured using a UV-Vis spectrophotometer, LC-MS, 1H-NMR, and HSQC. We also successfully synthesized the enzyme products as our controls in the NMR measurements. The results exhibited that the distinct substrates produced N-formyl kynurenine (NFK) and hydroxypyrrolloindoleamine carboxylate acid (HPIC) in different concentrations and isomers, correlated to the proposed mechanism of the reaction. Keywords: heme-dioxygenase enzyme, tryptophan oxidation, kynurenine pathway, n-formyl kynurenine
Procedia PDF Downloads 77
1559 Effect of Insulin versus Green Tea on the Parotid Gland of Streptozotocin Induced Diabetic Rats
Authors: H. El-Messiry, M. El-Zainy, D. Ghazy
Abstract:
Diabetes is a metabolic disease that results in a variety of oral health complications. Green tea is a natural antioxidant proved to have powerful effects against diabetes. The aim of this study was to compare between the effect of insulin and green tea on the Parotid gland of streptozotocin induced diabetic Albino rats by using light and transmission electron microscopy. Forty male Albino rats were divided into control group and diabetic groups. The diabetic group received a single injection of 40 mg/kg of streptozotocin intra-peritoneal under anesthesia and was further subdivided into three subgroups: The diabetic untreated subgroup which was untreated for two weeks, the insulin treated subgroup which has received insulin subcutaneously in a daily dose of 5 IU/kg body weight/day for two weeks and a green tea treated subgroup received a daily dose of 1 ml/ 100 gm body weight intragastrically for two weeks. Rats were terminated and parotid glands were dissected and processed for light and transmission electron microscopic examination. Histological examination of the diabetic untreated subgroup revealed acinar cells with pyknotic and hyperchromatic nuclei with cytoplasmic vacuolations. Ultrastructurally, acinar cells showed nuclear pleomorphism, dilated rough endoplasmic reticulum and swollen mitochondria with damaged cristae. Inflammatory cell infiltration was detected both histologically and ultrastructurally. Ducts showed signs of degeneration with loss of their normal outline and stagnated secretion within the lumen. However, insulin and green tea treated subgroups showed minimal degenerative damage and were almost similar to the control with minimal changes. Treatment of the parotid gland of the streptozotocin induced diabetic rats with GT was closely comparable to the traditional insulin therapy in reducing signs of histological and ultrastructural damage.Keywords: diabetes, green tea, insulin, parotid
Procedia PDF Downloads 176
1558 Downtime Estimation of Building Structures Using Fuzzy Logic
Authors: M. De Iuliis, O. Kammouh, G. P. Cimellaro, S. Tesfamariam
Abstract:
Community resilience has gained significant attention due to the recent unexpected natural and man-made disasters. Resilience is the process of maintaining livable conditions in the event of interruptions in normally available services. Estimating the resilience of systems, ranging from individuals to communities, is a formidable task due to the complexity involved in the process. The most challenging parameter involved in the resilience assessment is the 'downtime'. Downtime is the time needed for a system to recover its services following a disaster event. Estimating the exact downtime of a system requires a lot of inputs and resources that are not always obtainable. The uncertainties in the downtime estimation are usually handled using probabilistic methods, which necessitates acquiring large historical data. The estimation process also involves ignorance, imprecision, vagueness, and subjective judgment. In this paper, a fuzzy-based approach to estimate the downtime of building structures following earthquake events is proposed. Fuzzy logic can integrate descriptive (linguistic) knowledge and numerical data into the fuzzy system. This ability allows the use of walk-down surveys, which collect data in a linguistic or a numerical form. The use of fuzzy logic permits a fast and economical estimation of parameters that involve uncertainties. The first step of the method is to determine the building's vulnerability. A rapid visual screening is designed to acquire information about the analyzed building (e.g., year of construction, structural system, site seismicity, etc.). Then, fuzzy logic is implemented using a hierarchical scheme to determine the building damageability, which is the main ingredient to estimate the downtime. Generally, the downtime can be divided into three main components: downtime due to the actual damage (DT1); downtime caused by rational and irrational delays (DT2); and downtime due to utilities disruption (DT3). In this work, DT1 is computed by relating the building damageability results obtained from the visual screening to some already-defined component repair times available in the literature. DT2 and DT3 are estimated using the REDi guidelines. The downtime of the building is finally obtained by combining the three components. The proposed method also allows identifying the downtime corresponding to each of the three recovery states: re-occupancy, functional recovery, and full recovery. Future work is aimed at improving the current methodology to pass from the downtime to the resilience of buildings. This will provide a simple tool that can be used by the authorities for decision making. Keywords: resilience, restoration, downtime, community resilience, fuzzy logic, recovery, damage, built environment
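The sketch below illustrates, in simplified form, how a fuzzy damageability score could be mapped to the DT1 component and combined with DT2 and DT3. The membership breakpoints and repair times are placeholder assumptions, not values from the proposed methodology.

```python
# Rough sketch of the fuzzy step: a damageability score from the walk-down survey is
# mapped to a repair-time estimate (DT1) through triangular membership functions and a
# small rule base, then combined with delay (DT2) and utility (DT3) components.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def dt1_from_damageability(d):
    """d in [0, 1]; returns downtime (days) due to actual damage, via weighted rules."""
    mu = {"slight":   tri(d, 0.0, 0.1, 0.4),
          "moderate": tri(d, 0.2, 0.5, 0.8),
          "severe":   tri(d, 0.6, 0.9, 1.0)}
    repair_days = {"slight": 15, "moderate": 90, "severe": 300}   # rule consequents
    weights = np.array(list(mu.values()))
    days = np.array([repair_days[k] for k in mu])
    return float((weights * days).sum() / weights.sum())          # defuzzification

damageability = 0.55                      # output of the hierarchical fuzzy scheme
DT1 = dt1_from_damageability(damageability)
DT2 = 60                                  # rational/irrational delays (placeholder)
DT3 = 10                                  # utilities disruption (placeholder)
print(f"DT1 = {DT1:.0f} days, total downtime ~ {DT1 + DT2 + DT3:.0f} days")
```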
Procedia PDF Downloads 158
1557 Modelling the Impact of Installation of Heat Cost Allocators in District Heating Systems Using Machine Learning
Authors: Danica Maljkovic, Igor Balen, Bojana Dalbelo Basic
Abstract:
Following the regulation of the EU Directive on Energy Efficiency, specifically Article 9, individual metering in district heating systems has to be introduced by the end of 2016. These provisions have been implemented in the member states' legal frameworks; Croatia is one of these states. The directive allows installation of both heat metering devices and heat cost allocators. Mainly due to bad communication and PR, a false image was created among the general public that heat cost allocators are devices that save energy. As this notion is wrong, the aim of this work is to develop a model that would precisely express the influence of the installation of heat cost allocators on potential energy savings in each unit within multifamily buildings. At the same time, in recent years, machine learning has gained wider application in various fields, as it has proven to give good results in cases where large amounts of data are to be processed with the aim of recognizing patterns and the correlation of each relevant parameter, as well as in cases where the problem is too complex for human intelligence to solve. A special machine learning method, the decision tree method, has proven an accuracy of over 92% in predicting general building consumption. In this paper, machine learning algorithms will be used to isolate the sole impact of the installation of heat cost allocators on a single building in multifamily houses connected to district heating systems. Special emphasis will be given to regression analysis, logistic regression, support vector machines, decision trees, and the random forest method. Keywords: district heating, heat cost allocator, energy efficiency, machine learning, decision tree model, regression analysis, logistic regression, support vector machines, decision trees and random forest method
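As a hedged illustration of the modelling approach, the sketch below trains a random forest on synthetic building data and isolates the heat-cost-allocator effect by toggling that single feature. The feature names and the data-generating assumptions are invented for the example, not drawn from the Croatian data set.

```python
# Sketch: a tree-based regressor trained on building/billing features to predict heat
# consumption, so the isolated effect of heat cost allocator (HCA) installation can be
# read off by toggling that feature. Data and feature names are illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "floor_area_m2": rng.uniform(40, 120, n),
    "degree_days":   rng.uniform(2000, 3200, n),
    "building_year": rng.integers(1950, 2010, n),
    "has_allocator": rng.integers(0, 2, n),
})
df["heat_kwh"] = (110 * df["floor_area_m2"] * df["degree_days"] / 2600
                  * (1 - 0.1 * df["has_allocator"]) + rng.normal(0, 500, n))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="heat_kwh"), df["heat_kwh"], random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Counterfactual comparison: the same units with and without an allocator
with_hca, without_hca = X_test.copy(), X_test.copy()
with_hca["has_allocator"], without_hca["has_allocator"] = 1, 0
effect = model.predict(without_hca).mean() - model.predict(with_hca).mean()
print(f"R^2 = {model.score(X_test, y_test):.2f}, estimated HCA effect ~ {effect:.0f} kWh")
```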
Procedia PDF Downloads 249
1556 Research on Evaluation of Renewable Energy Technology Innovation Strategy Based on PMC Index Model
Abstract:
Renewable energy technology innovation is an important way to realize the energy transformation. Our government has issued a series of policies to guide and support the development of renewable energy. The implementation of these policies will affect the further development, utilization, and technological innovation of renewable energy. In this context, it is of great significance to systematically sort out and evaluate renewable energy technology innovation policy in order to improve the existing policy system. Taking the 190 renewable energy technology innovation policies issued during 2005-2021 as a sample, this study uses text mining and content analysis methods, from the perspectives of policy-issuing departments and policy keywords, to analyze the current situation of the policies, and conducts a semantic network analysis to identify the core issuing departments and core policy topic words. A PMC (Policy Modeling Consistency) index model is built to quantitatively evaluate the selected policies, to analyze the overall pros and cons of each policy through its PMC index, and, through the PMC values of the model's secondary indices, to reflect the performance of each dimension of the policies issued by the core departments and related to the core topic words. The research results show that renewable energy technology innovation policies focus on synergy between multiple departments, while the distribution of issuers is uneven in terms of promulgation time; policies related to different topics have their own emphasis in terms of policy types, fields, functions, and support measures, but they still need to be improved, for example regarding the lack of policy forecasting and supervision functions, the lack of attention to product promotion, and the relatively limited support measures. Finally, this research puts forward policy optimization suggestions in terms of promoting joint policy release, strengthening policy coherence and timeliness, enhancing the comprehensiveness of policy functions, and enriching incentive measures for renewable energy technology innovation. Keywords: renewable energy technology innovation, content analysis, policy evaluation, PMC index model
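A reduced sketch of the PMC index arithmetic is given below: secondary variables are scored 0/1, each primary variable takes the mean of its secondary variables, and the PMC index sums the primary scores. The variable set shown is a hypothetical subset, not the full scheme used in the study.

```python
# Simplified PMC index calculation: secondary variables scored 0/1, primary variables
# averaged, PMC index = sum of primary scores. Variable names are illustrative only.
primary_variables = {
    "policy_nature":     {"forecasting": 0, "supervision": 0, "guidance": 1, "support": 1},
    "policy_function":   {"tax_incentive": 1, "subsidy": 1, "product_promotion": 0},
    "policy_field":      {"wind": 1, "solar": 1, "biomass": 0, "hydrogen": 0},
    "policy_timeliness": {"long_term": 1, "short_term": 1},
}

primary_scores = {name: sum(sub.values()) / len(sub)
                  for name, sub in primary_variables.items()}
pmc_index = sum(primary_scores.values())

print(primary_scores)
print(f"PMC index = {pmc_index:.2f} out of {len(primary_variables)}")
# The gap to the maximum (the depression index) highlights which dimensions, e.g. the
# missing forecasting/supervision functions, pull a given policy's score down.
```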
Procedia PDF Downloads 64
1555 Identification and Origins of Multiple Personality: A Criterion from Wiggins
Authors: Brittany L. Kang
Abstract:
One familiar theory of the origin of multiple personalities focuses on how symptoms of trauma or abuse are central causes, as seen in paradigmatic examples of the condition. The theory states that multiple personalities constitute a congenital condition, as babies all exhibit multiplicity, and that generally alters only remain separated due to trauma. In more typical cases, the alters converge and become a single identity; only in cases of trauma, according to this account, do the alters remain separated. This theory is misleading in many aspects, the most prominent being that not all multiple personality patients are victims of child abuse or trauma, nor are all cases of multiple personality observed in early childhood. The use of this criterion also causes clinical problems, including an inability to identify multiple personalities through the variety of symptoms and traits seen across observed cases. These issues present a need for revision in the currently applied criterion in order to separate the notion of child abuse and to be able to better understand the origins of multiple personalities itself. Identifying multiplicity through the application of identity theories will improve the current criterion, offering a bridge between identifying existing cases and understanding their origins. We begin by applying arguments from Wiggins, who held that each personality within a multiple was not a whole individual, but rather characters who switch off. Wiggins’ theory is supported by observational evidence of how such characters are differentiated. Alters of older ages are seen to require different prescription lens, in addition to having different handwriting. The alters may also display drastically varying styles of clothing, preferences in food, their gender, sexuality, religious beliefs and more. The definitions of terms such as 'personality' or 'persons' also become more distinguished, leading to greater understanding of who is exactly able to be classified as a patient of multiple personalities. While a more common meaning of personality is a designation of specific characteristics which account for the entirety of a person, this paper argues from Wiggins’ theory that each 'personality' is in fact only partial. Clarification of the concept in question will allow for more successful future clinical applications.Keywords: identification, multiple personalities, origin, Wiggins' theory
Procedia PDF Downloads 241
1554 Lumbar Punctures: Re-Audit of Procedure Documentation Following the Introduction of a Standardised Procedure Checklist
Authors: Hayley Lawrence, Nabi Shah, Sarah Dyer
Abstract:
Aims: Lumbar punctures are a common bedside procedure performed in acute medicine. Published guidance exists on the standardised documentation of invasive procedures in order to reduce the risk of complications. The audit aim was to assess current standards of documentation in accordance with both the GMC and the National Standards for Invasive Procedures guidelines. A second cycle was conducted after introducing a standardised sticker created using current guidelines. This would assess whether the sticker improved documentation, aiming for 100% standard in each step of the procedure. Methods: An initial prospective audit of current practice was conducted over a 3-month period. Patients were identified by their presenting complaints and by colleagues assessing acute medical patients. Initial findings were presented locally, and a further prospective audit was conducted following the implementation of a standardised sticker. Results: 19 lumbar punctures were included in the first cycle and 13 procedures in the second. Pre-procedure documentation was collected for each cycle, whereby documentation of ‘Indication’ improved from 5.3% to 84.6%, ‘Consent’ from 84.2% to 100%, ‘Coagulopathy’ from 0% to 61.5%, ‘Drug Chart checked’ from 0% to 100%, ‘Position of patient’ from 26.3% to 100% and use of ‘Aseptic Technique’ from 83.3% to 100% from the first to the second cycle respectively. ‘Level of Doctor’ and ‘Supervision’ decreased from 53% to 31% and 53% to 46%, respectively, in the second cycle. Documentation of the procedure itself also demonstrated improvements, with ‘Level of Insertion’ 15.8% to 100%, ‘Name of Antiseptic Used’ 11.1% to 69.2%, ‘Local Anaesthetic Used’ 26.3% to 53.8%, ‘Needle Gauge’ 42.1% to 76.9%, ‘Number of Attempts’ 78.9% to 100% and ‘Traumatic/Atraumatic’ procedure 26.3% to 92.3%, respectively. A similar number of opening pressures were documented in each cycle at 57.9% and 53.8%, respectively, but its documentation was deemed ‘Not Applicable’ in a higher number of patients in the second cycle. Post-procedure documentation improved, with ‘Number of Samples obtained’ increasing from 52.6% to 92.3% and documentation of ‘Immediate Complications’ increasing from 78.9% to 100%. ‘Dressing Applied’ was poorly documented in the first cycle at 16.7%. This was not included on the standardised sticker, resulting in 0% documentation in the second cycle. Documentation of Clinicians’ Name and Bleep reduced from 63.2% to 15.4%, but when the name only was analysed, this increased to 84.6%. Conclusions: Standardised stickers for lumbar punctures do improve documentation and hence should result in improved patient safety. There is still room for improvement to reach 100% standard in each area, especially with respect to the clinician’s name and contact details being documented. Final adjustments will be made to the sticker before being included in a lumbar puncture kit, which will be made readily available in the acute medical wards. Future audits could be extended to include other common bedside procedures performed in acute medicine to ensure documentation of all these procedures reaches 100% standard.Keywords: invasive procedure, lumbar puncture, medical record keeping, procedure checklist, procedure documentation, standardised documentation
Procedia PDF Downloads 101
1553 Adsorption of Chlorinated Pesticides in Drinking Water by Carbon Nanotubes
Authors: Hacer Sule Gonul, Vedat Uyak
Abstract:
Intensive use of pesticides in agricultural activity causes these compounds to be carried into water sources with surface flow. Especially after the 1970s, a number of limitations were imposed on the use of chlorinated pesticides that have carcinogenic risk potential, and regulatory limits have been established. The discharge of these chlorinated pesticides into water resources, their transport in the water and land environment, and their accumulation in the human body through the food chain raise serious health concerns. Carbon nanotubes (CNTs) have attracted considerable attention because of their excellent mechanical, electrical, and environmental characteristics. Due to the highly hydrophobic surfaces of CNT particles, these nanoparticles play a critical role in the removal of water contaminants such as natural organic matter, pesticides, and phenolic compounds in water sources. Health concerns associated with chlorinated pesticides require the removal of such contaminants from the aquatic environment. Although the use of aldrin and atrazine has been restricted in our country, the illegal entry and widespread use of such chemicals in agricultural areas cause increases in the concentrations of these chemicals in the water supply. In this study, the chlorinated pesticide compounds aldrin and atrazine were targeted for removal from drinking water with the carbon nanotube adsorption method. Within this study, two different types of CNT were used: single-wall (SWCNT) and multi-wall (MWCNT) carbon nanotubes. Within the scope of the adsorption isotherm work, the parameters affecting the adsorption of chlorinated pesticides in water were considered to be pH, contact time, CNT type, CNT dose, and initial concentration of pesticides. As a result, under neutral pH conditions, the adsorption capacities obtained with MWCNT for atrazine and aldrin were determined as 2.24 µg/mg and 3.84 µg/mg, respectively. On the other hand, the adsorption capacities determined with SWCNT for aldrin and atrazine were identified as 3.91 µg/mg and 3.92 µg/mg, respectively. Overall, SWCNT particles emerged as providing the superior removal performance for each type of pesticide. Keywords: pesticide, drinking water, carbon nanotube, adsorption
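As an illustration of the isotherm work mentioned above, the sketch below fits a Freundlich isotherm to batch adsorption data with SciPy. The equilibrium data points are made-up placeholders, not the measured CNT data.

```python
# Hedged sketch of fitting a Freundlich isotherm, q = K * Ce^(1/n), to batch adsorption
# data, the kind of analysis behind reported adsorption capacities. Data are placeholders.
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([5, 10, 20, 40, 80], dtype=float)   # equilibrium concentration (ug/L)
qe = np.array([0.9, 1.4, 2.0, 2.8, 3.8])          # adsorbed amount (ug/mg CNT)

def freundlich(c, K, n_inv):
    return K * c ** n_inv

(K, n_inv), _ = curve_fit(freundlich, Ce, qe, p0=(0.5, 0.5))
print(f"Freundlich K = {K:.3f}, 1/n = {n_inv:.3f}")
# The same routine with q = qmax*b*Ce/(1 + b*Ce) would fit a Langmuir isotherm instead;
# comparing the two fits indicates which model better describes adsorption on the CNTs.
```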
Procedia PDF Downloads 168
1552 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators
Authors: M. A. Okezue, K. L. Clase, S. R. Byrn
Abstract:
The requirement for maintaining data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human errors. Quality control laboratories located in low-income economies may face some barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc Sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of 0.1 M Sodium Edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, different formulae were input into two spreadsheets to automate calculations. Further checks were created within the automated system to ensure the validity of replicate analysis in titrimetric procedures. Validations were conducted using five data sets of manually computed assay results. The acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at 95% Confidence Interval) were obtained from Student's t-test evaluation of the mean values for manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and principles of data integrity were enhanced by the use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets. Human errors were minimized in calculations when procedures were automated in quality control laboratories. The assay procedure for the formulation was achieved in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models. Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets
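A minimal sketch of the two automated calculation steps is shown below: EDTA standardization against a zinc reference and the percent-of-label-claim assay. The formula weights, stoichiometric simplifications, input values, and acceptance window are assumptions for illustration, not the USP monograph values.

```python
# Illustrative automation of the two calculation steps mirrored by the validated
# spreadsheets: (1) EDTA standardization, (2) complexometric zinc assay. All numbers
# below are assumed example inputs, not monograph or study values.
ZN_ATOMIC_MASS = 65.38          # g/mol

def edta_molarity(zn_ref_mass_mg: float, titrant_volume_ml: float) -> float:
    """EDTA molarity from a zinc reference standard (1:1 Zn:EDTA complexation)."""
    mmol_zn = zn_ref_mass_mg / ZN_ATOMIC_MASS
    return mmol_zn / titrant_volume_ml

def percent_label_claim(titrant_volume_ml: float, edta_m: float,
                        sample_mass_mg: float, avg_tablet_mass_mg: float,
                        label_claim_mg_zn: float) -> float:
    """Assay result as % of the labelled zinc content per tablet."""
    mg_zn_in_sample = titrant_volume_ml * edta_m * ZN_ATOMIC_MASS
    mg_zn_per_tablet = mg_zn_in_sample * avg_tablet_mass_mg / sample_mass_mg
    return 100.0 * mg_zn_per_tablet / label_claim_mg_zn

M = edta_molarity(zn_ref_mass_mg=130.8, titrant_volume_ml=20.05)
result = percent_label_claim(3.10, M, sample_mass_mg=410.0,
                             avg_tablet_mass_mg=400.0, label_claim_mg_zn=20.0)
print(f"EDTA ~ {M:.4f} M, assay ~ {result:.1f}% of label claim")
assert 90.0 <= result <= 110.0, "outside the assumed acceptance window"
```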
Procedia PDF Downloads 168
1551 Physical Activity and Sport Research with People with Impairments: Oppression–Empowerment Continuum
Authors: Gyozo Molnar, Nancy Spencer-Cavaliere
Abstract:
Research in the area of physical activity and sport, while becoming multidisciplinary, is still dominated by post-positivist approaches that have the tendency to position the researcher as an expert and the participant as subordinate thereby perpetuating an unequal balance of power. Despite physical activity’s and sport’s universal appeal, their historic practices have excluded particular groups of people who assumed lesser forms of human capital. Adapted physical activity (APA) is a field that has responded to those segregations with specific application and relevance to people with impairments. Nevertheless, to date, similar to physical activity and sport, research in APA is still dominated by post-positivist epistemology. Stemming from this, there is gradually growing criticism within the field related to the abundance of research ‘on’ people with impairments and lack of research ‘with’ and ‘by’ people with impairments. Furthermore, research questions in the field are most often pursued from a single axis of analysis and constructed by non-disabled researchers. Concurrently, while calls for interdisciplinary approaches to understanding disability are growing in popularity, there is also a clear need to take an intersectionality-informed research methodology to understanding physical activity and sport and power (im)balances therein. In other words, impairment needs to be considered in conjunction with other socially and politically constructed and historically embedded differences such as gender, race, class, etc. when analyzing physical activity and sport experiences for people with impairments. Moreover, it is reasonable to argue that non-disabled researchers must recognize and theorize ableism in its complicated intersectional manifestation to show the structural constraints that disabled scholars face in the field. Consequently, this presentation will offer an alternative approach that acknowledges and prioritizes the perspectives and experiences of people with impairments to expand the field of APA. As such, the importance of broadening epistemologies in APA and prioritizing an appreciation for multiple bits of knowledge of people with impairments through intersections of social locations (e.g., gender, race, class) will be considered.Keywords: adapted physical activity, disability, intersectionality, post-positivist, power imbalances
Procedia PDF Downloads 235
1550 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method
Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry
Abstract:
The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area and thus it plays an important role in numerous specifications such as durability, comfort, crash, etc. During the development of new vehicle projects in Renault, durability validation is always the main focus while deployment of comfort comes later in the project. Therefore, sometimes design choices have to be reconsidered because of the natural incompatibility between these two specifications. Besides, robustness is also an important point of concern as it is related to manufacturing costs as well as the performance after the ageing of components like shock absorbers. In this paper an approach is proposed aiming to realize a multi-objective optimization between chassis endurance and comfort while taking the random factors into consideration. The adaptive-sparse polynomial chaos expansion method (PCE) with Chebyshev polynomial series has been applied to predict responses’ uncertainty intervals of a system according to its uncertain-but-bounded parameters. The approach can be divided into three steps. First an initial design of experiments is realized to build the response surfaces which represent statistically a black-box system. Secondly within several iterations an optimum set is proposed and validated which will form a Pareto front. At the same time the robustness of each response, served as additional objectives, is calculated from the pre-defined parameter intervals and the response surfaces obtained in the first step. Finally an inverse strategy is carried out to determine the parameters’ tolerance combination with a maximally acceptable degradation of the responses in terms of manufacturing costs. A quarter car model has been tested as an example by applying the road excitations from the actual road measurements for both endurance and comfort calculations. One indicator based on the Basquin’s law is defined to compare the global chassis durability of different parameter settings. Another indicator related to comfort is obtained from the vertical acceleration of the sprung mass. An optimum set with best robustness has been finally obtained and the reference tests prove a good robustness prediction of Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational costs for a complex system.Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design
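The sketch below shows the basic Chebyshev surrogate idea on a single uncertain-but-bounded parameter: sample at Chebyshev nodes, fit a Chebyshev expansion, and read response bounds off the surrogate. The response function and parameter values are stand-ins, not the Renault quarter-car model.

```python
# Minimal sketch of the Chebyshev polynomial chaos idea: an uncertain-but-bounded
# parameter is mapped to [-1, 1], the response is expanded on Chebyshev polynomials,
# and the surrogate gives cheap bounds on the response interval. Placeholder model.
import numpy as np
from numpy.polynomial import chebyshev as C

k_nom, k_tol = 20000.0, 0.15          # nominal spring rate and +/-15% tolerance

def response(k):                       # stand-in for the expensive quarter-car run
    return 1.0 / np.sqrt(k) + 1e-7 * k

# Collocation at Chebyshev nodes of the normalized variable xi in [-1, 1]
order = 6
xi = np.cos((2 * np.arange(order + 1) + 1) * np.pi / (2 * (order + 1)))
k_samples = k_nom * (1.0 + k_tol * xi)
coeffs = C.chebfit(xi, response(k_samples), order)   # surrogate (PCE) coefficients

xi_dense = np.linspace(-1, 1, 201)
surrogate = C.chebval(xi_dense, coeffs)
print(f"response interval ~ [{surrogate.min():.5f}, {surrogate.max():.5f}]")
# In the multi-objective loop, such intervals for the durability and comfort responses
# serve as robustness objectives alongside the nominal performance values.
```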
Procedia PDF Downloads 151
1549 Neuroprotective Effect of Tangeretin against Potassium Dichromate-Induced Acute Brain Injury via Modulating AKT/Nrf2 Signaling Pathway in Rats
Authors: Ahmed A. Sedik, Doaa Mahmoud Shuaib
Abstract:
Brain injury is a cause of disability and death worldwide. Potassium dichromate (PD) is an environmental contaminant widely recognized as teratogenic, carcinogenic, and mutagenic towards animals and humans. The aim of the present study was to investigate the possible neuroprotective effects of tangeretin (TNG) on PD-induced brain injury in rats. Forty male adult Wistar rats were randomly and blindly allocated into four groups (8 rats/group). The first group received saline intranasally (i.n.). The second group received a single dose of PD (2 mg/kg, i.n.). The third group received TNG (50 mg/kg; orally) for 14 days, followed by i.n. administration of PD on the last day of the experiment. The fourth group received TNG (100 mg/kg; orally) for 14 days, followed by i.n. administration of PD on the last day of the experiment. Eighteen hours after the final treatment, behavioral parameters, neuro-biochemical indices, FTIR analysis, and histopathological studies were evaluated. Results of the present study revealed that rats intoxicated with PD showed promoted oxidative stress and inflammation, via an increase in MDA and a decrease in the Nrf2 signaling pathway and GSH levels, with an increase in the brain contents of TNF-α, IL-10, and NF-κB and reduced AKT levels in brain homogenates. Treatment with TNG (100 mg/kg; orally) ameliorated behavioral and cholinergic activities and oxidative stress, decreased the elevated levels of the pro-inflammatory mediators TNF-α, IL-10, and NF-κB, elevated the AKT pathway, and corrected the FTIR spectra, with a decrease in the brain content of chromium residues detected by atomic absorption spectrometry. Also, TNG administration restored the morphological changes, such as degenerated neurons and necrosis, associated with PD intoxication. Additionally, TNG decreased Caspase-3 expression in the brain of PD rats. TNG plays a crucial role in the AKT/Nrf2 pathway, which is responsible for its antioxidant and anti-inflammatory effects and its action on the apoptotic pathway against PD-induced brain injury in rats. Keywords: tangeretin, potassium dichromate, brain injury, AKT/Nrf2 signaling pathway, FTIR, atomic absorption spectrometry
Procedia PDF Downloads 102
1548 Metal Layer Based Vertical Hall Device in a Complementary Metal Oxide Semiconductor Process
Authors: Se-Mi Lim, Won-Jae Jung, Jin-Sup Kim, Jun-Seok Park, Hyung-Il Chae
Abstract:
This paper presents a current-mode vertical Hall device (VHD) structure using metal layers in a CMOS process. The proposed metal layer based vertical Hall device (MLVHD) utilizes the vertical connection among metal layers (from M1 to the top metal) to facilitate the Hall effect. The vertical metal structure unit flows a bias current Ibias from top to bottom, and an external magnetic field changes the current distribution by the Lorentz force. The asymmetric current distribution can be detected by two differential-mode current outputs, one on each side at the bottom (M1), and each output sinks Ibias/2 ± Ihall. A single vertical metal structure generates only a small Hall effect Ihall due to the short length from M1 to the top metal as well as the low conductivity of the metal, and a series connection of thousands of vertical structure units can solve the problem by providing N×Ihall. The series connection between two units is another vertical metal structure flowing current in the opposite direction, which generates a negative Hall effect. To mitigate the negative Hall effect from the series connection, the differential current outputs at the bottom (M1) of one unit merge at the top metal level of the other unit. The proposed MLVHD is simulated in a 3-dimensional model simulator in COMSOL Multiphysics, with 0.35 μm CMOS process parameters. The simulated MLVHD unit size is (W) 10 μm × (L) 6 μm × (D) 10 μm. In this paper, we use an MLVHD with 10 units; the overall Hall device size is (W) 10 μm × (L) 78 μm × (D) 10 μm. The COMSOL simulation result is as follows: the maximum Hall current is approximately 2 μA with a 12 μA bias current and a 100 mT magnetic field. This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. R7117-16-0165, Development of Hall Effect Semiconductor for Smart Car and Device). Keywords: CMOS, vertical hall device, current mode, COMSOL
Procedia PDF Downloads 300
1547 Study of Biofouling Wastewater Treatment Technology
Authors: Sangho Park, Mansoo Kim, Kyujung Chae, Junhyuk Yang
Abstract:
The International Maritime Organization (IMO) recognized the problem of invasive species invasion and adopted the "International Convention for the Control and Management of Ships' Ballast Water and Sediments" in 2004, which came into force on September 8, 2017. In 2011, the IMO approved the "Guidelines for the Control and Management of Ships' Biofouling to Minimize the Transfer of Invasive Aquatic Species" to minimize the movement of invasive species by hull-attached organisms and required ships to manage the organisms attached to their hulls. Invasive species enter new environments through ships' ballast water and hull attachment. However, several obstacles to implementing these guidelines have been identified, including a lack of underwater cleaning equipment, regulations on underwater cleaning activities in ports, and difficulty accessing crevices in underwater areas. The shipping industry, which is the party responsible for understanding these guidelines, wants to implement them for fuel cost savings resulting from the removal of organisms attached to the hull, but they anticipate significant difficulties in implementing the guidelines due to the obstacles mentioned above. Robots or people remove the organisms attached to the hull underwater, and the resulting wastewater includes various species of organisms and particles of paint and other pollutants. Currently, there is no technology available to sterilize the organisms in the wastewater or stabilize the heavy metals in the paint particles. In this study, we aim to analyze the characteristics of the wastewater generated from the removal of hull-attached organisms and select the optimal treatment technology. The organisms in the wastewater generated from the removal of the attached organisms meet the biological treatment standard (D-2) using the sterilization technology applied in the ships' ballast water treatment system. The heavy metals and other pollutants in the paint particles generated during removal are treated using stabilization technologies such as thermal decomposition. The wastewater generated is treated using a two-step process: 1) development of sterilization technology through pretreatment filtration equipment and electrolytic sterilization treatment and 2) development of technology for removing particle pollutants such as heavy metals and dissolved inorganic substances. Through this study, we will develop a biological removal technology and an environmentally friendly processing system for the waste generated after removal that meets the requirements of the government and the shipping industry and lays the groundwork for future treatment standards.Keywords: biofouling, ballast water treatment system, filtration, sterilization, wastewater
Procedia PDF Downloads 108
1546 Marginal Productivity of Small Scale Yam and Cassava Farmers in Kogi State, Nigeria: Data Envelopment Analysis as a Complement
Authors: M. A. Ojo, O. A. Ojo, A. I. Odine, A. Ogaji
Abstract:
The study examined the marginal productivity of small scale yam and cassava farmers in Kogi State, Nigeria. Data used for the study were obtained from a primary source using a multi-stage sampling technique, with structured questionnaires administered to 150 randomly selected yam and cassava farmers from three Local Government Areas of the State. Descriptive statistics, data envelopment analysis (DEA), and the Cobb-Douglas production function were used to analyze the data. The DEA result on the overall technical efficiency of the farmers showed that 40% of the sampled yam and cassava farmers in the study area were operating at the frontier and optimum level of production, with a mean technical efficiency of 1.00. This implies that 60% of the yam and cassava farmers in the study area can still improve their level of efficiency through better utilization of available resources, given the current state of technology. The results of the Cobb-Douglas analysis of factors affecting the output of yam and cassava farmers showed that labour, planting materials, fertilizer, and capital inputs positively and significantly affected the output of the yam and cassava farmers in the study area. The study further revealed that yam and cassava farms in the study area operated under increasing returns to scale. The marginal productivity analysis further showed that relatively efficient farms were more marginally productive in resource utilization. This study also shows that estimating production functions without separating the farms into efficient and inefficient ones biases the parameter values obtained from such production functions. It is therefore recommended that yam and cassava farmers in the study area form cooperative societies so as to have access to productive inputs that will enable them to expand. Also, since using a single-equation model for the production function produces biased parameter estimates, as confirmed above, farms should therefore be decomposed into efficient and inefficient ones before production function estimation is done. Keywords: marginal productivity, DEA, production function, Kogi state
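For readers unfamiliar with DEA, the sketch below sets up an input-oriented CCR efficiency score as a linear program for a handful of hypothetical farms; the data are invented and the model is a simplified stand-in for the analysis described above.

```python
# Hedged sketch of an input-oriented CCR DEA model solved as a linear program for each
# farm (DMU): minimize theta s.t. a composite reference farm uses no more than theta
# times the evaluated farm's inputs while producing at least its output. Data invented.
import numpy as np
from scipy.optimize import linprog

# rows: farms (DMUs); inputs: land (ha), labor (man-days); output: yam/cassava (t)
X = np.array([[1.0, 120], [1.5, 150], [2.0, 260], [1.2, 100]], dtype=float)
Y = np.array([[8.0], [11.0], [12.0], [10.0]], dtype=float)
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

def ccr_efficiency(o):
    c = np.zeros(1 + n); c[0] = 1.0                        # minimize theta
    A_ub, b_ub = [], []
    for i in range(m):                                     # sum_j lam_j x_ij <= theta x_io
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i]))); b_ub.append(0.0)
    for r in range(s):                                     # sum_j lam_j y_rj >= y_ro
        A_ub.append(np.concatenate(([0.0], -Y[:, r])));     b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]

for o in range(n):
    print(f"farm {o}: technical efficiency = {ccr_efficiency(o):.3f}")
# An efficiency of 1.00 marks farms on the frontier; scores below 1.00 show the input
# contraction still possible, in the spirit of the 40% / 60% split reported above.
```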
Procedia PDF Downloads 482
1545 Transverse Momentum Dependent Factorization and Evolution for Spin Physics
Authors: Bipin Popat Sonawane
Abstract:
After the 1988 announcement by the European Muon Collaboration (EMC) of the measurement of the spin-dependent structure function, it has become necessary to understand the spin structure of the hadron. In the study of the three-dimensional spin structure of the proton, we need to understand the foundation of quantum field theory in terms of the electroweak and strong theories, using rigorous mathematical theories and models. In the process of understanding the inner dynamical structure of the proton, we need to understand the mathematical formalism of perturbative quantum chromodynamics (pQCD). In QCD processes like proton-proton collisions at high energy, we calculate cross sections using conventional collinear factorization schemes. In these calculations, parton distribution functions (PDFs) and fragmentation functions (FFs) are used, which provide information about the probability density of finding quarks and gluons (partons) inside the proton and the probability density of finding the final hadronic state from the initial partons. Transverse momentum dependent PDFs and FFs, collectively called TMDs, take into account the intrinsic transverse motion of partons. TMD factorization in the calculation of cross sections provides a scheme of hadronic and partonic states in the given QCD process. In this study we review the transverse momentum dependent (TMD) factorization scheme using the Collins-Soper-Sterman (CSS) formalism. The CSS formalism considers the transverse momentum dependence of the partons; in this formalism the cross section is written as a Fourier transform over a transverse position variable which has a physical interpretation as the impact parameter. Along with this, we compare this formalism with the improved CSS formalism. In this work we study the TMD evolution schemes and their comparison with other schemes. This would provide a description of the process of measuring the transverse single spin asymmetry (TSSA) in hadro-production and electro-production of the J/psi meson at RHIC, LHC, and ILC energy scales. This would surely help us to understand the J/psi production mechanism, which is an appropriate test of QCD.
Procedia PDF Downloads 69
1544 Corporate Governance and Corporate Social Responsibility: Research on the Interconnection of Both Concepts and Its Impact on Non-Profit Organizations
Authors: Helene Eller
Abstract:
The aim of non-profit organizations (NPOs) is to provide services and goods for their clientele, with profit being a minor objective. With this definition of the basic purpose of doing business, it is obvious that the goal of an organisation is to serve several bottom lines and not only the financial one. This approach is underpinned by the non-distribution constraint, which means that NPOs are allowed to make profits to a certain extent, but not to distribute them. The advantage is that there are no individual shareholders who might have an interest in the prosperity of the organisation: there is no pie to divide. The gained profits remain within the organisation and are reinvested in purposeful projects. Good governance is mandatory to support the aims of NPOs. Looking for a measure of good governance, the principles of corporate governance (CG) come to mind. The purpose of CG is direction and control, and in the field of NPOs, CG is enlarged to consider the relationship to all important stakeholders who have an impact on the organisation. The recognition of more relevant parties than the shareholder is the link to corporate social responsibility (CSR). It supports a broader view of the bottom line: it is no longer enough to know how profits are used but rather how they are made. Besides, CSR addresses the responsibility of organisations for their impact on society. When transferring the concept of CSR to the non-profit area, it becomes obvious that CSR, with its distinctive features, matches the aims of NPOs. As a consequence, NPOs that apply CG also apply CSR to a certain extent. The research is designed as a comprehensive theoretical and empirical analysis. First, the investigation focuses on the theoretical basis of both concepts. Second, the similarities and differences are outlined, and as a result the interconnection of both concepts becomes apparent. The contribution of this research is manifold: the interconnection of both concepts when applied to NPOs has not yet received attention in research. CSR and governance as an integrated concept provide many advantages for NPOs compared to for-profit organisations, which are under constant pressure to justify the impact they might have on society. NPOs, however, integrate economic and social aspects as a starting point. For NPOs, CG is not a mere concept of compliance but rather an enhanced concept integrating many aspects of CSR. There is no "either-or" between the concepts for NPOs. Keywords: business ethics, corporate governance, corporate social responsibility, non-profit organisations
Procedia PDF Downloads 240
1543 Importance of the Bali Strait for Devil Ray Reproduction
Authors: Irianes C. Gozali, Betty J.L. Laglbauer, Muhammad G. Salim, Sila K. Sari, Fahmi Fahmi, Selvia Oktaviyani
Abstract:
Muncar, located on the eastern coast of Java, is an important fishing port for small-scale fleets which land mobulid rays as retained bycatch, primarily in drift gillnets. Due to overlap with fishing grounds in the Bali Strait, three devil ray species are landed in Muncar: the spinetail devil ray Mobula mobular, the bentfin devil ray Mobula thurstoni, and the Chilean devil ray Mobula tarapacana, all of which are listed as Endangered by the International Union for Conservation of Nature. However, despite the importance of life-history data for better stock management, such information is still rare or unavailable for Indonesian mobulid ray populations. Using morphometric data, reproductive assessments, and samples collected from dead specimens at fish markets from 2015-2019, we provide information on maturity stage, reproductive periodicity, gestation, and size at parturition. Immature individuals predominated in all three devil ray species (from <10% of individuals in Mobula mobular to <30% in Mobula thurstoni). Pregnant females of two species, Mobula mobular and Mobula thurstoni, were recorded containing embryos at various developmental stages (each with a single embryo in the left functional uterus), while for Mobula tarapacana no fetuses were found. The largest embryo recorded in M. mobular (957 mm) was within the 920-994 mm range previously reported for neonates of the species in Indonesia and represents a near-term embryo reflecting size at parturition. Low reproductive output was confirmed for the study species. Based on this study, we infer that the Bali Strait is likely an important location for devil ray reproduction, which raises concern for the sustainability of mobulid ray populations in the face of bycatch in drift gillnets. Potential management approaches to tackle this issue are discussed. Keywords: devil ray, mobulid, reproduction, Indonesia
Procedia PDF Downloads 179
1542 Contrasting Infrastructure Sharing and Resource Substitution Synergies Business Models
Authors: Robin Molinier
Abstract:
Industrial symbiosis (IS) relies on two modes of cooperation, infrastructure sharing and resource substitution, to obtain economic and environmental benefits. The former consists of intensifying the use of an asset, while the latter is based on using waste, fatal (waste) energy, and utilities as alternatives to standard inputs. Both modes rely on a shift from business-as-usual operation towards an alternative production system structure, so that from a business point of view the distinction is not clear-cut. In order to investigate how these cooperation modes can be distinguished, we consider the stakeholders' interplay in the business model structure with regard to their resources and requirements. For infrastructure sharing, following the economic engineering literature, the capacity cost function induces economies of scale, so that demand pooling reduces overall expenses. The grassroots investment sizing decision and ex-post pricing depend strongly on the design optimization phase for capacity sizing, whereas ex-post operational cost sharing to minimize budgets is less dependent upon production rates. Value is then mainly design-driven. For resource substitution, synergy value stems from availability and is at risk from both supplier and user load profiles and from market prices of the standard input. Baseline input purchasing cost reduction is thus driven more by the operational phase of the symbiosis and must be analyzed within the whole sourcing policy (including diversification strategies and expensive back-up replacement). Moreover, while resource substitution involves a chain of intermediate processors to match quality requirements, the infrastructure model relies on a single operator whose competencies allow it to produce non-rival goods. Transaction costs appear higher in resource substitution synergies because the high level of customization induces asset specificity and non-homogeneity, following transaction cost economics arguments. Keywords: business model, capacity, sourcing, synergies
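To make the economies-of-scale argument behind infrastructure sharing concrete, the following is a minimal illustration using a stylized capacity cost function; the functional form and exponent are illustrative assumptions, not figures from the study.

```latex
% Stylized capacity cost function with economies of scale (illustrative form).
\[
  C(q) = F + c\,q^{\alpha}, \qquad 0 < \alpha < 1 .
\]
% Sub-additivity: serving the pooled demand of two partners on one shared asset
% is cheaper than building two dedicated assets, which is the economic case
% for infrastructure sharing.
\[
  C(q_1 + q_2) < C(q_1) + C(q_2) \quad \text{for all } q_1, q_2 > 0 .
\]
```

In this stylized setting, the design-phase choice of installed capacity fixes the fixed cost and the achievable scale exponent, which is consistent with the point above that value in infrastructure sharing is mainly design-driven.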
Procedia PDF Downloads 173
1541 Discrete Element Simulations of Composite Ceramic Powders
Authors: Julia Cristina Bonaldo, Christophe L. Martin, Severine Romero Baivier, Stephane Mazerat
Abstract:
Alumina refractories are commonly used in the steel and foundry industries. These refractories are prepared through a powder metallurgy route. They are a mixture of hard alumina particles and graphite platelets embedded in a soft carbonaceous matrix (binder). The powder can be cold pressed isostatically or uniaxially, depending on the application. The compact is then fired to obtain the final product. The quality of the product is governed by the microstructure of the composite and by the process parameters. The compaction behavior and the mechanical properties of the fired product depend greatly on the amount of each phase, on their morphology, and on the initial microstructure. In order to better understand the link between these parameters and the macroscopic behavior, we use the Discrete Element Method (DEM) to simulate the compaction process and the fracture behavior of the fired composite. These simulations are coupled with well-designed experiments. Four mixes with various amounts of Al₂O₃ and binder were tested both experimentally and numerically. In DEM, each particle is modelled and the interactions between particles are taken into account through appropriate contact or bonding laws. Here, we model a bimodal mixture of large and small Al₂O₃ particles covered with a soft binder. This composite is itself mixed with graphite platelets. X-ray tomography images are used to analyze the morphologies of the different components. Large Al₂O₃ particles and graphite platelets are modelled in DEM as sets of particles bonded together. The binder is modelled as a soft shell that covers both large and small Al₂O₃ particles. When two particles with binder indent each other, they first interact through this soft shell. Once a critical indentation is reached (towards the end of compaction), hard Al₂O₃–Al₂O₃ contacts appear. In accordance with experimental data, DEM simulations show that the amount of Al₂O₃ and the amount of binder play a major role in the compaction behavior. The graphite platelets bend and break during compaction, also contributing to the macroscopic stress. The firing step is modelled in DEM by ascribing bonds to particles which contact each other after compaction. The fracture behavior of the compacted mixture is also simulated and compared with experimental data. Both diametrical tests (Brazilian tests) and triaxial tests are carried out. Again, the link between the amount of Al₂O₃ particles and the fracture behavior is investigated. The methodology described here can be generalized to other particulate materials used in the ceramic industry. Keywords: cold compaction, composites, discrete element method, refractory materials, x-ray tomography
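As a reading aid, here is a minimal sketch of the kind of two-regime normal contact law described above (soft binder shell first, hard core contact beyond a critical indentation); the stiffness values, shell thickness, and function name are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def normal_contact_force(overlap, shell_thickness, k_binder, k_alumina):
    """Two-regime normal force for binder-coated particles (illustrative).

    While the overlap stays within the soft binder shell, a compliant
    (low-stiffness) response applies; beyond the critical indentation the
    hard alumina cores touch and a much stiffer response takes over.
    """
    if overlap <= 0.0:
        return 0.0                      # no contact
    if overlap <= shell_thickness:
        return k_binder * overlap       # soft binder-binder regime
    # hard core-core regime on top of the fully compressed binder shell
    return k_binder * shell_thickness + k_alumina * (overlap - shell_thickness)

# Example: force-displacement response across the regime change (units illustrative)
overlaps = np.linspace(0.0, 2e-6, 5)
forces = [normal_contact_force(d, shell_thickness=1e-6, k_binder=1e4, k_alumina=1e7)
          for d in overlaps]
print(forces)
```

The same structure generalizes to bonded assemblies (large particles and platelets built from bonded sub-particles), with bond-breaking criteria added for the fired, brittle state.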
Procedia PDF Downloads 137
1540 Effect of Cumulative Dissipated Energy on Short-Term and Long-Term Outcomes after Uncomplicated Cataract Surgery
Authors: Palaniraj Rama Raj, Himeesh Kumar, Paul Adler
Abstract:
Purpose: To investigate the effect of ultrasound energy, expressed as cumulative dissipated energy (CDE), on short- and long-term outcomes after uncomplicated cataract surgery by phacoemulsification. Methods: In this single-surgeon, two-center retrospective study, non-glaucomatous participants who underwent uncomplicated cataract surgery were investigated. Best-corrected visual acuity (BCVA) and intraocular pressure (IOP) were measured at 3 separate time points: pre-operatively, at Day 1, and at ≥1 month. Anterior chamber (AC) inflammation and corneal oedema (CO) were assessed at 2 separate time points: pre-operatively and at Day 1. Short-term (Day 1) changes in BCVA, IOP, AC inflammation, and CO, and long-term (≥1 month) changes in BCVA and IOP, were evaluated as a function of CDE using a multivariate multiple linear regression model, adjusting for age, gender, cataract type and grade, preoperative IOP, preoperative BCVA, and duration of long-term follow-up. Results: 110 eyes from 97 non-glaucomatous participants were analysed. 60 (54.55%) were female and 50 (45.45%) were male. The mean (±SD) age was 73.40 (±10.96) years. Higher CDE counts were strongly associated with higher grades of sclerotic nuclear cataracts (p < 0.001) and posterior subcapsular cataracts (p < 0.036). There was no significant association between CDE counts and cortical cataracts. CDE counts also had a positive correlation with Day 1 CO (p < 0.001). There was no correlation between CDE counts and Day 1 AC inflammation. Short-term and long-term changes in post-operative IOP did not demonstrate significant associations with CDE counts (all p > 0.05). Though there was no significant correlation between CDE counts and short-term changes in BCVA, higher CDE counts were strongly associated with greater improvements in long-term BCVA (p = 0.011). Conclusion: Though higher CDE counts were strongly associated with higher grades of Day 1 postoperative CO, there appeared to be no detriment to long-term BCVA. Correspondingly, the strong positive correlation between CDE counts and long-term BCVA likely reflects the greater severity of the underlying cataract type and grade. CDE counts were not associated with short-term or long-term postoperative changes in IOP. Keywords: cataract surgery, phacoemulsification, cumulative dissipated energy, CDE, surgical outcomes
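As an illustration of the covariate-adjusted analysis described above, a minimal sketch using ordinary least squares in statsmodels is shown below; the synthetic data frame and column names are hypothetical placeholders (not the study's variables), and the model is fitted for one outcome at a time rather than as a joint multivariate fit.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; column names are illustrative placeholders only.
rng = np.random.default_rng(1)
n = 110
df = pd.DataFrame({
    "cde": rng.gamma(shape=2.0, scale=5.0, size=n),
    "age": rng.normal(73.4, 11.0, size=n),
    "gender": rng.choice(["F", "M"], size=n),
    "cataract_type": rng.choice(["NS", "PSC", "cortical"], size=n),
    "cataract_grade": rng.integers(1, 5, size=n),
    "preop_iop": rng.normal(15.0, 3.0, size=n),
    "preop_bcva": rng.normal(0.5, 0.2, size=n),
    "followup_months": rng.integers(1, 13, size=n),
})
df["bcva_change_long_term"] = 0.01 * df["cde"] + rng.normal(0.0, 0.1, size=n)

# Covariate-adjusted linear model: long-term BCVA change as a function of CDE,
# adjusting for the covariates listed in the abstract.
model = smf.ols(
    "bcva_change_long_term ~ cde + age + C(gender) + C(cataract_type)"
    " + C(cataract_grade) + preop_iop + preop_bcva + followup_months",
    data=df,
).fit()
print(model.summary())   # the coefficient on `cde` is the adjusted CDE effect
```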
Procedia PDF Downloads 180
1539 Preventing Factors for Innovation: The Case of Swedish Construction Small and Medium-Sized Local Companies towards a One-Stop-Shop Business Concept
Authors: Georgios Pardalis, Krushna Mahapatra, Brijesh Mainali
Abstract:
Compared to other sectors, the residential and service sector in Sweden is responsible for almost 40% of the national final energy use and faces great challenges in reducing its energy intensity. One- and two-family (henceforth 'detached') houses, constituting 60% of the residential floor area and using 32 TWh for space heating and hot water purposes, offer significant opportunities for improved energy efficiency. More than 80% of these houses are more than 35 years old, and a large share of them need major renovations. However, the rate of energy renovations for such houses remains low. The renovation market is dominated by small and medium-sized local companies (SMEs), who mostly offer individual solutions. A one-stop-shop business framework, where a single actor collaborates with other actors and coordinates them to offer a full package for holistic renovations, may speed up the rate of renovation. Such models are emerging in some European countries. This paper aims to understand the willingness of SMEs to adopt a one-stop-shop business framework. Interviews were conducted with 13 SMEs in Kronoberg county in Sweden, a geographic region known for its initiatives towards sustainability and energy efficiency. The examined firms seem reluctant to adopt the one-stop-shop framework for the time being, due to the perceived risks they see in such a business move and due to their characteristics, although they agree that such a move would advance their market position and business volume. Using threat-rigidity and prospect theory, we illustrate how this type of company can move from reluctance towards adoption of the one-stop-shop framework. Additionally, using behavioral theory, we gain deeper knowledge of the exact reasons preventing these firms from adopting the one-stop-shop framework. Keywords: construction SMEs, innovation adoption, one-stop-shop, perceived risks
Procedia PDF Downloads 125
1538 Perception of Quality of Life and Self-Assessed Health in Patients Undergoing Haemodialysis
Authors: Magdalena Barbara Kaziuk, Waldemar Kosiba
Abstract:
Introduction: Despite the development of technologies and improvements in the interiors of dialysis stations, dialysis remains an unpleasant procedure that is difficult for patients to accept (they undergo it 2 to 3 times a week, with a single treatment lasting several hours). Haemodialysis is one of the renal replacement therapies; in Poland it is most commonly used in patients with chronic or acute kidney failure. Purpose: An attempt was made to evaluate the quality of life in haemodialysed patients using the WHOQOL-BREF questionnaire. Material and methods: The study covered 422 patients (200 women and 222 men, aged 60.5 ± 12.9 years) undergoing dialysis at three selected stations in Poland. The patients were divided into 2 groups depending on the duration of their dialysis treatment. The evaluation was conducted with the WHOQOL-BREF questionnaire, which contains 26 questions covering 4 areas of life as well as the perception of quality of life and self-assessed health. Each question is answered on a 5-point scale. The maximum score in each area is 20 points, and higher scores indicate a better quality of life. Results: In patients undergoing dialysis for more than 3 years, a reduction in the quality of life was found in the physical and environmental areas, whereas in the group of patients undergoing dialysis for less than 3 years a reduced quality of life was found in the areas of social relations and mental well-being (p < 0.05). A significant difference (p < 0.01) between the two groups was found in self-perceived general health, while no significant differences were observed in the general perception of the quality of life (p > 0.05). Conclusions: The study confirmed that in patients undergoing dialysis for more than three years, the quality of life is especially reduced in the environmental area (access to and quality of healthcare, financial resources, and mental and physical safety). The assessment of the quality of life should form part of the therapeutic process, in which the role of the patient in chronic renal care should be emphasised, and this should be reflected in the quality of services provided by dialysis stations. Keywords: haemodialysis, perception of quality of life, quality of services provided, dialysis station
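To illustrate how the 4-20 domain scores mentioned above are typically obtained, here is a minimal sketch of WHOQOL-BREF-style domain scoring (domain mean multiplied by 4, the usual convention); the item-to-domain grouping below is an approximation for illustration and should not be taken as the exact scoring key used in the study.

```python
import numpy as np

# Approximate item-to-domain grouping (illustrative; items 1 and 2 are the two
# global questions on overall quality of life and self-assessed health, scored
# separately). Responses are assumed coded 1-5 with reversed items already handled.
DOMAINS = {
    "physical":      [3, 4, 10, 15, 16, 17, 18],
    "psychological": [5, 6, 7, 11, 19, 26],
    "social":        [20, 21, 22],
    "environment":   [8, 9, 12, 13, 14, 23, 24, 25],
}

def domain_scores(responses):
    """Return 4-20 domain scores: 4 x mean of the domain's item responses."""
    scores = {}
    for domain, items in DOMAINS.items():
        values = [responses[i] for i in items]    # items indexed 1..26
        scores[domain] = 4.0 * np.mean(values)    # scale the 1-5 mean to 4-20
    return scores

# Example with a hypothetical respondent answering 4 to every item
answers = {i: 4 for i in range(1, 27)}
print(domain_scores(answers))    # each domain -> 16.0
```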
Procedia PDF Downloads 261
1537 Advantages of Neural Network Based Air Data Estimation for Unmanned Aerial Vehicles
Authors: Angelo Lerro, Manuela Battipede, Piero Gili, Alberto Brandl
Abstract:
Redundancy requirements for UAVs (Unmanned Aerial Vehicles) are hard to meet due to the generally restricted space and allowable weight for aircraft systems, which limits their exploitation. Essential equipment such as the Air Data, Attitude and Heading Reference System (ADAHRS) requires several external probes to measure key quantities such as the angle of attack or the sideslip angle. Previous research focused on the analysis of a patented technology named Smart-ADAHRS (Smart Air Data, Attitude and Heading Reference System) as an alternative method to obtain reliable and accurate estimates of the aerodynamic angles. This solution is based on an innovative sensor fusion algorithm implementing soft computing techniques, and it yields a simplified inertial and air data system with fewer external devices. In fact, only one external source of dynamic and static pressure is needed. This paper focuses on the benefits that would be gained by implementing this system in UAV applications. Simplifying the entire ADAHRS architecture reduces the overall cost while improving safety performance. Smart-ADAHRS has currently reached Technology Readiness Level (TRL) 6. Real flight tests took place on an ultralight aircraft equipped with suitable Flight Test Instrumentation (FTI). The output of the algorithm on the flight test measurements demonstrates the capability of this fusion algorithm to embed multiple physical and virtual sensors in a single device. Any source of dynamic and static pressure can be integrated with this system, which significantly improves versatility. Keywords: aerodynamic angles, air data system, flight test, neural network, unmanned aerial vehicle, virtual sensor
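As a reading aid, here is a minimal sketch of a neural-network "virtual sensor" of the general kind described above, mapping pressure and inertial measurements to an aerodynamic-angle estimate; this is not the patented Smart-ADAHRS algorithm, and the synthetic data, feature set, and network size are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic training set standing in for flight-test data (illustrative only):
# features = [dynamic pressure, static pressure, 3-axis accelerations, 3-axis rates],
# target   = angle of attack in degrees.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 8))
y_train = 2.0 * X_train[:, 0] - 0.5 * X_train[:, 3] + 0.1 * rng.normal(size=5000)

# Small feed-forward network acting as a virtual sensor for the aerodynamic angle.
virtual_aoa_sensor = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
)
virtual_aoa_sensor.fit(X_train, y_train)

# In operation, each new measurement vector yields an angle-of-attack estimate
# without a dedicated external vane or multi-hole probe.
x_now = rng.normal(size=(1, 8))
print(virtual_aoa_sensor.predict(x_now))
```

The point of the design is that the trained network replaces an external probe: one pressure source plus the inertial measurements already on board is enough to estimate the aerodynamic angles.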
Procedia PDF Downloads 219
1536 The Relationship of Lean Management Principles with Lean Maturity Levels: Multiple Case Study in Manufacturing Companies
Authors: Alexandre D. Ferraz, Dario H. Alliprandini, Mauro Sampaio
Abstract:
Companies and other institutions are constantly seeking better organizational performance and greater competitiveness, and many tools, methodologies, and models exist for increasing performance. However, the Lean Management approach seems to be the most effective in terms of achieving a significant improvement in productivity relatively quickly. Although Lean tools are relatively easy to understand and implement in different contexts, many organizations are not able to transform themselves into 'Lean companies'. Most implementation efforts have yielded only isolated benefits, failing to achieve the desired impact on the performance of the overall enterprise system. There is also a growing perception of the importance of management in Lean transformation, but few studies have empirically investigated and described Lean Management. In order to understand more clearly the ideas that guide Lean Management and its influence on the maturity level of the production system, the objective of this research is to analyze the relationship between Lean Management principles and the Lean maturity level in organizations. The research also analyzes the principles of Lean Management and their relationship with the 'Lean culture' and the results obtained. The research was developed using the case study methodology. Three manufacturing units of a German multinational company from the industrial automation segment, located in different countries, were studied in order to allow a better comparison of practices and maturity levels in the implementation. The primary source of information was a research questionnaire based on the theoretical review. The research showed that the higher the level of Lean Management principles, the higher the Lean maturity level, the Lean culture level, and the level of Lean results obtained in the organization. It also showed that factors such as the time over which Lean concepts had been applied and company size were not determinant for the level of Lean Management principles and, consequently, for the level of Lean maturity in the organization. The characteristics of the production system had much more influence on the different aspects evaluated. The research also provides recommendations for the managers of the plants analyzed and suggestions for future research. Keywords: lean management, lean principles, lean maturity level, lean manufacturing
Procedia PDF Downloads 141