Search results for: interest rate swap
1444 A Next-Generation Pin-On-Plate Tribometer for Use in Arthroplasty Material Performance Research
Authors: Lewis J. Woollin, Robert I. Davidson, Paul Watson, Philip J. Hyde
Abstract:
Introduction: In-vitro testing of arthroplasty materials is of paramount importance when ensuring that they can withstand the performance requirements encountered in-vivo. One common machine used for in-vitro testing is a pin-on-plate tribometer, an early stage screening device that generates data on the wear characteristics of arthroplasty bearing materials. These devices test vertically loaded rotating cylindrical pins acting against reciprocating plates, representing the bearing surfaces. In this study, a pin-on-plate machine has been developed that provides several improvements over current technology, thereby progressing arthroplasty bearing research. Historically, pin-on-plate tribometers have been used to investigate the performance of arthroplasty bearing materials under conditions commonly encountered during a standard gait cycle; nominal operating pressures of 2-6 MPa and an operating frequency of 1 Hz are typical. There has been increased interest in using pin-on-plate machines to test more representative in-vivo conditions, due to the drive to test 'beyond compliance', as well as their testing speed and economic advantages over hip simulators. Current pin-on-plate machines do not accommodate the increased performance requirements associated with more extreme kinematic conditions, therefore a next-generation pin-on-plate tribometer has been developed to bridge the gap between current technology and future research requirements. Methodology: The design was driven by several physiologically relevant requirements. Firstly, an increased loading capacity was essential to replicate the peak pressures that occur in the natural hip joint during running and chair-rising, as well as increasing the understanding of wear rates in obese patients. Secondly, the introduction of mid-cycle load variation was of paramount importance, as this allows for an approximation of the loads present in a gait cycle to be applied and to test the fatigue properties of materials. Finally, the rig must be validated against previous-generation pin-on-plate and arthroplasty wear data. Results: The resulting machine is a twelve station device that is split into three sets of four stations, providing an increased testing capacity compared to most current pin-on-plate tribometers. The loading of the pins is generated using a pneumatic system, which can produce contact pressures of up to 201 MPa on a 3.2 mm² round pin face. This greatly exceeds currently achievable contact pressures in literature and opens new research avenues such as testing rim wear of mal-positioned hip implants. Additionally, the contact pressure of each set can be changed independently of the others, allowing multiple loading conditions to be tested simultaneously. Using pneumatics also allows the applied pressure to be switched ON/OFF mid-cycle, another feature not currently reported elsewhere, which allows for investigation into intermittent loading and material fatigue. The device is currently undergoing a series of validation tests using Ultra-High-Molecular-Weight-Polyethylene pins and 316L Stainless Steel Plates (polished to a Ra < 0.05 µm). The operating pressures will be between 2-6 MPa, operating at 1 Hz, allowing for validation of the machine against results reported previously in the literature. 
The successful production of this next-generation pin-on-plate tribometer will, following its validation, unlock multiple previously unavailable research avenues.
Keywords: arthroplasty, mechanical design, pin-on-plate, total joint replacement, wear testing
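For orientation, the pin load implied by the quoted maximum can be checked with force = pressure × area (assuming the full 3.2 mm² pin face carries the load; the actuator force itself is not stated in the abstract):

    F = P × A = 201 N/mm² × 3.2 mm² ≈ 643 N (roughly 65 kgf per pin)

This back-of-the-envelope figure indicates the per-station force the pneumatic system must deliver at the highest quoted contact pressure.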
Procedia PDF Downloads 94
1443 Social Vulnerability Mapping in New York City to Discuss Current Adaptation Practice
Authors: Diana Reckien
Abstract:
Vulnerability assessments are increasingly used to support policy-making in complex environments, like urban areas. Usually, vulnerability studies include the construction of aggregate (sub-) indices and the subsequent mapping of indices across an area of interest. Vulnerability studies show a couple of advantages: they are great communication tools, can inform a wider general debate about environmental issues, and can help allocate and efficiently target scarce resources for adaptation policy and planning. However, they also have a number of challenges: vulnerability assessments are constructed on the basis of a wide range of methodologies, and there is no single framework or methodology that has proven to serve best in certain environments; indicators vary highly according to the spatial scale used; different variables and metrics produce different results; and aggregate or composite vulnerability indicators that are mapped easily distort or bias the picture of vulnerability, as they hide the underlying causes of vulnerability and level out conflicting reasons for vulnerability in space. So, there is an urgent need to further develop the methodology of vulnerability studies towards a common framework, which is one motivation for this paper. We introduce a social vulnerability approach, which, compared with bio-physical or sectoral vulnerability approaches, is relatively well developed in terms of a common methodology for index construction, guidelines for mapping, assessment of sensitivity, and verification of variables. Two approaches are commonly pursued in the literature. The first one is an additive approach, in which all potentially influential variables are weighted according to their importance for the vulnerability aspect and then added to form a composite vulnerability index per unit area. The second approach includes variable reduction, mostly Principal Component Analysis (PCA), which reduces the number of interrelated variables to a smaller number of less correlated components that are also added to form a composite index. We test these two approaches to constructing indices on the area of New York City, as well as two different metrics of input variables, and compare the outcomes for the 5 boroughs of NY. Our analysis shows that the mapping exercise yields particularly different results in the outer regions and parts of the boroughs, such as Outer Queens and Staten Island. However, some of these parts, particularly the coastal areas, receive the highest attention in the current adaptation policy. We infer from this that the current adaptation policy and practice in NY might need to be discussed, as these outer urban areas show relatively low social vulnerability compared with the more central parts, i.e. the high-density areas of Manhattan, Central Brooklyn, Central Queens and the Southern Bronx. The inner urban parts receive less adaptation attention but bear a higher risk of damage in case of hazards in those areas. This is conceivable, e.g., during large heatwaves, which would affect the inner and poorer parts of the city more than the outer urban areas.
In light of the recent planning practice of NY, one needs to question and discuss who in NY makes adaptation policy for whom; the presented analyses point towards an underrepresentation of the needs of the socially vulnerable population, such as the poor, the elderly, and ethnic minorities, in the current adaptation practice in New York City.
Keywords: vulnerability mapping, social vulnerability, additive approach, Principal Component Analysis (PCA), New York City, United States, adaptation, social sensitivity
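As an illustration of the two index-construction routes discussed in the abstract (additive weighting versus PCA-based aggregation), the following minimal Python sketch uses placeholder data; the variable names and weights are assumptions, not the study's actual inputs:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Hypothetical indicator matrix: rows = spatial units (e.g. census tracts),
    # columns = social vulnerability indicators (income, age, ethnicity share, ...).
    X = np.random.rand(100, 6)                                  # placeholder data
    weights = np.array([0.25, 0.20, 0.15, 0.15, 0.15, 0.10])    # illustrative weights

    Z = StandardScaler().fit_transform(X)    # put all indicators on a common scale

    # Approach 1: additive (weighted-sum) composite index per unit area.
    additive_index = Z @ weights

    # Approach 2: PCA-based index -- reduce the correlated indicators to a few
    # components, then add the component scores to form a composite index.
    scores = PCA(n_components=3).fit_transform(Z)
    pca_index = scores.sum(axis=1)

    print(additive_index[:5], pca_index[:5])

Mapping either index back onto the spatial units then gives the kind of vulnerability map compared in the study.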
Procedia PDF Downloads 395
1442 Using Linear Logistic Regression to Evaluation the Patient and System Delay and Effective Factors in Mortality of Patients with Acute Myocardial Infarction
Authors: Firouz Amani, Adalat Hoseinian, Sajjad Hakimian
Abstract:
Background: Mortality due to Myocardial Infarction (MI) often occurs during the first hours after symptom onset. Therefore, timely presentation to the hospital, allowing the necessary treatment to be given, can help decrease the mortality rate. The aim of this study was to investigate the impact of effective factors on the mortality of MI patients by using linear logistic regression. Materials and Methods: In this case-control study, all patients with acute MI who were referred to the Ardabil city hospital were studied. All patients who died were considered the case group (n=27), and 27 matched patients without acute MI were selected as the control group. Data were collected for all patients in both groups using the same checklist and then analyzed with SPSS version 24 software using statistical methods. We used a linear logistic regression model to determine the effective factors on mortality of MI patients. Results: The mean age of patients in the case group was significantly higher than in the control group (75.1±11.7 vs. 63.1±11.6, p=0.001). The history of non-cardiac diseases in the case group (44.4%) was significantly higher than in the control group (7.4%) (p=0.002). The number of performed PCIs in the case group (40.7%) was significantly lower than in the control group (74.1%) (P=0.013). The time between hospital admission and performed PCI in the case group (110.9 min) was significantly longer than in the control group (56 min) (P=0.001). The mean delay time from symptom onset to hospital admission (patient delay) and the mean delay time from hospital admission to receiving treatment (system delay) were similar between the two groups. Using the logistic regression model, we found that a history of non-cardiac diseases (OR=283) and the number of performed PCIs (OR=24.5) had a significant impact on mortality of MI patients compared to other factors. Conclusion: The results of this study showed that, of all studied factors, the number of performed PCIs, history of non-cardiac illness, and the interval between symptom onset and performed PCI have a significant relation with mortality of MI patients, while other factors were not significant. Therefore, further studies with a larger sample investigating other factors such as smoking, weather, etc. are recommended.
Keywords: acute MI, mortality, heart failure, arrhythmia
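A minimal sketch of this kind of case-control logistic-regression analysis, written here with Python/statsmodels rather than SPSS; the column names and simulated data are hypothetical and do not reproduce the study's dataset:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical data for 54 patients (27 cases who died, 27 controls).
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "died":            np.repeat([1, 0], 27),
        "noncardiac_hist": rng.binomial(1, 0.3, 54),
        "pci_performed":   rng.binomial(1, 0.6, 54),
        "door_to_pci_min": rng.normal(80, 30, 54),
    })

    X = sm.add_constant(df[["noncardiac_hist", "pci_performed", "door_to_pci_min"]])
    fit = sm.Logit(df["died"], X).fit(disp=0)

    # Exponentiated coefficients give the odds ratios reported in such studies.
    print(np.exp(fit.params))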
Procedia PDF Downloads 122
1441 Ultrasound-Mediated Separation of Ethanol, Methanol, and Butanol from Their Aqueous Solutions
Authors: Ozan Kahraman, Hao Feng
Abstract:
Ultrasonic atomization (UA) is a useful technique for producing a liquid spray for various processes, such as spray drying. Ultrasound generates small droplets (a few microns in diameter) by disintegration of the liquid via cavitation and/or capillary waves, with low range velocity and narrow droplet size distribution. In recent years, UA has been investigated as an alternative for enabling or enhancing ultrasound-mediated unit operations, such as evaporation, separation, and purification. The previous studies on the UA separation of a solvent from a bulk solution were limited to ethanol-water systems. More investigations into ultrasound-mediated separation for other liquid systems are needed to elucidate the separation mechanism. This study was undertaken to investigate the effects of the operational parameters on the ultrasound-mediated separation of three miscible liquid pairs: ethanol-, methanol-, and butanol-water. A 2.4 MHz ultrasonic mister with a diameter of 18 mm and rating power of 24 W was installed on the bottom of a custom-designed cylindrical separation unit. Air was supplied to the unit (3 to 4 L/min.) as a carrier gas to collect the mist. The effects of the initial alcohol concentration, viscosity, and temperature (10, 30 and 50°C) on the atomization rates were evaluated. The alcohol concentration in the collected mist was measured with high performance liquid chromatography and a refractometer. The viscosity of the solutions was determined using a Brookfield digital viscometer. The alcohol concentration of the atomized mist was dependent on the feed concentration, feed rate, viscosity, and temperature. Increasing the temperature of the alcohol-water mixtures from 10 to 50°C increased the vapor pressure of both the alcohols and water, resulting in an increase in the atomization rates but a decrease in the separation efficiency. The alcohol concentration in the mist was higher than that of the alcohol-water equilibrium at all three temperatures. More importantly, for ethanol, the ethanol concentration in the mist went beyond the azeotropic point, which cannot be achieved by conventional distillation. Ultrasound-mediated separation is a promising non-equilibrium method for separating and purifying alcohols, which may result in significant energy reductions and process intensification.Keywords: azeotropic mixtures, distillation, evaporation, purification, seperation, ultrasonic atomization
Procedia PDF Downloads 180
1440 Lessons Learned in Implementing Programs to Delay Diabetic Nephropathy Management in Primary Health Care: Case Study in Sakon Nakhon Province
Authors: Sasiwan Tassana-iem, Sumattana Glangkarn
Abstract:
Diabetic nephropathy is a major complication in diabetic patients in whom the glomerular filtration rate falls. It affects their quality of life and results in financial loss through kidney replacement therapy costs. Interventions exist, but the prevalence remains high; thus, this research aims to study lessons learned in implementing programs to delay diabetic nephropathy in primary health care. Method: The target settings are 24 sub-district health promoting hospitals in Sakon Nakhon province. Participants included health care professionals, the heads of the sub-district health promoting hospitals, and the person responsible for managing diabetic nephropathy in each hospital (n=50). There are 400 patients with diabetes mellitus in the area. Data were collected using questionnaires, patient records, interviews, and focus groups, and analyzed by statistics and content analysis. Result: Participants reflected that, for the interventions to delay diabetic nephropathy in each area, the Ministry of Public Health has a policy to screen and manage this disease. The implemented programs aimed to provide health education, with innovative teaching media used to educate. Patients and caregivers had misunderstandings about the actual causes and prevention of this disease and about how to apply the knowledge to daily life. Conclusion: The most important obstacle to the success of the programs to delay diabetic nephropathy in primary health care was that patients need self-care and should be evaluated for health literacy. It is crucial to promote health literacy: the ability to access and understand health information and to make health-related choices based on that information, which will promote and maintain good health. This preliminary research confirms that the problem of diabetic nephropathy still exists. The results of this study will inform the development of diabetic nephropathy delay programs among patients in the province studied.
Keywords: diabetic nephropathy, chronic kidney disease, primary health care, implementation
Procedia PDF Downloads 200
1439 Modeling Karachi Dengue Outbreak and Exploration of Climate Structure
Authors: Syed Afrozuddin Ahmed, Junaid Saghir Siddiqi, Sabah Quaiser
Abstract:
Various studies have reported that global warming causes an unstable climate and many serious impacts on the physical environment and public health. The increasing incidence of dengue is now a priority health issue and has become a health burden for Pakistan. In this study, it was investigated whether the spatial pattern of the environment causes the emergence or increasing rate of dengue fever incidence that affects the population and its health. The climatic or environmental structure data and the Dengue Fever (DF) data were processed by coding, editing, tabulating, recoding, and restructuring in terms of re-tabulating, and finally different statistical methods, techniques, and procedures were applied for the evaluation. The five climatic variables studied are precipitation (P), maximum temperature (Mx), minimum temperature (Mn), humidity (H) and wind speed (W), collected from 1980-2012. The dengue cases in Karachi from 2010 to 2012 are reported on a weekly basis. Principal component analysis is applied to explore the climatic variables and/or the climatic structure which may influence the increase or decrease in the number of dengue fever cases in Karachi. PC1 for the whole period is the general atmospheric condition. PC2 for the dengue period is a contrast between precipitation and wind speed. PC3 is the weighted difference between maximum temperature and wind speed. PC4 for the dengue period is a contrast between maximum temperature and wind speed. Negative binomial and Poisson regression models are used to relate dengue fever incidence to the climatic variables and principal component scores. Relative humidity is estimated to positively influence the chances of dengue occurrence by 1.71%. Maximum temperature positively influences the chances of dengue occurrence by 19.48%. Minimum temperature positively affects the chances of dengue occurrence by 11.51%. Wind speed negatively affects the weekly occurrence of dengue fever by 7.41%.
Keywords: principal component analysis, dengue fever, negative binomial regression model, poisson regression model
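A minimal sketch of the analysis chain described above — PCA on the standardized climate variables followed by Poisson and negative binomial regressions of weekly dengue counts on the component scores — using synthetic placeholder data (the variable names and values are assumptions, not the Karachi records):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Hypothetical weekly records: five climate variables and a dengue case count.
    clim = pd.DataFrame(rng.random((156, 5)),
                        columns=["precip", "t_max", "t_min", "humidity", "wind"])
    cases = rng.poisson(20, 156)

    # Principal components of the standardized climatic structure.
    scores = PCA(n_components=4).fit_transform(StandardScaler().fit_transform(clim))
    X = sm.add_constant(scores)

    # Count models relating weekly dengue incidence to the component scores.
    poisson_fit = sm.GLM(cases, X, family=sm.families.Poisson()).fit()
    negbin_fit = sm.GLM(cases, X, family=sm.families.NegativeBinomial()).fit()

    # Exponentiated coefficients give the multiplicative change in expected cases
    # per unit change in each component score.
    print(np.exp(poisson_fit.params), np.exp(negbin_fit.params))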
Procedia PDF Downloads 445
1438 Bioinformatic Design of a Non-toxic Modified Adjuvant from the Native A1 Structure of Cholera Toxin with Membrane Synthetic Peptide of Naegleria fowleri
Authors: Frida Carrillo Morales, Maria Maricela Carrasco Yépez, Saúl Rojas Hernández
Abstract:
Naegleria fowleri is the causative agent of primary amebic meningoencephalitis, this disease is acute and fulminant that affects humans. It has been reported that despite the existence of therapeutic options against this disease, its mortality rate is 97%. Therefore, the need arises to have vaccines that confer protection against this disease and, in addition to developing adjuvants to enhance the immune response. In this regard, in our work group, we obtained a peptide designed from the membrane protein MP2CL5 of Naegleria fowleri called Smp145 that was shown to be immunogenic; however, it would be of great importance to enhance its immunological response, being able to co-administer it with a non-toxic adjuvant. Therefore, the objective of this work was to carry out the bioinformatic design of a peptide of the Naegleria fowleri membrane protein MP2CL5 conjugated with a non-toxic modified adjuvant from the native A1 structure of Cholera Toxin. For which different bioinformatics tools were used to obtain a model with a modification in amino acid 61 of the A1 subunit of the CT (CTA1), to which the Smp145 peptide was added and both molecules were joined with a 13-glycine linker. As for the results obtained, the modification in CTA1 bound to the peptide produces a reduction in the toxicity of the molecule in in silico experiments, likewise, the prediction in the binding of Smp145 to the receptor of B cells suggests that the molecule is directed in specifically to the BCR receptor, decreasing its native enzymatic activity. The stereochemical evaluation showed that the generated model has a high number of adequately predicted residues. In the ERRAT test, the confidence with which it is possible to reject regions that exceed the error values was evaluated, in the generated model, a high score was obtained, which determines that the model has a good structural resolution. Therefore, the design of the conjugated peptide in this work will allow us to proceed with its chemical synthesis and subsequently be able to use it in the mouse meningitis protection model caused by N. fowleri.Keywords: immunology, vaccines, pathogens, infectious disease
Procedia PDF Downloads 92
1437 The Relationship of Weight Regain with Biochemical and Psychological Factors in Non Postmenopausal Women
Authors: Farzad Shidfar, Najmeh Rostami, Ziaodin Mazhari, Fatemeh Hosseini Baharanchi
Abstract:
Background and Aim: The rate of failure to maintain a reduced weight has increased. By definition, weight regain occurs when people regain about one-third to two-thirds of their lost weight within one year of the end of dietary treatment and return to all of the lost weight after 5 years. This study was performed to find the causes of weight regain and its relationship with biochemical and psychological factors. Materials and Methods: This cross-sectional study was performed by reviewing the files of people who followed dietary treatment in 1397-1398. Seventy-three persons were in the weight regain group, and seventy-three people were in the weight maintenance group. Psychological factors such as depression, anxiety, quality of life, physical activity, and dietary frequency were assessed through a questionnaire, and biochemical factors such as serum insulin and fasting blood sugar were measured. The mean basal energy in the weight regain group was significantly higher than in the weight maintenance group (p = 0.004). There was no significant difference between the two groups in terms of food intake and the inflammatory index of food. Mean serum insulin concentration (p = 0.023), mean fasting blood sugar (p = 0.04), and insulin resistance (p = 0.013) in the weight regain group were higher than in the weight maintenance group. The weight maintenance group showed higher insulin sensitivity than the weight regain group (p = 0.005). There was no significant difference between the two groups in terms of psychological indicators. Conclusion: Only body mass index one year after the end of the treatment period, insulin sensitivity, serum insulin concentration, fasting blood sugar, insulin resistance, selenium intake, and basal energy expenditure showed a specific and significant association with weight regain. However, the significance of insulin resistance, basal energy expenditure, and body mass index one year after the end of the treatment period was higher than that of other variables in the weight regain group.
Keywords: body weight maintenance, weight regain, insulin resistance, insulin sensitivity
Procedia PDF Downloads 114
1436 Development of One-Pot Sequential Cyclizations and Photocatalyzed Decarboxylative Radical Cyclization: Application Towards Aspidospermatan Alkaloids
Authors: Guillaume Bélanger, Jean-Philippe Fontaine, Clémence Hauduc
Abstract:
There is an undeniable thirst from organic chemists and from the pharmaceutical industry to access complex alkaloids with short syntheses. While medicinal chemists are interested in the fascinating wide range of biological properties of alkaloids, synthetic chemists are rather interested in finding new routes to access these challenging natural products of often low availability from nature. To synthesize complex polycyclic cores of natural products, reaction cascades or sequences performed one-pot offer a neat advantage over classical methods for their rapid increase in molecular complexity in a single operation. In counterpart, reaction cascades need to be run on substrates bearing all the required functional groups necessary for the key cyclizations. Chemoselectivity is thus a major issue associated with such a strategy, in addition to diastereocontrol and regiocontrol for the overall transformation. In the pursuit of synthetic efficiency, our research group developed an innovative one-pot transformation of linear substrates into bi- and tricyclic adducts applied to the construction of Aspidospermatan-type alkaloids. The latter is a rich class of indole alkaloids bearing a unique bridged azatricyclic core. Despite many efforts toward the synthesis of members of this family, efficient and versatile synthetic routes are still coveted. Indeed, very short, non-racemic approaches are rather scarce: for example, in the cases of aspidospermidine and aspidospermine, syntheses are all fifteen steps and over. We envisaged a unified approach to access several members of the Aspidospermatan alkaloids family. The key sequence features a highly chemoselective formamide activation that triggers a Vilsmeier-Haack cyclization, followed by an azomethine ylide generation and intramolecular cycloaddition. Despite the high density and variety of functional groups on the substrates (electron-rich and electron-poor alkenes, nitrile, amide, ester, enol ether), the sequence generated three new carbon-carbon bonds and three rings in a single operation with good yield and high chemoselectivity. A detailed study of amide, nucleophile, and dipolarophile variations to finally get to the successful combination required for the key transformation will be presented. To complete the indoline fragment of the natural products, we developed an original approach. Indeed, all reported routes to Aspidospermatan alkaloids introduce the indoline or indole early in the synthesis. In our work, the indoline needs to be installed on the azatricyclic core after the key cyclization sequence. As a result, typical Fischer indolization is not suited since this reaction is known to fail on such substrates. We thus envisaged a unique photocatalyzed decarboxylative radical cyclization. The development of this reaction as well as the scope and limitations of the methodology, will also be presented. The original Vilsmeier-Haack and azomethine ylide cyclization sequence as well as the new photocatalyzed decarboxylative radical cyclization will undoubtedly open access to new routes toward polycyclic indole alkaloids and derivatives of pharmaceutical interest in general.Keywords: Aspidospermatan alkaloids, azomethine ylide cycloaddition, decarboxylative radical cyclization, indole and indoline synthesis, one-pot sequential cyclizations, photocatalysis, Vilsmeier-Haack Cyclization
Procedia PDF Downloads 81
1435 Thermal Decomposition Behaviors of Hexafluoroethane (C2F6) Using Zeolite/Calcium Oxide Mixtures
Authors: Kazunori Takai, Weng Kaiwei, Sadao Araki, Hideki Yamamoto
Abstract:
HFC and PFC gases have been commonly and widely used as refrigerants in air conditioners and as etching agents in semiconductor manufacturing because of their high heat of vaporization and chemical stability. On the other hand, HFC and PFC gases have a high global warming effect on the earth. Therefore, these gases must be decomposed when emitted from chemical apparatus such as refrigerators. Until now, disposal of these gases was carried out mainly by combustion methods such as rotary kiln treatment. However, this treatment needs extremely high temperatures over 1000 °C. In recent years, in order to reduce the energy consumption, hydrolytic decomposition methods using catalysts and plasma decomposition treatment have attracted much attention as new disposal treatments. However, the decomposition of fluorine-containing gases under wet conditions cannot avoid the generation of hydrofluoric acid. Hydrofluoric acid is a corrosive gas and it deteriorates catalysts in the decomposition process. Moreover, an additional process for the neutralization of hydrofluoric acid is also indispensable. In this study, the decomposition of C2F6 using zeolite and zeolite/CaO mixtures as reactants was evaluated under dry conditions at 923 K. The effect of the chemical structure of the zeolite on the decomposition reaction was examined using H-Y, H-Beta, H-MOR and H-ZSM-5. The formation of CaF2 in zeolite/CaO mixtures after the decomposition reaction was confirmed by XRD measurements. The decomposition of C2F6 using zeolite alone as reactant showed closely similar behaviors regardless of the type of zeolite (MOR, Y, ZSM-5, Beta type). There was no difference in the XRD patterns of each zeolite before and after reaction. On the other hand, differences in the C2F6 decomposition for the zeolite/CaO mixtures were observed. These results suggested that the rate-determining process for the C2F6 decomposition on zeolite alone is the removal of fluorine from the reactive site. In other words, the C2F6 decomposition for the zeolite/CaO mixtures improved compared with that for the zeolite alone because the fluorine is removed from the reactive site. HMOR/CaO showed 100% decomposition for 3.5 h, significantly improved over zeolite alone. On the other hand, Y-type zeolite showed no improvement, that is, almost the same value as Y-type zeolite alone. The descending order of C2F6 decomposition was MOR, ZSM-5, Beta and Y-type zeolite. This order is similar to the acid strength characterized by NH3-TPD. Hence, it is considered that the C-F bond cleavage is closely related to the acid strength.
Keywords: hexafluoroethane, zeolite, calcium oxide, decomposition
Procedia PDF Downloads 481
1434 Incidence and Risk Factors of Central Venous Associated Infections in a Tunisian Medical Intensive Care Unit
Authors: Ammar Asma, Bouafia Nabiha, Ghammam Rim, Ezzi Olfa, Ben Cheikh Asma, Mahjoub Mohamed, Helali Radhia, Sma Nesrine, Chouchène Imed, Boussarsar Hamadi, Njah Mansour
Abstract:
Background: Central venous catheter-associated infections (CVC-AI) are among the serious hospital-acquired infections. The aims of this study are to determine the incidence of CVC-AI and their risk factors among patients followed in a Tunisian medical intensive care unit (ICU). Materials / Methods: A prospective cohort study conducted between September 15th, 2015 and November 15th, 2016 in an 8-bed medical ICU, including all patients admitted for more than 48h. CVC-AI were defined according to the CDC of Atlanta criteria. Enrollment was based on clinical and laboratory diagnosis of CVC-AI. For all subjects, age, sex, underlying diseases, SAPS II score, ICU length of stay, and exposure to CVC (number of CVCs placed, site of insertion and duration of catheterization) were recorded. Risk factors were analyzed by conditional stepwise logistic regression. A p-value of < 0.05 was considered significant. Results: Among 192 eligible patients, 144 patients (75%) had a central venous catheter. Twenty-eight patients (19.4%) developed CVC-AI, with an incidence density of 20.02/1000 CVC-days. Among these infections, 60.7% (n=17) were systemic CVC-AI (with negative blood culture), and 35.7% (n=10) were bloodstream CVC-AI. The mean SAPS II of patients with CVC-AI was 32.76 ± 14.48; their mean Charlson index was 1.77 ± 1.55, their mean duration of catheterization was 15.46 ± 10.81 days, and the mean duration of one central line was 5.8 ± 3.72 days. Gram-negative bacteria were identified in 53.5% of CVC-AI (n=15), dominated by multi-drug resistant Acinetobacter baumannii (n=7). Staphylococci were isolated in 3 CVC-AI. Fourteen (50%) patients with CVC-AI died. Univariate analysis identified male sex (p=0.034), referral from another hospital department (p=0.03), tobacco use (p=0.006), duration of sedation (p=0.003) and duration of catheterization (p=0) as possible risk factors of CVC-AI. Multivariate analysis showed that independent factors of CVC-AI were male sex (OR=5.73, 95% CI [2; 16.46], p=0.001), Ramsay score (OR=1.57, 95% CI [1.036; 2.38], p=0.033), and duration of catheterization (OR=1.093, 95% CI [1.035; 1.15], p=0.001). Conclusion: In a monocenter cohort, CVC-AI had a high incidence density and were associated with poor outcome. Identifying the risk factors is necessary to find solutions for this major health problem.
Keywords: central venous catheter associated infection, intensive care unit, prospective cohort studies, risk factors
Procedia PDF Downloads 361
1433 Prediction of Formation Pressure Using Artificial Intelligence Techniques
Authors: Abdulmalek Ahmed
Abstract:
Formation pressure is a main factor that affects the economics and efficiency of drilling operations. Knowing the pore pressure and the parameters that affect it will help to reduce the cost of the drilling process. Many empirical models reported in the literature have been used to calculate the formation pressure based on different parameters. Some of these models used only drilling parameters to estimate pore pressure. Other models predicted the formation pressure based on log data. All of these models required different trends, such as normal or abnormal, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict the formation pressure, and then only with one method or a maximum of two methods of AI. The objective of this research is to predict the pore pressure based on both drilling parameters and log data, namely: weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity and delta sonic time. Real field data are used to predict the formation pressure using five different artificial intelligence (AI) methods: artificial neural networks (ANN), radial basis function (RBF), fuzzy logic (FL), support vector machine (SVM) and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated the formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity, reflected in its estimation of pore pressure without the need for different trends, as compared to other models which require two different trends (normal or abnormal pressure). Moreover, by comparing the AI tools with each other, the results indicate that SVM has the advantage in pore pressure prediction due to its fast processing speed and high performance (a high correlation coefficient of 0.997 and a low average absolute percentage error of 0.14%). In the end, a new empirical correlation for formation pressure was developed using the ANN method that can estimate pore pressure with high precision (correlation coefficient of 0.998 and average absolute percentage error of 0.17%).
Keywords: Artificial Intelligence (AI), Formation pressure, Artificial Neural Networks (ANN), Fuzzy Logic (FL), Support Vector Machine (SVM), Functional Networks (FN), Radial Basis Function (RBF)
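A minimal, hypothetical sketch of how two of the listed AI tools (an SVM and an ANN) could be trained on the seven inputs and scored with the correlation coefficient and average absolute percentage error used above; the synthetic data and model settings are assumptions, not the authors' implementation:

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Hypothetical inputs: WOB, rotary speed, ROP, mud weight, bulk density,
    # porosity, delta sonic time; target: pore pressure.
    X = rng.random((500, 7))
    y = 10 + X @ np.array([2.0, 1.5, -1.0, 3.0, 0.5, -0.8, 1.2]) + rng.normal(0, 0.1, 500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    scaler = StandardScaler().fit(X_tr)

    def score(model):
        pred = model.fit(scaler.transform(X_tr), y_tr).predict(scaler.transform(X_te))
        r = np.corrcoef(y_te, pred)[0, 1]                   # correlation coefficient
        aape = np.mean(np.abs((y_te - pred) / y_te)) * 100  # average absolute % error
        return r, aape

    print("SVM:", score(SVR(C=10.0)))
    print("ANN:", score(MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)))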
Procedia PDF Downloads 149
1432 Construction of Ovarian Cancer-on-Chip Model by 3D Bioprinting and Microfluidic Techniques
Authors: Zakaria Baka, Halima Alem
Abstract:
Cancer is a major worldwide health problem that caused around ten million deaths in 2020. In addition, efforts to develop new anti-cancer drugs still face a high failure rate. This is partly due to the lack of preclinical models that recapitulate in-vivo drug responses. Indeed, the conventional cell culture approach (known as 2D cell culture) is far from reproducing the complex, dynamic and three-dimensional environment of tumors. To set up more in-vivo-like cancer models, 3D bioprinting seems to be a promising technology due to its ability to achieve 3D scaffolds containing different cell types with controlled distribution and precise architecture. Moreover, the introduction of microfluidic technology makes it possible to simulate in-vivo dynamic conditions through the so-called “cancer-on-chip” platforms. Whereas several cancer types, such as lung cancer and breast cancer, have been modeled through the cancer-on-chip approach, only a few ovarian cancer models have been described. The aim of this work is to combine 3D bioprinting and microfluidic techniques to set up a 3D dynamic model of ovarian cancer. In the first phase, an alginate-gelatin hydrogel containing SKOV3 cells was used to achieve tumor-like structures through an extrusion-based bioprinter. The desired form of the tumor-like mass was first designed in 3D CAD software. The hydrogel composition was then optimized to ensure good and reproducible printability. Cell viability in the bioprinted structures was assessed using Live/Dead and WST1 assays. In the second phase, these bioprinted structures will be included in a microfluidic device that allows simultaneous testing of different drug concentrations. This microfluidic device was first designed through computational fluid dynamics (CFD) simulations to fix its precise dimensions. It was then manufactured through a molding method based on a 3D printed template. To confirm the results of the CFD simulations, doxorubicin (DOX) solutions were perfused through the device and the DOX concentration in each culture chamber was determined. Once completely characterized, this model will be used to assess the efficacy of anti-cancer nanoparticles developed at the Jean Lamour institute.
Keywords: 3D bioprinting, ovarian cancer, cancer-on-chip models, microfluidic techniques
Procedia PDF Downloads 196
1431 Effect of Non-Regulated pH on the Dynamics of Dark Fermentative Biohydrogen Production with Suspended and Immobilized Cell Culture
Authors: Joelle Penniston, E. B. Gueguim-Kana
Abstract:
Biohydrogen has been identified as a promising alternative to the use of non-renewable fossil reserves, owing to its sustainability and non-polluting nature. pH is considered as a key parameter in fermentative biohydrogen production processes, due to its effect on the hydrogenase activity, metabolic activity as well as substrate hydrolysis. The present study assesses the influence of regulating pH on dark fermentative biohydrogen production. Four experimental hydrogen production schemes were evaluated. Two were implemented using suspended cells under regulated pH growth conditions (Sus_R) and suspended and non-regulated pH (Sus_N). The two others regimes consisted of alginate immobilized cells under pH regulated growth conditions (Imm_R) and immobilized and non-pH regulated conditions (Imm_N). All experiments were carried out at 37.5°C with glucose as sole source of carbon. Sus_R showed a lag time of 5 hours and a peak hydrogen fraction of 36% and a glucose degradation of 37%, compared to Sus_N which showed a peak hydrogen fraction of 44% and complete glucose degradation. Both suspended culture systems showed a higher peak biohydrogen fraction compared to the immobilized cell system. Imm_R experiments showed a lag phase of 8 hours, a peak biohydrogen fraction of 35%, while Imm_N showed a lag phase of 5 hours, a peak biohydrogen fraction of 22%. 100% glucose degradation was observed in both pH regulated and non-regulated processes. This study showed that biohydrogen production in batch mode with suspended cells in a non-regulated pH environment results in a partial degradation of substrate, with lower yield. This scheme has been the culture mode of choice for most reported studies in biohydrogen research. The relatively lower slope in pH trend of the non-regulated pH experiment with immobilized cells (Imm_N) compared to Sus_N revealed that that immobilized systems have a better buffering capacity compared to suspended systems, which allows for the extended production of biohydrogen even under non-regulated pH conditions. However, alginate immobilized cultures in flask systems showed some drawbacks associated to high rate of gas production that leads to increased buoyancy of the immobilization beads. This ultimately impedes the release of gas out of the flask.Keywords: biohydrogen, sustainability, suspended, immobilized
Procedia PDF Downloads 342
1430 Decreased Non-Communicable Disease by Surveillance, Control, Prevention Systems, and Community Engagement Process in Phayao, Thailand
Authors: Vichai Tienthavorn
Abstract:
Background: Recently, the number of patients with non-communicable diseases (NCDs), especially hypertension and diabetes, has been increasing in Thailand. Hypertension and diabetes patients were found to number 3.7 million in 2008. Health-related human behaviors have changed extensively. Hence, the Thai Government has a policy to reduce NCDs. Generally, primary care plays an important role in treatment using the medical process; however, the number of NCD patients has not decreased. Objectives: This study aims not only to reduce patient numbers and the mortality rate but also to increase quality of life; it could be applied in different areas and proposed as national policy for effective long-term operation. Methods: Here we report on primary health care (PHC), in which screening is a primary process to rapidly identify a person's risk. The screening tool of the study was Vichai's 7 color balls model, a medical education tool to transfer knowledge from the student health team to the community through health volunteers, creating community engagement in terms of social participation. It was found that people in the community became aware of their health and could evaluate their level of risk using this model. Results: The project was implemented (2015) in Nong Lom Health Center in Phayao (target group 15-65 years, n=2529); hypertension screening coverage was 99.01%, the risk group (light green) decreased as the normal group (white) grew from 1806 to 1893, and severe patients (red) decreased from 10 to 5 by moving to moderate (orange). A health program for behavior change with best practice of the 3Es (Eating, Exercise, Emotion) and 3Rs (Reducing tobacco, alcohol, obesity) was applied in the risk group, and strict medication and investigation were encouraged in severe patients (red). Conclusion: This is the first demonstration of knowledge transfer to community engagement by students, which is sustainable education in PHC.
Keywords: non-communicable disease, surveillance control and prevention systems, community engagement, primary health care
Procedia PDF Downloads 250
1429 Evaluating the Performance of Passive Direct Methanol Fuel Cell under Varying Operating and Structural Conditions
Authors: Rahul Saraswat
Abstract:
More recently, focus has been given to replacing machined stainless steel metal flow-fields with inexpensive wiremesh current collectors. The flow-fields are based on simple woven wiremesh screens of various stainless steels, which are sandwiched with a thin metal plate of the same material to create a bipolar plate/flow-field configuration for use in a stack. Major advantages of using stainless steel wire screens include the elimination of expensive raw materials as well as machining and/or other special fabrication costs. The objective of the project is to improve the performance of the passive direct methanol fuel cell without increasing the cost of the cell and to make it as compact and light as possible. From the literature survey, it was found that very little has been done in this direction, and the following methodology was used. 1.) The passive DMFC cell can be made more compact, lighter and less costly by changing the material used in its construction. 2.) Controlling the fuel diffusion rate through the cell improves the performance of the cell. A passive liquid feed direct methanol fuel cell (DMFC) was fabricated using a given MEA (Membrane Electrode Assembly) and tested with different current collector structures. Mesh current collectors of different mesh densities, along with different support structures, were used, and the performance was found to be better. Methanol concentration was also varied. Optimisation of mesh size, support structure and fuel concentration was achieved, and a cost analysis was also performed. From the performance analysis study of the DMFC, we can conclude with the following points: The area specific resistance (ASR) of wiremesh current collectors is lower than the ASR of stainless steel current collectors. Also, the power produced by wiremesh current collectors is always more than that produced by stainless steel current collectors. Low or moderate methanol concentrations should be used for better and stable DMFC performance. Wiremesh is a good substitute for stainless steel for current collector plates of passive DMFCs because of its lower cost (by about 27%), flexibility and light weight.
Keywords: direct methanol fuel cell, membrane electrode assembly, mesh, mesh size, methanol concentration and support structure
Procedia PDF Downloads 68
1428 Nurses' Perception and Core Competencies for Disaster Preparedness: A Study from the Western Region of Turkey
Authors: Gülcan Taşkıran, Ülkü Tatar Baykal
Abstract:
Aim: To identify nurses' perceived competencies for disaster preparedness. Background: Recently, the number of disasters has increased worldwide. Since disasters often strike without warning, healthcare providers, especially nurses, must be prepared with appropriate competencies for disaster procedures. Nurses' perceptions of their own competencies for disaster preparedness need to be evaluated to aid in the creation of effective national plans and educational programs. Design: This study was conducted with a descriptive and cross-sectional design. Methods: Nurses' perceptions were assessed using the 13-item Demographic Profile Questionnaire, which is based on previous literature, and the 45-item Nurses' Perception of Core Competencies for Disaster Preparedness Scale (NPCDPS). Data were collected from June to September 2014 from 406 (79.9% return rate) Turkish nurses working in the western region of Turkey. Results: The nurses' mean age was 31.27 ± 5.86 years and their mean working time was 8.07 ± 6.60 years; the vast majority of the nurses were women (85.7%), married (59.4%), bachelor's degree holders (88.2%) and service nurses (56.2%). The disaster nurses considered most likely was an earthquake (70.9%), and the majority of nurses considered that they have a role as a nurse at every stage of a disaster. The mean total point score of nurses' perception of disaster preparedness was 4.62. The mean total point score of the nurses on the Nurses' Perception of Core Competencies for Disaster Preparedness Scale was 133.96. When the subscales' mean scores are examined, the highest mean score is for Technical Skills (44.52), and the lowest is for Critical Thinking Skills (10.47). When the subscale scores are compared across the independent variables of sex, marital status and education level, there is no significant difference (p > 0.05); when compared across age group, working years, duty and having experienced a disaster, there is a significant difference (p ≤ 0.05). Conclusion: Nurses generally perceive themselves as sufficient at a 'medium level' in terms of meeting the core competencies that are required for disaster preparedness. Nurses are not adequately prepared for disasters, but they are aware of the need for such preparation and disaster education. Disaster management training should be given to all nurses in their basic education.
Keywords: disaster competencies, disaster management, disaster nursing, disaster preparedness, nursing, nursing administration, Turkish nurses
Procedia PDF Downloads 368
1427 Role of the Midwifery Trained Registered Nurse in Postnatal Units at Tertiary Care Hospitals in the Western Province of Sri Lanka: A Postal Survey
Authors: Sunethra Jayathilake, Vathsala Jayasuriya-Illesinghe, Kerstin Samarasinghe, Himani Molligoda, Rasika Perera
Abstract:
In Sri Lanka, postnatal care in the state hospitals is provided by different professional categories: Midwifery trained registered nurses (MTRNs), Registered Nurses (RNs) who do not have midwifery training, doctors and midwives. Even though four professional categories provide postnatal care to mothers and newborn babies, they are not aware of their own tasks and responsibilities in postnatal care. Particularly MTRN’s role in the postnatal unit is unclear. The current study aimed to identify nurses’ (both MTRN and RNs) perception on MTRN’s tasks and responsibilities in postnatal care. This is a descriptive cross sectional study using postal survey. All nurses who were currently working in postnatal units at five selected tertiary care hospitals in the Western Province at that time were invited to participate in the study. Accordingly, the pre evaluated self-administered questionnaire was sent to 201 nurses (53 MTRNs and 148 RNs) in the study setting. The number of valid return questionnaire was 166; response rate was 83%. Respondents rated the responsibility of four professional categories: MTRN, RN, doctor and midwife whether they are 'primarily responsible', 'responsible in absence' and 'not responsible', for each of 15 postnatal (PN) tasks which were previously identified from focus group discussions with care providers during the first phase of the study. Data were analyzed using SPSS version 20; descriptive statistics were calculated. Out of the 15 PN tasks, 13 were identified as MTRNs’ primary responsibilities by 71%-93% of respondents. The respondents also considered six (6) tasks out of 15 as primary responsibility of both MTRN and RN, seven (7) tasks as primary responsibility of MTRN, RN and doctor and the remaining two (2) tasks were identified as the primary responsibility of MTRN, RN and midwife. All 15 PN tasks overlapped with other professional categories. Overlapping tasks may create role confusion leading to conflicts among professional categories which affect the quality of care they provide, eventually, threaten the safety of the client. It is recommended that an official job description for each care provider is needed to recognize their own professional boundaries for ensuring safe, quality care delivery in Sri Lanka.Keywords: overlapping, postnatal, responsibilities, tasks
Procedia PDF Downloads 150
1426 Analysis of Complex Business Negotiations: Contributions from Agency-Theory
Authors: Jan Van Uden
Abstract:
The paper reviews classical agency-theory and its contributions to the analysis of complex business negotiations and gives an approach for the modification of the basic agency-model in order to examine the negotiation specific dimensions of agency-problems. By illustrating fundamental potentials for the modification of agency-theory in context of business negotiations the paper highlights recent empirical research that investigates agent-based negotiations and inter-team constellations. A general theoretical analysis of complex negotiation would be based on a two-level approach. First, the modification of the basic agency-model in order to illustrate the organizational context of business negotiations (i.e., multi-agent issues, common-agencies, multi-period models and the concept of bounded rationality). Second, the application of the modified agency-model on complex business negotiations to identify agency-problems and relating areas of risk in the negotiation process. The paper is placed on the first level of analysis – the modification. The method builds on the one hand on insights from behavior decision research (BRD) and on the other hand on findings from agency-theory as normative directives to the modification of the basic model. Through neoclassical assumptions concerning the fundamental aspects of agency-relationships in business negotiations (i.e., asymmetric information, self-interest, risk preferences and conflict of interests), agency-theory helps to draw solutions on stated worst-case-scenarios taken from the daily negotiation routine. As agency-theory is the only universal approach able to identify trade-offs between certain aspects of economic cooperation, insights obtained provide a deeper understanding of the forces that shape business negotiation complexity. The need for a modification of the basic model is illustrated by highlighting selected issues of business negotiations from agency-theory perspective: Negotiation Teams require a multi-agent approach under the condition that often decision-makers as superior-agents are part of the team. The diversity of competences and decision-making authority is a phenomenon that overrides the assumptions of classical agency-theory and varies greatly in context of certain forms of business negotiations. Further, the basic model is bound to dyadic relationships preceded by the delegation of decision-making authority and builds on a contractual created (vertical) hierarchy. As a result, horizontal dynamics within the negotiation team playing an important role for negotiation success are therefore not considered in the investigation of agency-problems. Also, the trade-off between short-term relationships within the negotiation sphere and the long-term relationships of the corporate sphere calls for a multi-period perspective taking into account the sphere-specific governance-mechanisms already established (i.e., reward and monitoring systems). Within the analysis, the implementation of bounded rationality is closely related to findings from BRD to assess the impact of negotiation behavior on underlying principal-agent-relationships. As empirical findings show, the disclosure and reservation of information to the agent affect his negotiation behavior as well as final negotiation outcomes. 
Last, in the context of business negotiations, asymmetric information is often intended by decision-makers acting as superior-agents or principals, which calls for a bilateral risk-approach to agency-relations.
Keywords: business negotiations, agency-theory, negotiation analysis, interteam negotiations
Procedia PDF Downloads 139
1425 Atomic Scale Storage Mechanism Study of the Advanced Anode Materials for Lithium-Ion Batteries
Authors: Xi Wang, Yoshio Bando
Abstract:
Lithium-ion batteries (LIBs) can deliver high levels of energy storage density and offer long operating lifetimes, but their power density is too low for many important applications. Therefore, we developed some new strategies and fabricated novel electrodes for fast Li transport and its facile synthesis including N-doped graphene-SnO2 sandwich papers, bicontinuous nanoporous Cu/Li4Ti5O12 electrode, and binder-free N-doped graphene papers. In addition, by using advanced in-TEM, STEM techniques and the theoretical simulations, we systematically studied and understood their storage mechanisms at the atomic scale, which shed a new light on the reasons of the ultrafast lithium storage property and high capacity for these advanced anodes. For example, by using advanced in-situ TEM, we directly investigated these processes using an individual CuO nanowire anode and constructed a LIB prototype within a TEM. Being promising candidates for anodes in lithium-ion batteries (LIBs), transition metal oxide anodes utilizing the so-called conversion mechanism principle typically suffer from the severe capacity fading during the 1st cycle of lithiation–delithiation. Also we report on the atomistic insights of the GN energy storage as revealed by in situ TEM. The lithiation process on edges and basal planes is directly visualized, the pyrrolic N "hole" defect and the perturbed solid-electrolyte-interface (SEI) configurations are observed, and charge transfer states for three N-existing forms are also investigated. In situ HRTEM experiments together with theoretical calculations provide a solid evidence that enlarged edge {0001} spacings and surface "hole" defects result in improved surface capacitive effects and thus high rate capability and the high capacity is owing to short-distance orderings at the edges during discharging and numerous surface defects; the phenomena cannot be understood previously by standard electron or X-ray diffraction analyses.Keywords: in-situ TEM, STEM, advanced anode, lithium-ion batteries, storage mechanism
Procedia PDF Downloads 352
1424 Prevalence and Correlates of Complementary and Alternative Medicine Use among Diabetic Patients in Lebanon: A Cross-Sectional Study
Authors: Farah Naja, Mohamad Alameddine
Abstract:
Background: The difficulty of compliance to therapeutic and lifestyle management of type 2 diabetes mellitus (T2DM) encourages patients to use complementary and alternative medicine (CAM) therapies. Little is known about the prevalence and mode of CAM use among diabetics in the Eastern Mediterranean Region in general and Lebanon in particular. Objective: To assess the prevalence and modes of CAM use among patients with T2DM residing in Beirut, Lebanon. Methods: A cross-sectional survey of T2DM patients was conducted on patients recruited from two major referral centers - a public hospital and a private academic medical center in Beirut. In a face-to-face interview, participants completed a survey questionnaire comprised of three sections: socio-demographic, diabetes characteristics and types and modes of CAM use. Descriptive statistics, univariate and multivariate logistic regression analyses were utilized to assess the prevalence, mode and correlates of CAM use in the study population. The main outcome in this study (CAM use) was defined as using CAM at least once since diagnosis with T2DM. Results: A total of 333 T2DM patients completed the survey (response rate: 94.6%). Prevalence of CAM use in the study population was 38%, 95% CI (33.1-43.5). After adjustment, CAM use was significantly associated with a “married” status, a longer duration of T2DM, the presence of disease complications, and a positive family history of the disease. Folk foods and herbs were the most commonly used CAM followed by natural health products. One in five patients used CAM as an alternative to conventional treatment. Only 7 % of CAM users disclosed the CAM use to their treating physician. Health care practitioners were the least cited (7%) as influencing the choice of CAM among users. Conclusion: The use of CAM therapies among T2DM patients in Lebanon is prevalent. Decision makers and care providers must fully understand the potential risks and benefits of CAM therapies to appropriately advise their patients. Attention must be dedicated to educating T2DM patients on the importance of disclosing CAM use to their physicians especially patients with a family history of diabetes, and those using conventional therapy for a long time.Keywords: nutritional supplements, type 2 diabetes mellitus, complementary and alternative medicine (CAM), conventional therapy
Procedia PDF Downloads 349
1423 Supercritical Hydrothermal and Subcritical Glycolysis Conversion of Biomass Waste to Produce Biofuel and High-Value Products
Authors: Chiu-Hsuan Lee, Min-Hao Yuan, Kun-Cheng Lin, Qiao-Yin Tsai, Yun-Jie Lu, Yi-Jhen Wang, Hsin-Yi Lin, Chih-Hua Hsu, Jia-Rong Jhou, Si-Ying Li, Yi-Hung Chen, Je-Lueng Shie
Abstract:
Raw food waste has a high water content, so incinerating it increases treatment costs; composting or energy recovery is therefore usually used instead. Composting technologies for food waste are mature, but odor, wastewater, and other problems are serious, and the output of compost products is limited. Bakelite, which is used mainly in the manufacture of integrated circuit boards, is hard to recycle and reuse directly because of its rigid structure, and it is also difficult to incinerate, producing air pollutants through incomplete combustion. In this study, supercritical hydrothermal and subcritical glycolysis thermal conversion technologies are used to convert bakelite and raw kitchen waste into carbon materials and biofuels. Batch carbonization tests are performed under high-temperature, high-pressure solvent conditions and different operating conditions, including wet- and dry-based mixed biomass. The study has two parts: in the first, bakelite waste is treated as a dry-based industrial waste; in the second, raw kitchen wastes (lemon, banana, watermelon, and pineapple peel) are used as wet-based biomass. The parameters include reaction temperature, reaction time, mass-to-solvent ratio, and volume filling rates. The yield, conversion, and recovery rates of the solid, gas, and liquid products are evaluated and discussed, and the results explore the synergistic benefits of thermal glycolysis dehydration and carbonization on the yield and recovery rate of solid products, with the aim of identifying the optimum operating conditions. This technology is a biomass-negative carbon technology (BNCT); combined with carbon capture and storage (as in BECCS), it can provide a new direction toward 2050 net-zero carbon dioxide emissions (NZCDE).
Keywords: biochar, raw food waste, bakelite, supercritical hydrothermal, subcritical glycolysis, biofuels
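As a point of reference for how the product metrics above are typically computed, the sketch below uses one common mass-based convention for yield, recovery, and conversion; both the definitions and the example masses are illustrative assumptions, not values taken from the study.

```python
def product_metrics(feed_mass_g, solid_g, liquid_g, gas_g):
    """Mass-based yield of each product fraction, overall mass recovery,
    and conversion of the feed away from the solid phase."""
    yields = {
        "solid": solid_g / feed_mass_g,
        "liquid": liquid_g / feed_mass_g,
        "gas": gas_g / feed_mass_g,
    }
    recovery = (solid_g + liquid_g + gas_g) / feed_mass_g
    conversion = 1 - solid_g / feed_mass_g
    return yields, recovery, conversion

# Illustrative numbers only (not from the study):
yields, recovery, conversion = product_metrics(100.0, 45.0, 40.0, 10.0)
print(yields, f"recovery={recovery:.0%}", f"conversion={conversion:.0%}")
```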
Procedia PDF Downloads 179
1422 Analysis of Socio-Economics of Tuna Fisheries Management (Thunnus Albacares Marcellus Decapterus) in Makassar Waters Strait and Its Effect on Human Health and Policy Implications in Central Sulawesi-Indonesia
Authors: Siti Rahmawati
Abstract:
Indonesia has experienced a long period of monetary and economic crisis, followed by an upward trend in the price of fuel oil. This situation affects all aspects of the tuna fishing community: the basic needs of fishing communities increase while purchasing power falls, leading to economic and social instability as well as poorer health in fishermen's households. To understand this, the Analytical Hierarchy Process (AHP) is applied to model tuna fisheries management priorities, the cold-chain marketing channel, and the utilization levels that affect human health. The study is designed as development research with 180 respondents, and the data were analyzed by the AHP method. The development of the tuna fishery business can improve production productivity through economic empowerment activities for coastal communities, improving the competitiveness of products, developing fish processing centers, and providing internal capital for the optimal development of the fishery business. From an economic perspective, the fishery business is attractive because the benefit-cost ratio is 2.86; over the 10-year economic life of the project, B/C > 1, and the investment is therefore economically viable. From a health perspective, tuna can reduce the risk of dying from heart disease by 50% because it supplies selenium: consuming 100 g of tuna meets 52.9% of the body's selenium requirement and activates the antioxidant enzyme glutathione peroxidase, which protects the body from the free radicals that can stimulate various cancers. The results of the analytic hierarchy process show that the quality of tuna products is the top priority for export, together with quality control, in order to compete in the global market. Implementing the policy can increase fishermen's income, reduce poverty among fishermen's households, and benefit the health of those at high risk of disease.
Keywords: management of tuna, social, economic, health
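Because the ranking above rests on AHP-derived priorities, a minimal sketch of the standard eigenvector calculation is given below. The three criteria (product quality, fishermen income, cold-chain marketing) and the pairwise judgement values are hypothetical, chosen only to illustrate how a priority ranking such as "product quality first" is obtained, and do not reproduce the study's data.

```python
import numpy as np

def ahp_priorities(pairwise):
    """Principal-eigenvector priorities and consistency ratio for an AHP matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                       # normalize to priority weights
    ci = (eigvals[k].real - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)  # Saaty's random index
    return w, ci / ri

# Hypothetical pairwise judgements for three criteria:
# product quality, fishermen income, cold-chain marketing.
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
weights, cr = ahp_priorities(A)
print(weights.round(3), f"CR={cr:.2f}")  # product quality carries the largest weight
```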
Procedia PDF Downloads 316
1421 Chemical Synthesis, Characterization and Dose Optimization of Chitosan-Based Nanoparticles of MCPA for Management of Broad-Leaved Weeds (Chenopodium album, Lathyrus aphaca, Angalis arvensis and Melilotus indica) of Wheat
Authors: Muhammad Ather Nadeem, Bilal Ahmad Khan, Tasawer Abbas
Abstract:
Nanoherbicides use nanotechnology to enhance the delivery of biological or chemical herbicides through combinations of nanomaterials. The aim of this research was to examine the efficacy of chitosan nanoparticles containing the herbicide MCPA as a potential eco-friendly alternative for weed control in wheat crops. Scanning electron microscopy (SEM), X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FT-IR), and ultraviolet absorbance were used to characterize the developed nanoparticles. SEM analysis indicated an average particle size of 35 nm, with the particles forming clusters with a porous structure. Both fluroxypyr + MCPA nanoparticle preparations exhibited maximum absorption peaks at a wavelength of 320 nm, and the fluroxypyr + MCPA compound showed a strong peak at a 2θ value of 30.55°, corresponding to the 78 plane of the anatase phase. The weeds Chenopodium album, Lathyrus aphaca, Angalis arvensis, and Melilotus indica were sprayed with the nanoparticles at the third- or fourth-leaf stage. Seven distinct treatments were used: D0 (weedy check), D1 (recommended dose of conventional herbicide), D2 (recommended dose of nano-herbicide, NPs-H), D3 (NPs-H at a 5-fold lower dose), D4 (NPs-H at a 10-fold lower dose), D5 (NPs-H at a 15-fold lower dose), and D6 (NPs-H at a 20-fold lower dose). The chitosan-based nanoparticles of MCPA applied at the recommended dose of the conventional herbicide caused complete kill and visible injury, with a 100% mortality rate. The 5-fold lower dose produced the lowest plant height (3.95 cm), chlorophyll content (5.63%), dry biomass (0.10 g), and fresh biomass (0.33 g) in the broad-leaved weeds of wheat, and the nanoparticle herbicide at a dose 10-fold lower than that of the conventional herbicide had an impact comparable to the recommended conventional dose. Nano-herbicides thus have the potential to improve the efficiency of standard herbicides by increasing stability and lowering toxicity.
Keywords: mortality, visual injury, chlorophyll contents, chitosan-based nanoparticles
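The dose ladder used in the treatment design is simple to tabulate; the sketch below does so for an assumed recommended field rate of the nano-herbicide, since the abstract reports only the reduction factors (5-, 10-, 15-, and 20-fold) and not the absolute doses.

```python
# Assumed recommended field rate, for illustration only; the study does not
# report the absolute dose, only the reduction factors applied to it.
recommended_dose_g_ha = 500.0

reduction_factors = {"D2": 1, "D3": 5, "D4": 10, "D5": 15, "D6": 20}
doses = {name: recommended_dose_g_ha / f for name, f in reduction_factors.items()}
for name, dose in doses.items():
    print(f"{name}: {dose:.0f} g a.i./ha")
```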
Procedia PDF Downloads 65
1420 Grisotti Flap as Treatment for Central Tumors of the Breast
Authors: R. Pardo, P. Menendez, MA Gil-Olarte, S. Sanchez, E. García, R. Quintana, J. Martín
Abstract:
Introduction: Within oncoplastic breast techniques there is increased interest in immediate partial breast reconstruction, where the volume resected is greater than that of conventional conservative techniques. Central tumours of the breast have classically been treated with mastectomy, both for oncological safety and because of the cosmetic consequences of wide central resection of the nipple and the breast tissue beneath. Oncological results for central quadrantectomy show a recurrence rate, disease-free period, and survival identical to mastectomy. The Grisotti flap is an oncoplastic surgical technique that allows the surgeon to perform a safe central quadrantectomy with excellent cosmetic results. Material and methods: The Grisotti flap is a glandular-cutaneous advancement-rotation flap that can fill the defect in the central portion of the excised breast. If the inferior border is affected by tumour and further surgery is decided upon at the multidisciplinary team meeting, it will be necessary to perform a mastectomy. All patients who underwent surgery with a Grisotti flap since 2009 were reviewed, obtaining the following data: age, histopathological diagnosis, size, operating time, volume of tissue resected, postoperative admission time, re-excisions due to margins positive for tumour, wound dehiscence, complications, and recurrence. Analysis and results of sentinel node biopsy were also obtained. Results: 12 patients underwent surgery between 2009 and 2015. The mean age was 54 years (34-67). All had a preoperative diagnosis of infiltrative ductal carcinoma of less than 2 cm. Diagnosis was made with ultrasound, mammography, or both; magnetic resonance was used in 5 cases. No patient had a preoperatively positive axilla after ultrasound exploration. Mean operating time was 104 minutes (84-130), and postoperative stay was 24 hours. Mean volume resected was 159 cc (70-286). In one patient the surgical border was affected by tumour, and a further resection of the affected border was performed as ambulatory surgery. The sentinel node biopsy was positive for micrometastasis in only two cases: in one case, from 2009, lymphadenectomy was performed; in the other, treated in 2015, no lymphadenectomy was performed, as the patient had a favourable histopathological prognosis and the multidisciplinary team meeting agreed that it was not required. No recurrence has been diagnosed in any of the patients, and all are currently disease free. Conclusions: Conservative surgery for retroareolar central tumours of the breast achieves good local control of the disease with free surgical borders, including resection of the nipple-areola complex and the pectoralis major muscle fascia. Reconstructive surgery with the inferior Grisotti flap adequately fills the defect after central quadrantectomy, creating a new cutaneous disc where a new nipple-areola complex is reconstructed with a local flap or micropigmentation; this avoids the need for contralateral symmetrization. Sentinel node biopsy can be performed without added morbidity. When feasible, the Grisotti flap avoids skin-sparing mastectomy for central breast tumours, which would require an expander, prosthesis, or myocutaneous flap, with all the complications of a more complex operation.
Keywords: Grisotti flap, oncoplastic surgery, central tumours, breast
Procedia PDF Downloads 342
1419 Indigo Dye Wastewater Treatment by Fenton Oxidation
Authors: Anurak Khrueakham, Tassanee Chanphuthin
Abstract:
Indigo is a well-known natural blue dye that is still used even though synthetic dyes are commercially available. The removal of indigo from effluents is difficult because of its resistance to biodegradation, which leads to adverse effects on the aquatic environment. The Fenton process is a reaction between hydrogen peroxide (H2O2) and Fe2+ that generates •OH, a highly reactive oxidant (E° = 2.8 V). •OH is a non-selective oxidant capable of destroying a wide range of organic pollutants in water and wastewater. The aims of this research were to investigate the effects of H2O2, Fe2+, and pH on the oxidation of indigo wastewater by the Fenton process. A 1-liter batch reactor was used in all experiments: the reactor was filled with 1 liter of indigo wastewater, the pH was adjusted to the desired value, FeSO4 was added in a predetermined amount, and H2O2 was then added immediately to start the Fenton reaction. The Fenton oxidation of the indigo wastewater was run for 60 minutes. Residual H2O2 was analyzed using the titanium oxalate method, the Fe2+ concentration was determined by the phenanthroline method, and COD was determined using the closed-reflux titrimetric method to indicate the removal efficiency. The results showed that at pH 2, increasing the initial ferrous concentration from 0.1 mM to 1 mM enhanced the indigo removal from 36% to 59%; the Fenton reaction was rapid owing to the high generation rate of •OH. The degradation of indigo increased with increasing pH up to pH 3. This can be explained by the severe scavenging of •OH by H+ at low pH to form an oxonium ion, which decreases the production of •OH and lowers the decolorization efficiency. Increasing the initial H2O2 concentration from 5 mM to 20 mM enhanced the decolorization, and the COD removal increased from 35% to 65% over the same range, as •OH generation was promoted by the higher initial H2O2 concentration; however, still higher H2O2 concentrations reduced the COD removal efficiency. The initial ferrous concentration was studied in the range of 0.05-15.0 mM: the COD removal increased with increasing ferrous concentration, from 32% to 65% as the ferrous concentration rose from 0.5 mM to 10.0 mM, but did not change significantly above 10.0 mM because the additional •OH yield no longer improved the level of oxidation. According to these studies, the Fenton reagents are the key factors for COD removal by the Fenton process, and the optimum condition for COD removal from indigo dye wastewater was 10.0 mM ferrous, 20 mM H2O2, and pH 3.
Keywords: indigo dye, Fenton oxidation, wastewater treatment, advanced oxidation processes
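As a rough aid to interpreting the reported optimum (10.0 mM Fe2+, 20 mM H2O2, pH 3), the sketch below converts those doses into reagent masses per litre and computes the percent COD removal metric. The assumption that the iron salt is FeSO4·7H2O and the example influent COD value are illustrative; neither is stated in the abstract.

```python
# Molar masses (g/mol); FeSO4.7H2O is an assumed salt form, not stated in the abstract.
FE_SO4_7H2O = 278.01
H2O2 = 34.01

def reagent_mass_per_litre(conc_mM, molar_mass):
    """Mass of reagent (g) needed per litre to reach the target concentration."""
    return conc_mM / 1000 * molar_mass

def cod_removal(cod_in_mg_l, cod_out_mg_l):
    """Percent COD removal, the efficiency metric used in the study."""
    return (cod_in_mg_l - cod_out_mg_l) / cod_in_mg_l * 100

print(f"FeSO4.7H2O for 10 mM Fe2+: {reagent_mass_per_litre(10, FE_SO4_7H2O):.2f} g/L")
print(f"H2O2 for 20 mM:            {reagent_mass_per_litre(20, H2O2):.2f} g/L")
# Illustrative influent COD of 1000 mg/L; a 65% removal would leave 350 mg/L.
print(f"COD removal for 1000 -> 350 mg/L: {cod_removal(1000, 350):.0f}%")
```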
Procedia PDF Downloads 395
1418 Predictors of Lost to Follow-Up among HIV Patients Attending Anti-Retroviral Therapy Treatment Centers in Nigeria
Authors: Oluwasina Folajinmi, Kate Ssamulla, Penninah Lutung, Daniel Reijer
Abstract:
Background: Despite the well-verified benefits of anti-retroviral therapy (ART) in prolonging life expectancy, loss to follow-up (LTFU) presents a challenge to the success of ART programs in resource-limited countries like Nigeria. In several studies of ART programs in developing countries, researchers have reported high rates of LTFU among patients receiving care and treatment at ART centers. This study seeks to determine the causes of LTFU among HIV clients. Method: A descriptive cross-sectional study focused on a population of 9,280 persons living with HIV/AIDS who were enrolled in nine treatment centers in Nigeria (both pre-ART and ART patients were included). Of the total population, 1,752 (18.9%) were found to be LTFU. From this group we randomly selected 1,200 clients (68.5%), whose information was retrieved from a database. Data on demographics, CD4 counts, and causes of LTFU were analyzed and summarized. Results: Of the 1,200 LTFU clients selected, 462 (38.5%) were on ART, and 341 of these (73.8%) had a CD4 level < 500 cells/µL; the 738 (61.5%) on pre-ART had CD4 levels > 500 cells/µL. A telephone number was on record for 675 (56.1%) of the clients. The majority of clients, 731 (60.9%), lived no more than 25 km from the ART center. Most were female (926, or 77.2%), while 274 (22.8%) were male. The 675 clients (56.1%) with contact details were traced via telephone and home address; 326 (27.2%) of clients' phone numbers were unreachable, 173 (14.4%) of the numbers were incomplete, and 71 (5.9%) of clients had relocated due to communal crises. Expert client trackers also reported that some patients could not afford transportation to ART centers. Conclusion: This study shows that low health-education levels, poverty, relocation, and lack of reliable phone contact were major predictors of LTFU. Periodic updates of home addresses and telephone contacts, including at least two next of kin, phone text messages, and home visits may improve follow-up. Early and consistent tracking of missed appointments is crucial, and the creation of more decentralized ART centers is needed to avoid long travel distances.
Keywords: anti-retroviral therapy, HIV/AIDS, predictors, lost to follow up
Procedia PDF Downloads 304
1417 Interruption Overload in an Office Environment: Hungarian Survey Focusing on the Factors that Affect Job Satisfaction and Work Efficiency
Authors: Fruzsina Pataki-Bittó, Edit Németh
Abstract:
On the one hand, new technologies and communication tools improve employee productivity and accelerate information and knowledge transfer; on the other hand, information overload and continuous interruptions make it ever harder to concentrate at work. Finding the right balance is a great challenge for companies, which must at the same time recruit and retain talented employees who can adopt the modern work style and use modern communication tools effectively. For this reason, this research does not focus on objective measures of office interruptions but aims to find the disruption factors that influence the comfort and job satisfaction of employees and how they feel at work in general. The focus is on how employees feel about different types of interruptions: which ones they themselves identify as hindering factors, and which they experience as stress factors. By identifying and then reducing these destructive factors, job satisfaction can reach a higher level and employee turnover can be reduced. During the research, we collected information from in-depth interviews and questionnaires covering the work environment, communication channels used in the workplace, individual communication preferences, factors considered disruptive, and individual steps taken to avoid interruptions. The questionnaire was completed by 141 office workers from several types of workplaces based in Hungary. Even though 66 respondents work at Hungarian offices of multinational companies, the research concerns the characteristics of the Hungarian labor force. The most important result shows that while more than one third of respondents consider office noise a disturbing factor, personal inquiries are welcome and considered useful, even though in such cases the work environment is not conducive to tasks requiring concentration. Analyzing office sizes, in open-space environments the share of those who consider office noise a disturbing factor is, surprisingly, lower than in smaller office rooms. Opinions are more diverse regarding information and communication technologies. In addition to the interruption factors affecting employees' job satisfaction, the research also addresses the role of the office in the 21st century.
Keywords: information overload, interruption, job satisfaction, office environment, work efficiency
Procedia PDF Downloads 227
1416 Increased Energy Efficiency and Improved Product Quality in Processing of Lithium Bearing Ores by Applying Fluidized-Bed Calcination Systems
Authors: Edgar Gasafi, Robert Pardemann, Linus Perander
Abstract:
For the production of lithium carbonate or hydroxide from lithium-bearing ores, a thermal activation (calcination/decrepitation) is required to drive the phase transition in the mineral that enables acid or soda leaching, respectively, in the downstream hydrometallurgical section. In this paper, traditional processing in the lithium industry is reviewed, and opportunities to reduce energy consumption and improve product quality and recovery rate are discussed. The conventional process approach is still based on rotary kiln calcination, a technology in use since the early days of lithium ore processing, albeit not significantly developed further since. A newer technology, at least for the lithium industry, is fluidized-bed calcination. The decrepitation of lithium ore was investigated at Outotec's Frankfurt Research Centre. Focusing on fluidized-bed technology, a study of the major process parameters (temperature and residence time) was performed at laboratory and larger bench scale, aiming for optimal product quality for subsequent processing. Technical feasibility was confirmed for the optimal process conditions at pilot scale (400 kg/h feed input), providing the basis for an industrial process design. Based on the experimental results, a comprehensive Aspen Plus flowsheet simulation was developed to quantify the mass and energy flows for the rotary kiln and fluidized-bed systems. The results show a significant reduction in energy consumption and improved process performance in terms of temperature profile, product quality, and plant footprint. The major conclusion is that a substantial reduction in energy consumption can be achieved in processing lithium-bearing ores by using fluidized-bed systems. At the same time, and unlike the rotary kiln process, accurate temperature and residence-time control is ensured in fluidized-bed systems, leading to a homogeneous temperature profile in the reactor that prevents overheating and sintering of the solids and results in uniform product quality.
Keywords: calcination, decrepitation, fluidized bed, lithium, spodumene
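To give a feel for the scale of the duty such a flowsheet model balances, the sketch below estimates the sensible plus transition heat needed to bring the quoted 400 kg/h pilot feed to decrepitation temperature. The temperature, heat capacity, and transition-enthalpy values are rough assumptions chosen for illustration; they are not taken from the paper or from its Aspen Plus model.

```python
# Back-of-envelope heat duty for heating spodumene concentrate to decrepitation
# temperature. All property values below are assumed, for illustration only.
feed_rate_kg_h = 400.0           # pilot-scale feed rate quoted in the abstract
t_in_c, t_out_c = 25.0, 1050.0   # assumed feed and calcination temperatures
cp_kj_per_kg_k = 1.1             # assumed mean heat capacity of the concentrate
dh_transition_kj_per_kg = 30.0   # assumed alpha->beta transition enthalpy

sensible = feed_rate_kg_h * cp_kj_per_kg_k * (t_out_c - t_in_c)  # kJ/h
transition = feed_rate_kg_h * dh_transition_kj_per_kg            # kJ/h
duty_kw = (sensible + transition) / 3600
print(f"Approximate heat duty: {duty_kw:.0f} kW")
```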
Procedia PDF Downloads 230
1415 Climate Change Adaptation in the U.S. Coastal Zone: Data, Policy, and Moving Away from Moral Hazard
Authors: Thomas Ruppert, Shana Jones, J. Scott Pippin
Abstract:
State and federal government agencies in the United States have recently invested substantial resources in studies of future flood risk associated with climate change and sea-level rise. A review of numerous case studies has uncovered several key themes that point to an overall incoherence in current flood risk assessment procedures in the U.S. context. First, there are substantial local differences in the quality of available information about basic infrastructure, particularly local stormwater features and the essential facilities that are fundamental components of effective flood hazard planning and mitigation. Second, there can be a substantial mismatch between the regulatory Flood Insurance Rate Maps (FIRMs) produced by the National Flood Insurance Program (NFIP) and other 'current condition' flood assessment approaches. This is of particular concern in areas where FIRMs already seem to underestimate existing flood risk, a problem that can only be expected to grow if future FIRMs do not appropriately account for changing climate conditions. Moreover, while there are incentives within the NFIP's Community Rating System (CRS) to develop enhanced assessments that include future flood risk projections from climate change, the incentive structures seem to have counterintuitive implications that would tend to promote moral hazard. In particular, a technical finding of higher future risk seems to make it easier for a community to qualify for flood insurance savings, with much of these prospective savings applied to the individual properties that face the greatest physical risk of flooding. However, there is at least some case study evidence that recognition of these issues is prompting broader discussion about the need to move beyond FIRMs as a standalone local flood planning standard. The paper concludes with approaches for developing climate adaptation and flood resilience strategies in the U.S. that move away from the social welfare model applied through the NFIP and toward an informed-risk approach that transfers much of the investment responsibility to individual private property owners.
Keywords: climate change adaptation, flood risk, moral hazard, sea-level rise
Procedia PDF Downloads 108