Search results for: variable step size

2041 Decision-Making Process Based on Game Theory in the Process of Urban Transformation

Authors: Cemil Akcay, Goksun Yerlikaya

Abstract:

Buildings are the living spaces of people and play an active role in every aspect of life in today's world. While some structures have survived from the early ages, most buildings that have completed their lifetime have not been carried over to the present day. Nowadays, buildings that do not meet the social, economic, and safety requirements of the age are returned to life through a transformation process called urban transformation. Urban transformation is the renewal of areas at risk of disaster and of the technological infrastructure required by the structures. The transformation aims to prevent damage from earthquakes and other disasters by rebuilding buildings that have completed their economic life and are not earthquake resistant. In places where most of the building stock must be transformed and which lie in a first-degree earthquake zone, such as Istanbul, it is essential to decide how the conversion and transformation should proceed. In urban transformation, property owners, the local authority, and the contractor must reach an agreement at a common point. Considering that hundreds of thousands of property owners are sometimes located in the transformation areas, it is evident how difficult it is to reach such an agreement and make decisions. To optimize these decisions, the use of game theory is proposed. The main problem addressed in this study is whether the urban transformation is carried out in place or the building or buildings are moved to a different location. The case of the Istanbul University Cerrahpaşa Medical Faculty Campus, which has many stakeholders and is planned to undergo urban transformation, was analyzed using game theory applications. The decisions made in a real urban transformation project, and the logical suitability of decisions taken without the use of game theory, were also examined using game theory. At each step of this study, the many decision-makers were classified according to a specific logical sequence, and in the game trees that emerged from this classification, Nash equilibria were sought and optimum decisions were determined. All decisions taken for this project were subjected to two significantly different comparisons using game theory and against decisions taken without the use of game theory, and based on the results, solutions for the decision phase of the urban transformation process are introduced. The game theory model was developed to cover the urban transformation process from beginning to end, particularly as a solution to the difficulty of making rational decisions in large-scale projects with many participants in the decision-making process. The use of such a decision-making mechanism can provide an optimum answer to the demands of the stakeholders. For the construction sector, it is therefore an unsurprising consequence that game theory will be among the most critical issues for planning and making the right decisions in future years.
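
As an illustration of the kind of calculation the abstract describes, the minimal sketch below computes a subgame-perfect (Nash) outcome of a small sequential game by backward induction over a game tree. The tree structure, players, and payoffs are hypothetical placeholders, not data from the study.

```python
# Backward induction over a small sequential game tree (hypothetical payoffs).
# Leaves hold payoff dicts; internal nodes name the player who moves there.

from typing import Union

Leaf = dict   # e.g. {"owner": 3, "contractor": 2}
Node = dict   # {"player": str, "moves": {action: subtree}}

def backward_induction(tree: Union[Leaf, Node]):
    """Return (payoffs, list_of_actions) chosen by rational players."""
    if "moves" not in tree:              # leaf: nothing left to decide
        return tree, []
    player = tree["player"]
    best = None
    for action, subtree in tree["moves"].items():
        payoffs, path = backward_induction(subtree)
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [action] + path)
    return best

# Hypothetical example: the contractor proposes in-place renewal or relocation,
# then the property owner accepts or rejects.
game = {
    "player": "contractor",
    "moves": {
        "in_place": {
            "player": "owner",
            "moves": {
                "accept": {"owner": 3, "contractor": 2},
                "reject": {"owner": 1, "contractor": 0},
            },
        },
        "relocate": {
            "player": "owner",
            "moves": {
                "accept": {"owner": 2, "contractor": 3},
                "reject": {"owner": 1, "contractor": 0},
            },
        },
    },
}

payoffs, equilibrium_path = backward_induction(game)
print(equilibrium_path, payoffs)   # ['relocate', 'accept'] {'owner': 2, 'contractor': 3}
```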

Keywords: urban transformation, the game theory, decision making, multi-actor project

Procedia PDF Downloads 122
2040 Optimization Based Design of Decelerating Duct for Pumpjets

Authors: Mustafa Sengul, Enes Sahin, Sertac Arslan

Abstract:

Pumpjets are one of the marine propulsion systems frequently used in underwater vehicles nowadays. The reasons for the frequent use of pumpjets as a propulsion system are their higher relative efficiency at high speeds and their better cavitation and acoustic performance compared to rival systems. Pumpjets are composed of a rotor, a stator, and a duct, and there are two different pumpjet configurations depending on the desired hydrodynamic characteristics: with an accelerating or a decelerating duct. Pumpjets with an accelerating duct are used on cargo ships, where they work at low speeds and high loading conditions. The working principle of this type of pumpjet is to maximize thrust by reducing the pressure of the fluid through the duct and expelling the fluid from the duct with high momentum. On the other hand, for decelerating ducted pumpjets, the main consideration is to prevent the occurrence of cavitation by increasing the pressure of the fluid around the rotor region. By postponing cavitation, acoustic noise naturally falls, so decelerating ducted systems are used on noise-sensitive vehicles where acoustic performance is vital. Therefore, duct design becomes a crucial step during pumpjet design. In this study, the aim is to optimize the duct geometry of a decelerating ducted pumpjet for a high-speed underwater vehicle by using proper optimization tools. The target output of this optimization process is a duct design that maximizes fluid pressure around the rotor region to prevent cavitation and minimizes drag force. There are two main optimization techniques that could be utilized for this process: parameter-based optimization and gradient-based optimization. While a parameter-based algorithm allows major changes in the geometry of interest, which helps the user get close to the desired geometry, a gradient-based algorithm deals with minor local changes in the geometry. In parameter-based optimization, the geometry should be parameterized first. Then, by defining upper and lower limits for these parameters, the design space is created. Finally, with a proper optimization code and analysis, the optimum geometry is obtained from this design space. For this duct optimization study, a commercial parameter-based optimization code is used. To parameterize the geometry, the duct is represented with B-spline curves and control points. These control points have limits on their x and y coordinates. By regarding these limits, the design space is generated.
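
The abstract does not give the actual parameterization, but the sketch below shows the general idea of representing a duct profile with a B-spline defined by control points whose x/y coordinates are bounded, which is how a design space for parameter-based optimization is typically built. The control-point values, bounds, and spline degree are illustrative assumptions, not the study's geometry.

```python
# B-spline representation of a 2-D duct profile from bounded control points
# (illustrative values only; not the geometry used in the study).
import numpy as np
from scipy.interpolate import BSpline

degree = 3
# Hypothetical control points (x, y) of the duct section profile.
ctrl = np.array([[0.0, 0.00],
                 [0.2, 0.05],
                 [0.4, 0.08],
                 [0.6, 0.07],
                 [0.8, 0.03],
                 [1.0, 0.00]])

# Upper/lower bounds on each control-point coordinate define the design space.
lower = ctrl - 0.02
upper = ctrl + 0.02

# Clamped knot vector for n control points and the chosen degree.
n = len(ctrl)
knots = np.concatenate(([0.0] * degree,
                        np.linspace(0.0, 1.0, n - degree + 1),
                        [1.0] * degree))

curve = BSpline(knots, ctrl, degree)
u = np.linspace(0.0, 1.0, 200)
profile = curve(u)                     # 200 (x, y) points along the duct profile

# An optimizer would perturb `ctrl` within [lower, upper], rebuild the spline,
# and evaluate pressure/drag objectives for each candidate geometry.
print(profile.shape)                   # (200, 2)
```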

Keywords: pumpjet, decelerating duct design, optimization, underwater vehicles, cavitation, drag minimization

Procedia PDF Downloads 186
2039 Consequences of Transformation of Modern Monetary Policy during the Global Financial Crisis

Authors: Aleksandra Szunke

Abstract:

Monetary policy is an important pillar of the economy, directly affecting the condition of the banking sector. Depending on the strategy, it may both support the functioning of banking institutions and limit their excessively risky activities. The literature contains a large number of publications characterizing the initiatives implemented by central banks during the global financial crisis and the potential effects of using non-standard monetary policy instruments. However, the empirical evidence about their effects and real consequences for the financial markets is still not conclusive. Even before the escalation of instability, Bernanke, Reinhart, and Sack (2004) analyzed the effectiveness of various unconventional monetary tools in lowering long-term interest rates in the United States and Japan. The obtained results largely confirmed the effectiveness of the zero-interest-rate policy and Quantitative Easing (QE) in achieving the goal of reducing long-term interest rates. Japan, considered the precursor of QE policy, also conducted research on the consequences of the non-standard instruments implemented to restore the country's financial stability. Although the literature on the effectiveness of Quantitative Easing in Japan is extensive, it does not clearly establish whether it brought permanent effects. The main aim of the study is to identify the implications of the non-standard monetary policy implemented by selected central banks (the Federal Reserve System, the Bank of England, and the European Central Bank), paying particular attention to the consequences in three areas: the size of the money supply, financial markets, and the real economy.

Keywords: consequences of modern monetary policy, quantitative easing policy, banking sector instability, global financial crisis

Procedia PDF Downloads 459
2038 Written Argumentative Texts in Elementary School: The Development of Text Structure and Its Relation to Reading Comprehension

Authors: Sara Zadunaisky Ehrlich, Batia Seroussi, Anat Stavans

Abstract:

Text structure is a parameter of text quality. This study investigated the structure of written argumentative texts produced by elementary school-age children. We set two objectives: to identify and trace the structural components of the argumentative texts and to investigate whether reading comprehension skills were correlated with text structure. A total of 293 schoolchildren from 2nd to 5th grade were asked to write two argumentative texts about informal or everyday-life controversial topics and completed two reading tasks that targeted different levels of text comprehension. The findings indicated, on the one hand, significant developmental differences between mature and more novice writers in terms of text length and the mean proportion of clauses produced to elaborate the different text components. On the other hand, with certain fluctuations, no meaningful differences were found in terms of the presence of text structure: at all grade levels, elementary school children produced the basic and minimal structure, which included the writer's argument and reasons or supports for the argument. Counter-arguments were scarce even in the upper grades. While the children grasped that an argument must essentially be justified, the more supports they produced, the fewer clauses they produced. Last, weak to mild relations were found between reading comprehension and argumentative text structure. Nevertheless, children who scored higher on sophisticated questions that require inferential or world knowledge displayed more elaborated structures in terms of text length and the number of supports for the writer's argument. These findings indicate how school-age children perceive the basic template of an argument, with future implications regarding how to elaborate written arguments.

Keywords: argumentative text, text structure, elementary school children, written argumentations

Procedia PDF Downloads 151
2037 Integration Process and Analytic Interface of Different Environmental Open Data Sets with Java/Oracle and R

Authors: Pavel H. Llamocca, Victoria Lopez

Abstract:

The main objective of our work is the comparative analysis of environmental data from Open Data bases belonging to different governments. This means integrating data from various sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, and the number of applications based on Open Data is therefore increasing. However, each government has its own procedures for publishing its data, which results in a variety of data set formats because there are no international standards specifying the formats of data sets in Open Data bases. Due to this variety of formats, we must build a data integration process that is able to bring together all kinds of formats. Some software tools have been developed to support the integration process, e.g. Data Tamer and Data Wrangler. The problem with these tools is that they need a data scientist to take part in the integration process as a final step. In our case, we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speed, etc. For the past two years, the government of Madrid has been publishing its Open Data bases on environmental indicators in real time. In the same way, other governments have published Open Data sets related to the environment (such as Andalucia or Bilbao). However, all of those data sets have different formats, and our solution is able to integrate all of them; furthermore, it allows the user to perform and visualize some analyses over the real-time data. Once the integration task is done, all the data from any government have the same format, and the analysis process can be initiated in a computationally better way. So the tool presented in this work has two goals: 1. the integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle and the graphic and analytic interface with Java (JSP). However, in order to open our software tool, as a second approach we also developed an implementation in the R language as a mature open-source technology. R is a really powerful open-source programming language that allows us to process and analyze a huge amount of data with high performance. There are also R libraries for building a graphic interface, such as Shiny. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides any developer with an official real-time integrated data set of environmental data in Spain so that they can build their own applications.
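
As a small illustration of the kind of format harmonization the abstract describes (not the authors' Java/Oracle, Hadoop, or R code), the sketch below maps two hypothetical open-data feeds with different column names and formats onto a single common schema using pandas; the file names and column mappings are invented for the example.

```python
# Harmonizing two heterogeneous open-data feeds into one common schema
# (hypothetical files and column names, for illustration only).
import pandas as pd

COMMON_COLUMNS = ["city", "timestamp", "indicator", "value"]

def load_madrid_csv(path: str) -> pd.DataFrame:
    # Hypothetical CSV feed with columns FECHA;MAGNITUD;VALOR
    df = pd.read_csv(path, sep=";")
    df = df.rename(columns={"FECHA": "timestamp",
                            "MAGNITUD": "indicator",
                            "VALOR": "value"})
    df["city"] = "Madrid"
    return df[COMMON_COLUMNS]

def load_bilbao_json(path: str) -> pd.DataFrame:
    # Hypothetical JSON feed: records with date / measure / reading keys
    df = pd.read_json(path)
    df = df.rename(columns={"date": "timestamp",
                            "measure": "indicator",
                            "reading": "value"})
    df["city"] = "Bilbao"
    return df[COMMON_COLUMNS]

def integrate(paths_madrid, paths_bilbao) -> pd.DataFrame:
    frames = [load_madrid_csv(p) for p in paths_madrid]
    frames += [load_bilbao_json(p) for p in paths_bilbao]
    merged = pd.concat(frames, ignore_index=True)
    merged["timestamp"] = pd.to_datetime(merged["timestamp"])
    return merged.sort_values("timestamp")
```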

Keywords: open data, R language, data integration, environmental data

Procedia PDF Downloads 298
2036 Graduates' Construction of Knowledge and Ability to Act on Employable Opportunities

Authors: Martabolette Stecher

Abstract:

Introduction: How are knowledge of and the ability to act on employable opportunities constructed among students and graduates of higher education? This question has drawn much attention from researchers, governments, and universities in Denmark, since there has been an increase in the rate of unemployment among graduates of higher education. The fact that more than ten thousand graduates of higher education are currently without the opportunity to get a job has a tremendous impact on the Danish economy. Every time a student graduates from higher education and becomes unemployed, the effect on that person's chances of getting a job can be traced many years ahead. This means that the tremendous rate of graduate unemployment implies a decrease in employment and lost prosperity in Denmark on the scale of billions of Danish kroner. Basic methodology: The present study investigates the construction of knowledge of and the ability to act on employable opportunities among students and graduates of higher education in Denmark through a literature review as well as a preliminary study of students from Aarhus University. Fifteen students from the candidate (Master's) program in drama took part in an introductory program at the beginning of their candidate studies, which included three workshops focusing on the more personal matters of their studies and lives. They reflected on this process during the intervention and afterwards in a semi-structured interview. Concurrently, a thorough literature review delivered key concepts for exploring the research question. Major findings of the study: It is difficult to find a single definition of what employability encompasses; hence it is difficult to form an overall picture of how to incorporate the concept. Existing theory of employability has focused on the competencies that students and graduates should develop in order to become employable. In recent years, there has been an emphasis on the mechanisms that support graduates in trusting themselves and developing their self-efficacy in terms of getting a sustainable job. However, there has been little or no focus in the literature on how students and graduates of higher education construct knowledge about, and the ability to act on, employable opportunities through networks of actors, both material and immaterial, and meaningful relations, in developing the enterprising behaviour needed to achieve employment. Actor-network theory combined with theory of entrepreneurship education suggests an alternative strategy for explaining sustainable ways of creating employability among graduates. The preliminary study also supports this theory, suggesting that it is difficult to single out one or several factors of importance, rather highlighting the effect of a multitude of networks. Concluding statement: This study is the first step of a PhD study investigating this problem in Denmark and the USA in the period 2015-2019.

Keywords: employability, graduates, action, opportunities

Procedia PDF Downloads 179
2035 Tryptophan and Its Derivative Oxidation by Heme-Dioxygenase Enzyme

Authors: Ali Bahri Lubis

Abstract:

Tryptophan oxidation by the heme-dioxygenase enzyme is an important initial step in the kynurenine pathway, which is implicated in several severe diseases such as Parkinson's disease, Huntington's disease, poliomyelitis, and cataract. It is crucial to comprehend the oxidation mechanism in the hope of finding decent treatments for the abovementioned diseases. The mechanism has remained debatable, since no one has yet proven it conclusively. In this research we attempted to establish the mechanistic steps of tryptophan oxidation via human indoleamine dioxygenase (h-IDO) using various substrates: L-tryptophan, L-tryptophan (indole-ring-2-13C), fully 13C-labelled L-tryptophan, L-N-methyl-tryptophan, and 2-amino-3-(benzo(b)thiophene-3-yl) propanoic acid. All enzyme assay experiments were measured using a UV-Vis spectrophotometer, LC-MS, 1H-NMR, and HSQC. We also successfully synthesized the enzyme products as controls for the NMR measurements. The results showed that the distinct substrates produced N-formyl kynurenine (NFK) and hydroxypyrrolloindoleamine carboxylate acid (HPIC) in different concentrations and as different isomers, consistent with the proposed reaction mechanism.

Keywords: heme-dioxygenase enzyme, tryptophan oxidation, kynurenine pathway, n-formyl kynurenine

Procedia PDF Downloads 62
2034 Downtime Estimation of Building Structures Using Fuzzy Logic

Authors: M. De Iuliis, O. Kammouh, G. P. Cimellaro, S. Tesfamariam

Abstract:

Community resilience has gained significant attention due to recent unexpected natural and man-made disasters. Resilience is the process of maintaining livable conditions in the event of interruptions in normally available services. Estimating the resilience of systems, ranging from individuals to communities, is a formidable task due to the complexity involved in the process. The most challenging parameter involved in the resilience assessment is the 'downtime'. Downtime is the time needed for a system to recover its services following a disaster event. Estimating the exact downtime of a system requires many inputs and resources that are not always obtainable. The uncertainties in the downtime estimation are usually handled using probabilistic methods, which necessitate acquiring large amounts of historical data. The estimation process also involves ignorance, imprecision, vagueness, and subjective judgment. In this paper, a fuzzy-based approach to estimate the downtime of building structures following earthquake events is proposed. Fuzzy logic can integrate descriptive (linguistic) knowledge and numerical data into the fuzzy system. This ability allows the use of walk-down surveys, which collect data in a linguistic or a numerical form. The use of fuzzy logic permits a fast and economical estimation of parameters that involve uncertainties. The first step of the method is to determine the building's vulnerability. A rapid visual screening is designed to acquire information about the analyzed building (e.g. year of construction, structural system, site seismicity, etc.). Then, fuzzy logic is implemented using a hierarchical scheme to determine the building damageability, which is the main ingredient for estimating the downtime. Generally, the downtime can be divided into three main components: downtime due to the actual damage (DT1); downtime caused by rational and irrational delays (DT2); and downtime due to utilities disruption (DT3). In this work, DT1 is computed by relating the building damageability results obtained from the visual screening to already-defined component repair times available in the literature. DT2 and DT3 are estimated using the REDi Guidelines. The downtime of the building is finally obtained by combining the three components. The proposed method also allows identifying the downtime corresponding to each of the three recovery states: re-occupancy, functional recovery, and full recovery. Future work is aimed at improving the current methodology to pass from the downtime to the resilience of buildings. This will provide a simple tool that can be used by the authorities for decision making.
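
A minimal sketch of the kind of fuzzy inference described above, using simple membership functions, two illustrative Mamdani rules, and centroid defuzzification; the linguistic terms, rule base, and numeric ranges are assumptions for illustration and are not the rule base used in the paper.

```python
# Toy fuzzy estimate of building damageability from two screening inputs
# (membership functions and rules are illustrative assumptions).
import numpy as np

def mf_high(x, a, b):
    """Membership in 'high': 0 below a, 1 above b, linear in between."""
    return np.clip((x - a) / (b - a), 0.0, 1.0)

def mf_low(x, a, b):
    """Membership in 'low': 1 below a, 0 above b, linear in between."""
    return np.clip((b - x) / (b - a), 0.0, 1.0)

# Crisp inputs from a hypothetical rapid visual screening (0-10 scales).
vulnerability = 7.0        # structural vulnerability score
seismicity = 8.0           # site seismicity score

vuln_high, vuln_low = mf_high(vulnerability, 3, 8), mf_low(vulnerability, 2, 7)
seis_high, seis_low = mf_high(seismicity, 3, 8), mf_low(seismicity, 2, 7)

# Output universe: damageability index in [0, 1] with two fuzzy sets.
d = np.linspace(0.0, 1.0, 201)
dmg_low, dmg_high = mf_low(d, 0.2, 0.6), mf_high(d, 0.4, 0.8)

# Mamdani rules (AND = min):
#   IF vulnerability is high AND seismicity is high THEN damageability is high
#   IF vulnerability is low  AND seismicity is low  THEN damageability is low
act_high = np.minimum(min(vuln_high, seis_high), dmg_high)
act_low = np.minimum(min(vuln_low, seis_low), dmg_low)

aggregated = np.maximum(act_high, act_low)
damageability = np.trapz(aggregated * d, d) / np.trapz(aggregated, d)  # centroid
print(round(float(damageability), 3))
```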

Keywords: resilience, restoration, downtime, community resilience, fuzzy logic, recovery, damage, built environment

Procedia PDF Downloads 151
2033 Frequency of Alloimmunization in Sickle Cell Disease Patients in Africa: A Systematic Review with Meta-analysis

Authors: Theresa Ukamaka Nwagha, Angela Ogechukwu Ugwu, Martins Nweke

Abstract:

Background and Objectives: Blood transfusion is an effective and proven treatment for some severe complications of sickle cell disease. Recurrent transfusions have put patients with sickle cell disease at risk of developing antibodies against the various antigens they are exposed to. This study aims to investigate the frequency of red blood cell alloimmunization in patients with sickle cell disease in Africa. Materials and Methods: This is a systematic review of peer-reviewed literature published in English. The review was conducted in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist. Data sources for the review include MEDLINE, PubMed, CINAHL, and Academic Search Complete. Included in this review are articles that reported the frequency/prevalence of red blood cell alloimmunization in sickle cell disease patients in Africa. Eligible studies were subjected to independent full-text screening and data extraction. Risk-of-bias assessment was conducted with the aid of the mixed-methods appraisal tool. We employed a random-effects model of meta-analysis to estimate the pooled prevalence. We computed Cochran's Q statistic, I², and the prediction interval to quantify heterogeneity in effect size. Results: The prevalence estimates ranged from 2.6% to 29%. The pooled prevalence was estimated to be 10.4% (CI 7.7-13.8%; PI 3.0-34.0%), with significant heterogeneity (I² = 84.62; PI = 2.0-32.0%) and publication bias (Egger's t-test = 1.744, p = 0.0965). Conclusion: The frequency of red cell alloantibodies varies considerably in Africa. The alloantibodies most frequently appeared in this order: Rhesus, Kell, Lewis, Duffy, MNS, and Lutheran.
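
To make the pooling step concrete, the sketch below applies a DerSimonian-Laird random-effects model to logit-transformed prevalences, a standard way of pooling proportions; the per-study counts are invented placeholders, not the studies included in the review.

```python
# DerSimonian-Laird random-effects pooling of prevalences (illustrative data).
import numpy as np

# Hypothetical per-study data: number alloimmunized / number of patients.
events = np.array([12, 30, 8, 25, 14])
totals = np.array([150, 210, 300, 120, 180])

p = events / totals
y = np.log(p / (1 - p))                       # logit-transformed prevalence
v = 1 / events + 1 / (totals - events)        # approximate variance of the logit

w = 1 / v                                     # fixed-effect weights
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q
k = len(y)
I2 = max(0.0, (Q - (k - 1)) / Q) * 100        # heterogeneity (%)

tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (v + tau2)                         # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))

pooled = 1 / (1 + np.exp(-y_re))              # back-transform to a proportion
ci = 1 / (1 + np.exp(-(y_re + np.array([-1.96, 1.96]) * se_re)))
print(f"pooled prevalence = {pooled:.3f}, 95% CI = {ci.round(3)}, I2 = {I2:.1f}%")
```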

Keywords: frequency, red blood cell, alloimmunization, sickle cell disease, Africa

Procedia PDF Downloads 80
2032 Comparison of Safety and Efficacy between Thulium Fibre Laser and Holmium YAG Laser for Retrograde Intrarenal Surgery

Authors: Sujeet Poudyal

Abstract:

Introduction: While the Holmium:yttrium-aluminum-garnet (Ho:YAG) laser has revolutionized the management of urolithiasis, the introduction of the Thulium fibre laser (TFL) has already challenged the Ho:YAG laser due to its multiple commendable properties. Nevertheless, there are only a few studies comparing TFL and the holmium laser in Retrograde Intrarenal Surgery (RIRS). Therefore, this study was carried out to compare the efficacy and safety of the thulium fiber laser (TFL) and the holmium laser in RIRS. Methods: This prospective comparative study, which included all patients undergoing laser lithotripsy (RIRS) for proximal ureteric calculus and nephrolithiasis from March 2022 to March 2023, consisted of 63 patients in the Ho:YAG laser group and 65 patients in the TFL group. Stone-free rate, operative time, laser utilization time, energy used, and complications were analysed between the two groups. Results: Mean stone size was comparable in the TFL (14.23±4.1 mm) and Ho:YAG (13.88±3.28 mm) groups (p = 0.48). Similarly, mean stone density in the TFL group (1269±262 HU) was comparable to Ho:YAG (1189±212 HU), p = 0.48. There was a significant difference in lasing time between TFL (12.69±7.41 mins) and Ho:YAG (20.44±14 mins), p = 0.012. The TFL group had an operative time of 43.47±16.8 mins, which was shorter than that of the Ho:YAG group (58±26.3 mins), p = 0.005. Both the TFL and Ho:YAG groups had comparable total energy used (11.4±6.2 vs 12±8, respectively, p = 0.758). The stone-free rate was 87% for TFL, whereas it was 79.5% for Ho:YAG (p = 0.25). Two cases of sepsis and one ureteric stricture were encountered in the TFL group, whereas three cases suffered from sepsis apart from one ureteric stricture in the Ho:YAG group (p = 0.62). Conclusion: The Thulium fibre laser has similar efficacy to the Holmium:YAG laser in terms of safety and stone-free rate. However, due to the better stone ablation rate of TFL, it may become a game changer in the management of urolithiasis in the coming days.
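
The sketch below shows how such a between-group comparison can be run directly from the reported summary statistics (here, lasing time) using scipy. The choice of Welch's unequal-variance t-test is an assumption, since the abstract does not state which test was used, so the resulting p-value need not match the reported one.

```python
# Two-sample comparison of lasing time from reported summary statistics.
from scipy.stats import ttest_ind_from_stats

# TFL: 12.69 +/- 7.41 min (n = 65); Ho:YAG: 20.44 +/- 14 min (n = 63).
t_stat, p_value = ttest_ind_from_stats(
    mean1=12.69, std1=7.41, nobs1=65,
    mean2=20.44, std2=14.0, nobs2=63,
    equal_var=False,          # Welch's t-test (assumption)
)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```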

Keywords: retrograde intrarenal surgery, thulium fibre laser, holmium:yttrium-aluminum-garnet (ho:yag) laser, nephrolithiasis

Procedia PDF Downloads 56
2031 Genetics of Atopic Dermatitis: Role of Cytokines Genes Polymorphisms

Authors: Ghaleb Bin Huraib, Fahad Al Harthi, Misbahul Arfin, Abdulrahman Al-Asmari

Abstract:

Atopic dermatitis (AD), also known as atopic eczema, is a chronic inflammatory skin disease characterized by severe itching and recurrent relapsing eczema-like skin lesions, affecting up to 20% of children and 10% of adults in industrialized countries. AD is a complex multifactorial disease, and its exact etiology and pathogenesis have not been fully elucidated. The aim of this study was to investigate the impact of gene polymorphisms of the T helper cell subtype Th1 and Th2 cytokines interferon-gamma (IFN-γ), interleukin-6 (IL-6), and transforming growth factor (TGF)-β1 on AD susceptibility in a Saudi cohort. One hundred and four unrelated patients with AD and 195 healthy controls were genotyped for the IFN-γ (874A/T), IL-6 (174G/C), and TGF-β1 (509C/T) polymorphisms using ARMS-PCR and PCR-RFLP techniques. The frequencies of genotypes AA and AT of IFN-γ (874A/T) differed significantly between patients and controls (P = 0.001). The genotype AT was increased while genotype AA was decreased in AD patients as compared to controls. AD patients also had a higher frequency of T-containing genotypes (AT+TT) than controls (P = 0.001). The frequencies of alleles T and A were statistically different between patients and controls (P = 0.04). The frequencies of genotype GG and allele G of IL-6 (174G/C) were significantly higher, while genotype GC and allele C were lower, in AD patients than in controls. There was no significant difference in the frequencies of alleles and genotypes of the TGF-β1 (509C/T) polymorphism between the patient and control groups. These results show that susceptibility to AD is influenced by the presence or absence of genotypes of the IFN-γ (874A/T) and IL-6 (174G/C) polymorphisms. It is concluded that the T allele and T-containing genotypes (AT+TT) of IFN-γ (874A/T) and the G allele and GG genotype of IL-6 (174G/C) polymorphisms confer susceptibility to AD in Saudis. On the other hand, the TGF-β1 (509C/T) polymorphism may not be associated with AD risk in the Saudi population; however, further studies with larger sample sizes are required to confirm these findings.
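
For readers who want to see how such a case-control genotype comparison is typically computed, the sketch below runs a chi-square test of association and an allele-level odds ratio on an invented genotype table; the counts are placeholders, not the study's data.

```python
# Case-control association test for a biallelic polymorphism
# (genotype counts below are invented placeholders).
import numpy as np
from scipy.stats import chi2_contingency

# Rows: cases, controls; columns: genotype counts for AA, AT, TT.
genotypes = np.array([[30, 55, 19],    # AD patients  (n = 104)
                      [90, 80, 25]])   # controls     (n = 195)

chi2, p_genotype, dof, _ = chi2_contingency(genotypes)

# Allele counts: each AA contributes two A alleles, each AT one A and one T, etc.
a_alleles = 2 * genotypes[:, 0] + genotypes[:, 1]
t_alleles = 2 * genotypes[:, 2] + genotypes[:, 1]
allele_table = np.column_stack([a_alleles, t_alleles])
chi2_a, p_allele, _, _ = chi2_contingency(allele_table)

# Odds ratio for carrying the T allele (cases vs controls).
odds_ratio = (t_alleles[0] * a_alleles[1]) / (t_alleles[1] * a_alleles[0])
print(f"genotype test p = {p_genotype:.4f}, allele test p = {p_allele:.4f}, "
      f"T-allele OR = {odds_ratio:.2f}")
```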

Keywords: atopic dermatitis, interferon-γ, interleukin-6, transforming growth factor-β1, polymorphism

Procedia PDF Downloads 101
2030 Assessment of Physical and Mechanical Properties of Perlite Mortars with Recycled Cement

Authors: Saca Nastasia, Radu Lidia, Dobre Daniela, Calotă Razvan

Abstract:

In order to achieve the European Union's sustainable and circular economy goals, strategies for reducing raw material consumption, reusing waste, and lowering CO₂ emissions have been developed. In this study, expanded perlite mortars with recycled cement (RC) were obtained and characterized. The recycled cement was obtained from demolition concrete waste. The concrete waste was crushed in a jaw crusher and ground in a horizontal ball mill to reduce the material's average grain size; finally, the fine particles were sieved through a 125 µm sieve. The recycled cement was prepared by heating the demolition concrete waste at 550°C for 3 hours; at this temperature, decarbonization does not occur. The utilization of recycled cement can minimize the negative environmental effects of demolished concrete landfills as well as the demand for natural resources used in cement manufacturing. Commercial cement CEM II/A-LL 42.5R was substituted with 10%, 20%, and 30% recycled cement. By substituting the reference cement (CEM II/A-LL 42.5R) with RC, a decrease in the pH, electrical conductivity, and Ca²⁺ concentration of the cement aqueous suspension was observed for all measurements (2 hours, 6 hours, 24 hours, 4 days, and 7 days). After 2 hours, the pH value was 12.42 for the reference, with a conductivity of 2220 µS/cm, and these decreased to 12.27 and 1570 µS/cm, respectively, for 30% RC. The concentration of Ca²⁺ estimated by complexometric titration was 20% lower in the suspension with 30% RC in comparison to the reference at 2 hours. The difference diminishes significantly over time. The mortars have a cement:expanded perlite volume ratio of 1:3 and a consistency between 140 mm and 200 mm. The density of the fresh mortar was about 1400 kg/m³. The density, flexural and compressive strengths, water absorption, and thermal conductivity of the hardened mortars were tested. Due to its properties, expanded perlite mortar is a good thermal insulation material.

Keywords: concrete waste, expanded perlite, mortar, recycled cement, thermal conductivity, mechanical strength

Procedia PDF Downloads 71
2029 Development of a Process Method to Manufacture Spreads from Powder Hardstock

Authors: Phakamani Xaba, Robert Huberts, Bilainu Oboirien

Abstract:

It has been over 150 years since margarine was first discovered and manufactured using liquid oil, liquefied hardstock oils, and other oil-phase and aqueous-phase ingredients. Henry W. Bradley first used vegetable oils in the liquid state around 1871, and since then spreads have traditionally been manufactured using liquefied oils. The main objective of this study was to develop a process method to produce spreads using spray-dried hardstock fat powders as structuring fats in place of the current liquid structuring fats. A high-shear mixing system was used to condition the fat phase, and the aqueous phase was prepared separately. Using a single scraped surface heat exchanger and a pin stirrer, margarine was produced. The process method was developed to produce spreads with 40%, 50%, and 60% fat. The developed method was divided into three steps. In the first step, fat powders were conditioned by melting and dissolving them into liquid oils. The liquefied portion of the oils was at 65°C, whilst the spray-dried fat powder was at 25°C. The two were mixed in a mixing vessel at 900 rpm for 4 minutes. The rest of the ingredients, i.e., lecithin, colorant, vitamins, and flavours, were added at ambient conditions to complete the fat/oil phase. The water phase was prepared separately by mixing salt, water, preservative, and acidifier in the mixing tank. Milk was also prepared separately by pasteurizing it at 79°C prior to feeding it into the aqueous phase. All the water-phase contents were chilled to 8°C. The oil phase and water phase were mixed in a tank, then fed into a single scraped surface heat exchanger. After the scraped surface heat exchanger, the emulsion was fed into a pin stirrer to work the formed crystals and produce margarine. The margarine produced using the developed process had fat levels of 40%, 50%, and 60%. The margarine passed all the qualitative, stability, and taste assessments, with scores of 6/10, 7/10, and 7.5/10 for the 40%, 50%, and 60% fat spreads, respectively. The success of the trials brought differentiated knowledge on how to manufacture spreads using non-micronized spray-dried fat powders as hardstock. Manufacturers do not need to store structuring fats at 80-90°C, or even higher in winter; instead, they can adapt their processes to use fat powders, which need to be stored at 25°C. The developed process method used one scraped surface heat exchanger instead of the four to five currently used in votator-based plants. The use of a single scraped surface heat exchanger translated to about 61% energy savings, i.e., 23 kW per ton of product. Furthermore, the energy saved by implementing separate pasteurization was calculated to be 6.5 kW per ton of product produced.

Keywords: margarine emulsion, votator technology, margarine processing, scraped surface heat exchanger, fat powders

Procedia PDF Downloads 74
2028 Lumbar Punctures: Re-Audit of Procedure Documentation Following the Introduction of a Standardised Procedure Checklist

Authors: Hayley Lawrence, Nabi Shah, Sarah Dyer

Abstract:

Aims: Lumbar punctures are a common bedside procedure performed in acute medicine. Published guidance exists on the standardised documentation of invasive procedures in order to reduce the risk of complications. The audit aim was to assess current standards of documentation in accordance with both the GMC and the National Standards for Invasive Procedures guidelines. A second cycle was conducted after introducing a standardised sticker created using current guidelines. This would assess whether the sticker improved documentation, aiming for 100% standard in each step of the procedure. Methods: An initial prospective audit of current practice was conducted over a 3-month period. Patients were identified by their presenting complaints and by colleagues assessing acute medical patients. Initial findings were presented locally, and a further prospective audit was conducted following the implementation of a standardised sticker. Results: 19 lumbar punctures were included in the first cycle and 13 procedures in the second. Pre-procedure documentation was collected for each cycle, whereby documentation of ‘Indication’ improved from 5.3% to 84.6%, ‘Consent’ from 84.2% to 100%, ‘Coagulopathy’ from 0% to 61.5%, ‘Drug Chart checked’ from 0% to 100%, ‘Position of patient’ from 26.3% to 100% and use of ‘Aseptic Technique’ from 83.3% to 100% from the first to the second cycle respectively. ‘Level of Doctor’ and ‘Supervision’ decreased from 53% to 31% and 53% to 46%, respectively, in the second cycle. Documentation of the procedure itself also demonstrated improvements, with ‘Level of Insertion’ 15.8% to 100%, ‘Name of Antiseptic Used’ 11.1% to 69.2%, ‘Local Anaesthetic Used’ 26.3% to 53.8%, ‘Needle Gauge’ 42.1% to 76.9%, ‘Number of Attempts’ 78.9% to 100% and ‘Traumatic/Atraumatic’ procedure 26.3% to 92.3%, respectively. A similar number of opening pressures were documented in each cycle at 57.9% and 53.8%, respectively, but its documentation was deemed ‘Not Applicable’ in a higher number of patients in the second cycle. Post-procedure documentation improved, with ‘Number of Samples obtained’ increasing from 52.6% to 92.3% and documentation of ‘Immediate Complications’ increasing from 78.9% to 100%. ‘Dressing Applied’ was poorly documented in the first cycle at 16.7%. This was not included on the standardised sticker, resulting in 0% documentation in the second cycle. Documentation of Clinicians’ Name and Bleep reduced from 63.2% to 15.4%, but when the name only was analysed, this increased to 84.6%. Conclusions: Standardised stickers for lumbar punctures do improve documentation and hence should result in improved patient safety. There is still room for improvement to reach 100% standard in each area, especially with respect to the clinician’s name and contact details being documented. Final adjustments will be made to the sticker before being included in a lumbar puncture kit, which will be made readily available in the acute medical wards. Future audits could be extended to include other common bedside procedures performed in acute medicine to ensure documentation of all these procedures reaches 100% standard.

Keywords: invasive procedure, lumbar puncture, medical record keeping, procedure checklist, procedure documentation, standardised documentation

Procedia PDF Downloads 74
2027 Assessing Acute Toxicity and Endocrine Disruption Potential of Selected Packages' Internal Layer Extracts

Authors: N. Szczepanska, B. Kudlak, G. Yotova, S. Tsakovski, J. Namiesnik

Abstract:

In the scientific literature related to the widely understood issue of packaging materials designed to have contact with food (food contact materials), there is much information on the raw materials used for their production, as well as their physicochemical properties, types, and parameters. However, not much attention is given to the migration of toxic substances from packaging and its actual influence on the health of the final consumer, even though health protection and food safety are priority tasks. The goal of this study was to estimate the impact of the particular foodstuff packaging type, food production, and storage conditions on the degree of leaching of potentially toxic compounds and endocrine disruptors into foodstuffs, using the Microtox acute toxicity test and the XenoScreen YES/YAS assay. The selected foodstuff packaging materials were metal cans used for fish storage and Tetra Pak cartons. Five simulants corresponding to specific kinds of food were chosen in order to assess global migration: distilled water for aqueous foods with a pH above 4.5; acetic acid at 3% in distilled water for acidic aqueous foods with a pH below 4.5; ethanol at 5% for any food that may contain alcohol; and dimethyl sulfoxide (DMSO) and artificial saliva, used in view of the possibility of employing them as simulation media. For each packaging material, a factorial design over the independent variables temperature and contact time was performed with each simulant. Xenobiotic migration from epoxy resins was studied at three different temperatures (25°C, 65°C, and 121°C) and extraction times of 12 h, 48 h, and 2 weeks. Such an experimental design leads to 9 experiments for each food simulant, as the conditions for each experiment are obtained by combining the temperature and contact-time levels. Each experiment was run in triplicate for acute toxicity and in duplicate for estrogen disruption potential determination. Multi-factor analysis of variance (MANOVA) was used to evaluate the effects of the three main factors, i.e., solvent, temperature (temperature regime for the cup), and contact time, and their interactions on the respective dependent variable (acute toxicity or estrogen disruption potential). Of all the simulants studied, the most toxic were the can and Tetra Pak lining acetic acid extracts, which indicates significant migration of toxic compounds. This migration increased with increasing contact time and temperature and supports the hypothesis that food products with low pH values cause significant damage to the internal resin lining. Can lining extracts in all simulation media excluding distilled water and artificial saliva proved to contain androgen agonists even at 25°C and an extraction time of 12 h. For Tetra Pak extracts, significant endocrine disruption potential was detected for acetic acid, DMSO, and saliva.
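
The abstract reports a multi-factor analysis of variance over solvent, temperature, and contact time; the sketch below shows how such a factorial analysis can be run for one response (acute toxicity) with statsmodels, using an invented data-frame layout since the raw data are not given.

```python
# Three-factor factorial ANOVA for one response variable
# (file name, column names, and data layout are assumed for illustration).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format results table: one row per replicate measurement,
# with columns: solvent, temperature, time_h, toxicity.
df = pd.read_csv("migration_toxicity.csv")

model = smf.ols(
    "toxicity ~ C(solvent) * C(temperature) * C(time_h)",
    data=df,
).fit()

anova_table = anova_lm(model, typ=2)          # Type II sums of squares
print(anova_table)
```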

Keywords: food packaging, extraction, migration, toxicity, biotest

Procedia PDF Downloads 166
2026 Performance Evaluation of Polyethyleneimine/Polyethylene Glycol Functionalized Reduced Graphene Oxide Membranes for Water Desalination via Forward Osmosis

Authors: Mohamed Edokali, Robert Menzel, David Harbottle, Ali Hassanpour

Abstract:

Forward osmosis (FO) process has stood out as an energy-efficient technology for water desalination and purification, although the practical application of FO for desalination still relies on RO-based Thin Film Composite (TFC) and Cellulose Triacetate (CTA) polymeric membranes which have a low performance. Recently, graphene oxide (GO) laminated membranes have been considered an ideal selection to overcome the bottleneck of the FO-polymeric membranes owing to their simple fabrication procedures, controllable thickness and pore size and high water permeability rates. However, the low stability of GO laminates in wet and harsh environments is still problematic. The recent developments of modified GO and hydrophobic reduced graphene oxide (rGO) membranes for FO desalination have demonstrated attempts to overcome the ongoing trade-off between desalination performance and stability, which is yet to be achieved prior to the practical implementation. In this study, acid-functionalized GO nanosheets cooperatively reduced and crosslinked by the hyperbranched polyethyleneimine (PEI) and polyethylene glycol (PEG) polymers, respectively, are applied for fabrication of the FO membrane, to enhance the membrane stability and performance, and compared with other functionalized rGO-FO membranes. PEI/PEG doped rGO membrane retained two compacted d-spacings (0.7 and 0.31 nm) compared to the acid-functionalized GO membrane alone (0.82 nm). Besides increasing the hydrophilicity, the coating layer of PEG onto the PEI-doped rGO membrane surface enhanced the structural integrity of the membrane chemically and mechanically. As a result of these synergetic effects, the PEI/PEG doped rGO membrane exhibited a water permeation of 7.7 LMH, salt rejection of 97.9 %, and reverse solute flux of 0.506 gMH at low flow rates in the FO desalination process.
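
To make the reported performance figures easier to interpret, the sketch below computes the three standard FO metrics quoted in the abstract (water flux in LMH, reverse solute flux in gMH, and salt rejection) from raw bench measurements; the input numbers are placeholders, and the simple rejection definition used here is an assumption since the abstract does not state how it was calculated.

```python
# Standard forward-osmosis performance metrics from raw bench measurements
# (input values are illustrative placeholders).

def water_flux_lmh(delta_volume_l: float, area_m2: float, hours: float) -> float:
    """Water flux Jw in L m^-2 h^-1 (LMH)."""
    return delta_volume_l / (area_m2 * hours)

def reverse_solute_flux_gmh(delta_salt_g: float, area_m2: float, hours: float) -> float:
    """Reverse solute flux Js in g m^-2 h^-1 (gMH)."""
    return delta_salt_g / (area_m2 * hours)

def salt_rejection(feed_conc_g_l: float, permeate_conc_g_l: float) -> float:
    """Observed salt rejection R = 1 - Cp/Cf, as a percentage."""
    return (1.0 - permeate_conc_g_l / feed_conc_g_l) * 100.0

# Hypothetical 2-hour test on a 42 cm^2 membrane coupon.
jw = water_flux_lmh(delta_volume_l=0.065, area_m2=0.0042, hours=2.0)
js = reverse_solute_flux_gmh(delta_salt_g=0.0043, area_m2=0.0042, hours=2.0)
r = salt_rejection(feed_conc_g_l=35.0, permeate_conc_g_l=0.7)
print(f"Jw = {jw:.1f} LMH, Js = {js:.3f} gMH, rejection = {r:.1f}%")
```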

Keywords: desalination, forward osmosis, membrane performance, polyethyleneimine, polyethylene glycol, reduced graphene oxide, stability

Procedia PDF Downloads 81
2025 Synthesis and Characterization of CNPs Coated Carbon Nanorods for Cd2+ Ion Adsorption from Industrial Waste Water and Reusable for Latent Fingerprint Detection

Authors: Bienvenu Gael Fouda Mbanga

Abstract:

This study reports a new approach to the preparation of a carbon nanoparticles coated cerium oxide nanorods (CNPs/CeONRs) nanocomposite and the reuse of the spent adsorbent, Cd2+-CNPs/CeONRs nanocomposite, for latent fingerprint (LFP) detection after removing Cd2+ ions from aqueous solution. The CNPs/CeONRs nanocomposite was prepared from CNPs and CeONRs via adsorption processes. The prepared nanocomposite was then characterized using UV-visible spectroscopy (UV-visible), Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), energy-dispersive X-ray spectroscopy (EDS), zeta potential, and X-ray photoelectron spectroscopy (XPS). The average size of the CNPs was 7.84 nm. The synthesized CNPs/CeONRs nanocomposite proved to be a good adsorbent for Cd2+ removal from water, with an optimum pH of 8 and a dosage of 0.5 g/L. The results were best described by the Langmuir model, which indicated a linear fit (R² = 0.8539-0.9969). The adsorption capacity of the CNPs/CeONRs nanocomposite showed the best removal of Cd2+ ions, with qm = 32.28-59.92 mg/g, when compared to previous reports. The adsorption followed pseudo-second-order kinetics and intraparticle diffusion processes. The ∆G and ∆H values indicated spontaneity at high temperature (40°C) and the endothermic nature of the adsorption process. The CNPs/CeONRs nanocomposite therefore showed potential as an effective adsorbent. Furthermore, the metal loaded on the adsorbent, Cd2+-CNPs/CeONRs, proved to be sensitive and selective for LFP detection on various porous substrates. Hence the Cd2+-CNPs/CeONRs nanocomposite can be reused as a good fingerprint labelling agent in LFP detection so as to avoid secondary environmental pollution from disposal of the spent adsorbent.
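
As an illustration of how Langmuir parameters such as the qm values quoted above are typically extracted from equilibrium data, the sketch below fits the nonlinear Langmuir isotherm q = qm*KL*Ce/(1 + KL*Ce) with scipy; the equilibrium data points are invented placeholders, not the measurements from this study.

```python
# Nonlinear Langmuir isotherm fit to equilibrium adsorption data
# (Ce, qe values below are illustrative placeholders).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qm, kl):
    """Langmuir isotherm: adsorbed amount qe as a function of equilibrium conc. Ce."""
    return qm * kl * ce / (1.0 + kl * ce)

ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])      # mg/L at equilibrium
qe = np.array([9.8, 19.5, 29.7, 40.2, 48.9, 54.1])     # mg/g adsorbed

(qm_fit, kl_fit), _ = curve_fit(langmuir, ce, qe, p0=[50.0, 0.05])

# Goodness of fit (R^2) of the nonlinear model.
residuals = qe - langmuir(ce, qm_fit, kl_fit)
r2 = 1.0 - np.sum(residuals**2) / np.sum((qe - qe.mean())**2)
print(f"qm = {qm_fit:.1f} mg/g, KL = {kl_fit:.3f} L/mg, R^2 = {r2:.4f}")
```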

Keywords: Cd2+-CNPs/CeONRs nanocomposite, cadmium adsorption, isotherm, kinetics, thermodynamics, reusable for latent fingerprint detection

Procedia PDF Downloads 97
2024 Identification of Clay Minerals for Determining Reservoir Maturity Levels Based on Petrographic Analysis, X-Ray Diffraction and Porosity Tests on the Penosogan Formation, Karangsambung Sub-District, Kebumen Regency, Central Java

Authors: Ayu Dwi Hardiyanti, Bernardus Anggit Winahyu, I. Gusti Agung Ayu Sugita Sari, Lestari Sutra Simamora, I. Wayan Warmada

Abstract:

The Penosogan Formation sandstone, of Middle Miocene age, has been deemed a potential reservoir based on sample data from sandstone outcrops in Kebakalan and Kedawung villages, Karangsambung sub-district, Kebumen Regency, Central Java. This research employs the following analytical methods: petrography, X-ray diffraction (XRD), and porosity testing. Based on the presence of micritic sandstone, muddy micrite, and muddy sandstone, the Penosogan Formation sandstone has a fine-to-coarse grain size and medium-to-fine sorting. The sandstone is composed mostly of plagioclase and skeletal grains, with traces of micrite. The proportion of clay minerals based on petrographic analysis is 10%; they appear to envelop the grains, and this grain coating reduces the porosity of the rock. The porosity types are as follows: interparticle, vuggy, channel, and shelter, with an equant form of cement. Moreover, the diagenesis process involves compaction, cementation, authigenic mineral growth, and dissolution due to feldspar alteration. The maturity of the reservoir can be seen in the X-ray diffraction results, obtained using an ethylene glycol treatment of the clay mineral fraction, which shows transformation from smectite to illite. Porosity testing showed that the Penosogan Formation sandstone has a porosity value of 22%, classified according to Koesoemadinata (1980). This indicates a high maturity level, which strongly influences the quality of the Penosogan Formation sandstone reservoir.

Keywords: sandstone reservoir, Penosogan Formation, smectite, XRD

Procedia PDF Downloads 157
2023 The Identification of Environmentally Friendly People: A Case of South Sumatera Province, Indonesia

Authors: Marpaleni

Abstract:

The Intergovernmental Panel on Climate Change (IPCC) declared in 2007 that global warming and climate change are not just a series of events caused by nature, but rather are caused by human behaviour. Thus, to reduce the impact of human activities on climate change, information is required about how people respond to environmental issues and what constraints they face. However, information on these and other phenomena remains largely missing, or is not fully integrated within the existing data systems. The proposed study aims to fill this gap in knowledge by focusing on the Environmentally Friendly Behaviour (EFB) of the people of Indonesia, taking the province of South Sumatera as a case study. EFB is defined as any activity in which people engage to improve the condition of natural resources and/or to diminish the impact of their behaviour on the environment. This activity is measured in terms of consumption in five areas at the household level, namely housing, energy, water usage, recycling, and transportation. Drawing on Indonesia's Environmentally Friendly Behaviour survey conducted by Statistics Indonesia in 2013, this study aims to precisely identify people's orientation towards EFB based on socio-demographic characteristics such as age, income, occupation, location, education, gender, and family size. The results of this research will be useful for precisely identifying what support people require to strengthen their EFB, for identifying the specific constraints that different actors and groups face, and for uncovering a more holistic understanding of EFB in relation to particular demographic and socio-economic contexts. As the empirical data are drawn from the national sample framework, which will continue to be collected, they can be used to forecast and monitor future EFB.

Keywords: environmentally friendly behavior, demographic, South Sumatera, Indonesia

Procedia PDF Downloads 268
2022 Perceived Stigma, Perception of Burden and Psychological Distress among Parents of Intellectually Disable Children: Role of Perceived Social Support

Authors: Saima Shafiq, Najma Iqbal Malik

Abstract:

This study aimed to explore the relationship between perceived stigma, perception of burden, and psychological distress among parents of intellectually disabled children. The study also aimed to explore the moderating role of perceived social support on all the variables of the study. The sample comprised (N = 250) parents of intellectually disabled children. The present study utilized a correlational research design and consisted of two phases. Phase I consisted of two steps, which involved the translation of the two scales used in the present study and their try-out on a sample of parents (N = 70). The Affiliate Stigma Scale and the Caregiver Burden Inventory were translated into Urdu for the present study. Phase I revealed that the translated scales had satisfactory psychometric properties. Phase II of the study was carried out in order to test the hypotheses. Correlation, linear regression analysis, and t-tests were computed for hypothesis testing. Hierarchical regression analysis was applied to study the moderating effect of perceived social support. Findings revealed that there was a positive relationship between perceived stigma and psychological distress, and between perception of burden and psychological distress. Linear regression analysis showed that perceived stigma and perception of burden were positive predictors of psychological distress. The study did not find a moderating role of perceived social support among the variables of the present study. The major limitation of the study is the sample size, and the major implication is awareness regarding the problems of parents of intellectually disabled children.
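
Since the abstract tests moderation via hierarchical regression, the sketch below shows the usual two-step version of that analysis with statsmodels: a main-effects model followed by a model adding predictor x moderator interaction terms, with the change in R² and the interaction coefficients indicating moderation. The variable names and data file are assumptions for illustration.

```python
# Hierarchical (moderated) regression: does social support moderate the
# stigma -> distress relation? (column names and file are hypothetical).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("parents_survey.csv")   # columns: stigma, burden, support, distress

# Mean-center predictors before forming the interaction terms.
for col in ["stigma", "burden", "support"]:
    df[col + "_c"] = df[col] - df[col].mean()

step1 = smf.ols("distress ~ stigma_c + burden_c + support_c", data=df).fit()
step2 = smf.ols("distress ~ stigma_c + burden_c + support_c "
                "+ stigma_c:support_c + burden_c:support_c", data=df).fit()

print(f"R2 step 1 = {step1.rsquared:.3f}, R2 step 2 = {step2.rsquared:.3f}")
print(step2.params)        # significant interaction terms would indicate moderation
```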

Keywords: perceived stigma, perception of burden, psychological distress, perceived social support

Procedia PDF Downloads 198
2021 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

Authors: M. A. Okezue, K. L. Clase, S. R. Byrn

Abstract:

The requirement to maintain data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human error. Quality control laboratories located in low-income economies may face some barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of the 0.1 M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, the different formulae were input into two spreadsheets to automate the calculations. Further checks were created within the automated system to ensure the validity of replicate analyses in the titrimetric procedures. Validations were conducted using five data sets of manually computed assay results, and the acceptance criteria set in the protocol were met. Significant p-values (p < 0.05, α = 0.05, at the 95% confidence interval) were obtained from Student's t-test evaluation of the mean values for manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and the principles of data integrity were enhanced by the use of the validated spreadsheet calculators in the titrimetric evaluation of ZnSO4 tablets. Human errors are minimized in calculations when procedures are automated in quality control laboratories. The assay procedure for the formulation was achieved in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
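
The sketch below illustrates the two calculations the spreadsheets automate, under stated assumptions: a 1:1 Zn:EDTA complexation stoichiometry, standardization against a pure zinc metal standard, and zinc sulphate monohydrate (molar mass taken as 179.47 g/mol) as the labelled form. The exact formulae and acceptance checks should follow the USP monograph; this is only a schematic of the arithmetic, and the bench figures are hypothetical.

```python
# Schematic of the two automated calculations (assumed 1:1 Zn:EDTA stoichiometry).

ZN_MOLAR_MASS = 65.38          # g/mol, zinc metal standard
ZNSO4_H2O_MOLAR_MASS = 179.47  # g/mol, zinc sulphate monohydrate (assumed label form)

def edta_molarity(zinc_mass_g: float, titre_ml: float) -> float:
    """Standardization: molarity of EDTA from the mass of zinc standard titrated."""
    moles_zn = zinc_mass_g / ZN_MOLAR_MASS
    return moles_zn / (titre_ml / 1000.0)          # mol/L, 1:1 complexation

def zinc_sulphate_per_tablet_mg(titre_ml: float, edta_m: float,
                                aliquot_fraction: float, tablets_in_sample: int) -> float:
    """Assay: mg of ZnSO4.H2O per tablet from the sample titration."""
    moles_edta = edta_m * titre_ml / 1000.0        # equals moles of Zn2+ in the aliquot
    mg_in_sample = moles_edta * ZNSO4_H2O_MOLAR_MASS * 1000.0 / aliquot_fraction
    return mg_in_sample / tablets_in_sample

# Hypothetical bench figures for illustration.
m_edta = edta_molarity(zinc_mass_g=0.1650, titre_ml=25.30)
assay = zinc_sulphate_per_tablet_mg(titre_ml=7.70, edta_m=m_edta,
                                    aliquot_fraction=0.25, tablets_in_sample=10)
print(f"EDTA molarity = {m_edta:.4f} M, ZnSO4.H2O per tablet = {assay:.1f} mg")
```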

Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets

Procedia PDF Downloads 155
2020 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method

Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry

Abstract:

The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area and thus it plays an important role in numerous specifications such as durability, comfort, crash, etc. During the development of new vehicle projects in Renault, durability validation is always the main focus while deployment of comfort comes later in the project. Therefore, sometimes design choices have to be reconsidered because of the natural incompatibility between these two specifications. Besides, robustness is also an important point of concern as it is related to manufacturing costs as well as the performance after the ageing of components like shock absorbers. In this paper an approach is proposed aiming to realize a multi-objective optimization between chassis endurance and comfort while taking the random factors into consideration. The adaptive-sparse polynomial chaos expansion method (PCE) with Chebyshev polynomial series has been applied to predict responses’ uncertainty intervals of a system according to its uncertain-but-bounded parameters. The approach can be divided into three steps. First an initial design of experiments is realized to build the response surfaces which represent statistically a black-box system. Secondly within several iterations an optimum set is proposed and validated which will form a Pareto front. At the same time the robustness of each response, served as additional objectives, is calculated from the pre-defined parameter intervals and the response surfaces obtained in the first step. Finally an inverse strategy is carried out to determine the parameters’ tolerance combination with a maximally acceptable degradation of the responses in terms of manufacturing costs. A quarter car model has been tested as an example by applying the road excitations from the actual road measurements for both endurance and comfort calculations. One indicator based on the Basquin’s law is defined to compare the global chassis durability of different parameter settings. Another indicator related to comfort is obtained from the vertical acceleration of the sprung mass. An optimum set with best robustness has been finally obtained and the reference tests prove a good robustness prediction of Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational costs for a complex system.
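
As a highly simplified, one-parameter illustration of the Chebyshev polynomial chaos idea described above (not Renault's adaptive-sparse implementation), the sketch below builds a Chebyshev surrogate of an expensive response over a bounded uncertain parameter and uses it to estimate the response's uncertainty interval; the response function, parameter bounds, and polynomial degree are invented for the example.

```python
# 1-D Chebyshev surrogate of a response over an uncertain-but-bounded parameter
# (response model, bounds, and degree are illustrative assumptions).
import numpy as np
from numpy.polynomial import chebyshev as C

# Uncertain damper coefficient bounded in [c_lo, c_hi] (hypothetical values).
c_lo, c_hi = 1200.0, 1800.0

def expensive_response(c):
    """Stand-in for a costly simulation, e.g. sprung-mass RMS acceleration."""
    return 0.8 + 0.002 * (c - 1500.0) + 1e-6 * (c - 1500.0) ** 2

# Sample the design space at Chebyshev nodes mapped onto [c_lo, c_hi].
nodes = np.cos((2 * np.arange(1, 13) - 1) * np.pi / 24)          # 12 nodes in [-1, 1]
c_samples = 0.5 * (c_hi - c_lo) * (nodes + 1.0) + c_lo
y_samples = expensive_response(c_samples)

# Fit a degree-6 Chebyshev expansion on the normalized variable in [-1, 1].
coeffs = C.chebfit(nodes, y_samples, deg=6)

# Cheap surrogate evaluations over the whole bounded interval give the
# response's uncertainty interval used as a robustness objective.
x = np.linspace(-1.0, 1.0, 2001)
y_hat = C.chebval(x, coeffs)
print(f"predicted response interval: [{y_hat.min():.3f}, {y_hat.max():.3f}]")
```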

Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design

Procedia PDF Downloads 142
2019 The Influence of Argumentation Strategy on Student’s Web-Based Argumentation in Different Scientific Concepts

Authors: Xinyue Jiao, Yu-Ren Lin

Abstract:

Argumentation is an essential aspect of scientific thinking that has received wide attention in recent reforms of science education. The purpose of the present study was to explore the influence of two variables, termed 'argumentation strategy' and 'kind of science concept', on students' web-based argumentation. The first variable was divided into either monological (which refers to an individual's internal discourse and inner chain reasoning) or dialectical (which refers to dialogue and interaction between or among people). The second was divided into either descriptive (i.e., macro-level concepts, such as phenomena that can be observed and tested directly) or theoretical (i.e., micro-level concepts that are abstract and cannot be tested directly in nature). The present study applied a quasi-experimental design in which 138 7th-grade students were invited and then randomly assigned to either a monological group (N=70) or a dialectical group (N=68). An argumentation learning program called the PWAL was developed to improve their scientific argumentation abilities, such as arguing from multiple perspectives and on the basis of scientific evidence. Two versions of the PWAL were created. In the individual version, students could propose arguments only through knowledge recall and a self-reflection process. In the collaborative version, on the other hand, students were allowed to construct arguments through communication with peers. The PWAL involved three descriptive concept-based topics (units 1, 3, and 5) and three theoretical concept-based topics (units 2, 4, and 6). Three kinds of scaffolding were embedded into the PWAL: a) an argument template, which was used for constructing evidence-based arguments; b) the model of Toulmin's TAP, which shows the structure and elements of a sound argument; and c) a discussion block, which enabled the students to review what had been proposed during the argumentation. Both quantitative and qualitative data were collected and analyzed, and an analytical framework for coding the arguments students proposed in the PWAL was constructed. The results showed that the argumentation approach had a significant effect on argumentation only for theoretical topics (F(1, 136) = 48.2, p < .001, η² = 2.62). The post-hoc analysis showed that the students in the collaborative group performed significantly better than the students in the individual group (mean difference = 2.27). However, there was no significant difference between the two groups regarding their argumentation on descriptive topics. Secondly, the students made significant progress in the PWAL from the earlier descriptive or theoretical topics to the later ones. The results enabled us to conclude that the PWAL was effective for students' argumentation and that peer interaction was essential for students to argue scientifically, especially on theoretical topics. The follow-up qualitative analysis showed that students tended to generate arguments through critical dialogue interactions on theoretical topics, which prompted them to use more critiques and to evaluate and co-construct each other's arguments. Further explanation of the students' web-based argumentation and suggestions for the development of web-based science learning are provided in the discussion.

Keywords: argumentation, collaborative learning, scientific concepts, web-based learning

Procedia PDF Downloads 91
2018 Unification of Lactic Acid Bacteria and Aloe Vera for Healthy Gut

Authors: Pavitra Sharma, Anuradha Singh, Nupur Mathur

Abstract:

The human digestive system hosts more than 100 trillion bacteria, collectively referred to as the gut microbiota. The gut microbiota supports around 75% of our immune system. The bacteria that make up the gut microbiota are unique to every individual, and their composition changes over time owing to factors such as the host's age, diet, genes, environment, and external medication. Of these factors, the variable easiest to control is one's diet. By modulating one's diet, one can maintain an optimal composition of the gut microbiota, yielding several health benefits. Prebiotics and probiotics are two classes of compounds considered viable options for modulating the host's diet. Prebiotics are essentially plant products that support the growth of beneficial bacteria in the host's gut; examples include garden asparagus and aloe vera. Probiotics are living microorganisms that reside in our intestines and play an integral role in promoting digestive health and supporting the immune system in general; dietary sources include yogurt, kimchi, and kombucha. In the context of modulating the host's diet, the key attribute of prebiotics is that they support the growth of probiotics. By developing the right combination of prebiotics and probiotics, food products or supplements can be created to enhance the host's health. An effective combination of prebiotics and probiotics that yields health benefits to the host is referred to as a synbiotic. Synbiotics comprise an optimal proportion of prebiotics and probiotics, and their application benefits the host's health more than prebiotics or probiotics used in isolation. When applied in food supplements, synbiotics preserve the beneficial probiotic bacteria during storage and during the bacteria's passage through the intestinal tract. When applied to the gastrointestinal tract, the composition of the synbiotic assumes paramount importance: for a synbiotic to be effective there, the chosen probiotic must be able to survive the stomach's acidic environment and tolerate bile and pancreatic secretions. Further, not every prebiotic stimulates the growth of a particular probiotic; the prebiotic chosen should not only maintain balance in the host's digestive system but also provide the required nutrition to the probiotics. Hence, in each application of synbiotics, the prebiotic-probiotic combination needs to be carefully selected. Once the combination is finalized, the exact proportion of prebiotics and probiotics must be determined; only the amount of prebiotic that activates the metabolism of the required number of probiotics should be used. It was observed that while probiotics are active in both the small and large intestine, the effect of prebiotics is observed primarily in the large intestine. Hence, synbiotics are likely to have their maximum efficacy in the host's small intestine, where prebiotics not only assist the growth of probiotics but also enable them to exhibit a higher tolerance to pH levels, oxygenation, and intestinal temperature.

Keywords: microbiota, probiotics, prebiotics, synbiotics

Procedia PDF Downloads 121
2017 Study of Biofouling Wastewater Treatment Technology

Authors: Sangho Park, Mansoo Kim, Kyujung Chae, Junhyuk Yang

Abstract:

The International Maritime Organization (IMO) recognized the problem of invasive species and adopted the "International Convention for the Control and Management of Ships' Ballast Water and Sediments" in 2004, which came into force on September 8, 2017. In 2011, the IMO approved the "Guidelines for the Control and Management of Ships' Biofouling to Minimize the Transfer of Invasive Aquatic Species" to minimize the movement of invasive species by hull-attached organisms and required ships to manage the organisms attached to their hulls. Invasive species enter new environments through ships' ballast water and hull attachment. However, several obstacles to implementing these guidelines have been identified, including a lack of underwater cleaning equipment, regulations restricting underwater cleaning activities in ports, and difficulty accessing crevices in underwater areas. The shipping industry, the party responsible for following these guidelines, wants to implement them because of the fuel cost savings that result from removing organisms attached to the hull, but it anticipates significant difficulties in doing so owing to the obstacles mentioned above. Robots or divers remove the organisms attached to the hull underwater, and the resulting wastewater includes various species of organisms as well as paint particles and other pollutants. Currently, there is no technology available to sterilize the organisms in this wastewater or to stabilize the heavy metals in the paint particles. In this study, we aim to analyze the characteristics of the wastewater generated from the removal of hull-attached organisms and to select the optimal treatment technology. The organisms in the wastewater are treated to meet the biological treatment standard (D-2) using the sterilization technology applied in ships' ballast water treatment systems, while the heavy metals and other pollutants in the paint particles generated during removal are treated using stabilization technologies such as thermal decomposition. The wastewater is treated in a two-step process: 1) development of sterilization technology combining pretreatment filtration equipment with electrolytic sterilization, and 2) development of technology for removing particulate pollutants such as heavy metals and dissolved inorganic substances. Through this study, we will develop a technology for eliminating the biological organisms and an environmentally friendly treatment system for the waste generated after removal, one that meets the requirements of the government and the shipping industry and lays the groundwork for future treatment standards.

Keywords: biofouling, ballast water treatment system, filtration, sterilization, wastewater

Procedia PDF Downloads 94
2016 Bi-Layer Electro-Conductive Nanofibrous Conduits for Peripheral Nerve Regeneration

Authors: Niloofar Nazeri, Mohammad Ali Derakhshan, Reza Faridi Majidi, Hossein Ghanbari

Abstract:

Injury to the peripheral nervous system (PNS) can lead to loss of sensation or movement. To date, one of the challenges for surgeons has been repairing large gaps in the PNS. To address this problem, nerve conduits have been developed. Conduits produced by electrospinning can mimic the extracellular matrix and provide enough surface area for further functionalization. In this research, a conductive bilayer nerve conduit composed of polycaprolactone (PCL), poly(lactic-co-glycolic acid) (PLGA), and multi-walled carbon nanotubes (MWCNTs) was fabricated to promote peripheral nerve regeneration. The conduit was made of longitudinally aligned PLGA nanofibrous sheets in the lumen to promote nerve regeneration and randomly oriented PCL nanofibers on the outer surface for mechanical support. The intra-luminal guidance channel was made of conductive, aligned, rolled nanofibrous sheets coated with laminin via dopamine. Different properties of the electrospun scaffolds were investigated using contact angle measurements, mechanical strength tests, degradation time, scanning electron microscopy (SEM), and X-ray photoelectron spectroscopy (XPS). SEM analysis showed that the nanofiber diameters were in the range of about 600-750 nm and that MWCNTs were deposited between the nanofibers. The XPS results showed that laminin was successfully attached to the nanofiber surface. Contact-angle and tensile test analyses revealed that the scaffolds have good hydrophilicity and sufficient mechanical strength. In vitro studies demonstrated that this conductive surface was able to enhance the attachment and proliferation of PC12 and Schwann cells. We conclude that this bilayer composite conduit has good potential for nerve regeneration.

Keywords: conductive, conduit, laminin, MWCNT

Procedia PDF Downloads 182
2015 Pain Management in Burn Wounds with Dual Drug Loaded Double Layered Nano-Fiber Based Dressing

Authors: Sharjeel Abid, Tanveer Hussain, Ahsan Nazir, Abdul Zahir, Nabyl Khenoussi

Abstract:

Localized drug application has various advantages and fewer side effects compared with other routes of administration. Burn patients suffer from severe pain, and the major aspects considered for burn victims include pain and infection management. Drug-loaded nanofibers (NFs) applied locally to the wound area can address these problems. Therefore, this study dealt with the fabrication of drug-loaded NFs for better pain management. Two layers of NFs were fabricated with different drugs: the contact layer was loaded with gabapentin (a nerve pain killer) and the second layer with acetaminophen. The fabricated dressing was characterized using scanning electron microscopy, Fourier-transform infrared spectroscopy, X-ray diffraction, and UV-Vis spectroscopy. The double-layered NF dressing was designed to provide an initial burst release followed by a slow release, in order to manage pain for two days. The fabricated nanofibers showed diameters below 300 nm. The liquid absorption capacity of the NFs was also assessed, so that the dressing could handle wound exudate. The fabricated double-layered dressing with dual drug loading and release showed promising results for managing pain in burn victims. It was observed that adding the drugs reduced the nanofiber diameter, increased the crystallinity percentage, and decreased liquid absorption. The combination of a fast release of the nerve pain killer followed by a slow release of the second analgesic could be a good tool to reduce pain more safely and with fewer side effects.
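As an illustration only, the intended burst-then-sustained behaviour can be sketched with a simple two-compartment first-order release model; the release fractions and rate constants below are hypothetical assumptions and are not fitted to the dressing characterized in this work.

```python
# A minimal sketch (hypothetical parameters) of a biphasic release profile:
# a fast first-order "burst" phase from the contact layer plus a slow
# first-order phase from the second layer.
import numpy as np

def biphasic_release(t_hours, f_burst=0.4, k_burst=1.5, k_slow=0.05):
    """Cumulative fraction of total drug released at time t (hours).

    f_burst : fraction of total drug loaded in the fast-releasing layer
    k_burst : first-order rate constant of the burst phase (1/h)
    k_slow  : first-order rate constant of the sustained phase (1/h)
    """
    fast = f_burst * (1.0 - np.exp(-k_burst * t_hours))
    slow = (1.0 - f_burst) * (1.0 - np.exp(-k_slow * t_hours))
    return fast + slow

t = np.linspace(0, 48, 7)  # two days, the target duration of pain coverage
for ti, frac in zip(t, biphasic_release(t)):
    print(f"t = {ti:5.1f} h  cumulative release = {frac:5.1%}")
```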

Keywords: pain management, burn wounds, nano-fibers, controlled drug release

Procedia PDF Downloads 236
2014 Fabrication of 3D Scaffold Consisting of Spiral-Like Micro-Sized PCL Struts and Selectively Deposited Nanofibers as a Tissue Regenerative Material

Authors: Gi-Hoon Yang, JongHan Ha, MyungGu Yeo, JaeYoon Lee, SeungHyun Ahn, Hyeongjin Lee, HoJun Jeon, YongBok Kim, Minseong Kim, GeunHyung Kim

Abstract:

Tissue engineering scaffolds must be biocompatible and biodegradable and must provide adequate mechanical strength and cell attachment sites for proliferation and differentiation. Furthermore, scaffold morphology (such as pore size, porosity, and pore interconnectivity) plays an important role. The electrospinning process has been widely used to fabricate micro/nano-sized fibers; it allows the fabrication of non-woven meshes containing micro- to nano-sized fibers that provide a high surface-to-volume ratio for cell attachment. Due to these advantageous characteristics, electrospinning is a useful method for skin, cartilage, bone, and nerve regeneration. In this study, we fabricated PCL scaffolds (SP) consisting of spiral-like struts using a 3D melt-plotting system and deposited micro/nanofibers using direct electrospinning writing. By altering the conditions of the conventional melt-plotting method, spiral-like struts were generated, and micro/nanofibers were then deposited selectively. A control scaffold composed of perpendicular PCL struts was fabricated using the conventional melt-plotting method for comparison of cellular activities. The effect of the bending instability of the struts on attached osteoblast-like cells (MG63) was evaluated. The SP scaffolds showed enhanced biological properties such as initial cell attachment, proliferation, and osteogenic differentiation. These results suggest that the SP scaffolds have potential as a bioengineered substitute for soft and hard tissue regeneration.

Keywords: cell attachment, electrospinning, mechanical strength, melt-plotting

Procedia PDF Downloads 305
2013 Macroinvertebrates of Paravani and Saghamo Lakes, South Georgia

Authors: Bella Japoshvili, Zhanetta Shubitidze, Ani Bikashvili, Sophio Gabelashvili, Marina Gioshvili, Levan Mumladze

Abstract:

Paravani and Saghamo Lakes are oligotrophic lentic systems located on the Javakheti Plateau (South Georgia) at 2073 m and 1996 m a.s.l., respectively. The Javakheti Plateau is known as a lake region, as it contains almost 60 small and medium-sized lakes. Paravani Lake is the largest lake in Georgia by surface area (37 km²), while Saghamo Lake is smaller, with a surface area of 4.58 km². The two lakes are connected by the Paravani River, and because of this their main hydrobiological and ichthyological features are similar. The macroinvertebrates of these lakes have not been studied for 15-30 years, and even the existing information is scarce and very limited. The aim of our study was to identify the main macroinvertebrate groups inhabiting both lakes and to compare the results obtained with the existing information. Our investigation was carried out during 2014 and 2015 in three seasons of the year; samples were not taken in winter because of severe conditions. A kick-net and a Petersen grab were used for collecting material; four sites in Paravani Lake and three in Saghamo Lake were sampled. Collected invertebrates were fixed in ethanol and later taken to the laboratory, where organisms were identified to the lowest taxon possible, usually family. We identified 14 taxa in Paravani Lake and 12 taxa in Saghamo Lake. Our results differ from previous information, in which 13 taxa were described for Saghamo Lake and 12 taxa for Paravani Lake. The percentages of the groups also differ from the existing information. Our investigation showed that in Paravani Lake the most abundant taxa are Amphipoda, Hydrachnidae, and Hemiptera; for each of these three taxa, our samples contained more than a thousand individuals. In Saghamo Lake, the most numerous taxon was Amphipoda (36.3%), followed by Ephemeroptera (11.37%), Chironomidae (10.5%), and Hydrachnidae (7.03%). We also identified the dominant taxon for each studied season. Autumn is the period when macroinvertebrate diversity is highest in both lakes.

Keywords: Georgia, lakes, macroinvertebrates, monitoring

Procedia PDF Downloads 176
2012 Solving a Micromouse Maze Using an Ant-Inspired Algorithm

Authors: Rolando Barradas, Salviano Soares, António Valente, José Alberto Lencastre, Paulo Oliveira

Abstract:

This article reviews Ant Colony Optimization, a nature-inspired algorithm, and its implementation in the Scratch/mBlock programming environment. Ant Colony Optimization belongs to the family of swarm-intelligence algorithms, a subset of biologically inspired algorithms. The starting point is a problem in which one has a maze and needs to find a path to its center and return to the starting position; this is similar to an ant looking for a path to a food source and returning to its nest. Starting from the implementation of a simple wall-follower simulator, the proposed solution uses a dynamic graphical interface that allows young students to observe the ants' movement while the algorithm optimizes the routes to the maze's center. Interface usability, data structures, and the conversion of algorithmic language to Scratch syntax were some of the details addressed during this implementation. This gives young students an easier way to understand the computational concepts of sequences, loops, parallelism, data, events, and conditionals, as they are used throughout the implemented algorithms. Future work includes simulations with real contest mazes and two different pheromone-update methods, together with a comparison against the optimized results of the winners of each edition of the contest. It will also include the creation of a Digital Twin relating the virtual simulator to a real micromouse in a full-size maze. The first test results show that the algorithm found the same optimized solutions as those found by the winners of each edition of the Micromouse contest, making this a good solution for maze pathfinding.
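As a minimal sketch of the underlying technique (not the authors' Scratch/mBlock implementation), the following Python code runs a basic Ant Colony Optimization on a small grid maze: each ant walks probabilistically from the start toward the center, pheromone evaporates every iteration, and shorter successful paths receive larger deposits. The maze layout, parameters, and update rule are illustrative assumptions.

```python
# Basic Ant Colony Optimization on a grid maze (illustrative assumptions only).
import random

MAZE = [  # 0 = free cell, 1 = wall; start at (0, 0), goal at the centre (2, 2)
    [0, 0, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
START, GOAL = (0, 0), (2, 2)
ROWS, COLS = len(MAZE), len(MAZE[0])

def neighbours(cell):
    """Adjacent free cells (up/down/left/right)."""
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < ROWS and 0 <= nc < COLS and MAZE[nr][nc] == 0:
            yield (nr, nc)

def walk(pheromone, alpha=1.0, max_steps=200):
    """One ant's probabilistic walk from START to GOAL; returns the path or None."""
    path, visited = [START], {START}
    while path[-1] != GOAL and len(path) < max_steps:
        options = [n for n in neighbours(path[-1]) if n not in visited]
        if not options:                      # dead end: abandon this ant
            return None
        weights = [pheromone[(path[-1], n)] ** alpha for n in options]
        nxt = random.choices(options, weights=weights, k=1)[0]
        path.append(nxt)
        visited.add(nxt)
    return path if path[-1] == GOAL else None

def aco(n_ants=20, n_iterations=50, evaporation=0.5, q=1.0):
    """Iteratively reinforce short start-to-goal paths; return the best one found."""
    free_cells = [(r, c) for r in range(ROWS) for c in range(COLS) if MAZE[r][c] == 0]
    pheromone = {(a, b): 1.0 for a in free_cells for b in neighbours(a)}
    best = None
    for _ in range(n_iterations):
        paths = [p for p in (walk(pheromone) for _ in range(n_ants)) if p]
        for key in pheromone:                # evaporation
            pheromone[key] *= (1.0 - evaporation)
        for p in paths:                      # deposit: shorter paths get more pheromone
            for a, b in zip(p, p[1:]):
                pheromone[(a, b)] += q / len(p)
            if best is None or len(p) < len(best):
                best = p
    return best

if __name__ == "__main__":
    random.seed(1)
    print("Best path found:", aco())
```

In this maze there are two routes from the start to the center, one short and one long; over the iterations the evaporation-and-deposit cycle concentrates pheromone on the shorter route, which is the behaviour students can observe in the graphical simulator described above.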

Keywords: nature inspired algorithms, scratch, micromouse, problem-solving, computational thinking

Procedia PDF Downloads 106