Search results for: Coulomb modified Glauber model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18357

11367 Comparison of Methods for the Synthesis of Eu+++, Tb+++, and Tm+++ Doped Y2O3 Nanophosphors by Sol-Gel and Hydrothermal Methods for Bioconjugation

Authors: Ravindra P. Singh, Drupad Ram, Dinesh K. Gupta

Abstract:

Rare earth ion doped metal oxides are a class of luminescent materials which have proved excellent for applications in field emission displays, cathode ray tubes, and plasma display panels. Under UV irradiation, Eu+++ doped Y2O3 is a red phosphor and Tb+++ doped Y2O3 is a green phosphor. It is possible that, due to their high quantum efficiency, they might serve as improved luminescent markers for the identification of biomolecules, as already reported for CdSe and CdSe/ZnS nanocrystals. However, for any biological application these particle powders must be suspended in water while retaining their phosphorescence. We hereby report the synthesis and characterization of Eu+++ and Tb+++ doped yttrium oxide nanoparticles by sol-gel and hydrothermal processes. Eu+++ and Tb+++ doped Y2O3 nanoparticles were synthesized by a hydrothermal process using yttrium oxo isopropoxide [Y5O(OPri)13] (crystallized twice) and its acetylacetone-modified product [Y(O)(acac)] as precursors. Generally, sol-gel derived metal oxides must be annealed at temperatures ranging from 400°C to 800°C in order to develop crystalline phases. However, this annealing also results in the development of aggregates, which are undesirable for bioconjugation experiments. In the hydrothermal process, we achieved crystallinity of the nanoparticles at 300°C, and the development of crystalline phases was found to be proportional to the heating time of the reactor. The average particle sizes calculated from XRD were 28 nm, 32 nm, and 34 nm for the hydrothermal process. The particles were successfully suspended in chloroform in the presence of trioctylphosphine oxide, and TEM investigations showed the presence of single particles along with agglomerates.
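The abstract reports average particle sizes calculated from XRD line broadening. The usual route for such estimates is the Scherrer equation, D = Kλ/(β cos θ); the abstract does not state which method or peak parameters were used, so the sketch below is a minimal illustration with assumed values (Cu Kα wavelength, an arbitrary peak position and FWHM), not a reproduction of the authors' calculation.

```python
import math

def scherrer_size_nm(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)).

    beta is the peak FWHM converted to radians; theta is half the
    diffraction angle 2-theta. K ~ 0.9 for roughly spherical grains.
    """
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative inputs: Cu K-alpha (0.15406 nm), a peak near 29 deg 2-theta
size = scherrer_size_nm(0.15406, 0.30, 29.0)  # tens of nm, as in the abstract
```

With these assumed inputs the estimate comes out in the same tens-of-nanometres range (roughly 27 nm) as the 28-34 nm values quoted above.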

Keywords: nanophosphors, Y2O3:Eu+3, Y2O3:Tb+3, sol-gel, hydrothermal method, TEM, XRD

Procedia PDF Downloads 387
11366 Simulation of Multistage Extraction Process of Co-Ni Separation Using Ionic Liquids

Authors: Hongyan Chen, Megan Jobson, Andrew J. Masters, Maria Gonzalez-Miquel, Simon Halstead, Mayri Diaz de Rienzo

Abstract:

Ionic liquids offer excellent advantages over conventional solvents for the industrial extraction of metals from aqueous solutions, where such extraction processes bring opportunities for recovery, reuse, and recycling of valuable resources and more sustainable production pathways. Recent research on the use of ionic liquids for extraction confirms their high selectivity and low volatility, but there is relatively little focus on how their properties can be best exploited in practice. This work addresses gaps in research on process modelling and simulation, to support the development, design, and optimisation of these processes, focusing on the separation of the highly similar transition metals cobalt and nickel. The study exploits published experimental results, as well as new experimental results, relating to the separation of Co and Ni using trihexyl(tetradecyl)phosphonium chloride. This extraction agent is attractive because it is cheaper, more stable, and less toxic than fluorinated hydrophobic ionic liquids. The process modelling work concerns the selection and/or development of suitable models for the physical properties, the distribution coefficients, the mass transfer phenomena, the extractor unit, and the multi-stage extraction flowsheet. The distribution coefficient model for cobalt and HCl represents an anion exchange mechanism, supported by the literature and by COSMO-RS calculations. Parameters of the distribution coefficient models are estimated by fitting the models to published experimental extraction equilibrium results. The mass transfer model applies Newman's hard sphere model. Diffusion coefficients in the aqueous phase are obtained from the literature, while diffusion coefficients in the ionic liquid phase are fitted to dynamic experimental results. The mass transfer area is calculated from the Sauter (surface-to-volume) mean diameter of the liquid droplets of the dispersed phase, estimated from the Weber number inside the extractor.
New experiments measure the interfacial tension between the aqueous and ionic phases. The empirical models for predicting the density and viscosity of solutions under different metal loadings are also fitted to new experimental data. The extractor is modelled as a continuous stirred tank reactor with mass transfer between the two phases and perfect phase separation of the outlet flows. A multistage separation flowsheet simulation is set up to replicate a published experiment and compare model predictions with the experimental results. This simulation model is implemented in gPROMS software for dynamic process simulation. The results of single stage and multi-stage flowsheet simulations are shown to be in good agreement with the published experimental results. The estimated diffusion coefficient of cobalt in the ionic liquid phase is in reasonable agreement with published data for the diffusion coefficients of various metals in this ionic liquid. A sensitivity study with this simulation model demonstrates the usefulness of the models for process design. The simulation approach has potential to be extended to account for other metals, acids, and solvents for process development, design, and optimisation of extraction processes applying ionic liquids for metals separations, although a lack of experimental data is currently limiting the accuracy of models within the whole framework. Future work will focus on process development more generally and on extractive separation of rare earths using ionic liquids.
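The abstract estimates the droplet size of the dispersed phase from the Weber number and derives the mass transfer area from it. A common correlation for stirred dispersions has the form d32/D = C·We^(-0.6) with C ≈ 0.05-0.06 for dilute systems; the abstract does not name its correlation or constants, so the sketch below uses that generic form with assumed physical properties purely for illustration.

```python
def sauter_mean_diameter(rho_c, n_rps, d_imp, sigma, c=0.053):
    """d32 = C * We**-0.6 * D, with impeller Weber number
    We = rho_c * N^2 * D^3 / sigma (continuous-phase density rho_c,
    stirring rate N in 1/s, impeller diameter D, interfacial tension sigma)."""
    we = rho_c * n_rps ** 2 * d_imp ** 3 / sigma
    return c * we ** -0.6 * d_imp

def interfacial_area(phi, d32):
    """Specific interfacial area of the dispersed phase, a = 6*phi/d32."""
    return 6.0 * phi / d32

# Assumed values: water-like continuous phase, 5 rev/s, 5 cm impeller,
# 20 mN/m interfacial tension, 20 % dispersed-phase holdup
d32 = sauter_mean_diameter(rho_c=1000.0, n_rps=5.0, d_imp=0.05, sigma=0.02)
a = interfacial_area(phi=0.2, d32=d32)
```

With these assumptions the droplets come out on the order of 100 µm, giving a specific area of several thousand m² per m³ of dispersion, which is the quantity the mass transfer model needs.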

Keywords: distribution coefficient, mass transfer, COSMO-RS, flowsheet simulation, phosphonium

Procedia PDF Downloads 176
11365 Knowledge Sharing within a Team: Exploring the Antecedents and Role of Trust

Authors: Li Yan Hei, Au Wing Tung

Abstract:

Knowledge sharing is a process in which individuals mutually exchange existing knowledge and co-create new knowledge. Previous research has confirmed that trust is positively associated with knowledge sharing. However, only a few studies have systematically examined the antecedents of trust and these antecedents' impacts on knowledge sharing. In order to explore and understand the relationships between trust and knowledge sharing in depth, this study proposed a relationship maintenance-based model to examine the antecedents of trust in knowledge sharing in project teams. Three critical elements within a project team were measured: the environment, the project team partner, and interaction. It was hypothesized that trust would lead to knowledge sharing and in turn result in perceived good team performance. With a sample of 200 Hong Kong employees, the proposed model was evaluated with structural equation modeling. The expected findings are that trust will contribute to knowledge sharing, resulting in better team performance. The results will also offer insights into the antecedents of trust that play a significant role in the focal relationship. The present study contributes to a more holistic understanding of the relationship between trust and knowledge sharing by linking its antecedents and outcomes. The findings will raise the awareness of project managers on ways to promote knowledge sharing.

Keywords: knowledge sharing, project management, team, trust

Procedia PDF Downloads 599
11364 Development of a Model for Predicting Radiological Risks in Interventional Cardiology

Authors: Stefaan Carpentier, Aya Al Masri, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: During an 'Interventional Radiology (IR)' procedure, the patient's skin dose may become high enough for burns, necrosis, and ulceration to appear. In order to prevent these deterministic effects, predicting the patient's peak skin dose is important for improving the post-operative care given to the patient. The objective of this study is to estimate, before the intervention, the patient dose for 'Chronic Total Occlusion (CTO)' procedures by selecting relevant clinical indicators. Materials and methods: 103 procedures were performed in the 'Interventional Cardiology (IC)' department using a Siemens Artis Zee image intensifier that provides the air kerma of each IC exam. The Peak Skin Dose (PSD) was measured for each procedure using radiochromic films. Patient parameters such as sex, age, weight, and height were recorded. The complexity index J-CTO score, specific to each intervention, was determined by the cardiologist. A correlation method applied to these indicators allowed their influence on the dose to be quantified. A predictive model of the dose was created using multiple linear regression. Results: Of the 103 patients involved in the study, 5 were excluded for clinical reasons and 2 for placement of radiochromic films outside the exposure field. 96 2D dose maps were finally used. The influencing factors with the highest correlation with the PSD are the patient's diameter and the J-CTO score. The predictive model is based on these parameters. The comparison between estimated and measured skin doses shows an average difference of 0.85 ± 0.55 Gy for doses of less than 6 Gy. The mean difference between air kerma and PSD is 1.66 ± 1.16 Gy. Conclusion: Using the developed method, a first estimate of the dose to the patient's skin is available before the start of the procedure, which helps the cardiologist in carrying out the intervention. This estimate is more accurate than that provided by the air kerma alone.
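The predictive model described above is a multiple linear regression of peak skin dose on the patient's diameter and the J-CTO score. The authors' coefficients and data are not given, so the sketch below fits an ordinary-least-squares model via the normal equations on entirely hypothetical training rows, just to illustrate the shape of such a predictor.

```python
def fit_ols(xs, ys):
    """Ordinary least squares for y ~ b0 + b1*x1 + b2*x2 + ...
    xs: list of feature tuples, ys: targets. Solves the normal
    equations (X^T X) b = X^T y by Gaussian elimination."""
    rows = [[1.0] + list(x) for x in xs]
    p = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(p)]
    for col in range(p):  # forward elimination with partial pivoting
        piv = max(range(col, p), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, p):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, p):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):  # back substitution
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c] for c in range(r + 1, p))) / xtx[r][r]
    return beta

# Hypothetical rows: (patient diameter in cm, J-CTO score) -> PSD in Gy
data = [((28, 1), 1.2), ((32, 2), 2.1), ((30, 3), 2.4), ((35, 2), 2.6), ((27, 0), 0.9)]
beta = fit_ols([d[0] for d in data], [d[1] for d in data])

def predict_psd(diameter_cm, jcto):
    return beta[0] + beta[1] * diameter_cm + beta[2] * jcto
```

On this made-up data both coefficients come out positive, matching the abstract's finding that a larger patient diameter and a higher J-CTO score both drive the skin dose up.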

Keywords: chronic total occlusion procedures, clinical experimentation, interventional radiology, patient's peak skin dose

Procedia PDF Downloads 123
11363 Optimization and Retrofitting for an Egyptian Refinery Water Network

Authors: Mohamed Mousa

Abstract:

Scarcity in the supply of freshwater, strict regulations on discharging wastewater, and support for sustainable development through water minimization techniques have raised interest in water reuse, regeneration, and recycling. Water is considered a vital element in the chemical industries. In this study, an optimization model is developed to determine the optimal design of a refinery's water network system via a source-interceptor-sink representation that involves several network alternatives; a Mixed-Integer Non-Linear Programming (MINLP) formulation is then used to obtain the optimal network superstructure based on flowrates, the concentrations of contaminants, etc. The main objectives of the model are to reduce the fixed cost of installing piping interconnections, to reduce the operating costs of all streams within the refinery's water network, and to minimize the concentrations of pollutants to comply with environmental regulations. A real case study for one of the Egyptian refineries was solved with the GAMS/BARON global optimization platform, and the water network was retrofitted and optimized, leading to savings of around 195 m³/hr of freshwater with a total reduction reaching 26%.
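The savings above come from letting process sinks accept reused wastewater up to their maximum inlet contaminant concentration instead of pure freshwater. The full problem is an MINLP over many sources and sinks; the toy sketch below (not the authors' GAMS model) shows the underlying contaminant mass balance for a single sink blended from one reusable stream and freshwater, with all flows and concentrations invented for illustration.

```python
def min_freshwater(demand, c_max, reuse_avail, c_reuse):
    """Blend a reusable stream (contaminant c_reuse, ppm) with clean
    freshwater to satisfy a sink of `demand` m3/h whose inlet
    concentration must not exceed c_max. The binding constraint is the
    contaminant mass balance: reuse * c_reuse <= demand * c_max.
    Returns (freshwater_flow, reuse_flow)."""
    if c_reuse <= c_max:
        reuse = min(reuse_avail, demand)  # reuse stream is clean enough on its own
    else:
        reuse = min(reuse_avail, demand * c_max / c_reuse)
    return demand - reuse, reuse

# Hypothetical sink: 300 m3/h at <= 50 ppm, fed partly by a 120 ppm stream
fresh, reuse = min_freshwater(demand=300.0, c_max=50.0,
                              reuse_avail=400.0, c_reuse=120.0)
```

Here 125 m³/h of wastewater displaces freshwater, a ~42% saving for this one sink; the real superstructure optimization trades such savings against piping and treatment costs across the whole network.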

Keywords: freshwater minimization, modelling, GAMS, BARON, water network design, wastewater reduction

Procedia PDF Downloads 212
11362 Hansen Solubility Parameter from Surface Measurements

Authors: Neveen AlQasas, Daniel Johnson

Abstract:

Membranes for water treatment are an established technology that attracts great attention due to its simplicity and cost-effectiveness. However, membranes in operation suffer from the adverse effects of membrane fouling. Bio-fouling is a phenomenon that occurs at the water-membrane interface and is a dynamic process initiated by the adsorption of dissolved organic material, including biomacromolecules, on the membrane surface. After initiation, attachment of microorganisms occurs, followed by biofilm growth. The biofilm blocks the pores of the membrane and consequently reduces the water flux. Moreover, the presence of a fouling layer can have a substantial impact on the membrane's separation properties. Understanding the mechanism of the initiation phase of biofouling is key to eliminating biofouling on membrane surfaces. The adhesion and attachment of different fouling materials are affected by the surface properties of the membrane materials. Therefore, the surface properties of different polymeric materials have been studied in terms of their surface energies and Hansen solubility parameters (HSP). The difference between the combined HSP parameters (the HSP distance) allows prediction of the affinity of two materials for each other. The possibility of measuring the HSP of different polymer films via surface measurements, such as contact angle measurements, has been thoroughly investigated. Knowing the HSP of a membrane material and the HSP of a specific foulant facilitates the estimation of the HSP distance between the two, and therefore the strength of attachment to the surface. Contact angle measurements using fourteen different solvents on five different polymeric films were carried out using the sessile drop method. Solvents were ranked as good or bad using different ranking methods, and the ranking was used to calculate the HSP of each polymeric film.
Results clearly indicate the absence of a direct relation between the contact angle values of each film and the HSP distance between each polymer film and the solvents used. Therefore, estimating HSP via contact angle alone is not sufficient. However, it was found that if the surface tensions and viscosities of the solvents used are taken into account in the analysis of the contact angle values, a prediction of the HSP from contact angle measurements is possible. This was carried out by training a neural network model. The trained neural network model has three inputs: the contact angle value, and the surface tension and viscosity of the solvent used. The model is able to predict the HSP distance between the solvent used and the tested polymer (material). The HSP distance prediction is further used to estimate the total and individual HSP parameters of each tested material. The results showed an accuracy of about 90% for all five studied films.
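The HSP distance invoked above is the standard Hansen quantity Ra, computed from the dispersive, polar, and hydrogen-bonding components of the two materials: Ra² = 4(δD1−δD2)² + (δP1−δP2)² + (δH1−δH2)². A minimal sketch, with illustrative parameter values (not taken from this study):

```python
import math

def hsp_distance(hsp_a, hsp_b):
    """Hansen distance Ra between two materials, each given as a
    (dD, dP, dH) tuple in MPa^0.5. Smaller Ra -> higher mutual affinity.
    Ra^2 = 4*(dD1-dD2)^2 + (dP1-dP2)^2 + (dH1-dH2)^2
    """
    dd = hsp_a[0] - hsp_b[0]
    dp = hsp_a[1] - hsp_b[1]
    dh = hsp_a[2] - hsp_b[2]
    return math.sqrt(4.0 * dd * dd + dp * dp + dh * dh)

# Illustrative, literature-style values: a PVDF-like polymer vs. water
ra = hsp_distance((17.2, 12.5, 9.2), (15.5, 16.0, 42.3))
```

The large Ra here (dominated by the hydrogen-bonding mismatch with water) is the kind of quantity the trained network predicts from contact angle, surface tension, and viscosity.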

Keywords: surface characterization, Hansen solubility parameter estimation, contact angle measurements, artificial neural network model, surface measurements

Procedia PDF Downloads 78
11361 SEM Image Classification Using CNN Architectures

Authors: Güzi̇n Ti̇rkeş, Özge Teki̇n, Kerem Kurtuluş, Y. Yekta Yurtseven, Murat Baran

Abstract:

A scanning electron microscope (SEM) is a type of electron microscope mainly used in the nanoscience and nanotechnology areas. Automatic image recognition and classification are among the general areas of application concerning SEM. In line with these usages, the present paper proposes a deep learning algorithm that classifies SEM images into nine categories by means of an online application to simplify the process. The NFFA-EUROPE - 100% SEM data set, containing approximately 21,000 images, was used to train and test the algorithm with an 80%/20% split, respectively. Validation was carried out using a separate data set obtained from the Middle East Technical University (METU) in Turkey. To increase the accuracy of the results, the Inception-ResNet-V2 model was used with a fine-tuning approach. Using a confusion matrix, it was observed that the coated-surface category has a negative effect on the accuracy of the results, since it overlaps other categories in the data set, thereby confusing the model when detecting category-specific patterns. For this reason, the coated-surface category was removed from the training data set, increasing accuracy up to 96.5%.
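The key diagnostic step above is reading accuracy off a confusion matrix and seeing how removing one ambiguous class changes it. A minimal sketch with an invented 3-class matrix (the paper's nine-class matrix is not given) illustrates the mechanics:

```python
def accuracy(cm):
    """Overall accuracy from a square confusion matrix
    (rows = true class, columns = predicted class)."""
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total

def drop_class(cm, idx):
    """Remove one class (its row and column), e.g. an ambiguous
    'coated-surface' category that overlaps the other classes."""
    return [[v for j, v in enumerate(row) if j != idx]
            for i, row in enumerate(cm) if i != idx]

# Toy matrix in which class 2 is frequently confused with classes 0 and 1
cm = [[90, 5, 5],
      [4, 88, 8],
      [20, 25, 55]]
acc_before = accuracy(cm)                 # 233/300, about 0.78
acc_after = accuracy(drop_class(cm, 2))   # 178/187, about 0.95
```

Dropping the confusable class raises accuracy on the remaining classes, which is the same effect the abstract reports for the coated-surface category.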

Keywords: convolutional neural networks, deep learning, image classification, scanning electron microscope

Procedia PDF Downloads 106
11360 Students' Perspectives on Quality of Course Evaluation Practices and Feedbacks in Eritrea

Authors: Ermias Melake Tesfay

Abstract:

The importance of evaluation practices and feedback to student advancement and retention has gained prominence in the literature over the past ten years. Many issues have been raised about the quality and types of evaluation carried out in higher education and about the quality and quantity of student feedback. The aim of this study was to explore students' perspectives on the quality of course evaluation practices and feedback in the College of Education and the College of Science. The study used both quantitative and qualitative methods to collect data. Data were collected from third-year and fourth-year students of 13 departments in the College of Education and the College of Science in Eritrea. A modified Service Performance (SERVPERF) questionnaire and focus group discussions were used to collect the data. The sample population comprised 135 third-year and fourth-year students from both colleges. A questionnaire using a 5-point Likert scale was administered to all respondents, whilst two focus group discussions were conducted. Findings from the survey data and focus group discussions showed that the majority of students hold a positive perception of the quality of course evaluation practices but a negative perception of the methods of awarding grades and of administrators' role in listening to students' complaints about courses. Furthermore, the analysis of the questionnaire showed that there is no statistically significant difference between third-year and fourth-year students, between the College of Education and the College of Science, or between male and female students regarding the quality of course evaluation practices and feedback. The study recommends that the colleges improve the quality of fairness and feedback during course assessment.
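The between-group comparisons above (year of study, college, gender) are typically tested with an independent-samples t-test on the Likert-scale means. The study's raw data are not given, so the sketch below computes Welch's t statistic and degrees of freedom on hypothetical scores purely to show the calculation:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic and degrees of freedom for two independent
    samples (robust to unequal variances). |t| well below ~2 suggests
    no significant difference at the 5% level for moderate df."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical 5-point Likert scores: third-year vs fourth-year students
t, df = welch_t([4, 3, 4, 5, 3, 4], [3, 4, 4, 3, 4, 3])
```

On this made-up data |t| is well under 2, the same "no statistically significant difference" pattern the abstract reports for its group comparisons.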

Keywords: evaluation, feedback, quality, students' perception

Procedia PDF Downloads 138
11359 Ethical Decision-Making in AI and Robotics Research: A Proposed Model

Authors: Sylvie Michel, Emmanuelle Gagnou, Joanne Hamet

Abstract:

Researchers in the fields of AI and Robotics frequently encounter ethical dilemmas throughout their research endeavors. Various ethical challenges have been pinpointed in the existing literature, including biases and discriminatory outcomes, diffusion of responsibility, and a deficit in transparency within AI operations. This research aims to pinpoint these ethical quandaries faced by researchers and shed light on the mechanisms behind ethical decision-making in the research process. By synthesizing insights from existing literature and acknowledging prevalent shortcomings, such as overlooking the heterogeneous nature of decision-making, non-accumulative results, and a lack of consensus on numerous factors due to limited empirical research, the objective is to conceptualize and validate a model. This model will incorporate influences from individual perspectives and situational contexts, considering potential moderating factors in the ethical decision-making process. Qualitative analyses were conducted based on direct observation of an AI/Robotics research team focusing on collaborative robotics for several months. Subsequently, semi-structured interviews with 16 team members were conducted. The entire process took place during the first semester of 2023. Observations were analyzed using an analysis grid, and the interviews underwent thematic analysis using Nvivo software. An initial finding involves identifying the ethical challenges that AI/robotics researchers confront, underlining a disparity between practical applications and theoretical considerations regarding ethical dilemmas in the realm of AI. Notably, researchers in AI prioritize the publication and recognition of their work, sparking the genesis of these ethical inquiries. 
Furthermore, this article illustrates that researchers tend to embrace a consequentialist ethical framework concerning safety (for humans engaging with robots/AI), worker autonomy in relation to robots, and the societal implications of labor (can robots displace jobs?). A second significant contribution entails proposing a model for ethical decision-making within the AI/Robotics research sphere. The proposed model adopts a process-oriented approach, delineating various research stages (topic proposal, hypothesis formulation, experimentation, conclusion, and valorization). Across these stages and the ethical queries they entail, a comprehensive four-point account of ethical decision-making is presented: recognition of the moral quandary; moral judgment, signifying the decision-maker's aptitude to discern the morally righteous course of action; moral intention, reflecting the ability to prioritize moral values above others; and moral behavior, denoting the application of moral intention to the situation. Variables such as political inclinations ((anti-)capitalism, environmentalism, veganism) seem to wield significant influence. Moreover, age emerges as a noteworthy moderating factor. AI and robotics researchers are continually confronted with ethical dilemmas during their research endeavors, necessitating thoughtful decision-making. The contribution involves introducing a contextually tailored model, derived from meticulous observations and insightful interviews, enabling the identification of factors that shape ethical decision-making at different stages of the research process.

Keywords: ethical decision making, artificial intelligence, robotics, research

Procedia PDF Downloads 63
11358 Automated Adaptions of Semantic User- and Service Profile Representations by Learning the User Context

Authors: Nicole Merkle, Stefan Zander

Abstract:

Ambient Assisted Living (AAL) describes a technological and methodological stack (e.g., formal model-theoretic semantics, rule-based reasoning, and machine learning) addressing different aspects of the behavior, activities, and characteristics of humans. Hence, a semantic representation of the user environment and its relevant elements is required in order to allow assistive agents to recognize situations and deduce appropriate actions. Furthermore, the user and his/her characteristics (e.g. physical, cognitive, preferences) need to be represented with a high degree of expressiveness in order to allow software agents a precise evaluation of the user's context models. The correct interpretation of these context models highly depends on temporal and spatial circumstances as well as individual user preferences. In most AAL approaches, model representations of real-world situations represent the current state of a universe of discourse at a given point in time, neglecting transitions between states. However, the AAL domain currently lacks sufficient approaches that contemplate the dynamic adaptation of context-related representations. Semantic representations of relevant real-world excerpts (e.g. user activities) help cognitive, rule-based agents to reason and make decisions in order to help users in appropriate tasks and situations. However, rules and reasoning on semantic models alone are not sufficient for handling uncertainty and fuzzy situations. A certain situation can require different (re-)actions in order to achieve the best results with respect to the user and his/her needs. But what is the best result? To answer this question, we need to consider that every smart agent is required to achieve an objective, but this objective is mostly defined by domain experts, who can also fail in their estimation of what is desired by the user and what is not.
Hence, a smart agent has to be able to learn from context history data and estimate or predict what is most likely in certain contexts. Furthermore, different agents with contrary objectives can cause collisions, as their actions influence the user's context and its constituting conditions in unintended or uncontrolled ways. We present an approach for dynamically updating a semantic model with respect to the current user context that allows flexibility of the software agents and enhances their conformance in order to improve the user experience. The presented approach adapts rules by learning from sensor evidence and user actions using probabilistic reasoning approaches, based on given expert knowledge. The semantic domain model consists basically of device, service, and user profile representations. In this paper, we present how this semantic domain model can be used to compute the probability of matching rules and actions. We apply this probability estimation to compare the current domain model representation with the computed one in order to adapt the formal semantic representation. Our approach aims at minimizing the likelihood of unintended interferences in order to eliminate conflicts and unpredictable side effects by updating pre-defined expert knowledge according to the most probable context representation. This enables agents to adapt to dynamic changes in the environment, which enhances the provision of adequate assistance and positively affects user satisfaction.

Keywords: ambient intelligence, machine learning, semantic web, software agents

Procedia PDF Downloads 266
11357 Rheological Characteristics of Ice Slurries Based on Propylene- and Ethylene-Glycol at High Ice Fractions

Authors: Senda Trabelsi, Sébastien Poncet, Michel Poirier

Abstract:

Ice slurries are considered promising phase-change secondary fluids for air-conditioning, packaging, or cooling industrial processes. An experimental study has been carried out to measure the rheological characteristics of ice slurries. Ice slurries consist of a solid phase (flake ice crystals) and a liquid phase. The latter is composed of a mixture of liquid water and an additive, here either (1) propylene glycol (PG) or (2) ethylene glycol (EG), used to lower the freezing point of water. Concentrations of 5%, 14%, and 24% of both additives are investigated with ice mass fractions ranging from 5% to 85%. The rheological measurements are carried out using a Discovery HR-2 vane-concentric cylinder geometry with four full-length blades. The experimental results show that the behavior of ice slurries is generally non-Newtonian, with shear-thinning or shear-thickening behavior depending on the experimental conditions. In order to determine the consistency and the flow index, the Herschel-Bulkley model is used to describe the behavior of the ice slurries. The present results are finally validated against an experimental database found in the literature and the predictions of an Artificial Neural Network model.
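The Herschel-Bulkley model mentioned above relates shear stress to shear rate as τ = τ0 + K·γ̇ⁿ, where τ0 is the yield stress, K the consistency, and n the flow index (n < 1 shear-thinning, n > 1 shear-thickening). A minimal sketch with illustrative parameters (not fitted values from this study):

```python
def herschel_bulkley(shear_rate, tau0, k, n):
    """Shear stress tau = tau0 + K * gamma_dot**n.
    n < 1 -> shear-thinning, n > 1 -> shear-thickening,
    n = 1 with tau0 = 0 -> Newtonian."""
    return tau0 + k * shear_rate ** n

def apparent_viscosity(shear_rate, tau0, k, n):
    """Apparent viscosity eta = tau / gamma_dot."""
    return herschel_bulkley(shear_rate, tau0, k, n) / shear_rate

# Illustrative parameters for a shear-thinning slurry (n < 1)
eta_low = apparent_viscosity(10.0, tau0=2.0, k=0.5, n=0.8)
eta_high = apparent_viscosity(100.0, tau0=2.0, k=0.5, n=0.8)
```

Fitting τ0, K, and n to flow-curve data is what yields the consistency and flow index the abstract refers to; for n < 1, as here, the apparent viscosity falls as the shear rate rises.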

Keywords: ice slurry, propylene-glycol, ethylene-glycol, rheology

Procedia PDF Downloads 248
11356 Investigation of Gas Tungsten Arc Welding Parameters on Residual Stress of Heat Affected Zone in Inconel X750 Super Alloy Welding Using Finite Element Method

Authors: Kimia Khoshdel Vajari, Saber Saffar

Abstract:

Reducing the residual stresses caused by welding is desirable for industry. The effect of welding sequence, as well as the effect of yield stress, on the magnitude of the residual stresses generated in Inconel X750 superalloy sheets and beams has been investigated. The finite element model used in this research is a three-dimensional thermal and mechanical model, and the type of analysis is indirect coupling. This analysis is done in two stages. First, a thermal analysis is performed, and then the thermal results of the first analysis are used as the applied load in the second analysis. ABAQUS has been used for modeling, and the DFLUX subroutine has been used in the Fortran programming environment to move the arc and the molten pool. The results of this study show that the amount of tensile residual stress in symmetric, discontinuous, and symmetric-discontinuous welds is reduced by a maximum of 27%, 54%, and 37%, respectively, compared to direct welding. The results also show that the residual stresses created by welding increase linearly with increasing yield stress, with a slope of 40%.

Keywords: residual stress, X750 superalloy, finite element, welding, thermal analysis

Procedia PDF Downloads 91
11355 Extraction of Forest Plantation Resources in Selected Forest of San Manuel, Pangasinan, Philippines Using LiDAR Data for Forest Status Assessment

Authors: Mark Joseph Quinto, Roan Beronilla, Guiller Damian, Eliza Camaso, Ronaldo Alberto

Abstract:

Forest inventories are essential to assess the composition, structure, and distribution of forest vegetation, which can be used as baseline information for management decisions. Classical forest inventory is labor-intensive, time-consuming, and sometimes even dangerous. The use of Light Detection and Ranging (LiDAR) in forest inventory would improve on and overcome these restrictions. This study was conducted to determine the possibility of using LiDAR-derived data to extract high-accuracy forest biophysical parameters and as a non-destructive method for forest status analysis of San Manuel, Pangasinan. Forest resource extraction was carried out using LAStools, GIS, ENVI, and .bat scripts with the available LiDAR data. The process includes the generation of derivatives such as the Digital Terrain Model (DTM), Canopy Height Model (CHM), and Canopy Cover Model (CCM) in .bat scripts, followed by the generation of 17 composite bands used in the extraction of forest classification covers using ENVI 4.8 and GIS software. The Diameter at Breast Height (DBH), Above-Ground Biomass (AGB), and Carbon Stock (CS) were estimated for each classified forest cover, and tree count extraction was carried out using GIS. Subsequently, field validation was conducted for accuracy assessment. Results showed that the forest of San Manuel has 73% forest cover, which is much higher than the 10% canopy cover requirement. Of the extracted canopy heights, 80% of the trees' heights range from 12 m to 17 m. The CS of the three forest covers based on the AGB were: 20819.59 kg/20x20 m for closed broadleaf, 8609.82 kg/20x20 m for broadleaf plantation, and 15545.57 kg/20x20 m for open broadleaf. The average tree count for the forest plantation was 413 trees/ha. As such, the forest of San Manuel has a high percentage of forest cover and a high CS.
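A Canopy Height Model such as the one generated above is conventionally computed as the cell-by-cell difference between a surface model (DSM, first returns) and the terrain model (DTM, ground returns), and percent canopy cover follows from thresholding the heights. The abstract does not give its exact raster workflow, so the sketch below uses toy 3×3 grids and an assumed 5 m height threshold purely to illustrate the arithmetic:

```python
def canopy_height_model(dsm, dtm):
    """CHM = DSM - DTM, computed cell by cell on matching grids."""
    return [[s - t for s, t in zip(srow, trow)]
            for srow, trow in zip(dsm, dtm)]

def percent_canopy_cover(chm, height_threshold=5.0):
    """Share of grid cells whose canopy height exceeds the threshold."""
    cells = [h for row in chm for h in row]
    return 100.0 * sum(h > height_threshold for h in cells) / len(cells)

# Toy 3x3 elevation grids in metres (hypothetical values)
dsm = [[112.0, 118.0, 103.0],
       [115.0, 120.0, 117.0],
       [102.0, 116.0, 119.0]]
dtm = [[100.0, 101.0, 100.0],
       [101.0, 102.0, 101.0],
       [100.0, 101.0, 102.0]]
chm = canopy_height_model(dsm, dtm)
cover = percent_canopy_cover(chm)  # 7 of 9 cells exceed 5 m
```

On this toy grid the cover comes out near 78%, the same order as the 73% forest cover reported for San Manuel.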

Keywords: carbon stock, forest inventory, LiDAR, tree count

Procedia PDF Downloads 368
11354 Monocytic Paraoxonase 2 (PON 2) Lactonase Activity Is Related to Myocardial Infarction

Authors: Mukund Ramchandra Mogarekar, Pankaj Kumar, Shraddha V. More

Abstract:

Background: Total cholesterol (TC), low-density lipoprotein cholesterol (LDL-C), very low-density lipoprotein cholesterol (VLDL-C), Apo B, and lipoprotein(a) have been found to be atherogenic factors, while high-density lipoprotein cholesterol (HDL-C) is anti-atherogenic. Methods and results: The study group consisted of 40 MI subjects as cases and 40 healthy subjects as controls. Monocytic PON 2 lactonase (LACT) activity was measured using dihydrocoumarin (DHC) as substrate. Phenotyping was done by the method of Mogarekar MR et al., serum AOPP by the modified method of Witko-Sarsat V et al., and Apo B by turbidimetric immunoassay. PON 2 LACT activities were significantly lower (p < 0.05) in MI subjects, and AOPP and Apo B were higher. The trimodal distribution of QQ, QR, and RR phenotypes in the study population showed no significant difference between cases and controls (p > 0.05). Univariate binary logistic regression analysis showed an independent association of TC, HDL, LDL, AOPP, Apo B, and PON 2 LACT activity with MI, and multiple forward binary logistic regression showed PON 2 LACT activity and serum Apo B to be independent predictors of MI. Conclusions: The decrease in PON 2 LACT activity in MI subjects relative to controls suggests increased oxidative stress in MI, which is reflected by the significantly increased AOPP and Apo B. The PON 1 polymorphism (QQ, QR, and RR) showed no significant difference in protection against MI.
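The univariate binary logistic regression mentioned above models the probability of MI as a sigmoid of a linear predictor. The study's data are not published here, so the sketch below fits a one-variable logistic model by plain gradient descent on invented activity values, just to show the shape of the analysis (lower PON 2 lactonase activity associated with MI would give a negative coefficient):

```python
import math

def fit_logistic(xs, ys, lr=0.1, iters=5000):
    """Univariate binary logistic regression by batch gradient descent:
    P(y=1 | x) = 1 / (1 + exp(-(b0 + b1*x))). Returns (b0, b1)."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y          # gradient of the log-loss w.r.t. b0
            g1 += (p - y) * x    # gradient of the log-loss w.r.t. b1
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Hypothetical data: lower lactonase activity (arbitrary units) -> MI (y = 1)
activity = [2.1, 2.4, 2.0, 1.8, 3.5, 3.8, 3.2, 3.6]
mi =       [1,   1,   1,   1,   0,   0,   0,   0]
b0, b1 = fit_logistic(activity, mi)
```

On this made-up data the fitted slope is negative, i.e. higher lactonase activity lowers the predicted MI probability, mirroring the direction of association the abstract reports.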

Keywords: advanced oxidation protein products, apolipoprotein-B, myocardial infarction, paraoxonase 2 lactonase

Procedia PDF Downloads 226
11353 Contribution of Automated Early Warning Score Usage to Patient Safety

Authors: Phang Moon Leng

Abstract:

The automated early warning score is a newly developed clinical decision tool used to streamline and improve the process of obtaining a patient's vital signs, so that a clinical decision can be made at an earlier stage to prevent the patient from deteriorating further. The technology provides an immediate update on the score and the clinical decision to be taken based on the outcome. This paper studies whether an automated early warning score system has assisted the hospital in early detection and escalation of clinical conditions and improved patient outcomes. The hospital adopted the Modified Early Warning Score (MEWS) scoring system and MEWS clinical response in Philips IntelliVue Guardian automated early warning score equipment, and studied whether the process had been made leaner, whether the use of technology improved the usage and experience of the nurses, and whether the technology improved patient care and outcomes. It was found that the number of steps required to obtain vital signs was significantly reduced, and vital signs are now obtained more frequently. The number of deaths and the length of stay decreased significantly, as clinical decisions can be made and escalated more quickly with the automated EWS. The equipment has also improved work efficiency by removing the need for manual documentation in the patient's EMR. The technology streamlines clinical decision-making, allows faster care and intervention, and improves overall patient outcomes, which translates to better patient care.
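An automated EWS of this kind maps each vital sign to a sub-score through banded thresholds and sums the sub-scores; a simplified sketch (the bands below are illustrative only, and real MEWS charts vary by institution):

```python
def mews_component(value, bands):
    """Return the sub-score for one vital sign given (lower, upper, score) bands."""
    for lower, upper, score in bands:
        if lower <= value <= upper:
            return score
    raise ValueError("value outside banded range")

# Illustrative scoring bands (simplified; assumed, not a hospital's actual chart).
RESP_RATE = [(0, 8, 2), (9, 14, 0), (15, 20, 1), (21, 29, 2), (30, 99, 3)]
HEART_RATE = [(0, 40, 2), (41, 50, 1), (51, 100, 0),
              (101, 110, 1), (111, 129, 2), (130, 300, 3)]

# A patient with respiratory rate 22 and heart rate 105:
total = mews_component(22, RESP_RATE) + mews_component(105, HEART_RATE)
print(total)  # 2 + 1 = 3
```

In an automated system the monitored vitals feed this computation directly, and the total triggers the configured clinical response without manual charting.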

Keywords: automated early warning score, clinical quality and safety, patient safety, medical technology

Procedia PDF Downloads 166
11352 Waste-Based Surface Modification to Enhance Corrosion Resistance of Aluminium Bronze Alloy

Authors: Wilson Handoko, Farshid Pahlevani, Isha Singla, Himanish Kumar, Veena Sahajwalla

Abstract:

Aluminium bronze alloys are well known for their superior abrasion resistance, tensile strength and non-magnetic properties, due to the co-presence of iron (Fe) and aluminium (Al) as alloying elements, and have been commonly used in many industrial applications. However, continuous exposure to the marine environment increases the risk of failure of Al bronze alloy parts. Although a higher level of corrosion resistance can be achieved by modifying the elemental composition, this comes at a price: a more complex manufacturing process and an increased risk of reduced ductility of the Al bronze alloy. In this research, ironmaking slag and waste plastic were used as the input sources for surface modification of an Al bronze alloy. Microstructural analysis was conducted using polarised light microscopy and scanning electron microscopy (SEM) equipped with energy-dispersive spectroscopy (EDS). An electrochemical corrosion test was carried out using the Tafel polarisation method, and the protection efficiency relative to the base material was calculated. Results indicate that the uniform modified surface, the result of a selective diffusion process, enhanced corrosion resistance by up to 12.67%. This approach opens a new opportunity for various commercial-scale industrial applications, minimising the dependency on natural resources by transforming waste sources into protective coatings in environmentally friendly and cost-effective ways.
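Protection efficiency from Tafel polarisation data is conventionally computed from the corrosion current densities of the bare and treated surfaces; a minimal sketch with assumed values (not the paper's measurements):

```python
# Corrosion current densities from Tafel extrapolation (illustrative values).
i_corr_base = 8.2e-6    # A/cm^2, uncoated Al bronze (assumed)
i_corr_mod = 7.16e-6    # A/cm^2, waste-based modified surface (assumed)

# Protection efficiency: relative reduction in corrosion current density.
efficiency = (1 - i_corr_mod / i_corr_base) * 100
print(round(efficiency, 2))  # -> 12.68 (percent)
```

A lower corrosion current density for the modified surface translates directly into a positive protection efficiency of the kind reported in the abstract.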

Keywords: aluminium bronze, waste-based surface modification, Tafel polarisation, corrosion resistance

Procedia PDF Downloads 226
11351 Nonparametric Path Analysis with Truncated Spline Approach in Modeling Rural Poverty in Indonesia

Authors: Usriatur Rohma, Adji Achmad Rinaldo Fernandes

Abstract:

Nonparametric path analysis is a statistical method that does not rely on the assumption that the shape of the regression curve is known. The purpose of this study is to determine the best nonparametric truncated spline path function between linear and quadratic polynomial degrees with 1, 2, and 3 knot points, and to determine the significance of the estimate of the best nonparametric truncated spline path function in the model of the effect of population migration and agricultural economic growth on rural poverty, through the variable unemployment rate, using the t-test statistic at the jackknife resampling stage. The data used in this study are secondary data obtained from statistical publications. The results show that the best nonparametric truncated spline path model is the quadratic polynomial with 3 knot points. In addition, the significance test of the best truncated spline nonparametric path function estimate using jackknife resampling shows that all exogenous variables have a significant influence on the endogenous variables.
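A truncated spline model of the kind compared here rests on a truncated power basis: ordinary polynomial terms plus one hinge term per knot. A minimal sketch of the design matrix for a quadratic spline with three knots (knot locations are arbitrary here):

```python
import numpy as np

def truncated_power_basis(x, knots, degree):
    """Design matrix for a truncated power spline: polynomial terms
    1, x, ..., x^degree plus one (x - k)_+^degree term per knot."""
    cols = [x**d for d in range(degree + 1)]
    cols += [np.maximum(x - k, 0.0)**degree for k in knots]
    return np.column_stack(cols)

x = np.linspace(0, 10, 5)
B = truncated_power_basis(x, knots=[2.5, 5.0, 7.5], degree=2)
print(B.shape)  # 5 rows; 3 polynomial columns + 3 knot columns = (5, 6)
```

Fitting the path coefficients then reduces to least squares on this matrix; comparing degrees and knot counts, as the study does, means comparing fits across different basis configurations.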

Keywords: nonparametric path analysis, truncated spline, linear, quadratic, rural poverty, jackknife resampling

Procedia PDF Downloads 22
11350 Evolution of Performance Measurement Methods in Conditions of Uncertainty: The Implementation of Fuzzy Sets in Performance Measurement

Authors: E. A. Tkachenko, E. M. Rogova, V. V. Klimov

Abstract:

One of the basic issues of development management is performance measurement, a prerequisite for identifying the achievement of development objectives. The aim of our research is to develop an improved model for assessing a company's development results. The model should take into account the cyclical nature of development and the high degree of uncertainty in dealing with numerous management tasks. Our hypotheses may be formulated as follows. Hypothesis 1: the cycle of a company's development may be studied from the standpoint of a project cycle, using the methods and tools of project analysis. Hypothesis 2: the problem of uncertainty when justifying managerial decisions within the framework of a company's development cycle can be solved through the mathematical apparatus of fuzzy logic. A reasoned justification of the validity of these hypotheses is given in the article. The fuzzy logic toolkit is applied to the case of a technology shift within an enterprise. It is shown that some restrictions in performance measurement inherent in conventional methods can be eliminated by implementing the fuzzy logic apparatus in performance measurement models.
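Fuzzy performance assessment of the kind proposed here starts from membership functions over a performance indicator; a sketch using triangular memberships with assumed set shapes (not the authors' model):

```python
def triangular(x, a, b, c):
    """Triangular membership: 0 at a and c, rising to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Illustrative fuzzy sets over a 0-100 performance score (shapes are assumed).
# A score of 55 belongs partially to both "low" and "high" performance:
low = triangular(55, 0, 25, 60)
high = triangular(55, 40, 75, 100)
print(round(low, 3), round(high, 3))
```

The point is that a crisp score maps to graded memberships in overlapping categories, which is how fuzzy models absorb the uncertainty that hard thresholds in conventional performance measurement cannot.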

Keywords: fuzzy logic, fuzzy sets, performance measurement, project analysis

Procedia PDF Downloads 359
11349 Modelling of Aerosols in Absorption Column

Authors: Hammad Majeed, Hanna Knuutila, Magne Hillestad, Hallvard F. Svendsen

Abstract:

Formation of aerosols can cause serious complications in industrial exhaust gas cleaning processes. Small mist droplets and fog formed can normally not be removed in conventional demisting equipment, because their submicron size allows the particles or droplets to follow the gas flow. As a consequence, aerosol-based emissions on the order of grams per Nm3 have been identified from PCCC plants. A Matlab model of droplet behaviour in the absorber was therefore developed. The model predicts the droplet size, the droplet internal variable profiles, and the mass transfer fluxes as a function of position in the absorber. The model is based on the orthogonal collocation method, a subclass of the method of weighted residuals for boundary value problems. This paper presents the basic simulation tool for the characterization of aerosols formed in CO2 absorption columns, and describes how various entering droplets grow or shrink through an absorber and how their composition changes with respect to time. Some preliminary simulation results for aerosol droplet composition and temperature profiles are given.
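The method of weighted residuals named above can be illustrated on a far simpler boundary value problem than the droplet model; the sketch below collocates a polynomial trial solution for y'' = y and is an illustration of the technique, not the authors' Matlab code:

```python
import numpy as np

# Collocation sketch for the BVP y'' = y, y(0)=0, y(1)=1
# (exact solution: sinh(x)/sinh(1)).
# The trial function satisfies the boundary conditions by construction:
#   y(x) = x + c1*(x - x^2) + c2*(x^2 - x^3)
# The residual y'' - y is forced to zero at two interior collocation points.
xc = np.array([1/3, 2/3])
A = np.column_stack([
    -2 - (xc - xc**2),             # coefficient of c1 in the residual
    (2 - 6*xc) - (xc**2 - xc**3),  # coefficient of c2 in the residual
])
b = xc                             # the x term moved to the right-hand side
c1, c2 = np.linalg.solve(A, b)

x = 0.5
y_half = x + c1*(x - x**2) + c2*(x**2 - x**3)
print(round(y_half, 3))  # ~0.444, close to the exact sinh(0.5)/sinh(1) ~ 0.4434
```

Orthogonal collocation differs only in choosing the collocation points as roots of orthogonal polynomials, which improves accuracy; the structure of the resulting algebraic system is the same.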

Keywords: absorption columns, aerosol formation, amine emissions, internal droplet profiles, monoethanolamine (MEA), post combustion CO2 capture, simulation

Procedia PDF Downloads 232
11348 Optimization Studies on Biosorption of Ni(II) and Cd(II) from Wastewater Using Pseudomonas putida in a Packed Bed Bioreactor

Authors: K. Narasimhulu, Y. Pydi Setty

Abstract:

The objective of this study is the optimization of process parameters in the biosorption of Ni(II) and Cd(II) ions by Pseudomonas putida, using response surface methodology (RSM) in a packed bed bioreactor. The experimental data were also tested against theoretical models to find the best fit. The present paper elucidates RSM as an efficient approach for predictive model building and optimization of Ni(II) and Cd(II) biosorption by Pseudomonas putida. In the packed bed biosorption studies, comparing the breakthrough curves of Ni(II) and Cd(II) for agar-immobilized and PAA-immobilized Pseudomonas putida at the optimum conditions of a flow rate of 300 mL/h, an initial metal ion concentration of 100 mg/L and a bed height of 20 cm with 12 g of biosorbent, it was found that the agar-immobilized Pseudomonas putida showed the maximum percent biosorption, with bed saturation occurring at 20 minutes. The optimization results for Ni(II) and Cd(II) biosorption by Pseudomonas putida obtained from the Design Expert software were a bed height of 19.93 cm, an initial metal ion concentration of 103.85 mg/L, and a flow rate of 310.57 mL/h. The percent biosorption of Ni(II) and Cd(II) was 87.2% and 88.2%, respectively. The predicted optimized parameters are in agreement with the experimental results.
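RSM fits a second-order polynomial to the measured responses and locates its stationary point; a one-variable sketch with invented biosorption data peaking near the reported 20 cm bed height (the data and single-factor simplification are assumptions, not the study's design):

```python
import numpy as np

# Illustrative RSM-style data: percent biosorption vs bed height (cm).
bed = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
y = np.array([70.0, 82.0, 87.0, 84.0, 73.0])

# Second-order model y = b0 + b1*x + b2*x^2 fitted by least squares.
X = np.column_stack([np.ones_like(bed), bed, bed**2])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point of the quadratic: dy/dx = 0  ->  x* = -b1 / (2*b2)
x_opt = -b1 / (2 * b2)
print(round(x_opt, 2))  # optimum bed height near 20 cm for this toy data
```

With three factors, as in the study, the same idea uses a full quadratic with interaction terms, and the software (here Design Expert) solves for the stationary point of the fitted surface.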

Keywords: packed bed bioreactor, response surface methodology, Pseudomonas putida, biosorption, wastewater

Procedia PDF Downloads 440
11347 Alkoxysilanes Production from Silica and Dimethyl Carbonate Promoted by Alkali Bases: A DFT Investigation of the Reaction Mechanism

Authors: Valeria Butera, Norihisa Fukaya, Jun-Chu Choi, Kazuhiko Sato, Yoong-Kee Choe

Abstract:

Several silicon dioxide sources can react with dimethyl carbonate (DMC) in the presence of alkali base catalysts to ultimately produce tetramethoxysilane (TMOS). Experimental findings suggest that the reaction proceeds through several steps in which the first molecule of DMC is converted to dimethylsilyloxide (DMOS) and CO₂. Following the same mechanistic steps, a second molecule of DMC reacts with the DMOS to afford the final product, TMOS. Using a cluster model approach, a quantum-mechanical investigation of the first part of the reaction, leading to DMOS formation, is reported with a twofold purpose: (1) to verify the viability of the reaction mechanism proposed on the basis of experimental evidence, and (2) to compare the behaviour of three different alkali hydroxides MOH, where M = Li, K and Cs, to determine whether their different ionic radii and charge densities can be considered responsible for the observed differences in reactivity. Our findings confirm the observed experimental trend and furnish important information about the effective role of the alkali hydroxides, explaining the different catalytic activity of the three metal cations.

Keywords: alkoxysilanes production, cluster model approach, DFT, DMC conversion

Procedia PDF Downloads 260
11346 Aerodynamic Optimum Nose Shape Change of High-Speed Train by Design Variable Variation

Authors: Minho Kwak, Suhwan Yun, Choonsoo Park

Abstract:

Nose shape optimization of a high-speed train is performed to improve its aerodynamic characteristics. Based on the commercial train KTX-Sancheon, multi-objective optimizations are conducted to improve side wind stability and the micro-pressure wave, following an optimization to reduce aerodynamic drag. 3D nose shapes are modelled by the Vehicle Modeling Function. Aerodynamic drag and side wind stability are calculated by a three-dimensional compressible Navier-Stokes solver, and the micro-pressure wave by an axisymmetric compressible Navier-Stokes solver. The maximin Latin hypercube sampling method is used to extract sampling points for constructing the approximation model. A kriging model is used as the approximation model, and NSGA-II as the multi-objective optimization algorithm. Nose length, nose tip height, and lower surface curvature are the design variables. Because nose length is a dominant variable for the aerodynamic characteristics of the train nose, two optimization processes are carried out, with and without nose length as a design variable. A Pareto set was obtained for each process, and an optimized nose shape was selected from each considering the Honam high-speed rail line infrastructure in South Korea. Through the optimization with nose length, compared to the KTX-Sancheon, aerodynamic drag was reduced by 9.0%, side wind stability was improved by 4.5% and the micro-pressure wave was reduced by 5.4%; without nose length, aerodynamic drag was reduced by 7.3%, side wind stability improved by 3.9% and the micro-pressure wave reduced by 3.9%. Comparison of the two optimized shapes shows that they are similar apart from the effect of nose length.
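Latin hypercube sampling, used here to seed the kriging surrogate, places exactly one sample in each equal-probability stratum of every design variable; a basic sketch (the three variables and eight samples are arbitrary choices for illustration):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Basic Latin hypercube: one sample per equal-probability stratum
    in each dimension, with the stratum order shuffled independently."""
    u = rng.random((n_samples, n_dims))
    samples = np.empty_like(u)
    for d in range(n_dims):
        perm = rng.permutation(n_samples)
        samples[:, d] = (perm + u[:, d]) / n_samples
    return samples

rng = np.random.default_rng(0)
# Three design variables, e.g. nose length, tip height, curvature, scaled to [0, 1].
pts = latin_hypercube(8, 3, rng)
# Each of the 8 strata in each dimension contains exactly one point:
print(np.sort((pts[:, 0] * 8).astype(int)))  # [0 1 2 3 4 5 6 7]
```

The maximin variant mentioned in the abstract additionally selects, among many such hypercubes, the one maximising the minimum pairwise distance between points, spreading the CFD evaluations more evenly over the design space.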

Keywords: aerodynamic characteristics, design variable, multi-objective optimization, train nose shape

Procedia PDF Downloads 339
11345 Prediction of Critical Flow Rate in Tubular Heat Exchangers for the Onset of Damaging Flow-Induced Vibrations

Authors: Y. Khulief, S. Bashmal, S. Said, D. Al-Otaibi, K. Mansour

Abstract:

The prediction of the flow rates at which vibration-induced instability takes place in tubular heat exchangers due to cross-flow is of major importance to the performance and service life of such equipment. In this paper, a semi-analytical model for square tube arrays was extended and utilized to study triangular tube patterns. A laboratory test rig with an instrumented test section is used to measure the fluidelastic coefficients used for tuning the mathematical model. The test section can be made of any bundle pattern. In this study, two test sections were constructed, one for the normal triangular and one for the rotated triangular tube array. The developed scheme is utilized to predict the onset of flow-induced instability in the two triangular tube arrays. The results are compared to those obtained for two other bundle configurations, and the results of the four tube patterns are viewed in the light of TEMA predictions. The comparison demonstrated that the TEMA guidelines are more conservative in all configurations considered.
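Classic fluidelastic-instability screening of the TEMA kind uses a Connors-type criterion for the critical gap velocity; the sketch below is that textbook formula with assumed tube-bundle parameters, not the authors' semi-analytical model:

```python
import math

# Connors-type stability criterion:
#   U_c = K * f * d * sqrt(m * delta / (rho * d^2))
# All parameter values below are assumed for illustration.
K = 3.0          # Connors constant (pattern-dependent; TEMA uses conservative values)
f = 40.0         # Hz, tube natural frequency
d = 0.019        # m, tube outer diameter
m = 1.5          # kg/m, tube mass per unit length, including added mass
delta = 0.03     # logarithmic decrement of damping
rho = 1000.0     # kg/m^3, shell-side fluid density

u_crit = K * f * d * math.sqrt(m * delta / (rho * d**2))
print(round(u_crit, 3))  # m/s; operating above this gap velocity risks instability
```

Semi-analytical models such as the one in the paper refine this screening by using measured fluidelastic coefficients instead of a single lumped constant K.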

Keywords: fluid-structure interaction, cross-flow, heat exchangers

Procedia PDF Downloads 263
11344 Prioritizing Forest Conservation Strategies Using a Multi-Attribute Decision Model to Address Concerns with the Survival of the Endangered Dragon Tree (Dracaena ombet Kotschy and Peyr.)

Authors: Tesfay Gidey, Emiru Birhane, Ashenafi Manaye, Hailemariam Kassa, Tesfay Atsbha, Negasi Solomon, Hadgu Hishe, Aklilu Negussie, Petr Madera, Jose G. Borges

Abstract:

The globally endangered Dracaena ombet is one of the ten multipurpose dragon tree species of arid ecosystems. Anthropogenic and natural factors now threaten the sustainability of the species. This study was conducted to prioritize criteria and alternative strategies for the conservation of the species using the analytical hierarchy process (AHP) model, involving all relevant stakeholders in the Desa'a dry Afromontane forest in northern Ethiopia. Information about the potential alternative strategies and the criteria for their evaluation was first collected from experts, personal experience, and literature reviews. Afterward, it was validated through stakeholder focus group discussions. Five candidate strategies with three evaluation criteria were considered for prioritization using AHP techniques. The overall priority ranking by the stakeholders showed that the ecological criterion was the most essential factor in the choice of alternative strategies, followed by the economic and social criteria. The minimum cut-off strategy, which combines exclosures with the collection of only 5% of plant parts from the species, soil and water conservation, and silvicultural interventions, was selected as the best alternative strategy for sustainable D. ombet conservation. The livelihood losses due to the selected strategy should be compensated by the collection of non-timber forest products, poultry farming, home gardens, rearing of small ruminants, beekeeping, and agroforestry. This approach may be extended to other dragon tree species and to exploring conservation strategies for other arid ecosystems.
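In AHP, criterion weights come from the principal eigenvector of a pairwise comparison matrix, with a consistency check on the judgments; a sketch using a hypothetical judgment matrix for the three criteria (the numbers are assumed, not the stakeholders' actual comparisons):

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix (Saaty 1-9 scale) for the
# ecological, economic and social criteria, in that order.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Priorities: principal eigenvector, normalised to sum to 1.
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()

# Consistency ratio: CR = (lambda_max - n) / (n - 1) / RI, with RI(3) = 0.58.
n = 3
cr = (vals[k].real - n) / (n - 1) / 0.58
print(np.round(w, 3), round(cr, 3))  # weights favour the ecological criterion
```

A CR below 0.1 is conventionally taken as acceptably consistent; the alternative strategies are then scored against each criterion the same way and aggregated by these weights.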

Keywords: conservation strategies, analytical hierarchy process model, Desa'a forest, endangered species, Ethiopia, overexploitation

Procedia PDF Downloads 66
11343 Characterization of Aerosol Droplet in Absorption Columns to Avoid Amine Emissions

Authors: Hammad Majeed, Hanna Knuutila, Magne Hillestad, Hallvard F. Svendsen

Abstract:

Formation of aerosols can cause serious complications in industrial exhaust gas CO2 capture processes. SO3 present in the flue gas can cause aerosol formation in an absorption-based capture process. Small mist droplets and fog formed can normally not be removed in conventional demisting equipment, because their submicron size allows the particles or droplets to follow the gas flow. As a consequence, aerosol-based emissions on the order of grams per Nm3 have been identified from PCCC plants. In absorption processes, aerosols are generated by spontaneous condensation or desublimation in supersaturated gas phases. Undesired aerosol development may lead to amine emissions many times larger than would be encountered in a mist-free gas phase in PCCC development. It is thus of crucial importance to understand the formation and build-up of these aerosols in order to mitigate the problem. Rigorous modelling of aerosol dynamics leads to a system of partial differential equations. In order to understand the mechanics of a particle entering an absorber, an implementation of the model was created in Matlab. The model predicts the droplet size, the droplet internal variable profiles, and the mass transfer fluxes as a function of position in the absorber. The Matlab model is based on the orthogonal collocation method, a subclass of the method of weighted residuals for boundary value problems. The model comprises a set of mass transfer equations for the transferring components and the essential diffusion-reaction equations describing the droplet internal profiles for all relevant constituents. Also included is heat transfer across the interface and inside the droplet. This paper presents the basic simulation tool for the characterization of aerosols formed in CO2 absorption columns and gives examples of how various entering droplets grow or shrink through an absorber and how their composition changes with respect to time.
Some preliminary simulation results for aerosol droplet composition and temperature profiles are given. As an example, a droplet with an initial size of 3 microns, initially containing a 5M MEA solution, is exposed to an atmosphere free of MEA; the composition of the gas phase and the temperature change with respect to time throughout the absorber.

Keywords: amine solvents, emissions, global climate change, simulation and modelling, aerosol generation

Procedia PDF Downloads 249
11342 Evaluation of the Families' Psychological Nature and the Relationship between the Academic Success According to the Students' Opinion

Authors: Sebnem Erismen, Ahmet Guneyli, Azize Ummanel

Abstract:

The purpose of this study is to explore the relationship between students' academic success and families' psychological nature. The study is based on quantitative research and uses a descriptive model; a relational descriptive model is used to evaluate the relation between families' psychological nature and students' academic success levels. A total of 523 secondary school students participated in the study. A Personal Information Form, the Family Structure Evaluation Form (FSEF) and school reports were employed as the primary methods of data gathering. ANOVA and the LSD Scheffe test were used to analyse the data. The results indicate that FSEF scores differ according to the students' and teachers' gender, but not according to class level or the seniority of the teachers. Regarding the academic success of the students, the majority have high scores. The academic success level of the students also differs according to the classroom teachers' gender and seniority. In conclusion, there is a relation between families' psychological nature and students' academic success.

Keywords: families’ perceived psychological nature, academic success, families’ effect on academic success, education

Procedia PDF Downloads 274
11341 Cooperative Game Theory and Small Hold Farming: Towards A Conceptual Model

Authors: Abel Kahuni

Abstract:

Cooperative game theory (CGT) postulates that groups of players are the crucial units of decision-making and that membership imposes cooperative behaviour. Accordingly, cooperative games are regarded as competition between coalitions of players rather than between individual players. The basic supposition in CGT, however, is that the coalition is formed by all players. One of the emerging questions in CGT is how to develop cooperatives and fairly allocate the payoff. CGT may provide a framework and insights into the ways smallholder farmers in rural resettlements can develop competitive advantage through marketing cooperatives. This conceptual paper proposes a non-competition model for smallholder farmers of a homogeneous agri-commodity under CGT conditions. The paper also provides brief insights into the theory of cooperative games in order to build an understanding of CGT, cooperative marketing gains, and their application in smallholder farming arrangements. Accordingly, the objective is to provide a basic introduction to this theory in connection with economic competition theories in the context of smallholder farmers. The key value proposition of CGT is the equitable and fair sharing of cooperative gains.
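The standard fair-allocation rule in CGT for "equitable sharing of cooperative gains" is the Shapley value, the average of each player's marginal contributions over all join orders; a sketch for a toy three-farmer coalition game with assumed coalition worths:

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value by averaging marginal contributions over all orderings.
    v maps each frozenset coalition to its worth."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

# Toy three-farmer marketing game (worths are assumed): pooling produce
# pays superadditively, e.g. through better bargaining power.
v = {frozenset(): 0, frozenset('A'): 10, frozenset('B'): 10, frozenset('C'): 10,
     frozenset('AB'): 30, frozenset('AC'): 30, frozenset('BC'): 30,
     frozenset('ABC'): 60}
print(shapley(['A', 'B', 'C'], v))  # symmetric game -> each farmer gets 20.0
```

Because the toy game is symmetric, each farmer receives an equal share of the grand coalition's worth; with heterogeneous farmers the same rule splits the surplus in proportion to each member's average marginal contribution.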

Keywords: game theory, cooperative game theory, cooperatives, competition

Procedia PDF Downloads 63
11340 Macroeconomic Impact of Economic Growth on Unemployment: A Case of South Africa

Authors: Ashika Govender

Abstract:

This study seeks to determine whether Okun's law is valid for the South African economy, using time series data for the period 2004 to 2014. The data were obtained from the South African Reserve Bank and Stats SA. The stationarity of the variables was analysed by applying unit root tests: the Augmented Dickey-Fuller (ADF) test, the Phillips-Perron (PP) test, and the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test. The study used an ordinary least squares (OLS) model to analyse the dynamic version of Okun's law. An error correction model (ECM) was used to analyse the short-run impact of GDP growth on unemployment, as well as the speed of adjustment. The results indicate both a short-run and a long-run relationship between the unemployment rate and the GDP growth rate over 2004q1-2014q4, suggesting that Okun's law is valid for the South African economy. With a 1 percent increase in GDP, unemployment decreases by 0.13 percent, ceteris paribus. The research culminates in important policy recommendations, highlighting the relationship between unemployment and economic growth in the spirit of the National Development Plan.
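The difference form of Okun's law regresses the change in unemployment on GDP growth; a minimal OLS sketch using invented quarterly figures, not the SARB/Stats SA series:

```python
import numpy as np

# Difference form of Okun's law: d(unemployment) = a + b * GDP growth.
# The quarterly figures below are invented for illustration.
growth = np.array([1.2, 2.5, 0.8, 3.1, 1.9, 0.4, 2.2, 2.8])    # % GDP growth
d_unemp = np.array([-0.05, -0.25, 0.02, -0.33, -0.15,
                    0.08, -0.20, -0.28])                        # pp change

X = np.column_stack([np.ones_like(growth), growth])
(a, b), *_ = np.linalg.lstsq(X, d_unemp, rcond=None)
print(round(b, 3))  # negative slope: growth lowers unemployment
```

In the study this static relationship is embedded in an ECM, whose error-correction term additionally measures how quickly unemployment returns to its long-run relationship with output after a shock.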

Keywords: unemployment, economic growth, Okun's law, South Africa

Procedia PDF Downloads 259
11339 Leveraging Quality Metrics in Voting Model Based Thread Retrieval

Authors: Atefeh Heydari, Mohammadali Tavakoli, Zuriati Ismail, Naomie Salim

Abstract:

Seeking and sharing knowledge on online forums has made them popular in recent years. Although online forums are valuable sources of information, the variety of message sources makes retrieving reliable threads with high-quality content an issue. The majority of existing information retrieval systems ignore the quality of retrieved documents, particularly in the field of thread retrieval. In this research, we present an approach that employs various quality features to assess the quality of retrieved threads. Different aspects of content quality, including completeness, comprehensiveness, and politeness, are assessed using these features, which leads to finding not only textually but also conceptually relevant threads for a user query within a forum. To analyse the influence of the features, we used an adapted version of voting model thread search as the retrieval system. We equipped it with each feature alone, and with various combinations of features, over multiple runs. The results show that incorporating the quality features significantly enhances the effectiveness of the retrieval system.
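In voting-model thread retrieval, each retrieved message "votes" for its parent thread, and the votes are aggregated into a thread score; a sketch of the CombSUM-style aggregation with an assumed multiplicative quality prior (the scoring scheme and numbers are illustrative, not the authors' exact model):

```python
from collections import defaultdict

def score_threads(ranked_messages, quality):
    """CombSUM-style voting: each retrieved message votes for its thread
    with its retrieval score, scaled by an optional thread quality prior."""
    votes = defaultdict(float)
    for thread_id, msg_score in ranked_messages:
        votes[thread_id] += msg_score
    return sorted(((votes[t] * quality.get(t, 1.0), t) for t in votes),
                  reverse=True)

# Hypothetical message ranking for a query: (thread id, relevance score).
msgs = [("t1", 0.9), ("t2", 0.8), ("t1", 0.5), ("t3", 0.7), ("t2", 0.3)]
# Assumed quality priors, e.g. completeness/politeness features combined.
quality = {"t1": 0.9, "t2": 1.0, "t3": 0.6}
print(score_threads(msgs, quality))  # t1 ranks first despite its quality discount
```

Running the system with and without the `quality` factor, and with different feature combinations feeding it, mirrors the ablation described in the abstract.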

Keywords: content quality, forum search, thread retrieval, voting techniques

Procedia PDF Downloads 199
11338 Synthesis and Characterization of Lactic Acid Grafted TiO2 Nanocomposites

Authors: Qasar Saleem

Abstract:

The aim of this project was to synthesize and analyze a polylactic acid-grafted TiO2 nanocomposite. When dispersed at the nanoscale, TiO2 can behave as a transparent UV filter and thermomechanical material. The synthesis plan involved three stages: first, dispersion of TiO2 white powder in a water/ethanol solvent system; second, grafting of the TiO2 surface by oligomers of lactic acid, aimed at changing its surface features; and third, polymerization of lactic acid monomer with the grafted TiO2 in the presence of anhydrous stannous chloride as a catalyst. The polylactic acid-grafted TiO2 nanocomposite was synthesized by in situ melt polycondensation of lactic acid onto the titanium oxide (TiO2) nanoparticle surface. The product was characterized by TGA, DSC, FTIR, and UV analysis, and by degradation observation, which together give an idea of the bonds between the grafted polymer and the surface-modified titanium oxide nanoparticles. The characteristic peaks of the Ti-carbonyl bond, the relative intensities of the Fourier transform absorption peaks of the graft composite, and the melt and decomposition stages of the nanocomposite confirm that oligomers of polylactic acid were chemically bonded on the surface of the TiO2 nanoparticles. Through the grafting of polylactic acid, the sample showed good absorption in the UV region and degradation behaviour under normal atmospheric conditions. The regained transparency of the degraded white opaque nanocomposite on heating was another notable characteristic. The polylactic acid-grafted TiO2 nanocomposite will be a potential candidate for future biomedical, UV-shielding and environmentally friendly materials.

Keywords: condensation, nanocomposites, oligomers, polylactic acid

Procedia PDF Downloads 198