Search results for: value clusters
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 624

474 Molecular Characterization of Listeria monocytogenes from Fresh Fish and Fish Products

Authors: Beata Lachtara, Renata Szewczyk, Katarzyna Bielinska, Kinga Wieczorek, Jacek Osek

Abstract:

Listeria monocytogenes is an important human and animal pathogen that causes foodborne outbreaks. The bacteria may be present in different types of food: cheese, raw vegetables, sliced meat products and vacuum-packed sausages, poultry, meat, and fish. The method most commonly used to investigate the genetic diversity of L. monocytogenes is pulsed-field gel electrophoresis (PFGE). The technique is reliable and reproducible and is established as the gold standard for typing L. monocytogenes. The aim of the study was the characterization, by molecular serotyping and PFGE analysis, of L. monocytogenes strains isolated from fresh fish and fish products in Poland. A total of 301 samples, including fresh fish (n = 129) and fish products (n = 172), were collected between January 2014 and March 2016. The bacteria were detected using the ISO 11290-1 standard method. Molecular serotyping was performed with PCR. The isolates were typed by PFGE according to the protocol developed by the European Union Reference Laboratory for L. monocytogenes, with some modifications. Based on the PFGE profiles, two dendrograms were generated for strains digested separately with the restriction enzymes AscI and ApaI. Analysis of the fingerprint profiles was performed using Bionumerics software version 6.6 (Applied Maths, Belgium). A 95% similarity threshold was applied to differentiate the PFGE pulsotypes. The study revealed that 57 of 301 (18.9%) samples were positive for L. monocytogenes. The bacteria were identified in 29 (50.9%) ready-to-eat fish products and in 28 (49.1%) fresh fish. It was found that 40 (70.2%) strains were of serotype 1/2a, 14 (24.6%) of 1/2b, two (3.5%) of 4b and one (1.8%) of 1/2c. Serotypes 1/2a, 1/2b, and 4b were present at the same frequency in both categories of food, whereas serotype 1/2c was detected only in fresh fish. The PFGE analysis with AscI demonstrated 43 different pulsotypes; among them, 33 (76.7%) were represented by only one strain.
The remaining 10 profiles contained more than one isolate: eight pulsotypes comprised two L. monocytogenes isolates, one profile three isolates, and one restriction type five strains. In the case of ApaI typing, the PFGE analysis showed 27 different pulsotypes, including 17 (63.0%) represented by only one strain. Ten (37.0%) clusters contained more than one strain: four profiles covered two strains, three had three isolates, one had five strains, one had eight strains, and one had ten isolates. Isolates assigned to the same PFGE type were usually of the same serotype (1/2a or 1/2b). The majority of the clusters contained strains from both sources (fresh fish and fish products) isolated at different times. Most strains grouped in a given AscI cluster were assigned to the same group in the ApaI analysis. In conclusion, the PFGE typing used in the study showed high genetic diversity among the L. monocytogenes isolates. The strains were grouped into varied clonal clusters, which may suggest different sources of contamination. The results demonstrated that serotype 1/2a was the most common among isolates from fresh fish and fish products in Poland.
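
As an illustration of the band-matching step, the sketch below groups toy binary band patterns into pulsotypes using a Dice similarity coefficient and a single-linkage cutoff. The profiles and the linkage rule are simplifications of the Bionumerics dendrogram analysis described above, not a reproduction of it.

```python
# Illustrative sketch: grouping PFGE fingerprints into pulsotypes at a
# 95% similarity threshold. Band patterns are hypothetical, and single-
# linkage grouping stands in for the full dendrogram analysis.

def dice_similarity(a, b):
    """Dice band-matching coefficient between two binary band patterns."""
    shared = sum(1 for x, y in zip(a, b) if x and y)
    return 2.0 * shared / (sum(a) + sum(b))

def pulsotypes(patterns, threshold=0.95):
    """Group isolates whose similarity reaches the threshold (single linkage)."""
    n = len(patterns)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if dice_similarity(patterns[i], patterns[j]) >= threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Hypothetical band presence/absence profiles for four isolates
profiles = [
    [1, 1, 0, 1, 1, 1, 0, 1, 1, 1],
    [1, 1, 0, 1, 1, 1, 0, 1, 1, 1],   # identical to isolate 0
    [1, 0, 1, 1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1, 0, 1, 1, 0, 1],
]
clusters = pulsotypes(profiles)
```

Isolates 0 and 1 share every band and fall into one pulsotype; the other two isolates remain singletons.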

Keywords: Listeria monocytogenes, molecular characteristic, PFGE, serotyping

Procedia PDF Downloads 289
473 Eco-Cities in Challenging Environments: Pollution as a Polylemma in the UAE

Authors: Shaima A. Al Mansoori

Abstract:

Eco-cities have become part of the broader, universal discourse on and embrace of sustainable communities. Given the ideals and potential benefits of eco-cities for people, the environment and prosperity, hardly an argument can be made against their desirability. Yet this paper posits that urban scholars, technocrats and policy makers need to engage with the pragmatism of implementing the ideals of eco-cities, for example from the political, budgetary, cultural and other dimensions. In the context of such discourse, this paper examines the feasibility in the UAE of one of the cardinal principles and goals of eco-cities: the reduction or elimination of pollution through creative and innovative initiatives. The paper contends that, laudable and desirable as this goal is, it is a polylemma and therefore overly ambitious and practically unattainable in the UAE. The paper uses a mixed-method research strategy in which data are sourced from secondary and general sources through desktop research, from public records in governmental agencies, and from the conceptual academic and professional literature. Information from these sources is used, first, to define and review pollution as a concept and as a multifaceted phenomenon with multidimensional impacts. Second, the paper uses society’s five goal clusters as a framework to identify key causes and impacts of pollution in the UAE. Third, it identifies and analyzes specific public policies, programs and projects that make pollution in the UAE a polylemma. Fourth, it argues that the phenomenal rates of population increase, urbanization, economic growth, consumerism and development in the UAE make pollution an inevitable product and burden that society must live with. This ‘reality’ makes the goal of pollution-free cities pursuable but unattainable.
The paper concludes by identifying and advocating creative and innovative initiatives that the various stakeholders in the country can take to reduce and mitigate pollution in the short and long term.

Keywords: goal clusters, pollution, polylemma, sustainable communities

Procedia PDF Downloads 385
472 Ferromagnetic Potts Models with Multi Site Interaction

Authors: Nir Schreiber, Reuven Cohen, Simi Haber

Abstract:

The Potts model has been widely explored in the literature over the last few decades. While many analytical and numerical results concern the traditional two-site interaction model in various geometries and dimensions, little is yet known about models where more than two spins interact simultaneously. We consider a ferromagnetic four-site interaction Potts model on the square lattice (FFPS), where the four spins reside at the corners of an elementary square. Each spin can take an integer value 1, 2, ..., q. We write the partition function as a sum over clusters consisting of monochromatic faces. When the number of faces becomes large, tracing out spin configurations is equivalent to enumerating large lattice animals. It is known that the asymptotic number of animals with k faces is governed by λᵏ, with λ ≈ 4.0626. Based on this observation, systems with q < 4 and q > 4 exhibit second- and first-order phase transitions, respectively. The nature of the transition in the q = 4 case is borderline. For any q, a critical giant component (GC) is formed. In the first-order case the GC is simple, while it is fractal when the transition is continuous. Using simple equilibrium arguments, we obtain a (zeroth-order) bound on the transition point. We claim that this bound should apply to other lattices as well. Next, taking higher-order site contributions into account, the critical bound becomes tighter. Moreover, for q > 4, if corrections due to contributions from small clusters are negligible in the thermodynamic limit, the improved bound should be exact. The improved bound is used to relate the critical point to the finite correlation length. Our analytical predictions are confirmed by an extensive numerical study of the FFPS using the Wang-Landau method. In particular, the marginal q = 4 case is supported by a highly ambiguous pseudo-critical finite-size behavior.
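
The face-counting picture can be made concrete by brute force: the sketch below enumerates all spin configurations of a four-site (plaquette) interaction Potts model on a tiny periodic lattice and tallies the exact density of states. The lattice size and q used here are illustrative toys, far below what a Wang-Landau study would use.

```python
from itertools import product

# Exact density of states for a toy ferromagnetic four-site Potts model:
# spins on an L x L periodic lattice, energy = -(number of monochromatic
# elementary squares). L and q are illustrative, not the study's values.
L, q = 3, 2

def energy(spins):
    """Return minus the count of monochromatic plaquettes (4 equal corners)."""
    mono = 0
    for i in range(L):
        for j in range(L):
            corners = {spins[i * L + j],
                       spins[i * L + (j + 1) % L],
                       spins[((i + 1) % L) * L + j],
                       spins[((i + 1) % L) * L + (j + 1) % L]}
            if len(corners) == 1:
                mono += 1
    return -mono  # ferromagnetic: aligned plaquettes lower the energy

density_of_states = {}
for spins in product(range(q), repeat=L * L):
    e = energy(spins)
    density_of_states[e] = density_of_states.get(e, 0) + 1
```

The q fully aligned configurations are the only ones in which every plaquette is monochromatic, so the ground-state degeneracy equals q; Wang-Landau sampling estimates this same density of states stochastically when enumeration becomes impossible.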

Keywords: entropic sampling, lattice animals, phase transitions, Potts model

Procedia PDF Downloads 160
471 Cas9-Assisted Direct Cloning and Refactoring of a Silent Biosynthetic Gene Cluster

Authors: Peng Hou

Abstract:

Natural products produced by marine bacteria serve as an immense reservoir of anti-infective drugs and therapeutic agents. Heterologous expression of gene clusters of interest has been widely adopted as an effective strategy for natural product discovery. Briefly, the heterologous expression workflow is: biosynthetic gene cluster identification, pathway construction and expression, and product detection. However, gene cluster capture using the traditional transformation-associated recombination (TAR) protocol is inefficient (a 0.5% positive-colony rate). To make things worse, most putative new natural products are only predicted by bioinformatics tools such as antiSMASH, and their corresponding biosynthetic pathways are either not expressed or expressed at very low levels under laboratory conditions. These setbacks inspired us to seek new technologies to efficiently edit and refactor biosynthetic gene clusters. Two cutting-edge techniques attracted our attention: CRISPR-Cas9 and Gibson Assembly. So far, we have pretreated Brevibacillus laterosporus genomic DNA with CRISPR-Cas9 nucleases that specifically generate breaks near the gene cluster of interest. This raised the efficiency of gene cluster capture to 9%. Moreover, using Gibson Assembly to add or delete particular operons and tailoring enzymes regardless of end compatibility, the silent construct (~80 kb) was successfully refactored into an active one, yielding a series of expected analogs. With these molecular tools in hand, we are confident that a mature, high-throughput pipeline for DNA assembly, transformation, and product isolation and identification is no longer a daydream for marine natural product discovery.

Keywords: biosynthesis, CRISPR-Cas9, DNA assembly, refactor, TAR cloning

Procedia PDF Downloads 282
470 Variation among East Wollega Coffee (Coffea arabica L.) Landraces for Quality Attributes

Authors: Getachew Weldemichael, Sentayehu Alamerew, Leta Tulu, Gezahegn Berecha

Abstract:

Coffee quality improvement is becoming the focus of coffee research as world coffee consumption shifts toward high-quality coffee. However, there is limited information on the genetic variation of C. arabica for quality improvement in the potential specialty coffee growing areas of Ethiopia. Therefore, this experiment was conducted with the objectives of determining the magnitude of variation among 105 coffee accessions collected from east Wollega coffee growing areas and assessing correlations between the different coffee quality attributes. It was laid out in a randomized complete block design (RCBD) with three replications. Data on green bean physical characters (shape and make, bean color and odor) and organoleptic cup quality traits (aromatic intensity, aromatic quality, acidity, astringency, bitterness, body, flavor, and overall standard of the liquor) were recorded. Analysis of variance, clustering, genetic divergence, principal component and correlation analyses were performed using SAS software. The results revealed highly significant differences (P < 0.01) among the accessions for all quality attributes except odor and bitterness. Among the tested accessions, EW104/09, EW101/09, EW58/09, EW77/09, EW35/09, EW71/09, EW68/09, EW96/09, EW83/09 and EW72/09 had the highest total coffee quality values (the sum of bean physical and cup quality attributes). These genotypes could serve as a source of genes for improving green bean physical characters and cup quality in Arabica coffee. Furthermore, cluster analysis grouped the accessions into five clusters with significant inter-cluster distances, implying moderate diversity among the accessions; crossing accessions from divergent clusters would be expected to produce heterosis and useful recombinants in segregating generations.
The principal component analysis revealed that the first three principal components, with eigenvalues greater than unity, accounted for 83.1% of the total variability among the nine quality attributes considered, indicating that these attributes contribute jointly to the grouping of the accessions into different clusters. Organoleptic cup quality attributes showed positive and significant correlations at both the genotypic and phenotypic levels, demonstrating the possibility of simultaneous improvement of the traits. Path coefficient analysis revealed that acidity, flavor, and body had high positive direct effects on overall cup quality, implying that these traits can be used as indirect selection criteria for improving overall coffee quality. It was therefore concluded that there is considerable variation among the accessions, which needs to be properly conserved for future improvement of coffee quality. The variability observed for quality attributes should, however, be further verified using biochemical and molecular analyses.
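
The eigenvalue-greater-than-unity (Kaiser) retention rule mentioned above can be sketched on synthetic stand-in scores for 105 hypothetical accessions and nine attributes; the actual study analyzed measured cup-quality data in SAS, so everything below is illustrative.

```python
import numpy as np

# Sketch of the PCA step on synthetic stand-in data: nine quality
# attributes scored for 105 hypothetical accessions, driven by three
# underlying factors (the real study used measured data in SAS).
rng = np.random.default_rng(0)
n_accessions, n_attributes = 105, 9
latent = rng.normal(size=(n_accessions, 3))           # three hidden factors
loadings = rng.normal(size=(3, n_attributes))
scores = latent @ loadings + 0.3 * rng.normal(size=(n_accessions, n_attributes))

# PCA via eigendecomposition of the correlation matrix
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
corr = (z.T @ z) / n_accessions
eigvals = np.linalg.eigvalsh(corr)[::-1]              # descending order

# Kaiser criterion: keep components with eigenvalues greater than unity
retained = eigvals[eigvals > 1.0]
explained = retained.sum() / eigvals.sum()
```

With three underlying factors, the retained components capture most of the variability, mirroring the 83.1% figure reported in the abstract in spirit (not in value).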

Keywords: accessions, Coffea arabica, cluster analysis, correlation, principal component

Procedia PDF Downloads 166
469 Interpersonal Variation of Salivary Microbiota Using Denaturing Gradient Gel Electrophoresis

Authors: Manjula Weerasekera, Chris Sissons, Lisa Wong, Sally Anderson, Ann Holmes, Richard Cannon

Abstract:

The aim of this study was to characterize the bacterial populations and yeasts in saliva by polymerase chain reaction followed by denaturing gradient gel electrophoresis (PCR-DGGE) and to measure yeast levels by culture. PCR-DGGE was performed to identify oral bacteria and yeasts in 24 saliva samples. DNA was extracted and used to generate amplicons of the V2-V3 hypervariable region of the bacterial 16S rDNA gene by PCR. Universal primers targeting the fungal large-subunit rDNA gene (25S-28S) were then used to amplify yeasts present in human saliva. The resulting PCR products were subjected to denaturing gradient gel electrophoresis using a universal mutation detection system. DGGE bands were extracted and sequenced by the Sanger method. A potential relationship was evaluated between groups of bacteria identified by cluster analysis of the DGGE fingerprints and the yeast levels and diversity. Significant interpersonal variation of the salivary microbiome was observed. Cluster and principal component analysis of the bacterial DGGE patterns yielded three major clusters plus outliers. Seventeen of the 24 (71%) saliva samples were yeast-positive, with counts up to 10³ CFU/mL. Predominantly C. albicans, and six other species of yeast, were detected. The presence, amount and species of yeast showed no clear relationship to the bacterial clusters. The microbial community in saliva thus varies significantly between individuals, and the lack of association between yeasts and the bacterial fingerprints suggests significant person-specific ecological independence in these highly complex oral biofilm systems under normal oral conditions.

Keywords: bacteria, denaturing gradient gel electrophoresis, oral biofilm, yeasts

Procedia PDF Downloads 222
468 A Multi-Objective Decision Making Model for Biodiversity Conservation and Planning: Exploring the Concept of Interdependency

Authors: M. Mohan, J. P. Roise, G. P. Catts

Abstract:

Despite living in an era where conservation zones are de facto the central element of any sustainable wildlife management strategy, we still find ourselves grappling with Pareto-optimal trade-offs in allocating resources and area to them. In this paper, a multi-objective decision making (MODM) model is presented to answer the question of whether mutual relationships can be established between these conflicting objectives. For our study, we considered a Red-cockaded woodpecker (Picoides borealis) habitat conservation scenario in the coastal plain of North Carolina, USA. The Red-cockaded woodpecker (RCW) is a non-migratory, territorial bird that excavates cavities in living pine trees for roosting and nesting. RCW groups nest in an aggregation of cavity trees called a ‘cluster’, and in our model the number of clusters to be established serves as the measure of the size of conservation zone required. The case study is formulated as a linear programming problem whose objective function optimizes the number of RCW clusters, the carbon retention rate, biofuel, public safety, and the net present value (NPV) of the forest. We studied the variation of the individual objectives with respect to the amount of area available and, after establishing the interrelations between the objectives, plotted a two-dimensional dynamic graph. We further explore the concept of interdependency by integrating the MODM model with GIS, deriving a raster file representing the carbon distribution from the existing forest dataset. Model results demonstrate the applicability of interdependency from both linear and spatial perspectives, and suggest that this approach holds immense potential for enhancing environmental investment decision making in future.
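
A toy version of the weighted-objective trade-off can be sketched with just two of the five objectives and hypothetical coefficients; the study itself solves a full linear program, so this brute-force scan is only illustrative of how the optimum shifts with the weights.

```python
# Toy multi-objective trade-off sketch: allocate a fixed land area between
# RCW cluster habitat and timber production. All coefficients are
# hypothetical; the study solves a full LP over five objectives.
TOTAL_AREA = 1000.0          # hectares available (hypothetical)
CLUSTERS_PER_HA = 0.01       # RCW clusters supported per habitat hectare
NPV_PER_HA = 2000.0          # net present value per timber hectare

def best_allocation(w_cluster, w_npv, steps=1000):
    """Scan habitat allocations, maximizing a weighted, normalized objective."""
    best_x, best_val = 0.0, float("-inf")
    for k in range(steps + 1):
        x = TOTAL_AREA * k / steps           # hectares given to habitat
        clusters = CLUSTERS_PER_HA * x
        npv = NPV_PER_HA * (TOTAL_AREA - x)
        # normalize each objective by its maximum attainable value
        val = (w_cluster * clusters / (CLUSTERS_PER_HA * TOTAL_AREA)
               + w_npv * npv / (NPV_PER_HA * TOTAL_AREA))
        if val > best_val:
            best_x, best_val = x, val
    return best_x

# With linear objectives the optimum jumps between extremes as weights shift
habitat_first = best_allocation(w_cluster=0.8, w_npv=0.2)
timber_first = best_allocation(w_cluster=0.2, w_npv=0.8)
```

The jump between corner solutions is exactly the Pareto tension the abstract describes; interdependency analysis asks how intermediate allocations trade the objectives off against each other.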

Keywords: conservation, interdependency, multi-objective decision making, red-cockaded woodpecker

Procedia PDF Downloads 337
467 Effect of Distance to Health Facilities on Maternal Service Use and Neonatal Mortality in Ethiopia

Authors: Getiye Dejenu Kibret, Daniel Demant, Andrew Hayen

Abstract:

Introduction: In Ethiopia, more than half of newborn babies do not have access to Emergency Obstetric and Neonatal Care (EmONC) services. Understanding the effect of distance to health facilities on service use and neonatal survival is crucial for advising policymakers and improving resource distribution. We aimed to investigate the effect of distance to health services on maternal service use and neonatal mortality. Methods: We implemented a data linkage method based on geographic coordinates and calculated straight-line (Euclidean) distances from the Ethiopian 2016 Demographic and Health Survey (DHS) clusters to the closest health facility. Distances were computed in ESRI ArcGIS version 10.3 using the geographic coordinates of the DHS clusters and health facilities. Generalised structural equation modelling (GSEM) was used to estimate the effect of distance on neonatal mortality. Results: Poor geographic accessibility to health facilities reduces maternal service use and increases the risk of newborn mortality. For every ten kilometres (km) of additional distance to a health facility, the odds of neonatal mortality increased by a factor of 1.33 (95% CI: 1.06 to 1.67). Distance also negatively affected the use of antenatal care, facility delivery, and postnatal counselling services. Conclusions: Lack of geographical access to health facilities decreases the likelihood of newborns surviving their first month of life and reduces health service use during pregnancy and immediately after birth. The study also showed that antenatal care use was positively associated with facility delivery, and that both positively influenced postnatal care use, demonstrating the interconnectedness of the continuum of maternal and neonatal care. Policymakers can leverage these findings to address accessibility barriers to health services.
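
The distance linkage and the reported effect size can be sketched as follows. The coordinates and the nearest-facility search are made up, and the estimate is read as an odds ratio of 1.33 per 10 km (an interpretation of the abstract's figure); the study itself used projected DHS and facility coordinates in ArcGIS.

```python
import math

# Sketch of the distance-linkage step: straight-line distance from a survey
# cluster to its nearest facility, and the implied odds multiplier assuming
# an odds ratio of 1.33 per 10 km. Coordinates are hypothetical projected
# km values, not real DHS data.
def euclidean_km(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_facility_km(cluster_xy, facilities_xy):
    return min(euclidean_km(cluster_xy, f) for f in facilities_xy)

def odds_multiplier(distance_km, or_per_10km=1.33):
    """Multiplicative change in the odds of neonatal death at this distance."""
    return or_per_10km ** (distance_km / 10.0)

facilities = [(0.0, 0.0), (30.0, 40.0)]        # projected km coordinates
cluster = (27.0, 36.0)
d = nearest_facility_km(cluster, facilities)   # 5 km to the (30, 40) facility
m = odds_multiplier(d)
```

At 5 km the implied odds multiplier is about 1.15, i.e. half a "10 km step" on the odds-ratio scale.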

Keywords: accessibility, distance, maternal health service, neonatal mortality

Procedia PDF Downloads 112
466 Improving Fake News Detection Using K-means and Support Vector Machine Approaches

Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy

Abstract:

Fake news and false information are major challenges for all types of media, especially social media. As large social networks such as Facebook and Twitter have admitted, they carry a great deal of false information, fake likes and views, and duplicated accounts. Much of the information appearing on social media is doubtful and in some cases misleading, and it needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to detect false information with less computation time and complexity, the dimensionality needs to be reduced. One of the best techniques for reducing data size is feature selection, which aims to choose a feature subset from the original set that improves classification performance. In this paper, a feature selection method is proposed that integrates K-means clustering and Support Vector Machine (SVM) approaches and works in four steps. First, the similarities between all features are calculated. Then, features are divided into several clusters. Next, the final feature set is selected from all clusters. Finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with state-of-the-art methods on several benchmark datasets, and the outcome showed better classification of false information for our approach. Detection performance improved in two respects: the detection runtime decreased, and the classification accuracy increased thanks to the elimination of redundant features and the reduction of dataset dimensionality.
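
The four steps can be sketched on synthetic data as follows. A correlation-threshold grouping stands in for the K-means step and a nearest-centroid classifier stands in for the SVM, so this is a minimal sketch of the pipeline shape, not the proposed method itself.

```python
import numpy as np

# Compact sketch of the four-step scheme on synthetic data: (1) feature
# similarities, (2) group the features, (3) pick one representative per
# group, (4) classify on the reduced set. Correlation-threshold grouping
# and a nearest-centroid classifier are stand-ins for K-means and the SVM.
rng = np.random.default_rng(1)
n = 200
y = rng.integers(0, 2, n)                       # 0 = real, 1 = fake
signal = y + 0.3 * rng.normal(size=n)
X = np.column_stack([signal,                              # informative
                     signal + 0.05 * rng.normal(size=n),  # near-duplicate
                     rng.normal(size=n),                  # noise
                     rng.normal(size=n)])                 # noise

# Steps 1-2: group features whose absolute correlation exceeds 0.9
corr = np.abs(np.corrcoef(X.T))
groups, assigned = [], set()
for i in range(X.shape[1]):
    if i in assigned:
        continue
    group = [j for j in range(X.shape[1])
             if j not in assigned and corr[i, j] > 0.9]
    assigned.update(group)
    groups.append(group)

# Step 3: keep the highest-variance member of each group
selected = [max(g, key=lambda j: X[:, j].var()) for g in groups]

# Step 4: nearest-centroid classification on the reduced feature set
Xr = X[:, selected]
centroids = np.array([Xr[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Xr[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
accuracy = (pred == y).mean()
```

The near-duplicate feature is collapsed into its group's representative, so the classifier runs on three features instead of four, which is the runtime saving the abstract describes.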

Keywords: clustering, fake news detection, feature selection, machine learning, social media, support vector machine

Procedia PDF Downloads 176
465 Maximization of Lifetime for Wireless Sensor Networks Based on Energy Efficient Clustering Algorithm

Authors: Frodouard Minani

Abstract:

Over the last decade, wireless sensor networks (WSNs) have been used in many areas such as health care, agriculture, defense, the military, and disaster-hit areas. A wireless sensor network consists of a base station (BS) and a number of wireless sensors that monitor temperature, pressure, and motion under different environmental conditions. The key parameter in designing a protocol for WSNs is energy efficiency: energy is the scarcest resource of sensor nodes, and it determines their lifetime. Maximizing node lifetime is therefore a central issue in the design of applications and protocols for WSNs, and clustering the sensor nodes is an effective topology control approach toward this goal. In this paper, the researcher presents an energy-efficient protocol to prolong network lifetime based on an energy-efficient clustering algorithm. Low Energy Adaptive Clustering Hierarchy (LEACH) is a cluster-based routing protocol used to lower energy consumption and thereby improve the lifetime of WSNs. The proposed system maximizes the lifetime of the network by choosing the farthest cluster head (CH) instead of the closest CH and by forming clusters according to parameter metrics such as node density, residual energy, and inter-cluster distance. Comparisons between the proposed protocol and competing protocols in different scenarios have been carried out, and the simulation results show that the proposed protocol outperforms the competitors in various scenarios.
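
The energy cost that cluster-head selection trades off can be illustrated with the first-order radio model commonly used in LEACH-style analyses; the constants below are the usual illustrative values from that literature, not parameters taken from this paper.

```python
# First-order radio model commonly used in LEACH-style analyses: energy to
# transmit or receive k bits over distance d. Constants are illustrative
# textbook values, not measurements from this study.
E_ELEC = 50e-9        # J/bit, electronics energy
EPS_AMP = 100e-12     # J/bit/m^2, amplifier energy (free-space term)

def tx_energy(k_bits, d_m):
    """Energy spent by a node transmitting k bits over d metres."""
    return E_ELEC * k_bits + EPS_AMP * k_bits * d_m ** 2

def rx_energy(k_bits):
    """Energy spent by a node receiving k bits."""
    return E_ELEC * k_bits

# A 4000-bit packet sent to a nearby vs a distant cluster head
near = tx_energy(4000, 20)     # CH 20 m away
far = tx_energy(4000, 100)     # CH 100 m away
```

Because the amplifier term grows with d², the choice of cluster head distance dominates a node's energy budget, which is why CH selection and inter-cluster distance are the tuning knobs in clustering protocols like the one proposed.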

Keywords: base station, clustering algorithm, energy efficient, sensors, wireless sensor networks

Procedia PDF Downloads 144
464 Feature Evaluation Based on Random Subspace and Multiple-K Ensemble

Authors: Jaehong Yu, Seoung Bum Kim

Abstract:

Clustering analysis can facilitate the extraction of intrinsic patterns in a dataset and reveal its natural groupings without requiring class information. For effective clustering analysis of high-dimensional datasets, unsupervised dimensionality reduction is an important task. It can generally be achieved by feature extraction or feature selection; in many situations, feature selection methods are more appropriate because of their clear interpretation with respect to the original features. Unsupervised feature selection can be categorized into feature subset selection and feature ranking methods, and we focus on unsupervised feature ranking methods, which evaluate features based on importance scores. Recently, several unsupervised feature ranking methods have been developed based on ensemble approaches to achieve higher accuracy and stability. However, most ensemble-based feature ranking methods require the true number of clusters, evaluate feature importance from a single ensemble clustering solution, and therefore produce undesirable results when the clustering solutions are inaccurate. To address these limitations, we propose an ensemble-based feature ranking method with random subspace and multiple-k ensemble (FRRM). The proposed FRRM algorithm evaluates the importance of each feature within a random subspace ensemble and combines all evaluation results into ensemble importance scores. Moreover, through the multiple-k ensemble idea, FRRM does not require the true number of clusters to be determined in advance. Experiments on various benchmark datasets were conducted to examine the properties of the proposed FRRM algorithm and to compare its performance with that of existing feature ranking methods. The experimental results demonstrate that FRRM outperforms the competitors.
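
A minimal sketch of the random-subspace, multiple-k idea is given below; the toy k-means, the variance-based scoring rule, and the one-feature subspaces are deliberate simplifications and should not be read as the published FRRM algorithm.

```python
import numpy as np

# Minimal sketch of the FRRM idea: repeatedly run k-means on a randomly
# sampled feature subspace with a randomly chosen k, score the sampled
# features by the fraction of their variance explained by the clustering,
# and average over runs. All choices here are toy simplifications.
rng = np.random.default_rng(2)

def kmeans(X, k, iters=25):
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels

def frrm_scores(X, n_runs=40, subspace=1, ks=(2, 3, 4)):
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    scores, counts = np.zeros(X.shape[1]), np.zeros(X.shape[1])
    for _ in range(n_runs):
        feats = rng.choice(X.shape[1], subspace, replace=False)
        labels = kmeans(X[:, feats], rng.choice(ks))   # random k each run
        for f in feats:
            col = X[:, f]
            within = sum(col[labels == c].var() * (labels == c).mean()
                         for c in np.unique(labels))
            scores[f] += 1.0 - within / col.var()      # explained fraction
            counts[f] += 1
    return scores / np.maximum(counts, 1)

# Synthetic data: feature 0 separates two groups, features 1-3 are noise
y = np.repeat([0, 1], 100)
X = np.column_stack([3.0 * y + 0.3 * rng.normal(size=200)] +
                    [rng.normal(size=200) for _ in range(3)])
scores = frrm_scores(X)
```

Averaging over random subspaces and several values of k is what frees the ranking from committing to one (possibly wrong) cluster count, which is the limitation FRRM targets.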

Keywords: clustering analysis, multiple-k ensemble, random subspace-based feature evaluation, unsupervised feature ranking

Procedia PDF Downloads 339
463 Performance Evaluation and Plugging Characteristics of Controllable Self-Aggregating Colloidal Particle Profile Control Agent

Authors: Zhiguo Yang, Xiangan Yue, Minglu Shao, Yue Yang, Rongjie Yan

Abstract:

Deep profile control is difficult to achieve in low-permeability heterogeneous reservoirs because of their small pore throats and easy water channeling, and traditional polymer microspheres suffer a contradiction between injectability and plugging ability. To resolve this contradiction, controllable self-aggregating colloidal particles (CSA) carrying amide groups on the microsphere surface were prepared by emulsion polymerization of styrene and acrylamide. A dispersed solution of CSA colloidal particles, whose size is much smaller than the diameter of the pore throats, was injected into the reservoir. When the microspheres migrate to the deep part of the reservoir, the CSA colloidal particles automatically self-aggregate into large particle clusters under the action of the shielding agent and the control agent, thereby plugging the water channels. In this paper, the morphology, temperature resistance and self-aggregation properties of the CSA microspheres were studied by transmission electron microscopy (TEM) and bottle tests. The results show that the CSA microspheres exhibit a heterogeneous core-shell structure, good dispersion, and outstanding thermal stability; they remain regular, uniform spheres after aging at 100℃ for 35 days. With increasing cation concentration, the self-aggregation time of CSA gradually shortens, and bivalent cations have a greater influence than monovalent cations. Core flooding experiments showed that the CSA polymer microspheres have good injection properties, and that CSA particle clusters can effectively plug the water channels and migrate to the deep part of the reservoir for profile control.

Keywords: heterogeneous reservoir, deep profile control, emulsion polymerization, colloidal particles, plugging characteristic

Procedia PDF Downloads 241
462 Some Results on Cluster Synchronization

Authors: Shahed Vahedi, Mohd Salmi Md Noorani

Abstract:

This paper investigates cluster synchronization phenomena between community networks, focusing on the situation where a variety of dynamics occur within the clusters. In particular, we show that different synchronization states can occur simultaneously between the networks. The controller is designed with an adaptive control gain, and theoretical results are derived via Lyapunov stability. Simulations on well-known dynamical systems are provided to illustrate our results.
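
A minimal scalar illustration of an adaptive synchronization gain is sketched below; the leader-follower pair and the adaptation law k' = γe² are a toy stand-in for the community-network controller analyzed in the paper, chosen so that Lyapunov-style convergence is easy to see.

```python
import math

# Toy adaptive synchronization gain: a follower tracks a leader under the
# coupling u = -k*e, with the gain adapted as k' = gamma*e^2 (so k only
# grows while an error persists). Euler integration; a scalar stand-in
# for the community networks studied in the paper.
def simulate(t_end=10.0, dt=1e-3, gain_rate=5.0):
    x1, x2, k = 0.0, 1.0, 0.0
    t = 0.0
    while t < t_end:
        e = x2 - x1                          # synchronization error
        dx1 = -x1 + math.sin(t)              # leader dynamics
        dx2 = -x2 + math.sin(t) - k * e      # follower with adaptive coupling
        k += gain_rate * e * e * dt          # adaptive gain law
        x1 += dx1 * dt
        x2 += dx2 * dt
        t += dt
    return e, k

error, gain = simulate()
```

The error obeys e' = -(1 + k)e, so it decays at least exponentially while the gain settles at a finite value once synchronization is reached.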

Keywords: cluster synchronization, adaptive control, community network, simulation

Procedia PDF Downloads 476
461 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model

Authors: Donatella Giuliani

Abstract:

In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. First, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique inspired by the flashing behaviour of fireflies; here it is used to determine the number of clusters and the corresponding cluster means in a histogram-based segmentation approach. These means are then used to initialize the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is a weighted sum of Gaussian component densities whose parameters are estimated with the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as prior probabilities of each component. Applying Bayes’ rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach proves fairly solid and reliable even when applied to complex grayscale images. Validation was performed using several standard measures: the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK) and the Davies-Bouldin (DB) index. The results strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of the methodology is that using the maxima of the responsibilities for pixel assignment brings a consistent reduction in computational cost.
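
The EM stage can be sketched in one dimension as follows. The intensity data are synthetic, and the initial means are fixed by hand as a stand-in for the Firefly Algorithm's histogram search; pixels are then assigned by the maximum posterior responsibility, as in the method above.

```python
import numpy as np

# Sketch of the second stage: a 1-D Gaussian mixture fitted by EM to
# grayscale intensities. Synthetic data; the initial means stand in for
# peaks that the Firefly Algorithm would locate in the histogram.
rng = np.random.default_rng(3)
pixels = np.concatenate([rng.normal(60, 10, 4000),    # dark region
                         rng.normal(180, 15, 6000)])  # bright region

means = np.array([50.0, 200.0])        # stand-in for firefly-found peaks
variances = np.array([100.0, 100.0])
weights = np.array([0.5, 0.5])

for _ in range(50):                    # EM iterations
    # E-step: posterior responsibility of each component for each pixel
    dens = (weights / np.sqrt(2 * np.pi * variances)
            * np.exp(-0.5 * (pixels[:, None] - means) ** 2 / variances))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and variances
    nk = resp.sum(axis=0)
    weights = nk / len(pixels)
    means = (resp * pixels[:, None]).sum(axis=0) / nk
    variances = (resp * (pixels[:, None] - means) ** 2).sum(axis=0) / nk

segment = resp.argmax(axis=1)          # hard assignment by posterior maxima
```

The final assignment step is exactly the "maxima of responsibilities" rule the abstract credits with reducing computational cost: no further distance computations are needed once the posteriors exist.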

Keywords: clustering images, firefly algorithm, Gaussian mixture model, metaheuristic algorithm, image segmentation

Procedia PDF Downloads 217
460 Modeling Aggregation of Insoluble Phase in Reactors

Authors: A. Brener, B. Ismailov, G. Berdalieva

Abstract:

In this paper, we present a modification of the kinetic Smoluchowski equation for binary aggregation, applied to systems with chemical reactions of first and second order whose main product is insoluble. The goal of this work is to create the theoretical foundation and engineering procedures for designing chemical apparatus under the joint course of chemical reactions and the aggregation of the insoluble dispersed phases formed in the working zones of the reactor.
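
A plausible written-out form of such a modification, hedged because the abstract does not give the exact equations, is the discrete Smoluchowski coagulation equation with a reaction source term:

```latex
% Discrete Smoluchowski equation for binary aggregation with a source term
% S_k feeding insoluble material from the chemical reaction; the paper's
% exact modification is not given in the abstract.
\frac{\mathrm{d}n_k}{\mathrm{d}t}
  = \frac{1}{2}\sum_{i+j=k} K_{ij}\, n_i n_j
  \;-\; n_k \sum_{j \ge 1} K_{kj}\, n_j
  \;+\; S_k(t), \qquad k \ge 1,
```

where n_k is the number density of k-mers, K_{ij} is the aggregation kernel, and S_k(t) is the production rate of insoluble material: for a first-order reaction A → P one would take S_1 ∝ c_A, and for a second-order reaction A + B → P, S_1 ∝ c_A c_B, with S_k = 0 for k > 1 in the simplest monomer-feeding case.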

Keywords: binary aggregation, clusters, chemical reactions, insoluble phases

Procedia PDF Downloads 307
459 Influence of Microstructure on Deformation Mechanisms and Mechanical Properties of Additively Manufactured Steel

Authors: Etienne Bonnaud, David Lindell

Abstract:

Correlations between microstructure, deformation mechanisms, and mechanical properties in additively manufactured 316L steel components have been investigated. Mechanical properties in the vertical (building) direction and in the horizontal (in-plane) directions are markedly different: vertically built specimens show a lower yield stress but higher elongation than their horizontally built counterparts. Electron backscatter diffraction (EBSD) observations for both build orientations reveal a strong [110] fiber texture in the build direction but different grain morphologies. These microstructures were used as input to crystal plasticity simulations to understand their influence on the deformation mechanisms and mechanical properties. Mean-field simulations using a visco-plastic self-consistent (VPSC) model were carried out first but did not give results consistent with the tensile test experiments. A more detailed full-field model based on the visco-plastic fast Fourier transform (VPFFT) method therefore had to be used, with a more accurate microstructure description that also accounted for thin vertical regions of smaller grains. These small-grain clusters turned out to be responsible for the discrepancies in yield stress and hardening. Texture and morphology thus have a strong effect on the mechanical properties: the different mechanical behaviours of vertically and horizontally printed specimens could be explained by full-field crystal plasticity simulations, and the thin clusters of smaller grains were shown to play a central role in the deformation mechanisms.

Keywords: additive manufacturing, crystal plasticity, full-field simulations, mean-field simulations, texture

Procedia PDF Downloads 70
458 Design and Analysis of Deep Excavations

Authors: Barham J. Nareeman, Ilham I. Mohammed

Abstract:

Excavations in urban developed areas are generally supported by deep excavation walls such as diaphragm walls, bored piles, soldier piles, and sheet piles. In some cases, these walls may be braced by internal braces or tie-back anchors. Tie-back anchors are by far the predominant method of wall support; the large working space inside the excavation provided by a tie-back anchor system is a significant construction advantage. This paper analyzes a deep excavation bracing system consisting of a contiguous pile wall braced by pre-stressed tie-back anchors, part of a large residential building project located in Gaziantep province, Turkey. The contiguous pile wall will be constructed with a length of 270 m and consists of 285 piles, each having a diameter of 80 cm and a center-to-center spacing of 95 cm. The deformation analysis was carried out by finite element analysis using PLAXIS. In the analysis, the beam element method, together with an elastic-perfectly-plastic soil model and the Hardening Soil model, was used to design the contiguous pile wall, the tie-back anchor system, and the soil. The two soil clusters, limestone and a fill soil, were modelled with both the Hardening Soil and Mohr-Coulomb models; according to the basic design, both soil clusters are modelled under drained conditions. The simulation results show that the maximum horizontal movement of the walls and the maximum settlement of the ground are consistent with 300 individual case histories, ranging between 1.2 mm and 2.3 mm for the walls and between 6.5 mm and 15 mm for the settlements. It was concluded that a tied-back contiguous pile wall can be satisfactorily modelled using the Hardening Soil model.
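PLAXIS itself is proprietary, but the elastic-perfectly-plastic Mohr-Coulomb criterion used for one of the soil clusters is simple enough to check by hand. A minimal sketch with illustrative parameter values (not the project's actual soil data):

```python
import math

def mohr_coulomb_fail(sigma1, sigma3, phi_deg, c):
    """True if a principal stress state violates the Mohr-Coulomb criterion.
    Compression positive, sigma1 >= sigma3, friction angle phi in degrees,
    cohesion c and stresses in consistent units (e.g. kPa)."""
    n_phi = math.tan(math.radians(45 + phi_deg / 2)) ** 2
    sigma1_max = sigma3 * n_phi + 2 * c * math.sqrt(n_phi)
    return sigma1 > sigma1_max

# hypothetical limestone-like parameters: phi = 35 deg, c = 50 kPa
print(mohr_coulomb_fail(400.0, 100.0, 35.0, 50.0))  # within the envelope
print(mohr_coulomb_fail(800.0, 100.0, 35.0, 50.0))  # beyond the envelope
```

The Hardening Soil model adds stress-dependent stiffness and shear/volumetric hardening on top of this failure envelope, which is why it reproduces wall deflections better than the elastic-perfectly-plastic idealization alone.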

Keywords: deep excavation, finite element, pre-stressed tie back anchors, contiguous pile wall, PLAXIS, horizontal deflection, ground settlement

Procedia PDF Downloads 255
457 Disease Trajectories in Relation to Poor Sleep Health in the UK Biobank

Authors: Jiajia Peng, Jianqing Qiu, Jianjun Ren, Yu Zhao

Abstract:

Background: Insufficient sleep has come into focus as a public health epidemic, but a comprehensive analysis of the disease trajectories associated with unhealthy sleep habits is still lacking. Objective: This study sought to clarify the disease trajectories related to the overall poor sleep pattern and to each unhealthy sleep behavior separately. Methods: 410,682 participants with available information on sleep behaviors were collected from the UK Biobank at the baseline visit (2006-2010). These participants were classified as having high or low risk on each sleep behavior and were followed from 2006 to 2020 to identify elevated risks of diseases. We used Cox regression to estimate the associations of high-risk sleep behaviors with elevated disease risks, and further established disease trajectories using the significant diseases; the low-risk sleep behaviors were defined as the reference. Thereafter, we also examined the trajectory of diseases linked with the overall poor sleep pattern by combining all of these unhealthy sleep behaviors. Network analysis was used to visualize the disease trajectories. Results: During a median follow-up of 12.2 years, we noted 12 medical conditions in relation to unhealthy sleep behaviors and the overall poor sleep pattern among 410,682 participants with a median age of 58.0 years. The majority of participants had unhealthy sleep behaviors; in particular, 75.62% reported frequent sleeplessness and 72.12% had abnormal sleep durations. In addition, a total of 16,032 individuals with an overall poor sleep pattern were identified. In general, three major disease clusters were associated with overall poor sleep status and unhealthy sleep behaviors according to the disease trajectory and network analyses, mainly in the digestive, musculoskeletal and connective tissue, and cardiometabolic systems.
Of note, two circulatory disease pairs (I25→I20 and I48→I50) showed the highest risks following these unhealthy sleep habits. Additionally, significant differences in disease trajectories were observed in relation to sex and sleep medication among individuals with poor sleep status. Conclusions: We identified the major disease clusters and high-risk diseases following participants with overall poor sleep health and unhealthy sleep behaviors, respectively, suggesting the need to investigate potential interventions targeting these key pathways.

Keywords: sleep, poor sleep, unhealthy sleep behaviors, disease trajectory, UK Biobank

Procedia PDF Downloads 92
456 Implementation of Algorithm K-Means for Grouping District/City in Central Java Based on Macro Economic Indicators

Authors: Nur Aziza Luxfiati

Abstract:

Clustering partitions a data set into sub-sets or groups such that elements within one group share properties with a high level of similarity, while similarity between groups is low. The K-Means algorithm is one of the clustering algorithms most widely used in scientific and industrial applications because its basic idea is very simple. This research applies k-means clustering to the problem of national development imbalances between regions in Central Java Province, based on macroeconomic indicators. The data sample is secondary data obtained from the Central Java Provincial Statistics Agency, taken from the macroeconomic indicators published in the 2019 National Socio-Economic Survey (Susenas). Outliers were detected using the z-score, and the number of clusters (k) was determined using the elbow method. After the clustering process was carried out, the result was validated using the Between-Class Variation (BCV) and Within-Class Variation (WCV) methods. Outlier detection using z-score normalization showed no outliers, and the clustering test obtained a low ratio value of 0.011%. Two district/city clusters with similar economies, based on the variables used, were identified in Central Java Province: a first cluster with a high economic level consisting of 13 districts/cities, and a second cluster with a low economic level consisting of 22 districts/cities. Within the second, low-economy cluster, the authors further grouped districts/cities by similarity on individual macroeconomic indicators: 20 districts by Gross Regional Domestic Product, 19 districts by Poverty Depth Index, 5 districts by Human Development, and 10 districts by Open Unemployment Rate.
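The pipeline described here (z-score normalization, elbow method, k-means, BCV/WCV validation) can be sketched compactly; the data below are synthetic stand-ins for the Susenas indicators, and the BCV/WCV definitions are one common formulation, not necessarily the authors' exact ones:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# synthetic stand-in: 35 districts/cities x 4 macroeconomic indicators,
# 13 "high-economy" and 22 "low-economy" regions
X = np.vstack([rng.normal(5, 1, (13, 4)),
               rng.normal(0, 1, (22, 4))])

# z-score normalization; |z| > 3 would flag outliers (none in this sample)
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# elbow method: within-cluster inertia drops sharply up to the "true" k
inertia = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(Z).inertia_
           for k in range(1, 6)}

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Z)
bcv = np.linalg.norm(km.cluster_centers_[0] - km.cluster_centers_[1])
wcv = km.inertia_  # within-cluster sum of squares
print(sorted(np.bincount(km.labels_)), round(bcv / wcv, 4))
```

On well-separated data like this, k = 2 recovers the 13/22 split exactly; a small BCV/WCV ratio, as reported in the abstract, indicates compact clusters relative to their separation measure.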

Keywords: clustering, K-Means algorithm, macroeconomic indicators, inequality, national development

Procedia PDF Downloads 158
455 Effect of Crown Gall and Phylloxera Resistant Rootstocks on Grafted Vitis Vinifera CV. Sultana Grapevine

Authors: Hassan Mahmoudzadeh

Abstract:

The bacterium Agrobacterium vitis causes crown and root gall disease, an important disease of grapevine, Vitis vinifera L. Phylloxera is likewise one of the most important pests in viticulture. Grapevine rootstocks were developed to provide increased resistance to soil-borne pests and diseases, but rootstock effects on some traits remain unclear, and the interaction between rootstock, scion and environment can induce different responses in grapevine physiology. 'Sultana' (Vitis vinifera L.) is one of the most valuable raisin grape cultivars in Iran. Thus, the aim of this study was to determine the rootstock effect on the growth characteristics, yield components and quality of 'Sultana' grapevine grown in the Urmia viticulture region. The experimental design was randomized complete blocks, with four treatments, four replicates and 10 vines per plot. The results show that all variables evaluated were significantly affected by the rootstock. Sultana/110R and Sultana/Nazmieh were, among the combinations, influenced by the year and had significantly higher yields per vine (13.25 and 12.14 kg/vine, respectively), higher than those of Sultana/5BB (10.56 kg/vine) and Sultana/Spota (10.25 kg/vine). The number of clusters per burst bud and per vine and the weight of clusters were affected by the rootstock as well. Pruning weight per vine, yield per pruning weight, leaf area per vine and leaf area index are variables related to grapevine physiology, and these were also affected by the rootstocks. In general, the rootstocks adapted well to the environment where the experiment was carried out, giving vigor and high yield to the Sultana grapevine, which means that they may be used by grape growers in this region. In sum, the study found the best rootstocks for 'Sultana' to be Nazmieh and 110R in terms of root and shoot growth.
However, the choice of the right rootstock depends on various aspects, such as those related to soil characteristics, climate conditions, grape varieties, and even clones, and production purposes.

Keywords: grafting, vineyards, grapevine, susceptibility

Procedia PDF Downloads 127
454 Cleaning of Scientific References in Large Patent Databases Using Rule-Based Scoring and Clustering

Authors: Emiel Caron

Abstract:

Patent databases contain patent-related data, organized in a relational data model, and are used to produce various patent statistics. These databases store raw data about the scientific references cited by patents. For example, Patstat holds references to tens of millions of scientific journal publications and conference proceedings. These references might be used to connect patent databases with bibliographic databases, e.g. to study the relation between science, technology, and innovation in various domains. Problematic in such studies is the low data quality of the references, i.e. they are often ambiguous, unstructured, and incomplete; moreover, a complete bibliographic reference is stored in only one attribute. Therefore, a computerized cleaning and disambiguation method for large patent databases is developed in this work. The method uses rule-based scoring and clustering. The rules are based on bibliographic metadata, retrieved from the raw data by regular expressions, and are transparent and adaptable. The rules, in combination with string similarity measures, are used to detect pairs of records that are potential duplicates. Due to the scoring, different rules can be combined to join scientific references, i.e. the rules reinforce each other. The scores are based on expert knowledge and initial method evaluation. After the scoring, pairs of scientific references that score above a certain threshold are clustered by means of a single-linkage clustering algorithm to form connected components. The method is designed to disambiguate all the scientific references in the Patstat database. The performance evaluation of the clustering method, on a large golden set with highly cited papers, shows on average a 99% precision and a 95% recall. The method is therefore accurate but careful, i.e. it weighs precision over recall: separate clusters of high precision are sometimes formed when there is not enough evidence for connecting scientific references, e.g. in the case of missing year and journal information for a reference. The clusters produced by the method can be used to directly link the Patstat database with bibliographic databases such as the Web of Science or Scopus.
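The score-then-cluster scheme can be illustrated with a toy stand-alone sketch: regex-extracted metadata reinforces a fuzzy string similarity, and pairs above a threshold are merged into connected components via union-find (single linkage). The references, weights, and threshold are illustrative, not the Patstat rules.

```python
import re
from difflib import SequenceMatcher
from itertools import combinations

refs = [
    "Smith J, J Appl Phys, vol 12, 1999, p. 345",
    "SMITH J., Journal of Applied Physics 12 (1999) 345",
    "Jones A, Nature 410, 2001, 120-125",
]

def year_of(ref):
    """Extract a publication year by regular expression, if present."""
    m = re.search(r"\b(?:19|20)\d{2}\b", ref)
    return m.group() if m else None

def pair_score(a, b):
    """String similarity reinforced by a metadata rule (matching years
    raise the score, conflicting years lower it)."""
    score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
    ya, yb = year_of(a), year_of(b)
    if ya and yb:
        score += 0.3 if ya == yb else -0.3
    return score

# single-linkage clustering: union-find over pairs above the threshold
parent = list(range(len(refs)))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

for i, j in combinations(range(len(refs)), 2):
    if pair_score(refs[i], refs[j]) > 0.7:
        parent[find(i)] = find(j)

clusters = {}
for i in range(len(refs)):
    clusters.setdefault(find(i), []).append(i)
print(sorted(clusters.values()))  # the two Smith variants merge; Jones stays alone
```

Because clustering is single-linkage over thresholded pairs, missing evidence (e.g. no year) simply yields fewer merges, which matches the precision-over-recall behavior described above.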

Keywords: clustering, data cleaning, data disambiguation, data mining, patent analysis, scientometrics

Procedia PDF Downloads 194
453 Hierarchical Clustering Algorithms in Data Mining

Authors: Z. Abdullah, A. R. Hamdan

Abstract:

Clustering is a process of grouping objects and data into clusters such that data objects from the same cluster are similar to each other. Clustering is one of the areas of data mining, and its algorithms can be classified into partitioning, hierarchical, density-based, and grid-based approaches. In this paper, we survey and review four major hierarchical clustering algorithms: CURE, ROCK, CHAMELEON, and BIRCH. The resulting state of the art of these algorithms will help in eliminating their current problems, as well as in deriving more robust and scalable clustering algorithms.
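Of the four algorithms surveyed, BIRCH has a standard implementation in scikit-learn; a minimal example on synthetic data (the threshold value is an illustrative choice) shows its two-phase workflow of CF-tree summarization followed by global clustering:

```python
import numpy as np
from sklearn.cluster import Birch

rng = np.random.default_rng(1)
# three well-separated 2-D blobs of 50 points each
X = np.vstack([rng.normal(c, 0.3, (50, 2)) for c in [(0, 0), (4, 0), (0, 4)]])

# BIRCH first compresses points into CF-tree sub-clusters (radius <= threshold),
# then runs a global clustering step over the sub-cluster centroids
labels = Birch(threshold=0.5, n_clusters=3).fit_predict(X)
print([len(set(labels[i:i + 50])) for i in (0, 50, 100)])  # each blob -> one label
```

The CF-tree is what makes BIRCH scalable: the global step operates on a small number of sub-cluster summaries rather than on every point.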

Keywords: clustering, unsupervised learning, algorithms, hierarchical

Procedia PDF Downloads 885
452 Numerical Simulation of Hydraulic Fracture Propagation in Marine-continental Transitional Tight Sandstone Reservoirs by Boundary Element Method: A Case Study of Shanxi Formation in China

Authors: Jiujie Cai, Fengxia LI, Haibo Wang

Abstract:

After years of research, offshore oil and gas development has now shifted to unconventional reservoirs, where multi-stage hydraulic fracturing technology has been widely used. However, the simulation of complex hydraulic fractures in tight reservoirs faces geological and engineering difficulties, such as large burial depths, sand-shale interbeds, and complex stress barriers. The objective of this work is to simulate hydraulic fracture propagation in the tight sandstone matrix of marine-continental transitional reservoirs, with the Shanxi Formation in the Tianhuan syncline of the Dongsheng gas field as the research target. The characteristic parameters of vertical rock samples with rich bedding were clarified through rock mechanics experiments. The influence of rock mechanical parameters, the vertical stress difference between pay zone and bedding layer, and fracturing parameters (such as injection rates, fracturing fluid viscosity, and the number of perforation clusters within a single stage) on fracture initiation and propagation was investigated. A 3-D fracture propagation model was built to investigate the complex fracture propagation morphology by the boundary element method, considering the strength of the bonding surface between layers, the vertical stress difference, and the fracturing parameters (injection rates, fluid volume and viscosity). The results indicate that, at a vertical stress difference of 3 MPa and accounting for the weak bonding surface between layers, the fracture height can break through into the upper interlayer when the thickness of the overlying bedding layer is 6-9 m, whereas the fracture stays within the pay zone when the overlying interlayer is thicker than 13 m. The difference in fluid volume distribution between clusters can exceed 20% when the stress difference between the clusters in a stage exceeds 2 MPa, and fracture clusters in high-stress zones cannot initiate when the stress difference within the stage exceeds 5 MPa. The simulated fracture heights are much larger if the effect of the weak bonding surface between layers is not included. Increasing the injection rate, increasing the fracturing fluid viscosity, and reducing the number of clusters within a single stage promote fracture height propagation through the layers, while optimizing the perforation positions and reducing the number of perforations promote uniform fracture growth. Typical curves for fracture height estimation were established for the tight sandstone of the Lower Permian Shanxi Formation. The model results show good consistency with micro-seismic monitoring results of hydraulic fracturing in Well 1HF.

Keywords: fracture propagation, boundary element method, fracture height, offshore oil and gas, marine-continental transitional reservoirs, rock mechanics experiment

Procedia PDF Downloads 127
451 Growth of Droplet in Radiation-Induced Plasma of Own Vapour

Authors: P. Selyshchev

Abstract:

A theoretical approach is developed to describe the evolution of drops in an atmosphere of their own vapour and a buffer gas under irradiation. It is shown that irradiation influences the size of the stable droplet and the conditions under which the droplet exists. Under irradiation, the evolution of a drop becomes more complex: non-monotonic and periodic changes of drop size become possible. All possible solutions are represented by means of phase portraits, and all qualitatively different phase portraits are found as functions of the critical parameters: the cluster generation rate and the substance density.

Keywords: irradiation, steam, plasma, cluster formation, liquid droplets, evolution

Procedia PDF Downloads 441
450 The Role of Knowledge Management in Innovation: Spanish Evidence

Authors: María Jesús Luengo-Valderrey, Mónica Moso-Díez

Abstract:

In the knowledge-based economy, innovation is considered essential for survival and growth in organizations. Knowledge management, in turn, is currently understood as one of the keys to the innovation process, and both factors are generally acknowledged as generators of competitive advantage in organizations. Specifically, R&D&I activities and activities that generate internal knowledge have a positive influence on innovation results; whether this effect is similar across firms is what we aimed to quantify in this paper. We focus on the impact that the proportion of knowledge workers, R&D&I investment, and the amounts destined for ICTs and for innovation training have on the variation of tangible and intangible returns in the high and medium technology sector in Spain. To do this, we performed an empirical analysis of the results of innovation questionnaires for enterprises in Spain, collected by the National Statistics Institute. First, using a clustering methodology, the behavior of these enterprises regarding knowledge management was identified. Then, using SEM methodology, we studied, for each cluster, the cause-effect relationships among constructs defined through variables, setting their type and quantification. The cluster analysis yields four groups, in which clusters 1 and 3 present the best innovation performance, with differentiating nuances between them, while clusters 2 and 4 obtained divergent results from a similar innovative effort. However, the results of the SEM analysis for each cluster show that, in all cases, knowledge workers are what affects innovation performance most, regardless of the level of investment, and that there is a strong correlation between knowledge workers and investment in knowledge generation.
The main finding is that Spanish high and medium technology companies improve their innovation performance by investing in internal knowledge generation measures, especially R&D activities, while underinvesting in external ones. This finding, and the strong correlation between knowledge workers and the set of activities that promote knowledge generation, should be taken into account by company managers when making decisions about their investments in innovation, since these are key to improving their opportunities in the global market.

Keywords: high and medium technology sector, innovation, knowledge management, Spanish companies

Procedia PDF Downloads 237
449 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering

Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott

Abstract:

Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample, to better understand tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. In-depth analysis of the state-of-the-art filtering methods for single cell data showed that, in some cases, they do not separate noisy and normal cells sufficiently. We introduce an algorithm that filters and clusters single cell data simultaneously, without relying on particular genes or thresholds chosen by eye. It detects communities in a Shared Nearest Neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices with a weak clustering belonging. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results: we were able to replicate the results on a Peripheral Blood Mononuclear Cells dataset, and we applied the algorithm to two samples of ovarian cancer from the same patient, before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the ovarian post-chemotherapy data with interesting marker genes that are potentially relevant for medical research.
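The SNN-graph-plus-modularity strategy can be sketched on synthetic data (a hypothetical toy setup, not the authors' pipeline; networkx's greedy modularity optimizer stands in for whatever community detection the authors used, and the 0.5 belonging threshold is an illustrative choice):

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(2)
# two tight "cell types" plus a few widely scattered noisy cells
cells = np.vstack([rng.normal(6, 1, (30, 20)),
                   rng.normal(-6, 1, (30, 20)),
                   rng.normal(0, 6, (5, 20))])

k = 10
dist = np.linalg.norm(cells[:, None] - cells[None, :], axis=2)
knn = [set(np.argsort(row)[1:k + 1]) for row in dist]

# shared-nearest-neighbour graph: edge weight = number of shared neighbours
G = nx.Graph()
G.add_nodes_from(range(len(cells)))
for i in range(len(cells)):
    for j in range(i + 1, len(cells)):
        w = len(knn[i] & knn[j])
        if w:
            G.add_edge(i, j, weight=w)

communities = [set(c) for c in greedy_modularity_communities(G, weight="weight")]

# remove vertices with weak clustering belonging:
# few of their edges point into their own community
kept = [v for com in communities for v in com
        if G.degree(v) == 0 or sum(u in com for u in G[v]) >= 0.5 * G.degree(v)]
```

Noisy cells sit between clusters, so their shared-neighbour edges are sparse and split across communities, which is exactly what the belonging criterion penalizes.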

Keywords: cancer research, graph theory, machine learning, single cell analysis

Procedia PDF Downloads 113
448 Contextual SenSe Model: Word Sense Disambiguation using Sense and Sense Value of Context Surrounding the Target

Authors: Vishal Raj, Noorhan Abbas

Abstract:

Ambiguity in NLP (Natural Language Processing) refers to the ability of a word, phrase, sentence, or text to have multiple meanings, which results in various kinds of ambiguities: lexical, syntactic, semantic, anaphoric and referential. This study focuses mainly on the issue of lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on words for training and testing, but we have used the lemma and Part of Speech (POS) tokens of words instead: the lemma adds generality, and the POS adds properties of the word to the token. We have designed a novel method to create an affinity matrix that captures the affinity between any pair of lemma_POS tokens (a token in which the lemma and POS of a word are joined by an underscore) in a given training set. Additionally, we have devised an algorithm to create sense clusters of tokens using the affinity matrix under a hierarchy of the POS of the lemma. Furthermore, three different mechanisms to predict the sense of a target word using the affinity/similarity value are devised. Each contextual token contributes to the sense of the target word with some value, and whichever sense receives the highest value becomes the sense of the target word. Contextual tokens thus play a key role in creating sense clusters and predicting the sense of the target word; hence the model is named the Contextual SenSe Model (CSM). CSM exhibits noteworthy simplicity and lucidity of explanation, in contrast to contemporary deep learning models characterized by intricacy, time-intensive processes, and challenging interpretation. CSM is trained on the SemCor training data and evaluated on the SemEval test dataset. The results indicate that, despite the naivety of the method, it achieves promising results when compared to the Most Frequent Sense (MFS) model.
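The voting idea — each contextual token contributes an affinity value toward each sense, and the highest-scoring sense wins — can be shown with a tiny toy sketch (hypothetical sense-tagged data; a simplified single mechanism, not the paper's full affinity matrix or its three prediction mechanisms):

```python
from collections import defaultdict

# toy sense-tagged training data: (context lemma_POS tokens, sense of "bank_NOUN")
train = [
    (["money_NOUN", "deposit_VERB", "account_NOUN"], "bank%finance"),
    (["loan_NOUN", "money_NOUN", "interest_NOUN"], "bank%finance"),
    (["river_NOUN", "water_NOUN", "fish_NOUN"], "bank%river"),
    (["river_NOUN", "grass_NOUN", "sit_VERB"], "bank%river"),
]

# affinity: co-occurrence counts between context tokens and senses
affinity = defaultdict(lambda: defaultdict(float))
for context, sense in train:
    for tok in context:
        affinity[tok][sense] += 1.0

def predict(context):
    """Each context token votes for senses with its affinity value;
    the sense with the highest accumulated value wins."""
    scores = defaultdict(float)
    for tok in context:
        for sense, w in affinity[tok].items():
            scores[sense] += w
    return max(scores, key=scores.get) if scores else None

print(predict(["money_NOUN", "loan_NOUN"]))   # bank%finance
print(predict(["river_NOUN", "water_NOUN"]))  # bank%river
```

Using lemma_POS tokens rather than surface forms means "banks", "banking" (as a noun) and "bank" all contribute to the same affinity row, which is the generality the abstract refers to.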

Keywords: word sense disambiguation (wsd), contextual sense model (csm), most frequent sense (mfs), part of speech (pos), natural language processing (nlp), oov (out of vocabulary), lemma_pos (a token where lemma and pos of word are joined by underscore), information retrieval (ir), machine translation (mt)

Procedia PDF Downloads 108
447 A Concept for Flexible Battery Cell Manufacturing from Low to Medium Volumes

Authors: Tim Giesen, Raphael Adamietz, Pablo Mayer, Philipp Stiefel, Patrick Alle, Dirk Schlenker

Abstract:

The competitiveness and success of new electrical energy storages such as battery cells are significantly dependent on a short time-to-market. Producers who decide to supply new battery cells to the market need to adapt their manufacturing easily to early customers' needs in terms of cell size, materials, delivery time and quantity. In the initial state, the required output rates allow the producers neither to run a fully automated manufacturing line nor to supply handmade battery cells, and so far there has been no solution for manufacturing battery cells in low to medium volumes in a reproducible way. Thus, a concept for the flexible assembly of battery cells, in terms of cell format and output quantity, was developed by the Fraunhofer Institute for Manufacturing Engineering and Automation. Based on clustered processes, the modular system platform can be modified, enlarged or retrofitted in a short time frame according to the ordered product. The paper shows the analysis of the production steps of a conventional battery cell assembly line. Process solutions were found by using I/O analysis, functional structures, and morphological boxes. The identified elementary functions were subsequently clustered by functional coherences into automation solutions, generating the individual process clusters. The result presented in this paper enables different cell products to be manufactured on the same production system using seven process clusters. The paper shows the solution for a batch-wise flexible battery cell production using advanced process control. Further, the performed tests and the benefits of using the process clusters as cyber-physical systems for an integrated production and value chain are discussed. The solution lowers the hurdles for SMEs to launch innovative cell products on the global market.

Keywords: automation, battery production, carrier, advanced process control, cyber-physical system

Procedia PDF Downloads 338
446 Multivariate Statistical Analysis of Heavy Metals Pollution of Dietary Vegetables in Swabi, Khyber Pakhtunkhwa, Pakistan

Authors: Fawad Ali

Abstract:

Toxic heavy metal contamination has a negative impact on soil quality, which ultimately pollutes the agricultural system. In the current work, we analyzed the uptake of various heavy metals by dietary vegetables grown in the wastewater-irrigated areas of Swabi city. The samples of soil and vegetables were analyzed for the heavy metals Cd, Cr, Mn, Fe, Ni, Cu, Zn and Pb using an atomic absorption spectrophotometer. High levels of metals were found in wastewater-irrigated soil and vegetables in the study area; in particular, the concentrations of Pb and Cd in the dietary vegetables crossed the permissible levels of the World Health Organization. A substantial positive correlation was found between the soil and vegetable contamination. The transfer factor for several metals, including Cr, Zn, Mn, Ni, Cd and Cu, was greater than 0.5, which shows enhanced accumulation of these metals due to contamination by domestic discharges and industrial effluents. Linear regression analysis indicated a significant correlation of the heavy metals Pb, Cr, Cd, Ni, Zn, Cu, Fe and Mn in vegetables with their concentrations in soil (0.964 at P ≤ 0.001). Abelmoschus esculentus showed a Health Risk Index (HRI) for Pb above 1 in both adults and children. The source identification analysis, carried out by Principal Component Analysis (PCA) and Cluster Analysis (CA), showed that the ground water and soil were being polluted by trace metals from industries and domestic wastes. Hierarchical cluster analysis (HCA) divided the metals into two clusters for wastewater and soil, but into five clusters for the soil of the control area. PCA extracted two factors for wastewater, contributing 61.086% and 16.229% of the total variance of 77.315%. For the soil samples, PCA extracted two factors with a total variance of 79.912%, factor 1 and factor 2 contributing 63.889% and 16.023% respectively. For the sub-soil, PCA extracted two factors with a total variance of 76.136%, factor 1 accounting for 61.768% and factor 2 for 14.368% of the total variance.
The high pollution load index of vegetables in the study area, due to metal-polluted soil, calls for proper legislation to prevent further contamination of vegetables, and this work reveals serious health risks to the human population of the study area.
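The two quantitative tools used above, transfer factors and PCA factor extraction, are easy to reproduce in outline. A minimal sketch on synthetic concentrations (illustrative numbers, not the Swabi data; a single common "industrial" factor is assumed to drive most metals):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
metals = ["Cd", "Cr", "Mn", "Fe", "Ni", "Cu", "Zn", "Pb"]
# synthetic soil and vegetable concentrations (mg/kg) at 40 sites;
# one latent pollution source loads on every metal
source = rng.normal(0, 1, (40, 1))
soil = 10 + 3 * source + rng.normal(0, 1, (40, 8))
veg = 0.6 * soil + rng.normal(0, 0.5, (40, 8))

# transfer factor = mean vegetable conc. / mean soil conc.;
# values above 0.5 signal enhanced plant uptake
tf = veg.mean(axis=0) / soil.mean(axis=0)

# PCA on z-scored soil data; a dominant first factor suggests one shared source
Z = (soil - soil.mean(axis=0)) / soil.std(axis=0)
var = PCA(n_components=2).fit(Z).explained_variance_ratio_
print(dict(zip(metals, tf.round(2))), var.round(3))
```

With one latent source driving all metals, the first PCA factor dominates the explained variance, mirroring the single large factor 1 reported for each medium in the abstract.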

Keywords: health risk, vegetables, wastewater, atomic absorption spectrophotometer

Procedia PDF Downloads 70
445 Superchaotropicity: Grafted Surface to Probe the Adsorption of Nano-Ions

Authors: Raimoana Frogier, Luc Girard, Pierre Bauduin, Diane Rebiscoul, Olivier Diat

Abstract:

Nano-ions (NIs) are ionic species or clusters of nanometric size. Their low charge density and the delocalization of their charge give special properties to some NIs belonging to the chemical classes of polyoxometalates (POMs) and boron clusters. They have the particularity of interacting non-covalently with neutral hydrated surfaces or interfaces such as assemblies of surface-active molecules (micelles, vesicles, lyotropic liquid crystals), foam bubbles or emulsion droplets. This makes it possible to classify these NIs in the Hofmeister series as superchaotropic ions. The adsorption mechanism is complex, linked to the simultaneous dehydration of the ion and of the molecule or supramolecular assembly with which it interacts, with an overall enthalpic gain in the free energy of the system. This interaction process is reversible and is sufficiently pronounced to induce changes in molecular and supramolecular shape or conformation, and phase transitions in the liquid phase, all at sub-millimolar ionic concentrations. This property of some NIs opens up new possibilities for applications in fields as varied as biochemistry for solubilization, or the recovery of metals of interest by foams in the form of NIs... In order to better understand the physico-chemical mechanisms at the origin of this interaction, we use silicon wafers functionalized with non-ionic oligomers (polyethylene glycol, or PEG, chains) to study in situ, by X-ray reflectivity, the interaction of NIs with the grafted chains. This study, carried out at the ESRF (European Synchrotron Radiation Facility), has shown that the adsorption of NIs such as POMs has very fast kinetics; moreover, the distribution of the NIs in the grafted PEG chain layer was quantified. These results are very encouraging and confirm what has been observed at soft interfaces such as micelles or foams.
The possibility of varying the density, length and chemical nature of the grafted chains makes this system an ideal tool for providing the kinetic and thermodynamic information needed to decipher the complex mechanisms at the origin of this adsorption.

Keywords: adsorption, nano-ions, solid-liquid interface, superchaotropicity

Procedia PDF Downloads 67