Search results for: tags' clusters
512 Molecular Characterization of Listeria monocytogenes from Fresh Fish and Fish Products
Authors: Beata Lachtara, Renata Szewczyk, Katarzyna Bielinska, Kinga Wieczorek, Jacek Osek
Abstract:
Listeria monocytogenes is an important human and animal pathogen that causes foodborne outbreaks. The bacteria may be present in different types of food: cheese, raw vegetables, sliced meat products, vacuum-packed sausages, poultry, meat, and fish. The most common method used for investigating the genetic diversity of L. monocytogenes is pulsed-field gel electrophoresis (PFGE). This technique is reliable and reproducible and is established as the gold standard for typing L. monocytogenes. The aim of the study was the characterization, by molecular serotyping and PFGE analysis, of L. monocytogenes strains isolated from fresh fish and fish products in Poland. A total of 301 samples, including fresh fish (n = 129) and fish products (n = 172), were collected between January 2014 and March 2016. The bacteria were detected using the ISO 11290-1 standard method. Molecular serotyping was performed with PCR. The isolates were typed with the PFGE method according to the protocol developed by the European Union Reference Laboratory for L. monocytogenes, with some modifications. Based on the PFGE profiles, two dendrograms were generated for strains digested separately with two restriction enzymes: AscI and ApaI. Analysis of the fingerprint profiles was performed using Bionumerics software version 6.6 (Applied Maths, Belgium). A 95% similarity threshold was applied to differentiate the PFGE pulsotypes. The study revealed that 57 of 301 (18.9%) samples were positive for L. monocytogenes. The bacteria were identified in 29 (50.9%) ready-to-eat fish products and in 28 (49.1%) fresh fish. It was found that 40 (70.2%) strains were of serotype 1/2a, 14 (24.6%) of 1/2b, two (3.5%) of 4b, and one (1.8%) of 1/2c. Serotypes 1/2a, 1/2b, and 4b were present at similar frequencies in both categories of food, whereas serotype 1/2c was detected only in fresh fish. The PFGE analysis with AscI demonstrated 43 different pulsotypes; among them, 33 (76.7%) were represented by only one strain.
The remaining 10 profiles contained more than one isolate: eight pulsotypes comprised two L. monocytogenes isolates each, one profile contained three isolates, and one restriction type contained five strains. In the case of ApaI typing, the PFGE analysis showed 27 different pulsotypes, including 17 (63.0%) types represented by only one strain. Ten (37.0%) clusters contained more than one strain: four profiles covered two strains each, three had three isolates, one had five strains, one eight strains, and one ten isolates. It was observed that isolates assigned to the same PFGE type were usually of the same serotype (1/2a or 1/2b). The majority of the clusters contained strains from both sources (fresh fish and fish products) isolated at different times. Most of the strains grouped in a given cluster by AscI restriction were assigned to the same group in the ApaI analysis. In conclusion, the PFGE typing used in the study showed a high genetic diversity among the L. monocytogenes isolates. The strains were grouped into varied clonal clusters, which may suggest different sources of contamination. The results demonstrated that serotype 1/2a was the most common among isolates from fresh fish and fish products in Poland.
Keywords: Listeria monocytogenes, molecular characterization, PFGE, serotyping
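The pulsotype grouping described above can be illustrated with a small sketch: isolates are linked into the same pulsotype whenever their band-pattern similarity reaches the 95% threshold. This is an illustrative stand-in (Dice similarity over hypothetical band positions, union-find grouping), not the Bionumerics workflow actually used in the study.

```python
# Illustrative sketch: group PFGE profiles into pulsotypes by linking any
# pair whose band-pattern similarity is >= 95%, via union-find.
# Band patterns below are hypothetical, not from the study.

def dice_similarity(bands_a, bands_b):
    """Dice coefficient between two sets of band positions."""
    a, b = set(bands_a), set(bands_b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def pulsotypes(profiles, threshold=0.95):
    """Group isolates whose pairwise similarity >= threshold."""
    names = list(profiles)
    parent = {n: n for n in names}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path compression
            n = parent[n]
        return n

    for i, x in enumerate(names):
        for y in names[i + 1:]:
            if dice_similarity(profiles[x], profiles[y]) >= threshold:
                parent[find(x)] = find(y)

    groups = {}
    for n in names:
        groups.setdefault(find(n), []).append(n)
    return list(groups.values())

# Hypothetical band patterns (kilobase positions) for four isolates.
profiles = {
    "Lm01": [20, 45, 90, 150, 310],
    "Lm02": [20, 45, 90, 150, 310],  # identical to Lm01 -> same pulsotype
    "Lm03": [20, 45, 90, 150, 300],  # one band differs -> similarity 0.8
    "Lm04": [25, 60, 110, 200, 420],
}
clusters = pulsotypes(profiles)
print(sorted(sorted(c) for c in clusters))
# [['Lm01', 'Lm02'], ['Lm03'], ['Lm04']]
```

In practice, dendrogram software computes the full similarity matrix and cuts the tree at the chosen threshold; the union-find shortcut above reproduces only the final grouping step.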
Procedia PDF Downloads 288
511 Eco-Cities in Challenging Environments: Pollution as a Polylemma in the UAE
Authors: Shaima A. Al Mansoori
Abstract:
Eco-cities have become part of the broader, universal discourse on and embrace of sustainable communities. Given the ideals and 'potential' benefits of eco-cities for people, the environment, and prosperity, hardly can an argument be made against the desirability of eco-cities. Yet this paper posits that it is necessary for urban scholars, technocrats, and policy makers to engage in discussions of the pragmatism of implementing the ideals of eco-cities, for example from the political, budgetary, cultural, and other dimensions. In the context of such discourse, this paper examines the feasibility in the UAE of one of the cardinal principles and goals of eco-cities: the reduction or elimination of pollution through various creative and innovative initiatives. This paper contends that, laudable and desirable as this goal is, it is a polylemma and, therefore, overly ambitious and practically unattainable in the UAE. The paper uses a mixed-method research strategy, in which data are sourced from secondary and general sources through desktop research, from public records in governmental agencies, and from the conceptual academic and professional literature. Information from these sources is used, first, to define and review pollution as a concept and as a multifaceted phenomenon with multidimensional impacts. Second, the paper uses society's five goal clusters as a framework to identify key causes and impacts of pollution in the UAE. Third, the paper identifies and analyzes specific public policies, programs, and projects that make pollution in the UAE a polylemma. Fourth, the paper argues that the phenomenal rates of population increase, urbanization, economic growth, consumerism, and development in the UAE make pollution an inevitable product and burden that society must live with. This 'reality' makes the goal and desire of pollution-free cities pursuable but unattainable.
The paper will conclude by identifying and advocating creative and innovative initiatives that can be taken by the various stakeholders in the country to reduce and mitigate pollution in the short and long term.
Keywords: goal clusters, pollution, polylemma, sustainable communities
Procedia PDF Downloads 385
510 An Expert System for Assessment of Learning Outcomes for ABET Accreditation
Authors: M. H. Imam, Imran A. Tasadduq, Abdul-Rahim Ahmad, Fahd M. Aldosari
Abstract:
Course learning outcomes (CLOs) and the abilities at the time of graduation, referred to as student outcomes (SOs), are required to be assessed for ABET accreditation. A question in an assessment must target a CLO as well as an SO and must represent a required level of competence. This paper presents the idea of an Expert System (ES) to select a proper question satisfying ABET accreditation requirements. For the ES implementation, seven attributes of a question are considered, including the learning outcomes and the Bloom's Taxonomy level. A database contains all the data about a course, including course content topics, course learning outcomes, and the CLO-SO relationship matrix. The knowledge base of the presented ES contains a pool of questions, each with tags for the specified attributes. The questions and the attributes represent expert opinions. With an implicit rule base, the inference engine finds the best possible question satisfying the required attributes. It is shown that the novel idea of such an ES can be implemented and applied to a course with success. An application example is presented to demonstrate the working of the proposed ES.
Keywords: expert system, student outcomes, course learning outcomes, question attributes
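The selection idea can be sketched as a filter over a tagged question pool. This is a minimal stand-in for the inference step, under assumed attribute names (the paper's actual ES uses seven attributes and an implicit rule base that it does not publish in full):

```python
# Minimal sketch of attribute-based question selection; the attribute
# names ("clo", "so", "bloom", "topic") and the pool are assumptions
# made for illustration, not the paper's schema.

QUESTION_POOL = [
    {"id": "Q1", "clo": "CLO2", "so": "SO-a", "bloom": 3, "topic": "recursion"},
    {"id": "Q2", "clo": "CLO2", "so": "SO-a", "bloom": 5, "topic": "recursion"},
    {"id": "Q3", "clo": "CLO1", "so": "SO-e", "bloom": 2, "topic": "arrays"},
]

def select_question(pool, clo, so, min_bloom):
    """Return the question best matching the required CLO, SO and
    competence level (Bloom's Taxonomy), preferring the lowest
    qualifying Bloom level -- a stand-in for the ES inference step."""
    candidates = [q for q in pool
                  if q["clo"] == clo and q["so"] == so and q["bloom"] >= min_bloom]
    if not candidates:
        return None
    return min(candidates, key=lambda q: q["bloom"])

best = select_question(QUESTION_POOL, clo="CLO2", so="SO-a", min_bloom=3)
print(best["id"])  # Q1: meets the required level exactly, preferred over Q2
```

A real ES would add rules for topic coverage and avoid reusing recently asked questions; the filter-then-rank structure stays the same.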
Procedia PDF Downloads 251
509 Named Entity Recognition System for Tigrinya Language
Authors: Sham Kidane, Fitsum Gaim, Ibrahim Abdella, Sirak Asmerom, Yoel Ghebrihiwot, Simon Mulugeta, Natnael Ambassager
Abstract:
The lack of annotated datasets is a bottleneck to the progress of NLP in low-resourced languages. The work presented here consists of large-scale annotated datasets and models for a named entity recognition (NER) system for the Tigrinya language. Our manually constructed corpus comprises over 340K words tagged for NER with 12 distinct classes of entities, represented using several types of tagging schemes; over 118K of the tokens also carry part-of-speech (POS) tags. We conducted extensive experiments covering convolutional neural networks and transformer models; the highest performance achieved is an 88.8% weighted F1-score. These results are especially noteworthy given the unique challenges posed by Tigrinya's distinct grammatical structure and complex word morphologies. The system can be an essential building block for the advancement of NLP systems in Tigrinya and other related low-resourced languages, and can serve as a bridge for cross-referencing against higher-resourced languages.
Keywords: Tigrinya NER corpus, TiBERT, TiRoBERTa, BiLSTM-CRF
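One of the "tagging schemes" mentioned above is typically IOB2, in which each token receives an O tag or a B-/I- prefixed entity label. The sketch below shows the span-to-tag conversion on invented tokens (the example sentence and labels are illustrative, not taken from the released Tigrinya dataset):

```python
# Illustrative IOB2 encoding of entity spans; tokens and labels are
# invented for the example, not drawn from the corpus described above.

def spans_to_iob2(tokens, spans):
    """Convert (start, end, label) token spans to IOB2 tags,
    where end is exclusive."""
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = f"B-{label}"            # B- marks the span start
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"            # I- marks continuation
    return tags

tokens = ["John", "Smith", "visited", "Asmara", "."]
spans = [(0, 2, "PER"), (3, 4, "LOC")]
print(spans_to_iob2(tokens, spans))
# ['B-PER', 'I-PER', 'O', 'B-LOC', 'O']
```

Models such as BiLSTM-CRF or fine-tuned transformers then predict one such tag per token, and the weighted F1-score is computed over the resulting entity spans.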
Procedia PDF Downloads 130
508 Ferromagnetic Potts Models with Multi Site Interaction
Authors: Nir Schreiber, Reuven Cohen, Simi Haber
Abstract:
The Potts model has been widely explored in the literature over the last few decades. While many analytical and numerical results concern the traditional two-site interaction model in various geometries and dimensions, little is yet known about models where more than two spins interact simultaneously. We consider a ferromagnetic four-site interaction Potts model on the square lattice (FFPS), where the four spins reside at the corners of an elementary square. Each spin can take an integer value 1, 2, ..., q. We write the partition function as a sum over clusters consisting of monochromatic faces. When the number of faces becomes large, tracing out spin configurations is equivalent to enumerating large lattice animals. It is known that the asymptotic number of animals with k faces is governed by λᵏ, with λ ≈ 4.0626. Based on this observation, systems with q < 4 and q > 4 exhibit second and first order phase transitions, respectively. The nature of the transition in the q = 4 case is borderline. For any q, a critical giant component (GC) is formed. In the first order case, the GC is simple, while it is fractal when the transition is continuous. Using simple equilibrium arguments, we obtain a (zeroth order) bound on the transition point, and it is claimed that this bound should apply to other lattices as well. Next, taking into account higher order site contributions, the critical bound becomes tighter. Moreover, for q > 4, if corrections due to contributions from small clusters are negligible in the thermodynamic limit, the improved bound should be exact. The improved bound is used to relate the critical point to the finite correlation length. Our analytical predictions are confirmed by an extensive numerical study of the FFPS using the Wang-Landau method. In particular, the q = 4 marginal case is supported by a very ambiguous pseudo-critical finite-size behavior.
Keywords: entropic sampling, lattice animals, phase transitions, Potts model
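The setup described above can be written out explicitly. The following is a sketch in standard Potts conventions (the abstract does not fix signs or couplings, so the normalization here is an assumption):

```latex
% Four-site ferromagnetic Potts Hamiltonian on the square lattice:
% the sum runs over elementary squares (faces) p with corner spins
% s_{p,1},\dots,s_{p,4} \in \{1,\dots,q\}; the product of Kronecker
% deltas rewards fully monochromatic faces.
H = -J \sum_{p} \delta_{s_{p,1}\, s_{p,2}}\,
               \delta_{s_{p,2}\, s_{p,3}}\,
               \delta_{s_{p,3}\, s_{p,4}},
\qquad
Z = \sum_{\{s\}} e^{-\beta H}.

% Asymptotic count of lattice animals with k faces, as cited above,
% which controls the large-cluster entropy in the expansion of Z:
A_k \sim C\,\lambda^{k}, \qquad \lambda \approx 4.0626.
```

Comparing the λᵏ entropy gain of large animals against their Boltzmann energy cost is what yields the zeroth-order bound on the transition point mentioned in the abstract.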
Procedia PDF Downloads 160
507 Cas9-Assisted Direct Cloning and Refactoring of a Silent Biosynthetic Gene Cluster
Authors: Peng Hou
Abstract:
Natural products produced by marine bacteria serve as an immense reservoir for anti-infective drugs and therapeutic agents. Nowadays, heterologous expression of gene clusters of interest has been widely adopted as an effective strategy for natural product discovery. Briefly, the heterologous expression workflow is: biosynthetic gene cluster identification, pathway construction and expression, and product detection. However, gene cluster capture using the traditional transformation-associated recombination (TAR) protocol is inefficient (0.5% positive colony rate). To make things worse, most of these putative new natural products are only predicted by bioinformatics tools such as antiSMASH, and their corresponding biosynthetic pathways are either not expressed or expressed at very low levels under laboratory conditions. Those setbacks have inspired us to seek new technologies to efficiently edit and refactor biosynthetic gene clusters. Recently, two cutting-edge techniques have attracted our attention: CRISPR-Cas9 and Gibson Assembly. So far, we have pretreated Brevibacillus laterosporus genomic DNA with CRISPR-Cas9 nucleases that specifically generate breaks near the gene cluster of interest. This trial increased the efficiency of gene cluster capture to 9%. Moreover, using Gibson Assembly to add or delete certain operons and tailoring enzymes regardless of end compatibility, the silent construct (~80 kb) has been successfully refactored into an active one, yielding a series of expected analogs. With these novel molecular tools, we are confident that a mature high-throughput pipeline for DNA assembly, transformation, and product isolation and identification is no longer a daydream for marine natural product discovery.
Keywords: biosynthesis, CRISPR-Cas9, DNA assembly, refactor, TAR cloning
Procedia PDF Downloads 282
506 Variation among East Wollega Coffee (Coffea arabica L.) Landraces for Quality Attributes
Authors: Getachew Weldemichael, Sentayehu Alamerew, Leta Tulu, Gezahegn Berecha
Abstract:
Coffee quality improvement programs are becoming the focus of coffee research, as the world coffee consumption pattern has shifted toward high-quality coffee. However, there is limited information on the genetic variation of C. arabica for quality improvement in the potential specialty coffee-growing areas of Ethiopia. Therefore, this experiment was conducted with the objectives of determining the magnitude of variation among 105 coffee accessions collected from East Wollega coffee-growing areas and assessing correlations between the different coffee quality attributes. It was conducted in a randomized complete block design (RCBD) with three replications. Data on green bean physical characters (shape and make, bean color, and odor) and organoleptic cup quality traits (aromatic intensity, aromatic quality, acidity, astringency, bitterness, body, flavor, and overall standard of the liquor) were recorded. Analysis of variance, clustering, genetic divergence, principal component, and correlation analyses were performed using SAS software. The results revealed highly significant differences (P < 0.01) among the accessions for all quality attributes except odor and bitterness. Among the tested accessions, EW104/09, EW101/09, EW58/09, EW77/09, EW35/09, EW71/09, EW68/09, EW96/09, EW83/09, and EW72/09 had the highest total coffee quality values (the sum of bean physical and cup quality attributes). These genotypes could serve as sources of genes for green bean physical characters and cup quality improvement in Arabica coffee. Furthermore, cluster analysis grouped the coffee accessions into five clusters with significant inter-cluster distances, implying that there is moderate diversity among the accessions and that crossing accessions from these divergent clusters would result in heterosis and recombinants in segregating generations.
The principal component analysis revealed that the first three principal components with eigenvalues greater than unity accounted for 83.1% of the total variability across the nine quality attributes considered for PC analysis, indicating that all quality attributes contribute to the grouping of the accessions into different clusters. Organoleptic cup quality attributes showed positive and significant correlations at both the genotypic and phenotypic levels, demonstrating the possibility of simultaneous improvement of these traits. Path coefficient analysis revealed that acidity, flavor, and body had a high positive direct effect on overall cup quality, implying that these traits can be used as indirect criteria to improve overall coffee quality. Therefore, it was concluded that there is considerable variation among the accessions, which needs to be properly conserved for future improvement of coffee quality. However, the variability observed for quality attributes must be further verified using biochemical and molecular analyses.
Keywords: accessions, Coffea arabica, cluster analysis, correlation, principal component
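The "eigenvalues greater than unity" retention rule used above (the Kaiser criterion) can be sketched on a synthetic accession-by-attribute score matrix. The data here are random stand-ins, not the study's measurements; only the mechanics of the criterion are shown:

```python
# Kaiser-criterion sketch on synthetic data: retain principal components
# whose correlation-matrix eigenvalues exceed 1. The matrix of scores
# is invented (30 accessions x 9 attributes), not the paper's data.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(size=(30, 9))          # 30 accessions, 9 quality attributes
scores[:, 1] = scores[:, 0] + 0.1 * rng.normal(size=30)  # one correlated pair

corr = np.corrcoef(scores, rowvar=False)   # 9 x 9 correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]   # eigenvalues, descending

retained = eigvals[eigvals > 1.0]          # Kaiser criterion
explained = eigvals.cumsum() / eigvals.sum()
print(len(retained), round(float(explained[len(retained) - 1]), 3))
```

The trace of a correlation matrix equals the number of attributes, so eigenvalues above 1 mark components that explain more variance than any single standardized attribute.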
Procedia PDF Downloads 165
505 Interpersonal Variation of Salivary Microbiota Using Denaturing Gradient Gel Electrophoresis
Authors: Manjula Weerasekera, Chris Sissons, Lisa Wong, Sally Anderson, Ann Holmes, Richard Cannon
Abstract:
The aim of this study was to characterize the bacterial populations and yeasts in saliva by polymerase chain reaction followed by denaturing gradient gel electrophoresis (PCR-DGGE) and to measure yeast levels by culture. PCR-DGGE was performed to identify oral bacteria and yeasts in 24 saliva samples. DNA was extracted and used to generate amplicons of the V2-V3 hypervariable region of the bacterial 16S rDNA gene using PCR. Universal primers targeting the large subunit rDNA gene (25S-28S) of fungi were then used to amplify yeasts present in human saliva. The resulting PCR products were subjected to denaturing gradient gel electrophoresis using a universal mutation detection system. DGGE bands were extracted and sequenced using the Sanger method. A potential relationship was evaluated between groups of bacteria, identified by cluster analysis of the DGGE fingerprints, and the yeast levels and diversity. Significant interpersonal variation of the salivary microbiome was observed. Cluster and principal component analysis of the bacterial DGGE patterns yielded three significant major clusters, plus outliers. Seventeen of the 24 (71%) saliva samples were yeast-positive, at levels up to 10³ CFU/mL. C. albicans predominated, and six other species of yeast were detected. The presence, amount, and species of yeast showed no clear relationship to the bacterial clusters. The microbial community in saliva showed significant variation between individuals. The lack of association between yeasts and the bacterial fingerprints in saliva suggests significant ecological person-specific independence in these highly complex oral biofilm systems under normal oral conditions.
Keywords: bacteria, denaturing gradient gel electrophoresis, oral biofilm, yeasts
Procedia PDF Downloads 222
504 Test and Evaluation of Patient Tracking Platform in an Earthquake Simulation
Authors: Nahid Tavakoli, Mohammad H. Yarmohammadian, Ali Samimi
Abstract:
In an earthquake situation, medical response communities such as field and referral hospitals are challenged with the identification and tracking of injured victims. In our project, we developed a patient tracking platform (PTP) in which first responders triage patients with an electronic tag that reports each patient's location and selected information during his/her movement. This platform includes: 1) near field communication (NFC) tags (ISO 14443), 2) smart mobile phones (Android version 4.2.2), 3) base station laptops (Windows), 4) server software, 5) Android software for use by first responders, 6) disaster command software, and 7) the system architecture. Our model was developed through literature review, the Delphi technique, focus groups, platform design, and implementation in an earthquake exercise. This paper presents considerations for the content, functions, and technologies that must be applied for patient tracking in medical emergency situations. The robustness of the patient tracking platform (PTP) was demonstrated by tracking six patients in a simulated earthquake situation in the yard of the relief and rescue department of Isfahan's Red Crescent.
Keywords: test and evaluation, patient tracking platform, earthquake, simulation
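The tag-scan data model implied above can be sketched as a small record type: each NFC scan appends a station and location to the patient's track. The field names and values are assumptions for illustration; the paper does not publish its schema:

```python
# Minimal sketch of the kind of record a PTP tag scan might carry.
# Field names ("tag_id", "triage", "track") are assumptions, not the
# platform's actual schema.
from dataclasses import dataclass, field

@dataclass
class PatientTag:
    tag_id: str                                 # NFC tag UID (ISO 14443)
    triage: str                                 # e.g. "red", "yellow", "green"
    track: list = field(default_factory=list)   # (station, lat, lon) history

    def scan(self, station, lat, lon):
        """Record a scan event as the patient moves between stations."""
        self.track.append((station, lat, lon))

p = PatientTag(tag_id="04:A2:19:7F", triage="red")
p.scan("field_triage", 32.6546, 51.6680)
p.scan("referral_hospital", 32.6440, 51.6775)
print(p.triage, len(p.track))  # red 2
```

In the deployed platform, each scan would be pushed from the responder's phone to the base station and disaster-command software rather than held in memory.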
Procedia PDF Downloads 139
503 A Multi-Objective Decision Making Model for Biodiversity Conservation and Planning: Exploring the Concept of Interdependency
Authors: M. Mohan, J. P. Roise, G. P. Catts
Abstract:
Despite living in an era where conservation zones are de facto the central element of any sustainable wildlife management strategy, we still find ourselves grappling with several pareto-optimal situations regarding resource allocation and area distribution for the same. In this paper, a multi-objective decision making (MODM) model is presented to answer the question of whether or not we can establish mutual relationships between these contradicting objectives. For our study, we considered a red-cockaded woodpecker (Picoides borealis) habitat conservation scenario in the coastal plain of North Carolina, USA. The red-cockaded woodpecker (RCW) is a non-migratory, territorial bird that excavates cavities in living pine trees for roosting and nesting. RCW groups nest in an aggregation of cavity trees called a 'cluster', and for our model we use the number of clusters to be established as a measure of the size of the conservation zone required. The case study is formulated as a linear programming problem, and the objective function optimizes the red-cockaded woodpecker clusters, carbon retention rate, biofuel, public safety, and Net Present Value (NPV) of the forest. We studied the variation of the individual objectives with respect to the amount of area available and plotted a two-dimensional dynamic graph after establishing the interrelations between the objectives. We further explore the concept of interdependency by integrating the MODM model with GIS, deriving a raster file representing carbon distribution from the existing forest dataset. Model results demonstrate the applicability of interdependency from both linear and spatial perspectives, and suggest that this approach holds immense potential for enhancing environmental investment decision making in the future.
Keywords: conservation, interdependency, multi-objective decision making, red-cockaded woodpecker
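The pareto trade-off between competing linear objectives can be illustrated with a toy weighted-sum scalarization. This is not the paper's five-objective LP; the two objectives (RCW clusters vs. NPV) and all coefficients below are invented to show why, with linear objectives, the optimum sits at a corner of the feasible region:

```python
# Toy weighted-sum scalarization of two competing linear objectives
# (habitat clusters vs. NPV). Coefficients are invented, not the
# paper's calibrated model.

def solve(area_total, w_rcw, w_npv, rcw_per_ha=0.02, npv_per_ha=1500.0):
    """Split area between conservation and timber production so that
    w_rcw * clusters + w_npv * NPV is maximized. With linear objectives
    and a single area constraint, the optimum is a corner solution."""
    v_conserve = w_rcw * rcw_per_ha    # weighted value per conserved ha
    v_produce = w_npv * npv_per_ha     # weighted value per production ha
    conserved = area_total if v_conserve >= v_produce else 0.0
    return conserved, area_total - conserved

# Sweeping the weights traces the corner points of the pareto frontier.
for w in (0.0, 0.5, 1.0):
    conserved, produced = solve(1000.0, w_rcw=w * 1e5, w_npv=1.0 - w)
    print(w, conserved, produced)
```

The paper's full model adds more objectives and constraints, so its solutions need not be all-or-nothing; the sweep-over-weights idea for exploring trade-offs is the same.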
Procedia PDF Downloads 337
502 Effect of Distance to Health Facilities on Maternal Service Use and Neonatal Mortality in Ethiopia
Authors: Getiye Dejenu Kibret, Daniel Demant, Andrew Hayen
Abstract:
Introduction: In Ethiopia, more than half of newborn babies do not have access to Emergency Obstetric and Neonatal Care (EmONC) services. Understanding the effect of distance to health facilities on service use and neonatal survival is crucial to informing policymakers and improving resource distribution. We aimed to investigate the effect of distance to health services on maternal service use and neonatal mortality. Methods: We implemented a data linkage method based on geographic coordinates and calculated straight-line (Euclidean) distances from the Ethiopian 2016 Demographic and Health Survey (DHS) clusters to the closest health facility. We computed the distances in ESRI ArcGIS version 10.3 using the geographic coordinates of the DHS clusters and health facilities. Generalised Structural Equation Modelling (GSEM) was used to estimate the effect of distance on neonatal mortality. Results: Poor geographic accessibility to health facilities affects maternal service usage and increases the risk of newborn mortality. For every ten kilometres (km) of additional distance to a health facility, the odds of neonatal mortality increased by 1.33% (95% CI: 1.06% to 1.67%). Distance also negatively affected antenatal care, facility delivery, and postnatal counselling service use. Conclusions: A lack of geographical access to health facilities decreases the likelihood of newborns surviving their first month of life and affects health service use during pregnancy and immediately after birth. The study also showed that antenatal care use was positively associated with facility delivery service use and that both positively influenced postnatal care use, demonstrating the interconnectedness of the continuum of care for maternal and neonatal services. Policymakers can leverage the findings from this study to address accessibility barriers to health services.
Keywords: accessibility, distance, maternal health service, neonatal mortality
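The per-10-km estimate above compounds multiplicatively with distance. The sketch below is a back-of-the-envelope reading of that figure (the compounding itself is a modelling assumption; only the 1.33%-per-10-km value comes from the abstract):

```python
# Compounding the reported per-10-km odds increase over distance.
# Only the 1.0133 figure comes from the abstract; treating it as a
# constant multiplicative effect per 10 km is an assumption.

def odds_ratio(distance_km, or_per_10km=1.0133):
    """Odds ratio of neonatal mortality at distance_km relative to
    living at zero distance from a facility."""
    return or_per_10km ** (distance_km / 10.0)

for d in (10, 50, 100):
    print(d, round(odds_ratio(d), 3))
```

So a household 100 km from the nearest facility would face roughly 14% higher odds of neonatal mortality than one adjacent to a facility, under this constant-effect reading.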
Procedia PDF Downloads 112
501 Improving Fake News Detection Using K-means and Support Vector Machine Approaches
Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy
Abstract:
Fake news and false information are big challenges for all types of media, especially social media. There is a lot of false information, fake likes, views, and duplicated accounts, as big social networks such as Facebook and Twitter have admitted. Much of the information appearing on social media is doubtful and in some cases misleading. It needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to detect false information with less computation time and complexity, the dimensionality needs to be reduced. One of the best techniques for reducing data size is feature selection, whose aim is to choose a feature subset from the original set that improves classification performance. In this paper, a feature selection method is proposed that integrates K-means clustering and Support Vector Machine (SVM) approaches and works in four steps. First, the similarities between all features are calculated. Then, the features are divided into several clusters. Next, the final feature set is selected from all clusters. Finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several benchmark datasets, and the outcome showed better classification of false information for our approach. The detection performance was improved in two respects: the detection runtime decreased, and the classification accuracy increased because of the elimination of redundant features and the reduction of the dataset dimensions.
Keywords: clustering, fake news detection, feature selection, machine learning, social media, support vector machine
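The first three steps of the pipeline above (feature similarities, clustering, representative selection) can be sketched as follows. This is a toy rendering under assumptions: absolute correlation as the similarity measure, a plain k-means over similarity profiles, and the closest-to-center feature as each cluster's representative; the final SVM classification step is omitted:

```python
# Sketch of steps 1-3: compute feature similarities, cluster features
# with k-means, keep one representative per cluster. Similarity measure
# and representative rule are assumptions, not the paper's exact choices.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd k-means; returns a cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def select_features(data, k):
    """Cluster features by their pairwise-correlation profiles, then
    pick the feature closest to each cluster mean as representative."""
    sim = np.abs(np.corrcoef(data, rowvar=False))   # feature x feature
    labels = kmeans(sim, k)
    keep = []
    for j in range(k):
        members = np.flatnonzero(labels == j)
        if members.size == 0:
            continue
        center = sim[members].mean(axis=0)
        keep.append(int(members[((sim[members] - center) ** 2).sum(axis=1).argmin()]))
    return sorted(keep)

rng = np.random.default_rng(1)
base = rng.normal(size=(200, 3))
data = np.hstack([base, base + 0.01 * rng.normal(size=(200, 3))])  # 3 near-duplicates
print(select_features(data, k=3))
```

An SVM trained on the kept columns would then complete step 4; dropping near-duplicate features is what buys the runtime reduction described above.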
Procedia PDF Downloads 176
500 Maximization of Lifetime for Wireless Sensor Networks Based on Energy Efficient Clustering Algorithm
Authors: Frodouard Minani
Abstract:
Over the last decade, wireless sensor networks (WSNs) have been used in many areas such as health care, agriculture, defense, military applications, and disaster-hit areas. A wireless sensor network consists of a base station (BS) and a number of wireless sensors that monitor temperature, pressure, and motion under different environmental conditions. The key parameter in designing a protocol for WSNs is energy efficiency, since energy is the scarcest resource of sensor nodes and determines their lifetime. Maximizing sensor node lifetime is therefore an important issue in the design of applications and protocols for WSNs, and clustering sensor nodes is an effective topology control approach for achieving this goal. In this paper, the researcher presents an energy-efficient protocol to prolong the network lifetime based on an energy-efficient clustering algorithm. The Low Energy Adaptive Clustering Hierarchy (LEACH) is a cluster-based routing protocol used to lower energy consumption and improve the lifetime of WSNs. The proposed system maximizes the lifetime of the WSN by choosing the farthest cluster head (CH) instead of the closest CH and forming clusters with regard to parameter metrics such as node density, residual energy, and the distance between clusters (inter-cluster distance). Comparisons between the proposed protocol and comparative protocols in different scenarios were carried out, and the simulation results showed that the proposed protocol performs well over the other comparative protocols in various scenarios.
Keywords: base station, clustering algorithm, energy efficient, sensors, wireless sensor networks
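The cluster-head choice described above can be sketched as a weighted score over the stated metrics (residual energy, node density, distance to already-chosen heads). The weights and node data below are assumptions for illustration, not the paper's calibrated values:

```python
# Sketch of CH selection by residual energy, local density, and distance
# to existing cluster heads (favouring the farthest candidate, as the
# abstract describes). Weights and node values are invented.
import math

def pick_cluster_head(nodes, existing_heads,
                      w_energy=0.5, w_density=0.3, w_dist=0.2):
    """nodes: dicts with 'id', 'x', 'y', 'energy', 'density'."""
    def min_dist_to_heads(n):
        if not existing_heads:
            return 1.0   # neutral value when no head exists yet
        return min(math.hypot(n["x"] - h["x"], n["y"] - h["y"])
                   for h in existing_heads)

    def score(n):
        return (w_energy * n["energy"]
                + w_density * n["density"]
                + w_dist * min_dist_to_heads(n))

    return max(nodes, key=score)

nodes = [
    {"id": "n1", "x": 0,  "y": 0,  "energy": 0.9, "density": 4},
    {"id": "n2", "x": 50, "y": 50, "energy": 0.8, "density": 5},
    {"id": "n3", "x": 5,  "y": 5,  "energy": 0.5, "density": 2},
]
head1 = pick_cluster_head(nodes, existing_heads=[])
head2 = pick_cluster_head([n for n in nodes if n != head1], [head1])
print(head1["id"], head2["id"])  # n2 n1: n1 is far from n2 and energy-rich
```

A LEACH-style protocol would re-run such a selection each round so that the CH role, and its energy cost, rotates across the network.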
Procedia PDF Downloads 144
499 Feature Evaluation Based on Random Subspace and Multiple-K Ensemble
Authors: Jaehong Yu, Seoung Bum Kim
Abstract:
Clustering analysis can facilitate the extraction of intrinsic patterns in a dataset and reveal its natural groupings without requiring class information. For effective clustering analysis in high-dimensional datasets, unsupervised dimensionality reduction is an important task. Unsupervised dimensionality reduction can generally be achieved by feature extraction or feature selection. In many situations, feature selection methods are more appropriate than feature extraction methods because of their clear interpretation with respect to the original features. Unsupervised feature selection can be categorized into feature subset selection and feature ranking methods; we focus on unsupervised feature ranking methods, which evaluate features based on importance scores. Recently, several unsupervised feature ranking methods were developed based on ensemble approaches to achieve higher accuracy and stability. However, most ensemble-based feature ranking methods require the true number of clusters. Furthermore, these algorithms evaluate feature importance depending on the ensemble clustering solution, and they produce undesirable evaluation results if the clustering solutions are inaccurate. To address these limitations, we propose an ensemble-based feature ranking method with random subspace and multiple-k ensemble (FRRM). The proposed FRRM algorithm evaluates the importance of each feature with the random subspace ensemble, and all evaluation results are combined into ensemble importance scores. Moreover, FRRM does not require the true number of clusters to be determined in advance, through the use of the multiple-k ensemble idea. Experiments on various benchmark datasets were conducted to examine the properties of the proposed FRRM algorithm and to compare its performance with that of existing feature ranking methods.
The experimental results demonstrated that the proposed FRRM outperformed the competitors.
Keywords: clustering analysis, multiple-k ensemble, random subspace-based feature evaluation, unsupervised feature ranking
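The two ensemble ideas above (random feature subspaces, several candidate k values instead of one true k) can be rendered as a toy scorer. The importance measure here, between-cluster separation per feature scaled by its variance, is a crude proxy chosen for brevity, not the paper's actual criterion:

```python
# Toy rendering of the FRRM idea: cluster many random subspaces for
# several k values, score each feature by between-cluster separation,
# and average. The scoring proxy is an assumption, not FRRM's criterion.
import numpy as np

def kmeans_labels(X, k, rng, iters=25):
    """Plain Lloyd k-means, returning a label per row."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def frrm_scores(X, ks=(2, 3, 4), n_subspaces=15, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    scores, counts = np.zeros(p), np.zeros(p)
    for _ in range(n_subspaces):
        feats = rng.choice(p, size=max(2, p // 2), replace=False)
        for k in ks:                       # multiple-k: no single true k needed
            labels = kmeans_labels(X[:, feats], k, rng)
            for f in feats:                # between-cluster spread per feature
                sep = sum((X[labels == g, f].mean() - X[:, f].mean()) ** 2
                          for g in range(k) if (labels == g).any())
                scores[f] += sep / np.var(X[:, f])  # scale out raw variance
                counts[f] += 1
    return scores / np.maximum(counts, 1)

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
X[:50, 0] += 5.0                 # feature 0 carries real cluster structure
imp = frrm_scores(X)
print(int(np.argmax(imp)))       # feature 0 should rank highest
```

Averaging over subspaces and k values is what gives the ranking its stability: a feature only scores well if it separates clusters across many random views of the data.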
Procedia PDF Downloads 339
498 Performance Evaluation and Plugging Characteristics of Controllable Self-Aggregating Colloidal Particle Profile Control Agent
Authors: Zhiguo Yang, Xiangan Yue, Minglu Shao, Yue Yang, Rongjie Yan
Abstract:
It is difficult to achieve deep profile control because of the small pore throats and easy water channeling in low-permeability heterogeneous reservoirs, and traditional polymer microspheres suffer from a contradiction between injectivity and plugging. To resolve this contradiction, controllable self-aggregating colloidal particles (CSA) bearing amide groups on the microsphere surface were prepared by emulsion polymerization of styrene and acrylamide. A dispersed solution of CSA colloidal particles, whose particle size is much smaller than the diameter of the pore throats, was injected into the reservoir. When the microspheres migrated to the deep part of the reservoir, the CSA colloidal particles could automatically self-aggregate into large particle clusters under the action of the shielding agent and the control agent, thereby plugging the water channels. In this paper, the morphology, temperature resistance, and self-aggregation properties of CSA microspheres were studied by transmission electron microscopy (TEM) and bottle tests. The results showed that CSA microspheres exhibit a heterogeneous core-shell structure, good dispersion, and outstanding thermal stability: the microspheres remained regular, uniform spheres at 100℃ after aging for 35 days. With increasing cation concentration, the self-aggregation time of CSA gradually shortened, and the influence of divalent cations was greater than that of monovalent cations. Core flooding experiments showed that CSA polymer microspheres have good injection properties; CSA particle clusters can effectively plug the water channels and migrate to the deep part of the reservoir for profile control.
Keywords: heterogeneous reservoir, deep profile control, emulsion polymerization, colloidal particles, plugging characteristics
Procedia PDF Downloads 241497 Some Results on Cluster Synchronization
Authors: Shahed Vahedi, Mohd Salmi Md Noorani
Abstract:
This paper investigates cluster synchronization phenomena between community networks. We focus on the situation where a variety of dynamics occur in the clusters. In particular, we show that different synchronization states can occur simultaneously between the networks. The controller is designed with an adaptive control gain, and theoretical results are derived via Lyapunov stability theory. Simulations on well-known dynamical systems are provided to elucidate our results. Keywords: cluster synchronization, adaptive control, community network, simulation
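The adaptive-gain idea can be sketched in a few lines. The following is a generic illustration, not the paper's controller: a response Lorenz system is driven toward a drive Lorenz system by a feedback u = -k·e whose gain grows according to the common Lyapunov-motivated law k' = γ‖e‖². The systems, initial states, and γ are arbitrary choices for this demo.

```python
# Hedged sketch of adaptive-gain synchronization (generic scheme, not the
# paper's design): gain k grows while the synchronization error persists.

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def simulate(T=40.0, dt=1e-3, gamma=0.1):
    drive = [1.0, 1.0, 1.0]   # drive system state
    resp = [5.0, 5.0, 5.0]    # response system state
    k = 0.0                   # adaptive coupling gain, starts at zero
    e0 = e_now = None
    for n in range(int(T / dt)):
        e = [r - d for r, d in zip(resp, drive)]
        e_now = sum(c * c for c in e) ** 0.5
        if n == 0:
            e0 = e_now
        fd, fr = lorenz(drive), lorenz(resp)
        for i in range(3):
            drive[i] += dt * fd[i]
            resp[i] += dt * (fr[i] - k * e[i])   # control input u = -k * e
        k += dt * gamma * sum(c * c for c in e)  # k' = gamma * |e|^2
    return e0, e_now, k

e0, e_final, gain = simulate()
```

With the gain law above, k rises until the coupling is strong enough for the error to decay, after which k freezes; the final error is orders of magnitude below the initial one.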
Procedia PDF Downloads 475496 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model
Authors: Donatella Giuliani
Abstract:
In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. Firstly, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique inspired by the flashing characteristics of fireflies. In this context, it is used to determine the number of clusters and the related cluster means in a histogram-based segmentation approach. Subsequently, these means are used in the initialization step for the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are evaluated by applying the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as prior probabilities of each component. Applying Bayes' rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to the clusters according to their gray-level values. The proposed approach appears fairly solid and reliable when applied even to complex grayscale images. The validation has been performed by using different standard measures, more precisely: the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK), and the Davies-Bouldin (DB) index. The achieved results have strongly confirmed the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for the pixel assignment, which implies a consistent reduction of the computational costs. Keywords: clustering images, firefly algorithm, Gaussian mixture model, metaheuristic algorithm, image segmentation
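The EM stage of such a pipeline can be sketched for the one-dimensional (histogram) case. This is a hedged illustration on synthetic pixel data: the Firefly Algorithm seeding is replaced here by hand-picked initial means, and all parameter values are arbitrary assumptions.

```python
# Sketch of EM for a 1-D Gaussian mixture on grayscale values, followed by
# Bayes-rule pixel labeling via maximum posterior responsibility.
import math, random

def em_gmm_1d(values, means, iters=40):
    k = len(means)
    w = [1.0 / k] * k          # mixing weights (prior probabilities)
    var = [200.0] * k          # initial component variances (arbitrary)
    means = list(means)
    resp = []
    for _ in range(iters):
        # E-step: posterior responsibilities via Bayes' rule
        resp = []
        for v in values:
            p = [w[j] / math.sqrt(2 * math.pi * var[j])
                 * math.exp(-(v - means[j]) ** 2 / (2 * var[j]))
                 for j in range(k)]
            s = sum(p) or 1e-300
            resp.append([pj / s for pj in p])
        # M-step: re-estimate weights, means, and variances
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(values)
            means[j] = sum(r[j] * v for r, v in zip(resp, values)) / nj
            var[j] = max(sum(r[j] * (v - means[j]) ** 2
                             for r, v in zip(resp, values)) / nj, 1e-6)
    # pixel assignment: component with the maximum posterior probability
    labels = [max(range(k), key=lambda j: r[j]) for r in resp]
    return means, labels

random.seed(0)
pixels = ([random.gauss(60, 10) for _ in range(500)]      # dark region
          + [random.gauss(180, 12) for _ in range(500)])  # bright region
means, labels = em_gmm_1d(pixels, means=[50.0, 200.0])
```

On this two-mode synthetic histogram, the fitted means converge near the true intensities and the labels recover the two regions.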
Procedia PDF Downloads 217495 Modeling Aggregation of Insoluble Phase in Reactors
Authors: A. Brener, B. Ismailov, G. Berdalieva
Abstract:
In this paper, we present a modification of the kinetic Smoluchowski equation for binary aggregation, applied to systems with chemical reactions of first and second orders in which the main product is insoluble. The goal of this work is to create a theoretical foundation and engineering procedures for calculating chemical apparatuses under the joint course of chemical reactions and the aggregation of the insoluble dispersed phases formed in the working zones of the reactor. Keywords: binary aggregation, clusters, chemical reactions, insoluble phases
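A minimal numerical sketch of the underlying kinetics may help. The code below integrates the discrete Smoluchowski equations for binary aggregation with a constant kernel and a monomer source term standing in for the chemical reaction; the kernel, source rate, and truncation at 20-mers are illustrative assumptions, not the paper's modified equations.

```python
# Illustrative sketch: discrete Smoluchowski binary aggregation with a
# monomer source mimicking an insoluble product generated by reaction.

def step(n, dt, K=1.0, source=1.0):
    """One explicit Euler step; n[k] is the concentration of (k+1)-mers."""
    m = len(n)
    total = sum(n)
    dn = [-K * n[i] * total for i in range(m)]   # loss: i-mer meets anything
    dn[0] += source                              # monomers fed by the reaction
    for i in range(m):
        for j in range(m):
            s = i + j + 2                        # size of the merged cluster
            if s <= m:
                dn[s - 1] += 0.5 * K * n[i] * n[j]  # gain, counted once per pair
    return [n[i] + dt * dn[i] for i in range(m)]

n = [0.0] * 20          # sizes 1..20; larger clusters are truncated (mass leak)
for _ in range(2000):   # integrate to t = 20, near the quasi-steady state
    n = step(n, dt=0.01)
```

With a constant source, the monomer concentration settles to a quasi-steady value balancing production against aggregation, and the cluster size distribution decays monotonically with size.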
Procedia PDF Downloads 307494 Influence of Microstructure on Deformation Mechanisms and Mechanical Properties of Additively Manufactured Steel
Authors: Etienne Bonnaud, David Lindell
Abstract:
Correlations between microstructure, deformation mechanisms, and mechanical properties in additively manufactured 316L steel components have been investigated. Mechanical properties in the vertical direction (building direction) and in the horizontal direction (in-plane directions) are markedly different. Vertically built specimens show lower yield stress but higher elongation than their horizontally built counterparts. Microscopic observations by electron backscatter diffraction (EBSD) for both build orientations reveal a strong [110] fiber texture in the build direction but different grain morphologies. These microstructures are used as input in subsequent crystal plasticity numerical simulations to understand their influence on the deformation mechanisms and the mechanical properties. Mean-field simulations using a visco-plastic self-consistent (VPSC) model were carried out first but did not give results consistent with the tensile test experiments. A more detailed full-field model had to be used, based on the visco-plastic fast Fourier transform (VPFFT) method. A more accurate microstructure description was then input to the simulation model, where thin vertical regions of smaller grains were also taken into account. It turned out that these small grain clusters were responsible for the discrepancies in yield stress and hardening. Texture and morphology have a strong effect on mechanical properties. The different mechanical behaviors between vertically and horizontally printed specimens could be explained by means of numerical full-field crystal plasticity simulations, and the presence of thin clusters of smaller grains was shown to play a central role in the deformation mechanisms. Keywords: additive manufacturing, crystal plasticity, full-field simulations, mean-field simulations, texture
Procedia PDF Downloads 70493 Design and Analysis of Deep Excavations
Authors: Barham J. Nareeman, Ilham I. Mohammed
Abstract:
Excavations in urban developed areas are generally supported by deep excavation walls such as diaphragm walls, bored piles, soldier piles, and sheet piles. In some cases, these walls may be braced by internal braces or tie-back anchors. Tie-back anchors are by far the predominant method for wall support; the large working space inside the excavation provided by a tie-back anchor system is a significant construction advantage. This paper aims to analyze a deep excavation bracing system consisting of a contiguous pile wall braced by pre-stressed tie-back anchors, which is part of a huge residential building project located in Gaziantep province, Turkey. The contiguous pile wall will be constructed with a length of 270 m and consists of 285 piles, each having a diameter of 80 cm and a center-to-center spacing of 95 cm. The deformation analysis was carried out with the finite element analysis tool PLAXIS. In the analysis, the beam element method together with an elastic-perfectly-plastic soil model and the Hardening Soil model was used to design the contiguous pile wall, the tie-back anchor system, and the soil. The two soil clusters, limestone and a filled soil, were modelled with both the Hardening Soil and Mohr-Coulomb models. According to the basic design, both soil clusters are modelled as drained. The simulation results show that the maximum horizontal movement of the walls and the maximum settlement of the ground are consistent with 300 individual case histories, ranging between 1.2 mm and 2.3 mm for the walls and between 6.5 mm and 15 mm for the settlements. It was concluded that a tied-back contiguous pile wall can be satisfactorily modelled using the Hardening Soil model. Keywords: deep excavation, finite element, pre-stressed tie back anchors, contiguous pile wall, PLAXIS, horizontal deflection, ground settlement
Procedia PDF Downloads 254492 Design and Field Programmable Gate Array Implementation of Radio Frequency Identification for Boosting up Tag Data Processing
Authors: G. Rajeshwari, V. D. M. Jabez Daniel
Abstract:
Radio Frequency Identification (RFID) systems are used for automated identification in various applications such as automobiles, health care, and security. RFID is also called automated data collection technology. RFID readers are placed in an area to scan large numbers of tags over a wide distance. The placement of the RFID elements may result in several types of collisions, and a major challenge in RFID systems is collision avoidance. In previous works, collisions were avoided by using algorithms such as ALOHA and the tree algorithm. This work proposes collision reduction and increased throughput through a reading enhancement method with the tree algorithm. The reading enhancement is achieved by improving the interrogation procedure and increasing the data handling capacity of the RFID reader with parallel processing. The work is simulated using Xilinx ISE 14.5 and the Verilog language. By implementing this in the RFID system, we are able to achieve high throughput and avoid collisions in the reader at the same instant of time, increasing the overall system efficiency. Keywords: antenna, anti-collision protocols, data management system, reader, reading enhancement, tag
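A tree-based anti-collision scheme of the kind referred to can be sketched in software. The query tree variant below is a hedged illustration, not the paper's FPGA design: the reader broadcasts a bit-string prefix, tags whose ID begins with it reply, and a collision splits the query into two longer prefixes.

```python
# Sketch of a query tree anti-collision protocol: each query is one reader
# round; a collision (more than one reply) splits the prefix into two.

def query_tree_inventory(tag_ids, id_bits=8):
    identified, queries = [], [""]
    rounds = 0
    while queries:
        prefix = queries.pop()
        rounds += 1
        replies = [t for t in tag_ids if t.startswith(prefix)]
        if len(replies) == 1:
            identified.append(replies[0])       # singleton reply: tag read OK
        elif len(replies) > 1:
            if len(prefix) == id_bits:
                identified.extend(replies)      # duplicate IDs (should not occur)
            else:
                queries += [prefix + "0", prefix + "1"]  # collision: split query
    return identified, rounds

tags = ["00101101", "00101100", "11010010", "01110001"]
found, rounds = query_tree_inventory(tags)
```

Every tag is eventually isolated under a prefix only it matches, so the inventory is complete; the round count grows with the number of colliding prefixes, which is what tree-protocol optimizations try to reduce.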
Procedia PDF Downloads 306491 Disease Trajectories in Relation to Poor Sleep Health in the UK Biobank
Authors: Jiajia Peng, Jianqing Qiu, Jianjun Ren, Yu Zhao
Abstract:
Background: Insufficient sleep has come into focus as a public health epidemic. However, a comprehensive analysis of the disease trajectories associated with unhealthy sleep habits is still lacking. Objective: This study sought to comprehensively clarify the disease trajectories in relation to the overall poor sleep pattern and to each unhealthy sleep behavior separately. Methods: 410,682 participants with available information on sleep behaviors were collected from the UK Biobank at the baseline visit (2006-2010). These participants were classified as having high or low risk of each sleep behavior and were followed from 2006 to 2020 to identify the increased risks of diseases. We used Cox regression to estimate the associations of high-risk sleep behaviors with elevated risks of diseases, and further established disease trajectories using the significant diseases. The low-risk unhealthy sleep behaviors were defined as the reference. Thereafter, we also examined the trajectory of diseases linked with the overall poor sleep pattern by combining all of these unhealthy sleep behaviors. Network analysis was used to visualize and present these trajectories. Results: During a median follow-up of 12.2 years, we noted 12 medical conditions in relation to unhealthy sleep behaviors and the overall poor sleep pattern among 410,682 participants with a median age of 58.0 years. The majority of participants had unhealthy sleep behaviors; in particular, 75.62% reported frequent sleeplessness and 72.12% had abnormal sleep durations. Besides, a total of 16,032 individuals with an overall poor sleep pattern were identified. In general, three major disease clusters were associated with overall poor sleep status and unhealthy sleep behaviors according to the disease trajectory and network analyses, mainly in the digestive, musculoskeletal and connective tissue, and cardiometabolic systems. Of note, two circulatory disease pairs (I25→I20 and I48→I50) showed the highest risks following these unhealthy sleep habits. Additionally, significant differences in disease trajectories were observed in relation to sex and sleep medication among individuals with poor sleep status. Conclusions: We identified the major disease clusters and high-risk diseases following participants with overall poor sleep health and unhealthy sleep behaviors, respectively. These findings may suggest the need to investigate potential interventions targeting these key pathways. Keywords: sleep, poor sleep, unhealthy sleep behaviors, disease trajectory, UK Biobank
Procedia PDF Downloads 92490 Implementation of Algorithm K-Means for Grouping District/City in Central Java Based on Macro Economic Indicators
Authors: Nur Aziza Luxfiati
Abstract:
Clustering partitions a data set into sub-sets or groups such that elements within one group share properties with a high level of similarity, while the similarity between groups is low. The K-Means algorithm is one of the clustering algorithms most widely used in scientific and industrial applications because its basic idea is very simple. In this research, the k-means clustering technique is applied as a method of addressing the problem of national development imbalances between regions in Central Java Province based on macroeconomic indicators. The data sample used is secondary data obtained from the Central Java Provincial Statistics Agency regarding macroeconomic indicators, part of the publication of the 2019 National Socio-Economic Survey (Susenas) data. The data were normalized using the z-score, and the number of clusters (k) was determined using the elbow method. After the clustering process was carried out, the result was validated using the Between-Class Variation (BCV) and Within-Class Variation (WCV) methods. The results showed that outlier detection using z-score normalization found no outliers. In addition, the clustering test obtained a ratio value that was not high, namely 0.011%. There are two district/city clusters in Central Java Province with similar economies based on the variables used: the first cluster, with a high economic level, consists of 13 districts/cities, and the second cluster, with a low economic level, consists of 22 districts/cities. Within the second, low-economy cluster, the authors further grouped districts/cities by similarity of macroeconomic indicators: 20 districts by Gross Regional Domestic Product, 19 districts by Poverty Depth Index, 5 districts by Human Development, and 10 districts by Open Unemployment Rate. Keywords: clustering, K-Means algorithm, macroeconomic indicators, inequality, national development
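The pipeline described, z-score normalization, k-means, and an elbow-style comparison of within-cluster sums of squares (WCSS), can be sketched as follows. The indicator values are synthetic stand-ins for the Susenas data, and the 13/22 split is reproduced only by construction of the toy data.

```python
# Sketch of z-score normalization + k-means + elbow comparison (toy data).
import math, random

def zscore(col):
    mu = sum(col) / len(col)
    sd = math.sqrt(sum((x - mu) ** 2 for x in col) / len(col)) or 1.0
    return [(x - mu) / sd for x in col]

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                      # assign to nearest center
            j = min(range(k), key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        centers = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    wcss = sum(sum((a - b) ** 2 for a, b in zip(p, centers[j]))
               for j, cl in enumerate(clusters) for p in cl)
    return clusters, wcss

# synthetic high- and low-economy groups in (GRDP, poverty-index) space
random.seed(7)
raw = ([(random.gauss(90, 5), random.gauss(5, 1)) for _ in range(13)]
       + [(random.gauss(40, 5), random.gauss(12, 1)) for _ in range(22)])
cols = [zscore(list(c)) for c in zip(*raw)]
data = [tuple(v) for v in zip(*cols)]

# "elbow": compare WCSS for k = 1 and k = 2, best of a few random restarts
wcss1 = min(kmeans(data, 1, seed=s)[1] for s in range(8))
best_clusters, wcss2 = min((kmeans(data, 2, seed=s) for s in range(8)),
                           key=lambda r: r[1])
```

The sharp drop in WCSS from k = 1 to k = 2 is the elbow signal that two clusters fit these data well.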
Procedia PDF Downloads 158489 Effect of Crown Gall and Phylloxera Resistant Rootstocks on Grafted Vitis Vinifera CV. Sultana Grapevine
Authors: Hassan Mahmoudzadeh
Abstract:
The bacterium Agrobacterium vitis causes crown and root gall disease, an important disease of the grapevine, Vitis vinifera L. Phylloxera is also one of the most important pests in viticulture. Grapevine rootstocks were developed to provide increased resistance to soil-borne pests and diseases, but rootstock effects on some traits remain unclear. The interaction between rootstock, scion, and environment can induce different responses in grapevine physiology. 'Sultana' (Vitis vinifera L.) is one of the most valuable raisin grape cultivars in Iran. Thus, the aim of this study was to determine the rootstock effect on the growth characteristics, yield components, and quality of the 'Sultana' grapevine grown in the Urmia viticulture region. The experimental design was randomized complete blocks, with four treatments, four replicates, and 10 vines per plot. The results show that all evaluated variables were significantly affected by the rootstock. Among the combinations, Sultana/110R and Sultana/Nazmieh were influenced by the year and had significantly higher yields per vine (13.25 and 12.14 kg/vine, respectively). Indeed, these were higher than those of Sultana/5BB (10.56 kg/vine) and Sultana/Spota (10.25 kg/vine). The number of clusters per burst bud and per vine and the weight of clusters were affected by the rootstock as well. Pruning weight per vine, yield per pruning weight, leaf area per vine, and leaf area index are variables related to the physiology of the grapevine, and these were also affected by the rootstocks. In general, the rootstocks had adapted well to the environment where the experiment was carried out, giving vigor and high yield to the Sultana grapevine, which means that they may be used by grape growers in this region. In sum, the study found the best rootstocks for 'Sultana' to be Nazmieh and 110R in terms of root and shoot growth. However, the choice of the right rootstock depends on various aspects, such as those related to soil characteristics, climate conditions, grape varieties and even clones, and production purposes. Keywords: grafting, vineyards, grapevine, susceptibility
Procedia PDF Downloads 125488 Cleaning of Scientific References in Large Patent Databases Using Rule-Based Scoring and Clustering
Authors: Emiel Caron
Abstract:
Patent databases contain patent-related data, organized in a relational data model, and are used to produce various patent statistics. These databases store raw data about the scientific references cited by patents. For example, Patstat holds references to tens of millions of scientific journal publications and conference proceedings. These references might be used to connect patent databases with bibliographic databases, e.g., to study the relation between science, technology, and innovation in various domains. Problematic in such studies is the low data quality of the references, i.e., they are often ambiguous, unstructured, and incomplete. Moreover, a complete bibliographic reference is stored in only one attribute. Therefore, a computerized cleaning and disambiguation method for large patent databases is developed in this work. The method uses rule-based scoring and clustering. The rules are based on bibliographic metadata, retrieved from the raw data by regular expressions, and are transparent and adaptable. The rules, in combination with string similarity measures, are used to detect pairs of records that are potential duplicates. Due to the scoring, different rules can be combined to join scientific references, i.e., the rules reinforce each other. The scores are based on expert knowledge and initial method evaluation. After the scoring, pairs of scientific references that score above a certain threshold are clustered by means of a single-linkage clustering algorithm to form connected components. The method is designed to disambiguate all the scientific references in the Patstat database. The performance evaluation of the clustering method, on a large golden set with highly cited papers, shows on average a 99% precision and a 95% recall. The method is therefore accurate but careful, i.e., it weights precision over recall. Consequently, separate clusters of high precision are sometimes formed when there is not enough evidence for connecting scientific references, e.g., in the case of missing year and journal information for a reference. The clusters produced by the method can be used to directly link the Patstat database with bibliographic databases such as the Web of Science or Scopus. Keywords: clustering, data cleaning, data disambiguation, data mining, patent analysis, scientometrics
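The scoring-plus-clustering idea can be sketched on toy reference strings. The rules and score weights below are hypothetical stand-ins for the calibrated rules described above; connected components are formed with a union-find, which realizes the single-linkage step.

```python
# Toy sketch: regex rules extract metadata, rules + string similarity score
# each pair, and pairs above a threshold are merged into connected components.
import re
from difflib import SequenceMatcher

def score(a, b):
    s = 0
    ya, yb = (re.search(r"\b(19|20)\d{2}\b", x) for x in (a, b))
    if ya and yb and ya.group() == yb.group():
        s += 30                                   # rule: same publication year
    va, vb = (re.search(r"\b(\d+)\s*\(", x) for x in (a, b))
    if va and vb and va.group(1) == vb.group(1):
        s += 20                                   # rule: same volume number
    s += int(50 * SequenceMatcher(None, a.lower(), b.lower()).ratio())
    return s                                      # rules reinforce each other

def cluster(refs, threshold=70):
    parent = list(range(len(refs)))               # union-find forest
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(refs)):
        for j in range(i + 1, len(refs)):
            if score(refs[i], refs[j]) >= threshold:
                parent[find(i)] = find(j)         # single-linkage merge
    groups = {}
    for i in range(len(refs)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

refs = [
    "Smith J, Nature 171 (4356), 737-738, 1953",
    "SMITH, J., nature, vol. 171(4356), pp. 737-8 (1953)",
    "Jones A, J. Appl. Phys. 98 (2), 2005",
]
parts = cluster(refs)
```

The two variant renderings of the same reference share year, volume, and most of their text, so they exceed the threshold and join one component; the unrelated reference stays in its own cluster.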
Procedia PDF Downloads 194487 Enhancing the Performance of Bug Reporting System by Handling Duplicate Reporting Reports: Artificial Intelligence Based Mantis
Authors: Afshan Saad, Muhammad Saad, Shah Muhammad Emaduddin
Abstract:
Bug reporting systems are among the most important tools guiding maintenance activities in software engineering. Duplicate bug reports, which describe bugs and issues already in the bug reporting system repository, increase the processing time of the bug triager, who monitors all such activities, and of the software programmers who spend time on the reports assigned by the triager. These reports can reveal imperfections and degrade software quality. As the number of potential duplicate bug reports increases, the number of bug reports in the bug repository increases. Identifying duplicate bug reports helps decrease the development workload for fixing defects. However, it is difficult to manually identify all possible duplicates because of the huge number of already reported bugs. In this paper, an artificial-intelligence-based system using Mantis is proposed to automatically detect duplicate bug reports. When new bugs are submitted to the repository, the triager marks each with a tag and investigates whether it matches an existing bug report as a duplicate. Reports with duplicate tags are eliminated from the repository, which not only improves the performance of the system but also saves the cost and effort wasted on bug triage and on finding the duplicate bug. Keywords: bug tracking, triager, tool, quality assurance
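A minimal sketch of the duplicate check may clarify the workflow. The similarity measure (word-level Jaccard) and the threshold are illustrative assumptions, not the paper's trained model, and the Mantis integration is omitted.

```python
# Sketch: an incoming report is compared against the repository and tagged
# "duplicate" if it is too similar to an existing report, else stored as new.
import re

STOP = {"the", "a", "an", "is", "on", "in", "when", "to", "and", "of"}

def tokens(text):
    return {w for w in re.findall(r"[a-z0-9]+", text.lower())} - STOP

def jaccard(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def triage(new_report, repository, threshold=0.5):
    best = max(repository, key=lambda r: jaccard(new_report, r), default=None)
    if best is not None and jaccard(new_report, best) >= threshold:
        return "duplicate", best          # tagged and kept out of the repository
    repository.append(new_report)
    return "new", new_report

repo = ["App crashes when saving file to network drive",
        "Login button unresponsive on mobile layout"]
tag1, match = triage("Application crash when saving a file to network drive", repo)
tag2, _ = triage("Printer driver fails to install on Windows", repo)
```

The near-duplicate report is flagged against its closest existing report, while the genuinely new one is appended; in practice, TF-IDF or learned embeddings would replace the bare Jaccard score.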
Procedia PDF Downloads 193486 Hierarchical Clustering Algorithms in Data Mining
Authors: Z. Abdullah, A. R. Hamdan
Abstract:
Clustering is a process of grouping objects and data into clusters so that data objects in the same cluster are similar to each other. Clustering is one of the areas in data mining, and its algorithms can be classified into partition-based, hierarchical, density-based, and grid-based. In this paper, we survey and review four major hierarchical clustering algorithms: CURE, ROCK, CHAMELEON, and BIRCH. The obtained state of the art of these algorithms will help in eliminating the current problems, as well as in deriving more robust and scalable clustering algorithms. Keywords: clustering, unsupervised learning, algorithms, hierarchical
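As a generic illustration of the hierarchical family these algorithms belong to (far simpler than CURE, ROCK, CHAMELEON, or BIRCH), a plain single-linkage agglomeration can be written in a few lines: repeatedly merge the two closest clusters until the requested number remains.

```python
# Sketch of bottom-up (agglomerative) hierarchical clustering with the
# single-linkage distance: the closest pair of points across two clusters.

def single_linkage(points, k):
    clusters = [[p] for p in points]           # start from singletons
    def dist(c1, c2):
        return min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
                   for p in c1 for q in c2)
    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)         # merge the closest pair
    return clusters

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (25, 0)]
result = single_linkage(pts, 3)
```

The naive O(n³) merge loop is exactly what algorithms like BIRCH and CURE improve on with summaries and sampling to reach large data sets.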
Procedia PDF Downloads 885485 Numerical Simulation of Hydraulic Fracture Propagation in Marine-continental Transitional Tight Sandstone Reservoirs by Boundary Element Method: A Case Study of Shanxi Formation in China
Authors: Jiujie Cai, Fengxia LI, Haibo Wang
Abstract:
After years of research, offshore oil and gas development has now shifted to unconventional reservoirs, where multi-stage hydraulic fracturing technology is widely used. However, the simulation of complex hydraulic fractures in tight reservoirs faces geological and engineering difficulties, such as large burial depths, sand-shale interbeds, and complex stress barriers. The objective of this work is to simulate hydraulic fracture propagation in the tight sandstone matrix of marine-continental transitional reservoirs, with the Shanxi Formation in the Tianhuan syncline of the Dongsheng gas field as the research target. The characteristic parameters of the vertical rock samples with rich beddings were clarified through rock mechanics experiments. The influence of rock mechanical parameters, the vertical stress difference between the pay zone and the bedding layer, and fracturing parameters (such as injection rates, fracturing fluid viscosity, and the number of perforation clusters within a single stage) on fracture initiation and propagation was investigated. In this paper, a 3-D fracture propagation model was built to investigate the complex fracture propagation morphology by the boundary element method, considering the strength of the bonding surface between layers, the vertical stress difference, and the fracturing parameters (such as injection rates, fluid volume, and viscosity). The research results indicate that, at a vertical stress difference of 3 MPa, the fracture height can break through and enter the upper interlayer when the thickness of the overlying bedding layer is 6-9 m, considering the effect of the weak bonding surface between layers. The fracture propagates within the pay zone when the overlying interlayer is greater than 13 m. The difference in fluid volume distribution between clusters can be more than 20% when the stress difference between the clusters in a stage exceeds 2 MPa. Fracture clusters in high-stress zones cannot initiate when the stress difference in the stage exceeds 5 MPa. The simulated fracture heights are much larger if the effect of the weak bonding surface between layers is not included. Increasing the injection rate, increasing the fracturing fluid viscosity, and reducing the number of clusters within a single stage can promote fracture height propagation through the layers. Optimizing the perforation position and reducing the number of perforations can promote the uniform expansion of fractures. Typical curves for fracture height estimation were established for the tight sandstone of the Lower Permian Shanxi Formation. The model results show good consistency with the micro-seismic monitoring results of hydraulic fracturing in Well 1HF. Keywords: fracture propagation, boundary element method, fracture height, offshore oil and gas, marine-continental transitional reservoirs, rock mechanics experiment
Procedia PDF Downloads 127484 Diet-Induced Epigenetic Transgenerational Inheritance
Authors: Gaby Fahmy
Abstract:
The last decades have seen a rise in metabolic disorders like diabetes, obesity, and fatty liver disease around the world. Environmental factors, especially nutrition, have contributed to this increase. Additionally, pre-conceptional parental nutritional choices have been shown to result in epigenetic modifications affecting gene expression during the developmental process in utero. These epigenetic modifications have also been seen to extend to the following offspring in a transgenerational effect. This further highlights the significance and relevance of epigenetics and epigenetic tags, which were previously thought to be stripped in newly formed embryos. Suitable prenatal nutrition may partially counteract adverse outcomes caused by exposure to environmental contaminants, ultimately resulting in improved metabolic profiles such as body weight and glucose homeostasis. This was seen in patients who were given dietary interventions like restricted caloric intake, intermittent fasting, and time-restricted feeding. Changes in nutrition are pivotal in the regulation of epigenetic modifications that are transgenerational. For example, dietary choices in fathers, such as fatty foods vs. vegetables and nuts, were shown to significantly affect sperm motility and volume. This was pivotal in understanding the importance of paternal inheritance. Further research in the field is needed, as it remains unclear how many generations are affected by these changes. Keywords: epigenetics, transgenerational, diet, fasting
Procedia PDF Downloads 96483 Growth of Droplet in Radiation-Induced Plasma of Own Vapour
Authors: P. Selyshchev
Abstract:
A theoretical approach is developed to describe the change of drops in an atmosphere of their own vapour and a buffer gas under irradiation. It is shown that irradiation influences the size of the stable droplet and the conditions under which the droplet exists. Under irradiation, the evolution of a drop becomes more complex: non-monotone and periodic changes of the drop size become possible. All possible solutions are represented by means of phase portraits. All qualitatively different phase portraits are found as functions of the critical parameters: the rate of cluster generation and the substance density. Keywords: irradiation, steam, plasma, cluster formation, liquid droplets, evolution
Procedia PDF Downloads 440