Search results for: double layered fuzzy graph
1797 Design and Simulation of a Double-Stator Linear Induction Machine with Short Squirrel-Cage Mover
Authors: David Rafetseder, Walter Bauer, Florian Poltschak, Wolfgang Amrhein
Abstract:
A flat double-stator linear induction machine (DSLIM) with a short squirrel-cage mover is designed for high thrust force at moderate speed (< 5 m/s). The performance and motor parameters are determined on the basis of a 2D time-transient simulation with the finite element (FE) software Maxwell 2015. Design guidelines and transformation rules for space vector theory of the LIM are presented. Resulting thrust calculated by flux and current vectors is compared with the FE results, showing good coherence and reduced noise. The parameters of the equivalent circuit model are obtained.
Keywords: equivalent circuit model, finite element model, linear induction motor, space vector theory
Procedia PDF Downloads 565
1796 Parameter Estimation for Contact Tracing in Graph-Based Models
Authors: Augustine Okolie, Johannes Müller, Mirjam Kretzchmar
Abstract:
We adopt a maximum-likelihood framework to estimate parameters of a stochastic susceptible-infected-recovered (SIR) model with contact tracing on a rooted random tree. Given the number of detectees per index case, our estimator allows us to determine the degree distribution of the random tree as well as the tracing probability. Since we do not discover all infectees via contact tracing, this estimation is non-trivial. To keep things simple and stable, we develop an approximation suited for realistic situations (contact tracing probability small, or the probability for the detection of index cases small). In this approximation, the only epidemiological parameter entering the estimator is the basic reproduction number R0. The estimator is tested in a simulation study and applied to COVID-19 contact tracing data from India. The simulation study underlines the efficiency of the method. For the empirical COVID-19 data, we are able to compare different degree distributions and perform a sensitivity analysis. We find that particularly a power-law and a negative binomial degree distribution meet the data well and that the tracing probability is rather large. The sensitivity analysis shows no strong dependency on the reproduction number.
Keywords: stochastic SIR model on graph, contact tracing, branching process, parameter inference
Procedia PDF Downloads 75
1795 Language Development and Growing Spanning Trees in Children Semantic Network
Authors: Somayeh Sadat Hashemi Kamangar, Fatemeh Bakouie, Shahriar Gharibzadeh
Abstract:
In this study, we aim to exploit Maximum Spanning Trees (MST) of children's semantic networks to investigate their language development. To do so, we examine the graph-theoretic properties of word-embedding networks. The nodes of these networks are the words children normatively acquire prior to the age of 30 months, and the links are built from the cosine vector similarity of these words. These networks are weighted graphs, and the strength of each link is determined by the numerical similarity of the two words (nodes) on the sides of the link. To avoid converting the weighted networks to binary ones by setting a threshold, constructing MSTs presents a solution. An MST is a unique sub-graph that connects all the nodes in such a way that the sum of all the link weights is maximized without forming cycles. MSTs, as the backbone of the semantic networks, are suitable for examining developmental changes in semantic network topology in children. From these trees, several parameters were calculated to characterize the developmental change in network organization. We showed that MSTs provide an elegant method sensitive enough to capture subtle developmental changes in semantic network organization.
Keywords: maximum spanning trees, word-embedding, semantic networks, language development
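A minimal sketch of the MST construction described in this abstract, using hypothetical word vectors as stand-ins for embeddings of normatively acquired words and the networkx library (an assumed tooling choice, not the authors'):

```python
# Minimal sketch: build a cosine-similarity word graph and extract its
# maximum spanning tree (MST). The vocabulary and vectors are illustrative.
import numpy as np
import networkx as nx

vocab = ["dog", "cat", "ball", "milk"]                  # hypothetical child vocabulary
vectors = np.random.default_rng(0).normal(size=(len(vocab), 50))

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

G = nx.Graph()
for i in range(len(vocab)):
    for j in range(i + 1, len(vocab)):
        G.add_edge(vocab[i], vocab[j], weight=cosine(vectors[i], vectors[j]))

# MST that maximizes the sum of link weights without forming cycles
mst = nx.maximum_spanning_tree(G, weight="weight")

# A few graph-theoretic parameters characterizing network organization
print("MST edges:", sorted(mst.edges(data="weight")))
print("average degree:", 2 * mst.number_of_edges() / mst.number_of_nodes())
```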
Procedia PDF Downloads 145
1794 Patient Satisfaction Measurement Using Face-Q for Non-Incisional Double-Eyelid Blepharoplasty with Modified Single-Knot Continuous Buried Suture Technique
Authors: Kwei Huan Liw, Sashi B. Darshan
Abstract:
Background: Double eyelid surgery has become one of the most sought-after aesthetic procedures among Asians. Many surgeons perform surgical blepharoplasty and various other methods of non-incisional blepharoplasty. Face-Q is a validated method of measuring patient satisfaction for facial aesthetic procedures. Here we have analyzed the overall eye satisfaction score, the upper eyelid appraisal score and the adverse effect on eyes score. Methods: 274 patients (548 eyes), aged between 18 and 40 years old, were recruited from 2015-2018. Each patient underwent a non-incisional double-eyelid blepharoplasty using a single-knotted continuous buried suture. 3 to 5 stab incisions were made depending on the upper eyelid size. A needle loaded with 7-0 nylon is passed from the lateral-most wound through the dermis and the conjunctiva in an alternating fashion into the remaining stab wounds. The suture is then tunneled back laterally in the deeper dermis and knotted securely with the suture end. The knot is then buried within the orbicularis oculi muscle. Each patient was required to fill in the Face-Q questionnaire before the procedure and 2 weeks post-procedure. The results are described based on the percentage of the maximum achievable score. Patients were reviewed after 12 to 18 months to assess the long-term outcome. Results: The overall eye satisfaction score demonstrated a high level of post-operative satisfaction (97.85%), compared to 27.32% pre-operatively. The appraisal of upper eyelid scores showed drastic improvement in perception post-operatively (95.31%) compared to 21.44% pre-operatively. The adverse effect on eyes score showed a very low post-operative complication rate (0.4%). The long-term follow-up showed 6 cases that had developed asymmetrical folds. Only 1 patient agreed to revision surgery. The other 5 patients were still satisfied with the outcome and were not keen on revision surgery. None of the cases had loosening of knots. Conclusion: The modified single-knot continuous buried suture technique is a simple and non-invasive method to create aesthetically pleasing non-surgical double-eyelids, with long-term effects. Proper patient selection is crucial, and good surgical technique is required to achieve a desirable outcome.
Keywords: blepharoplasty, double-eyelid, face-Q, non-incisional
Procedia PDF Downloads 119
1793 Game Structure and Spatio-Temporal Action Detection in Soccer Using Graphs and 3D Convolutional Networks
Authors: Jérémie Ochin
Abstract:
Soccer analytics are built on two data sources: the frame-by-frame position of each player on the terrain and the sequences of events, such as ball drive, pass, cross, shot, throw-in... With more than 2000 ball-events per soccer game, their precise and exhaustive annotation, based on a monocular video stream such as a TV broadcast, remains a tedious and costly manual task. State-of-the-art methods for spatio-temporal action detection from a monocular video stream, often based on 3D convolutional neural networks, are close to reaching levels of performance in mean Average Precision (mAP) compatible with the automation of such a task. Nevertheless, to meet the expectation of exhaustiveness in the context of data analytics, such methods must be applied in a regime of high recall and low precision, using low confidence score thresholds. This setting unavoidably leads to the detection of false positives that are the product of the well-documented overconfidence behaviour of neural networks and, in this case, their limited access to contextual information and understanding of the game: their predictions are highly unstructured. Based on the assumption that professional soccer players' behaviour, pose, positions and velocity are highly interrelated and locally driven by the player performing a ball-action, it is hypothesized that adding information regarding surrounding players' appearance, positions and velocity to the prediction methods can improve their metrics. Several methods are compared to build a proper representation of the game surrounding a player, from handcrafted features of the local graph, based on domain knowledge, to the use of Graph Neural Networks trained in an end-to-end fashion with existing state-of-the-art 3D convolutional neural networks. It is shown that the inclusion of information regarding surrounding players helps reach higher metrics.
Keywords: fine-grained action recognition, human action recognition, convolutional neural networks, graph neural networks, spatio-temporal action recognition
Procedia PDF Downloads 22
1792 Optimizing Boiler Combustion System in a Petrochemical Plant Using Neuro-Fuzzy Inference System and Genetic Algorithm
Authors: Yul Y. Nazaruddin, Anas Y. Widiaribowo, Satriyo Nugroho
Abstract:
The boiler is one of the critical units in a petrochemical plant. Steam produced by the boiler is used for various processes in the plant, such as the urea and ammonia plants. An alternative method to optimize the boiler combustion system is presented in this paper. An Adaptive Neuro-Fuzzy Inference System (ANFIS) approach is applied to model the boiler using real-time operational data collected from a boiler unit of the petrochemical plant. The nonlinear equation obtained is then used to optimize the air-to-fuel ratio using a genetic algorithm, resulting in an optimal ratio of 15.85. This optimal ratio is then maintained constant by a ratio controller designed using inverse dynamics based on ANFIS. As a result, a constant value of oxygen content in the flue gas is obtained, which indicates a more efficient combustion process.
Keywords: ANFIS, boiler, combustion process, genetic algorithm, optimization
Procedia PDF Downloads 249
1791 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning
Authors: Akeel A. Shah, Tong Zhang
Abstract:
Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, often involving thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions - many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT). The general strategy is to learn a map between the low and high fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and the correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map, and furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs the result can be an order of magnitude or more in speed up. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows for the material or molecule to be represented as a graph, which is known to improve accuracy, for example SchNet and MEGNET. The graph incorporates information regarding the numbers, types and properties of atoms; the types of bonds; and bond angles. The key to the accuracy in multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low and high fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment and HOMO/LUMO.
Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning
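A minimal sketch of the Δ-ML idea described in this abstract, under stated assumptions: the data are synthetic and a scikit-learn random forest stands in for the GCN; only the correction between low- and high-fidelity outputs is learned.

```python
# Minimal sketch of Delta-ML: learn the correction between a cheap low-fidelity
# output and an expensive high-fidelity one. Synthetic data and a random-forest
# regressor stand in for real descriptors and the paper's GCN.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 8))            # toy molecular descriptors
low = X.sum(axis=1)                              # "Hartree-Fock-like" output
high = low + 0.3 * np.sin(3 * X[:, 0]) + 0.1     # "coupled-cluster-like" output

# Only a small subset has high-fidelity labels (the expensive calculations)
idx = rng.choice(len(X), size=50, replace=False)
corrector = RandomForestRegressor(n_estimators=200, random_state=0)
corrector.fit(np.column_stack([X[idx], low[idx]]), high[idx] - low[idx])

# Prediction = cheap low-fidelity value + learned correction
pred = low + corrector.predict(np.column_stack([X, low]))
print("mean absolute error:", np.abs(pred - high).mean())
```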
Procedia PDF Downloads 37
1790 A Finite Element Analysis of Hexagonal Double-Arrowhead Auxetic Structure with Enhanced Energy Absorption Characteristics and Stiffness
Abstract:
Auxetic materials, as an emerging class of artificially designed metamaterials, have attracted growing attention due to their promising negative Poisson's ratio behaviors and tunable properties. Conventional auxetic lattice structures, for which the deformation process is governed by a bending-dominated mechanism, have faced the limitation of poor mechanical performance for many potential engineering applications. Recently, both load-bearing and energy absorption capabilities have become a crucial consideration in auxetic structure design. This study reports the finite element analysis of a class of hexagonal double-arrowhead auxetic structures with enhanced stiffness and energy absorption performance. The structure design was developed by extending the traditional double-arrowhead honeycomb to a hexagon frame; the stretching-dominated deformation mechanism was determined according to Maxwell's stability criterion. The finite element (FE) models of 2D lattice structures established with stainless steel material were analyzed in ABAQUS/Standard to predict the in-plane structural deformation mechanism, failure process, and compressive elastic properties. Based on the computational simulation, a parametric analysis was carried out to investigate the effect of the structural parameters on Poisson's ratio and mechanical properties. Geometrical optimization was then implemented to achieve the optimal Poisson's ratio for the maximum specific energy absorption. In addition, the optimized 2D lattice structure was correspondingly converted into a 3D geometry configuration by using the orthogonal splicing method. The numerical results of the 2D and 3D structures under compressive quasi-static loading conditions were compared separately with the traditional double-arrowhead re-entrant honeycomb in terms of specific Young's moduli, Poisson's ratios, and specific energy absorption. As a result, the energy absorption capability and stiffness are significantly reinforced over a wide range of Poisson's ratio compared to the traditional double-arrowhead re-entrant honeycomb. The auxetic behavior, energy absorption capability, and yield strength of the proposed structure are adjustable with different combinations of joint angle, strut thickness, and the length-width ratio of the representative unit cell. The numerical prediction in this study suggests that the proposed hexagonal double-arrowhead structure could be a suitable candidate for energy absorption applications with a concurrent requirement for load-bearing capacity. For future research, experimental analysis is required for validation of the numerical simulation.
Keywords: auxetic, energy absorption capacity, finite element analysis, negative Poisson's ratio, re-entrant hexagonal honeycomb
Procedia PDF Downloads 86
1789 Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs
Authors: Michela Quadrini
Abstract:
Chord diagrams occur in mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite type invariants, whereas in molecular biology one important motivation to study chord diagrams is to deal with the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, that consists of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for one Watson-Crick base pair interaction between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram. It consists of a line segment, called its backbone, to which are attached a number of chords with distinct endpoints. There is a natural fattening on any linear chord diagram; the backbone lies on the real axis, while all the chords are in the upper half-plane. Each linear chord diagram has a natural genus of its associated surface. To each chord diagram and linear chord diagram, it is possible to associate an intersection graph. It consists of a graph whose vertices correspond to the chords of the diagram, whereas the chord intersections are represented by a connection between the vertices. Such an intersection graph carries a lot of information about the diagram. Our goal is to define an LCD equivalence class in terms of identity of intersection graphs, on which many chord diagram invariants depend. For studying these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that permits modelling LCDs in terms of the relations among chords. Such a set is composed of crossing, nesting, and concatenation. The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar, able to uniquely generate each LCD starting from a linear chord diagram by adding a chord for each production of the grammar, is defined. In other words, it allows us to associate a unique algebraic term to each linear chord diagram, while the remaining operators allow rewriting the term through a set of appropriate rewriting rules. Such rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modelled RNA molecule and the linear chord diagram, some authors proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem, leading to the definition of an algorithm that calculates the free energy of the molecule more accurately with respect to the existing ones. Such an LCD equivalence class could be useful to obtain a more accurate estimate of the link between the crossing number and the topological genus and to study the relations among other invariants.
Keywords: chord diagrams, linear chord diagram, equivalence class, topological language
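As an illustration of the intersection-graph construction described above, a minimal sketch with a purely hypothetical diagram: chords are pairs of distinct endpoint positions on the backbone, and two chords are adjacent in the intersection graph exactly when they cross (i.e., their endpoints interleave).

```python
# Minimal sketch: intersection graph of a linear chord diagram.
# Each chord is a pair of distinct endpoint positions on the backbone;
# two chords cross iff their endpoints interleave. Illustrative diagram only.
chords = [(1, 5), (2, 8), (3, 4), (6, 9)]        # hypothetical LCD

def crosses(c1, c2):
    (a, b), (c, d) = sorted(c1), sorted(c2)
    return a < c < b < d or c < a < d < b        # endpoints interleave

# Vertices = chords; edges = crossing relations
intersection_graph = {i: set() for i in range(len(chords))}
for i in range(len(chords)):
    for j in range(i + 1, len(chords)):
        if crosses(chords[i], chords[j]):
            intersection_graph[i].add(j)
            intersection_graph[j].add(i)

print(intersection_graph)   # {0: {1}, 1: {0, 3}, 2: set(), 3: {1}}
```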
Procedia PDF Downloads 201
1788 Comparison of Unit Hydrograph Models to Simulate Flood Events at the Field Scale
Authors: Imene Skhakhfa, Lahbaci Ouerdachi
Abstract:
To ensure the overall coherence of simulated results, it is necessary to develop a robust validation process. In many applications, it is no longer sufficient to calibrate and validate the model only in relation to the hydrograph measured at the outlet; we also try to better simulate the functioning of the watershed in space. Therefore, the simulation is also assessed against other variables, such as water level measurements at intermediate stations or groundwater levels. As part of this work, we limit ourselves to modeling floods of short duration, for which the process of evapotranspiration is negligible. The main parameters to identify in the models are related to the unit hydrograph (UH) method. Three different models were tested: Snyder, Clark and SCS. These models differ in their mathematical structure and in the parameters to be calibrated, while the hydrological data are the same: the initial water content and precipitation. The models are compared on the basis of their performance in terms of six objective criteria, three global criteria and three criteria representing volume, peak flow, and the mean square error. The first type of criteria gives more weight to strong events, whereas the second considers all events to be of equal weight. The results show that the calibrated parameter values are dependent and also highlight the problems associated with the simulation of low flow events and intermittent precipitation.
Keywords: model calibration, intensity, runoff, hydrograph
Procedia PDF Downloads 484
1787 An Insight into the Probabilistic Assessment of Reserves in Conventional Reservoirs
Authors: Sai Sudarshan, Harsh Vyas, Riddhiman Sherlekar
Abstract:
The oil and gas industry has been unwilling to adopt a stochastic definition of reserves. Nevertheless, Monte Carlo simulation methods have gained acceptance by engineers, geoscientists and other professionals who want to evaluate prospects or otherwise analyze problems that involve uncertainty. One of the common applications of Monte Carlo simulation is the estimation of recoverable hydrocarbon from a reservoir. Monte Carlo simulation makes use of random samples of parameters or inputs to explore the behavior of a complex system or process. It finds application whenever one needs to make an estimate, forecast or decision where there is significant uncertainty. First, the project focuses on performing Monte Carlo simulation on a given data set using the U.S. Department of Energy's MonteCarlo software, which is a freeware E&P tool. Further, an algorithm for simulation has been developed in MATLAB, and the program performs the simulation by prompting the user for input distributions and the parameters associated with each distribution (i.e., mean, standard deviation, min., max., most likely, etc.). It also prompts the user for the desired probability for which reserves are to be calculated. The algorithm so developed and tested in MATLAB further finds implementation in Python, where existing libraries for statistics and graph plotting have been imported to generate a better outcome. With PyQt Designer, code for a simple graphical user interface has also been written. The graph so plotted is then validated against the already available results from the U.S. DOE MonteCarlo software.
Keywords: simulation, probability, confidence interval, sensitivity analysis
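A minimal Python sketch of the volumetric Monte Carlo workflow described above; all distributions and parameter values are illustrative placeholders, not the study's data set or its MATLAB/Python implementation.

```python
# Minimal sketch: probabilistic volumetric reserves estimation by Monte Carlo.
# All distributions and parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

area = rng.triangular(800, 1200, 2000, n)        # acres
thickness = rng.triangular(20, 35, 60, n)        # ft
porosity = rng.normal(0.18, 0.02, n)             # fraction
sw = rng.normal(0.30, 0.05, n)                   # water saturation, fraction
bo = rng.normal(1.2, 0.05, n)                    # formation volume factor, rb/stb
rf = rng.triangular(0.15, 0.25, 0.40, n)         # recovery factor

# Volumetric equation: recoverable oil in stock-tank barrels
reserves = 7758 * area * thickness * porosity * (1 - sw) / bo * rf

# P90/P50/P10 in the "probability of exceedance" convention
p90, p50, p10 = np.percentile(reserves, [10, 50, 90])
print(f"P90: {p90:,.0f} STB  P50: {p50:,.0f} STB  P10: {p10:,.0f} STB")
```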
Procedia PDF Downloads 381
1786 An Optimized Association Rule Mining Algorithm
Authors: Archana Singh, Jyoti Agarwal, Ajay Rana
Abstract:
Data mining is an efficient technology to discover patterns in large databases. Association rule mining techniques are used to find the correlation between the various item sets in a database, and this correlation between various item sets is used in decision making and pattern analysis. In recent years, the problem of finding association rules from large datasets has been addressed by many researchers. Various research papers on association rule mining (ARM) were studied and analyzed first to understand the existing algorithms. The Apriori algorithm is the basic ARM algorithm, but it requires many database scans. In the DIC algorithm, fewer database scans are needed, but a complex lattice data structure is used. The main focus of this paper is to propose a new optimized algorithm (Friendly Algorithm) and compare its performance with the existing algorithms. A data set is used to find frequent itemsets and association rules with the help of the existing and proposed (Friendly) algorithms, and it has been observed that the proposed algorithm also finds all the frequent itemsets and essential association rules from databases, as compared to existing algorithms, with fewer database scans. In the proposed algorithm, an optimized data structure is used, i.e., a graph and adjacency matrix.
Keywords: association rules, data mining, dynamic item set counting, FP-growth, friendly algorithm, graph
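A minimal sketch of the graph/adjacency-matrix idea mentioned above, with illustrative transactions; this is not the paper's Friendly Algorithm, only a toy example of reading frequent pairs and simple rules off a co-occurrence matrix built in a single pass.

```python
# Minimal sketch: frequent item pairs counted with an adjacency matrix built in
# one pass over the transactions (illustrative data; not the Friendly Algorithm).
import numpy as np
from itertools import combinations

transactions = [{"bread", "milk"}, {"bread", "butter"},
                {"bread", "milk", "butter"}, {"milk", "butter"}]
items = sorted(set().union(*transactions))
index = {item: k for k, item in enumerate(items)}

adj = np.zeros((len(items), len(items)), dtype=int)   # pair co-occurrence counts
support = np.zeros(len(items), dtype=int)             # single-item counts
for t in transactions:
    for item in t:
        support[index[item]] += 1
    for a, b in combinations(sorted(t), 2):
        adj[index[a], index[b]] += 1
        adj[index[b], index[a]] += 1

min_support = 2
for a, b in combinations(items, 2):
    count = adj[index[a], index[b]]
    if count >= min_support:
        conf = count / support[index[a]]              # confidence of rule a -> b
        print(f"{{{a}, {b}}} support={count}, conf({a}->{b})={conf:.2f}")
```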
Procedia PDF Downloads 419
1785 Software Assessment Using Ant Colony Optimization Algorithm
Authors: Saad M. Darwish
Abstract:
Recently, software quality issues have come to be seen as an important subject, as we see an enormous growth of agencies involved in the software industry. However, these agencies cannot guarantee the quality of their products, thus leaving users in uncertainty. Software certification is an extension of quality, in the sense that quality needs to be measured prior to the certification granting process. This research participates in solving the problem of software assessment by proposing a model for assessment and certification of a software product that uses a fuzzy inference engine to integrate both process-driven and application-driven quality assurance strategies. The key idea of the model at hand is to improve the compactness and the interpretability of the model's fuzzy rules via employing an ant colony optimization algorithm (ACO), which tries to find a good rule description by means of compound rules initially expressed with traditional single rules. The model has been tested by a case study, and the results have demonstrated the feasibility and practicability of the model in a real environment.
Keywords: optimization technique, quality assurance, software certification model, software assessment
Procedia PDF Downloads 485
1784 Electronic Spectral Function of Double Quantum Dots–Superconductors Nanoscopic Junction
Authors: Rajendra Kumar
Abstract:
We study the electronic spectral density of double coupled quantum dots sandwiched between superconducting leads, where one of the quantum dots (QD1) is connected to the left superconducting lead and the other (QD2) is connected to the right superconducting lead; QD1 and QD2 are coupled to each other. We consider the electronic spectral density through quantum dots between superconducting leads having s-wave symmetry of the superconducting order parameter. Such a junction is called a superconductor-quantum dot-superconductor (S-QD-S) junction. For this purpose, we have considered a renormalized Anderson model that includes the double coupling of the superconducting leads with the quantum dot levels and an attractive BCS-type effective interaction in the superconducting leads. We employed the Green's function technique to obtain the superconducting order parameter within the BCS framework and the Ambegaokar-Baratoff formalism to analyze the electronic spectral density through such an (S-QD-S) junction. It has been pointed out that the electronic spectral density through such a junction depends in an essential way on the attractive pairing interaction in the leads, the energy of the level on the dot with respect to the Fermi energy, and the coupling parameter of the two. On the basis of numerical analysis, we have compared the theoretical results for the electronic spectral density with the recent existing theoretical transport analyses. In QDs, it is the charging energy that may give rise to effects based on the interplay of Coulomb repulsion and superconducting correlations. It is, therefore, an interesting question to ask how the discrete level spectrum and the charging energy affect the DC and AC Josephson transport between two superconductors coupled via a QD. In the absence of a bias voltage, a finite DC current can be sustained in such an S-QD-S junction by the DC Josephson effect.
Keywords: quantum dots, S-QD-S junction, BCS superconductors, Anderson model
Procedia PDF Downloads 372
1783 Multilayer Neural Network and Fuzzy Logic Based Software Quality Prediction
Authors: Sadaf Sahar, Usman Qamar, Sadaf Ayaz
Abstract:
In the software development lifecycle, quality prediction techniques hold prime importance in order to minimize future design errors and expensive maintenance. There are many techniques proposed by various researchers, but with the increasing complexity of the software lifecycle model, it is crucial to develop a flexible system which can cater for the factors which in turn have an impact on the quality of the end product. These factors include properties of the software development process and the product, along with its operating conditions. In this paper, a neural network (perceptron) based software quality prediction technique is proposed. Using this technique, the stakeholders can predict the quality of the resulting software during the early phases of the lifecycle, saving time and resources on future elimination of design errors and costly maintenance. This technique can be brought into practical use through successful training.
Keywords: software quality, fuzzy logic, perceptron, prediction
Procedia PDF Downloads 316
1782 Evolving Software Assessment and Certification Models Using Ant Colony Optimization Algorithm
Authors: Saad M. Darwish
Abstract:
Recently, software quality issues have come to be seen as an important subject, as we see an enormous growth of agencies involved in the software industry. However, these agencies cannot guarantee the quality of their products, thus leaving users in uncertainty. Software certification is an extension of quality, in the sense that quality needs to be measured prior to the certification granting process. This research participates in solving the problem of software assessment by proposing a model for assessment and certification of a software product that uses a fuzzy inference engine to integrate both process-driven and application-driven quality assurance strategies. The key idea of the model at hand is to improve the compactness and the interpretability of the model's fuzzy rules via employing an ant colony optimization algorithm (ACO), which tries to find a good rule description by means of compound rules initially expressed with traditional single rules. The model has been tested by a case study, and the results have demonstrated the feasibility and practicability of the model in a real environment.
Keywords: software quality, quality assurance, software certification model, software assessment
Procedia PDF Downloads 521
1781 The Relevance of Environmental, Social, and Governance in Sustainable Supplier Selection
Authors: Christoph Koester
Abstract:
Supplier selection is one of the key issues in supply chain management, with a growing emphasis on sustainability driven by increasing stakeholder expectations and proactivity. In addition, new regulations, such as the German Supply Chain Act, have fostered the inclusion of sustainability (including governance) criteria in the selection process. In order to provide a systematic approach to selecting the most suitable sustainable suppliers, this study quantifies the importance of and prioritizes the relevant selection criteria across 17 German industries using the Fuzzy Analytical Hierarchy Process. Results show that economic criteria are still the most important in the selection decision, averaging a global weight of 51%. However, environmental, social, and governance (ESG) criteria combined are, on average, almost equally important, with global weights of 22%, 16%, and 11%, respectively. While the type of industry influences criteria weights, other factors, such as the type of purchasing or demographic factors, appear to have little impact.
Keywords: ESG, fuzzy analytical hierarchy process, sustainable supplier selection, sustainability
Procedia PDF Downloads 85
1780 A Grid Synchronization Phase Locked Loop Method for Grid-Connected Inverters Systems
Authors: Naima Ikken, Abdelhadi Bouknadel, Nour-eddine Tariba Ahmed Haddou, Hafsa El Omari
Abstract:
The operation of grid-connected inverters necessitates a single-phase phase-locked loop (PLL), which is proposed in this article to accurately and quickly estimate and detect the grid phase angle. This article presents an improvement of a phase-locked loop method. The novelty is to generate a grid synchronization method (PLL) with a notch filter based on adaptive fuzzy logic for inverter systems connected to the grid. The performance of the proposed method was tested under normal and abnormal operating conditions (amplitude, frequency and phase shift variations). In addition, simulation results with ISPM software are presented to verify the effectiveness of the proposed strategy. Finally, an experimental test is used to extract the results and discuss the validity of the proposed algorithm.
Keywords: phase locked loop, PLL, notch filter, fuzzy logic control, grid connected inverters
Procedia PDF Downloads 147
1779 Multi-Criteria Goal Programming Model for Sustainable Development of India
Authors: Irfan Ali, Srikant Gupta, Aquil Ahmed
Abstract:
Every country needs sustainable development (SD) for its economic growth, achieved by forming suitable policies and initiative programs for the development of the country's different sectors. This paper comprises the modeling and optimization of different sectors of India that form a multi-criteria model. In this paper, we developed a fractional goal programming (FGP) model that helps in providing an efficient allocation of resources while simultaneously achieving the sustainability goals for gross domestic product (GDP), electricity consumption (EC) and greenhouse gas (GHG) emissions by the year 2030. Also, a weighted model of FGP is presented to obtain varying solutions according to the priorities set by the policy maker for achieving future goals of GDP growth, EC, and GHG emissions. The presented models provide useful insight to decision makers for implementing strategies in the different sectors.
Keywords: sustainable and economic development, multi-objective fractional programming, fuzzy goal programming, weighted fuzzy goal programming
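A minimal sketch of a weighted goal programming formulation in the spirit described above; the two-sector coefficients, targets, and weights are hypothetical illustrations (not the paper's India data), and scipy is an assumed tooling choice.

```python
# Minimal sketch: weighted goal programming with deviation variables, solved as
# a linear program. All coefficients, targets and weights are hypothetical.
from scipy.optimize import linprog

# Variables: [x1, x2, d_gdp_minus, d_gdp_plus, d_ghg_minus, d_ghg_plus]
w_gdp, w_ghg = 0.6, 0.4
c = [0, 0, w_gdp, 0, 0, w_ghg]     # penalize GDP shortfall and GHG overshoot

# Goal constraints: achievement + d_minus - d_plus = target
A_eq = [[3.0, 2.0, 1, -1, 0, 0],   # GDP contribution of sectors 1 and 2
        [1.0, 0.5, 0, 0, 1, -1]]   # GHG emission of sectors 1 and 2
b_eq = [100.0, 30.0]               # GDP target, GHG ceiling

# Hard resource constraint on total sector activity
A_ub = [[1.0, 1.0, 0, 0, 0, 0]]
b_ub = [40.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6, method="highs")
x1, x2 = res.x[:2]
print(f"sector levels: x1={x1:.2f}, x2={x2:.2f}, weighted deviation={res.fun:.3f}")
```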
Procedia PDF Downloads 221
1778 Back Stepping Sliding Mode Control of Blood Glucose for Type I Diabetes
Authors: N. Tadrisi Parsa, A. R. Vali, R. Ghasemi
Abstract:
Diabetes is a growing health problem worldwide. In particular, patients with Type 1 diabetes need strict glycemic control because they have a deficiency of insulin production. This paper attempts to control blood glucose based on a mathematical model of the body. The Bergman minimal mathematical model is used to develop the nonlinear controller. A novel back-stepping based sliding mode control (B-SMC) strategy is proposed as a solution that guarantees practical tracking of a desired glucose concentration. In order to show the performance of the proposed design, it is compared with the conventional linear and fuzzy controllers developed in previous research. The numerical simulation results show the advantages of the sliding mode back-stepping controller design over linear and fuzzy controllers.
Keywords: Bergman model, nonlinear control, back stepping, sliding mode control
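A minimal simulation sketch of the Bergman minimal model referred to above; the parameter values are illustrative, and a simple proportional insulin infusion stands in for the paper's back-stepping sliding mode law.

```python
# Minimal sketch: Bergman minimal model of glucose-insulin dynamics with a
# placeholder insulin infusion law (illustrative parameters; the B-SMC
# controller from the paper is not reproduced here).
from scipy.integrate import solve_ivp

p1, p2, p3, n = 0.028, 0.025, 1.3e-5, 0.09   # rate constants, 1/min (illustrative)
Gb, Ib = 81.0, 15.0                          # basal glucose (mg/dL), insulin (mU/L)

def insulin_infusion(G):
    # hypothetical proportional law driving glucose towards its basal value
    return max(0.0, 0.05 * (G - Gb))

def bergman(t, y):
    G, X, I = y
    dG = -p1 * (G - Gb) - X * G               # plasma glucose dynamics
    dX = -p2 * X + p3 * (I - Ib)              # remote insulin action
    dI = -n * (I - Ib) + insulin_infusion(G)  # plasma insulin with control input
    return [dG, dX, dI]

sol = solve_ivp(bergman, (0, 300), [250.0, 0.0, 50.0], max_step=1.0)
print(f"glucose after 300 min: {sol.y[0, -1]:.1f} mg/dL")
```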
Procedia PDF Downloads 381
1777 Domain-Specific Ontology-Based Knowledge Extraction Using R-GNN and Large Language Models
Authors: Andrey Khalov
Abstract:
The rapid proliferation of unstructured data in IT infrastructure management demands innovative approaches for extracting actionable knowledge. This paper presents a framework for ontology-based knowledge extraction that combines relational graph neural networks (R-GNN) with large language models (LLMs). The proposed method leverages the DOLCE framework as the foundational ontology, extending it with concepts from ITSMO for domain-specific applications in IT service management and outsourcing. A key component of this research is the use of transformer-based models, such as DeBERTa-v3-large, for automatic entity and relationship extraction from unstructured texts. Furthermore, the paper explores how transfer learning techniques can be applied to fine-tune large language models (LLaMA) to generate synthetic datasets that improve precision in BERT-based entity recognition and ontology alignment. The resulting IT Ontology (ITO) serves as a comprehensive knowledge base that integrates domain-specific insights from ITIL processes, enabling more efficient decision-making. Experimental results demonstrate significant improvements in knowledge extraction and relationship mapping, offering a cutting-edge solution for enhancing cognitive computing in IT service environments.
Keywords: ontology mapping, R-GNN, knowledge extraction, large language models, NER, knowledge graph
Procedia PDF Downloads 14
1776 Influence of Composite Adherents Properties on the Dynamic Behavior of Double Lap Bonded Joint
Authors: P. Saleh, G. Challita, R. Hazimeh, K. Khalil
Abstract:
In this paper, a 3D FEM analysis was carried out on a double lap bonded joint with composite adherents subjected to dynamic shear. The adherents are made of carbon/epoxy, while the adhesive is the epoxy Araldite 2031. The maximum average shear stress and the stress homogeneity in the adhesive layer were examined. Three fiber textures were considered: UD, 2.5D and 3D, with the same fiber volume; then a parametric study based on changing the thickness and the type of fiber texture in 2.5D was accomplished. Moreover, adherent dissimilarity was also investigated. It was found that the main parameter influencing the behavior is the longitudinal stiffness of the adherents. An increase in the adherents' longitudinal stiffness induces an increase in the maximum average shear stress in the adhesive layer and an improvement in the shear stress homogeneity within the joint. No remarkable improvement was observed for dissimilar adherents.
Keywords: adhesive, composite adherents, impact shear, finite element
Procedia PDF Downloads 441
1775 Study on the Effects of Geometrical Parameters of Helical Fins on Heat Transfer Enhancement of Finned Tube Heat Exchangers
Authors: H. Asadi, H. Naderan Tahan
Abstract:
The aim of this paper is to investigate the effect of the geometrical properties of helical fins in double pipe heat exchangers. In other words, the purpose of this project is to derive the hydraulic and thermal design tables and equations of double pipe heat exchangers with helical fins. Numerical modeling is implemented to calculate the considered parameters. Design tables and correlated equations are generated by repeating the parametric numerical procedure for different fin geometries. The friction factor coefficient and Nusselt number are calculated for different Reynolds numbers, fluid Prandtl numbers and fin twist angles for the range of laminar fluid flow in an annular tube with helical fins. Results showed that the friction factor coefficient and Nusselt number increase for higher Reynolds numbers and fin twist angles in general. These two parameters follow different patterns in response to Reynolds number increments. A thermal performance factor is defined to analyze these different patterns. Temperature and velocity contours are plotted against twist angle and number of fins to describe the changes in flow patterns in different geometries of the twisted finned annulus. Finally, the twisted finned annulus friction factor coefficient, Nusselt number and thermal performance factor are correlated by simulating the model at different design points.
Keywords: double pipe heat exchangers, heat exchanger performance, twisted fins, computational fluid dynamics
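As an illustration of how such correlated equations can be derived from simulated design points, a minimal sketch that fits a power-law Nusselt correlation Nu = a·Re^b·Pr^c by log-linear least squares; the data points below are hypothetical placeholders, not the paper's CFD results.

```python
# Minimal sketch: fitting Nu = a * Re**b * Pr**c to hypothetical design-point
# data by linear least squares in log space.
import numpy as np

# Hypothetical design points (Re, Pr, Nu) standing in for CFD results
Re = np.array([500, 800, 1200, 1600, 2000], dtype=float)
Pr = np.array([3.0, 5.0, 4.0, 6.0, 5.0], dtype=float)
Nu = np.array([6.1, 8.9, 10.2, 13.5, 14.1], dtype=float)

# ln Nu = ln a + b ln Re + c ln Pr  ->  solve for [ln a, b, c]
A = np.column_stack([np.ones_like(Re), np.log(Re), np.log(Pr)])
coef, *_ = np.linalg.lstsq(A, np.log(Nu), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]

print(f"Nu = {a:.3f} * Re^{b:.3f} * Pr^{c:.3f}")
print("fitted Nu:", np.round(a * Re**b * Pr**c, 2))
```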
Procedia PDF Downloads 288
1774 Prevalence of the Double Burden of Malnutrition in Women of Childbearing Age in Morocco: Coexistence of Iron Deficiency Anemia and Overweight
Authors: Fall Abdourahmane, Lazrak Meryem, El Hsaini Houda, El Ammari Laila, Gamih Hasnae, Yahyane Abdelhakim, Benjouad Abdelaziz, Aguenaou Hassan, El Kari Khalid
Abstract:
Introduction: The double burden of malnutrition (DBM), characterized by the coexistence of undernutrition and overnutrition, is a significant health challenge, particularly in low- and middle-income countries. In Morocco, 61.3% of women of reproductive age (WRA) are overweight or obese, including 30.4% who are obese, while 34.4% are anaemic and 49.7% have iron deficiency anaemia. Objective: This study aims to determine the prevalence of DBM at the individual level among Moroccan WRA, defined by the coexistence of iron deficiency anaemia and overweight/obesity. Methods: A cross-sectional national survey was conducted among a representative sample of 2090 Moroccan WRA. Data collected included socio-economic parameters, anthropometric measurements, and blood samples. Haemoglobin levels were measured photometrically using HemoCue, while ferritin and CRP were assessed through immunoturbidimetry. Results: The prevalences of overweight/obesity, iron deficiency, anaemia and iron deficiency anaemia among WRA in Morocco were 60.2%, 30.6%, 34.4% and 50.0%, respectively. The coexistence of overweight/obesity with anaemia and with iron deficiency was observed in 19.2% and 16.3% of women, respectively. Among overweight/obese women, 32.5% were anaemic, 28.4% were iron deficient, and 47.6% had iron deficiency anaemia. The prevalence of DBM was higher in urban areas compared to rural settings. Conclusion: The coexistence of undernutrition and overnutrition among WRA highlights the urgent need for integrated public health interventions addressing both anaemia and obesity simultaneously. Tailored strategies should consider the specific socio-economic and geographical contexts to effectively combat this dual burden.
Keywords: the double burden of malnutrition, iron deficiency anaemia, overweight, obesity
Procedia PDF Downloads 31
1773 An Axiomatic Approach to Constructing an Applied Theory of Possibility
Authors: Oleksii Bychkov
Abstract:
The fundamental difference between randomness and vagueness is that the former requires statistical research. These issues were studied by Zadeh, Dubois, and Prade. The theory of possibility works with expert assessments, hypotheses, etc., and gives an idea of the characteristics of the problem situation, the nature of the goals, and the real limitations. Possibility theory examines experiments that are not repeated. The article discusses issues related to the formalization of a fuzzy, uncertain idea of reality. The author proposes to expand the classical model of the theory of possibility by introducing a measure of necessity. The proposed model of the theory of possibility allows us to extend the measures of possibility and necessity onto a Boolean algebra of events while preserving the properties of the measure. Thus, upper and lower estimates are obtained to describe the fact that an event will occur. Knowledge of the patterns that govern mass random, uncertain, fuzzy events allows us to predict how these events will proceed. The article proposed for publication quite fully reveals the essence of the construction and use of the theory of probability and the theory of possibility.
Keywords: possibility, artificial, modeling, axiomatics, intellectual approach
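A minimal sketch of the paired possibility and necessity measures discussed above, over a finite universe with an illustrative possibility distribution; Pos(A) and Nec(A) give the upper and lower estimates that an event occurs.

```python
# Minimal sketch: possibility and necessity of an event over a finite universe.
# Pos(A) = max of the possibility distribution over A;
# Nec(A) = 1 - Pos(complement of A). Distribution values are illustrative.
pi = {"low": 0.2, "medium": 1.0, "high": 0.6}   # normalized possibility distribution

def possibility(event):
    return max(pi[x] for x in event) if event else 0.0

def necessity(event):
    complement = set(pi) - set(event)
    return 1.0 - possibility(complement)

A = {"medium", "high"}
print("Pos(A) =", possibility(A))   # 1.0  (upper estimate that A occurs)
print("Nec(A) =", necessity(A))     # 0.8  (lower estimate that A occurs)
```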
Procedia PDF Downloads 32
1772 Examining Social Connectivity through Email Network Analysis: Study of Librarians' Emailing Groups in Pakistan
Authors: Muhammad Arif Khan, Haroon Idrees, Imran Aziz, Sidra Mushtaq
Abstract:
Social platforms like online discussion and mailing groups are well aligned with academic as well as professional learning spaces. Professional communities are increasingly moving to online forums for sharing and capturing their intellectual abilities. This study investigated the dynamics of social connectivity of Yahoo mailing groups of Pakistani Library and Information Science (LIS) professionals using the graph theory technique. Design/Methodology: Social network analysis is a domain of increasing concern for scientists in identifying whether people grow together through online social interaction or whether they just reflect connectivity. We have conducted a longitudinal study using the network graph theory technique to analyze a large data set of email communication. The data were collected from three Yahoo mailing groups using network analysis software over a period of six months, i.e., January to June 2016. Findings of the network analysis were reviewed through focus group discussions with LIS experts and selected respondents of the study. Data were analyzed in Microsoft Excel, and network diagrams were visualized using the NodeXL and ORA-Net Scene packages. Findings: The findings demonstrate that professionals and students exhibit intellectual growth the more they get tied within a network by interacting and participating in communication through online forums. The study reports on the dynamics of the large network by visualizing the email correspondence among group members in a network consisting of vertices (members) and edges (correspondence). The pairwise relationships between group members were modelled and illustrated to show the characteristics, reasons, and strength of ties. The connectivity of nodes illustrated the frequency of communication among group members; node coupling, diffusion of networks, and node clustering have been examined in depth. Network analysis was found to be a useful technique for investigating the dynamics of the large network.
Keywords: emailing networks, network graph theory, online social platforms, yahoo mailing groups
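A minimal sketch of the kind of graph-theoretic analysis described above, using a hypothetical member-to-member email edge list and the networkx library as a stand-in for the network analysis software mentioned in the study.

```python
# Minimal sketch: basic graph-theoretic measures on an email network.
# The edge list (sender, receiver, number of messages) is hypothetical.
import networkx as nx

edges = [("A", "B", 12), ("A", "C", 5), ("B", "C", 7),
         ("C", "D", 3), ("D", "E", 9), ("B", "E", 2)]

G = nx.Graph()
G.add_weighted_edges_from(edges)   # vertices = members, edges = correspondence

print("degree centrality:", nx.degree_centrality(G))
print("clustering coefficients:", nx.clustering(G))
print("connected components:", [sorted(c) for c in nx.connected_components(G)])
print("strongest tie:", max(G.edges(data="weight"), key=lambda e: e[2]))
```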
Procedia PDF Downloads 239
1771 Fuzzy Control of Thermally Isolated Greenhouse Building by Utilizing Underground Heat Exchanger and Outside Weather Conditions
Authors: Raghad Alhusari, Farag Omar, Moustafa Fadel
Abstract:
A traditional greenhouse is a metal-frame agricultural building used for cultivating plants in a controlled environment isolated from external climatic changes. Using greenhouses in agriculture is an efficient way to reduce water consumption, as agriculture is considered the biggest water consumer worldwide. Controlling the greenhouse environment yields better plant productivity but demands an increase in electric power. Although various control approaches have been used for greenhouse automation, most of them are applied to traditional greenhouses with ventilation fans and/or evaporative cooling systems. Such approaches still demand high energy and water consumption. The aim of this research is to develop a fuzzy control system that minimizes water and energy consumption by utilizing outside weather conditions and an underground heat exchanger to maintain the optimum climate of the greenhouse. The proposed control system is implemented on an experimental model of a thermally isolated greenhouse structure with dimensions of 6x5x2.8 meters. It uses fans for extracting heat from the ground heat exchanger system, motors for automatic opening/closing of the greenhouse windows, and LEDs as the lighting system. The controller is also integrated with environmental condition sensors. It was found that using an air-to-air horizontal ground heat exchanger with 90 mm diameter and 2 mm thickness, placed 2.5 m below the ground surface, results in a decrease of the greenhouse temperature of 3.28 ˚C, which saves around 3 kW of consumed energy. It also eliminates the water consumption needed in evaporative cooling systems, which are traditionally used for cooling the greenhouse environment.
Keywords: automation, earth-to-air heat exchangers, fuzzy control, greenhouse, sustainable buildings
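A minimal sketch of a fuzzy temperature-to-fan-duty rule base of the kind used in such a controller; the membership functions, rule consequents, and setpoint are illustrative assumptions, not the implemented system.

```python
# Minimal sketch: a tiny Sugeno-style fuzzy controller mapping the temperature
# error (measured - setpoint) to a heat-exchanger fan duty cycle.
# Membership functions and rule outputs are illustrative assumptions.
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fan_duty(temp_error):
    # Rule firing strengths: the greenhouse is "cool", "ok" or "hot"
    mu = {"cool": tri(temp_error, -10, -5, 0),
          "ok":   tri(temp_error, -2, 0, 2),
          "hot":  tri(temp_error, 0, 5, 10)}
    duty = {"cool": 0.0, "ok": 0.2, "hot": 1.0}      # rule consequents (fan duty)
    total = sum(mu.values())
    if total == 0.0:
        return 1.0 if temp_error > 0 else 0.0        # saturate outside the universe
    return sum(mu[k] * duty[k] for k in mu) / total  # weighted-average defuzzification

for e in (-6.0, 0.0, 1.0, 3.0, 8.0):
    print(f"error {e:+.1f} degC -> fan duty {fan_duty(e):.2f}")
```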
Procedia PDF Downloads 127
1770 The Interplay of Community-based Social Capital and Neighbourhood Dynamics in Enhancing SMEs' Resilience During Crises: A Fuzzy-Set Qualitative Comparative Analysis Approach
Authors: Arash Sadeghi, Taimaz Larimian
Abstract:
This study explores the intricate interplay between community-based social capital (CBSC) and neighbourhood dynamics in enhancing the resilience of Iranian SMEs, particularly under the strain of international sanctions. Utilising fuzzy-set Qualitative Comparative Analysis (fsQCA), we examine how different dimensions of CBSC (structural, relational, and cognitive) interact with neighbourhood socio-economic and built-environment characteristics to influence SME resilience. Findings reveal four configurations that contribute to the presence of resistance and five configurations associated with the adaptation outcome. Each configuration demonstrates a distinct combination of social capital elements, which vary according to the specific socio-economic and built-environmental characteristics of the neighbourhoods. The first configuration highlights the importance of structural social capital in deprived areas for building resistance, while the second emphasises the role of relational social capital in low-density, minimally deprived areas. Overall, cognitive social capital seems to be less effective in driving economic resilience compared to structural and relational types. This research contributes to the literature by providing a nuanced understanding of the synergistic effects of CBSC dimensions and neighbourhood characteristics on SME resilience. By adopting a configurational approach, we move beyond traditional methodologies, offering a comprehensive view of the complex dynamics of CBSC and neighbourhood characteristics and their impact on SME resilience in varying neighbourhoods.
Keywords: community-based social capital, fuzzy-set qualitative comparative analysis (fsQCA), place-based resilience, resistance
Procedia PDF Downloads 51
1769 Sliding Mode Speed Controller of Photovoltaic Pumping System
Authors: Kessal Abdelhalim, Zebiri Fouad, Rahmani Lazhar
Abstract:
This paper presents an analysis by which the dynamic performance of a permanent magnet brushless DC (PMBLDC) motor is controlled through a hysteresis current loop and an outer speed loop with different controllers. The dynamics of the photovoltaic pumping drive system with sliding mode speed controllers are presented. The proposed structure is constituted of a photovoltaic generator associated with a DC-DC converter controlled by fuzzy logic to ensure maximum power point tracking. The PWM signals are generated by the interaction of the motor speed closed-loop system and the current hysteresis. The motor reference current is compared with the motor speed feedback signal. The considered model has been implemented in the MATLAB/SimPower environment. The results show the effectiveness of the proposed method in increasing the performance of the water pumping system.
Keywords: photovoltaic, permanent magnet brushless DC (PMBLDC) motor, MPPT, speed control, fuzzy, sliding mode
Procedia PDF Downloads 674
1768 Lattice Twinning and Detwinning Processes in Phase Transformation in Shape Memory Alloys
Authors: Osman Adiguzel
Abstract:
Shape memory effect is a peculiar property exhibited by certain alloy systems and based on martensitic transformation, and shape memory properties are closely related to the microstructures of the material. The shape memory effect is linked with martensitic transformation, which is a solid state phase transformation that occurs with the cooperative movement of atoms by means of lattice invariant shears on cooling from the high-temperature parent phase. Lattice twinning and detwinning can be considered as elementary processes activated during the transformation. Thermally induced martensite occurs as martensite variants, in a self-accommodating manner, and consists of lattice twins. This martensite is also called twinned martensite or multivariant martensite. Deformation of shape memory alloys in the martensitic state proceeds through martensite variant reorientation. The martensite variants turn into reoriented single variants with deformation, and the reorientation process has great importance for the shape memory behavior. Copper-based alloys exhibit this property in the metastable β-phase region, which has a DO3-type ordered lattice in the ternary case at high temperature, and these structures martensitically turn into layered complex structures by a lattice twinning mechanism on cooling from the high-temperature parent phase region. The twinning occurs as martensite variants with lattice invariant shears in two opposite directions, <110>-type directions on the {110}-type plane of the austenite matrix. Lattice invariant shear is not uniform in copper-based ternary alloys and gives rise to the formation of unusual layered structures, like 3R, 9R, or 18R, depending on the stacking sequences on the close-packed planes of the ordered lattice. The unit cell and periodicity are completed through 18 atomic layers in the case of the 18R structure. On the other hand, the deformed material recovers its original shape on heating above the austenite finish temperature. Meanwhile, the material returns to the twinned martensite structure (thermally induced martensite) in the one-way (irreversible) shape memory effect on cooling below the martensite finish temperature, whereas the material returns to the detwinned martensite structure (deformed martensite) in the two-way (reversible) shape memory effect. In short, one can say that the microstructural mechanisms responsible for the shape memory effect are the twinning and detwinning processes as well as the martensitic transformation. In the present contribution, X-ray diffraction, transmission electron microscopy (TEM) and differential scanning calorimetry (DSC) studies were carried out on two copper-based ternary alloys, CuZnAl and CuAlMn.
Keywords: shape memory effect, martensitic transformation, twinning and detwinning, layered structures
Procedia PDF Downloads 427