Search results for: computational methods
13449 African Swine Fever Situation and Diagnostic Methods in Lithuania
Authors: Simona Pileviciene
Abstract:
On 24 January 2014, Lithuania notified two primary cases of African swine fever (ASF) in wild boars. The animals tested positive for the ASF virus (ASFV) genome by real-time PCR at the National Reference Laboratory for ASF in Lithuania (NRL), and the results were confirmed by the European Union Reference Laboratory for African swine fever (CISA-INIA). An intensive wild and domestic animal monitoring program was started. During the period 2014-2017, ASF was confirmed in two large commercial pig holdings with the highest biosecurity. The pigs were killed and destroyed. Since 2014, the ASF outbreak territory has expanded from the east and south to the middle of Lithuania. PCR is one of the diagnostic methods highly recommended by the World Organisation for Animal Health (OIE) for the diagnosis of ASF. The aim of the present study was to compare singleplex real-time PCR assays to a duplex assay allowing the identification of ASF and an internal control in a single PCR tube, and to compare the effectiveness of primers that target the p72 gene (ASF 250 bp and ASF 75 bp). Multiplex real-time PCR assays prove to be less time-consuming and more cost-efficient and therefore have high potential for application in routine analysis. It is important to have an effective and fast method that allows virus detection at the beginning of disease in the wild boar population and in outbreaks among domestic pigs. For the experiments, we used reference samples (INIA, Spain) and positive samples from infected animals in Lithuania. The results show 100% sensitivity and specificity. Keywords: African swine fever, real-time PCR, wild boar, domestic pig
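The reported 100% sensitivity and specificity are standard confusion-matrix quantities obtained by comparing assay calls against reference status. A minimal sketch (the counts below are illustrative placeholders, not the study's data):

```python
# Sensitivity and specificity from confusion-matrix counts.
# The counts used below are illustrative, not taken from the study.
def sensitivity(tp, fn):
    # Proportion of true positives among all truly positive samples.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Proportion of true negatives among all truly negative samples.
    return tn / (tn + fp)

# A perfect assay (as reported for the duplex real-time PCR here)
# has no false negatives and no false positives:
tp, fn, tn, fp = 40, 0, 60, 0
print(sensitivity(tp, fn), specificity(tn, fp))  # 1.0 1.0
```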
Procedia PDF Downloads 171
13448 Prevalence and Genetic Determinant of Drug Resistant Tuberculosis among Patients Completing Intensive Phase of Treatment in a Tertiary Referral Center in Nigeria
Authors: Aminu Bashir Mohammad, Agwu Ezera, Abdulrazaq G. Habib, Garba Iliyasu
Abstract:
Background: Drug-resistant tuberculosis (DR-TB) continues to be a challenge in developing countries with poor resources. Routine screening for primary DR-TB before commencing treatment is not done in public hospitals in Nigeria, even with the large body of evidence showing a high prevalence of primary DR-TB. Data on drug resistance and its genetic determinants among follow-up TB patients are lacking in Nigeria. Hence, the aim of this study was to determine the prevalence and genetic determinants of drug resistance among follow-up TB patients in a tertiary hospital in Nigeria. Methods: This was a cross-sectional, laboratory-based study conducted on 384 sputum samples collected from consenting follow-up tuberculosis patients. Standard microbiology methods (Ziehl-Neelsen staining and microscopy) and PCR (line probe assay) were used to analyze the samples collected. Pearson's chi-square test was used to analyze the data generated. Results: Out of the three hundred and eighty-four (384) sputum samples analyzed for Mycobacterium tuberculosis (MTB) and DR-TB, twenty-five (25; 6.5%) were found to be AFB positive. These samples were subjected to PCR (line probe assay), out of which 18 (72%) tested positive for DR-TB. Mutations conferring resistance to rifampicin (rpoB) and isoniazid (katG and/or inhA) were detected in 12/18 (66.7%) and 6/18 (33.3%), respectively. The transmission dynamics of DR-TB were not significantly (p>0.05) dependent on demographic characteristics. Conclusion: There is a need to strengthen the laboratory capacity for diagnosis of TB and drug resistance testing and to make these services available, affordable, and accessible to the patients who need them. Keywords: drug resistance tuberculosis, genetic determinant, intensive phase, Nigeria
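The independence test used above can be sketched with a Pearson chi-square statistic for a 2x2 table; for one degree of freedom the p-value has a closed form via the complementary error function. The table counts below are illustrative placeholders, not the study's data:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson's chi-square statistic and p-value for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 degree of freedom, the chi-square survival function is erfc(sqrt(x/2)).
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Illustrative table: DR-TB status (rows) vs. a binary demographic factor (columns).
stat, p = chi2_2x2(10, 8, 90, 92)
print(round(stat, 3), p > 0.05)  # a non-significant association, as in the study
```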
Procedia PDF Downloads 291
13447 Using Seismic Base Isolation Systems in High-Rise Hospital Buildings and a Hybrid Proposal
Authors: Elif Bakkaloglu, Necdet Torunbalci
Abstract:
Earthquakes in Turkiye are an inevitable natural hazard. Therefore, buildings must be prepared for them. Earthquake resistance is an essential point especially in hospital buildings, because hospitals are among the first places people come to after an earthquake. Although hospital buildings are more suitable for horizontal architecture, it is necessary to construct and expand multi-storey hospital buildings due to difficulties in finding suitable places as a result of excessive urbanization, difficulties in obtaining appropriately sized land, a decrease in suitable places, and an increase in land values. In Turkiye, the use of seismic isolators in public hospitals that are placed in a first-degree earthquake zone and have more than 100 beds is made obligatory by general instruction. As a result of this decision, it may sometimes be necessary to construct seismically isolated multi-storey hospital buildings in cities where those problems are experienced. Although seismic isolators are widely used in Japan, there are few multi-storey buildings in which seismic isolators are used in Turkiye. As is known, base isolation systems are the most effective methods of earthquake resistance; however, as the number of floors increases, the center of gravity moves away from the base in multi-storey buildings, increasing the overturning effect and limiting the use of these systems. In this context, it is aimed to investigate the structural systems of multi-storey buildings built using seismic isolation methods around the world. In addition, a working principle is suggested for disseminating seismic isolators in multi-storey hospital buildings. The results to be obtained from the study will guide architects who design multi-storey hospital buildings in their architectural designs, and engineers in terms of structural system design. Keywords: earthquake, energy absorbing systems, hospital, seismic isolation systems
Procedia PDF Downloads 158
13446 Homogenization of a Non-Linear Problem with a Thermal Barrier
Authors: Hassan Samadi, Mustapha El Jarroudi
Abstract:
In this work, we consider the homogenization of a non-linear problem in a periodic medium with two periodic connected media exchanging a heat flux through their common interface. The interfacial exchange coefficient λ is assumed to tend to zero or to infinity following a rate λ=λ(ε) as the size ε of the basic cell tends to zero. Three homogenized problems are determined according to some critical value depending on λ and ε. Our method is based on Γ-convergence techniques. Keywords: variational methods, epiconvergence, homogenization, convergence technique
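The three homogenized problems typically correspond to three regimes of an effective interface parameter obtained by comparing λ(ε) with the cell size. A schematic formulation under the usual assumptions for such interfacial thermal-barrier problems (the exact critical scaling exponent depends on the interface geometry and is established in the paper itself):

\[
\gamma \;=\; \lim_{\varepsilon \to 0} \varepsilon\,\lambda(\varepsilon),
\qquad
\begin{cases}
\gamma = 0, & \text{the interface becomes perfectly insulating},\\[2pt]
0 < \gamma < \infty, & \text{a genuine interfacial thermal barrier survives in the limit},\\[2pt]
\gamma = \infty, & \text{the interface becomes perfectly transmitting}.
\end{cases}
\]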
Procedia PDF Downloads 527
13445 Atmospheric Oxidation of Carbonyls: Insight to Mechanism, Kinetic and Thermodynamic Parameters
Authors: Olumayede Emmanuel Gbenga, Adeniyi Azeez Adebayo
Abstract:
Carbonyls are the first-generation products of the tropospheric degradation reactions of volatile organic compounds (VOCs). This computational study examined the mechanism of removal of carbonyls from the atmosphere via the hydroxyl radical. The kinetics of the reactions were computed from the activation parameters, using the enthalpy (ΔH‡) and Gibbs free energy (ΔG‡) of activation. The minimum energy path (MEP) analysis reveals that, in all the molecules, the products are energetically more stable than the reactants, which implies that the forward reaction is more thermodynamically favorable. The hydrogen abstraction of the aromatic aldehydes, especially those without methyl substituents, is more kinetically favorable than that of the other aldehydes, in the order aromatic (without methyl or with meta methyl) > alkene (short chain) > diene > long-chain aldehydes. The activation energy is much lower for the forward reaction than for the backward one, indicating that the forward reactions are kinetically favored over their backward reactions. In terms of thermodynamic stability, the aromatic compounds are found to be less favorable in comparison to the aliphatic ones. The study concludes that the chemistry of the carbonyl bond of the aldehyde changes significantly from the reactants to the products. Keywords: atmospheric carbonyls, oxidation, mechanism, kinetic, thermodynamic
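Converting a Gibbs free energy of activation into a rate constant is conventionally done with the Eyring equation; a stdlib sketch (the barrier values are illustrative, not the computed ones from this study):

```python
import math

# Physical constants (SI units)
KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def eyring_rate(dg_act_kj_mol, temp_k=298.15):
    """Eyring equation: k = (kB*T/h) * exp(-dG‡ / (R*T)).

    dg_act_kj_mol is the Gibbs free energy of activation in kJ/mol.
    """
    return (KB * temp_k / H) * math.exp(-dg_act_kj_mol * 1e3 / (R * temp_k))

# A lower barrier gives a faster reaction, which is the sense in which the
# forward (lower-barrier) hydrogen abstractions above are kinetically favored.
assert eyring_rate(40.0) > eyring_rate(60.0)
print(f"k(40 kJ/mol) = {eyring_rate(40.0):.3e} s^-1")
```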
Procedia PDF Downloads 57
13444 Effect of Piston and its Weight on the Performance of a Gun Tunnel via Computational Fluid Dynamics
Authors: A. A. Ahmadi, A. R. Pishevar, M. Nili
Abstract:
The test gas in a gun tunnel is non-isentropically compressed and heated by a lightweight piston. The first consideration here is the optimum piston weight. Although various aspects of the influence of piston weight on gun tunnel performance have been studied, it is not possible to decide from the existing literature what piston weight is required for optimum performance in various conditions. The technique whereby the piston is rapidly brought to rest at the end of the gun tunnel barrel, with the resulting peak pressure equal in magnitude to the final equilibrium pressure, is called the equilibrium piston technique. The equilibrium piston technique was developed to estimate the equilibrium piston mass, but it cannot give an appropriate estimate of the optimum piston weight. In the present work, a gun tunnel with a diameter of 3 in. is described, and its performance is investigated numerically to obtain the effect of the piston and its weight. The numerical results in the present work are in very good agreement with experimental results. The significant influence of the existence of a piston is shown by comparing the gun tunnel results with the results of a conventional shock tunnel of the same dimension and the same initial condition. In the gun tunnel, an increase of around 250% in running time is gained relative to the shock tunnel. The numerical results also show that the equilibrium piston technique is not a good way to estimate a suitable piston weight and that there is a lighter piston which can increase the running time of the gun tunnel by around 60%. Keywords: gun tunnel, hypersonic flow, piston, shock tunnel
Procedia PDF Downloads 375
13443 Contactless Electromagnetic Detection of Stress Fluctuations in Steel Elements
Authors: M. A. García, J. Vinolas, A. Hernando
Abstract:
Steel is nowadays one of the most important structural materials because of its outstanding mechanical properties. Therefore, in order to pursue a sustainable economic model and to optimize the use of extensive resources, new methods to monitor and prevent failure of steel-based facilities are required. The classical mechanical tests, as for instance building testing, are invasive and destructive. Moreover, for facilities where the steel element is embedded (as in reinforced concrete), these techniques are not directly applicable. Hence, non-invasive monitoring techniques that prevent failure without altering the structural properties of the elements are required. Among them, electromagnetic methods are particularly suitable for the non-invasive inspection of the mechanical state of steel-based elements. Magnetoelastic coupling effects induce a modification of the electromagnetic properties of an element upon applied stress. Since most steels are ferromagnetic because of their large Fe content, it is possible to inspect their structure and state in a non-invasive way. We present here a distinct electromagnetic method for the contactless evaluation of internal stress in steel-based elements. In particular, this method relies on measuring the magnetic induction between two coils with the steel specimen between them. We found that the stress-induced alteration of the electromagnetic properties of the steel specimen, through the resulting changes in the induction, allowed us to detect stress well below half of the elastic limit of the material. Hence, it represents an outstanding non-invasive method to prevent failure in steel-based facilities. We describe the theoretical model, present experimental results to validate it, and finally show a practical application for the detection of stress and inhomogeneities in train railways. Keywords: magnetoelastic, magnetic induction, mechanical stress, steel
Procedia PDF Downloads 53
13442 Optimal Design of 3-Way Reversing Valve Considering Cavitation Effect
Authors: Myeong-Gon Lee, Yang-Gyun Kim, Tae-Young Kim, Seung-Ho Han
Abstract:
The high-pressure valve uses one set of 2-way valves for the purpose of reversing the fluid direction. If there is no accurate control device for the 2-way valves, significant surging can be generated. The surging is a kind of pressure ripple that occurs under rapid changes of fluid motion with inaccurate valve control. To reduce the surging effect, a 3-way reversing valve can be applied, which provides a rapid and precise change of water flow directions without any accurate valve control system. However, cavitation occurs due to the complicated internal trim shape of the 3-way reversing valve. The cavitation causes not only noise and vibration but also a decrease in the efficiency of valve operation, as the bubbles generated below the saturated vapor pressure collapse rapidly in the higher-pressure zone. The shape optimization of the 3-way reversing valve to minimize the cavitation effect is therefore necessary. In this study, the cavitation index according to the international standard ISA was introduced to estimate macroscopically the occurrence of the cavitation effect. Computational fluid dynamic analysis was carried out, and the cavitation effect was quantified by means of the percent of cavitation converted from the calculated results of vapor volume fraction. In addition, the shape optimization of the 3-way reversing valve was performed by taking into account the percent of cavitation. Keywords: 3-way reversing valve, cavitation, shape optimization, vapor volume fraction
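One common form of the ISA-style cavitation index compares the margin above the vapor pressure with the pressure drop across the valve; a sketch with illustrative pressures (the definition has several variants in practice, and the values below are placeholders, not the paper's conditions):

```python
def cavitation_index(p_up, p_down, p_vapor):
    """Cavitation index: sigma = (p_up - p_vapor) / (p_up - p_down).

    A higher sigma means a larger margin against cavitation; as the
    pressure drop across the valve grows, sigma falls toward the
    regime where vapor bubbles form and then collapse violently.
    All pressures are absolute, in Pa.
    """
    return (p_up - p_vapor) / (p_up - p_down)

# Illustrative water-valve conditions: 10 bar upstream, ~2.3 kPa
# vapor pressure for water at 20 C (placeholder values).
p_up, p_vapor = 10e5, 2.3e3
mild = cavitation_index(p_up, 8e5, p_vapor)   # small pressure drop
harsh = cavitation_index(p_up, 1e5, p_vapor)  # large pressure drop
assert mild > harsh  # more recovery margin -> higher cavitation index
```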
Procedia PDF Downloads 375
13441 Suicide, Help-Seeking and LGBT Youth: A Mixed Methods Study
Authors: Elizabeth McDermott, Elizabeth Hughes, Victoria Rawlings
Abstract:
Globally, suicide is the second leading cause of death among 15–29 year-olds. Young people who identify as lesbian, gay, bisexual and transgender (LGBT) have elevated rates of suicide and self-harm. Despite the increased risk, there is a paucity of research on LGBT help-seeking and suicidality. This is the first national study to investigate LGBT youth help-seeking for suicidal feelings and self-harm. We report on a UK sequential exploratory mixed-methods study that employed face-to-face and online methods in two stages. Stage one involved 29 online (n=15) and face-to-face (n=14) semi-structured interviews with LGBT youth aged under 25 years old. Stage two utilized an online LGBT youth questionnaire employing a community-based sampling strategy (n=789). We found across the sample that LGBT youth who self-harmed or felt suicidal were reluctant to seek help. The results indicated that participants were normalizing their emotional distress and only asked for help when they reached crisis point and were no longer coping. Those who self-harmed (p<0.001, OR=2.82), had attempted or planned suicide (p<0.05, OR=1.48), or had experience of abuse related to their sexuality or gender (p<0.01, OR=1.80) were most likely to seek help. There were a number of interconnecting reasons that contributed to participants' problems accessing help. The most prominent of these were: negotiating norms in relation to sexuality, gender, mental health and age; being unable to talk about emotions; and coping and self-reliance. It is crucial that policies and practices that aim to prevent LGBT youth suicide recognize that norms and normalizing processes connected to sexual orientation and gender identity are additional difficulties that LGBT youth have in accessing mental health support. Keywords: help-seeking, LGBT, suicide, youth
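Odds ratios like those reported above (e.g. OR=2.82) quantify how much an exposure raises the odds of an outcome. For a simple 2x2 table the odds ratio and its Wald confidence interval can be sketched as follows (the counts are hypothetical placeholders, not the survey's raw data, which may also have used adjusted models):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table and its Wald 95% confidence interval.

    a: exposed & sought help,   b: exposed & did not,
    c: unexposed & sought help, d: unexposed & did not.
    """
    oratio = (a * d) / (b * c)
    # Standard error of log(OR) under the Wald approximation.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(oratio) - z * se)
    hi = math.exp(math.log(oratio) + z * se)
    return oratio, (lo, hi)

# Hypothetical counts for self-harm (exposure) vs. help-seeking (outcome):
oratio, (lo, hi) = odds_ratio_ci(120, 80, 140, 260)
print(round(oratio, 2), lo > 1.0)  # a CI excluding 1 indicates significance
```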
Procedia PDF Downloads 278
13440 Analog Input Output Buffer Information Specification Modelling Techniques for Single Ended Inter-Integrated Circuit and Differential Low Voltage Differential Signaling I/O Interfaces
Authors: Monika Rawat, Rahul Kumar
Abstract:
Input/Output Buffer Information Specification (IBIS) models are used for describing the analog behavior of the input/output (I/O) buffers of a digital device. They are widely used to perform signal integrity analysis. Advantages of using IBIS models include their simple structure, IP protection and fast simulation time with reasonable accuracy. As the design complexity of drivers and receivers increases, capturing the exact behavior from the transistor-level model in the IBIS model becomes an essential task to achieve better accuracy. In this paper, an improvement in the existing methodology of generating IBIS models for complex I/O interfaces such as Inter-Integrated Circuit (I2C) and Low Voltage Differential Signaling (LVDS) is proposed. Furthermore, the accuracy and computational performance of the standard method and the proposed approach with respect to SPICE are presented. The investigations will be useful to further improve the accuracy of IBIS models and to enhance their wider acceptance. Keywords: IBIS, signal integrity, open-drain buffer, low voltage differential signaling, behavior modelling, transient simulation
Procedia PDF Downloads 199
13439 Green Organic Chemistry, a New Paradigm in Pharmaceutical Sciences
Authors: Pesaru Vigneshwar Reddy, Parvathaneni Pavan
Abstract:
Green organic chemistry, one of the latest and most researched topics nowadays, has been in demand since the 1990s. Organic chemicals are important starting materials for a great number of major chemical industries: their production as raw materials or reagents for other applications is a major sector, covering the manufacturing of polymers, pharmaceuticals, pesticides, paints, artificial fibers, food additives, etc. Organic synthesis on a large scale, compared to the laboratory scale, involves the use of energy, basic chemical ingredients from the petrochemical sector, catalysts and, after the end of the reaction, separation, purification, storage, packing, distribution, etc. During these processes there are many health and safety problems for workers, in addition to the environmental problems caused by the use of these chemicals and their deposition as waste. Green chemistry, with its 12 principles, would like to see changes in the conventional ways that were used for decades to make synthetic organic chemicals, and promotes the use of less toxic starting materials. Green chemistry would like to increase the efficiency of synthetic methods, use less toxic solvents, reduce the stages of synthetic routes and minimize waste as far as practically possible. In this way, organic synthesis will be part of the effort for sustainable development. Green chemistry is also interested in research on and innovative alternatives to many practical aspects of organic synthesis in university and institutional research laboratories.
Three key developments in green chemistry include the use of supercritical carbon dioxide as a green solvent, aqueous hydrogen peroxide as an oxidising agent, and the use of hydrogen in asymmetric synthesis. Green chemistry also focuses on replacing traditional methods of heating with modern methods such as microwave heating, so that the carbon footprint is reduced as far as possible. Another benefit of green chemistry is that it reduces environmental pollution through the use of less toxic reagents, the minimization of waste and more biodegradable byproducts. In this paper, some of the basic principles, approaches, and early achievements of green chemistry as a branch of chemistry that studies the course of chemical reactions are considered, together with a summarization of the green chemistry principles. A discussion of the E-factor, the old and new syntheses of ibuprofen, microwave techniques, and some recent advancements is also included. Keywords: energy, e-factor, carbon foot print, micro-wave, sono-chemistry, advancement
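The E-factor mentioned above is a simple green-chemistry metric: the mass of waste generated per mass of product. A minimal sketch, with placeholder numbers chosen only to illustrate why a shorter catalytic route (as in the modern ibuprofen synthesis) scores better than a long stoichiometric one:

```python
def e_factor(total_waste_kg, product_kg):
    """Sheldon's E-factor: kilograms of waste per kilogram of product.

    Lower is greener: bulk chemicals typically score below 5, while
    pharmaceuticals often score 25-100 because of multi-step routes
    and extensive purification.
    """
    return total_waste_kg / product_kg

# Illustrative comparison in the spirit of the classic ibuprofen example:
# the older six-step stoichiometric route wastes far more mass per kg of
# product than the newer three-step catalytic route (numbers are
# placeholders, not the exact literature values).
classic = e_factor(total_waste_kg=37.0, product_kg=1.0)
greener = e_factor(total_waste_kg=12.0, product_kg=1.0)
assert greener < classic
```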
Procedia PDF Downloads 311
13438 Support Vector Machine Based Retinal Therapeutic for Glaucoma Using Machine Learning Algorithm
Authors: P. S. Jagadeesh Kumar, Mingmin Pan, Yang Yung, Tracy Lin Huan
Abstract:
Glaucoma is a group of visual maladies characterized by progressive optic nerve neuropathy, leading to an increasing dwindling of the visual field and resulting in loss of sight. In this paper, a novel support vector machine based retinal therapeutic for glaucoma using a machine learning algorithm is presented. The algorithm, based on a correlation clustering mode, performs its computations in a multi-dimensional space. Support vector clustering turns out to be comparable to the scale-space approach, which investigates the cluster organization by means of a kernel density estimation of the probability distribution, where cluster centers are identified by the neighborhood maxima of the density. The proposed method achieves a 91% attainment rate on a data set consisting of 500 realistic images of healthy and glaucomatous retinas; the computational benefit of relying on the cluster overlapping system based on the machine learning algorithm therefore yields strong performance in glaucoma therapeutics. Keywords: machine learning algorithm, correlation clustering mode, cluster overlapping system, glaucoma, kernel density estimation, retinal therapeutic
Procedia PDF Downloads 258
13437 Current Epizootic Situation of Q Fever in Polish Cattle
Authors: Monika Szymańska-Czerwińska, Agnieszka Jodełko, Krzysztof Niemczuk
Abstract:
Q fever (coxiellosis) is an infectious disease of animals and humans caused by C. burnetii and widely distributed throughout the world. Cattle and small ruminants are commonly known as shedders of C. burnetii. The aims of this study were the evaluation of the seroprevalence and shedding of C. burnetii in cattle. Genotypes of the pathogen present in the tested specimens were also identified using the MLVA (Multiple Locus Variable-Number Tandem Repeat Analysis) and MST (multispacer sequence typing) methods. Sampling was conducted in different regions of Poland in 2018-2021. In total, 2180 bovine serum samples from 801 cattle herds were tested by ELISA (enzyme-linked immunosorbent assay). 489 specimens from 157 cattle herds, including individual milk samples (n=407), bulk tank milk (n=58), vaginal swabs (n=20), placenta (n=3) and feces (n=1), were subjected to C. burnetii specific qPCR. The qPCR (IS1111 transposon-like repetitive region) was performed using the Adiavet COX RealTime PCR kit. Genotypic characterization of the strains was conducted utilizing the MLVA and MST methods; MLVA was performed using 6 variable loci. The overall herd-level seroprevalence of C. burnetii infection was 36.74% (801/2180). Shedders were detected in 29.3% (46/157) of cattle herds in all tested regions. The ST61 sequence type was identified in 10 out of 18 genotyped strains; interestingly, one strain represents a sequence type that has never been recorded previously. The MLVA method identified three previously known genotypes: the most common was J, but I and BE were also recognized. Moreover, one genotype had never been described previously. Seroprevalence and shedding of C. burnetii in cattle are common, and the strains are genetically diverse. Keywords: Coxiella burnetii, cattle, MST, MLVA, Q fever
Procedia PDF Downloads 89
13436 Relative Effectiveness of Inquiry Role Approach and Expository Instructional Methods in Fostering Students' Retention in Chemistry
Authors: Joy Johnbest Egbo
Abstract:
The study was designed to investigate the relative effectiveness of the inquiry role approach and the expository instructional method in fostering students' retention in chemistry. Two research questions were answered, and three null hypotheses were formulated and tested at the 0.05 level of significance. A quasi-experimental (non-equivalent pretest, posttest control group) design was adopted for the study. The population for the study comprised all senior secondary school class two (SS II) students who were offering Chemistry in single-sex schools in Enugu Education Zone. The instrument for data collection was a self-developed Chemistry Retention Test (CRT). Relevant data were collected from a sample of one hundred and forty-one (141) students drawn from two secondary schools (1 male and 1 female school) using a simple random sampling technique. A reliability coefficient of 0.82 was obtained for the instrument using the Kuder-Richardson formula 20 (KR-20). Mean and standard deviation scores were used to answer the research questions, while two-way analysis of covariance (ANCOVA) was used to test the hypotheses. The findings showed that the students taught with the inquiry role approach retained the chemistry concepts significantly better than their counterparts taught with the expository method. Female students retained slightly more than their male counterparts. There is a significant interaction between the instructional packages and gender on chemistry students' retention. It was recommended, among others, that teachers should be encouraged to employ the inquiry role approach more in the teaching of chemistry and of other subjects in general. By so doing, students' retention of the subject could be increased. Keywords: inquiry role approach, retention, exposition method, chemistry
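The KR-20 reliability coefficient reported above is computed from dichotomous (right/wrong) item scores; a stdlib sketch with a tiny illustrative data set (placeholder responses, not the study's):

```python
def kr20(responses):
    """Kuder-Richardson formula 20 for dichotomous (0/1) item scores.

    responses: one list of 0/1 item scores per examinee.
    KR-20 = (k/(k-1)) * (1 - sum(p_i * q_i) / var_total),
    where p_i is the proportion answering item i correctly,
    q_i = 1 - p_i, and var_total is the variance of total scores.
    """
    n = len(responses)
    k = len(responses[0])
    p = [sum(r[i] for r in responses) / n for i in range(k)]
    pq = sum(pi * (1 - pi) for pi in p)
    totals = [sum(r) for r in responses]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n  # population variance
    return (k / (k - 1)) * (1 - pq / var)

# Illustrative 4-item test taken by 5 examinees (placeholder data):
data = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(round(kr20(data), 2))  # 0.8
```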
Procedia PDF Downloads 516
13435 Antagonistic Potential of Epiphytic Bacteria Isolated in Kazakhstan against Erwinia amylovora, the Causal Agent of Fire Blight
Authors: Assel E. Molzhigitova, Amankeldi K. Sadanov, Elvira T. Ismailova, Kulyash A. Iskandarova, Olga N. Shemshura, Ainur I. Seitbattalova
Abstract:
Fire blight is a quarantine bacterial disease that is very harmful to commercial apple and pear production. To date, several different methods have been proposed for disease control, including the use of copper-based preparations and antibiotics, which are not always reliable or effective. The use of bacteria as biocontrol agents is one of the most promising and eco-friendly alternative methods. Bacteria with protective activity against the causal agent of fire blight are often present among the epiphytic microorganisms of the phyllosphere of host plants. Therefore, the main objective of our study was the screening of local epiphytic bacteria as possible antagonists against Erwinia amylovora, the causal agent of fire blight. Samples of infected organs of apple and pear trees (shoots, leaves, fruits) were collected from the industrial horticulture areas in various agro-ecological zones of Kazakhstan. Epiphytic microorganisms were isolated by standard and modified methods on specific nutrient media. The primary screening of the selected microorganisms under laboratory conditions to determine their ability to suppress the growth of Erwinia amylovora was performed by an agar diffusion test. Among the 142 bacteria isolated from the fire blight host plants, 5 isolates, belonging to the genera Bacillus, Lactobacillus, Pseudomonas, Paenibacillus and Pantoea, showed higher antagonistic activity against the pathogen. The diameters of the inhibition zones depended on the species and ranged from 10 mm to 48 mm. The maximum inhibition zone diameter (48 mm) was exhibited by B. amyloliquefaciens. A lesser inhibitory effect was shown by Pantoea agglomerans PA1 (19 mm). The study of the inhibitory effect of Lactobacillus species against E. amylovora showed that among the 7 isolates tested, only one (Lactobacillus plantarum 17M) demonstrated an inhibition zone (30 mm).
In summary, this study was devoted to detecting beneficial epiphytic bacteria from the plant organs of pear and apple trees for fire blight control in Kazakhstan. The results obtained from the in vitro experiments showed that the most efficient bacterial isolates are Lactobacillus plantarum 17M, Bacillus amyloliquefaciens MB40, and Pantoea agglomerans PA1. These antagonists are suitable for development as biocontrol agents for fire blight control. Their efficacies will additionally be evaluated in biological tests under in vitro and field conditions during our further study. Keywords: antagonists, epiphytic bacteria, Erwinia amylovora, fire blight
Procedia PDF Downloads 177
13434 Determination of Klebsiella Pneumoniae Susceptibility to Antibiotics Using Infrared Spectroscopy and Machine Learning Algorithms
Authors: Manal Suleiman, George Abu-Aqil, Uraib Sharaha, Klaris Riesenberg, Itshak Lapidot, Ahmad Salman, Mahmoud Huleihel
Abstract:
Klebsiella pneumoniae is one of the most aggressive multidrug-resistant bacteria associated with human infections, resulting in high mortality and morbidity. Thus, for an effective treatment, it is important to diagnose both the species of the infecting bacteria and their susceptibility to antibiotics. The methods currently used for diagnosing bacterial susceptibility to antibiotics are time-consuming (about 24 h following the first culture). Thus, there is a clear need for rapid methods to determine bacterial susceptibility to antibiotics. Infrared spectroscopy is a well-known, sensitive and simple method that is able to detect minor biomolecular changes in biological samples associated with developing abnormalities. The main goal of this study is to evaluate the potential of infrared spectroscopy in tandem with the Random Forest and XGBoost machine learning algorithms to diagnose the susceptibility of Klebsiella pneumoniae to antibiotics within approximately 20 minutes following the first culture. In this study, 1190 Klebsiella pneumoniae isolates were obtained from different patients with urinary tract infections. The isolates were measured by the infrared spectrometer, and the spectra were analyzed by the machine learning algorithms Random Forest and XGBoost to determine their susceptibility regarding nine specific antibiotics. Our results confirm that it was possible to classify the isolates into sensitive and resistant to specific antibiotics with a success rate in the range of 80%-85% for the different tested antibiotics. These results prove the promising potential of infrared spectroscopy as a powerful diagnostic method for determining the susceptibility of Klebsiella pneumoniae to antibiotics. Keywords: urinary tract infection (UTI), Klebsiella pneumoniae, bacterial susceptibility, infrared spectroscopy, machine learning
Procedia PDF Downloads 176
13433 A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem
Authors: Ouafa Amira, Jiangshe Zhang
Abstract:
Clustering is an unsupervised machine learning technique; its aim is to extract the data structures, in which similar data objects are grouped in the same cluster, whereas dissimilar objects are grouped in different clusters. Clustering methods are widely utilized in different fields, such as image processing, computer vision, and pattern recognition. Fuzzy c-means clustering (FCM) is one of the best-known fuzzy clustering methods. It is based on solving an optimization problem in which a given cost function is minimized. This minimization aims to decrease the dissimilarity inside clusters, where the dissimilarity is measured by the distances between data objects and cluster centers. The degree of belonging of a data point to a cluster is measured by a membership function whose values lie in the interval [0, 1]. In FCM clustering, the membership degrees are constrained by the condition that the sum of a data object's memberships over all clusters must be equal to one. This constraint can cause several problems, especially when our data objects lie in a noisy space. Regularization approaches have been incorporated into the fuzzy c-means clustering technique; this process introduces additional information in order to solve an ill-posed optimization problem. In this study, we focus on regularization by a relative entropy approach, where in our optimization problem we aim to minimize the dissimilarity inside clusters. Finding an appropriate membership degree for each data object is our objective, because an appropriate membership degree leads to an accurate clustering result. Our clustering results on synthetic data sets, Gaussian-based data sets, and real-world data sets show that our proposed model achieves good accuracy. Keywords: clustering, fuzzy c-means, regularization, relative entropy
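With an entropy-type regularizer added to the fuzzy c-means cost, the membership update takes a closed softmax-like form over negative squared distances (this sketch uses the generic entropy-regularized update, not necessarily the exact model of the paper; the data and the weight gamma are illustrative):

```python
import math

def update_memberships(points, centers, gamma=1.0):
    """Membership update for entropy-regularized fuzzy clustering.

    With an entropy regularizer weighted by gamma, each membership is a
    softmax over negative squared distances to the cluster centers, so
    the memberships of every point automatically sum to one.
    """
    u = []
    for x in points:
        d2 = [sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in centers]
        w = [math.exp(-d / gamma) for d in d2]
        s = sum(w)
        u.append([wi / s for wi in w])
    return u

def update_centers(points, u):
    """Membership-weighted means, as in standard fuzzy c-means."""
    k, dim = len(u[0]), len(points[0])
    centers = []
    for j in range(k):
        wsum = sum(u[i][j] for i in range(len(points)))
        centers.append([
            sum(u[i][j] * points[i][t] for i in range(len(points))) / wsum
            for t in range(dim)
        ])
    return centers

# Two well-separated toy groups (illustrative data):
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
centers = [(0.0, 0.1), (5.0, 4.9)]
for _ in range(5):  # alternate the two updates
    u = update_memberships(pts, centers)
    centers = update_centers(pts, u)
assert u[0][0] > 0.99 and u[2][1] > 0.99  # crisp, correct assignments
assert abs(sum(u[0]) - 1.0) < 1e-9        # memberships sum to one
```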
Procedia PDF Downloads 264
13432 Grammarly: Great Writings Get Work Done Using AI
Authors: Neha Intikhab Khan, Alanoud AlBalwi, Farah Alqazlan, Tala Almadoudi
Abstract:
Background: Grammarly, a widely utilized writing assistant launched in 2009, leverages advanced artificial intelligence and natural language processing to enhance writing quality across various platforms. Methods: To collect data on user perceptions of Grammarly, a structured survey was designed and distributed via Google Forms. The survey included a series of quantitative and qualitative questions aimed at assessing various aspects of Grammarly's performance. The survey comprised multiple-choice questions, Likert scale items (ranging from "strongly disagree" to "strongly agree"), and open-ended questions to capture detailed user feedback. The target population included students, friends, and family members. The collected responses were analyzed using statistical methods to quantify user satisfaction. Participation in the survey was voluntary, and respondents were assured anonymity and confidentiality. Results: The survey of 28 respondents revealed a generally favorable perception of Grammarly's AI capabilities. A significant 39.3% strongly agreed that it effectively improves text tone, with an additional 46.4% agreeing, while 10.7% remained neutral. For clarity suggestions, 28.6% strongly agreed, and 57.1% agreed, totaling 85.7% recognition of its value. Regarding grammatical accuracy across various genres, 46.4% rated it a perfect score of 5, contributing to 78.5% who found it highly effective. Conclusion: The evolution of Grammarly from a basic grammar checker to a robust AI-driven application underscores its adaptability and commitment to helping users develop their writing skills.
Keywords: Grammarly, writing tool, user engagement, AI capabilities, effectiveness
Procedia PDF Downloads 10
13431 Evaluation of a Hybrid Knowledge-Based System Using Fuzzy Approach
Authors: Kamalendu Pal
Abstract:
This paper describes the main features of a knowledge-based system evaluation method. System evaluation is placed in the context of a hybrid legal decision-support system, Advisory Support for Home Settlement in Divorce (ASHSD). Legal knowledge for ASHSD is represented in two forms: as rules and as previously decided cases. Besides distinguishing these two forms of knowledge representation, the paper outlines their actual use in a computational framework designed to generate a plausible solution for a given case by using rule-based reasoning (RBR) and case-based reasoning (CBR) in an integrated environment. Suitability assessment of a solution has been treated as a multiple-criteria decision-making process in the ASHSD evaluation. The evaluation was performed through a combination of discussions and questionnaires with different user groups. The answers to the questionnaires used in this evaluation method were measured as a combination of linguistic variables and fuzzy numbers and by using a defuzzification process. The results show that the designed evaluation method creates a suitable mechanism for improving the performance of the knowledge-based system.
Keywords: case-based reasoning, fuzzy number, legal decision-support system, linguistic variable, rule-based reasoning, system evaluation
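As an illustration of the evaluation machinery described above, the following sketch maps Likert-style linguistic responses to triangular fuzzy numbers, averages them across respondents, and defuzzifies by the centroid method. The scale values below are hypothetical, not those used in the ASHSD study.

```python
# Illustrative linguistic scale -> triangular fuzzy numbers (l, m, u) on [0, 1].
SCALE = {
    "strongly disagree": (0.0, 0.0, 0.25),
    "disagree":          (0.0, 0.25, 0.5),
    "neutral":           (0.25, 0.5, 0.75),
    "agree":             (0.5, 0.75, 1.0),
    "strongly agree":    (0.75, 1.0, 1.0),
}

def aggregate(responses):
    """Average the fuzzy numbers component-wise across respondents."""
    triples = [SCALE[r] for r in responses]
    n = len(triples)
    return tuple(sum(t[i] for t in triples) / n for i in range(3))

def defuzzify(tri):
    """Centroid defuzzification of a triangular fuzzy number: (l + m + u) / 3."""
    l, m, u = tri
    return (l + m + u) / 3

responses = ["agree", "strongly agree", "neutral", "agree"]
score = defuzzify(aggregate(responses))
```

The defuzzified score is a single crisp value on [0, 1] that can then be compared across evaluation criteria or user groups.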
Procedia PDF Downloads 370
13430 Groupthink: The Dark Side of Team Cohesion
Authors: Farhad Eizakshiri
Abstract:
The potential for groupthink to explain the deterioration of decision-making ability within a unitary team, and thus to cause poor outcomes, has attracted a great deal of attention from a variety of disciplines, including psychology, social and organizational studies, political science, and others. Yet what remains unclear is how and why team members' striving for unanimity and cohesion overrides their motivation to realistically appraise alternative courses of action. In this paper, the findings of a sequential explanatory mixed-methods study are reported, comprising an experiment with thirty groups of three persons each and interviews with all experimental groups. The experiment examined how individuals aggregate their views in order to reach a consensual group decision concerning the completion time of a task. The results indicated that groups made better estimates when members had no interaction than when groups collectively agreed on time estimates. To understand the reasons, the qualitative data and informal observations collected during the task were analyzed through conversation analysis, leading to four reasons that caused teams to neglect divergent viewpoints and reduce the number of ideas being considered: the concurrence-seeking tendency, pressure on dissenters, self-censorship, and the illusion of invulnerability. It is suggested that understanding the dynamics behind these reasons for groupthink will help project teams avoid premature group decisions by enhancing careful evaluation of the available information and analysis of the available decision alternatives and choices.
Keywords: groupthink, group decision, cohesiveness, project teams, mixed-methods research
Procedia PDF Downloads 400
13429 The Sensitivity of Electrical Geophysical Methods for Mapping Salt Stores within the Soil Profile
Authors: Fathi Ali Swaid
Abstract:
Soil salinization is one of the most hazardous phenomena accelerating land degradation processes. It either occurs naturally or is human-induced. High levels of soil salinity negatively affect crop growth and productivity, ultimately leading to land degradation. It is therefore important to monitor and map soil salinity at an early stage in order to enact an effective soil reclamation program that helps lessen or prevent future increases in soil salinity. Geophysical methods have outperformed the traditional approach to assessing soil salinity, offering more informative and rapid assessment techniques for monitoring and mapping. Soil sampling, EM38, and 2D conductivity imaging have been evaluated for their ability to delineate and map the level of salinity variations at Second Ponds Creek. The three methods have shown that the subsoil in the study area is saline. Salt variations were successfully observed with each method. However, EM38 readings and 2D inversion data show a clearer spatial structure than the EC1:5 of soil samples, even though all soil samples, EM38 readings, and 2D imaging were collected at the same locations. Because EM38 readings and 2D imaging data are a weighted average of electrical soil conductance, they are more representative of soil properties than the soil sampling method. The mapping of subsurface soil in the study area has been successful, and resistivity imaging has proven to be an advantage. The soil salinity analysis (EC1:5) corresponds well with the true resistivity, together giving a good picture of soil salinity. The soil salinity clearly indicated by the previous EM38 investigation has been confirmed by the interpretation of the true resistivity in the study area.
Keywords: 2D conductivity imaging, EM38 readings, soil salinization, true resistivity, urban salinity
Procedia PDF Downloads 381
13428 Spatial Organization of Organelles in Living Cells: Insights from Mathematical Modelling
Authors: Congping Lin
Abstract:
Intracellular transport in fungi plays a number of important roles in, e.g., filamentous fungal growth and cellular metabolism. Two basic mechanisms for intracellular transport are motor-driven trafficking along microtubules (MTs) and diffusion. Mathematical modelling has been actively developed to understand such intracellular transport and provides unique insight into cellular complexity. Based on live-cell imaging data from Ustilago hyphal cells, probabilistic models have been developed to study the mechanisms underlying the spatial organization of molecular motors and organelles. In particular, another mechanism, the stochastic motility of dynein motors along MTs, has been found to contribute half of the dynein accumulation at the hyphal tip needed to support early endosome (EE) recycling. EE trafficking not only facilitates the directed motion of peroxisomes but also enhances their diffusive motion. Given the importance of the spatial organization of early endosomes in supporting peroxisome movement, computational and experimental approaches have been combined at the whole-cell level. Results from this interdisciplinary study promise insights into the requirements of other membrane trafficking systems (e.g., in neurons), and may also inform future 'synthetic biology' studies.
Keywords: intracellular transport, stochastic process, molecular motors, spatial organization
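A toy one-dimensional version of the two transport mechanisms (motor-driven runs along MTs plus diffusion) can be sketched as below. The binding probability and parameters are illustrative only, not fitted to the Ustilago imaging data; the point is simply that directed runs produce a net drift on top of the diffusive spread.

```python
import random

def simulate(steps=10000, p_bound=0.3, v=0.5, D=0.05, dt=0.01, seed=42):
    """1D toy model of organelle transport: in each time step the cargo is
    either motor-bound (directed run at speed v along the MT) or unbound
    (pure diffusion with coefficient D)."""
    random.seed(seed)
    x = 0.0
    for _ in range(steps):
        if random.random() < p_bound:
            x += v * dt                                  # directed run
        x += random.gauss(0.0, (2 * D * dt) ** 0.5)      # diffusive step
    return x

displacement = simulate()
# Mean displacement is approximately p_bound * v * steps * dt (= 15 here),
# with diffusive fluctuations of order sqrt(2 * D * steps * dt).
```

Probabilistic models of this flavor, with experimentally informed binding/unbinding rates, are what allow the motor and organelle distributions along the hypha to be predicted and compared with imaging data.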
Procedia PDF Downloads 137
13427 Evaluation of Electro-Flocculation for Biomass Production of Marine Microalgae Phaeodactylum tricornutum
Authors: Luciana C. Ramos, Leandro J. Sousa, Antônio Ferreira da Silva, Valéria Gomes Oliveira Falcão, Suzana T. Cunha Lima
Abstract:
The commercial production of biodiesel using microalgae demands a high energy input for harvesting biomass, making production economically unfeasible. Methods currently used involve mechanical, chemical, and biological procedures. In this work, a flocculation system is presented as a cost- and energy-effective process to increase biomass production of Phaeodactylum tricornutum. This diatom is the only species of its genus, and its fast growth and lipid accumulation ability are of great interest for biofuel production. The alga, selected from the Bank of Microalgae, Institute of Biology, Federal University of Bahia (Brazil), was grown in a tubular reactor with a photoperiod of 12 h (light/dark), providing an irradiance of about 35 μmol photons m⁻²s⁻¹, at a temperature of 22 °C. The medium used for growing the cells was Conway medium with added silica. The algal growth curve was followed by cell counts in a Neubauer chamber and by optical density in a spectrophotometer at 680 nm. Precipitation was carried out at the end of the stationary phase of growth, 21 days after inoculation, using two methods: centrifugation at 5000 rpm for 5 min, and electro-flocculation at 19 EPD and 95 W. After precipitation, cells were frozen at -20 °C and subsequently lyophilized. Biomass obtained by electro-flocculation was approximately four times greater than that achieved by centrifugation. The benefits of this method are that no chemical flocculants need to be added and that similar cultivation conditions can be used for biodiesel production and for pharmacological purposes. The results may contribute to reducing biodiesel production costs using marine microalgae.
Keywords: biomass, diatom, flocculation, microalgae
Procedia PDF Downloads 331
13426 Exploring the 1,3-Dipolar Cycloaddition Reaction between Nitrilimine and 6-Methyl-4,5-dihydropyridazin-3(2H)-one through MEDT and Molecular Docking Analysis
Authors: Zineb Ouahdi
Abstract:
Spirocyclic compound derivatives, with their unique heterocyclic motifs, serve as a continual source of inspiration in the pursuit of potential therapeutic agents. These compounds are diverse in their chemical structures; some have fully saturated skeletons, while others are partially unsaturated. Nevertheless, they share a characteristic feature with natural products: the presence of at least one heteroatom in one of their rings. The inclusion of a C=O dipolarophile in pyridazinones makes them attractive substrates for 1,3-dipolar cycloaddition reactions, the focal point of our study. Our research involved a detailed theoretical investigation of the reaction between ethyl (Z)-2-bromo-2-(2-(p-tolyl)hydrazono)acetate and 6-methyl-4,5-dihydropyridazin-3(2H)-one, carried out with the DFT/B3LYP/6-31G(d,p) method with the aim of illuminating the chemical pathway of this reaction. The chemical reactivity theories used for this purpose included FMO, TS, and the local and global indices derived from conceptual DFT. The theoretical framework outlined in this study allowed us to propose a reaction mechanism for the cycloaddition reactions. It also enabled the identification of the potential activities of the analyzed compounds (P1, P2, P3, P4, P5, and P6) against the main protease of the coronavirus disease (COVID-19), achieved with various computational tools, including AutoDock Tools, AutoDock Vina, AutoDock 4, and PyRx.
Keywords: MEDT, pyridazine, cycloaddition, FMO, DFT, docking
Procedia PDF Downloads 107
13425 HcDD: The Hybrid Combination of Disk Drives in Active Storage Systems
Authors: Shu Yin, Zhiyang Ding, Jianzhong Huang, Xiaojun Ruan, Xiaomin Zhu, Xiao Qin
Abstract:
Since large-scale and data-intensive applications have been widely deployed, there is a growing demand for high-performance storage systems to support them. Compared with traditional storage systems, next-generation systems will embrace dedicated processors to reduce the computational load of host machines and will feature hybrid combinations of different storage devices. The advent of the flash-memory-based solid state disk has played a critical role in revolutionizing the storage world. However, instead of simply replacing the traditional magnetic hard disk with the solid state disk, it is believed that finding a complementary approach that incorporates both of them is more challenging and attractive. This paper explores the idea of active storage, an emerging storage configuration, in terms of its architecture and design, its parallel processing capability, its cooperation with other machines in a cluster computing environment, and a disk configuration based on a hybrid combination of different types of disk drives. Experimental results indicate that the proposed HcDD achieves better I/O performance and a longer storage system lifespan.
Keywords: parallel storage system, hybrid storage system, data-intensive, solid state disks, reliability
Procedia PDF Downloads 453
13424 Automatic Tagging and Accuracy in Assamese Text Data
Authors: Chayanika Hazarika Bordoloi
Abstract:
This paper is an attempt to work on a highly inflectional language, Assamese, one of the national languages of India, for which very little computational research has been achieved. Building a language processing tool for a natural language is not straightforward, as the standard and the language representation change at various levels. This paper presents the inflectional suffixes of Assamese verbs and shows how statistical tools, along with linguistic features, can improve tagging accuracy. A conditional random fields (CRF) tool was used to automatically tag and train on the text data; accuracy improved after linguistic features were fed into the training data. Because Assamese is a highly inflectional language, standardizing its morphology is challenging. Inflectional suffixes are used as features of the text data. In order to analyze the inflections of Assamese word forms, a list comprising all possible suffixes that the various categories can take was prepared. Assamese words can be classified into inflected classes (noun, pronoun, adjective, and verb) and uninflected classes (adverb and particle). The corpus used for this morphological analysis contains a very large number of tokens. It is a mixed corpus, and it has given satisfactory accuracy. The accuracy rate of the tagger gradually improved with the modified training data.
Keywords: CRF, morphology, tagging, tagset
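The kind of suffix features described above can be sketched as the feature-extraction step that a CRF toolkit (e.g. CRF++ or sklearn-crfsuite) would consume. The romanized tokens below are placeholders, not actual corpus data, and the feature names are illustrative.

```python
def suffix_features(word, max_len=4):
    """Character-suffix features of the kind fed to a CRF tagger for a
    highly inflectional language: the last 1..max_len characters."""
    feats = {"word": word.lower(), "length": len(word)}
    for n in range(1, min(max_len, len(word)) + 1):
        feats[f"suffix{n}"] = word[-n:]
    return feats

def sentence_features(tokens):
    """One feature dict per token, adding neighbouring-word context features."""
    rows = []
    for i, tok in enumerate(tokens):
        f = suffix_features(tok)
        f["prev"] = tokens[i - 1] if i > 0 else "<BOS>"
        f["next"] = tokens[i + 1] if i < len(tokens) - 1 else "<EOS>"
        rows.append(f)
    return rows

# placeholder romanized tokens, not real corpus data
feats = sentence_features(["mānuhe", "bhāt", "khāy"])
```

A CRF trained on such dicts can weight each suffix feature per tag, which is what lets the inflectional suffix list improve accuracy over word identity alone.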
Procedia PDF Downloads 198
13423 A Computational Fluid Dynamics Study of Turbulence Flow and Parameterization of an Aerofoil
Authors: Mohamed Z. M. Duwahir, Shian Gao
Abstract:
The main objective of this project was to introduce and test a new scheme for the parameterization of subsonic aerofoils, using a function called the Shape Function. Python programming was used to create an interactive environment for generating aerofoil geometry with the NACA and Shape Function methodologies. Two aerofoils, NACA 0012 and NACA 1412, were generated using this function. The accuracy of the Shape Function scheme was tested by least-squares fitting in Python and by CFD modelling of the aerofoils in Fluent. NACA 0012 (a symmetrical aerofoil) was better approximated by the Shape Function than NACA 1412 (a cambered aerofoil). The second part of the project compared two turbulence models, k-ε and Spalart-Allmaras (SA), in Fluent by modelling the aerofoils NACA 0012 and NACA 1412 at a Reynolds number of 3 × 10⁶. It was shown that SA modelling is better suited for aerodynamic purposes. The computed coefficients of lift (Cl) and drag (Cd) were compared with empirical wind tunnel data over a range of angles of attack (AOA). As a further step, this project involved drawing and meshing 3D wings in Gambit. The 3D wing flow was solved and compared with the 2D aerofoil section results and wind tunnel data.
Keywords: CFD simulation, shape function, turbulent modelling, aerofoil
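For reference, the baseline NACA 4-digit construction that the Shape Function scheme is fitted against can be sketched as follows (the Shape Function itself is not reproduced here). The standard thickness polynomial and camber line give the upper and lower surfaces directly from the 4-digit code.

```python
import math

def naca4(code, n=101):
    """Upper/lower surface ordinates of a NACA 4-digit aerofoil
    (e.g. '0012', '1412'), unit chord, open trailing edge."""
    m, p, t = int(code[0]) / 100, int(code[1]) / 10, int(code[2:]) / 100
    xs = [i / (n - 1) for i in range(n)]

    def yt(x):  # half-thickness distribution
        return 5 * t * (0.2969 * math.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                        + 0.2843 * x**3 - 0.1015 * x**4)

    def yc(x):  # mean camber line (zero for symmetric sections)
        if p == 0:
            return 0.0
        if x < p:
            return m / p**2 * (2 * p * x - x**2)
        return m / (1 - p)**2 * ((1 - 2 * p) + 2 * p * x - x**2)

    return [(x, yc(x) + yt(x), yc(x) - yt(x)) for x in xs]

pts = naca4("0012")
max_half_thickness = max(upper for _, upper, _ in pts)  # ~0.06, i.e. 12% chord
```

(For simplicity the thickness is applied vertically rather than perpendicular to the camber line, which is a common small-camber approximation.)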
Procedia PDF Downloads 361
13422 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings
Authors: Jude K. Safo
Abstract:
Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning (relative to image, text, and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use cases like supply-chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we observe in Large Language Models. Notable attempts such as TransE, TransR, and other prominent industry standards have shown peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited, in scope, to next-node/link predictions. Traditional linear methods like Tucker, CP, PARAFAC, and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for a linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures, and we demonstrate the model's performance on the WN18 benchmark. The model does not rely on Large Language Models (LLMs), though the applications are certainly relevant there as well.
Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics
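For context, the TransE baseline mentioned above models a true triple (h, r, t) as h + r ≈ t, so a triple's plausibility is the negative distance ||h + r − t||. A minimal sketch with random embeddings (the entity names are purely illustrative):

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE plausibility score: negative L1/L2 distance ||h + r - t||.
    Training pushes true triples toward score 0 and corrupt ones below."""
    return -np.linalg.norm(h + r - t, ord=norm)

rng = np.random.default_rng(0)
dim = 8
paris, france, tokyo = rng.normal(size=(3, dim))

# For illustration, construct the ideal relation vector for one true triple.
capital_of = france - paris

true_score = transe_score(paris, capital_of, france)   # ≈ 0 by construction
false_score = transe_score(tokyo, capital_of, france)  # strictly worse
```

Link prediction then amounts to ranking candidate tails t by this score for a query (h, r, ?), which is the "next node/link prediction" scope the abstract refers to.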
Procedia PDF Downloads 72
13421 Detecting Anomalous Matches: An Empirical Study from National Basketball Association
Authors: Jacky Liu, Dulani Jayasuriya, Ryan Elmore
Abstract:
Match fixing and anomalous sports events have increasingly threatened the integrity of professional sports, prompting concerns about existing detection methods. This study addresses the limitations of prior research on match-fixing detection, improving the identification of potentially fraudulent matches by incorporating advanced anomaly detection techniques. We develop a novel method to identify anomalous matches and player performances by examining series of matches, such as playoffs. Additionally, we investigate bettors' potential profits when avoiding anomalous matches and explore the factors behind unusual player performances. Our literature review covers match-fixing detection, match outcome forecasting models, and anomaly detection methods, underscoring current limitations and proposing a new sports anomaly detection method. Our findings reveal anomalous series in the 2022 NBA playoffs, with the Phoenix Suns vs. Dallas Mavericks series having the lowest natural occurrence probability. We identify abnormal player performances, and bettors' profits decrease significantly when post-season matches are included. This study contributes by developing a new approach to detecting anomalous matches and player performances and by assisting investigators in identifying responsible parties. While we cannot conclusively establish the reasons behind unusual player performances, our findings suggest factors such as team financial difficulties, executive mismanagement, and individual player contract issues.
Keywords: anomaly match detection, match fixing, match outcome forecasting, problematic players identification
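The notion of a series' "natural occurrence probability" can be illustrated with a simple binomial baseline: the chance of a best-of-7 series ending with a given score, assuming independent games with a fixed per-game win probability. The study's actual model is richer than this sketch.

```python
from math import comb

def series_prob(wins=4, losses=2, p=0.5):
    """Probability a team wins a best-of-7 series with exactly this score,
    given an independent per-game win probability p: it must win the final
    game and exactly wins-1 of the preceding wins+losses-1 games."""
    games_before = wins + losses - 1
    return comb(games_before, wins - 1) * p**wins * (1 - p)**losses

prob_4_2 = series_prob(4, 2, 0.5)   # 10 * 0.5**6 = 0.15625
```

A series whose observed outcome (score line, plus within-game statistics in the full method) has very low probability under such a baseline is flagged for closer inspection.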
Procedia PDF Downloads 84
13420 Architecture for QoS Based Service Selection Using Local Approach
Authors: Gopinath Ganapathy, Chellammal Surianarayanan
Abstract:
Services are growing rapidly, and they are generally aggregated into composite services to accomplish complex business processes. Several services may offer the same required function for a particular task in a composite service, so a choice has to be made among functionally similar alternatives. Quality of Service (QoS) serves as the discriminating factor in selecting which component services will satisfy the quality requirements of a user during service composition. There are two categories of approaches to QoS-based service selection, namely global and local approaches. Global approaches are known to be NP-hard in time and offer poor scalability in large-scale composition. As an alternative to global methods, local selection methods, which reduce the search space by breaking the large, complex problem of selecting services for the whole workflow into independent subproblems of selecting services for individual tasks, are emerging. In this paper, a distributed architecture for selecting services based on QoS using local selection is presented, with an overview of the local selection methodology. The architecture describes the core components needed to implement the local approach, namely the selection manager and the QoS manager, and their functions. The selection manager consists of two components: a constraint decomposer, which decomposes the given global (workflow-level) constraints into local (task-level) constraints, and a service selector, which selects for each task the service with maximum utility that satisfies the corresponding local constraints. The QoS manager manages QoS information at two levels, namely the service-class level and the individual-service level. The architecture serves as an implementation model for local selection.
Keywords: architecture of service selection, local method for service selection, QoS based service selection, approaches for QoS based service selection
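The service selector's per-task step can be sketched as below. The service names, QoS attributes, and utility weights are hypothetical, and the task-level limits are assumed to have already been produced by the constraint decomposer; the point is that each task is solved independently, which is what makes the local approach scale.

```python
def select_locally(tasks, local_constraints, weights):
    """Local QoS-based selection: for each task independently, keep the
    candidates meeting that task's constraints, then pick the one with
    maximum utility (here a weighted sum; lower latency/cost is better)."""
    plan = {}
    for task, candidates in tasks.items():
        limits = local_constraints[task]
        feasible = [s for s in candidates
                    if s["latency"] <= limits["latency"]
                    and s["cost"] <= limits["cost"]]
        if not feasible:
            raise ValueError(f"no service satisfies constraints for {task}")
        def utility(s):
            return -(weights["latency"] * s["latency"] + weights["cost"] * s["cost"])
        plan[task] = max(feasible, key=utility)["name"]
    return plan

# hypothetical candidate services and decomposed task-level constraints
tasks = {
    "payment": [{"name": "svcA", "latency": 120, "cost": 5},
                {"name": "svcB", "latency": 80,  "cost": 9}],
    "shipping": [{"name": "svcC", "latency": 300, "cost": 2},
                 {"name": "svcD", "latency": 150, "cost": 4}],
}
local_constraints = {"payment": {"latency": 200, "cost": 10},
                     "shipping": {"latency": 250, "cost": 5}}
weights = {"latency": 0.01, "cost": 1.0}
plan = select_locally(tasks, local_constraints, weights)
```

Because each task's subproblem is solved in isolation, the cost grows linearly in the number of tasks rather than combinatorially, which is the scalability advantage the paper claims over global optimization.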
Procedia PDF Downloads 428