Search results for: neural tube defects
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2945


395 An Efficient and Low Cost Protocol for Rapid and Mass in vitro Propagation of Hyssopus officinalis L.

Authors: Ira V. Stancheva, Ely G. Zayova, Maria P. Geneva, Marieta G. Hristozkova, Lyudmila I. Dimitrova, Maria I. Petrova

Abstract:

The study describes a highly efficient and low-cost protocol for rapid and mass in vitro propagation of the medicinal and aromatic plant species Hyssopus officinalis L. (Lamiaceae). Hyssop is an important aromatic herb valued for its medicinal properties, including antioxidant, anti-inflammatory, and antimicrobial activity. The protocol for large-scale multiplication of this aromatic plant was developed using young stem tip explants. The explants were sterilized with 0.04% mercuric chloride (HgCl₂) solution for 20 minutes and washed three times with sterile distilled water over 15 minutes. The culture media were full- and half-strength Murashige and Skoog (MS) medium containing indole-3-butyric acid; full- and half-strength MS media without auxin were used as controls. For each variant, 20 glass tubes were used, with two tip or nodal explants inoculated per tube. Maximum shoot and root numbers were both obtained on half-strength MS medium supplemented with 0.1 mg L⁻¹ indole-3-butyric acid after four weeks of culture. The number of shoots per explant and shoot height were recorded, and data on rooting percentage, the number of roots per plant, and root length were collected after the same culture period. The highest survival rate for this medicinal plant, 85%, was recorded in a mixture of soil, sand, and perlite (2:1:1 v/v/v); this mixture was the most suitable for acclimatization of the propagated plants. Ex vitro acclimatization was carried out at 24±1 °C and 70% relative humidity under a 16 h photoperiod (50 μmol m⁻²s⁻¹). After the adaptation period, all plants were transferred to the field, where they flowered within three months of transplantation. No phenotypic variations were observed in the acclimatized plants. An average of 90% of the acclimatized plants survived after transfer to the field, and all in vitro propagated plants displayed normal development under field conditions. 
The developed in vitro techniques could provide a promising alternative tool for large-scale propagation, increasing the number of genetically uniform plants for field cultivation. Acknowledgments: This study was conducted with financial support from the National Science Fund at the Bulgarian Ministry of Education and Science, Project DN06/7 17.12.16.

Keywords: Hyssopus officinalis L., in vitro culture, micropropagation, acclimatization

Procedia PDF Downloads 303
394 Develop a Conceptual Data Model of Geotechnical Risk Assessment in Underground Coal Mining Using a Cloud-Based Machine Learning Platform

Authors: Reza Mohammadzadeh

Abstract:

The major challenges in geotechnical engineering in underground spaces arise from uncertainties and differing probabilities. The collection, collation, and collaboration of existing data, so that they can be incorporated into analysis and design for a given prospect evaluation, offer a reliable and practical problem-solving method under uncertainty. Machine learning (ML) is a subfield of artificial intelligence in statistical science that applies different techniques (e.g., regression, neural networks, support vector machines, decision trees, random forests, genetic programming) to data in order to learn and improve from them automatically, without being explicitly programmed, and to make decisions and predictions. In this paper, a conceptual database schema of geotechnical risks in underground coal mining, based on a cloud system architecture, has been designed. A new approach to risk assessment using a three-dimensional risk matrix supported by the level of knowledge (LoK) is proposed in this model. Subsequently, the stages of the model workflow methodology are described. To train the data and deploy the LoK models, an ML platform has been implemented: IBM Watson Studio, a leading data science tool and data-driven cloud-integration ML platform, is employed in this study. As a use case, a data set of geotechnical hazards and risk assessments in underground coal mining was prepared to demonstrate the performance of the model, and the results are outlined accordingly.
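The abstract does not spell out how the level of knowledge enters the risk matrix. One plausible reading, sketched below purely as an assumption (the scales, factors, and thresholds are illustrative, not the paper's model), is that the usual likelihood × consequence score is adjusted by an LoK factor, so that sparse knowledge inflates the assessed risk:

```python
# Hypothetical three-dimensional risk matrix: the conventional 2-D
# likelihood x consequence score is scaled by a level-of-knowledge (LoK)
# factor. All values below are illustrative assumptions.

LOK_FACTOR = {1: 1.5, 2: 1.25, 3: 1.0, 4: 0.85}  # low knowledge inflates risk

def risk_score(likelihood: int, consequence: int, lok: int) -> float:
    """likelihood/consequence on 1-5 scales, lok on a 1-4 scale."""
    base = likelihood * consequence            # conventional 2-D matrix score
    return base * LOK_FACTOR[lok]              # third dimension: knowledge

def risk_class(score: float) -> str:
    """Bin the adjusted score into a qualitative class (thresholds assumed)."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Example: a likely (4), damaging (4) roof-fall hazard assessed with sparse
# site data (LoK = 1) is escalated relative to its 2-D score of 16.
print(risk_class(risk_score(4, 4, 1)))  # score 24.0 -> "high"
```

The design choice here is that LoK acts multiplicatively, so uncertainty alone can push a borderline hazard into a higher class.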

Keywords: data model, geotechnical risks, machine learning, underground coal mining

Procedia PDF Downloads 258
393 R-Killer: An Email-Based Ransomware Protection Tool

Authors: B. Lokuketagoda, M. Weerakoon, U. Madushan, A. N. Senaratne, K. Y. Abeywardena

Abstract:

Ransomware has become a common threat in the past few years, and recent threat reports show growth in Ransomware infections. Researchers have identified different variants of Ransomware families since 2015. The user's lack of knowledge about the threat is a major concern, and Ransomware detection methodologies are still maturing across the industry. Email is the easiest method of delivering Ransomware to its victims: uninformed users tend to click on links and attachments without much consideration, assuming the emails are genuine. As a solution, this paper introduces R-Killer, a Ransomware detection tool that can be integrated with existing email services. The Core Detection Engine (CDE) discussed in the paper focuses on separating suspicious samples from emails and handling them until a decision is made regarding the suspicious mail; it is capable of preventing the execution of identified ransomware processes. In addition, the sandboxing and URL-analyzing system can communicate with public threat intelligence services to gather known threat intelligence. R-Killer also has its own mechanism, the Proactive Monitoring System (PMS), which can monitor the processes created by downloaded email attachments and identify potential Ransomware activities. R-Killer is capable of gathering threat intelligence without exposing the user's data to public threat intelligence services, hence protecting the confidentiality of user data.
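The paper does not publish the CDE's internals; as a hedged illustration only, the first step such an engine might take is to separate attachments from an incoming email and check their hashes against a locally cached threat-intelligence list, so that the user's data never leaves the machine (function names and the sample digest below are assumptions, not R-Killer's actual code):

```python
# Illustrative sketch (not the authors' implementation): separate email
# attachments and flag those whose SHA-256 digest matches a local
# threat-intel cache, preserving confidentiality of the message content.
import hashlib
from email import message_from_bytes
from email.message import EmailMessage  # used when composing test messages

# Hypothetical local cache of known-bad SHA-256 digests
# (this one is the digest of the empty byte string, for demonstration).
KNOWN_BAD = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}

def suspicious_attachments(raw_email: bytes) -> list[str]:
    """Return filenames of attachments whose hash matches local intel."""
    msg = message_from_bytes(raw_email)
    flagged = []
    for part in msg.walk():
        if part.get_content_disposition() == "attachment":
            payload = part.get_payload(decode=True) or b""
            digest = hashlib.sha256(payload).hexdigest()
            if digest in KNOWN_BAD:
                flagged.append(part.get_filename() or "unnamed")
    return flagged
```

In a fuller system, anything not matched locally would go on to sandboxing and behavioral monitoring rather than being passed through.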

Keywords: ransomware, deep learning, recurrent neural networks, email, core detection engine

Procedia PDF Downloads 197
392 Preparation of Hydrophobic Silica Membranes Supported on Alumina Hollow Fibers for Pervaporation Applications

Authors: Ami Okabe, Daisuke Gondo, Akira Ogawa, Yasuhisa Hasegawa, Koichi Sato, Sadao Araki, Hideki Yamamoto

Abstract:

Membrane separation draws attention as an energy-saving technology. Pervaporation (PV) with hydrophobic ceramic membranes can separate organic compounds from industrial wastewaters, making it possible to separate organics from azeotropic mixtures and from aqueous solutions. For the PV separation of low concentrations of organics from aqueous solutions, hydrophobic ceramic membranes are expected to offer high separation performance compared with conventional hydrophilic membranes. Membrane separation performance is evaluated using the pervaporation separation index (PSI), which depends on both the separation factor and the permeate flux. Ingenuity is required to increase the PSI, either by increasing the permeate flux without reducing the separation factor or by increasing the separation factor without reducing the flux. A thin separation layer without defects and pinholes is required; in addition, it is known that the flux can be increased without reducing the separation factor by reducing the diffusion resistance of the membrane support. In a previous study, we prepared hydrophobic silica membranes by a molecular-templating sol−gel method using cetyltrimethylammonium bromide (CTAB) to form pores suitable for the passage of organic compounds through the membrane, and we separated low-concentration organics from aqueous solutions by PV using these membranes. In the present study, hydrophobic silica membranes were prepared on a porous alumina hollow-fiber support that is thinner than the previously used alumina support. Ethyl acetate (EA), which is used in large industrial quantities, was selected as the organic substance to be separated. Hydrophobic silica membranes were prepared by dip-coating porous alumina supports bearing a γ-alumina interlayer into a silica sol containing CTAB and vinyltrimethoxysilane (VTMS) as the silica precursor. Membrane thickness increases with the lifting speed of the sol in the dip-coating process. 
Different thicknesses of the γ-alumina layer were prepared by dip-coating the support into a boehmite sol at different lifting speeds (0.5, 1, 3, and 5 mm s⁻¹). Silica layers were subsequently formed by dip-coating using an immersion time of 60 s and a lifting speed of 1 mm s⁻¹. PV measurements of the EA (5 wt.%)/water system were carried out using VTMS hydrophobic silica membranes prepared on γ-alumina layers of different thicknesses. The water and EA fluxes remained substantially constant despite the change in the lifting speed used to form the γ-alumina interlayer. All prepared hydrophobic silica membranes showed a higher PSI than the hydrophobic membranes prepared on the previously used hollow-fiber alumina support.
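The abstract evaluates membranes via the PSI without stating its formula. A commonly used definition is PSI = J(α − 1), with J the total permeate flux and α the separation factor; assuming that definition, the calculation looks as follows (the feed/permeate compositions and flux below are illustrative numbers, not the paper's data):

```python
# Separation factor and pervaporation separation index, assuming the
# common definition PSI = J * (alpha - 1). Numbers are illustrative.

def separation_factor(y_org: float, y_w: float, x_org: float, x_w: float) -> float:
    """alpha = (y_org/y_w) / (x_org/x_w), permeate (y) vs feed (x) weight fractions."""
    return (y_org / y_w) / (x_org / x_w)

def psi(total_flux: float, alpha: float) -> float:
    """Pervaporation separation index, J * (alpha - 1)."""
    return total_flux * (alpha - 1.0)

# Example: a 5 wt.% ethyl acetate feed enriched to 50 wt.% in the permeate.
alpha = separation_factor(0.50, 0.50, 0.05, 0.95)  # ~19
print(psi(0.8, alpha))  # flux in kg m^-2 h^-1 -> PSI ~14.4
```

Note that PSI = J(α − 1) goes to zero for a non-selective membrane (α = 1), which is why raising flux at the expense of selectivity does not necessarily raise the PSI.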

Keywords: membrane separation, pervaporation, hydrophobic, silica

Procedia PDF Downloads 391
391 The Impact of a Prior Haemophilus influenzae Infection on the Incidence of Prostate Cancer

Authors: Maximiliano Guerra, Lexi Frankel, Amalia D. Ardeljan, Sarah Ghali, Diya Kohli, Omar M. Rashid

Abstract:

Introduction/Background: Haemophilus influenzae is present as a commensal organism in the nasopharynx of most healthy adults, from where it can spread to cause both systemic and respiratory tract infections. Pathogenic properties of this bacterium, as well as defects in host defense, may result in the spread of these bacteria throughout the body, producing a proinflammatory state and colonization, particularly in the lungs. Recent studies have failed to determine a link between H. influenzae colonization and prostate cancer, despite previous research demonstrating the presence of proinflammatory states in preneoplastic and neoplastic prostate lesions. Given these contradictory findings, the primary goal of this study was to evaluate the correlation between H. influenzae infection and the incidence of prostate cancer. Methods: To evaluate the incidence of Haemophilus influenzae infection and the subsequent development of prostate cancer, we used data provided by a Health Insurance Portability and Accountability Act (HIPAA) compliant national database, to which we were afforded access by Holy Cross Health, Fort Lauderdale, for the express purpose of academic research. Standard statistical methods were employed, including Pearson's chi-square tests. Results: Between January 2010 and December 2019, the query resulted in 13,691 patients in each of the control and H. influenzae-infected groups, matched by age range and CCI score. In the H. influenzae-infected group, the incidence of prostate cancer was 1.46%, while the incidence in the control group was 4.56%. The observed difference in cancer incidence was statistically significant (p < 2.2x10^-16), suggesting that patients with a history of H. influenzae infection have a lower risk of developing prostate cancer (OR 0.425, 95% CI: 0.382-0.472). 
To account for treatment bias, the data were re-analyzed in two matched groups of 3,208 patients each: patients infected with H. influenzae who received treatment, and controls who used the same medications for a different indication. Patients infected with H. influenzae and treated had a prostate cancer incidence of 2.49%, whereas the control group's incidence was 4.92% (p < 2.2x10^-16; OR 0.455, 95% CI 0.526-0.754), indicating that the initial results were not due to the use of medications. Conclusion: The findings of our study reveal a statistically significant correlation between H. influenzae infection and a decreased incidence of prostate cancer. Our findings suggest that prior infection with H. influenzae may confer some degree of protection and reduce patients' risk of developing prostate cancer. Future research is recommended to further characterize the potential role of Haemophilus influenzae in the pathogenesis of prostate cancer.
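The odds ratios and 95% confidence intervals above come from matched 2×2 tables. A minimal sketch of that standard computation is below; the cell counts are illustrative placeholders, not the study's actual data:

```python
# Odds ratio with a Wald 95% CI from a 2x2 table. Counts are illustrative.
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """a=exposed cases, b=exposed non-cases, c=unexposed cases, d=unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)     # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Example: 200 cancers among 13,691 exposed vs 624 among 13,691 unexposed.
or_, lo, hi = odds_ratio_ci(200, 13491, 624, 13067)
print(round(or_, 3), round(lo, 3), round(hi, 3))
```

With large cells the Wald interval is tight around the point estimate, which is why the reported intervals in studies of this size are narrow.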

Keywords: Haemophilus influenzae, incidence, prostate cancer, risk

Procedia PDF Downloads 186
390 DCDNet: Lightweight Document Corner Detection Network Based on Attention Mechanism

Authors: Kun Xu, Yuan Xu, Jia Qiao

Abstract:

Document detection plays an important role in optical character recognition and text analysis. Because traditional detection methods have weak generalization ability, and deep neural networks have complex structures and large numbers of parameters that cannot be deployed well on mobile devices, this paper proposes a lightweight Document Corner Detection Network (DCDNet). DCDNet is a two-stage architecture. The first stage, with an Encoder-Decoder structure, adopts depthwise separable convolution to greatly reduce the network parameters. By introducing the Feature Attention Union (FAU) module, the second stage enhances the feature information along the spatial and channel dimensions and adaptively adjusts the size of the receptive field to strengthen the feature expression ability of the model. To address the large imbalance in pixel distribution between corner and non-corner pixels, a Weighted Binary Cross Entropy Loss (WBCE Loss) is proposed, which casts corner detection as a classification problem and makes the training process more efficient. To make up for the lack of datasets for document corner detection, a dataset containing 6,620 images, named the Document Corner Detection Dataset (DCDD), was created. Experimental results show that the proposed method obtains fast, stable, and accurate detection results on DCDD.
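The exact form of the WBCE Loss is not given in the abstract. A common way to weight binary cross entropy for class imbalance is to up-weight the rare positive (corner) class; the sketch below assumes that form, with an illustrative weight (not the paper's setting):

```python
# Assumed form of a weighted binary cross entropy: corner pixels (target=1)
# are up-weighted by pos_weight to offset the corner/non-corner imbalance.
import numpy as np

def weighted_bce(pred, target, pos_weight=100.0, eps=1e-7):
    """Mean BCE over pixels, with positive-class terms scaled by pos_weight."""
    pred = np.clip(pred, eps, 1 - eps)        # avoid log(0)
    loss = -(pos_weight * target * np.log(pred)
             + (1 - target) * np.log(1 - pred))
    return loss.mean()

# A confident wrong prediction on the single rare corner pixel dominates
# the loss, whereas the three easy background pixels contribute little.
target = np.array([1.0, 0.0, 0.0, 0.0])
print(weighted_bce(np.array([0.1, 0.1, 0.1, 0.1]), target))
```

The effect is that gradients from corner pixels are no longer drowned out by the overwhelming majority of background pixels.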

Keywords: document detection, corner detection, attention mechanism, lightweight

Procedia PDF Downloads 342
389 Prenatal Use of Serotonin Reuptake Inhibitors (SRIs) and Congenital Heart Anomalies (CHA): An Exploratory Pharmacogenetics Study

Authors: Aizati N. A. Daud, Jorieke E. H. Bergman, Wilhelmina S. Kerstjens-Frederikse, Pieter Van Der Vlies, Eelko Hak, Rolf M. F. Berger, Henk Groen, Bob Wilffert

Abstract:

Prenatal use of SRIs has previously been associated with Congenital Heart Anomalies (CHA). The aim of this study is to explore whether pharmacogenetics plays a role in this teratogenicity, using a gene-environment interaction study. A total of 33 case-mother dyads and 2 mothers-only (children deceased) registered in EUROCAT Northern Netherlands were included in a case-only study. Five case-mother dyads and the two mothers-only were exposed to SRIs (paroxetine = 3, fluoxetine = 2, venlafaxine = 1, paroxetine and venlafaxine = 1) in the first trimester of pregnancy; the remaining 28 case-mother dyads were not exposed to SRIs. Ten genes encoding the enzymes or proteins important in determining fetal exposure to SRIs or their mechanism of action were selected: CYPs (CYP1A2, CYP2C9, CYP2C19, CYP2D6), ABCB1 (placental P-glycoprotein), SLC6A4 (serotonin transporter), and serotonin receptor genes (HTR1A, HTR1B, HTR2A, and HTR3B). All included subjects were genotyped for 58 genetic variations in these ten genes. Logistic regression analyses were performed to determine the interaction odds ratio (OR) between genetic variations and SRI exposure on the risk of CHA. Due to low phenotype frequencies of CYP450 poor metabolizers among exposed cases, the OR could not be calculated. For ABCB1, there was no indication of a change in the risk of CHA with any of the ABCB1 SNPs in the children or their mothers. Several genetic variations of the serotonin transporter and receptors (SLC6A4 5-HTTLPR and 5-HTTVNTR, HTR1A rs1364043, HTR1B rs6296 and rs6298, HTR3B rs1176744) were associated with an increased risk of CHA, but the sample size was too limited to reach statistical significance. For SLC6A4 genetic variations, the mean genetic score of the exposed case-mothers tended to be higher than that of the unexposed mothers (2.5 ± 0.8 and 1.88 ± 0.7, respectively; p = 0.061). 
For SNPs of the serotonin receptors, the mean genetic score for exposed cases (children) tended to be higher than that for unexposed cases (3.4 ± 2.2 and 1.9 ± 1.6, respectively; p = 0.065). This study may be among the first to explore the potential gene-environment interaction between pharmacogenetic determinants and SRI use on the risk of CHA. With such small sample sizes, it was not possible to establish a significant interaction; however, there were indications that serotonin receptor polymorphisms in fetuses exposed to SRIs affect the fetal risk of CHA, which warrants further investigation.
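In a case-only design like this one, the gene-environment interaction OR can be estimated from the genotype × exposure cross-tabulation among cases alone, assuming gene and exposure are independent in the source population. A minimal sketch (the counts are illustrative, not the study's data):

```python
# Case-only interaction odds ratio from a genotype x exposure 2x2 table
# among cases only; valid under gene-environment independence in the
# population. Counts below are illustrative.

def case_only_interaction_or(g_e: int, g_ne: int, ng_e: int, ng_ne: int) -> float:
    """Cells: genotype+/exposed, genotype+/unexposed, genotype-/exposed, genotype-/unexposed."""
    return (g_e * ng_ne) / (g_ne * ng_e)

# Example: risk genotype is over-represented among SRI-exposed cases.
print(case_only_interaction_or(5, 10, 2, 18))  # = 4.5
```

An interaction OR above 1 here would suggest that the genotype amplifies the effect of SRI exposure on CHA risk, which is the kind of signal the study was probing.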

Keywords: gene-environment interaction, heart defects, pharmacogenetics, serotonin reuptake inhibitors, teratogenicity

Procedia PDF Downloads 210
388 Reconstructability Analysis for Landslide Prediction

Authors: David Percy

Abstract:

Landslides are a geologic phenomenon affecting a large number of inhabited places, and they are constantly monitored and studied for the prediction of future occurrences. Reconstructability analysis (RA) is a methodology for extracting informative models from large volumes of data; it works exclusively with discrete data. While RA has been used extensively in medical applications and social science, we are introducing it to the spatial sciences through applications such as landslide prediction. Since RA works exclusively with discrete data, such as soil classification or bedrock type, continuous data, such as porosity, must be binned for inclusion in the model. RA constructs models of the data that pick out the most informative elements, the independent variables (IVs), from each layer that predict the dependent variable (DV), landslide occurrence. Each layer included in the model retains its classification data as its primary encoding. Unlike other machine learning algorithms that force the data into one-hot encoding schemes, RA works directly with the data as it is encoded, with the exception of continuous data, which must be binned. The usual physical and derived layers are included in the model, and testing our results against other published methodologies, such as neural networks, yields similar accuracy with the advantage of a completely transparent model. The results of an RA session with a data set are a report on every combination of variables and the associated probability of landslide events occurring; in this way, every combination of informative states can be examined.
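The binning step mentioned above can be sketched in a few lines; the bin edges here are illustrative placeholders, since the abstract does not state how porosity was discretized:

```python
# Discretizing a continuous raster layer (e.g., porosity) into bin indices
# so it can enter an RA model alongside inherently discrete layers.
# Bin edges are illustrative assumptions.
import numpy as np

def bin_layer(values, edges):
    """Map continuous values to discrete bin indices 0..len(edges)."""
    return np.digitize(values, edges)

porosity = np.array([0.02, 0.11, 0.27, 0.45])
print(bin_layer(porosity, edges=[0.1, 0.2, 0.3]))  # -> [0 1 2 3]
```

After this step every layer is categorical, so RA can enumerate variable combinations directly without any one-hot expansion.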

Keywords: reconstructability analysis, machine learning, landslides, raster analysis

Procedia PDF Downloads 48
387 Biosensor for Determination of Immunoglobulin A, E, G and M

Authors: Umut Kokbas, Mustafa Nisari

Abstract:

Immunoglobulins, also known as antibodies, are glycoprotein molecules produced by activated B cells that have differentiated into plasma cells. Antibodies are critical molecules of the immune response, helping the immune system specifically recognize and destroy antigens such as bacteria, viruses, and toxins. Immunoglobulin classes differ in their biological properties, structures, targets, functions, and distributions. Five major classes of antibodies have been identified in mammals: IgA, IgD, IgE, IgG, and IgM. Evaluation of the immunoglobulin isotype can provide useful insight into the complex humoral immune response, and knowledge of immunoglobulin structure and classes is also important for the selection and preparation of antibodies for immunoassays and other detection applications. The immunoglobulin test measures the level of certain immunoglobulins in the blood. IgA, IgG, and IgM are usually measured together and can thereby provide doctors with important information, especially regarding immune deficiency diseases. Hypogammaglobulinemia (HGG) is one of the main groups of primary immunodeficiency disorders. HGG is caused by various defects in B cell lineage or function that result in low levels of immunoglobulins in the bloodstream. This impairs the body's immune response, causing a wide range of clinical features, from asymptomatic disease to severe and recurrent infections, chronic inflammation, and autoimmunity. Transient hypogammaglobulinemia of infancy (THGI), IgM deficiency (IgMD), Bruton agammaglobulinemia, and IgA deficiency (SIgAD) are a few examples of HGG. Most patients can continue their normal lives by taking prophylactic antibiotics, but patients with severe infections require intravenous immune serum globulin (IVIG) therapy. The IgE level may rise to fight off parasitic infections, and may also signal that the body is overreacting to allergens. 
Also, since the immune response can vary with different antigens, measuring specific antibody levels aids in the interpretation of the immune response after immunization or vaccination. Immune deficiencies usually present in childhood. In immunology and allergy clinics, beyond the classical methods, a technique that is fast, reliable, and, especially for childhood hypogammaglobulinemia, more convenient and less complicated for sampling from children would be more useful for the diagnosis and follow-up of these diseases. In this work, the antibodies were attached to the electrode surface via a poly(hydroxyethyl methacrylamide)-cysteine nanopolymer, and the anodic peak responses obtained in the electrochemical study were evaluated. According to the data obtained, immunoglobulin determination can be made with a biosensor. In further studies, however, it will be useful to develop a medical diagnostic kit through biomedical engineering and to increase its sensitivity.

Keywords: biosensor, immunosensor, immunoglobulin, infection

Procedia PDF Downloads 81
386 Health Trajectory Clustering Using Deep Belief Networks

Authors: Farshid Hajati, Federico Girosi, Shima Ghassempour

Abstract:

We present a Deep Belief Network (DBN) method for clustering health trajectories. A DBN is a deep architecture consisting of a stack of Restricted Boltzmann Machines (RBMs); in a deep architecture, each layer learns more complex features than the previous layers. The proposed method relies on the DBN for clustering without using the back-propagation learning algorithm. The proposed DBN performs better than a comparable deep neural network due to the initialization of the connecting weights. We use the Contrastive Divergence (CD) method for training the RBMs, which increases the performance of the network. The performance of the proposed method is evaluated extensively on the Health and Retirement Study (HRS) database. The University of Michigan Health and Retirement Study is a nationally representative longitudinal study that has surveyed more than 27,000 elderly and near-elderly Americans since its inception in 1992. Participants are interviewed every two years, and data are collected on physical and mental health, insurance coverage, financial status, family support systems, labor market status, and retirement planning. The dataset is publicly available, and we use the RAND HRS version L, an easy-to-use and cleaned-up version of the data. The sample size is 268, and the length of each trajectory is 10; the trajectories are not truncated by death and represent 10 interviews of living participants. Compared to state-of-the-art benchmarks, the experimental results show the effectiveness and superiority of the proposed method in clustering health trajectories.
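The RBM-plus-CD building block can be sketched compactly; stacking several such layers (feeding each layer's hidden activations to the next) yields a DBN. The sketch below is a generic CD-1 training loop on toy binary data, not the authors' architecture or hyperparameters:

```python
# One Restricted Boltzmann Machine trained with Contrastive Divergence
# (CD-1). Stacking such layers gives a DBN whose hidden codes can be
# clustered. Data and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden=4, lr=0.1, epochs=200):
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
    for _ in range(epochs):
        # positive phase: sample hidden units from the data
        p_h = sigmoid(data @ W + b_h)
        h = (rng.random(p_h.shape) < p_h).astype(float)
        # negative phase: one Gibbs step back to visible, then hidden (CD-1)
        p_v = sigmoid(h @ W.T + b_v)
        p_h2 = sigmoid(p_v @ W + b_h)
        # approximate gradient updates
        W += lr * (data.T @ p_h - p_v.T @ p_h2) / len(data)
        b_v += lr * (data - p_v).mean(axis=0)
        b_h += lr * (p_h - p_h2).mean(axis=0)
    return W, b_v, b_h

# Toy binary "trajectories": two repeated patterns the RBM can encode.
X = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 20, dtype=float)
W, b_v, b_h = train_rbm(X)
codes = sigmoid(X @ W + b_h)  # hidden representations used for clustering
print(codes.shape)
```

In the clustering setting described above, a standard algorithm (e.g., k-means) would then be run on the top-layer codes instead of on the raw trajectories.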

Keywords: health trajectory, clustering, deep learning, DBN

Procedia PDF Downloads 356
385 Effects of Implementing the Three-Level Quality Management System on the Construction Quality of Public Works in Taiwan

Authors: Hsin-Hung Lai, Wei Lo

Abstract:

The construction quality of public works, for better or worse, is an important indicator of national economic development and overall construction capability, and it deeply affects the quality of national life. In recent years, a number of scandals involving public construction projects have occurred, and both government agencies and the public now demand stricter construction quality than ever; the three-level construction quality control system implemented by the government has had a profound impact. This study reviews the evolution of the ISO 9000 quality control system and the differences between the quality management practices of many countries and Taiwan's three-level quality control system. We found that projects for enhancing construction quality are dominated by civil organizations in most foreign countries, whereas in Taiwan they are driven by the state: the three-level quality control system and audit mechanism were developed from the ISO system and implemented through legislation. We also explored how such a system, backed by state power, relates to the construction quality of public projects, and the audit results demonstrate its effectiveness in enhancing construction quality. 
Taiwan's three-level quality control system is broadly similar to the quality control systems of many developed countries; however, Taiwan applies it only to public construction projects. The system is promoted to enhance the quality of public works and to establish an effective quality management system that urges, corrects, and prevents quality management defects by contractors, whereas developed countries promote such systems comprehensively across both public and private construction. This study is therefore limited in scope to public construction projects. Most important is the quality awareness of the executor: good or deteriorating quality is not a single event but the outcome of a process extending from demand and feasibility analysis through design, tendering, contracting, construction, inspection, continuous improvement, completion and acceptance, and handover, to meeting the needs of the users; all of these stages are causally related, making quality a systemic problem. The best construction quality can thus be achieved and managed at reasonable cost through comprehensive, preventive thinking. Aggregating the audit results of the past 10 years (2005 to 2015), A-grade results in both central and local agencies increased slightly while B-grade results decreased. Although the levels were not dramatically upgraded, this indicates that contractors' conception of construction quality is improving and that quality is being established at the design stage, which benefits the enhancement of construction quality across public construction projects.

Keywords: ISO 9000, three-level quality control system, audit and review mechanism for construction implementation, quality of construction implementation

Procedia PDF Downloads 329
384 The Classification Accuracy of Finance Data through Hölder Functions

Authors: Yeliz Karaca, Carlo Cattani

Abstract:

This study focuses on the local Hölder exponent as a measure of function regularity for time series related to finance data. The attributes of a finance dataset belonging to 13 countries (India, China, Japan, Sweden, France, Germany, Italy, Australia, Mexico, United Kingdom, Argentina, Brazil, USA) located on 5 different continents (Asia, Europe, Australia, North America, and South America) have been examined. These countries are the ones most affected by the attributes related to financial development, covering the period from 2012 to 2017. Our study is concerned with the most important attributes that have an impact on the financial development of the identified countries. Our method comprises the following stages: (a) among the multifractal methods and Brownian motion Hölder regularity functions (polynomial, exponential), significant and self-similar attributes have been identified; (b) the significant and self-similar attributes have been applied to Artificial Neural Network (ANN) algorithms (Feed Forward Back Propagation (FFBP) and Cascade Forward Back Propagation (CFBP)); (c) the classification accuracy outcomes have been compared with respect to the attributes that affect the countries' financial development. This study has revealed, through the application of ANN algorithms, how the most significant attributes are identified within the relevant dataset via the Hölder functions (polynomial and exponential).
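The abstract does not describe how the local Hölder exponent is estimated. One textbook route, shown here as an illustrative sketch on a synthetic cusp rather than the authors' method or data, uses the scaling of local oscillations, osc_r(t) ~ r^α, and reads α off a log-log regression:

```python
# Rough local Hölder exponent estimate via oscillation scaling:
# osc_r ~ r^alpha, alpha = slope of log(osc) vs log(r).
# Method, window radii, and test signal are illustrative assumptions.
import numpy as np

def local_holder(x, i, radii=(2, 4, 8, 16, 32)):
    """Estimate alpha at index i from oscillations over shrinking windows."""
    oscs = []
    for r in radii:
        w = x[max(0, i - r): i + r + 1]
        oscs.append(w.max() - w.min())        # local oscillation
    slope, _ = np.polyfit(np.log(radii), np.log(oscs), 1)
    return slope

# A cusp |t - 0.5|^0.5 has local Hölder exponent 0.5 at t = 0.5.
t = np.linspace(0, 1, 4097)
x = np.abs(t - 0.5) ** 0.5
print(round(local_holder(x, 2048), 2))
```

Smaller exponents flag rougher, less regular stretches of a series, which is the property the study exploits as a feature for the ANN classifiers.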

Keywords: artificial neural networks, finance data, Hölder regularity, multifractals

Procedia PDF Downloads 236
383 Theoretical Study of Gas Adsorption in Zirconium Clusters

Authors: Rasha Al-Saedi, Anthony Meijer

Abstract:

The development of new porous materials has increased rapidly over the past decade for applications such as catalysis, gas storage, and removal of environmentally unfriendly species, owing to their high surface area and high thermal stability. In this work, a theoretical study of zirconium-based metal-organic framework (MOF) clusters was carried out to determine their potential for adsorption of various guest molecules: CO2, N2, CH4, and H2. The zirconium cluster consists of an inner Zr6O4(OH)4 core in which the triangular faces of the Zr6 octahedron are alternately capped by O and OH groups, bound to nine formate groups and three benzoate linkers. The general formula is [Zr6(μ-O)4(μ-OH)4(HCOO)9((phyO2C)3X)], where X = CH2OH, CH2NH2, CH2CONH2, n(NH2); (n = 1-3). Three types of adsorption sites on the Zr metal center have been studied, named according to the capping groups: the '−O site' (the H of a (μ-OH) site removed and added to a (μ-O) site); the '−OH site' (a (μ-OH) site removed); and the 'void site', where an H2O molecule is removed (a (μ-OH) from one site and an H from another (μ-OH) site), in addition to defect-free versions. A series of investigations was performed to address this issue. First, the density functional theory (DFT) B3LYP method with the 6-311G(d,p) basis set was employed, using the Gaussian 09 package, to evaluate the gas adsorption performance of missing-linker defects in the zirconium cluster. Next, the gas adsorption behaviour on differently functionalised zirconium clusters was studied; the functional groups, as listed above, include amines, alcohol, and amide, in comparison with non-substituted clusters. Then, dispersion-corrected density functional theory (DFT-D) calculations were performed to further understand the enhanced gas binding on zirconium clusters. Finally, the effect of water on CO2 and N2 adsorption was studied. 
The small functionalized Zr clusters were found to yield good CO2 adsorption over N2, CH4, and H2 due to the quadrupole moment of CO2, while N2, CH4, and H2 are weakly polar or non-polar. The adsorption energetics were determined using the dispersion-corrected method, since most of the relevant interactions, for example van der Waals interactions, are missing in the conventional DFT method. The calculated gas binding strengths on the defect-free site are higher than those on the −O site, the −OH site, and the void site; this difference is especially notable for CO2. The enhanced affinity for CO2 of the defect-free versions is most likely due to electrostatic interactions between the negatively charged O of CO2 and the positively charged H of the (μ-OH) metal site. The uptake of the gas molecules is not enhanced in the presence of water, as water binds to the Zr clusters more strongly than the gas species, which is attributed to competition for adsorption sites.
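A calculation at the level named in the abstract is typically set up as a Gaussian 09 input file. The fragment below is an illustrative sketch only: the resource lines, title, and geometry are placeholders, the GD3 dispersion keyword depends on the Gaussian revision, and this is not the authors' actual input:

```
%nprocshared=8
%mem=16GB
#p B3LYP/6-311G(d,p) opt empiricaldispersion=gd3

Illustrative Zr cluster optimization (DFT-D via the GD3 correction)

0 1
[Cartesian coordinates of the Zr6O4(OH)4-based cluster would go here]
```

Binding energies for each adsorption site would then follow from separate single-point calculations on the cluster, the gas molecule, and the cluster-gas complex.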

Keywords: density functional theory, gas adsorption, metal-organic frameworks, molecular simulation, porous materials, theoretical chemistry

Procedia PDF Downloads 174
382 Generalized Synchronization in Systems with a Complex Topology of Attractor

Authors: Olga I. Moskalenko, Vladislav A. Khanadeev, Anastasya D. Koloskova, Alexey A. Koronovskii, Anatoly A. Pivovarov

Abstract:

Generalized synchronization is one of the most intricate phenomena in nonlinear science. It can be observed in systems with both unidirectional and mutual coupling, including complex networks. The phenomenon has a number of practical applications, for example, secure information transmission through a communication channel with a high level of noise. Known methods for secure information transmission require increased privacy of data transmission, which raises the question of observing this phenomenon in systems with a complex chaotic-attractor topology possessing two or more positive Lyapunov exponents. The present report is devoted to the study of this phenomenon in two unidirectionally and mutually coupled dynamical systems in chaotic (one positive Lyapunov exponent) and hyperchaotic (two or more positive Lyapunov exponents) regimes, respectively. As the systems under study, we used two mutually coupled modified Lorenz oscillators and two unidirectionally coupled time-delayed generators. We showed that in both cases the generalized synchronization regime can be detected by calculating the Lyapunov exponents and by the phase tube approach, whereas, due to the complex topology of the attractor, the nearest-neighbor method is misleading. Moreover, the auxiliary system approach, the standard method for observing the synchronous regime, gives incorrect results for mutual coupling. To calculate the Lyapunov exponents in time-delayed systems, we proposed an approach based on a modification of the Gram-Schmidt orthogonalization procedure for the time-delayed setting. We studied in detail the mechanisms leading to the onset of the generalized synchronization regime, paying particular attention to the region where one positive Lyapunov exponent has already become negative while the second one is still positive.
We found intermittency in this region and studied its characteristics. To detect the laminar phase lengths, a method based on the calculation of local Lyapunov exponents has been proposed. The efficiency of the method has been verified using the example of two unidirectionally coupled Rössler systems in the band-chaos regime. We have revealed the main characteristics of the intermittency, i.e., the distribution of laminar phase lengths and the dependence of the mean laminar phase length on the criticality parameter, for all systems studied in the report. This work has been supported by the Russian President's Council grant for the state support of young Russian scientists (project MK-531.2018.2).
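The Lyapunov-exponent machinery referred to above can be illustrated with a generic sketch: evolving an orthonormal tangent-space frame and re-orthogonalizing it with QR decompositions (the numerically stable form of Gram-Schmidt). The example below uses the Hénon map for brevity; it is not the authors' modified procedure for time-delayed systems.

```python
import numpy as np

def henon_lyapunov(n_steps=20000, a=1.4, b=0.3):
    """Estimate the Lyapunov spectrum of the Henon map by evolving an
    orthonormal tangent-space frame and re-orthogonalizing with QR."""
    x, y = 0.1, 0.1
    Q = np.eye(2)                    # orthonormal frame of perturbations
    log_sums = np.zeros(2)
    for _ in range(n_steps):
        J = np.array([[-2.0 * a * x, 1.0],
                      [b, 0.0]])     # Jacobian of the map at (x, y)
        x, y = 1.0 - a * x * x + y, b * x
        Q, R = np.linalg.qr(J @ Q)   # Gram-Schmidt / QR step
        log_sums += np.log(np.abs(np.diag(R)))
    return log_sums / n_steps

lam = henon_lyapunov()   # one positive, one negative exponent: chaos
```

A useful sanity check is that the exponents must sum to the average of ln|det J|, which for the Hénon map is exactly ln b.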

Keywords: complex topology of attractor, generalized synchronization, hyperchaos, Lyapunov exponents

Procedia PDF Downloads 261
371 Human Immunodeficiency Virus (HIV) Test Predictive Modeling and Identification of Determinants of HIV Testing for People Aged above Fourteen Years in Ethiopia Using Data Mining Techniques: EDHS 2011

Authors: S. Abera, T. Gidey, W. Terefe

Abstract:

Introduction: Testing for HIV is the key entry point to HIV prevention, treatment, care, and support services. Predictive data mining techniques can therefore greatly help to analyze and discover new patterns in large datasets such as the EDHS 2011 data. Objectives: The objective of this study is to build a predictive model for HIV testing and identify determinants of HIV testing for adults above fourteen years of age using data mining techniques. Methods: The Cross-Industry Standard Process for Data Mining (CRISP-DM) was used to build the predictive model for HIV testing and to explore association rules between HIV testing and the selected attributes among adult Ethiopians. Decision trees, Naïve Bayes, logistic regression, and artificial neural networks were the data mining techniques used to build the predictive models. Results: The target dataset contained 30,625 study participants, of whom 16,515 (53.9%) were women. Nearly three-fifths, 17,719 (58%), had never been tested for HIV, while the remaining 12,906 (42%) had been tested. Ethiopians with a higher wealth index, higher educational level, age 20 to 29 years, no stigmatizing attitude towards HIV-positive persons, urban residence, HIV-related knowledge, exposure to family planning information on mass media, and knowledge of a place to get tested for HIV showed higher rates of HIV testing. Conclusion and Recommendation: Public health interventions should consider the identified determinants to encourage people to get tested for HIV.
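As a minimal sketch of the classification step, the snippet below fits a logistic regression by gradient descent on synthetic binary features standing in for the determinants above (urban residence, education, HIV knowledge, ...). The data and effect sizes are invented for illustration; they are not the EDHS records or the paper's fitted models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic survey-like records: 4 binary determinants per respondent
n = 2000
X = rng.integers(0, 2, size=(n, 4)).astype(float)
true_w = np.array([1.2, 0.9, 0.7, 0.5])          # illustrative effects
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ true_w - 1.5)))).astype(float)

# Logistic regression trained with plain batch gradient descent
w, b = np.zeros(4), 0.0
for _ in range(3000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * np.mean(p - y)

p = 1 / (1 + np.exp(-(X @ w + b)))
acc = np.mean((p > 0.5) == (y > 0.5))            # training accuracy
```

The fitted coefficients then play the role of the "determinants": a large positive weight marks a feature associated with a higher probability of having been tested.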

Keywords: data mining, HIV, testing, Ethiopia

Procedia PDF Downloads 477
380 Wind Speed Forecasting Based on Historical Data Using Modern Prediction Methods in Selected Sites of Geba Catchment, Ethiopia

Authors: Halefom Kidane

Abstract:

This study aims to assess the wind resource potential and characterize urban wind patterns in Hawassa City, Ethiopia. The estimation and characterization of wind resources are crucial for sustainable urban planning, renewable energy development, and climate change mitigation strategies. A secondary data collection method was used to carry out the study. The data collected at 2 meters were analyzed statistically and extrapolated to the standard heights of 10 meters and 30 meters using the power law equation. The standard deviation method was used to calculate the Weibull scale and shape factors. From the analysis, the maximum and minimum mean daily wind speeds at 2 meters were 1.33 m/s and 0.05 m/s in 2016, 1.67 m/s and 0.14 m/s in 2017, and 1.61 m/s and 0.07 m/s in 2018, respectively. The maximum monthly average wind speed of Hawassa City at 2 meters in 2016 was observed in December, at around 0.78 m/s; in 2017, the maximum was recorded in January, at 0.80 m/s; and in 2018, June had the maximum, at 0.76 m/s. On the other hand, October was the month with the minimum mean wind speed in all years, with values of 0.47 m/s in 2016, 0.47 m/s in 2017, and 0.34 m/s in 2018. The annual mean wind speed at 2 meters was 0.61 m/s in 2016, 0.64 m/s in 2017, and 0.57 m/s in 2018. From the extrapolation, the annual mean wind speeds for 2016, 2017, and 2018 were 1.17 m/s, 1.22 m/s, and 1.11 m/s at a height of 10 meters, and 3.34 m/s, 3.78 m/s, and 3.01 m/s at a height of 30 meters, respectively. Thus, the site falls primarily into wind speed class I, even at the extrapolated heights.
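The two calculations named in the abstract, the vertical power-law extrapolation and the standard-deviation method for the Weibull shape and scale factors, can be sketched as follows. The shear exponent of 1/7 is a common open-terrain default, not a value reported in the study, and the standard deviation is illustrative, so the numbers differ from the abstract's.

```python
import math

def power_law(v_ref, h_ref, h, alpha=1.0 / 7.0):
    """Power-law wind profile: v(h) = v_ref * (h / h_ref) ** alpha."""
    return v_ref * (h / h_ref) ** alpha

def weibull_params(mean_v, std_v):
    """Standard-deviation method for the Weibull factors:
    shape k = (std/mean)^-1.086, scale c = mean / Gamma(1 + 1/k)."""
    k = (std_v / mean_v) ** -1.086
    c = mean_v / math.gamma(1.0 + 1.0 / k)
    return k, c

v10 = power_law(0.61, 2.0, 10.0)   # 2016 annual mean extrapolated to 10 m
k, c = weibull_params(0.61, 0.30)  # std of 0.30 m/s is assumed
```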

Keywords: artificial neural networks, forecasting, min-max normalization, wind speed

Procedia PDF Downloads 59
379 Effects of Plasma Technology in Biodegradable Films for Food Packaging

Authors: Viviane P. Romani, Bradley D. Olsen, Vilásia G. Martins

Abstract:

Biodegradable films for food packaging have gained growing attention due to the environmental pollution caused by synthetic films and the interest in better use of natural resources. Important research advances have been made in the development of materials from proteins, polysaccharides, and lipids. However, the commercial use of this new generation of sustainable materials for food packaging is still limited due to their low mechanical and barrier properties, which could compromise food quality and safety. Thus, strategies to improve the performance of these materials have been tested, such as chemical modifications, incorporation of reinforcing structures, and others. Cold plasma is a versatile, fast, and environmentally friendly technology. It consists of a partially ionized gas containing free electrons, ions, radicals, and neutral particles able to react with polymers and initiate different reactions, leading to polymer degradation, functionalization, etching, and/or cross-linking. In the present study, biodegradable films from fish protein prepared through the casting technique were plasma-treated using AC glow discharge equipment. The reactor was preliminarily evacuated to ~7 Pa, and the films were exposed to air plasma for 2, 5, and 8 min. The films were evaluated for their mechanical and water vapor permeability (WVP) properties, and changes in the protein structure were observed using scanning electron microscopy (SEM) and X-ray diffraction (XRD). Potential cross-links and the elimination of surface defects by etching might be the reason for the observed increase in tensile strength and decrease in elongation at break. Among the plasma exposure times tested, longer exposures produced no further differences. The X-ray pattern showed a broad peak at 2θ = 19.51°, which corresponds to a distance of 4.6 Å by applying Bragg's law. This distance corresponds to the average backbone distance within the α-helix.
Thus, the changes observed in the films might indicate that the helical configuration of the fish protein was disturbed by the plasma treatment. SEM images showed surface damage in the films after 5 and 8 min of plasma treatment, indicating that 2 min was the most adequate treatment time. It was verified that plasma removes water from the films, since a weight loss of 4.45% was recorded for films treated for 2 min. However, after 24 h at 50% relative humidity, the lost water was recovered. WVP increased from 0.53 to 0.65 g.mm/h.m².kPa after plasma treatment for 2 min, which is desirable for some food applications that require water passage through the packaging. In general, plasma technology affects the properties and structure of fish protein films. Since this technology changes the surface of polymers, these films might be used to develop multilayer materials, as well as to incorporate active substances on the surface to obtain active packaging.
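The d-spacing quoted above follows from Bragg's law, n·λ = 2d·sin θ. The abstract does not state the X-ray wavelength, so the quick check below assumes the common Cu Kα source (λ = 1.5406 Å), which reproduces the reported value of roughly 4.6 Å.

```python
import math

lam = 1.5406                        # angstrom; assumed Cu K-alpha source
theta = math.radians(19.51 / 2.0)   # Bragg angle is half of 2-theta
d = lam / (2.0 * math.sin(theta))   # first-order (n = 1) spacing
```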

Keywords: fish protein films, food packaging, improvement of properties, plasma treatment

Procedia PDF Downloads 152
378 Alpha: A Groundbreaking Avatar Merging User Dialogue with OpenAI's GPT-3.5 for Enhanced Reflective Thinking

Authors: Jonas Colin

Abstract:

Standing at the vanguard of AI development, Alpha represents an unprecedented synthesis of logical rigor and human abstraction, meticulously crafted to mirror the user's unique persona and personality, a feat previously unattainable in AI development. Alpha, an avant-garde artefact in the realm of artificial intelligence, epitomizes a paradigmatic shift in personalized digital interaction, amalgamating user-specific dialogic patterns with the sophisticated algorithmic prowess of OpenAI's GPT-3.5 to engender a platform for enhanced metacognitive engagement and individualized user experience. Underpinned by a sophisticated algorithmic framework, Alpha integrates vast datasets through a complex interplay of neural network models and symbolic AI, facilitating a dynamic, adaptive learning process. This integration enables the system to construct a detailed user profile, encompassing linguistic preferences, emotional tendencies, and cognitive styles, tailoring interactions to align with individual characteristics and conversational contexts. Furthermore, Alpha incorporates advanced metacognitive elements, enabling real-time reflection and adaptation in communication strategies. This self-reflective capability ensures continuous refinement of its interaction model, positioning Alpha not just as a technological marvel but as a harbinger of a new era in human-computer interaction, where machines engage with us on a deeply personal and cognitive level, transforming our interaction with the digital world.

Keywords: chatbot, GPT 3.5, metacognition, symbiosis

Procedia PDF Downloads 50
377 Development of Fault Diagnosis Technology for Power System Based on Smart Meter

Authors: Chih-Chieh Yang, Chung-Neng Huang

Abstract:

In power systems, improving fault diagnosis technology for transmission lines has always been a primary goal of grid operators. In recent years, with the rise of green energy, the addition of various kinds of distributed generation has also affected the stability of the power system. Smart meters provide data recording and bidirectional transmission, while the adaptive neuro-fuzzy inference system (ANFIS) is an artificial intelligence technique with learning and estimation capabilities. For the transmission network, in order to avoid misjudgment of fault type and location due to the input of these unstable power sources, a method combining the above advantages of smart meters and ANFIS for identifying fault types and locations is proposed in this study. In ANFIS training, the bus voltage and current information collected by smart meters is trained through the ANFIS tool in MATLAB to generate fault codes that identify different fault types and locations. In addition, due to the uncertainty of distributed generation, a wind power system is added to the transmission network to verify the diagnostic accuracy of the method. Simulation results show that the proposed method can correctly and efficiently identify the fault type and location, and can deal with the interference caused by the addition of unstable power sources.

Keywords: ANFIS, fault diagnosis, power system, smart meter

Procedia PDF Downloads 127
376 MIMIC: A Multi Input Micro-Influencers Classifier

Authors: Simone Leonardi, Luca Ardito

Abstract:

Micro-influencers are effective elements in the marketing strategies of companies and institutions because of their capability to create a hyper-engaged audience around a specific topic of interest. In recent years, many scientific approaches and commercial tools have handled the task of detecting this type of social media user. These strategies adopt solutions ranging from rule-based machine learning models to deep neural networks and graph analysis on text, images, and account information. This work compares the existing solutions and proposes an ensemble method to generalize them across different input data and social media platforms. The deployed solution combines deep learning models on unstructured data with statistical machine learning models on structured data. We retrieve both social media account information and multimedia posts from Twitter and Instagram. These data are mapped into feature vectors for an eXtreme Gradient Boosting (XGBoost) classifier. Sixty different topics have been analyzed to build a rule-based gold standard dataset and to compare the performance of our approach against baseline classifiers. We demonstrate the effectiveness of our work by comparing the accuracy, precision, recall, and F1 score of our model across different configurations and architectures. We obtained an accuracy of 0.91 with our best performing model.
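For reference, the comparison metrics mentioned above reduce to simple functions of confusion-matrix counts; a minimal sketch with invented counts (not results from the paper):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 from true/false positives and false
    negatives of a binary micro-influencer / non-influencer split."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=5)
```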

Keywords: deep learning, gradient boosting, image processing, micro-influencers, NLP, social media

Procedia PDF Downloads 162
375 An Investigation into Computer Vision Methods to Identify Material Other Than Grapes in Harvested Wine Grape Loads

Authors: Riaan Kleyn

Abstract:

Mass wine production companies across the globe are supplied with grapes by winegrowers who predominantly utilize mechanical harvesting machines. Mechanical harvesting accelerates the rate at which grapes are harvested, allowing grapes to be delivered faster to meet the demands of wine cellars. The disadvantage of mechanical harvesting is the inclusion of material other than grapes (MOG) in the harvested wine grape loads arriving at the cellar, which degrades the quality of the wine that can be produced. Currently, wine cellars have no method to determine the amount of MOG present within wine grape loads. This paper seeks an optimal computer vision method capable of detecting the amount of MOG within a wine grape load. A MOG detection method would encourage winegrowers to deliver MOG-free wine grape loads to avoid penalties, which would indirectly enhance the quality of the wine produced. Traditional image segmentation methods were compared to deep learning segmentation methods on images of wine grape loads captured at a wine cellar. The Mask R-CNN model with a ResNet-50 convolutional neural network backbone emerged as the optimal method in this study for determining the amount of MOG in an image of a wine grape load. Furthermore, a statistical analysis was conducted to determine how the MOG on the surface of a grape load relates to the mass of MOG within the corresponding grape load.
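Once a segmentation model has labelled every pixel, the surface MOG content reduces to a pixel count over the load region. A minimal sketch, with an invented label scheme (0 = background, 1 = grapes, 2 = MOG) and a toy mask rather than real model output:

```python
import numpy as np

def mog_fraction(mask):
    """Fraction of load pixels labelled as MOG in a per-pixel mask."""
    load_pixels = np.count_nonzero(np.isin(mask, (1, 2)))
    return np.count_nonzero(mask == 2) / load_pixels

mask = np.array([[0, 1, 1],
                 [1, 2, 1],
                 [0, 1, 2]])   # toy 3x3 "image"
frac = mog_fraction(mask)      # 2 MOG pixels out of 7 load pixels
```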

Keywords: computer vision, wine grapes, machine learning, machine harvested grapes

Procedia PDF Downloads 75
374 Preprocessing and Fusion of Multiple Representations of Finger Vein Patterns Using Conventional and Machine Learning Techniques

Authors: Tomas Trainys, Algimantas Venckauskas

Abstract:

The application of biometric features to cryptography for human identification and authentication is a widely studied and promising area in the development of high-reliability cryptosystems. Biometric cryptosystems are typically designed for pattern recognition: they acquire biometric data from an individual, extract feature sets, compare the feature set against the set stored in the vault, and return the result of the comparison. Preprocessing and fusion of biometric data are the most important phases in generating a feature vector for key generation or authentication. Fusion of biometric features is critical for achieving a higher level of security and helps prevent possible spoofing attacks. The paper focuses on the tasks of initial processing and fusion of multiple representations of finger vein modality patterns. These tasks are solved by applying conventional image preprocessing methods and machine learning techniques, using the Support Vector Machine (SVM) method for image segmentation and feature extraction. The article presents a method for generating sets of biometric features from a finger vein network using several instances of the same modality. The extracted feature sets were fused at the feature level. The proposed method was tested and compared with the performance and accuracy results of other authors.
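Feature-level fusion of several instances of the same modality can be as simple as normalizing each instance's feature vector and concatenating them into one template; the vectors below are invented placeholders for extracted finger-vein features, not the paper's actual representation.

```python
import numpy as np

def fuse_features(feature_sets):
    """Feature-level fusion: L2-normalize each instance's feature
    vector, then concatenate into a single template."""
    return np.concatenate([f / np.linalg.norm(f) for f in feature_sets])

inst_a = np.array([3.0, 4.0])        # features from instance 1
inst_b = np.array([1.0, 0.0, 1.0])   # features from instance 2
template = fuse_features([inst_a, inst_b])
```

Per-instance normalization keeps one strongly responding instance from dominating the fused template.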

Keywords: bio-cryptography, biometrics, cryptographic key generation, data fusion, information security, SVM, pattern recognition, finger vein method

Procedia PDF Downloads 135
373 Deep Vision: A Robust Dominant Colour Extraction Framework for T-Shirts Based on Semantic Segmentation

Authors: Kishore Kumar R., Kaustav Sengupta, Shalini Sood Sehgal, Poornima Santhanam

Abstract:

Fashion is a human expression that is constantly changing. One of the prime factors that consistently influences fashion is the change in colour preferences. The role of colour in our everyday lives is very significant; it subconsciously reveals a lot about one's mindset and mood. Analyzing colours by extracting them from outfit images is a critical study for examining consumer behaviour. Several research works have been carried out on extracting colours from images, but to the best of our knowledge, no studies have extracted colours from a specific apparel item and identified colour patterns geographically. This paper proposes a framework for accurately extracting colours from T-shirt images and predicting dominant colours geographically. The proposed method consists of two stages: first, a U-Net deep learning model is adopted to segment the T-shirts from the images; second, the colours are extracted only from the T-shirt segments. The proposed method employs the iMaterialist (Fashion) 2019 dataset for the semantic segmentation task. The proposed framework also includes a mechanism for gathering data and analyzing India's general colour preferences. From this research, it was observed that black and grey are the dominant colours in different regions of India. The proposed method can be adapted to study fashion's evolving colour preferences.
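The colour-extraction stage (k-means clustering over the segmented T-shirt pixels, per the keywords) can be sketched with a tiny self-contained implementation; the pixel array below is synthetic rather than a real segmented image.

```python
import numpy as np

def dominant_colour(pixels, k=3, iters=20, seed=0):
    """Toy k-means over RGB pixels (assumed already restricted to the
    T-shirt segment); returns the centroid of the largest cluster."""
    rng = np.random.default_rng(seed)
    centroids = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    dists = np.linalg.norm(pixels[:, None, :] - centroids[None], axis=2)
    labels = dists.argmin(axis=1)
    return centroids[np.bincount(labels, minlength=k).argmax()]

# 90 near-black pixels and 10 grey ones -> dominant colour is black
pixels = np.vstack([np.full((90, 3), 20), np.full((10, 3), 128)])
col = dominant_colour(pixels)
```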

Keywords: colour analysis in t-shirts, convolutional neural network, encoder-decoder, k-means clustering, semantic segmentation, U-Net model

Procedia PDF Downloads 96
372 Investigating the Essentiality of Oxazolidinones in Resistance-Proof Drug Combinations in Mycobacterium tuberculosis Selected under in vitro Conditions

Authors: Gail Louw, Helena Boshoff, Taeksun Song, Clifton Barry

Abstract:

Drug resistance in Mycobacterium tuberculosis is primarily attributed to mutations in target genes. These mutations incur a fitness cost and result in bacterial generations that are less fit, which subsequently acquire compensatory mutations to restore fitness. We hypothesize that mutations in specific drug target genes influence bacterial metabolism and cellular function, which affects its ability to develop subsequent resistance to additional agents. We aim to determine whether the sequential acquisition of drug resistance and specific mutations in a well-defined clinical M. tuberculosis strain promotes or limits the development of additional resistance. In vitro mutants resistant to pretomanid, linezolid, moxifloxacin, rifampicin and kanamycin were generated from a pan-susceptible clinical strain from the Beijing lineage. The resistant phenotypes to the anti-TB agents were confirmed by the broth microdilution assay and genetic mutations were identified by targeted gene sequencing. Growth of mono-resistant mutants was done in enriched medium for 14 days to assess in vitro fitness. Double resistant mutants were generated against anti-TB drug combinations at concentrations 5x and 10x the minimum inhibitory concentration. Subsequently, mutation frequencies for these anti-TB drugs in the different mono-resistant backgrounds were determined. The initial level of resistance and the mutation frequencies observed for the mono-resistant mutants were comparable to those previously reported. Targeted gene sequencing revealed the presence of known and clinically relevant mutations in the mutants resistant to linezolid, rifampicin, kanamycin and moxifloxacin. Significant growth defects were observed for mutants grown under in vitro conditions compared to the sensitive progenitor. 
Determination of mutation frequencies in the mono-resistant mutants revealed a significant increase in mutation frequency against rifampicin and kanamycin, but a significant decrease against linezolid and sutezolid. This suggests that these mono-resistant mutants are more prone to develop resistance to rifampicin and kanamycin, but less prone to develop resistance to linezolid and sutezolid. Even though kanamycin and linezolid both inhibit protein synthesis, these compounds target different subunits of the ribosome, leading to different fitness outcomes in mutants with impaired cellular function. These observations show that oxazolidinone treatment is instrumental in limiting the development of multi-drug resistance in M. tuberculosis in vitro.

Keywords: oxazolidinones, mutations, resistance, tuberculosis

Procedia PDF Downloads 146
371 Optimization Based Extreme Learning Machine for Watermarking of an Image in DWT Domain

Authors: Ram Pal Singh, Vikash Chaudhary, Monika Verma

Abstract:

In this paper, we propose the implementation of an optimization-based Extreme Learning Machine (ELM) for watermarking the B-channel of a color image in the discrete wavelet transform (DWT) domain. ELM, a regularization algorithm, is based on generalized single-hidden-layer feed-forward neural networks (SLFNs); however, the hidden layer parameters, generally called the feature mapping in the context of ELM, need not be tuned every time. This paper shows the embedding and extraction processes of the watermark with the help of ELM, and the results are compared with machine learning models already used for watermarking. Here, a cover image is divided into a suitable number of non-overlapping blocks of the required size, and DWT is applied to each block to transform it into the low-frequency sub-band domain. Essentially, ELM provides a unified learning platform in which a feature mapping, that is, the mapping between the hidden layer and output layer of the SLFN, is used for watermark embedding and extraction in a cover image. ELM has widespread application, from binary and multiclass classification to regression and function estimation. Unlike SVM-based algorithms, which achieve suboptimal solutions with high computational complexity, ELM can provide better generalization performance with very low complexity. The efficacy of the optimization-based ELM algorithm is measured using quantitative and qualitative parameters on a watermarked image, even when the image is subjected to different types of geometrical and conventional attacks.
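The ELM core described above, a random untuned hidden layer whose output weights are solved in closed form by regularized least squares, can be sketched on a toy regression task; this is a generic illustration, not the paper's DWT watermarking pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_train(X, T, n_hidden=64, reg=1e-3):
    """ELM: random feature mapping + ridge solve for output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # never tuned
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer outputs
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy task: recover sin(x) from noisy samples
X = np.linspace(-3, 3, 200)[:, None]
T = np.sin(X[:, 0]) + rng.normal(scale=0.05, size=200)
W, b, beta = elm_train(X, T)
mse = np.mean((elm_predict(X, W, b, beta) - np.sin(X[:, 0])) ** 2)
```

Because only beta is learned, and in a single linear solve, training cost stays very low compared with iteratively tuned networks, which is the complexity advantage the abstract refers to.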

Keywords: BER, DWT, extreme learning machine (ELM), PSNR

Procedia PDF Downloads 298
370 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles

Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi

Abstract:

Fuel consumption (FC) is one of the key factors in determining the expense of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying building blocks, such as gear box, engine, and chassis type. If the combination of building blocks is unprecedented, it is unfeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. The study uses vehicle-specific and operational environmental conditions information, such as road slopes and driver profiles, from around 40,000 vehicles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data are used to investigate the accuracy of the machine learning algorithms linear regression (LR), k-nearest neighbors (KNN), and artificial neural networks (ANN) in predicting fuel consumption for heavy-duty vehicles. Performance is evaluated by reporting the prediction error on both simulated data and operational measurements, and the algorithms are compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data and 4.2% on operational data.
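The headline figures above use the mean relative prediction error; a quick sketch of that metric, with invented consumption values rather than the study's measurements:

```python
import numpy as np

def mean_relative_error(y_true, y_pred):
    """Mean relative prediction error, in percent."""
    return 100.0 * np.mean(np.abs(y_pred - y_true) / y_true)

y_true = np.array([30.0, 32.0, 35.0])   # e.g. litres per 100 km
y_pred = np.array([30.3, 31.5, 35.7])
err = mean_relative_error(y_true, y_pred)
```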

Keywords: artificial neural networks, fuel consumption, friedman test, machine learning, statistical hypothesis testing

Procedia PDF Downloads 167
369 Numerical Investigation of Multiphase Flow Structure for the Flue Gas Desulfurization

Authors: Cheng-Jui Li, Chien-Chou Tseng

Abstract:

This study adopts the computational fluid dynamics (CFD) technique to build a multiphase flow numerical model in which the interface between the flue gas and the desulfurization liquid is traced by an Eulerian-Eulerian model. Inside the tower, contact between the desulfurization liquid from the spray nozzles and the flue gas flow triggers chemical reactions that remove sulfur dioxide from the exhaust gas. From experimental observations of an industrial-scale plant, the desulfurization mechanism depends on the mixing level between the flue gas and the desulfurization liquid. In order to significantly improve the desulfurization efficiency, the mixing efficiency and the residence time can be increased by perforated sieve trays. Hence, the purpose of this research is to investigate the flow structure of sieve trays for flue gas desulfurization by numerical simulation. In this study, there is an outlet at the top of the FGD tower to discharge the clean gas, and the tower has a deep tank at the bottom, which is used to collect the slurry liquid. In the major desulfurization zone, the desulfurization liquid and flue gas form a complex mixing flow. There are four perforated plates in the major desulfurization zone, spaced 0.4 m from each other, and the spray array, which includes 33 nozzles, is placed above the top sieve tray. Each nozzle injects desulfurization liquid consisting of Mg(OH)2 solution. For each sieve tray, the outside diameter, hole diameter, and porosity are 0.6 m, 20 mm, and 34.3%, respectively. The flue gas enters the FGD tower through the space between the major desulfurization zone and the deep tank and finally leaves clean. The desulfurization liquid and the liquid slurry flow to the bottom tank and are discharged as waste. When the desulfurization solution impacts a sieve tray, its downward momentum is transferred to the upper surface of the tray.
As a result, a thin liquid layer develops above the sieve tray, the so-called slurry layer, within which the liquid volume fraction is around 0.3~0.7. Therefore, the liquid phase cannot be considered a discrete phase under the Eulerian-Lagrangian framework. Besides, there is a liquid column through the sieve trays; the downward liquid column narrows as it interacts with the upward gas flow. After the flue gas enters the major desulfurization zone, its flow direction is upward (+y) in the passage between the liquid column and the solid boundary of the FGD tower. As a result, the flue gas near the liquid column may be rolled down to the slurry layer, developing a vortex or circulation zone between any two sieve trays. The vortex structure between two sieve trays results in a sufficiently large two-phase contact area. It also increases the number of times the flue gas interacts with the desulfurization liquid. On the other hand, the sieve trays improve two-phase mixing, which may improve the SO2 removal efficiency.

Keywords: Computational Fluid Dynamics (CFD), Eulerian-Eulerian Model, Flue Gas Desulfurization (FGD), perforated sieve tray

Procedia PDF Downloads 271
368 Multimodal Sentiment Analysis With Web Based Application

Authors: Shreyansh Singh, Afroz Ahmed

Abstract:

Sentiment analysis aims to automatically reveal the underlying attitude that we hold towards an entity. The aggregate of this sentiment over a population corresponds to sentiment polling and has various applications. Current text-based sentiment analysis relies on word embeddings and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is now widely used for customer satisfaction assessment and brand perception analysis. With the expansion of online media, multimodal sentiment analysis is set to bring new opportunities with the arrival of complementary data streams, improving on and going beyond text-based sentiment analysis using new transformer methods. Since sentiment can be detected through the expressive traces it leaves, such as facial and vocal displays, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to transcripts or textual content. These approaches use recurrent neural networks (RNNs) with LSTM modules to increase their performance. In this study, we define sentiment and the problem of multimodal sentiment analysis, and review recent developments in multimodal sentiment analysis in various domains, including spoken reviews, images, video blogs, and human-machine and human-human interactions. Challenges and opportunities of this emerging field are also discussed, supporting our thesis that multimodal sentiment analysis holds significant untapped potential.

Keywords: sentiment analysis, RNN, LSTM, word embeddings

Procedia PDF Downloads 105
367 The Development and Testing of a Small Scale Dry Electrostatic Precipitator for the Removal of Particulate Matter

Authors: Derek Wardle, Tarik Al-Shemmeri, Neil Packer

Abstract:

This paper presents a small tube/wire type electrostatic precipitator (ESP). In the ESP's present form, particle charging and collecting voltages and airflow rates were individually varied throughout 200 ambient-temperature test runs, ranging from 10 to 30 kV in increments of 5 kV and from 0.5 m/s to 1.5 m/s, respectively. It was repeatedly observed that, at input air velocities of between 0.5 and 0.9 m/s and voltage settings of 20 kV to 30 kV, the collection efficiency remained above 95%. The outcomes of preliminary tests at combustion flue temperatures are, at present, inconclusive, although indications are that there is little or no drop in comparable performance under ideal test conditions. A limited set of similar tests was carried out in which the collecting electrode was grounded, having been disconnected from the static generator; the collection efficiency fell significantly, and for that reason this approach was not pursued further. Collection efficiencies during the ambient-temperature tests were determined by mass balance between incoming and outgoing dry PM. The efficiencies of the combustion-temperature runs were determined by analysing the difference in opacity of the flue gas at inlet and outlet compared to a reference light source. In addition, an array of Leit tabs (carbon-coated, electrically conductive adhesive discs) was placed at inlet and outlet for a number of four-day continuous ambient-temperature runs. Analysis of the discs' contamination was carried out using scanning electron microscopy and ImageJ software, which confirmed collection efficiencies of over 99% and gave unequivocal support to all the previous tests; the average efficiency for these runs was 99.409%. Emissions collected from a woody biomass combustion unit, classified to a diameter of 100 µm, were used in all ambient-temperature test runs apart from two, which collected airborne dust from within the laboratory.
Sawdust and wood pellets were chosen for the laboratory and field combustion trials. Video recordings were made of three ambient-temperature test runs in which the smoke from a wood smoke generator was drawn through the precipitator. Although these runs were visual indicators only, with no objective other than demonstration, they provided a strong argument for the device's claimed efficiency, as no emissions were visible at the exit when the unit was energised. The theoretical performance of ESPs, when applied to the geometry and configuration of the tested model, was compared to the actual performance and shown to be in good agreement with it.
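The two efficiency measures referred to above can be sketched numerically: the mass-balance efficiency follows directly from the PM masses in and out, while a standard theoretical benchmark for tube/wire ESPs is the Deutsch-Anderson relation η = 1 − exp(−wA/Q). The code below is an illustration with invented numbers, not the paper's data or the authors' calculation:

```python
import math

def mass_balance_efficiency(m_in_g, m_out_g):
    """Collection efficiency from dry PM mass entering and leaving the ESP."""
    return 1.0 - m_out_g / m_in_g

def deutsch_anderson_efficiency(w, A, Q):
    """Classical theoretical ESP efficiency, eta = 1 - exp(-w*A/Q).
    w: effective particle migration velocity (m/s)
    A: collecting electrode surface area (m^2)
    Q: volumetric gas flow rate (m^3/s)
    """
    return 1.0 - math.exp(-w * A / Q)

# Illustrative numbers only (not taken from the paper):
eta_measured = mass_balance_efficiency(10.0, 0.4)          # 0.96
eta_theory = deutsch_anderson_efficiency(0.12, 0.5, 0.02)  # about 0.95
```

Comparing the two values for the tested geometry is essentially the agreement check reported in the abstract.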

Keywords: electrostatic precipitators, air quality, particulates emissions, electron microscopy, ImageJ

Procedia PDF Downloads 245
366 Fake News Detection for Korean News Using Machine Learning Techniques

Authors: Tae-Uk Yun, Pullip Chung, Kee-Young Kwahk, Hyunchul Ahn

Abstract:

Fake news is defined as news articles that are intentionally and verifiably false and could mislead readers. The spread of fake news may provoke anxiety, chaos, fear, or irrational decisions among the public. Thus, detecting fake news and preventing its spread has become a very important issue in our society. However, due to the huge amount of fake news produced every day, it is almost impossible for humans to identify it all. In this context, researchers have tried over the past years to develop automated fake news detection using machine learning techniques. However, to the best of our knowledge, no prior study has proposed an automated fake news detection method for Korean news. In this study, we aim to detect Korean fake news using text mining and machine learning techniques. Our proposed method consists of two steps. In the first step, the news contents to be analyzed are converted into quantified values using various text mining techniques (topic modeling, TF-IDF, and so on). In the second step, classifiers are trained on the values produced in step 1; machine learning techniques such as logistic regression, backpropagation networks, support vector machines, and deep neural networks can be applied. To validate the effectiveness of the proposed method, we collected about 200 short Korean news items from Seoul National University's FactCheck, which provides detailed analysis reports from 20 media outlets and links to source documents for each case. Using this dataset, we will identify which text features are important as well as which classifiers are effective in detecting Korean fake news.
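Step 1 of the pipeline above (quantifying news text, e.g. via TF-IDF) can be sketched in pure Python. The toy corpus and tokenization below are illustrative only, not the study's Korean dataset or the authors' code:

```python
import math
from collections import Counter

def tfidf(docs):
    """Minimal TF-IDF: one row per document, columns follow the sorted vocabulary.
    tf = term count / document length; idf = log(N / document frequency)."""
    N = len(docs)
    tokenized = [doc.split() for doc in docs]                 # naive whitespace tokenizer
    vocab = sorted({w for toks in tokenized for w in toks})
    df = Counter(w for toks in tokenized for w in set(toks))  # document frequency
    idf = {w: math.log(N / df[w]) for w in vocab}
    rows = []
    for toks in tokenized:
        tf = Counter(toks)
        rows.append([tf[w] / len(toks) * idf[w] for w in vocab])
    return vocab, rows

vocab, X = tfidf([
    "fake news spreads fast",
    "real news is verified",
    "fake claims spread",
])
```

The resulting matrix `X` is exactly the kind of quantified representation that step 2 would feed into a classifier such as logistic regression or an SVM.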

Keywords: fake news detection, Korean news, machine learning, text mining

Procedia PDF Downloads 262