Search results for: computational diagnostics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2283

1503 Numerical Analysis of NOₓ Emission in Staged Combustion for the Optimization of Once-Through-Steam-Generators

Authors: Adrien Chatel, Ehsan Askari Mahvelati, Laurent Fitschy

Abstract:

Once-Through-Steam-Generators (OTSGs) are commonly used in the oil-sand industry in the heavy fuel oil extraction process. They are composed of three main parts: the burner, the radiant section, and the convective section. Natural gas is burned in staged diffusion flames stabilized by the burner. The heat generated by the combustion is transferred to the water flowing through the piping system in the radiant and convective sections. The steam produced within the pipes is then injected into the ground to reduce the oil viscosity and allow its pumping. With the rapid development of the oil-sand industry, the number of OTSGs in operation has increased, as have the associated emissions of environmental pollutants, especially the nitrogen oxides (NOₓ). To limit environmental degradation, various international environmental agencies have established regulations on pollutant discharge and pushed to reduce NOₓ release. To meet these constraints, OTSG constructors have to rely on increasingly advanced tools to study and predict NOₓ emission. With the growth of computational resources, Computational Fluid Dynamics (CFD) has emerged as a flexible tool to analyze the combustion and pollutant formation processes. Moreover, to optimize the burner operating conditions with respect to NOₓ emission, field characterization and measurements are usually carried out. However, such experimental campaigns are particularly time-consuming and sometimes even impossible for industrial plants with strict operating schedule constraints. Therefore, the application of CFD seems more adequate for providing guidelines on the NOₓ emission and reduction problem. In the present work, two different software packages are employed to simulate the combustion process in an OTSG: the commercial software ANSYS Fluent and the open-source software OpenFOAM. The RANS (Reynolds-Averaged Navier–Stokes) equations, combined with the Eddy Dissipation Concept to model the combustion and closed by the k-epsilon model, are solved. A mesh sensitivity analysis is performed to verify that the solution is independent of the mesh. In the first part, the results given by the two software packages are compared and confronted with experimental data as a means of assessing the numerical modelling. Flame temperatures and chemical composition are used as reference fields for this validation. The results show fair agreement between experimental and numerical data. In the last part, OpenFOAM is employed to simulate several operating conditions, and an Emission Characteristic Map of the combustion system is generated. The sources of high NOₓ production inside the OTSG are identified and correlated with the physics of the flow. CFD is, therefore, a useful tool for providing insight into NOₓ emission phenomena in OTSGs. Sources of high NOₓ production can be identified, and operating conditions can be adjusted accordingly. With the help of RANS simulations, an Emission Characteristics Map can be produced and then used as a guide for a field tune-up.
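
A minimal sketch of the kind of grid-convergence check behind the mesh sensitivity analysis mentioned above, using Richardson extrapolation and a Grid Convergence Index; the three mesh solutions, refinement ratio, and safety factor are illustrative placeholders, not values from the paper:

```python
# Richardson extrapolation / Grid Convergence Index from three successively
# refined meshes; f1 is the finest-mesh value, f3 the coarsest.
import math

def grid_convergence(f1, f2, f3, r=2.0, Fs=1.25):
    """r: refinement ratio between meshes, Fs: safety factor."""
    p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)   # observed order
    f_exact = f1 + (f1 - f2) / (r**p - 1)                    # Richardson extrapolation
    gci_fine = Fs * abs((f2 - f1) / f1) / (r**p - 1)         # GCI on the finest mesh
    return p, f_exact, gci_fine

# illustrative example: peak flame temperature (K) on coarse/medium/fine meshes
p, f_ext, gci = grid_convergence(f1=1850.0, f2=1838.0, f3=1805.0)
print(f"observed order p = {p:.2f}, extrapolated value = {f_ext:.1f} K, GCI = {gci:.2%}")
```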

Keywords: combustion, computational fluid dynamics, nitrogen oxides emission, once-through-steam-generators

Procedia PDF Downloads 113
1502 Simulation of Focusing of Diamagnetic Particles in Ferrofluid Microflows with a Single Set of Overhead Permanent Magnets

Authors: Shuang Chen, Zongqian Shi, Jiajia Sun, Mingjia Li

Abstract:

Microfluidics is a technology in which small amounts of fluids are manipulated using channels with dimensions of tens to hundreds of micrometers. This important technology is required for several applications in fields including disease diagnostics, genetic engineering, and environmental monitoring. Among these fields, the manipulation of microparticles and cells in microfluidic devices, especially their separation, has attracted broad attention. In a magnetic field, the separation methods include positive and negative magnetophoresis. By comparison, negative magnetophoresis is a label-free technology with many advantages, e.g., easy operation, low cost, and simple design. Before the separation of particles or cells, focusing them into a single tight stream is usually a necessary upstream operation. In this work, the focusing of diamagnetic particles in ferrofluid microflows with a single set of overhead permanent magnets is investigated numerically. The geometric model of the simulation is based on the configuration of previous experiments. The straight microchannel is 24 mm long and has a rectangular cross-section 100 μm wide and 50 μm deep. Spherical diamagnetic particles 10 μm in diameter are suspended in the ferrofluid. The initial concentration of the ferrofluid c₀ is 0.096%, and the flow rate of the ferrofluid is 1.8 mL/h. The magnetic field is induced by five identical rectangular neodymium-iron-boron permanent magnets (1/8 × 1/8 × 1/8 in.) and is calculated by the equivalent charge source (ECS) method. The flow of the ferrofluid is governed by the Navier–Stokes equations. The trajectories of particles are solved by the discrete phase model (DPM) in the ANSYS FLUENT program, and the positions of diamagnetic particles are recorded by transient simulation. Our simulation is consistent with the cited experiments: diamagnetic particles are gradually focused in the ferrofluid under the magnetic field. The diamagnetic particle focusing is also studied by varying the flow rate of the ferrofluid; in agreement with the experiments, the focusing improves as the flow rate increases. Furthermore, the effects of other factors on the focusing are investigated, e.g., the width and depth of the microchannel, the concentration of the ferrofluid, and the diameter of the diamagnetic particles.
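
A minimal sketch of the negative-magnetophoresis force law that drives the focusing described above, F = V (χp − χf) ∇(B²) / (2μ₀); the field profile, susceptibilities, and channel dimensions below are illustrative assumptions, not the paper's values:

```python
# Magnetophoretic force on a diamagnetic particle in ferrofluid along a 1D
# channel cross-section; chi_p - chi_f < 0 pushes particles to the field minimum.
import numpy as np

MU0 = 4e-7 * np.pi                                  # vacuum permeability (T*m/A)

def magnetophoretic_force(B, dx, chi_p, chi_f, d_particle):
    """B: flux density (T) sampled with spacing dx (m); returns force (N)."""
    V = np.pi / 6.0 * d_particle**3                 # particle volume (m^3)
    grad_B2 = np.gradient(B**2, dx)                 # d(B^2)/dx
    return V * (chi_p - chi_f) * grad_B2 / (2.0 * MU0)

x = np.linspace(0, 100e-6, 201)                     # 100 um channel width
B = 0.3 * np.exp(-((x - 50e-6) / 30e-6) ** 2)       # field peaking mid-channel
F = magnetophoretic_force(B, x[1] - x[0], chi_p=-9e-6, chi_f=3e-3, d_particle=10e-6)
print("max focusing force: %.2e N" % np.abs(F).max())
```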

Keywords: diamagnetic particle, focusing, microfluidics, permanent magnet

Procedia PDF Downloads 130
1501 Portfolio Risk Management Using Quantum Annealing

Authors: Thomas Doutre, Emmanuel De Meric De Bellefon

Abstract:

This paper describes the application of local-search metaheuristic quantum annealing to portfolio optimization. Heuristic techniques are particularly handy when Markowitz's classical Mean-Variance problem is enriched with additional realistic constraints. Once tailored to the problem, computational experiments on real collected data have shown the superiority of quantum annealing over simulated annealing for this constrained optimization problem, taking advantage of quantum effects such as tunnelling.
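
As a hedged illustration of how such a constrained Mean-Variance problem is handed to an annealer, the sketch below casts binary asset selection with a cardinality constraint as a QUBO and checks the ground state by brute force; the data, risk aversion, and penalty weight are random placeholders, not the paper's model:

```python
# Markowitz selection with a cardinality constraint cast as a QUBO, the input
# format of quantum (or simulated) annealers.
import numpy as np

rng = np.random.default_rng(0)
n, K, lam, penalty = 8, 3, 0.5, 10.0      # assets, picks, risk aversion, constraint weight
mu = rng.uniform(0.02, 0.12, n)           # expected returns
A = rng.normal(size=(n, n))
Sigma = A @ A.T / n                       # positive semi-definite covariance

# minimize x^T (lam*Sigma) x - mu^T x + penalty*(sum(x) - K)^2  over x in {0,1}^n
Q = lam * Sigma + penalty * np.ones((n, n))
np.fill_diagonal(Q, np.diag(Q) - mu - 2 * K * penalty)

# brute-force the QUBO ground state (feasible only for tiny n; an annealer
# replaces this loop)
best_x, best_e = None, np.inf
for b in range(2 ** n):
    x = np.array([(b >> i) & 1 for i in range(n)])
    e = x @ Q @ x
    if e < best_e:
        best_x, best_e = x, e
print("selected assets:", np.flatnonzero(best_x))
```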

Keywords: optimization, portfolio risk management, quantum annealing, metaheuristic

Procedia PDF Downloads 383
1500 A Novel Approach of Secret Communication Using Douglas-Peucker Algorithm

Authors: R. Kiruthika, A. Kannan

Abstract:

Steganography is the problem of hiding secret messages in 'innocent-looking' public communication so that the presence of the secret message cannot be detected. This paper introduces steganographic security in terms of computational indistinguishability from a channel of probability distributions on cover messages. The method first splits the cover image into two separate blocks using the Douglas-Peucker algorithm. The text message and the image are then hidden in the least significant bits (LSB) of the cover image.
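
A minimal sketch of the LSB embedding step described above; the Douglas-Peucker block splitting is not reproduced here, and the cover block and message are illustrative placeholders:

```python
# Hide and recover message bits in the least significant bits of a uint8 block.
import numpy as np

def embed_lsb(block, message: bytes):
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = block.flatten()                                # flatten() returns a copy
    if bits.size > flat.size:
        raise ValueError("message too long for this block")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs
    return flat.reshape(block.shape)

def extract_lsb(block, n_bytes: int):
    bits = block.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
stego = embed_lsb(cover, b"secret")
assert extract_lsb(stego, 6) == b"secret"
```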

Keywords: steganography, LSB, embedding, Douglas-Peucker algorithm

Procedia PDF Downloads 363
1499 Vibroacoustic Modulation of Wideband Vibrations and its Possible Application for Windmill Blade Diagnostics

Authors: Abdullah Alnutayfat, Alexander Sutin, Dong Liu

Abstract:

Wind turbines have become one of the most popular means of energy production. However, blade failure and maintenance costs have become significant issues in the wind power industry, so it is essential to detect initial blade defects to avoid the collapse of the blades and the structure. This paper aims to apply modulation of high-frequency blade vibrations by the low-frequency blade rotation, which is close to the known Vibro-Acoustic Modulation (VAM) method. The high-frequency wideband blade vibration is produced by the interaction of the blade surfaces with environmental air turbulence, and the low-frequency modulation is produced by alternating bending stress due to gravity. The low-frequency load of rotating wind turbine blades ranges between 0.2 and 0.4 Hz and can reach up to 2 Hz in strong wind. The main difference between this study and previous ones on VAM methods is the use of a wideband vibration signal from the blade's natural vibrations. Different features of the vibroacoustic modulation are considered using a simple model of a breathing crack. This model considers a simple mechanical oscillator whose parameters are varied by the low-frequency blade rotation. During the blade's operation, the internal stress caused by the weight of the blade modifies the crack's elasticity and damping. A laboratory experiment using steel samples demonstrates the possibility of VAM using a wideband probe noise signal. A small-amplitude cyclic load was used as a pump wave on the damaged test sample, and a small transducer generated the wideband probe wave. Demodulation of the received signal was conducted using the Detection of Envelope Modulation on Noise (DEMON) approach. In addition, the experimental results were compared with the modulation index (MI) technique based on a harmonic pump wave. The wideband and traditional VAM methods demonstrated similar sensitivity for early detection of invisible cracks. Importantly, employing a wideband probe signal with the DEMON approach speeds up and simplifies testing, since it eliminates the need to run tests repeatedly for various harmonic probe frequencies and to adjust the probe frequency.
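
A minimal sketch of the DEMON demodulation chain described above (band-pass, Hilbert envelope, envelope spectrum), applied to a synthetic wideband probe modulated at a rotation-like 0.3 Hz; all frequencies and the signal model are illustrative assumptions:

```python
# Detection of Envelope Modulation on Noise (DEMON) on a synthetic signal.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 10_000.0                                         # sample rate (Hz)
t = np.arange(0, 10, 1 / fs)
pump = 0.3 * np.sin(2 * np.pi * 0.3 * t)              # 0.3 Hz rotational load
probe = np.random.default_rng(2).normal(size=t.size)  # wideband probe noise
signal = (1.0 + pump) * probe                         # crack modulates the probe

# band-pass the wideband probe region, then detect its envelope
sos = butter(4, [500, 3000], btype="bandpass", fs=fs, output="sos")
band = sosfiltfilt(sos, signal)
envelope = np.abs(hilbert(band))

# the modulation sidebands appear in the spectrum of the envelope
spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print("dominant envelope frequency: %.2f Hz" % freqs[spec.argmax()])
```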

Keywords: vibro-acoustic modulation, detection of envelope modulation on noise, damage, turbine blades

Procedia PDF Downloads 99
1498 A Review of Optomechatronic Ecosystem

Authors: Sam Zhang

Abstract:

The landscape of optomechatronics is viewed along the lines of light vs. matter, photonics vs. semiconductors, and optics vs. mechatronics. Optomechatronics is redefined as the integration of light and matter from the atom, device, and system to the application. The markets and megatrends in optomechatronics are then listed. The author focuses on optomechatronic technology in the semiconductor industry as an example and reviews the practical systems, characteristics, and trends. Optomechatronics, together with photonics and semiconductors, will continue producing the computational and smart infrastructure required for the 4th industrial revolution.

Keywords: photonics, semiconductor, optomechatronics, 4th industrial revolution

Procedia PDF Downloads 128
1497 Attitude to the Types of Organizational Change

Authors: O. Y. Yurieva, O. V. Yurieva, O. V. Kiselkina, A. V. Kamaseva

Abstract:

Since the early 2000s, there have been innovative changes in the civil service in Russia due to administrative reform. The perspectives of the civil service reform include a fundamental change in the personnel component, increasing the level of professionalism of officials and increasing their capacity for self-organization and self-regulation. In order to achieve this, the civil service must be able to change continuously. Organizational change has long been a subject of scientific study; research in the field of organizational change covers topics focused on the methodological aspects of implementing change, the specifics of change in different types of organizations (business, government, and so on), and the design of change in organizations, including change based on organizational culture. However, organizational change in the civil service is among the least studied areas; research on its transformation has been carried out only in fragments. According to Herbert Simon's theory of resistance, the root of opposition to and rejection of change lies in the person, who will resist any change if it threatens to undermine their degree of satisfaction as a member of the organization (regardless of the reasons for the change). Thus, the condition for successful adaptation to change in an organization is the ability of its staff to accept innovation. As part of this problem, the study sought to identify the innovativeness of civil servants and to determine their readiness to develop proposals for the implementation of organizational change in the public service. To identify attitudes toward organizational change, a case study was carried out using I. Motovilina's 'Attitudes to Organizational Change' method, which allowed predicting the type of resistance to change and revealing contradictions and hidden results. The advantages of Motovilina's method are its brevity, its simplicity, its analysis of the responses to each question, and its use of 'overlapping' questions on potentially conflicting factors. Based on the study, the authors found that respondents have a more positive attitude toward local changes than toward those that take place in reality, such as 'increased opportunities for professional growth', 'increased requirements for the level of professionalism', and 'the emergence of possible manifestations of initiative from below'. The diagnostics of attitudes toward organizational change in the public service carried out by the authors showed the presence of specific problem areas, rooted in a lack of understanding of the importance of personnel innovation and in the bureaucratization of innovation in public service organizations.

Keywords: innovative changes, self-organization, self-regulation, civil service

Procedia PDF Downloads 459
1496 Designing Stochastic Non-Invasively Applied DC Pulses to Suppress Tremors in Multiple Sclerosis by Computational Modeling

Authors: Aamna Lawrence, Ashutosh Mishra

Abstract:

Tremors occur in 60% of patients who have Multiple Sclerosis (MS), the most common demyelinating disease affecting the central and peripheral nervous system, and are the primary cause of disability in young adults. While pharmacological agents provide minimal benefits, surgical interventions like Deep Brain Stimulation and Thalamotomy are riddled with dangerous complications, which makes non-invasive electrical stimulation an appealing treatment of choice for dealing with tremors. Hence, we hypothesized that if the non-invasive electrical stimulation parameters (mainly frequency) can be computed by mathematically modeling the nerve fibre, taking into consideration the minutest details of the axon morphologies, tremors due to demyelination can be optimally alleviated. In this computational study, we have modeled the random demyelination pattern in a nerve fibre that typically manifests in MS using the high-density Hodgkin-Huxley model, with suitable modifications to account for the myelin. The internode of the nerve fibre in our model could have up to ten demyelinated regions, each having random length and myelin thickness. The arrival time of action potentials traveling along the demyelinated and the normally myelinated nerve fibres between two fixed points in space was noted, and its relationship with the nerve fibre radius, ranging from 5 µm to 12 µm, was analyzed. Interestingly, there were no overlaps between the arrival times of action potentials traversing the demyelinated and normally myelinated nerve fibres, even when a single internode of the nerve fibre was demyelinated. The study gave us an opportunity to design DC pulses whose frequency of application would be a function of the random demyelination pattern, to block only the delayed tremor-causing action potentials. The DC pulses could be delivered to the peripheral nervous system non-invasively by an electrode bracelet that would suppress any shakiness beyond it, thus paving the way for wearable neuro-rehabilitative technologies.
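
A single-compartment Hodgkin-Huxley sketch illustrating the delay mechanism discussed above: raising the membrane capacitance, used here as a crude stand-in for a demyelinated internode, delays the action potential. The classic HH parameters are standard; the capacitance scaling and stimulus are illustrative, not the paper's fibre model:

```python
# Forward-Euler integration of the classic Hodgkin-Huxley equations; the
# spike latency grows as the effective membrane capacitance is increased.
import numpy as np

def hh_spike_latency(Cm=1.0, I=10.0, dt=0.01, T=50.0):
    """Return the time (ms) at which V first crosses 0 mV, or None."""
    gNa, gK, gL = 120.0, 36.0, 0.3           # max conductances (mS/cm^2)
    ENa, EK, EL = 50.0, -77.0, -54.4         # reversal potentials (mV)
    V, m, h, n = -65.0, 0.05, 0.6, 0.32      # resting state
    for step in range(int(T / dt)):
        am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
        bm = 4.0 * np.exp(-(V + 65) / 18)
        ah = 0.07 * np.exp(-(V + 65) / 20)
        bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
        an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
        bn = 0.125 * np.exp(-(V + 65) / 80)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
        V += dt * (I - I_ion) / Cm
        if V > 0:
            return step * dt
    return None

print("normal latency      :", hh_spike_latency(Cm=1.0), "ms")
print("demyelinated latency:", hh_spike_latency(Cm=2.5), "ms")
```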

Keywords: demyelination, Hodgkin-Huxley model, non-invasive electrical stimulation, tremor

Procedia PDF Downloads 128
1495 Enhancing Residential Architecture through Generative Design: Balancing Aesthetics, Legal Constraints, and Environmental Considerations

Authors: Milena Nanova, Radul Shishkov, Martin Georgiev, Damyan Damov

Abstract:

This research paper presents an in-depth exploration of the use of generative design in urban residential architecture, with a dual focus on aligning aesthetic values with legal and environmental constraints. The study aims to demonstrate how generative design methodologies can produce residential building designs that are not only legally compliant and environmentally conscious but also aesthetically compelling. At the core of our research is a specially developed generative design framework tailored for urban residential settings. This framework employs computational algorithms to produce diverse design solutions, meticulously balancing aesthetic appeal with practical considerations. By integrating site-specific features, urban legal restrictions, and environmental factors, our approach generates designs that resonate with the unique character of urban landscapes while adhering to regulatory frameworks. The paper explores how modern digital tools, particularly computational design and algorithmic modelling, can optimize the early stages of residential building design. By creating a basic parametric model of a residential district, the paper investigates how automated design tools can explore multiple design variants based on predefined parameters (e.g., building cost, dimensions, orientation) and constraints, and demonstrates how these tools can rapidly generate and refine architectural solutions that meet the required criteria for quality of life, cost efficiency, and functionality. The study utilizes computational design for database processing and algorithmic modelling within the fields of applied geodesy and architecture. It focuses on optimizing the forms of residential development by adjusting specific parameters and constraints. The results of multiple iterations are analysed, refined, and selected based on their alignment with predefined quality and cost criteria. The findings of this research will contribute to a modern, complex approach to residential area design. The paper demonstrates the potential for integrating BIM models into the design process and their application in virtual 3D Geographic Information Systems (GIS) environments. The study also examines the transformation of BIM models into suitable 3D GIS file formats, such as CityGML, to facilitate the visualization and evaluation of urban planning solutions. In conclusion, our research demonstrates that a generative parametric approach based on real geodetic data and collaborative decision-making could be introduced in the early phases of the design process. This gives designers powerful tools to explore diverse design possibilities, significantly improving the quality of the investment during its entire lifecycle.
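
A minimal sketch of the generate-and-filter loop behind such a parametric study: sample building parameters, reject variants that violate simple legal constraints, and rank the rest by a cost proxy; all bounds, rules, and the cost model are hypothetical placeholders, not the paper's framework:

```python
# Generate parametric variants, filter by hypothetical zoning rules, rank.
import random

random.seed(0)
MAX_HEIGHT, MIN_SETBACK, FAR_LIMIT = 24.0, 5.0, 2.5   # hypothetical local rules
PLOT_AREA = 1200.0                                    # m^2

def generate_variant():
    return {
        "floors": random.randint(3, 10),
        "floor_height": random.uniform(2.8, 3.5),     # m
        "footprint": random.uniform(200.0, 600.0),    # m^2
        "setback": random.uniform(3.0, 10.0),         # m
        "orientation": random.choice([0, 90, 180, 270]),
    }

def is_legal(v):
    height = v["floors"] * v["floor_height"]
    far = v["floors"] * v["footprint"] / PLOT_AREA    # floor-area ratio
    return height <= MAX_HEIGHT and v["setback"] >= MIN_SETBACK and far <= FAR_LIMIT

def score(v):                                         # toy objective: cost per m^2
    area = v["floors"] * v["footprint"]
    cost = 450.0 * area + 25000.0 * v["floors"]       # hypothetical cost model
    return cost / area

variants = [v for v in (generate_variant() for _ in range(5000)) if is_legal(v)]
print(f"{len(variants)} legal variants; best:", min(variants, key=score))
```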

Keywords: architectural design, residential buildings, urban development, geodesic data, generative design, parametric models, workflow optimization

Procedia PDF Downloads 5
1494 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life

Authors: Desplanches Maxime

Abstract:

Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves infusing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. Furthermore, the database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model. This database serves as a crucial asset in advancing our understanding of aging states. Beyond its capability for precise remaining life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.
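
A minimal sketch of the regression stage described above, with PCA for dimensionality reduction feeding a random-forest regressor for remaining life; the synthetic features stand in for the Newman-model-generated database and are purely illustrative:

```python
# PCA + random-forest regression on synthetic aging indicators.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 30))                 # non-destructive aging indicators
y = 1000 + 50 * X[:, 0] - 30 * X[:, 1] + rng.normal(0, 10, 2000)  # cycles left

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(PCA(n_components=10), RandomForestRegressor(random_state=0))
model.fit(X_tr, y_tr)
print(f"MAPE: {mean_absolute_percentage_error(y_te, model.predict(X_te)):.1%}")
```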

Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression

Procedia PDF Downloads 69
1493 Data Model to Predict Customized Skin Care Products Using Biosensors

Authors: Ashi Gautam, Isha Shukla, Akhil Seghal

Abstract:

Biosensors are analytical devices that use a biological sensing element to detect and measure a specific chemical substance or biomolecule in a sample. These devices are widely used in various fields, including medical diagnostics, environmental monitoring, and food analysis, due to their high specificity, sensitivity, and selectivity. In this research paper, a machine learning model is proposed for predicting the suitability of skin care products based on biosensor readings. The proposed model takes in features extracted from biosensor readings, such as biomarker concentration, skin hydration level, inflammation presence, sensitivity, and free radicals, and outputs the most appropriate skin care product for an individual. This model is trained on a dataset of biosensor readings and corresponding skin care product information. The model's performance is evaluated using several metrics, including accuracy, precision, recall, and F1 score. The aim of this research is to develop a personalised skin care product recommendation system using biosensor data. By leveraging the power of machine learning, the proposed model can accurately predict the most suitable skin care product for an individual based on their biosensor readings. This is particularly useful in the skin care industry, where personalised recommendations can lead to better outcomes for consumers. The developed model is based on supervised learning, which means that it is trained on a labeled dataset of biosensor readings and corresponding skin care product information. The model uses these labeled data to learn patterns and relationships between the biosensor readings and skin care products. Once trained, the model can predict the most suitable skin care product for an individual based on their biosensor readings. The results of this study show that the proposed machine learning model can accurately predict the most appropriate skin care product for an individual based on their biosensor readings. The evaluation metrics used in this study demonstrate the effectiveness of the model in predicting skin care products. This model has significant potential for practical use in the skin care industry for personalised skin care product recommendations. The proposed machine learning model for predicting the suitability of skin care products based on biosensor readings is a promising development in the skin care industry. The model's ability to accurately predict the most appropriate skin care product for an individual based on their biosensor readings can lead to better outcomes for consumers. Further research can be done to improve the model's accuracy and effectiveness.
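
A minimal sketch of the supervised recommender described above: biosensor-derived features in, a product class out, evaluated with accuracy, precision, recall, and F1; the feature columns and synthetic labels are illustrative, not the study's dataset:

```python
# Supervised classification of skin care products from biosensor features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
# columns: biomarker conc., hydration, inflammation, sensitivity, free radicals
X = rng.uniform(0, 1, size=(1500, 5))
y = np.where(X[:, 1] < 0.3, "moisturizer",
    np.where(X[:, 2] > 0.7, "anti-inflammatory", "daily-care"))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))  # precision/recall/F1 per class
```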

Keywords: biosensors, data model, machine learning, skin care

Procedia PDF Downloads 97
1492 Computational Modelling of Epoxy-Graphene Composite Adhesive towards the Development of Cryosorption Pump

Authors: Ravi Verma

Abstract:

The cryosorption pump is the best solution to achieve a clean, vibration-free ultra-high vacuum. Furthermore, the operation of the cryosorption pump is free from the influence of electric and magnetic fields. Due to these attributes, this pump is used in space simulation chambers to create ultra-high vacuum. The cryosorption pump comprises three parts: (a) a panel which is cooled with the help of a cryogen or cryocooler, (b) an adsorbent which is used to adsorb the gas molecules, and (c) an epoxy which holds the adsorbent and the panel together, thereby aiding heat transfer from the adsorbent to the panel. The performance of the cryosorption pump depends on the temperature of the adsorbent and hence on the thermal conductivity of the epoxy. Therefore, we have made an attempt to increase the thermal conductivity of the epoxy adhesive by mixing in nano-sized graphene filler particles. The thermal conductivity of the epoxy-graphene composite adhesive is measured with the help of an indigenously developed experimental setup in the temperature range from 4.5 K to 7 K, which is generally the operating temperature range of a cryosorption pump for efficient pumping of hydrogen and helium gas. In this article, we present the experimental results for the epoxy-graphene composite adhesive in the temperature range from 4.5 K to 7 K. We also propose an analytical heat conduction model to find the thermal conductivity of the composite. In this case, the filler particles, such as graphene, are randomly distributed in a base matrix of epoxy. The developed model considers the complete spatial random distribution of filler particles, and this distribution is described by a binomial distribution. The results obtained by the model have been compared with the experimental results as well as with other established models. The developed model is able to predict the thermal conductivity in both the isotropic and anisotropic regions over the required temperature range from 4.5 K to 7 K. Due to the non-empirical nature of the proposed model, it will be useful for predicting other properties of composite materials involving a filler in a base matrix. The present studies will aid the understanding of low-temperature heat transfer, which in turn will be useful for the development of high-performance cryosorption pumps.
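
A 1D toy version of the random-filler idea above: each segment along the heat path is graphene with some probability (a binomial placement), epoxy otherwise, and the segments add in series as a harmonic average; the conductivity values and probabilities are illustrative placeholders, not the paper's model or measured data:

```python
# Monte Carlo estimate of the effective series conductivity of a chain whose
# segments are randomly (binomially) assigned to filler or matrix.
import numpy as np

rng = np.random.default_rng(5)
k_epoxy, k_filler = 0.01, 100.0      # W/(m*K), illustrative low-temperature values
N, p, trials = 1000, 0.05, 2000      # segments, filler probability, MC samples

def effective_k():
    is_filler = rng.random(N) < p                 # binomial placement of filler
    k = np.where(is_filler, k_filler, k_epoxy)
    return N / np.sum(1.0 / k)                    # series (harmonic) average

samples = [effective_k() for _ in range(trials)]
print(f"k_eff = {np.mean(samples):.4f} +/- {np.std(samples):.4f} W/(m*K)")
```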

Keywords: composite adhesive, computational modelling, cryosorption pump, thermal conductivity

Procedia PDF Downloads 89
1491 A Hybrid of BioWin and Computational Fluid Dynamics Based Modeling of Biological Wastewater Treatment Plants for Model-Based Control

Authors: Komal Rathore, Kiesha Pierre, Kyle Cogswell, Aaron Driscoll, Andres Tejada Martinez, Gita Iranipour, Luke Mulford, Aydin Sunol

Abstract:

Modeling of biological wastewater treatment plants requires several parameters for kinetic rate expressions, thermo-physical properties, and hydrodynamic behavior. The kinetics and associated mechanisms become complex due to the several biological processes taking place in wastewater treatment plants at varying time and spatial scales. A dynamic process model incorporating a complex model of activated sludge kinetics was developed using the BioWin software platform for an advanced wastewater treatment plant in Valrico, Florida. Due to the extensive number of tunable parameters, an experimental design was employed for judicious selection of the most influential parameter sets and their bounds. The model was tuned using both the influent and effluent plant data to reconcile and rectify the results forecast by the BioWin model. The amount of mixed liquor suspended solids in the oxidation ditch, the aeration rates, and the recycle rates were adjusted accordingly. The experimental analysis and plant SCADA data were used to predict influent wastewater rates and composition profiles as a function of time for extended periods. The lumped dynamic model development was coupled with Computational Fluid Dynamics (CFD) modeling of key units such as the oxidation ditches in the plant. Several CFD models that incorporate nitrification-denitrification kinetics as well as hydrodynamics were developed and are being tested using the ANSYS Fluent software platform. These realistic and verified models, developed using BioWin and ANSYS, were used to plan the operating policies and control strategies for the biological wastewater plant in advance, which further allows regulatory compliance at minimum operational cost. These models, with a little tuning, can be used for other biological wastewater treatment plants as well. The BioWin model mimics the existing performance of the Valrico plant, which allowed the operators and engineers to predict effluent behavior and take control actions to meet the discharge limits of the plant. Also, with the help of this model, we were able to identify the key kinetic and stoichiometric parameters which are significantly more important for modeling of biological wastewater treatment plants. Another important finding from this model was the effect of mixed liquor suspended solids and recycle ratios on the effluent concentrations of various parameters such as total nitrogen, ammonia, nitrate, and nitrite. The ANSYS model provided information such as the increase in the formation of dead zones along the length of the oxidation ditches compared to the regions near the aerators. These profiles were also very useful in studying the mixing patterns, the effect of aerator speed, and the use of baffles, which in turn helps in optimizing the plant performance.
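
A minimal sketch of a two-level screening design of the kind used to pick influential parameters before tuning; the parameter names, bounds, and the toy effluent response are hypothetical stand-ins, not BioWin internals:

```python
# Full two-level factorial screening: run all low/high parameter combinations
# against a response and rank parameters by their main effects.
from itertools import product

bounds = {                      # hypothetical tunable kinetic parameters (lo, hi)
    "mu_max_aob": (0.5, 1.2),   # autotroph max growth rate (1/d)
    "b_h": (0.1, 0.4),          # heterotroph decay rate (1/d)
    "K_NH4": (0.5, 2.0),        # ammonia half-saturation (mg/L)
}

def toy_effluent_tn(p):         # stand-in for a full plant simulation
    return 12.0 - 6.0 * p["mu_max_aob"] + 8.0 * p["b_h"] + 1.5 * p["K_NH4"]

names = list(bounds)
runs = [dict(zip(names, combo)) for combo in product(*(bounds[n] for n in names))]
results = [(r, toy_effluent_tn(r)) for r in runs]

# main effect of each parameter = mean(high runs) - mean(low runs)
for n in names:
    hi = [y for r, y in results if r[n] == bounds[n][1]]
    lo = [y for r, y in results if r[n] == bounds[n][0]]
    print(f"{n:>10}: main effect on effluent TN = {sum(hi)/len(hi) - sum(lo)/len(lo):+.2f} mg/L")
```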

Keywords: computational fluid dynamics, flow-sheet simulation, kinetic modeling, process dynamics

Procedia PDF Downloads 208
1490 Testing and Validation of Stochastic Models in Epidemiology

Authors: Snigdha Sahai, Devaki Chikkavenkatappa Yellappa

Abstract:

This study outlines approaches for testing and validating stochastic models used in epidemiology, focusing on the integration and functional testing of simulation code. It details methods for combining simple functions into comprehensive simulations, distinguishing between deterministic and stochastic components, and applying tests to ensure robustness. Techniques include isolating stochastic elements, utilizing large sample sizes for validation, and handling special cases. Practical examples are provided using R code to demonstrate integration testing, handling of incorrect inputs, and special cases. The study emphasizes the importance of both functional and defensive programming to enhance code reliability and user-friendliness.
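
A minimal sketch of the validation pattern described above, mirrored in Python rather than the abstract's R: isolate the stochastic step, fix the seed, check a large-sample statistic against its analytic value, and test defensive handling of bad input; the chain-binomial step is an illustrative example model:

```python
# Integration-style tests for the stochastic component of an epidemic model.
import numpy as np

def sir_infections_one_step(S, I, N, beta, rng):
    """Stochastic new infections in one step (chain-binomial model)."""
    p_inf = 1.0 - np.exp(-beta * I / N)          # per-susceptible infection risk
    return rng.binomial(S, p_inf)                # raises ValueError if S < 0

def test_mean_matches_expectation():
    rng = np.random.default_rng(42)              # fixed seed: reproducible test
    S, I, N, beta, n = 900, 100, 1000, 0.3, 200_000
    p = 1.0 - np.exp(-beta * I / N)
    draws = rng.binomial(S, p, size=n)           # large sample of the stochastic part
    tol = 3 * np.sqrt(S * p * (1 - p) / n)       # 3-sigma band on the sample mean
    assert abs(draws.mean() - S * p) < tol

def test_rejects_bad_input():                    # defensive programming check
    try:
        sir_infections_one_step(-5, 100, 1000, 0.3, np.random.default_rng(0))
    except ValueError:
        return
    raise AssertionError("negative S should raise")

test_mean_matches_expectation(); test_rejects_bad_input()
```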

Keywords: computational epidemiology, epidemiology, public health, infectious disease modeling, statistical analysis, health data analysis, disease transmission dynamics, predictive modeling in health, population health modeling, quantitative public health, random sampling simulations, randomized numerical analysis, simulation-based analysis, variance-based simulations, algorithmic disease simulation, computational public health strategies, epidemiological surveillance, disease pattern analysis, epidemic risk assessment, population-based health strategies, preventive healthcare models, infection dynamics in populations, contagion spread prediction models, survival analysis techniques, epidemiological data mining, host-pathogen interaction models, risk assessment algorithms for disease spread, decision-support systems in epidemiology, macro-level health impact simulations, socioeconomic determinants in disease spread, data-driven decision making in public health, quantitative impact assessment of health policies, biostatistical methods in population health, probability-driven health outcome predictions

Procedia PDF Downloads 6
1489 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches

Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez

Abstract:

Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code is applicable. Several methods can be used to estimate such reliability levels. Many of them require the establishment of an explicit limit state function (LSF). When such an LSF is not available as a closed-form expression, simulation techniques are often employed. The simulation methods are computationally intensive and time-consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM), or computational mechanics could be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To overcome the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative. It has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, the resulting moments of the LSF must be fitted to a probability density function (PDF). In the present study, a very simple alternative is employed which allows the assessment of reliability levels when no explicit LSF is available and without the need for extensive simulations. The alternative includes the use of the PEM, and its applicability is shown by assessing reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results obtained by the Monte Carlo simulation (MCS) technique are included. To overcome the problem of fitting the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach mixes the PEM with another classic reliability method (the first-order reliability method, FORM). The results in the present study are in good agreement with those computed with the MCS. Therefore, the alternative of mixing the reliability methods is a very valuable option for determining reliability levels when no closed form of the LSF is available, or when numerical schemes, the FEM, or computational mechanics are employed.
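
A minimal sketch of Rosenblueth's two-point estimate method for the moments of an implicit LSF, followed by a reliability index under a normality assumption, in the spirit of the PEM/FORM mixing described above; the analytic LSF and statistics below are cheap stand-ins for the FEM bridge model:

```python
# Two-point estimate method (PEM): evaluate the LSF at the 2^n combinations of
# mean +/- one standard deviation with equal weights, then form beta = mu/sigma.
import itertools
import numpy as np
from scipy.stats import norm

def lsf(R, S):                       # stand-in limit state: resistance minus load effect
    return R - 1.2 * S

means = np.array([450.0, 250.0])     # mean resistance, mean load
stds = np.array([45.0, 50.0])

vals = [lsf(*(means + np.array(s) * stds))
        for s in itertools.product((-1.0, 1.0), repeat=2)]
mu_g, sigma_g = np.mean(vals), np.std(vals)

beta = mu_g / sigma_g                # reliability index from the PEM moments
print(f"beta = {beta:.2f}, Pf ~ {norm.cdf(-beta):.2e}")
```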

Keywords: structural reliability, reinforced concrete bridges, combined approach, point estimate method, Monte Carlo simulation

Procedia PDF Downloads 346
1488 Seroepidemiological Study of Toxoplasma gondii Infection in Women of Child-Bearing Age in Communities in Osun State, Nigeria

Authors: Olarinde Olaniran, Oluyomi A. Sowemimo

Abstract:

Toxoplasmosis is frequently misdiagnosed or underdiagnosed, and it is the third most common cause of hospitalization due to food-borne infection. Intra-uterine infection with Toxoplasma gondii due to active parasitaemia during pregnancy can cause severe and often fatal cerebral damage, abortion, and stillbirth of the fetus. The aim of the study was to investigate the prevalence of T. gondii infection in women of childbearing age in selected communities of Osun State, with a view to determining the risk factors which predispose to T. gondii infection. Five (5) ml of blood was collected by venipuncture into a plain blood collection tube by a medical laboratory scientist. Serum samples were separated by centrifuging the blood samples at 3000 rpm for 5 min. The sera were collected in Eppendorf tubes and stored at -20°C until analysis for the presence of IgG and IgM antibodies against T. gondii by a commercially available enzyme-linked immunosorbent assay (ELISA) kit (Demeditec Diagnostics GmbH, Germany), conducted according to the manufacturer's instructions. The optical densities of the wells were measured by a photometer at a wavelength of 450 nm. The data collected were analysed using appropriate computer software. The overall seroprevalence of T. gondii among women of child-bearing age in the seven selected communities in Osun State was 76.3%. Out of the 76.3% positive for T. gondii infection, 70.0% were positive for anti-T. gondii IgG, 32.3% for IgM, and 26.7% for both IgG and IgM. The prevalence of T. gondii was lowest (58.9%) among women from Ile-Ife, a peri-urban community, and highest (100%) in women residing in Alajue, a rural community. The prevalence of infection was significantly higher (p = 0.000) among Islamic women (87.5%) than among Christian women (70.8%). The highest prevalence (86.3%) was recorded in women with primary education, while the lowest (61.2%) was recorded in women with tertiary education (p = 0.016). The highest prevalence (79.7%) was recorded in women residing in rural areas, and the lowest (70.1%) in women residing in peri-urban areas (p = 0.025). The prevalence of T. gondii infection was highest (81.4%) in women with one miscarriage, while the prevalence was lowest (75.9%) in women with no miscarriages. The age of the women (p = 0.042), Islamic religion (p = 0.001), the residence of the women (p = 0.001), and the water source were all positively associated with T. gondii infection. The study concluded that there is a high seroprevalence of T. gondii among women of child-bearing age in the study area. Hence, there is a need for health education and awareness of the disease and its transmission among women of the reproductive age group in general, and pregnant women in particular, to reduce the risk of T. gondii infection in pregnancy.

Keywords: seroepidemiology, Toxoplasma gondii, women, child-bearing, age, communities, Ile -Ife, Nigeria

Procedia PDF Downloads 177
1487 Advancements in AI Training and Education for a Future-Ready Healthcare System

Authors: Shamie Kumar

Abstract:

Background: Radiologists and radiographers (RR) need to educate themselves and their colleagues to ensure that AI is integrated safely and usefully, in a meaningful way that always benefits the patients. AI education and training are fundamental to the way RR work and interact with AI, such that they feel confident using it as part of their clinical practice in a way they understand. Methodology: This exploratory research outlines the current education and training gaps for radiographers and radiologists in AI radiology diagnostics. It reviews the status, skills, and challenges of education and training; the use of artificial intelligence within daily clinical practice; why it is fundamental; and why learning about AI is essential for wider adoption. Results: Current knowledge among RR is very sparse and country dependent, and with radiologists being the majority of the end-users of AI, their targeted training and learning opportunities surpass those available to radiographers. Many papers suggest there is a lack of knowledge, understanding, and training in AI in radiology among RR, and because of this, they are unable to comprehend exactly how AI works, how it integrates, the benefits of using it, and its limitations. There is an indication that they wish to receive specific training; however, both professions need to engage actively in learning about AI and develop the skills that enable them to use it effectively. Variability is expected among the professions in the degree of commitment to AI, as most do not understand its value; this only adds to the need to train and educate RR. Currently, there is little AI teaching in either undergraduate or postgraduate study programs, and it is not readily available. In addition, there are other training programs, courses, workshops, and seminars available; most of these are short, single sessions rather than a continuation of learning, covering a basic understanding of AI and peripheral topics such as ethics, legal issues, and the potential of AI. There appears to be an obvious gap between the content the training programs offer and what RR need and want to learn. Because of this, there is a risk of ineffective learning outcomes and of attendees feeling a lack of clarity and depth of understanding of the practicality of using AI in a clinical environment. Conclusion: Education, training, and courses need to have defined learning outcomes with relevant concepts, ensuring theory and practice are taught as a continuation of the learning process, based on use cases specific to a clinical working environment. Undergraduate and postgraduate courses should be developed robustly, ensuring delivery by experts in the field; in addition, training and other programs should be delivered as continued professional development and aligned with accredited institutions for a degree of quality assurance.

Keywords: artificial intelligence, training, radiology, education, learning

Procedia PDF Downloads 85
1486 The Relationships between Antimüllerian Hormone, Androgens and Ovarian Reserve in Non-Obese East Indian Women with and without Polycystic Ovary Syndrome

Authors: Dipanshu Sur, Ratnabali Chakravorty, Rimi Pal, Siddhartha Chatterjee, Joyshree Chaterjee, Amal Mallik

Abstract:

Background: Polycystic ovary syndrome (PCOS) is a common endocrine disease in reproductive-age women, with a complex hormonal disturbance that affects the menstrual cycle and leads to metabolic consequences in later life. Hyperandrogenaemia is a noticeable feature of PCOS and influences the process of folliculogenesis in women. The levels of Antimüllerian Hormone (AMH) reflect the number of pre-antral follicles and are thus a marker of the oocyte pool, the germinal reserve of the ovary for reproduction. Besides its utilization in IVF (in-vitro fertilization), determination of AMH may serve as an additional marker in the diagnostics of PCOS, where increased AMH levels reflect the severity of the disease. A positive correlation of serum AMH with the number of antral follicles has also been found in patients with PCOS. Objective: The objective of this study was to investigate the relationship between AMH and androgens and whether AMH contributes to altered folliculogenesis in non-obese women with PCOS. Methods: We designed a prospective study which included a total of 65 IVF individuals. It enrolled 26 cases of PCOS, based on the 2003 Rotterdam criteria, and 39 normally ovulating, non-PCOS, healthy, age-matched controls. AMH levels and ovarian morphology were assessed, and the relationships between AMH and androgenaemia in patients with and without PCOS were studied. Results: The mean age of the PCOS patients was slightly higher than that of the controls (32±4 and 28±3 years, respectively). AMH generally increased with antral follicle count (AFC) [P=0.001], testosterone, and luteinising hormone, and decreased with age and serum sex hormone binding globulin (SHBG). No significant relationship was found between circulating AMH levels and BMI in either the PCOS or non-PCOS patients. The calculation of AMH production per antral follicle (AMH/AF) showed a significant difference in median AMH/AF between PCOS and non-PCOS patients (P=0.001). Both the PCOS and non-PCOS groups showed a very similar increase in AMH with increasing AFC, but the PCOS patients had consistently higher AMH across all AFC levels. Conclusions: These observations indicate a connection between AMH and androgen levels in PCOS and non-PCOS East Indian women. Excessive granulosa cell activity may be implicated in the abnormal follicular dynamics of the syndrome. AMH levels are higher in women with PCOS and, on the other hand, very low in women with ovarian failure.

Keywords: anti-Mullerian hormone, polycystic ovary syndrome, antral follicle count, androgens

Procedia PDF Downloads 212
1485 Exhaled Breath Condensate in Lung Cancer: A Non-Invasive Sample for Easier Mutations Detection by Next Generation Sequencing

Authors: Omar Youssef, Aija Knuuttila, Paivi Piirilä, Virinder Sarhadi, Sakari Knuutila

Abstract:

Exhaled breath condensate (EBC) is a unique sample that allows different genetic changes in lung carcinoma to be studied in a non-invasive way. With the aid of next generation sequencing (NGS) technology, the analysis of genetic mutations has become more efficient, with increased sensitivity for the detection of genetic variants. In order to investigate the possibility of applying this method to cancer diagnostics, mutations in EBC DNA from lung cancer patients and healthy individuals were studied using NGS. The key aim is to assess the feasibility of using this approach to detect clinically important mutations in EBC. EBC was collected from 20 healthy individuals and 9 lung cancer patients (four lung adenocarcinomas, four squamous cell carcinomas, and one case of mesothelioma). Mutations in hotspot regions of 22 genes were studied using the Ampliseq Colon and Lung Cancer panel and sequenced on the Ion PGM. Results demonstrated that all nine patients showed a total of 19 COSMIC mutations in APC, BRAF, EGFR, ERBB4, FBXW7, FGFR1, KRAS, MAP2K1, NRAS, PIK3CA, PTEN, RET, SMAD4, and TP53. In controls, 15 individuals showed 35 COSMIC mutations in BRAF, CTNNB1, DDR2, EGFR, ERBB2, FBXW7, FGFR3, KRAS, MET, NOTCH1, NRAS, PIK3CA, PTEN, SMAD4, and TP53. Additionally, 45 novel mutations not reported previously were seen in patients' samples, and 106 novel mutations were seen in controls' specimens. The KRAS exon 2 mutation G12D was identified in one control specimen with a mutant allele fraction of 6.8%, while the KRAS G13D mutation seen in one patient sample showed a mutant allele fraction of 17%. These findings illustrate that hotspot mutations are present in DNA from the EBC of both cancer patients and healthy controls. As some of the COSMIC mutations were seen in controls too, no firm conclusion can be drawn on the clinical importance of COSMIC mutations in patients. Mutations reported in controls could represent early neoplastic changes or the normal homeostatic process of apoptosis occurring in lung tissue to get rid of mutant cells. At the same time, mutations detected in patients might represent a non-invasive, easily accessible way for early cancer detection. Follow-up of individuals with important cancer mutations is necessary to clarify the significance of these mutations in both healthy individuals and cancer patients.

Keywords: exhaled breath condensate, lung cancer, mutations, next generation sequencing

Procedia PDF Downloads 176
1484 Design, Synthesis, and Catalytic Applications of Functionalized Metal Complexes and Nanomaterials for Selective Oxidation and Coupling Reactions

Authors: Roghaye Behroozi

Abstract:

The development of functionalized metal complexes and nanomaterials has gained significant attention due to their potential in catalyzing selective oxidation and coupling reactions. These catalysts play a crucial role in various industrial and pharmaceutical processes, enhancing the efficiency, selectivity, and sustainability of chemical reactions. This research aims to design and synthesize new functionalized metal complexes and nanomaterials and to explore their catalytic applications in the selective oxidation of alcohols and in coupling reactions, focusing on improving yield, selectivity, and catalyst reusability. The study involves the synthesis of a nickel Schiff base complex stabilized within MCM-41 as a heterogeneous catalyst. A Schiff base ligand derived from glycine was used to create a tin(IV) metal complex characterized through spectroscopic techniques and computational analysis. Additionally, iron-based magnetic nanoparticles functionalized with melamine were synthesized for catalytic evaluation. Lastly, a palladium(IV) complex was prepared, and its oxidative stability was analyzed. The nickel Schiff base catalyst showed high selectivity in converting primary and secondary alcohols to aldehydes and ketones, with yields ranging from 73% to 90%. The tin(IV) complex displayed well-defined structural and electronic properties, with consistent results between experimental and computational data. The melamine-functionalized iron nanoparticles exhibited efficient catalytic activity in producing triazoles, with enhanced reaction speed and reusability. The palladium(IV) complex displayed remarkable stability and low reactivity towards C–C bond formation due to its symmetrical structure. The synthesized metal complexes and nanomaterials demonstrated significant potential as efficient, selective, and reusable catalysts for oxidation and coupling reactions. These findings pave the way for developing environmentally friendly and cost-effective catalytic systems for industrial applications.

Keywords: catalysts, Schiff base complexes, metal-organic frameworks, oxidation reactions, nanoparticles, reusability

Procedia PDF Downloads 15
1483 Assessment of the Performance of the Sonoreactors Operated at Different Ultrasound Frequencies, to Remove Pollutants from Aqueous Media

Authors: Gabriela Rivadeneyra-Romero, Claudia del C. Gutierrez Torres, Sergio A. Martinez-Delgadillo, Victor X. Mendoza-Escamilla, Alejandro Alonzo-Garcia

Abstract:

Ultrasonic degradation is currently used in sonochemical reactors to degrade pollutant compounds from aqueous media, such as emerging contaminants (e.g., pharmaceuticals, drugs, and personal care products), because they can have ecological impacts on the environment. For this reason, it is important to develop appropriate water and wastewater treatments able to reduce pollution and increase reuse. Pollutants such as textile dyes, aromatic and phenolic compounds, chlorobenzene, bisphenol-A, carboxylic acids, and other organic pollutants can be removed from wastewaters by sonochemical oxidation. The effectiveness of pollutant removal depends on the ultrasonic frequency used; however, few studies have addressed the behavior of the fluid inside sonoreactors operated at different ultrasonic frequencies. Based on the above, it is necessary to study the hydrodynamic behavior of the liquid generated by the ultrasonic irradiation in order to design efficient sonoreactors and reduce treatment times and costs. In this work, the hydrodynamic behavior of the fluid in sonochemical reactors at different frequencies (250 kHz, 500 kHz, and 1000 kHz) was studied. The performance of the sonoreactors at these frequencies was simulated using computational fluid dynamics (CFD). Because there is a large sound speed gradient between the piezoelectric transducer and the fluid, k-ε models were used. The piezoelectric transducer was defined as a vibrating surface in order to evaluate the effect of the different frequencies on the fluid in the sonochemical reactor. Structured hexahedral cells were used to mesh the computational liquid domain, and fine triangular cells were used to mesh the piezoelectric transducers. Unsteady-state conditions were used in the solver. The dissipation rate, flow field velocities, Reynolds stresses, and turbulent quantities were evaluated by CFD and 2D-PIV measurements. The test results show that an increase in the ultrasonic frequency does not necessarily correlate with greater pollutant degradation; moreover, the reactor geometry and power density are important factors that should be considered in sonochemical reactor design.

Keywords: CFD, reactor, ultrasound, wastewater

Procedia PDF Downloads 190
1482 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning

Authors: Akeel A. Shah, Tong Zhang

Abstract:

Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time-consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions; many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT). The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and the correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map, and furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs the result can be a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in, for example, SchNet and MEGNet. The graph incorporates information regarding the numbers, types, and properties of atoms; the types of bonds; and the bond angles. The key to the accuracy of multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment, and HOMO/LUMO.
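
A minimal sketch of the Δ-ML idea the approach builds on: learn the high-minus-low fidelity correction from a few expensive labels, then predict high fidelity as low fidelity plus the learned correction; a plain gradient-boosting regressor stands in for the paper's graph network, and the synthetic fidelities are illustrative:

```python
# Delta-learning: fit the correction y_high - y_low on a small labeled subset,
# then predict y_high everywhere as y_low + correction(x).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(5000, 10))            # molecular descriptors
y_low = X[:, 0] + 0.5 * X[:, 1]                    # cheap level (e.g. DFT-like)
y_high = y_low + 0.2 * np.sin(3 * X[:, 0]) + 0.1 * X[:, 2]  # expensive level

n_hi = 200                                          # only a few high-fidelity labels
idx = rng.choice(len(X), n_hi, replace=False)
delta_model = GradientBoostingRegressor(random_state=0)
delta_model.fit(X[idx], (y_high - y_low)[idx])      # learn the correction only

y_pred = y_low + delta_model.predict(X)             # low fidelity + correction
rmse = np.sqrt(np.mean((y_pred - y_high) ** 2))
print(f"RMSE with {n_hi} high-fidelity labels: {rmse:.4f}")
```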

Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning

Procedia PDF Downloads 39
1481 Immunocytochemical Stability of Antigens in Cytological Samples Stored in In-house Liquid-Based Medium

Authors: Anamarija Kuhar, Veronika Kloboves Prevodnik, Nataša Nolde, Ulrika Klopčič

Abstract:

The decision for immunocytochemistry (ICC) is usually made on the basis of the findings in Giemsa- and/or Papanicolaou-stained smears. More demanding diagnostic cases require the preparation of additional cytological preparations. Therefore, it is convenient to suspend cytological samples in a liquid-based medium (LBM) that preserves antigen and morphological properties. However, how long these properties remain preserved in the medium is usually unknown. Eventually, cell morphology becomes impaired and altered, and antigen properties may be lost or become diffuse. In this study, the influence of the length of storage of cytological samples in an in-house liquid-based medium on antigen properties and cell morphology is evaluated. The question is how long cytological samples can be stored in this medium so that the results of immunocytochemical reactions are still reliable and can be safely used in routine cytopathological diagnostics. The stability of the 6 ICC markers most frequently used in everyday routine work was tested: Cytokeratin AE1/AE3, Calretinin, Epithelial specific antigen Ep-CAM (MOC-31), CD45, Oestrogen receptor (ER), and Melanoma triple cocktail were tested on methanol-fixed cytospins prepared from fresh fine needle aspiration biopsies, effusion samples, and disintegrated lymph nodes suspended in the in-house cell medium. Cytospins were prepared on the day of sampling as well as on the second, fourth, fifth, and eighth day after sample collection. They were then fixed in methanol and immunocytochemically stained. Finally, the percentage of positively stained cells, the reaction intensity, the counterstaining, and the cell morphology were assessed using two assessment methods: an internal assessment and the UK NEQAS ICC scheme assessment. The results show that the antigen properties for Cytokeratin AE1/AE3, MOC-31, CD45, ER, and Melanoma triple cocktail were preserved even after 8 days of storage in the in-house LBM, while the antigen properties for Calretinin remained unchanged for only 4 days. The key parameters for assessing the detection of an antigen are the proportion of cells with a positive reaction and the intensity of staining. Well-preserved cell morphology is highly important for reliable interpretation of ICC reactions. Therefore, it would be valuable to perform a similar analysis for other ICC markers to determine how long the antigen and morphological properties are preserved in LBM.

Keywords: cytology samples, cytospins, immunocytochemistry, liquid-based cytology

Procedia PDF Downloads 141
1480 Study of Morning-Glory Spillway Structure in Hydraulic Characteristics by CFD Model

Authors: Mostafa Zandi, Ramin Mansouri

Abstract:

Spillways are among the most important hydraulic structures of dams, providing the stability of the dam and downstream areas at the time of flood. The morning-glory spillway is a common spillway for discharging the overflow water behind dams, and this kind of spillway is constructed in dams with small reservoirs. In this research, the hydraulic flow characteristics of a morning-glory spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for the velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for the discretization of the momentum, k, and ε equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. The results show that a fine computational grid, a velocity condition at the flow inlet boundary, and a pressure condition at the boundaries in contact with air provide the best possible results. Also, the standard wall function was chosen to treat the near-wall region, and the standard k-ε turbulence model gave the results most consistent with the experiments. As the jet approaches the end of the basin, the differences between the computational and experimental results increase. The lower profile of the water jet is less sensitive to the modelling choices than the upper profile. For the pressure, it was also found that the numerical values at the lower landing region differ considerably from the experimental results. In summary, the characteristics of the complex flows over a morning-glory spillway were studied numerically using a RANS solver. A grid study showed that the numerical results of a 57512-node grid had the best agreement with the experimental values. A downstream channel length of 1.5 m was found to be preferable, and the standard k-ε turbulence model produced the best results for the morning-glory spillway. The numerical free-surface profiles followed the theoretical equations very well.
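
For reference, the interface in the VOF method is tracked with a volume-fraction transport equation; a standard textbook form (not quoted from the paper itself) is:

```latex
% Standard VOF volume-fraction transport equation; \alpha is the water
% volume fraction (\alpha = 1 in water, \alpha = 0 in air):
\frac{\partial \alpha}{\partial t} + \nabla \cdot (\alpha \mathbf{u}) = 0
% Mixture properties are then blended from the phase values, e.g.:
\rho = \alpha \rho_{w} + (1 - \alpha) \rho_{a}
```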

Keywords: morning-glory spillway, CFD model, hydraulic characteristics, wall function

Procedia PDF Downloads 77
1479 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

The machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely developed in the deep learning framework. Deriving visual interpretation from high-dimensional imagery data is generally considered a challenging problem. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation-invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to a training set that generally needs to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of uniformly small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model where convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters of varying sizes and associate the filter responses with scalar weights that correspond to the standard deviations of the random filters. This allows us to use a large number of random filters at the cost of one scalar unknown per filter. The computational cost in the back-propagation procedure does not increase with larger filters, although additional cost is incurred in computing the convolutions in the feed-forward procedure. The use of random kernels of varying sizes makes it possible to analyze image features effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments, in which the well-known CNN architectures are quantitatively compared with our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential in a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
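
As a rough illustration of the idea of fixed random kernels of several sizes with one learned scalar per filter, the following PyTorch sketch uses an assumed layer design; the kernel sizes, filter counts, and scaling scheme are illustrative choices, not the authors' exact architecture.

```python
# Sketch of a layer with fixed random convolution kernels of several sizes;
# only one scalar weight per filter is trained, as the abstract describes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomKernelConv(nn.Module):
    def __init__(self, in_ch, filters_per_size=8, sizes=(3, 5, 7)):
        super().__init__()
        scales = []
        for k in sizes:
            # Random filter bank: registered as a buffer, so it is never trained
            self.register_buffer(f"bank{k}", torch.randn(filters_per_size, in_ch, k, k))
            scales.append(nn.Parameter(torch.ones(filters_per_size)))
        self.sizes = sizes
        self.scales = nn.ParameterList(scales)  # one learnable scalar per filter

    def forward(self, x):
        outs = []
        for k, s in zip(self.sizes, self.scales):
            w = getattr(self, f"bank{k}")
            y = F.conv2d(x, w, padding=k // 2)   # 'same' padding for odd k
            outs.append(y * s.view(1, -1, 1, 1)) # learned scalar scaling
        return torch.cat(outs, dim=1)

layer = RandomKernelConv(in_ch=3)
print(layer(torch.randn(2, 3, 32, 32)).shape)    # torch.Size([2, 24, 32, 32])
```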

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 289
1478 Artificial Neural Network Based Model for Detecting Attacks in Smart Grid Cloud

Authors: Sandeep Mehmi, Harsh Verma, A. L. Sangal

Abstract:

Ever since the idea of using computing services as a commodity that can be delivered like other utilities, e.g., electricity and telephone service, was floated, the scientific fraternity has diverted its research towards a new area called utility computing. New paradigms like cluster computing and grid computing came into existence while edging closer to utility computing. With the advent of the internet, the demand for anytime, anywhere access to resources that could be provisioned dynamically as a service gave rise to the next-generation computing paradigm known as cloud computing. Today, cloud computing has become one of the most aggressively growing computing paradigms, with a growing rate of applications in the area of IT outsourcing. Besides catering to computational and storage demands, cloud computing has economically benefitted almost all fields: education, research, entertainment, medicine, banking, military operations, weather forecasting, business, and finance, to name a few. The smart grid is another discipline that direly needs to benefit from the advantages of cloud computing. The smart grid is a new technology that has revolutionized the power sector by automating the transmission and distribution system and integrating smart devices. A cloud-based smart grid can fulfill the storage requirements of the unstructured and uncorrelated data generated by smart sensors as well as the computational needs of self-healing, load balancing, and demand response features. However, security issues such as confidentiality, integrity, availability, accountability, and privacy need to be resolved for the development of the smart grid cloud. In recent years, a number of intrusion prevention techniques have been proposed in the cloud, but hackers/intruders still manage to bypass cloud security. Therefore, precise intrusion detection systems need to be developed in order to secure critical information infrastructure like the smart grid cloud. Considering the success of artificial neural networks in building robust intrusion detection systems, this research proposes an artificial neural network based model for detecting attacks in the smart grid cloud.
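
A minimal sketch of the general approach follows, with entirely synthetic data and a hypothetical feature set; the abstract does not specify the network architecture or the data used, so every detail here is an illustrative assumption.

```python
# Sketch of an ANN-based intrusion detector: an MLP classifier that flags
# network flows in a smart grid cloud as normal (0) or attack (1).
# Features and labels below are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 20))                  # flow features (duration, bytes, ...)
y = (X[:, 0] + 0.5 * X[:, 3] > 1).astype(int)    # synthetic attack label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)              # scale features for the MLP

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("test accuracy:", clf.score(scaler.transform(X_te), y_te))
```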

Keywords: artificial neural networks, cloud computing, intrusion detection systems, security issues, smart grid

Procedia PDF Downloads 318
1477 Numerical Solutions of Generalized Burger-Fisher Equation by Modified Variational Iteration Method

Authors: M. O. Olayiwola

Abstract:

Numerical solutions of the generalized Burger-Fisher equation are obtained using a Modified Variational Iteration Method (MVIM) with minimal computational effort. The results computed with this technique have been compared with those of other methods. The present method is seen to be a very reliable alternative to some existing techniques for such nonlinear problems.
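
For context, the equation usually studied under this name and the general VIM correction functional take the following standard forms from the literature (they are not quoted from the paper itself):

```latex
% Generalized Burger-Fisher equation (standard literature form):
u_t + \alpha u^{\delta} u_x - u_{xx} = \beta u \left( 1 - u^{\delta} \right)

% General VIM correction functional, with Lagrange multiplier \lambda,
% linear operator L, nonlinear operator N, and restricted variation \tilde{u}_n:
u_{n+1}(x,t) = u_n(x,t) + \int_0^{t} \lambda(\tau)
  \left[ L\,u_n(x,\tau) + N\,\tilde{u}_n(x,\tau) - g(x,\tau) \right] d\tau
```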

Keywords: Burger-Fisher, modified variational iteration method, Lagrange multiplier, Taylor’s series, partial differential equation

Procedia PDF Downloads 430
1476 BSL-2/BSL-3 Laboratory for Diagnosis of Pathogens in the Colombia-Ecuador Border Region: A Post-COVID Commitment to Public Health

Authors: Anderson Rocha-Buelvas, Jaqueline Mena Huertas, Edith Burbano Rosero, Arsenio Hidalgo Troya, Mauricio Casas Cruz

Abstract:

COVID-19 is a disruptive pandemic for the public health and economic systems of entire countries, including Colombia. The Nariño Department, in the southwest of the country, lies on the border with Ecuador and constantly faces demographic movements that affect the transmission of infections between the two countries. In Nariño, the early routine diagnosis of SARS-CoV-2, which can be handled at BSL-2, has affected the transmission dynamics of COVID-19. However, new emerging and re-emerging viruses with the biological flexibility of Risk Group 3 agents can take advantage of epidemiological opportunities, generating the need to increase clinical diagnostic capacity, mainly in border regions between countries. The overall objective of this project was to assure the quality of the analytical process in the diagnosis of high-biological-risk pathogens in Nariño by building a laboratory that includes biosafety level (BSL)-2 and BSL-3 containment zones. The delimitation of zones was carried out according to the Verification Tool of the National Health Institute of Colombia and following the standard requirements for the competence of testing and calibration laboratories of the International Organization for Standardization. This is achieved by the harmonization of methods and equipment for effective and durable diagnostics of the large-scale spread of highly pathogenic microorganisms, employing negative-pressure containment systems and UV systems under a finely controlled electrical system, together with PCR systems as new diagnostic tools, which increases laboratory capacity. Protection in BSL-3 zones will separate the handling of potentially infectious aerosols within the laboratory from the community and the environment. It will also allow the handling and inactivation of samples with suspected pathogens and the extraction of molecular material from them, enabling research on high-risk pathogens such as SARS-CoV-2, influenza, respiratory syncytial virus, and malaria, among others. The diagnosis of these pathogens will be articulated across the spectrum of basic, applied, and translational research, and the laboratory could receive about 60 samples daily. It is expected that this project will be articulated with the health policies of neighboring countries to increase research capacity.

Keywords: medical laboratory science, SARS-CoV-2, public health surveillance, Colombia

Procedia PDF Downloads 91
1475 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood

Authors: Randa Alharbi, Vladislav Vyshemirsky

Abstract:

Systems biology is an important field of science which focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their functions, and their interactions. A well-designed model requires selecting a suitable mechanism that can capture the main features of the system, defining the essential components of the system, and representing an appropriate law for the interactions between its components. Complex biological systems exhibit stochastic behaviour; thus, probabilistic models are suitable for describing and analysing them. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model: it describes the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time can be obtained from the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet, inference in such a complex system is challenging, as it requires the evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian Computation (ABC) is a common approach for tackling inference; it relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper, we discuss the efficiency and possible practical issues of each method, taking their computational time into account. We demonstrate likelihood-free inference by analysing a model of the Repressilator with both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
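
As a toy illustration of ABC rejection, the following Python sketch uses a simple birth-death CTMC simulated with the Gillespie algorithm rather than the Repressilator model analysed in the paper; the prior, tolerance, and summary statistic are illustrative assumptions.

```python
# ABC rejection sketch on a birth-death CTMC: accept prior draws whose
# Gillespie simulation reproduces the observed end-state within a tolerance.
import numpy as np

rng = np.random.default_rng(2)

def gillespie(birth, death, x0=10, t_end=5.0):
    """Simulate a birth-death CTMC and return the final population."""
    x, t = x0, 0.0
    while True:
        rates = np.array([birth * x, death * x])
        total = rates.sum()
        if total == 0:                        # absorbed at extinction
            return x
        t += rng.exponential(1 / total)       # exponential waiting time
        if t > t_end:
            return x
        x += 1 if rng.random() < rates[0] / total else -1

x_obs = gillespie(birth=0.4, death=0.3)       # pretend this is the data

accepted = []
while len(accepted) < 200:
    birth = rng.uniform(0, 1)                 # draw from the prior
    if abs(gillespie(birth, death=0.3) - x_obs) <= 2:   # tolerance eps = 2
        accepted.append(birth)

print("approximate posterior mean of birth rate:", np.mean(accepted))
```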

Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)

Procedia PDF Downloads 202
1474 Exploration of Cone Foam Breaker Behavior Using Computational Fluid Dynamic

Authors: G. St-Pierre-Lemieux, E. Askari Mahvelati, D. Groleau, P. Proulx

Abstract:

Mathematical modeling has become an important tool for the study of foam behavior. Computational Fluid Dynamics (CFD) can be used to investigate the behavior of foam around foam breakers to better understand the mechanisms leading to the ‘destruction’ of foam. The focus of this investigation was the simple cone foam breaker, whose performance has been identified in numerous studies. While the optimal pumping angle is known from the literature, the contributions of pressure drop, shearing, and centrifugal forces to foam syneresis are subject to speculation. This work provides a screening of those factors against changes in the cone angle and the foam rheology. The CFD simulation was carried out with the open-source OpenFOAM toolkit on a full three-dimensional model discretized using hexahedral cells. The geometry was generated using a Python script and then meshed with blockMesh. The OpenFOAM Volume of Fluid (VOF) solver interFoam was used to obtain a detailed description of the interfacial forces, and the k-omega SST model was used to calculate the turbulence fields. The cone configuration allows the use of a rotating-wall boundary condition. In each case, a pair of immiscible fluids, foam/air or water/air, was used. The foam was modeled as a shear-thinning (Herschel-Bulkley) fluid. The results were compared to our measurements and to results found in the literature, first by computing the pumping rate of the cone, and second by the liquid break-up at the exit of the cone. A 3D-printed version of the cones, submerged in foam (shaving cream or soap solution) and water at speeds varying between 400 RPM and 1500 RPM, was also used to validate the modeling results by calculating the torque exerted on the shaft. While most of the literature focuses on cone behavior in Newtonian fluids, this work explores its behavior in a shear-thinning fluid, which better reflects the apparent rheology of foam. These simulations shed new light on the cone behavior within the foam and allow the computation of the shearing, pressure, and velocity of the fluid, enabling a better evaluation of the efficiency of the cones as foam breakers. This study contributes to clarifying the mechanisms behind foam breaker performance, at least in part, using modern CFD techniques.
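
A minimal sketch of the Herschel-Bulkley apparent viscosity used to model the foam is given below; the parameter values are illustrative assumptions, not those of the study.

```python
# Herschel-Bulkley apparent viscosity:
#   tau = tau0 + k * gamma_dot**n,  so  eta_app = tau / gamma_dot
# With n < 1 the fluid is shear thinning, matching the foam model above.
import numpy as np

def herschel_bulkley_viscosity(gamma_dot, tau0=5.0, k=2.0, n=0.4, eta_max=1e3):
    """Apparent viscosity [Pa.s] at shear rate gamma_dot [1/s].

    eta_max caps the viscosity at vanishing shear rate, the usual
    regularization applied in CFD solvers.
    """
    gamma_dot = np.maximum(gamma_dot, 1e-12)   # avoid division by zero
    eta = (tau0 + k * gamma_dot**n) / gamma_dot
    return np.minimum(eta, eta_max)

# Shear thinning: apparent viscosity drops as the shear rate increases
for g in (0.1, 1.0, 10.0, 100.0):
    print(f"gamma_dot = {g:6.1f} 1/s -> eta = {herschel_bulkley_viscosity(g):8.2f} Pa.s")
```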

Keywords: bioreactor, CFD, foam breaker, foam mitigation, OpenFOAM

Procedia PDF Downloads 203