Search results for: computational neural networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5561

1541 Detecting Critical Thinking Skills in Written Text Analysis: The Use of Artificial Intelligence in Text Analysis vs. ChatGPT

Authors: Lucilla Crosta, Anthony Edwards

Abstract:

Companies and the marketplace nowadays struggle to find employees with skills adequate to the anticipated growth of their businesses. At least half of all workers will need to undertake some form of up-skilling in the next five years in order to remain aligned with the demands of the market. To meet these challenges, there is a clear need to explore the potential uses of AI (artificial intelligence) based tools in assessing the transversal skills (critical thinking, communication and soft skills of different types in general) of workers and adult students, while empowering them to develop those same skills in a reliable and trustworthy way. Companies seek workers with key transversal skills that can make a difference now and in the future. Critical thinking appears to be one of the most important of these skills, bringing unexplored ideas and company growth in business contexts. What employers have been reporting for years now is that this skill is lacking in the majority of workers and adult students, and this is particularly visible through their writing. This paper investigates how critical thinking and communication skills are currently developed in Higher Education environments through the use of AI tools at postgraduate level. It analyses the use of a branch of AI, namely Machine Learning, Big Data and Neural Network Analysis. It also examines the acquisition of these skills through AI tools and what effect this has on employability. The paper draws on researchers and studies at both national (Italy and UK) and international level in Higher Education. The issues associated with the development and use of one specific AI tool, Edulai, are examined in detail. Finally, comparisons are made between these tools and the more recent phenomenon of ChatGPT, and their strengths and drawbacks are analysed.

Keywords: critical thinking, artificial intelligence, higher education, soft skills, chat GPT

Procedia PDF Downloads 107
1540 Laboratory and Numerical Hydraulic Modelling of Annular Pipe Electrocoagulation Reactors

Authors: Alejandra Martin-Dominguez, Javier Canto-Rios, Velitchko Tzatchkov

Abstract:

Electrocoagulation is a water treatment technology that consists of generating coagulant species in situ by electrolytic oxidation of sacrificial anode materials triggered by electric current. It removes suspended solids, heavy metals, emulsified oils, bacteria, colloidal solids and particles, soluble inorganic pollutants and other contaminants from water, offering an alternative to the use of metal salts or polymers and polyelectrolyte addition for breaking stable emulsions and suspensions. The method essentially consists of passing the water being treated through pairs of consumable conductive metal plates in parallel, which act as monopolar electrodes, commonly known as ‘sacrificial electrodes’. Physicochemical, electrochemical and hydraulic processes are involved in the efficiency of this type of treatment. While the physicochemical and electrochemical aspects of the technology have been extensively studied, little is known about the influence of the hydraulics. However, the hydraulic process is fundamental for the reactions that take place at the electrode boundary layers and for the coagulant mixing. Electrocoagulation reactors can be open (with free water surface) or closed (pressurized). Independently of the type of reactor, hydraulic head loss is an important factor for its design. The present work focuses on the study of the total hydraulic head loss and the flow velocity and pressure distribution in electrocoagulation reactors with single or multiple concentric annular cross sections. An analysis of the head loss produced by hydraulic wall shear friction and accessories (minor head losses) is presented and compared to the head loss measured on a semi-pilot scale laboratory model for different flow rates through the reactor. The tests included laminar, transitional and turbulent flow. The observed head loss was also compared to the head loss predicted by several known conceptual, theoretical and empirical equations specific for flow in concentric annular pipes. Four single concentric annular cross section configurations and one multiple concentric annular cross section reactor configuration were studied. The theoretical head loss was higher than that observed in the laboratory model in some of the tests and lower in others, depending also on the assumed value of the wall roughness. Most of the theoretical models assume that the fluid elements in all annular sections have the same velocity, and that flow is steady, uniform and one-dimensional, with the same pressure and velocity profiles in all reactor sections. To check the validity of such assumptions, a computational fluid dynamics (CFD) model of the concentric annular pipe reactor was implemented using the ANSYS Fluent software, demonstrating that the pressure and flow velocity distributions inside the reactor are actually not uniform. Based on the analysis, the equations that best predict the head loss in single and multiple annular sections were obtained. Other factors that may impact the head loss, such as the generation of coagulants and gases during the electrochemical reaction, the accumulation of hydroxides inside the reactor, and the change of the electrode material with time, are also discussed. The results can be used as tools for the design and scale-up of electrocoagulation reactors, to be integrated into new or existing water treatment plants.

Keywords: electrocoagulation reactors, hydraulic head loss, concentric annular pipes, computational fluid dynamics model

Procedia PDF Downloads 217
1539 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores

Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan

Abstract:

Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a specific feature vector of different dimensions from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, an algorithm generally reduces the image details by pooling. This operation overlooks the details that forensic experts pay close attention to. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of naturally collected face images with the known frontal ID photos of the same persons. Downscaling and manual handling were performed on the testing images. The results showed that the facial recognition algorithms based on deep learning detected structural and morphological information and rarely focused on specific markers such as stains and moles. Overall performance, the distribution of genuine scores and impostor scores, and likelihood ratios were tested to evaluate the accuracy of the biometric systems and the forensic experts. Experiments showed that the biometric systems were skilled in distinguishing category features, and forensic experts were better at discovering the individual features of human faces. In the proposed approach, a fusion was performed at the score level. At the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of objective methods of facial comparison and provides a novel method for human-machine collaboration in this field.
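As an illustration of the score-level fusion idea, the minimal sketch below assumes independence between the automated system and the examiner (so their log-likelihood ratios simply add) and uses invented scores and a synthetic impostor distribution; it is not the calibrated fusion rule of the paper.

```python
import numpy as np

# Hypothetical impostor scores, used only to pick an operating threshold at a
# fixed false accept rate (FAR); none of these values come from the paper.
rng = np.random.default_rng(0)
impostor_log_lr = rng.normal(-2.0, 1.0, size=10_000)
far = 0.001
threshold = np.quantile(impostor_log_lr, 1.0 - far)   # accept only above this score

def fuse_log_lr(lr_system: float, lr_expert: float) -> float:
    """Score-level fusion: assuming the two sources are independent,
    their log-likelihood ratios add."""
    return float(np.log(lr_system) + np.log(lr_expert))

fused = fuse_log_lr(lr_system=35.0, lr_expert=4.0)     # one hypothetical comparison
print("fused log-LR:", round(fused, 2), "accept:", fused > threshold)
```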

Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics

Procedia PDF Downloads 128
1538 Acceleration of Lagrangian and Eulerian Flow Solvers via Graphics Processing Units

Authors: Pooya Niksiar, Ali Ashrafizadeh, Mehrzad Shams, Amir Hossein Madani

Abstract:

There are many computationally demanding applications in science and engineering which need efficient algorithms implemented on high performance computers. Recently, Graphics Processing Units (GPUs) have drawn much attention compared to traditional CPU-based hardware and have opened up new improvement venues in scientific computing. One particular application area is Computational Fluid Dynamics (CFD), in which mature CPU-based codes need to be converted to GPU-based algorithms to take advantage of this new technology. In this paper, numerical solutions of two classes of discrete fluid flow models via both CPU and GPU are discussed and compared. Test problems include an Eulerian model of a two-dimensional incompressible laminar flow case and a Lagrangian model of a two-phase flow field. The CUDA programming standard is used to employ an NVIDIA GPU with 480 cores, and a C++ serial code is run on a single core of an Intel quad-core CPU. Up to two orders of magnitude speed-up is observed on the GPU for a certain range of grid resolutions or particle numbers. As expected, the Lagrangian formulation is better suited for parallel computation on the GPU, although the Eulerian formulation shows a significant speed-up as well.
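A minimal sketch of the CPU-versus-GPU pattern described here, using NumPy and CuPy on a Jacobi stencil rather than the authors' CUDA C++ solvers; the grid size, iteration count and the stencil itself are illustrative only.

```python
import numpy as np
import cupy as cp   # NumPy-compatible arrays that live on an NVIDIA GPU

def jacobi(u0, n_iter, xp):
    """Explicit Jacobi smoothing of a 2-D field; xp is either numpy or cupy,
    so the identical stencil code runs on the CPU or on the GPU."""
    u = xp.array(u0)                      # copy (and, for cupy, transfer to the device)
    for _ in range(n_iter):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
    return u

grid = np.zeros((2048, 2048), dtype=np.float32)
grid[0, :] = 1.0                          # simple Dirichlet boundary condition
u_cpu = jacobi(grid, 200, np)             # serial NumPy run on one CPU core
u_gpu = cp.asnumpy(jacobi(grid, 200, cp)) # same stencil evaluated on the GPU
```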

Keywords: CFD, Eulerian formulation, graphics processing units, Lagrangian formulation

Procedia PDF Downloads 412
1537 Numerical Design and Characterization of MOVPE Grown Nitride Based Semiconductors

Authors: J. Skibinski, P. Caban, T. Wejrzanowski, K. J. Kurzydlowski

Abstract:

In the present study, numerical simulations of the epitaxial growth of gallium nitride in the Metal Organic Vapor Phase Epitaxy reactor AIX-200/4RF-S are addressed. The aim of this study was to design the optimal fluid flow and thermal conditions for obtaining the most homogeneous product. Since there are many factors influencing the reactions in the crystal growth area, such as temperature, pressure, gas flow or reactor geometry, it is difficult to design an optimal process. Variations of process pressure and hydrogen mass flow rates have been considered. Since it is impossible to determine experimentally the exact distribution of heat and mass transfer inside the reactor during crystal growth, detailed 3D modeling has been used to gain insight into the process conditions. Numerical simulations make it possible to understand the epitaxial process through the calculation of the heat and mass transfer distribution during the growth of gallium nitride. Including chemical reactions in the numerical model allows the growth rate on the substrate to be calculated. The present approach has been applied to enhance the performance of the AIX-200/4RF-S reactor.

Keywords: computational fluid dynamics, finite volume method, epitaxial growth, gallium nitride

Procedia PDF Downloads 452
1536 A Quick Prediction for Shear Behaviour of RC Membrane Elements by Fixed-Angle Softened Truss Model with Tension-Stiffening

Authors: X. Wang, J. S. Kuang

Abstract:

The Fixed-Angle Softened Truss Model with Tension-Stiffening (FASTMT) has superior performance in predicting the shear behaviour of reinforced concrete (RC) membrane elements, especially the post-cracking behaviour. Nevertheless, massive computational work is inevitable due to the multiple transcendental equations involved in the stress-strain relationship. In this paper, an iterative root-finding technique is introduced into FASTMT for quickly solving the transcendental equations of the tension-stiffening effect of RC membrane elements. This fast FASTMT, implemented in MATLAB, uses the bisection method to calculate the tensile stress of the membranes. With this simplification, the elapsed time of each loop is reduced significantly while the transcendental equations are still solved accurately. Owing to its high efficiency and good accuracy compared with FASTMT, the fast FASTMT can be further applied to the quick prediction of the shear behaviour of complex large-scale RC structures.
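As a minimal sketch of the root-finding step, written in Python rather than MATLAB and applied to an illustrative transcendental equation instead of the FASTMT tension-stiffening law, the bisection method repeatedly halves a bracketing interval until the residual is small.

```python
import numpy as np

def bisect(f, lo, hi, tol=1e-10, max_iter=200):
    """Classic bisection: assumes f(lo) and f(hi) bracket a root."""
    flo = f(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if abs(fmid) < tol or (hi - lo) < tol:
            return mid
        if flo * fmid < 0.0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

# Illustrative transcendental equation (not the FASTMT constitutive law):
# find the stress x satisfying x = a * exp(-b * x).
a, b = 2.5, 1.3
root = bisect(lambda x: x - a * np.exp(-b * x), 0.0, a)
print(root)
```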

Keywords: bisection method, FASTMT, iterative root-finding technique, reinforced concrete membrane

Procedia PDF Downloads 270
1535 Investigation of Different Machine Learning Algorithms in Large-Scale Land Cover Mapping within the Google Earth Engine

Authors: Amin Naboureh, Ainong Li, Jinhu Bian, Guangbin Lei, Hamid Ebrahimy

Abstract:

Large-scale land cover mapping has become a new challenge in the land change and remote sensing fields because it involves a large volume of data. Moreover, selecting the right classification method, especially when there are different types of landscapes in the study area, is quite difficult. This paper is an attempt to compare the performance of different machine learning (ML) algorithms for generating a land cover map of the China-Central Asia–West Asia Corridor, which is considered one of the main parts of the Belt and Road Initiative (BRI). The cloud-based Google Earth Engine (GEE) platform was used to generate a land cover map of the study area from Landsat-8 images (2017) by applying three frequently used ML algorithms, namely random forest (RF), support vector machine (SVM), and artificial neural network (ANN). The selected ML algorithms (RF, SVM, and ANN) were trained and tested using reference data obtained from the MODIS yearly land cover product and very high-resolution satellite images. The findings of the study illustrate that among the three frequently used ML algorithms, RF, with 91% overall accuracy, had the best result in producing a land cover map for the China-Central Asia–West Asia Corridor, whereas ANN showed the worst result with 85% overall accuracy. The great performance of GEE in applying different ML algorithms and handling a huge volume of remotely sensed data in the present study showed that it could also help researchers generate reliable long-term land cover change maps. The findings of this research are of great importance for decision-makers and the BRI's authorities in strategic land use planning.
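A small local stand-in for the GEE comparison, using scikit-learn versions of the same three classifier families scored by overall accuracy; the features and labels below are synthetic placeholders, not the Landsat-8/MODIS reference data used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 7))               # e.g. 7 spectral bands per pixel
y = (X[:, :3].sum(axis=1) > 0).astype(int)   # toy two-class "land cover" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "RF":  RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "ANN": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "overall accuracy:", accuracy_score(y_te, model.predict(X_te)))
```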

Keywords: land cover, google earth engine, machine learning, remote sensing

Procedia PDF Downloads 112
1534 About Multi-Resolution Techniques for Large Eddy Simulation of Reactive Multi-Phase Flows

Authors: Giacomo Rossi, Bernardo Favini, Eugenio Giacomazzi, Franca Rita Picchia, Nunzio Maria Salvatore Arcidiacono

Abstract:

A numerical technique for mesh refinement in the HeaRT (Heat Release and Transfer) numerical code is presented. In the CFD framework, the Large Eddy Simulation (LES) approach is gaining importance as a tool for simulating turbulent combustion processes, although it has a high computational cost due to the complexity of the turbulence modeling and the high number of grid points necessary to obtain a good numerical solution. In particular, when a numerical simulation of a large domain is performed with a structured grid, the number of grid points can increase so much that the simulation becomes impossible; this problem can be overcome with a mesh refinement technique. The mesh refinement technique developed for the HeaRT numerical code (a staggered finite difference code) is based on a high-order reconstruction of the variables at the grid interfaces by means of a least-squares quasi-ENO interpolation. The numerical code is written in modern Fortran (2003 standard or newer) and is parallelized using domain decomposition and the message passing interface (MPI) standard.

Keywords: LES, multi-resolution, ENO, fortran

Procedia PDF Downloads 364
1533 Investigation of Factors Affecting the Total Ionizing Dose Threshold of Electrically Erasable Read Only Memories for Use in Dose Rate Measurement

Authors: Liqian Li, Yu Liu, Karen Colins

Abstract:

The dose rate present in a seriously contaminated area can be indirectly determined by monitoring radiation damage to inexpensive commercial electronics, instead of deploying expensive radiation hardened sensors. EEPROMs (Electrically Erasable Read Only Memories) are a good candidate for this purpose because they are inexpensive and are sensitive to radiation exposure. When the total ionizing dose threshold is reached, an EEPROM chip will show signs of damage that can be monitored and transmitted by less susceptible electronics. The dose rate can then be determined from the known threshold dose and the exposure time, assuming the radiation field remains constant with time. Therefore, the threshold dose needs to be well understood before this method can be used. There are many factors affecting the threshold dose, such as the gamma ray energy spectrum, the operating voltage, etc. The purpose of this study was to experimentally determine how the threshold dose depends on dose rate, temperature, voltage, and duty factor. It was found that the duty factor has the strongest effect on the total ionizing dose threshold, while the effect of the other three factors that were investigated is less significant. The effect of temperature was found to be opposite to that expected to result from annealing and is yet to be understood.
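The dose-rate inference itself is simple arithmetic once the threshold is characterised; a toy sketch with assumed numbers (not measured values from the study):

```python
# Once an EEPROM's total-ionizing-dose threshold is known, a constant dose rate
# follows from the time it took the chip to show damage. Placeholder values.
threshold_dose_gy = 150.0        # assumed TID threshold of the EEPROM, in gray
time_to_failure_h = 320.0        # observed time until damage appears, in hours

dose_rate_gy_per_h = threshold_dose_gy / time_to_failure_h
print(f"estimated dose rate: {dose_rate_gy_per_h:.3f} Gy/h")
```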

Keywords: EEPROM, ionizing radiation, radiation effects on electronics, total ionizing dose, wireless sensor networks

Procedia PDF Downloads 183
1532 Compressed Sensing of Fetal Electrocardiogram Signals Based on Joint Block Multi-Orthogonal Least Squares Algorithm

Authors: Xiang Jianhong, Wang Cong, Wang Linyu

Abstract:

With the rise of medical IoT technologies, wireless body area networks (WBANs) can collect fetal electrocardiogram (FECG) signals to support telemedicine analysis. A compressed sensing (CS)-based WBAN system can avoid sampling a large amount of redundant information and reduce the complexity and computing time of data processing, but the existing algorithms have poor signal compression and reconstruction performance. In this paper, a joint block multi-orthogonal least squares (JBMOLS) algorithm is proposed. We apply the FECG signal to the joint block sparse model (JBSM), and a comparative study of sparse transformations and measurement matrices is carried out. An FECG signal compression and transmission mode based on the Rbio5.5 wavelet, a Bernoulli measurement matrix, and the JBMOLS algorithm is proposed to improve the compression and reconstruction performance of FECG signals in CS-based WBANs. Experimental results show that the compression ratio (CR) required for accurate reconstruction with this transmission mode is increased by nearly 10%, and the runtime is reduced by about 30%.
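A generic compressed-sensing round trip for orientation, with plain Orthogonal Matching Pursuit standing in for the proposed JBMOLS recovery and a random sparse vector standing in for the FECG coefficients; the block structure and the Rbio5.5 wavelet of the paper are not reproduced, and all sizes are illustrative.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n, m, k = 256, 96, 8                 # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse stand-in for FECG coefficients

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix (CR = m/n)
y = Phi @ x                                      # compressed measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
x_hat = omp.coef_
print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```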

Keywords: telemedicine, fetal ECG, compressed sensing, joint sparse reconstruction, block sparse signal

Procedia PDF Downloads 127
1531 Whole Body Cooling Hypothermia Treatment Modelling Using a Finite Element Thermoregulation Model

Authors: Ana Beatriz C. G. Silva, Luiz Carlos Wrobel, Fernando Luiz B. Ribeiro

Abstract:

This paper presents a thermoregulation model using the finite element method to perform numerical analyses of brain cooling procedures, as a contribution to the investigation of the use of therapeutic hypothermia after ischemia in adults. Computational methods can help clinicians observe body temperature under different cooling methods without the need for invasive techniques, and can thus be a valuable tool to assist clinical trials simulating different cooling options that can be used for treatment. In this work, we developed a FEM package applied to the solution of the continuum Pennes bioheat equation. Blood temperature changes were considered using a blood pool approach and a lumped analysis for the intravascular catheter method of blood cooling. Analyses are performed using a three-dimensional mesh based on a complex geometry obtained from computed tomography medical images, considering a cooling blanket and an intravascular catheter. A comparison is made between the results obtained and the effect of each case on brain temperature reduction in a required time, maintenance of body temperature at moderate hypothermia levels, and gradual rewarming.
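For orientation, a one-dimensional explicit finite-difference sketch of the Pennes bioheat equation on a tissue slab is shown below; the paper's model is a 3D finite element solver on patient geometry, and all material properties and boundary temperatures here are assumed textbook-style values.

```python
import numpy as np

rho, c, k = 1050.0, 3600.0, 0.5         # tissue density [kg/m3], heat capacity [J/kg K], conductivity [W/m K]
rho_b, c_b, w_b = 1060.0, 3770.0, 5e-4  # blood properties and perfusion rate [1/s]
T_a, q_m = 37.0, 400.0                  # arterial temperature [degC], metabolic heat [W/m3]

L, nx = 0.05, 51                        # 5 cm slab of tissue
dx = L / (nx - 1)
dt = 0.5                                # [s], below the explicit stability limit rho*c*dx^2/(2k)
T = np.full(nx, 37.0)                   # start at core temperature

for _ in range(int(3600 / dt)):         # simulate one hour of surface cooling
    T[0], T[-1] = 33.0, 37.0            # cooled skin surface / warm core boundary
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    perf = rho_b * c_b * w_b * (T_a - T[1:-1])
    T[1:-1] += dt / (rho * c) * (k * lap + perf + q_m)

print("temperature 1 cm under the cooled surface:", round(T[10], 2), "degC")
```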

Keywords: brain cooling, finite element method, hypothermia treatment, thermoregulation

Procedia PDF Downloads 310
1530 Vaccine Development for Newcastle Disease Virus in Poultry

Authors: Muhammad Asif Rasheed

Abstract:

Newcastle disease virus (NDV), an avian orthoavulavirus, is the causative agent of Newcastle disease and can even cause epidemics when the disease is not treated. Previously, several vaccines based on attenuated and inactivated viruses have been reported, which are rendered useless with the passage of time due to changes in the viral genome. Therefore, we aimed to develop an effective multi-epitope vaccine against the haemagglutinin-neuraminidase (HN) protein of 26 NDV strains from Pakistan through modern immunoinformatic approaches. As a result, a vaccine chimaera was constructed by combining T-cell and B-cell epitopes with the appropriate linkers and adjuvant. The designed vaccine was highly immunogenic, non-allergenic, and antigenic; therefore, the potential 3D structure of the multi-epitope vaccine was constructed, refined, and validated. A molecular docking study of the multi-epitope vaccine candidate with the chicken Toll-like receptor-4 indicated successful binding. An in silico immunological simulation was used to evaluate the candidate vaccine's ability to elicit an effective immune response. According to the computational studies, the proposed multi-epitope vaccine is physically stable and may induce immune responses, which suggests it is a strong candidate against the 26 Newcastle disease virus strains from Pakistan. A wet lab study is under way to confirm the results.

Keywords: epitopes, newcastle disease virus, paramyxovirus virus, vaccine

Procedia PDF Downloads 117
1529 Turbulent Forced Convection of Cu-Water Nanofluid: CFD Models Comparison

Authors: I. Behroyan, P. Ganesan, S. He, S. Sivasankaran

Abstract:

This study compares the predictions of five types of Computational Fluid Dynamics (CFD) models, including two single-phase models (i.e., Newtonian and non-Newtonian) and three two-phase models (Eulerian-Eulerian, mixture and Eulerian-Lagrangian), to investigate turbulent forced convection of Cu-water nanofluid in a tube with a constant heat flux on the tube wall. The Reynolds (Re) number of the flow is between 10,000 and 25,000, while the volume fraction of Cu particles used is in the range of 0 to 2%. The commercial CFD package ANSYS-Fluent is used. The results from the CFD models are compared with results from experimental investigations in the literature. According to the results of this study, the non-Newtonian single-phase model, in general, does not show good agreement with the Xuan and Li correlation in the prediction of the Nu number. The Eulerian-Eulerian model gives inaccurate results except for φ=0.5%. The mixture model gives a maximum error of 15%. The Newtonian single-phase model and the Eulerian-Lagrangian model are, overall, the recommended models. This work can be used as a reference for selecting an appropriate model for future investigations. The study also gives proper insight into the important factors, such as Brownian motion, fluid behavior parameters and effective nanoparticle conductivity, which should be considered or changed in each model.

Keywords: heat transfer, nanofluid, single-phase models, two-phase models

Procedia PDF Downloads 482
1528 Employing KNIME-Based and Open-Source Tools to Identify AMI and VER Metabolites from UPLC-MS Data

Authors: Nouf Alourfi

Abstract:

This study examines the metabolism of amitriptyline (AMI) and verapamil (VER) using a KNIME-based method. KNIME is an open-source data-analytics platform; the improved workflow integrates a number of open-source metabolomics tools such as CFMID and MetFrag to provide standard data visualisations, predict candidate metabolites, assess them against experimental data, and produce reports on identified metabolites. The use of this workflow is demonstrated by employing three types of liver microsomes (human, rat, and guinea pig) to study the in vitro metabolism of the two drugs (AMI and VER). The workflow is used to create and process UPLC-MS (Orbitrap) data. The formulas and structures of the drugs' metabolites can be assigned automatically. The key metabolic routes for amitriptyline are hydroxylation, N-dealkylation, N-oxidation, and conjugation, while N-demethylation, O-demethylation, N-dealkylation, and conjugation are the primary metabolic routes for verapamil. The identified metabolites are consistent with those published, confirming the robustness of the workflow technique and the value of computational tools like KNIME in supporting the integration and interoperability of emerging software packages in the metabolomics area.

Keywords: KNIME, CFMID, MetFrag, Data Analysis, Metabolomics

Procedia PDF Downloads 117
1527 Optimal Allocation of Multiple Emergency Resources for a Single Potential Accident Node: A Mixed Integer Linear Program

Authors: Yongjian Du, Jinhua Sun, Kim M. Liew, Huahua Xiao

Abstract:

Optimal allocation of emergency resources before a disaster is of great importance for emergency response. In reality, pre-protection of a single critical node where accidents may occur is common. In this study, a model is developed to determine the location and inventory decisions for multiple emergency resources among a set of candidate stations so as to minimize the total cost under budgetary and capacity constraints. The total cost includes the economic accident loss, which follows a probability distribution over time, and the warehousing cost of resources, which increases over time. A ratio is introduced to measure the degree to which a storage station serves only the target node; it becomes larger as the distance between them decreases. To enable a linear program formulation, it is assumed that the travel time of emergency resources to the accident scene has a linear relationship with the economic accident loss. A computational experiment is conducted to illustrate how the proposed model works, and the results indicate its effectiveness and practicability.
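The flavour of such a model can be sketched as a small mixed-integer linear program in PuLP; the stations, resources, costs and linearised loss term below are invented, and the probability-distributed loss and service-ratio terms of the paper are not reproduced.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

stations = ["S1", "S2", "S3"]
resources = ["pump", "foam"]
travel_time = {"S1": 5, "S2": 12, "S3": 20}     # minutes to the single accident node
hold_cost = {"pump": 2.0, "foam": 1.5}          # warehousing cost per stocked unit
loss_per_min = {"pump": 4.0, "foam": 3.0}       # linearised accident loss per minute of delay
demand = {"pump": 6, "foam": 10}                # units needed at the accident node
capacity = {"S1": 8, "S2": 8, "S3": 12}         # units each station can hold
budget = 60.0

x = LpVariable.dicts("stock", (stations, resources), lowBound=0, cat="Integer")

prob = LpProblem("emergency_pre_allocation", LpMinimize)
prob += lpSum(x[s][r] * (hold_cost[r] + loss_per_min[r] * travel_time[s])
              for s in stations for r in resources)
for r in resources:                              # cover the demand of the accident node
    prob += lpSum(x[s][r] for s in stations) >= demand[r]
for s in stations:                               # station capacity limit
    prob += lpSum(x[s][r] for r in resources) <= capacity[s]
prob += lpSum(x[s][r] * hold_cost[r]             # budget limit on warehousing
              for s in stations for r in resources) <= budget

prob.solve()
print({(s, r): x[s][r].value() for s in stations for r in resources})
```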

Keywords: emergency response, integer linear program, multiple emergency resources, pre-allocation decisions, single potential accident node

Procedia PDF Downloads 152
1526 A Biomechanical Perfusion System for Microfluidic 3D Bioprinted Structure

Authors: M. Dimitri, M. Ricci, F. Bigi, M. Romiti, A. Corvi

Abstract:

Tissue engineering has reached a significant milestone with the integration of 3D printing for the creation of complex bioconstructs equipped with vascular networks, crucial for cell maintenance and growth. This study aims to demonstrate the effectiveness of a portable microperfusion system designed to adapt dynamically to the evolving conditions of cell growth within 3D-printed bioconstructs. The microperfusion system was developed to provide a constant and controlled flow of nutrients and oxygen through the integrated vessels in the bioconstruct, replicating in vivo physiological conditions. Through a series of preliminary experiments, we evaluated the system's ability to maintain a favorable environment for cell proliferation and differentiation. Measurements of cell density and viability were performed to monitor the health and functionality of the tissue over time. Preliminary results indicate that the portable microperfusion system not only supports but optimizes cell growth, effectively adapting to changes in metabolic needs during the bioconstruct maturation process. This research opens perspectives in tissue engineering, demonstrating that a portable microperfusion system can be successfully integrated into 3D-printed bioconstructs, promoting sustainable and uniform cell growth. The implications of this study are far-reaching, with potential applications in regenerative medicine and pharmacological research, providing a platform for the development of functional and complex tissues.

Keywords: biofabrication, microfluidic perfusion system, 4D bioprinting

Procedia PDF Downloads 28
1525 Effect of Channel Variation of Two-Dimensional Water Tunnel to Study Fluid Dynamics Phenomenon

Authors: Rizka Yunita, Mas Aji Rizki Wijayanto

Abstract:

Computational fluid dynamics (CFD) is a way to explain fluid dynamics behavior. In this work, we study the effect of channel width on two-dimensional fluid visualization. Using a horizontal water tunnel and a flowing soap film, we obtain a visualization of a continuous film in which a graphical overview of the flow occurring in the field can be observed. The horizontal water tunnel we used is divided into three parts: an expansion area, a parallel area used for the measurements, and a contraction area. The channel width is the width of the parallel area, originally 7.2 cm, and it was varied in steps of about 1 cm. To compute the velocity, vortex shedding, and other physical parameters of the fluid, we used a circular cylinder as an obstacle to create a von Karman vortex street in the fluid and analyzed the phenomenon using the Particle Imaging Velocimetry (PIV) method, comparing the Reynolds number and Strouhal number obtained from the visualization. The wider the channel, the more turbulent the film becomes, and separation zones appear where the flow is no longer continuous.
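The two dimensionless numbers compared from the visualization follow directly from their definitions; the short sketch below uses placeholder values, not the measured soap-film data.

```python
# Dimensionless numbers characterising the von Karman street behind the cylinder.
def reynolds(velocity, diameter, kinematic_viscosity):
    return velocity * diameter / kinematic_viscosity

def strouhal(shedding_frequency, diameter, velocity):
    return shedding_frequency * diameter / velocity

U, D, nu = 0.8, 0.006, 1.0e-6   # flow speed [m/s], cylinder diameter [m], kinematic viscosity [m2/s]
f_shed = 26.0                   # vortex-shedding frequency estimated from PIV [Hz]
print("Re =", reynolds(U, D, nu), " St =", strouhal(f_shed, D, U))
```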

Keywords: flow visualization, width of channel, vortex, Reynolds number, Strouhal number

Procedia PDF Downloads 377
1524 Second Order Statistics of Dynamic Response of Structures Using Gamma Distributed Damping Parameters

Authors: Badreddine Chemali, Boualem Tiliouine

Abstract:

This article presents the main results of a numerical investigation of the uncertainty of the dynamic response of structures with statistically correlated, Gamma-distributed random damping. A computational method based on a Linear Statistical Model (LSM) is implemented to predict second-order statistics for the response of a typical industrial building structure. The significance of random damping with correlated parameters and its implications for the sensitivity of the structural peak response in the neighborhood of a resonant frequency are discussed in light of considerable ranges of damping uncertainties and correlation coefficients. The results are compared to those generated using Monte Carlo simulation techniques. The numerical results obtained show the importance of damping uncertainty and the statistical correlation of damping coefficients when obtaining accurate probabilistic estimates of the dynamic response of structures. Furthermore, the effectiveness of the LSM in efficiently predicting uncertainty propagation for structural dynamic problems with correlated damping parameters is demonstrated.
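A bare-bones Monte Carlo counterpart of the study: Gamma-distributed damping ratios are sampled and propagated through the steady-state amplification factor of a single-degree-of-freedom oscillator at resonance. The Gamma parameters are assumed, and the correlation structure between damping coefficients modelled in the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
shape, scale = 4.0, 0.0125          # assumed Gamma parameters -> mean damping ratio of 5%
zeta = rng.gamma(shape, scale, size=100_000)

r = 1.0                             # excitation frequency / natural frequency (resonance)
amplification = 1.0 / np.sqrt((1.0 - r**2) ** 2 + (2.0 * zeta * r) ** 2)

print("mean amplification:", amplification.mean())
print("std of amplification:", amplification.std())
```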

Keywords: correlated random damping, linear statistical model, Monte Carlo simulation, uncertainty of dynamic response

Procedia PDF Downloads 280
1523 Elucidation of the Sequential Transcriptional Activity in Escherichia coli Using Time-Series RNA-Seq Data

Authors: Pui Shan Wong, Kosuke Tashiro, Satoru Kuhara, Sachiyo Aburatani

Abstract:

Functional genomics and gene regulation inference have readily expanded our knowledge and understanding of gene interactions with regard to expression regulation. With the advancement of transcriptome sequencing in time series comes the ability to study the sequential changes of the transcriptome. The method presented here augments existing regulation networks accumulated in the literature with transcriptome data gathered from time-series experiments to construct a sequential representation of transcription factor activity. The method is applied to a time-series RNA-Seq data set from Escherichia coli as it transitions from growth to stationary phase over five hours. Investigations are conducted on the various metabolic activities in gene regulation processes by taking advantage of the correlation between regulatory gene pairs to examine their activity on a dynamic network. In particular, the changes in metabolic activity during the phase transition are analyzed, with a focus on the pagP gene as well as other associated transcription factors. The visualization of the sequential transcriptional activity is used to describe the change in metabolic pathway activity originating from the pagP transcription factor, phoP. The results show a shift from amino acid and nucleic acid metabolism to energy metabolism during the transition to stationary phase in E. coli.

Keywords: Escherichia coli, gene regulation, network, time-series

Procedia PDF Downloads 370
1522 Agile Methodology for Modeling and Design of Data Warehouses -AM4DW-

Authors: Nieto Bernal Wilson, Carmona Suarez Edgar

Abstract:

Organizations have structured and unstructured information in different formats, sources, and systems. Part of this information comes from ERP systems under OLTP processing that support the information system; however, at the OLAP processing level these organizations present some deficiencies, partly because there is little interest in extracting knowledge from their data sources and partly because of the absence of operational capabilities to tackle this kind of project. Data warehouses and their applications are considered non-proprietary tools of great interest to business intelligence, since they are the repository basis for creating models or patterns (behavior of customers, suppliers, products, social networks and genomics) and facilitate corporate decision making and research. This paper presents a simple, structured methodology inspired by agile development models such as Scrum, XP and AUP. It also draws on object-relational models, spatial data models, and the baseline of data modeling under UML and Big Data, seeking in this way to deliver an agile methodology for the development of data warehouses that is simple and easy to apply. The methodology naturally takes into account the corresponding processes for information analysis, visualization and data mining, particularly for the generation of patterns and models derived from the structured fact objects.

Keywords: data warehouse, model data, big data, object fact, object relational fact, process developed data warehouse

Procedia PDF Downloads 407
1521 Gene Expression Profile Reveals Breast Cancer Proliferation and Metastasis

Authors: Nandhana Vivek, Bhaskar Gogoi, Ayyavu Mahesh

Abstract:

Breast cancer metastasis plays a key role in cancer progression and fatality. The present study examines the potential causes of metastasis in breast cancer by investigating the novel interactions between genes and their pathways. The gene expression profile of GSE99394, GSE1246464, and GSE103865 was downloaded from the GEO data repository to analyze the differentially expressed genes (DEGs). Protein-protein interactions, target factor interactions, pathways and gene relationships, and functional enrichment networks were investigated. The proliferation pathway was shown to be highly expressed in breast cancer progression and metastasis in all three datasets. Gene Ontology analysis revealed 11 DEGs as gene targets to control breast cancer metastasis: LYN, DLGAP5, CXCR4, CDC6, NANOG, IFI30, TXP2, AGTR1, MKI67, and FTH1. Upon studying the function, genomic and proteomic data, and pathway involvement of the target genes, DLGAP5 proved to be a promising candidate due to it being highly differentially expressed in all datasets. The study takes a unique perspective on the avenues through which DLGAP5 promotes metastasis. The current investigation helps pave the way in understanding the role DLGAP5 plays in metastasis, which leads to an increased incidence of death among breast cancer patients.

Keywords: genomics, metastasis, microarray, cancer

Procedia PDF Downloads 96
1520 Observer-Based Leader-Following Consensus of Nonlinear Fractional-Order Multi-Agent Systems

Authors: Ali Afaghi, Sehraneh Ghaemi

Abstract:

The coordination of multi-agent systems has been one of the interesting topics in recent years because of its potential applications in many branches of science and engineering, such as sensor networks, flocking, underwater vehicles, etc. In most of the related studies, it is assumed that the dynamics of the multi-agent systems are integer-order and linear, and multi-agent systems with fractional-order nonlinear dynamics are rarely considered. However, many phenomena in nature cannot be described with integer-order and linear characteristics. This paper investigates the leader-following consensus problem for a class of nonlinear fractional-order multi-agent systems based on observer-based cooperative control. In the system, the dynamics of each follower and of the leader are nonlinear. Firstly, for a multi-agent system with fixed directed topology, an observer-based consensus protocol is proposed based on the relative observer states of neighboring agents. Secondly, based on the stability theory of fractional-order systems, some sufficient conditions are presented for the asymptotic stability of the observer-based fractional-order control systems. The proposed method is applied to a five-agent system with fractional-order nonlinear dynamics and unavailable states. The simulation example shows that the proposed scheme results in good performance and can be used in many practical applications.

Keywords: fractional-order multi-agent systems, leader-following consensus, nonlinear dynamics, directed graphs

Procedia PDF Downloads 398
1519 Design of a Standard Weather Data Acquisition Device for the Federal University of Technology, Akure Nigeria

Authors: Isaac Kayode Ogunlade

Abstract:

Data acquisition (DAQ) is the process by which physical phenomena from the real world are transformed into electrical signals that are measured and converted into a digital format for processing, analysis, and storage by a computer. The DAQ is designed using a PIC18F4550 microcontroller communicating with a Personal Computer (PC) through USB (Universal Serial Bus). The research applied knowledge of data acquisition systems and embedded systems to develop a weather data acquisition device using an LM35 sensor to measure weather parameters, together with an Artificial Intelligence approach (Artificial Neural Network, ANN) and a statistical approach (Autoregressive Integrated Moving Average, ARIMA) to predict precipitation (rainfall). The device was placed beside a standard device in the Department of Meteorology, Federal University of Technology, Akure (FUTA), to evaluate its performance. Both devices (standard and designed) were operated for 180 days under the same atmospheric conditions to collect data (temperature, relative humidity, and pressure). Models were trained on the acquired data in the MATLAB R2012b environment using ANN and ARIMA to predict precipitation (rainfall). Root Mean Square Error (RMSE), Mean Absolute Error (MAE), R-squared (R2), and Mean Percentage Error (MPE) were used as standardized evaluation metrics to assess the performance of the models in predicting precipitation. The results show that the developed device has an efficiency of 96% and is also compatible with personal computers and laptops. The simulation results for the acquired data show that the ANN prediction of precipitation (rainfall) for two months (May and June 2017) had a disparity error of 1.59%, while that of ARIMA was 2.63%. The device will be useful in research, practical laboratories, and industrial environments.
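A compact sketch of the evaluation step, fitting an ARIMA model to a synthetic rainfall series and scoring the forecast with RMSE, MAE and MPE; the real study uses 180 days of logged FUTA weather data, an ANN model as well, and the MATLAB environment rather than Python.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
# Synthetic daily rainfall-like series standing in for the acquired data.
rainfall = 5.0 + np.sin(np.arange(180) * 2 * np.pi / 30) + rng.normal(0, 0.3, 180)

train, test = rainfall[:150], rainfall[150:]
model = ARIMA(train, order=(2, 0, 1)).fit()      # illustrative (p, d, q) order
forecast = model.forecast(steps=len(test))

rmse = np.sqrt(np.mean((test - forecast) ** 2))
mae = np.mean(np.abs(test - forecast))
mpe = np.mean((test - forecast) / test) * 100.0
print(f"RMSE={rmse:.3f}  MAE={mae:.3f}  MPE={mpe:.2f}%")
```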

Keywords: data acquisition system, design device, weather development, predict precipitation and (FUTA) standard device

Procedia PDF Downloads 88
1518 Formal Implementation of Routing Information Protocol Using Event-B

Authors: Jawid Ahmad Baktash, Tadashi Shiroma, Tomokazu Nagata, Yuji Taniguchi, Morikazu Nakamura

Abstract:

The goal of this paper is to explore the use of formal methods for dynamic routing. The purpose of network communication with dynamic routing is to send a message from one node to others by using specific protocols. In dynamic routing, connections are established based on distance-vector protocols (Routing Information Protocol, Border Gateway Protocol), link-state protocols (Open Shortest Path First, Intermediate System to Intermediate System), and hybrid protocols (Enhanced Interior Gateway Routing Protocol). The responsibility for proper verification becomes crucial with dynamic routing. Formal methods can play an essential role in routing, the development of networks, and the testing of distributed systems. Event-B is a formal technique that consists of describing the problem rigorously, introducing solutions or details in refinement steps to obtain a more concrete specification, and verifying that the proposed solutions are correct. The system is modeled in terms of an abstract state space using variables with set-theoretic types and events that modify the state variables. Event-B is a variant of B that was designed for developing distributed systems. In Event-B, the events consist of guarded actions occurring spontaneously rather than being invoked. The invariant state properties must be satisfied by the variables and maintained by the activation of the events.

Keywords: dynamic rout RIP, formal method, event-B, pro-B

Procedia PDF Downloads 400
1517 Moving Object Detection Using Histogram of Uniformly Oriented Gradient

Authors: Wei-Jong Yang, Yu-Siang Su, Pau-Choo Chung, Jar-Ferr Yang

Abstract:

Moving object detection (MOD) is an important issue in advanced driver assistance systems (ADAS). Two important classes of moving objects in ADAS are pedestrians and scooters. In real-world systems, there are two important challenges for MOD: computational complexity and detection accuracy. Histogram of oriented gradient (HOG) features can easily detect the edges of an object with invariance to changes in illumination and shadowing. However, to reduce the execution time for real-time systems, the image size has to be downsampled, which increases the influence of outliers. For this reason, we propose histogram of uniformly-oriented gradient (HUG) features to obtain a more accurate description of the contour of the human body. In the testing phase, a support vector machine (SVM) with a linear kernel function is employed. Experimental results show the correctness and effectiveness of the proposed method. With SVM classifiers, the real testing results show that the proposed HUG features achieve better classification performance than the HOG ones.
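For reference, the standard HOG plus linear-SVM pipeline that HUG is compared against can be sketched as follows; the uniformly-oriented variant itself is not reproduced here, plain skimage HOG is used instead, and the images and labels are random placeholders.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
images = rng.random((200, 64, 128))          # 200 grayscale 64x128 detection windows
labels = rng.integers(0, 2, size=200)        # 1 = pedestrian/scooter, 0 = background

features = np.array([
    hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for img in images
])
clf = LinearSVC().fit(features, labels)      # SVM with a linear kernel, as in the paper
print("training accuracy on the toy set:", clf.score(features, labels))
```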

Keywords: moving object detection, histogram of oriented gradient, histogram of uniformly-oriented gradient, linear support vector machine

Procedia PDF Downloads 592
1516 Critical Factors in the Formation, Development and Survival of an Eco-Industrial Park: A Systemic Understanding of Industrial Symbiosis

Authors: Iván González, Pablo Andrés Maya, Sebastián Jaén

Abstract:

Eco-industrial parks (EIPs) work as networks for the exchange of by-products, such as materials, water, or energy. This research identifies the relevant factors in the formation of EIPs in different industrial environments around the world. Then an aggregation of these factors is carried out to reduce them from 50 to 17 and classify them according to 5 fundamental axes. Subsequently, the Vester Sensitivity Model (VSM) systemic methodology is used to determine the influence of the 17 factors on an EIP system and the interrelationship between them. The results show that the sequence of effects between factors: Trust and Cooperation → Business Association → Flows → Additional Income represents the “backbone” of the system, being the most significant chain of influences. In addition, the Organizational Culture represents the turning point of the Industrial Symbiosis on which it must act correctly to avoid falling into unsustainable economic development. Finally, the flow of Information should not be lost since it is what feeds trust between the parties, and the latter strengthens the system in the face of individual or global imbalances. This systemic understanding will enable the formulation of pertinent policies by the actors that interact in the formation and permanence of the EIP. In this way, it seeks to promote large-scale sustainable industrial development, integrating various community actors, which in turn will give greater awareness and appropriation of the current importance of sustainability in industrial production.

Keywords: critical factors, eco-industrial park, industrial symbiosis, system methodology

Procedia PDF Downloads 121
1515 Energy Efficient Microgrid Design with Hybrid Power Systems

Authors: Pedro Esteban

Abstract:

Today’s electrical networks, including microgrids, are evolving into smart grids. The smart grid concept brings the idea that the power comes from various sources (continuous or intermittent), in various forms (AC or DC, high, medium or low voltage, etc.), and it must be integrated into the electric power system in a smart way to guarantee a continuous and reliable supply that complies with power quality and energy efficiency standards and grid code requirements. This idea brings questions for the different players like how the required power will be generated, what kind of power will be more suitable, how to store exceeding levels for short or long-term usage, and how to combine and distribute all the different generation power sources in an efficient way. To address these issues, there has been lots of development in recent years on the field of on-grid and off-grid hybrid power systems (HPS). These systems usually combine one or more modes of electricity generation together with energy storage to ensure optimal supply reliability and high level of energy security. Hybrid power systems combine power generation and energy storage technologies together with real-time energy management and innovative power quality and energy efficiency improvement functionalities. These systems help customers achieve targets for clean energy generation, they add flexibility to the electrical grid, and they optimize the installation by improving its power quality and energy efficiency.

Keywords: microgrids, hybrid power systems, energy storage, power quality improvement

Procedia PDF Downloads 140
1514 High Resolution Image Generation Algorithm for Archaeology Drawings

Authors: Xiaolin Zeng, Lei Cheng, Zhirong Li, Xueping Liu

Abstract:

Aiming at the problem of low accuracy and susceptibility to cultural relic diseases in the generation of high-resolution archaeology drawings by current image generation algorithms, an archaeology drawings generation algorithm based on a conditional generative adversarial network is proposed. An attention mechanism is added into the high-resolution image generation network as the backbone network, which enhances the line feature extraction capability and improves the accuracy of line drawing generation. A dual-branch parallel architecture consisting of two backbone networks is implemented, where the semantic translation branch extracts semantic features from orthophotographs of cultural relics, and the gradient screening branch extracts effective gradient features. Finally, the fusion fine-tuning module combines these two types of features to achieve the generation of high-quality and high-resolution archaeology drawings. Experimental results on the self-constructed archaeology drawings dataset of grotto temple statues show that the proposed algorithm outperforms current mainstream image generation algorithms in terms of pixel accuracy (PA), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR) and can be used to assist in drawing archaeology drawings.

Keywords: archaeology drawings, digital heritage, image generation, deep learning

Procedia PDF Downloads 56
1513 TerraEnhance: High-Resolution Digital Elevation Model Generation using GANs

Authors: Siddharth Sarma, Ayush Majumdar, Nidhi Sabu, Mufaddal Jiruwaala, Shilpa Paygude

Abstract:

Digital Elevation Models (DEMs) are digital representations of the Earth’s topography, which include information about the elevation, slope, aspect, and other terrain attributes. DEMs play a crucial role in various applications, including terrain analysis, urban planning, and environmental modeling. In this paper, TerraEnhance is proposed, a distinct approach for high-resolution DEM generation using Generative Adversarial Networks (GANs) combined with Real-ESRGANs. By learning from a dataset of low-resolution DEMs, the GANs are trained to upscale the data by 10 times, resulting in significantly enhanced DEMs with improved resolution and finer details. The integration of Real-ESRGANs further enhances visual quality, leading to more accurate representations of the terrain. A post-processing layer is introduced, employing high-pass filtering to refine the generated DEMs, preserving important details while reducing noise and artifacts. The results demonstrate that TerraEnhance outperforms existing methods, producing high-fidelity DEMs with intricate terrain features and exceptional accuracy. These advancements make TerraEnhance suitable for various applications, such as terrain analysis and precise environmental modeling.
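A minimal version of the described post-processing idea: a Gaussian high-pass component is re-injected into the upscaled DEM to sharpen terrain detail while limiting noise amplification. The filter width and blend weight below are illustrative assumptions, not the values used by TerraEnhance.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refine_dem(dem_upscaled, sigma=2.0, strength=0.4):
    """Sharpen an upscaled DEM by re-injecting its high-frequency component."""
    low_pass = gaussian_filter(dem_upscaled, sigma=sigma)
    high_pass = dem_upscaled - low_pass           # fine terrain detail (ridges, drainage lines)
    return dem_upscaled + strength * high_pass

dem = np.random.default_rng(3).random((256, 256)).astype(np.float32)  # placeholder elevations
refined = refine_dem(dem)
print(refined.shape, float(refined.mean()))
```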

Keywords: DEM, ESRGAN, image upscaling, super resolution, computer vision

Procedia PDF Downloads 6
1512 Heritage and Tourism in the Era of Big Data: Analysis of Chinese Cultural Tourism in Catalonia

Authors: Xinge Liao, Francesc Xavier Roige Ventura, Dolores Sanchez Aguilera

Abstract:

With the development of the Internet, the study of tourism behavior has rapidly expanded from the traditional physical market to the online market. Data on the Internet is characterized by dynamic change, and new data appear all the time. Recent years have seen the generation of a large volume of data from forums, blogs, and other sources, which has expanded over time and space; together, these constitute large-scale Internet data, known as Big Data. This data of technological origin, derived from the use of devices and the activity of multiple users, is becoming a source of great importance for the study of geography and the behavior of tourists. The study focuses on cultural heritage tourism practices in the context of Big Data. The research explores the characteristics and behavior of Chinese tourists in relation to the cultural heritage of Catalonia. Geographical information, destination image, and perceptions in user-generated content are studied through the analysis of data from Weibo, the largest blogging social network in China. Through the analysis of the behavior of heritage tourists in the Big Data environment, this study seeks to understand the practices (activities, motivations, perceptions) of cultural tourists and then the needs and preferences of tourists, in order to better guide the sustainable development of tourism at heritage sites.

Keywords: Barcelona, Big Data, Catalonia, cultural heritage, Chinese tourism market, tourists’ behavior

Procedia PDF Downloads 137