Search results for: resistance-capacitance network model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19676

16766 Synthesis and Characterization of Model Amines for Corrosion Applications

Authors: John Vergara, Giuseppe Palmese

Abstract:

Fundamental studies aimed at elucidating the key contributions to corrosion performance are needed to make progress toward effective and environmentally compliant corrosion control. Epoxy/amine systems are typically employed as barrier coatings for corrosion control. However, the hardening agents used for coating applications can be very complex, making fundamental studies of water and oxygen permeability challenging to carry out. Creating model building blocks for epoxy/amine coatings is the first step in carrying out these studies. We will demonstrate the synthesis and characterization of model amine building blocks from saturated fatty acids and simple amines such as diethylenetriamine (DETA) and bis(3-aminopropyl)amine. The structure-property relationship of thermosets made from these model amines and diglycidyl ether of bisphenol A (DGEBA) will be discussed.

Keywords: building block, amine, synthesis, characterization

Procedia PDF Downloads 523
16765 Discussing Embedded versus Central Machine Learning in Wireless Sensor Networks

Authors: Anne-Lena Kampen, Øivind Kure

Abstract:

Machine learning (ML) can be implemented in Wireless Sensor Networks (WSNs) as a central solution or as a distributed solution where the ML is embedded in the nodes. Embedding improves privacy and may reduce prediction delay. In addition, the number of transmissions is reduced. However, quality factors such as prediction accuracy, fault detection efficiency and coordinated control of the overall system suffer. Here, we discuss and highlight the trade-offs that should be considered when choosing between embedded and centralized ML, especially for multihop networks. In addition, we present estimations that demonstrate the energy trade-offs between embedded and centralized ML. Although the total network energy consumption is lower with central prediction, it makes the network more prone to partitioning due to the high forwarding load on the one-hop nodes. Moreover, the continuous improvements in the number of operations per joule for embedded devices will move the energy balance toward embedded prediction.
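
A minimal sketch of the kind of energy estimation discussed above, assuming illustrative energy constants (not the authors' values): centralized ML forwards raw samples over several hops, while embedded ML pays for local inference but forwards only a small prediction.

```python
# Illustrative first-order energy comparison (all constants are hypothetical placeholders).
E_TX_PER_BIT = 50e-9      # J/bit, radio transmit energy (assumed)
E_RX_PER_BIT = 50e-9      # J/bit, radio receive energy at relay nodes (assumed)
E_PER_INFERENCE = 2e-3    # J per local model evaluation on the node (assumed)

def central_energy(samples, bits_per_sample, hops):
    # every raw sample is forwarded over `hops` links; each relay receives and retransmits
    return samples * bits_per_sample * hops * (E_TX_PER_BIT + E_RX_PER_BIT)

def embedded_energy(samples, bits_per_prediction, hops):
    # inference runs on the sensing node; only the small prediction is forwarded
    radio = samples * bits_per_prediction * hops * (E_TX_PER_BIT + E_RX_PER_BIT)
    return samples * E_PER_INFERENCE + radio

if __name__ == "__main__":
    for hops in (1, 3, 6):
        c = central_energy(samples=1000, bits_per_sample=512, hops=hops)
        e = embedded_energy(samples=1000, bits_per_prediction=16, hops=hops)
        print(f"hops={hops}: central={c:.4f} J, embedded={e:.4f} J")
```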

Keywords: central machine learning, embedded machine learning, energy consumption, local machine learning, wireless sensor networks, WSN

Procedia PDF Downloads 145
16764 Estimation of Transition and Emission Probabilities

Authors: Aakansha Gupta, Neha Vadnere, Tapasvi Soni, M. Anbarsi

Abstract:

Protein secondary structure prediction is one of the most important goals pursued by bioinformatics and theoretical chemistry; it is highly important in medicine and biotechnology. Some aspects of protein function and genome analysis can be predicted by secondary structure prediction, which is used to help annotate sequences, classify proteins, identify domains, and recognize functional motifs. In this paper, we represent protein secondary structure as a mathematical model. To extract and predict the protein secondary structure from the primary structure, we require a set of parameters. Any constants appearing in the model are specified by these parameters, which also provide a mechanism for efficient and accurate use of data. Among the many algorithms for estimating these model parameters, the most popular is the Expectation Maximization (EM) algorithm. Here, the model parameters are estimated from protein datasets such as RS126 using a Bayesian probabilistic method, the data set being categorical. This work can then be extended to comparing the efficiency of the EM algorithm with other estimation algorithms, which will in turn lead to an efficient component for protein secondary structure prediction. Further, this paper provides scope for using these parameters to predict the secondary structure of proteins with machine learning techniques such as neural networks and fuzzy logic. The ultimate objective is to obtain greater accuracy than previously achieved.
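
A small sketch of the counting step behind such estimation, under assumed conventions (three states H/E/C, add-one pseudo-counts as a simple Bayesian-style smoothing); this is not the paper's exact procedure, only the flavor of estimating transition and emission probabilities from labeled sequences.

```python
import numpy as np

STATES = ["H", "E", "C"]                       # helix, strand, coil
RESIDUES = list("ACDEFGHIKLMNPQRSTVWY")

def estimate(labelled_seqs, alpha=1.0):
    """Estimate HMM transition and emission matrices from (sequence, labels) pairs."""
    s_idx = {s: i for i, s in enumerate(STATES)}
    r_idx = {r: i for i, r in enumerate(RESIDUES)}
    trans = np.full((len(STATES), len(STATES)), alpha)   # pseudo-counts
    emit = np.full((len(STATES), len(RESIDUES)), alpha)
    for residues, labels in labelled_seqs:               # e.g. ("MKVL", "CCHH")
        for t, (res, lab) in enumerate(zip(residues, labels)):
            emit[s_idx[lab], r_idx[res]] += 1
            if t > 0:
                trans[s_idx[labels[t - 1]], s_idx[lab]] += 1
    # normalize rows into probability distributions
    return (trans / trans.sum(axis=1, keepdims=True),
            emit / emit.sum(axis=1, keepdims=True))

A, B = estimate([("MKVL", "CCHH"), ("AG", "CE")])        # toy data, not RS126
print(A)
```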

Keywords: model parameters, expectation maximization algorithm, protein secondary structure prediction, bioinformatics

Procedia PDF Downloads 472
16763 A Hybrid Fuzzy Clustering Approach for Fertile and Unfertile Analysis

Authors: Shima Soltanzadeh, Mohammad Hosain Fazel Zarandi, Mojtaba Barzegar Astanjin

Abstract:

Diagnosis of male infertility by laboratory tests is expensive and sometimes intolerable for patients. Filling out a questionnaire and then applying a classification method can be the first step in the decision-making process, so that laboratory tests are used only in cases with a high probability of infertility. In this paper, we evaluated the performance of four classification methods, namely naive Bayesian, neural network, logistic regression, and fuzzy c-means clustering used as a classifier, in the diagnosis of male infertility due to environmental factors. Since the data are unbalanced, ROC curves are the most suitable method for comparison. We also selected the more important features using a filtering method and examined the impact of this feature reduction on the performance of each method; in general, most of the methods performed better after applying the filter. We show that fuzzy c-means clustering used as a classifier performs well according to the ROC curves, and its performance is comparable to that of other classification methods such as logistic regression.
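
A minimal sketch of the idea of using fuzzy c-means as a classifier and comparing it to logistic regression via ROC AUC. The data, cluster-labeling rule, and parameters below are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: returns cluster centers and fuzzy memberships."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))            # memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

def memberships(X, centers, m=2.0):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    U = 1.0 / (d ** (2 / (m - 1)))
    return U / U.sum(axis=1, keepdims=True)

# synthetic unbalanced data standing in for the questionnaire features
X, y = make_classification(n_samples=600, n_features=8, weights=[0.8], random_state=1)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=1)

centers, Utr = fcm(Xtr)
hard = Utr.argmax(axis=1)
# label the cluster whose training members are more often positive as the "infertile" cluster
pos_cluster = int(ytr[hard == 1].mean() > ytr[hard == 0].mean())
score_fcm = memberships(Xte, centers)[:, pos_cluster]     # membership used as a score
score_lr = LogisticRegression(max_iter=1000).fit(Xtr, ytr).predict_proba(Xte)[:, 1]
print("AUC fuzzy c-means:", roc_auc_score(yte, score_fcm))
print("AUC logistic regression:", roc_auc_score(yte, score_lr))
```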

Keywords: classification, fuzzy c-means, logistic regression, Naive Bayesian, neural network, ROC curve

Procedia PDF Downloads 327
16762 Application of the Building Information Modeling Planning Approach to the Factory Planning

Authors: Peggy Näser

Abstract:

Factory planning is a systematic, objective-oriented process for planning a factory, structured into a sequence of phases, each of which depends on the preceding phase and makes use of particular methods and tools, and extending from the setting of objectives to the start of production. The digital factory, on the other hand, is the generic term for a comprehensive network of digital models, methods, and tools (including simulation and 3D visualisation) integrated by a continuous data management system. Its aim is the holistic planning, evaluation and ongoing improvement of all the main structures, processes and resources of the real factory in conjunction with the product. Digital planning has already become established in factory planning. The application of Building Information Modeling (BIM), by contrast, has not yet been established in factory planning; it has been used predominantly in the planning of public buildings, and the concept has so far been limited to the planning of the buildings themselves, excluding the planning of the factory's equipment (machines, technical equipment) and its interfaces to the building. BIM is a cooperative method of working in which the information and data relevant to a building's lifecycle are consistently recorded, managed and exchanged in transparent communication between the involved parties on the basis of digital models of the building. Both approaches, the planning approach of Building Information Modeling and the methodical approach of the Digital Factory, are based on the use of a comprehensive data model. Therefore, it is necessary to examine how the Building Information Modeling approach can be extended in the context of factory planning in such a way that equipment planning, as well as building planning, can be integrated in a common digital model. For this, a number of different perspectives have to be investigated: the equipment perspective, including the tools used to implement a comprehensive digital planning process; the communication perspective between the planners of different fields; the legal perspective, concerning legal certainty in each country; and the quality perspective, which defines the quality criteria against which the planning will be evaluated. The individual perspectives are examined and illustrated in the article. An approach model for the integration of factory planning into the BIM approach, in particular for the integrated planning of equipment and buildings and for continuous digital planning, is developed. For this purpose, the individual factory planning phases are detailed with respect to the integration of the BIM approach. On the tool side, a comprehensive software concept is presented. In addition, the prerequisites required for this integrated planning are presented. With the help of the newly developed approach, better coordination between equipment and buildings is achieved, the continuity of digital factory planning and the data quality are improved, and expensive implementation errors are avoided.

Keywords: building information modeling, digital factory, digital planning, factory planning

Procedia PDF Downloads 259
16761 Numerical Simulation on Bacteria-Carrying Particles Transport and Deposition in an Open Surgical Wound

Authors: Xiuguo Zhao, He Li, Alireza Yazdani, Xiaoning Zheng, Xinxi Xu

Abstract:

Wound infection poses a serious threat to the patient during surgery. Understanding bacteria-carrying particle (BCP) transport and deposition in an open surgical wound model plays an essential role in protecting the wound against infection. Therefore, BCP transport and deposition in a surgical wound model were investigated using force-coupling method (FCM) based computational fluid dynamics. BCP deposition in the wound was strongly associated with BCP diameter and concentration. The results showed that deposition increased not only with increasing BCP diameter but also with increasing BCP concentration. The deposition morphology was shaped by the combination of size distribution, airflow patterns and model geometry: BCPs deposited on the sidewall of the wound model but not on its bottom, mainly because the laminar supply system drives the airflow downward and then sideways, which, together with the wound geometry, makes it difficult for BCPs to deposit on the bottom of the wound. It was also observed that inertial impaction becomes a main mechanism of BCP deposition. This work may contribute to further studies on limiting BCP deposition, as well as to the estimation of surgical-site infection risk.
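
A rough illustration of why particle diameter drives inertial impaction, using the Stokes number with assumed flow and particle properties (not the paper's FCM-based simulation results): St >> 1 means particles deviate from streamlines and impact walls.

```python
import numpy as np

rho_p = 1100.0          # kg/m^3, density of a bacteria-carrying particle (assumed)
mu = 1.8e-5             # Pa.s, dynamic viscosity of air
U = 0.3                 # m/s, characteristic downward airflow speed (assumed laminar supply)
L = 0.05                # m, characteristic wound dimension (assumed)

for d_p in np.array([5, 10, 20, 50]) * 1e-6:     # particle diameters in metres
    St = rho_p * d_p**2 * U / (18 * mu * L)      # Stokes number
    regime = "inertial impaction likely" if St > 0.1 else "mostly follows streamlines"
    print(f"d_p = {d_p*1e6:4.0f} um  ->  St = {St:.3e}  ({regime})")
```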

Keywords: BCPs deposition, computational fluid dynamics, force-coupling method (FCM), numerical simulation, open surgical wound model

Procedia PDF Downloads 283
16760 Towards a Simulation Model to Ensure the Availability of Machines in Maintenance Activities

Authors: Maryam Gallab, Hafida Bouloiz, Youness Chater, Mohamed Tkiouat

Abstract:

The aim of this paper is to present a model based on multi-agent systems in order to manage maintenance activities and to ensure the reliability and availability of machines using only the required resources (operators, tools). The value of simulation is that it handles the complexity of the system and delivers results without incurring cost or wasting time. An implementation of the model is carried out on the AnyLogic platform to display the defined performance indicators.
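
The paper uses a multi-agent model on AnyLogic; as a language-neutral illustration of the same idea, here is a minimal discrete-event sketch in SimPy (all rates and resource counts are assumed): machines fail at random, repairs compete for a limited pool of operators, and per-machine availability is tracked.

```python
import random
import simpy

RANDOM_SEED, SIM_TIME, N_MACHINES, N_OPERATORS = 42, 10_000.0, 5, 2
MTBF, MTTR = 300.0, 30.0        # mean time between failures / mean time to repair (assumed)

def machine(env, name, operators, downtime):
    while True:
        yield env.timeout(random.expovariate(1.0 / MTBF))   # run until the next failure
        failed_at = env.now
        with operators.request() as req:                     # wait for a free operator
            yield req
            yield env.timeout(random.expovariate(1.0 / MTTR))
        downtime[name] += env.now - failed_at                # waiting + repair time

random.seed(RANDOM_SEED)
env = simpy.Environment()
operators = simpy.Resource(env, capacity=N_OPERATORS)
downtime = {f"M{i}": 0.0 for i in range(N_MACHINES)}
for name in downtime:
    env.process(machine(env, name, operators, downtime))
env.run(until=SIM_TIME)
for name, dt in downtime.items():
    print(f"{name}: availability = {(SIM_TIME - dt) / SIM_TIME:.3f}")
```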

Keywords: maintenance, complexity, simulation, multi-agent systems, AnyLogic platform

Procedia PDF Downloads 298
16759 Automata-Based String Analysis for Detecting Malware in Android Programs

Authors: Assad Maalouf, Lunjin Lu, James Lynott

Abstract:

We design and implement a precise model of string operations using finite state machine transformers and state transformers to approximate the values that string variables can take throughout the execution of a program. We use our model to analyze the string variables of Android programs. Our experimental results show that our string analysis is very efficient at detecting the contextual effect of string operations on string variables. Our model also proved very useful for verifying statements about the string variables of a program.
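
The paper's abstraction is built from finite-state-machine transformers; as a much coarser illustration of the general idea of abstracting string values, here is a bounded set-of-strings domain with a TOP element that models concatenation (purely hypothetical, not the authors' model).

```python
TOP = None          # "any string"
MAX_SET = 8         # widen to TOP once the set grows past this bound

def lift(s):
    return {s}

def join(a, b):
    # abstract union of values from two program branches
    if a is TOP or b is TOP:
        return TOP
    u = a | b
    return TOP if len(u) > MAX_SET else u

def concat(a, b):
    # abstract counterpart of s1 + s2
    if a is TOP or b is TOP:
        return TOP
    u = {x + y for x in a for y in b}
    return TOP if len(u) > MAX_SET else u

# e.g. a URL built along two branches of an Android program
scheme = join(lift("http://"), lift("https://"))
host = lift("example.com/")
path = join(lift("login"), lift("pay"))
print(concat(concat(scheme, host), path))
```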

Keywords: abstract interpretation, android, static analysis, string analysis

Procedia PDF Downloads 176
16758 Reconstruction of a Genome-Scale Metabolic Model to Simulate Uncoupled Growth of Zymomonas mobilis

Authors: Maryam Saeidi, Ehsan Motamedian, Seyed Abbas Shojaosadati

Abstract:

Zymomonas mobilis is known as an example of the uncoupled growth phenomenon. This microorganism also has a unique metabolism that degrades glucose by the Entner-Doudoroff (ED) pathway. In this paper, a genome-scale metabolic model including 434 genes, 757 reactions and 691 metabolites was reconstructed to simulate uncoupled growth and study its effect on flux distribution in the central metabolism. The model properly predicted that ATPase was activated at the experimental growth yields of Z. mobilis. The flux distribution obtained from the model indicates that the major carbon flux passes through the ED pathway, resulting in the production of ethanol. Small amounts of the carbon source enter the pentose phosphate pathway and the TCA cycle to produce biomass precursors. The predicted flux distribution was in good agreement with experimental data. The model results also indicated that Z. mobilis metabolism is able to produce biomass with a maximum growth yield of 123.7 g (mol glucose)-1 if ATP synthase is coupled with growth and produces 82 mmol ATP gDCW-1h-1. Coupling growth and energy reduced ethanol secretion and changed the flux distribution to produce biomass precursors.
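
A sketch of the kind of flux balance analysis described, using COBRApy; the SBML file name and reaction identifiers below are hypothetical placeholders, not the authors' published model.

```python
from cobra.io import read_sbml_model

model = read_sbml_model("iZM_zymomonas.xml")       # hypothetical genome-scale model file

# baseline growth prediction
sol = model.optimize()
print("growth rate:", sol.objective_value)

# mimic uncoupled growth by forcing ATP dissipation through a maintenance reaction
atpm = model.reactions.get_by_id("ATPM")           # hypothetical ATP maintenance reaction ID
atpm.lower_bound = 82.0                            # mmol ATP gDCW-1 h-1, value quoted above
sol_uncoupled = model.optimize()
print("growth with forced ATP dissipation:", sol_uncoupled.objective_value)

# inspect how carbon is routed through the ED pathway vs. ethanol secretion
for rxn_id in ("EDD", "EDA", "EX_etoh_e"):         # hypothetical reaction IDs
    print(rxn_id, sol_uncoupled.fluxes[rxn_id])
```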

Keywords: genome-scale metabolic model, Zymomonas mobilis, uncoupled growth, flux distribution, ATP dissipation

Procedia PDF Downloads 481
16757 Privacy-Preserving Location Sharing System with Client/Server Architecture in Mobile Online Social Network

Authors: Xi Xiao, Chunhui Chen, Xinyu Liu, Guangwu Hu, Yong Jiang

Abstract:

Location sharing is a fundamental service in mobile Online Social Networks (mOSNs) and has raised significant privacy concerns in recent years. Most location-based service applications currently adopt a client/server architecture. In this paper, a location sharing system named CSLocShare is presented to provide flexible privacy-preserving location sharing with a client/server architecture in mOSNs. CSLocShare enables location sharing between both trusted social friends and untrusted strangers without a third-party server. In CSLocShare, the Location-Storing Social Network Server (LSSNS) provides location-based services but does not learn the users' real locations. A thorough analysis indicates that the users' location privacy is protected, while storage and communication costs are reduced. CSLocShare is thus more suitable and effective in practice.
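
As a minimal illustration of the underlying principle (the server stores only data it cannot read), and not CSLocShare's actual protocol, the sketch below encrypts coordinates on the client with a key shared only among trusted friends, so the LSSNS-style server merely relays opaque ciphertext.

```python
import json
from cryptography.fernet import Fernet

friend_group_key = Fernet.generate_key()          # distributed out-of-band to trusted friends
f = Fernet(friend_group_key)

# client side: encrypt before upload
location = json.dumps({"lat": 22.5333, "lon": 113.9333, "ts": 1700000000}).encode()
ciphertext = f.encrypt(location)

# server side: can only store and forward the opaque blob
server_store = {"user_42": ciphertext}

# friend side: decrypt with the shared key
shared = json.loads(Fernet(friend_group_key).decrypt(server_store["user_42"]))
print(shared["lat"], shared["lon"])
```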

Keywords: mobile online social networks, client/server architecture, location sharing, privacy-preserving

Procedia PDF Downloads 320
16756 Developing Measurement Model of Interpersonal Skills of Youth

Authors: Mohd Yusri Ibrahim

Abstract:

Although it is known that interpersonal skills are essential for personal development, the debate continues as to how to measure those skills, especially in youths. This study was conducted to develop a measurement model of interpersonal skills comprising three constructs, namely personal, skills and relationship; six functions, namely self, perception, listening, conversation, emotion and conflict management; and 30 behaviours as indicators. A cross-sectional survey by questionnaire was conducted on the east side of Peninsular Malaysia with 150 respondents and analyzed by structural equation modelling (SEM) in AMOS. The suggested constructs, functions and indicators were considered acceptable measurement elements based on the regression weights for standardized loadings, average variance extracted (AVE) for convergent validity, the square root of AVE for discriminant validity, composite reliability (CR), and at least three fit indices for model fitness. Finally, a measurement model of interpersonal skills for youth was successfully developed.
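
A worked illustration of the acceptance checks mentioned above; the loadings are made-up numbers, not the study's estimates, and only the standard AVE and CR formulas are applied.

```python
import numpy as np

# AVE = mean(lambda_i^2); CR = (sum lambda_i)^2 / ((sum lambda_i)^2 + sum(1 - lambda_i^2))
loadings = {
    "personal":     np.array([0.71, 0.68, 0.74, 0.70, 0.66]),   # hypothetical loadings
    "skills":       np.array([0.78, 0.81, 0.69, 0.73, 0.75]),
    "relationship": np.array([0.72, 0.70, 0.77, 0.68, 0.74]),
}

for construct, lam in loadings.items():
    ave = np.mean(lam ** 2)                                          # convergent validity: AVE >= 0.5
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + np.sum(1 - lam ** 2))    # composite reliability: CR >= 0.7
    # sqrt(AVE) is then compared against inter-construct correlations for discriminant validity
    print(f"{construct}: AVE={ave:.2f}, CR={cr:.2f}, sqrt(AVE)={np.sqrt(ave):.2f}")
```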

Keywords: interpersonal communication, interpersonal skill, youth, communication skill

Procedia PDF Downloads 308
16755 Detecting Overdispersion for Mortality AIDS in Zero-inflated Negative Binomial Death Rate (ZINBDR) Co-infection Patients in Kelantan

Authors: Mohd Asrul Affedi, Nyi Nyi Naing

Abstract:

Overdispersion is often present in count data, and when it occurs a Negative Binomial (NB) model is commonly used in place of the standard Poisson model. For the analysis of count data events such as mortality cases, a Poisson regression model is in principle appropriate; however, it becomes inappropriate when excess zero values are present, in which case a zero-inflated negative binomial model is preferable. In this article, we modelled the mortality cases as a dependent variable with age as a categorical predictor. The objective of this study is to determine whether overdispersion exists in the mortality data of AIDS co-infection patients in Kelantan.
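
A sketch of the workflow with statsmodels, on simulated counts rather than the Kelantan registry: check a Poisson fit for overdispersion via the Pearson chi-square / degrees-of-freedom ratio, then fit a zero-inflated negative binomial when excess zeros are present.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(0)
n = 500
age_group = rng.integers(0, 4, n)                                    # categorical age predictor
X = sm.add_constant((age_group[:, None] == np.arange(1, 4)).astype(float))  # dummy coding
mu = np.exp(0.3 + 0.4 * age_group)
deaths = rng.negative_binomial(n=1.5, p=1.5 / (1.5 + mu))            # overdispersed counts
deaths[rng.random(n) < 0.35] = 0                                      # inject extra zeros

poisson_fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()
dispersion = poisson_fit.pearson_chi2 / poisson_fit.df_resid
print(f"Poisson dispersion ratio: {dispersion:.2f}  (>1 suggests overdispersion)")

zinb = ZeroInflatedNegativeBinomialP(deaths, X, exog_infl=np.ones((n, 1)), p=2)
zinb_fit = zinb.fit(method="bfgs", maxiter=500, disp=False)
print(zinb_fit.summary())
```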

Keywords: negative binomial death rate, overdispersion, zero-inflation negative binomial death rate, AIDS

Procedia PDF Downloads 459
16754 Mitigation of Electromagnetic Interference Generated by GPIB Control-Network in AC-DC Transfer Measurement System

Authors: M. M. Hlakola, E. Golovins, D. V. Nicolae

Abstract:

The field of instrumentation electronics is undergoing explosive growth due to its wide range of applications. The proliferation of electrical devices in close working proximity means they can negatively influence each other's performance. This degradation in performance is due to electromagnetic interference (EMI). This paper investigates the negative effects of electromagnetic interference originating in the General Purpose Interface Bus (GPIB) control network of an ac-dc transfer measurement system. Remedial measures for reducing measurement errors and the failure of a range of industrial devices due to EMI have been explored. The ac-dc transfer measurement system was analyzed for common-mode (CM) EMI effects. Further investigation of the coupling path, as well as more accurate identification of the noise propagation mechanism, is outlined. To prevent the common-mode ground loops identified between the GPIB system control circuit and the measurement circuit, a microcontroller-driven GPIB switching isolator device was designed, prototyped, programmed and validated. This mitigation technique has been explored to reduce EMI effectively.

Keywords: CM, EMI, GPIB, ground loops

Procedia PDF Downloads 287
16753 A Traceability Index for Food

Authors: Hari Pulapaka

Abstract:

This paper defines and develops the notion of a traceability index for food, which may be used by any consumer (restaurant, distributor, average consumer, etc.). The concept is then extended to a region's food system as a way to measure how well a regional food system utilizes its own bounty or, at least, is connected to its food sources. With increasing emphasis on the sustainability of regional and, ultimately, global food systems, it is reasonable to accept that if we know how close (in relative terms) an end-user of a set of ingredients is to the sources as those ingredients traverse the maze of supply chains, we may be better equipped to evaluate the quality of the set as measured by any number of qualitative and quantitative criteria. We propose a mathematical model which may be adapted to a number of contexts and sizes. Two hypothetical cases of different scope are presented which highlight how the model works as an evaluator of the steps between an end-user and the source(s) of the ingredients they consume. The variables in the model are flexible enough to be adapted to other applications beyond food systems.
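
The paper's actual formulation is not reproduced here; purely as a hypothetical illustration of scoring "closeness to source", the sketch below weights each ingredient by its share of a dish and penalizes the number of supply-chain steps between source and end-user.

```python
def traceability_index(ingredients):
    """ingredients: list of (share, steps_from_source); shares sum to 1.
    Fewer intermediate steps -> closer to the source -> index closer to 1."""
    return sum(share / (1 + steps) for share, steps in ingredients)

menu_item = [
    (0.50, 1),   # vegetables bought directly from a local farm
    (0.30, 3),   # meat via processor and regional distributor
    (0.20, 5),   # spices through a long import chain
]
print(f"traceability index: {traceability_index(menu_item):.2f}")   # 1.0 = fully direct sourcing
```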

Keywords: food, traceability, supply chain, mathematical model

Procedia PDF Downloads 269
16752 Analytics Model in a Telehealth Center Based on Cloud Computing and Local Storage

Authors: L. Ramirez, E. Guillén, J. Sánchez

Abstract:

Some of the main goals of telecare, such as monitoring, treatment, and telediagnosis, are achieved by integrating applications with specific appliances. In order to achieve a coherent model that integrates software, hardware, and healthcare systems, different telehealth models based on the Internet of Things (IoT), cloud computing, artificial intelligence, etc. have been implemented, and their advantages are still under analysis. In this paper, we propose an integrated model based on an IoT architecture and a cloud computing telehealth center. An analytics module is presented as a solution for supporting reliable diagnosis of selected diseases. Specific features are then compared with recently deployed conventional models in telemedicine. The main advantage of this model is control over the security and privacy of patient information and the optimization of acquiring and processing clinical parameters according to technical characteristics.

Keywords: analytics, telemedicine, internet of things, cloud computing

Procedia PDF Downloads 318
16751 A New Distribution and Application on the Lifetime Data

Authors: Gamze Ozel, Selen Cakmakyapan

Abstract:

We introduce a new model called the Marshall-Olkin Rayleigh distribution, which extends the Rayleigh distribution using the Marshall-Olkin transformation and has increasing and decreasing shapes of the hazard rate function. Various structural properties of the new distribution are derived, including explicit expressions for the moments, the generating and quantile functions, some entropy measures, and order statistics. The model parameters are estimated by the method of maximum likelihood, and the observed information matrix is determined. The potential of the new model is illustrated by means of a real-life data set.
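
A sketch of maximum-likelihood fitting under the usual Marshall-Olkin construction (survival function F_bar(x) = a*G_bar(x) / (1 - (1-a)*G_bar(x)) with a Rayleigh baseline G); the data below are simulated, not the paper's real-life data set.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rayleigh

def mo_rayleigh_logpdf(x, a, sigma):
    g_bar = np.exp(-x**2 / (2 * sigma**2))                 # Rayleigh survival function
    log_g = np.log(x / sigma**2) - x**2 / (2 * sigma**2)   # Rayleigh log-density
    # f(x) = a*g(x) / (1 - (1-a)*G_bar(x))^2
    return np.log(a) + log_g - 2 * np.log1p(-(1 - a) * g_bar)

def neg_loglik(theta, x):
    a, sigma = np.exp(theta)                               # log-parameters enforce positivity
    return -np.sum(mo_rayleigh_logpdf(x, a, sigma))

x = rayleigh.rvs(scale=2.0, size=400, random_state=0)      # toy lifetime data
res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), args=(x,), method="Nelder-Mead")
a_hat, sigma_hat = np.exp(res.x)
print(f"alpha_hat = {a_hat:.3f}, sigma_hat = {sigma_hat:.3f}")
```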

Keywords: Marshall-Olkin distribution, Rayleigh distribution, estimation, maximum likelihood

Procedia PDF Downloads 498
16750 Range Suitability Model for Livestock Grazing in Taleghan Rangelands

Authors: Hossein Arzani, Masoud Jafari Shalamzari, Z. Arzani

Abstract:

This paper follows the FAO model of suitability analysis. Influential factors affecting extensive grazing were determined and converted into a model, with the Taleghan rangelands examined for common types of grazing animals as an example. Advantages and limitations were elicited. All components of range ecosystems affect range suitability, but due to time and budget restrictions, only the most important and feasible elements were investigated, from which three sub-models were built: water accessibility, forage production and erosion sensitivity. Suitable areas in four levels of suitability were calculated using GIS. This suitability modeling approach was adopted due to its simplicity and the minimal time required for transforming and analyzing the data sets. Managers could benefit from the model to devise measures more wisely, cope with the limitations, and enhance rangeland health and condition.
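
A sketch of the overlay step on synthetic rasters (the study used real GIS layers for Taleghan), assuming the FAO limiting-factor rule in which each cell takes the worst class among the three sub-models.

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (5, 5)                                   # toy 5x5 rangeland grid
water_access   = rng.integers(1, 5, shape)       # 1=S1 highly suitable ... 4=N not suitable
forage_prod    = rng.integers(1, 5, shape)
erosion_sensit = rng.integers(1, 5, shape)

# limiting-factor rule (assumed): final class = worst (maximum) class across sub-models
final_suitability = np.maximum.reduce([water_access, forage_prod, erosion_sensit])

classes, counts = np.unique(final_suitability, return_counts=True)
for cls, cnt in zip(classes, counts):
    label = f"S{cls}" if cls < 4 else "N"
    print(f"class {label}: {cnt} cells ({100 * cnt / final_suitability.size:.0f}%)")
```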

Keywords: range suitability, land-use, extensive grazing, modeling, land evaluation

Procedia PDF Downloads 333
16749 Numerical Simulation of Ultraviolet Disinfection in a Water Reactor

Authors: H. Shokouhmand, H. Sobhani, B. Sajadi, M. Degheh

Abstract:

In recent years, experimental and numerical investigation of water UV reactors has increased significantly. The main drawback of experimental methods is the confined and expensive survey of UV reactor features. In this study, a CFD model utilizing the Eulerian-Lagrangian framework is applied to analyze the disinfection performance of a closed-conduit reactor which contains four UV lamps perpendicular to the flow. A discrete ordinates (DO) model was employed to evaluate the UV irradiance field. To investigate the importance of each of the lamps for the inactivation performance, in addition to the reference model (with four bright lamps), several models with one or two bright lamps in various arrangements were considered. All results were reported for three inactivation kinetics. The results showed that the log inactivation of the model with the two central lamps bright was between 88 and 99 percent, close to the reference model results. Also, the closer the lamps are to the main flow region, the greater their effect on microbial inactivation. The effect of some operational parameters such as water flow rate, inlet water temperature, and lamp power was also studied.
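
An illustrative post-processing step, assuming first-order (Chick-Watson) kinetics as one of the kinetics families: the UV fluence received along a Lagrangian particle track is integrated and converted to log inactivation. Values are placeholders, not the study's DO field.

```python
import numpy as np

k = 0.2      # cm^2/mJ, assumed first-order inactivation rate constant
dt = 0.1     # s, time step between sampled positions along the particle track

# irradiance (mW/cm^2) sampled from the DO radiation field at successive particle positions
irradiance_along_path = np.array([2.0, 3.5, 6.0, 8.5, 6.5, 4.0, 2.5, 1.0])

dose = np.sum(irradiance_along_path * dt)        # mJ/cm^2, accumulated fluence
log_inactivation = k * dose / np.log(10)         # first order: N/N0 = exp(-k*D)
print(f"dose = {dose:.2f} mJ/cm^2, log10 inactivation = {log_inactivation:.3f}")
```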

Keywords: Eulerian-Lagrangian framework, inactivation kinetics, log inactivation, water UV reactor

Procedia PDF Downloads 243
16748 A Deterministic Approach for Solving the Hull and White Interest Rate Model with Jump Process

Authors: Hong-Ming Chen

Abstract:

This work considers the resolution of the Hull and White interest rate model with a jump process. A deterministic approach is adopted to model the random behavior of interest rate variation as deterministic perturbations depending on the time t. The Brownian motion and jump uncertainties are represented by the piecewise constant function w(t) and the point function θ(t) appearing under the integral. It is shown that the interest rate function and the yield function of the Hull and White interest rate model with a jump process can be obtained by solving a nonlinear semi-infinite programming problem. A relaxed cutting plane algorithm is then proposed for solving the resulting optimization problem. The method is calibrated to U.S. Treasury securities using 3-month data and is used to analyze several effects on interest rate prices, including interest rate variability and the negative correlation between stock returns and interest rates. The numerical results illustrate that our approach essentially generates yield functions with minimal fitting errors and small oscillation.
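
For orientation, a sketch of the standard stochastic form of the model (the paper instead treats the perturbations deterministically and solves a semi-infinite program): Euler simulation of dr = (theta(t) - a*r) dt + sigma dW plus compound-Poisson jumps, with all parameters assumed.

```python
import numpy as np

rng = np.random.default_rng(7)
a, sigma = 0.10, 0.01                 # mean-reversion speed and volatility (assumed)
lam, jump_scale = 0.5, 0.005          # jump intensity (per year) and jump-size scale (assumed)
theta = lambda t: 0.03 * a            # flat long-run level of 3% for illustration

T, n = 5.0, 5 * 252
dt = T / n
r = np.empty(n + 1)
r[0] = 0.02
for i in range(n):
    t = i * dt
    jump = rng.laplace(0.0, jump_scale) if rng.random() < lam * dt else 0.0
    r[i + 1] = (r[i] + (theta(t) - a * r[i]) * dt
                + sigma * np.sqrt(dt) * rng.standard_normal() + jump)

print(f"mean short rate over path: {r.mean():.4f}, final: {r[-1]:.4f}")
```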

Keywords: optimization, interest rate model, jump process, deterministic

Procedia PDF Downloads 158
16747 Development of a Numerical Model to Predict Wear in Grouted Connections for Offshore Wind Turbine Generators

Authors: Paul Dallyn, Ashraf El-Hamalawi, Alessandro Palmeri, Bob Knight

Abstract:

In order to better understand the long term implications of the grout wear failure mode in large-diameter plain-sided grouted connections, a numerical model has been developed and calibrated that can take advantage of existing operational plant data to predict the wear accumulation for the actual load conditions experienced over a given period, thus limiting the need for expensive monitoring systems. This model has been derived and calibrated based on site structural condition monitoring (SCM) data and supervisory control and data acquisition systems (SCADA) data for two operational wind turbine generator substructures afflicted with this challenge, along with experimentally derived wear rates.

Keywords: grouted connection, numerical model, offshore structure, wear, wind energy

Procedia PDF Downloads 447
16746 Seismic Assessment of Old Existing RC Buildings with Masonry Infill in Madinah as Per ASCE

Authors: Tarek M. Alguhane, Ayman H. Khalil, Nour M. Fayed, Ayman M. Ismail

Abstract:

An existing RC building in Madinah is seismically evaluated with and without infill walls. Four model systems have been considered, i.e., model I (no infill), model IIA (strut infill, updated from field tests), model IIB (strut infill, ASCE/SEI 41) and model IIC (strut infill, soft storey, ASCE/SEI 41). Three-dimensional pushover analyses have been carried out using SAP 2000 software incorporating inelastic material behavior for concrete, steel and infill walls. The infill wall has been modeled as an equivalent strut according to a suggested equation matching field test measurements and to the ASCE/SEI 41 equation. The effect of building modeling on the performance point, as well as on the capacity and demand spectra due to the earthquake design spectrum function for the Madinah area, has been investigated. The response modification factor (R) for the 5-story RC building is evaluated from the capacity and demand spectra (ATC-40) for the studied models. The results are summarized and discussed.
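
An illustrative ATC-40-style conversion of a pushover curve to a capacity spectrum, from which the performance point and R factor are subsequently obtained; all numbers and first-mode properties are made up, not the Madinah building's results.

```python
import numpy as np

W = 25_000.0            # kN, seismic weight (assumed)
alpha1 = 0.80           # first-mode mass participation factor (assumed)
PF1 = 1.30              # first-mode participation factor (assumed)
phi_roof1 = 1.0         # first-mode shape amplitude at roof (normalized)

base_shear = np.array([0, 1500, 2800, 3600, 3900])      # kN, pushover base shear (assumed)
roof_disp = np.array([0.0, 0.03, 0.08, 0.16, 0.26])     # m, pushover roof displacement (assumed)

Sa = (base_shear / W) / alpha1                           # spectral acceleration (g)
Sd = roof_disp / (PF1 * phi_roof1)                       # spectral displacement (m)
for sa, sd in zip(Sa, Sd):
    print(f"Sd = {sd:.3f} m, Sa = {sa:.3f} g")
# The performance point lies where this capacity spectrum crosses the reduced demand
# spectrum; R is then evaluated from the ratio of elastic demand to design strength.
```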

Keywords: infill wall, pushover analysis, response modification factor, seismic assessment

Procedia PDF Downloads 391
16745 Organizational Resilience in the Perspective of Supply Chain Risk Management: A Scholarly Network Analysis

Authors: William Ho, Agus Wicaksana

Abstract:

Anecdotal evidence in the last decade shows that the occurrence of disruptive events and uncertainties in the supply chain is increasing. The coupling of these events with the nature of an increasingly complex and interdependent business environment leads to devastating impacts that quickly propagate within and across organizations. For example, the recent COVID-19 pandemic increased the global supply chain disruption frequency by at least 20% in 2020 and is projected to have an accumulative cost of $13.8 trillion by 2024. This crisis raises attention to organizational resilience to weather business uncertainty. However, the concept has been criticized for being vague and lacking a consistent definition, thus reducing the significance of the concept for practice and research. This study is intended to solve that issue by providing a comprehensive review of the conceptualization, measurement, and antecedents of operational resilience that have been discussed in the supply chain risk management literature (SCRM). We performed a Scholarly Network Analysis, combining citation-based and text-based approaches, on 252 articles published from 2000 to 2021 in top-tier journals based on three parameters: AJG ranking and ABS ranking, UT Dallas and FT50 list, and editorial board review. We utilized a hybrid scholarly network analysis by combining citation-based and text-based approaches to understand the conceptualization, measurement, and antecedents of operational resilience in the SCRM literature. Specifically, we employed a Bibliographic Coupling Analysis in the research cluster formation stage and a Co-words Analysis in the research cluster interpretation and analysis stage. Our analysis reveals three major research clusters of resilience research in the SCRM literature, namely (1) supply chain network design and optimization, (2) organizational capabilities, and (3) digital technologies. We portray the research process in the last two decades in terms of the exemplar studies, problems studied, commonly used approaches and theories, and solutions provided in each cluster. We then provide a conceptual framework on the conceptualization and antecedents of resilience based on studies in these clusters and highlight potential areas that need to be studied further. Finally, we leverage the concept of abnormal operating performance to propose a new measurement strategy for resilience. This measurement overcomes the limitation of most current measurements that are event-dependent and focus on the resistance or recovery stage - without capturing the growth stage. In conclusion, this study provides a robust literature review through a scholarly network analysis that increases the completeness and accuracy of research cluster identification and analysis to understand conceptualization, antecedents, and measurement of resilience. It also enables us to perform a comprehensive review of resilience research in SCRM literature by including research articles published during the pandemic and connects this development with a plethora of articles published in the last two decades. From the managerial perspective, this study provides practitioners with clarity on the conceptualization and critical success factors of firm resilience from the SCRM perspective.
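
A toy sketch of the bibliographic-coupling step used in the cluster formation stage (reference lists are invented): two articles are coupled when they cite the same sources, the edge weight is the count of shared references, and clusters are then read off the resulting graph.

```python
import networkx as nx

references = {
    "paper_A": {"ref1", "ref2", "ref3", "ref4"},
    "paper_B": {"ref2", "ref3", "ref5"},
    "paper_C": {"ref6", "ref7"},
    "paper_D": {"ref3", "ref4", "ref7"},
}

G = nx.Graph()
papers = list(references)
for i, p in enumerate(papers):
    for q in papers[i + 1:]:
        shared = len(references[p] & references[q])      # bibliographic coupling strength
        if shared:
            G.add_edge(p, q, weight=shared)

print(list(G.edges(data=True)))
print([sorted(c) for c in nx.connected_components(G)])   # crude stand-in for cluster formation
```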

Keywords: supply chain risk management, organizational resilience, scholarly network analysis, systematic literature review

Procedia PDF Downloads 69
16744 A Holistic View of Microbial Community Dynamics during a Toxic Harmful Algal Bloom

Authors: Shi-Bo Feng, Sheng-Jie Zhang, Jin Zhou

Abstract:

The relationship between microbial diversity and algal blooms has received considerable attention for decades. Microbes undoubtedly affect annual bloom events and impact the physiology of both partners, as well as shape ecosystem diversity. However, knowledge about interactions and network correlations among the broader spectrum of microbes that drive the dynamics of a complete bloom cycle is limited. In this study, pyrosequencing and network approaches simultaneously assessed the association patterns among bacteria, archaea, and microeukaryotes in surface water and sediments in response to a natural dinoflagellate (Alexandrium sp.) bloom. In surface water, within the bacterial community, Gamma-Proteobacteria and Bacteroidetes dominated the initial bloom stage, while Alpha-Proteobacteria, Cyanobacteria, and Actinobacteria became the most abundant taxa during the post-bloom stage. The archaeal community clustered predominantly with methanogenic members in the early pre-bloom period, while the majority of species identified in the late-bloom stage were ammonia-oxidizing archaea and Halobacteriales. Among eukaryotes, the dinoflagellate (Alexandrium sp.) dominated the onset stage, whereas multiple species (such as microzooplankton, diatoms, green algae, and rotifers) coexisted in the bloom collapse stage. In sediments, microbial biomass and richness were much higher than in the water column. Only Flavobacteriales and Rhodobacterales showed a slight response to bloom stages. Unlike the bacteria, the archaeal and eukaryotic structures in the sediment showed only small fluctuations. The network analyses of inter-specific associations show that bacteria (Alteromonadaceae, Oceanospirillaceae, Cryomorphaceae, and Piscirickettsiaceae) and some zooplankton (Mediophyceae, Mamiellophyceae, Dictyochophyceae and Trebouxiophyceae) have a stronger impact on the structuring of phytoplankton communities than archaeal effects. The changes in population were also significantly shaped by water temperature and substrate availability (N & P resources). The results suggest that clades are specialized at different time periods and that the pre-bloom succession was mainly bottom-up controlled, while the late-bloom period was controlled by top-down patterns. Additionally, phytoplankton and prokaryotic communities correlated better with each other, which indicates that interactions among microorganisms are critical in controlling plankton dynamics and fates. Our results supply a wider view (across temporal and spatial scales) for understanding microbial ecological responses and their network associations during algal blooms. They offer a potential multidisciplinary explanation for algal-microbe interactions and help us move beyond the traditional view of algal bloom initiation, development, decline, and biogeochemistry.
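
A sketch of one common way such association networks are built (a toy abundance table stands in for the pyrosequencing data): pairwise Spearman correlations between taxa across samples, keeping only strong, significant associations as edges.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
taxa = ["Alexandrium", "Gammaproteobacteria", "Bacteroidetes", "Actinobacteria", "AOA_archaea"]
abundance = rng.poisson(lam=20, size=(12, len(taxa)))    # 12 samples x 5 taxa (toy counts)

rho, pval = spearmanr(abundance)                         # pairwise correlations over columns
edges = []
for i in range(len(taxa)):
    for j in range(i + 1, len(taxa)):
        if abs(rho[i, j]) > 0.6 and pval[i, j] < 0.05:   # assumed thresholds for an edge
            edges.append((taxa[i], taxa[j], round(float(rho[i, j]), 2)))
print(edges)
```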

Keywords: microbial community, harmful algal bloom, ecological process, network

Procedia PDF Downloads 106
16743 Weed Classification Using a Two-Dimensional Deep Convolutional Neural Network

Authors: Muhammad Ali Sarwar, Muhammad Farooq, Nayab Hassan, Hammad Hassan

Abstract:

Pakistan is highly recognized for its agriculture and is well known for producing substantial amounts of wheat, cotton, and sugarcane. However, some factors contribute to a decline in crop quality and a reduction in overall output. One of the main factors contributing to this decline is the presence of weeds and their late detection. The detection process is manual and demands a detailed inspection by the farmer; earlier detection of weeds would allow the farmer to save cost and increase overall production. The focus of this research is to identify and classify the four main types of weeds (Small-Flowered Cranesbill, Chickweed, Prickly Acacia, and Black-Grass) that are prevalent in our region's major crops. In this work, we implemented three different deep learning techniques, YOLO-v5, Inception-v3, and a deep CNN, on the same dataset and concluded that the deep convolutional neural network performed best for this classification task, with an accuracy of 97.45%. Relative to the state of the art, our proposed approach yields 2% better results. We devised the architecture in an efficient way such that it can be used in real time.
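
A compact sketch of a two-dimensional deep CNN for four weed classes; the layer sizes, image size, and directory layout are illustrative assumptions, not the authors' exact architecture or dataset paths.

```python
import tensorflow as tf

NUM_CLASSES, IMG_SIZE = 4, (128, 128)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(*IMG_SIZE, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# hypothetical directory layout: one sub-folder per weed class
train_ds = tf.keras.utils.image_dataset_from_directory("weeds/train", image_size=IMG_SIZE)
val_ds = tf.keras.utils.image_dataset_from_directory("weeds/val", image_size=IMG_SIZE)
model.fit(train_ds, validation_data=val_ds, epochs=10)
```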

Keywords: deep convolution networks, Yolo, machine learning, agriculture

Procedia PDF Downloads 103
16742 Scorbot-ER 4U Using Forward Kinematics Modelling and Analysis

Authors: D. Maneetham, L. Sivhour

Abstract:

Robotic arm manipulators are widely used to accomplish many kinds of tasks. The SCORBOT-ER 4u is a 5-degree-of-freedom (DOF) vertical articulated educational robotic arm whose joints are all revolute. It is specifically designed to perform pick-and-place tasks with its gripper. The pick-and-place task requires relating the end-effector coordinates of the robotic arm to the desired position coordinates in its workspace. This paper describes forward kinematics modeling and analysis of the robotic end-effector motion through the joint space. The forward kinematics problem is defined by the transformation from the joint space to the Cartesian space. The Denavit-Hartenberg (D-H) convention is used to model the robotic links and joints with 4x4 homogeneous matrices. The forward kinematics model is developed and simulated in MATLAB, and the mathematical model is validated using the Robotics Toolbox in MATLAB. By using this method, the end-effector coordinates of this robotic arm, and of other similar arms, can be obtained. The software development of the SCORBOT-ER 4u is also described here. PC- and EtherCAT-based control technology from BECKHOFF is used to control the arm to execute the pick-and-place task.
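
A sketch of the forward-kinematics chain using standard Denavit-Hartenberg 4x4 homogeneous transforms; the D-H table below is a placeholder, not SCORBOT-ER 4u's exact parameters.

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Standard D-H homogeneous transform for one link."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    T = np.eye(4)
    for q, (theta_offset, d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_matrix(theta_offset + q, d, a, alpha)   # all joints are revolute
    return T

# hypothetical 5-row D-H table: (theta offset, d, a, alpha) per link, lengths in metres
DH_TABLE = [
    (0.0, 0.349, 0.016, np.pi / 2),
    (0.0, 0.0,   0.221, 0.0),
    (0.0, 0.0,   0.221, 0.0),
    (0.0, 0.0,   0.0,   np.pi / 2),
    (0.0, 0.145, 0.0,   0.0),
]
T05 = forward_kinematics(np.deg2rad([0, 30, -45, 15, 0]), DH_TABLE)
print("end-effector position:", T05[:3, 3])
```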

Keywords: forward kinematics, D-H model, robotic toolbox, PC- and EtherCAT-based control

Procedia PDF Downloads 168
16741 Wind Interference Effects on Various Plan Shape Buildings Under Wind Load

Authors: Ritu Raj, Hrishikesh Dubey

Abstract:

This paper presents the results of experimental investigations carried out on two intricate plan-shaped buildings to evaluate their aerodynamic performance. The purpose is to study the wind environment arising due to wind forces in isolated and interference conditions on models at a scale of 1:300, the prototype having a height of 180 m. Experimental tests were carried out in the boundary layer wind tunnel for isolated conditions with wind directions from 0° to 180° and for four interference conditions of twin buildings (separately for both models). The research was undertaken in Terrain Category II, which is the most widely available terrain in India. A comparative assessment of the two models is carried out in an attempt to comprehend the various consequences of the diverse conditions that may emerge in real-life situations, as well as the discrepancies amongst them. The experimental wind pressure coefficients of Model-1 and Model-2 show good agreement across the various wind incidence conditions, with only a minute difference in the magnitudes of mean Cp. On the basis of the wind tunnel studies, it is found that the performance of Model-2 is better than that of Model-1 in both isolated and interference conditions for all wind incidences and orientations, respectively.

Keywords: interference factor, tall buildings, wind direction, mean pressure-coefficients

Procedia PDF Downloads 121
16740 Comparison of Quality of Life One Year after Bariatric Intervention: Systematic Review of the Literature with Bayesian Network Meta-Analysis

Authors: Piotr Tylec, Alicja Dudek, Grzegorz Torbicz, Magdalena Mizera, Natalia Gajewska, Michael Su, Tanawat Vongsurbchart, Tomasz Stefura, Magdalena Pisarska, Mateusz Rubinkiewicz, Piotr Malczak, Piotr Major, Michal Pedziwiatr

Abstract:

Introduction: Quality of life after bariatric surgery is an important factor when evaluating the final result of the treatment. Considering the vast surgical options, we tried to globally compare the available methods in terms of quality of life following the surgery. The aim of the study is to compare the quality of life a year after bariatric intervention using network meta-analysis methods. Material and Methods: We performed a systematic review according to PRISMA guidelines with Bayesian network meta-analysis. Inclusion criteria were: studies comparing at least two methods of weight loss treatment, of which at least one is surgical, and assessment of the quality of life one year after surgery by validated questionnaires. The primary outcome was quality of life one year after the bariatric procedure. The following aspects of quality of life were analyzed: physical, emotional, general health, vitality, role physical, social, mental, and bodily pain. All questionnaires were standardized and pooled to a single scale. Lifestyle intervention was considered as the reference point. Results: An initial reference search yielded 5636 articles, of which 18 studies were evaluated. In the comparison of the total quality of life score, we observed that laparoscopic sleeve gastrectomy (LSG) (median (M): 3.606, 97.5% Credible Interval (CrI): 1.039; 6.191), laparoscopic Roux-en-Y gastric by-pass (LRYGB) (M: 4.973, CrI: 2.627; 7.317) and open Roux-en-Y gastric by-pass (RYGB) (M: 9.735, CrI: 6.708; 12.760) had better results than the other bariatric interventions in relation to lifestyle intervention. In the analysis of the physical aspects of quality of life, we noticed better results in LSG (M: 3.348, CrI: 0.548; 6.147) and in the LRYGB procedure (M: 5.070, CrI: 2.896; 7.208) than the control intervention, and the worst results in open RYGB (M: -9.212, CrI: -11.610; -6.844). Analyzing emotional aspects, we found better results than the control intervention in LSG, LRYGB, open RYGB, and laparoscopic gastric plication. In general health, better results were observed in LSG (M: 9.144, CrI: 4.704; 13.470), in LRYGB (M: 6.451, CrI: 10.240; 13.830) and in single-anastomosis gastric by-pass (M: 8.671, CrI: 1.986; 15.310), and the worst results in open RYGB (M: -4.048, CrI: -7.984; -0.305). In the social and vital aspects of quality of life, better results were observed in LSG and LRYGB than in the control intervention. We did not find any differences between bariatric interventions in the role-physical, mental, and bodily-pain aspects of quality of life. Conclusion: The network meta-analysis revealed that the best total quality of life scores one year after bariatric intervention were achieved after LSG, LRYGB, and open RYGB. In the physical and general health aspects, the worst quality of life was after the open RYGB procedure. The other interventions did not significantly affect quality of life after a year compared to dietary intervention.

Keywords: bariatric surgery, network meta-analysis, quality of life, one year follow-up

Procedia PDF Downloads 156
16739 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
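
A sketch of the model-averaging idea described above, combining logistic regression, random forest, and a neural network via soft voting in scikit-learn; the synthetic features stand in for the timing, weather-forecast, and past-pollutant variables.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=12, n_informative=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)),
    ],
    voting="soft",            # average predicted probabilities across the three models
)
ensemble.fit(Xtr, ytr)
print("combined accuracy:", ensemble.score(Xte, yte))
```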

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 115
16738 Development of the Academic Model to Predict Student Success at VUT-FSASEC Using Decision Trees

Authors: Langa Hendrick Musawenkosi, Twala Bhekisipho

Abstract:

The success or failure of students is a concern for every academic institution, college, university, government, and for students themselves. Several approaches have been researched to address this concern. In this paper, the view is held that when a student enters a university, college, or other academic institution, he or she enters an academic environment. The academic environment is a unique concept used here to develop an effective prediction solution. This paper presents a model to determine the propensity of a student to succeed or fail at the French South African Schneider Electric Education Center (FSASEC) at the Vaal University of Technology (VUT). The Decision Tree algorithm is used to implement the model at FSASEC.
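
A sketch of the decision-tree step with scikit-learn; the feature names and toy records are illustrative stand-ins for the FSASEC "academic environment" variables, not the study's actual attributes.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.DataFrame({
    "entry_score":     [62, 75, 58, 81, 67, 54, 90, 70, 60, 77],
    "attendance_pct":  [80, 95, 60, 98, 85, 55, 99, 88, 65, 92],
    "assignments_avg": [55, 72, 48, 85, 66, 40, 91, 74, 52, 80],
    "passed":          [0, 1, 0, 1, 1, 0, 1, 1, 0, 1],
})
X, y = df.drop(columns="passed"), df["passed"]
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Xtr, ytr)
print("test accuracy:", tree.score(Xte, yte))
print(export_text(tree, feature_names=list(X.columns)))   # human-readable decision rules
```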

Keywords: FSASEC, academic environment model, decision trees, k-nearest neighbor, machine learning, popularity index, support vector machine

Procedia PDF Downloads 196
16737 Analytical Modeling of Drain Current for DNA Biomolecule Detection in Double-Gate Tunnel Field-Effect Transistor Biosensor

Authors: Ashwani Kumar

Abstract:

This study presents an analytical modeling approach for analyzing the drain current behavior in Tunnel Field-Effect Transistor (TFET) biosensors used for the detection of DNA biomolecules. The proposed model focuses on elucidating the relationship between the drain current and the presence of DNA biomolecules, taking into account the impact of various device parameters and biomolecule characteristics. Through comprehensive analysis, the model offers insights into the underlying mechanisms governing the sensing performance of TFET biosensors, aiding in the optimization of device design and operation. A non-local tunneling model is incorporated with other essential models to accurately trace the simulated and modeled data. An experimental validation of the model is provided, demonstrating its efficacy in accurately predicting the drain current response to DNA biomolecule detection. The sensitivity attained from the analytical model is compared and contrasted with ongoing research work in this area.
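
A small illustration of the sensitivity metric commonly used for such dielectric-modulated biosensors, S = (I_D,bio - I_D,air) / I_D,air; the drain current values below are placeholders, not results from the model in the paper.

```python
I_d_air = 1.2e-9      # A, drain current with an empty (air-filled) cavity (assumed)
I_d_dna = 8.5e-8      # A, drain current with immobilized DNA in the cavity (assumed)

sensitivity = (I_d_dna - I_d_air) / I_d_air   # relative change in drain current
print(f"sensitivity = {sensitivity:.1f}")
```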

Keywords: biosensor, double-gate TFET, DNA detection, drain current modeling, sensitivity

Procedia PDF Downloads 51