Search results for: RLS identification algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6295

2515 Framework for Socio-Technical Issues in Requirements Engineering for Developing Resilient Machine Vision Systems Using Levels of Automation through the Lifecycle

Authors: Ryan Messina, Mehedi Hasan

Abstract:

This research examines the impact of using data to generate performance requirements for automated visual inspection with machine vision. The focus is on design situations and on how projects can smooth the transfer of tacit knowledge into an algorithm. We propose a framework for specifying machine vision systems. This framework utilizes varying levels of automation as contingency planning to reduce data processing complexity. Using data assists in extracting tacit knowledge from those who can perform the manual tasks, so that they can assist in designing the system; this means that real data from the system is always referenced, which minimizes errors between participating parties. We propose three indicators of whether a project has a high risk of failing to meet requirements related to accuracy and reliability. All systems tested achieved better integration into operations after the framework was applied.

Keywords: automation, contingency planning, continuous engineering, control theory, machine vision, system requirements, system thinking

Procedia PDF Downloads 201
2514 Alternator Fault Detection Using Wigner-Ville Distribution

Authors: Amin Ranjbar, Amir Arsalan Jalili Zolfaghari, Amir Abolfazl Suratgar, Mehrdad Khajavi

Abstract:

This paper describes a two-stage, learning-based fault detection procedure for alternators. The procedure distinguishes three machine conditions: a shortened brush, a high-impedance relay, and a healthy alternator. The fault detection algorithm uses the Wigner-Ville distribution as a feature extractor together with an appropriate feature classifier. In this work, an artificial neural network (ANN) and a support vector machine (SVM) were compared to determine the more suitable classifier, evaluated by the mean squared error criterion. The modules work together to detect possible faulty conditions of operating machines. To test the method's performance, a signal database was prepared by imposing the different conditions on a laboratory setup. The experimental results show that implementing this method achieves satisfactory results.
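The Wigner-Ville distribution used above as the feature extractor can be computed, for a discrete signal, as the FFT of the instantaneous autocorrelation over the lag variable. The following is a minimal numpy sketch, not the authors' implementation; note that this discrete form maps a tone at normalized frequency f to frequency bin 2fN.

```python
import numpy as np

def wigner_ville(x):
    """Discrete pseudo Wigner-Ville distribution of a 1-D complex signal.

    Returns an (N, N) time-frequency matrix: rows are time samples,
    columns are frequency bins.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        # maximum symmetric lag that stays inside the signal
        tau_max = min(n, N - 1 - n)
        taus = np.arange(-tau_max, tau_max + 1)
        # instantaneous autocorrelation r[tau] = x[n+tau] * conj(x[n-tau])
        r = x[n + taus] * np.conj(x[n - taus])
        # place the lags into an N-point buffer (zero-padded) and FFT over tau
        buf = np.zeros(N, dtype=complex)
        buf[taus % N] = r
        W[n] = np.real(np.fft.fft(buf))
    return W
```

The rows of the resulting matrix (or statistics derived from them) would then serve as feature vectors for the ANN/SVM classifiers.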

Keywords: alternator, artificial neural network, support vector machine, time-frequency analysis, Wigner-Ville distribution

Procedia PDF Downloads 367
2513 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance

Authors: Ammar Alali, Mahmoud Abughaban

Abstract:

Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents have been a major segment of the costs associated with non-productive time (NPT). Traditionally, stuck pipe problems are treated as part of operations and solved after sticking occurs. However, the real key to savings and success is predicting stuck pipe incidents and avoiding the conditions that lead to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and machine learning (ML) algorithms to predict stuck-pipe events in real time using surface drilling data with minimal computational power. The method combines two types of analysis: (1) real-time prediction and (2) cause analysis. Real-time prediction aggregates the input data, including historical surface drilling data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses these two physical operations (stacking and flattening) to filter noise in the signature and create a robust pre-determined pilot signature that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at the same frequency as the pre-determined signature. The matrix is then correlated, in real time, with the pre-determined stuck-pipe signature for the field. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant features. The correlation output is interpreted as a probability curve for real-time stuck pipe incident prediction.
Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user to the expected incident based on the set of pre-determined signatures. A set of recommendations is then provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures were first created for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. Predicting the stuck pipe problem requires a method that captures geological, geophysical, and drilling data and recognizes the indicators of this issue at the field and geological formation levels. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach and its ability to produce such signatures and predict this NPT event.
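The Correlation-based Feature Selection step described above scores a feature subset by high feature-class correlation and low feature-feature redundancy. Below is a minimal greedy sketch of CFS, assuming Pearson correlations on continuous data; the authors' exact variant is not specified in the abstract.

```python
import numpy as np

def cfs_merit(subset, feat_class_corr, feat_feat_corr):
    """CFS merit: rewards feature-class correlation, penalizes redundancy."""
    k = len(subset)
    r_cf = np.mean([feat_class_corr[i] for i in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(feat_feat_corr[i, j])
                    for a, i in enumerate(subset) for j in subset[a + 1:]])
    return (k * r_cf) / np.sqrt(k + k * (k - 1) * r_ff)

def cfs_forward_select(X, y):
    """Greedy forward selection maximizing the CFS merit."""
    n_feat = X.shape[1]
    fc = np.array([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in range(n_feat)])
    ff = np.abs(np.corrcoef(X, rowvar=False))
    selected, best = [], 0.0
    while True:
        cand = [(cfs_merit(selected + [i], fc, ff), i)
                for i in range(n_feat) if i not in selected]
        merit, i = max(cand)
        if merit <= best or len(selected) == n_feat - 1:
            return selected
        selected.append(i)
        best = merit
```

A redundant copy of an informative feature adds no merit, so the greedy search keeps only one of the pair.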

Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe

Procedia PDF Downloads 223
2512 Design and Motion Control of a Two-Wheel Inverted Pendulum Robot

Authors: Shiuh-Jer Huang, Su-Shean Chen, Sheam-Chyun Lin

Abstract:

A two-wheel inverted pendulum robot (TWIPR) is designed with two hub DC motors for human riding, and its motion control is evaluated. Accelerometer and gyroscope sensors are chosen to measure the tilt angle and angular velocity of the inverted pendulum robot. The mobile robot's position and velocity are estimated from the DC motors' built-in Hall sensors. The control kernel of this electric mobile robot is built around an embedded Arduino Nano microcontroller. A handlebar was designed to serve as the steering mechanism. An intelligent, model-free fuzzy sliding mode control (FSMC) scheme was employed as the main control algorithm for this mobile robot, with adjustments for different control purposes. Intelligent controllers were designed for balance control and moving speed control under different operating conditions, and the control performance was evaluated based on experimental results.
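The abstract pairs an accelerometer (noisy but drift-free tilt) with a gyroscope (smooth but drifting rate). One standard way to fuse the two into a tilt estimate, shown here only as an illustrative sketch and not the robot's actual firmware, is a complementary filter:

```python
def complementary_filter(accel_angles, gyro_rates, dt, alpha=0.98):
    """Fuse accelerometer tilt angles (noisy, drift-free) with gyro
    rates (smooth, drifting) into a single tilt estimate.

    alpha weights the integrated gyro path; (1 - alpha) weights the
    accelerometer path, which anchors the estimate against drift.
    """
    theta = accel_angles[0]  # initialize from the accelerometer
    estimates = []
    for a, w in zip(accel_angles, gyro_rates):
        # integrate the gyro rate, then correct toward the accel angle
        theta = alpha * (theta + w * dt) + (1 - alpha) * a
        estimates.append(theta)
    return estimates
```

The filtered tilt and the raw gyro rate would then feed the balance controller (the FSMC in the paper).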

Keywords: balance control, speed control, intelligent controller, two wheel inverted pendulum

Procedia PDF Downloads 215
2511 Research on Development and Accuracy Improvement of an Explosion Proof Combustible Gas Leak Detector Using an IR Sensor

Authors: Gyoutae Park, Seungho Han, Byungduk Kim, Youngdo Jo, Yongsop Shim, Yeonjae Lee, Sangguk Ahn, Hiesik Kim, Jungil Park

Abstract:

In this paper, we present the development of an explosion-proof, portable combustible gas leak detector, together with an algorithm that improves the accuracy of measured gas concentrations. The presented techniques apply a flame-proof enclosure and intrinsically safe explosion protection to an infrared gas leak detector, for the first time in Korea, and improve accuracy using a linearization recursion equation and a Lagrange interpolation polynomial. We also tested the sensor characteristics and calibrated suitable input gases and output voltages. We then advanced the performance of combustible gas detectors by reflecting the demands of the gas safety management field. To compare performance against two companies' detectors, we carried out measurement tests with eight standard gases made by the Korea Gas Safety Corporation. Experimental results demonstrate that our instrument has better detection accuracy than the other detectors.
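The accuracy-improvement algorithm relies in part on a Lagrange interpolation polynomial, which passes a polynomial exactly through the calibration points. A generic sketch follows; the voltages and concentrations here are placeholders, not the paper's calibration data.

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial through the
    points (xs[i], ys[i]) at x, e.g. mapping a sensor output voltage
    to a gas concentration via calibration points."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if i != j:
                # basis polynomial: 1 at xi, 0 at every other xj
                term *= (x - xj) / (xi - xj)
        total += term
    return total
```

With calibration points from a quadratic sensor response, the interpolant reproduces the quadratic exactly between and beyond the points.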

Keywords: accuracy improvement, IR gas sensor, gas leak, detector

Procedia PDF Downloads 387
2510 A Review on Water Models of Surface Water Environment

Authors: Shahbaz G. Hassan

Abstract:

Water quality models are very important for predicting changes in surface water quality for environmental management. The aim of this paper is to give an overview of water quality models and to provide directions for selecting a model in a specific situation. Water quality models include models based on a mechanistic approach, while other models simulate water quality without considering a mechanism. Mechanistic models can be widely applied and are capable of long-time simulation, though with high complexity; therefore, more space is devoted to explaining the principles and application experience of mechanistic models. Mechanistic models make certain assumptions about rivers, lakes, and estuaries, which limits the application range of each model, so this paper introduces the principles and applications of water quality models for these three scenarios. Empirical models, on the other hand, are easier to compute and are not limited by geographical conditions, but they cannot be used with confidence to simulate long-term changes. This paper divides the empirical models into two broad categories according to the mathematical algorithm used: models based on artificial intelligence and models based on statistical methods.

Keywords: empirical models, mathematical, statistical, water quality

Procedia PDF Downloads 254
2509 Presenting a Model of Empowering New Knowledge-Based Companies in the Iran Insurance Industry

Authors: Pedram Saadati, Zahra Nazari

Abstract:

In the last decade, the role and importance of knowledge-based technological businesses in the insurance industry have greatly increased; given the weakness of previous studies in Iran, the current research designs an InsurTech empowerment model. A hybrid framework was used to obtain the conceptual model of the research. The statistical population in the qualitative part consisted of experts, and in the quantitative part, InsurTech activists. The data collection tools in the qualitative part were in-depth and semi-structured interviews and a structured self-interaction matrix, and in the quantitative part, a researcher-made questionnaire. In the qualitative part, 55 indicators, 20 components, and 8 concepts (dimensions) were obtained by the content analysis method; then the relationships of the concepts with each other and the levels of the components were investigated. In the quantitative part, the information was analyzed using a descriptive-analytical method via path analysis and confirmatory factor analysis. The proposed model consists of eight dimensions: supporter capability, supervision of the insurance innovation ecosystem, and managerial, financial, technological, marketing, opportunity identification, and innovative InsurTech capabilities. The results of the statistical tests identifying the relationships of the concepts with each other are examined in detail, and suggestions are presented in the conclusion.

Keywords: InsurTech, knowledge-based, empowerment model, factor analysis, insurance

Procedia PDF Downloads 38
2508 Identifying Risk Factors for Readmission Using Decision Tree Analysis

Authors: Sıdıka Kaya, Gülay Sain Güven, Seda Karsavuran, Onur Toka

Abstract:

This study is part of an ongoing research project supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 114K404; participation in this conference was supported by the Hacettepe University Scientific Research Coordination Unit under Project Number 10243. Evaluation of hospital readmissions is gaining importance in terms of quality and cost and is becoming the target of national policies. In Turkey, hospital readmission is a relatively new topic on the agenda, and very few studies have been conducted on it. The aim of this study was to determine 30-day readmission rates and risk factors for readmission. Whether a readmission was planned, related to the prior admission, and avoidable was also assessed. The study was designed as a prospective cohort study. 472 patients hospitalized in the internal medicine departments of a university hospital in Turkey between February 1, 2015 and April 30, 2015 were followed up. Analyses were conducted using IBM SPSS Statistics version 22.0 and SPSS Modeler 16.0. The average age of the patients was 56, and 56% of the patients were female. Among these patients, 95 were readmitted, for an overall readmission rate of 20% (95/472). However, only 31 readmissions were unplanned, an unplanned readmission rate of 6.5% (31/472). Of the 31 unplanned readmissions, 24 were related to the prior admission, and only 6 of these related readmissions were avoidable. To determine risk factors for readmission, we constructed a Chi-square Automatic Interaction Detector (CHAID) decision tree. CHAID decision trees are nonparametric procedures that make no assumptions about the underlying data. The algorithm determines how independent variables best combine to predict a binary outcome based on 'if-then' logic, partitioning each independent variable into mutually exclusive subsets based on the homogeneity of the data.
The independent variables included in the analysis were: clinic of the department; occupied beds/total number of beds in the clinic at the time of discharge; age; gender; marital status; educational level; distance to residence (km); number of people living with the patient; any person to help with the patient's care at home after discharge (yes/no); regular source (physician) of care (yes/no); day of discharge; length of stay; ICU utilization (yes/no); total comorbidity score; means for each of the 3 dimensions of the Readiness for Hospital Discharge Scale (patient's personal status, patient's knowledge, and patient's coping ability); and number of daycare admissions within 30 days of discharge. To balance the data, we included all 95 readmitted patients (46.12%) but only 111 (53.88%) of our 377 non-readmitted patients. The risk factors for readmission were found to be total comorbidity score, gender, patient's coping ability, and patient's knowledge. The strongest identifying factor for readmission was the comorbidity score: if a patient's comorbidity score was higher than 1, the risk of readmission increased. The results of this study need to be validated against other datasets with more patients. However, we believe that this study will guide further studies of readmission and that CHAID is a useful tool for identifying risk factors for readmission.
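CHAID grows the tree by choosing, at each node, the predictor whose contingency table with the outcome has the most significant chi-square statistic. Below is a stripped-down sketch of that split rule, omitting CHAID's category-merging and Bonferroni-adjusted p-value steps; the variable names are illustrative, not the study's variables.

```python
def chi_square(table):
    """Chi-square statistic of a contingency table given as a list of rows."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n  # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

def best_chaid_split(X_cat, y):
    """Pick the categorical predictor whose contingency table with the
    binary outcome y has the largest chi-square statistic.

    X_cat maps predictor names to lists of category labels.
    """
    best_var, best_stat = None, -1.0
    for var, values in X_cat.items():
        cats = sorted(set(values))
        table = [[sum(1 for v, t in zip(values, y) if v == c and t == k)
                  for k in (0, 1)] for c in cats]
        stat = chi_square(table)
        if stat > best_stat:
            best_var, best_stat = var, stat
    return best_var, best_stat
```

A predictor that perfectly separates the outcome yields a much larger statistic than an uninformative one, so it is chosen as the root split.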

Keywords: decision tree, hospital, internal medicine, readmission

Procedia PDF Downloads 255
2507 A Dynamic Software Product Line Approach to Self-Adaptive Genetic Algorithms

Authors: Abdelghani Alidra, Mohamed Tahar Kimour

Abstract:

Genetic algorithms (GAs) must adapt themselves at design time to cope with the specific requirements of the search problem, and at runtime to balance exploration and convergence objectives. In a previous article, we showed that modeling and implementing GAs using the software product line (SPL) paradigm is valuable because GAs constitute a product family sharing a common code base. In the present article, we propose extending the feature model of the GA family to model the potential states of the GA, in what is called a dynamic software product line (DSPL). The objective of this paper is the systematic generation of a reconfigurable architecture that supports the dynamics of the GA and is easily deduced from the feature model. The resulting GA can perform dynamic reconfiguration autonomously to speed up the convergence process while producing better solutions. Another important advantage of our approach is the exploitation of recent advances in the domain of dynamic SPLs to enhance the performance of GAs.
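The runtime reconfiguration idea, switching the GA between exploration and convergence behavior, can be illustrated with a toy GA that adapts its mutation strength according to recent progress. This is only a sketch of the general principle, not the DSPL-generated architecture described above.

```python
import random

def adaptive_ga(f, lo, hi, pop_size=40, gens=80, seed=3):
    """Minimize f on [lo, hi] with a GA that reconfigures itself at
    runtime: mutation shrinks while the best fitness improves
    (convergence) and grows while it stalls (exploration)."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    mut_sigma, prev_best = (hi - lo) / 4, float("inf")
    for _ in range(gens):
        pop.sort(key=f)
        best = f(pop[0])
        # runtime reconfiguration of the mutation operator
        mut_sigma *= 0.8 if best < prev_best - 1e-12 else 1.1
        prev_best = best
        parents = pop[:pop_size // 2]          # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2 + rng.gauss(0, mut_sigma)  # crossover + mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return min(pop, key=f)
```

In the paper's terms, each mutation regime would correspond to a variant in the feature model that the reconfigurable architecture can switch between.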

Keywords: self-adaptive genetic algorithms, software engineering, dynamic software product lines, reconfigurable architecture

Procedia PDF Downloads 280
2506 Numerical Model for Investigation of Recombination Mechanisms in Graphene-Bonded Perovskite Solar Cells

Authors: Amir Sharifi Miavaghi

Abstract:

We investigate recombination mechanisms in graphene-bonded perovskite solar cells using a numerical model in which doped-graphene structures are employed as the anode/cathode bonding semiconductor. The dark- and light-current density-voltage curves are investigated by regression analysis. Loss mechanisms such as the back contact barrier and deep surface defects in the absorber layer are determined by fitting the simulated cell performance to the measurements using the differential evolution global optimization algorithm. The cell's performance in the connection process is characterized by J-V curves examined at different temperatures and by the open circuit voltage (Voc) under different light intensities as a function of temperature. Based on the proposed numerical model and the identified loss mechanisms, our approach can be used to further improve the efficiency of the solar cell. Given the high demand for alternative energy sources, solar cells are a good option for energy generation via the photovoltaic effect.

Keywords: numerical model, recombination mechanism, graphene, perovskite solar cell

Procedia PDF Downloads 62
2505 Identification and Characterization of Genes Expressed in Diseased Condition Silkworms (Bombyx mori): A Systematic Investigation

Authors: Siddharth Soni, Gourav Kumar Pandey, Sneha Kumari, Dev Mani Pandey, Koel Mukherjee

Abstract:

The silkworm Bombyx mori is a commercially important insect, but a major roadblock in silk production is silkworm disease. Flacherie is one such disease; it affects the midgut of 4th and 5th instar larvae, eventually making them lethargic, causing them to stop feeding, and finally resulting in death. The disease results from bacterial or viral infection, or in some instances a combination of both. The present study aims to identify genes expressed in the flacherie condition and to study their expression levels. Total RNA was isolated from infected larvae at their most probable infectious instar, and cDNA was synthesized using reverse transcriptase PCR (RT-PCR). This cDNA was then used to amplify disease-related genes, whose expression levels were checked by quantitative PCR (qPCR) using the double delta Ct method. The genes for the Cry toxin receptors APN and BtR-175 and for the ROS mediator Dual Oxidase were among those overexpressed. Interestingly, the genes for C-type lectins, a class of pattern recognition receptors (PRRs), were found to be downregulated. The results characterize genes whose strong expression distinguishes the diseased condition in the silkworm midgut, thereby aiding knowledge in the field of inhibitor design research.
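The double delta Ct method cited above converts qPCR cycle-threshold (Ct) values into a relative expression fold change of 2^(-ΔΔCt). A minimal sketch follows; the Ct values in the test are hypothetical, not the study's measurements.

```python
def fold_change_ddct(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the double delta Ct (2^-ΔΔCt) method.

    Inputs are Ct values for the target gene and a reference
    (housekeeping) gene, in the test (e.g. diseased) and control
    (e.g. healthy) samples. Lower Ct means more template.
    """
    d_ct_test = ct_target_test - ct_ref_test   # normalize test sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # normalize control sample
    dd_ct = d_ct_test - d_ct_ctrl
    return 2.0 ** (-dd_ct)
```

A fold change above 1 indicates overexpression in the test sample (as reported for APN, BtR-175, and Dual Oxidase); below 1, downregulation (as for the C-type lectins).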

Keywords: Bombyx mori, flacherie disease, inhibitor designing, up and down regulation

Procedia PDF Downloads 281
2504 Use of Particle Swarm Optimization for Loss Minimization of Vector-Controlled Induction Motors

Authors: V. Rashtchi, H. Bizhani, F. R. Tatari

Abstract:

This paper presents a new online loss minimization method for an induction motor drive. Among the many loss minimization algorithms (LMAs) for induction motors, particle swarm optimization (PSO) has the advantages of fast response and high accuracy. However, the performance of PSO and other optimization algorithms depends on the accuracy of the modeling of the motor drive and its losses. In developing the loss model, there is always a trade-off between accuracy and complexity. This paper presents a new online optimization to determine the optimum flux level for efficiency optimization of the vector-controlled induction motor drive. An induction motor (IM) model in d-q coordinates is referenced to the rotor magnetizing current. This transformation results in no leakage inductance on the rotor side, so the decomposition into d-q components in the steady-state motor model can be utilized in deriving the motor loss model. The suggested algorithm is simple to implement.
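As an illustration of the optimizer itself, the following is a minimal one-dimensional PSO, not the paper's drive controller; the objective `f` stands in for a hypothetical loss-versus-flux model.

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=100, seed=1):
    """Minimal 1-D particle swarm optimization: each particle is pulled
    toward its personal best and the swarm's global best position."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = x[:]
    pval = [f(p) for p in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social coefficients
    for _ in range(iters):
        for i in range(n_particles):
            v[i] = (w * v[i]
                    + c1 * rng.random() * (pbest[i] - x[i])
                    + c2 * rng.random() * (gbest - x[i]))
            x[i] = min(hi, max(lo, x[i] + v[i]))  # clamp to the search range
            fx = f(x[i])
            if fx < pval[i]:
                pbest[i], pval[i] = x[i], fx
                if fx < gval:
                    gbest, gval = x[i], fx
    return gbest, gval
```

In the paper's setting, `f` would be the derived steady-state loss model evaluated at a candidate flux level.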

Keywords: induction machine, loss minimization, magnetizing current, particle swarm optimization

Procedia PDF Downloads 629
2503 Spectral Anomaly Detection and Clustering in Radiological Search

Authors: Thomas L. McCullough, John D. Hague, Marylesa M. Howard, Matthew K. Kiser, Michael A. Mazur, Lance K. McLean, Johanna L. Turk

Abstract:

Radiological search and mapping depends on the successful recognition of anomalies in large data sets which contain varied and dynamic backgrounds. We present a new algorithmic approach for real-time anomaly detection which is resistant to common detector imperfections, avoids the limitations of a source template library and provides immediate, and easily interpretable, user feedback. This algorithm is based on a continuous wavelet transform for variance reduction and evaluates the deviation between a foreground measurement and a local background expectation using methods from linear algebra. We also present a technique for recognizing and visualizing spectrally similar clusters of data. This technique uses Laplacian Eigenmap Manifold Learning to perform dimensional reduction which preserves the geometric "closeness" of the data while maintaining sensitivity to outlying data. We illustrate the utility of both techniques on real-world data sets.
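The Laplacian Eigenmap Manifold Learning step mentioned above can be sketched with plain numpy: build a Gaussian affinity graph, form the normalized graph Laplacian, and embed the data with its bottom nontrivial eigenvectors. This is a generic illustration, not the authors' code.

```python
import numpy as np

def laplacian_eigenmap(X, n_components=2, sigma=1.0):
    """Embed points X (n_samples, n_features) via a Laplacian eigenmap."""
    # pairwise squared distances and Gaussian affinities
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}
    D = W.sum(1)
    Dm = 1.0 / np.sqrt(D)
    L = np.eye(len(X)) - (Dm[:, None] * W) * Dm[None, :]
    vals, vecs = np.linalg.eigh(L)  # ascending eigenvalues
    # skip the trivial bottom eigenvector (eigenvalue ~0)
    return vecs[:, 1:1 + n_components]
```

Points that are spectrally similar end up close in the embedding, so well-separated clusters split along the first embedding coordinate, which is the "closeness-preserving" behavior the abstract describes.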

Keywords: radiological search, radiological mapping, radioactivity, radiation protection

Procedia PDF Downloads 688
2502 Uncertainty Estimation in Neural Networks through Transfer Learning

Authors: Ashish James, Anusha James

Abstract:

The impressive predictive performance of deep learning techniques on a wide range of tasks has led to their widespread use. Estimating the confidence of these predictions is paramount for improving the safety and reliability of such systems. However, the uncertainty estimates provided by neural networks (NNs) tend to be overconfident and unreasonable. Ensembles of NNs typically produce good predictions, but their uncertainty estimates tend to be inconsistent. Inspired by these observations, this paper presents a framework that can quantitatively estimate uncertainties by leveraging advances in transfer learning, with only slight modifications to existing training pipelines. The algorithm is developed with the intention of deployment in real-world problems that already boast good predictive performance, by reusing those pretrained models. The idea is to capture the behavior of the NN trained for the base task by augmenting it with uncertainty estimates from a supplementary network. A series of experiments with known and unknown distributions shows that the proposed approach produces well-calibrated uncertainty estimates with high-quality predictions.

Keywords: uncertainty estimation, neural networks, transfer learning, regression

Procedia PDF Downloads 128
2501 The Acceptable Roles of Artificial Intelligence in the Judicial Reasoning Process

Authors: Sonia Anand Knowlton

Abstract:

There are some cases where we as a society feel deeply uncomfortable with the use of Artificial Intelligence (AI) tools in the judicial decision-making process, and justifiably so. A perfect example is COMPAS, an algorithmic model that predicts recidivism rates of offenders to assist in the determination of their bail conditions. COMPAS turned out to be extremely racist: it massively overpredicted recidivism rates of Black offenders and underpredicted recidivism rates of white offenders. At the same time, there are certain uses of AI in the judicial decision-making process that many would feel more comfortable with and even support. Take, for example, a “super-breathalyzer,” an (albeit imaginary) tool that uses AI to deliver highly detailed information about the subject of the breathalyzer test to the legal decision-makers analyzing their drunk-driving case. This article evaluates the point at which a judge’s use of AI tools begins to undermine the public’s trust in the administration of justice. It argues that the answer to this question depends on whether the AI tool is in a role in which it must perform a moral evaluation of a human being.

Keywords: artificial intelligence, judicial reasoning, morality, technology, algorithm

Procedia PDF Downloads 75
2500 Spherical Harmonic Based Monostatic Anisotropic Point Scatterer Model for RADAR Applications

Authors: Eric Huang, Coleman DeLude, Justin Romberg, Saibal Mukhopadhyay, Madhavan Swaminathan

Abstract:

High performance computing (HPC) based emulators can be used to model the scattering from multiple stationary and moving targets for RADAR applications. These emulators rely on the RADAR Cross Section (RCS) of the targets being available for complex scenarios. Representing the RCS using tables generated from electromagnetic (EM) simulations is oftentimes cumbersome, leading to large storage requirements. This paper proposes a spherical-harmonic-based anisotropic scatterer model to represent the RCS of complex targets. The problem of finding the locations and reflection profiles of all scatterers can be formulated as a linear least-squares problem with a special sparsity constraint, which this paper solves using a modified Orthogonal Matching Pursuit algorithm. The results show that the spherical harmonic based scatterer model can effectively represent the RCS data of complex targets.
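Orthogonal Matching Pursuit greedily selects the dictionary column most correlated with the current residual and then re-fits all selected coefficients by least squares. Below is a plain numpy sketch of the unmodified algorithm; the paper's modification is not reproduced here.

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x with y ~= A @ x by Orthogonal Matching Pursuit."""
    residual, support = y.astype(float), []
    coef = np.zeros(0)
    for _ in range(k):
        # column most correlated with the residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares re-fit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

For a signal that truly is k-sparse in an incoherent dictionary, the support is recovered exactly and the residual vanishes.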

Keywords: RADAR, RCS, high performance computing, point scatterer model

Procedia PDF Downloads 185
2499 An Improved Mesh Deformation Method Based on Radial Basis Function

Authors: Xuan Zhou, Litian Zhang, Shuixiang Li

Abstract:

Mesh deformation using the radial basis function interpolation method has been demonstrated to produce quality meshes at relatively little computational cost with a concise algorithm. However, it still suffers from limited deformation capability, especially under large deformations. In this paper, a pre-displacement improvement is proposed to address the problem that illegal meshes often appear near moving inner boundaries, owing to the large relative displacement of the nodes near those boundaries. In this improvement, nodes near the inner boundaries are first associated with nearby boundary nodes, and a pre-displacement based on the displacements of the associated boundary nodes is added to the nodes near the boundaries, in order to bring their displacement closer to the boundary deformation and improve the deformation capability. Several 2D and 3D numerical simulation cases have shown that the pre-displacement improvement for the radial basis function (RBF) method significantly improves the mesh quality near inner boundaries and the deformation capability, with little increase in computational burden.
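The RBF interpolation at the core of the method solves for weights that reproduce the boundary displacements exactly, then evaluates the interpolant at the interior nodes. Below is a minimal numpy sketch with a Gaussian basis; the paper's choice of basis function and the pre-displacement step are not reproduced here.

```python
import numpy as np

def rbf_deform(boundary_pts, boundary_disp, mesh_pts, eps=1.0):
    """Propagate boundary displacements to mesh nodes by RBF interpolation.

    boundary_pts: (Nb, dim) control nodes; boundary_disp: (Nb, dim)
    prescribed displacements; mesh_pts: (Nm, dim) nodes to move.
    """
    def phi(r):
        return np.exp(-(eps * r) ** 2)  # Gaussian radial basis

    # solve phi(||b_i - b_j||) @ weights = boundary_disp for the RBF weights
    d_bb = np.linalg.norm(boundary_pts[:, None] - boundary_pts[None, :], axis=-1)
    weights = np.linalg.solve(phi(d_bb), boundary_disp)
    # evaluate the interpolant at every mesh node and displace it
    d_mb = np.linalg.norm(mesh_pts[:, None] - boundary_pts[None, :], axis=-1)
    return mesh_pts + phi(d_mb) @ weights
```

Because the weights are obtained by solving the interpolation system exactly, boundary nodes passed in as mesh nodes land exactly on their prescribed displaced positions; interior nodes follow smoothly with decaying influence.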

Keywords: mesh deformation, mesh quality, background mesh, radial basis function

Procedia PDF Downloads 360
2498 A Reliable Multi-Type Vehicle Classification System

Authors: Ghada S. Moussa

Abstract:

Vehicle classification is an important task in traffic surveillance and intelligent transportation systems. Classification of vehicle images faces several problems, such as high intra-class vehicle variation, occlusion, shadow, and illumination changes. These problems and others must be considered to develop a reliable vehicle classification system. In this study, a reliable multi-type vehicle classification system based on the Bag-of-Words (BoW) paradigm is developed. Our proposed system used and compared four well-known classifiers (Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), k-Nearest Neighbour (KNN), and Decision Tree) to classify vehicles into four categories: motorcycles, small, medium, and large. Experiments on a large dataset show that our approach is efficient and reliable, classifying vehicles with an accuracy of 95.7%. The SVM outperforms the other classification algorithms in terms of both accuracy and robustness, alongside a considerable reduction in execution time. An innovative aspect of the developed system is that it can serve as a framework for many vehicle classification systems.

Keywords: vehicle classification, bag-of-words technique, SVM classifier, LDA classifier, KNN classifier, decision tree classifier, SIFT algorithm

Procedia PDF Downloads 353
2497 Optimal Load Control Strategy in the Presence of Stochastically Dependent Renewable Energy Sources

Authors: Mahmoud M. Othman, Almoataz Y. Abdelaziz, Yasser G. Hegazy

Abstract:

This paper presents a load control strategy based on a modification of the Big Bang Big Crunch optimization method. The proposed strategy aims to determine the optimal load to be controlled and the corresponding time of control in order to minimize the energy purchased from the substation. The presented strategy helps the distribution network operator rely on renewable energy sources in supplying the system demand. The renewable energy sources used in the presented study are modeled using the diagonal band copula method and sequential Monte Carlo simulation in order to accurately consider the multivariate stochastic dependence among wind power, photovoltaic power, and the system demand. The proposed algorithms are implemented in the MATLAB environment and tested on the IEEE 37-node feeder. Several case studies are carried out, and the subsequent discussions show the effectiveness of the proposed algorithm.
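The Big Bang Big Crunch method alternates a "crunch" to a fitness-weighted center of mass with a "bang" that re-scatters candidates around it with a shrinking radius. A one-dimensional sketch of the unmodified method follows (the paper's modification and MATLAB implementation are not reproduced; the objective is a placeholder for the energy-purchase cost).

```python
import random

def big_bang_big_crunch(f, lo, hi, n=30, iters=50, seed=0):
    """Minimize f on [lo, hi] with the Big Bang Big Crunch heuristic."""
    rng = random.Random(seed)
    pts = [rng.uniform(lo, hi) for _ in range(n)]  # initial Big Bang
    best, best_val = None, float("inf")
    for k in range(1, iters + 1):
        vals = [f(p) for p in pts]
        for p, v in zip(pts, vals):
            if v < best_val:
                best, best_val = p, v
        # Big Crunch: center of mass weighted by inverse fitness
        w = [1.0 / (v + 1e-12) for v in vals]
        center = sum(p * wi for p, wi in zip(pts, w)) / sum(w)
        # Big Bang: re-scatter around the center, radius shrinking with k
        r = (hi - lo) / k
        pts = [min(hi, max(lo, center + rng.gauss(0, 1) * r))
               for _ in range(n)]
    return best, best_val
```

In the paper's setting, a candidate point would encode the controlled load and its control time, and `f` would evaluate the resulting substation energy purchase over the stochastic renewable scenarios.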

Keywords: big bang big crunch, distributed generation, load control, optimization, planning

Procedia PDF Downloads 338
2496 Identification and Classification of Gliadin Genes in Iranian Diploid Wheat

Authors: Jafar Ahmadi, Alireza Pour-Aboughadareh

Abstract:

Wheat is the first and most important grain in the world, and its baking quality is due to its glutenin and gliadin content. Wheat seed proteins are divided into four groups according to solubility. Two groups, the albumins and globulins, dissolve in water and salt solutions and possess metabolic activities. The two other groups are inactive and insoluble and comprise the glutelins (glutenins) and the prolamins (gliadins). Gliadins are major components of the storage proteins in wheat endosperm. Gliadin proteins are separated into three groups based on electrophoretic mobility: α/β-gliadins, γ-gliadins, and ω-gliadins. Little information is available about gliadin genes in Iranian wild relatives of wheat. Thus, the aim of this study was to evaluate wheat wild relatives collected from different origins in the Zagros Mountains of Iran for gliadin-coding genes using specific primers. Forty accessions of Triticum boeoticum and Triticum urartu were selected. For each accession, genomic DNA was extracted, and PCRs were performed in total volumes of 15 μl. The amplification products were separated on 1.5% agarose gels. In the results, three allelic variants were detected at the Gli-2A locus by the Gli-2As primer pairs, with PCR product sizes of 210, 490, and 700 bp. Only five (13%) and two (5%) accessions produced the 700 and 490 bp fragments, respectively, when their DNA was amplified with the Gli.As.2 primer pairs. However, 37 of the 40 accessions (93%) carried the 210 bp allele, and three accessions (8%) did not yield any product for this marker. Therefore, this germplasm could be used as a rich gene pool to broaden the genetic base of bread wheat.

Keywords: diploid wheat, gliadin, Triticum boeoticum, Triticum urartu

Procedia PDF Downloads 244
2495 Isolation and Identification of Novel Escherichia Marmotae Spp.: Their Enzymatic Biodegradation of Zearalenone and Deep-oxidation of Deoxynivalenol

Authors: Bilal Murtaza, Xiaoyu Li, Liming Dong, Muhammad Kashif Saleemi, Gen Li, Bowen Jin, Lili Wang, Yongping Xu

Abstract:

Fusarium spp. produce numerous mycotoxins, such as zearalenone (ZEN), deoxynivalenol (DON), and its acetylated derivatives, 3-acetyl-deoxynivalenol (3-ADON) and 15-acetyl-deoxynivalenol (15-ADON). In a co-culture system, the soil-derived Escherichia marmotae strain degrades ZEN and DON into 3-keto-DON and DOM-1 via enzymatic deep-oxidation. When pure mycotoxins were exposed to Escherichia marmotae in culture flasks, degradation and detoxification were also achieved. The effects of DON and ZEN concentrations, ambient pH, incubation temperature, bacterial concentration, and acid treatment on degradation were all evaluated. The results of the ELISA and high-performance liquid chromatography-electrospray ionization-high resolution mass spectrometry (HPLC-ESI-HRMS) tests demonstrated that the concentration of mycotoxins exposed to Escherichia marmotae was significantly lower than in the control. ZEN levels were reduced by 43.9%, and zearalenone sulfate (m/z 397.1052, C18H21O8S) was identified as a derivative of ZEN converted by the microbes into a less toxic molecule. Furthermore, Escherichia marmotae metabolized 35.10% of DON into less toxic derivatives (DOM-1 at m/z 281, [DON - O]+, and 3-keto-DON at m/z 295, [DON - 2H]+). These results show that Escherichia marmotae can reduce Fusarium mycotoxin production, degrade pure mycotoxins, and convert them into less harmful compounds, opening up new possibilities for study and innovation in mycotoxin detoxification.

Keywords: mycotoxins, zearalenone, deoxynivalenol, bacterial degradation

Procedia PDF Downloads 93
2494 PEA Design of the Direct Control for Training Motor Drives

Authors: Abdulatif Abdulsalam Mohamed Shaban

Abstract:

This paper surveys the state of the art of Procedure Entry Array (PEA) design with a focus on control system applications. It begins with an overview of PEA technology development, followed by a survey of design methodologies and the use of programmable description languages and system-level design tools. These allow a practical approach based on a unified model for complete engineering electronics systems. Three main design rules are implemented in the system: algorithm-based fine-tuning, modularity, and the matching of the control algorithm to the architectural constraints. An overview of the contributions and limits of PEAs is also given, followed by a short survey of PEA-based intelligent controllers for recent engineering systems. Finally, two complete and timely case studies are presented to illustrate the benefits of a PEA implementation when using the proposed system modelling and design approach. These consist of the direct control for training motor drives and the control of a diesel-driven stand-alone generator with the help of logical design.

Keywords: control (DC), engineering electronics systems, training motor drives, procedure entry array

Procedia PDF Downloads 511
2493 A Transformer-Based Question Answering Framework for Software Contract Risk Assessment

Authors: Qisheng Hu, Jianglei Han, Yue Yang, My Hoa Ha

Abstract:

When a company is considering purchasing software for commercial use, contract risk assessment is critical to identify risks and mitigate potential adverse business impacts, e.g., security, financial, and regulatory risks. Contract risk assessment requires reviewers with specialized knowledge and the time to evaluate the legal documents manually. Specifically, validating contracts for a software vendor requires the following steps: manual screening, interpreting legal documents, and extracting risk-prone segments. To automate this process, we propose a framework to assist risk identification in legal contract documents, leveraging pre-trained deep learning models and natural language processing techniques. Given a set of pre-defined risk evaluation questions, our framework utilizes pre-trained transformer-based question-answering models to identify risk-prone sections in a contract. The question-answering model encodes the concatenated question-contract text and predicts the start and end positions for clause extraction. Due to the limited labelled dataset available for training, we leveraged transfer learning by fine-tuning the models on the CUAD dataset. On a dataset comprising 287 contract documents and 2000 labelled samples, our best model achieved an F1 score of 0.687.
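The start/end prediction step the abstract describes is typically decoded by picking the span whose start and end logits have the highest combined score, with the start not after the end. A minimal pure-Python sketch of that decoding step, over hypothetical token logits (not the authors' fine-tuned model or data):

```python
def best_span(start_logits, end_logits, max_len=None):
    """Pick the (start, end) pair maximising start_logit + end_logit,
    subject to start <= end and an optional maximum span length.
    This is the standard decoding step for extractive QA models."""
    best, best_score = (0, 0), float("-inf")
    for s, s_logit in enumerate(start_logits):
        for e in range(s, len(end_logits)):
            if max_len is not None and e - s + 1 > max_len:
                break
            score = s_logit + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

# Hypothetical token-level logits for a short contract passage.
tokens = ["The", "licensee", "shall", "indemnify", "the", "licensor", "."]
start = [0.1, 0.2, 2.5, 0.3, 0.1, 0.2, 0.0]
end = [0.0, 0.1, 0.2, 0.4, 0.3, 2.8, 0.1]
s, e = best_span(start, end)
clause = " ".join(tokens[s:e + 1])  # extracted risk-prone clause
```

In practice the logits would come from the fine-tuned transformer over the concatenated question-contract text; the decoding itself is model-agnostic.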

Keywords: contract risk assessment, NLP, transfer learning, question answering

Procedia PDF Downloads 128
2492 Sensor Fault-Tolerant Model Predictive Control for Linear Parameter Varying Systems

Authors: Yushuai Wang, Feng Xu, Junbo Tan, Xueqian Wang, Bin Liang

Abstract:

In this paper, a sensor fault-tolerant control (FTC) scheme using robust model predictive control (RMPC) and set-theoretic fault detection and isolation (FDI) is extended to linear parameter varying (LPV) systems. First, a group of set-valued observers is designed for passive fault detection (FD), and the observer gains are obtained by minimizing the size of the invariant set of the state estimation-error dynamics. Second, an input set for fault isolation (FI) is designed offline through set theory for actively isolating faults after FD. Third, an RMPC controller based on state estimation for LPV systems is designed to control the system in the presence of disturbance and measurement noise and to tolerate faults. In addition, an FTC algorithm is proposed to keep the plant operating in the corresponding mode when a fault occurs. Finally, a numerical example is used to show the effectiveness of the proposed results.
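In set-theoretic passive FD, the test at each step is simply whether the residual stays inside the invariant set computed for the healthy mode. A minimal sketch with the set outer-approximated by an interval box (the bounds below are hypothetical, not the paper's computed invariant set):

```python
def in_interval(r, lo, hi):
    # Componentwise membership test for a box [lo, hi].
    return all(l <= ri <= h for ri, l, h in zip(r, lo, hi))

def detect_fault(residual, lo, hi):
    """Passive set-based FD: flag a fault when the residual leaves the
    healthy-mode set (here a box outer-approximating the invariant set
    of the state estimation-error dynamics)."""
    return not in_interval(residual, lo, hi)

# Hypothetical healthy-mode residual bounds for a 2-output system.
lo, hi = [-0.5, -0.3], [0.5, 0.3]
no_alarm = detect_fault([0.1, -0.2], lo, hi)  # consistent with healthy set
alarm = detect_fault([0.9, 0.0], lo, hi)      # outside the set: fault flagged
```

The paper's scheme additionally isolates the fault actively after detection; this sketch covers only the detection test.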

Keywords: fault detection, linear parameter varying, model predictive control, set theory

Procedia PDF Downloads 246
2491 First Experimental Evidence on Feasibility of Molecular Magnetic Particle Imaging of Tumor Marker Alpha-1-Fetoprotein Using Antibody Conjugated Nanoparticles

Authors: Kolja Them, Priyal Chikhaliwala, Sudeshna Chandra

Abstract:

Purpose: The purpose of this work is to examine possibilities for the noninvasive imaging and identification of tumor markers for cancer diagnosis. The proposed method uses antibody-conjugated iron oxide nanoparticles and multicolor Magnetic Particle Imaging (mMPI). The method has the potential for radiation-free, real-time estimation of local tumor marker concentrations in vivo. In this study, the method is applied to human Alpha-1-Fetoprotein (AFP). Materials and Methods: AFP antibody-conjugated Dendrimer-Fe3O4 nanoparticles were used as the tracer material. The nanoparticle bioconjugates were then incubated with bovine serum albumin (BSA) to block any possible nonspecific binding sites. Parts of the resulting solution were then incubated with AFP antigen. MPI measurements were performed using a preclinical MPI scanner (Bruker Biospin MRI GmbH), and the multicolor method was used for image reconstruction. Results: In multicolor MPI images, the nanoparticles incubated only with BSA were clearly distinguished from nanoparticles incubated with BSA and AFP antigens. Conclusion: Tomographic imaging of the human tumor marker Alpha-1-Fetoprotein is possible using AFP antibody-conjugated iron oxide nanoparticles in the presence of BSA. This opens interesting perspectives for cancer diagnosis.

Keywords: noninvasive imaging, tumor antigens, antibody conjugated iron oxide nanoparticles, multicolor magnetic particle imaging, cancer diagnosis

Procedia PDF Downloads 301
2490 Qualitative and Quantitative Characterization of Generated Waste in Nouri Petrochemical Complex, Assaluyeh, Iran

Authors: L. Heidari, M. Jalili Ghazizade

Abstract:

In recent years, different petrochemical complexes have been established to produce aromatic compounds. Among them, Nouri Petrochemical Complex (NPC), located in the south of Iran, is the largest producer of aromatic raw materials in the world. Environmental concerns have been raised in this region due to the generation of different types of solid waste in the aromatics production process, and consequently, industrial waste characterization has been thoroughly considered. The aim of this study is the qualitative and quantitative characterization of the industrial waste generated in the aromatics production process and the determination of the best method for its management. For this purpose, all industrial waste generated during the production process was inventoried using a checklist. Four main industrial wastes were identified: spent industrial soil, spent catalyst, spent molecular sieves, and spent N-formyl morpholine (NFM) solvent. The amounts of heavy metals and organic compounds in these wastes were further measured in order to assess the nature and toxicity of these potentially hazardous materials. The industrial wastes were then classified based on the laboratory analysis results as well as different international hazardous waste identification lists, such as those of the EPA, UNEP, and the Basel Convention. Finally, the best method of waste disposal was selected based on environmental, economic, and technical aspects.

Keywords: aromatic compounds, industrial soil, molecular sieve, normal formyl morpholine solvent

Procedia PDF Downloads 229
2489 Effect of the Drawbar Force on the Dynamic Characteristics of a Spindle-Tool Holder System

Authors: Jui-Pui Hung, Yu-Sheng Lai, Tzuo-Liang Luo, Kung-Da Wu, Yun-Ji Zhan

Abstract:

This study investigates the influence of the tool holder interface stiffness on the dynamic characteristics of a spindle tool system. The interface stiffness is produced by the drawbar force on the tool holder, which in turn affects the spindle dynamics. In order to assess the influence of interface stiffness on the vibration characteristics of the spindle unit, we first created a three-dimensional finite element model of a high speed spindle system integrated with the tool holder. The key points in the creation of the FEM model are the modeling of the rolling interface within the angular contact bearings and of the tool holder interface. The former can be simulated by introducing a series of spring elements between the inner and outer rings, with contact stiffness calculated according to Hertz contact theory and the preload applied on the bearings. The interface stiffness of the tool holder was identified through experimental measurement and finite element modal analysis. Current results show that the dynamic stiffness is greatly influenced by the tool holder system. In addition, the modal damping, static stiffness, and dynamic stiffness of the spindle tool system are largely determined by the interface stiffness of the tool holder, which in turn depends on the drawbar force applied to it. Overall, this study demonstrates that identification of the interface characteristics of the spindle tool holder is of great importance for the refinement of the spindle tooling system to achieve optimum machining performance.
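For a Hertzian point contact the load-deflection law is F = K d^(3/2), so the spring elements representing the rolling interface can be given the linearized stiffness at the preload, k = dF/dd = 1.5 K^(2/3) F^(1/3). A small sketch of that preload-dependent stiffness (the Hertz constant K below is a hypothetical value, not from the paper):

```python
def hertz_contact_stiffness(K, preload):
    """Linearised stiffness of a Hertzian point contact F = K * d**1.5,
    evaluated at the preload F: k = dF/dd = 1.5 * K**(2/3) * F**(1/3)."""
    return 1.5 * K ** (2.0 / 3.0) * preload ** (1.0 / 3.0)

# Hypothetical Hertz constant (N/m^1.5) and two preload levels (N).
K = 8.0e9
k_low = hertz_contact_stiffness(K, 100.0)
k_high = hertz_contact_stiffness(K, 400.0)
```

The cube-root dependence on preload is why the drawbar force, which sets the interface preload, shifts the stiffness and hence the modal behaviour of the assembled spindle tool system.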

Keywords: dynamic stiffness, spindle-tool holder, interface stiffness, drawbar force

Procedia PDF Downloads 393
2488 Performance of Neural Networks vs. Radial Basis Functions When Forming a Metamodel for Residential Buildings

Authors: Philip Symonds, Jon Taylor, Zaid Chalabi, Michael Davies

Abstract:

With the world climate projected to warm and major cities in developing countries becoming increasingly populated and polluted, governments are tasked with addressing overheating and poor air quality in residential buildings. This paper presents the development of an adaptable model of these risks. Simulations are performed using the EnergyPlus building physics software. An accurate metamodel is formed by randomly sampling building input parameters and training on the outputs of EnergyPlus simulations. Metamodels vastly reduce the computation time required when performing optimisation and sensitivity analyses. Neural Networks (NNs) are compared to a Radial Basis Function (RBF) algorithm for forming the metamodel. These techniques were implemented using the PyBrain and scikit-learn python libraries, respectively. NNs are shown to perform around 15% better than RBFs when estimating the overheating and air pollution metrics modelled by EnergyPlus.
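The RBF metamodel idea is to place basis functions at the sampled input points and solve a linear system so the surrogate reproduces the simulator outputs exactly at those samples. A self-contained one-dimensional sketch with a Gaussian kernel and a toy "simulator" (the kernel width and sample points are illustrative assumptions, not the paper's EnergyPlus setup):

```python
import math

def rbf_fit(xs, ys, eps=1.0):
    """Fit a Gaussian RBF interpolant through (xs, ys): a minimal 1-D
    version of an RBF metamodel, with centres at the training points."""
    n = len(xs)
    # Augmented interpolation matrix: Phi[i][j] = exp(-(eps*(xi-xj))^2).
    A = [[math.exp(-(eps * (xs[i] - xs[j])) ** 2) for j in range(n)] + [ys[i]]
         for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    w = [0.0] * n
    for i in reversed(range(n)):
        w[i] = (A[i][n] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return lambda x: sum(wi * math.exp(-(eps * (x - xi)) ** 2)
                         for wi, xi in zip(w, xs))

# Toy metamodel of an "expensive" simulator y = x^2 sampled at 5 points.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
model = rbf_fit(xs, [x * x for x in xs])
```

Once fitted, the surrogate is evaluated in microseconds, which is what makes metamodel-based optimisation and sensitivity analysis tractable compared with rerunning the full simulation.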

Keywords: neural networks, radial basis functions, metamodelling, python machine learning libraries

Procedia PDF Downloads 441
2487 Sparse-View CT Reconstruction Based on Nonconvex L1 − L2 Regularizations

Authors: Ali Pour Yazdanpanah, Farideh Foroozandeh Shahraki, Emma Regentova

Abstract:

Reconstruction from sparse-view projections is one of the important problems in computed tomography (CT), limited by the availability or feasibility of obtaining a large number of projections. Traditionally, convex regularizers have been exploited to improve the reconstruction quality in sparse-view CT, and the convex constraint in those problems leads to an easy optimization process. However, convex regularizers often result in a biased approximation and inaccurate reconstruction in CT problems. Here, we present a nonconvex, Lipschitz continuous, and non-smooth regularization model. The CT reconstruction is formulated as a nonconvex constrained L1 − L2 minimization problem and solved through a difference-of-convex algorithm and the alternating direction method of multipliers, which generates better results than L0 or L1 regularizers in CT reconstruction. We compare our method with previously reported high-performance methods that use convex regularizers such as TV, wavelet, curvelet, and curvelet+TV (CTV) on test phantom images. The results show that there are benefits to using the nonconvex regularizer in sparse-view CT reconstruction.
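The difference-of-convex idea splits the L1 − L2 penalty into a convex part (L1) minus a convex part (L2), linearizes the subtracted term at the current iterate, and solves the remaining convex problem. A toy denoising sketch of that outer loop (the forward operator here is the identity, not the paper's CT projection operator, so each convex subproblem reduces to closed-form soft-thresholding):

```python
import math

def soft_threshold(z, t):
    # Proximal operator of t*||x||_1 (component-wise soft-thresholding).
    return [math.copysign(max(abs(zi) - t, 0.0), zi) for zi in z]

def l1_minus_l2_denoise(b, lam, iters=50):
    """DCA sketch for min_x 0.5*||x - b||^2 + lam*(||x||_1 - ||x||_2).

    Each step linearises the concave term -lam*||x||_2 at the current
    iterate (subgradient v = lam*x/||x||) and solves the remaining
    convex L1 subproblem exactly by soft-thresholding b + v.
    """
    x = list(b)
    for _ in range(iters):
        norm = math.sqrt(sum(xi * xi for xi in x))
        v = [lam * xi / norm for xi in x] if norm > 0 else [0.0] * len(x)
        x = soft_threshold([bi + vi for bi, vi in zip(b, v)], lam)
    return x

# Sparse signal corrupted by small entries: L1 - L2 keeps the large spike
# at (nearly) full amplitude, illustrating the reduced bias vs. plain L1.
x_hat = l1_minus_l2_denoise([3.0, 0.05, -0.04], lam=0.5)
```

Plain L1 with the same lam would shrink the large entry to 2.5; the L1 − L2 correction term restores it, which is the bias reduction the abstract refers to.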

Keywords: computed tomography, non-convex, sparse-view reconstruction, L1-L2 minimization, difference of convex functions

Procedia PDF Downloads 309
2486 Development and Validation for Center-Based Learning in Teaching Science

Authors: Julie Berame

Abstract:

The study found that, of the eight (8) lessons in Science Six that were validated, lessons 1-3 received a descriptive rating of very satisfactory and lessons 4-8 a descriptive rating of outstanding, based on the content analysis of the prepared CBL lesson plans. The evaluation of the lesson plans focused on three main features: the statements of the lesson objectives, the lesson content, and organization and effectiveness. The study used a developmental research procedure comprising three phases. The development phase consisted of determining the learning unit and lesson plans, creating the table of specifications and exercises/quizzes, and revising the materials. The evaluation phase consisted of developing the experts' assessment checklist, presenting the checklist to the adviser, incorporating comments and suggestions, and the final validation of the materials. The try-out phase consisted of identifying the subjects, trying out the materials using the CBL strategy, administering a science attitude questionnaire, and statistical analysis to obtain the data. The findings revealed that the relevance and usability of CBL lessons 1 and 2, in terms of lesson objectives, lesson content, and organization and effectiveness, were rated very satisfactory (4.4), while lessons 3-8 were rated outstanding (4.7); lessons 1-8 received a grand rating of outstanding (4.6). Additionally, results showed that the CBL strategy helped foster a positive attitude among students and achieved effectiveness in psychomotor learning objectives.

Keywords: development, validation, center-based learning, science

Procedia PDF Downloads 233