Search results for: gaussian process classification model with multiclass
29170 Methodology for Temporary Analysis of Production and Logistic Systems on the Basis of Distance Data
Authors: M. Mueller, M. Kuehn, M. Voelker
Abstract:
In small and medium-sized enterprises (SMEs), the challenge is to create a well-grounded and reliable basis for process analysis, optimization and planning due to a lack of data. SMEs have limited access to methods with which they can effectively and efficiently analyse processes and identify cause-and-effect relationships in order to generate the necessary database and derive optimization potential from it. The implementation of digitalization within the framework of Industry 4.0 thus becomes a particular necessity for SMEs. For these reasons, this abstract presents an analysis methodology with the objective of developing an SME-appropriate methodology for efficient, temporarily feasible data collection and evaluation in flexible production and logistics systems as a basis for process analysis and optimization. The overall methodology focuses on retrospective, event-based tracing and analysis of material flow objects. The technological basis consists of Bluetooth Low Energy (BLE)-based transmitters, so-called beacons, and smart mobile devices (SMD), e.g. smartphones, as receivers, between which distance data can be measured and motion profiles derived. The distance is determined using the Received Signal Strength Indicator (RSSI), which is a measure of the signal field strength between transmitter and receiver. The focus is the development of a software-based methodology for the interpretation of relative movements of transmitters and receivers based on distance data. The main research concerns the selection and implementation of pattern recognition methods for automatic process recognition, as well as methods for the visualization of relative distance data. Because the database is already categorized by process type, classification methods (e.g. Support Vector Machine) from the field of supervised learning are used. The necessary data quality requires the selection of suitable methods, filters for smoothing the signal variations of the RSSI, the integration of methods for determining correction factors depending on possible sources of signal interference (columns, pallets), and the configuration of the technology used. The parameter settings on which the respective algorithms are based have a further significant influence on the result quality of the classification methods, correction models and methods used for visualizing the position profiles. Studies have already shown that the accuracy of classification algorithms can be improved by up to 30% through selected parameter variation; similar potential can be observed with parameter variation of the methods and filters for signal smoothing. Thus, there is increased interest in obtaining detailed results on the influence of parameter and factor combinations on data quality in this area. The overall methodology is realized with a modular software architecture consisting of independent modules for data acquisition, data preparation and data storage. The demonstrator for initialization and data acquisition is available as a mobile Java-based application. The data preparation, including the methods for signal smoothing, is Python-based, with the possibility to vary parameter settings and to store them in the database (SQLite).
The evaluation is divided into two separate software modules with database connection: one achieves an automated assignment of defined process classes to distance data using selected classification algorithms; the other provides visualization and reporting in the form of a graphical user interface (GUI).
Keywords: event-based tracing, machine learning, process classification, parameter settings, RSSI, signal smoothing
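A minimal sketch of the classification step described above, assuming synthetic RSSI traces: a moving-average filter smooths signal variations, simple per-window features are extracted, and a Support Vector Machine is cross-validated. The window size, feature set, process classes, and all parameter values are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def smooth_rssi(rssi, window=5):
    """Moving-average filter to damp RSSI signal variations."""
    kernel = np.ones(window) / window
    return np.convolve(rssi, kernel, mode="same")

def window_features(rssi, size=20):
    """Simple per-window features (mean, std, slope) over a distance trace."""
    feats = []
    for start in range(0, len(rssi) - size + 1, size):
        w = rssi[start:start + size]
        slope = np.polyfit(np.arange(size), w, 1)[0]
        feats.append([w.mean(), w.std(), slope])
    return np.array(feats)

rng = np.random.default_rng(0)
# Two hypothetical process classes: "approaching" vs. "stationary" transmitter.
approach = [-60 - 0.5 * np.arange(100) + rng.normal(0, 4, 100) for _ in range(30)]
stationary = [-75 + rng.normal(0, 4, 100) for _ in range(30)]

X = np.vstack([window_features(smooth_rssi(t)) for t in approach + stationary])
y = np.repeat([0, 1], X.shape[0] // 2)

clf = SVC(kernel="rbf", C=1.0)  # parameter settings strongly affect accuracy
print(cross_val_score(clf, X, y, cv=5).mean())
```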
Procedia PDF Downloads 129
29169 Process Integration: Mathematical Model for Contaminant Removal in Refinery Process Stream
Authors: Wasif Mughees, Malik Al-Ahmad
Abstract:
This research presents a graphical design analysis and mathematical programming technique to determine the possible water allocation distribution that minimizes water usage in process units. The study has mass and property integration at the core of its methodology. Tehran Oil Refinery is studied to implement the focused water pinch technology for regeneration, reuse and recycling of water streams. Process data are manipulated in terms of sources and sinks, which are given in terms of properties. Sources are the streams to be allocated; sinks are the units which can accept the sources. Suspended Solids (SS) is taken as a single contaminant. The model reduces the amount of freshwater from 340 to 275 m³/h (19.1%). A redesign and reallocation of water streams was constructed. The graphical technique and mathematical programming show consistent results, which confirms the mass-transfer dependency of the water streams.
Keywords: minimization, water pinch, process integration, pollution prevention
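The source-sink allocation behind water-pinch targeting can be written as a small linear program. The sketch below, using scipy, minimizes freshwater use subject to sink flow demands, suspended-solids load limits, and source availability; all stream data are invented placeholders, not the Tehran refinery values.

```python
import numpy as np
from scipy.optimize import linprog

F = np.array([50.0, 80.0])          # available source flows (t/h)
C = np.array([100.0, 50.0])         # source SS concentrations (ppm)
D = np.array([60.0, 70.0])          # sink flow demands (t/h)
Cmax = np.array([20.0, 80.0])       # max SS concentration each sink accepts

nS, nK = len(F), len(D)
# Variables: x[i, j] (source i -> sink j), then fw[j] (freshwater -> sink j).
n = nS * nK + nK
c = np.zeros(n); c[nS * nK:] = 1.0  # minimise total freshwater use

A_eq, b_eq = [], []
for j in range(nK):                 # flow balance at each sink
    row = np.zeros(n)
    for i in range(nS):
        row[i * nK + j] = 1.0
    row[nS * nK + j] = 1.0
    A_eq.append(row); b_eq.append(D[j])

A_ub, b_ub = [], []
for j in range(nK):                 # contaminant load limit at each sink
    row = np.zeros(n)
    for i in range(nS):
        row[i * nK + j] = C[i]
    A_ub.append(row); b_ub.append(Cmax[j] * D[j])
for i in range(nS):                 # source availability
    row = np.zeros(n)
    row[i * nK:(i + 1) * nK] = 1.0
    A_ub.append(row); b_ub.append(F[i])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("minimum freshwater (t/h):", res.fun)
```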
Procedia PDF Downloads 317
29168 Case-Based Reasoning Approach for Process Planning of Internal Thread Cold Extrusion
Authors: D. Zhang, H. Y. Du, G. W. Li, J. Zeng, D. W. Zuo, Y. P. You
Abstract:
To address the difficult issue of process selection, case-based reasoning technology is applied to a computer-aided process planning system for cold form tapping of internal threads, on the basis of similarity in the process. A model is established based on the analysis of process planning. The case representation and similarity computing method are given. A confidence degree is used to evaluate each case, and a rule-based reuse strategy is presented. The scheme is illustrated and verified by practical application; the case study shows that the design results obtained with the proposed method are effective.
Keywords: case-based reasoning, internal thread, cold extrusion, process planning
Procedia PDF Downloads 507
29167 Sorting Maize Haploids from Hybrids Using Single-Kernel Near-Infrared Spectroscopy
Authors: Paul R Armstrong
Abstract:
Doubled haploids (DHs) have become an important breeding tool for creating maize inbred lines, although several bottlenecks in the DH production process limit wider development, application, and adoption of the technique. DH kernels are typically sorted manually and represent about 10% of the seeds in a much larger pool where the remaining 90% are hybrid siblings. This introduces time constraints on DH production, and manual sorting is often not accurate. Automated sorting based on the chemical composition of the kernel can be effective, but devices such as NMR have not achieved the sorting speed needed to be a cost-effective replacement for manual sorting. This study evaluated a single-kernel near-infrared reflectance spectroscopy (skNIR) platform to accurately identify DH kernels based on oil content. The skNIR platform is a higher-throughput device, approximately 3 seeds/s, that uses spectra to predict the oil content of each kernel from maize crosses intentionally developed to create larger-than-normal oil differences, 1.5%-2%, between DH and hybrid kernels. Spectra from the skNIR were used to construct a partial least squares regression (PLS) model for oil, and for a categorical reference model of 1 (DH kernel) or 2 (hybrid kernel), which was then used to sort several crosses to evaluate performance. Two approaches were used for sorting. The first used a general PLS model, developed from all crosses to predict oil content, for sorting each induction cross; the second was the development of a specific model from a single induction cross in which approximately fifty DH and one hundred hybrid kernels were used. This second approach used a categorical reference value of 1 or 2, instead of oil content, for the PLS model, and the kernels selected for the calibration set were manually referenced based on traditional commercial methods using the coloration of the tip cap and germ areas. The generalized PLS oil model statistics were R2 = 0.94 and RMSE = 0.93% for kernels spanning an oil content of 2.7% to 19.3%. Sorting by this model resulted in extracting 55% to 85% of haploid kernels from the four induction crosses. The second method of generating a model for each cross yielded model statistics ranging from R2 = 0.96 to 0.98 and RMSE from 0.08 to 0.10. Sorting in this case resulted in 100% correct classification but required models specific to each cross. In summary, the first, generalized oil-model method could be used to sort a significant number of kernels from a kernel pool but did not approach the accuracy of a sorting model developed from a single cross; the penalty for the second method is that a PLS model must be developed for each individual cross. In conclusion, both methods could find useful application in the sorting of DH from hybrid kernels.
Keywords: NIR, haploids, maize, sorting
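The calibration-and-threshold idea can be sketched with scikit-learn's PLSRegression on synthetic spectra. The spectral model, component count, and the oil cutoff separating DH from hybrid kernels are all assumptions for illustration, not the paper's calibration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
n, wavelengths = 300, 120
oil = rng.uniform(2.7, 19.3, n)                    # reference oil content (%)
basis = rng.normal(size=wavelengths)               # toy spectral response
spectra = oil[:, None] * basis + rng.normal(0, 0.5, (n, wavelengths))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, oil, random_state=0)
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()
print("R2 =", r2_score(y_te, pred),
      "RMSE =", mean_squared_error(y_te, pred) ** 0.5)

# Sorting rule: haploids are the low-oil class in these crosses, so a kernel
# is routed to the DH bin when its predicted oil falls below a cutoff.
cutoff = 6.0                                       # illustrative threshold
is_haploid = pred < cutoff
```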
Procedia PDF Downloads 301
29166 Making of Alloy Steel by Direct Alloying with Mineral Oxides during Electro-Slag Remelting
Authors: Vishwas Goel, Kapil Surve, Somnath Basu
Abstract:
In-situ alloying of steel during the electro-slag remelting (ESR) process has already been achieved by the addition of the necessary ferroalloys into the electro-slag remelting mold. However, the use of commercially available ferroalloys during ESR processing is often found to be financially less favorable in comparison with conventional alloying techniques. Hence, a process for alloying steel with elements like chromium and manganese via the electro-slag remelting route, without any ferrochrome addition, is under development. The process utilizes the in-situ reduction of refined mineral chromite (Cr₂O₃) and the resultant enrichment of chromium in the steel ingot produced. It was established in the course of this work that this process can become more advantageous than conventional alloying techniques, both economically and environmentally, for applications which inherently demand the use of the electro-slag remelting process, such as the manufacturing of superalloys. A key advantage is the lower overall CO₂ footprint of this process relative to the conventional route of production, storage, and addition of ferrochrome. In addition to experimentally validating the feasibility of the envisaged reactions, a mathematical model to simulate the reduction of chromium (III) oxide and the transfer of chromium to the molten steel droplets was also developed as part of the current work. The developed model helps to correlate the amount of chromite input with the magnitude of chromium alloying that can be achieved through this process. Experiments are in progress to validate the predictions made by this model and to fine-tune its parameters.
Keywords: alloying element, chromite, electro-slag remelting, ferrochrome
Procedia PDF Downloads 222
29165 A Statistical Approach to Classification of Agricultural Regions
Authors: Hasan Vural
Abstract:
Turkey is a favorable country for producing a great variety of agricultural products because of its diverse geographic and climatic conditions, which have been used to divide the country into four main and seven sub-regions. This classification into seven regions has traditionally been used for data collection and publication, especially in relation to agricultural production. Afterwards, nine agricultural regions were considered. Recently, the governmental body responsible for data collection and dissemination (Turkish Institute of Statistics, TIS) has used 12 classes, which include 11 sub-regions and Istanbul province. This study aims to evaluate these classification efforts based on the acreage of ten main crops over a ten-year period (1996-2005). The panel data, grouped into 11 sub-regions, were evaluated by cluster and multivariate statistical methods. It was concluded that, from the agricultural production point of view, it would be rather meaningful to consider three main and eight sub-agricultural regions throughout the country.
Keywords: agricultural region, factorial analysis, cluster analysis
Procedia PDF Downloads 413
29164 Efficient Sampling of Probabilistic Program for Biological Systems
Authors: Keerthi S. Shetty, Annappa Basava
Abstract:
In recent years, the modelling of biological systems represented by biochemical reactions has become increasingly important in Systems Biology. Such systems are highly stochastic in nature, and probabilistic models are often used to describe them. One of the main challenges in Systems Biology is to combine absolute experimental data into a probabilistic model. This challenge arises because (1) some molecules may be present in relatively small quantities, (2) there is switching between individual elements present in the system, and (3) the process is inherently stochastic at the level at which observations are made. In this paper, we describe a novel idea for combining absolute experimental data into a probabilistic model using the tool R2. Through a case study of the transcription process in prokaryotes, we explain how biological systems can be written as probabilistic programs to combine experimental data into the model. The model developed is then analysed in terms of intrinsic noise and exact sampling of switching times between individual elements in the system. We have mainly concentrated on inferring the number of genes in ON and OFF states from experimental data.
Keywords: systems biology, probabilistic model, inference, biology, model
Procedia PDF Downloads 346
29163 Clustering Based Level Set Evaluation for Low Contrast Images
Authors: Bikshalu Kalagadda, Srikanth Rangu
Abstract:
The main objective of image segmentation is to extract objects with respect to some input features, and the level set method is one of the important methods for doing so. Medical images and synthetic images generally have low-contrast pixel profiles, which makes it difficult to locate features of interest. A conventional level set function develops irregularities while evolving the contours of objects, which destroys the stability of the evolution process. As a remedy, a new hybrid algorithm, Clustering Level Set Evolution, is proposed. Kernel fuzzy particle swarm optimization clustering is used together with the Distance Regularized Level Set (DRLS) and the Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) methods. Identifying different regions becomes easier, with improved speed. The efficiency of the modified method can be evaluated by comparing it with the previous method under similar specifications; the comparison can be carried out on medical and synthetic images.
Keywords: segmentation, clustering, level set function, re-initialization, Kernel fuzzy, swarm optimization
Procedia PDF Downloads 351
29162 The Change of Urban Land Use/Cover Using Object Based Approach for Southern Bali
Authors: I. Gusti A. A. Rai Asmiwyati, Robert J. Corner, Ashraf M. Dewan
Abstract:
Change in land use/cover (LULC) dominantly affects spatial structure and function. It can have such impacts by disrupting socio-cultural practices and disturbing physical elements. Thus, it has become essential to understand the dynamics of LULC in time and space, as this can serve as a critical input for developing sustainable LULC. This study was an attempt to map and monitor LULC change in Bali, Indonesia, from 2003 to 2013. Using object-based classification to improve accuracy, together with change detection, multi-temporal land use/cover data were extracted from a set of ASTER satellite images. The overall accuracies of the classification maps of 2003 and 2013 were 86.99% and 80.36%, respectively. Built-up area and paddy field were the dominant types of land use/cover in both years. The dominant increase in patches in 2003 illustrated the rapid paddy field fragmentation and the huge transformation taking place. This approach is new for the diverse urban features of Bali, which has been growing fast, and it increased classification accuracy compared with manual pixel-based classification.
Keywords: land use/cover, urban, Bali, ASTER
Procedia PDF Downloads 539
29161 Designing Cultural-Creative Products with the Six Categories of Hanzi (Chinese Character Classification)
Authors: Pei-Jun Xue, Ming-Yu Hsiao
Abstract:
Chinese characters, or hanzi, represent a process of simplifying three-dimensional signs into plane signifiers. From pictograms at the beginning to logograms today, a Han-dynasty linguist classified them into six categories, known as the six categories of Chinese characters. Design is a process of signification, and cultural-creative design is a process of translating ideas into design with creativity built upon culture. Aiming to investigate how cultural-creative design transforms cultural text into cultural signs, this study analyzed existing cultural-creative products with the six categories of Chinese characters, treating such products as representations which accurately communicate the designer's ideas to users through the categorization, simplification, and interpretation of sign features. This is a two-phase pilot study on designing cultural-creative products with the six categories of Chinese characters. Phase I reviews the related literature on the theory of the six categories of Chinese characters and concludes with the process and principles of character evolution. Phase II analyzes the design of existing cultural-creative products with the six categories of Chinese characters and explores the conceptualization of product design.
Keywords: six categories of Chinese characters, cultural-creative product design, cultural signs, cultural product
Procedia PDF Downloads 337
29160 The Evaluation of the Performance of Different Filtering Approaches in Tracking Problem and the Effect of Noise Variance
Authors: Mohammad Javad Mollakazemi, Farhad Asadi, Aref Ghafouri
Abstract:
The performance of different filtering approaches depends on the modeling of the dynamical system and the algorithm structure. For modeling and smoothing the data, the evaluation of the posterior distribution in each filtering approach should be chosen carefully. In this paper, different filtering approaches such as the Kalman filter, EKF, UKF, EKS and the RTS smoother are simulated on several trajectory-tracking tasks, and the accuracy and limitations of these approaches are explained. The probability of the model under the different filters is then compared, and finally the effect of the noise variance on the estimation is described with simulation results.
Keywords: Gaussian approximation, Kalman smoother, parameter estimation, noise variance
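For readers unfamiliar with the baseline method, here is a minimal one-dimensional constant-velocity Kalman filter, showing where the process- and measurement-noise covariances (Q, R) enter the estimate; the system matrices and noise levels are invented for the demo, not taken from the paper.

```python
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])          # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 0.01 * np.eye(2)                     # process-noise covariance
R = np.array([[0.5]])                    # measurement-noise variance

rng = np.random.default_rng(2)
x_true = np.array([0.0, 1.0])
x_est, P = np.zeros(2), np.eye(2)
for _ in range(100):
    x_true = F @ x_true + rng.multivariate_normal([0, 0], Q)
    z = H @ x_true + rng.normal(0, np.sqrt(R[0, 0]), 1)
    # Predict
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # larger R -> smaller gain K
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P
print("final position error:", abs(x_true[0] - x_est[0]))
```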
Procedia PDF Downloads 437
29159 Estimation of Maize Yield by Using a Process-Based Model and Remote Sensing Data in the Northeast China Plain
Authors: Jia Zhang, Fengmei Yao, Yanjing Tan
Abstract:
The accurate estimation of crop yield is of great importance for food security. In this study, a process-based mechanism model was modified to estimate the yield of a C4 crop by modifying the carbon metabolic pathway in the photosynthesis sub-module of the RS-P-YEC (Remote-Sensing-Photosynthesis-Yield Estimation for Crops) model. The yield was calculated by multiplying net primary productivity (NPP) by the harvest index (HI) derived from the ratio of grain to stalk yield. The modified RS-P-YEC model was used to simulate maize yield in the Northeast China Plain during the period 2002-2011. Statistical maize yield data from the study area were used to validate the simulated results at county level. The results showed that the Pearson correlation coefficient (R) between the simulated yield and the statistical data was 0.827 (P < 0.01), and the root mean square error (RMSE) was 712 kg/ha, with a relative error (RE) of 9.3%. From 2002 to 2011, maize yield in the planting zone of the Northeast China Plain increased, with a decreasing coefficient of variation (CV). The spatial pattern of simulated maize yield was consistent with the actual distribution in the Northeast China Plain, with an increasing trend from the northeast to the southwest. Hence, the results demonstrated that the modified process-based model, coupled with remote sensing data, is suitable for yield prediction of maize in the Northeast China Plain at the spatial scale.
Keywords: process-based model, C4 crop, maize yield, remote sensing, Northeast China Plain
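The yield equation and the reported validation statistics combine as follows. The NPP, HI, and county-yield numbers below are placeholders (the paper reports only the quoted R, RMSE, and RE), and RE is taken here as RMSE relative to the mean observed yield, an assumption about the paper's definition.

```python
import numpy as np

npp = np.array([12000.0, 15000.0, 13500.0])   # net primary productivity (kg/ha)
hi = np.array([0.48, 0.52, 0.50])             # harvest index (grain/stalk-derived)
simulated = npp * hi                          # yield = NPP * HI

observed = np.array([6100.0, 7600.0, 7000.0]) # county statistical yields (kg/ha)
r = np.corrcoef(simulated, observed)[0, 1]    # Pearson correlation coefficient
rmse = np.sqrt(np.mean((simulated - observed) ** 2))
re = rmse / observed.mean() * 100             # relative error in percent
print(f"R={r:.3f}, RMSE={rmse:.0f} kg/ha, RE={re:.1f}%")
```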
Procedia PDF Downloads 372
29158 Land Cover Classification System for the Estimation of Carbon Storage in Terrestrial Ecosystems
Authors: Lei Zhang
Abstract:
The carbon cycle greatly influences global change, and land cover changes contribute to the status and rate of the carbon budget in ecosystems. This paper proposes a land cover classification system for mapping land cover, national ecological environment assessment, and estimating carbon storage in ecosystems. The classification system consists of basic land cover classes at levels I and II and auxiliary features at level III. The 38 basic classes characterizing land cover features are derived from 19 criteria referring to composition, structure, pattern, phenology, etc., and reflect the status of carbon storage in ecosystems. The auxiliary classes at level III complement the attributes of the higher levels through 9 criteria. The 5 environmental criteria of temperature, moisture, landform, aspect and slope mainly reflect the potential and intensity of carbon storage in ecosystems, while the disturbance of vegetation succession caused by land use type influences the vegetation carbon budget. The other 3 criteria (vegetation cover, growth period, and species characteristics) further refine the vegetation types. The hierarchical structure of the land cover map (the classes of levels I and II) is independent of the products of level III, which is helpful for land cover product management and applications. The classification system has been adopted in the Chinese national land cover database for the ecosystem carbon budget at a 30 m scale.
Keywords: classification system, land cover, ecosystem, carbon storage, object based
Procedia PDF Downloads 69
29157 Emergentist Metaphorical Creativity: Towards a Model of Analysing Metaphorical Creativity in Interactive Talk
Authors: Afef Badri
Abstract:
Metaphorical creativity is not a static property of discourse; it is an interactive, dynamic process created online. There has been a lack of research concerning metaphorical creativity produced online. This paper accounts for metaphorical creativity in online talk-in-interaction as a dynamic process that emerges as discourse unfolds. It brings together insights from the emergentist approach to the study of metaphor in verbal interactions and from the conceptual blending approach, as a model for analysing online metaphorical constructions, to propose a model for studying metaphorical creativity in interactive talk. The model is based on three focal points. First, metaphorical creativity is a dynamic, emergent and open-to-change process that evolves in real time as interlocutors constantly blend and re-blend previous metaphorical contributions. Second, it is not a product of isolated individual minds but a joint achievement that is co-constructed and co-elaborated by interlocutors. The third and most important point is that the emergent process of metaphorical creativity is tightly shaped by the contextual variables surrounding talk-in-interaction. It is grounded in the interlocutors' framework of interpretation. It is constrained by preceding contributions in a way that creates textual cohesion in the verbal exchange, and it is also a goal-oriented process, predefined by the communicative intention of each participant, in a way that reveals the ideological coherence or incoherence of the entire conversation.
Keywords: communicative intention, conceptual blending, the emergentist approach, metaphorical creativity
Procedia PDF Downloads 257
29156 A Structuring and Classification Method for Assigning Application Areas to Suitable Digital Factory Models
Authors: R. Hellmuth
Abstract:
The method of factory planning has changed a lot, especially when it comes to planning the factory building itself. Factory planning has the task of designing products, plants, processes, organization, areas, and the building of a factory. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of products and production technologies, as well as a VUCA world (Volatility, Uncertainty, Complexity and Ambiguity) lead to more frequent restructuring measures within a factory. A digital factory model is the planning basis for rebuilding measures and becomes an indispensable tool. Furthermore, digital building models are increasingly being used in factories to support facility management and manufacturing processes. The main research question of this paper is therefore: What kind of digital factory model is suitable for the different areas of application during the operation of a factory? First, different types of digital factory models are investigated, and their properties and usability for use cases are analysed. The investigation covers point cloud models, building information models, and photogrammetry models, as well as versions of these enriched with sensor data; it examines which digital models allow a simple integration of sensor data and where the differences lie. Subsequently, possible application areas of digital factory models are determined by means of a survey, and the respective digital factory models are assigned to the application areas. Finally, an application case from maintenance is selected and implemented with the help of the appropriate digital factory model. It is shown how a completely digitalized maintenance process can be supported by a digital factory model through the provision of information. Among other purposes, the digital factory model is used for indoor navigation, information provision, and the display of sensor data. In summary, the paper presents a structuring of digital factory models that concentrates on the geometric representation of a factory building and its technical facilities. A practical application case is shown and implemented. Thus, the systematic selection of digital factory models with the corresponding application cases is evaluated.
Keywords: building information modeling, digital factory model, factory planning, maintenance
Procedia PDF Downloads 109
29155 Application of MALDI-MS to Differentiate SARS-CoV-2 and Non-SARS-CoV-2 Symptomatic Infections in the Early and Late Phases of the Pandemic
Authors: Dmitriy Babenko, Sergey Yegorov, Ilya Korshukov, Aidana Sultanbekova, Valentina Barkhanskaya, Tatiana Bashirova, Yerzhan Zhunusov, Yevgeniya Li, Viktoriya Parakhina, Svetlana Kolesnichenko, Yeldar Baiken, Aruzhan Pralieva, Zhibek Zhumadilova, Matthew S. Miller, Gonzalo H. Hortelano, Anar Turmuhambetova, Antonella E. Chesca, Irina Kadyrova
Abstract:
Introduction: The rapidly evolving COVID-19 pandemic, along with the re-emergence of pathogens causing acute respiratory infections (ARI), has necessitated the development of novel diagnostic tools to differentiate various causes of ARI. MALDI-MS, due to its wide usage and affordability, has been proposed as a potential instrument for diagnosing SARS-CoV-2 versus non-SARS-CoV-2 ARI. The aim of this study was to investigate the potential of MALDI-MS in conjunction with a machine learning model to accurately distinguish between symptomatic infections caused by SARS-CoV-2 and non-SARS-CoV-2 during both the early and later phases of the pandemic. Furthermore, this study aimed to analyze mass spectrometry (MS) data obtained from nasal swabs of healthy individuals. Methods: We gathered mass spectra from 252 samples, comprising 108 SARS-CoV-2-positive samples obtained in 2020 (Covid 2020), 7 SARS-CoV-2-positive samples obtained in 2023 (Covid 2023), 71 samples from symptomatic individuals without SARS-CoV-2 (Control non-Covid ARVI), and 66 samples from healthy individuals (Control healthy). All samples were subjected to RT-PCR testing. For data analysis, we employed the caret R package to train and test seven machine-learning algorithms: C5.0, KNN, NB, RF, SVM-L, SVM-R, and XGBoost. We conducted training using a five-fold (outer) nested, repeated (five times) ten-fold (inner) cross-validation with a randomized stratified splitting approach. Results: In this study, we utilized the Covid 2020 dataset as the case group and the non-Covid ARVI dataset as the control group to train and test various machine learning (ML) models. Among these models, XGBoost and SVM-R demonstrated the highest performance, with accuracy values of 0.97 [0.93; 0.97] and 0.95 [0.95; 0.97], specificity values of 0.86 [0.71; 0.93] and 0.86 [0.79; 0.87], and sensitivity values of 0.984 [0.984; 1.000] and 1.000 [0.968; 1.000], respectively. When examining the Covid 2023 dataset, the Naive Bayes model achieved the highest classification accuracy of 43%, while XGBoost and SVM-R achieved accuracies of 14%. For the healthy control dataset, the accuracy of the models ranged from 0.27 [0.24; 0.32] for k-nearest neighbors to 0.44 [0.41; 0.45] for the Support Vector Machine with a radial basis function kernel. Conclusion: ML models trained on MALDI MS of nasopharyngeal swabs obtained from patients with Covid during the initial phase of the pandemic, as well as from symptomatic non-Covid individuals, showed excellent classification performance, which aligns with the results of previous studies. However, when applied to swabs from healthy individuals and a limited sample of patients with Covid in the late phase of the pandemic, the ML models exhibited lower classification accuracy.
Keywords: SARS-CoV-2, MALDI-TOF MS, ML models, nasopharyngeal swabs, classification
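The paper's nested cross-validation was run with the caret R package; the sketch below reproduces an analogous scheme (five-fold outer, five-times-repeated ten-fold inner, stratified splits) in scikit-learn with an SVM-R-style pipeline. The simulated spectra, class separation, and hyperparameter grid are stand-ins for the MALDI peak data.

```python
import numpy as np
from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                     StratifiedKFold, cross_val_score)
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(179, 200))       # 108 Covid-2020 + 71 non-Covid ARVI
y = np.array([1] * 108 + [0] * 71)
X[y == 1] += 0.3                      # synthetic class separation

inner = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

model = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),   # SVM-R analogue
    param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]},
    cv=inner,
)
scores = cross_val_score(model, X, y, cv=outer)   # outer loop estimates accuracy
print("nested-CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```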
Procedia PDF Downloads 106
29154 A New Approach to Interval Matrices and Applications
Authors: Obaid Algahtani
Abstract:
An interval may be defined as a convex combination as follows: I = [a, b] = {x_α = (1-α)a + αb : α ∈ [0,1]}. Consequently, we may define interval operations by applying the scalar operation point-wise to the corresponding interval points: I ∙ J = {x_α ∙ y_α : α ∈ [0,1], x_α ∈ I, y_α ∈ J}, with the usual restriction 0 ∉ J if ∙ = ÷. These operations are associative: I + (J + K) = (I + J) + K and I * (J * K) = (I * J) * K. These two properties, which are missing from the usual interval operations, enable the extension of the usual linear-system concepts to the interval setting in a seamless manner. The arithmetic introduced here avoids such vague terms as "interval extension", "inclusion function", and determinants, which we encounter in the engineering literature dealing with interval linear systems. On the other hand, these definitions were motivated by our attempt to arrive at a definition of interval random variables and to investigate the corresponding statistical properties. We feel that they are the natural ones for handling interval systems, and they enable the extension of many results from usual state space models to interval state space models. The interval state space model considered here is of the form X_(t+1) = A X_t + W_t, Y_t = H X_t + V_t, t ≥ 0, where A ∈ IR^(k×k) and H ∈ IR^(p×k) are interval matrices and W_t ∈ IR^k, V_t ∈ IR^p are zero-mean Gaussian white-noise interval processes. This feeling is reinforced by the numerical results we obtained in simulation examples.
Keywords: interval analysis, interval matrices, state space model, Kalman Filter
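The point-wise (α-matched) operations make associativity immediate, since I * (J * K) and (I * J) * K are literally the same expression x_α y_α z_α at each α. A sampled-α sketch of this, with arbitrary example intervals, is shown below.

```python
import numpy as np

alpha = np.linspace(0.0, 1.0, 1001)

def interval(a, b):
    """Represent I=[a,b] by its points x_alpha = (1-alpha)*a + alpha*b."""
    return (1 - alpha) * a + alpha * b

# Every operation pairs points that share the same alpha.
I, J, K = interval(1, 2), interval(-1, 3), interval(0.5, 4)

print(np.allclose(I + (J + K), (I + J) + K))   # True: + is associative
print(np.allclose(I * (J * K), (I * J) * K))   # True: * is associative
prod = I * J * K
print("I*J*K as an interval hull: [%.3f, %.3f]" % (prod.min(), prod.max()))
```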
Procedia PDF Downloads 423
29153 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Forensic DNA analysis has received much attention over the last three decades due to its incredible usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that does not correspond to an allele of a potential contributor; it is considered an artefact, presumed to arise from miscopying or slippage during the PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, which is calculated relative to the corresponding parent allele height. The analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts like stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and make significantly better use of continuous peak-height information, resulting in more efficient and reliable interpretations. Therefore, a sound methodology to distinguish between stutters and real alleles is essential for the accuracy of the interpretation, and any such method has to be able to focus on modelling stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in the clustering and classification of data and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is reflected by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult, even though the calculations are relatively simple. Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, which is an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of the number of components. The Chinese restaurant process (CRP), the stick-breaking process and the Pólya urn scheme are frequently used as Dirichlet priors in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling the stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter
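A minimal sketch of the Chinese restaurant process that serves as the Dirichlet prior over mixture components; in the stutter application, each "table" would carry its own simple linear regression for stutter ratio. The concentration parameter and sample size are arbitrary.

```python
import numpy as np

def crp(n, alpha, rng):
    """Seat n customers: item i joins table k with prob count_k/(i+alpha),
    or opens a new table (a new mixture component) with prob alpha/(i+alpha)."""
    tables, assignments = [], []
    for i in range(n):
        probs = np.array(tables + [alpha], dtype=float) / (i + alpha)
        k = rng.choice(len(probs), p=probs)
        if k == len(tables):
            tables.append(1)          # new component instantiated on demand
        else:
            tables[k] += 1
        assignments.append(k)
    return assignments, tables

rng = np.random.default_rng(4)
parts, counts = crp(500, alpha=1.5, rng=rng)
print("components used for 500 stutter observations:", len(counts))
```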
Procedia PDF Downloads 328
29152 Localization of Buried People Using Received Signal Strength Indication Measurement of Wireless Sensor
Authors: Feng Tao, Han Ye, Shaoyi Liao
Abstract:
Buildings collapse after earthquakes, and people are buried under the ruins. Search and rescue should be conducted as soon as possible to save them. Therefore, considering the complicated environment, irregular aftershocks, and the fact that rescue allows no delay, a target localization method based on RSSI (Received Signal Strength Indication) is proposed in this article. Target localization technology based on RSSI, with its features of low cost and low complexity, has been widely applied to node localization in WSNs (Wireless Sensor Networks). Based on the theory of RSSI transmission and the influence of the environment on RSSI, this article conducts experiments in five scenes, and multiple filtering algorithms are applied to the original RSSI values in order to establish, for each scene, the signal propagation model with minimum test error. The target location can then be calculated, through an improved centroid algorithm, from the distances estimated with the signal propagation model. Results show that localization technology based on RSSI is suitable for large-scale node localization. Among the filtering algorithms, the mixed filtering algorithm (the average of mean, median and Gaussian filtering) performs better than any single filtering algorithm, and by using the signal propagation model, the minimum error of the distance between known nodes and the target node in the five scenes is about 3.06 m.
Keywords: signal propagation model, centroid algorithm, localization, mixed filtering, RSSI
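A compact sketch of the pipeline: mixed filtering of raw RSSI samples, inversion of a log-distance propagation model, and a weighted-centroid position estimate. The path-loss parameters, anchor layout, and noise level are invented for the demo; the paper fits scene-specific propagation models instead.

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter1d

def mixed_filter(rssi):
    """Average of mean, median, and Gaussian filtering, as described above."""
    mean_f = np.convolve(rssi, np.ones(5) / 5, mode="same")
    med_f = median_filter(rssi, size=5)
    gauss_f = gaussian_filter1d(rssi, sigma=1.0)
    return (mean_f + med_f + gauss_f) / 3.0

def rssi_to_distance(rssi, A=-45.0, n=2.5):
    """Invert a log-distance model: RSSI = A - 10*n*log10(d)."""
    return 10 ** ((A - rssi) / (10 * n))

def weighted_centroid(anchors, dists):
    """Improved centroid: anchors weighted by inverse estimated distance."""
    w = 1.0 / np.maximum(dists, 1e-6)
    return (anchors * w[:, None]).sum(axis=0) / w.sum()

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 4.0])
rng = np.random.default_rng(5)

dists = []
for a in anchors:
    d = np.linalg.norm(target - a)
    raw = -45.0 - 25.0 * np.log10(d) + rng.normal(0, 3, 50)  # noisy samples
    dists.append(rssi_to_distance(mixed_filter(raw).mean()))
print("estimated position:", weighted_centroid(anchors, np.array(dists)))
```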
Procedia PDF Downloads 299
29151 Developing an Information Model of Manufacturing Process for Sustainability
Authors: Jae Hyun Lee
Abstract:
Manufacturing companies use life-cycle inventory databases to analyze the sustainability of their manufacturing processes. Life-cycle inventory data provide reference values that may not be accurate for a specific company, while collecting accurate data on the manufacturing processes of a specific company requires enormous time and effort. An information model of typical manufacturing processes can reduce the time and effort needed to obtain appropriate reference data for a specific company. This paper presents an attempt to build an abstract information model which can be used to develop information models for specific manufacturing processes.
Keywords: process information model, sustainability, OWL, manufacturing
Procedia PDF Downloads 428
29150 A Survey of Skin Cancer Detection and Classification from Skin Lesion Images Using Deep Learning
Authors: Joseph George, Anne Kotteswara Roa
Abstract:
Skin disease is one of the most common kinds of health issue faced by people nowadays. Skin cancer (SC) is one among them, and its detection relies on skin biopsy outputs and the expertise of doctors, but this consumes considerable time and can produce inaccurate results. At an early stage, skin cancer detection is a challenging task, and the disease easily spreads to the whole body, leading to an increase in the mortality rate; yet skin cancer is curable when it is detected early. In order to classify skin cancer correctly and accurately, the critical tasks are identification and classification, which are largely based on disease features such as shape, size, color, and symmetry. Many skin diseases share similar characteristics, which makes it challenging to select important features from skin cancer dataset images. Hence, skin cancer diagnostic accuracy can be improved by an automated detection and classification framework, which also addresses the scarcity of human experts. Recently, deep learning techniques such as the Convolutional neural network (CNN), Deep belief neural network (DBN), Artificial neural network (ANN), Recurrent neural network (RNN), and Long short-term memory (LSTM) have been widely used for the identification and classification of skin cancers. This survey reviews different DL techniques for skin cancer identification and classification. Performance metrics such as precision, recall, accuracy, sensitivity, specificity, and F-measures are used to evaluate the effectiveness of SC identification using DL techniques. By using these DL techniques, classification accuracy increases, along with a mitigation of computational complexity and time consumption.
Keywords: skin cancer, deep learning, performance measures, accuracy, datasets
Procedia PDF Downloads 127
29149 Olefin and Paraffin Separation Using Simulations on Extractive Distillation
Authors: Muhammad Naeem, Abdulrahman A. Al-Rabiah
Abstract:
In a technical C4 mixture, 1-butene and n-butane are very close to each other with respect to their boiling points, i.e. -6.3°C for 1-butene and -1°C for n-butane. An extractive distillation process is used for the separation of 1-butene from the existing C4 mixture. The solvent is essential to extractive distillation, and an appropriate solvent plays an important role in the process economy of extractive distillation. Aspen Plus was applied as the simulator for the separation of these hydrocarbons, and the NRTL activity coefficient model was used in the simulation. This model indicated that the material balances in this separation process were accurate for several solvent flow rates. A mixture of acetonitrile and water was used as the solvent, and 99% pure 1-butene was separated. The simulation proposed a feed-to-solvent ratio of 1:7.9 and 15 plates for the solvent recovery column; previously, the feed-to-solvent ratio was higher than this and 30 plates were proposed, so the new design can economize the separation process.
Keywords: extractive distillation, 1-butene, Aspen Plus, ACN solvent
Procedia PDF Downloads 445
29148 A Study on Automotive Attack Database and Data Flow Diagram for Concretization of HEAVENS: A Car Security Model
Authors: Se-Han Lee, Kwang-Woo Go, Gwang-Hyun Ahn, Hee-Sung Park, Cheol-Kyu Han, Jun-Bo Shim, Geun-Chul Kang, Hyun-Jung Lee
Abstract:
In recent years, with the advent of smart cars and the expansion of the market, the presentation of 'Adventures in Automotive Networks and Control Units' at the DEFCON 21 conference in 2013 revealed that cars are not safe from hacking. As a result, the HEAVENS model, which considers not only the functional safety of the vehicle but also its security, has been suggested. However, the HEAVENS model only presents a simple process; there are no detailed procedures and activities for each step, making it difficult to apply to actual vehicle security vulnerability checks. In this paper, we propose an automotive attack database that systematically summarizes attack vectors, attack types, and vulnerable vehicle models in preparation for various car hacking attacks, together with data flow diagrams that can detect various vulnerabilities, and we suggest a way to concretize the HEAVENS model.
Keywords: automotive security, HEAVENS, car hacking, security model, information security
Procedia PDF Downloads 361
29147 Crop Classification using Unmanned Aerial Vehicle Images
Authors: Iqra Yaseen
Abstract:
Image processing, a well-known area of computer science and engineering in the context of computer vision, has been essential to automation. In remote sensing, medical science, and many other fields, it has made it easier to uncover previously undiscovered facts. The grading of diverse items is now possible because of neural network algorithms, categorization, and digital image processing, and its use in the classification of agricultural products, particularly in the grading of seeds or grains and their cultivars, is widely recognized. A grading and sorting system enables the preservation of time, consistency, and uniformity. Global population growth has led to an increase in demand for food staples, biofuel, and other agricultural products. To meet this demand, available resources must be used and managed more effectively. Image processing is growing rapidly in the field of agriculture, and many applications have been developed using this approach for crop identification and classification, land and disease detection, and the measurement of other crop parameters. Vegetation localization is the basis for performing these tasks, as it helps to identify the area where the crop is present. The productivity of the agriculture industry can be increased via image processing based upon Unmanned Aerial Vehicle photography and satellite imagery. In this paper, we apply machine learning techniques such as Convolutional Neural Networks (CNN), deep learning, image processing, classification, and You Only Look Once (YOLO) to a UAV imaging dataset to divide crops into distinct groups and choose the best way to use them.
Keywords: image processing, UAV, YOLO, CNN, deep learning, classification
Procedia PDF Downloads 104
29146 A Taxonomy of Routing Protocols in Wireless Sensor Networks
Authors: A. Kardi, R. Zagrouba, M. Alqahtani
Abstract:
The Internet of Everything (IoE) presents today a very attractive and motivating field of research. It is basically based on Wireless Sensor Networks (WSNs), in which the routing task is the major analysis topic; in fact, routing directly affects the effectiveness and the lifetime of the network. This paper, developed from recent works and based on extensive research, proposes a taxonomy of routing protocols in WSNs. Our main contribution is a classification model based on nine classes, namely application type, delivery mode, initiator of communication, network architecture, path establishment (route discovery), network topology (structure), protocol operation, next-hop selection, and latency-aware and energy-efficient routing protocols. In order to provide a complete classification pattern to serve as a reference for network designers, each class is subdivided into possible subclasses, presented, and discussed using different parameters such as purposes and characteristics.
Keywords: routing, sensor, survey, wireless sensor networks, WSNs
Procedia PDF Downloads 181
29145 Application of Remote Sensing and GIS in Assessing Land Cover Changes within Granite Quarries around Brits Area, South Africa
Authors: Refilwe Moeletsi
Abstract:
Dimension stone quarrying around the Brits and Belfast areas started in the early 1930s and has been growing rapidly since then. The environmental impacts associated with these quarries have not been documented, and hence this study aims at detecting any change in the environment that might have been caused by these activities. Landsat images were used to assess land use/land cover changes in the Brits quarries from 1998 to 2015. A supervised classification using the maximum likelihood classifier was applied to classify each image into different land use/land cover types. Classification accuracy was assessed using Google Earth™ as a source of reference data. A post-classification change detection method was used to determine changes. The results revealed a significant increase in granite quarries and a corresponding decrease in vegetation cover within the study region.
Keywords: remote sensing, GIS, change detection, granite quarries
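A toy version of the per-pixel workflow: Gaussian class statistics estimated from training samples, maximum-likelihood labelling of each date, then post-classification comparison. The band values, classes, and statistics are synthetic stand-ins for the Landsat data.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(6)
classes = {"quarry": ([0.35, 0.30, 0.25], 0.02),
           "vegetation": ([0.05, 0.25, 0.08], 0.02),
           "bare_soil": ([0.25, 0.22, 0.20], 0.02)}

# Estimate mean vector and covariance matrix per class from training pixels.
samples = {k: rng.normal(mu, s, size=(100, 3)) for k, (mu, s) in classes.items()}
stats = {k: (np.mean(v, axis=0), np.cov(v, rowvar=False))
         for k, v in samples.items()}

def ml_classify(pixels, stats):
    """Assign each pixel the class with the highest Gaussian log-likelihood."""
    names = list(stats)
    ll = np.column_stack([multivariate_normal(m, c).logpdf(pixels)
                          for m, c in stats.values()])
    return np.array(names)[ll.argmax(axis=1)]

img_1998 = rng.normal(classes["vegetation"][0], 0.02, size=(500, 3))
img_2015 = rng.normal(classes["quarry"][0], 0.02, size=(500, 3))
map_98, map_15 = ml_classify(img_1998, stats), ml_classify(img_2015, stats)
changed = map_98 != map_15            # post-classification change mask
print("pixels converted to quarry:", np.sum((map_15 == "quarry") & changed))
```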
Procedia PDF Downloads 312
29144 A New Categorization of Image Quality Metrics Based on a Model of Human Quality Perception
Authors: Maria Grazia Albanesi, Riccardo Amadeo
Abstract:
This study presents a new model of the human image quality assessment process. The aim is to highlight the foundations of the image quality metrics proposed in the literature by identifying the cognitive/physiological or mathematical principles behind their development and their relation to the actual human quality assessment process. The model allows the creation of a novel categorization of objective and subjective image quality metrics. Our work includes an overview of the most used or most effective objective metrics in the literature, and, for each of them, we underline its main characteristics with reference to the rationale of the proposed model and categorization. From the results of this operation, we highlight a problem that affects all the presented metrics: many aspects of human biases are not taken into account at all. We then propose a possible methodology to address this issue.
Keywords: eye-tracking, image quality assessment metric, MOS, quality of user experience, visual perception
Procedia PDF Downloads 410
29143 Design of Lead-Lag Based Internal Model Controller for Binary Distillation Column
Authors: Rakesh Kumar Mishra, Tarun Kumar Dan
Abstract:
A Lead-Lag-based Internal Model Control method is proposed based on the Internal Model Control (IMC) strategy. In this paper, we design the Lead-Lag-based Internal Model Controller for a binary distillation column as a SISO process (considering only the bottom product). The transfer function is taken from the Wood and Berry model. We evaluate composition control and disturbance rejection using the Lead-Lag-based IMC and compare them with the response of a simple Internal Model Controller.
Keywords: SISO, lead-lag, internal model control, wood and berry, distillation column
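A sketch of a lead-lag IMC design for the commonly cited Wood and Berry bottoms element, G22(s) = -19.4e^(-3s)/(14.4s + 1), assuming the python-control package is available. The IMC filter constant lambda and the first-order Pade approximation of the delay are illustrative tuning choices, not the paper's design.

```python
import control
import numpy as np

K, tau, theta = -19.4, 14.4, 3.0            # Wood-Berry bottoms parameters
lam = 4.0                                   # IMC filter time constant (tuning)

num_d, den_d = control.pade(theta, 1)       # e^(-theta*s) ~ first-order Pade
Gp = control.tf([K], [tau, 1]) * control.tf(num_d, den_d)   # plant model

# IMC controller Q(s) = (tau*s + 1) / (K*(lam*s + 1)): a lead-lag element
# that inverts the minimum-phase part of the model.
Q = control.tf([tau, 1], [K * lam, K])

# Equivalent classical feedback controller C = Q / (1 - Gp*Q).
C = control.feedback(Q, -Gp)
loop = control.feedback(C * Gp, 1)          # closed-loop setpoint response

t = np.linspace(0, 100, 1000)
t, y = control.step_response(loop, t)
print("settling value ~", y[-1])            # approaches 1: no steady-state offset
```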
Procedia PDF Downloads 642
29142 A Psychophysiological Evaluation of an Effective Recognition Technique Using Interactive Dynamic Virtual Environments
Authors: Mohammadhossein Moghimi, Robert Stone, Pia Rotshtein
Abstract:
Recording psychological and physiological correlates of human performance within virtual environments, and interpreting their impact on human engagement, 'immersion' and related emotional or 'affective' states, is both academically and technologically challenging. By exposing participants to an affective, real-time (game-like) virtual environment, designed and evaluated in an earlier study, a psychophysiological database containing the EEG, GSR and heart rate of 30 male and female gamers, exposed to 10 games, was constructed. Some 174 features were subsequently identified and extracted from a number of windows, with 28 different timing lengths (e.g. 2, 3, 5, etc. seconds). After reducing the number of features to 30 using a feature selection technique, K-Nearest Neighbour (KNN) and Support Vector Machine (SVM) methods were employed for the classification process. The classifiers categorised the psychophysiological database into four affective clusters (defined based on a 3-dimensional space of valence, arousal and dominance) and eight emotion labels (relaxed, content, happy, excited, angry, afraid, sad, and bored). The KNN and SVM classifiers achieved average cross-validation accuracies of 97.01% (±1.3%) and 92.84% (±3.67%), respectively. However, no significant differences were found in the classification process based on the affective clusters or emotion labels.
Keywords: virtual reality, affective computing, affective VR, emotion-based affective physiological database
Procedia PDF Downloads 231
29141 Automatic Method for Classification of Informative and Noninformative Images in Colonoscopy Video
Authors: Nidhal K. Azawi, John M. Gauch
Abstract:
Colorectal cancer is one of the leading causes of cancer death in the US and the world, which is why millions of colonoscopy examinations are performed annually. Unfortunately, noise, specular highlights, and motion artifacts corrupt many images in a typical colonoscopy exam. The goal of our research is to produce automated techniques to detect and correct or remove these noninformative images from colonoscopy videos, so physicians can focus their attention on informative images. In this research, we first automatically extract features from images. Then we use machine learning and deep neural networks to classify colonoscopy images as either informative or noninformative. Our results show that we achieve image classification accuracy between 92% and 98%. We also show how the removal of noninformative images, together with image alignment, can aid in the creation of image panoramas and other visualizations of colonoscopy images.
Keywords: colonoscopy classification, feature extraction, image alignment, machine learning
Procedia PDF Downloads 250