Search results for: Automated feeding
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 474

54 Online Optic Disk Segmentation Using Fractals

Authors: Srinivasan Aruchamy, Partha Bhattacharjee, Goutam Sanyal

Abstract:

Optic disk segmentation plays a key role in the mass screening of individuals with diabetic retinopathy and glaucoma. An efficient hardware-based algorithm for optic disk localization and segmentation would aid in developing an automated retinal image analysis system for real-time applications. Herein, a pixel-intensity-based fractal analysis algorithm implemented on a TMS320C6416 DSK DSP board for automatic localization and segmentation of the optic disk is reported. The experiments were performed on color and fluorescein angiography retinal fundus images. Initially, the images were pre-processed to reduce noise and enhance quality. The retinal vascular tree of the image was then extracted using the Canny edge detection technique. Finally, a pixel-intensity-based fractal analysis was performed to segment the optic disk by tracing the origin of the vascular tree. The proposed method was examined on three publicly available retinal image data sets and also on a data set obtained from an eye clinic. The average accuracy achieved is 96.2%. To the best of our knowledge, this is the first work reporting the use of a TMS320C6416 DSK DSP board and a pixel-intensity-based fractal analysis algorithm for automatic localization and segmentation of the optic disk. This will pave the way for developing devices for the detection of retinal diseases in the future.
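
The following is a minimal sketch of the software side of the pipeline described above (pre-processing, Canny vessel extraction, intensity-based localization of the disk at the vascular origin). It runs with OpenCV on a PC rather than on the TMS320C6416 DSK board, the fractal step is reduced to a simple bright/edge-dense region search for illustration, and the file name is a placeholder.

import cv2
import numpy as np

def locate_optic_disk(path="fundus.png"):  # "fundus.png" is a placeholder file name
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # 1. Pre-process: reduce noise and enhance contrast
    img = cv2.medianBlur(img, 5)
    img = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(img)
    # 2. Extract the vascular tree with Canny edge detection
    edges = cv2.Canny(img, 50, 150)
    # 3. Crude stand-in for the fractal analysis: score each neighbourhood by
    #    vessel (edge) density and brightness; the disk is where vessels converge
    density = cv2.boxFilter(edges.astype(np.float32), -1, (51, 51))
    brightness = cv2.boxFilter(img.astype(np.float32), -1, (51, 51))
    score = density / (density.max() + 1e-6) + brightness / 255.0
    y, x = np.unravel_index(np.argmax(score), score.shape)
    return x, y  # estimated optic-disk centre in pixel coordinates

if __name__ == "__main__":
    print(locate_optic_disk())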

Keywords: Color retinal fundus images, Diabetic retinopathy, Fluorescein angiography retinal fundus images, Fractal analysis.

53 Honey Contamination in the Republic of Kazakhstan

Authors: B. Sadepovich Maikanov, Z. Shabanbayevich Adilbekov, R. Husainovna Mustafina, L. Tyulegenovna Auteleyeva

Abstract:

This study provides detailed information about contaminants of honey in the Republic of Kazakhstan. The requirements of the technical regulation ‘Requirements to safety of honey and bee products’ and GOST 19792-2001 were taken into account in this research. Contamination of honey by antibiotics was determined by immune-enzyme analysis (IEA), a Ridder analyzer and test systems produced by Tecna. Voltammetry (TaLab device) was used to determine contamination by salts of heavy metals, and gamma-beta spectrometry (‘Progress BG’ system), with preliminary ashing of the honey sample, was used to determine radioactive contamination. Residues of chloramphenicol were detected in 24% of the investigated products, streptomycin in 22%, sulfanilamide in 7.3% and tylosin in 2.4%, while combined contamination was noted in 12%. Geographically, the greatest degree of contamination of honey with antibiotics occurs in Northern Kazakhstan (54.4%) and Southern Kazakhstan (50%), and the lowest in Central and Eastern Kazakhstan (30% and 25%, respectively). Generally, pollution by heavy metals is within acceptable limits, but contamination with lead is highest in the Akmola region. The levels of radioactive cesium and strontium are also within acceptable concentrations. The highest radioactivity in terms of cesium was observed in the East Kazakhstan region at 49.00±10 Bq/kg, with 12.00±5, 11.05±3 and 19.0±8 Bq/kg in the Akmola, North Kazakhstan and Almaty regions, respectively, while the norm is 100 Bq/kg. In terms of strontium, the radioactivity in the East Kazakhstan region is 25.03±15 Bq/kg, while in the Akmola, North Kazakhstan and Almaty regions it is 12.00±3, 10.2±4 and 1.0±2 Bq/kg, respectively, with a norm of 80 Bq/kg. This accumulation is mainly associated with environmental degradation and with the feeding and treatment of bees. Moreover, in the process of collecting nectar, external substances can penetrate the honey. Overall, this research identifies the factors and causes of honey contamination.

Keywords: Antibiotics, contamination of honey, honey, radionuclides.

52 A Method for Music Classification Based on Perceived Mood Detection for Indian Bollywood Music

Authors: Vallabha Hampiholi

Abstract:

A lot of research has been done in the past decade in the field of audio content analysis for extracting various kinds of information from audio signals. One significant piece of information is the "perceived mood" or the "emotions" associated with a music or audio clip. This information is extremely useful in applications like creating or adapting a play-list based on the mood of the listener, and it could also help in better classification of a music database. In this paper we present a method to classify music based not just on the meta-data of the audio clip but also on the "mood" factor, to help improve music classification. We propose an automated and efficient way of classifying music samples based on mood detection from the audio data, and in particular we classify Indian Bollywood music based on mood. The proposed method addresses the following problem: genre information (usually part of the audio meta-data) alone does not lead to good music classification. For example, the acoustic version of the song "Nothing Else Matters" by Metallica can be classified as melodic music, and a person in a relaxing or chill-out mood might want to listen to this track. But more often than not this track is associated with the metal / heavy rock genre, and if a listener classifies his play-list based on genre information alone for his current mood, he will miss out on listening to this track. Methods currently exist to detect mood in Western or similar kinds of music. Our paper tries to solve the issue for Indian Bollywood music from an Indian cultural context.
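
As an illustration of the idea, the sketch below derives a coarse mood label from audio features extracted with librosa (tempo, energy, spectral centroid) and returns it alongside the genre metadata, instead of relying on genre alone. The thresholds and mood names are assumptions for illustration, not the classifier trained in this work.

import librosa
import numpy as np

def coarse_mood(path, genre):
    y, sr = librosa.load(path, sr=22050, mono=True)
    tempo = float(np.atleast_1d(librosa.beat.beat_track(y=y, sr=sr)[0])[0])   # rhythm cue
    rms = float(np.mean(librosa.feature.rms(y=y)))                            # energy cue
    centroid = float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr)))  # brightness cue
    # Hand-made rules standing in for the trained mood classifier
    if tempo < 90 and rms < 0.05:
        mood = "relaxing"
    elif tempo > 130 and centroid > 2500:
        mood = "energetic"
    else:
        mood = "neutral"
    return {"genre": genre, "mood": mood, "tempo": tempo}

# e.g. an acoustic rock track may carry genre="rock" but be tagged mood="relaxing"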

Keywords: Mood, music classification, music genre, rhythm, music analysis.

51 Evaluation of State of the Art IDS Message Exchange Protocols

Authors: Robert Koch, Mario Golling, Gabi Dreo

Abstract:

During the last couple of years, the degree of dependence on IT systems has reached a level nobody imagined possible 10 years ago. The increased usage of mobile devices (e.g., smart phones), wireless sensor networks and embedded devices (Internet of Things) are only some examples of the dependence of modern societies on cyber space. At the same time, the complexity of IT applications, e.g., because of the increasing use of cloud computing, is rising continuously. Along with this, the threats to IT security have increased both quantitatively and qualitatively, as recent examples like Stuxnet or the suspected cyber attack on an Illinois water system demonstrate impressively. Control systems that were once isolated are nowadays often publicly reachable - a fact that was never intended by their developers. Threats to IT systems do not care about areas of responsibility. Especially with regard to Cyber Warfare, IT threats are no longer limited to company or industry boundaries, administrative jurisdictions or state boundaries. One of the important countermeasures is increased cooperation among the participants, especially in the field of Cyber Defence. Besides political and legal challenges, there are technical ones as well. A better, at least partially automated exchange of information is essential to (i) enable sophisticated situational awareness and (ii) counter the attacker in a coordinated way. Therefore, this publication evaluates state-of-the-art Intrusion Detection Message Exchange protocols in order to guarantee a secure information exchange between different entities.

Keywords: Cyber Defence, Cyber Warfare, Intrusion Detection Information Exchange, Early Warning Systems, Joint Intrusion Detection, Cyber Conflict

50 Evolutionary Approach for Automated Discovery of Censored Production Rules

Authors: Kamal K. Bharadwaj, Basheer M. Al-Maqaleh

Abstract:

In the recent past, there has been an increasing interest in applying evolutionary methods to Knowledge Discovery in Databases (KDD), and a number of successful applications of Genetic Algorithms (GA) and Genetic Programming (GP) to KDD have been demonstrated. The most predominant representation of the discovered knowledge is the standard Production Rule (PR) of the form If P Then D. PRs, however, are unable to handle exceptions and do not exhibit variable precision. Censored Production Rules (CPRs), an extension of PRs proposed by Michalski and Winston, exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form: If P Then D Unless C, where C (the censor) is an exception to the rule. Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. By using a rule of this type we are free to ignore the exception condition when the resources needed to establish its presence are tight or there is simply no information available as to whether it holds or not. Thus, the 'If P Then D' part of the CPR expresses important information, while the Unless C part acts only as a switch that changes the polarity of D to ~D. This paper presents a classification algorithm based on an evolutionary approach that discovers comprehensible rules with exceptions in the form of CPRs. The proposed approach has a flexible chromosome encoding, where each chromosome corresponds to a CPR. Appropriate genetic operators are suggested and a fitness function is proposed that incorporates the basic constraints on CPRs. Experimental results are presented to demonstrate the performance of the proposed algorithm.
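
A minimal sketch of this idea is given below: a chromosome encodes one rule If P Then D Unless C over binary attributes, and the fitness rewards the accuracy of the rule (with the censor flipping the decision) plus a small generality term. The attribute names, encoding and weights are illustrative assumptions, not the paper's exact operators.

import random

ATTRS = ["a1", "a2", "a3", "a4"]          # hypothetical binary attribute names

def random_rule():
    # each of P (precondition) and C (censor) is a set of (attribute, value) tests
    p = {a: random.choice([0, 1]) for a in random.sample(ATTRS, 2)}
    c = {a: random.choice([0, 1]) for a in random.sample(ATTRS, 1)}
    return {"P": p, "C": c, "D": 1}

def matches(tests, record):
    return all(record.get(a) == v for a, v in tests.items())

def fitness(rule, data):
    covered = [r for r in data if matches(rule["P"], r)]
    if not covered:
        return 0.0
    # a covered record is correct if class == D when the censor is false,
    # and class == ~D when the censor is true
    correct = sum(1 for r in covered
                  if (r["class"] == rule["D"]) != matches(rule["C"], r))
    # reward accuracy of the rule, lightly reward generality (coverage)
    return correct / len(covered) + 0.1 * len(covered) / len(data)

# a standard GA loop (selection, crossover on the P/C dictionaries, mutation of
# the attribute tests) would evolve a population of such rules against `data`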

Keywords: Censored Production Rule, Data Mining, Machine Learning, Evolutionary Algorithms.

49 Stimulation of Stevioside Accumulation on Stevia rebaudiana (Bertoni) Shoot Culture Induced with Red LED Light in TIS RITA® Bioreactor System

Authors: Vincent Alexander, Rizkita Esyanti

Abstract:

Leaves of Stevia rebaudiana contain steviol glycosides, which mainly comprise stevioside, a natural sweetener compound that is 100-300 times sweeter than sucrose. The current cultivation method of Stevia rebaudiana in Indonesia has yet to reach the optimum efficiency and productivity needed to produce stevioside as a safe sugar-substitute sweetener for people with diabetes. An alternative method that is not limited by environmental factors is the in vitro temporary immersion system (TIS) culture method using the recipient for automated immersion (RITA®) bioreactor. The aim of this research was to evaluate the effect of red LED light induction on shoot growth and stevioside accumulation in the TIS RITA® bioreactor system, in an endeavour to increase secondary metabolite synthesis. The results showed that stevioside accumulation in the TIS RITA® bioreactor system induced with red LED light for one hour during the night was higher than in the TIS RITA® bioreactor system without red LED light induction, i.e. 71.04 ± 5.36 μg/g and 42.92 ± 5.40 μg/g, respectively. The biomass growth rate reached 0.072 ± 0.015/day for the red-LED-induced TIS RITA® bioreactor system, whereas for the system without induction it was only 0.046 ± 0.003/day. The productivity of Stevia rebaudiana shoots induced with red LED light was 0.065 g/L medium/day, whilst that of shoots without any induction was 0.041 g/L medium/day. Sucrose, salt, and inorganic nutrient consumption in both bioreactor media increased as biomass increased. It can be concluded that Stevia rebaudiana shoots in the TIS RITA® bioreactor induced with red LED light produce more biomass and accumulate a higher stevioside concentration in comparison to the bioreactor without light induction.

Keywords: LED, Stevia rebaudiana, Stevioside, TIS RITA.

48 An Automated Approach to the Nozzle Configuration of Polycrystalline Diamond Compact Drill Bits for Effective Cuttings Removal

Authors: R. Suresh, Pavan Kumar Nimmagadda, Ming Zo Tan, Shane Hart, Sharp Ugwuocha

Abstract:

Polycrystalline diamond compact (PDC) drill bits are extensively used in the oil and gas industry as well as the mining industry. Industry engineers continually improve PDC drill bit designs and hydraulic conditions, and optimized injection nozzles play a key role in improving the drilling performance and efficiency of these ever-changing PDC drill bits. In the first part of this study, computational fluid dynamics (CFD) modelling is performed to investigate the hydrodynamic characteristics of drilling fluid flow around the PDC drill bit. The open-source CFD software OpenFOAM simulates the flow around the drill bit based on field input data. A specifically developed console application integrates the entire CFD process, including domain extraction, meshing, solving the governing equations and post-processing. The results from the OpenFOAM solver are then compared with those of the ANSYS Fluent software, and the data from both packages agree. The second part of the paper describes a parametric study of the PDC drill bit nozzles to determine the effect of parameters such as the number of nozzles, nozzle velocity, nozzle radial position and orientation on the flow field characteristics and bit washing patterns. After analyzing a series of nozzle configurations, the best configuration is identified and recommendations are made for modifying the PDC bit design.
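
A sketch of such a console-style driver is shown below: it chains standard OpenFOAM utilities for one nozzle configuration with subprocess calls, assuming a prepared case directory containing the usual 0/, constant/ and system/ files. The case name and the chosen utilities are illustrative; this is not the authors' actual application.

import subprocess
from pathlib import Path

def run(cmd, case):
    """Run an OpenFOAM utility inside the case directory and log its output."""
    log = Path(case) / f"log.{cmd[0]}"
    with open(log, "w") as fh:
        subprocess.run(cmd, cwd=case, stdout=fh, stderr=subprocess.STDOUT,
                       check=True)

def simulate_case(case="pdcBitCase"):              # placeholder case name
    run(["blockMesh"], case)                       # background mesh
    run(["snappyHexMesh", "-overwrite"], case)     # mesh around the bit geometry
    run(["simpleFoam"], case)                      # steady incompressible solver
    run(["postProcess", "-func", "yPlus"], case)   # example post-processing step

if __name__ == "__main__":
    simulate_case()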

Keywords: ANSYS Fluent, computational fluid dynamics, nozzle configuration, OpenFOAM, PDC drill bit.

47 Influence of Improved Roughage Quality and Period of Meal Termination on Digesta Load in the Digestive Organs of Goats

Authors: Rasheed A. Adebayo, Mehluli M. Moyo, Ignatius V. Nsahlai

Abstract:

Ruminants are known to relish roughage for productivity, but the effect of its quality on digesta load in the rumen, omasum, abomasum and other distal organs of the digestive tract is yet unknown. Reticulorumen fill is a strong indicator for long-term control of intake in ruminants. As such, the measurement and prediction of digesta load in these compartments may be crucial to productivity in the ruminant industry. The current study aimed at determining the effect of (a) diet quality on digesta load in the digestive organs of goats, and (b) period of meal termination on the reticulorumen fill and digesta load in other distal compartments of the digestive tract of goats. Goats were fed urea-treated hay (UTH), urea-sprayed hay (USH) and non-treated hay (NTH). At the end of an eight-week feeding trial period, upon termination of a meal in the morning, afternoon or evening, all goats were slaughtered in random groups of three per day to measure reticulorumen fill and digesta loads in other distal compartments of the digestive tract. Both diet quality and period affected (P < 0.05) the measure of reticulorumen fill. However, reticulorumen fill in the evening was larger (P < 0.05) than in the afternoon, while the afternoon was similar (P > 0.05) to the morning. Also, diet quality affected (P < 0.05) the wet omasal digesta load and the wet abomasum, dry abomasum and dry caecum digesta loads, but did not affect (P > 0.05) the wet and dry digesta loads in other compartments of the digestive tract. Period of measurement did not affect (P > 0.05) the wet omasal digesta load, or the wet and dry digesta loads in other compartments of the digestive tract, except the wet abomasum digesta load (P < 0.05) and the dry caecum digesta load (P < 0.05). Both wet and dry reticulorumen fill were correlated (P < 0.05) with omasal digesta load (r = 0.623 and r = 0.723, respectively). In conclusion, the reticulorumen fill of goats decreased as roughage quality improved, and the period of meal termination and fill measurement is a key factor in the quantity of digesta load.

Keywords: Digesta, goats, meal termination, reticulorumen fill.

46 Feature Analysis of Predictive Maintenance Models

Authors: Zhaoan Wang

Abstract:

Research in predictive maintenance modeling has improved in recent years to predict failures and needed maintenance with high accuracy, saving cost and improving manufacturing efficiency. However, classic prediction models provide little valuable insight into the features that contribute most to failure. By analyzing and quantifying feature importance in predictive maintenance models, cost saving can be optimized based on business goals. First, multiple classifiers are evaluated with cross-validation to predict the multi-class failures. Second, predictive performance with features provided by different feature selection algorithms is further analyzed. Third, features selected by different algorithms are ranked and combined based on their predictive power. Finally, the linear explainer SHAP (SHapley Additive exPlanations) is applied to interpret classifier behavior and provide further insight into the specific roles of features in both local predictions and global model behavior. The results of the experiments suggest that certain features play dominant roles in predictive models while others have significantly less impact on the overall performance. Moreover, for multi-class prediction of machine failures, the most important features vary with the type of machine failure. The results may lead to improved productivity and cost saving by prioritizing sensor deployment, data collection, and data processing of more important features over less important ones.
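
The sketch below illustrates these steps on a hypothetical sensor dataset: cross-validated classifier comparison, a feature ranking from one selection criterion, and a SHAP linear explainer for a logistic-regression model. The file name, column names and label values are placeholders.

import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("machine_sensors.csv")              # hypothetical file and columns
X = df.drop(columns=["failure_type"])
y = df["failure_type"]                               # multi-class failure label

# 1. Compare classifiers with cross-validation
for clf in (RandomForestClassifier(n_estimators=200), LogisticRegression(max_iter=1000)):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())

# 2. Rank features by one selection criterion (mutual information)
ranking = pd.Series(mutual_info_classif(X, y), index=X.columns).sort_values(ascending=False)
print(ranking.head(10))

# 3. Explain a linear model with SHAP; a binary "any failure" view keeps the
#    linear explainer simple ("no_failure" is an assumed label value)
y_bin = (y != "no_failure").astype(int)
Xs = StandardScaler().fit_transform(X)
lin = LogisticRegression(max_iter=1000).fit(Xs, y_bin)
explainer = shap.LinearExplainer(lin, Xs)
shap_values = explainer.shap_values(Xs)              # per-feature contributions
print(abs(shap_values).mean(axis=0))                 # global feature importance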

Keywords: Automated supply chain, intelligent manufacturing, predictive maintenance, machine learning, feature engineering, model interpretation.

45 Integration Methods and Processes of Product Design and Flexible Production for Direct Production within the iCIM 3000 System

Authors: Roman Ružarovský, Radovan Holubek, Daynier Rolando Delgado Sobrino

Abstract:

Production engineering is currently characterized by the integration of industrial automation and robotics and by a very rapid approach to manufacturing products. The production range is continuously changing and expanding, and producers have to be flexible in this regard; they need to offer production capabilities that can respond to quick changes. Engineering product development is supported by CAD software, and such systems are mainly used for product design. For manufacturers to remain competitive, the procured machines must be capable of responding with output flexibility. The development of flexible manufacturing systems, consisting of various automated subsystems, is a response to this problem. The integration of flexible manufacturing systems and their subunits with product design and engineering is a possible solution to this issue. Integration is possible through the implementation of CIM systems. Finding such a solution, i.e. a link between CAD and the iCIM 3000 production system from Festo Co., is the subject of the research project and of this contribution. By developing methods and processes of integration, products can be designed in CAD systems and the manufacturing process can be followed from order to shipping. This will improve support for product design parameters by monitoring the production process and by creating programs for production using CAD, and it therefore accelerates the whole process from design to implementation.

Keywords: CAD - Computer Aided Design, CAM - Computer Aided Manufacturing, CIM - Computer Integrated Manufacturing, iCIM 3000, integration, direct production from CAD.

44 Automated Textile Defect Recognition System Using Computer Vision and Artificial Neural Networks

Authors: Atiqul Islam, Shamim Akhter, Tumnun E. Mursalin

Abstract:

Least Developed Countries (LDCs) like Bangladesh, which earns 25% of its revenue from textile exports, require the production of less defective textile to minimize production cost and time. Inspection processes in these industries are mostly manual and time-consuming. Reducing errors in identifying fabric defects requires a more automated and accurate inspection process. Considering this gap, this research implements a Textile Defect Recognizer which uses computer vision methodology in combination with multi-layer neural networks to identify four classes of textile defects. The recognizer, suitable for LDCs, identifies fabric defects at economical cost and provides a less error-prone inspection system in real time. In order to generate the input set for the neural network, the recognizer first captures digital fabric images with an image acquisition device and converts the RGB images into binary images by a restoration process and local thresholding techniques. Then the outputs of the processed image - the area of the faulty portion, the number of objects in the image and the sharpness factor of the image - are fed as the input layer to the neural network, which uses the back-propagation algorithm to compute the weight factors and generates the desired classification of defects as output.
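
The sketch below illustrates the recognizer's two stages: a thresholded fabric image yields the three features named above (faulty-area ratio, object count, sharpness factor), which then feed a back-propagation multi-layer network. The threshold settings, file names and defect labels are placeholders, and the tiny training set is for illustration only.

import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def fabric_features(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # local (adaptive) thresholding gives a binary defect map
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 35, 10)
    area = float(np.count_nonzero(binary)) / binary.size        # faulty-area ratio
    n_objects, _ = cv2.connectedComponents(binary)               # object count (+ background)
    sharpness = float(cv2.Laplacian(gray, cv2.CV_64F).var())     # "sharp factor"
    return [area, n_objects - 1, sharpness]

# X: feature vectors of labelled sample images, y: one of four defect classes
X = [fabric_features(p) for p in ["ok.png", "hole.png", "stain.png", "broken.png"]]
y = ["no_defect", "hole", "stain", "broken_pick"]
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X, y)
print(clf.predict([fabric_features("new_sample.png")]))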

Keywords: Computer vision, image acquisition device, machine vision, multi-layer neural networks.

43 Implication and Genetic Variations on Lipid Profile of the Fasting Respondent

Authors: Rohayu Izanwati M. R., Muhamad Ridhwan M. R., Abbe Maleyki M. J., Ahmad Zubaidi A. L., Zahri M. K.

Abstract:

PPARs function as regulators of lipid and lipoprotein metabolism. The aim of the study was to compare the lipid profile between two phases of fasting and to examine the frequency of peroxisome proliferator-activated receptor alpha (PPARα) gene polymorphisms and their relationship to the lipid profile in fasting respondents. We conducted a case-control study protocol, which included 21 healthy volunteers of both genders aged 18 years. A 3 ml blood sample was drawn before the fasting phase and during the fasting phase (in the month of Ramadhan). For the lipid profile, 1 ml of serum was analyzed using an automated chemistry analyser (Olympus AU 400) and the data were analysed using the paired t-test (SPSS ver. 20). DNA was extracted and PCR was conducted utilising 6 sets of primers designed within the 6 exons of interest in the PPARα gene. Genetic and metabolic characteristics of fasting respondents and controls were estimated and compared. Fasting respondents had significantly lower LDL levels (p = 0.03). No polymorphisms were detected except in exon 1, in 5% of this study population; the polymorphisms in exon 1 of the PPARα gene were thus found at low frequency. Regarding the 1375G/T and 1386G/T polymorphisms in exon 1 of the PPARα gene, the T-allele had no association with the decreased LDL levels in the fasting phase (Fisher's exact test). However, a larger sample size would be needed to elucidate the precise impact of the polymorphisms on the lipid profile in this population. In conclusion, the PPARα gene polymorphisms do not appear to affect the LDL of fasting respondents.

Keywords: Fasting, LDL, Peroxisome proliferator activated receptor alpha (PPAR-α), Polymorphisms.

42 Comparative Evaluation of Accuracy of Selected Machine Learning Classification Techniques for Diagnosis of Cancer: A Data Mining Approach

Authors: Rajvir Kaur, Jeewani Anupama Ginige

Abstract:

With recent trends in Big Data and advancements in Information and Communication Technologies, the healthcare industry is transitioning from being clinician oriented to technology oriented. Many people around the world die of cancer because the disease was not diagnosed at an early stage. Nowadays, computational methods in the form of Machine Learning (ML) are used to develop automated decision support systems that can diagnose cancer with high confidence in a timely manner. This paper carries out a comparative evaluation of a selected set of ML classifiers on two existing datasets: breast cancer and cervical cancer. The ML classifiers compared in this study are Decision Tree (DT), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Logistic Regression, Ensemble (Bagged Tree) and Artificial Neural Networks (ANN). The evaluation is carried out based on the standard evaluation metrics precision (P), recall (R), F1-score and accuracy. The experimental results show that the ANN achieved the highest accuracy (99.4%) when tested on the breast cancer dataset. On the other hand, when these ML classifiers are tested on the cervical cancer dataset, the Ensemble (Bagged Tree) technique gives better accuracy (93.1%) in comparison to the other classifiers.
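
A sketch of such a comparison protocol is shown below, using the breast-cancer dataset bundled with scikit-learn as a stand-in for the study's datasets and reporting the same four metrics; the specific model settings are illustrative.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    "DT": DecisionTreeClassifier(),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "LogReg": LogisticRegression(max_iter=5000),
    "Bagged Tree": BaggingClassifier(DecisionTreeClassifier()),
    "ANN": MLPClassifier(max_iter=2000),
}
scoring = ["precision", "recall", "f1", "accuracy"]
for name, model in models.items():
    res = cross_validate(make_pipeline(StandardScaler(), model), X, y,
                         cv=5, scoring=scoring)
    print(name, {m: round(res[f"test_{m}"].mean(), 3) for m in scoring})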

Keywords: Artificial neural networks, breast cancer, cancer dataset, classifiers, cervical cancer, F-score, logistic regression, machine learning, precision, recall, support vector machine.

41 A Specification-Based Approach for Retrieval of Reusable Business Component for Software Reuse

Authors: Meng Fanchao, Zhan Dechen, Xu Xiaofei

Abstract:

Software reuse can be considered the most realistic and promising way to improve software engineering productivity and quality. Automated assistance for software reuse involves the representation, classification, retrieval and adaptation of components. The representation and retrieval of components are important for software reuse in Component-Based Software Development (CBSD). However, current industrial component models mainly focus on implementation techniques and ignore semantic information about components, so it is difficult to retrieve the components that satisfy users' requirements. This paper presents a method of business component retrieval based on specification matching to support software reuse in enterprise information systems. First, a reuse-oriented business component model is proposed. In our model, business data types are represented as XML-based sign data types, which can express the variable business data types that describe the variety of business operations. Based on this model, we propose specification match relationships at two levels: the business operation level and the business component level. At the business operation level, we use input business data types, output business data types and the taxonomy of business operations to evaluate the similarity between business operations. At the business component level, we propose five specification matches between business components. To retrieve reusable business components, we propose similarity-degree measures to calculate the similarities between business components. Finally, an SQL-like business component retrieval command is proposed to help users retrieve approximate business components from a component repository.

Keywords: Business component, business operation, business data type, specification matching.

40 Process Optimization and Automation of Information Technology Services in a Heterogenic Digital Environment

Authors: Tasneem Halawani, Yamen Khateeb

Abstract:

With customers’ ever-increasing expectations for fast services provisioning for all their business needs, information technology (IT) organizations, as business partners, have to cope with this demanding environment and deliver their services in the most effective and efficient way. The purpose of this paper is to identify optimization and automation opportunities for the top requested IT services in a heterogenic digital environment and widely spread customer base. In collaboration with systems, processes, and subject matter experts (SMEs), the processes in scope were approached by analyzing four-year related historical data, identifying and surveying stakeholders, modeling the as-is processes, and studying systems integration/automation capabilities. This effort resulted in identifying several pain areas, including standardization, unnecessary customer and IT involvement, manual steps, systems integration, and performance measurement. These pain areas were addressed by standardizing the top five requested IT services, eliminating/automating 43 steps, and utilizing a single platform for end-to-end process execution. In conclusion, the optimization of IT service request processes in a heterogenic digital environment and widely spread customer base is challenging, yet achievable without compromising the service quality and customers’ added value. Further studies can focus on measuring the value of the eliminated/automated process steps to quantify the enhancement impact. Moreover, a similar approach can be utilized to optimize other IT service requests, with a focus on business criticality.

Keywords: Automation, customer value, heterogenic, integration, IT services, optimization, processes.

39 Thresholding Approach for Automatic Detection of Pseudomonas aeruginosa Biofilms from Fluorescence in situ Hybridization Images

Authors: Zonglin Yang, Tatsuya Akiyama, Kerry S. Williamson, Michael J. Franklin, Thiruvarangan Ramaraj

Abstract:

Pseudomonas aeruginosa is an opportunistic pathogen that forms surface-associated microbial communities (biofilms) on artificial implant devices and on human tissue. Biofilm infections are difficult to treat with antibiotics, in part because the bacteria in biofilms are physiologically heterogeneous. One measure of biological heterogeneity in a population of cells is to quantify the cellular concentrations of ribosomes, which can be probed with fluorescently labeled nucleic acids. The fluorescent signal intensity following fluorescence in situ hybridization (FISH) analysis correlates with the cellular level of ribosomes. The goals here are to provide computationally and statistically robust approaches to automatically quantify cellular heterogeneity in biofilms from a large library of epifluorescence microscopy FISH images. In this work, the initial steps toward these goals were taken by developing an automated biofilm detection approach for use with FISH images. The approach allows rapid identification of biofilm regions from FISH images that are counterstained with fluorescent dyes. This methodology provides advances over other computational methods, allowing subtraction of spurious signals and non-biological fluorescent substrata. The method is a robust and user-friendly approach which enables users to semi-automatically detect biofilm boundaries and extract intensity values from fluorescence images for quantitative analysis of biofilm heterogeneity.
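
A minimal sketch of the automated detection step, using scikit-image: an Otsu threshold on the counterstain channel defines candidate biofilm regions, small spurious objects are removed, and per-region FISH intensities are then extracted for heterogeneity analysis. The file names and size threshold are placeholders.

import numpy as np
from skimage import io, filters, morphology, measure

counterstain = io.imread("counterstain.tif")       # channel defining biofilm extent
fish = io.imread("fish_probe.tif")                 # ribosome-probe (FISH) channel

# 1. Global Otsu threshold on the counterstain image
mask = counterstain > filters.threshold_otsu(counterstain)
# 2. Remove spurious small signals and close small gaps in the biofilm mask
mask = morphology.remove_small_objects(mask, min_size=500)
mask = morphology.binary_closing(mask, morphology.disk(3))
# 3. Extract per-region FISH intensity statistics for heterogeneity analysis
labels = measure.label(mask)
for region in measure.regionprops(labels, intensity_image=fish):
    print(region.label, region.area, region.mean_intensity)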

Keywords: Image informatics, Pseudomonas aeruginosa, biofilm, FISH, computer vision, data visualization.

38 Path-Tracking Controller for Tracked Mobile Robot on Rough Terrain

Authors: Toshifumi Hiramatsu, Satoshi Morita, Manuel Pencelli, Marta Niccolini, Matteo Ragaglia, Alfredo Argiolas

Abstract:

Automation technologies for the agricultural field are needed to promote labor saving. One of the most relevant problems in automated agriculture is controlling the robot along a predetermined path in the presence of rough terrain or inclined ground. Unfortunately, disturbances originating from interaction with the ground, such as slipping, make it quite difficult to achieve the required accuracy. In general, the robot is required to stay within 5-10 cm of the predetermined path. Moreover, the lateral velocity caused by gravity on an inclined field also contributes to slipping. In this paper, a path-tracking controller for tracked mobile robots moving on the rough terrain of inclined fields such as vineyards is presented. The controller is composed of a disturbance observer and an adaptive controller based on the kinematic model of the robot. The disturbance observer measures the difference between the measured and the reference yaw rate and linear velocity in order to estimate slip. Then, the adaptive controller adapts the "virtual" parameters of the kinematic model: the Instantaneous Centers of Rotation (ICRs). Finally, the target angular velocity reference is computed according to the adapted parameters. This solution allows the effects of slip to be estimated without making the model too complex. Finally, the effectiveness of the proposed solution is tested in a simulation environment.
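
The sketch below illustrates the underlying kinematic ideas: a tracked (skid-steer) model parameterised by the lateral ICR positions of the two tracks, an inverse mapping from a desired body twist to track speed references, and an observer-style update that widens the estimated ICR spread when the measured yaw rate lags the reference (i.e., when the tracks slip). The geometry and gain are illustrative, not the robot's identified parameters.

import numpy as np

class TrackedKinematics:
    def __init__(self, y_icr_left=-0.3, y_icr_right=0.3):
        # lateral ICR positions of the left/right tracks ("virtual" slip parameters, m)
        self.y_l, self.y_r = y_icr_left, y_icr_right

    def body_twist(self, v_left, v_right):
        """Track speeds -> body linear and angular velocity through the ICRs."""
        omega = (v_right - v_left) / (self.y_r - self.y_l)
        v = (-self.y_l * v_right + self.y_r * v_left) / (self.y_r - self.y_l)
        return v, omega

    def track_commands(self, v_ref, w_ref):
        """Desired body twist -> left/right track speed references (inverse map)."""
        return v_ref + w_ref * self.y_l, v_ref + w_ref * self.y_r

    def adapt(self, w_ref, w_meas, gain=0.1):
        """Observer-style update: if the measured yaw rate lags the reference,
        the tracks are slipping, so widen the estimated ICR spread."""
        if abs(w_ref) > 1e-3:
            slip = np.clip(w_meas / w_ref, 0.2, 1.0)
            spread = (self.y_r - self.y_l) * (1.0 + gain * (1.0 / slip - 1.0))
            self.y_l, self.y_r = -spread / 2.0, spread / 2.0

robot = TrackedKinematics()
robot.adapt(w_ref=0.5, w_meas=0.4)                 # yaw rate lagged -> ICRs widened
print(robot.track_commands(v_ref=0.8, w_ref=0.5))  # slip-compensated track speeds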

Keywords: Agricultural robot, autonomous control, path-tracking control, tracked mobile robot.

37 Performance of the Aptima® HIV-1 Quant Dx Assay on the Panther System

Authors: Siobhan O’Shea, Sangeetha Vijaysri Nair, Hee Cheol Kim, Charles Thomas Nugent, Cheuk Yan William Tong, Sam Douthwaite, Andrew Worlock

Abstract:

The Aptima® HIV-1 Quant Dx Assay is a fully automated assay on the Panther system. It is based on transcription-mediated amplification and real-time detection technologies. The assay is intended for monitoring HIV-1 viral load in plasma specimens and for the detection of HIV-1 in plasma and serum specimens. Nine hundred and seventy-nine specimens selected at random from routine testing at St Thomas’ Hospital, London were anonymised and used to compare the performance of the Aptima HIV-1 Quant Dx assay and the Roche COBAS® AmpliPrep/COBAS® TaqMan® HIV-1 Test, v2.0. Two hundred and thirty-four specimens gave quantitative HIV-1 viral load results in both assays. The quantitative results reported by the Aptima assay were comparable to those reported by the Roche COBAS AmpliPrep/COBAS TaqMan HIV-1 Test, v2.0, with a linear regression slope of 1.04 and an intercept of -0.097. The Aptima assay detected HIV-1 in more samples than the COBAS assay. This was not due to a lack of specificity of the Aptima assay, because the assay gave 99.83% specificity on testing plasma specimens from 600 HIV-1 negative individuals. To understand the reason for this higher detection rate, a side-by-side comparison of low-level panels made from the HIV-1 3rd International Standard (NIBSC 10/152) and clinical samples of various subtypes was performed in both assays. The Aptima assay was more sensitive than the COBAS assay. The good sensitivity, specificity and agreement with other commercial assays make the HIV-1 Quant Dx Assay appropriate for both viral load monitoring and the detection of HIV-1 infections.

Keywords: HIV viral load, Aptima, Roche, Panther system.

36 Discovery of Quantified Hierarchical Production Rules from Large Set of Discovered Rules

Authors: Tamanna Siddiqui, M. Afshar Alam

Abstract:

Automated discovery of rules is, due to its applicability, one of the most fundamental and important methods in KDD. It has been an active research area in the recent past. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. An HPR, a standard production rule augmented with generality and specificity information, is of the following form: Decision <decision> If <condition> Generality <generality information> Specificity <specificity information>. HPR systems are capable of handling the taxonomical structures inherent in knowledge about the real world. This paper focuses on the issue of mining quantified rules with a crisp hierarchical structure using a Genetic Programming (GP) approach to knowledge discovery. The post-processing scheme presented in this work uses quantified production rules as the initial individuals of GP and discovers a hierarchical structure. In the proposed approach, rules are quantified using Dempster-Shafer theory. Suitable genetic operators are proposed for the suggested encoding. Based on the Subsumption Matrix (SM), an appropriate fitness function is suggested. Finally, Quantified Hierarchical Production Rules are generated from the discovered hierarchy using Dempster-Shafer theory. Experimental results are presented to demonstrate the performance of the proposed algorithm.
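
The snippet below sketches Dempster's rule of combination, the quantification mechanism named above, on a toy frame {d, ~d}; the focal elements and mass values are illustrative only.

def combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) by Dempster's rule."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb                 # mass assigned to conflicting evidence
    if conflict >= 1.0:
        raise ValueError("total conflict, combination undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

D, ND = frozenset({"d"}), frozenset({"~d"})
THETA = D | ND                                      # the whole frame of discernment
m_rule = {D: 0.7, THETA: 0.3}                       # evidence from one discovered rule
m_censor = {ND: 0.4, THETA: 0.6}                    # evidence from its exception/censor
print(combine(m_rule, m_censor))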

Keywords: Knowledge discovery in databases, quantification, Dempster-Shafer theory, genetic programming, hierarchy, subsumption matrix.

35 Coils and Antennas Fabricated with Sewing Litz Wire for Wireless Power Transfer

Authors: Hikari Ryu, Yuki Fukuda, Kento Oishi, Chiharu Igarashi, Shogo Kiryu

Abstract:

Recently, wireless power transfer has been developed in various fields. Magnetic coupling is popular for feeding power at relatively short distances and lower frequencies, while electromagnetic wave coupling at high frequencies is used for long-distance power transfer. Wireless power transfer has attracted attention in the e-textile field. At present, rigid batteries are required for many body-worn electric systems; this technology enables such batteries to be removed from the systems. Coils with a high Q factor are required in magnetic-coupling power transfer, and antennas with low return loss are needed for electromagnetic coupling. Litz wire is flexible enough to fabricate coils and antennas sewn on fabric and has low resistivity. In this study, the electrical characteristics of coils and antennas fabricated with Litz wire using two sewing techniques are investigated. As examples, a coil and an antenna are described; both were fabricated with 330/0.04 mm Litz wire. The coil was a planar coil with a square shape: the outer side was 150 mm, the number of turns was 15, and the pitch interval between turns was 5 mm. The Litz wire of the coil was overstitched with a sewing machine. The coil was fabricated as a receiver coil for magnetically coupled wireless power transfer, and its Q factor was 200 at a frequency of 800 kHz. A wireless power system was constructed using the coil and a power oscillator. The resonant frequency of the circuit was set to 123 kHz, where the switching loss of the power field-effect transistor (FET) was small. The power efficiencies were 0.44-0.99, depending on the distance between the transmitter and receiver coils. As an example of an antenna made with a sewing technique, a fractal pattern antenna was stitched on a 500 mm x 500 mm fabric using a needle punch method. The pattern was the 2nd-order Vicsek fractal. The return loss of the antenna was -28 dB at a frequency of 144 MHz.

Keywords: E-textile, flexible coils, flexible antennas, Litz wire, wireless power transfer.

34 Automatic Removal of Ocular Artifacts using JADE Algorithm and Neural Network

Authors: V Krishnaveni, S Jayaraman, A Gunasekaran, K Ramadoss

Abstract:

The ElectroEncephaloGram (EEG) is useful for clinical diagnosis and biomedical research. EEG signals often contain strong ElectroOculoGram (EOG) artifacts produced by eye movements and eye blinks, especially in EEG recorded from frontal channels. These artifacts obscure the underlying brain activity, making its visual or automated inspection difficult. The goal of ocular artifact removal is to remove ocular artifacts from the recorded EEG, leaving the underlying background signals due to brain activity. In recent times, Independent Component Analysis (ICA) algorithms have demonstrated superior potential in obtaining the least dependent source components. In this paper, the independent components are obtained using the JADE algorithm (the best separating algorithm) and are classified as either artifact components or neural components. A neural network is used for the classification of the obtained independent components. A neural network requires input features that represent the true character of the input signals, so that it can classify the signals based on the key characteristics that differentiate between various signals. In this work, Auto Regressive (AR) coefficients are used as the input features for classification. Two neural network approaches are used to learn classification rules from EEG data: first, a Polynomial Neural Network (PNN) trained by the GMDH (Group Method of Data Handling) algorithm, and second, a feed-forward neural network classifier trained by a standard back-propagation algorithm. The results show that JADE-FNN performs better than JADE-PNN.
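
A sketch of the classification stage is given below. Since JADE is not bundled with scikit-learn, FastICA stands in for it here; each independent component is described by its AR coefficients (via statsmodels) and labelled artifact/neural with a feed-forward network. The EEG data and labels are random placeholders.

import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier
from statsmodels.tsa.ar_model import AutoReg

def ar_features(signal, order=6):
    """AR-model coefficients of one independent component (intercept dropped)."""
    return AutoReg(signal, lags=order).fit().params[1:]

rng = np.random.default_rng(0)
eeg = rng.standard_normal((2000, 8))                 # placeholder: samples x channels
sources = FastICA(n_components=8, random_state=0).fit_transform(eeg)

X = np.array([ar_features(sources[:, i]) for i in range(sources.shape[1])])
y = rng.integers(0, 2, size=X.shape[0])              # placeholder artifact/neural labels
clf = MLPClassifier(hidden_layer_sizes=(12,), max_iter=2000).fit(X, y)
print(clf.predict(X))                                # 1 = ocular-artifact component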

Keywords: Auto Regressive (AR) Coefficients, Feed Forward Neural Network (FNN), Joint Approximation Diagonalisation of Eigen matrices (JADE) Algorithm, Polynomial Neural Network (PNN).

33 Revival of the Modern Wing Sails for the Propulsion of Commercial Ships

Authors: Pravesh Chandra Shukla, Kunal Ghosh

Abstract:

Over 90% of world trade is carried by the international shipping industry. As most countries are developing, seaborne trade continues to expand, bringing benefits for consumers across the world. Studies show that world trade carried by shipping will increase by 70-80% in the next 15-20 years. The present global fleet of 70,000 commercial ships consumes approximately 200 million tonnes of diesel fuel a year, and it is expected that this will be around 350 million tonnes a year by 2020. This will increase the demand for fuel and also increase the concentration of CO2 in the atmosphere. So, it is essential to control this massive fuel consumption and CO2 emission. The idea is to utilize a diesel-wind hybrid system for ship propulsion. The use of wind energy by installing modern wing-sails on ships can drastically reduce the consumption of diesel fuel. A huge amount of wind energy is available over the oceans. Whenever wind is available, the wing-sails would be deployed and the diesel engine would be throttled down while the same forward speed is maintained. Wind direction along a particular shipping route is not the same throughout; it changes depending upon the global wind pattern, which depends on the latitude. So, the wing-sail orientation should be such that it optimizes the use of wind energy. We have developed a computer programme which, given data on wind velocity, wind direction and ship-motion direction, can find the best wing-sail position and the fuel saving for commercial ships. We have calculated the net fuel saving on certain international shipping routes, for instance, from Mumbai in India to Durban in South Africa. Our estimates show that about 8.3% of the diesel fuel can be saved by utilizing the wind. We are also developing an experimental model of the ship employing airfoils (small-scale wing-sails) and are going to test it in the National Wind Tunnel Facility at IIT Kanpur in order to develop a control mechanism for a system of airfoils.
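
The sketch below illustrates what such a programme computes: the apparent wind from the true wind and the ship's own motion, and a sweep over wing-sail angles to pick the one that maximises a thrust proxy along the ship's heading. The lift/drag model is a crude placeholder, not the aerodynamic data used in this work.

import numpy as np

def apparent_wind(true_speed, true_dir_deg, ship_speed, ship_dir_deg):
    """Vector sum of the true wind and the wind induced by the ship's own motion."""
    tw = true_speed * np.array([np.cos(np.radians(true_dir_deg)),
                                np.sin(np.radians(true_dir_deg))])
    induced = -ship_speed * np.array([np.cos(np.radians(ship_dir_deg)),
                                      np.sin(np.radians(ship_dir_deg))])
    aw = tw + induced
    return np.linalg.norm(aw), np.degrees(np.arctan2(aw[1], aw[0]))

def best_sail_angle(aw_speed, aw_dir_deg, ship_dir_deg):
    beta = np.radians(aw_dir_deg - ship_dir_deg)     # apparent wind angle off the bow
    best = (None, -np.inf)
    for sail in range(-90, 91, 5):                   # sail angle w.r.t. ship axis, degrees
        attack = beta - np.radians(sail)             # angle of attack on the wing-sail
        lift, drag = np.sin(2 * attack), 0.1 + np.sin(attack) ** 2   # crude aero model
        thrust = aw_speed ** 2 * (lift * np.sin(beta) - drag * np.cos(beta))
        if thrust > best[1]:
            best = (sail, thrust)
    return best                                      # (sail angle in degrees, thrust proxy)

speed, direction = apparent_wind(10.0, 90.0, 7.0, 0.0)
print(best_sail_angle(speed, direction, 0.0))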

Keywords: Commercial ships, Wind diesel hybrid system, Wing-sail, Wind direction, Wind velocity.

32 Response Time Behavior Trends of Proportional, Proportional Integral and Proportional Integral Derivative Modes on Lab Scale

Authors: Syed Zohaib Javaid Zaidi, W. Iqbal

Abstract:

Industrial automation is dependent upon pneumatic control systems. Industrial units are now controlled with digital control systems to handle process variables such as temperature, pressure, flow rate and composition.

This research work evaluates the response-time fluctuations for the proportional (P), proportional-integral (PI) and proportional-integral-derivative (PID) modes of automated chemical process control. The controller output is measured for different values of gain with respect to time in the three modes. In the case of P-mode, the controller output changes negligibly for different values of gain. When the controller output of the PI-mode is checked at constant gain, it can be seen that the controller output shows more fluctuations as the integral time is decreased. The PID-mode results are more interesting in that, when the rate minutes are changed, the controller output also shows fluctuations with respect to time. The controller outputs for the integral and derivative modes are observed to have smaller steady-state error, minimum offset and larger response time in controlling the process variable. The only tuning parameter in the case of P-mode is the steady-state gain, which results in greater errors in the controller output. The integral mode showed controller outputs with intermediate responses as the integral gain (Ki) varied. Increasing the rate minutes also increased the derivative gain (Kd), which produced controlled oscillations and less overshoot in the PID mode.
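
The sketch below reproduces the qualitative comparison on a simulated first-order process driven by a discrete PID controller run in P, PI and PID modes; the process model, gains, reset time and rate setting are illustrative, not the laboratory rig's tuning values.

def simulate(kc, ti=float("inf"), td=0.0, dt=0.1, steps=300, setpoint=1.0):
    pv, integral, prev_err, out_hist = 0.0, 0.0, 0.0, []
    for _ in range(steps):
        err = setpoint - pv
        integral += err * dt
        derivative = (err - prev_err) / dt
        out = kc * (err + integral / ti + td * derivative)   # ideal PID form
        prev_err = err
        # first-order process: time constant 5 s, unit gain
        pv += dt * (-pv + out) / 5.0
        out_hist.append(out)
    return pv, out_hist

print("P   :", simulate(kc=2.0)[0])                  # offset remains
print("PI  :", simulate(kc=2.0, ti=2.0)[0])          # offset removed, more oscillatory
print("PID :", simulate(kc=2.0, ti=2.0, td=0.5)[0])  # derivative action damps the response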

Keywords: Controller output, P, PI and PID modes, steady-state gain.

31 Effects of Four Dietary Oils on Cholesterol and Fatty Acid Composition of Egg Yolk in Layers

Authors: A. F. Agboola, B. R. O. Omidiwura, A. Oyeyemi, E. A. Iyayi, A. S. Adelani

Abstract:

Dietary cholesterol has elicited great public interest because of its relationship with coronary heart disease. Thus, humans have been paying more attention to health and reducing their consumption of cholesterol-enriched food. Eggs are considered one of the major sources of dietary cholesterol in humans. However, an alternative way to reduce the potential cholesterolemic effect of eggs is to modify the fatty acid composition of the yolk. The effects of palm oil (PO), soybean oil (SO), sesame seed oil (SSO) and fish oil (FO) supplementation in the diets of layers on egg yolk fatty acids, cholesterol, egg production and egg quality parameters were evaluated in a 42-day feeding trial. One hundred and five Isa Brown laying hens of 34 weeks of age were randomly distributed into seven groups of five replicates with three birds per replicate in a completely randomized design. Seven corn-soybean basal diets (BD) were formulated: BD + no oil (T1), BD + 1.5% PO (T2), BD + 1.5% SO (T3), BD + 1.5% SSO (T4), BD + 1.5% FO (T5), BD + 0.75% SO + 0.75% FO (T6) and BD + 0.75% SSO + 0.75% FO (T7). Five eggs were randomly sampled at day 42 from each replicate to assay the cholesterol and fatty acid profile of the egg yolk and to assess egg quality. Results showed no significant (P>0.05) differences in production performance, egg cholesterol or egg quality parameters except for yolk height, albumen height, yolk index, egg shape index, Haugh unit and yolk colour. There were no significant (P>0.05) differences in the total cholesterol, high-density lipoprotein or low-density lipoprotein levels of egg yolk across the treatments. However, diet had an effect (P<0.05) on the TAG (triacylglycerol) and VLDL (very-low-density lipoprotein) levels of the egg yolk. The highest TAG (603.78 mg/dl) and VLDL (120.76 mg/dl) values were recorded in eggs of hens on T4 (1.5% sesame seed oil) and were similar to those on T3 (1.5% soybean oil), T5 (1.5% fish oil) and T6 (0.75% soybean oil + 0.75% fish oil). The results also revealed significant (P<0.05) variation in the eggs' total polyunsaturated fatty acid (PUFA) content. In conclusion, it is suggested that dietary oils could be included in layers' diets to produce designer eggs low in cholesterol and high in PUFA, especially omega-3 fatty acids.

Keywords: Dietary oils, Egg cholesterol, Egg fatty acid profile, Egg quality parameters.

30 Evaluation of the Analytic for Hemodynamic Instability as a Prediction Tool for Early Identification of Patient Deterioration

Authors: Bryce Benson, Sooin Lee, Ashwin Belle

Abstract:

Unrecognized or delayed identification of patient deterioration is a key cause of in-hospital adverse events. Clinicians rely on vital sign monitoring to recognize patient deterioration. However, due to ever-increasing nursing workloads and the manual effort required, vital signs tend to be measured and recorded intermittently and inconsistently, causing large gaps in patient monitoring. Additionally, during deterioration, the body’s autonomic nervous system activates compensatory mechanisms, causing the vital signs to be lagging indicators of the underlying hemodynamic decline. This study analyzes the predictive efficacy of the Analytic for Hemodynamic Instability (AHI) system, an automated tool designed to help clinicians in the early identification of deteriorating patients. The lead-time analysis in this retrospective observational study assesses how far in advance AHI predicted deterioration before an episode of hemodynamic instability (HI) became evident through vital signs. Results indicate that of the 362 episodes of HI in this study, 308 episodes (85%) were correctly predicted by the AHI system, with a median lead time of 57 minutes and an average of 4 hours (240.5 minutes). Of the 54 episodes not predicted, AHI detected 45 while the episode of HI was ongoing. Of the 9 undetected, 5 were not detected by AHI due to missing or noisy input ECG data during the episode of HI. In total, AHI was able to either predict or detect 98.9% of all episodes of HI in this study. These results suggest that AHI could provide an additional ‘pair of eyes’ on patients, continuously filling the monitoring gaps and consequently giving the patient care team the ability to be far more proactive in patient monitoring and adverse event management.

Keywords: Clinical deterioration prediction, decision support system, early warning system, hemodynamic status, physiologic monitoring.

29 CBIR Using Multi-Resolution Transform for Brain Tumour Detection and Stages Identification

Authors: H. Benjamin Fredrick David, R. Balasubramanian, A. Anbarasa Pandian

Abstract:

Image retrieval is one of the most widely used techniques in today's digital world. CBIR, commonly expanded as Content-Based Image Retrieval, is an image processing technique which identifies relevant images and retrieves them based on the patterns that are extracted from the digital images. In this paper, two research works using CBIR are presented. The first work provides an automated and interactive approach to the analysis of CBIR techniques. CBIR works on the principle of supervised machine learning, which involves feature selection followed by training and testing phases applied to a classifier in order to perform prediction. For feature extraction, image transforms such as the Contourlet, Ridgelet and Shearlet transforms are utilized to retrieve texture features from the images. The extracted features are used to train and build a classifier using classification algorithms such as Naïve Bayes, k-Nearest Neighbour and multi-class Support Vector Machine. The testing phase then predicts the class of a new input image using the trained classifier and labels it as one of four classes, namely 1 - normal brain, 2 - benign tumour, 3 - malignant tumour and 4 - severe tumour. The second research work involves developing a tool for tumour stage identification using the best feature extraction method and classifier identified in the first work. Finally, the tool is used to predict the tumour stage and provide suggestions based on the stage of the tumour identified by the system. This paper presents these two approaches as a contribution to the medical field, providing better retrieval performance and tumour stage identification.

Keywords: Brain tumour detection, content based image retrieval, classification of tumours, image retrieval.

28 Identification of Flexographic-printed Newspapers with NIR Spectral Imaging

Authors: Raimund Leitner, Susanne Rosskopf

Abstract:

Near-infrared (NIR) spectroscopy is a widely used method for material identification in laboratory and industrial applications. While standard spectrometers only allow measurements at one sampling point at a time, NIR Spectral Imaging techniques can measure, in real time, both the size and shape of an object as well as identify the material the object is made of. The online classification and sorting of recovered paper with NIR Spectral Imaging (SI) is used with success in the paper recycling industry throughout Europe. Recently, the globalisation of recycling material streams has caused water-based flexographic-printed newspapers, mainly from the UK and Italy, to appear in central Europe as well. These flexo-printed newspapers are not sufficiently de-inkable with the standard de-inking process originally developed for offset-printed paper. This de-inking process removes the ink from recovered paper and is the fundamental processing step in producing high-quality paper from recovered paper. Thus, flexo-printed newspapers are a growing problem for the recycling industry, as they reduce the quality of the produced paper if their amount exceeds a certain limit within the recovered paper material. This paper presents the results of a research project on the development of an automated entry inspection system for recovered paper that was jointly conducted by CTR AG (Austria) and PTS Papiertechnische Stiftung (Germany). Within the project, an NIR SI prototype for the identification of flexo-printed newspaper has been developed. The prototype can identify and sort out flexo-printed newspapers in real time and achieves a detection accuracy for flexo-printed newspaper of over 95%. NIR SI, the technology the prototype is based on, will allow the development of inspection systems for incoming goods in a paper production facility as well as industrial sorting systems for recovered paper in the recycling industry in the near future.

Keywords: spectral imaging, imaging spectroscopy, NIR, water-based flexographic, flexo-printed, recovered paper, real-time classification.

27 Hyperspectral Imaging and Nonlinear Fukunaga-Koontz Transform Based Food Inspection

Authors: Hamidullah Binol, Abdullah Bal

Abstract:

Nowadays, food safety is a great public concern; therefore, robust and effective techniques are required for assessing the safety of goods. Hyperspectral Imaging (HSI) is an attractive tool for researchers to inspect food quality and estimate safety, with applications such as meat quality assessment, automated poultry carcass inspection, quality evaluation of fish, bruise detection of apples, quality analysis and grading of citrus fruits, bruise detection of strawberries, visualization of the sugar distribution of melons, measuring the ripening of tomatoes, defect detection of pickling cucumbers, and classification of wheat kernels. HSI can be used to concurrently collect large amounts of spatial and spectral data on the objects being observed. This technique yields exceptional detection capability, which otherwise cannot be achieved with either imaging or spectroscopy alone. This paper presents a nonlinear technique based on the kernel Fukunaga-Koontz transform (KFKT) for the detection of fat content in ground meat using HSI. The KFKT, which is the nonlinear version of the FKT, is one of the most effective techniques for solving problems of a two-pattern nature. The conventional FKT method has been improved with kernel machines to increase the nonlinear discrimination ability and capture higher-order statistics of the data. The approach proposed in this paper aims to segment the fat content of ground meat by regarding the fat as the target class, which is to be separated from the remaining classes (treated as clutter). We have applied the KFKT to visible and near-infrared (VNIR) hyperspectral images of ground meat to determine the fat percentage. The experimental studies indicate that the proposed technique produces high detection performance for the fat ratio in ground meat.
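
For illustration, the sketch below implements the linear Fukunaga-Koontz transform for a two-class (target vs. clutter) problem on synthetic spectra; the KFKT used in this work applies the same construction after a kernel mapping of the data.

import numpy as np

def fkt(X_target, X_clutter):
    """Return the FKT projection matrix for two classes (rows = samples)."""
    r1 = X_target.T @ X_target / len(X_target)       # class correlation matrices
    r2 = X_clutter.T @ X_clutter / len(X_clutter)
    vals, vecs = np.linalg.eigh(r1 + r2)
    keep = vals > 1e-10
    p = vecs[:, keep] / np.sqrt(vals[keep])           # whitening transform
    # in the whitened space the two class matrices share eigenvectors and their
    # eigenvalues sum to one: strong directions for one class are weak for the other
    _, f = np.linalg.eigh(p.T @ r1 @ p)
    return p @ f

rng = np.random.default_rng(1)
target = rng.normal(0.0, 1.0, (200, 10)) @ np.diag(np.linspace(2, 0.2, 10))
clutter = rng.normal(0.0, 1.0, (200, 10))
W = fkt(target, clutter)
# score a pixel spectrum by its energy along the target-dominant (last) columns
t_score = np.sum((target @ W[:, -3:]) ** 2, axis=1)
c_score = np.sum((clutter @ W[:, -3:]) ** 2, axis=1)
print(t_score.mean(), c_score.mean())                # target scores should be larger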

Keywords: Food (Ground meat) inspection, Fukunaga-Koontz transform, hyperspectral imaging, kernel methods.

26 Rigorous Electromagnetic Model of Fourier Transform Infrared (FT-IR) Spectroscopic Imaging Applied to Automated Histology of Prostate Tissue Specimens

Authors: Rohith K Reddy, David Mayerich, Michael Walsh, P Scott Carney, Rohit Bhargava

Abstract:

Fourier transform infrared (FT-IR) spectroscopic imaging is an emerging technique that provides both chemically and spatially resolved information. The rich chemical content of data may be utilized for computer-aided determinations of structure and pathologic state (cancer diagnosis) in histological tissue sections for prostate cancer. FT-IR spectroscopic imaging of prostate tissue has shown that tissue type (histological) classification can be performed to a high degree of accuracy [1] and cancer diagnosis can be performed with an accuracy of about 80% [2] on a microscopic (≈ 6μm) length scale. In performing these analyses, it has been observed that there is large variability (more than 60%) between spectra from different points on tissue that is expected to consist of the same essential chemical constituents. Spectra at the edges of tissues are characteristically and consistently different from chemically similar tissue in the middle of the same sample. Here, we explain these differences using a rigorous electromagnetic model for light-sample interaction. Spectra from FT-IR spectroscopic imaging of chemically heterogeneous samples are different from bulk spectra of individual chemical constituents of the sample. This is because spectra not only depend on chemistry, but also on the shape of the sample. Using coupled wave analysis, we characterize and quantify the nature of spectral distortions at the edges of tissues. Furthermore, we present a method of performing histological classification of tissue samples. Since the mid-infrared spectrum is typically assumed to be a quantitative measure of chemical composition, classification results can vary widely due to spectral distortions. However, we demonstrate that the selection of localized metrics based on chemical information can make our data robust to the spectral distortions caused by scattering at the tissue boundary.

Keywords: Infrared, Spectroscopy, Imaging, Tissue classification

25 Fuzzy Power Controller Design for Purdue University Research Reactor-1

Authors: Oktavian Muhammad Rizki, Appiah Rita, Lastres Oscar, Miller True, Chapman Alec, Tsoukalas Lefteri H.

Abstract:

The Purdue University Research Reactor-1 (PUR-1) is a 10 kWth pool-type research reactor located at Purdue University’s West Lafayette campus. The reactor was recently upgraded to use entirely digital instrumentation and control systems. However, there is currently no automated control system to regulate the power in the reactor. We propose a fuzzy logic controller, as a form of digital twin, to complement the existing digital instrumentation system in monitoring and stabilizing power control using existing experimental data. This work assesses the feasibility of a power controller based on a Fuzzy Rule-Based System (FRBS) by modelling and simulation with a MATLAB algorithm. The controller uses the power error and the reactor period as inputs and generates the reactivity insertion as output. The reactivity insertion is then converted to control rod height using a logistic function based on information from the recorded experimental reactor control rod data. To test the capability of the proposed fuzzy controller, a point-kinetics reactor model based on the actual PUR-1 operating conditions is utilized, together with a Monte Carlo N-Particle simulation of the core to numerically compute the neutronics parameters of reactor behavior. The point kinetics equations (PKE) were employed to model the dynamic characteristics of the research reactor, since they describe the interactions between the spatially and time-varying input and output variables efficiently. The controller is demonstrated computationally using various cases: startup, power maneuver, and shutdown. The test results show that the implemented fuzzy controller can satisfactorily regulate the reactor power to follow the demand power without compromising nuclear safety measures.
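
For reference, the point kinetics model referred to above has the standard textbook form with six delayed-neutron precursor groups (n: neutron density or power, ρ: reactivity, β_i and β: delayed-neutron fractions, Λ: neutron generation time, λ_i: precursor decay constants, C_i: precursor concentrations); the fuzzy controller supplies ρ(t) from the power error and reactor period, and these equations propagate it to the power response:

\begin{align}
  \frac{dn(t)}{dt}   &= \frac{\rho(t)-\beta}{\Lambda}\, n(t) + \sum_{i=1}^{6} \lambda_i C_i(t), \\
  \frac{dC_i(t)}{dt} &= \frac{\beta_i}{\Lambda}\, n(t) - \lambda_i C_i(t), \qquad i = 1,\dots,6 .
\end{align}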

Keywords: Fuzzy logic controller, power controller, reactivity, research reactor.
