Search results for: Theano Petsi
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3

3 Encapsulated Rennin Enzyme in Nano and Micro Tubular Cellulose/Starch Gel Composite for Milk Coagulation

Authors: Eleftheria Barouni, Theano Petsi, Argyro Bekatorou, Dionysos Kolliopoulos, Dimitrios Vasileiou, Panayiotis Panas, Maria Kanellaki, Athanasios A. Koutinas

Abstract:

The aim of the present work was the production and use of a composite filter (TC/starch) containing rennin enzyme, both in a continuous system and in successive fermentation batches (SFB), for milk coagulation, in order to compare the operational stability of the two systems and the cheese production cost. Tubular cellulose (TC) was produced after removal of lignin from lignocellulosic biomass using several procedures, e.g. alkaline treatment [1], and starch gel was added to reduce the TC tube dimensions to the micro- and nano-range [2]. Four immobilized biocatalysts were prepared using different modes of enzyme entrapment: 1) TC/rennin (rennin entrapped in the tubes of TC), 2) TC/SG-rennin (rennin entrapped in the tubes of the composite), 3) TC-SG/rennin (rennin entrapped in the layer of starch gel), and 4) TC/rennin-SG/rennin (rennin entrapped both in the tubes of the TC and in the layer of starch gel). First, these immobilized biocatalysts were examined in ten SFB with respect to coagulation time and activity. All of the above immobilized biocatalysts remained active, and the coagulation time ranged from 90-480, 120-480, 330-510, and 270-540 min for (1), (2), (3), and (4), respectively. The quality of the cheese was examined through the determination of volatile compounds by SPME GC/MS analysis. These results encouraged us to study a continuous milk coagulation system. Even though immobilized biocatalyst (1) gave a lower coagulation time, immobilized biocatalyst (2) was used in the continuous system. The results were promising.

Keywords: tubular cellulose, starch gel, composite biocatalyst, rennin, milk coagulation

Procedia PDF Downloads 307
2 LGG Architecture for Brain Tumor Segmentation Using Convolutional Neural Network

Authors: Sajeeha Ansar, Asad Ali Safi, Sheikh Ziauddin, Ahmad R. Shahid, Faraz Ahsan

Abstract:

The most aggressive form of brain tumor is called glioma. Glioma is a type of tumor that arises from the glial tissue of the brain and occurs quite often. A fully automatic 2D-CNN model for brain tumor segmentation is presented in this paper. We performed pre-processing steps to remove noise and intensity variances using N4ITK and standard intensity correction, respectively. We used the Keras open-source library with Theano as the backend for fast implementation of the CNN model. In addition, we used the BRATS 2015 MRI dataset to evaluate our proposed model. Furthermore, we used the SimpleITK open-source library in our proposed model to analyze images. Moreover, we extracted random 2D patches for the proposed 2D-CNN model for efficient brain segmentation. We extracted 2D patches instead of 3D patches because 2D patches carry less dimensional information, which helps reduce computational time. The Dice Similarity Coefficient (DSC) is used as the performance measure for the evaluation of the proposed method. Our method achieved DSC scores of 0.77 for the complete, 0.76 for the core, and 0.77 for the enhanced tumor regions. These results are comparable with methods that have already implemented 2D-CNN architectures.
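
For illustration, a minimal patch-wise 2D-CNN and a DSC computation are sketched below in Keras/NumPy. The patch size (33x33), the four MRI modalities, the five output classes, and the layer sizes are illustrative assumptions for BRATS-style data, not the exact architecture or settings evaluated in the paper.

# Minimal sketch (assumed settings, not the authors' published network):
# a small 2D-CNN that classifies an MRI patch by the label of its centre
# voxel, in the spirit of patch-wise brain tumor segmentation on BRATS data.
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

def build_patch_cnn(patch_size=33, n_modalities=4, n_classes=5):
    # Input: one 33x33 patch with 4 channels (e.g. FLAIR, T1, T1c, T2).
    model = Sequential([
        Conv2D(64, (3, 3), activation='relu',
               input_shape=(patch_size, patch_size, n_modalities)),
        Conv2D(64, (3, 3), activation='relu'),
        MaxPooling2D((2, 2)),
        Dropout(0.25),
        Conv2D(128, (3, 3), activation='relu'),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(256, activation='relu'),
        Dropout(0.5),
        Dense(n_classes, activation='softmax'),  # label of the patch's centre voxel
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

def dice_coefficient(pred_mask, true_mask):
    # Dice Similarity Coefficient between two binary segmentation masks.
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return 2.0 * intersection / (pred.sum() + true.sum() + 1e-8)

At prediction time, such a model is slid over the volume slice by slice, the per-patch class probabilities are assembled into a label map, and the DSC is then computed separately for the complete, core, and enhanced tumor regions.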

Keywords: brain tumor segmentation, convolutional neural networks, deep learning, LGG

Procedia PDF Downloads 166
1 Use Cloud-Based Watson Deep Learning Platform to Train Models Faster and More Accurate

Authors: Susan Diamond

Abstract:

Machine learning workloads have traditionally been run in high-performance computing (HPC) environments, where users log in to dedicated machines and use the attached GPUs to run training jobs on huge datasets. Training of large neural network models is very resource intensive, and even after exploiting parallelism and accelerators such as GPUs, a single training job can still take days. Consequently, the cost of hardware is a barrier to entry. Even when the upfront cost is not a concern, setting up such an HPC environment takes months, from acquiring the hardware to configuring it with the right firmware and software. Furthermore, scalability is hard to achieve in a rigid traditional lab environment, which makes it slow to react to dynamic changes in the artificial intelligence industry. Watson Deep Learning as a Service is a cloud-based deep learning platform that mitigates the long lead time and high upfront investment in hardware. It enables robust and scalable sharing of resources among the teams in an organization and is designed for on-demand cloud environments. Providing a similar user experience in a multi-tenant cloud environment comes with its own unique challenges regarding fault tolerance, performance, and security. Watson Deep Learning as a Service tackles these challenges and presents a deep learning stack for cloud environments in a secure, scalable, and fault-tolerant manner. It supports a wide range of deep learning frameworks such as TensorFlow, PyTorch, Caffe, Torch, Theano, and MXNet. These frameworks reduce the effort and skill set required to design, train, and use deep learning models. Deep Learning as a Service is used at IBM by AI researchers in areas including machine translation, computer vision, and healthcare.
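
As an illustration of the kind of self-contained training script a managed training service runs on a user's behalf, a minimal Keras example is sketched below. It is framework-generic and deliberately avoids any Watson-specific API, since the abstract does not describe the service's job-submission interface; the dataset and model here are placeholders.

# Minimal, framework-generic training script (illustrative only): a small
# classifier trained on MNIST, with the trained weights saved to disk so the
# service can return them as a job artifact.
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Flatten, Dense
from keras.utils import to_categorical

# Load and normalize the data.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# Define and train a small fully connected network.
model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=128,
          validation_data=(x_test, y_test))

# Persist the trained model so the service can hand it back to the user.
model.save('model.h5')

In a cloud training service, a script like this is packaged together with a declaration of the requested resources (framework, GPU count, data location); the platform then provisions the environment, runs the job, and stores the resulting artifacts.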

Keywords: deep learning, machine learning, cognitive computing, model training

Procedia PDF Downloads 195