Search results for: parameter intervals
486 Construction of Microbial Fuel Cells from Local Benthic Zones
Authors: Maria Luiza D. Ramiento, Maria Lissette D. Lucas
Abstract:
Electricity is often said to serve as the backbone of modern technology, and electricity consumption has grown dynamically with continuous demand, so alternative means of producing it deserve attention. The microbial fuel cell represents a new method of renewable energy recovery: the direct conversion of organic matter to electricity using bacteria. Electricity is produced as fuel or new food is given to the bacteria. This study concentrated on determining the feasibility of electricity production from local benthic zones. Microbial fuel cells were constructed to harvest the available electricity and to test for the presence of electricity-producing microorganisms. Soil samples were gathered from Calumpang River, Palawan Mangrove Forest, Rosario River, and Batangas Port. Eleven modules, each made of cathode and anode chambers connected by a salt bridge, were constructed for the different trials of the soil samples. The harvested voltage was measured daily for 85 days: no parameter was added for the first 24 days, and for the next 61 days acetic acid was added in the first and second trials of the modules. Each trial of the soil samples gave a positive result in electricity production, showing that electricity-producing microbes are present in local benthic zones. It was observed that the higher the organic content of the soil sample, the higher the electricity harvested from it. It is recommended to identify the specific species of electricity-producing microorganisms present in the local benthic zone. Complementary experiments are encouraged, such as determining the kind of soil particles in order to test their effect on the amount of electricity that can be harvested. Pursuing the development of microbial fuel cells by building a closed circuit into them is also suggested.
Keywords: microbial fuel cell, benthic zone, electricity, reduction-oxidation reaction, bacteria
Procedia PDF Downloads 400
485 Hydrometallurgical Processing of a Nigerian Chalcopyrite Ore
Authors: Alafara A. Baba, Kuranga I. Ayinla, Folahan A. Adekola, Rafiu B. Bale
Abstract:
Due to increasing demands and the diverse applications of copper oxide as a pigment in ceramics, in cuprammonium hydroxide solution for rayon, in p-type semiconductors, in dry cell battery production, and in the safe disposal of hazardous materials, a study of the hydrometallurgical operations involving leaching, solvent extraction, and precipitation for the recovery of copper, for producing high-grade copper oxide from a Nigerian chalcopyrite ore in chloride media, has been carried out. For a given set of experimental parameters with respect to acid concentration, reaction temperature, and particle size, the leaching investigation showed that ore dissolution increases with increasing acid concentration and temperature and with decreasing particle diameter at moderate stirring. The kinetic data were analyzed and found to follow a diffusion-controlled mechanism. At optimal conditions, the extent of ore dissolution reached 94.3%. The recovery of total copper from the hydrochloric acid-leached chalcopyrite ore was undertaken by solvent extraction and precipitation techniques, prior to the beneficiation of the purified solution as copper oxide. The leach liquor was first purified by precipitating total iron and manganese using Ca(OH)2 and H2O2 as oxidizer at pH 3.5 and 4.25, respectively. An extraction efficiency of 97.3% of total copper was obtained with 0.2 mol/L dithizone in kerosene at 25±2 ºC within 40 minutes, from which ≈98% Cu was successfully stripped from the loaded organic phase by 0.1 mol/L HCl solution. The beneficiation of the recovered pure copper solution was carried out by crystallization through alkali addition, followed by calcination at 600 ºC to obtain high-grade copper oxide (tenorite, CuO: 05-0661). Finally, a simple hydrometallurgical scheme for the operational extraction procedure, amenable to industrial utilization and economic sustainability, was provided.
Keywords: chalcopyrite ore, Nigeria, copper, copper oxide, solvent extraction
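The diffusion-controlled leaching kinetics mentioned in this abstract are usually tested by checking that the shrinking-core expression g(x) = 1 - 3(1-x)^(2/3) + 2(1-x) is linear in time. A minimal sketch of that check is below; the conversion data and the fitting routine are illustrative assumptions, not the paper's measurements.

```python
# Sketch: testing a diffusion-controlled shrinking-core model on leaching data.
# The conversion values below are hypothetical, not the paper's measurements.

def g_diffusion(x):
    """Shrinking-core expression for product-layer diffusion control:
    g(x) = 1 - 3(1-x)**(2/3) + 2(1-x), which should be linear in time t."""
    return 1 - 3 * (1 - x) ** (2 / 3) + 2 * (1 - x)

def fit_rate_constant(times, conversions):
    """Least-squares slope through the origin for g(x) = k * t."""
    gs = [g_diffusion(x) for x in conversions]
    k = sum(t * g for t, g in zip(times, gs)) / sum(t * t for t in times)
    # goodness of fit (R^2) for the forced-origin line
    ss_res = sum((g - k * t) ** 2 for t, g in zip(times, gs))
    mean_g = sum(gs) / len(gs)
    ss_tot = sum((g - mean_g) ** 2 for g in gs)
    return k, 1 - ss_res / ss_tot

# hypothetical data: conversion rising toward ~94% over 120 minutes
times = [15, 30, 60, 90, 120]
convs = [0.35, 0.52, 0.72, 0.85, 0.943]
k, r2 = fit_rate_constant(times, convs)
print(f"k = {k:.4f} 1/min, R^2 = {r2:.3f}")
```

A near-unity R^2 for this expression, versus a poor fit for the surface-reaction expression 1 - (1-x)^(1/3), is the usual evidence for diffusion control.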
Procedia PDF Downloads 393
484 Investigating the Energy Harvesting Potential of a Pitch-Plunge Airfoil Subjected to Fluctuating Wind
Authors: Magu Raam Prasaad R., Venkatramani Jagadish
Abstract:
Recent studies in the literature have shown that randomly fluctuating wind flows can give rise to a distinct regime of pre-flutter oscillations called intermittency. Intermittency is characterized by sporadic bursts of high-amplitude oscillations interspersed amidst low-amplitude aperiodic fluctuations. The focus of this study is on investigating the energy harvesting potential of these intermittent oscillations. The available literature has by and large devoted its attention to extracting energy from flutter oscillations; the possibility of harvesting energy from pre-flutter regimes has remained largely unexplored. However, extracting energy from violent flutter oscillations can be severely detrimental to the structural integrity of airfoil structures. Consequently, investigating the relatively stable pre-flutter responses for energy extraction applications is of practical importance, and the present study is devoted to addressing these concerns. A pitch-plunge airfoil with cubic hardening nonlinearity in the plunge and pitch degrees of freedom is considered. The input flow fluctuations are modelled using a sinusoidal term with randomly perturbed frequencies. An electromagnetic coupling is added to the pitch-plunge equations so that energy is extracted from the wind-induced vibrations of the structure. With the mean flow speed as the bifurcation parameter, a fourth-order Runge-Kutta based time-marching algorithm is used to solve the governing aeroelastic equations with electromagnetic coupling. The energy harnessed from the intermittency regime is presented, and the results are discussed in comparison to those obtained from the flutter regime. The insights from this study could be useful in the health monitoring of aeroelastic structures.
Keywords: aeroelasticity, energy harvesting, intermittency, randomly fluctuating flows
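The time-marching scheme described in this abstract can be sketched with a much simpler stand-in system: a single plunge-like oscillator with cubic hardening and an electromagnetic load, integrated with classical fourth-order Runge-Kutta. All parameter values below are assumptions for illustration; the paper's full pitch-plunge aeroelastic model has more states and aerodynamic terms.

```python
import math

# Minimal sketch (not the paper's full model): a plunge-only oscillator with
# cubic hardening and electromagnetic coupling, marched with classical RK4.
# Every parameter value here is assumed for illustration.

def deriv(state, t, U):
    y, ydot, q = state              # plunge, plunge rate, transferred charge
    k_lin, k_cub = 1.0, 5.0         # linear and cubic stiffness (assumed)
    c, theta, R = 0.05, 0.2, 1.0    # damping, coupling, load resistance
    forcing = 0.1 * U * math.sin(2.0 * t)   # stand-in for wind fluctuation
    qdot = theta * ydot / R                  # induced current through load
    yddot = forcing - c * ydot - k_lin * y - k_cub * y**3 - theta * qdot
    return [ydot, yddot, qdot]

def rk4_step(state, t, dt, U):
    k1 = deriv(state, t, U)
    k2 = deriv([s + 0.5 * dt * a for s, a in zip(state, k1)], t + 0.5 * dt, U)
    k3 = deriv([s + 0.5 * dt * a for s, a in zip(state, k2)], t + 0.5 * dt, U)
    k4 = deriv([s + dt * a for s, a in zip(state, k3)], t + dt, U)
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def harvested_energy(U, t_end=50.0, dt=0.01):
    """Integrate the power dissipated in the load resistor over the run."""
    state, t, energy = [0.0, 0.0, 0.0], 0.0, 0.0
    R, theta = 1.0, 0.2
    while t < t_end:
        state = rk4_step(state, t, dt, U)
        current = theta * state[1] / R
        energy += current**2 * R * dt
        t += dt
    return energy

print(f"harvested energy at U=1: {harvested_energy(1.0):.4f}")
```

Sweeping U (the mean-flow-speed analogue here) and comparing harvested energy across regimes mirrors the bifurcation-parameter study the abstract describes.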
Procedia PDF Downloads 186
483 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores
Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan
Abstract:
Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a feature vector of a specific dimension from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, such algorithms generally reduce image detail by pooling, an operation that overlooks the details forensic experts attend to most closely. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of naturally collected face images with known frontal ID photos of the same persons. Downscaling and manual handling were performed on the test images. The results supported that facial recognition algorithms based on deep learning detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were tested to evaluate the accuracy of the biometric systems and the forensic experts. Experiments showed that the biometric systems were skilled at distinguishing category features, while forensic experts were better at discovering the individual features of human faces. In the proposed approach, fusion was performed at the score level; at a specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of objective methods of facial comparison and provides a novel method for human-machine collaboration in this field.
Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics
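The score-level fusion and the FRR-at-fixed-FAR evaluation described in this abstract can be sketched as follows. The score distributions are synthetic and the multiplicative (independence-assuming) fusion rule is an assumption for illustration; the paper does not specify its fusion function.

```python
import random

# Illustrative sketch of score-level fusion: likelihood ratios from an
# automated system and a human examiner are combined by multiplication
# (independence assumption), then a threshold is set for a target FAR.
# The score distributions here are synthetic, not the paper's data.

random.seed(0)
genuine = [(random.lognormvariate(1.5, 0.5), random.lognormvariate(1.0, 0.6))
           for _ in range(500)]   # (system LR, examiner LR), same-person pairs
impostor = [(random.lognormvariate(-1.5, 0.5), random.lognormvariate(-1.0, 0.6))
            for _ in range(500)]  # different-person pairs

def fused(pairs):
    return [s * e for s, e in pairs]

def frr_at_far(gen_scores, imp_scores, target_far=0.01):
    """Pick the threshold admitting target_far of impostors, report the FRR."""
    thr = sorted(imp_scores, reverse=True)[int(target_far * len(imp_scores))]
    return sum(1 for s in gen_scores if s < thr) / len(gen_scores)

frr_fused = frr_at_far(fused(genuine), fused(impostor))
frr_system = frr_at_far([s for s, _ in genuine], [s for s, _ in impostor])
print(f"FRR at 1% FAR -- system alone: {frr_system:.3f}, fused: {frr_fused:.3f}")
```

When the two score sources carry complementary information, the fused FRR at the specified FAR drops below either source alone, which is the effect the abstract reports.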
Procedia PDF Downloads 130
482 A Randomized Active Controlled Clinical Trial to Assess Clinical Efficacy and Safety of Tapentadol Nasal Spray in Moderate to Severe Post-Surgical Pain
Authors: Kamal Tolani, Sandeep Kumar, Rohit Luthra, Ankit Dadhania, Krishnaprasad K., Ram Gupta, Deepa Joshi
Abstract:
Background: Post-operative analgesia remains a clinical challenge, with central and peripheral sensitization playing a pivotal role in treatment-related complications and impaired quality of life. Centrally acting opioids offer a poor risk-benefit profile, with increased gastrointestinal and central side effects and slow onset of clinical analgesia. The objective of this study was to assess the clinical feasibility of induction and maintenance therapy with tapentadol nasal spray (NS) in moderate to severe acute post-operative pain. Methods: Phase III, randomized, active-controlled, non-inferiority clinical trial involving 294 cases who had undergone surgical procedures under general or regional anesthesia. Post-surgery, patients were randomized to receive either tapentadol NS 45 mg or tramadol 100 mg IV as a bolus, with subsequent 50 mg or 100 mg doses given over 2-3 minutes. The NS was administered every 4-6 hours. At the end of 24 hours, patients in the tramadol group with a pain intensity score of ≥4 were switched to an oral tramadol immediate-release 100 mg capsule until the pain intensity score fell below 4. All patients who achieved a pain intensity of ≤4 were shifted to a lower dose of either tapentadol NS 22.5 mg or an oral tramadol immediate-release 50 mg capsule. The statistical analysis plan was envisaged as a non-inferiority comparison with tramadol on the pain intensity difference at 60 minutes (PID60min), the sum of pain intensity differences at 60 minutes (SPID60min), and the Physician Global Assessment at 24 hours (PGA24hrs). Results: The per-protocol analyses involved 255 hospitalized cases undergoing surgical procedures. The median age of patients was 38.0 years. For the primary efficacy variables, tapentadol NS was non-inferior to inj./oral tramadol in the relief of moderate to severe post-operative pain. On the basis of SPID60min, no clinically significant difference was observed between tapentadol NS and tramadol IV (1.73±2.24 vs. 1.64±1.92; difference 0.09 [95% CI, -0.43, 0.60]). For the co-primary endpoint PGA24hrs, tapentadol NS was non-inferior to tramadol IV (2.12±0.707 vs. 2.02±0.704; difference 0.11 [95% CI, -0.07, 0.28]). However, on further assessment at 48, 72, and 120 hours, clinically superior pain relief was observed with the tapentadol NS formulation, statistically significant (p<0.05) at each of these time intervals. Secondary efficacy measures, including the onset of clinical analgesia and TOTPAR, also showed non-inferiority to tramadol. The safety profile and the need for rescue medication were similar in both groups during the treatment period. The most common concomitant medications were anti-bacterials (98.3%). Conclusion: Tapentadol NS is a clinically feasible option for improved compliance as induction and maintenance therapy, offering a sustained and persistent patient response that is clinically meaningful in post-surgical settings.
Keywords: tapentadol nasal spray, acute pain, tramadol, post-operative pain
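The non-inferiority logic used for SPID60min in this trial can be sketched numerically: the treatment is declared non-inferior when the lower bound of the 95% CI for the mean difference stays above the pre-specified margin. The means and SDs below follow the abstract, but the per-group sizes and the margin are assumptions; the trial's actual margin is not stated.

```python
import math

# Sketch of a two-sample non-inferiority check on SPID60min.
# Means/SDs follow the abstract; group sizes (128/127) and the
# non-inferiority margin (1.0) are assumed for illustration.

def noninferior(mean_t, sd_t, n_t, mean_c, sd_c, n_c, margin):
    diff = mean_t - mean_c
    se = math.sqrt(sd_t**2 / n_t + sd_c**2 / n_c)
    lo, hi = diff - 1.96 * se, diff + 1.96 * se   # normal approximation
    return diff, (lo, hi), lo > -margin

diff, ci, ok = noninferior(1.73, 2.24, 128, 1.64, 1.92, 127, margin=1.0)
print(f"difference {diff:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), "
      f"non-inferior: {ok}")
```

With these assumed group sizes the computed interval comes out close to the abstract's reported (-0.43, 0.60), and the lower bound comfortably clears the assumed margin.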
Procedia PDF Downloads 248
481 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs
Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa
Abstract:
Hatchery performance is critical for the profitability of poultry breeder operations, and some extrinsic parameters of eggs and breeders increase or decrease hatchability. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chickens' eggs and to determine the most efficient model for classifying hatchability rates greater than 90%. Seven extrinsic parameters were considered: egg weight, moisture loss, breeder age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the most influential variables on hatchability. First, the correlation between each parameter and hatchability was checked; then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), k-Nearest Neighbors (kNN), Support Vector Machines (SVM) with a linear kernel, and Random Forest (RF) algorithms were then applied to classify hatchability as a binary classification task. Hatchability was negatively correlated with egg weight, breeder age, shell width, and shell length, while positive correlations were identified with moisture loss, number of fertilised eggs, and shell thickness. Multiple linear regression models were more accurate than single linear models, with the highest coefficient of determination (R²) of 94% and minimum AIC and BIC values. According to the classification results, RF, CART, and kNN achieved the highest accuracy values of 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying breeder outcomes as economically profitable or not in a commercial hatchery.
Keywords: classification models, egg weight, fertilised eggs, multiple linear regression
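The screening step in this abstract — correlating each extrinsic parameter with hatchability, then labelling records at the 90% cut-off for binary classification — can be sketched as follows. The simulated data, field slopes, and two-parameter subset are assumptions for illustration, not the study's dataset.

```python
import random

# Sketch of the screening step: Pearson correlation of extrinsic parameters
# with hatchability, then a binary label at the 90% cut-off.
# The records are simulated; only two of the seven parameters are shown.

random.seed(1)
def simulate_record():
    egg_weight = random.gauss(60, 5)
    moisture_loss = random.gauss(12, 2)
    # assumed relationship: hatchability falls with egg weight,
    # rises with moisture loss (directions follow the abstract)
    hatch = (90 - 0.4 * (egg_weight - 60)
             + 0.8 * (moisture_loss - 12) + random.gauss(0, 2))
    return {"egg_weight": egg_weight, "moisture_loss": moisture_loss,
            "hatchability": hatch}

records = [simulate_record() for _ in range(300)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

hatch = [r["hatchability"] for r in records]
for name in ("egg_weight", "moisture_loss"):
    r = pearson([rec[name] for rec in records], hatch)
    print(f"corr({name}, hatchability) = {r:+.2f}")

labels = [1 if h > 90 else 0 for h in hatch]   # 1 = commercially profitable
print(f"profitable share: {sum(labels)/len(labels):.2f}")
```

These binary labels are then what an RF, CART, or kNN classifier would be trained on.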
Procedia PDF Downloads 87
480 Preliminary Assessment for Protective Effect of Rhodiola rosea in Chemically Induced Ulcerative Colitis
Authors: Santram Lodhi, Alok Pal Jain, Awesh K. Yadav, Gopal Rai
Abstract:
Rhodiola rosea L. (Crassulaceae), commonly known as golden root or rose root, is a perennial herbaceous plant and the most investigated species of the genus Rhodiola. Its roots contain flavonoids, terpenoids, phenylpropanoid glycosides, and phenylethanol derivatives. The objective of the present study was to investigate the protective effect of a hydroalcoholic extract of Rhodiola rosea roots in DSS-induced colitis in mice. Ulcerative colitis was induced by DSS (3%, w/v), and weight loss and stool consistency were recorded. Various parameters, including colon length, spleen weight, and ulcer index, were also measured, and histological changes were observed by H&E staining. The effect of the hydroalcoholic extract on antioxidant parameters of the colon, such as tissue myeloperoxidase (MPO), reduced GSH, SOD concentrations, and lipid peroxidation, was determined. Pro-inflammatory mediators, such as tumour necrosis factor-α (TNF-α) and nitric oxide (NO), were determined by ELISA. In the DSS-induced group, mouse body weight decreased gradually compared with the control group; intense redness and edema were observed in the colons, with scores reflecting inflammation, and tissue levels of TNF-α and IL-6 and MPO activity were significantly (p<0.05) increased. Mice treated with the higher dose of hydroalcoholic extract (300 mg/kg) showed a significant reduction of these activities, comparable with the standard drug sulfasalazine (100 mg/kg b.wt.). Conclusion: The results of this study suggest that the efficacy of the hydroalcoholic extract, especially at the higher dose, was similar to that of the standard drug, supporting its potential application as a natural medicine for the treatment of ulcerative colitis.
Keywords: phenylpropanoid, Rhodiola rosea, sulfasalazine, ulcerative colitis
Procedia PDF Downloads 244
479 Grey Relational Analysis Coupled with Taguchi Method for Process Parameter Optimization of Friction Stir Welding on 6061 AA
Authors: Eyob Messele Sefene, Atinkut Atinafu Yilma
Abstract:
The highest strength-to-weight ratio criterion has attracted increasing interest in virtually all areas where weight reduction is indispensable, and one of the recent advances in manufacturing toward this goal is friction stir welding (FSW). The process is widely used for joining similar and dissimilar non-ferrous materials. In FSW, the mechanical properties of the weld joints are governed by the selected process parameters. This paper presents the optimum process parameters for attaining enhanced mechanical properties of the weld joint. The experiment was conducted on a 5 mm 6061 aluminum alloy sheet in a butt joint configuration. The process parameters considered were rotational speed, traverse speed (feed rate), axial force, dwell time, tool material, and tool profile. The parameters were optimized using a mixed L18 orthogonal array and the Grey relational analysis method with the larger-the-better quality characteristic. The mechanical properties of the weld joint were examined through tensile, hardness, and liquid penetrant tests at ambient temperature, and ANOVA was conducted to identify the significant process parameters. This research shows that dwell time, rotational speed, tool shape, and traverse speed are significant, with a joint efficiency of about 82.58%. Nine confirmatory tests were conducted, and the results indicate that the average values of the grey relational grade fall within the 99% confidence interval; hence the experiment is proven reliable.
Keywords: friction stir welding, optimization, 6061 AA, Taguchi
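The Grey relational analysis step this abstract couples with the Taguchi array works by normalizing each response (larger-the-better), computing grey relational coefficients against the ideal sequence, and averaging them into a grade per experimental run. A minimal sketch follows; the response values and equal weighting are assumptions for illustration, not the paper's data.

```python
# Sketch of Grey relational analysis with larger-the-better normalization.
# The four runs and three responses below are hypothetical.

def normalize_larger_better(col):
    lo, hi = min(col), max(col)
    return [(v - lo) / (hi - lo) for v in col]

def grey_relational_grade(rows, zeta=0.5):
    """rows: one list of normalized responses per experimental run."""
    # deviation from the ideal (all-ones) reference sequence
    devs = [[1 - v for v in row] for row in rows]
    dmin = min(min(d) for d in devs)
    dmax = max(max(d) for d in devs)
    coeffs = [[(dmin + zeta * dmax) / (d + zeta * dmax) for d in row]
              for row in devs]
    return [sum(row) / len(row) for row in coeffs]   # equal weights assumed

# hypothetical responses: (tensile strength MPa, hardness HV, joint efficiency)
raw = [[210, 68, 0.76], [245, 72, 0.83], [232, 75, 0.80], [255, 70, 0.81]]
cols = list(zip(*raw))
norm_cols = [normalize_larger_better(list(c)) for c in cols]
norm_rows = [list(r) for r in zip(*norm_cols)]
grades = grey_relational_grade(norm_rows)
best = grades.index(max(grades)) + 1
print(f"grades: {[round(g, 3) for g in grades]}, best run: {best}")
```

The run with the highest grade is the multi-response optimum; ANOVA on the grades then ranks parameter significance, as the abstract describes.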
Procedia PDF Downloads 101
478 Damage Mesomodel Based Low-Velocity Impact Damage Analysis of Laminated Composite Structures
Authors: Semayat Fanta, P.M. Mohite, C.S. Upadhyay
Abstract:
The damage meso-model for laminates is one of the most widely applicable approaches for analysing damage induced in laminated fiber-reinforced polymeric composites. It has been developed over the last three decades by many researchers through experimental, theoretical, and analytical work carried out in both micromechanics and meso-mechanics approaches. It is fundamentally based on a micromechanical description that aims to predict damage initiation and evolution up to the failure of the structure under various loading conditions, and it acts as a bridge between the micromechanics and macro-mechanics of the laminated composite structure. This model considers two meso-constituents, the ply and the interface, for the analysis of damage imparted by low-velocity impact. The damage mechanisms considered in this study include fiber breakage, matrix cracking, and diffused damage of the lamina, and delamination of the interface. Damage initiation and evolution in the laminae are modelled in terms of the damaged strain energy density, using damage parameters and the thermodynamic irreversible forces. Interface damage is modelled with a new concept of a spherical micro-void in the resin-rich zone of the interface material, with evolution controlled by the damage parameter (d) and the radius of the micro-void (r) from the point of damage nucleation to its saturation. The constitutive material model for the meso-constituents is defined in a user material subroutine VUMAT and implemented in the ABAQUS/Explicit finite element modelling tool. The model predicts damage at the meso-constituent level very accurately and is considered a most effective technique for modelling low-velocity impact simulations of laminated composite structures.
Keywords: mesomodel, laminate, low-energy impact, micromechanics
Procedia PDF Downloads 223
477 Hyper Parameter Optimization of Deep Convolutional Neural Networks for Pavement Distress Classification
Authors: Oumaima Khlifati, Khadija Baba
Abstract:
Pavement distress is the main factor responsible for the deterioration of road structure durability, vehicle damage, and reduced driver comfort, and transportation agencies spend a high proportion of their funds on pavement monitoring and maintenance. Auscultation of pavement distress has traditionally been based on manual surveys, which are extremely time-consuming, labor-intensive, and require domain expertise. Automatic distress detection is therefore needed to reduce the cost of manual inspection and to avoid more serious damage by implementing the appropriate remediation actions at the right time. Inspired by recent deep learning applications, this paper proposes an algorithm for automatic road distress detection and classification based on a Deep Convolutional Neural Network (DCNN). The types of pavement distress are classified as transverse or longitudinal cracking, alligator cracking, pothole, and intact pavement. The dataset used in this work is composed of public asphalt pavement images. In order to learn the structure of the different types of distress, the DCNN models are trained and tested as a multi-label classification task. In addition, to achieve the highest accuracy, we adjust the structural hyperparameters, such as the number of convolution and max-pooling layers, the number and size of filters, the loss function, the activation functions, and the optimizer, as well as the fine-tuning hyperparameters, batch size and learning rate. The model is optimized by checking all feasible combinations and selecting the best-performing one. After optimization, performance metrics are calculated, describing the training and validation accuracies, precision, recall, and F1 score.
Keywords: pavement distress, hyperparameters, automatic classification, deep learning
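The exhaustive hyperparameter sweep this abstract describes ("checking all feasible combinations and selecting the best performing one") is a plain grid search. A minimal sketch follows; the search space and the `evaluate` placeholder are assumptions — a real run would train and validate the DCNN for each configuration instead of scoring a toy function.

```python
import itertools

# Sketch of an exhaustive hyper-parameter sweep: every combination in the
# grid is evaluated and the best-performing one is kept.
# `evaluate` stands in for training/validating the DCNN (assumed).

search_space = {
    "n_conv_blocks": [2, 3, 4],
    "filters": [16, 32, 64],
    "batch_size": [16, 32],
    "learning_rate": [1e-2, 1e-3],
}

def evaluate(cfg):
    """Placeholder validation accuracy; this toy score simply prefers
    deeper nets with 32 filters and the smaller learning rate."""
    return (0.6 + 0.05 * cfg["n_conv_blocks"]
            + 0.02 * (cfg["filters"] == 32)
            + 0.1 * (cfg["learning_rate"] == 1e-3))

keys = list(search_space)
best_cfg, best_acc = None, -1.0
for values in itertools.product(*(search_space[k] for k in keys)):
    cfg = dict(zip(keys, values))
    acc = evaluate(cfg)
    if acc > best_acc:
        best_cfg, best_acc = cfg, acc

print(f"best config: {best_cfg}, val accuracy: {best_acc:.3f}")
```

Exhaustive search is tractable only for small grids like this one (36 combinations here); larger spaces usually call for random or Bayesian search instead.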
Procedia PDF Downloads 93
476 Effects of Foam Rolling with Different Application Volumes on the Isometric Force of the Calf Muscle with Consideration of Muscle Activity
Authors: T. Poppendieker, H. Maurer, C. Segieth
Abstract:
Over the past ten years, foam rolling has become a new trend in the fitness and health market and a frequently used technique for self-massage; however, the scope of its effects has only recently begun to be researched and understood. The focus of this study is to examine the effects of prolonged foam rolling on muscle performance. Isometric muscle force was used as the parameter to determine any beneficial impact of the myofascial roller at two different application volumes; data were also collected on muscle activation during all tests. Twenty-four (17 female, 7 male) healthy students with an average age of 23.4 ± 2.8 years were recruited. The study followed a cross-over pre-/post design in which the order of conditions was counterbalanced. The subjects performed a one-minute and a three-minute foam rolling application set on two separate days, and the isometric maximal muscle force of the dominant calf was tested before and after the self-myofascial release application. The statistics software SPSS 22 was used to analyze the maximal isometric force of the calf muscle by a 2 x 2 (time of measurement x intervention) analysis of variance with repeated measures, with the significance level set at p ≤ 0.05. No significant p-values were found, either for the main effect of time of measurement (F(1,23) = .93, p = .36, f = .20) or for the interaction of time of measurement x intervention (F(1,23) = 1.99, p = .17, f = 0.29). However, the effect size indicates a mean interaction effect with a tendency toward greater pre-post improvements under the three-minute foam rolling condition. Changes in maximal force did not correlate with changes in EMG activity (r = .02, p = .95 in the short and r = -.11, p = .65 in the long rolling condition). The results support the findings of previous studies and suggest a positive potential for the foam roll as a means of keeping muscle force at least at the same performance level while increasing flexibility.
Keywords: application volume differences, foam rolling, isometric maximal force, self-myofascial release
Procedia PDF Downloads 287
475 Modified Weibull Approach for Bridge Deterioration Modelling
Authors: Niroshan K. Walgama Wellalage, Tieling Zhang, Richard Dwight
Abstract:
State-based Markov deterioration models (SMDM) sometimes fail to find accurate transition probability matrix (TPM) values and hence lead to invalid future condition predictions or incorrect average deterioration rates, mainly due to drawbacks of existing nonlinear optimization-based algorithms and/or the subjective function types used for regression analysis. Furthermore, a set of separate condition-versus-age functions for each condition state cannot be directly derived from a Markov model for a given bridge element group, although such functions are of interest to industrial partners. This paper presents a new approach for generating homogeneous SMDM output, the Modified Weibull approach, which consists of a set of appropriate functions describing the predicted percentage of bridge elements in each condition state. These functions are combined with a Bayesian approach and a Metropolis-Hastings algorithm (MHA) based Markov Chain Monte Carlo (MCMC) simulation technique for quantifying the uncertainty in model parameter estimates. In this study, factors contributing to rail bridge deterioration were identified, and inspection data for 1,000 Australian railway bridges over 15 years were reviewed and filtered accordingly based on real operational experience. A network-level deterioration model for a typical bridge element group was developed using the proposed Modified Weibull approach, and the condition state predictions obtained from this method were validated using statistical hypothesis tests with a test data set. Results show that the proposed model is able not only to predict network-level conditions accurately but also to capture the model uncertainties within a given confidence interval.
Keywords: bridge deterioration modelling, Modified Weibull approach, MCMC, Metropolis-Hastings algorithm, Bayesian approach, Markov deterioration models
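The Metropolis-Hastings MCMC step this abstract uses for quantifying parameter uncertainty can be sketched on a toy version of the problem: sampling the posterior of Weibull shape and scale from synthetic sojourn times. The data, flat priors, proposal widths, and chain length are all assumptions for illustration, not the paper's setup.

```python
import math
import random

# Minimal Metropolis-Hastings sketch: sampling the posterior of Weibull
# shape/scale parameters from synthetic deterioration times.
# Data, priors, and tuning constants are assumed for illustration.

random.seed(42)
true_shape, true_scale = 2.0, 30.0
data = [random.weibullvariate(true_scale, true_shape) for _ in range(200)]

def log_post(shape, scale):
    if shape <= 0 or scale <= 0:
        return -math.inf
    # Weibull log-likelihood with flat priors on the positive axis
    return sum(math.log(shape / scale)
               + (shape - 1) * math.log(t / scale)
               - (t / scale) ** shape for t in data)

shape, scale = 1.0, 10.0           # deliberately poor starting point
samples = []
for i in range(5000):
    prop_shape = shape + random.gauss(0, 0.1)   # random-walk proposals
    prop_scale = scale + random.gauss(0, 1.0)
    if math.log(random.random()) < log_post(prop_shape, prop_scale) - log_post(shape, scale):
        shape, scale = prop_shape, prop_scale   # accept
    if i >= 2000:                                # discard burn-in
        samples.append((shape, scale))

mean_shape = sum(s for s, _ in samples) / len(samples)
mean_scale = sum(c for _, c in samples) / len(samples)
print(f"posterior means: shape ~ {mean_shape:.2f}, scale ~ {mean_scale:.2f}")
```

The retained samples also yield credible intervals directly (e.g. the 2.5th and 97.5th percentiles), which is how the model uncertainty bands the abstract mentions would be produced.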
Procedia PDF Downloads 727
474 Site Investigations and Mitigation Measures of Landslides in Sainj and Tirthan Valley of Kullu District, Himachal Pradesh, India
Authors: Laxmi Versain, R. S. Banshtu
Abstract:
Landslides are among the most commonly occurring geological hazards in the mountainous regions of the Himalaya. This mountainous zone faces frequent seismic disturbance, climatic change, and topographic modification due to increasing urbanization, which has led several researchers to seek the most suitable methodologies for assessing the hazard. Landslide Hazard Zonation (LHZ) has become a widely used method of identifying the factors that trigger landslides on the higher reaches: the most vulnerable zones, or zones of weakness, are identified, and safe mitigation measures are suggested to protect and channelize development in the affected area. The application of LHZ methodology to relative zones of weakness depends on the data available for the particular site; the causative factors are identified, and the data are compiled to infer the results. Factors like seismicity in mountainous regions are closely associated with making zones along thrusts, faults, or lineaments more vulnerable. Data on soil, terrain, rainfall, geology, slope, and the nature of the terrain vary across landforms and areas; the relative causes must therefore be identified and classified by giving a specific weightage to each parameter. The factors that cause slope instability are numerous and can be grouped to infer the potential modes of failure, and the triggering factors of landslides on mountains are not uniform. Urbanization has advanced rapidly, and concrete development is spreading quickly across the hilly regions of the Himalaya; the local terrain has been extensively modified, and instability is consequently being triggered in several zones at a fast pace. More strategic and pronounced methods are required to reduce the effects of landslides.
Keywords: zonation, LHZ, susceptible, weightages, methodology
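The weightage step this abstract describes — rating each causative factor per map unit, weighting it, and summing into a hazard score — can be sketched as a weighted overlay. The factor list, weights, rating scale, and class breaks below are all assumptions for illustration; the paper does not publish its weighting scheme here.

```python
# Sketch of a weighted-overlay Landslide Hazard Zonation score.
# Factors, weights, 1..9 ratings, and class breaks are assumed.

weights = {"slope": 0.30, "geology": 0.25, "rainfall": 0.20,
           "lineament_proximity": 0.15, "land_use": 0.10}

def hazard_index(ratings):
    """ratings: factor -> class rating on a 1..9 scale for one map unit."""
    return sum(weights[f] * ratings[f] for f in weights)

def hazard_class(index):
    if index < 3:
        return "low"
    if index < 5:
        return "moderate"
    if index < 7:
        return "high"
    return "very high"

# two hypothetical map units
gentle_forested = {"slope": 2, "geology": 3, "rainfall": 4,
                   "lineament_proximity": 2, "land_use": 1}
steep_cut_slope = {"slope": 9, "geology": 7, "rainfall": 6,
                   "lineament_proximity": 8, "land_use": 7}

for name, unit in [("gentle forested", gentle_forested),
                   ("steep cut slope", steep_cut_slope)]:
    idx = hazard_index(unit)
    print(f"{name}: index {idx:.2f} -> {hazard_class(idx)}")
```

In practice the same calculation is run per raster cell in a GIS, and the binned index becomes the published hazard zonation map.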
Procedia PDF Downloads 196
473 Quantitative Evaluation of Supported Catalysts Key Properties from Electron Tomography Studies: Assessing Accuracy Using Material-Realistic 3D-Models
Authors: Ainouna Bouziane
Abstract:
The ability of electron tomography to recover the 3D structure of catalysts, with spatial resolution at the subnanometer scale, has been widely explored and reviewed over recent decades. A variety of experimental techniques, based either on Transmission Electron Microscopy (TEM) or Scanning Transmission Electron Microscopy (STEM), have been used to reveal different features of nanostructured catalysts in 3D, but High Angle Annular Dark Field imaging in STEM mode (HAADF-STEM) stands out as the most frequently used, given its chemical sensitivity and its avoidance of imaging artifacts related to diffraction phenomena when dealing with crystalline materials. In this regard, our group has developed a methodology that combines image denoising by undecimated wavelet transforms (UWT) with automated, advanced segmentation procedures and parameter selection methods using CS-TVM (Compressed Sensing - Total Variation Minimization) algorithms to extract more reliable quantitative information from 3D characterization studies. However, evaluating the accuracy of the magnitudes estimated from the segmented volumes is an important issue that has not yet been properly addressed, because a perfectly known reference is needed; the problem becomes particularly complicated for multicomponent material systems. To tackle this key question, we have developed a methodology that incorporates volume reconstruction and segmentation methods. In particular, we have established an approach to evaluate, in quantitative terms, the accuracy of TVM reconstructions, which considers the influence of relevant experimental parameters such as the range of tilt angles, the image noise level, and the object orientation. The approach is based on the analysis of material-realistic 3D phantoms, which include the most relevant features of the system under analysis.
Keywords: electron tomography, supported catalysts, nanometrology, error assessment
Procedia PDF Downloads 85
472 Domain-Specific Deep Neural Network Model for Classification of Abnormalities on Chest Radiographs
Authors: Nkechinyere Joy Olawuyi, Babajide Samuel Afolabi, Bola Ibitoye
Abstract:
This study collected a preprocessed dataset of chest radiographs and formulated a deep neural network model for detecting abnormalities. It also evaluated the performance of the formulated model and implemented a prototype of the formulated model. This was with the view to developing a deep neural network model to automatically classify abnormalities in chest radiographs. In order to achieve the overall purpose of this research, a large set of chest x-ray images were sourced for and collected from the CheXpert dataset, which is an online repository of annotated chest radiographs compiled by the Machine Learning Research Group, Stanford University. The chest radiographs were preprocessed into a format that can be fed into a deep neural network. The preprocessing techniques used were standardization and normalization. The classification problem was formulated as a multi-label binary classification model, which used convolutional neural network architecture to make a decision on whether an abnormality was present or not in the chest radiographs. The classification model was evaluated using specificity, sensitivity, and Area Under Curve (AUC) score as the parameter. A prototype of the classification model was implemented using Keras Open source deep learning framework in Python Programming Language. The AUC ROC curve of the model was able to classify Atelestasis, Support devices, Pleural effusion, Pneumonia, A normal CXR (no finding), Pneumothorax, and Consolidation. However, Lung opacity and Cardiomegaly had a probability of less than 0.5 and thus were classified as absent. Precision, recall, and F1 score values were 0.78; this implies that the number of False Positive and False Negative is the same, revealing some measure of label imbalance in the dataset. 
The study concluded that the developed model is sufficient to classify abnormalities in chest radiographs as present or absent. Keywords: transfer learning, convolutional neural network, radiograph, classification, multi-label
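The evaluation metrics named in the abstract above (sensitivity, specificity, AUC, and the 0.5 probability cut-off used to call a finding absent) can be sketched in plain NumPy. This is an illustrative reimplementation on toy per-label scores, not the authors' CheXpert pipeline; the function names and the toy data are assumptions.

```python
import numpy as np

def sensitivity_specificity(y_true, y_prob, threshold=0.5):
    """Per-label sensitivity (recall on positives) and specificity (recall on
    negatives), using the same 0.5 cut-off the abstract uses to call a
    finding absent."""
    y_pred = (y_prob >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return tp / (tp + fn), tn / (tn + fp)

def auc_score(y_true, y_prob):
    """AUC as the probability that a random positive outranks a random
    negative (Mann-Whitney formulation)."""
    pos = y_prob[y_true == 1]
    neg = y_prob[y_true == 0]
    greater = np.sum(pos[:, None] > neg[None, :])
    ties = np.sum(pos[:, None] == neg[None, :])
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# toy scores for one label (hypothetical, for illustration only)
y_true = np.array([1, 1, 1, 0, 0, 0])
y_prob = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.6])
sens, spec = sensitivity_specificity(y_true, y_prob)
auc = auc_score(y_true, y_prob)
```

In a multi-label setting these would be computed once per finding (Atelectasis, Pneumonia, etc.) and averaged or reported per label.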
471 Design and Development of Graphene Oxide Modified by Chitosan Nanosheets Showing pH-Sensitive Surface as a Smart Drug Delivery System for Controlled Release of Doxorubicin
Authors: Parisa Shirzadeh
Abstract:
Drug delivery systems in which drugs are administered traditionally, in multiple stages and at specified intervals by patients, do not meet the needs of modern drug delivery. In today's world, we are dealing with a huge number of recombinant peptide and protein drugs and analogues of hormones in the body, most of which are made with genetic engineering techniques. Most of these drugs are used to treat critical diseases such as cancer. Due to the limitations of the traditional method, researchers sought ways to solve its problems to a large extent. Following these efforts, controlled drug release systems were introduced, which have many advantages: with controlled release, the concentration of the drug in the body is kept at a certain level, and delivery can proceed at a higher rate over a shorter time. Graphene is a biodegradable, non-toxic, natural material; compared to carbon nanotubes, its price is lower and it is cost-effective for industrialization. On the other hand, the highly reactive sites and wide surfaces of graphene plates make graphene more amenable to modification than carbon nanotubes. Graphene oxide is often synthesized using concentrated oxidizers such as sulfuric acid, nitric acid, and potassium permanganate, based on the Hummers method. Compared with the initial graphene, the resulting graphene oxide is heavier and bears carboxyl, hydroxyl, and epoxy groups. Therefore, graphene oxide is very hydrophilic, dissolves easily in water and forms a stable solution. Furthermore, because the hydroxyl, carboxyl, and epoxy groups created on the surface are highly reactive, they can react with other functional groups such as amines, esters, and polymers, connecting them to the surface of graphene and imparting new properties to it.
It can thus be concluded that the creation of hydroxyl, carboxyl, and epoxy groups, that is, graphene oxidation, is the first step in creating other functional groups on the surface of graphene. Chitosan is a natural polymer and does not cause toxicity in the body. Due to its chemical structure, with OH and NH groups, it is suitable for binding to graphene oxide and increases its solubility in aqueous solutions. In this work, graphene oxide (GO) was covalently modified with chitosan (CS) and developed for the controlled release of doxorubicin (DOX). GO was produced by the Hummers method under acidic conditions. It was then chlorinated with oxalyl chloride to increase its reactivity toward amines. After that, in the presence of chitosan, an amidation reaction was performed to form amide linkages, and doxorubicin was attached to the carrier surface by π-π interaction in phosphate buffer. GO, GO-CS, and GO-CS-DOX were characterized by FT-IR, Raman, TGA, and SEM. Loading and release capacities were determined by UV-Visible spectroscopy. The loading results showed a high DOX absorption capacity (99%), and pH-dependent release of DOX from the GO-CS nanosheets was identified at pH 5.3 and 7.4, with a faster release rate under acidic conditions. Keywords: graphene oxide, chitosan, nanosheet, controlled drug release, doxorubicin
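The loading and release percentages quoted above are simple ratios once concentrations have been read off a UV-Vis calibration (Beer-Lambert law). A minimal sketch follows; the absorptivity and the masses are hypothetical numbers chosen only to be consistent with the ~99% loading the abstract reports, not values from the study.

```python
def beer_lambert_conc(absorbance, epsilon, path_cm=1.0):
    """Concentration from absorbance via Beer-Lambert: A = epsilon * c * l."""
    return absorbance / (epsilon * path_cm)

def loading_efficiency(dox_added_mg, dox_unbound_mg):
    """Percent of added drug captured by the carrier (here, GO-CS)."""
    return 100.0 * (dox_added_mg - dox_unbound_mg) / dox_added_mg

def cumulative_release(released_mg, loaded_mg):
    """Percent of the loaded drug released into the medium so far."""
    return 100.0 * released_mg / loaded_mg

# hypothetical numbers: 1 mg DOX added, 0.01 mg left unbound in supernatant
eff = loading_efficiency(1.0, 0.01)
# hypothetical calibration: A = 0.5 at epsilon = 10.0 mL/(mg*cm)
conc = beer_lambert_conc(0.5, 10.0)
```

With these numbers the loading efficiency evaluates to 99%, matching the order of the reported value.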
470 The Feasibility of Glycerol Steam Reforming in an Industrial Sized Fixed Bed Reactor Using Computational Fluid Dynamic (CFD) Simulations
Authors: Mahendra Singh, Narasimhareddy Ravuru
Abstract:
For the past decade, the production of biodiesel has significantly increased along with its by-product, glycerol. The massive entry of biodiesel-derived glycerol into the glycerol market has caused its value to plummet. Newer ways to utilize the glycerol by-product must be implemented, or the biodiesel industry will face serious economic problems. The biodiesel industry should consider steam reforming glycerol to produce hydrogen gas. Steam reforming is the most efficient way of producing hydrogen, and there is high demand for it in the petroleum and chemical industries. This study investigates the feasibility of glycerol steam reforming in an industrial sized fixed bed reactor. In this paper, using computational fluid dynamic (CFD) simulations, the extent of the transport resistances that would occur in an industrial sized reactor can be visualized. An important parameter in reactor design is the size of the catalyst particle. The catalyst particle cannot be so large that transport resistances become too high, nor so small that an extraordinary pressure drop occurs. The goal of this paper is to find the catalyst size that, under various flow rates, results in the highest conversion. Computational fluid dynamics simulated the transport resistances, and a pseudo-homogeneous reactor model was used to evaluate the pressure drop and conversion. CFD simulations showed that glycerol steam reforming has strong internal diffusion resistances, resulting in extremely low effectiveness factors. In the pseudo-homogeneous reactor model, the highest conversion obtained with a Reynolds number of 100 (29.5 kg/h) was 9.14%, using a 1/6 inch catalyst diameter. Due to the low effectiveness factors and high carbon deposition rates, a fluidized bed is recommended as the appropriate reactor to carry out glycerol steam reforming. Keywords: computational fluid dynamic, fixed bed reactor, glycerol, steam reforming, biodiesel
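The catalyst-size trade-off described above (internal diffusion lowering the effectiveness factor as pellets grow, pressure drop rising as they shrink) can be sketched with the textbook first-order-sphere effectiveness factor and the Ergun equation. The rate constant, diffusivity, and bed properties below are illustrative assumptions, not values from the paper.

```python
import math

def effectiveness_factor_sphere(k, De, dp):
    """First-order reaction in a spherical pellet: Thiele modulus
    phi = (R/3)*sqrt(k/De), eta = (1/phi)*(1/tanh(3*phi) - 1/(3*phi))."""
    R = dp / 2.0
    phi = (R / 3.0) * math.sqrt(k / De)
    return (1.0 / phi) * (1.0 / math.tanh(3.0 * phi) - 1.0 / (3.0 * phi))

def ergun_dp_per_length(u, dp, eps, rho, mu):
    """Ergun equation: pressure drop per unit bed length (Pa/m) for
    superficial velocity u, particle size dp, void fraction eps."""
    viscous = 150.0 * mu * (1 - eps) ** 2 * u / (eps ** 3 * dp ** 2)
    inertial = 1.75 * rho * (1 - eps) * u ** 2 / (eps ** 3 * dp)
    return viscous + inertial

# illustrative numbers only: a ~4 mm pellet (roughly 1/6 inch) with strong
# internal diffusion gives a low effectiveness factor
eta = effectiveness_factor_sphere(k=50.0, De=1e-6, dp=0.004)
dp_per_m = ergun_dp_per_length(u=1.0, dp=0.004, eps=0.4, rho=1.0, mu=1e-5)
```

Sweeping `dp` in both functions reproduces the qualitative optimum the paper searches for: eta falls and the Ergun drop rises as `dp` moves in opposite directions.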
469 Enhancing Warehousing Operation in Cold Supply Chain through the Use of IoT and LiFi Technologies
Authors: Sarah El-Gamal, Passent Hossam, Ahmed Abd El Aziz, Rojina Mahmoud, Ahmed Hassan, Dalia Hilal, Eman Ayman, Hana Haytham, Omar Khamis
Abstract:
Several concerns fall upon the supply chain, especially the cold supply chain. According to the literature, the main challenges in the cold supply chain lie in the distribution and storage phases. In this research, the researchers focused on the storage area, which contains several activities, such as picking, that face many obstacles and challenges. The implementation of IoT solutions enables businesses to monitor the temperature of food items, which is perhaps the most critical parameter in cold chains. Therefore, the researchers proposed a practical solution that would help eliminate the problems related to ineffective picking of products, especially fish and seafood products, by using IoT technology, most notably LiFi technology, thus guaranteeing sufficient picking, reducing waste, and consequently lowering costs. A prototype was specially designed and examined. This research is a single case study. Two methods of data collection were used: observation and semi-structured interviews. Semi-structured interviews were conducted with managers and a decision maker at Carrefour Alexandria to validate the problem and the proposed practical solution using IoT and LiFi technology. A total of three interviews were conducted. As a result, a SWOT analysis was performed to highlight the strengths and weaknesses of using the recommended LiFi solution in the picking process. According to the investigations, it was found that the use of IoT and LiFi technology is cost-effective and efficient, reduces human errors, and minimizes the percentage of product waste, thus saving money and increasing customer satisfaction and profits. Keywords: cold supply chain, picking process, temperature control, IoT, warehousing, LiFi
468 Next-Generation Laser-Based Transponder and 3D Switch for Free Space Optics in Nanosatellite
Authors: Nadir Atayev, Mehman Hasanov
Abstract:
Future spacecraft will require a structural change in the way data is transmitted due to the increase in the volume of data required for space communication. Current radio frequency communication systems are already facing a bottleneck in the volume of data sent to the ground segment due to their technological and regulatory characteristics. To overcome these issues, free space optics (FSO) communication plays an important role in the integrated terrestrial space network due to its advantages, such as a significantly improved data rate compared to traditional RF technology, low cost, improved security, and inter-satellite free space communication; it uses a laser beam as the optical signal carrier to establish satellite-to-ground and ground-to-satellite links. In this approach, there is a need for high-speed and energy-efficient systems as a base platform for sending high-volume video and audio data. Nanosatellites, and the CubeSat platform in particular, offer more technical functionality per unit cost than large satellites and cover an important part of the space sector, with Low Earth Orbit applications, low-cost designs, and the ability to build networks using different communication topologies. Along this research theme, the output parameters of the FSO optical communication transceiver subsystem on existing CubeSat platforms were studied, and, with a view to improving those parameters, a 3D optical switch and a laser-beam-controlled optical transponder were investigated together with 2U CubeSat structural subsystems, covering their functional performance, structural parameters, and application in a Low Earth Orbit satellite network topology. Keywords: CubeSat, free space optics, nanosatellite, optical laser communication
467 Modeling and Design of E-mode GaN High Electron Mobility Transistors
Authors: Samson Mil'shtein, Dhawal Asthana, Benjamin Sullivan
Abstract:
The wide energy gap of GaN is the major parameter justifying the design and fabrication of high-power electronic components made of this material. However, the existence of a piezoelectric sheet charge at the AlGaN/GaN interface complicates the control of carrier injection into the intrinsic channel of GaN HEMTs (High Electron Mobility Transistors). As a result, most of the transistors created as R&D prototypes and all of the designs used for mass production are D-mode devices, which introduces challenges in the design of integrated circuits. This research presents the design and modeling of an E-mode GaN HEMT with a very low turn-on voltage. The proposed device includes two critical elements allowing the transistor to achieve zero conductance across the channel when Vg = 0V. This is accomplished through the inclusion of an extremely thin, 2.5nm intrinsic Ga₀.₇₄Al₀.₂₆N spacer layer. The added spacer layer does not create piezoelectric strain but rather elastically follows the variations of the crystal structure of the adjacent GaN channel. The second important factor is the design of a gate metal with a high work function. The use of a metal gate with a work function (Ni in this research) greater than 5.3eV, positioned on top of n-type doped (Nd=10¹⁷cm⁻³) Ga₀.₇₄Al₀.₂₆N, creates the necessary built-in potential, which controls the injection of electrons into the intrinsic channel as the gate voltage is increased. The 5µm long transistor with a 0.18µm long gate and a channel width of 30µm operates at Vd=10V. At Vg=1V, the device reaches a maximum drain current of 0.6mA, which indicates a high current density. The presented device is operational at frequencies greater than 10GHz and exhibits a stable transconductance over the full range of operational gate voltages. Keywords: compound semiconductors, device modeling, enhancement mode HEMT, gallium nitride
466 A Data-Driven Agent Based Model for the Italian Economy
Authors: Michele Catalano, Jacopo Di Domenico, Luca Riccetti, Andrea Teglio
Abstract:
We develop a data-driven agent-based model (ABM) for the Italian economy. We calibrate the model for the initial conditions and parameters. As a preliminary step, we replicate the Monte Carlo simulation for the Austrian economy. Then, we evaluate the dynamic properties of the model: the long-run equilibrium and the allocative efficiency in terms of the disequilibrium patterns arising in the search and matching process for the final goods, capital, intermediate goods, and credit markets. In this perspective, we use a randomized initial condition approach. We perform a robustness analysis, perturbing the system under different parameter setups. We explore the empirical properties of the model using a rolling window forecast exercise from 2010 to 2022 to observe the model's forecasting ability in the wake of the COVID-19 pandemic. We analyze the properties of the model with different numbers of agents, that is, with different scales of the model compared to the real economy. The model generally displays transient dynamics that properly fit macroeconomic data in terms of forecasting ability. We stress the model with a large set of shocks, namely interest rate policy, fiscal policy, and exogenous factors such as external foreign demand for exports. In this way, we can identify the most exposed sectors of the economy. Finally, we modify the technology mix of the various sectors and, consequently, the underlying input-output sectoral interdependence, to stress the economy and observe the long-run projections. In this way, we can include in the model the generation of endogenous crises due to the implied structural change, technological unemployment, and a potential lack of aggregate demand, creating the conditions for cyclical endogenous crises reproduced in this artificial economy. Keywords: agent-based models, behavioral macro, macroeconomic forecasting, micro data
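The search-and-matching disequilibrium the abstract refers to can be illustrated with a toy one-market round: consumers visit a few random firms and buy where stock remains, and the unmet demand left over is a crude disequilibrium measure. This is a deliberately minimal sketch in which all sizes and the matching protocol are assumptions, not the authors' calibrated ABM.

```python
import random

def market_session(n_firms=50, n_consumers=200, output_per_firm=3,
                   visits=2, seed=0):
    """One search-and-matching round: each consumer visits `visits` random
    firms and buys one unit from the first one with stock left.
    Returns (units sold, unmet demand)."""
    rng = random.Random(seed)
    stock = [output_per_firm] * n_firms
    sold = unmet = 0
    for _ in range(n_consumers):
        bought = False
        for _ in range(visits):
            f = rng.randrange(n_firms)
            if stock[f] > 0:
                stock[f] -= 1
                sold += 1
                bought = True
                break
        if not bought:
            unmet += 1
    return sold, unmet

sold, unmet = market_session()
```

With supply of 150 units and demand of 200, at least 50 consumers must go unserved; limited search typically leaves additional units unsold as well, which is the kind of frictional inefficiency the full model quantifies market by market.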
465 Investigations on the Influence of Web Openings on the Load Bearing Behavior of Steel Beams
Authors: Felix Eyben, Simon Schaffrath, Markus Feldmann
Abstract:
A building should maximize the potential for use through its design; therefore, flexible use is always important when designing a steel structure. To create flexibility, steel beams with web openings are increasingly used, because these offer the advantage that cables, pipes and other technical equipment can easily be routed through without detours, allowing for more space-saving and aesthetically pleasing construction. This can also significantly reduce the height of ceiling systems. Until now, beams with web openings have not been explicitly considered in the European standard. However, this is to be done in the new EN 1993-1-13, in which design rules for different opening forms are defined. In order to further develop the design concepts, beams with web openings under bending are therefore being investigated in terms of damage mechanics as part of a German national research project aiming to optimize the verifications for steel structures based on a wider database and a validated damage prediction. For this purpose, the fundamental factors influencing the load-bearing behavior of girders with web openings under bending load were first investigated numerically, without taking material damage into account. Various parameter studies were carried out for this purpose. For example, the factors under study were the opening shape, size and position, as well as structural aspects such as the span length, arrangement of stiffeners and loading situation. The load-bearing behavior is evaluated using the resulting load-deformation curves. These results are compared with the design rules and critically analyzed. Experimental tests are also planned based on these results. Moreover, the implementation of damage mechanics in the form of the modified Bai-Wierzbicki model was examined.
After the experimental tests have been carried out, the numerical models will be validated and further influencing factors investigated on the basis of parametric studies. Keywords: damage mechanics, finite element, steel structures, web openings
464 Bioavailability of Zinc to Wheat Grown in the Calcareous Soils of Iraqi Kurdistan
Authors: Muhammed Saeed Rasheed
Abstract:
Knowledge of the zinc and phytic acid (PA) concentrations of staple cereal crops is essential when evaluating the nutritional health of national and regional populations. In the present study, a total of 120 farmers' fields in Iraqi Kurdistan were surveyed for zinc status in soil and wheat grain samples; wheat is the staple carbohydrate source in the region. Soils were analysed for total concentrations of phosphorus (PT) and zinc (ZnT), available P (POlsen) and Zn (ZnDTPA), and for pH. Average values (mg kg⁻¹) ranged between 403-3740 (PT), 42.0-203 (ZnT), 2.13-28.1 (POlsen) and 0.14-5.23 (ZnDTPA); pH was in the range 7.46-8.67. The concentrations of Zn, the PA/Zn molar ratio, and the estimated Zn bioavailability were also determined in wheat grain. The ranges of Zn and PA concentrations (mg kg⁻¹) were 12.3-63.2 and 5400-9300, respectively, giving a PA/Zn molar ratio of 15.7-30.6. A trivariate model was used to estimate the intake of bioaccessible Zn, employing the following parameter values: (i) maximum Zn absorption, AMAX = 0.09; (ii) equilibrium dissociation constant of the zinc-receptor binding reaction, KR = 0.033; and (iii) equilibrium dissociation constant of the Zn-PA binding reaction, KP = 0.680. In the model, total daily absorbed Zn (TAZ) (mg d⁻¹) as a function of total daily nutritional PA (mmol d⁻¹) and total daily nutritional Zn (mmol d⁻¹) was estimated assuming an average wheat flour consumption of 300 g day⁻¹ in the region. Consideration of the PA and Zn intakes suggests only 21.5±2.9% of grain Zn is bioavailable, so that the effective Zn intake from wheat is only 1.84-2.63 mg d⁻¹ for the local population. Overall, the results suggest available dietary Zn is below recommended levels (11 mg d⁻¹), partly due to low uptake by wheat but also due to the presence of large concentrations of PA in wheat grains.
A crop breeding program combined with improved agronomic management methods is needed to enhance both Zn uptake and bioavailability in the grains of cultivated wheat types. Keywords: phosphorus, zinc, phytic acid, phytic acid to zinc molar ratio, zinc bioavailability
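The trivariate model with the quoted parameter values can be written down directly. The closed form below is the Miller-type saturation model these parameters are usually attached to (an assumption here, since the abstract does not print the equation), and the grain composition numbers are hypothetical values taken from within the reported ranges.

```python
import math

# parameter values quoted in the abstract (mmol/d)
AMAX = 0.09   # maximal absorbed Zn
KP = 0.680    # Zn-phytate dissociation constant
KR = 0.033    # Zn-receptor dissociation constant

def absorbed_zn(tdz, tdp):
    """Total daily absorbed Zn (mmol/d) from dietary Zn (tdz) and dietary
    phytate (tdp), both in mmol/d (Miller-type trivariate model)."""
    b = AMAX + tdz + KR * (1.0 + tdp / KP)
    return 0.5 * (b - math.sqrt(b * b - 4.0 * AMAX * tdz))

# hypothetical grain within the reported ranges: 30 mg/kg Zn, 7000 mg/kg PA,
# 300 g flour per day; molar masses ~65.38 g/mol (Zn) and ~660 g/mol (PA)
tdz = 0.300 * 30.0 / 65.38
tdp = 0.300 * 7000.0 / 660.0
taz = absorbed_zn(tdz, tdp)
fraction = taz / tdz        # bioavailable share of grain Zn
taz_mg = taz * 65.38        # absorbed Zn in mg/d
```

For this hypothetical grain the model yields an absorbed fraction in the low-20% range and roughly 2 mg d⁻¹ of absorbed Zn, consistent with the 21.5±2.9% bioavailability and 1.84-2.63 mg d⁻¹ the abstract reports.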
463 Edible Active Antimicrobial Coatings onto Plastic-Based Laminates and Their Performance Assessment on the Shelf Life of Vacuum Packaged Beef Steaks
Authors: Andrey A. Tyuftin, David Clarke, Malco C. Cruz-Romero, Declan Bolton, Seamus Fanning, Shashi K. Pankaj, Carmen Bueno-Ferrer, Patrick J. Cullen, Joe P. Kerry
Abstract:
Prolonging shelf life is essential in order to address issues such as supplier demands across continents, economic profit, customer satisfaction, and reduction of food wastage. Smart packaging solutions in the form of naturally derived antimicrobially active packaging may be a solution to these and other issues. A gelatin film-forming solution with added naturally sourced antimicrobials is a promising tool for active smart packaging. The objective of this study was to coat a conventional hydrophobic plastic packaging material with a hydrophilic antimicrobial active beef gelatin coating and conduct shelf life trials on beef sub-primal cuts. The minimum inhibitory concentrations (MIC) of caprylic acid sodium salt (SO) and of the commercially available Auranta FV (AFV; a bitter orange extract with a mixture of nutritive organic acids) were found to be 1% and 1.5%, respectively, against the bacterial strains Bacillus cereus, Pseudomonas fluorescens, Escherichia coli, Staphylococcus aureus and aerobic and anaerobic beef microflora. Therefore, SO or AFV was incorporated into the beef gelatin film-forming solution at twice the MIC, and this was coated onto a conventional LDPE/PA plastic film on the inner, cold-plasma-treated polyethylene surface. Beef samples were vacuum packed in this material, stored under chilled conditions, and sampled at weekly intervals during the 42-day shelf life study. No significant differences (p < 0.05) in cook loss were observed among the different treatments compared to control samples until day 29. Only for the AFV-coated beef samples was cook loss 3% higher (37.3%) than the control (34.4%), on day 36. It was found that the antimicrobial films did not protect beef against discoloration. SO-containing packages significantly (p < 0.05) reduced total viable bacterial counts (TVC) compared to the control and AFV samples until day 35.
No significant difference in TVC was observed between the SO and AFV films on day 42, but a significant difference was observed compared to control samples, with a 1.40 log reduction of bacteria on day 42. AFV films significantly (p < 0.05) reduced TVC compared to control samples from day 14 until day 42. Control samples reached the set value of 7 log CFU/g on day 27 of testing; AFV films did not reach this set limit until day 35, and SO films until day 42. The antimicrobial AFV and SO coated films thus significantly prolonged the shelf life of beef steaks by 33% or 55% (by 7 and 14 days, respectively) compared to control film samples. It is concluded that antimicrobial coated films were successfully developed by coating the inner polyethylene layer of conventional LDPE/PA laminated films after plasma surface treatment. The results indicated that the use of antimicrobial active packaging coated with SO or AFV significantly (p < 0.05) increased the shelf life of the beef sub-primals. Overall, AFV- or SO-containing gelatin coatings have the potential to be used as effective antimicrobials in active packaging applications for muscle-based food products. Keywords: active packaging, antimicrobials, edible coatings, food packaging, gelatin films, meat science
462 Effect of Z-VAD-FMK on in Vitro Viability of Dog Follicles
Authors: Leda Maria Costa Pereira, Maria Denise Lopes, Nucharin Songsasen
Abstract:
Mammalian ovaries contain thousands of follicles that eventually degenerate or die after culture in vitro. Caspase-3 is a key enzyme regulating cell death. Our objective was to examine the influence of the anti-apoptotic drug Z-VAD-FMK (a pan-caspase inhibitor) on the in vitro viability of dog follicles within the ovarian cortex. Ovaries were obtained from prepubertal (age, 2.5–6 months) and adult (age, 8 months to 2 years) bitches, and ovarian cortical fragments were recovered. The cortices were then incubated on 1.5% (w/v) agarose gel blocks within a 24-well culture plate (three cortical pieces/well) containing Minimum Essential Medium Eagle - Alpha Modification (Alpha MEM) supplemented with 4.2 µg/ml insulin, 3.8 µg/ml transferrin, 5 ng/ml selenium, 2 mM L-glutamine, 100 µg/mL of penicillin G sodium, 100 µg/mL of streptomycin sulfate, 0.05 mM ascorbic acid, 10 ng/mL of FSH and 0.1% (w/v) polyvinyl alcohol, in a humidified atmosphere of 5% CO2 and 5% O2. The cortices were divided into six treatment groups: 1) 10 ng/mL EGF (EGF V0); 2) 10 ng/mL of EGF plus 1 mM Z-VAD-FMK (EGF V1); 3) 10 ng/mL of EGF and 10 mM Z-VAD-FMK (EGF V10); 4) 1 mM Z-VAD-FMK; 5) 10 mM Z-VAD-FMK; and 6) no EGF and Z-VAD-FMK supplementation. Ovarian follicles within the tissues were processed for histology and assessed for follicle density, viability (based on morphology) and diameter immediately after collection (control) or after 3 or 7 days of in vitro incubation. Comparisons among the fresh and cultured treatment groups were performed using ANOVA. There were no differences (P > 0.05) in follicle density and viability among the different culture treatments. However, there were differences in these parameters between culture days. Specifically, culturing tissue for 7 days resulted in a significant reduction in follicle viability and density, regardless of treatment. We found a difference in size between culture days when the follicles were cultured using 10 mM Z-VAD-FMK or 10 ng/mL EGF (EGF V0).
In sum, the findings demonstrated that Z-VAD-FMK, at the dosages used in the present study, does not provide a protective effect to ovarian tissue during in vitro culture. Future studies should explore different Z-VAD-FMK dosages or other anti-apoptotic agents, such as survivin, to protect ovarian follicles against cell death. Keywords: anti-apoptotic drug, bitches, follicles, Z-VAD-FMK
461 Water Droplet Impact on Vibrating Rigid Superhydrophobic Surfaces
Authors: Jingcheng Ma, Patricia B. Weisensee, Young H. Shin, Yujin Chang, Junjiao Tian, William P. King, Nenad Miljkovic
Abstract:
Water droplet impact on surfaces is a ubiquitous phenomenon in both nature and industry. The transfer of mass, momentum and energy can be influenced by the time of contact between droplet and surface. In order to reduce the contact time, we study the influence of substrate motion prior to impact on the dynamics of droplet recoil. Using optical high speed imaging, we investigated the impact dynamics of macroscopic water droplets (~2 mm) on rigid nanostructured superhydrophobic surfaces vibrating at 60-300 Hz and amplitudes of 0-3 mm. In addition, we studied the influence of the phase of the substrate at the moment of impact on total contact time. We demonstrate that substrate vibration can alter droplet dynamics and decrease total contact time by as much as 50% compared to impact on stationary rigid superhydrophobic surfaces. Impact analysis revealed that the vibration frequency mainly affected the maximum contact time, while the amplitude of vibration had little direct effect on the contact time. Through mathematical modeling, we show that the oscillation amplitude influences the probability density function of droplet impact at a given phase, and thus indirectly influences the average contact time. We also observed more vigorous droplet splashing and breakup during impact at larger amplitudes. Through semi-empirical mathematical modeling, we describe the relationship between contact time and the vibration frequency, phase, and amplitude of the substrate. We also show that the maximum acceleration during the impact process is better suited as a threshold parameter for the onset of splashing than a Weber-number criterion. This study not only provides new insights into droplet impact physics on vibrating surfaces, but also develops guidelines for the rational design of surfaces to achieve controllable droplet wetting in applications utilizing vibration. Keywords: contact time, impact dynamics, oscillation, pear-shaped droplet
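The statement that amplitude shapes the distribution of impact phase can be probed with a crude Monte Carlo: droplets approach at constant speed, the surface oscillates sinusoidally, and the substrate phase at first contact is recorded for uniformly random release times. All numbers below are illustrative assumptions, not the experimental conditions or the authors' semi-empirical model.

```python
import math
import random

def impact_phases(amplitude, freq=100.0, v=1.0, n=500, dt=1e-6, seed=1):
    """Monte Carlo sketch: droplets fall at speed v onto a substrate whose
    surface sits at z_s(t) = amplitude*sin(2*pi*freq*t); returns the
    substrate phase (rad, in [0, 2*pi)) at first contact for n droplets
    released at uniformly random times."""
    rng = random.Random(seed)
    phases = []
    for _ in range(n):
        t = rng.uniform(0.0, 1.0 / freq)   # random release time
        h = 2.0 * amplitude                # start above the highest surface point
        # march the droplet down until it meets the moving surface
        while h > amplitude * math.sin(2.0 * math.pi * freq * t):
            h -= v * dt
            t += dt
        phases.append((2.0 * math.pi * freq * t) % (2.0 * math.pi))
    return phases

ph = impact_phases(amplitude=1e-3)
```

Comparing histograms of `ph` for different amplitudes shows the non-uniform phase weighting the abstract attributes to amplitude; at zero amplitude the distribution would be uniform by construction.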
460 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model
Authors: Donatella Giuliani
Abstract:
In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. Firstly, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique centered on the flashing characteristics of fireflies; in this context, it is used to determine the number of clusters and the related cluster means in a histogram-based segmentation approach. Subsequently, these means are used in the initialization step of the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are evaluated by applying the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as prior probabilities of each component. Applying Bayes' rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears fairly solid and reliable even when applied to complex grayscale images. Validation has been performed using several standard measures, namely: the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK) and the Davies-Bouldin (DB) index. The achieved results strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for the pixel assignment, which implies a considerable reduction of the computational costs. Keywords: clustering images, firefly algorithm, Gaussian mixture model, metaheuristic algorithm, image segmentation
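The GMM stage described above can be sketched in a few lines of NumPy: seed means (standing in for the output of the firefly histogram search), EM updates, and maximum-posterior pixel assignment. The synthetic two-population "pixels" and the seed values are assumptions for illustration.

```python
import numpy as np

def em_gmm_1d(x, means, n_iter=50):
    """EM for a 1-D Gaussian mixture; `means` plays the role of the
    firefly-derived cluster means used for initialization."""
    k = len(means)
    mu = np.array(means, dtype=float)
    var = np.full(k, x.var() / k)
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior responsibilities via Bayes' rule
        dens = (w / np.sqrt(2 * np.pi * var)
                * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    labels = resp.argmax(axis=1)   # maximum posterior assigns each pixel
    return mu, labels

# synthetic "image": two gray-level populations (dark ~50, bright ~200)
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(50, 10, 500), rng.normal(200, 15, 500)])
mu, labels = em_gmm_1d(pixels, means=[80.0, 170.0])
```

Even with deliberately poor seeds (80 and 170), EM converges to the true component means, which is why a good histogram-based initialization mainly buys speed and robustness rather than being strictly required on easy data.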
459 Establishment and Validation of Correlation Equations to Estimate Volumetric Oxygen Mass Transfer Coefficient (KLa) from Process Parameters in Stirred-Tank Bioreactors Using Response Surface Methodology
Authors: Jantakan Jullawateelert, Korakod Haonoo, Sutipong Sananseang, Sarun Torpaiboon, Thanunthon Bowornsakulwong, Lalintip Hocharoen
Abstract:
Process scale-up is essential for a biological process to increase production capacity from bench-scale bioreactors to either pilot or commercial production. Scale-up based on a constant volumetric oxygen mass transfer coefficient (KLa) is most often used as the scale-up criterion, since oxygen supply is one of the key limiting factors for cell growth. However, estimating the KLa of culture vessels operated under different conditions is time-consuming, since KLa is considerably influenced by many factors. To overcome this issue, this study aimed to establish correlation equations between KLa and operating parameters in 0.5 L and 5 L bioreactors equipped with a pitched-blade impeller and a gas sparger. Temperature, gas flow rate, agitation speed, and impeller position were selected as process parameters, and equations were created using response surface methodology (RSM) based on a central composite design (CCD). Based on RSM, second-order polynomial models for the 0.5 L and 5 L bioreactors were obtained with acceptable coefficients of determination (R²) of 0.9736 and 0.9190, respectively. These models were validated, and experimental values showed differences of less than 10% from the predicted values. Moreover, RSM revealed that gas flow rate is the most significant parameter, while temperature and agitation speed were also found to greatly affect KLa in both bioreactors. Nevertheless, impeller position was shown to influence KLa only in the 5 L system. To sum up, these modeled correlations can be used to accurately predict KLa within the specified range of process parameters of the two bioreactor sizes for further scale-up applications. Keywords: response surface methodology, scale-up, stirred-tank bioreactor, volumetric oxygen mass transfer coefficient
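A second-order RSM model of the kind reported (with R² computed in the usual way) can be sketched by ordinary least squares on a quadratic design matrix. The two coded factors and the synthetic KLa response below are assumptions for illustration, not the study's data.

```python
import numpy as np

def quadratic_design(x1, x2):
    """Design matrix for a two-factor second-order RSM model:
    y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2,
                            x1 ** 2, x2 ** 2])

rng = np.random.default_rng(42)
# coded factor levels, as in a central composite design (about -1.41..+1.41)
x1 = rng.uniform(-1.41, 1.41, 30)   # e.g. gas flow rate (coded, assumed)
x2 = rng.uniform(-1.41, 1.41, 30)   # e.g. agitation speed (coded, assumed)
true = 20 + 8 * x1 + 5 * x2 + 2 * x1 * x2 - 3 * x1 ** 2 - 1.5 * x2 ** 2
y = true + rng.normal(0, 0.5, 30)   # synthetic "measured" KLa values

X = quadratic_design(x1, x2)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # fitted coefficients
y_hat = X @ beta
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
```

Extending the design matrix to four factors (temperature, gas flow, agitation, impeller position) follows the same pattern, and validation then amounts to comparing `y_hat` against held-out measurements, as the study does with its <10% criterion.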
458 Iranian Processed Cheese under Effect of Emulsifier Salts and Cooking Time in Process
Authors: M. Dezyani, R. Ezzati bbelvirdi, M. Shakerian, H. Mirzaei
Abstract:
Sodium hexametaphosphate (SHMP) is commonly used as an emulsifying salt (ES) in process cheese, although rarely as the sole ES. It appears that no published studies exist on the effect of SHMP concentration on the properties of process cheese when pH is kept constant; pH is well known to affect process cheese functionality. The detailed interactions between the added phosphate, casein (CN), and indigenous Ca phosphate are poorly understood. We studied the effect of the concentration of SHMP (0.25-2.75%) and holding time (0-20 min) on the textural and rheological properties of pasteurized process Cheddar cheese using a central composite rotatable design. All cheeses were adjusted to pH 5.6. The meltability of the process cheese (as indicated by the decrease in the loss tangent parameter from small amplitude oscillatory rheology, the degree of flow, and the melt area from the Schreiber test) decreased with an increase in the concentration of SHMP. Holding time also led to a slight reduction in meltability. The hardness of the process cheese increased as the concentration of SHMP increased. Acid-base titration curves indicated that the buffering peak at pH 4.8, which is attributable to residual colloidal Ca phosphate, was shifted to lower pH values with increasing concentration of SHMP. The insoluble Ca and the total and insoluble P contents increased as the concentration of SHMP increased. The proportion of insoluble P as a percentage of total (indigenous and added) P decreased with an increase in ES concentration because some of the (added) SHMP formed soluble salts. The results of this study suggest that SHMP chelated the residual colloidal Ca phosphate content and dispersed CN; the newly formed Ca-phosphate complex remained trapped within the process cheese matrix, probably by cross-linking CN.
Increasing the concentration of SHMP helped to improve fat emulsification and CN dispersion during cooking, both of which probably helped to reinforce the structure of the process cheese.
Keywords: Iranian processed cheese, emulsifying salt, rheology, texture
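The central composite rotatable design used in the cheese study can be sketched in a few lines. The factor count and run layout below are illustrative (two coded factors, standing in for ES concentration and holding time), not the study's actual design matrix; rotatability fixes the axial distance at alpha = (2^k)^(1/4).

```python
# Sketch: generating the coded runs of a central composite rotatable
# design (CCRD) for k factors: 2^k factorial points, 2k axial points at
# distance alpha = (2^k)**0.25, and replicated center points.
from itertools import product

def ccrd_points(k, n_center=5):
    """Return the coded design points of a rotatable CCD for k factors."""
    alpha = (2 ** k) ** 0.25          # rotatability condition
    factorial = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):     # one axis perturbed at a time
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return factorial + axial + center

runs = ccrd_points(2)                 # 4 factorial + 4 axial + 5 center
for r in runs:
    print([round(v, 3) for v in r])
```

For k = 2 the axial distance is 4^(1/4) = √2 ≈ 1.414, which is why CCRD factor ranges extend slightly beyond the ±1 factorial levels.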
Procedia PDF Downloads 431
457 Multiple Version of Roman Domination in Graphs
Authors: J. C. Valenzuela-Tripodoro, P. Álvarez-Ruíz, M. A. Mateos-Camacho, M. Cera
Abstract:
In 2004, the concept of Roman domination in graphs was introduced, initially inspired by the defensive strategy of the Roman Empire. An undefended place is a city in which no legions are stationed, whereas a strong place is a city in which two legions are deployed. This situation may be modeled by labeling the vertices of a finite simple graph with labels {0, 1, 2}, subject to the condition that every 0-vertex must be adjacent to at least one 2-vertex. Roman domination in graphs is a variant of classic domination. The main aim is to obtain such a labeling of the vertices of the graph with minimum cost, that is, with minimum weight (the sum of all vertex labels). Formally, a function f: V(G) → {0, 1, 2} is a Roman dominating function (RDF) in the graph G = (V, E) if f(u) = 0 implies that f(v) = 2 for at least one vertex v adjacent to u. The weight of an RDF is the positive integer w(f) = ∑_{v∈V} f(v). The Roman domination number, γ_R(G), is the minimum weight among all Roman dominating functions. Obviously, the set of vertices with a positive label under an RDF f is a dominating set in the graph, and hence γ(G) ≤ γ_R(G). In this work, we begin the study of a generalization of RDFs in which any undefended place should be defendable against a sudden attack by at least k legions; these legions can be deployed in the city itself or in any of its neighbours. A function f: V → {0, 1, . . . , k + 1} such that f(N[u]) ≥ k + |AN(u)| for every vertex u with f(u) < k, where AN(u) denotes the set of active neighbours (i.e., those with a positive label) of vertex u, is called a [k]-multiple Roman dominating function, denoted [k]-MRDF. The minimum weight of a [k]-MRDF in the graph G is the [k]-multiple Roman domination number ([k]-MRDN) of G, denoted γ_[kR](G). First, we prove that the [k]-multiple Roman domination decision problem is NP-complete even when restricted to bipartite and chordal graphs, a question that had been resolved for other variants and that we wanted to generalize. Given the difficulty of calculating the exact value of the [k]-MRD number, even for particular families of graphs, we present several upper and lower bounds that permit us to estimate it with as much precision as possible. Finally, some graphs for which the exact value of this parameter is known are characterized.
Keywords: multiple Roman domination function, NP-complete decision problem, bounds, exact values
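The RDF definition above can be checked directly by exhaustive search on a small graph: enumerate all labelings f: V → {0, 1, 2}, keep those where every 0-vertex has a 2-neighbour, and take the minimum weight. The 5-cycle C5 used below is only an illustrative example, not a graph from the paper (the exponential enumeration also makes the NP-completeness result plausible).

```python
# Sketch: brute-force computation of the Roman domination number
# gamma_R(G) for a small graph, straight from the RDF definition.
from itertools import product

def roman_domination_number(adj):
    """adj: dict mapping each vertex 0..n-1 to its list of neighbours."""
    n = len(adj)
    best = None
    for f in product((0, 1, 2), repeat=n):
        # RDF condition: every 0-vertex is adjacent to some 2-vertex.
        ok = all(f[u] != 0 or any(f[v] == 2 for v in adj[u])
                 for u in range(n))
        if ok:
            w = sum(f)
            best = w if best is None else min(best, w)
    return best

# Cycle C5: vertex i adjacent to i-1 and i+1 (mod 5); gamma_R(C5) = 4,
# attained e.g. by the labeling (2, 0, 2, 0, 0).
c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(roman_domination_number(c5))  # 4
```

Extending the checker to the [k]-MRDF condition f(N[u]) ≥ k + |AN(u)| for f(u) < k is a small change, but the same 3^n-style enumeration blows up quickly, which is why the bounds in the paper matter.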
Procedia PDF Downloads 108