Search results for: Synthetic Aperture Radar
47 Motivated Support Vector Regression with Structural Prior Knowledge
Authors: Wei Zhang, Yao-Yu Li, Yi-Fan Zhu, Qun Li, Wei-Ping Wang
Abstract:
It is known that incorporating prior knowledge into support vector regression (SVR) can help to improve approximation performance. Most research is concerned with incorporating knowledge in the form of numerical relationships. Little work, however, has been done to incorporate prior knowledge on the structural relationships among the variables (referred to as Structural Prior Knowledge, SPK). This paper explores the incorporation of SPK into SVR by constructing appropriate admissible support vector kernels (SV kernels) based on the properties of reproducing kernels (RK). Three levels of SPK specification are studied, with the corresponding sub-levels of prior knowledge that can be considered by the method: Hierarchical SPK (HSPK); Interactional SPK (ISPK), consisting of independence, global and local interaction; and Functional SPK (FSPK), composed of exterior-FSPK and interior-FSPK. A convenient tool for describing SPK, namely the Description Matrix of SPK, is introduced. Subsequently, a new SVR, namely Motivated Support Vector Regression (MSVR), whose structure is motivated in part by SPK, is proposed. Synthetic examples show that it is possible to incorporate a wide variety of SPK and that doing so helps to improve approximation performance in complex cases. The benefits of MSVR are finally shown on a real-life military application, air-to-ground battle simulation, which demonstrates the great potential of MSVR for complex military applications.
Keywords: Admissible support vector kernel, reproducing kernel, structural prior knowledge, motivated support vector regression.
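As an illustration of the kernel-construction idea (not the authors' formulation), an independence-type structural prior can be encoded by summing admissible kernels over the variable groups it declares; the grouping, data and function names below are purely hypothetical:

```python
import numpy as np
from sklearn.svm import SVR

def rbf(X1, X2, gamma=1.0):
    # Pairwise RBF (Gaussian) kernel between the rows of X1 and X2.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def structured_kernel(X1, X2, groups):
    # Sum of kernels over independent variable groups: a sum of
    # positive-definite kernels is itself positive definite, so the
    # composite kernel remains an admissible SV kernel.
    return sum(rbf(X1[:, g], X2[:, g]) for g in groups)

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 4))
# Target depends on (x0, x1) and (x2, x3) additively -> two groups.
y = np.sin(X[:, 0] * X[:, 1]) + 0.5 * X[:, 2] - 0.2 * X[:, 3]

groups = [[0, 1], [2, 3]]
K = structured_kernel(X, X, groups)          # Gram matrix on training data
model = SVR(kernel="precomputed").fit(K, y)
pred = model.predict(structured_kernel(X, X, groups))
```

Restricting each kernel term to one group forbids interactions across groups, which is one simple way a structural prior can shape the hypothesis space.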
46 Evaluating Machine Learning Techniques for Activity Classification in Smart Home Environments
Authors: Talal Alshammari, Nasser Alshammari, Mohamed Sedky, Chris Howard
Abstract:
With the widespread adoption of Internet-connected devices and the prevalence of Internet of Things (IoT) applications, there is increased interest in machine learning techniques that can provide useful and interesting services in the smart home domain. The areas that machine learning techniques can help advance are varied and ever-evolving. Classifying smart home inhabitants' Activities of Daily Living (ADLs) is one prominent example. The ability of a machine learning technique to find meaningful spatio-temporal relations in high-dimensional data is an important requirement as well. This paper presents a comparative evaluation of state-of-the-art machine learning techniques for classifying ADLs in the smart home domain: AdaBoost, Cortical Learning Algorithm (CLA), Decision Trees, Hidden Markov Model (HMM), Multi-layer Perceptron (MLP), Structured Perceptron, and Support Vector Machines (SVM). Forty-two synthetic datasets and two real-world datasets with multiple inhabitants are used to evaluate and compare their performance. Our results show significant performance differences between the evaluated techniques. Overall, neural-network-based techniques showed superiority over the other tested techniques.
Keywords: Activities of daily living, classification, internet of things, machine learning, smart home.
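A minimal sketch of this kind of comparative evaluation, using a few of the listed techniques as implemented in scikit-learn on stand-in synthetic data (the real study uses sensor-event ADL datasets; CLA, HMM and the structured perceptron are omitted here):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in for sensor-event features labelled with ADL classes.
X, y = make_classification(n_samples=400, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = {
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
    "SVM": SVC(random_state=0),
}
# Held-out accuracy per technique, as one comparison criterion.
scores = {name: m.fit(Xtr, ytr).score(Xte, yte) for name, m in models.items()}
```

In practice the comparison would be repeated over all 44 datasets with a richer set of metrics than plain accuracy.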
45 Teager-Huang Analysis Applied to Sonar Target Recognition
Authors: J.-C. Cexus, A.O. Boudraa
Abstract:
In this paper, a new approach for target recognition based on the empirical mode decomposition (EMD) algorithm of Huang et al. [11] and the energy tracking operator of Teager [13]-[14] is introduced. The conjunction of these two methods is called Teager-Huang analysis. This approach is well suited for nonstationary signal analysis. The impulse response (IR) of the target is first band-pass filtered into subsignals (components) called intrinsic mode functions (IMFs) with well-defined instantaneous frequency (IF) and instantaneous amplitude (IA). Each IMF is a zero-mean AM-FM component. In the second step, the energy of each IMF is tracked using the Teager energy operator (TEO). IF and IA, useful for describing the time-varying characteristics of the signal, are estimated using the energy separation algorithm (ESA) of Maragos et al. [16]-[17]. In the third step, a set of features such as skewness and kurtosis is extracted from the IF, IA and IMF energy functions. The Teager-Huang analysis is tested on a set of synthetic IRs of sonar targets with different physical characteristics (density, velocity, shape, etc.). PCA is first applied to the features to discriminate between manufactured and natural targets. The manufactured patterns are then classified into spheres and cylinders. One hundred percent correct recognition is achieved with twenty-three echoes, where the sixteen IRs used for training are noise-free and the seven IRs used for the testing phase are corrupted with white Gaussian noise.
Keywords: Target recognition, empirical mode decomposition, Teager-Kaiser energy operator, feature extraction.
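The discrete Teager-Kaiser energy operator at the heart of this analysis is a simple three-sample formula; a minimal sketch (the tone parameters are illustrative):

```python
import numpy as np

def teager_energy(x):
    # Discrete Teager-Kaiser energy operator:
    #   psi[n] = x[n]^2 - x[n-1] * x[n+1]
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a pure tone A*cos(w*n) the operator returns the constant
# A^2 * sin(w)^2, which is why it jointly tracks amplitude and frequency.
n = np.arange(200)
tone = 2.0 * np.cos(0.3 * n)
psi = teager_energy(tone)
```

Applied to each IMF, this per-sample energy is the signal the ESA then separates into IF and IA estimates.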
44 Featured based Segmentation of Color Textured Images using GLCM and Markov Random Field Model
Authors: Dipti Patra, Mridula J
Abstract:
In this paper, we propose a new image segmentation approach for colour textured images. The proposed method consists of two stages. In the first stage, textural features using the gray level co-occurrence matrix (GLCM) are computed for the regions of interest (ROI) considered for each class; the ROI acts as ground truth for the classes. The Ohta model (I1, I2, I3) is the colour model used for segmentation. The statistical mean feature of the I2 component at a certain inter-pixel distance (IPD) was found to be the optimal textural feature for further segmentation. In the second stage, the feature matrix obtained is assumed to be the degraded version of the image labels, and a Markov Random Field (MRF) model is used to model the unknown image labels. The labels are estimated under the maximum a posteriori (MAP) criterion using the iterated conditional modes (ICM) algorithm. The performance of the proposed approach is compared with that of existing schemes: JSEG and another scheme that uses GLCM and MRF in the RGB colour space. The proposed method is found to outperform the existing ones in terms of segmentation accuracy with an acceptable rate of convergence. The results are validated with synthetic and real textured images.
Keywords: Texture image segmentation, gray level co-occurrence matrix, Markov random field model, Ohta colour space, ICM algorithm.
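The GLCM features of the first stage can be sketched directly in NumPy (the tiny image, offset and feature choices below are illustrative; the paper uses the mean feature of the I2 component at an optimized IPD):

```python
import numpy as np

def glcm(image, dy, dx, levels):
    # Gray-level co-occurrence matrix for one inter-pixel offset (dy, dx):
    # counts how often gray level i co-occurs with gray level j.
    P = np.zeros((levels, levels))
    h, w = image.shape
    for r in range(h):
        for c in range(w):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < h and 0 <= c2 < w:
                P[image[r, c], image[r2, c2]] += 1
    return P / P.sum()                       # normalise to probabilities

img = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 2, 2]])
P = glcm(img, 0, 1, levels=3)                # horizontal offset, IPD = 1
i, j = np.indices(P.shape)
contrast = (P * (i - j) ** 2).sum()          # a standard GLCM feature
mean = (P * i).sum()                         # the mean feature used here
```

Such per-ROI feature values form the feature matrix that the MRF/ICM stage then treats as a degraded version of the label field.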
43 Bridging Stress Modeling of Composite Materials Reinforced by Fibers Using Discrete Element Method
Authors: Chong Wang, Kellem M. Soares, Luis E. Kosteski
Abstract:
The problem of toughening in brittle materials reinforced by fibers is complex, involving all of the mechanical properties of the fibers, the matrix and the fiber/matrix interface, as well as the geometry of the fibers. Development of new numerical methods appropriate for toughening simulation and analysis is therefore necessary. In this work, we have performed simulations and analysis of toughening in a brittle matrix reinforced by randomly distributed fibers by means of the discrete element method. We first put forward a mechanical model of the toughening contributed by random fibers. Then, with a numerical program, we investigated the stress, damage and bridging force in the composite material when a crack appeared in the brittle matrix. From the results obtained, we conclude that: (i) fibers of high strength and low elastic modulus are beneficial to toughening; (ii) fibers of relatively high elastic modulus compared to the matrix may result in substantial matrix damage due to the spalling effect; (iii) employment of high-strength synthetic fibers is a good option for toughening. We expect that the combination of the discrete element method (DEM) with the finite element method (FEM) can increase the versatility and efficiency of the software developed. The present work can guide the design of high-performance ceramic composites through the optimization of these parameters.
Keywords: Bridging stress, discrete element method, fiber reinforced composites, toughening.
42 Can Exams Be Shortened? Using a New Empirical Approach to Test in Finance Courses
Authors: Eric S. Lee, Connie Bygrave, Jordan Mahar, Naina Garg, Suzanne Cottreau
Abstract:
Marking exams is universally detested by lecturers. Final exams in many higher education courses often last 3.0 hrs. Do exams really need to be so long? Can we justifiably reduce the number of questions on them? Surprisingly few have researched these questions, arguably because of the complexity and difficulty of using traditional methods. To answer these questions empirically, we used a new approach based on three key elements: Use of an unusual variation of a true experimental design, equivalence hypothesis testing, and an expanded set of six psychometric criteria to be met by any shortened exam if it is to replace a current 3.0-hr exam (reliability, validity, justifiability, number of exam questions, correspondence, and equivalence). We compared student performance on each official 3.0-hr exam with that on five shortened exams having proportionately fewer questions (2.5, 2.0, 1.5, 1.0, and 0.5 hours) in a series of four experiments conducted in two classes in each of two finance courses (224 students in total). We found strong evidence that, in these courses, shortening of final exams to 2.0 hrs was warranted on all six psychometric criteria. Shortening these exams by one hour should result in a substantial one-third reduction in lecturer time and effort spent marking, lower student stress, and more time for students to prepare for other exams. Our approach provides a relatively simple, easy-to-use methodology that lecturers can use to examine the effect of shortening their own exams.
Keywords: Exam length, psychometric criteria, synthetic experimental designs, test length.
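Equivalence hypothesis testing reverses the usual null: non-equivalence is assumed until both one-sided tests reject it. A minimal two-one-sided-tests (TOST) sketch for independent mark samples, with an illustrative equivalence margin and made-up marks (not the study's data or exact procedure):

```python
import numpy as np
from scipy import stats

def tost(a, b, margin):
    # Two one-sided t-tests: H0 is |mean(a) - mean(b)| >= margin.
    # Equivalence is claimed when BOTH one-sided p-values are small,
    # i.e. when the larger of the two is below alpha.
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    se = np.sqrt(a.var(ddof=1) / na + b.var(ddof=1) / nb)
    df = na + nb - 2
    d = a.mean() - b.mean()
    p_lower = stats.t.sf((d + margin) / se, df)   # H0: d <= -margin
    p_upper = stats.t.sf((margin - d) / se, df)   # H0: d >= +margin
    return max(p_lower, p_upper)

full = [70, 75, 80, 85, 90]        # marks on the 3.0-hr exam (illustrative)
short = [70, 75, 80, 85, 90]       # marks on the shortened exam
p = tost(full, short, margin=10)   # equivalence margin of 10 marks
```

Rejecting both one-sided nulls supports the claim that the shortened exam measures performance equivalently, which is the logic behind the equivalence criterion above.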
41 Noise Source Identification on Urban Construction Sites Using Signal Time Delay Analysis
Authors: Balgaisha G. Mukanova, Yelbek B. Utepov, Aida G. Nazarova, Alisher Z. Imanov
Abstract:
The problem of identifying local noise sources on a construction site using a sensor system is considered. Mathematical modeling of the signals detected at the sensors was carried out, considering signal decay and the signal delay time between source and detector. Recordings of noise produced by construction tools were used as the time dependence of the source signals. Synthetic sensor data were constructed from these recordings using a model of acoustic wave propagation from a point source in three-dimensional space. All sensors and sources are assumed to lie in the same plane. A source localization method is tested that uses the signal time delay between two adjacent detectors to plot the direction to the source; the source position is then determined from the intersection of two such direction lines. Cases of one dominant source, and of two sources in the presence of several other sources of lower intensity, are considered. The number of detectors varies from three to eight. The intensity of the noise field in the assessed area is plotted. A signal of two seconds' duration is considered: the source is localized for successive parts of the signal longer than 0.04 s, and the final result is obtained by averaging.
Keywords: Acoustic model, direction of arrival, inverse source problem, sound localization, urban noises.
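The core step, estimating the signal time delay between two adjacent detectors, is commonly done by locating the peak of their cross-correlation; a minimal sketch with an illustrative sampling rate and a synthetic burst (not the paper's recorded tool noise):

```python
import numpy as np

def delay_samples(sig_a, sig_b):
    # Estimate how many samples sig_b lags sig_a by finding the
    # peak of the full cross-correlation sequence.
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)

fs = 8000                                   # illustrative sampling rate, Hz
n = np.arange(400)
pulse = np.exp(-((n - 100) / 10.0) ** 2)    # burst as seen at sensor 1
lagged = np.roll(pulse, 25)                 # same burst, 25 samples later

lag = delay_samples(pulse, lagged)
tdoa = lag / fs                             # time-difference of arrival, s
```

Given the sensor spacing d and sound speed c, the direction of arrival follows from arcsin(c·tdoa/d), and two such direction lines intersect at the source position.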
40 Hand Gesture Interpretation Using Sensing Glove Integrated with Machine Learning Algorithms
Authors: Aqsa Ali, Aleem Mushtaq, Attaullah Memon, Monna
Abstract:
In this paper, we present a low-cost design for a smart glove that can perform sign language recognition to assist speech-impaired people. Specifically, we have designed and developed an Assistive Hand Gesture Interpreter that recognizes hand movements relevant to the American Sign Language (ASL) and translates them into text for display on a Thin-Film-Transistor Liquid Crystal Display (TFT LCD) screen, as well as into synthetic speech. Linear Bayes classifiers and multilayer neural networks have been used to classify 11 feature vectors obtained from the sensors on the glove into one of 27 classes: the ASL alphabet plus a predefined gesture for space. Three types of features are used: bending, using six bend sensors; orientation in three dimensions, using accelerometers; and contacts at vital points, using contact sensors. To gauge the performance of the presented design, the training database was prepared using five volunteers. The accuracy of the current version on the prepared dataset was found to be up to 99.3% for the target user. The solution combines electronics, e-textile technology, sensor technology, embedded systems and machine learning techniques to build a low-cost wearable glove that is scrupulous, elegant and portable.
Keywords: American sign language, assistive hand gesture interpreter, human-machine interface, machine learning, sensing glove.
39 Impact of Disposed Drinking Water Sachets in Damaturu, Yobe State, Nigeria
Authors: Meeta Ratawa Tiwary
Abstract:
Damaturu is the capital of Yobe State in northeastern Nigeria, where civic amenities and facilities remain inadequate even after 24 years of the state's existence, the volatile security and political situation being the most significant cause. Basic facilities such as drinking water and electricity are not available to citizens; for drinking water, they rely on personal boreholes or on filtered borehole water sold in packaged sachets in the market. The present study is concerned with the environmental impact of the indiscriminate disposal of polythene drinking-water sachets in Damaturu. The sachet water is popularly called "pure water", but its purity is questionable. Increased production and consumption of sachet water have led to indiscriminate dumping and disposal of empty sachets, posing a serious environmental threat, as evidenced by sachets littering the streets and drainages blocked by masses of sachet waste. Sachet water gained much popularity in Nigeria because the product is convenient to use, affordable and economically viable. The present study aims to find a solution to this environmental problem. The field-based study found some significant factors causing environmental and socio-economic effects. Some recommendations are made, based on the research findings, regarding sustainable waste management and the recycling and re-use of non-biodegradable products in society.
Keywords: Civic amenities, non-biodegradable, pure water, sustainable environment, waste disposal.
38 Target Detection using Adaptive Progressive Thresholding Based Shifted Phase-Encoded Fringe-Adjusted Joint Transform Correlator
Authors: Inder K. Purohit, M. Nazrul Islam, K. Vijayan Asari, Mohammad A. Karim
Abstract:
A new target detection technique is presented in this paper for the identification of small boats in coastal surveillance. The proposed technique employs an adaptive progressive thresholding (APT) scheme to first process the given input scene to separate any objects present from the background. This preprocessing step yields an image containing only the foreground objects, such as boats, trees and other cluttered regions, and hence significantly reduces the search region for the correlation step. The processed image is then fed to the shifted phase-encoded fringe-adjusted joint transform correlator (SPFJTC), which produces a single, delta-like correlation peak for any potential target present in the input scene. A post-processing step uses a peak-to-clutter ratio (PCR) to determine whether a boat in the input scene is authorized or unauthorized. Simulation results show that the proposed technique can successfully determine the presence of an authorized boat and identify any intruding boat in the given input scene.
Keywords: Adaptive progressive thresholding, fringe-adjusted filters, image segmentation, joint transform correlation, synthetic discriminant function.
37 Urban Greenery in the Greatest Polish Cities: Analysis of Spatial Concentration
Authors: Elżbieta Antczak
Abstract:
Cities offer important opportunities for economic development and for expanding access to basic services, including health care and education, for large numbers of people. Moreover, green areas, as an integral part of sustainable urban development, present a major opportunity for improving urban environments, quality of life and livelihoods. This paper examines, using spatial concentration and spatial taxonomic measures, the regional diversification of greenery in the cities of Poland. The analysis employs location quotients, the Lorenz curve, the locational Gini index, a synthetic index of greenery, and spatial statistics tools: (1) to verify whether strong concentration or dispersion of the phenomenon occurs in time and space depending on the variable category, and (2) to study whether the level of greenery depends on spatial autocorrelation. The data cover the greatest Polish cities, several categories of urban greenery (parks, lawns, street greenery, green areas on housing estates, cemeteries, and forests) and the time span 2004-2015. According to the obtained estimates, most cities in Poland are already taking measures to become greener. However, many barriers to well-balanced urban greenery development remain in the country (e.g. uncontrolled urban sprawl, poor management, and the lack of spatial urban planning systems).
Keywords: Greenery, urban areas, regional spatial diversification and concentration, spatial taxonomic measure.
36 Competitive Adsorption of Heavy Metals onto Natural and Activated Clay: Equilibrium, Kinetics and Modeling
Authors: L. Khalfa, M. Bagane, M. L. Cervera, S. Najjar
Abstract:
The aim of this work is to present a low-cost adsorbent for removing toxic heavy metals from aqueous solutions. We therefore investigate the efficiency of natural clay minerals collected from southern Tunisia, and of their modified form prepared using sulfuric acid, in the removal of the toxic metal ions Zn(II) and Pb(II) from synthetic wastewater solutions. The results indicate that metal uptake is pH-dependent, with maximum removal occurring at pH 6. Adsorption equilibrium is reached rapidly, within 90 min for both metal ions studied. The kinetic results show that the pseudo-second-order model describes the adsorption, with intraparticle diffusion as the rate-limiting step. Treating the natural clay with sulfuric acid creates more active sites and increases the surface area, and accordingly increased the adsorbed quantities of lead and zinc in both single and binary systems. The competitive adsorption study showed that the uptake of lead was inhibited in the presence of 10 mg/L of zinc, indicating an antagonistic binary adsorption mechanism. These results reveal that clay is an effective natural material for removing lead and zinc, in single and binary systems, from aqueous solution.
Keywords: Lead, zinc, heavy metals, activated clay, kinetic study, competitive adsorption, modeling.
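The pseudo-second-order model referred to above is usually fitted in its linearised form t/qt = 1/(k2·qe²) + t/qe; a minimal sketch with illustrative (not the paper's) parameters:

```python
import numpy as np

# Pseudo-second-order kinetics: t/qt = 1/(k2*qe**2) + t/qe, so a linear
# fit of t/qt against t gives slope = 1/qe and intercept = 1/(k2*qe**2).
qe_true, k2 = 100.0, 0.01          # illustrative qe (mg/g) and k2 (g/mg/min)
t = np.linspace(1, 90, 30)         # contact time, min
qt = qe_true**2 * k2 * t / (1 + qe_true * k2 * t)   # model-generated uptake

slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit = 1.0 / slope
k2_fit = 1.0 / (intercept * qe_fit**2)
```

With experimental (t, qt) data in place of the model-generated values, the same two lines of fitting recover qe and k2, and the R² of the linear fit indicates how well the pseudo-second-order model holds.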
35 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements
Authors: Henok Hailemariam, Frank Wuttke
Abstract:
Collapsible soils are weak soils that appear stable in their natural (normally dry) state but rapidly deform under saturation (wetting), generating large and unexpected settlements which often have disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for soil subsidence prediction, this model does not require moisture content as an input parameter, thus providing the opportunity to obtain accurate estimates of the relative subsidence of collapsible soils using dielectric measurement only. The prediction model is developed from an existing relative subsidence prediction model (which is dependent on soil moisture condition) and an advanced theoretical frequency- and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original model). For large-scale sub-surface soil exploration, spatial sub-surface dielectric data over wide areas and great depths of weak (collapsible) soil deposits can be obtained using non-destructive high-frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small-scale in-situ measurements, techniques such as an open-ended coaxial line with the widely applicable time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small- or large-scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement.
Some of the resulting benefits are the preservation of the undisturbed nature of the soil as well as a reduction in the investigation costs and analysis time in the identification of weak (problematic) soils. The accuracy of prediction of the presented model is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions and a good match between the model prediction and experimental results is obtained.
Keywords: Collapsible soil, relative subsidence, dielectric permittivity, moisture content.
34 Anisotropic Total Fractional Order Variation Model in Seismic Data Denoising
Authors: Jianwei Ma, Diriba Gemechu
Abstract:
In seismic data processing, attenuation of random noise is the basic step in improving data quality for further application of seismic data in exploration and development across the gas and oil industry. The signal-to-noise ratio of the data largely determines the quality of seismic data and affects the reliability and accuracy of the seismic signal during interpretation. To use seismic data for further application and interpretation, we need to improve the signal-to-noise ratio while attenuating random noise effectively and preserving the important features and information in the seismic signal. To this end, we introduce an anisotropic total fractional order denoising algorithm. The anisotropic total fractional order variation model, defined in a fractional order bounded variation space, is proposed as a regularization in seismic denoising. The split Bregman algorithm is employed to solve the minimization problem of the anisotropic total fractional order variation model, and the corresponding denoising algorithm for the proposed method is derived. We test the effectiveness of the proposed method on synthetic and real seismic data sets, and the denoised results are compared with F-X deconvolution and the non-local means denoising algorithm.
Keywords: Anisotropic total fractional order variation, fractional order bounded variation, seismic random noise attenuation, split Bregman algorithm.
33 Application of Synthetic Monomers Grafted Xanthan Gum for Rhodamine B Removal in Aqueous Solution
Authors: T. Moremedi, L. Katata-Seru, S. Sardar, A. Bandyopadhyay, E. Makhado, M. Joseph Hato
Abstract:
Rapid industrialisation and population growth have led to a steady fall in freshwater supplies worldwide, and water systems are increasingly affected by secondary contamination. The application of novel adsorbents derived from natural polymers holds great promise for addressing challenges in water treatment. In this study, the UV irradiation technique was used to prepare an acrylamide (AAm) and acrylic acid (AA) monomer-grafted xanthan gum (XG) copolymer. The factors affecting rhodamine B (RhB) adsorption from aqueous media, such as pH, dosage, concentration, and time, were also investigated. The FTIR results confirmed the formation of the graft copolymer through the strong vibrational bands at 1709 cm-1 and 1612 cm-1 for AA and AAm, respectively. Additionally, the more irregular, porous and wrinkled surface observed in SEM images of XG-g-AAm/AA indicated copolymerization of the monomers. The optimum conditions for removing RhB dye from aqueous solution, with a maximum adsorption capacity of 313 mg/g at 25 °C, were pH approximately 5, initial dye concentration of 200 ppm, and adsorbent dose of 30 mg. A detailed investigation of the isotherms and adsorption kinetics of RhB from aqueous solution showed that the adsorption of the dye followed the Freundlich model (R² = 0.96333) and pseudo-second-order kinetics. The results further indicated that this XG-based adsorbent removes the dye through a chemical adsorption mechanism. The outstanding adsorption potential of the grafted copolymer could make it a low-cost product for removing cationic dyes from aqueous solution.
Keywords: Xanthan gum, adsorbents, rhodamine B, Freundlich model.
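The Freundlich isotherm reported above, qe = Kf·Ce^(1/n), is commonly fitted by linear regression in log-log space; a minimal sketch with illustrative constants (not the paper's fitted values):

```python
import numpy as np

# Freundlich isotherm: qe = Kf * Ce**(1/n).  Taking logs linearises it:
#   log(qe) = log(Kf) + (1/n) * log(Ce)
Kf_true, n_true = 25.0, 2.0                    # illustrative constants
Ce = np.array([5.0, 10, 20, 50, 100, 200])     # equilibrium conc., mg/L
qe = Kf_true * Ce ** (1 / n_true)              # model-generated uptake, mg/g

slope, intercept = np.polyfit(np.log(Ce), np.log(qe), 1)
Kf_fit, n_fit = np.exp(intercept), 1.0 / slope
```

With measured (Ce, qe) pairs in place of the model-generated values, the R² of this log-log fit is the statistic quoted in the abstract (R² = 0.96333).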
32 Rapid Finite-Element Based Airport Pavement Moduli Solutions using Neural Networks
Authors: Kasthurirangan Gopalakrishnan, Marshall R. Thompson, Anshu Manik
Abstract:
This paper describes the use of artificial neural networks (ANN) for predicting the non-linear layer moduli of flexible airfield pavements subjected to new generation aircraft (NGA) loading, based on the deflection profiles obtained from Heavy Weight Deflectometer (HWD) test data. The HWD test is one of the most widely used tests for routinely assessing the structural integrity of airport pavements in a non-destructive manner. The elastic moduli of the individual pavement layers backcalculated from the HWD deflection profiles are effective indicators of layer condition and are used for estimating the remaining pavement life. HWD tests were periodically conducted at the Federal Aviation Administration's (FAA's) National Airport Pavement Test Facility (NAPTF) to monitor the effect of Boeing 777 (B777) and Boeing 747 (B747) test gear trafficking on the structural condition of flexible pavement sections. In this study, a multi-layer, feed-forward network using an error-backpropagation algorithm was trained to approximate the HWD backcalculation function. A synthetic database generated with an advanced non-linear pavement finite-element program was used to train the ANN, overcoming the limitations associated with conventional pavement moduli backcalculation. The changes in the ANN-based backcalculated pavement moduli with trafficking were used to compare the relative severity effects of the aircraft landing gears on the NAPTF test pavements.
Keywords: Airfield pavements, ANN, backcalculation, new generation aircraft.
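The train-on-a-synthetic-forward-database idea can be sketched as follows; the linear toy forward model, layer count and network size are all illustrative stand-ins for the non-linear pavement finite-element program used in the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy forward model: layer moduli -> 6-sensor deflection basin.
# Deflections grow with layer compliance (1/E); a real database would
# come from the non-linear finite-element program.
A = rng.uniform(0.5, 2.0, size=(3, 6))   # fixed, arbitrary sensitivity matrix

def deflections(E):
    return (1.0 / E) @ A

E_train = rng.uniform(50, 500, size=(500, 3))   # synthetic layer moduli, MPa
D_train = deflections(E_train)

# The ANN learns the inverse map: deflection basin -> layer moduli.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(D_train, E_train)

E_true = np.array([[120.0, 300.0, 80.0]])
E_back = net.predict(deflections(E_true))       # backcalculated moduli
```

Once trained, the network replaces iterative backcalculation with a single forward pass, which is what makes the "rapid" moduli solutions of the title possible.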
31 Effect of Oral Administration of "Gadagi" Tea on Liver Function in Rats
Authors: A. M. Gadanya, M. S. Sule, M. K. Atiku
Abstract:
The effect of oral administration of "Gadagi" tea on liver function was assessed in 50 healthy male albino rats, which were grouped and administered different doses (mg/kg): a low dose (380 mg/kg, 415 mg/kg, 365 mg/kg, 315 mg/kg for "sak", "sada" and "magani", respectively), a standard dose (760 mg/kg, 830 mg/kg, 730 mg/kg for "sak", "sada" and "magani", respectively) and a high dose (1500 mg/kg, 1700 mg/kg and 1460 mg/kg for the "sak", "sada" and "magani" groups, respectively) for a period of four weeks. Animals not administered the tea constituted the control group. At the end of the fourth week, the animals were sacrificed and their serum alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP), total protein (TP), albumin (ALB), and globulin (GLO) levels were determined. Mean serum ALT and ALP activities were significantly higher (P<0.05) in rats orally administered the high dose of "sak" and in those administered the standard dose of "sada" than in the control group, suggesting probable impairment of liver function due to liver cytolysis. Mean serum AST, ALT and ALP activities were significantly lower (P<0.05) in rats orally administered the high dose of "magani" than in the control group, suggesting a probable improvement in liver function (due to a decrease in liver cytolysis). Mean serum TP, ALB and GLO levels were significantly higher (P<0.05) in rats orally administered the various doses of "sak", "sada" and "magani" than in the control group, suggesting a probable improvement in the synthetic function of the liver. Thus, some dosages of "sak" and "sada" could be hepatotoxic, whereas "magani", especially at the high dose administered, could have a pharmacologically positive effect on the liver of the rats.
Keywords: "Gadagi" tea, liver function, oral administration, rats.
30 Structural Characterization and Physical Properties of Antimicrobial (AM) Starch-Based Films
Authors: Eraricar Salleh, Ida Idayu Muhamad, Nozieanna Khairuddin
Abstract:
Antimicrobial (AM) starch-based films were developed by incorporating chitosan and lauric acid as antimicrobial agents into a starch-based film. Chitosan has a wide range of applications as a biomaterial, but barriers still exist to its broader use due to its physical and chemical limitations. In this work, a series of starch/chitosan (SC) blend films containing 8% lauric acid was prepared by the casting method. The structure of the films was characterized by Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), and scanning electron microscopy (SEM). The results indicated that strong interactions were present between the hydroxyl groups of starch and the amino groups of chitosan, resulting in good miscibility between starch and chitosan in the blend films. The physical and optical properties of the AM starch-based films were evaluated. The films incorporating chitosan and lauric acid showed an improvement in water vapour transmission rate (WVTR); a higher starch content gave more transparent films, while the yellowness of the films was attributed to higher chitosan content. The improvement in water barrier properties was mainly attributed to the hydrophobicity of lauric acid and the optimum chitosan or starch content. The AM starch-based films also showed an excellent oxygen barrier. Films with good oxygen permeability indicate the potential of this antimicrobial packaging as a natural alternative to synthetic polymer packaging for protecting food from oxidation reactions.
Keywords: Antimicrobial starch-based films, chitosan, lauric acid, starch.
29 Dynamic Bayesian Networks Modeling for Inferring Genetic Regulatory Networks by Search Strategy: Comparison between Greedy Hill Climbing and MCMC Methods
Authors: Huihai Wu, Xiaohui Liu
Abstract:
Using Dynamic Bayesian Networks (DBN) to model genetic regulatory networks from gene expression data is one of the major paradigms for inferring the interactions among genes. Averaging over a collection of models when predicting a network is preferable to relying on a single high-scoring model. In this paper, two kinds of model searching approaches are compared: Greedy hill-climbing Search with Restarts (GSR) and Markov Chain Monte Carlo (MCMC) methods. GSR is preferred in many papers, but there has been no comparative study of which one is better for DBN models. Different types of experiments have been carried out to benchmark these approaches. Our experimental results demonstrate that, on average, the MCMC methods outperform GSR in the accuracy of the predicted network while offering comparable time efficiency. By proposing different variations of MCMC and employing a simulated annealing strategy, the MCMC methods become more efficient and stable. Apart from the comparison between these approaches, another objective of this study is to investigate the feasibility of using DBN modeling approaches for inferring gene networks from a few snapshots of high-dimensional gene profiles. Through synthetic data experiments as well as systematic data experiments, the results reveal how the performance of these approaches is influenced as the target gene network varies in network size, data size, and system complexity.
Keywords: Genetic regulatory network, Dynamic Bayesian network, GSR, MCMC.
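For readers unfamiliar with MCMC structure search over DBNs, the core idea can be sketched in a few lines. This is an illustrative simplification, not the authors' implementation: it assumes a two-slice DBN with linear-Gaussian dependencies, in which every edge crosses time slices (so any adjacency matrix is a valid acyclic structure), and the function names and BIC scoring choice are assumptions made for the sketch.

```python
import numpy as np

def bic_score(data, adj):
    """BIC score of a two-slice DBN where adj[i, j] = 1 means X_i(t) -> X_j(t+1).

    Each target variable is modeled as a linear-Gaussian function of its
    parents in the previous time slice; because every edge crosses time
    slices, any adjacency matrix is a valid (acyclic) structure.
    """
    past, future = data[:-1], data[1:]
    n, d = future.shape
    score = 0.0
    for j in range(d):
        parents = np.flatnonzero(adj[:, j])
        X = np.column_stack([past[:, parents], np.ones(n)])
        beta, *_ = np.linalg.lstsq(X, future[:, j], rcond=None)
        rss = float(np.sum((future[:, j] - X @ beta) ** 2))
        score -= 0.5 * n * np.log(rss / n + 1e-12)       # Gaussian log-likelihood (up to a constant)
        score -= 0.5 * np.log(n) * (len(parents) + 1)    # BIC complexity penalty
    return score

def mcmc_edge_posterior(data, n_iter=2000, seed=0):
    """Metropolis-Hastings over structures: propose a single-edge toggle,
    accept with probability min(1, exp(score_new - score_old)), and average
    the visited adjacency matrices to obtain posterior edge probabilities."""
    rng = np.random.default_rng(seed)
    d = data.shape[1]
    adj = np.zeros((d, d), dtype=int)
    score = bic_score(data, adj)
    counts = np.zeros((d, d))
    for _ in range(n_iter):
        i, j = rng.integers(d), rng.integers(d)
        proposal = adj.copy()
        proposal[i, j] ^= 1                  # toggle one edge
        new_score = bic_score(data, proposal)
        if np.log(rng.random()) < new_score - score:
            adj, score = proposal, new_score
        counts += adj                        # model averaging over the chain
    return counts / n_iter
```

A greedy hill climber with restarts would instead commit to the single best-scoring toggle at each step; averaging edge frequencies over the whole chain is the model averaging the abstract advocates.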
28 Influence of Laminated Textile Structures on Mechanical Performance of NF-Epoxy Composites
Authors: A. R. Azrin Hani, R. Ahmad, M. Mariatti
Abstract:
Textile structures are engineered and fabricated to meet worldwide structural applications. Nevertheless, research varying the textile structure of natural fibre composite reinforcement has been very limited. Most research focuses on short fibres and random discontinuous orientation of the reinforcement structure. Given that natural fibre (NF) composites have been widely developed as replacements for synthetic fibre composites, this research examined the influence of woven and cross-ply laminated structures on their mechanical performance. Laminated natural fibre composites were developed using hand lay-up and vacuum bagging techniques. Impact and flexural strength were investigated as a function of fibre type (coir and kenaf) and reinforcement structure (imbalanced plain woven, 0°/90° cross-ply and +45°/-45° cross-ply). A multi-level full factorial design of experiment (DOE) and analysis of variance (ANOVA) were employed to determine how the fibre type and reinforcement structure parameters affect the mechanical properties of the composites. This systematic experimentation led to the determination of the significant factors that predominantly influence the impact and flexural properties of the textile composites. Both fibre type and reinforcement structure produced significantly different results. Overall, the coir composite and the woven structure exhibited better impact and flexural strength, whereas the cross-ply composite structure demonstrated better fracture resistance. Keywords: Cross-ply composite, Flexural strength, Impact strength, Textile natural fibre composite, Woven composite.
27 Rheological and Computational Analysis of Crude Oil Transportation
Authors: Praveen Kumar, Satish Kumar, Jashanpreet Singh
Abstract:
Transportation of unrefined crude oil from the production unit to a refinery or large storage area by pipeline is difficult due to the differing properties of crude in various areas. Thus, the design of a crude oil pipeline is a very complex and time-consuming process when all the various parameters are considered. Three very important parameters play a significant role in transportation and processing pipeline design: the viscosity profile, the temperature profile and the velocity profile of waxy crude oil through the pipeline. Knowledge of rheological computational techniques is required for better understanding the flow behavior and predicting the flow profile in a crude oil pipeline. From these profile parameters, the material and the emulsion best suited for crude oil transportation can be predicted. The rheological computational fluid dynamic technique is a fast method for designing the flow profile in a crude oil pipeline with the help of computational fluid dynamics and rheological modeling. With this technique, the effect of fluid properties, including shear rate range with temperature variation, degree of viscosity, elastic modulus and viscous modulus, was evaluated under different conditions in a transport pipeline. In this paper, two crude oil samples were used, as well as emulsions prepared with natural and synthetic additives at concentrations ranging from 1,000 ppm to 3,000 ppm. The rheological properties were then evaluated over a temperature range of 25 to 60 °C to determine which additive was best suited for the transportation of crude oil. Commercial computational fluid dynamics (CFD) software was used to generate the flow, velocity and viscosity profiles of the emulsions for flow behavior analysis in a crude oil transportation pipeline. This rheological CFD design can be further applied to pipeline designs in the future.
Keywords: Natural surfactant, crude oil, rheology, CFD, viscosity.
26 Study of Mechanical Properties of Glutarylated Jute Fiber Reinforced Epoxy Composites
Authors: V. Manush Nandan, K. Lokdeep, R. Vimal, K. Hari Hara Subramanyan, C. Aswin, V. Logeswaran
Abstract:
Natural fibers have attained a potential market in the composite industry because of the huge environmental impact caused by synthetic fibers. Among the natural fibers, jute fibers are the most abundant plant fibers, manufactured mainly in countries such as India. Even though there is a good motive to utilize this natural supplement, the strength of natural fiber composites is still a topic of discussion. Recently, many researchers have shown interest in the chemical modification of natural fibers to increase various mechanical and thermal properties. In the present study, jute fibers were modified chemically using glutaric anhydride at concentrations of 5%, 10%, 20%, and 30%. The glutaric anhydride solution was prepared by dissolving different quantities of glutaric anhydride in benzene and dimethyl sulfoxide using a sodium formate catalyst. The jute fiber mats were treated by the method of retting at time intervals of 3, 6, 12, 24, and 36 hours. The modified structure of the treated fibers was confirmed with infrared spectroscopy. The degree of modification increased with retention time, but higher retention times damaged the fiber structure. The unmodified fibers and the glutarylated fibers at different retention times were reinforced with an epoxy matrix at room temperature. The tensile and flexural strengths of the composites were analyzed in detail. Among these, the composites made with glutarylated fiber showed better mechanical properties than those made of unmodified fiber.
Keywords: Flexural properties, glutarylation, glutaric anhydride, tensile properties.
25 Contribution of Vitaton (β-Carotene) to the Rearing Factors, Survival Rate and Visual Flesh Color of Rainbow Trout Fish in Comparison with Astaxanthin
Authors: M. Ghotbi, M. Ghotbi, Gh. Azari Takami
Abstract:
In this study, Vitaton (an organic supplement containing fermentative β-carotene) and synthetic astaxanthin (CAROPHYLL® Pink) were evaluated as pro-growth factors in the Rainbow trout diet. An 8-week feeding trial was conducted to determine the effects of Vitaton versus astaxanthin on rearing factors, survival rate and visual flesh color of Rainbow trout (Oncorhynchus mykiss) with an initial weight of 196±5. Four practical diets were formulated to contain 50 and 80 ppm of β-carotene or astaxanthin, and a control diet was prepared without any pigment. Each diet was fed to triplicate groups of fish reared in fresh water. Fish were fed twice daily. The water temperature fluctuated from 12 to 15 °C and the dissolved oxygen content was between 7 and 7.5 mg/l during the experimental period. At the end of the experiment, growth and food utilization parameters and survival rate were unaffected by dietary treatments (p>0.05). Also, there was no significant difference in carcass yield between treatments (p>0.05). No significant difference was recognized between the visual flesh color (SalmoFan score) of fish fed the Vitaton-containing diets. In contrast, feeding on diets containing 50 and 80 ppm of astaxanthin increased the SalmoFan score (flesh astaxanthin concentration) from <20 (<1 mg/kg) to 23.33 (2.03 mg/kg) and 27.67 (5.74 mg/kg), respectively. Ultimately, a significant difference was seen between the flesh carotenoid concentrations of fish fed the astaxanthin-containing treatments and the control treatment (P<0.05). It should be mentioned that only the raw fillet color of fish in the 80 ppm astaxanthin treatment was seen to be close to the color targets (SalmoFan scores) adopted for harvest-size fish. Keywords: Astaxanthin, Flesh color, Rainbow trout, Vitaton, β-carotene.
24 Determination of Alkali Treatment Conditions Effects Which Influence the Variability of Kenaf Fiber Mean Cross Sectional Area
Authors: Mohd Yussni Hashim, Mohd Nazrul Roslan, Shahruddin Mahzan @ Mohd Zin, Saparudin Ariffin
Abstract:
The fiber cross sectional area value is a crucial factor in determining the strength properties of natural fiber. Furthermore, unlike synthetic fiber, the diameter and cross sectional area of natural fiber show a large variation along and between fibers. This study aims to determine the main and interaction effects of alkali treatment conditions that influence the mean cross sectional area of kenaf bast fiber. Three alkali treatment conditions at two levels each were selected: alkali concentration at 2 and 10 w/v %; fiber immersion temperature at room temperature and 100 °C; and fiber immersion duration of 30 and 480 minutes. Untreated kenaf fiber was used as the control unit. Kenaf bast fiber bundle mounting tabs were prepared according to ASTM C1557-03. The cross sectional area was measured using a Leica video analyzer. The results showed that the kenaf fiber bundle mean cross sectional area was reduced by 6.77% to 29.88% after alkali treatment. The analysis of variance shows that the interaction of alkali concentration and immersion time has a higher magnitude, at 0.1619, than the interaction of alkali concentration and immersion temperature, at 0.0896. For the main effects, the alkali concentration factor contributes the highest magnitude, at 0.1372, which indicates a decreasing pattern of variability when the level was changed from the lower to the higher level. It was followed by immersion temperature, at 0.1261, and immersion time, at 0.0696.
Keywords: Natural fiber, kenaf bast fiber bundles, alkali treatment, cross sectional area.
23 Perceptual Framework for a Modern Left-Turn Collision Warning System
Authors: E. Dabbour, S. M. Easa
Abstract:
Most of the collision warning systems currently available in the automotive market are mainly designed to warn against imminent rear-end and lane-changing collisions. No collision warning system is commercially available to warn against imminent turning collisions at intersections, especially left-turn collisions, where a driver attempting a left turn at either a signalized or non-signalized intersection conflicts with the path of approaching vehicles traveling in the opposite-direction traffic stream. One of the major factors that lead to left-turn collisions is human error and misjudgment by the driver of the turning vehicle when perceiving the speed and acceleration of vehicles traveling in the opposite-direction traffic stream; therefore, a properly designed collision warning system will likely reduce, or even eliminate, this type of collision by reducing human error. This paper introduces a perceptual framework for a proposed collision warning system that can detect imminent left-turn collisions at intersections. The system utilizes a commercially available detection sensor (either a radar sensor or a laser detector) to detect approaching vehicles in the opposite-direction traffic stream and calculates their speeds and acceleration rates to estimate the time-to-collision, comparing that time to the time required for the turning vehicle to clear the intersection. When calculating the time required for the turning vehicle to clear the intersection, consideration is given to the perception-reaction time of the driver of the turning vehicle, which is the time required by the driver to perceive the message given by the warning system and react to it by engaging the throttle. A regression model was developed to estimate perception-reaction time based on the age and gender of the driver of the host vehicle.
The desired acceleration rate selected by the driver of the turning vehicle when making the left-turn movement is another human factor considered by the system. Another regression model was developed to estimate this acceleration rate based on the driver's age and gender, as well as the location and speed of the nearest approaching vehicle, along with the maximum acceleration rate provided by the mechanical characteristics of the turning vehicle. By comparing the time-to-collision with the time required for the turning vehicle to clear the intersection, the system displays a message to the driver of the turning vehicle when departure is safe. An application example is provided to illustrate the logic algorithm of the proposed system. Keywords: Collision warning systems, intelligent transportation systems, vehicle safety.
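The core departure-safety comparison described above can be sketched in a few lines. This is a hedged illustration, not the authors' system: the perception-reaction time and turning acceleration, which the paper estimates from regression models on driver age and gender, are replaced here by placeholder constants, and simple constant-acceleration kinematics and an assumed safety margin are used.

```python
import math

def time_to_collision(gap_m, speed_ms, accel_ms2):
    """Time for the approaching vehicle to cover gap_m, from constant-
    acceleration kinematics: gap = v*t + 0.5*a*t^2."""
    if abs(accel_ms2) < 1e-9:
        return gap_m / speed_ms
    disc = speed_ms ** 2 + 2.0 * accel_ms2 * gap_m
    if disc < 0:
        return math.inf          # a decelerating vehicle stops short of the gap
    return (-speed_ms + math.sqrt(disc)) / accel_ms2

def clearing_time(path_m, turn_accel_ms2, perception_reaction_s):
    """Time for the turning vehicle to clear the intersection from rest:
    driver perception-reaction time plus constant-acceleration travel time."""
    return perception_reaction_s + math.sqrt(2.0 * path_m / turn_accel_ms2)

def left_turn_is_safe(gap_m, speed_ms, accel_ms2,
                      path_m=20.0, turn_accel_ms2=2.0,
                      perception_reaction_s=1.5, margin_s=0.5):
    """Departure is safe when the opposing vehicle's time-to-collision exceeds
    the turning vehicle's clearing time by a safety margin (all values here
    are placeholder assumptions, not the paper's calibrated parameters)."""
    ttc = time_to_collision(gap_m, speed_ms, accel_ms2)
    return ttc > clearing_time(path_m, turn_accel_ms2, perception_reaction_s) + margin_s
```

In the actual system, `perception_reaction_s` and `turn_accel_ms2` would be supplied by the paper's regression models rather than fixed defaults.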
22 Surfactant Stabilized Nanoemulsion: Characterization and Application in Enhanced Oil Recovery
Authors: Ajay Mandal, Achinta Bera
Abstract:
Nanoemulsions are a class of emulsions with a droplet size in the range of 50–500 nm and have attracted a great deal of attention in recent years because of their unique characteristics. The physicochemical properties of nanoemulsions suggest that they can be successfully used to recover the residual oil trapped in the fine pores of reservoir rock by capillary forces after primary and secondary recovery. Oil-in-water nanoemulsions, which can be formed by high-energy emulsification techniques using specific surfactants, can reduce the oil-water interfacial tension (IFT) by 3-4 orders of magnitude. The present work is aimed at characterizing oil-in-water nanoemulsions in terms of their phase behavior and morphology; interfacial energy; ability to reduce the interfacial tension; and the mechanisms of mobilization and displacement of entrapped oil blobs by lowering the interfacial tension at both the macroscopic and microscopic levels. In order to investigate the efficiency of oil-in-water nanoemulsions in enhanced oil recovery (EOR), experiments were performed to characterize the emulsions in terms of their physicochemical properties and the size distribution of the dispersed oil droplets in the water phase. Synthetic mineral oil and a series of surfactants were used to prepare the oil-in-water emulsions. Characterization shows that the emulsion follows pseudo-plastic behaviour and that the drop size of the dispersed oil phase follows a lognormal distribution. Flooding experiments were also carried out in a sandpack system to evaluate the effectiveness of the nanoemulsion as a displacing fluid for enhanced oil recovery. Substantial additional recoveries (more than 25% of the original oil in place) over conventional water flooding were obtained in the present investigation. Keywords: Nanoemulsion, Characterization, Enhanced Oil Recovery, Particle Size Distribution.
21 Elliptical Features Extraction Using Eigen Values of Covariance Matrices, Hough Transform and Raster Scan Algorithms
Authors: J. Prakash, K. Rajesh
Abstract:
In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme consisting of the eigenvalues of covariance matrices, the circular Hough transform and Bresenham's raster scan algorithm. In this approach we use the fact that the large and small eigenvalues of a covariance matrix are associated with the major and minor axial lengths of an ellipse. The centre location of the ellipse can be identified using the circular Hough transform (CHT). A sparse matrix technique is used to perform the CHT: since sparse matrices squeeze out zero elements and contain only a small number of nonzero elements, they provide an advantage in matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using the raster scan algorithm, which exploits the geometrical symmetry property. This method does not require the evaluation of tangents or the curvature of edge contours, which are generally very sensitive to noisy working conditions. The proposed method has the advantages of small storage, high speed and accuracy in identifying the feature. The new method has been tested on both synthetic and real images. Several experiments have been conducted on various images with considerable background noise to reveal its efficacy and robustness. Experimental results on the accuracy of the proposed method, along with comparisons with the Hough transform, its variants and other tangential-based methods, are reported. Keywords: Circular Hough transform, covariance matrix, Eigen values, ellipse detection, raster scan algorithm.
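The eigenvalue-to-axis relationship the method relies on is easy to verify numerically: for boundary points sampled uniformly in the parametric angle, the covariance eigenvalues equal a²/2 and b²/2, so the semi-axes follow as √(2λ). The sketch below illustrates only this single step under those assumptions (it is not the authors' full CHT/raster-scan pipeline, and NumPy is assumed).

```python
import numpy as np

def ellipse_axes_from_covariance(points):
    """Estimate the semi-axes and orientation of an ellipse from boundary points.

    For x = a*cos(t), y = b*sin(t) sampled uniformly in t, Var(x) = a^2/2 and
    Var(y) = b^2/2, so the large/small covariance eigenvalues give the
    major/minor axial lengths and the leading eigenvector the orientation.
    """
    cov = np.cov(points, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    minor, major = np.sqrt(2.0 * eigvals)
    v = eigvecs[:, -1]                       # eigenvector of the largest eigenvalue
    angle = np.degrees(np.arctan2(v[1], v[0])) % 180.0
    return major, minor, angle

# Demo on a synthetic ellipse: a = 5, b = 2, rotated by 30 degrees.
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
pts = np.column_stack([5 * np.cos(t), 2 * np.sin(t)])
rot = np.radians(30)
R = np.array([[np.cos(rot), -np.sin(rot)], [np.sin(rot), np.cos(rot)]])
major, minor, angle = ellipse_axes_from_covariance(pts @ R.T)
# major ≈ 5, minor ≈ 2, angle ≈ 30
```

In the paper's scheme this axial-length estimate is combined with the CHT-located centre and the raster-scan refinement of circumference pixels.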
20 Low Temperature Biological Treatment of Chemical Oxygen Demand for Agricultural Water Reuse Application Using Robust Biocatalysts
Authors: Vedansh Gupta, Allyson Lutz, Ameen Razavi, Fatemeh Shirazi
Abstract:
The agriculture industry is especially vulnerable to forecasted water shortages. In the fresh and fresh-cut produce sector, conventional flume-based washing with recirculation exhibits high water demand. This leads to a large water footprint and possible cross-contamination of pathogens. These can be alleviated through advanced water reuse processes, such as membrane technologies including reverse osmosis (RO). Water reuse technologies effectively remove dissolved constituents but can easily foul without pre-treatment. Biological treatment is effective for the removal of organic compounds responsible for fouling, but not at the low temperatures encountered at most produce processing facilities. This study showed that the Microvi MicroNiche Engineering (MNE) technology effectively removes organic compounds (>80%) at low temperatures (6-8 °C) from wash water. The MNE technology uses synthetic microorganism-material composites with negligible solids production, making it advantageously situated as an effective bio-pretreatment for RO. A preliminary techno-economic analysis showed 60-80% savings in operation and maintenance costs (OPEX) when using the Microvi MNE technology for organics removal. This study and the accompanying economic analysis indicated that the proposed technology will substantially reduce the cost barrier for adopting water reuse practices, thereby contributing to increased food safety and furthering sustainable water reuse processes across the agricultural industry.
Keywords: Biological pre-treatment, innovative technology, vegetable processing, water reuse, agriculture, reverse osmosis, MNE biocatalysts.
19 Assessing Storage Stability and Mercury Reduction of Freeze-Dried Pseudomonas putida within Different Types of Lyoprotectant
Authors: A. A. M. Azoddein, Y. Nuratri, A. B. Bustary, F. A. M. Azli, S. C. Sayuti
Abstract:
Pseudomonas putida is a potential strain for biological treatment to remove mercury contained in the effluent of the petrochemical industry, due to its mercury reductase enzyme, which is able to reduce ionic mercury to elemental mercury. Freeze-dried P. putida allows easy, inexpensive shipping and handling and high product stability. This study aimed to freeze-dry P. putida cells with the addition of a lyoprotectant, which was added to the cell suspension prior to freezing. The dried P. putida obtained was then mixed with synthetic mercury. The viability of P. putida recovered after freeze-drying was significantly influenced by the type of lyoprotectant. Among the lyoprotectants, Tween 80/sucrose was found to be the best. Sucrose was able to recover more than 78% (6.2E+09 CFU/ml) of the original cells (7.90E+09 CFU/ml) after freeze-drying and retained 5.40E+05 viable cells after 4 weeks of storage at 4 °C without vacuum. Polyethylene glycol (PEG) pre-treated and broth pre-treated cells recovered more than 64% (5.0E+09 CFU/ml) and more than 0.1% (5.60E+07 CFU/ml) after freeze-drying, respectively. Freeze-dried P. putida cells in PEG and broth did not survive 4 weeks of storage. Freeze-drying also did not substantially change the growth pattern of P. putida, but a 1-hour extension of the lag time was found after 3 weeks of storage. Additional time was thus required for freeze-dried P. putida cells to recover before introducing them to more demanding conditions such as mercury solution. The maximum mercury reduction of PEG pre-treated freeze-dried cells after freeze-drying and after 3 weeks of storage was 56.78% and 17.91%, respectively. The maximum mercury reduction of Tween 80/sucrose pre-treated freeze-dried cells after freeze-drying and after 3 weeks of storage was 26.35% and 25.03%, respectively. Freeze-dried P. putida was found to have lower mercury reduction than fresh P. putida grown on agar.
Results from this study may serve as a beneficial initial reference before commercializing freeze-dried P. putida.
Keywords: Pseudomonas putida, freeze-dry, PEG, Tween 80/sucrose, mercury, cell viability.
18 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things
Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker
Abstract:
Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes the data are conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, in order to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors needed for combination, so that computational efficiency could be improved. A cumulative sum (CUSUM) test is then applied to the pignistic probability ratio to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes. Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data.
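The detection core of such a framework can be sketched as a one-sided CUSUM on the Gaussian log-likelihood ratio, where the KL divergence between post- and pre-change distributions sets the expected detection delay (roughly threshold/KL). This single-sensor sketch is illustrative only: the evidence-combination layer described in the abstract (mass functions, combination rules, pignistic transform) is omitted, and all names and parameter values are assumptions.

```python
import random

def gaussian_llr(x, mu0, mu1, sigma):
    """Log-likelihood ratio log p1(x)/p0(x) for two Gaussians sharing sigma."""
    return ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)

def cusum_detect(samples, mu0, mu1, sigma, threshold):
    """One-sided CUSUM: accumulate the log-likelihood ratio, reset at zero,
    and declare a change at the first index where the statistic exceeds the
    threshold. The mean post-change drift per sample equals the KL divergence
    D(p1 || p0) = (mu1 - mu0)^2 / (2 sigma^2), so the expected detection
    delay is roughly threshold / KL."""
    s = 0.0
    for t, x in enumerate(samples):
        s = max(0.0, s + gaussian_llr(x, mu0, mu1, sigma))
        if s > threshold:
            return t
    return None  # no change declared

# Demo: mean shift from 0 to 1 at index 50, unit variance.
random.seed(7)
change_point = 50
data = [random.gauss(0.0, 1.0) for _ in range(change_point)] + \
       [random.gauss(1.0, 1.0) for _ in range(100)]
alarm = cusum_detect(data, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0)
```

In the multi-sensor framework, the statistic fed to the test would instead come from the combined (pignistic) probabilities rather than a single sensor's likelihood ratio.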