Search results for: adaptive estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2879

1739 Efficacy of Agrobacterium Tumefaciens as a Possible Entomopathogenic Agent

Authors: Fouzia Qamar, Shahida Hasnain

Abstract:

The objective of the present study was to evaluate the role of Agrobacterium tumefaciens as a possible insect biocontrol agent. Pests selected for the present challenge were adult males of Periplaneta americana and last instar larvae of Pieris brassicae and Spodoptera litura. Different ranges of bacterial doses were selected and tested to score insect mortalities after 24 hours for the lethal dose estimation studies. The bacteria were inoculated by the microinjection technique. Evaluation of the possible entomopathogenic attribute carried by the bacterial Ti plasmid led to the conclusion that loss of the plasmid was associated with loss of virulence against the target insects.

Keywords: agrobacterium tumefaciens, toxicity assessment, biopesticidal attribute, entomopathogenic agent

Procedia PDF Downloads 379
1738 Kinetic Studies on CO₂ Gasification of Low and High Ash Indian Coals in Context of Underground Coal Gasification

Authors: Geeta Kumari, Prabu Vairakannu

Abstract:

Underground coal gasification (UCG) is an efficient and economical in-situ clean coal technology that converts unmineable coals into gases of calorific value. The technology avoids ash disposal, coal mining, and storage problems. CO₂ gas can be a potential gasifying medium for UCG. CO₂ is a greenhouse gas, and its liberation to the atmosphere from thermal power plants contributes to global warming; hence, the capture and reutilization of CO₂ are crucial for clean energy production. However, the reactivity of high ash Indian coals with CO₂ needs to be assessed. In the present study, two varieties of Indian coals (low ash and high ash) are used for thermogravimetric analysis (TGA). Two low ash north east Indian coals (LAC) and a typical high ash Indian coal (HAC) were procured from the coal mines of India: low ash coals with 9% ash (LAC-1) and 4% ash (LAC-2), and a high ash coal (HAC) with 42% ash. TGA studies are carried out to evaluate the activation energy for pyrolysis and gasification of coal under N₂ and CO₂ atmospheres. The Coats and Redfern method, assuming a volumetric model, is used to estimate the activation energy over different temperature regimes. The inherent properties of the coals play a major role in their reactivity. The results show that the activation energy decreases with decreasing ash percentage, owing to the hindrance of the ash layer. A reverse trend was observed with volatile matter: coals with high volatile matter yield lower estimated activation energies. The activation energy at 400-600°C under a CO₂ atmosphere is lower than under an inert N₂ atmosphere; in this temperature range, a 15-23% reduction in activation energy is estimated under CO₂.
This indicates the reactivity of CO₂ with the higher hydrocarbons of the coal volatile matter, which might occur through dry reforming reactions in which CO₂ reacts with the higher hydrocarbons and aromatics of the tar content. The observed trend of Ea in the temperature ranges of 150-200°C and 400-600°C is HAC > LAC-1 > LAC-2 in both N₂ and CO₂ atmospheres. In the 850-1000°C range, higher activation energies are estimated than in the 400-600°C range. Above 800°C, char gasification through the Boudouard reaction progresses under a CO₂ atmosphere; the activation energy for char gasification above 800°C is 8-20 kJ/mol higher than for volatile matter pyrolysis at 400-600°C. The overall activation energy of the coals over 30-1000°C is higher under an N₂ atmosphere than under CO₂. It can be concluded that higher hydrocarbons such as tar effectively undergo cracking and reforming reactions in the presence of CO₂. Thus, CO₂ gas is beneficial for the production of high calorific value syngas from high ash Indian coals.
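The Coats-Redfern estimate described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' code: it assumes a first-order volumetric model, linearizes ln[-ln(1-α)/T²] against 1/T, and recovers the activation energy from the slope. The synthetic conversion data and the 80 kJ/mol value are invented for the check.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def coats_redfern_ea(T, alpha):
    """Activation energy (J/mol) from TGA conversion data via the
    Coats-Redfern linearization of a first-order (volumetric) model:
    ln[-ln(1 - alpha) / T^2] = const - Ea / (R T)."""
    y = np.log(-np.log(1.0 - alpha) / T**2)
    slope = np.polyfit(1.0 / T, y, 1)[0]   # slope of y vs 1/T
    return -slope * R

# Synthetic conversions generated from a known Ea of 80 kJ/mol
T = np.linspace(700.0, 900.0, 25)          # kelvin (~400-630 °C)
alpha = 1.0 - np.exp(-T**2 * np.exp(-2.0 - 80e3 / (R * T)))
print(round(coats_redfern_ea(T, alpha) / 1e3, 1))  # recovers ~80.0 kJ/mol
```

The same fit, applied separately to the 400-600°C and 850-1000°C conversion windows, is how regime-wise activation energies like those reported above are obtained.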

Keywords: clean coal technology, CO₂ gasification, activation energy, underground coal gasification

Procedia PDF Downloads 171
1737 Major Histocompatibility Complex (MHC) Polymorphism and Disease Resistance

Authors: Oya Bulut, Oguzhan Avci, Zafer Bulut, Atilla Simsek

Abstract:

Livestock breeders have focused on the improvement of production traits, with little or no attention to the improvement of disease resistance traits. The MHC (major histocompatibility complex) is frequently used to determine the association between the genetic structure of individual gene loci and the occurrence and development of diseases. Because of its importance in the immune system, the MHC locus is considered a candidate gene region for resistance or susceptibility to different diseases. MHC molecules play a critical role in both innate and adaptive immunity, and their polymorphisms have been considered candidate molecular markers of resistance or susceptibility to disease. The purpose of this study is to review why MHC genes have become an important area of study in animal husbandry in recent years and to examine the relation between MHC genes and resistance or susceptibility to disease.

Keywords: MHC, polymorphism, disease, resistance

Procedia PDF Downloads 631
1736 The Normal-Generalized Hyperbolic Secant Distribution: Properties and Applications

Authors: Hazem M. Al-Mofleh

Abstract:

In this paper, a new four-parameter univariate continuous distribution, the Normal-Generalized Hyperbolic Secant distribution (NGHS), is defined and studied. Some general and structural distributional properties are investigated and discussed, including central and non-central n-th moments and incomplete moments, quantile and generating functions, the hazard function, Rényi and Shannon entropies, shapes (skewed right, skewed left, and symmetric), modality regions (unimodal and bimodal), and maximum likelihood (MLE) estimators for the parameters. Finally, two real data sets are used to demonstrate empirically the flexibility and strength of the new distribution.

Keywords: bimodality, estimation, hazard function, moments, Shannon’s entropy

Procedia PDF Downloads 350
1735 Localization of Radioactive Sources with a Mobile Radiation Detection System using Profit Functions

Authors: Luís Miguel Cabeça Marques, Alberto Manuel Martinho Vale, José Pedro Miragaia Trancoso Vaz, Ana Sofia Baptista Fernandes, Rui Alexandre de Barros Coito, Tiago Miguel Prates da Costa

Abstract:

The detection and localization of hidden radioactive sources are of significant importance in countering the illicit traffic of Special Nuclear Materials (SNM) and other radioactive sources and materials. Radiation portal monitors are commonly used at airports, seaports, and international land borders for inspecting cargo and vehicles. However, this equipment can be expensive and is not available at all checkpoints, so the localization of SNM and other radioactive sources often relies on handheld equipment, which can be time-consuming. The current study presents the advantages of real-time analysis of gamma-ray count rate data from a mobile radiation detection system, based on simulated data and field tests. The incorporation of profit functions and decision criteria to optimize the detection system's path significantly enhances the radiation field information and reduces survey time during cargo inspection. For source position estimation, a maximum likelihood estimation algorithm is employed, and confidence intervals are derived using the Fisher information. The study also explores the impact of uncertainties, baselines, and thresholds on the performance of the profit function. The proposed detection system, utilizing a plastic scintillator with silicon photomultiplier sensors, boasts several benefits, including cost-effectiveness, high geometric efficiency, compactness, and lightweight design. This versatility allows for seamless integration into any mobile platform, be it air, land, maritime, or hybrid, and it can also serve as a handheld device. Furthermore, the integration of the detection system into drones, particularly multirotors, together with its affordability, enables the automation of source search and a substantial reduction in survey time, particularly when deploying a fleet of drones.
While the primary focus is on inspecting maritime container cargo, the methodologies explored in this research can be applied to the inspection of other infrastructures, such as nuclear facilities or vehicles.
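The maximum likelihood position estimate with a Fisher-information confidence interval, as described in the abstract, can be sketched in one dimension. Everything below is a hypothetical stand-in (the inverse-square count-rate model, source strength, background rate, and dwell times are invented), not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_rate(x_det, x_src, strength, bkg, standoff=1.0):
    """Inverse-square count-rate model for a point source (counts/s)."""
    return strength / ((x_det - x_src) ** 2 + standoff ** 2) + bkg

# Simulated survey: detector samples counts along a line past a source at x = 4 m
x_det = np.linspace(0.0, 10.0, 101)
counts = rng.poisson(expected_rate(x_det, 4.0, strength=200.0, bkg=5.0))

# Poisson negative log-likelihood over a grid of candidate source positions
grid = np.linspace(0.0, 10.0, 1001)
nll = np.array([np.sum(expected_rate(x_det, s, 200.0, 5.0)
                       - counts * np.log(expected_rate(x_det, s, 200.0, 5.0)))
                for s in grid])
i = int(np.argmin(nll))
x_hat = grid[i]

# Observed Fisher information (curvature of the NLL) -> approx. 95% CI
h = grid[1] - grid[0]
fisher = (nll[i - 1] - 2 * nll[i] + nll[i + 1]) / h ** 2
ci = 1.96 / np.sqrt(fisher)
print(f"source at {x_hat:.2f} +/- {ci:.2f} m")
```

A profit function in the spirit of the paper would then score candidate waypoints by the expected information gain per unit travel time, steering the platform toward the narrowing confidence interval.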

Keywords: plastic scintillators, profit functions, path planning, gamma-ray detection, source localization, mobile radiation detection system, security scenario

Procedia PDF Downloads 116
1734 Real-Time Adaptive Obstacle Avoidance with DS Method and the Influence of Dynamic Environments Change on Different DS

Authors: Saeed Mahjoub Moghadas, Farhad Asadi, Shahed Torkamandi, Hassan Moradi, Mahmood Purgamshidian

Abstract:

In this paper, we present a real-time obstacle avoidance approach, based on the dynamical systems (DS) method, for both autonomous and non-autonomous DS-based controllers. In this approach, the original dynamics of the controller are modulated, which allows us to set a safety margin and use different types of DS to increase the robot's reactiveness in the face of uncertainty in the localization of the obstacle, especially when the robot moves very fast in changing, complex environments. The method is validated in simulation, and the influence on the algorithm of different autonomous and non-autonomous DS, such as limit cycles and unstable DS, as well as of the positions of different obstacles in a complex environment, is explained. Finally, we describe how the avoidance trajectories can be verified through different parameters, such as the safety factor.

Keywords: limit cycles, nonlinear dynamical system, real time obstacle avoidance, DS-based controllers

Procedia PDF Downloads 389
1733 Optimizing Protection of Medieval Glass Mosaic

Authors: J. Valach, S. Pospisil, S. Kuznecov

Abstract:

The paper deals with experimental estimation of the future environmental load on the medieval mosaic of the Last Judgement at the entrance to St. Vitus Cathedral at Prague Castle. The mosaic suffers from seasonal changes in weather patterns, as well as from rain and its acidity, deposition of dust and soot particles from polluted air, and freeze-thaw cycles. These phenomena influence the state of the mosaic, whose elements, the tesserae, are mostly made from glass prone to weathering. To determine the best future maintenance procedure, the relation between various weather scenarios and their effect on the mosaic was investigated. At the same time, a local method for evaluation of protective coatings was developed. Together, both methods will contribute to better care for the mosaic and to visitors' aesthetic experience.

Keywords: environmental load, cultural heritage, glass mosaic, protection

Procedia PDF Downloads 280
1732 The Impact of Artificial Intelligence on E-Learning

Authors: Sameil Hanna Samweil Botros

Abstract:

The variety of social networking websites in higher education has garnered significant interest in recent years, with numerous researchers considering it a possible shift from the conventional classroom-based learning paradigm. Despite this growth in research and applied studies, however, the adoption of SNS-based modules has not proliferated within universities. This paper begins by analyzing the various models and theories proposed in the literature and amalgamates the effective aspects of including social technology within e-learning. A three-phased framework is then proposed, which sets out the key considerations for the successful adoption of SNS in improving the student learning experience. The proposal outlines the theoretical foundations to be analyzed in practical implementations across university campuses worldwide.

Keywords: e-learning, higher education, social network sites, student learning, institutionalization, teaching and learning

Procedia PDF Downloads 25
1731 Estimation of Delay Due to Loading–Unloading of Passengers by Buses and Reduction of Number of Lanes at Selected Intersections in Dhaka City

Authors: Sumit Roy, A. Uddin

Abstract:

One of the significant causes of increased delay at intersections under heterogeneous traffic conditions is a sudden reduction in road capacity. In this study, the delay due to such sudden capacity reduction is estimated. Two intersections in Dhaka city were included in the study: the Kakrail intersection and the SAARC Foara intersection. At the Kakrail intersection, sudden capacity reduction is seen at three downstream legs of the intersection, caused by buses slowing down or stopping to load and unload passengers. At the SAARC Foara intersection, sudden capacity reduction was seen at two downstream legs: at one leg due to loading and unloading of buses, and at the other due to both loading and unloading of buses and a reduction in the number of lanes. With these considerations, the delay due to intentional stopping or slowing down of buses and to the reduction in the number of lanes was estimated for these two intersections. The delay was calculated by two approaches. The first approach follows from the concept of shock waves in traffic streams: the delay is calculated by determining the flow, density, and speed before and after the sudden capacity reduction. The second approach follows from deterministic queuing analysis: the delay is calculated by determining the volume, capacity, and reduced capacity of the road. After determining the delay from these two approaches, the results were compared. For this study, video of each of the two intersections was recorded for one hour at the evening peak. The necessary geometric data were also taken to determine speed, flow, density, and other parameters. The delay was calculated for one hour with one hour of data at both intersections.
In the case of the Kakrail intersection, the per-hour delays for the Kakrail circle leg were 5.79 and 7.15 minutes, for the Shantinagar cross intersection leg 13.02 and 15.65 minutes, and for the Paltan T intersection leg 3 and 1.3 minutes, for the first and second approaches respectively. In the case of the SAARC Foara intersection, the delay at the Shahbag leg was due only to intentional stopping or slowing down of buses, and was 3.2 and 3 minutes for the two approaches respectively. For the Karwan Bazar leg, the delays for buses by the two approaches were 5 and 7.5 minutes respectively, and the delays due to the reduction in the number of lanes were 2 and 1.78 minutes respectively. Measuring the per-hour delay for the Kakrail leg at Kakrail circle, it is seen that, under the first estimation approach, intentional stopping and slowing of buses contribute 26.24% of the total delay at Kakrail circle. If loading and unloading of passengers is prohibited near intersections, and facilities for it are established far enough away from them, the delay at intersections can be reduced on a significant scale and the performance of the intersections enhanced.
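The second (deterministic queuing) approach lends itself to a compact sketch. The function below computes the total delay from a temporary capacity drop as the area between the cumulative arrival and departure curves; the numeric inputs are illustrative, not the study's measured values.

```python
def total_queue_delay(v, c, c_r, r):
    """Total delay (vehicle-hours) from a temporary capacity drop, by
    deterministic (D/D/1) queuing: arrivals at rate v (veh/h), normal
    capacity c, capacity reduced to c_r for r hours (c_r < v < c)."""
    q_end = (v - c_r) * r            # queue length when capacity recovers
    t_dissipate = q_end / (c - v)    # time for the queue to clear afterwards
    return 0.5 * q_end * (r + t_dissipate)  # triangular area between curves

# Illustrative example: 1800 veh/h arriving, capacity 2000 veh/h halved
# to 900 veh/h for 3 minutes (0.05 h) while buses load passengers
d = total_queue_delay(v=1800, c=2000, c_r=900, r=0.05)
print(round(d, 4))  # 6.1875 vehicle-hours of total delay
```

Dividing the total delay by the number of queued vehicles gives the average per-vehicle delay, which is the quantity reported per leg above.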

Keywords: delay, deterministic queue analysis, shock wave, passenger loading-unloading

Procedia PDF Downloads 178
1730 Estimation of Scour Using a Coupled Computational Fluid Dynamics and Discrete Element Model

Authors: Zeinab Yazdanfar, Dilan Robert, Daniel Lester, S. Setunge

Abstract:

Scour has been identified as the most common threat to bridge stability worldwide. Traditionally, scour around bridge piers is calculated using empirical approaches that have considerable limitations and are difficult to generalize. The multi-physics nature of scouring, which involves turbulent flow, soil mechanics, and solid-fluid interactions, cannot be captured by simple empirical equations developed from limited laboratory data. These limitations can be overcome by direct numerical modeling of the coupled hydro-mechanical scour process, which provides a robust prediction of bridge scour and valuable insights into the scour process. Several numerical models have been proposed in the literature for bridge scour estimation, including Eulerian flow models and coupled Euler-Lagrange models incorporating an empirical sediment transport description. However, the contact forces between particles and the flow-particle interaction have not been taken into consideration. Incorporating collisional and frictional forces between soil particles, as well as the effect of flow-driven forces on particles, will facilitate accurate modeling of the complex nature of scour. In this study, a coupled Computational Fluid Dynamics and Discrete Element Model (CFD-DEM) has been developed to simulate the scour process by directly modeling the hydro-mechanical interactions between the sediment particles and the flowing water. This approach obviates the need for an empirical description, as the fundamental fluid-particle and particle-particle interactions are fully resolved. The sediment bed is simulated as a dense pack of particles, and the frictional and collisional forces between particles are calculated, whilst the turbulent fluid flow is modeled using a Reynolds-Averaged Navier-Stokes (RANS) approach. The CFD-DEM model is validated against experimental data in order to assess its reliability.
The modeling results reveal the criticality of particle impact in the assessment of scour depth, which, to the authors' best knowledge, has not been considered in previous studies. The results of this study open new perspectives on scour depth and time assessment, which is key to managing the failure risk of bridge infrastructure.

Keywords: bridge scour, discrete element method, CFD-DEM model, multi-phase model

Procedia PDF Downloads 131
1729 Unsupervised Assistive and Adaptive Intelligent Agent in Smart Environment

Authors: Sebastião Pais, João Casal, Ricardo Ponciano, Sérgio Lourenço

Abstract:

The adaptation paradigm is a basic defining feature of pervasive computing systems. Adaptation systems must work efficiently in a smart environment while providing suitable information relevant to the user-system interaction. The key objective is to deduce the information needed as that information changes; relying on fixed operational models would therefore be inappropriate. This paper presents a study on developing an Intelligent Personal Assistant to assist the user in interacting with their Smart Environment. We propose an Unsupervised and Language-Independent Adaptation through an Intelligent Speech Interface, together with a set of knowledge acquisition methods, namely Semantic Similarity and Unsupervised Learning.

Keywords: intelligent personal assistants, intelligent speech interface, unsupervised learning, language-independent, knowledge acquisition, association measures, symmetric word similarities, attributional word similarities

Procedia PDF Downloads 644
1728 Alwadei Syndrome - A Genetic Cause Of Intellectual Disability

Authors: Mafalda Moreira, Diana Alba, Inês Paiva Ferreira, Rita Calejo, Ana Rita Soares, Leonilde Machado

Abstract:

Intellectual disability (ID) is characterized by deficits in intellectual functioning associated with alterations in adaptive behaviour, with onset in the developmental period. It affects 3% of the population, of which 10% have a genetic aetiology. One of those causes is Alwadei Syndrome, with 3 cases described worldwide. It results from a homozygous nonsense mutation in the RUSC2 gene and is associated with intellectual disability and dysmorphic facial features. The authors report the case of a 5-year-old boy, born to a healthy mother after a full-term uneventful pregnancy, who was referred to a Neurodevelopmental consultation due to global developmental delay. Family history revealed learning difficulties in the paternal brotherhood. Mild dysmorphic features were evident, such as a dark infraorbital region, low-set ears, beaked nose, retrognathism, high-arched palate, and joint hyperlaxity. The Wechsler Intelligence Scale for Children III full-scale IQ quoted 61. Karyotype and chromosomal microarray analysis were normal, as was the fragile X molecular study. DNA sequencing was then performed and allowed the identification of a mutation in the RUSC2 gene. The etiological diagnosis of ID remains unknown in up to 80% of cases, creating uncertainty for children's families. Advances in DNA sequencing technologies have increased our knowledge of the genetic diseases involved; Alwadei syndrome itself was only first described in 2016. The genetic diagnosis of ID allows family genetic counseling and enables the development of targeted therapeutic approaches.

Keywords: intellectual disability, genetic aetiology, alwadei syndrome, RUSC2

Procedia PDF Downloads 179
1727 Comparison of ANFIS Update Methods Using Genetic Algorithm, Particle Swarm Optimization, and Artificial Bee Colony

Authors: Michael R. Phangtriastu, Herriyandi Herriyandi, Diaz D. Santika

Abstract:

This paper presents a comparison of metaheuristic algorithms used to train the antecedent and consequent parameters of the adaptive network-based fuzzy inference system (ANFIS). The algorithms compared are the genetic algorithm (GA), particle swarm optimization (PSO), and the artificial bee colony (ABC). The objective of this paper is to benchmark these well-known metaheuristic algorithms, which are applied to several data sets of different natures. Combinations of the algorithms' parameters are tested: different population sizes in all algorithms, different velocity settings in PSO, and different abandonment limits in ABC. The experiments show that ABC is more reliable than the other algorithms, achieving a better mean square error (MSE) than the other algorithms on all data sets.

Keywords: ANFIS, artificial bee colony, genetic algorithm, metaheuristic algorithm, particle swarm optimization

Procedia PDF Downloads 352
1726 Numerical Modelling of Effective Diffusivity in Bone Tissue Engineering

Authors: Ayesha Sohail, Khadija Maqbool, Anila Asif, Haroon Ahmad

Abstract:

The field of tissue engineering is an active area of research. Bone tissue engineering helps to resolve the clinical problems of critical-size and non-healing defects through the creation of man-made bone tissue. We will design and validate an efficient numerical model to simulate the effective diffusivity in bone tissue engineering. Our numerical model will be based on finite element analysis of the diffusion-reaction equations. It will be able to optimize the diffusivity, even at multiple scales, as it varies with time. It will also have a special feature with which we will not only predict the oxygen, glucose, and cell density dynamics more accurately, but also resolve the issues arising from anisotropy. We will address these problems by modifying the governing equations, selecting appropriate spatio-temporal finite element schemes, applying an adaptive grid refinement strategy, and performing transient analysis.
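A much-reduced stand-in for the diffusion-reaction modeling described above: a 1-D finite-difference relaxation of dc/dt = D d²c/dx² - k·c for an oxygen-like species in a scaffold slab. The paper uses finite elements and multi-scale geometry; the scheme and the nondimensional coefficients here are invented for illustration.

```python
import numpy as np

def oxygen_profile(n=51, D=1.0, k=5.0, c0=1.0, steps=50000, dt=1e-5):
    """Relax dc/dt = D d2c/dx2 - k*c on the unit slab to near-steady
    state by explicit time-stepping (finite-difference stand-in for a
    finite element scheme); concentration fixed at c0 on both faces."""
    dx = 1.0 / (n - 1)
    c = np.full(n, c0)
    for _ in range(steps):
        lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
        c += dt * (D * lap - k * c)     # diffusion minus consumption
        c[0] = c[-1] = c0               # Dirichlet boundary conditions
    return c

c = oxygen_profile()
print(round(c[25], 3))  # minimum concentration at the slab centre
```

The analytic steady state is c = c0·cosh(m(x - 1/2))/cosh(m/2) with m = sqrt(k/D), so the centre value here should relax toward roughly 0.59, a quick consistency check for the scheme.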

Keywords: scaffolds, porosity, diffusion, transient analysis

Procedia PDF Downloads 541
1725 Parameters Estimation of Power Function Distribution Based on Selective Order Statistics

Authors: Moh'd Alodat

Abstract:

In this paper, we discuss the power function distribution and derive the maximum likelihood estimator of its parameter as well as the reliability parameter. We derive the large sample properties of the estimators based on the selective order statistic scheme. We conduct simulation studies to investigate the significance of the selective order statistic scheme in our setup and to compare the efficiency of the new proposed estimators.
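For the standard (simple random sampling) case, the MLE of the power function distribution's shape parameter has a closed form, which the sketch below illustrates. This is not the paper's selective order statistic scheme, and the α = 2 sample is synthetic:

```python
import math
import random

def power_mle(sample):
    """MLE of alpha for the power function density
    f(x; alpha) = alpha * x**(alpha - 1), 0 < x < 1:
    alpha_hat = -n / sum(log(x_i))."""
    return -len(sample) / sum(math.log(x) for x in sample)

def reliability(t, alpha):
    """Reliability P(X > t) = 1 - t**alpha for the power function law."""
    return 1.0 - t ** alpha

# Synthetic sample with alpha = 2, via inverse transform: X = U**(1/alpha)
random.seed(1)
sample = [random.random() ** 0.5 for _ in range(10000)]
print(round(power_mle(sample), 2))          # close to the true value 2.0
print(reliability(0.5, 2))                  # P(X > 0.5) = 0.75
```

A selective order statistic scheme would instead base the likelihood on chosen ranked observations, trading sample size for the extra Fisher information those ranks carry.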

Keywords: fisher information, maximum likelihood estimator, power function distribution, ranked set sampling, selective order statistics sampling

Procedia PDF Downloads 464
1724 An Investigation on Hot-Spot Temperature Calculation Methods of Power Transformers

Authors: Ahmet Y. Arabul, Ibrahim Senol, Fatma Keskin Arabul, Mustafa G. Aydeniz, Yasemin Oner, Gokhan Kalkan

Abstract:

The standards IEC 60076-2 and IEC 60076-7 suggest three different hot-spot temperature estimation methods. In this study, the algorithms used in hot-spot temperature calculations are analyzed by comparing them with the results of an experimental set-up built around a Transformer Monitoring System (TMS) in use. In the tested system, the TMS uses only the top-oil temperature and the load ratio for the hot-spot temperature calculation, together with constants taken from the agreed-statement tables of the standards. The tests showed that this hot-spot temperature calculation method performs only a simple calculation and does not use all the other significant variables that could affect the hot-spot temperature.
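The "simple calculation" referred to above can be illustrated with a steady-state sketch in the spirit of the IEC 60076-7 thermal model: hot-spot temperature = measured top-oil temperature plus the hot-spot-to-top-oil gradient scaled by a power of the load ratio. The constants below are illustrative placeholders, not values from the tested transformer or from the standard's tables.

```python
def hot_spot_temperature(theta_top_oil, load_ratio, H=1.3, g_r=14.5, y=1.3):
    """Simplified steady-state hot-spot estimate:
    theta_h = theta_top_oil + H * g_r * K**y, where K is the load ratio
    and H, g_r, y are agreed-statement constants (illustrative defaults)."""
    return theta_top_oil + H * g_r * load_ratio ** y

# Example: 65 °C measured top-oil temperature at rated load (K = 1)
print(hot_spot_temperature(theta_top_oil=65.0, load_ratio=1.0))  # 83.85 °C
```

The study's point is that such a formula ignores variables like ambient temperature dynamics, cooling mode, and winding-specific effects, which a monitoring system could in principle measure.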

Keywords: Hot-spot temperature, monitoring system, power transformer, smart grid

Procedia PDF Downloads 573
1723 Residual Life Estimation of K-out-of-N Cold Standby System

Authors: Qian Zhao, Shi-Qi Liu, Bo Guo, Zhi-Jun Cheng, Xiao-Yue Wu

Abstract:

Cold standby redundancy is considered an effective mechanism for improving system reliability and is widely used in industrial engineering. However, because of the complexity of the reliability structure, there is little literature on the residual life of cold standby systems consisting of complex components. In this paper, a simulation method is presented to predict the residual life of a k-out-of-n cold standby system. In practical cases, the failure information of a system is either unknown, partly known, or completely known; our proposed method is designed to deal with each of these three scenarios, and the differences between the procedures are analyzed. Finally, numerical examples are used to validate the proposed simulation method.
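A minimal Monte Carlo sketch of the completely-known-information scenario is given below. Exponential component lives and instant switching are assumed purely for illustration; the paper's method covers more general reliability structures:

```python
import random

def system_life(n, k, mean_life):
    """One Monte Carlo draw of a k-out-of-n cold standby system lifetime:
    k components run simultaneously, the n-k spares are cold (no aging)
    and replace failures instantly; component lives are exponential."""
    active = [random.expovariate(1.0 / mean_life) for _ in range(k)]
    spares = n - k
    t = 0.0
    while True:
        dt = min(active)                 # time to the next active failure
        i = active.index(dt)
        active = [a - dt for a in active]
        t += dt
        if spares == 0:
            return t                     # no spare left: system fails
        spares -= 1
        active[i] = random.expovariate(1.0 / mean_life)  # swap in a spare

def residual_life(n, k, mean_life, age, runs=10000):
    """Mean residual life E[T - age | T > age] by conditional simulation."""
    lives = [system_life(n, k, mean_life) for _ in range(runs)]
    survivors = [t - age for t in lives if t > age]
    return sum(survivors) / len(survivors)

random.seed(0)
print(round(residual_life(n=4, k=2, mean_life=100.0, age=50.0), 1))
```

With exponential lives the system lifetime here is Erlang (n - k + 1 inter-failure stages at rate k/mean_life), which gives a closed-form check on the simulated residual life.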

Keywords: cold standby system, k-out-of-n, residual life, simulation sampling

Procedia PDF Downloads 401
1722 Estimation of Antiurolithiatic Activity of a Biochemical Medicine, Magnesia phosphorica, in Ethylene Glycol-Induced Nephrolithiasis in Wistar Rats by Urine Analysis, Biochemical, Histopathological, and Electron Microscopic Studies

Authors: Priti S. Tidke, Chandragouda R. Patil

Abstract:

The present study was designed to investigate the effect of Magnesia phosphorica, a biochemic medicine, on urine screening, biochemical, histopathological, and electron microscopic findings in ethylene glycol-induced nephrolithiasis in rats. Male Wistar albino rats were divided into six groups and were orally administered saline once daily (IR-sham and IR-control) or Magnesia phosphorica 100 mg/kg twice daily for 24 days. The effect of various dilutions of biochemic Mag phos (3x, 6x, 30x) on urine output was determined by comparing the urine volumes collected by keeping individual animals in metabolic cages. Calcium oxalate urolithiasis and hyperoxaluria in male Wistar rats were induced by oral administration of 0.75% ethylene glycol p.o. daily for 24 days. Simultaneous administration of biochemic Mag phos 3x, 6x, or 30x (100 mg/kg p.o. twice a day) along with ethylene glycol significantly decreased the calcium oxalate, urea, creatinine, calcium, magnesium, chloride, phosphorus, albumin, and alkaline phosphatase content in urine compared with the vehicle-treated control group. After completion of the treatment period, the animals were sacrificed and the kidneys removed and subjected to microscopic examination for possible stone formation. Histological examination of kidneys treated with biochemic Mag phos (3x, 6x, 30x; 100 mg/kg, p.o.) along with ethylene glycol showed inhibited growth of calculi and a reduced number of kidney stones compared with the control group. Biochemic Mag phos at 3x dilution and its crude equivalent also showed potent diuretic and antiurolithiatic activity in ethylene glycol-induced urolithiasis. A significant decrease in the weight of stones was observed after treatment in animals that received biochemic Mag phos at 3x dilution and its crude equivalent, in comparison with the control groups.
From this study, it can be proposed that the 3x dilution of biochemic Mag phos exhibits a significant inhibitory effect on crystal growth, with improvement of kidney function, and substantiates claims about the biological activity of the twelve tissue remedies, which can be tested scientifically through laboratory animal studies.

Keywords: Mag phos, Magnesia phosphorica, biochemic medicine, urolithiasis, kidney stone, ethylene glycol

Procedia PDF Downloads 428
1721 State and Benefit: Delivering the First State of the Bays Report for Victoria

Authors: Scott Rawlings

Abstract:

Victoria’s first State of the Bays report is an historic baseline study of the health of Port Phillip Bay and Western Port. The report includes 50 assessments of 36 indicators across a broad array of topics, from the nitrogen cycle and water quality to key marine species and habitats. This paper discusses the processes for determining and assessing the indicators and comments on future priorities identified to maintain and improve the health of these waterways. Victoria’s population is now at six million, growing at a rate of over 100,000 people per year - the highest increase in Australia - and the population of greater Melbourne is over four million. Port Phillip Bay and Western Port are vital marine assets at the centre of this growth and will require adaptive strategies if they are to remain in good condition and continue to deliver environmental, economic and social benefits. In 2014, it was in recognition of these pressures that the incoming Victorian Government committed to reporting on the state of the bays every five years. The inaugural State of the Bays report was issued by the independent Victorian Commissioner for Environmental Sustainability. The report brought together what is known about both bays, based on existing research. It is a baseline on which future reports will build and which, over time, will include more of Victoria’s marine environment. Port Phillip Bay and Western Port generally demonstrate healthy systems. Specific threats linked to population growth are a significant pressure. Impacts are more significant where human activity is more intense and where nutrients are transported to the bays around the mouths of creeks and drainage systems. The transport of high loads of nutrients and pollutants to the bays from peak rainfall events is likely to increase with climate change - as will sea level rise. Marine pests are also a threat.
More than 100 introduced marine species have become established in Port Phillip Bay and can compete with native species, alter habitat, reduce important fish stocks and potentially disrupt nitrogen cycling processes. This study confirmed that our data collection regime is better within the Marine Protected Areas of Port Phillip Bay than in other parts. The State of the Bays report is a positive and practical example of what can be achieved through collaboration and cooperation between environmental reporters, Government agencies, academic institutions, data custodians, and NGOs. The State of the Bays 2016 provides an important foundation by identifying knowledge gaps and research priorities for future studies and reports on the bays. It builds a strong evidence base to effectively manage the bays and support an adaptive management framework. The Report proposes a set of indicators for future reporting that will support a step-change in our approach to monitoring and managing the bays – a shift from reporting only on what we do know, to reporting on what we need to know.

Keywords: coastal science, marine science, Port Phillip Bay, state of the environment, Western Port

Procedia PDF Downloads 210
1720 Energy Recovery from Swell with a Height Inferior to 1.5 m

Authors: A. Errasti, F. Doffagne, O. Foucrier, S. Kao, A. Meigne, H. Pellae, T. Rouland

Abstract:

Renewable energy recovery has been an important domain of research in recent years in view of the protection of our ecosystem. Several industrial companies are setting up widespread recovery systems to exploit wave energy. Most of them are large, are implanted near the shore and exploit current flows. However, as oceans cover 70% of the Earth's surface, a huge space is still unexploited for energy production. The present analysis focuses on small-scale surface wave energy recovery. The principle is exactly the opposite of a wheel damper for a car on a road. Instead of keeping the car body as non-oscillatory as possible through adapted control, the system is designed so that its oscillation amplitude under wave action is maximized with respect to the boat carrying it, in view of recovering the differential potential energy. From a parametric analysis of the system equations, interesting operating domains have been selected and the expected energy output has been evaluated.

Keywords: small scale wave, potential energy, optimized energy recovery, auto-adaptive system

Procedia PDF Downloads 260
1719 Exploring Suitable SSD Allocation Schemes in Compliance with Workload Patterns

Authors: Jae Young Park, Hwansu Jung, Jong Tae Kim

Abstract:

Whether the data have been well parallelized is an important factor in Solid-State-Drive (SSD) performance. SSD parallelism is affected by the allocation scheme, which is directly connected to SSD performance. The representative allocation schemes are dynamic allocation and static allocation. Dynamic allocation is more adaptive in exploiting write-operation parallelism, while static allocation is better for read-operation parallelism. It is therefore hard to select the appropriate allocation scheme when the workload mixes read and write operations. We simulated a few mixed data patterns and analyzed the results to help make the right choice for better performance. The results show that static allocation is more suitable when the data arrival interval is long enough for prior operations to finish and in continuous read-intensive data environments, while dynamic allocation performs best for write performance and random data patterns.
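The trade-off between the two schemes can be sketched with a toy queueing model (hypothetical, not the simulator used in the study): static allocation fixes each logical page to a channel, while dynamic allocation routes each write to whichever channel frees up first.

```python
def finish_time(requests, n_channels, scheme):
    """Toy model: each request (arrival, logical_page, duration) occupies one
    channel. 'static' maps a page to a fixed channel; 'dynamic' picks the
    channel that becomes idle first."""
    free = [0.0] * n_channels          # time at which each channel is idle
    for arrival, page, dur in requests:
        if scheme == "static":
            ch = page % n_channels     # fixed page-to-channel mapping
        else:                          # dynamic: least-loaded channel
            ch = min(range(n_channels), key=lambda c: free[c])
        free[ch] = max(free[ch], arrival) + dur
    return max(free)

# A write burst whose pages all collide on one static channel (illustrative):
burst = [(0, 0, 1.0), (0, 4, 1.0), (0, 8, 1.0), (0, 12, 1.0)]
static_t = finish_time(burst, 4, "static")    # serialized on channel 0
dynamic_t = finish_time(burst, 4, "dynamic")  # spread over all 4 channels
```

Under this sketch the burst finishes four times faster with dynamic allocation, mirroring the write-performance result reported in the abstract.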

Keywords: dynamic allocation, NAND flash based SSD, SSD parallelism, static allocation

Procedia PDF Downloads 340
1718 Development of a Model Based on Wavelets and Matrices for the Treatment of Weakly Singular Partial Integro-Differential Equations

Authors: Somveer Singh, Vineet Kumar Singh

Abstract:

We present a new model based on viscoelasticity for non-Newtonian fluids. We use a matrix-formulated algorithm to approximate solutions of a class of partial integro-differential equations with given initial and boundary conditions. Some numerical results are presented to simplify the application of the operational matrix formulation and to reduce the computational cost. Convergence analysis, error estimation and numerical stability of the method are also investigated. Finally, some test examples are given to demonstrate the accuracy and efficiency of the proposed method.

Keywords: Legendre Wavelets, operational matrices, partial integro-differential equation, viscoelasticity

Procedia PDF Downloads 336
1717 Modeling and Simulation of Flow Shop Scheduling Problem through Petri Net Tools

Authors: Joselito Medina Marin, Norberto Hernández Romero, Juan Carlos Seck Tuoh Mora, Erick S. Martinez Gomez

Abstract:

The Flow Shop Scheduling Problem (FSSP) is a typical problem faced by production planning managers in Flexible Manufacturing Systems (FMS). This problem consists of finding the optimal schedule to carry out a set of jobs, which are processed on a set of machines or shared resources. Moreover, all the jobs are processed in the same machine sequence. As in all scheduling problems, the makespan can be obtained by drawing the Gantt chart according to the operations order, among other alternatives. In this way, an FMS presenting the FSSP can be modeled by Petri nets (PNs), which are a powerful tool that has been used to model and analyze discrete event systems. Then, the makespan can be obtained by simulating the PN through the token game animation and the incidence matrix. In this work, we present an adaptive PN to obtain the makespan of the FSSP by applying PN analytical tools.
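As a sketch of the quantity the token-game simulation recovers, the makespan of a permutation flow shop follows the standard completion-time recursion (the processing times below are illustrative, not data from the paper):

```python
def flow_shop_makespan(proc):
    """Completion-time recursion for a permutation flow shop:
    C[j][m] = max(C[j-1][m], C[j][m-1]) + p[j][m],
    where proc[j][m] is the time of job j on machine m."""
    n, m = len(proc), len(proc[0])
    C = [[0] * m for _ in range(n)]
    for j in range(n):
        for k in range(m):
            prev_job = C[j - 1][k] if j > 0 else 0   # same machine, previous job
            prev_mach = C[j][k - 1] if k > 0 else 0  # same job, previous machine
            C[j][k] = max(prev_job, prev_mach) + proc[j][k]
    return C[-1][-1]

# Three jobs processed in the same order on two machines:
jobs = [[3, 2], [1, 4], [2, 1]]
print(flow_shop_makespan(jobs))  # → 10
```

The PN-based approach in the abstract reaches the same value by firing transitions in the token game; the recursion above is simply its closed-form counterpart for a fixed job order.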

Keywords: flow-shop scheduling problem, makespan, Petri nets, state equation

Procedia PDF Downloads 298
1716 Hansen Solubility Parameter from Surface Measurements

Authors: Neveen AlQasas, Daniel Johnson

Abstract:

Membranes for water treatment are an established technology that attracts great attention due to its simplicity and cost effectiveness. However, membranes in operation suffer from the adverse effect of membrane fouling. Bio-fouling is a phenomenon that occurs at the water-membrane interface and is a dynamic process initiated by the adsorption of dissolved organic material, including biomacromolecules, on the membrane surface. After initiation, attachment of microorganisms occurs, followed by biofilm growth. The biofilm blocks the pores of the membrane and consequently reduces the water flux. Moreover, the presence of a fouling layer can have a substantial impact on the membrane separation properties. Understanding the mechanism of the initiation phase of biofouling is a key point in eliminating biofouling on membrane surfaces. The adhesion and attachment of different fouling materials is affected by the surface properties of the membrane materials. Therefore, the surface properties of different polymeric materials have been studied in terms of their surface energies and Hansen solubility parameters (HSP). The difference between the combined HSP parameters (the HSP distance) allows prediction of the affinity of two materials for each other. The possibilities of measuring the HSP of different polymer films via surface measurements, such as contact angle, have been thoroughly investigated. Knowing the HSP of a membrane material and the HSP of a specific foulant facilitates the estimation of the HSP distance between the two and, therefore, the strength of attachment to the surface. Contact angle measurements using fourteen different solvents on five different polymeric films were carried out using the sessile drop method. Solvents were ranked as good or bad using different ranking methods, and the ranking was used to calculate the HSP of each polymeric film. 
Results clearly indicate the absence of a direct relation between the contact angle values of each film and the HSP distance between each polymer film and the solvents used. Therefore, estimating HSP via contact angle alone is not sufficient. However, it was found that if the surface tensions and viscosities of the solvents are taken into account in the analysis of the contact angle values, a prediction of the HSP from contact angle measurements is possible. This was carried out via training of a neural network model. The trained neural network model has three inputs: the contact angle value, and the surface tension and viscosity of the solvent used. The model is able to predict the HSP distance between the solvent and the tested polymer (material). The HSP distance prediction is further used to estimate the total and individual HSP parameters of each tested material. The results showed an accuracy of about 90% for all five studied films.
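The HSP distance at the heart of the analysis can be sketched directly from the standard Hansen formula; the parameter values below are illustrative placeholders, not measured data from the study.

```python
import math

def hsp_distance(a, b):
    """Hansen solubility parameter distance Ra between two materials, each
    given as (dispersion, polar, hydrogen-bonding) components in MPa^0.5:
    Ra^2 = 4*(dD1-dD2)^2 + (dP1-dP2)^2 + (dH1-dH2)^2."""
    dD = a[0] - b[0]
    dP = a[1] - b[1]
    dH = a[2] - b[2]
    return math.sqrt(4 * dD**2 + dP**2 + dH**2)

# Illustrative HSP triples (hypothetical, for demonstration only):
polymer = (17.2, 12.5, 9.2)
solvent = (15.5, 16.0, 42.3)
ra = hsp_distance(polymer, solvent)  # large Ra => weak affinity
```

A small Ra indicates a strong affinity between foulant and membrane material, which is what makes the distance useful for predicting attachment strength.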

Keywords: surface characterization, hansen solubility parameter estimation, contact angle measurements, artificial neural network model, surface measurements

Procedia PDF Downloads 94
1715 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures

Authors: Rui Teixeira, Alan O’Connor, Maria Nogal

Abstract:

The statistical theory of extreme events is a topic of growing interest in all fields of science and engineering. The economic and environmental changes currently experienced by the world have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved with the Peak-Over-Threshold (POT) methodology. The POT methodology allows for a more refined fit of the statistical distribution by truncating the data at a predefined minimum threshold u. For mathematically approximating the tail of the empirical statistical distribution, the Generalised Pareto is widely used. However, in the case of exceedances of significant wave data (H_s), the 2-parameter Weibull and the Exponential distribution, which is a specific case of the Generalised Pareto distribution, are frequently used as alternatives. The Generalised Pareto, despite the existence of practical cases where it is applied, is not universally recognized as the adequate solution to model exceedances over a given threshold u. References that set the Generalised Pareto distribution as a secondary solution in the case of significant wave data can be identified in the literature. In this framework, the current study intends to tackle the discussion of the application of statistical models to characterize exceedances of wave data. Comparisons of the application of the Generalised Pareto, the 2-parameter Weibull and the Exponential distribution are presented for different values of the threshold u. 
Real wave data obtained from four buoys along the Irish coast were used in the comparative analysis. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully, and in each particular case one of the statistical models mentioned fits the data better than the others. Depending on the value of the threshold u, different results are obtained. Other variables of the fit, such as the number of points and the estimation of the model parameters, are analyzed and the respective conclusions drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, for the present case, a highly non-linear task and, due to its growing importance, should be addressed carefully for an efficient estimation of very low-occurrence events.
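A minimal sketch of the POT workflow, using the Exponential model (the shape = 0 special case of the Generalised Pareto, for which the maximum-likelihood fit has a closed form) and hypothetical wave heights rather than the Irish buoy data:

```python
import math

def pot_exceedances(data, u):
    """Peak-over-threshold: keep only the exceedances y = x - u for x > u."""
    return [x - u for x in data if x > u]

def fit_exponential(exc):
    """MLE scale of the Exponential model (the xi = 0 special case of the
    Generalised Pareto): sigma_hat is simply the mean exceedance."""
    return sum(exc) / len(exc)

def return_level(u, sigma, zeta_u, n_per_year, T):
    """T-year return level under the exponential tail model:
    x_T = u + sigma * ln(T * n_per_year * zeta_u),
    where zeta_u is the probability of exceeding the threshold."""
    return u + sigma * math.log(T * n_per_year * zeta_u)

# Hypothetical significant wave heights in metres (not the buoy records):
hs = [1.2, 2.8, 3.5, 4.1, 2.2, 5.0, 3.9, 4.6, 2.9, 6.1, 3.3, 4.8]
u = 3.0
exc = pot_exceedances(hs, u)
sigma = fit_exponential(exc)
x50 = return_level(u, sigma, len(exc) / len(hs), 2920, 50)  # 3 h sea states/yr
```

Refitting with different values of u, as the study does, shows how sensitive both the scale estimate and the return level are to the threshold choice.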

Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data

Procedia PDF Downloads 273
1714 Evaluation and Selection of SaaS Product Based on User Preferences

Authors: Boussoualim Nacira, Aklouf Youcef

Abstract:

Software as a Service (SaaS) is a software delivery paradigm in which the product is not installed on-premise but is available over the Internet and the Web. Customers do not pay to possess the software itself but rather to use it. This pay-per-use concept is very attractive. Hence, we see an increasing number of organizations adopting SaaS. However, each customer is unique, which leads to a very large variation in the requirements of the software. As several suppliers propose SaaS products, choosing among them becomes a major issue. When multiple criteria are involved in decision making, we talk about a problem of Multi-Criteria Decision-Making (MCDM). Therefore, this paper presents a method to help customers choose the SaaS product that best satisfies their conditions and alternatives. We also know that a good adaptive selection method should be based on a correct definition of the different parameters of choice. This is why we started by extracting and analyzing the various parameters involved in the selection of a SaaS application.
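One of the simplest MCDM aggregations, a weighted sum over normalised criterion scores, illustrates the kind of preference-based ranking described (the candidate products and weights below are hypothetical):

```python
def weighted_score(alternatives, weights):
    """Weighted-sum MCDM: each alternative maps criteria to scores in [0, 1];
    the weights encode user preferences and sum to 1."""
    return {name: sum(weights[c] * s for c, s in crit.items())
            for name, crit in alternatives.items()}

# Hypothetical SaaS candidates scored against user-preference weights:
weights = {"cost": 0.4, "security": 0.35, "usability": 0.25}
saas = {
    "A": {"cost": 0.9, "security": 0.6, "usability": 0.7},
    "B": {"cost": 0.5, "security": 0.9, "usability": 0.8},
}
scores = weighted_score(saas, weights)
best = max(scores, key=scores.get)  # cheapest-leaning user prefers "A"
```

More elaborate MCDM methods (AHP, TOPSIS, and the like) refine how the weights and scores are elicited, but the final ranking step has this same shape.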

Keywords: cloud computing, business operation, Multi-Criteria Decision-Making (MCDM), Software as a Service (SaaS)

Procedia PDF Downloads 483
1713 Outlier Detection in Stock Market Data using Tukey Method and Wavelet Transform

Authors: Sadam Alwadi

Abstract:

Outlier values are a problem that frequently occurs in the data observation or recording process. Thus, data imputation has become an essential matter. This work makes use of the methods described in prior work to detect outlier values in a collection of stock market data. In order to implement the detection and find solutions that may be helpful for investors, real closing price data were obtained from the Amman Stock Exchange (ASE). The Tukey method and the Maximum Overlapping Discrete Wavelet Transform (MODWT) will be used to detect and impute the outlier values.
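The Tukey step can be sketched as the usual inner-fence rule on the interquartile range (the prices below are a toy series, not ASE data):

```python
def tukey_outliers(series, k=1.5):
    """Flag values outside Tukey's fences [Q1 - k*IQR, Q3 + k*IQR]."""
    s = sorted(series)
    n = len(s)
    def quantile(q):
        # Linear interpolation between order statistics.
        pos = q * (n - 1)
        lo, hi = int(pos), min(int(pos) + 1, n - 1)
        return s[lo] + (s[hi] - s[lo]) * (pos - lo)
    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return [x for x in series if x < lower or x > upper]

prices = [3.10, 3.12, 3.11, 3.15, 3.09, 3.13, 9.99, 3.14]  # one spike
print(tukey_outliers(prices))  # → [9.99]
```

A flagged value would then be replaced by an imputed one, for which the paper uses the MODWT-based reconstruction.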

Keywords: outlier values, imputation, stock market data, detecting, estimation

Procedia PDF Downloads 81
1712 Markov-Chain-Based Optimal Filtering and Smoothing

Authors: Garry A. Einicke, Langford B. White

Abstract:

This paper describes an optimum filter and smoother for recovering a Markov process message from noisy measurements. The developments follow from an equivalence between a state space model and a hidden Markov chain. The ensuing filter and smoother employ transition probability matrices and approximate probability distribution vectors. The properties of the optimum solutions are retained, namely, the estimates are unbiased and minimize the variance of the output estimation error, provided that the assumed parameter set is correct. Methods for estimating unknown parameters from noisy measurements are discussed. Signal recovery examples are described in which performance benefits are demonstrated at an increased calculation cost.
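The filtering recursion that follows from the hidden-Markov-chain equivalence can be sketched with a two-state toy chain (the transition and emission matrices below are illustrative, not from the paper):

```python
def forward_filter(pi, A, B, observations):
    """Hidden-Markov-chain filter: recursively computes the normalised
    posterior P(state | observations so far).
    pi: initial distribution, A[i][j]: transition prob i -> j,
    B[i][k]: probability of observation symbol k in state i."""
    n = len(pi)
    alpha = [pi[i] * B[i][observations[0]] for i in range(n)]
    norm = sum(alpha)
    alpha = [a / norm for a in alpha]
    for obs in observations[1:]:
        # Predict through the transition matrix, then correct with the emission.
        predicted = [sum(alpha[i] * A[i][j] for i in range(n)) for j in range(n)]
        alpha = [predicted[j] * B[j][obs] for j in range(n)]
        norm = sum(alpha)
        alpha = [a / norm for a in alpha]
    return alpha

# Two-state chain observed through a noisy binary channel:
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.2, 0.8]]   # transition probability matrix
B = [[0.8, 0.2], [0.3, 0.7]]   # emission probabilities
posterior = forward_filter(pi, A, B, [0, 0, 1])
```

The smoother adds a backward pass over the same matrices; the filter above is the forward half of that computation.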

Keywords: optimal filtering, smoothing, Markov chains

Procedia PDF Downloads 317
1711 Digital Forgery Detection by Signal Noise Inconsistency

Authors: Bo Liu, Chi-Man Pun

Abstract:

A novel technique for digital forgery detection by signal noise inconsistency is proposed in this paper. A forged area spliced in from another picture contains features that may be inconsistent with the rest of the image. The noise pattern and level are possible factors that reveal such inconsistency. To detect such noise discrepancies, the test picture is initially segmented into small pieces. The noise pattern and level of each segment are then estimated using various filters. The noise features constructed in this step are utilized in an energy-based graph cut to expose the forged area in the final step. Experimental results show that our method provides a good illustration of regions with noise inconsistency in various scenarios.
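The per-segment noise estimation step can be sketched with a median-filter residual (a simple stand-in for the various filters mentioned; the toy image below splices a flat region next to a noisy one):

```python
def block_noise_levels(image, bs):
    """Estimate per-block noise as the mean absolute residual after a
    3x3 median filter; a spliced region often shows a deviating level."""
    h, w = len(image), len(image[0])
    def med3(y, x):
        vals = [image[j][i]
                for j in range(max(0, y - 1), min(h, y + 2))
                for i in range(max(0, x - 1), min(w, x + 2))]
        vals.sort()
        return vals[len(vals) // 2]
    levels = {}
    for by in range(0, h, bs):
        for bx in range(0, w, bs):
            resid = [abs(image[y][x] - med3(y, x))
                     for y in range(by, min(by + bs, h))
                     for x in range(bx, min(bx + bs, w))]
            levels[(by, bx)] = sum(resid) / len(resid)
    return levels

# 4x8 toy image: flat left half next to a "spliced" noisy right half.
flat = [[100] * 4 for _ in range(4)]
noisy = [[107, 95, 103, 94],
         [92, 110, 98, 106],
         [104, 91, 112, 97],
         [99, 108, 90, 105]]
image = [flat[r] + noisy[r] for r in range(4)]
levels = block_noise_levels(image, 4)  # right block stands out
```

In the method described above, per-segment levels like these become the node features of the energy-based graph cut that delineates the forged region.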

Keywords: forgery detection, splicing forgery, noise estimation, noise

Procedia PDF Downloads 461
1710 A Comparison of Smoothing Spline Method and Penalized Spline Regression Method Based on Nonparametric Regression Model

Authors: Autcha Araveeporn

Abstract:

This paper presents a study of a nonparametric regression model using a smoothing spline method and a penalized spline regression method. We also compare the techniques used for estimation and prediction under the nonparametric regression model. We tried both methods on crude oil prices in dollars per barrel and the Stock Exchange of Thailand (SET) index. According to the results, it is concluded that the smoothing spline method performs better than the penalized spline regression method.

Keywords: nonparametric regression model, penalized spline regression method, smoothing spline method, Stock Exchange of Thailand (SET)

Procedia PDF Downloads 440