Search results for: Squared Error (SE) loss function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9698


6908 A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem

Authors: Ouafa Amira, Jiangshe Zhang

Abstract:

Clustering is an unsupervised machine learning technique whose aim is to extract the structure of data, grouping similar data objects in the same cluster and dissimilar objects in different clusters. Clustering methods are widely used in fields such as image processing, computer vision, and pattern recognition. Fuzzy c-means (FCM) clustering is one of the best-known fuzzy clustering methods. It is based on solving an optimization problem in which a given cost function is minimized. This minimization aims to decrease the dissimilarity inside clusters, where dissimilarity is measured by the distances between data objects and cluster centers. The degree of belonging of a data point to a cluster is measured by a membership function taking values in the interval [0, 1]. In FCM clustering, the membership degrees are constrained so that the sum of a data object's memberships over all clusters equals one. This constraint can cause several problems, especially when the data are noisy. Regularization has therefore been incorporated into the fuzzy c-means technique; it introduces additional information in order to solve an ill-posed optimization problem. In this study, we focus on regularization by a relative entropy approach, where the optimization problem aims to minimize the dissimilarity inside clusters. Our objective is to find an appropriate membership degree for each data object, because appropriate membership degrees lead to an accurate clustering result. Our clustering results on synthetic data sets, Gaussian-based data sets, and real-world data sets show that the proposed model achieves good accuracy.
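The update rules of entropy-regularized fuzzy clustering can be sketched briefly. This is an illustrative implementation of the general entropy-regularization idea, not the authors' exact relative-entropy model: the FCM constraint (each data object's memberships sum to one) is kept, and the membership update becomes a softmax over negative squared distances scaled by the regularization weight gamma.

```python
import numpy as np

def entropy_regularized_fcm(X, n_clusters, gamma=1.0, n_iter=100):
    """Sketch of entropy-regularized fuzzy clustering (illustrative)."""
    # simple deterministic initialization: spread starting centers
    # evenly through the data set
    centers = X[:: max(1, len(X) // n_clusters)][:n_clusters].astype(float)
    for _ in range(n_iter):
        # squared Euclidean distance of every point to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        # membership update: softmax of -d2/gamma, so each row sums to one
        U = np.exp(-d2 / gamma)
        U /= U.sum(axis=1, keepdims=True)
        # center update: membership-weighted mean of the data
        centers = (U.T @ X) / U.sum(axis=0)[:, None]
    return U, centers
```

A larger gamma softens the memberships (more fuzziness); as gamma approaches zero the assignment approaches hard k-means.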

Keywords: clustering, fuzzy c-means, regularization, relative entropy

Procedia PDF Downloads 257
6907 Bio-Inspired Design Approach Analysis: A Case Study of Antoni Gaudi and Santiago Calatrava

Authors: Marzieh Imani

Abstract:

Antoni Gaudi and Santiago Calatrava have a reputation for designing creative, technically accomplished bio-inspired buildings. Even though they followed different, independent approaches to design, their source of bio-inspiration seems to be common. A closer look at their projects reveals that Calatrava was influenced by Gaudi in how he interpreted nature and applied natural principles to the design process. This research first discusses the dialogue between biomimicry and architecture. It also explores the human/nature discourse throughout history, focusing on how nature revealed itself to the fine arts; this is explained by introducing naturalism and the romantic style in architecture as outcomes of designers' inclination towards nature. The literature review covers both the theoretical background and practical illustrations of nature. The most dominant practical aspects of imitating nature are form and function. Nature's reflection in architectural science has shaped different architectural styles such as organic, green, sustainable, bionic, and biomorphic. By defining a set of common aspects of Gaudi's and Calatrava's design approaches and by considering biomimetic design categories (organism, ecosystem, and behaviour as the main divisions, and form, function, process, material, and construction as subdivisions), Gaudi's and Calatrava's projects have been analysed. This analysis explores whether their design approaches are equivalent or different. Based on it, Gaudi's architecture can be recognised as biomorphic, while Calatrava's projects are literally biomimetic. Referring to these architects, this review suggests a new set of principles by which a bio-inspired project can be classified as either biomorphic or biomimetic.

Keywords: biomimicry, Calatrava, Gaudi, nature

Procedia PDF Downloads 283
6906 Wireless Optic Last Mile Multi-Gbit/s Communication System

Authors: Manea Viorel, Puscoci Sorin, Stoichescu Dan Alexandru

Abstract:

Free Space Optics (FSO) is an optical telecommunication system that uses a laser beam to transmit data at high bit rates through the terrestrial atmosphere. This article describes a method of obtaining higher bit rates under unfavorable weather conditions by using multiple optical beams that carry information at low optical power. Optical link quality is assessed by the attenuation under different weather conditions. The goal of this paper is to compare two transmission techniques, mono-beam and multi-beam, both affected by atmospheric attenuation, using OOK and L-PPM modulation. Link availability is evaluated using the eye diagram, which provides information about the overall bit error rate of the system.
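The role of atmospheric attenuation in an FSO link budget can be illustrated with a minimal sketch (illustrative formulas and numbers, not the paper's model). Attenuation follows the Beer-Lambert law, conventionally expressed in dB/km, and splitting the transmit power over several beams leaves the total received power unchanged in this idealized picture; the benefit of multiple beams lies in spatial diversity.

```python
import math

def received_power_dbm(p_tx_dbm, atten_db_per_km, link_km, system_loss_db=0.0):
    """FSO link budget: transmit power minus atmospheric attenuation
    (dB/km, Beer-Lambert law) and fixed system losses."""
    return p_tx_dbm - atten_db_per_km * link_km - system_loss_db

def multibeam_rx_mw(p_tx_dbm, n_beams, atten_db_per_km, link_km):
    """Total received power (mW) when the transmit power is split
    equally over n_beams parallel beams through the same channel."""
    per_beam_dbm = p_tx_dbm - 10.0 * math.log10(n_beams)
    rx_dbm = received_power_dbm(per_beam_dbm, atten_db_per_km, link_km)
    return n_beams * 10.0 ** (rx_dbm / 10.0)
```

Typical illustrative attenuation values range from a few tenths of a dB/km in clear air to well over 100 dB/km in dense fog, which is why the unfavorable-weather case dominates link availability.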

Keywords: free space optics, wireless optic, laser communication, spatial diversity

Procedia PDF Downloads 502
6905 White-Rot Hymenomycetes as Oil Palm Log Treatments: Accelerating Biodegradation of Basal Stem Rot-Affected Oil Palm Stumps

Authors: Yuvarani Naidu, Yasmeen Siddiqui, Mohd Yusof Rafii, Abu Seman Idris

Abstract:

Sustainability of oil palm production in Southeast Asia, especially in Indonesia and Malaysia, is jeopardized by Ganoderma boninense, the fungus that causes basal stem rot (BSR) in oil palm. Root contact with unattended infected debris left in plantations during replanting is known to be the primary source of inoculum. Because of the zero-burning policy in Malaysian oil palm plantations, a lawful and potentially effective technique for managing Ganoderma-infected oil palm debris is deemed necessary. White-rot hymenomycetes antagonistic to Ganoderma sp. were selected to test their efficacy as log treatments in degrading Ganoderma-infected oil palm logs and minimizing the survival of Ganoderma inoculum. The decay rate, in terms of mass loss of infected log block tissues after 10 months of treatment, was significantly higher after the application of a solid-state cultivation (SSC) formulation of Trametes lactinea FBW (64% ± 1.2), followed by Pycnoporus sanguineus FBR (55% ± 1.7). The degradation pattern clearly distinguished log blocks treated with the developed SSC formulations from non-treated ones. After 8 months of treatment, the control infected log blocks showed the highest recovery of Ganoderma spp. on Ganoderma Selective Medium (GSM), whereas infected log blocks treated with either P. sanguineus FBR or T. lactinea FBW SSC formulations exhibited statistically the lowest recovery. Of these, the lowest recovery of Ganoderma spp. after 8 months was recorded in infected log blocks inoculated with the strain T. lactinea FBW (21% ± 0.9), followed by P. sanguineus FBR (33% ± 2.2). Furthermore, 10 months after treatment application, no recovery of Ganoderma was noticeable in log blocks treated with either formulation. This is the first nursery-based study to substantiate the initial colonization of white-rot hymenomycetes on oil palm log blocks previously infected with the BSR pathogen G. boninense.
The present study indicates that treating log blocks with white-rot hymenomycetes significantly affected the mass loss of diseased and healthy log block tissues. It provides a basis for biotechnological approaches to the efficient degradation of oil palm crop debris under natural conditions, with the ultimate aim of reducing Ganoderma inoculum under heavy BSR infection pressure in an eco-friendly manner.

Keywords: basal stem rot disease, ganoderma boninense, oil palm, white-rot fungi

Procedia PDF Downloads 205
6904 The Climate Change and Soil Degradation in the Czech Republic

Authors: Miroslav Dumbrovsky

Abstract:

The paper deals with the impacts of climate change, with the main emphasis on land degradation and on agriculture and forestry management in the landscape. Land degradation due to the adverse effects of farming activities, a result of inappropriate conventional technologies, was a major issue in the Czech Republic during the 20th century and remains to be solved in the 21st. Its importance is very high because of its impact on crop productivity and many other adverse effects. Land degradation through soil degradation causes losses in crop productivity and in environmental quality, by decreasing the quality of soil and water (especially water resources). Negative effects of conventional farming practices include increased water erosion as well as crusting and compaction of the topsoil and subsoil. Soil erosion caused by water destroys the soil's structure and reduces crop productivity through deterioration of physical and chemical soil properties such as infiltration rate and water-holding capacity, loss of nutrients needed for crop production, and loss of soil carbon. Water erosion occurs on fields with row crops (maize, sunflower), especially during the rainfall period from April to October. Recently, the greatly expanded production of biofuels and bioenergy from field crops has become a serious problem, accelerating soil degradation; the on-site and off-site damages are greater than the benefits. Effective soil conservation requires an appropriate, complex system of measures in the landscape. It is also important to continue developing new, sophisticated methods and technologies for decreasing land degradation. Whether soil conservation systems actually counter land degradation depends on the ability and willingness of land users to apply them: land degradation is not just a technical issue but also an economic and political one.
From a technical point of view, many positive steps have already been taken, but to solve the problem of land degradation successfully, it is necessary to develop suitable economic and political tools that increase the willingness and ability of land users to adopt conservation measures.

Keywords: land degradation, soil erosion, soil conservation, climate change

Procedia PDF Downloads 371
6903 Simplified Linearized Layering Method for Stress Intensity Factor Determination

Authors: Jeries J. Abou-Hanna, Bradley Storm

Abstract:

This paper seeks to reduce the complexity of determining stress intensity factors while maintaining high accuracy through a linearized layering approach. Many techniques for stress intensity factor determination exist, but they can be limited by overly conservative results, by requiring too many user parameters, or by being too computationally intensive. Multiple notch geometries with various crack lengths were investigated in this study to better understand the effectiveness of the proposed method. By linearizing the average stresses in radial layers around the crack tip, stress intensity factors were found with errors ranging from -10.03% to 8.94% when compared to analytically exact solutions. This approach proved to be a robust and efficient method of accurately determining stress intensity factors.
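The layering procedure itself is not detailed in the abstract, but the error metric it reports can be illustrated against the classical closed form for the mode-I stress intensity factor, K_I = Y·σ·√(πa), where Y is a geometry correction factor. The functions and values below are illustrative, not the paper's method.

```python
import math

def k1_analytical(sigma, a, Y=1.0):
    """Mode-I stress intensity factor for a crack of length a under
    remote stress sigma: K_I = Y * sigma * sqrt(pi * a)."""
    return Y * sigma * math.sqrt(math.pi * a)

def percent_error(k_numeric, k_exact):
    """Signed percent error of a numerically determined SIF against
    the analytically exact value (the -10.03%..8.94% range above)."""
    return 100.0 * (k_numeric - k_exact) / k_exact
```

For example, sigma = 100 MPa and a = 0.01 m give K_I of roughly 17.7 MPa·m^0.5 for Y = 1.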

Keywords: fracture mechanics, finite element method, stress intensity factor, stress linearization

Procedia PDF Downloads 138
6902 Validating Quantitative Stormwater Simulations in Edmonton Using MIKE URBAN

Authors: Mohamed Gaafar, Evan Davies

Abstract:

Many municipalities in Canada and abroad use chloramination to disinfect drinking water so as to avoid the disinfection by-products (DBPs) that result from conventional chlorination processes and their consequent public health risks. However, the long-lasting monochloramine disinfectant (NH2Cl) can pose a significant risk to the environment, as it can be introduced into stormwater sewers from different water uses and thus into freshwater sources. Little research has been undertaken to monitor and characterize the decay of NH2Cl and to study the parameters affecting its decomposition in stormwater networks. The current study was therefore intended to investigate this decay, starting by building a stormwater model and validating its hydraulic and hydrologic computations, and then modelling water quality in the storm sewers and examining the effects of different parameters on chloramine decay. The work presented here is only the first stage of this study. The 30th Avenue basin in southern Edmonton was chosen as a case study because this well-developed basin has various land-use types, including commercial, industrial, residential, parks, and recreational. The City of Edmonton had already built a MIKE URBAN stormwater model for flood modelling. Nevertheless, that model was built only to the trunk level, meaning that only the main drainage features were represented. Additionally, it had not been calibrated and was known to consistently compute pipe flows higher than observed values, which is unsuitable for studying water quality. So the first goal was to complete modelling and updating of all stormwater network components. Then, available GIS data were used to calculate catchment properties such as slope, length, and imperviousness.
To calibrate and validate this model, data from two temporary pipe flow monitoring stations, collected during the previous summer, were used along with records from two permanent stations available for eight consecutive summer seasons. The effect of various hydrological parameters on model results was investigated; results were found to be sensitive to the ratio of impervious areas. The catchment length was also tested, although as calculated it is only an approximate representation of the catchment shape, and surface roughness coefficients were calibrated. Consequently, computed flows at the two temporary locations had correlation coefficients of 0.846 and 0.815, where the lower value pertained to the larger attached catchment area. Other statistical measures, such as a peak error of 0.65%, a volume error of 5.6%, and maximum positive and negative differences of 2.17 and -1.63 respectively, were all within acceptable ranges.
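The goodness-of-fit measures quoted above (correlation coefficient, peak error, volume error) can be computed from paired observed and simulated flow series. These are illustrative definitions of the kind of statistic reported; the study's exact formulas may differ.

```python
import numpy as np

def validation_stats(observed, simulated):
    """Correlation coefficient, peak error (%), and volume error (%)
    between observed and simulated flow series (illustrative)."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    r = np.corrcoef(obs, sim)[0, 1]          # linear correlation
    peak_error = 100.0 * (sim.max() - obs.max()) / obs.max()
    volume_error = 100.0 * (sim.sum() - obs.sum()) / obs.sum()
    return r, peak_error, volume_error
```

A model that consistently over-computes pipe flows, as described above, shows up here as positive peak and volume errors even when the correlation coefficient is high.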

Keywords: stormwater, urban drainage, simulation, validation, MIKE URBAN

Procedia PDF Downloads 290
6901 The Effect of Oil Price Uncertainty on Food Price in South Africa

Authors: Goodness C. Aye

Abstract:

This paper examines the effect of oil price volatility on food price in South Africa using monthly data covering the period 2002:01 to 2014:09. Food price is measured by the South African consumer price index for food, while oil price is proxied by Brent crude oil. The study employs the GARCH-in-mean VAR model, which allows investigation of the effects of negative and positive shocks in oil price volatility on food price. The model also allows oil price uncertainty to be measured as the conditional standard deviation of the one-step-ahead forecast error of the change in oil price. The results show that oil price uncertainty has a positive and significant effect on food price in South Africa, and that the response of food price to positive and negative oil price shocks is asymmetric.
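The uncertainty measure described above is built from a GARCH conditional variance. As a minimal univariate sketch (not the paper's bivariate GARCH-in-mean VAR), the GARCH(1,1) recursion updates the conditional variance from the previous shock and the previous variance; its square root is the conditional standard deviation used as the uncertainty proxy.

```python
def garch11_variance(shocks, omega, alpha, beta, h0):
    """Conditional variance path of a GARCH(1,1) process:
    h_t = omega + alpha * e_{t-1}**2 + beta * h_{t-1}.
    `shocks` are the forecast errors e_t; h0 is the initial variance."""
    h = [float(h0)]
    for e in shocks[:-1]:
        h.append(omega + alpha * e ** 2 + beta * h[-1])
    return h
```

With alpha + beta < 1 the process is covariance-stationary, so uncertainty spikes after large shocks and decays back toward the long-run level omega / (1 - alpha - beta).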

Keywords: oil price volatility, food price, bivariate, GARCH-in-mean VAR, asymmetric

Procedia PDF Downloads 470
6900 Certain Results of a New Class of Meromorphic Multivalent Functions Involving Ruscheweyh Derivative

Authors: Kassim A. Jassim

Abstract:

In the present paper, we introduce and discuss a new class Kp(λ,α) of meromorphic multivalent functions in the punctured unit disk U* = {z ∈ ℂ : 0 < |z| < 1} defined by the Ruscheweyh derivative. We obtain some sufficient conditions for functions to belong to the class Kp(λ,α).
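For context, the classical Ruscheweyh derivative of an analytic function is defined through the Hadamard (convolution) product; the paper adapts this operator to meromorphic multivalent functions. A sketch of the classical definition only (not the paper's exact meromorphic analogue):

```latex
% Classical Ruscheweyh derivative of an analytic function f on the
% unit disk, defined via the Hadamard product (*):
D^{\lambda} f(z) \;=\; \frac{z}{(1-z)^{\lambda+1}} \ast f(z),
\qquad \lambda > -1,
% which for integer \lambda = n reduces to
D^{n} f(z) \;=\; \frac{z\,\bigl(z^{n-1} f(z)\bigr)^{(n)}}{n!}.
```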

Keywords: meromorphic multivalent function, Ruscheweyh derivative, Hadamard product

Procedia PDF Downloads 334
6899 Facial Recognition of University Entrance Exam Candidates Using FaceMatch Software in Iran

Authors: Mahshid Arabi

Abstract:

In recent years, remarkable advancements in the fields of artificial intelligence and machine learning have led to the development of facial recognition technologies. These technologies are now employed in a wide range of applications, including security, surveillance, healthcare, and education. In the field of education, the identification of university entrance exam candidates has been one of the fundamental challenges. Traditional methods such as using ID cards and handwritten signatures are not only inefficient and prone to fraud but also susceptible to errors. In this context, utilizing advanced technologies like facial recognition can be an effective and efficient solution to increase the accuracy and reliability of identity verification in entrance exams. This article examines the use of FaceMatch software for recognizing the faces of university entrance exam candidates in Iran. The main objective of this research is to evaluate the efficiency and accuracy of FaceMatch software in identifying university entrance exam candidates to prevent fraud and ensure the authenticity of individuals' identities. Additionally, this research investigates the advantages and challenges of using this technology in Iran's educational systems. This research was conducted using an experimental method and random sampling. In this study, 1000 university entrance exam candidates in Iran were selected as samples. The facial images of these candidates were processed and analyzed using FaceMatch software. The software's accuracy and efficiency were evaluated using various metrics, including accuracy rate, error rate, and processing time. The research results indicated that FaceMatch software could accurately identify candidates with a precision of 98.5%. The software's error rate was less than 1.5%, demonstrating its high efficiency in facial recognition. 
Additionally, the average processing time per candidate image was less than 2 seconds, indicating the software's high efficiency. Statistical evaluation of the results using tests including analysis of variance (ANOVA) and the t-test showed that the observed differences were significant and that the software's accuracy in identity verification is high. The findings of this research suggest that FaceMatch software can be used effectively as a tool for identifying university entrance exam candidates in Iran. This technology not only enhances security and prevents fraud but also simplifies and streamlines the exam administration process. However, challenges such as preserving candidates' privacy and the costs of implementation must also be considered. The use of facial recognition technology with FaceMatch software in Iran's educational systems can be an effective solution for preventing fraud and ensuring the authenticity of university entrance exam candidates' identities. Given the promising results of this research, it is recommended that this technology be implemented and utilized more widely in the country's educational systems.
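The three evaluation metrics named above (accuracy rate, error rate, processing time) reduce to simple ratios. A minimal sketch; the counts below are illustrative, chosen only to match the reported 98.5%/1.5% figures:

```python
def recognition_metrics(n_correct, n_total, total_time_s):
    """Accuracy rate (%), error rate (%), and mean per-candidate
    processing time (s) for a face recognition evaluation."""
    accuracy = 100.0 * n_correct / n_total
    error = 100.0 - accuracy
    mean_time = total_time_s / n_total
    return accuracy, error, mean_time
```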

Keywords: facial recognition, FaceMatch software, Iran, university entrance exam

Procedia PDF Downloads 36
6898 Application of Optimization Techniques in Overcurrent Relay Coordination: A Review

Authors: Syed Auon Raza, Tahir Mahmood, Syed Basit Ali Bukhari

Abstract:

In a power system, a properly coordinated protection scheme is designed to ensure that only the faulty part of the system is isolated when an abnormal operating condition occurs. The complexity of the system, as well as increased user demand and the deregulated environment, forces utilities to improve system reliability through a properly coordinated protection scheme. This paper presents an overview of overcurrent relay coordination techniques. Different techniques, such as deterministic techniques, meta-heuristic optimization techniques, hybrid optimization techniques, and trial-and-error optimization techniques, are reviewed in terms of their method of implementation, operation modes, the nature of the distribution system, and finally their advantages and disadvantages.
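The quantity that these coordination methods optimize is the relay operating time, which for a standard inverse-time overcurrent relay is a function of the Plug Setting Multiplier (PSM) and the time multiplier setting (TMS). A sketch using the well-known IEC standard-inverse characteristic (a real published curve, though the specific relays in any given study may use other curves):

```python
def iec_standard_inverse_time(psm, tms):
    """Operating time of an IEC standard-inverse overcurrent relay:
    t = TMS * 0.14 / (PSM**0.02 - 1), valid for PSM > 1.
    Coordination schemes typically minimize the sum of such times
    subject to grading-margin constraints between relay pairs."""
    if psm <= 1.0:
        raise ValueError("relay does not operate for PSM <= 1")
    return tms * 0.14 / (psm ** 0.02 - 1.0)
```

Higher fault current (larger PSM) gives a faster trip, which is the inverse-time behaviour the optimization techniques above exploit when grading upstream and downstream relays.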

Keywords: distribution system, relay coordination, optimization, Plug Setting Multiplier (PSM)

Procedia PDF Downloads 393
6897 Psychological Nano-Therapy: A New Method in Family Therapy

Authors: Siamak Samani, Nadereh Sohrabi

Abstract:

Psychological nano-therapy is a new method based on systems theory. According to the theory, systems with severe dysfunctions are resistant to change, and psychological nano-therapy helps therapists break this ice. Two key concepts in psychological nano-therapy are nano-functions and nano-behaviors, and the most important step in applying it to family therapy is selecting the most effective nano-function and nano-behavior. The aim of this study was to check the effectiveness of psychological nano-therapy for family therapy. A one-group pre-test–post-test (quasi-experimental) design was applied. The sample consisted of ten families with severe marital conflict, whose important shared characteristic was resistance to participating in family therapy. In this study, sending respectful (nano-function) text messages (nano-behavior) by cell phone was applied as the treatment. The cohesion/respect subscale from the self-report family processes scale and the family readiness for therapy scale were used to assess all family members at pre-test and post-test. One family member was asked to send a respectful text message to the other family members every day for a week; the content of the messages was selected and checked by the therapist. A paired-sample t-test was used to compare pre-test and post-test scores. The results showed significant differences in both the cohesion/respect score and family readiness for therapy between pre-test and post-test, revealing that these families had found a better atmosphere for participation in a complete family therapy program. Indeed, this study showed that psychological nano-therapy is an effective method of preparing families for therapy.

Keywords: family therapy, family conflicts, nano-therapy, family readiness

Procedia PDF Downloads 655
6896 Human TP53 Three-Dimensional (3D) Core Domain Hot Spot Mutations at Codons 36, 72 and 240 Are Associated with Oral Squamous Cell Carcinoma

Authors: Saima Saleem, Zubair Abbasi, Abdul Hameed, Mansoor Ahmed Khan, Navid Rashid Qureshi, Abid Azhar

Abstract:

Oral squamous cell carcinoma (OSCC) is a leading cause of death in developing countries like Pakistan. The problem is aggravated by the excessive use of available chewing products; in spite of widespread information on their dangers and purported legislation against their use, Pakistani markets remain classic examples of the open sale of chewable carcinogenic mutagens. Reported studies indicate that these products are rich in reactive oxygen species (ROS) and polyphenols. The TP53 gene is involved in tumor suppression, and somatic mutations of TP53 have been reported to underlie cancer. This study aims to find the loss of TP53 function due to mutation/polymorphism caused by genomic alteration and by interaction with tobacco and its related ingredients. In total, 260 tissue and blood specimens were collected from OSCC patients and compared with age- and sex-matched controls. Mutations in exons 2-11 of TP53 were examined by PCR-SSCP, and samples showing a mobility shift were sequenced directly. Two mutations were found in exon 4, at nucleotide positions 108 and 215, and one in exon 7 at nucleotide position 719 of the coding sequence in patients' tumor samples. These results correspond to the substitution of proline with arginine at codon 72 and of serine with threonine at codon 240 of the p53 protein. These polymorphic changes, found in tumor samples of OSCC, could be involved in loss of heterozygosity and of apoptotic activity in the binding domain of TP53. The model of the mutated TP53 gene showed a nonfunctional, unfolded p53 protein, suggesting an important role for these mutations in p53 protein inactivation and malfunction. This nonfunctional 3D model also indicates that exogenous tobacco-related carcinogens may act as DNA-damaging agents affecting the structure of DNA. These interpretations could help establish the pathways responsible for tumor formation in OSCC patients.

Keywords: TP53 mutation/polymorphism, OSCC, PCR-SSCP, direct DNA sequencing, 3D structure

Procedia PDF Downloads 364
6895 Identifying the Determinants of the Shariah Non-Compliance Risk via Principal Axis Factoring

Authors: Muhammad Arzim Naim, Saiful Azhar Rosly, Mohamad Sahari Nordin

Abstract:

The objective of this study is to investigate the factors behind the rise of Shariah non-compliance risk, which can expose Islamic banks to monetary loss. Prior literature has not analyzed this risk in detail, despite much of it arguing over the validity of certain Shariah-compliant products. Shariah non-compliance risk in this context refers to the potential failure of a facility to withstand a court test, for example if a bank brings it to court to seek compensation from defaulting clients. The risk may also arise if customers refuse to make financing payments on the grounds of the validity of the contracts, for example when a critical requirement of an Islamic contract, such as ownership, has been relinquished; the bank may then suffer loss when the customer invalidates the contract through the court. The impact of Shariah non-compliance risk on Islamic banks is similar to that of the legal risk faced by conventional banks: both result in monetary losses. In the conventional banking environment, losses can take the form of damages paid to customers who win their cases, often in very large amounts. For Islamic banks, however, the subsequent impact can be even greater because it affects their reputation: if customers do not perceive a bank to be Shariah-compliant, they will withdraw their money and bank elsewhere. This paper provides new insights into the risks faced by credit-intensive Islamic banks, extending knowledge of Shariah non-compliance risk by identifying the individual components that directly affect it, together with empirical evidence. Beyond the Islamic banking fraternity, regulators and policy makers should be able to use the findings in this paper to evaluate the components of Shariah non-compliance risk and take the necessary actions.
The paper is written based on Malaysia's Islamic banking practices, which may not be directly applicable to other jurisdictions. Even though the focus of this study is the Bay Bithaman Ajil, popularly known as BBA (i.e., sale with deferred payments), financing modality, the results may be applicable to other Islamic financing vehicles.

Keywords: Islamic banking, Islamic finance, Shariah Non-compliance risk, Bay Bithaman Ajil (BBA), principal axis factoring

Procedia PDF Downloads 294
6894 Design of Two-Channel Quadrature Mirror Filter Banks Using a Transformation Approach

Authors: Ju-Hong Lee, Yi-Lin Shieh

Abstract:

Two-dimensional (2-D) quadrature mirror filter (QMF) banks have been widely considered for high-quality coding of image and video data at low bit rates. To implement subband coding without magnitude distortion, a 2-D QMF bank is required to have an exactly linear-phase response, i.e., the perfect reconstruction (PR) characteristic. The design of 2-D QMF banks with the PR characteristic has been considered in the literature for many years. This paper presents a transformation approach for designing 2-D two-channel QMF banks. Under a suitable one-dimensional (1-D) to two-dimensional (2-D) transformation with a specified decimation/interpolation matrix, the analysis and synthesis filters of the QMF bank are composed of 1-D causal and stable digital allpass filters (DAFs) and possess the 2-D doubly complementary half-band (DC-HB) property. This reduces the design of the two-channel QMF bank to finding the real coefficients of the 1-D recursive DAFs. The design problem is formulated as a minimax phase approximation for the 1-D DAFs, and a novel objective function is derived for this approximation. As a result, minimizing the objective function can simply be solved using the well-known weighted least-squares (WLS) algorithm in the minimax (L∞) optimal sense. The novelty of the proposed design method is that the design procedure is very simple, and the designed 2-D QMF bank achieves perfect magnitude response and possesses a satisfactory phase response. Simulation results show that the proposed method provides much better design performance at much lower design complexity than existing techniques.

Keywords: Quincunx QMF bank, doubly complementary filter, digital allpass filter, WLS algorithm

Procedia PDF Downloads 222
6893 Forest Soil Greenhouse Gas Real-Time Analysis Using Quadrupole Mass Spectrometry

Authors: Timothy L. Porter, T. Randy Dillingham

Abstract:

Vegetation growth and decomposition, along with soil microbial activity play a complex role in the production of greenhouse gases originating in forest soils. The absorption or emission (respiration) of these gases is a function of many factors relating to the soils themselves, the plants, and the environment in which the plants are growing. For this study, we have constructed a battery-powered, portable field mass spectrometer for use in analyzing gases in the soils surrounding trees, plants, and other areas. We have used the instrument to sample in real-time the greenhouse gases carbon dioxide and methane in soils where plant life may be contributing to the production of gases such as methane. Gases such as isoprene, which may help correlate gas respiration to microbial activity have also been measured. The instrument is composed of a quadrupole mass spectrometer with part per billion or better sensitivity, coupled to battery-powered turbo and diaphragm pumps. A unique ambient air pressure differentially pumped intake apparatus allows for the real-time sampling of gases in the soils from the surface to several inches below the surface. Results show that this instrument is capable of instant, part-per-billion sensitivity measurement of carbon dioxide and methane in the near surface region of various forest soils. We have measured differences in soil respiration resulting from forest thinning, forest burning, and forest logging as compared to pristine, untouched forests. Further studies will include measurements of greenhouse gas respiration as a function of temperature, microbial activity as measured by isoprene production, and forest restoration after fire.

Keywords: forest, soil, greenhouse, quadrupole

Procedia PDF Downloads 113
6892 Geometric Optimisation of Piezoelectric Fan Arrays for Low Energy Cooling

Authors: Alastair Hales, Xi Jiang

Abstract:

Numerical methods are used to evaluate the operation of confined face-to-face piezoelectric fan arrays as pitch, P, between the blades is varied. Both in-phase and counter-phase oscillation are considered. A piezoelectric fan consists of a fan blade, which is clamped at one end, and an extremely low powered actuator. This drives the blade tip’s oscillation at its first natural frequency. Sufficient blade tip speed, created by the high oscillation frequency and amplitude, is required to induce vortices and downstream volume flow in the surrounding air. A single piezoelectric fan may provide the ideal solution for low powered hot spot cooling in an electronic device, but is unable to induce sufficient downstream airflow to replace a conventional air mover, such as a convection fan, in power electronics. Piezoelectric fan arrays, which are assemblies including multiple fan blades usually in face-to-face orientation, must be developed to widen the field of feasible applications for the technology. The potential energy saving is significant, with a 50% power demand reduction compared to convection fans even in an unoptimised state. A numerical model of a typical piezoelectric fan blade is derived and validated against experimental data. Numerical error is found to be 5.4% and 9.8% using two data comparison methods. The model is used to explore the variation of pitch as a function of amplitude, A, for a confined two-blade piezoelectric fan array in face-to-face orientation, with the blades oscillating both in-phase and counter-phase. It has been reported that in-phase oscillation is optimal for generating maximum downstream velocity and flow rate in unconfined conditions, due at least in part to the beneficial coupling between the adjacent blades that leads to an increased oscillation amplitude. The present model demonstrates that confinement has a significant detrimental effect on in-phase oscillation. 
Even at low pitch, counter-phase oscillation produces enhanced downstream air velocities and flow rates. Downstream air velocity from counter-phase oscillation can be maximally enhanced, relative to that generated by a single blade, by 17.7% at P = 8A; flow rate enhancement at the same pitch is found to be 18.6%. By comparison, in-phase oscillation at the same pitch produces 23.9% and 24.8% reductions in peak downstream air velocity and flow rate, relative to a single blade. This optimal pitch, consistent with those reported in the literature, suggests that counter-phase oscillation is less affected by confinement. The optimal pitch for generating bulk airflow from counter-phase oscillation is large, P > 16A, due to the small but significant downstream velocity across the span between adjacent blades. However, when designing for a confined space, counter-phase pitch should be minimised to maximise the bulk airflow generated from a given cross-sectional area within a channel flow application. Quantitative values deviate to a small degree as other geometric and operational parameters are varied, but the established relationships are maintained.

Keywords: piezoelectric fans, low energy cooling, power electronics, computational fluid dynamics

Procedia PDF Downloads 215
6891 Simulation of Ester Based Mud Performance through Drilling Genting Timur Field

Authors: Lina Ismail Jassim, Robiah Yunus

Abstract:

To drill an oil or gas well successfully, an efficient drilling fluid must, among its numerous other tasks, fulfil two main functions: suspending and carrying cuttings from the bottom of the wellbore to the surface, and maintaining the balance between the pore (formation) pressure and the hydrostatic (mud) pressure. Several factors, such as the mud composition and its rheology, the wellbore design, the characteristics of the drilled cuttings, and the drill string rotation, contribute to drilling a wellbore successfully. A simulation model can provide a useful indication of drilling fluid performance in a real field such as the Genting Timur field, located in Pahang, Malaysia, which at a depth of 4295 m held the world record in Sempah Muda 1 (vertical). A detailed three-dimensional CFD analysis of vertical, concentric annular two-phase flow was developed to study and assess a Herschel-Bulkley drilling fluid. The effects of hematite, barite, and calcium carbonate types and of the size of the rock cuttings on such flow are analyzed. The vertical flow is also associated with considerable temperature variation along the depth. This causes a substantial change in the viscosity of the fluid, which is non-Newtonian in nature. A good understanding of the nature of such flows is imperative in developing and maintaining successful vertical well systems. A detailed analysis of the flow characteristics due to drill pipe rotation is carried out in this work. The inner cylinder of the annulus is given different rotational speeds, depending upon the operating conditions. This rotation induces a strong swirl in the particles and the primary fluid, which reflects the well-cleaning ability of the ester-based drilling fluid and, in turn, determines the energy loss along the pipe. Energy loss is assessed in this work in terms of wall shear stress and pressure drop along the pipe. The flow is under an adverse pressure gradient condition, which creates the possibility of reversed flow and transports the rock cuttings to the surface.
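The rheology at the centre of the analysis can be made concrete. Below is a minimal sketch of the Herschel-Bulkley constitutive law used to describe such muds; the yield stress, consistency index, and flow behaviour index values are illustrative assumptions, not the study's fitted values.

```python
def hb_shear_stress(shear_rate, tau_y, k, n):
    """Herschel-Bulkley law: tau = tau_y + K * (shear rate)**n, shear rate > 0."""
    return tau_y + k * shear_rate ** n

def hb_apparent_viscosity(shear_rate, tau_y, k, n):
    """Apparent viscosity tau / (shear rate); decreases with shear rate for n < 1."""
    return hb_shear_stress(shear_rate, tau_y, k, n) / shear_rate

# Illustrative values: tau_y = 5 Pa, K = 0.8 Pa*s^n, n = 0.6 (shear-thinning)
low_rate = hb_apparent_viscosity(10.0, 5.0, 0.8, 0.6)
high_rate = hb_apparent_viscosity(500.0, 5.0, 0.8, 0.6)
assert low_rate > high_rate  # shear-thinning behaviour
```

With tau_y = 0 and n = 1 the law reduces to a Newtonian fluid, which is why the same framework can represent both the mud and simpler fluids as the temperature-dependent viscosity varies along the depth.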

Keywords: concentric annulus, non-Newtonian, two phase, Herschel Bulkley

Procedia PDF Downloads 304
6890 Vibration Absorption Strategy for Multi-Frequency Excitation

Authors: Der Chyan Lin

Abstract:

Since its early introduction by Ormondroyd and Den Hartog, the vibration absorber (VA) has become one of the most commonly used vibration mitigation strategies. The strategy is most effective for a primary plant subjected to a single-frequency excitation. For continuous systems, notable advances in vibration absorption under multi-frequency excitation have been made. However, the efficacy of the VA strategy for systems under multi-frequency excitation is not well understood. For example, for an N degrees-of-freedom (DOF) primary-absorber system, there are N 'peak' frequencies of large-amplitude vibration for every new excitation frequency. In general, the usable range for vibration absorption can be greatly reduced as a result. Frequency-modulated harmonic excitation is a commonly seen multi-frequency excitation example: f(t) = cos(ϖ(t)t), where ϖ(t) = ω(1 + α sin(δt)). It is known that f(t) has a series expansion given by the Bessel functions of the first kind, which implies an infinity of forcing frequencies in the frequency-modulated harmonic excitation. For an SDOF system of natural frequency ωₙ subjected to f(t), it can be shown that amplitude peaks emerge at ω₍ₚ,ₖ₎=(ωₙ ± 2kδ)/(α ∓ 1), k∈Z; i.e., there is an infinity of resonant frequencies ω₍ₚ,ₖ₎, k∈Z, making the VA strategy ineffective. In this work, we propose an absorber frequency placement strategy for SDOF vibration systems subjected to frequency-modulated excitation. An SDOF linear mass-spring system coupled to lateral absorber systems is used to demonstrate the ideas. Although the mechanical components are linear, the governing equations for the coupled system are nonlinear. We show, using N identical absorbers with N ≫ 1, that (a) there is a cluster of N+1 natural frequencies around every natural absorber frequency, and (b) the absorber frequencies can be moved away from the plant's resonance frequency (ω₀) as N increases. Moreover, we show that the bandwidth of the VA performance increases with N.
The derivations of the clustering and bandwidth-widening effects will be given, and the superiority of the proposed strategy will be demonstrated via numerical experiments.
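The Bessel-series structure invoked above can be checked numerically for the standard frequency-modulated form cos(ωt + β sin(δt)) with a constant modulation index β; note that the abstract's f(t) = cos(ϖ(t)t) has an effectively time-dependent index, so this is a simplified sketch, and the frequencies and index below are illustrative values, not the paper's.

```python
import math

def bessel_j(k, beta, n=4000):
    """Integer-order Bessel function of the first kind via its integral
    representation J_k(beta) = (1/pi) * int_0^pi cos(k*t - beta*sin(t)) dt,
    evaluated with the trapezoidal rule (valid for negative k as well)."""
    h = math.pi / n
    s = 0.5 * (math.cos(0.0) + math.cos(k * math.pi))  # endpoint terms
    for i in range(1, n):
        t = i * h
        s += math.cos(k * t - beta * math.sin(t))
    return s * h / math.pi

def fm_excitation(t, omega, beta, delta):
    """Frequency-modulated carrier cos(omega*t + beta*sin(delta*t))."""
    return math.cos(omega * t + beta * math.sin(delta * t))

def sideband_series(t, omega, beta, delta, kmax=25):
    """Truncated Jacobi-Anger expansion: sidebands at omega + k*delta,
    weighted by J_k(beta) -- the source of the infinitely many forcing
    frequencies referred to in the text."""
    return sum(bessel_j(k, beta) * math.cos((omega + k * delta) * t)
               for k in range(-kmax, kmax + 1))
```

Evaluating both forms at the same instant shows they agree closely, confirming that every sideband ω + kδ carries a J_k(β) share of the forcing.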

Keywords: Bessel function, bandwidth, frequency modulated excitation, vibration absorber

Procedia PDF Downloads 147
6889 Endothelial Dysfunction in Non-Alcoholic Fatty Liver Disease: An Updated Meta-Analysis

Authors: Anit S. Malhotra, Ajay Duseja, Neelam Chadha

Abstract:

Endothelial dysfunction is a precursor to atherosclerosis, and flow-mediated dilatation (FMD) of the brachial artery is the commonest method of evaluating endothelial function in humans. Non-alcoholic fatty liver disease (NAFLD) is one of the most common liver disorders encountered in clinical practice. An earlier meta-analysis had quantitatively assessed the degree of endothelial dysfunction using FMD. However, the largest study investigating the relation of FMD with NAFLD was published after that meta-analysis, which had also omitted some studies, including one from our centre. Therefore, updating the previous meta-analysis was considered important. We searched PubMed, the Cochrane Library, Embase, Scopus, SCI, Google Scholar, conference proceedings, and the references of included studies up to June 2017 to identify observational studies evaluating endothelial function using FMD in patients with non-alcoholic fatty liver disease. Data were analyzed using MedCalc. Fourteen studies were found eligible for inclusion in the meta-analysis. Patients with NAFLD had lower brachial artery FMD than controls, the standardized mean difference (random effects model) being –1.279%; 95% confidence interval (CI), –1.478 to –0.914. The effect size became smaller than in the earlier meta-analysis after the recent study with the largest sample size was added. In conclusion, patients with NAFLD had low FMD values, indicating that they are at higher risk of cardiovascular disease, although our results suggest the effect size is not as large as previously reported.
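The random-effects pooling of standardized mean differences reported above can be sketched with the DerSimonian-Laird estimator. The study used MedCalc, so this is only an illustration of the computation, and the per-study effects and variances below are invented, not the fourteen included studies.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled effect (DerSimonian-Laird): estimate the
    between-study variance tau^2 from Cochran's Q, then reweight each
    study by 1/(v_i + tau^2). Returns (pooled, 95% CI)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Invented example: four studies, SMDs with their sampling variances
smd, ci = dersimonian_laird([-1.3, -0.9, -1.5, -1.1], [0.05, 0.08, 0.06, 0.04])
```

Adding one large study (small variance) pulls the pooled estimate toward it, which is the mechanism by which the newly published largest study shrank the effect size here.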

Keywords: endothelial dysfunction, flow-mediated dilatation, meta-analysis, non-alcoholic fatty liver disease

Procedia PDF Downloads 189
6888 Fault Diagnosis in Induction Motor

Authors: Kirti Gosavi, Anita Bhole

Abstract:

The paper demonstrates the simulation and steady-state performance of a three-phase squirrel-cage induction motor and the detection of a broken rotor bar fault using MATLAB. This simulation model is successfully used in the detection of broken rotor bar faults in induction machines. A dynamic model using a PWM inverter and a mathematical model of the motor are developed. The dynamic simulation of a small-power induction motor is one of the key steps in the validation of the design process of the motor drive system; it is needed to eliminate inadvertent design errors and the resulting errors in prototype construction and testing. The simulation model will be helpful in detecting faults in three-phase induction motors using motor current signature analysis (MCSA).
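The signature that motor current signature analysis looks for in the broken-bar case is a pair of sidebands around the supply frequency. A minimal sketch of the standard relations follows; the 50 Hz supply and 1440 rpm rotor speed in the example are illustrative assumptions, not the paper's machine.

```python
def slip(sync_rpm, rotor_rpm):
    """Per-unit slip of an induction motor."""
    return (sync_rpm - rotor_rpm) / sync_rpm

def broken_bar_sidebands(supply_hz, s, kmax=2):
    """Characteristic broken-rotor-bar sideband frequencies in the stator
    current spectrum: f_sb = (1 +/- 2*k*s) * f_supply, for k = 1..kmax."""
    bands = []
    for k in range(1, kmax + 1):
        bands.append((1.0 - 2.0 * k * s) * supply_hz)
        bands.append((1.0 + 2.0 * k * s) * supply_hz)
    return sorted(bands)

# Illustrative 4-pole machine on a 50 Hz supply running at 1440 rpm
s = slip(1500, 1440)                             # per-unit slip
bands = broken_bar_sidebands(50.0, s, kmax=1)    # sidebands near 46 and 54 Hz
```

In practice these components are located by an FFT of the measured stator current; their amplitude relative to the supply component indicates fault severity.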

Keywords: squirrel cage induction motor, pulse width modulation (PWM), fault diagnosis, induction motor

Procedia PDF Downloads 627
6887 Flexible Programmable Circuit Board Electromagnetic 1-D Scanning Micro-Mirror Laser Rangefinder by Active Triangulation

Authors: Vixen Joshua Tan, Siyuan He

Abstract:

Scanners have been implemented within single-point laser rangefinders to determine ranges within an environment by sweeping the laser spot across the surface of interest. The research motivation is to exploit a smaller and cheaper alternative scanning component for the emitting portion of current laser rangefinder designs. This research implements an FPCB (flexible programmable circuit board) electromagnetic 1-dimensional scanning micro-mirror as a scanning component for laser rangefinding by means of triangulation. The prototype uses a laser module, a micro-mirror, and a receiver. The laser module is infrared (850 nm) with a power output of 4.5 mW. The receiver consists of a 50 mm convex lens and a 45 mm 1-dimensional PSD (position sensitive detector) placed at the focal length of the lens (50 mm). The scanning component is an elliptical micro-mirror attached to an FPCB structure. The FPCB structure has two miniature magnets placed symmetrically underneath it on either side, which are electromagnetically actuated by small solenoids, causing the FPCB to rotate mechanically about its torsion beams. The laser module projects a laser spot onto the micro-mirror surface, producing a scanning motion of the laser spot during the rotational actuation of the FPCB. The receiver is placed at a fixed distance from the micro-mirror scanner and is oriented to capture the scanning motion of the laser spot during operation. The elliptical aperture dimensions of the micro-mirror are 8 mm by 5.5 mm. The micro-mirror is supported by an FPCB with two torsion beams with dimensions of 4 mm by 0.5 mm; the overall length of the FPCB is 23 mm. The voltage supplied to the solenoids is sinusoidal, with amplitudes of 3.5 volts and 4.5 volts to achieve optical scanning angles of +/- 10 and +/- 17 degrees, respectively. The operating scanning frequency during the experiments was 5 Hz.
For an optical angle of +/- 10 degrees, the prototype is capable of detecting objects at ranges from 0.3-1.2 meters with an error of less than 15%. For an optical angle of +/- 17 degrees, the measuring range was 0.3-0.7 meters with an error of 16% or less. The discrepancy between the measured and actual distances is possibly caused by misalignment of the components during the experiments. Furthermore, the power of the laser spot collected by the receiver gradually decreased as the object was placed further from the sensor. A higher-powered laser will be tested to potentially measure further distances more accurately. Moreover, a wide-angle lens will be used in future experiments when higher scanning angles are used. Modulation of the current and future higher-powered lasers will be implemented to enable operation of the laser rangefinder prototype without the use of safety goggles.
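The range recovery underlying the prototype can be sketched for the simplest parallel-axis geometry: a target at distance d displaces the imaged spot on the PSD by x = f·b/d, so d = f·b/x. The 50 mm focal length matches the receiver described above, but the 0.1 m baseline is an assumed value, and the prototype's scanning mirror adds angle-dependent terms this sketch omits.

```python
def spot_offset(distance_m, baseline_m, focal_m):
    """Spot displacement on the PSD for a target at the given distance
    (parallel-axis active triangulation: x = f*b/d)."""
    return focal_m * baseline_m / distance_m

def triangulated_range(offset_m, baseline_m, focal_m):
    """Invert the triangulation relation: d = f*b/x."""
    return focal_m * baseline_m / offset_m

F, B = 0.050, 0.10           # 50 mm lens focal length; assumed 100 mm baseline
x = spot_offset(1.0, B, F)   # spot offset for a target at 1 m: 5 mm
```

The 1/d dependence explains the shrinking accuracy at longer ranges: distant targets move the spot by ever-smaller amounts, so PSD resolution and collected spot power set the maximum usable range.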

Keywords: FPCB electromagnetic 1-D scanning micro-mirror, laser rangefinder, position sensitive detector, PSD, triangulation

Procedia PDF Downloads 133
6886 Carbendazim Toxicity and Ameliorative Effect of Vitamin E in African Giant Rats

Authors: A. O. Omonona, T. A. Jarikre

Abstract:

Increased specialization in agriculture and the use of pesticides may inadvertently cause ecosystem degradation and, eventually, loss of biodiversity. The populations of numerous wildlife species have undergone a precipitous decline, and many of these problems have been attributed directly to habitat loss and overexploitation resulting from unregulated pesticide use. Carbendazim, a broad-spectrum benzimidazole fungicide and a metabolite of benomyl, is used to control plant disease in cereals and fruit. The effect of carbendazim exposure and the ameliorative effect of tocopherol (vitamin E) were assessed in the African giant rat (AGR). Hematological, biochemical, and histological changes were used to determine the health condition of the animals exposed to the pesticide. Sixteen AGRs were stabilized, weighed, and then divided into four experimental groups (A to D), two of which were pretreated with the vitamin. Group A was exposed to carbendazim only, group B to carbendazim + vitamin, group C to vitamin only, and group D served as the untreated control. Packed cell volume (PCV) was estimated by the microhematocrit method, and leucocyte and platelet counts were determined using the hemocytometric method. Acetylcholinesterase (AchE) and markers of oxidative stress were quantified, and tissue changes were examined microscopically. No behavioral changes were observed in the animals, but there was a decrease in body weight and abortion after 23 days of exposure to carbendazim. There were significant differences in the packed cell volume, hemoglobin concentration, and red blood cell counts (p < 0.05). The increase in malondialdehyde (MDA) was significant (p < 0.05) in the pesticide-intoxicated rats compared to the control, and vitamin E supplementation reduced the MDA level significantly (p < 0.05). There was a marked decrease in acetylcholinesterase levels in the pesticide-intoxicated rats (p < 0.05), and vitamin E supplementation normalised the AchE levels to values comparable to the control.
Grossly, the vital organs appeared normal in both the pesticide-exposed and control groups, except for moderate pulmonary congestion. Microscopically, there was severe diffuse hepatocellular swelling in the carbendazim-exposed group, and the severity of the hepatocellular injury was reduced in the rats supplemented with vitamin E. This study ascertained the toxic effect of carbendazim and the antioxidative properties of vitamin E in the African giant rat.

Keywords: African giant rat, antioxidant, carbendazim, pesticides, toxicity

Procedia PDF Downloads 358
6885 Bulk-Density and Lignocellulose Composition: Influence of Changing Lignocellulosic Composition on Bulk-Density during Anaerobic Digestion and Implication of Compacted Lignocellulose Bed on Mass Transfer

Authors: Aastha Paliwal, H. N. Chanakya, S. Dasappa

Abstract:

Lignocellulose, as an alternative feedstock for biogas production, has been an active area of research. However, lignocellulose poses many operational difficulties: widespread variation in the structural organization of the lignocellulosic matrix, limited amenability to degradation, and low bulk density, to name a few. Among these, the low bulk density of the lignocellulosic feedstock is crucial to process operation and optimization. Low bulk densities cause the feedstock to float in conventional liquid/wet digesters. They also restrict the maximum achievable organic loading rate (OLR) in the reactor, decreasing the power density of the reactor. During digestion, however, lignocellulose undergoes very high compaction (up to 26 times the feeding density). The low feeding density first reduces the achievable OLR, and the compaction during digestion then leaves the reactor space underutilized and imposes significant mass transfer limitations. The objective of this paper was to understand the effects of compacting lignocellulose on mass transfer and the influence of the loss of different components on the bulk density, and hence the structural integrity, of the digesting lignocellulosic feedstock. Ten different lignocellulosic feedstocks (monocots and dicots) were digested anaerobically in a fed-batch leach-bed reactor, the solid-state stratified bed reactor (SSBR). Percolation rates of the recycled bio-digester liquid (BDL) were also measured during the reactor run period to understand the implications of compaction for mass transfer. After 95 days, in a destructive sampling, the lignocellulosic feedstocks digested at different SRTs were investigated to quantify the weekly changes in bulk density and lignocellulosic composition. The percolation rate data were then compared to the bulk density data.
Results from the study indicate that the losses of hemicellulose (r²=0.76), hot water extractives (r²=0.68), and oxalate extractives (r²=0.64) had the dominant influence on changing the structural integrity of the studied lignocellulose during anaerobic digestion. Further, the feeding bulk density of the lignocellulose can be maintained between 300-400 kg/m³ to achieve a higher OLR, while a bulk density of 440-500 kg/m³ incurs significant mass transfer limitation for highly compacting beds of dicots.
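The r² values above are coefficients of determination from correlating component loss against the change in bed structure. A generic sketch of the computation follows; the data arrays are placeholders, not the measured weekly values.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples;
    r**2 is the coefficient of determination reported in the text."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Placeholder data: fraction of hemicellulose lost vs. bulk density gain (kg/m^3)
hemi_loss = [0.05, 0.12, 0.20, 0.31, 0.38]
density_gain = [40.0, 95.0, 170.0, 240.0, 310.0]
r2 = pearson_r(hemi_loss, density_gain) ** 2
```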

Keywords: anaerobic digestion, bulk density, feed compaction, lignocellulose, lignocellulosic matrix, cellulose, hemicellulose, lignin, extractives, mass transfer

Procedia PDF Downloads 162
6884 Role of Internal and External Factors in Preventing Risky Sexual Behavior, Drug and Alcohol Abuse

Authors: Veronika Sharok

Abstract:

The relevance of research on the psychological determinants of risky behaviors stems from the high prevalence of such behaviors, particularly among youth. Risky sexual behavior, including unprotected and casual sex and frequent change of sexual partners, together with drug and alcohol use, leads to negative social consequences and contributes to the spread of HIV infection and other sexually transmitted diseases. Data were obtained from 302 respondents aged 15-35, who were divided into three empirical groups: persons prone to risky sexual behavior, drug users, and alcohol users; and three control groups: individuals who are not prone to risky sexual behavior, persons who do not use drugs, and respondents who do not use alcohol. For processing, we used a qualitative method for nominative data (chi-squared test) and quantitative methods for metric data (Student's t-test, Fisher's F-test, Pearson's r correlation test). Statistical processing was performed using Statistica 6.0 software. The study identifies two groups of factors that prevent risky behaviors. Internal factors include moral and value attitudes; the significance of existential values such as love, life, self-actualization, and the search for the meaning of life; an understanding of independence as responsibility for one's freedom and the ability to become attached to someone or something up to the point where the relationship starts restricting that freedom and becomes vital; awareness of risky behaviors as dangerous to oneself and others; and self-acknowledgement. External factors (which prevent risky behaviors in the absence of internal ones) include the absence of risky behaviors among friends and relatives; socio-demographic characteristics (middle class, marital status); awareness of the negative consequences of risky behaviors; and inaccessibility of psychoactive substances. These factors are common to proneness to each type of risky behavior, because such proneness is usually caused by the same reasons.
It should be noted that prevention of risky behavior based only on the elimination of external factors is not as effective as it could be if more attention were paid to internal factors. The results obtained in the study can be used to develop training programs and activities for the prevention of risky behaviors that draw on the values which prevent such behaviors and promote a healthy lifestyle.
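The chi-squared test named in the methods compares observed group counts against the counts expected under independence. A minimal sketch of the statistic for a contingency table follows; the counts are invented, not the study's data.

```python
def chi_squared_statistic(table):
    """Pearson chi-squared statistic for an r x c contingency table:
    sum over cells of (observed - expected)^2 / expected, where
    expected = row_total * col_total / grand_total."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Invented 2x2 table: risky-behavior group vs. presence of a protective factor
stat = chi_squared_statistic([[30, 10], [10, 30]])
```

The statistic is then compared against the chi-squared distribution with (r-1)(c-1) degrees of freedom to judge significance.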

Keywords: existential values, prevention, psychological features, risky behavior

Procedia PDF Downloads 252
6883 A Semi-Markov Chain-Based Model for the Prediction of Deterioration of Concrete Bridges in Quebec

Authors: Eslam Mohammed Abdelkader, Mohamed Marzouk, Tarek Zayed

Abstract:

Infrastructure systems are crucial to every aspect of life on Earth. Existing infrastructure is subject to degradation, while the demands on infrastructure systems are growing in response to high standards of safety and health, population growth, and environmental protection. Bridges play a crucial role in urban transportation networks, and they are subjected to a high level of deterioration because of variable traffic loading, extreme weather conditions, cycles of freezing and thawing, etc. The development of bridge management systems (BMSs) has become a fundamental imperative, especially in large transportation networks, due to the huge gap between the need for maintenance actions and the funds available to perform such actions. Deterioration models are a very important component of the effective use of BMSs. This paper presents a probabilistic time-based model that is capable of predicting the condition ratings of concrete bridge decks throughout their service life. The deterioration process of the concrete bridge decks is modeled using a semi-Markov process. One of the main challenges of Markov chain models is the construction of the transition probability matrix; the proposed model overcomes this issue by modeling the sojourn times with probability density functions. The sojourn times of each condition state are fitted to probability density functions based on goodness-of-fit tests such as the Kolmogorov-Smirnov, Anderson-Darling, and chi-squared tests. The parameters of the probability density functions are obtained using maximum likelihood estimation (MLE). The condition ratings obtained from the Ministry of Transportation in Quebec (MTQ) are utilized as a database to construct the deterioration model. Finally, a comparison is conducted between the Markov chain and the semi-Markov chain to select the most feasible prediction model.
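The sojourn-time fitting step can be sketched for the simplest candidate density: an exponential sojourn model, whose MLE is closed-form, checked with the one-sample Kolmogorov-Smirnov statistic. The sample durations below are invented, not MTQ data, and the paper fits several candidate densities using Anderson-Darling and chi-squared tests as well.

```python
import math

def exponential_rate_mle(sojourn_times):
    """MLE of the exponential rate for observed sojourn times: n / sum(t_i)."""
    return len(sojourn_times) / sum(sojourn_times)

def ks_statistic_exponential(sojourn_times, rate):
    """One-sample Kolmogorov-Smirnov statistic against Exp(rate):
    D = sup |F_empirical - F_model|, checked on both sides of each step."""
    xs = sorted(sojourn_times)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = 1.0 - math.exp(-rate * x)
        d = max(d, abs((i + 1) / n - cdf), abs(i / n - cdf))
    return d

# Invented sojourn times (years spent in one condition state)
times = [0.2, 0.5, 1.0, 2.0, 4.0]
rate = exponential_rate_mle(times)
d_stat = ks_statistic_exponential(times, rate)
```

Repeating this over candidate densities and keeping the best-fitting one per condition state is what turns the Markov chain into a semi-Markov model.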

Keywords: bridge management system, bridge decks, deterioration model, Semi-Markov chain, sojourn times, maximum likelihood estimation

Procedia PDF Downloads 205
6882 Function Study of IrMYB55 in Regulating Synthesis of Terpenoids in Isodon Rubescens

Authors: Qingfang Guo

Abstract:

Isodon rubescens is rich in a variety of terpenes, such as oridonin, and has important medicinal value. MYB transcription factors are involved in the regulation of plant secondary metabolic pathways. A combined transcriptomic and metabolomic analysis revealed that IrMYB55 might be involved in the regulation of terpene synthesis. The function of IrMYB55 was further verified by establishing a genetic transformation system based on CRISPR/Cas9 and by obtaining virus-mediated gene-silencing material of Isodon rubescens. The main research results are as follows: (1) Screening for IrMYBs that can regulate terpene synthesis. Metabolomic and transcriptomic analyses of materials from high (TJ)- and low (FL)-content populations revealed significant differences in terpene content and IrMYB55 expression, and correlation analysis showed that the expression level of IrMYB55 correlated significantly with terpene content. (2) Establishment of a genetic transformation system for Isodon rubescens. The IrPDS gene could be knocked out by injection of Isodon rubescens cotyledons, and the transformed material showed an obvious albino phenotype; IrMYB55 transformation material was subsequently obtained by this method. (3) IrMYB55-silenced material was obtained. Subcellular localization indicated that IrMYB55 is located in the nucleus, suggesting that it may regulate the synthesis of terpenoids through transcription. In summary, IrMYB55, which may regulate the synthesis of oridonin, was identified from the transcriptome and metabolome data. In this study, a genetic transformation system for Isodon rubescens was successfully established. Further studies showed that IrMYB55 regulates the transcription level of genes related to terpenoid synthesis, thereby promoting the accumulation of oridonin.

Keywords: isodon rubescens, MYB, oridonin, CRISPR/Cas9

Procedia PDF Downloads 17
6881 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To contribute to this search, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of the crystals by area and perimeter. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set.
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds despite significant changes in image acquisition quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared; this will minimize crystal overlap during SEM image acquisition and guarantee a lower measurement error without greater data-handling effort. All in all, the method developed is a significant time saver with high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
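The measurement stage downstream of the U-net can be sketched on a binary segmentation mask: label each connected region as one crystal, then count its pixels. This is a toy sketch of 4-connected component labeling; the real pipeline works on SEM-scale masks and also extracts perimeter and lateral measures, and the function names are ours, not the authors'.

```python
def label_components(mask):
    """4-connected component labeling of a binary mask (list of 0/1 rows).
    Returns one list of (row, col) pixels per detected crystal."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    components = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                stack, comp = [(r, c)], []
                seen[r][c] = True
                while stack:  # iterative flood fill
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                components.append(comp)
    return components

def areas(mask, scale_per_px=1.0):
    """Pixel-count area of each crystal, converted by the image scale."""
    return [len(c) * scale_per_px for c in label_components(mask)]
```

Histogramming these areas (and the analogous perimeters) yields the frequency distributions the methodology reports per sample.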

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 157
6880 Optical Fiber Data Throughput in a Quantum Communication System

Authors: Arash Kosari, Ali Araghi

Abstract:

A mathematical model of an optical-fiber communication channel is developed, resulting in an expression for the throughput and loss of the corresponding link. The data are assumed to be transmitted using separate photons with different polarizations. The derived model also shows the dependence of data throughput on the length of the channel and the depolarization factor. It is observed that the absorption of photons affects the throughput more strongly than depolarization does. In addition, the probability of depolarization and the absorption of radiated photons are obtained.
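The qualitative behaviour described (throughput falling with length, absorption dominating depolarization) can be sketched with a simple link-budget model. The functional form and the 0.2 dB/km loss figure are illustrative assumptions, not the paper's derived expression.

```python
def survival_fraction(length_km, loss_db_per_km):
    """Fraction of photons surviving fiber absorption over a given length
    (Beer-Lambert attenuation expressed in dB units)."""
    return 10.0 ** (-loss_db_per_km * length_km / 10.0)

def throughput(rate_hz, length_km, loss_db_per_km, p_depol):
    """Sketch of usable data throughput for a polarization-encoded link:
    absorbed photons are lost outright, while depolarized photons are
    assumed to be detected but discarded."""
    return rate_hz * survival_fraction(length_km, loss_db_per_km) * (1.0 - p_depol)
```

Depolarization enters here only as a constant multiplicative penalty, while absorption compounds exponentially with length, consistent with the observation that absorption dominates.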

Keywords: absorption, data throughput, depolarization, optical fiber

Procedia PDF Downloads 284
6879 Investigating the Process Kinetics and Nitrogen Gas Production in Anammox Hybrid Reactor with Special Emphasis on the Role of Filter Media

Authors: Swati Tomar, Sunil Kumar Gupta

Abstract:

Anammox is a novel and promising technology that has changed the traditional concept of biological nitrogen removal. The process facilitates the direct oxidation of ammoniacal nitrogen under anaerobic conditions, with nitrite as the electron acceptor, without the addition of external carbon sources. The present study investigated the feasibility of an anammox hybrid reactor (AHR) combining the dual advantages of suspended and attached growth media for the biodegradation of ammoniacal nitrogen in wastewater. The experimental unit consisted of four 5 L AHRs inoculated with a mixed seed culture containing anoxic and activated sludge (1:1). The process was established by feeding the reactors with synthetic wastewater containing NH4-N and NO2-N in the ratio 1:1 at a hydraulic retention time (HRT) of 1 day. The reactors were gradually acclimated to higher ammonium concentrations until they attained pseudo-steady-state removal at a total nitrogen concentration of 1200 mg/l. During this period, the performance of the AHR was monitored at twelve different HRTs varying from 0.25-3.0 d, with the nitrogen loading rate (NLR) increasing from 0.4 to 4.8 kg N/m³·d. The AHR demonstrated significantly higher nitrogen removal (95.1%) at the optimal HRT of 1 day. The filter media in the AHR contributed an additional 27.2% ammonium removal along with a 72% reduction in the sludge washout rate. This may be attributed to the functional mechanism of the filter media, which acts as a mechanical sieve and reduces the sludge washout rate manyfold. This enhanced the biomass retention capacity of the reactor by 25%, which is the key parameter for the successful operation of high-rate bioreactors. The effluent nitrate concentration, which is one of the bottlenecks of the anammox process, was also minimised significantly (42.3-52.3 mg/L). Process kinetics was evaluated using first-order and Grau second-order models. The first-order substrate removal rate constant was found to be 13.0 d⁻¹.
Model validation revealed that the Grau second-order model was more precise and predicted the effluent nitrogen concentration with the least error (1.84±10%). A new mathematical model based on mass balance was developed to predict N2 gas production in the AHR. The mass balance model derived from total nitrogen showed a significantly higher correlation (R²=0.986) and predicted N2 gas with the least error of precision (0.12±8.49%). An SEM study of the biomass indicated the presence of a heterogeneous population of cocci and rod-shaped bacteria with average diameters varying from 1.2-1.5 µm. Owing to the enhanced nitrogen removal efficiency, coupled with the meagre production of effluent nitrate and its ability to retain high biomass, the AHR proved to be a highly competitive reactor configuration for dealing with nitrogen-laden wastewater.
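The two kinetic models compared above can be sketched in code. The first-order form Se = Si·exp(-k₁·HRT) and the linearized Grau second-order form Si·HRT/(Si - Se) = a + b·HRT are common textbook forms; the exact expressions used in the study and the Grau constants a and b below are assumptions, and only k₁ = 13.0 d⁻¹ and Si = 1200 mg/l come from the text.

```python
import math

def first_order_effluent(s_in, k1, hrt_d):
    """First-order substrate removal: Se = Si * exp(-k1 * HRT).
    One common form of the model; k1 = 13.0 d^-1 is the reported constant."""
    return s_in * math.exp(-k1 * hrt_d)

def grau_effluent(s_in, a, b, hrt_d):
    """Grau second-order model, linearized as Si*HRT/(Si - Se) = a + b*HRT
    and rearranged for Se. The constants a (d) and b (-) are hypothetical."""
    return s_in * (1.0 - hrt_d / (a + b * hrt_d))

si = 1200.0                                   # mg/l total nitrogen (study value)
se_first = first_order_effluent(si, 13.0, 1.0)
se_grau = grau_effluent(si, 0.1, 1.05, 1.0)   # hypothetical a, b
```

Fitting both forms to the measured effluent data and comparing their prediction errors is what identified the Grau model as the more precise of the two.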

Keywords: anammox, filter media, kinetics, nitrogen removal

Procedia PDF Downloads 377