Search results for: parity check matrix
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3076

1096 Investigations of Flame Retardant Properties of Beneficiated Huntite and Hydromagnesite Mineral Reinforced Polymer Composites

Authors: H. Yilmaz Atay

Abstract:

Huntite and hydromagnesite minerals are used as additive materials to render materials incombustible, owing to their non-flammability. These fire-retardant minerals can help extinguish a fire in its early stages, preventing the flames from spreading even after ignition. Huntite and hydromagnesite are known to impart fire resistance to polymer composites. However, the high powder loadings required in such applications degrade the mechanical properties of the composites. In this study, huntite and hydromagnesite were beneficiated so that purer minerals could be used to reinforce the polymer composites; predictably, using purer mineral allows a lower powder loading. To this end, minerals freed from impurities by various processes were added to the polymer matrix at different loading levels and grades. Different types of samples were manufactured and subsequently characterized by XRD, SEM-EDS, XRF, and flame-retardant tests. Tensile strength and elongation at break were determined as functions of loading level and grade. In addition, the properties of polymer composites produced with minerals with and without impurities were compared. As a result of the work, it was concluded that beneficiated minerals are required to obtain better fire-proofing behavior in polymer composites.

Keywords: flame retardant, huntite and hydromagnesite, mechanical property, polymer composites

Procedia PDF Downloads 235
1095 Microstructure of Iranian Processed Cheese

Authors: R. Ezzati, M. Dezyani, H. Mirzaei

Abstract:

The effects of the concentration of trisodium citrate (TSC) emulsifying salt (0.25 to 2.75%) and holding time (0 to 20 min) on the textural, rheological, and microstructural properties of Iranian processed Cheddar cheese were studied using a central composite rotatable design. The loss tangent parameter (from small-amplitude oscillatory rheology), extent of flow, and melt area (from the Schreiber test) all indicated that the meltability of the process cheese decreased with increased concentration of TSC and that longer holding times led to a slight reduction in meltability. Hardness increased as the concentration of TSC increased. Fluorescence micrographs indicated that the size of fat droplets decreased with an increase in the concentration of TSC and with longer holding times. Acid-base titration curves indicated that the buffering peak at pH 4.8, which is due to residual colloidal calcium phosphate, decreased as the concentration of TSC increased. The soluble phosphate content increased, whereas the insoluble Ca content decreased, with increasing concentration of TSC. The results of this study suggest that TSC chelated Ca from colloidal calcium phosphate and dispersed casein; the citrate-Ca complex remained trapped within the process cheese matrix. Increasing the concentration of TSC helped to improve fat emulsification and casein dispersion during cooking, both of which probably helped to reinforce the structure of the process cheese.

Keywords: Iranian processed cheese, cheddar cheese, emulsifying salt, rheology

Procedia PDF Downloads 438
1094 Curative Role of Bromoenol Lactone, an Inhibitor of Phospholipase A2 Enzyme, during Cigarette Smoke Condensate Induced Anomalies in Lung Epithelium

Authors: Subodh Kumar, Sanjeev Kumar Sharma, Gaurav Kaushik, Pramod Avti, Phulen Sarma, Bikash Medhi, Krishan Lal Khanduja

Abstract:

Background: Cigarette smoke is a well-known causative factor in various lung diseases, especially cancer. Carcinogens and oxidant molecules present in cigarette smoke not only damage cellular constituents (lipids, proteins, DNA) but may also regulate the molecular pathways involved in inflammation and cancer. Continuous oxidative stress caused by the constituents of cigarette smoke leads to higher phospholipase A₂ (PLA₂) activity, resulting in elevated levels of secondary metabolites whose role in cancer is well defined. To reduce the burden of chronic inflammation and oxidative stress, as well as the elevated levels of secondary metabolites, we examined the curative potential of the PLA₂ inhibitor bromoenol lactone (BEL) during continuous exposure to cigarette smoke condensate (CSC). Aim: To assess the therapeutic potential of BEL, an inhibitor of PLA₂s, against CSC-induced changes in type I and type II alveolar epithelial cells. Methods: The effect of BEL on CSC-induced PLA₂ activity was measured using a colorimetric assay, cellular toxicity using a cell viability assay, membrane integrity using a fluorescein diacetate (FDA) uptake assay, reactive oxygen species (ROS) levels and apoptosis markers through flow cytometry, and cellular regulation via MAP kinase levels, in lung epithelium. Results: BEL significantly reduced CSC-induced PLA₂ activity, ROS levels, apoptosis, and kinase levels, whereas it improved cellular viability and membrane integrity. Conclusions: The current observations suggest that BEL may be a potential therapeutic agent against cigarette smoke-induced anomalies in the lung epithelium.

Keywords: cigarette smoke condensate, phospholipase A₂, oxidative stress, alveolar epithelium, bromoenol lactone

Procedia PDF Downloads 179
1093 Multivariate Control Chart to Determine Efficiency Measurements in Industrial Processes

Authors: J. J. Vargas, N. Prieto, L. A. Toro

Abstract:

Control charts are commonly used to monitor processes involving either variable or attribute quality characteristics, and determining the control limits is a critical task for quality engineers seeking to improve those processes. Nonetheless, in some applications it is necessary to include an estimation of efficiency. In this paper, the ability to quantify the efficiency of an industrial process was added to a control chart by incorporating a data envelopment analysis (DEA) approach. Specifically, Bayesian estimation was performed to calculate the posterior probability distributions of the parameters (the means and the variance-covariance matrix). This technique allows the data set to be analysed without invoking the hypothetically large sample implied in the problem, and the result is treated as an approximation to the finite-sample distribution. A rejection simulation method was carried out to generate random variables from the posterior distributions. Each resulting vector was fed to a stochastic DEA model over several cycles to establish the distribution of the efficiency measure of each DMU (decision-making unit). A control limit was calculated from the resulting model; if a DMU presents a low efficiency level, the system efficiency is declared out of control. The efficiency calculation reached a global optimum, which ensures model reliability.
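The control-limit idea can be illustrated with a small simulation. The sketch below is not the authors' stochastic DEA model (a full DEA solves a linear program per DMU); as a stand-in it uses a simple output/input ratio efficiency scaled to the best unit, perturbs the data as a proxy for posterior draws, and takes the 5th percentile of each DMU's simulated efficiency distribution as a lower control limit. All data are hypothetical.

```python
import random
import statistics

random.seed(1)

# Hypothetical observed (input, output) values for 4 DMUs.
dmus = {
    "A": (10.0, 9.0), "B": (12.0, 8.0),
    "C": (9.0, 9.5), "D": (11.0, 6.0),
}

def ratio_efficiency(data):
    """Output/input ratio efficiency, scaled so the best DMU = 1.
    (A stochastic DEA model would instead solve an LP per DMU.)"""
    ratios = {k: out / inp for k, (inp, out) in data.items()}
    best = max(ratios.values())
    return {k: r / best for k, r in ratios.items()}

# Monte Carlo: perturb inputs/outputs (stand-in for Bayesian posterior
# draws) and collect each DMU's efficiency distribution over many cycles.
draws = {k: [] for k in dmus}
for _ in range(2000):
    noisy = {k: (inp * random.gauss(1, 0.05), out * random.gauss(1, 0.05))
             for k, (inp, out) in dmus.items()}
    for k, e in ratio_efficiency(noisy).items():
        draws[k].append(e)

# Lower control limit per DMU: ~5th percentile of its efficiency draws.
for k in sorted(dmus):
    lcl = statistics.quantiles(draws[k], n=20)[0]
    print(k, round(statistics.mean(draws[k]), 3), round(lcl, 3))
```

An observed efficiency falling below a DMU's lower control limit would signal that system efficiency is out of control, mirroring the decision rule in the abstract.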

Keywords: data envelopment analysis (DEA), multivariate control chart, rejection simulation method

Procedia PDF Downloads 370
1092 Synthesis of Polyvinyl Alcohol Encapsulated Ag Nanoparticle Film by Microwave Irradiation for Reduction of P-Nitrophenol

Authors: Supriya, J. K. Basu, S. Sengupta

Abstract:

Silver nanoparticles have attracted much attention because of their unique physical and chemical properties. Silver nanoparticles embedded in a free-standing polyvinyl alcohol film (PVA/Ag) have been prepared by microwave irradiation in a few minutes. PVA acts as a reducing agent, a stabilizing agent, and a support for the silver nanoparticles. UV-Vis spectrometry, scanning electron microscopy (SEM), and transmission electron microscopy (TEM) confirmed the reduction of silver ions to silver nanoparticles in the polymer matrix. The effects of irradiation time, concentration of PVA, and concentration of silver precursor on the synthesis of the silver nanoparticles have been studied. The particle size of the silver nanoparticles decreases with increasing irradiation time, while the concentration of silver nanoparticles increases with increasing concentration of silver precursor. Good dispersion of the silver nanoparticles in the film has been confirmed by TEM analysis, with particle sizes in the range of 2-10 nm. The catalytic activity of the prepared silver nanoparticles as a heterogeneous catalyst has been studied in the reduction of p-nitrophenol (a water pollutant), with >98% conversion. From the experimental results, it can be concluded that the PVA-encapsulated Ag nanoparticle film shows good efficiency and reusability as a catalyst in the reduction of p-nitrophenol.

Keywords: biopolymer, microwave irradiation, silver nanoparticles, water pollutant

Procedia PDF Downloads 283
1091 Use of SUDOKU Design to Assess the Implications of the Block Size and Testing Order on Efficiency and Precision of Dulce De Leche Preference Estimation

Authors: Jéssica Ferreira Rodrigues, Júlio Silvio De Sousa Bueno Filho, Vanessa Rios De Souza, Ana Carla Marques Pinheiro

Abstract:

This study aimed to evaluate the implications of block size and testing order for the efficiency and precision of preference estimation for dulce de leche samples. Efficiency was defined as the inverse of the average variance of pairwise comparisons among treatments; precision was defined as the inverse of the variance of the estimates of treatment means (or effects). The experiment was originally designed to test 16 treatments as a series of 8 Sudoku 16x16 designs, 4 randomized independently and 4 others in the reverse order, to yield balance in testing order. Linear mixed models were fitted to the whole experiment, with 112 testers and all their grades, as well as to its partially balanced subgroups, namely: a) the experiment with the four initial EU; b) the experiment with EU 5 to 8; c) the experiment with EU 9 to 12; and d) the experiment with EU 13 to 16. Responses were recorded on a nine-point hedonic scale, and a mixed linear model was assumed with random tester and treatment effects and a fixed testing-order effect. Analysis with a cumulative random-effects probit link model was very similar, with essentially no different conclusions, so for simplicity we present the results under the Gaussian assumption. The R-CRAN library lme4 and its function lmer (Fit Linear Mixed-Effects Models) were used for the mixed models, and the libraries Bayesthresh (default Gaussian threshold function) and ordinal, with the function clmm (Cumulative Link Mixed Model), were used to check the Bayesian analysis of threshold models and the cumulative link probit models. It was noted that the number of samples tested in the same session can influence the acceptance level, underestimating the acceptance. However, presenting a larger number of samples can help to improve discrimination among the samples.
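The reported order effect (acceptance underestimated in long sessions) can be reproduced with a toy simulation. The model below is an assumption for illustration, not the authors' lme4/clmm analysis: each tester has a Gaussian random effect, and a fixed per-position fatigue term lowers later scores on the nine-point hedonic scale, so larger blocks drag the estimated acceptance down.

```python
import random

random.seed(7)

# Assumed parameters: true mean acceptance, tester random-effect SD,
# residual noise SD, and score drop per serial position (fatigue).
TRUE_MEAN, TESTER_SD, NOISE_SD, ORDER_DROP = 6.5, 0.5, 0.8, 0.08

def session(n_samples, tester_effect):
    """One tester's scores, clamped to the 1-9 hedonic scale."""
    return [min(9.0, max(1.0, TRUE_MEAN + tester_effect
                         - ORDER_DROP * pos + random.gauss(0, NOISE_SD)))
            for pos in range(n_samples)]

def mean_acceptance(n_samples, n_testers=500):
    total = 0.0
    for _ in range(n_testers):
        scores = session(n_samples, random.gauss(0, TESTER_SD))
        total += sum(scores) / len(scores)
    return total / n_testers

short_block = mean_acceptance(4)    # small block: little fatigue
long_block = mean_acceptance(16)    # large block: acceptance underestimated
print(round(short_block, 2), round(long_block, 2))
```

Under these assumed values the 16-sample sessions yield a visibly lower mean acceptance than the 4-sample sessions, which is the bias the balanced Sudoku design and the fixed order effect in the mixed model are meant to account for.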

Keywords: acceptance, block size, mixed linear model, testing order

Procedia PDF Downloads 316
1090 Physiological Assessment for Straightforward Symptom Identification (PASSify): An Oral Diagnostic Device for Infants

Authors: Kathryn Rooney, Kaitlyn Eddy, Evan Landers, Weihui Li

Abstract:

The international mortality rate for neonates and infants has been declining at a disproportionally low rate when compared to the overall decline in child mortality in recent decades. A significant portion of infant deaths could be prevented with the implementation of low-cost and easy to use physiological monitoring devices, by enabling early identification of symptoms before they progress into life-threatening illnesses. The oral diagnostic device discussed in this paper serves to continuously monitor the key vital signs of body temperature, respiratory rate, heart rate, and oxygen saturation. The device mimics an infant pacifier, designed to be easily tolerated by infants as well as orthodontically inert. The fundamental measurements are gathered via thermistors and a pulse oximeter, each encapsulated in medical-grade silicone and wired internally to a microcontroller chip. The chip then translates the raw measurements into physiological values via an internal algorithm, before outputting the data to a liquid crystal display screen and an Android application. Additionally, a biological sample collection chamber is incorporated into the internal portion of the device. The movement within the oral chamber created by sucking on the pacifier-like device pushes saliva through a small check valve in the distal end, where it is accumulated and stored. The collection chamber can be easily removed, making the sample readily available to be tested for various diseases and analytes. With the vital sign monitoring and sample collection offered by this device, abnormal fluctuations in physiological parameters can be identified and appropriate medical care can be sought. This device enables preventative diagnosis for infants who may otherwise have gone undiagnosed, due to the inaccessibility of healthcare that plagues vast numbers of underprivileged populations.
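For the oxygen-saturation component, pulse oximeters conventionally estimate SpO₂ from the "ratio of ratios" of the red and infrared photoplethysmogram signals. The sketch below uses the textbook first-order calibration SpO₂ ≈ 110 − 25R, which is only an approximation; a device such as the one described would apply its own empirically fitted calibration curve, and the readings shown are hypothetical.

```python
def spo2_from_raw(red_ac, red_dc, ir_ac, ir_dc):
    """Ratio-of-ratios pulse-oximetry estimate.

    R = (AC_red/DC_red) / (AC_ir/DC_ir); the linear map
    SpO2 ~ 110 - 25*R is a common first-order approximation only.
    Result is clamped to the physically meaningful 0-100% range.
    """
    r = (red_ac / red_dc) / (ir_ac / ir_dc)
    return max(0.0, min(100.0, 110.0 - 25.0 * r))

# Hypothetical photodiode readings (arbitrary units).
print(spo2_from_raw(0.013, 1.0, 0.025, 1.0))  # R = 0.52 -> 97.0
```

A microcontroller like the one in the device would compute this from the AC (pulsatile) and DC (baseline) components of each LED channel before writing the value to the display.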

Keywords: neonate mortality, infant mortality, low-cost diagnostics, vital signs, saliva testing, preventative care

Procedia PDF Downloads 149
1089 Influence of Silicon Carbide Particle Size and Thermo-Mechanical Processing on Dimensional Stability of Al 2124/SiC Nanocomposite

Authors: Mohamed M. Emara, Heba Ashraf

Abstract:

This study investigates the effect of silicon carbide (SiC) particle size and thermo-mechanical processing on the dimensional stability of aluminum alloy 2124. Three SiC weight fractions, 2.5, 5, and 10 wt.%, with different SiC particle sizes (25 μm, 5 μm, and 100 nm) were produced using a mechanical ball mill, and standard testing samples were fabricated using the powder metallurgy technique. Samples both prior to and after extrusion were heated from room temperature up to 400 ºC in a dilatometer at different heating rates, namely 10, 20, and 40 ºC/min. The analysis showed that, for all materials, the length change increased as the temperature increased, and the temperature sensitivity of the aluminum alloy decreased in the presence of both micro- and nano-sized silicon carbide. For all conditions, the nanocomposites showed better dimensional stability than the conventional Al 2124/SiC composites. The extruded samples showed better thermal stability and lower temperature sensitivity for both micro- and nano-sized silicon carbide.

Keywords: aluminum 2124 metal matrix composite, SiC nano-sized reinforcements, powder metallurgy, extrusion mechanical ball mill, dimensional stability

Procedia PDF Downloads 523
1088 CRLH and SRR Based Microwave Filter Design Useful for Communication Applications

Authors: Subal Kar, Amitesh Kumar, A. Majumder, S. K. Ghosh, S. Saha, S. S. Sikdar, T. K. Saha

Abstract:

CRLH (composite right/left-handed) based and SRR (split-ring resonator) based filters have been designed at microwave frequency that can provide better performance than a conventional edge-coupled band-pass filter designed around the same frequency, 2.45 GHz. Both CRLH and SRR are unit cells used in metamaterial design. The primary aims of designing filters with such structures are size reduction and novel filter performance. The CRLH-based filter has been designed in microstrip transmission line, while the SRR-based filter is designed with SRR loading in waveguide. The CRLH-based filter designed at 2.45 GHz provides an insertion loss of 1.6 dB with harmonic suppression up to 10 GHz and a 67% size reduction when compared with a conventional edge-coupled band-pass filter designed around the same frequency. A one-dimensional (1-D) SRR matrix loaded in a waveguide shows the possibility of realizing a stop-band with sharp skirts in the pass-band of a normal rectangular waveguide by tailoring the dimensions of the SRR unit cells. Such filters are expected to be very useful for communication systems at microwave frequency.

Keywords: BPF, CRLH, harmonic suppression, metamaterial, SRR, waveguide

Procedia PDF Downloads 422
1087 Reconfigurable Consensus Achievement of Multi Agent Systems Subject to Actuator Faults in a Leaderless Architecture

Authors: F. Amirarfaei, K. Khorasani

Abstract:

In this paper, reconfigurable consensus achievement of a team of agents with marginally stable linear dynamics and a single input channel is considered. The control algorithm is based on a first-order linear protocol. After occurrence of a loss-of-effectiveness (LOE) fault in one of the actuators, the control gain is redesigned, using the imperfect information on actuator effectiveness provided by the fault detection and identification module, in a way that still reaches consensus. The idea is based on modeling the change in effectiveness as a change of the Laplacian matrix. Then, as special cases of this class of systems, teams of single integrators as well as double integrators are considered, and their behavior subject to an LOE fault is analyzed. The well-known relative-measurements consensus protocol is applied to a leaderless team of single-integrator as well as double-integrator systems, and the Gershgorin disk theorem is employed to determine whether fault occurrence has an effect on system stability and team consensus achievement. The analyses show that a loss-of-effectiveness fault in the actuator(s) of integrator systems affects neither system stability nor consensus achievement.
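The claim that an LOE fault leaves consensus intact can be checked numerically for the single-integrator case. The sketch below is an illustration under assumed values (a four-agent path graph, agent 1 at 30% actuator effectiveness), not the paper's model: scaling one agent's control input by an effectiveness factor b_i in (0, 1] still drives all states to a common value when the graph is connected.

```python
# Leaderless relative-measurement consensus for single integrators:
#   x_i' = b_i * u_i,   u_i = -sum_j (x_i - x_j) over neighbours j,
# where b_i models actuator loss of effectiveness (LOE).

def simulate(adj, x0, effectiveness, dt=0.01, steps=5000):
    """Forward-Euler integration of the faulty consensus dynamics."""
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        u = [-sum(x[i] - x[j] for j in adj[i]) for i in range(n)]
        x = [x[i] + dt * effectiveness[i] * u[i] for i in range(n)]
    return x

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}      # connected path graph
x0 = [0.0, 2.0, -1.0, 5.0]
healthy = simulate(adj, x0, [1.0, 1.0, 1.0, 1.0])
faulty = simulate(adj, x0, [1.0, 0.3, 1.0, 1.0])  # agent 1 at 30% effectiveness

def spread(v):
    return max(v) - min(v)

print(round(spread(healthy), 6), round(spread(faulty), 6))
```

In both runs the spread of the agent states collapses toward zero; the fault changes the convergence rate and the consensus value, but not the fact of convergence, consistent with the abstract's conclusion.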

Keywords: multi-agent system, actuator fault, stability analysis, consensus achievement

Procedia PDF Downloads 327
1086 Modelling Volatility Spillovers and Cross Hedging among Major Agricultural Commodity Futures

Authors: Roengchai Tansuchat, Woraphon Yamaka, Paravee Maneejuk

Abstract:

In recent years, the global financial crisis, economic instability, and large fluctuations in agricultural commodity prices have led to increased concern about volatility transmission among commodities. The problem is further exacerbated by volatility induced by other commodities' price fluctuations, which can make a hedging strategy both costly and ineffective. This paper therefore analyzes the volatility spillover effects among major agricultural commodities, namely corn, soybeans, wheat, and rice, to help commodity suppliers hedge their portfolios and manage their risk and co-volatility. We provide a switching-regime approach to the issue of volatility spillovers under different economic conditions, namely economic upturn and downturn. In particular, we investigate the relationships and volatility transmissions between these commodities under these different conditions. We propose a copula-based multivariate Markov-switching GARCH model with two regimes that depend on the economic conditions, and perform a simulation study to check the accuracy of the proposed model. In this study, the correlation term in the cross-hedge ratio is obtained from six copula families: two elliptical copulas (Gaussian and Student-t) and four Archimedean copulas (Clayton, Gumbel, Frank, and Joe). We use one-step maximum likelihood estimation to fit our models and compare the performance of these copulas using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). In the application to agricultural commodities, weekly data from 4 January 2005 to 1 September 2016 are used, covering 612 observations. The empirical results indicate that the volatility spillover effects among cereal futures differ in response to the economic conditions. In addition, the hedge-effectiveness results suggest optimal cross-hedge strategies for the different economic conditions, especially upturn and downturn.
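For intuition, the regime-dependent minimum-variance cross-hedge ratio h* = Cov(spot, futures)/Var(futures) can be estimated separately per regime. The sketch below is a drastic simplification of the paper's copula-based Markov-switching GARCH machinery: the two "regimes" are just synthetic return series with different assumed spot-futures dependence, illustrating why the optimal hedge differs between upturn and downturn.

```python
import random

random.seed(3)

def hedge_ratio(spot, fut):
    """Minimum-variance cross-hedge ratio h* = Cov(s, f) / Var(f)."""
    n = len(spot)
    ms, mf = sum(spot) / n, sum(fut) / n
    cov = sum((s - ms) * (f - mf) for s, f in zip(spot, fut)) / (n - 1)
    var = sum((f - mf) ** 2 for f in fut) / (n - 1)
    return cov / var

def regime_returns(beta, n=2000):
    """Hypothetical weekly returns: futures drive spot with slope beta."""
    fut = [random.gauss(0, 0.02) for _ in range(n)]
    spot = [beta * f + random.gauss(0, 0.01) for f in fut]
    return spot, fut

# Assumed dependence: stronger spot-futures linkage in the downturn regime.
up_spot, up_fut = regime_returns(beta=0.5)
down_spot, down_fut = regime_returns(beta=0.9)
print(round(hedge_ratio(up_spot, up_fut), 2),
      round(hedge_ratio(down_spot, down_fut), 2))
```

The estimated ratios recover the assumed slopes, so a hedger holding the downturn-regime ratio in an upturn would be over-hedged; the paper's copula correlations play the role of the dependence term here.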

Keywords: agricultural commodity futures, cereal, cross-hedge, spillover effect, switching regime approach

Procedia PDF Downloads 195
1085 Layouting for Phase II of New Priok Project Using Adaptive Port Planning Frameworks

Authors: Mustarakh Gelfi, Poonam Taneja, Tiedo Vellinga, Delon Hamonangan

Abstract:

The initial masterplan of New Priok in the Port of Tanjung Priok, developed in 2012, is being updated to cater to new developments and new demands. In the new masterplan (2017), Phase II of development will start from 2035 onwards, depending on future conditions. This study is about creating a robust masterplan for Phase II that will remain functional under future uncertainties. The methodology applied is scenario-based planning within the framework of Adaptive Port Planning (APP). Scenario-based planning helps to open up the perspective of the future as a horizon of possibilities. The scenarios are built around two major uncertainties in a 2x2 matrix approach; for New Priok, these are the economy and sustainability awareness. The outcome is four plausible scenarios: Green Port, Business as Usual, Moderate Expansion, and No Expansion. Terminal needs in each scenario are analyzed through traffic analysis and identification of the key cargos and commodities. In conclusion, this study gives a wide perspective for the Port of Tanjung Priok in planning Phase II of the development. The port has to realize that uncertainties persist and are very likely to influence decision making on future layouts. Instead of ignoring uncertainty, the port needs to make action plans to deal with these uncertainties.

Keywords: Indonesia Port, port's layout, port planning, scenario-based planning

Procedia PDF Downloads 527
1084 Pricing European Options under Jump Diffusion Models with Fast L-stable Padé Scheme

Authors: Salah Alrabeei, Mohammad Yousuf

Abstract:

The goal of option pricing theory is to help investors manage their money, enhance returns, and control their financial future by theoretically valuing their options. Modeling option prices with Black-Scholes-type models with jumps guarantees that market movement is taken into account; however, such models can be solved only numerically. Furthermore, not all numerical methods are efficient for these models, because the payoffs are non-smooth, with discontinuous derivatives at the exercise price. In this paper, the exponential time differencing (ETD) method is applied to solve the partial integro-differential equations arising in pricing European options under Merton's and Kou's jump-diffusion models. The Fast Fourier Transform (FFT) algorithm is used as a matrix-vector multiplication solver, which reduces the complexity from O(M²) to O(M log M). A partial-fraction form of the Padé schemes is used to avoid the complexity of inverting polynomials of matrices. Together, these two tools yield efficient and accurate numerical solutions. We construct a parallel, easy-to-implement version of the numerical scheme. Numerical experiments are given to show how fast and accurate our scheme is.
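The FFT matrix-vector trick mentioned above rests on the fact that the dense matrix discretizing the jump (integral) term is Toeplitz, so it can be embedded in a circulant matrix whose product with a vector costs O(M log M) via the FFT. A minimal sketch (assuming NumPy; the Toeplitz matrix here is random rather than an actual jump kernel):

```python
import numpy as np

def toeplitz_matvec(col, row, x):
    """Multiply the Toeplitz matrix with first column `col` and first
    row `row` by x in O(M log M), by embedding it in a 2M circulant
    matrix and using the FFT convolution theorem."""
    m = len(x)
    # First column of the circulant embedding: [col, 0, reversed row tail].
    c = np.concatenate([col, [0.0], row[:0:-1]])
    xp = np.concatenate([x, np.zeros(m)])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(xp))
    return y[:m].real

# Check against the direct O(M^2) product on a small random example.
rng = np.random.default_rng(0)
m = 64
col, row = rng.standard_normal(m), rng.standard_normal(m)
row[0] = col[0]  # diagonal entry must agree
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(m)]
              for i in range(m)])
x = rng.standard_normal(m)
print(np.allclose(T @ x, toeplitz_matvec(col, row, x)))
```

In an actual jump-diffusion solver, `col`/`row` would hold the discretized jump density and `x` the current option-value vector; the circulant embedding is what brings each ETD stage down from O(M²) to O(M log M).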

Keywords: integro-differential equations, L-stable methods, pricing European options, jump-diffusion model

Procedia PDF Downloads 143
1083 Modelling Dengue Disease With Climate Variables Using Geospatial Data For Mekong River Delta Region of Vietnam

Authors: Thi Thanh Nga Pham, Damien Philippon, Alexis Drogoul, Thi Thu Thuy Nguyen, Tien Cong Nguyen

Abstract:

The Mekong River Delta region of Vietnam is recognized as one of the areas most vulnerable to climate change, owing to flooding and sea-level rise, and therefore faces an increased burden of climate-related diseases. Changes in temperature and precipitation are likely to alter the incidence and distribution of vector-borne diseases such as dengue fever. In this region, the dengue epidemic peaks around July to September, during the rainy season, and climate is believed to be an important factor in dengue transmission. This study aims to enhance the capacity for dengue prediction by relating dengue incidence to climate and environmental variables for the Mekong River Delta of Vietnam during 2005-2015. Mathematical models of vector-host infectious disease, covering larvae, mosquitoes, and humans, were used to calculate the impacts of climate on dengue transmission, with geospatial data incorporated as model input. Monthly dengue incidence data were collected at the provincial level. Precipitation data were extracted from the satellite observations of GSMaP (Global Satellite Mapping of Precipitation), and land surface temperature and land cover data were taken from MODIS. The seasonal reproduction number was estimated to evaluate the potential, severity, and persistence of dengue infection, while the final infected number was derived to check for dengue outbreaks. The results show that dengue infection depends on the seasonal variation of the climate variables, with a peak during the rainy season, and the predicted dengue incidence follows this dynamic well for the whole studied region. However, the model did not capture the highest outbreak, that of 2007, reflecting nonlinear dependences of transmission on climate. Other possible effects will be discussed to address this limitation of the model. These findings suggest the need to consider both climate variables and other variability across temporal and spatial scales.
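The vector-host structure described above can be sketched as a small ODE system with seasonally forced transmission, where mosquito activity (and hence the transmission rates) follows the rainy-season cycle. The parameters and initial conditions below are illustrative placeholders, not the calibrated values of the study:

```python
import math

def simulate(days=365, dt=0.1):
    """Forward-Euler run of a minimal seasonal vector-host SIR model.
    Host fractions (Sh, Ih, Rh) and vector fractions (Sv, Iv); vector
    births balance deaths so the vector population stays ~constant."""
    Sh, Ih, Rh = 0.999, 0.001, 0.0
    Sv, Iv = 0.99, 0.01
    mu_v, gamma = 1 / 14.0, 1 / 7.0      # vector death, host recovery (1/day)
    peak_incidence, t = 0.0, 0.0
    while t < days:
        # Seasonal forcing: transmission peaks in the assumed rainy season.
        season = 1 + 0.8 * math.sin(2 * math.pi * (t - 120) / 365)
        beta_hv = beta_vh = 0.30 * season
        new_h = beta_vh * Iv * Sh          # new host infections per day
        dSh, dIh, dRh = -new_h, new_h - gamma * Ih, gamma * Ih
        dSv = mu_v - beta_hv * Ih * Sv - mu_v * Sv
        dIv = beta_hv * Ih * Sv - mu_v * Iv
        Sh += dt * dSh; Ih += dt * dIh; Rh += dt * dRh
        Sv += dt * dSv; Iv += dt * dIv
        peak_incidence = max(peak_incidence, new_h)
        t += dt
    return Ih, peak_incidence

Ih_end, peak = simulate()
print(round(Ih_end, 4), round(peak, 4))
```

In the study, the seasonal forcing would be driven by the GSMaP precipitation and MODIS temperature series rather than a sine wave, and the seasonal reproduction number would be read off the linearized infection subsystem.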

Keywords: infectious disease, dengue, geospatial data, climate

Procedia PDF Downloads 375
1082 On the Exact Stability Analysis of Tall Buildings with Outrigger System

Authors: Mahrooz Abed, Amir R. Masoodi

Abstract:

Many lateral structural systems are used in tall buildings, such as rigid frames, braced frames, shear walls, tubular structures, and core structures. Among the efficient structures for drift control and base-moment reduction in tall buildings are outrigger and belt-truss systems. When adopting outrigger beams in building design, they should be located at an optimum position for an economical design. A range of different strategies has been employed to identify the optimum locations of these outrigger beams under wind load; however, there is an absence of scientific research or case studies dealing with optimum outrigger location based on buckling analysis. In this paper, one outrigger system is considered near the middle of the height of the structure, and the optimum location of the outrigger is found based on the buckling-load limitation. The core of the structure is modeled as a clamped tapered beam, whose exact stiffness matrix is formulated based on Euler-Bernoulli theory. Finally, the optimal location of the outrigger is found from the buckling load of the structure.
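As context for the buckling-load criterion, the classical Euler formula P_cr = π²EI/(kL)² gives the critical load of a prismatic column; the paper's tapered clamped core with an outrigger needs the exact stiffness-matrix analysis described above, but the formula already shows how added restraint (a smaller effective-length factor k, which an outrigger effectively provides) raises the buckling load. All values below are assumed for illustration:

```python
import math

def euler_buckling_load(E, I, L, k=2.0):
    """Euler critical load P_cr = pi^2 * E * I / (k * L)^2.
    Effective-length factor k: 2.0 for clamped-free (cantilever core),
    0.5 for clamped-clamped (a rough proxy for heavy top restraint)."""
    return math.pi ** 2 * E * I / (k * L) ** 2

E = 30e9     # Pa, assumed concrete-core modulus
I = 50.0     # m^4, assumed core second moment of area
L = 150.0    # m, assumed building height

free_top = euler_buckling_load(E, I, L, k=2.0)
restrained = euler_buckling_load(E, I, L, k=0.5)
print(f"{free_top:.3e} N vs {restrained:.3e} N")
```

Moving from the clamped-free to the fully restrained idealization multiplies the critical load by 16, which is the kind of sensitivity the outrigger-placement optimization exploits.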

Keywords: tall buildings, outrigger system, buckling load, second-order effects, Euler-Bernoulli beam theory

Procedia PDF Downloads 390
1081 Experimental and Theoretical Study on Hygrothermal Aging Effect on Mechanical Behavior of Fiber Reinforced Plastic Laminates

Authors: S. Larbi, R. Bensaada, S. Djebali, A. Bilek

Abstract:

The manufacture of composite parts is a major issue in many industrial domains. Polymer composite materials are ideal for structural applications where high strength-to-weight and stiffness-to-weight ratios are required. However, exposure to extreme environmental conditions (temperature, humidity) affects the mechanical properties of organic composite materials and leads to undesirable degradation. Aging mechanisms in organic matrices are very diverse and vary according to the polymer and the aging conditions, such as temperature and humidity. This paper studies the effect of hygrothermal aging at 40 °C in different exposure environments on the mechanical properties of fiber-reinforced plastic laminates. Two composite materials are used in the study (carbon fiber/epoxy and glass fiber/vinyl ester, with two stratifications for both materials, [904/04] and [454/04]). The experimental procedure includes mechanical characterization of the materials in the virgin state and exposure of specimens to two environments (seawater and demineralized water). Absorption kinetics are determined for the two materials and both stratifications. A three-point bending test is performed on the aged materials in order to determine the hygrothermal effect on their mechanical properties.

Keywords: FRP laminates, hygrothermal aging, mechanical properties, theory of laminates

Procedia PDF Downloads 278
1080 Directional Solidification of Al–Cu–Mg Eutectic Alloy

Authors: Yusuf Kaygısız, Necmettin Maraşlı

Abstract:

Aluminum alloys are produced and used in various areas of industry, especially the aerospace industry. The advantages of these alloys over traditional iron-based alloys are light weight, corrosion resistance, and very good thermal and electrical conductivity. The aim of this work is to investigate experimentally the effect of growth rate on the eutectic spacings (λ), microhardness, tensile strength, and electrical resistivity of the Al-30wt.%Cu-6wt.%Mg eutectic alloy. The alloy was directionally solidified at a constant temperature gradient (G = 8.55 K/mm) with different growth rates, from 9.43 to 173.3 µm/s, using a Bridgman-type furnace. The dependence of the microstructure, microhardness, tensile strength, and electrical resistivity of the directionally solidified alloy on growth rate was investigated. The eutectic microstructure consists of regular Al2CuMg lamellar and Al2Cu rod phases within the α (Al) solid-solution matrix. The lamellar eutectic spacings were measured on transverse sections of the samples and were found to decrease as the growth rate increases. The microhardness, tensile strength, and electrical resistivity of the alloy were also measured, and the relationships between them were analyzed experimentally by regression analysis. According to the present results, tensile strength and electrical resistivity increase with increasing growth rate.

Keywords: directional solidification, aluminum alloys, microstructure, electrical properties, hardness test

Procedia PDF Downloads 288
1079 Deciding Graph Non-Hamiltonicity via a Closure Algorithm

Authors: E. R. Swart, S. J. Gismondi, N. R. Swart, C. E. Bell

Abstract:

We present a heuristic algorithm that decides graph non-Hamiltonicity. All graphs are directed, with each undirected edge regarded as a pair of counter-directed arcs. Each of the n! Hamilton cycles in a complete graph on n+1 vertices is mapped to an n-permutation matrix P, where p(u,i)=1 if and only if the i-th arc in a cycle enters vertex u, starting and ending at vertex n+1. We first create the exclusion set E by noting all arcs (u, v) not in G, sufficient to code precisely all cycles excluded from G, i.e. cycles not in G use at least one arc not in G. Members are pairs of components of P, {p(u,i), p(v,i+1)}, i=1, ..., n-1. A doubly stochastic-like relaxed LP formulation of the Hamilton cycle decision problem is constructed. Each {p(u,i), p(v,i+1)} in E is coded as the variable assignment q(u,i,v,i+1)=0, i.e. it shrinks the feasible region. We then implement the Weak Closure Algorithm (WCA), which tests necessary conditions of a matching, together with Boolean closure, to decide 0/1 variable assignments. Each {p(u,i), p(v,j)} not in E is tested for membership in E and, if possible, added to E (q(u,i,v,j)=0) to iteratively maximize |E|. If the WCA constructs E to be maximal, i.e. the set of all {p(u,i), p(v,j)}, then G is decided non-Hamiltonian; only non-Hamiltonian G share this maximal property. Ten non-Hamiltonian graphs (10 through 104 vertices) and 2000 randomized 31-vertex non-Hamiltonian graphs were tested and correctly decided non-Hamiltonian. For Hamiltonian G, the complement of E covers a matching, which is perhaps useful in searching for cycles. We also present an example where the WCA fails.
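The role of the exclusion set E can be shown on a toy instance. The sketch below builds E exactly as described (the arcs absent from G) but, for illustration only, replaces the Weak Closure Algorithm with brute-force enumeration of candidate cycles, which is feasible only for tiny graphs:

```python
from itertools import permutations

def non_hamiltonian(n, arcs):
    """Vertices 0..n-1; `arcs` is a set of directed edges (u, v).
    A cycle is in G iff it uses no arc from the exclusion set E
    (the arcs not in G), mirroring the paper's coding of E."""
    excluded = {(u, v) for u in range(n) for v in range(n)
                if u != v and (u, v) not in arcs}
    for perm in permutations(range(1, n)):        # fix cycles to start at 0
        cycle = (0,) + perm + (0,)
        if all((cycle[i], cycle[i + 1]) not in excluded for i in range(n)):
            return False                           # a Hamilton cycle survives E
    return True

# Undirected edges encoded as counter-directed arc pairs, as in the paper.
star = {(0, 1), (1, 0), (0, 2), (2, 0), (0, 3), (3, 0)}   # K_{1,3}: no cycle
ring = ({(i, (i + 1) % 4) for i in range(4)}
        | {((i + 1) % 4, i) for i in range(4)})            # C_4: Hamiltonian
print(non_hamiltonian(4, star), non_hamiltonian(4, ring))
```

The WCA's contribution is precisely to avoid this factorial enumeration: it grows E through closure operations and decides non-Hamiltonicity when E becomes maximal.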

Keywords: Hamilton cycle decision problem, computational complexity theory, graph theory, theoretical computer science

Procedia PDF Downloads 368
1078 Ambiguity Resolution for Ground-based Pulse Doppler Radars Using Multiple Medium Pulse Repetition Frequency

Authors: Khue Nguyen Dinh, Loi Nguyen Van, Thanh Nguyen Nhu

Abstract:

In this paper, we propose an adaptive method to resolve ambiguities and a ghost-target removal process to extract targets detected by a ground-based pulse-Doppler radar using medium pulse repetition frequency (PRF) waveforms. The ambiguity resolution method is an adaptive implementation of the coincidence algorithm, applied to a two-dimensional (2D) range-velocity matrix to resolve range and velocity ambiguities simultaneously, with a proposed clustering filter to enhance the error resilience of the system. Here we consider multiple-target environments. The ghost-target removal process, based on the power after Doppler processing, is proposed to mitigate ghost detections and thereby enhance the performance of ground-based radars using a short PRF schedule in multiple-target environments. Simulation results on a ground-based pulse-Doppler radar model are presented to show the effectiveness of the proposed approach.
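Reduced to one dimension, the coincidence idea can be sketched as follows: each PRF reports a folded (apparent) range, candidate true ranges are generated by unfolding, and the candidate on which all PRFs agree within a tolerance is taken as the target range. This is an illustrative sketch with hypothetical numbers, not the paper's 2D adaptive implementation with its clustering filter:

```python
def resolve_range(apparent, unambiguous, r_max, tol=1.0):
    """1-D coincidence algorithm.

    apparent[i] is the folded range (km) measured with PRF i and
    unambiguous[i] its unambiguous range (km). Returns the first
    candidate below r_max on which all PRFs agree within tol, or None.
    """
    k = 0
    while True:
        cand = apparent[0] + k * unambiguous[0]
        if cand > r_max:
            return None
        if all(min(abs(cand - (a + m * u))
                   for m in range(int(r_max // u) + 1)) <= tol
               for a, u in zip(apparent[1:], unambiguous[1:])):
            return cand
        k += 1

# Hypothetical target at 95 km seen with unambiguous ranges 30 and 40 km:
true_r = 95.0
app = [true_r % 30.0, true_r % 40.0]       # folded measurements: 5, 15
r = resolve_range(app, [30.0, 40.0], r_max=200.0)
```

With a single target the unfolded ranges coincide exactly; ghost targets arise when unrelated foldings of several targets also happen to coincide, which is what the power-based removal step addresses.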

Keywords: ambiguity resolution, coincidence algorithm, medium PRF, ghosting removal

Procedia PDF Downloads 144
1077 Status of Reintroduced Houbara Bustard Chlamydotis macqueeni in Saudi Arabia

Authors: Mohammad Zafar-ul Islam

Abstract:

The breeding programme of the Houbara bustard was started in Saudi Arabia in 1986 to undertake the restoration of native species such as the Houbara through re-introduction, involving the release of captive-bred birds into the wild. Two sites were selected for Houbara re-introduction, the Mahazat as-Sayd and Saja Umm Ar-Rimth protected areas, in 1988 and 1998, respectively. Both areas are fenced, fairly level sandy plains with a few rock outcrops. Captive-bred Houbara have been released in Mahazat by the NWRC since 1992, and those birds have been breeding successfully ever since. The nesting season at Mahazat runs from February to May, and on average 20-25 nests are located each year, but no nesting has been recorded in Saja. Houbara are monitored with radio transmitters, by aerial tracking and by vehicle for terrestrial tracking. The total population of Houbara in Mahazat is roughly estimated at 300-400 birds, using N = n1+n2+n3+n4+n5 (n1 = released or wild-born, radio-tagged, regularly monitored/checked; n2 = radio-tagged but missing; n3 = wild-born chicks not recorded; n4 = wild-born chicks recorded but not tagged; n5 = immigrants). In Saja, however, only 4-7 individuals have survived since 2001, because most of the birds were predated immediately after release. The mean annual home range was also calculated, using kernel and convex polygon methods with the Range VII software, along with the minimum density of Houbara. To track the birds' movement or migration to other regions, two captive-reared males that had been released into the wild and one wild-born female were fitted with Platform Transmitter Terminals (PTT). The home ranges show that the wild-born female moved over a larger area than the two males. More areas need to be selected for the re-introduction programme to establish a network of sites that allows released birds to move between sites and mingle with wild Houbara. Some potential sites have been proposed, which require further surveys to check habitat suitability.
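The population estimate above is a plain sum of the five tracked categories. A minimal sketch with hypothetical counts (the study reports only the 300-400 total, not the per-category breakdown):

```python
def houbara_population(n1, n2, n3, n4, n5):
    """N = n1 + n2 + n3 + n4 + n5, following the categories in the text:
    n1 released or wild-born, radio-tagged, regularly monitored;
    n2 radio-tagged but missing; n3 wild-born chicks not recorded;
    n4 wild-born chicks recorded but not tagged; n5 immigrants.
    """
    return n1 + n2 + n3 + n4 + n5

# Hypothetical per-category counts consistent with the 300-400 estimate:
N = houbara_population(n1=120, n2=60, n3=80, n4=70, n5=20)
```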

Keywords: re-introduction, survival rate, home range, Saudi Arabia

Procedia PDF Downloads 406
1076 Impure Water, a Future Disaster: A Case Study of Lahore Ground Water Quality with GIS Techniques

Authors: Rana Waqar Aslam, Urooj Saeed, Hammad Mehmood, Hameed Ullah, Imtiaz Younas

Abstract:

This research was conducted to assess the water quality in and around the Lahore metropolitan area across three different land uses, i.e., residential, commercial, and industrial. For this, 29 sample sites were selected by simple random sampling, and samples were collected at the source (WASA tube wells). The criterion for selecting sample sites was maximum population concentration within the selected land uses. The results showed that, in residential land use, the proportions of nitrate and turbidity are at their highest levels in the areas of Allama Iqbal Town and Samanabad Town. In commercial land use, Gulberg and Data Gunj Bakhsh Town have the highest proportions of chlorides, calcium, TDS, pH, Mg, total hardness, arsenic and alkalinity, whereas in industrial land use, Ravi and Wahga Town have the highest proportions of arsenic, Mg, nitrate, pH, and turbidity. The high concentrations of these parameters in these areas are mainly due to old and fractured pipelines that allow bacterial as well as physiochemical contaminants to contaminate the potable water at the source. Furthermore, in most areas wastewater from domestic, industrial and municipal sources discharges easily into open spaces and water bodies, such as canals, rivers and lakes, seeps underground, and becomes part of the groundwater. In addition, the huge waste dumps located in Lahore are a cause of groundwater contamination: when rain falls, the water seeps through them into the ground and degrades the groundwater quality. On the basis of the results derived with geospatial technology, ArcGIS 9.3 interpolation (IDW), it is recommended that water filtration plants be installed with specific parameter controls, and that a dedicated inspection team be formed for water quality checks at the source. Old water pipelines must be replaced with new ones, and a safe water depth must be ensured at the source end.
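IDW, as used here for interpolation, estimates the value at an unsampled location as a distance-weighted average of the sampled values, with weights 1/dᵖ. A minimal sketch with hypothetical readings, not the study's measured WASA tube-well data:

```python
def idw(x, y, samples, power=2.0):
    """Inverse Distance Weighting: estimate a water-quality value at
    (x, y) from (xi, yi, value) samples. Returns the sample value
    exactly when (x, y) coincides with a sample point.
    """
    num = den = 0.0
    for xi, yi, v in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return v                      # exact interpolator at samples
        w = d2 ** (-power / 2.0)          # weight = 1 / d**power
        num += w * v
        den += w
    return num / den

# Hypothetical nitrate readings (mg/L) at three sample sites:
pts = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0), (0.0, 1.0, 30.0)]
mid = idw(0.5, 0.5, pts)
```

Evaluating such an estimate over a grid produces the kind of continuous contamination surface the study mapped in ArcGIS.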

Keywords: GIS, remote sensing, pH, nitrate, disaster, IDW

Procedia PDF Downloads 221
1075 The Role of Hypothalamus Mediators in Energy Imbalance

Authors: Maftunakhon Latipova, Feruza Khaydarova

Abstract:

Obesity is considered a chronic metabolic disease that can occur at any age. Body weight is regulated through the complex interaction of interrelated systems that control the body's energy balance. Energy imbalance, in which the energy supplied by food exceeds the body's energy needs, is the cause of obesity and overweight. Obesity is closely related to impaired appetite regulation, and the hypothalamus is the key site of neural regulation of food consumption. The hypothalamic nuclei are interconnected and interdependent in receiving, integrating and sending hunger signals to regulate appetite. Purpose of the study: to identify markers of eating behavior. Materials and methods: Screening was carried out to identify eating disorders in 200 overweight and obese men and women aged 18 to 35 years and to measure the markers Orexin A and Neuropeptide Y. Questionnaires covering eating disorders and hidden depression (on the Zung scale) were administered to the 200 participants. Anthropometry comprised waist circumference, hip circumference, BMI, weight, and height. Based on the collected data, three groups were formed: people with obesity, people with overweight, and a control group of healthy people. Results: Of the 200 persons analysed, 86% had eating disorders; of these, 60% of the eating disorders dated from childhood. According to the Zung test, about 37% were in a normal condition, 20% had mild depressive disorder, 25% moderate depressive disorder, and 18% suffered from severe depressive disorder without knowing it. The obese group had eating disorders with moderate and severe depressive disorder, while the overweight group had mild depressive disorder. According to the laboratory data, the obese group had the lowest serum concentrations of Orexin A and Neuropeptide Y. Conclusions: Overweight and obesity are the first signals of many diseases, and prevention and detection of these disorders will prevent various diseases, including type 2 diabetes. The etiology of obesity is associated with eating disorders and signal transmission in the orexinergic system of the hypothalamus.

Keywords: obesity, endocrinology, hypothalamus, overweight

Procedia PDF Downloads 66
1074 Computation of Radiotherapy Treatment Plans Based on CT to ED Conversion Curves

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Radiotherapy treatment planning computers use CT data of the patient. To compute a treatment plan, the treatment planning system (TPS) must have information on the electron densities of the tissues scanned by CT. This information is given by the CT number to electron density (CT to ED) conversion curve, or simply the calibration curve. Every TPS has built-in default CT to ED conversion curves for the CTs of different manufacturers; however, it is always recommended to verify the CT to ED conversion curve before actual clinical use. The objective of this study was to check how well the provided default curve matches the curve actually measured on a specific CT, and how much this influences the calculation of the treatment planning computer. The examined CT scanners were four different scanners from three generations, all from the same manufacturer. All calibration curves were measured with the dedicated CIRS 062M Electron Density Phantom. The phantom was scanned and, according to the real HU values read at the CT console computer, CT to ED conversion curves were generated for different materials at the same tube voltage of 140 kV. Another phantom, the CIRS Thorax 002 LFC, which represents an average human torso in proportion, density and two-dimensional structure, was used for verification. Treatment planning was done on CT slices of the scanned CIRS 002 LFC phantom for selected cases, with interest points set in the lungs and in the spinal cord, and doses recorded in the TPS. The overall calculated treatment times for the four scanners and the default scanner did not differ by more than 0.8%. The overall interest-point dose in bone differed by at most 0.6%, while for single fields the maximum difference was 2.7% (lateral field); the overall interest-point dose in the lungs differed by at most 1.1%, while for single fields the maximum was 2.6% (lateral field). It is known that the user should verify the CT to ED conversion curve, but developing countries often face a lack of QA equipment and use the default data provided. We conclude that the obtained CT to ED curves differ at certain points, generally in the region of higher densities. The influence on the treatment planning result is not significant, but it does make a difference in the calculated dose.
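Applying a calibration curve amounts to piecewise-linear interpolation between measured (HU, relative ED) pairs. A minimal sketch, with hypothetical calibration points rather than the study's measured 140 kV data:

```python
def hu_to_ed(hu, curve):
    """Piecewise-linear CT-number to relative-electron-density lookup.

    `curve` is a sorted list of (HU, ED) calibration points, e.g. as
    measured with an electron density phantom. Values outside the
    calibrated range are clamped to the end points.
    """
    if hu <= curve[0][0]:
        return curve[0][1]
    if hu >= curve[-1][0]:
        return curve[-1][1]
    for (h0, e0), (h1, e1) in zip(curve, curve[1:]):
        if h0 <= hu <= h1:
            return e0 + (e1 - e0) * (hu - h0) / (h1 - h0)

# Hypothetical calibration points (air, lung-like, water, bone-like):
curve = [(-1000, 0.00), (-700, 0.30), (0, 1.00), (1200, 1.70)]
ed_water = hu_to_ed(0, curve)       # water: HU = 0 -> relative ED = 1.0
ed_lung = hu_to_ed(-350, curve)
```

A default curve and a measured curve that disagree at high HU values would yield different ED for the same bone voxel, which is exactly where the study found the curves to deviate.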

Keywords: computation of treatment plan, conversion curve, radiotherapy, electron density

Procedia PDF Downloads 478
1073 Real Energy Performance Study of Large-Scale Solar Water Heater by Using Remote Monitoring

Authors: F. Sahnoune, M. Belhamel, M. Zelmat

Abstract:

Solar thermal systems available today provide reliability, efficiency and significant environmental benefits. In housing, they can satisfy the hot water demand and reduce energy bills by 60% or more. Additionally, collective or large-scale solar thermal systems are increasingly used under different conditions for hot water and space heating in hotels and multi-family homes, hospitals, nursing homes and sports halls, as well as in commercial and industrial buildings. However, in situ real performance data for collective solar water heating systems have not been extensively reported. This paper focuses on the real energy performance of a collective solar water heating system, studied by remote monitoring under Algerian climatic conditions, in order to ensure proper operation of the system at all times, determine the system performance, and check to what extent the solar performance guarantee can be achieved. The measurements were performed on an active indirect heating system with 12 m² of flat-plate collector surface, installed in Algiers and equipped with various sensors. The sensors transmit measurements to a local station which controls the pumps, valves, electrical auxiliaries, etc. The simulation of the installation was developed using the software SOLO 2000. The system provides a yearly solar yield of 6277.5 kWh for an estimated annual need of 7896 kWh; the yearly average solar cover rate amounts to 79.5%, and the productivity is of the order of 523.13 kWh/m²/year. Simulation results are compared to measured results and to the guaranteed solar performance. The remote monitoring shows that 90% of the expected solar results can easily be guaranteed over a long period. Furthermore, the installed remote monitoring unit was able to detect some dysfunctions. It follows that remote monitoring is an important tool in the energy management of building equipment.
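The reported figures are mutually consistent: the cover rate is the yearly yield divided by the demand, and the productivity is the yield divided by the collector area. A quick check using the numbers quoted in the abstract:

```python
def solar_cover_rate(yield_kwh, demand_kwh):
    """Fraction of the hot-water demand met by solar energy."""
    return yield_kwh / demand_kwh

def productivity(yield_kwh, area_m2):
    """Annual yield per square metre of collector surface."""
    return yield_kwh / area_m2

cover = solar_cover_rate(6277.5, 7896.0)   # yearly yield / annual need
prod = productivity(6277.5, 12.0)          # yield / collector area
```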

Keywords: large-scale solar water heater, real energy performance, remote monitoring, solar performance guarantee, tool to promote solar water heater

Procedia PDF Downloads 232
1072 Normal and Peaberry Coffee Beans Classification from Green Coffee Bean Images Using Convolutional Neural Networks and Support Vector Machine

Authors: Hira Lal Gope, Hidekazu Fukai

Abstract:

The aim of this study is to develop a system which can identify and sort peaberries automatically at low cost for coffee producers in developing countries. In this paper, the focus is on the classification of peaberries and normal coffee beans using image processing and machine learning techniques. The peaberry is not a defective bean, but neither is it a normal bean: it forms when a coffee cherry produces only a single, relatively round seed instead of the usual flat-sided pair of beans, and it has a different value and flavor. To improve the taste of the coffee, it is necessary to separate the peaberries from the normal beans before roasting the green coffee beans; otherwise the flavors mix and the overall taste suffers. During roasting, the beans should be uniform in shape, size, and weight; otherwise the larger beans take longer to roast through. Peaberries differ from normal beans in size and shape even when they have the same weight, and they roast more slowly, so sorting by size or weight alone does not reliably select them. Defective beans, e.g., sour, broken, black, and faded beans, are easy to spot and pick out manually by hand. Picking out peaberries, on the other hand, is very difficult even for trained specialists, because the shape and color of the peaberry are similar to those of normal beans. In this study, we use image processing and machine learning techniques to discriminate normal beans and peaberries as part of the sorting system. As a first step, we applied deep Convolutional Neural Networks (CNN) and Support Vector Machines (SVM) to discriminate peaberries and normal beans. Better performance was obtained with the CNN than with the SVM. The artificial neural network, trained in this work on a high-performance CPU and GPU, will then simply be installed on an inexpensive, computationally limited Raspberry Pi system, since we assume the system will be used in developing countries. The study evaluates and compares the feasibility of the methods in terms of classification accuracy and processing speed.
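As a rough intuition for why shape separates the two classes: a peaberry's round outline has a higher circularity (4πA/P², equal to 1.0 for a perfect circle) than a flat-sided normal bean. The sketch below uses this single hand-crafted feature with a hypothetical threshold; the paper's CNN instead learns its features from the raw images:

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P**2: 1.0 for a perfect circle, lower for elongated or
    flat-sided shapes. One hand-crafted feature of the kind that could
    be fed to an SVM."""
    return 4.0 * math.pi * area / perimeter ** 2

def classify(area, perimeter, threshold=0.85):
    """Hypothetical rule: round enough -> 'peaberry', else 'normal'.
    The threshold is illustrative, not taken from the study."""
    return "peaberry" if circularity(area, perimeter) >= threshold else "normal"

# A circle of radius 10 vs. a 25 x 5 rectangle (stand-in bean outlines):
round_bean = classify(math.pi * 100.0, 2.0 * math.pi * 10.0)
flat_bean = classify(25.0 * 5.0, 2.0 * (25.0 + 5.0))
```

In practice color and shape overlap between the classes, which is why learned CNN features outperformed such hand-crafted ones in this study.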

Keywords: convolutional neural networks, coffee bean, peaberry, sorting, support vector machine

Procedia PDF Downloads 140
1071 Oily Sludge Bioremediation Pilot Plant Project, Nigeria

Authors: Ime R. Udotong, Justina I. R. Udotong, Ofonime U. M. John

Abstract:

The Brass terminal, one of several crude oil and petroleum products storage/handling facilities in the Niger Delta, was built in the 1980s. Over the years, activities at this site released crude oil into the 3 m-deep, 1500 m-long canal lying adjacent to the terminal, leaving oil floating on the water and the sediment heavily polluted. To ensure effective clean-up, three major activities were planned: site characterization, bioremediation pilot plant construction and testing, and full-scale bioremediation of contaminated sediment/bank soil by land farming. The canal was delineated into 12 lots, and each was characterized with reference to the floating oily phase, the contaminated sediment and the canal bank soil. Based on the site characterization, a pilot plant for on-site bioremediation was designed and a treatment basin constructed for the pilot bioremediation test. Following a designed sampling protocol, samples from this pilot plant were collected for analysis at two laboratories as a quality assurance/quality control check. Results showed that the Brass Canal upstream is contaminated with a dark, thick and viscous oily film with a characteristic hydrocarbon smell, while downstream a thin oily film interspersed with water was observed. Sediments were dark, mixed with brownish sandy soil, with TPH ranging from 17,800 mg/kg in Lot 1 to 88,500 mg/kg in Lot 12 samples. The Brass Canal bank soil was sandy from the ground surface to 3 m below ground surface (bgs), silty-sandy and brownish below that, while the subsurface soil (4-10 m bgs) was sandy-clayey and whitish/grayish with a typical hydrocarbon smell. Preliminary results obtained so far have been very promising but are proprietary. To the best of our knowledge of the technical literature, this project is the first large-scale on-site bioremediation project in the Niger Delta region, Nigeria.

Keywords: bioremediation, contaminated sediment, land farming, oily sludge, oil terminal

Procedia PDF Downloads 450
1070 Parametric Investigation of Aircraft Door’s Emergency Power Assist System (EPAS)

Authors: Marshal D. Kafle, Jun H. Kim, Hyun W. Been, Kyoung M. Min

Abstract:

Fluid viscous damping systems are well suited to many air vehicles subjected to shock and vibration. These damping systems work on the principle of throttling viscous fluid through an orifice to create a large pressure difference between the compression and rebound chambers and so obtain the required damping force. One application of such systems is in aircraft door systems, to counteract the door's velocity and stop it safely. In emergency situations such as a crash or emergency landing, where the door does not open easily, possibly due to unusual tilting of the fuselage or obstacles or intruding debris obstructing the moving parts of the door, such a system can be combined with other systems to provide the force needed to open the door forcefully while also stopping it securely within the required time, i.e., less than 8 seconds. In the present study, a hydraulic system called a snubber, together with other systems such as the actuator and gas bottle assembly, known collectively as the emergency power assist system (EPAS), is designed, built and experimentally studied to check the magnitude of the angular velocity, the damping force and the time required to open the door effectively. Whenever needed, the gas pressure from the bottle is released to actuate the actuator and at the same time pull the snubber's piston, operating the emergency opening of the door. The EPAS, installed in the suspension arm of the aircraft door, is studied while explicitly varying parameters such as the orifice size, oil level, oil viscosity, and the bypass valve gap and spring of the snubber, at varying temperatures, to arrive at the optimum design case. A comparative analysis of the EPAS over several cases is performed and conclusions are drawn. It is found that under emergency conditions the opening time and angular velocity show significant improvement over the old design when a snubber with 0.3 mm piston and shaft orifices and a bypass valve gap of 0.5 mm with its original spring is used.
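Orifice throttling is commonly modeled quasi-statically with the orifice equation: the flow Q = Aₚ·v forced through the orifice produces a pressure drop ΔP = (ρ/2)·(Q/(C_d·A_o))², and the damping force is F = ΔP·Aₚ. A sketch with hypothetical oil density and discharge coefficient, not the paper's measured snubber characteristics:

```python
import math

def damping_force(piston_area, orifice_d, velocity, rho=870.0, cd=0.7):
    """Quasi-static orifice-throttling model of a snubber.

    The flow Q = Ap * v is forced through an orifice of diameter
    orifice_d; the pressure drop dP = (rho/2) * (Q / (Cd*Ao))**2
    acting on the piston gives F = dP * Ap. SI units throughout.
    rho and cd are illustrative textbook values.
    """
    ao = math.pi * (orifice_d / 2.0) ** 2    # orifice area
    q = piston_area * velocity               # volumetric flow
    dp = 0.5 * rho * (q / (cd * ao)) ** 2    # orifice pressure drop
    return dp * piston_area

f_slow = damping_force(piston_area=8e-4, orifice_d=0.3e-3, velocity=0.05)
f_fast = damping_force(piston_area=8e-4, orifice_d=0.3e-3, velocity=0.10)
```

The model shows the two levers studied in the paper: force grows with the square of door velocity and rises steeply as the orifice is made smaller, which is why the 0.3 mm orifice and 0.5 mm bypass gap dominate the opening time.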

Keywords: aircraft door damper, bypass valve, emergency power assist system, hydraulic damper, oil viscosity

Procedia PDF Downloads 420
1069 Study of Error Analysis and Sources of Uncertainty in the Measurement of Residual Stresses by the X-Ray Diffraction

Authors: E. T. Carvalho Filho, J. T. N. Medeiros, L. G. Martinez

Abstract:

Residual stresses are self-equilibrating stresses that act on the microstructure of a rigid body without the application of an external load. They are elastic stresses and can be induced by mechanical, thermal and chemical processes, causing a deformation gradient in the crystal lattice and favoring premature failure of mechanical components. The search for measurements with good reliability has been of great importance to the manufacturing industries. Several methods can quantify these stresses according to physical principles and the mechanical response of the material. The X-ray diffraction technique is one of the most sensitive to small variations of the crystal lattice, since the X-ray beam interacts with the interplanar distance. Being very sensitive, the technique is also susceptible to measurement variations, which requires a study of the factors that influence the final result. Instrumental and operational factors, form deviations of the samples and the analysis geometry are some of the variables that need to be considered and analyzed to obtain the true measurement. The aim of this work is to analyze the sources of error inherent in the residual stress measurement process by the X-ray diffraction technique, making an interlaboratory comparison to verify the reproducibility of the measurements. In this work, two specimens were machined, differing from each other in surface finish: grinding and polishing. Additionally, iron powder with a particle size of less than 45 µm was selected as a reference (as recommended by the ASTM E915 standard) for the tests. To quantify the deviations caused by the equipment, the specimens were positioned and, under the same analysis conditions, seven measurements were carried out at 11 Ψ tilts. To quantify sample positioning errors, seven measurements were performed, repositioning the sample for each measurement. To check geometry errors, the measurements were repeated for the Bragg-Brentano and parallel-beam geometries. To verify the reproducibility of the method, the measurements were performed in two different laboratories with different equipment. The results were statistically analyzed and the errors quantified.
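XRD residual stress evaluation typically uses the sin²ψ method: the measured interplanar spacing d_ψ varies linearly with sin²ψ, and the stress follows from the slope, σ = [E/(1+ν)]·m/d₀. A minimal sketch with hypothetical inputs, not the interlaboratory data of this study:

```python
import math

def sin2psi_stress(psi_deg, d_psi, e_mod, poisson, d0):
    """Classical sin^2(psi) evaluation of residual stress.

    Fits d_psi vs sin^2(psi) by least squares; the slope m gives
    sigma = E / (1 + nu) * m / d0. Units: e_mod in MPa, spacings in
    angstrom, result in MPa (negative = compressive).
    """
    xs = [math.sin(math.radians(p)) ** 2 for p in psi_deg]
    mx = sum(xs) / len(xs)
    my = sum(d_psi) / len(d_psi)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, d_psi))
             / sum((x - mx) ** 2 for x in xs))
    return e_mod / (1.0 + poisson) * slope / d0

# Hypothetical ferrite data with the slope forced to -1e-4 angstrom:
psis = [0, 18, 27, 33, 39, 45]
d = [1.1702 - 1e-4 * math.sin(math.radians(p)) ** 2 for p in psis]
sigma = sin2psi_stress(psis, d, e_mod=210000.0, poisson=0.29, d0=1.1702)
```

Each error source examined in the study (equipment, positioning, geometry) perturbs the individual d_ψ points and hence the fitted slope, which is how it propagates into the reported stress.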

Keywords: residual stress, x-ray diffraction, repeatability, reproducibility, error analysis

Procedia PDF Downloads 174
1068 Disparity in New Born Care Practices Reducing in Uttar Pradesh: Evidences from NFHS and DLHS

Authors: Gudakesh Yadav

Abstract:

Uttar Pradesh, one of the largest states of India, has an unequal distribution of resources and diverse socioeconomic and cultural characteristics, so the levels of different newborn health care indicators vary greatly from one district to another. The state accounts for more than 21 percent of India's total live births but 28 percent of the country's total infant deaths, with an infant mortality rate of 53 per thousand. The present paper examines temporal and spatial changes in newborn care practices from NFHS-1 to NFHS-3 and from DLHS-2 to DLHS-3 in Uttar Pradesh and its regions. Descriptive statistics, rate ratios, the concentration index, multivariate analysis and decomposition analysis have been used for the study. The findings reveal that newborn care practices have improved over time in the state and across all regions, because more emphasis has been placed on vulnerable groups such as poor, rural and less educated mothers and scheduled castes and tribes, but the desired success has still not been achieved. Regional analysis of the third round of the DLHS shows that coverage of institutional delivery was lowest in the central region, while the southern region performed worst in terms of initiation of breastfeeding and keeping the baby warm and dry after birth. The study calls for proper follow-up of newborn children to accelerate newborn and child health care services, and prioritises increasing antenatal check-ups and institutional delivery, which help to improve the levels of other newborn care services. At the policy level there is a need to reach vulnerable groups, such as scheduled castes and tribes, the poor and the uneducated, and new mothers, especially in rural areas. High-focus districts should be designated for better implementation of the newborn care promotion programme in low-performing districts. Partnership with private-sector health professionals is necessary to reach every part of the population.
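The concentration index used here summarizes socioeconomic inequality in a coverage indicator: CI = 2·cov(h, r)/μ, where h is the indicator, r the fractional rank by economic status, and μ the mean coverage. A minimal sketch with hypothetical coverage values, not the NFHS/DLHS data:

```python
def concentration_index(values):
    """Concentration index for individuals already sorted from poorest
    to richest: CI = 2 * cov(h, r) / mean(h), with r the fractional
    rank. Zero means equality; positive values mean the indicator is
    concentrated among the better-off.
    """
    n = len(values)
    ranks = [(i + 0.5) / n for i in range(n)]   # fractional ranks
    mu = sum(values) / n
    cov = sum((v - mu) * (r - 0.5) for v, r in zip(values, ranks)) / n
    return 2.0 * cov / mu

# Hypothetical institutional-delivery coverage by wealth quartile:
equal = concentration_index([0.6, 0.6, 0.6, 0.6])      # perfect equality
pro_rich = concentration_index([0.2, 0.4, 0.6, 0.8])   # pro-rich gradient
```

A fall in such an index between survey rounds is what the paper reads as a reduction in disparity.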

Keywords: decomposition, inequality, initiation of breastfeeding, institutional delivery

Procedia PDF Downloads 234
1067 Molecular Dynamics Simulation for Buckling Analysis at Nanocomposite Beams

Authors: Babak Safaei, A. M. Fattahi

Abstract:

In the present study, we investigated the axial buckling characteristics of nanocomposite beams reinforced by single-walled carbon nanotubes (SWCNTs). Various beam theories, including Euler-Bernoulli, Timoshenko and Reddy beam theories, were used to analyze the buckling behavior of carbon nanotube-reinforced composite beams. The generalized differential quadrature (GDQ) method was utilized to discretize the governing differential equations, along with four commonly used boundary conditions. The material properties of the nanocomposite beams were obtained from molecular dynamics (MD) simulations of both short-(10,10) SWCNT and long-(10,10) SWCNT composites embedded in an amorphous polyethylene matrix. The results obtained directly from the MD simulations were then matched with those calculated by the rule of mixtures to extract appropriate values of the carbon nanotube efficiency parameters accounting for the scale-dependent material properties. Selected numerical results are presented to indicate the influence of nanotube volume fraction and end supports on the critical axial buckling loads of nanocomposite beams for long- and short-nanotube composites.
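Matching MD results with the rule of mixtures to extract an efficiency parameter can be sketched as solving E_composite = η·V_cnt·E_cnt + (1−V_cnt)·E_matrix for η. The numbers below are hypothetical, not the paper's MD results for (10,10) SWCNT/polyethylene:

```python
def efficiency_parameter(e_md, v_cnt, e_cnt, e_matrix):
    """Back out the CNT efficiency parameter eta from an MD-simulated
    composite modulus using the extended rule of mixtures:
        E_composite = eta * V_cnt * E_cnt + (1 - V_cnt) * E_matrix
    All moduli in GPa.
    """
    return (e_md - (1.0 - v_cnt) * e_matrix) / (v_cnt * e_cnt)

# Hypothetical case: 12% nanotubes (E = 1000 GPa) in a 1 GPa matrix,
# with MD giving a composite modulus of 94 GPa:
eta = efficiency_parameter(e_md=94.0, v_cnt=0.12, e_cnt=1000.0, e_matrix=1.0)
```

An η below 1 captures the scale-dependent loss of reinforcement relative to the naive mixture estimate, which is the correction the study feeds into the beam models.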

Keywords: nanocomposites, molecular dynamics simulation, axial buckling, generalized differential quadrature (GDQ)

Procedia PDF Downloads 320