Search results for: covariance matrix estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4187


3077 Application of KL Divergence for Estimation of Each Metabolic Pathway Genes

Authors: Shohei Maruyama, Yasuo Matsuyama, Sachiyo Aburatani

Abstract:

The development of methods to annotate unknown gene functions is an important task in bioinformatics. One approach to annotation is the identification of the metabolic pathway in which a gene is involved. Gene expression data have been utilized for this identification, since they reflect various intracellular phenomena. However, it has been difficult to estimate gene function with high accuracy. The low accuracy of the estimation is considered to be caused by the difficulty of measuring gene expression accurately: even when measured under the same conditions, gene expression values usually vary. In this study, we proposed a feature extraction method focusing on the variability of gene expression to estimate a gene's metabolic pathway accurately. First, we estimated the distribution of each gene's expression from replicate data. Next, we calculated the similarity between all gene pairs by KL divergence, which quantifies the difference between distributions. Finally, we used the similarity vectors as feature vectors and trained a multiclass SVM to identify the genes' metabolic pathways. To evaluate the developed method, we applied it to budding yeast and trained the multiclass SVM to identify seven metabolic pathways. As a result, the accuracy obtained with our method was higher than that obtained from the raw gene expression data. Thus, our method, combined with KL divergence, is useful for identifying the metabolic pathways of genes.
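As an illustration, the feature-extraction step described above can be sketched as follows, assuming each gene's expression is modelled as a univariate Gaussian fitted from its replicates and that pairwise similarities are symmetrised KL divergences (the SVM training itself is omitted):

```python
import numpy as np

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL divergence KL(P||Q) between two univariate Gaussians."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def kl_feature_vectors(replicates):
    """replicates: (n_genes, n_replicates) expression matrix.
    Fit a Gaussian per gene, then build an (n_genes, n_genes) matrix of
    symmetrised KL divergences; each row is one gene's feature vector."""
    mu = replicates.mean(axis=1)
    var = replicates.var(axis=1, ddof=1) + 1e-9   # guard against zero variance
    n = len(mu)
    feats = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            feats[i, j] = 0.5 * (gaussian_kl(mu[i], var[i], mu[j], var[j])
                                 + gaussian_kl(mu[j], var[j], mu[i], var[i]))
    return feats
```

Each row of the resulting matrix would then serve as the feature vector for the corresponding gene when training the multiclass SVM.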

Keywords: metabolic pathways, gene expression data, microarray, Kullback–Leibler divergence, KL divergence, support vector machines, SVM, machine learning

Procedia PDF Downloads 402
3076 Using MALDI-TOF MS to Detect Environmental Microplastics (Polyethylene, Polyethylene Terephthalate, and Polystyrene) within a Simulated Tissue Sample

Authors: Kara J. Coffman-Rea, Karen E. Samonds

Abstract:

Microplastic pollution is an urgent global threat to our planet and to human health. Microplastic particles have been detected in our food, water, and atmosphere, and found in human stool, placenta, and lung tissue. However, most spectrometric microplastic detection methods require chemical digestion, which can alter or destroy microplastic particles and makes it impossible to acquire information about their in-situ distribution. MALDI-TOF MS (matrix-assisted laser desorption/ionization time-of-flight mass spectrometry) is an analytical method using a soft ionization technique that can be applied to polymer analysis. It provides a valuable opportunity to acquire information about the in-situ distribution of microplastics while minimizing the destructive effects of chemical digestion. In addition, MALDI-TOF MS allows expanded analysis of the microplastics, including detection of specific additives that may be present within them. MALDI-TOF MS is particularly sensitive to sample preparation and has not yet been used to analyze environmental microplastics within their specific location (e.g., biological tissues, sediment, water). In this study, microplastics were created from polyethylene gloves, polystyrene micro-foam, and polyethylene terephthalate cable sleeving. The plastics were frozen in liquid nitrogen and ground to obtain small fragments. An artificial tissue was created using a cellulose sponge as scaffolding, coated with a MaxGel Extracellular Matrix to simulate human lung tissue. Optimal preparation techniques (e.g., matrix, cationization reagent, solvent, mixing ratio, laser intensity) were first established for each polymer type. The artificial tissue sample was subsequently spiked with microplastics, and the specific polymers were detected using MALDI-TOF MS.
This study presents a novel method for the detection of environmental polyethylene, polyethylene terephthalate, and polystyrene microplastics within a complex sample. The results provide an effective method that can be used in future microplastics research and can aid in determining the potential threats that microplastics pose to environmental and human health.

Keywords: environmental plastic pollution, MALDI-TOF MS, microplastics, polymer identification

Procedia PDF Downloads 255
3075 Effect of Nano/Micro Alumina Matrix on Alumina-Cubic Boron Nitride Composites Consolidated by Spark Plasma Sintering

Authors: A. S. Hakeem, B. Ahmed, M. Ehsan, A. Ibrahim, H. M. Irshad, T. Laoui

Abstract:

Alumina (Al2O3) - cubic boron nitride (cBN) ceramic composites were sintered by spark plasma sintering (SPS) using α-Al2O3 particle sizes of 150 µm and 150 nm and a cBN particle size of 42 µm. Alumina-cBN composites containing 10, 20, and 30 wt% cBN, with and without Ni coating, were sintered at an elevated temperature of 1400°C under a constant uniaxial pressure of 50 MPa. The effects of matrix particle size, cBN content, and Ni content on the mechanical and thermal properties (thermal conductivity, diffusivity, and expansion), densification, phase transformation, microstructure, hardness, and toughness of the Al2O3-cBN/(Ni) composites under these sintering conditions were investigated. The highest relative densification, 99%, was obtained for the 150 nm Al2O3 composite containing 30 wt% Ni-coated cBN at TSPS = 1400°C. For the 150 µm Al2O3 compositions, a phase transformation of cBN to hBN was observed, and the relative densification decreased. Thermal conductivity reached its maximum for the 150 nm Al2O3-30 wt% cBN-Ni composition, whose Vickers hardness at TSPS = 1400°C was also the highest, at 29 GPa.

Keywords: alumina composite, cubic boron nitride, mechanical properties, phase transformation, spark plasma sintering

Procedia PDF Downloads 340
3074 Physically Informed Kernels for Wave Loading Prediction

Authors: Daniel James Pitchforth, Timothy James Rogers, Ulf Tyge Tygesen, Elizabeth Jane Cross

Abstract:

Wave loading is a primary cause of fatigue within offshore structures, and its quantification presents a challenging and important subtask within the structural health monitoring (SHM) framework. The accurate representation of physics in such environments is difficult, however, which has driven the development of data-driven techniques in recent years. Within many industrial applications, empirical laws remain the preferred method of wave loading prediction due to their low computational cost and ease of implementation. This paper develops an approach that combines data-driven Gaussian process models with physical empirical solutions for wave loading, including Morison's equation. The aim is to incorporate physics directly into the covariance function (kernel) of the Gaussian process, enforcing derived behaviors whilst still allowing enough flexibility to account for phenomena, such as vortex shedding, that may not be represented within the empirical laws. The combined approach has a number of advantages, including improved performance over either component used independently and interpretable hyperparameters.
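To illustrate the general idea of combining an empirical law with a Gaussian process, the sketch below uses a Morison-type drag-plus-inertia term as the GP mean function, with a standard RBF kernel modelling the residual. Note this is a simpler variant than the paper's approach, which embeds the physics in the covariance function itself; all coefficients here are illustrative, not calibrated:

```python
import numpy as np

def rbf_kernel(x1, x2, length=1.0, var=1.0):
    """Squared-exponential covariance between two 1-D input sets."""
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def morison_force(u, u_dot, c_d=1.0, c_m=1.0):
    """Empirical Morison-type load: drag + inertia terms
    (coefficients are placeholders, not calibrated values)."""
    return c_d * u * np.abs(u) + c_m * u_dot

def gp_posterior_mean(t_train, y_train, t_test, mean_fn, noise=1e-2, **kern):
    """GP regression with a physical mean function: the GP only has to
    model the residual that the empirical law cannot capture."""
    m_train, m_test = mean_fn(t_train), mean_fn(t_test)
    K = rbf_kernel(t_train, t_train, **kern) + noise * np.eye(len(t_train))
    Ks = rbf_kernel(t_test, t_train, **kern)
    alpha = np.linalg.solve(K, y_train - m_train)
    return m_test + Ks @ alpha
```

With a sinusoidal flow velocity, the GP posterior tracks both the Morison component and a smooth residual standing in for unmodelled effects such as vortex shedding.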

Keywords: offshore structures, Gaussian processes, physics-informed machine learning, kernel design

Procedia PDF Downloads 190
3073 Specification Requirements for a Combined Dehumidifier/Cooling Panel: A Global Scale Analysis

Authors: Damien Gondre, Hatem Ben Maad, Abdelkrim Trabelsi, Frédéric Kuznik, Joseph Virgone

Abstract:

The use of a radiant cooling solution would lower cooling needs, which is of great interest when the demand is initially high (hot climates). However, radiant systems are not naturally compatible with humid climates, since a low-temperature surface carries condensation risks as soon as the surface temperature is close to or lower than the dew point temperature. A radiant cooling system combined with a dehumidification system would remove humidity from the space, thereby lowering the dew point temperature; the humidity removal needs to be especially effective near the cooled surface. This requirement could be fulfilled by a system using a single desiccant fluid for the removal of both excess heat and moisture. This work aims to provide an estimate of the specification requirements of such a system, in terms of the cooling power and dehumidification rate required to meet comfort targets and to prevent any condensation on the cooled panel surface. The present paper develops a preliminary study of the specification requirements, performance, and behavior of a combined dehumidifier/cooling ceiling panel under different operating conditions. The study was carried out using the TRNSYS software, which allows nodal calculations of thermal systems. It consists of dynamic modeling of the heat and vapor balances of a 5 m x 3 m x 2.7 m office space. In a first design estimation, this room is equipped with an ideal heating, cooling, humidification, and dehumidification system, so that the room temperature is always maintained between 21°C and 25°C with a relative humidity between 40% and 60%. The room is also equipped with a ventilation system that includes a heat recovery heat exchanger and another heat exchanger connected to a heat sink. The main results show that the system should be designed to meet a cooling power of 42 W/m² and a desiccant rate of 45 g of water per hour.
In a second step, a parametric study of comfort and system performance was carried out on a more realistic system (including a chilled ceiling) under different operating conditions, enabling an estimation of the acceptable range of operating conditions. This preliminary study is intended to provide useful information for the system design.
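The condensation constraint driving the dehumidification requirement can be checked with a standard dew point estimate; the sketch below uses the Magnus approximation (the coefficients are the commonly tabulated ones for roughly 0-60°C, and the 1°C safety margin is an assumption):

```python
import math

def dew_point_c(temp_c, rel_humidity):
    """Dew point in °C via the Magnus approximation."""
    a, b = 17.62, 243.12
    gamma = math.log(rel_humidity / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def panel_condensation_risk(panel_temp_c, room_temp_c, rel_humidity, margin=1.0):
    """True if the cooled surface is within `margin` °C of the dew point."""
    return panel_temp_c <= dew_point_c(room_temp_c, rel_humidity) + margin
```

For example, at 25°C and 60% relative humidity the dew point is about 16.7°C, so a panel at 16°C would risk condensation; drying the air to 40% drops the dew point to about 10.5°C and removes the risk for the same panel.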

Keywords: dehumidification, nodal calculation, radiant cooling panel, system sizing

Procedia PDF Downloads 174
3072 Estimating the Receiver Operating Characteristic Curve from Clustered Data and Case-Control Studies

Authors: Yalda Zarnegarnia, Shari Messinger

Abstract:

Receiver operating characteristic (ROC) curves have been widely used in medical research to illustrate how well a biomarker distinguishes diseased from non-diseased groups. Correlated biomarker data arise in study designs that include subjects sharing genetic or environmental factors. Information about this correlation might help to identify family members at increased risk of disease development and may lead to initiating treatment to slow or stop progression to disease. Approaches appropriate to a case-control design matched by family identification must accommodate the correlation inherent in the design when estimating the biomarker's ability to differentiate between cases and controls, as well as handle estimation from a matched case-control design. This talk reviews methods developed for ROC curve estimation in settings with correlated data from case-control designs and discusses the limitations of current methods for analyzing correlated familial paired data. An alternative approach using conditional ROC curves will be demonstrated to provide appropriate ROC curves for correlated paired data. The proposed approach uses information about the correlation among biomarker values, producing conditional ROC curves that evaluate the ability of a biomarker to discriminate between diseased and non-diseased subjects in a familial paired design.
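For reference, ignoring the clustering for a moment, the empirical ROC curve and its AUC (the Mann-Whitney statistic) for independent case and control biomarker values can be computed as:

```python
import numpy as np

def empirical_roc(cases, controls):
    """Empirical ROC points: (FPR, TPR) at every observed threshold,
    scanned from the highest biomarker value down."""
    thresholds = np.sort(np.concatenate([cases, controls]))[::-1]
    tpr = np.array([(cases >= t).mean() for t in thresholds])
    fpr = np.array([(controls >= t).mean() for t in thresholds])
    return fpr, tpr

def auc_mann_whitney(cases, controls):
    """AUC equals P(case value > control value), with ties counted half."""
    diff = cases[:, None] - controls[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()
```

Handling familial pairing requires conditioning on the correlated family member's value, which is the extension the talk addresses; this sketch covers only the unclustered baseline.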

Keywords: biomarker, correlation, familial paired design, ROC curve

Procedia PDF Downloads 238
3071 An Evaluation of the Effects of Special Safeguards in Meat upon International Trade and the Brazilian Economy

Authors: Cinthia C. Costa, Heloisa L. Burnquist, Joaquim J. M. Guilhoto

Abstract:

This study identified the impact of special agricultural safeguards (SSGs) on the global meat market and on the Brazilian economy. The tariff lines subject to SSGs were selected, and the period of analysis was 1995 (when the rules governing SSGs were established) to 2015 (the most recent period for which notifications exist). The additional tariff was calculated for each of the most important tariff lines, and import volumes together with import price elasticities were used to estimate the impact of each additional tariff on imports. Finally, the value of Brazilian meat exports foregone because of SSG taxes was calculated, as well as its impact on the country's economy, using an input-output matrix. The most important markets applying SSGs were the U.S. for beef and the European Union for poultry. However, the additional tariffs could be estimated in only two of the sixteen years in which the U.S. applied SSGs to beef imports, suggesting that the measure has been enforced when the average annual price has been higher than the trigger price level. The results indicated that the value of bovine and poultry meat that Brazil could not export due to SSGs in both markets (the EU and the U.S.) was equivalent to BRL 804 million. The impact of this loss in trade was about BRL 3.7 billion in production value for the economy (at 2015 prices) and almost BRL 2 billion of the Brazilian Gross Domestic Product (GDP).
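The input-output step can be illustrated with the standard Leontief quantity model, where a change in final demand Δf propagates through the technical-coefficient matrix A (the two-sector matrix in the usage example is purely illustrative, not Brazil's actual table):

```python
import numpy as np

def output_impact(A, delta_final_demand):
    """Leontief model: Δx = (I - A)^(-1) Δf gives the change in total
    production needed to meet a change in final demand."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - A, delta_final_demand)
```

Because the Leontief inverse captures indirect input requirements, a BRL 100 shock to one sector's final demand raises total production by more than BRL 100 across the economy, which is how the BRL 804 million export loss translates into a BRL 3.7 billion production impact.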

Keywords: beef, poultry meat, SSG tariff, input-output matrix, Brazil

Procedia PDF Downloads 119
3070 Bayesian Inference for High Dimensional Dynamic Spatio-Temporal Models

Authors: Sofia M. Karadimitriou, Kostas Triantafyllopoulos, Timothy Heaton

Abstract:

Reduced-dimension Dynamic Spatio-Temporal Models (DSTMs) jointly describe the spatial and temporal evolution of a function observed subject to noise. A basic state space model is adopted for the discrete temporal variation, while a continuous autoregressive structure describes the continuous spatial evolution. Application of such a DSTM relies upon the pre-selection of a suitable reduced set of basis functions, and this can present a challenge in practice. In this talk, we propose an online estimation method for high dimensional spatio-temporal data based upon the DSTM, and we attempt to resolve this issue by allowing the basis to adapt to the observed data. Specifically, we present a wavelet decomposition in order to obtain a parsimonious approximation of the continuous spatial process. This parsimony can be achieved by placing a Laplace prior distribution on the wavelet coefficients. The aim of using the Laplace prior is to filter out wavelet coefficients with low contribution, and thus achieve the dimension reduction with significant computational savings. We then propose a hierarchical Bayesian state space model, for the estimation of which we offer an appropriate particle filter algorithm. The proposed methodology is illustrated using real environmental data.
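The wavelet-plus-Laplace-prior idea can be illustrated in miniature: the MAP estimate of a coefficient under a Laplace prior is soft thresholding, which zeroes low-contribution coefficients and so delivers the dimension reduction. The sketch below uses a single level of the orthonormal Haar transform for simplicity; the paper's basis and hierarchy are richer:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar transform; len(x) must be even."""
    x = np.asarray(x, float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Exact inverse of haar_dwt."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def soft_threshold(c, lam):
    """MAP estimate under a Laplace prior = soft thresholding:
    coefficients smaller than lam are zeroed, sparsifying the basis."""
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
```

In the full model, the surviving (non-zero) coefficients become the reduced state that the particle filter propagates through time.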

Keywords: multidimensional Laplace prior, particle filtering, spatio-temporal modelling, wavelets

Procedia PDF Downloads 424
3069 Path Planning for Orchard Robot Using Occupancy Grid Map in 2D Environment

Authors: Satyam Raikwar, Thomas Herlitzius, Jens Fehrmann

Abstract:

In recent years, autonomous navigation of orchard and field robots has become an emerging technology in agricultural mobile robotics. Path planning is one of the core aspects of autonomous navigation and remains a crucial issue. Generally, for simplicity of representation, path planning for a mobile robot is performed in a two-dimensional space, creating a path between the start and goal points. This paper presents an automatic path planning approach for robots used in orchards and vineyards, based on occupancy grid maps and taking field conditions into consideration. Orchards and vineyards are usually structured environments, and their topology is assumed to be constant over time; therefore, in this approach, an RGB image of a field is used as the working environment. These images underwent different image processing operations and were then discretized into two-dimensional grid matrices. Each grid cell represents the occupancy of the corresponding space, whether free or occupied, and the grid matrix represents the robot's workspace for motion and path planning. Once the grid matrix is defined, a probabilistic roadmap (PRM) algorithm is used to create an obstacle-free path over the occupancy grid. The path created by this method was successfully verified in a test area. Furthermore, this approach is used in the navigation of the orchard robot.
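A minimal version of the pipeline above — boolean occupancy grid in, obstacle-free path out via a probabilistic roadmap — might look as follows (grid size, sample count, and neighbour count are illustrative choices, not the paper's parameters):

```python
import heapq
import numpy as np

def prm_path(grid, start, goal, n_samples=250, k=10, seed=0):
    """Probabilistic roadmap over a boolean occupancy grid (True = occupied).
    Samples free cells, links k-nearest pairs whose straight line stays in
    free space, then runs Dijkstra from start to goal."""
    rng = np.random.default_rng(seed)
    free = np.argwhere(~grid)
    idx = rng.choice(len(free), size=min(n_samples, len(free)), replace=False)
    nodes = [tuple(p) for p in free[idx]] + [start, goal]

    def line_free(a, b):
        # sample the segment at sub-cell spacing and check every cell
        steps = int(max(abs(a[0] - b[0]), abs(a[1] - b[1]))) + 1
        for t in np.linspace(0.0, 1.0, steps + 1):
            r = int(round(a[0] + t * (b[0] - a[0])))
            c = int(round(a[1] + t * (b[1] - a[1])))
            if grid[r, c]:
                return False
        return True

    pts = np.array(nodes, float)
    adj = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        d = np.hypot(*(pts - pts[i]).T)
        for j in np.argsort(d)[1:k + 1]:
            if line_free(nodes[i], nodes[int(j)]):
                adj[i].append((int(j), d[int(j)]))
                adj[int(j)].append((i, d[int(j)]))

    s, g = len(nodes) - 2, len(nodes) - 1
    dist, prev, pq = {s: 0.0}, {}, [(0.0, s)]
    while pq:                                   # Dijkstra over the roadmap
        dcur, u = heapq.heappop(pq)
        if u == g:
            break
        if dcur > dist.get(u, np.inf):
            continue
        for v, w in adj[u]:
            if dcur + w < dist.get(v, np.inf):
                dist[v], prev[v] = dcur + w, u
                heapq.heappush(pq, (dcur + w, v))
    if g not in dist:
        return None
    path, u = [nodes[g]], g
    while u != s:
        u = prev[u]
        path.append(nodes[u])
    return path[::-1]
```

In practice the grid would come from the processed RGB field image, with tree rows marked as occupied cells.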

Keywords: orchard robots, automatic path planning, occupancy grid, probabilistic roadmap

Procedia PDF Downloads 155
3068 Prospects of Low Immune Response Transplants Based on Acellular Organ Scaffolds

Authors: Inna Kornienko, Svetlana Guryeva, Anatoly Shekhter, Elena Petersen

Abstract:

Transplantation is an effective treatment option for patients suffering from various end-stage diseases. However, it is plagued by a constant shortage of donor organs and the subsequent need for lifelong immunosuppressive therapy. Currently, some researchers are looking toward the use of pig organs to replace human organs for transplantation, since the matrix derived from porcine organs is a convenient substitute for the human matrix. As an initial step toward a new ex vivo tissue-engineered model, optimized protocols were created to obtain organ-specific acellular matrices, and their potential as tissue-engineered scaffolds for culturing normal cells and tumor cell lines was evaluated. These protocols include decellularization by perfusion in a bioreactor system and by immersion-agitation on an orbital shaker, with the use of various detergents (SDS, Triton X-100) and freezing. Complete decellularization, in terms of residual DNA amount, is an important predictor of the probability of immune rejection of materials of natural origin. However, signs of cellular material may remain within the matrix even after harsh decellularization protocols. In this regard, matrices obtained from the tissues of low-immunogenic pigs with the α3-galactosyltransferase gene knocked out (GalT-KO) may be a promising alternative to native animal sources. The research included a study of the effect of frozen and fresh fragments of GalT-KO skin on the healing of full-thickness plane wounds in 80 rats. Commercially available wound dressings (Ksenoderm, Hyamatrix, and Alloderm) as well as allogenic skin were used as positive controls, and untreated wounds were analyzed as a negative control. The results were evaluated on the 4th day after grafting, which corresponds to the onset of normal wound epithelization. It was shown that the non-specific immune response in models treated with GalT-KO pig skin was milder than in all the control groups.
Measurements of technical skin characteristics were also performed: stiffness and elasticity properties, corneometry, tewametry, and cutometry. These metrics enabled the evaluation of the hydration level, the husking level of the corneous layer, and the skin's elasticity and micro- and macro-landscape. These preliminary data may contribute to the development of personalized transplantable organs from GalT-KO pigs with a significantly limited potential for immune rejection. By applying growth factors to a decellularized skin sample, various regenerative effects can be achieved depending on the particular situation; in this research, BMP2 and heparin-binding EGF-like growth factor were used. Ideally, a bioengineered organ must be biocompatible, non-immunogenic, and supportive of cell growth. Porcine organs are attractive for xenotransplantation if severe immunologic concerns can be bypassed. The results indicate that genetically modified pig tissues with the α3-galactosyltransferase gene knocked out may be used for the production of a low-immunogenic matrix suitable for transplantation.

Keywords: decellularization, low-immunogenic, matrix, scaffolds, transplants

Procedia PDF Downloads 274
3067 Object Negotiation Mechanism for an Intelligent Environment Using Event Agents

Authors: Chiung-Hui Chen

Abstract:

With advancements in science and technology, the concept of the Internet of Things (IoT) has gradually developed. The development of the intelligent environment adds intelligence to objects in the living space through the IoT. In a smart environment where multiple users share the living space, different service requirements from different users can create conflicting situations for the context-aware system when it decides which services to provide. The purpose of establishing a communication and negotiation mechanism among objects in the intelligent environment is therefore to resolve such service conflicts among users. This study proposes a decision-making methodology that uses “Event Agents” as its core. When the sensor system receives information, it evaluates a user's current events and conditions; analyzes object, location, time, and environmental information; calculates the priority of each object; and provides the user with services based on the event. When an event is not isolated but overlaps with another, conflicts arise. This study adopts a “Multiple Events Correlation Matrix” to calculate the degree values of incidents and the support values for each object. The matrix uses these values as the basis for inferring the system service and for determining the appropriate services when there is a conflict.
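One plausible reading of the priority computation, sketched below, is to weight each event's per-object support by that event's total correlation with the other concurrent events and serve the object with the highest weighted sum. The abstract does not spell out the matrix algebra, so this weighting scheme is an assumption for illustration only:

```python
import numpy as np

def object_priorities(support, correlation):
    """support: (n_events, n_objects) degree/support values per object;
    correlation: (n_events, n_events) multiple-events correlation matrix.
    Each event's support row is weighted by that event's total correlation
    with the concurrent events, then summed per object."""
    weights = correlation.sum(axis=1)
    return weights @ support

def resolve_conflict(support, correlation, objects):
    """Pick the object with the highest correlation-weighted priority."""
    return objects[int(np.argmax(object_priorities(support, correlation)))]
```

With two overlapping events and two candidate objects, the object favoured by the more strongly correlated event wins the negotiation.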

Keywords: internet of things, intelligent object, event agents, negotiation mechanism, degree of similarity

Procedia PDF Downloads 289
3066 Assessment of DNA Degradation Using Comet Assay: A Versatile Technique for Forensic Application

Authors: Ritesh K. Shukla

Abstract:

Degradation of biological samples at the level of macromolecules (DNA, RNA, and protein) is a major challenge in forensic investigation, as it can mislead the interpretation of results. Currently, there are no precise methods available to circumvent this problem; therefore, at a preliminary level, methods are urgently needed to address this issue. To this end, the Comet assay is one of the most versatile, rapid, and sensitive molecular biology techniques for assessing DNA degradation. The technique can assess DNA degradation even in very small amounts of sample. Moreover, conveniently, the method does not require any additional DNA extraction and isolation steps during the degradation assessment. Samples are embedded directly on an agarose pre-coated microscopic slide, and electrophoresis is performed on the same slide after the lysis step. After electrophoresis, the slide is stained with a DNA-binding dye and observed under a fluorescent microscope equipped with Komet software. With this technique, the extent of DNA degradation can be assessed, which helps to screen samples before DNA fingerprinting to determine whether they are appropriate for DNA analysis. The technique not only assesses DNA degradation; many other challenges in forensic investigation, such as estimating the time since deposition of biological fluids, repairing genetic material from degraded biological samples, and early estimation of time since death, could also be addressed. With this study, an attempt was made to explore the application of this well-known molecular biology technique, the Comet assay, in the field of forensic science. This assay will open new avenues in forensic research and development.

Keywords: comet assay, DNA degradation, forensic, molecular biology

Procedia PDF Downloads 153
3065 Comparison of Live Weight of Pure and Mixed Races Tizpar 30-Day Squabs

Authors: Sepehr Moradi, Mehdi Asadi Rad

Abstract:

The aim of this study was to evaluate and compare the live weight of pure- and mixed-race Tizpar squabs at 30 days of age with respect to sex, race, and some auxiliary variables. In this paper, 70 pigeons, as 35 male-female pairs of equal age, were studied randomly. Each pair underwent natural incubation. All squabs produced were weighed at 30 days of age, before and after fasting, on a scale with one-gram precision. A covariance analysis was used since there were many auxiliary variables and unequal numbers of observations. SAS software was used for the statistical analysis. The mean live weight of the pure race (Tizpar-Tizpar, 12 records) was 182.3±60.9 g, and those of the mixed races Tizpar-Kabood, Tizpar-Parvazy, Tizpar-Namebar, Kabood-Tizpar, Namebar-Tizpar, and Parvazy-Tizpar (10, 10, 8, 6, 12, and 12 records) were 114.3±71.6, 210.6±71.7, 353.2±86, 520.8±81.5, 288.3±65.6, and 382.6±70.4 g, respectively. The effects of sex, race, and some auxiliary variables were significant at the 1% level (P < 0.01). The difference in 30-day live weight between the Tizpar-Tizpar race and the Tizpar-Namebar and Parvazy-Tizpar mixed races was significant at the 5% level (P < 0.05), and that with Kabood-Tizpar was significant at the 1% level (P < 0.01), but the differences with the Tizpar-Kabood, Namebar-Tizpar, and Tizpar-Parvazy mixed races were not significant. The results showed that the highest and lowest live weights belonged to Kabood-Tizpar and Tizpar-Kabood, respectively.
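The covariance analysis can be sketched as an ordinary least-squares fit with race dummies plus an auxiliary variable as the covariate, reporting group means adjusted to the overall covariate mean. This is a minimal stand-in for the SAS procedure the paper used, not a reproduction of it:

```python
import numpy as np

def ancova_adjusted_means(y, group, covariate):
    """Least-squares fit of y ~ group dummies + covariate (no intercept,
    one dummy per group), then report each group's mean adjusted to the
    overall covariate mean."""
    groups = sorted(set(group))
    X = np.column_stack(
        [np.array([g == gi for g in group], float) for gi in groups]
        + [np.asarray(covariate, float)]
    )
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    cbar = float(np.mean(covariate))
    return {gi: beta[i] + beta[-1] * cbar for i, gi in enumerate(groups)}
```

Adjusting to a common covariate value is what makes group comparisons fair when record counts are unequal, as in the paper's 6-12 records per cross.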

Keywords: squabs, Tizpar race, 30-day live weight, pigeons

Procedia PDF Downloads 175
3064 Estimation of Normalized Glandular Doses Using a Three-Layer Mammographic Phantom

Authors: Kuan-Jen Lai, Fang-Yi Lin, Shang-Rong Huang, Yun-Zheng Zeng, Po-Chieh Hsu, Jay Wu

Abstract:

The normalized glandular dose (DgN) is used to estimate the energy deposited in the breast during mammography in clinical practice. Monte Carlo simulations frequently use a uniformly mixed phantom for calculating the conversion factor. However, breast tissues are not uniformly distributed, leading to errors in the conversion factor estimation. This study constructed a three-layer phantom to estimate the normalized glandular dose more accurately. The MCNP code (Monte Carlo N-Particle code) was used to create the geometric structure. We simulated three target/filter combinations (Mo/Mo, Mo/Rh, Rh/Rh), six tube voltages (25-35 kVp), six half-value-layer (HVL) parameters, and nine breast phantom thicknesses (2-10 cm) for the three-layer mammographic phantom. The conversion factor for 25%, 50%, and 75% glandularity was calculated. The error in the conversion factors compared with the results of the American College of Radiology (ACR) was within 6%; for Rh/Rh, the difference was within 9%. The difference between the 50% average glandularity case and the uniform phantom ranged from -6.7% to 7.1% for the Mo/Mo combination at a voltage of 27 kVp, a half-value layer of 0.34 mm Al, and a breast thickness of 4 cm. According to the simulation results, regression analysis showed that the three-layer mammographic phantom can be used to accurately calculate conversion factors at 0%-100% glandularity. Differences in glandular tissue distribution lead to errors in the conversion factor calculation; the three-layer mammographic phantom can provide accurate estimates of glandular dose in clinical practice.
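The effect of layering on the tally can be illustrated with a deliberately crude one-dimensional Monte Carlo: photons enter an adipose/gland/adipose slab, the first-interaction depth is sampled from the piecewise-exponential attenuation, and the fraction of interactions landing in the glandular layer stands in for the full MCNP energy-deposition tally. The attenuation coefficients in the test are invented for illustration, not physical values:

```python
import numpy as np

def glandular_interaction_fraction(mu_adipose, mu_gland, t_adipose, t_gland,
                                   n=100_000, seed=0):
    """Toy 1-D Monte Carlo for a three-layer slab (adipose/gland/adipose).
    Samples each photon's optical depth tau ~ Exp(1), maps layer boundaries
    to optical depth, and returns the fraction of first interactions that
    occur in the glandular layer, among photons interacting in the slab."""
    rng = np.random.default_rng(seed)
    tau = rng.exponential(1.0, n)
    gland_lo = mu_adipose * t_adipose                 # optical depth to gland
    gland_hi = gland_lo + mu_gland * t_gland          # optical depth through gland
    slab_end = gland_hi + mu_adipose * t_adipose      # total slab optical depth
    in_gland = (tau > gland_lo) & (tau <= gland_hi)
    interacted = tau <= slab_end
    return in_gland.sum() / max(interacted.sum(), 1)
```

Thickening the glandular layer raises its share of interactions, which is the layering effect a uniform phantom cannot represent.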

Keywords: Monte Carlo simulation, mammography, normalized glandular dose, glandularity

Procedia PDF Downloads 188
3063 Sensitivity Enhancement in Graphene Based Surface Plasmon Resonance (SPR) Biosensor

Authors: Angad S. Kushwaha, Rajeev Kumar, Monika Srivastava, S. K. Srivastava

Abstract:

A great deal of research is under way in the field of graphene-based SPR biosensors. In the conventional SPR-based biosensor, graphene is used as the biomolecular recognition element: graphene adsorbs biomolecules through its sp2-hybridized, carbon-based ring structure. The proposed SPR biosensor configuration opens a new avenue for efficient biosensing by exploiting graphene and its attractive nanofabrication properties. In the present study, we investigated an SPR biosensor based on graphene mediated by zinc oxide (ZnO) and gold. In the proposed structure, the prism (BK7) base is coated with zinc oxide, followed by gold and graphene. Using the waveguide approach with the transfer matrix method, the proposed structure has been investigated theoretically. We analyzed the reflectance versus incidence angle curve for a He-Ne laser of wavelength 632.8 nm. The angle at which the reflectance is minimized is termed the SPR angle; the shift in the SPR angle is the basis of biosensing. From the analysis of the reflectivity curve, we found that the SPR angle shifts as biomolecules attach to the graphene surface. The graphene layer also enhances the sensitivity of the SPR sensor compared to the conventional sensor, and the sensitivity increases further with the number of graphene layers. The proposed biosensor thus achieves the minimum possible reflectivity together with an optimal level of sensitivity.
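The reflectance calculation behind this analysis can be sketched with the standard characteristic-matrix (transfer matrix) method for p-polarised light in the Kretschmann configuration. The layer indices and thicknesses in the usage example are rough literature-style values at 632.8 nm, not the paper's optimized parameters:

```python
import numpy as np

LAMBDA = 632.8e-9  # He-Ne wavelength in metres

def spr_reflectance(theta_deg, layers, n_prism=1.515, n_medium=1.33):
    """p-polarised reflectance of a multilayer stack between a prism and a
    semi-infinite medium, via the characteristic-matrix method.
    layers: list of (complex refractive index, thickness in metres),
    ordered from the prism side outward."""
    k0 = 2 * np.pi / LAMBDA
    kx2 = (n_prism * np.sin(np.radians(theta_deg))) ** 2

    def q_of(eps):
        # TM admittance of a layer with permittivity eps
        return np.sqrt(eps - kx2 + 0j) / eps

    M = np.eye(2, dtype=complex)
    for n, d in layers:
        eps = n ** 2
        beta = k0 * d * np.sqrt(eps - kx2 + 0j)   # phase thickness
        q = q_of(eps)
        M = M @ np.array([[np.cos(beta), -1j * np.sin(beta) / q],
                          [-1j * q * np.sin(beta), np.cos(beta)]])
    q1, qN = q_of(n_prism ** 2 + 0j), q_of(n_medium ** 2 + 0j)
    num = (M[0, 0] + M[0, 1] * qN) * q1 - (M[1, 0] + M[1, 1] * qN)
    den = (M[0, 0] + M[0, 1] * qN) * q1 + (M[1, 0] + M[1, 1] * qN)
    return abs(num / den) ** 2
```

Scanning the incidence angle locates the deep reflectance dip (the SPR angle); adding or thickening the graphene layer shifts and reshapes this dip, which is the sensing signal the abstract describes.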

Keywords: biosensor, sensitivity, surface plasmon resonance, transfer matrix method

Procedia PDF Downloads 415
3062 Magnetorheological Silicone Composites Filled with Micro- and Nano-Sized Magnetites with the Addition of Ionic Liquids

Authors: M. Masłowski, M. Zaborski

Abstract:

Magnetorheological elastomer (MRE) composites based on micro- and nano-sized Fe3O4 magnetoactive fillers in silicone rubber are reported and studied. To improve the dispersion of the fillers in the polymer matrix, ionic liquids such as 1-ethyl-3-methylimidazolium diethylphosphate, 1-butyl-3-methylimidazolium hexafluorophosphate, 1-hexyl-3-methylimidazolium chloride, 1-butyl-3-methylimidazolium trifluoromethanesulfonate, 1-butyl-3-methylimidazolium tetrafluoroborate, and trihexyltetradecylphosphonium chloride were added during composite preparation. The preparation method influenced the specific properties of the MREs (isotropy/anisotropy), as did the content of the ferromagnetic particles. The micro- and nano-sized magnetites acted as active fillers, improving the mechanical properties of the elastomers; they also changed the magnetic properties and reinforced the magnetorheological effect of the composites. The use of ionic liquids as dispersing agents improved the dispersion of the magnetic fillers in the elastomer matrix. Scanning electron microscopy images of the magnetorheological elastomer microstructures showed that this improvement in dispersion had a significant effect on the composite properties. Moreover, the particle orientation and arrangement in the elastomer, investigated with a vibrating sample magnetometer, showed the correlation between the MRE microstructure and the magnetic properties.

Keywords: magnetorheological elastomers, iron oxides, ionic liquids, dispersion

Procedia PDF Downloads 328
3061 Synthesis of Antibacterial Bone Cement from Re-Cycle Biowaste Containing Methylmethacrylate (MMA) Matrix

Authors: Sungging Pintowantoro, Yuli Setiyorini, Rochman Rochim, Agung Purniawan

Abstract:

Bacterial infections are frequent and undesired occurrences after bone fracture treatment. One approach to reduce the incidence of bone fracture infection is the addition of antimicrobial agents to bone cement. In this study, bone cement was successfully synthesized from recycled biowaste, complete with an antibacterial function. The recycling of biowaste with microwave assistance was carried out in our previous studies to produce several powders (calcium carbonate, carbonated hydroxyapatite, and chitosan). Different ratios of these powders, combined with methylmethacrylate (MMA) as the matrix, were investigated as bone cement using XRD, FTIR, SEM-EDX, hardness testing, and antibacterial testing, respectively. XRD, FTIR, and EDX confirmed the formation of carbonated hydroxyapatite, calcium carbonate, and chitosan. The morphology revealed a porous structure in both C2H3K1L and C2H1K3L. Antibacterial activity was tested against Staphylococcus aureus (S. aureus) for 24 hours. Inhibition of S. aureus was clearly shown, with inhibition zones of 14.2 mm, 7.5 mm, and 7.7 mm, respectively. The hardness tests gave varied results; however, C2H1K3L achieved 36.84 HV, which is close to that of dry cancellous bone (35 HV). In general, this study yielded promising materials for use as bone cement.

Keywords: biomaterials, biowaste recycling, materials processing, microwave processing

Procedia PDF Downloads 350
3060 Earnings vs Cash Flows: The Valuation Perspective

Authors: Megha Agarwal

Abstract:

This research paper compares earnings-based and cash-flow-based methods of valuing an enterprise. The theoretically equivalent methods based on earnings, such as the Residual Earnings Model (REM), the Abnormal Earnings Growth Model (AEGM), the Residual Operating Income Method (ReOIM), the Abnormal Operating Income Growth Model (AOIGM) and their extensions to multipliers such as the price/earnings and price/book value ratios, or on cash flows, such as the Dividend Valuation Method (DVM) and the Free Cash Flow Method (FCFM), all provide different valuation estimates for the Indian corporate giant Reliance Industries Limited (RIL). An ex-post analysis of published accounting and financial data for four financial years, 2008-09 to 2011-12, has been conducted. A comparison of these valuation estimates with the actual market capitalization of the company shows that the complex accounting-based model AOIGM provides the closest forecasts. The differing estimates may arise from inconsistencies in the discount rate, growth rates and the other forecasted variables. Although inputs for earnings-based models are available to investors and analysts through published statements, precise estimation of free cash flows may be better undertaken by internal management. Estimation of value from more stable parameters such as residual operating income and RNOA could be considered superior to valuations from the more volatile return on equity.
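The residual earnings logic the paper tests can be sketched in a few lines. This is a generic illustration with invented book value, earnings forecasts and cost of equity, not RIL's data; full retention of earnings (no dividends) is assumed for simplicity.

```python
# Hedged sketch of the Residual Earnings Model (REM) named above. All figures
# (book value, earnings forecasts, cost of equity) are invented for
# illustration, not RIL's data; full retention of earnings is assumed.

def residual_earnings_value(book_value, forecast_earnings, cost_of_equity):
    """Value = current book value + PV of forecast residual earnings,
    where residual earnings_t = earnings_t - r * book_value_{t-1}."""
    value = book_value
    bv = book_value
    for t, e in enumerate(forecast_earnings, start=1):
        residual = e - cost_of_equity * bv
        value += residual / (1.0 + cost_of_equity) ** t
        bv += e  # full retention: book value grows by retained earnings
    return value

v = residual_earnings_value(book_value=100.0,
                            forecast_earnings=[12.0, 13.0, 14.0],
                            cost_of_equity=0.10)
print(round(v, 2))  # 104.43: value exceeds book value when ROE > r
```

When forecast earnings exactly equal the required return on book value, residual earnings are zero and the model returns book value, which is the anchoring property that distinguishes REM from pure discounted cash flow approaches.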

Keywords: earnings, cash flows, valuation, Residual Earnings Model (REM)

Procedia PDF Downloads 375
3059 Behavior of Cold Formed Steel in Trusses

Authors: Reinhard Hermawan Lasut, Henki Wibowo Ashadi

Abstract:

The use of materials in Indonesia's construction sector requires engineers and practitioners to develop efficient construction technology; one such material is cold-formed steel. Cold-formed steel is generally used in the roof trusses of houses or factories. Failures of roof truss structures are caused by errors in the calculation analysis, such as incorrect cross-sectional dimensions or frame configuration. In a roof truss, the vertical height relative to the span length affects the members at the edge of the frame, which carry the compressive load. If the span is too long, local buckling will occur, which compromises the strength of the frame. The analysis models various truss shapes, span lengths and angles using the structural stiffness matrix method. Truss models with the span shortened by one-fifth and by one-sixth, as well as models with increasing angles, are reviewed. It can be concluded that shortening the span in the compression area reduces deflection, whereas increasing the angle does not give good results, because the higher the roof, the heavier the load it carries, so the forces are not channeled properly. The shape of the truss must be calculated correctly so that it can withstand the working load and no structural failure occurs.
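The stiffness matrix method used in the analysis can be illustrated on the smallest possible truss: two pin-jointed bars meeting at a loaded apex. The geometry, axial rigidity EA and load below are invented for illustration; a real roof truss model assembles the same per-bar stiffness blocks into a much larger global matrix.

```python
# Minimal direct stiffness method sketch: two bars, supports at the base,
# a unit downward load at the apex. Illustrative numbers only.
import math

def bar_stiffness(xi, yi, xj, yj, EA):
    """2x2 stiffness block of a pin-jointed bar at its free end j (end i fixed)."""
    L = math.hypot(xj - xi, yj - yi)
    c, s = (xj - xi) / L, (yj - yi) / L  # direction cosines
    k = EA / L
    return [[k * c * c, k * c * s],
            [k * c * s, k * s * s]]

def solve2(K, f):
    """Solve the 2x2 system K u = f by Cramer's rule."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return [(f[0] * K[1][1] - f[1] * K[0][1]) / det,
            (K[0][0] * f[1] - K[1][0] * f[0]) / det]

# Two bars meeting at the loaded apex node (1, 1); supports at (0, 0) and (2, 0).
EA = 1.0
k1 = bar_stiffness(0.0, 0.0, 1.0, 1.0, EA)
k2 = bar_stiffness(2.0, 0.0, 1.0, 1.0, EA)
K = [[k1[r][c] + k2[r][c] for c in range(2)] for r in range(2)]
u = solve2(K, [0.0, -1.0])  # unit downward load at the apex
print(u)  # [horizontal, vertical] displacement of the apex
```

Re-running the same assembly with a shorter span or a different apex height shows directly how geometry changes the deflection, which is the kind of comparison the abstract describes.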

Keywords: cold-formed, trusses, deflection, stiffness matrix method

Procedia PDF Downloads 165
3058 State Estimator Performance Enhancement: Methods for Identifying Errors in Modelling and Telemetry

Authors: M. Ananthakrishnan, Sunil K Patil, Koti Naveen, Inuganti Hemanth Kumar

Abstract:

The state estimation output of an energy management system (EMS) forms the base case for all other advanced applications used in real time by a power system operator. Tuning a state estimator is a repeated process and cannot be abandoned once a good solution is obtained. This paper demonstrates methods to improve the state estimator solution by identifying incorrect modelling and telemetry inputs to the application. Identification of database topology modelling errors by plotting the static network using node-to-node connection details is demonstrated with examples. Analytical methods to identify wrong transmission parameters, incorrect limits and mistakes in pseudo load and generator modelling are explained with various observed cases. Further, methods used for active and reactive power tuning using bus summation displays, reactive power absorption summaries, and transformer tap correction are also described. In a large power system, verifying all network static data and modelling parameters on a regular basis is difficult. The proposed tuning methods can be easily used by operators to quickly identify errors and obtain the best possible state estimation performance. This, in turn, can lead to improved decision-support capabilities, ultimately enhancing the safety and reliability of the power grid.
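At the core of any EMS state estimator is a weighted least-squares fit of redundant telemetry, and bad measurements of the kind the paper hunts for are flagged by their residuals. The toy linear model below is only a sketch of that idea (a real estimator solves a nonlinear AC network model); the measurement values are invented.

```python
# Minimal sketch of the weighted-least-squares idea behind a state estimator,
# on a toy linear model z = H x + noise. The measurement with the largest
# residual is the suspect (badly telemetered) input.

def wls_1state(H, z, w):
    """Single-state WLS: minimize sum_i w_i * (z_i - H_i * x)^2."""
    num = sum(wi * hi * zi for hi, zi, wi in zip(H, z, w))
    den = sum(wi * hi * hi for hi, wi in zip(H, w))
    return num / den

H = [1.0, 1.0, 1.0]     # three redundant measurements of the same state
z = [1.02, 0.98, 1.60]  # the third reading is grossly wrong
w = [1.0, 1.0, 1.0]     # equal measurement weights
x = wls_1state(H, z, w)
residuals = [zi - hi * x for hi, zi in zip(H, z)]
suspect = max(range(len(z)), key=lambda i: abs(residuals[i]))
print(x, suspect)
```

Redundancy is what makes the residual check work: with only one measurement per state there is nothing to cross-check against, which is why the paper stresses verifying topology and parameters across the whole network model.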

Keywords: active power tuning, database modelling, reactive power, state estimator

Procedia PDF Downloads 5
3057 Application of Taguchi Techniques on Machining of A356/Al2O3 Metal Matrix Nano-Composite

Authors: Abdallah M. Abdelkawy, Tarek M. El Hossainya, I. El Mahallawib

Abstract:

Recently, significant achievements have been made in the development and manufacturing of nano-dispersed metal matrix nanocomposites (MMNCs). They gain their importance from their high strength-to-weight ratio. The machining problems of these new materials are less widely investigated, so this work focuses on machining them. Aluminum-silicon (A356)/MMNC dispersed with alumina (Al2O3) is important in many applications, including engine blocks. The final finish of such applications depends heavily on machining. The most important machining responses studied include cutting force and surface roughness. Experimental trials were performed on a number of special MMNC samples (with different Al2O3 percentages), and the relations of Al2O3 content, cutting speed, feed rate and depth of cut to cutting force and surface roughness were studied. The data obtained were statistically analyzed using analysis of variance (ANOVA) to identify the factors significant for both cutting force and surface roughness and their confidence levels. Response surface methodology (RSM) was used to build a model relating cutting conditions and Al2O3 content to cutting force and surface roughness. The results have shown that feed and depth of cut make the major contributions to the cutting force and the surface roughness, followed by cutting speed and the nano-percentage in the MMNCs.
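The ANOVA percentage-contribution figures used to rank factors like these are simple ratios of sums of squares. A sketch with made-up sums of squares, not the paper's data:

```python
# Illustrative ANOVA percentage-contribution calculation of the kind used to
# rank factors (feed, depth of cut, speed, nano-%). The sums of squares are
# invented numbers, not results from the study above.

def percent_contribution(ss_by_factor):
    """Each factor's share of the total sum of squares, in percent."""
    total = sum(ss_by_factor.values())
    return {f: 100.0 * ss / total for f, ss in ss_by_factor.items()}

ss = {"feed": 44.0, "depth": 38.0, "speed": 12.0, "nano_pct": 6.0}
pc = percent_contribution(ss)
print(max(pc, key=pc.get))  # the dominant factor
```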

Keywords: machinability, cutting force, surface roughness, Ra, RSM, ANOVA, MMNCs

Procedia PDF Downloads 368
3056 Effect of Hollow and Solid Recycled-Poly Fibers on the Mechanical and Morphological Properties of Short-Fiber-Reinforced Polypropylene Composites

Authors: S. Kerakra, S. Bouhelal, M. Poncot

Abstract:

The aim of this study is to give a comprehensive overview of the effect of short hollow and solid recycled polyethylene terephthalate (PET) fibers of different breaking tenacities on the mechanical and morphological properties of reinforced isotactic polypropylene (iPP) composites. Composites of iPP with 3, 7 and 10 wt% of solid and hollow recycled PET fibers were prepared by batch melt mixing in a Brabender. The incorporation of solid recycled PET fibers increased the Young's modulus of iPP only slightly, whereas the modulus increased proportionally with hollow fiber content. An improvement in storage modulus and a shift up in the glass transition temperatures of the hollow fiber/iPP composites were determined from DMA results. The morphology of the composites, determined by scanning electron microscopy (SEM) and polarized optical microscopy (OM), showed good dispersion of the hollow fibers, and their flexible behavior (folding, bending) was observed. However, weak interaction between the polymer and fiber phases was evident. Polymers can be effectively reinforced with short hollow recycled PET fibers thanks to characteristics such as recyclability, light weight and flexibility, which allow the energy of a striker to be absorbed with minimal damage to the matrix. To improve the affinity between the matrix and the recycled hollow PET fibers, the addition of compatibilizers such as maleic anhydride is suggested.

Keywords: isotactic polypropylene, hollow recycled PET fibers, solid recycled-PET fibers, composites, short fiber, scanning electron microscope

Procedia PDF Downloads 275
3055 Immunosupressive Effect of Chloroquine through the Inhibition of Myeloperoxidase

Authors: J. B. Minari, O. B. Oloyede

Abstract:

Polymorphonuclear neutrophils (PMNs) play a crucial role in a variety of infections caused by bacteria, fungi, and parasites. Indeed, the involvement of PMNs in host defence against Plasmodium falciparum is well documented both in vitro and in vivo. Many antimalarial drugs, such as chloroquine, used in the treatment of human malaria significantly reduce the immune response of the host in vitro and in vivo. Myeloperoxidase is the most abundant enzyme in the polymorphonuclear neutrophil and plays a crucial role in its function. This study was carried out to investigate the effect of chloroquine on the enzyme. In investigating the effects of the drug on myeloperoxidase, the influence of concentration and pH, the estimation of the partition ratio, and the kinetics of inhibition were studied. This study showed that chloroquine is a concentration-dependent inhibitor of myeloperoxidase, with an IC50 of 0.03 mM. Partition ratio estimation showed that 40 enzymatic turnover cycles are required for complete inhibition of myeloperoxidase in the presence of chloroquine. The influence of pH showed significant inhibition of myeloperoxidase at physiological pH. The kinetic inhibition studies showed that chloroquine caused non-competitive inhibition, with an inhibition constant Ki of 0.27 mM. The results obtained from this study show that chloroquine is a potent inhibitor of myeloperoxidase and is capable of inactivating the enzyme. The inhibition of myeloperoxidase by chloroquine revealed in this study may therefore partly explain the impairment of polymorphonuclear neutrophils and the consequent immunosuppression of the host defence against secondary infections.
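The reported non-competitive inhibition corresponds to the standard rate law v = Vmax·[S] / ((Km + [S])·(1 + [I]/Ki)), in which the apparent Vmax falls while Km is unchanged. A sketch using the abstract's Ki = 0.27 mM with invented Vmax and Km values:

```python
# Sketch of the non-competitive inhibition model the kinetics point to:
# v = Vmax*S / ((Km + S) * (1 + I/Ki)). Vmax and Km are illustrative values;
# only Ki = 0.27 mM is taken from the abstract.

def rate(S, I, Vmax=1.0, Km=0.1, Ki=0.27):
    """Reaction rate under non-competitive inhibition (concentrations in mM)."""
    return Vmax * S / ((Km + S) * (1.0 + I / Ki))

# Apparent Vmax falls with inhibitor while apparent Km is unchanged,
# the signature of non-competitive inhibition.
v0 = rate(S=10.0, I=0.0)
v1 = rate(S=10.0, I=0.27)  # I = Ki halves the apparent Vmax
print(v0, v1)
```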

Keywords: myeloperoxidase, chloroquine, inhibition, neutrophil, immune

Procedia PDF Downloads 372
3054 The Effectiveness of Intensive Short-Term Dynamic Psychotherapy on Ambiguity Tolerance, Emotional Intelligence and Stress Coping Strategies in Financial Market Traders

Authors: Ahmadreza Jabalameli, Mohammad Ebrahimpour Borujeni

Abstract:

This study aims to evaluate the effectiveness of intensive short-term dynamic psychotherapy (ISTDP) on ambiguity tolerance, emotional intelligence and stress coping strategies in financial market traders. The methodology was quasi-experimental, with pre-test and post-test and a control group. The statistical population of this study comprises all students at Jabalameli Information Technology Academy in 2022. Among them, 30 people were selected by voluntary sampling through interviews and randomly divided into an experimental and a control group of 15 people each. The components were measured with the McLain Ambiguity Tolerance Questionnaire, the Bar-On Emotional Intelligence inventory, and the Lazarus Stress Coping Strategies questionnaire. The data were analyzed in SPSS using multivariate analysis of covariance. The results indicate that intensive short-term dynamic psychotherapy influences the emotional intelligence as well as the ambiguity tolerance of traders.

Keywords: ISTDP, ambiguity tolerance, trading, emotional intelligence, stress

Procedia PDF Downloads 86
3053 Advantages of Matrix Solid Phase Dispersive (MSPD) Extraction Associated to MIPS versus MAE Liquid Extraction for the Simultaneous Analysis of PAHs, PCBs and Some Hydroxylated PAHs in Sediments

Authors: F. Portet-Koltalo, Y. Tian, I. Berger, C. Boulanger-Lecomte, A. Benamar, N. Machour

Abstract:

Sediments are complex environments which can accumulate a great variety of persistent toxic contaminants, such as polychlorobiphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs) and some of their more toxic degradation metabolites, such as hydroxylated PAHs (OH-PAHs). Owing to their composition, fine clayey sediments can be more difficult to extract than soils using conventional solvent extraction processes. This study therefore aimed to compare the potential of MSPD (matrix solid phase dispersive extraction) to extract PCBs, PAHs and OH-PAHs with that of microwave assisted extraction (MAE). Methodologies: MAE with various solvent mixtures was used to extract PCBs, PAHs and OH-PAHs from sediments in two runs, followed by two GC-MS analyses. MSPD consisted in crushing the dried sediment with dispersive agents, introducing the mixture into cartridges and eluting the target compounds with an appropriate volume of selected solvents. MSPD combined with cartridges containing MIPs (molecularly imprinted polymers) designed for OH-PAHs was thus used to extract the three families of target compounds in only one run, followed by parallel analyses by GC-MS for PAHs/PCBs and HPLC-FLD for OH-PAHs. Results: MAE was optimized to extract from clayey sediments, in two runs, PAHs/PCBs on the one hand and OH-PAHs on the other. Indeed, the best extraction conditions (mixtures of extracting solvents, temperature) differed according to the polarity and thermodegradability of the different families of target contaminants: PAHs/PCBs were better extracted using an acetone/toluene 50/50 mixture at 130°C, whereas OH-PAHs were better extracted using an acetonitrile/toluene 90/10 mixture at 100°C. Moreover, the two consecutive GC-MS analyses doubled the total analysis time.
A matrix solid phase dispersive (MSPD) extraction procedure was also optimized, with the first objective of increasing the extraction recovery yields of PAHs and PCBs from fine-grained sediment. The crushing time (2-10 min), the nature of the dispersing agents added to purify and increase the extraction yields (Florisil, octadecylsilane, 3-chloropropyl, 4-benzylchloride), and the nature and volume of the eluting solvents (methylene chloride, hexane, hexane/acetone…) were studied. It appeared that under the best conditions MSPD was a better extraction method than MAE for PAHs and PCBs, with mean increases of 8.2% and 71%, respectively. The method was also faster, easier and less expensive. A further advantage of MSPD was that it made it easy to introduce, just after the first elution of PAHs/PCBs, a step permitting the selective recovery of OH-PAHs. A cartridge containing MIPs designed for phenols was coupled to the cartridge containing the dispersed sediment, and various eluting solvents, different from those used for PAHs and PCBs, were tested to selectively concentrate and extract the OH-PAHs. Thereafter the OH-PAHs could be analyzed at the same time as the PAHs and PCBs: the OH-PAH extract was analyzed by HPLC-FLD, while the PAHs/PCBs extract was analyzed by GC-MS, adding only a few minutes to the total duration of the analytical process. Conclusion: MSPD associated with MIPs appeared to be an easy, fast and inexpensive method, able to extract in one run a complex mixture of apolar and more polar toxic contaminants present in clayey fine-grained sediments, an environmental matrix which is generally difficult to analyze.

Keywords: contaminated fine-grained sediments, matrix solid phase dispersive extraction, microwave assisted extraction, molecularly imprinted polymers, multi-pollutant analysis

Procedia PDF Downloads 352
3052 Ultra-High Molecular Weight Polyethylene (UHMWPE) for Radiation Dosimetry Applications

Authors: Malik Sajjad Mehmood, Aisha Ali, Hamna Khan, Tariq Yasin, Masroor Ikram

Abstract:

Ultra-high molecular weight polyethylene (UHMWPE) is a polymer belonging to the polyethylene (PE) family, with repeat unit –CH2– and an average molecular weight of approximately 3-6 million g/mol. Due to its chemical, mechanical, physical and biocompatibility properties, it has been extensively used in electrical insulation, medicine, orthopedics, microelectronics, engineering, chemistry, the food industry, etc. To alter or modify the properties of UHMWPE for a particular application of interest, various procedures are in practice, e.g. treating the material with high-energy irradiation such as gamma rays, e-beams, and ion bombardment. Radiation treatment of UHMWPE induces free radicals within its matrix, and these free radicals are the precursors of chain scission, chain accumulation, formation of double bonds, molecular emission, crosslinking, etc. All the aforementioned physical and chemical processes are responsible for modifying polymer properties for particular applications of interest, e.g. fabricating LEDs, optical sensors, antireflective coatings and polymeric optical fibers, and, most importantly, for radiation dosimetry. Therefore, to check the feasibility of using UHMWPE for radiation dosimetry applications, compressed sheets of UHMWPE were irradiated at room temperature (~25°C) to total doses of 30 kGy and 100 kGy, respectively, while one sheet was kept unirradiated as a reference. Transmittance data (from 400 nm to 800 nm) of the e-beam irradiated UHMWPE samples were measured using a Mueller matrix spectro-polarimeter. Significant changes occurred in the absorption behavior of the irradiated samples. To analyze these radiation-induced changes in the polymer matrix, the Urbach edge method and the modified Tauc's equation were used. The results reveal that the optical activation energy decreases with irradiation.
The values of the activation energies are 2.85 meV, 2.48 meV, and 2.40 meV for the control, 30 kGy, and 100 kGy samples, respectively. The direct and indirect energy band gaps were also found to decrease with irradiation, due to variation of C=C unsaturation in clusters. We believe that the reported results open new horizons for radiation dosimetry applications.
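The Urbach edge analysis mentioned above fits the exponential absorption tail α(E) = α0·exp(E/Eu), so the Urbach energy Eu is the reciprocal of the slope of ln α versus photon energy. A sketch on synthetic data (Eu = 0.05 eV is an invented value, not a measured one):

```python
# Sketch of extracting the Urbach energy E_u from the exponential absorption
# edge alpha(E) = alpha0 * exp(E / E_u): the slope of ln(alpha) vs photon
# energy E gives 1/E_u. Synthetic data, not the paper's measurements.
import math

def urbach_energy(E, alpha):
    """Least-squares slope of ln(alpha) vs E; returns E_u = 1/slope (eV)."""
    y = [math.log(a) for a in alpha]
    n = len(E)
    mx, my = sum(E) / n, sum(y) / n
    slope = (sum((e - mx) * (yi - my) for e, yi in zip(E, y))
             / sum((e - mx) ** 2 for e in E))
    return 1.0 / slope

E = [2.0, 2.1, 2.2, 2.3, 2.4]              # photon energies (eV)
alpha = [math.exp(e / 0.05) for e in E]    # a perfect Urbach tail
print(round(urbach_energy(E, alpha), 6))
```

On real transmittance data the absorption coefficient must first be derived from the measured transmittance and sample thickness, and the fit is restricted to the linear region of ln α.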

Keywords: electron beam, radiation dosimetry, Tauc’s equation, UHMWPE, Urbach method

Procedia PDF Downloads 405
3051 Surface Roughness of Al-Si/10% AlN MMC Material in Milling Operation Using the Taguchi Method

Authors: M. S. Said, J. A. Ghani, Izzati Osman, Z. A. Latiff, S. A .F. Syed Mohd

Abstract:

Metal matrix composites are in demand as lightweight structural and functional materials. MMCs have been shown to offer improvements in strength, rigidity, temperature stability, wear resistance, reliability and control of physical properties such as density and coefficient of thermal expansion, thereby providing improved engineering performance in comparison to the un-reinforced matrix. Experiments were conducted at various cutting speeds and feed rates and with different cutting tools, according to the Taguchi method using a standard L9 orthogonal array. The volume of AlN reinforcement particles in the MMC was 10%. The milling process was carried out under dry cutting conditions using uncoated carbide, TiN and TiCN tool inserts. The cutting speeds used were 230, 300 and 370 m/min and the feed rates 0.4, 0.6 and 0.8 mm/tooth, while the depth of cut was constant at 0.3 mm and the tool diameter was 20 mm. Surface roughness was measured in detail using a Mitutoyo Surftest SJ-310 portable surface roughness tester. The machining was performed on MMC workpieces 150 mm long, 100 mm wide and 30 mm thick. Analysis using the S/N ratio concluded that a combination of low cutting speed, medium feed rate and the uncoated insert gives a remarkable surface finish. The ANOVA results showed that feed rate was the major contributing factor (43.76%), followed by the type of insert (40.89%).
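The smaller-the-better S/N ratio used to rank parameter combinations for surface roughness is S/N = −10·log10(mean(y²)) over replicate readings. A sketch with invented Ra values, not the study's measurements:

```python
# Taguchi smaller-the-better S/N ratio, as used to compare parameter settings
# for surface roughness. The Ra replicate values below are illustrative.
import math

def sn_smaller_better(values):
    """S/N = -10*log10(mean of squared responses); higher is better."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

run_a = [0.8, 0.9]  # Ra (um) replicates for one parameter combination
run_b = [1.6, 1.8]  # a rougher combination
print(sn_smaller_better(run_a) > sn_smaller_better(run_b))  # True
```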

Keywords: MMC, milling operation and surface roughness, Taguchi method

Procedia PDF Downloads 528
3050 Perceptual Image Coding by Exploiting Internal Generative Mechanism

Authors: Kuo-Cheng Liu

Abstract:

In perceptual image coding, the objective is to shape the coding distortion so that its amplitude does not exceed the error visibility threshold, or to remove perceptually redundant signals from the image. While most research focuses on color image coding, perceptual quantizers developed for luminance signals are often applied directly to chrominance signals, making such color image compression methods inefficient. In this paper, the internal generative mechanism is integrated into the design of a color image compression method. An internal generative mechanism working model based on structure-based spatial masking is used to assess subjective distortion visibility thresholds that are more visually consistent with human eyes. An estimation method for structure-based distortion visibility thresholds for the color components is further presented in a locally adaptive way to design the quantization process in the wavelet color image compression scheme. Since the lowest-subband coefficient matrix of an image in the wavelet domain preserves the local properties of the image in the spatial domain, the error visibility threshold inherent in each coefficient of the lowest subband for each color component is estimated using the proposed spatial error visibility threshold assessment. The threshold inherent in each coefficient of the other subbands for each color component is then estimated in a locally adaptive fashion based on the distortion energy allocation. Because the error visibility thresholds are estimated from predicted and reconstructed signals of the color image, the coding scheme incorporating the locally adaptive perceptual color quantizer requires no side information.
Experimental results show that the entropies of the three color components obtained with the proposed IGM-based color image compression scheme are lower than those obtained with the existing color image compression method at perceptually lossless visual quality.
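The threshold-driven quantization idea can be sketched simply: each coefficient gets a quantization step tied to its visibility threshold, so the introduced error never exceeds that threshold. The coefficients and thresholds below are invented; in the scheme above the thresholds are estimated per subband and color component.

```python
# Minimal sketch of perceptual quantization: step = 2*threshold guarantees the
# rounding error per coefficient stays within its visibility threshold.
# All numbers here are invented for illustration.

def perceptual_quantize(coeffs, thresholds):
    """Quantize each coefficient with step 2*t, bounding the error by t."""
    out = []
    for c, t in zip(coeffs, thresholds):
        step = 2.0 * t
        out.append(round(c / step) * step)
    return out

coeffs = [10.3, -4.2, 0.4]       # wavelet coefficients of one color component
thresholds = [1.0, 0.5, 0.5]     # visibility threshold of each coefficient
rec = perceptual_quantize(coeffs, thresholds)
assert all(abs(c - r) <= t for c, r, t in zip(coeffs, rec, thresholds))
print(rec)
```

Coarser steps where thresholds are high yield fewer distinct levels and hence lower entropy, which is the effect the experimental results report.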

Keywords: internal generative mechanism, structure-based spatial masking, visibility threshold, wavelet domain

Procedia PDF Downloads 247
3049 Effect Analysis of an Improved Adaptive Speech Noise Reduction Algorithm in Online Communication Scenarios

Authors: Xingxing Peng

Abstract:

With the development of society, there are more and more online communication scenarios, such as teleconferencing and online education. In conference communication, voice quality is a very important part of the experience, and noise can greatly degrade it for participants. Voice noise reduction therefore has an important impact on scenarios such as voice calls. This research focuses on the key technologies of the sound transmission process, with the aim of preserving audio quality as far as possible so that the listener hears clearer and smoother sound. To solve the problem that traditional speech enhancement algorithms perform poorly on non-stationary noise, an adaptive speech noise reduction algorithm is studied in this paper. Traditional noise estimation methods are mainly designed for stationary noise. Here, the spectral characteristics of different noise types, especially non-stationary burst noise, are studied, and a noise estimator module is designed to deal with non-stationary noise. Noise features are extracted from non-speech segments, and the noise estimation module is adjusted in real time according to the noise characteristics. This adaptive algorithm can enhance speech according to different noise characteristics, improving on the performance of traditional algorithms for non-stationary noise and achieving a better enhancement effect. The experimental results show that the proposed algorithm is effective and adapts better to different types of noise, yielding a better speech enhancement effect.
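A common core of such enhancers is a per-frequency Wiener gain G = SNR/(1 + SNR), with the noise power tracked from non-speech segments. The sketch below uses invented spectral values and illustrates only that gain rule, not the paper's full adaptive algorithm:

```python
# Sketch of the per-bin Wiener gain at the heart of many noise reducers:
# G = SNR / (1 + SNR), applied to each spectral bin, with the noise PSD
# estimated from non-speech segments. Values are illustrative.

def wiener_gain(noisy_psd, noise_psd):
    """Per-bin Wiener filter gains from noisy-speech and noise power."""
    gains = []
    for s, n in zip(noisy_psd, noise_psd):
        snr = max(s - n, 0.0) / n      # a priori SNR estimate, floored at 0
        gains.append(snr / (1.0 + snr))
    return gains

noisy_psd = [9.0, 4.0, 1.0]   # noisy-speech power per frequency bin
noise_psd = [1.0, 1.0, 1.0]   # noise power tracked in non-speech segments
g = wiener_gain(noisy_psd, noise_psd)
print(g)  # strong bins pass nearly unchanged; noise-only bins are zeroed
```

The adaptivity described in the abstract amounts to updating `noise_psd` quickly when burst noise is detected, so the gains track non-stationary noise instead of a fixed estimate.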

Keywords: speech noise reduction, speech enhancement, self-adaptation, Wiener filter algorithm

Procedia PDF Downloads 55
3048 Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis

Authors: Petr Gurný

Abstract:

One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. In this paper the possibility of determining a financial institution's PD with credit-scoring models is discussed. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression and probit regression) from a sample of almost three hundred US commercial banks. These models are then compared and verified on a control sample with a view to choosing the best one. The second part of the paper applies the chosen model to a portfolio of three key Czech banks to estimate their present financial stability. However, it is no less important to be able to estimate the evolution of PD in the future. For this reason, the second task in this paper is to estimate the probability distribution of future PD for the Czech banks. The values of particular indicators are randomly sampled and the distribution of PDs estimated, under the assumption that the indicators are distributed according to a multidimensional subordinated Lévy model (the Variance Gamma model and the Normal Inverse Gaussian model, in particular). Although the obtained results show that all the banks are relatively healthy, there is still a high chance that "a financial crisis" will occur, at least in terms of probability. This is indicated by estimates of various quantiles of the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.
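Once a logit credit-scoring model is fitted, mapping a bank's indicators to a PD is a one-line computation. The coefficients and indicator values below are invented for illustration, not the paper's estimates:

```python
# Sketch of how a fitted logit credit-scoring model maps indicators to a
# probability of default: PD = 1 / (1 + exp(-(b0 + b . x))). All coefficients
# and financial ratios below are hypothetical.
import math

def logit_pd(intercept, coefs, indicators):
    """Logistic-regression probability of default for one obligor."""
    z = intercept + sum(b * x for b, x in zip(coefs, indicators))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical indicators: [capital adequacy %, ROA %, NPL ratio]
coefs = [-0.5, -2.0, 6.0]
pd_healthy = logit_pd(-4.0, coefs, [8.0, 1.5, 0.02])
pd_weak = logit_pd(-4.0, coefs, [4.0, -0.5, 0.15])
print(pd_healthy < pd_weak)  # the weaker bank gets the higher PD
```

Sampling the indicator vector from a fitted distribution (as the paper does with subordinated Lévy models) and pushing each draw through this map yields the future PD distribution whose quantiles are then examined.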

Keywords: credit-scoring models, multidimensional subordinated Lévy model, probability of default

Procedia PDF Downloads 454