Search results for: loci method

13621 In Vitro Studies on Antimicrobial Activities of Lactic Acid Bacteria Isolated from Fresh Fruits for Biocontrol of Pathogens

Authors: Okolie Pius Ifeanyi, Emerenini Emilymary Chima

Abstract:

Aims: The study investigated the diversity and identities of Lactic Acid Bacteria (LAB) isolated from different fresh fruits using molecular nested PCR analysis, and the efficacy of cell-free supernatants from the isolated LAB for in vitro control of some tomato pathogens. Study Design: A nested PCR approach was employed, using universal 16S rRNA gene primers in the first round and LAB-specific primers in the second round, with the aim of generating specific nested PCR products reflecting the LAB diversity present in the samples. The inhibitory potential of supernatants obtained from the molecularly characterized LAB isolates of fruit origin was investigated against some tomato phytopathogens using the agar-well method, with a view to developing biological control agents against some tomato disease-causing organisms. Methodology: Gram-positive, catalase-negative strains of LAB were isolated from fresh fruits on de Man, Rogosa and Sharpe agar (Lab M) using the streaking method. Isolates obtained were molecularly characterized using a genomic DNA extraction kit (Norgen Biotek, Canada). Standard methods were used for nested polymerase chain reaction (PCR) amplification targeting the 16S rRNA gene with universal 16S rRNA gene and LAB-specific primers, agarose gel electrophoresis, and purification and sequencing of the generated nested PCR products (Macrogen Inc., USA). The partial sequences obtained were identified by BLAST searches against the non-redundant nucleotide database of the National Center for Biotechnology Information (NCBI). The antimicrobial activities of the characterized LAB against some tomato phytopathogenic bacteria (Xanthomonas campestris, Erwinia carotovora, and Pseudomonas syringae) were determined using the agar-well diffusion method. Results: The partial sequences obtained were deposited in the NCBI database. Based on the sequences, isolates were identified as Weissella cibaria (4, 18.18%), Weissella confusa (3, 13.64%), Leuconostoc paramesenteroides (1, 4.55%), Lactobacillus plantarum (8, 36.36%), Lactobacillus paraplantarum (1, 4.55%) and Lactobacillus pentosus (1, 4.55%). The cell-free supernatants of the LAB of fresh fruit origin (Weissella cibaria, Weissella confusa, Leuconostoc paramesenteroides, Lactobacillus plantarum, Lactobacillus paraplantarum and Lactobacillus pentosus) inhibited these bacteria, creating clear zones of inhibition around the wells containing the cell-free supernatants of the above-mentioned strains. Conclusion: This study shows that LAB can potentially be characterized quickly to species level by nested PCR analysis of the bacterial isolates' genomic DNA using universal 16S rRNA primers and a LAB-specific primer. Tomato disease-causing organisms can most likely be controlled biologically using extracts from LAB. This finding may reduce the potential hazard from the use of chemical pesticides on plants.

Keywords: nested PCR, molecular characterization, 16S rRNA gene, lactic acid bacteria

13620 Numerical Analysis of Core-Annular Blood Flow in Microvessels at Low Reynolds Numbers

Authors: L. Achab, F. Iachachene

Abstract:

In microvessels, red blood cells (RBCs) tend to migrate towards the vessel center, establishing a core-annular flow pattern. The core region, marked by a high concentration of RBCs, is governed by a markedly non-Newtonian viscosity. Conversely, the annular layer, composed of cell-free plasma, is characterized by a low Newtonian viscosity. This property enables the plasma layer to act as a lubricant for the vessel walls, efficiently reducing resistance to the movement of blood cells. In this study, we investigate the factors influencing blood flow in microvessels and the thickness of the annular plasma layer using an immiscible-fluids approach in a 2D axisymmetric geometry. The governing equations of an incompressible unsteady flow are solved numerically through the Volume of Fluid (VOF) method to track the interface between the two immiscible fluids. To model blood viscosity in the core region, we adopt the Quemada constitutive law, which accurately captures the shear-thinning blood rheology over a wide range of shear rates. Our results are then compared to an established theoretical approach under identical flow conditions, particularly concerning the radial velocity profile and the thickness of the annular plasma layer. The simulation findings for low Reynolds numbers demonstrate a notable agreement with the theoretical solution, emphasizing the pivotal role of the blood's rheological properties in the core region in determining the thickness of the annular plasma layer.
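For reference, a minimal Python sketch of the shear-rate-dependent Quemada viscosity mentioned above is given below; the parameter values (plasma viscosity, hematocrit, intrinsic viscosity coefficients and critical shear rate) are illustrative placeholders, not values taken from this work.

```python
import numpy as np

def quemada_viscosity(shear_rate, phi=0.45, mu_plasma=1.2e-3,
                      k0=4.33, k_inf=2.07, gamma_c=1.88):
    """Apparent blood viscosity [Pa.s] from the Quemada model.

    shear_rate : shear rate [1/s]
    phi        : RBC volume fraction (hematocrit)
    mu_plasma  : Newtonian plasma viscosity [Pa.s]
    k0, k_inf  : intrinsic viscosity at zero / infinite shear
    gamma_c    : critical shear rate [1/s]
    (all parameter values are illustrative placeholders)
    """
    s = np.sqrt(shear_rate / gamma_c)
    k = (k0 + k_inf * s) / (1.0 + s)            # shear-dependent intrinsic viscosity
    return mu_plasma * (1.0 - 0.5 * k * phi) ** -2

# Shear-thinning behaviour: the apparent viscosity drops as the shear rate increases
rates = np.logspace(-2, 3, 6)
print(quemada_viscosity(rates))
```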

Keywords: core-annular flows, microvessels, Quemada model, plasma layer thickness, volume of fluid method

13619 Studying the Temperature Field of Hypersonic Vehicle Structure with Aero-Thermo-Elasticity Deformation

Authors: Geng Xiangren, Liu Lei, Gui Ye-Wei, Tang Wei, Wang An-ling

Abstract:

The malfunction of the thermal protection system (TPS) caused by aerodynamic heating is a latent threat to aircraft structural safety. Accurately predicting the structural temperature field is therefore quite important for the TPS design of a hypersonic vehicle. Since Thornton's work in 1988, the coupled treatment of aerodynamic heating and heat transfer has developed rapidly. However, little attention has been paid to the influence of structural deformation on aerodynamic heating and the structural temperature field. In flight, especially long-endurance flight, the structural deformation caused by aerodynamic heating and temperature rise has a direct impact on the aerodynamic heating and the structural temperature field. Thus, the coupled interaction cannot be neglected. In this paper, based on the method of static aero-thermo-elasticity and considering the influence of aero-thermo-elastic deformation, the coupled aerodynamic heating and heat transfer results of a hypersonic vehicle wing model were calculated. The results show that, for low-curvature regions, such as the fuselage or the center-section wing, structural deformation has little effect on the temperature field. However, for the stagnation region with high curvature, the coupled effect is not negligible. Thus, it is quite important to take the effect of elastic deformation into account when predicting the structural temperature. This work lays a solid foundation for improving the prediction accuracy of the temperature distribution of aircraft structures and the capability to evaluate structural performance.

Keywords: aerothermoelasticity, elastic deformation, structural temperature, multi-field coupling

13618 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model

Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You

Abstract:

The separation of speech signals has become a research hotspot in the field of signal processing in recent years. It has many applications in teleconferencing, hearing aids, machine speech recognition and so on. The sounds received are usually noisy. Identifying the sounds of interest and obtaining clear sounds in such an environment is therefore a problem worth exploring, namely the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for this problem. The method is mainly divided into two parts: first, a clustering algorithm is used to estimate the mixing matrix from the observed signals; then the signals are separated based on the known mixing matrix. In this paper, the problem of mixing matrix estimation is studied, and an improved algorithm to estimate the mixing matrix for speech signals in the UBSS model is proposed. The traditional potential-function algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper considers an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in traditional clustering algorithms, but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. The simulation results show that the approach in this paper not only improves the accuracy of estimation, but also applies to any mixing matrix.
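As a rough illustration of the clustering stage described above (a generic sparse-component-analysis baseline, not the improved potential-function algorithm of the paper), the sketch below estimates the columns of a 2×4 mixing matrix by clustering the directions of high-energy observation vectors with DBSCAN; all signals and parameters are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Synthetic UBSS example: 4 sparse sources mixed into 2 channels
A_true = rng.standard_normal((2, 4))
A_true /= np.linalg.norm(A_true, axis=0)            # unit-norm mixing columns
S = rng.standard_normal((4, 5000)) * (rng.random((4, 5000)) < 0.05)  # sparse sources
X = A_true @ S                                       # observed mixtures (2 x T)

# Keep only high-energy frames, where a single source is likely dominant
energy = np.linalg.norm(X, axis=0)
P = X[:, energy > 0.1]

# Map observations to directions on the unit half-circle (sign-normalised)
P = P / np.linalg.norm(P, axis=0)
P *= np.sign(P[0] + 1e-12)

# Cluster the direction vectors; each cluster centre estimates one mixing column
labels = DBSCAN(eps=0.05, min_samples=20).fit_predict(P.T)
cols = [P[:, labels == k].mean(axis=1) for k in set(labels) if k != -1]
A_est = np.column_stack([c / np.linalg.norm(c) for c in cols])
print("estimated mixing matrix:\n", A_est)
```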

Keywords: DBSCAN, potential function, speech signal, the UBSS model

13617 Stereoselective Glycosylation and Functionalization of Unbiased Site of Sweet System via Dual-Catalytic Transition Metal Systems/Wittig Reaction

Authors: Mukul R. Gupta, Rajkumar Gandhi, Rajitha Sachan, Naveen K. Khare

Abstract:

The field of glycoscience has burgeoned in the last several decades, leading to the identification of many glycosides which could serve critical roles in a wide range of biological processes. This has prompted a resurgence in synthetic interest, with a particular focus on new approaches to construct the glycosidic bond selectively. Despite the numerous elegant strategies and methods developed for the formation of glycosidic bonds, the stereoselective construction of glycosides remains challenging. We have recently developed a novel hexafluoroisopropanol (HFIP)-catalyzed stereoselective glycosylation method using a KDN imidate glycosyl donor and a variety of alcohols in excellent yield. This method is broadly applicable to a wide range of substrates and gives excellent selectivity of the glycoside. We also report the functionalization of the unbiased site of the newly formed glycosides by dual-catalytic transition metal systems (Ru- or Fe-based). We use the innovative Reverse & Catalyst strategy, i.e., a reversible activation reaction by one catalyst combined with a functionalization reaction by another catalyst, enabling functionalization of substrates at their inherently unreactive sites. In addition, we target the synthesis of diSia derivatives via the Wittig reaction. This synthetic method proceeds under mild conditions, demonstrates the functional-group tolerance of the dual-catalytic systems, and highlights the potential of the multicatalytic approach to address challenging transformations and avoid multistep procedures in carbohydrate synthesis.

Keywords: KDN, stereoselective glycosylation, dual-catalytic functionalization, Wittig reaction

13616 Evolutionary Swarm Robotics: Dynamic Subgoal-Based Path Formation and Task Allocation for Exploration and Navigation in Unknown Environments

Authors: Lavanya Ratnabala, Robinroy Peter, E. Y. A. Charles

Abstract:

This research paper addresses the challenges of exploration and navigation in unknown environments from an evolutionary swarm robotics perspective. Path formation plays a crucial role in enabling cooperative swarm robots to accomplish these tasks. The paper presents a method called the sub-goal-based path formation, which establishes a path between two different locations by exploiting visually connected sub-goals. Simulation experiments conducted in the Argos simulator demonstrate the successful formation of paths in the majority of trials. Furthermore, the paper tackles the problem of inter-collision (traffic) among a large number of robots engaged in path formation, which negatively impacts the performance of the sub-goal-based method. To mitigate this issue, a task allocation strategy is proposed, leveraging local communication protocols and light signal-based communication. The strategy evaluates the distance between points and determines the required number of robots for the path formation task, reducing unwanted exploration and traffic congestion. The performance of the sub-goal-based path formation and task allocation strategy is evaluated by comparing path length, time, and resource reduction against the A* algorithm. The simulation experiments demonstrate promising results, showcasing the scalability, robustness, and fault tolerance characteristics of the proposed approach.

Keywords: swarm, path formation, task allocation, Argos, exploration, navigation, sub-goal

13615 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain

Authors: Bita Payami-Shabestari, Dariush Eslami

Abstract:

The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy with restrictions including limited warehouse space, budget, number of orders, average shortage time, and maximum permissible shortage. Since the costs cannot be predicted with certainty, the data are assumed to behave under an uncertain environment. The problem is first formulated in the framework of a bi-objective multi-product economic production quantity model. Then, the problem is solved with three multi-objective decision-making (MODM) methods, and the three methods are compared with respect to the optimal values of the two objective functions and the central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results of the study demonstrate that the augmented-constraint method performs better than global criteria and goal programming in terms of the optimal values of the two objective functions and the CPU time. A sensitivity analysis is done to illustrate the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.

Keywords: economic production quantity, random cost, supply chain management, vendor-managed inventory

13614 Optimization Parameters Using Response Surface Method on Biomechanical Analysis for Malaysian Soccer Players

Authors: M. F. M. Ali, A. R. Ismail, B. M. Deros

Abstract:

Soccer is very popular and ranked as the top sport in the world as well as in Malaysia. Although soccer in Malaysia is currently professionalized, its achievements have continued to plunge in recent years and are nothing to be proud of. A review shows that Malaysian soccer players are still weak in terms of kicking technique. The instep kick is a technique often used in soccer for short passes and for scoring. This study presents a 3D biomechanical analysis of a soccer player performing the instep kick. The study was conducted to determine the optimal values of the approach angle, the distance of the supporting leg from the ball, and the ball internal pressure with respect to the knee angular velocity of the kicking leg. Six subjects from different categories, all using the dominant right leg and free from any injury, were selected to take part in this study. Subjects were asked to perform a one-step instep kick according to the settings of the variables with different parameters. Data analysis was performed using the three-dimensional Qualisys Track Manager system and focused on the lower body, from the waist to the ankle. For this purpose, markers were attached to the lower body before the kicks were performed by the subjects. Statistical analysis was conducted in Minitab using the response surface method with a Box-Behnken design. The study produced optimal values for all three parameters: an approach angle of 53.6°, a distance of the supporting leg from the ball of 8.84 cm, and a ball internal pressure of 0.9 bar, with a knee angular velocity of 779.27 degrees/sec.
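As a schematic of the response-surface step described above (not the actual Minitab/Box-Behnken analysis), the sketch below fits a full quadratic model to hypothetical (approach angle, supporting-leg distance, ball pressure, knee angular velocity) data and maximizes it; all data values and bounds are invented placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical design points: [approach angle (deg), support distance (cm), pressure (bar)]
X = np.array([[40, 5, 0.7], [40, 10, 0.9], [50, 5, 0.9], [50, 10, 0.7],
              [45, 7.5, 0.8], [60, 5, 0.8], [60, 10, 0.8], [45, 5, 1.0],
              [45, 10, 1.0], [55, 7.5, 0.9], [50, 7.5, 1.0], [55, 5, 0.7]])
y = np.array([700, 720, 760, 745, 750, 730, 725, 740, 735, 770, 755, 715])  # knee angular velocity

def quad_terms(x):
    """Full quadratic model terms for three factors."""
    a, d, p = x
    return np.array([1, a, d, p, a*a, d*d, p*p, a*d, a*p, d*p])

# Least-squares fit of the response surface coefficients
beta, *_ = np.linalg.lstsq(np.array([quad_terms(x) for x in X]), y, rcond=None)

# Maximize the fitted response surface within the experimental region
res = minimize(lambda x: -quad_terms(x) @ beta, x0=[50, 7.5, 0.9],
               bounds=[(40, 60), (5, 10), (0.7, 1.0)])
print("optimal settings:", res.x, "predicted knee angular velocity:", -res.fun)
```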

Keywords: biomechanics, instep kick, soccer, optimization

13613 Operative Tips of Strattice Based Breast Reconstruction

Authors: Cho Ee Ng, Hazem Khout, Tarannum Fasih

Abstract:

Acellular dermal matrices are increasingly used to reinforce the lower pole of the breast during implant breast reconstruction. There is no standard technique described in the literature for the use of this product. In this article, we share our operative method of fixation.

Keywords: strattice, acellular dermal matrix, breast reconstruction, implant

13612 Prenatal Diagnosis of Beta Thalassemia Intermedia in Vietnamese Family: Case Report

Authors: Ha T. T. Ly, Truc B. Truc, Hai N. Truong, Mai P. T. Nguyen, Ngoc D. Ngo, Khanh V. Tran, Hai T. Le

Abstract:

Beta thalassemia is one of the most common inherited blood disorders and is characterized by decreased or absent beta globin expression. Patients with beta thalassemia whose anemia is not so severe as to necessitate transfusions are said to have thalassemia intermedia. Objective: The goal of this study is prenatal diagnosis for a pregnant woman with beta thalassemia intermedia and her husband, a beta thalassemia carrier, whose pregnancy is at high risk of beta thalassemia major, in Northern Vietnam. Material and method: The family has a 6-year-old male child with thalassemia major, compound heterozygous for CD71/72(+A) and Hbb:c.-78A>G/nt-28(A>G). The father was heterozygous for the CD71/72(+A) mutation, which is a beta-plus type, and the mother showed compound heterozygosity for two different variants, namely Hbb:c.-78A>G/nt-28(A>G) and CD26(A-G) HbE. Prenatal detection of beta thalassemia mutations in fetal DNA was carried out using the multiplex amplification-refractory mutation system (ARMS-PCR) and confirmed by direct Sanger sequencing of the Hbb gene. Prenatal diagnosis was performed on amniotic fluid sampled from the pregnant woman in the 16th-18th week of pregnancy, after the genotypes of the parents of the probands had been identified. Result: When the amniotic fluid sample was analyzed for the beta globin gene (Hbb), we found that the genotype is heterozygous for CD71/72(+A) and CD26(A-G) HbE. This genotype is different from that of the first child of this family. Conclusion: Prenatal diagnosis helps the parents to know the genotype and the thalassemia status of the fetus, so they can make an early decision about their pregnancy. Genetic diagnosis provides a useful method for the diagnosis of family members in the pedigree, genetic counseling, and prenatal diagnosis.

Keywords: beta thalassemia intermedia, Hbb gene, pedigree, prenatal diagnosis

13611 Literature Review of the Management of Parry Romberg Syndrome with Fillers

Authors: Sana Ilyas

Abstract:

Parry-Romberg syndrome is a rare condition clinically defined by slowly progressive atrophy of the skin and soft tissues. This usually affects one side of the face, although a few cases of bilateral presentation have been documented. It is more prevalent in females and usually affects the left side of the face. The syndrome can also be accompanied by neurological abnormalities. It usually occurs in the first two decades of life with a variable rate of progression. The aetiology is unknown, and the disease eventually stabilises. The treatment options usually involve surgical management. The least invasive of these options is the management of the facial asymmetry associated with Parry Romberg syndrome through the use of tissue fillers. This paper reviews the existing literature on the management of Parry Romberg syndrome with tissue fillers. Aim: The aim of the study is to explore the current published literature on the management of Parry Romberg syndrome with fillers: to assess the developments that have been made in this method of management, its benefits and limitations, and its effectiveness. Methodology: The current published literature on this topic was thoroughly assessed. The PubMed database was used to search the published literature on this method of management. Papers were analysed and compared with one another to assess the successes and limitations of the management of Parry Romberg syndrome with dermal fillers. Results and Conclusion: Case reports of the use of tissue fillers describe varying degrees of success with the treatment. The procedure has its limitations, which are discussed in the paper in detail. However, it is still the least invasive of all the surgical options for the management of Parry Romberg syndrome, and therefore it is important to explore this option with patients, as they may be more comfortable pursuing treatment that is less invasive and can still improve their facial asymmetry.

Keywords: dermal fillers, facial asymmetry, parry romberg syndrome, tissue fillers

13610 Wetting Characterization of High Aspect Ratio Nanostructures by Gigahertz Acoustic Reflectometry

Authors: C. Virgilio, J. Carlier, P. Campistron, M. Toubal, P. Garnier, L. Broussous, V. Thomy, B. Nongaillard

Abstract:

The wetting efficiency of microstructures or nanostructures patterned on Si wafers is a real challenge in integrated circuit manufacturing. In fact, bad or non-uniform wetting during wet processes limits chemical reactions and can lead to incomplete etching or cleaning inside the patterns and to device defectivity. This issue becomes more and more important with transistor size shrinkage and concerns mainly high aspect ratio structures. Deep Trench Isolation (DTI) structures enabling pixel isolation in imaging devices are subject to this phenomenon. While low-frequency acoustic reflectometry is a well-known method for non-destructive testing applications, we have recently shown that it is also well suited for nanostructure wetting characterization in a higher frequency range. In this paper, we present a high-frequency acoustic reflectometry characterization of DTI wetting through a confrontation of experimental and modeling results. The proposed acoustic method is based on the evaluation of the reflection of a longitudinal acoustic wave generated by a 100 µm diameter ZnO piezoelectric transducer sputtered on the silicon wafer backside using MEMS technologies. The transducers have been fabricated to work at 5 GHz, corresponding to a wavelength of 1.7 µm in silicon. The DTI structures studied are crossing trenches, 200 nm wide and 4 µm deep (aspect ratio of 20), etched into the wafer frontside. In that case, the acoustic signal reflection occurs at the bottom and at the top of the DTI, enabling its characterization by monitoring the electrical reflection coefficient of the transducer. A Finite Difference Time Domain (FDTD) model has been developed to predict the behavior of the emitted wave. The model shows that the separation of the reflected echoes (top and bottom of the DTI) from different acoustic modes is possible at 5 GHz. A good correspondence between experimental and theoretical signals is observed. The model enables the identification of the different acoustic modes. The evaluation of DTI wetting is then performed by focusing on the first reflected echo, obtained through the reflection at the Si bottom interface, where wetting efficiency is crucial. The reflection coefficient is measured with different water/ethanol mixtures (tunable surface tension) deposited on the wafer frontside. Two cases are studied: with and without PFTS hydrophobic treatment. In the untreated surface case, the acoustic reflection coefficient values with water show that liquid imbibition is partial. In the treated surface case, the acoustic reflection is total with water (no liquid in the DTI). The impalement of the liquid occurs at a specific surface tension, but it is still partial for pure ethanol. The DTI bottom shape and local pattern collapse of the trenches can explain these incomplete wetting phenomena. The sensitivity of this high-frequency acoustic method, coupled with an FDTD propagation model, thus enables the local determination of the wetting state of a liquid on real structures. Partial wetting states for non-hydrophobic surfaces or low surface tension liquids are then detectable with this method.
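A back-of-the-envelope sketch of why the bottom-echo amplitude tracks wetting: the normal-incidence reflection coefficient at the silicon/medium interface depends on which medium (air, ethanol, water) fills the trench. The impedance values below are rounded literature figures used only for illustration.

```python
# Normal-incidence pressure reflection coefficient R = (Z2 - Z1) / (Z2 + Z1)
# at the interface between silicon and the medium filling the trench.
# Acoustic impedances in MRayl are rounded literature values (illustrative only).
Z_SI = 19.6                                  # silicon, longitudinal waves
MEDIA = {"air (dry trench)": 0.0004, "ethanol": 0.9, "water": 1.5}

for name, z in MEDIA.items():
    r = (z - Z_SI) / (z + Z_SI)
    print(f"{name:20s} |R| = {abs(r):.3f}")

# An empty (air-filled) trench reflects almost totally (|R| close to 1), while a
# liquid-filled trench lowers |R|, which is what the transducer signal monitors.
```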

Keywords: wetting, acoustic reflectometry, gigahertz, semiconductor

13609 Generation Mechanism of Opto-Acoustic Wave from in vivo Imaging Agent

Authors: Hiroyuki Aoki

Abstract:

The optoacoustic effect is the energy conversion phenomenon from light to sound. In recent years, this optoacoustic effect has been utilized in imaging agents to visualize tumor sites in a living body. The optoacoustic imaging agent absorbs light and emits a sound signal. The sound wave can propagate in a living organism with a small energy loss; therefore, the optoacoustic imaging method enables molecular imaging deep inside the body. In order to improve the imaging quality of the optoacoustic method, a higher signal intensity is desired; however, it has been difficult to enhance the signal intensity of optoacoustic imaging agents because the fundamental mechanism of the signal generation is unclear. This study deals with the mechanism by which the sound wave signal is generated from the optoacoustic imaging agent following light absorption, using experimental and theoretical approaches. The optoacoustic signal efficiencies of nanoparticles consisting of metal and of polymer were compared, and the polymer particle was found to be better. The heat generation and transfer processes for optoacoustic agents of metal and polymer were theoretically examined. It was found that heat generated in the metal particle is rapidly transferred to the water medium, whereas the heat in the polymer particle is confined within it. The confined heat in the small particle induces a massive volume expansion, resulting in a large optoacoustic signal for the polymeric particle agent. Thus, we showed that heat confinement is a crucial factor in designing a highly efficient optoacoustic imaging agent.
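To make the heat-confinement argument concrete, a rough order-of-magnitude estimate (not taken from this work) compares the characteristic heat-diffusion time, of order R²/α, for a metal and a polymer nanoparticle; the radius and thermal diffusivity values are approximate, assumed figures.

```python
# Rough estimate of how long absorbed heat stays inside a nanoparticle:
# characteristic diffusion time tau ~ R^2 / alpha (order of magnitude only).
R = 50e-9                 # particle radius [m] (assumed)
alpha_metal = 1.3e-4      # thermal diffusivity of gold [m^2/s], approximate literature value
alpha_polymer = 1.1e-7    # thermal diffusivity of polystyrene [m^2/s], approximate literature value

for name, alpha in [("metal (Au)", alpha_metal), ("polymer (PS)", alpha_polymer)]:
    tau = R ** 2 / alpha
    print(f"{name:12s} tau ~ {tau:.2e} s")

# The polymer particle retains heat roughly a thousand times longer, consistent
# with the heat-confinement explanation for its stronger optoacoustic signal.
```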

Keywords: nano-particle, opto-acoustic effect, in vivo imaging, molecular imaging

13608 An Investigation of the Fracture Behavior of Model MgO-C Refractories Using the Discrete Element Method

Authors: Júlia Cristina Bonaldo, Christophe L. Martin, Martiniano Piccico, Keith Beale, Roop Kishore, Severine Romero-Baivier

Abstract:

Refractory composite materials employed in steel casting applications are prone to cracking and material damage because of the very high operating temperature (thermal shock) and mismatched properties of the constituent phases. The fracture behavior of a model MgO-C composite refractory is investigated to quantify and characterize its thermal shock resistance, employing a cold crushing test and Brazilian test with fractographic analysis. The discrete element method (DEM) is used to generate numerical refractory composites. The composite in DEM is represented by an assembly of bonded particle clusters forming perfectly spherical aggregates and single spherical particles. For the stresses to converge with a low standard deviation and a minimum number of particles to allow reasonable CPU calculation time, representative volume element (RVE) numerical packings are created with various numbers of particles. Key microscopic properties are calibrated sequentially by comparing stress-strain curves from crushing experimental data. Comparing simulations with experiments also allows for the evaluation of crack propagation, fracture energy, and strength. The crack propagation during Brazilian experimental tests is monitored with digital image correlation (DIC). Simulations and experiments reveal three distinct types of fracture. The crack may spread throughout the aggregate, at the aggregate-matrix interface, or throughout the matrix.
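For context, the splitting (indirect) tensile strength conventionally extracted from a Brazilian test on a disc of diameter $D$ and thickness $t$ at peak load $P$ follows the standard relation below (a textbook formula, not a value reported in this work):

```latex
\sigma_t = \frac{2P}{\pi D t}
```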

Keywords: refractory composite, fracture mechanics, crack propagation, DEM

13607 Development of a Matlab® Program for the Bi-Dimensional Truss Analysis Using the Stiffness Matrix Method

Authors: Angel G. De Leon Hernandez

Abstract:

A structure is defined as a physical system or, in certain cases, an arrangement of connected elements, capable of bearing certain loads. Structures are present in every part of daily life, e.g., in the design of buildings, vehicles and mechanisms. The main goal of a structural designer is to develop a secure, aesthetic and maintainable system, considering the constraints imposed in every case. With the advances in technology during the last decades, the capability of solving engineering problems has increased enormously. Nowadays computers play a critical role in structural analysis; unfortunately, for university students the vast majority of this software is inaccessible due to the high complexity and cost it represents, even when the software manufacturers offer student versions. This is exactly the motivation behind the idea of developing a more accessible and easy-to-use computing tool. The program is designed as a tool for university students enrolled in courses related to structural analysis and design, as a complementary instrument to achieve a better understanding of this area and to avoid tedious calculations. The program can also be useful for graduated engineers in the field of structural design and analysis. A graphical user interface is included in the program to make it even simpler to operate and to understand the information requested and the obtained results. The present document includes the theoretical basis on which the program solves the structural analysis, the logical path followed in order to develop the program, the theoretical results, a discussion of the results, and the validation of those results.
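A minimal Python sketch of the direct stiffness assembly that such a program implements is given below (the original tool is a Matlab program with a GUI; this is only an illustrative two-bar example with assumed geometry, section and load):

```python
import numpy as np

# Simple 2-bar plane truss: nodes 0 and 1 are pinned supports, node 2 is loaded
nodes = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0]])   # x, y coordinates [m]
elements = [(0, 2), (1, 2)]                               # node connectivity
E, A = 200e9, 1e-4                                        # Young's modulus [Pa], area [m^2] (assumed)

ndof = 2 * len(nodes)
K = np.zeros((ndof, ndof))
for n1, n2 in elements:
    dx, dy = nodes[n2] - nodes[n1]
    L = np.hypot(dx, dy)
    c, s = dx / L, dy / L
    # 4x4 element stiffness matrix in global coordinates
    k = (E * A / L) * np.array([[ c*c,  c*s, -c*c, -c*s],
                                [ c*s,  s*s, -c*s, -s*s],
                                [-c*c, -c*s,  c*c,  c*s],
                                [-c*s, -s*s,  c*s,  s*s]])
    dofs = [2*n1, 2*n1+1, 2*n2, 2*n2+1]
    K[np.ix_(dofs, dofs)] += k                            # assemble into the global matrix

F = np.zeros(ndof); F[5] = -10e3                          # 10 kN downward load at node 2
free = [4, 5]                                             # only node 2 is free (pins at nodes 0, 1)
u = np.zeros(ndof)
u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
print("displacements at node 2 [m]:", u[4], u[5])
```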

Keywords: stiffness matrix method, structural analysis, Matlab® applications, programming

13606 Forest Fire Burnt Area Assessment in a Part of West Himalayan Region Using Spectral Unmixing Method and Assess the Extent and Severity of the Affected Area Using Neural Network Approach

Authors: Sunil Chandra, Triparna Barman, Vikas Gusain, Himanshu Rawat

Abstract:

Forest fires are a recurrent phenomenon in the Himalayan region owing to the presence of vulnerable forest types, topographical gradients, climatic weather conditions, and anthropogenic pressure. The present study focuses on the identification of forest fire-affected areas in a small part of the West Himalayan region using a differential normalized burn ratio method and spectral unmixing methods. The study area has rugged terrain with sub-tropical pine forest, montane temperate forest, and sub-alpine forest and scrub. The major cause of fires in this region is anthropogenic, with human-induced fires set for obtaining fresh leaves, scaring wild animals to protect agricultural crops, grazing practices within the reserved forest, and cooking and other purposes. The fires caused by the above reasons affect a large area on the ground, necessitating its precise estimation for further management and policy making. In the present study, two approaches have been used for the burnt area analysis. The first approach uses a differential normalized burn ratio (dNBR) index based on burn ratio values generated from the Short Wave Infrared (SWIR) and Near Infrared (NIR) bands of a Sentinel-2A image. The results of the dNBR have been compared with the outputs of the spectral unmixing methods. It was found that the dNBR is able to produce good results in fire-affected areas with a homogeneous forest stratum and slopes of less than 5 degrees. However, in rugged terrain, where the landscape is largely shaped by topographical variations, vegetation types, and tree density, the results may be strongly influenced by the effects of topography, complexity in tree composition, fuel load composition, and soil moisture. Hence, burnt area assessment under such variations may not be effectively carried out using the dNBR approach commonly followed for burnt area assessment over large areas. Therefore, another approach attempted in the present study utilizes a spectral unmixing method in which each individual pixel is tested before assigning an information class to it. The method uses a neural network approach utilizing Sentinel-2A bands. The training and testing data are generated from the Sentinel-2A data and the national field inventory, which are further used for generating outputs using ML tools. The analysis of the results indicates that the fire-affected regions and their severity can be better estimated in rugged terrain using spectral unmixing methods, which have the capability to resolve the noise in the data and can classify individual pixels into the precise burnt/unburnt class.
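For reference, the differenced normalized burn ratio computation used in the first approach reduces to the sketch below, applied to pre- and post-fire NIR and SWIR reflectance arrays; the specific Sentinel-2 band choice and the severity threshold are assumptions for illustration, not settings from the study.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance arrays."""
    return (nir - swir) / (nir + swir + 1e-10)

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR: higher positive values indicate more severely burned pixels."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# Toy 2x2 reflectance tiles standing in for Sentinel-2 bands (e.g. B8A as NIR, B12 as SWIR)
nir_pre   = np.array([[0.40, 0.42], [0.38, 0.41]])
swir_pre  = np.array([[0.15, 0.14], [0.16, 0.15]])
nir_post  = np.array([[0.18, 0.40], [0.17, 0.39]])
swir_post = np.array([[0.30, 0.15], [0.28, 0.16]])

severity = dnbr(nir_pre, swir_pre, nir_post, swir_post)
burned = severity > 0.27          # example threshold (assumed), not from the study
print(severity.round(2), "\nburned mask:\n", burned)
```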

Keywords: dNBR, spectral unmixing, neural network, forest stratum

13605 A Spatial Hypergraph Based Semi-Supervised Band Selection Method for Hyperspectral Imagery Semantic Interpretation

Authors: Akrem Sellami, Imed Riadh Farah

Abstract:

Hyperspectral imagery (HSI) typically provides a wealth of information captured in a wide range of the electromagnetic spectrum for each pixel in the image. Hence, a pixel in HSI is a high-dimensional vector of intensities with a large spectral range and a high spectral resolution. Therefore, semantic interpretation is a challenging task of HSI analysis. In this paper, we focus on object classification as HSI semantic interpretation. However, HSI classification still faces some issues, among which are the following: the spatial variability of spectral signatures, the high number of spectral bands, and the high cost of true sample labeling. The high number of spectral bands combined with the low number of training samples thus poses the problem of the curse of dimensionality. In order to resolve this problem, we propose to introduce a dimensionality reduction step to improve the classification of HSI. The presented approach is a semi-supervised band selection method based on a spatial hypergraph embedding model that represents higher-order relationships, with different weights for the spatial neighbors of the centroid pixel. This semi-supervised band selection has been developed to select useful bands for object classification. The presented approach is evaluated on AVIRIS and ROSIS HSIs and compared to other dimensionality reduction methods. The experimental results demonstrate the efficacy of our approach compared to many existing dimensionality reduction methods for HSI classification.

Keywords: dimensionality reduction, hyperspectral image, semantic interpretation, spatial hypergraph

13604 Dielectric, Electrical and Magnetic Properties of Elastomer Filled with in situ Thermally Reduced Graphene Oxide and Spinel Ferrite NiFe₂O₄ Nanoparticles

Authors: Raghvendra Singh Yadav, Ivo Kuritka, Jarmila Vilcakova, Pavel Urbanek, Michal Machovsky, David Skoda, Milan Masar

Abstract:

The elastomer nanocomposites were synthesized by solution mixing method with an elastomer as a matrix and in situ thermally reduced graphene oxide (RGO) and spinel ferrite NiFe₂O₄ nanoparticles as filler. Spinel ferrite NiFe₂O₄ nanoparticles were prepared by the starch-assisted sol-gel auto-combustion method. The influence of filler on the microstructure, morphology, dielectric, electrical and magnetic properties of Reduced Graphene Oxide-Nickel Ferrite-Elastomer nanocomposite was characterized by X-ray diffraction, Raman spectroscopy, Fourier transform infrared spectroscopy, field emission scanning electron microscopy, X-ray photoelectron spectroscopy, the Dielectric Impedance analyzer, and vibrating sample magnetometer. Scanning electron microscopy study revealed that the fillers were incorporated in elastomer matrix homogeneously. The dielectric constant and dielectric tangent loss of nanocomposites was decreased with the increase of frequency, whereas, the dielectric constant increases with the addition of filler. Further, AC conductivity was increased with the increase of frequency and addition of fillers. Furthermore, the prepared nanocomposites exhibited ferromagnetic behavior. This work was supported by the Ministry of Education, Youth and Sports of the Czech Republic – Program NPU I (LO1504).

Keywords: polymer-matrix composites, nanoparticles as filler, dielectric property, magnetic property

13603 The Effect of Additive Acid on the Phytoremediation Efficiency

Authors: G. Hosseini, A. Sadighzadeh, M. Rahimnejad, N. Hosseini, Z. Jamalzadeh

Abstract:

Metal pollutants, especially heavy metals from anthropogenic sources such as metallurgical industry waste (mining, smelting, casting) and nuclear fuel production (mining, concentrate production and uranium processing), end up contaminating the environment (water and soil) and pose a risk to human health around the facilities of this type of industrial activity. There are different methods that can be used to remove these contaminants from water and soil, but they are very expensive and time-consuming. In some cases, people have been forced to leave the area and the decontamination is not done. For example, in the case of the Chernobyl accident, an area of 30 km around the plant was emptied of human life. A very efficient and cost-effective method for decontamination of soil and water is phytoremediation. In this method, plants, preferably native plants which are more adapted to the regional climate, are used. In this study, three types of plants, alfalfa, sunflower and wheat, were used for barium decontamination. Alfalfa and sunflower did not grow well enough in the Saghand mine soil sample. This can be due to the non-native origin of these plants. However, the growth of wheat in the Saghand uranium mine soil sample was satisfactory. In this study, we have investigated the effect of four acids, namely nitric acid, oxalic acid, acetic acid and citric acid, on the removal efficiency of barium by wheat. Our results indicate an increase in barium absorption in the presence of citric acid in the soil. In this paper, we present our research and laboratory results.

Keywords: phytoremediation, heavy metal, wheat, soil

13602 Flow Field Optimization for Proton Exchange Membrane Fuel Cells

Authors: Xiao-Dong Wang, Wei-Mon Yan

Abstract:

The flow field design in the bipolar plates affects the performance of the proton exchange membrane (PEM) fuel cell. This work adopted a combined optimization procedure, including a simplified conjugate-gradient method and a completely three-dimensional, two-phase, non-isothermal fuel cell model, to look for an optimal flow field design for a single serpentine fuel cell of size 9×9 mm with five channels. For the direct solution, the two-fluid method was adopted to incorporate the heat effects using energy equations for the entire cell. The model assumes that the system is steady, the inlet reactants are ideal gases, the flow is laminar, and the porous layers such as the diffusion layer, catalyst layer and PEM are isotropic. The model includes continuity, momentum and species equations for the gaseous species, liquid water transport equations in the channels, gas diffusion layers and catalyst layers, the water transport equation in the membrane, and electron and proton transport equations. The Butler-Volmer equation was used to describe the electrochemical reactions in the catalyst layers. The cell output power density Pcell is maximized subject to an optimal set of channel heights, H1-H5, and channel widths, W2-W5. The basic case with all channel heights and widths set at 1 mm yields Pcell = 7260 W/m². The optimal design displays a tapered characteristic for channels 1, 3 and 4, and a diverging characteristic in height for channels 2 and 5, producing Pcell = 8894 W/m², an increment of about 22.5%. The reduced heights of channels 2-4 significantly increase the sub-rib convection, which effectively removes liquid water and enhances oxygen transport in the gas diffusion layer. The final diverging channel minimizes the leakage of fuel to the outlet via sub-rib convection from channel 4 to channel 5. A near-optimal design that is easily manufactured without a huge loss in cell performance is also tested. The use of a straight final channel of 0.1 mm height led to a 7.37% power loss, while the design with all channel widths set to 1 mm and the optimal channel heights obtained above yields only a 1.68% loss of current density. The presence of a final, diverging channel has a greater impact on cell performance than the fine adjustment of channel width under the simulation conditions studied herein.
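A generic sketch of the kind of simplified conjugate-gradient update used in such inverse-design loops is shown below; the quadratic surrogate stands in for the full 3D two-phase CFD evaluation of Pcell, and the step size and Fletcher-Reeves-type direction update are a common textbook variant rather than the exact scheme of the paper.

```python
import numpy as np

def cell_power(x):
    """Surrogate for P_cell(channel heights); the real evaluation would be a CFD run."""
    x_opt = np.array([0.6, 1.2, 0.8, 0.5, 1.4])          # invented optimum [mm]
    return 9000.0 - np.sum(50.0 * (x - x_opt) ** 2)

def grad(f, x, h=1e-4):
    """Finite-difference gradient (each component costs extra objective evaluations)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = np.ones(5)                       # initial design: all channel heights = 1 mm
g_old = grad(cell_power, x)
d = g_old.copy()                     # first search direction = steepest ascent
for it in range(50):
    x = np.clip(x + 0.002 * d, 0.1, 2.0)                 # fixed step, bounded geometry
    g = grad(cell_power, x)
    gamma = (g @ g) / (g_old @ g_old + 1e-12)            # conjugation coefficient
    d = g + gamma * d
    g_old = g
print("design [mm]:", x.round(3), "P_cell:", round(cell_power(x), 1))
```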

Keywords: optimization, flow field design, simplified conjugate-gradient method, serpentine flow field, sub-rib convection

13601 Ergosterol Biosynthesis: Non-Conventional Method for Improving Process

Authors: Madalina Postaru, Alexandra Tucaliuc, Dan Cascaval, Anca Irina Galaction

Abstract:

Ergosterol (ergosta-5,7,22-trien-3β-ol) is the precursor of vitamin D2 (ergocalciferol), known as provitamin D2 as it is converted under UV radiation to this vitamin. The natural sources of ergosterol are mainly yeasts (Saccharomyces sp., Candida sp.), but it can also be found in fungi (Claviceps sp.) or plants (orchids). As ergosterol mainly accumulates in yeast cell membranes, especially in free form in the plasma membrane, and the chemical synthesis of ergosterol does not represent an efficient method for its production, this study aimed to analyze the influence of aeration efficiency on ergosterol production by S. cerevisiae in batch and fed-batch fermentations, considering different levels of mixing intensity, aeration rate, and n-dodecane concentration. Our previous studies on ergosterol production by S. cerevisiae in batch and fed-batch fermentation systems indicated that the addition of n-dodecane led to an increase of almost 50% in this sterol's concentration, the highest productivity being reached for the fed-batch process. The experiments were carried out in a laboratory stirred bioreactor with computer-controlled and recorded parameters. In the batch fermentation system, the study indicated that the oxygen mass transfer coefficient, kLa, is amplified about 3 times by increasing the volumetric concentration of n-dodecane from 0 to 15%. Moreover, the increase in dissolved oxygen concentration obtained by adding n-dodecane leads to a 3.5-fold reduction in the amount of alcohol produced. In the fed-batch fermentation process, the positive influence of the hydrocarbon on the oxygen transfer rate is amplified mainly at its higher concentration levels, as a result of the increased amount of yeast cells. Thus, by varying the n-dodecane concentration from 0 to 15% vol., the increase in the kLa value becomes more important than for the batch fermentation, reaching 4 times.

Keywords: ergosterol, yeast fermentation, n-dodecane, oxygen-vector

13600 Biochemical Effects of Low Dose Dimethyl Sulfoxide on HepG2 Liver Cancer Cell Line

Authors: Esra Sengul, R. G. Aktas, M. E. Sitar, H. Isan

Abstract:

Hepatocellular carcinoma (HCC) is a hepatocellular tumor commonly found on the surface of the chronically diseased liver. HepG2 is the most commonly used cell type in HCC studies. The main proteins remaining in the blood serum after separation of plasma fibrinogen are albumin and globulin. The fact that albumin reflects hepatocellular damage and the synthesis capacity of the liver was the main reason for our use of it. Alpha-fetoprotein (AFP) is an albumin-like structural embryonic globulin found in the embryonic cortex, cord blood, and fetal liver. It has been used as a marker in the follow-up of tumor growth in various malignant tumors and of the efficacy of surgical and medical treatments, so it is a good protein to examine alongside albumin. We have seen the morphological changes caused by dimethyl sulfoxide (DMSO) on HepG2 and decided to investigate its biochemical effects. We examined the effects of low doses of DMSO, which is used in cell cultures, on albumin, AFP and total protein. Material and Method: Cell culture: The medium was prepared using Dulbecco's Modified Eagle Medium (DMEM), fetal bovine serum (FBS), phosphate-buffered saline, and trypsin maintained at -20 °C. Fixation of cells: HepG2 cells, which had developed appropriately by the end of the first week, were fixed with acetone. We stored our cells in PBS at +4 °C until the fixation was completed. Area calculation: The areas of the cells were calculated in ImageJ (IJ). Microscopic examination: The examination was performed with a Zeiss inverted microscope. Photographs were taken at 40x, 100x, 200x and 400x. Biochemical tests: Total protein and albumin in the serum samples were analyzed by spectrophotometric methods in an autoanalyzer, and alpha-fetoprotein was analyzed by the ECLIA method. Results: When the liver cancer cells were cultured in medium with 1% DMSO for 4 weeks, a significant difference was observed compared with the control group. As a result, we have seen that DMSO can be used as an important agent in the treatment of liver cancer. Cell areas were reduced in the DMSO group compared to the control group, and the confluency ratio increased. The ability to form spheroids was also significantly higher in the DMSO group. Alpha-fetoprotein was lower than the values of a typical liver cancer patient, and the total protein amount increased to the reference range of a normal individual. Because the albumin value of the sample was below the measurable range, numerical results could not be obtained for it in the biochemical examinations. We interpret all these results as indicating that DMSO can act as a supportive agent. Since no single parameter was sufficient on its own, we used three parameters, and the results were positive when compared in parallel with the values of a normal healthy individual. We hope to extend the study further by adding new parameters and genetic analyses, by increasing the number of samples, and by using DMSO as an adjunct agent in the treatment of liver cancer.

Keywords: hepatocellular carcinoma, HepG2, dimethyl sulfoxide, cell culture, ELISA

13599 Use Multiphysics Simulations and Resistive Pulse Sensing to Study the Effect of Metal and Non-Metal Nanoparticles in Different Salt Concentration

Authors: Chun-Lin Chiang, Che-Yen Lee, Yu-Shan Yeh, Jiunn-Haur Shaw

Abstract:

Wafer fabrication is a critical part of the semiconductor process, as the finest linewidth continues to shrink with improving technology and structures develop from 2D towards 3D. The nanoparticles contained in the slurry, or in the ultrapure water used for cleaning, have a large influence on the manufacturing process. Therefore, the semiconductor industry hopes to find a viable method for on-line detection of nanoparticle size and concentration. Resistive pulse sensing technology is one of the methods that may address this need. It is known that the properties of nanoparticles differ significantly from the properties of the same materials at larger length scales. We therefore want to clarify the translocation dynamics of metal and non-metal nanoparticles when the resistive pulse sensing technology is used. In this study, we use the finite element method with three governing equations to perform coupled multiphysics simulations. The Navier-Stokes equation describes the laminar motion, the Nernst-Planck equation describes the ion transport, and the Poisson equation describes the potential distribution in the flow channel. We explore the ion current changes caused by metal and non-metal nanoparticles passing through the nanochannel in electrolytes of different concentrations. The reliability of the simulation results was then verified by resistive pulse sensing tests. The existing results show that the lower the ion concentration, the greater the effect of the nanoparticles on the ion concentration in the nanochannel. The conductive spikes are correlated with the nanoparticle surface charge. It can therefore be concluded that, in the resistive pulse sensing technique, the ion concentration in the nanochannel and the nanoparticle properties are important for the translocation dynamics, and that they interact with each other.
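For reference, one common form of the three coupled governing equations named above (incompressible Navier-Stokes with an electric body force, Nernst-Planck ion transport with ideal-solution fluxes, and the Poisson equation for the potential) is written below; the exact formulation used in the simulations may differ in its details.

```latex
\begin{aligned}
&\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p + \mu \nabla^{2}\mathbf{u} - \rho_{e}\nabla\phi, \qquad \nabla\cdot\mathbf{u}=0,\\[4pt]
&\frac{\partial c_{i}}{\partial t}
  + \nabla\cdot\!\Big(-D_{i}\nabla c_{i} - \frac{z_{i}D_{i}F}{RT}\,c_{i}\nabla\phi + c_{i}\mathbf{u}\Big) = 0,\\[4pt]
&\nabla^{2}\phi = -\frac{\rho_{e}}{\varepsilon}, \qquad \rho_{e} = F\sum_{i} z_{i}c_{i}.
\end{aligned}
```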

Keywords: multiphysics simulations, resistive pulse sensing, nanoparticles, nanochannel

13598 Geosynthetic Reinforced Unpaved Road: Literature Study and Design Example

Authors: D. Jayalakshmi, S. S. Bhosale

Abstract:

This paper, in its first part, presents the state-of-the-art literature on design approaches for geosynthetic-reinforced unpaved roads. The literature starting from 1970, together with the critical appraisals of flexible pavement design by Giroud and Han (2004) and Jonathan Fannin (2006), is presented. A design example is then illustrated for Indian conditions. The example compares the results computed by Giroud and Han's (2004) design method with the Indian Roads Congress guidelines IRC SP 72-2015. The input data considered relate to the subgrade soil conditions of Maharashtra State in India. The unified soil classification of the subgrade soil is inorganic clay with high plasticity (CH), which is expansive, with a California bearing ratio (CBR) of 2% to 3%. The example covers the unreinforced case and geotextile reinforcement, varying the rut depth from 25 mm to 100 mm. The present results reveal that the base thickness for the unreinforced case from the IRC design catalogs is in good agreement with the Giroud and Han (2004) approach for rut depths in the range of 75 mm to 100 mm. Since the Giroud and Han (2004) method is applicable to both reinforced and unreinforced cases, the base thickness for the reinforced case was obtained for the Indian conditions using the same data, the appropriate Nc factor, and the same rut depth. From this trial, for a CBR of 2%, the base thickness reduction due to geotextile inclusion is 35%. For the CBR range of 2% to 5% with different geosynthetic stiffnesses, the reduction in base course thickness will be evaluated, and validation will be carried out with the full-scale accelerated pavement testing setup at the College of Engineering Pune (COE), India.

Keywords: base thickness, design approach, equation, full scale accelerated pavement set up, Indian condition

13597 Two-Dimensional Observation of Oil Displacement by Water in a Petroleum Reservoir through Numerical Simulation and Application to a Petroleum Reservoir

Authors: Ahmad Fahim Nasiry, Shigeo Honma

Abstract:

We examine two-dimensional oil displacement by water in a petroleum reservoir. The pore fluids are immiscible, and the porous medium is homogeneous and isotropic in the horizontal direction. Buckley-Leverett theory and a combination of the Laplacian and Darcy's law are used to study the fluid flow through the porous media, and the Laplacian that defines the dispersion and diffusion of fluid in the sand containing heavy oil is discussed. The reservoir is homogeneous in the horizontal direction, as expressed by the partial differential equation. The two main quantities observed are the water saturation and the pressure distribution in the reservoir, and they are evaluated for predicting oil recovery in two dimensions by a physical and mathematical simulation model. We review the numerical simulation that solves the difficult partial differential reservoir equations. Based on the numerical simulations, the saturation and pressure equations are solved by the iterative alternating direction implicit method and the iterative alternating direction explicit method, respectively, using the finite difference approximation. However, to better understand the displacement of oil by water and the amount of water dispersion in the reservoir, an interpolated contour line of the water distribution of the five-spot pattern, which provides an approximate solution that agrees well with the experimental results, is also presented. Finally, a computer program is developed to solve the equations for pressure and water saturation and to draw the pressure contour lines and water distribution contour lines for the reservoir.
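To make the role of the Buckley-Leverett theory concrete, a small sketch of the water fractional-flow function with Corey-type relative permeabilities is given below; the exponents, end-point values, and viscosities are illustrative assumptions, not reservoir data from this study.

```python
import numpy as np

def fractional_flow(Sw, Swc=0.2, Sor=0.2, mu_w=1.0e-3, mu_o=10.0e-3,
                    krw_max=0.4, kro_max=0.9, nw=2.0, no=2.0):
    """Buckley-Leverett water fractional flow f_w(S_w) with Corey relative permeabilities.
    All parameter values are illustrative assumptions."""
    S = np.clip((Sw - Swc) / (1.0 - Swc - Sor), 0.0, 1.0)   # normalized water saturation
    krw = krw_max * S ** nw                                  # water relative permeability
    kro = kro_max * (1.0 - S) ** no                          # oil relative permeability
    return 1.0 / (1.0 + (kro * mu_w) / (krw * mu_o + 1e-12))

Sw = np.linspace(0.2, 0.8, 7)
print(np.round(fractional_flow(Sw), 3))   # S-shaped curve governing the water front
```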

Keywords: numerical simulation, immiscible, finite difference, IADI, IDE, waterflooding

13596 Analysis of the Level of Production Failures by Implementing New Assembly Line

Authors: Joanna Kochanska, Dagmara Gornicka, Anna Burduk

Abstract:

The article examines the process of implementing a new assembly line in a manufacturing enterprise in the household appliances industry. At the initial stages of the project, it was decided that one of its foundations should be the concept of lean management. Because of that, eliminating as many errors as possible in the first phases of its functioning was emphasized. During the start-up of the line, all production losses were identified and documented (from serious machine failures, through any unplanned downtime, to micro-stops and quality defects). Over 6 weeks (the line start-up period), all errors resulting from problems in various areas were analyzed. These areas were, among others, production, logistics, quality, and organization. The aim of the work was to analyze the occurrence of production failures during the initial phase of starting up the line and to propose a method for determining their critical level during its full functionality. The repeatability of the production losses in various areas and at different levels at such an early stage of implementation was examined using the methods of statistical process control. Based on a Pareto analysis, the weakest points were identified in order to focus improvement actions on them. The next step was to examine the effectiveness of the actions undertaken to reduce the level of recorded losses. Based on the obtained results, a method for determining the critical failure level in the studied areas was proposed. The developed coefficient can be used as an alarm in case of an imbalance of production caused by an increased failure level in production and production-support processes during the period of standardized functioning of the line.
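As a schematic of how a statistical-process-control alarm threshold of the kind proposed can be derived, the sketch below computes p-chart control limits for daily failure fractions; the numbers are invented placeholders, not data from the studied line.

```python
import numpy as np

# Hypothetical daily records from the start-up period: units produced and failed units
produced = np.array([420, 450, 430, 460, 440, 455, 445, 435, 450, 440])
failures = np.array([ 38,  35,  41,  30,  28,  26,  24,  22,  21,  19])

p_bar = failures.sum() / produced.sum()          # average failure fraction
n_bar = produced.mean()                          # average daily sample size
sigma = np.sqrt(p_bar * (1 - p_bar) / n_bar)

ucl = p_bar + 3 * sigma                          # upper control limit = critical failure level
lcl = max(p_bar - 3 * sigma, 0.0)
print(f"p-bar = {p_bar:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")

# During standardized operation, a daily failure fraction above the UCL
# would trigger the alarm coefficient described above.
today = 33 / 430
print("alarm today:", today > ucl)
```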

Keywords: production failures, level of production losses, new production line implementation, assembly line, statistical process control

13595 Frequency Response of Complex Systems with Localized Nonlinearities

Authors: E. Menga, S. Hernandez

Abstract:

Finite Element Models (FEMs) are widely used in order to study and predict the dynamic properties of structures, and usually the prediction can be obtained with much more accuracy for a single component than for assemblies. Especially for structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies made of linear components joined together at interfaces. From a modelling and computational point of view, these types of joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs are able to run nonlinear analysis in the time domain. They treat the whole structure as nonlinear even if there is only one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology for obtaining the nonlinear frequency response of structures whose nonlinearities can be considered as localized sources is presented. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications and allows obtaining the Nonlinear Frequency Response Functions (NLFRFs) through an 'updating' process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and understanding the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the modal database is extracted and the linear response is calculated. Secondly, the nonlinear response is obtained through the NL SDMM, by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems. The first one is a two-DOF spring-mass-damper system, and the second example takes into account a full aircraft FE model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure, which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered as acting linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analyses and easier implementation of optimization procedures for the calibration of nonlinear models.
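A toy Python version of the two-DOF case mentioned above is sketched below: the linear FRF is updated with a first-harmonic equivalent stiffness of a cubic spring at one DOF, which mimics, in a much-simplified way, the idea of updating the linear behavior with a localized nonlinear modification; all numerical values and the describing-function approximation are assumptions, not the paper's formulation.

```python
import numpy as np

# 2-DOF linear system: mass, damping, stiffness matrices (assumed values)
M = np.diag([1.0, 1.0])
C = np.array([[0.4, -0.2], [-0.2, 0.4]])
K = np.array([[2.0e4, -1.0e4], [-1.0e4, 2.0e4]])
k3 = 5.0e8                      # localized cubic stiffness acting on DOF 0 (assumed)
F = np.array([10.0, 0.0])       # harmonic force amplitude on DOF 0

freqs = np.linspace(5.0, 40.0, 400)          # excitation frequencies [Hz]
X_nl = np.zeros((len(freqs), 2), dtype=complex)

for i, f in enumerate(freqs):
    w = 2.0 * np.pi * f
    Z_lin = K - (w ** 2) * M + 1j * w * C    # linear dynamic stiffness
    x = np.linalg.solve(Z_lin, F)            # linear response as starting guess
    for _ in range(50):                      # fixed-point update of the local nonlinearity
        k_eq = 0.75 * k3 * np.abs(x[0]) ** 2     # first-harmonic equivalent cubic stiffness
        dK = np.zeros_like(K); dK[0, 0] = k_eq
        x_new = np.linalg.solve(Z_lin + dK, F)   # "modified" (updated) response
        if np.max(np.abs(x_new - x)) < 1e-9:
            x = x_new
            break
        x = 0.5 * x + 0.5 * x_new            # relaxation for numerical stability
    X_nl[i] = x

print("peak |X1| over the band:", np.abs(X_nl[:, 0]).max())
```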

Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber

Procedia PDF Downloads 256
13594 A Delphi Study of Factors Affecting the Forest Biorefinery Development in the Pulp and Paper Industry: The Case of Bio-Based Products

Authors: Natasha Gabriella, Josef-Peter Schöggl, Alfred Posch

Abstract:

As a mature industry, the pulp and paper industry (PPI) possesses strengths stemming from its existing infrastructure, technological know-how, and abundant availability of biomass. However, the declining sales of wood-based products send a clear signal to the industry to transform its business model in order to increase profitability. With the emerging global attention on the bio-based and circular economy, coupled with the low price of fossil feedstock, the PPI has started to integrate biorefinery as a value-added business model to maintain its competitiveness. Nonetheless, biorefinery as an innovation exposes the PPI to several barriers, of which uncertainty about the most promising product is one of the major hurdles. This study aims to assess factors that affect the diffusion and development of forest biorefinery in the PPI, including drivers, barriers, advantages, and disadvantages, as well as the most promising bio-based products of forest biorefinery. The study examines the identified factors according to the layers of the business environment, namely the macro-environment, industry, and strategic-group level. In addition, an overview of the future state of the identified factors is elaborated in order to map the improvements necessary for implementing forest biorefinery. A two-phase Delphi method, comprising an online survey and interviews, is used to collect the empirical data for the study. The Delphi method is an effective communication tool for eliciting ideas from a group of experts and reaching a consensus when forecasting future trends. Drawing on a panel of 50 experts, the study reveals that influential factors are found in every layer of the PPI's business environment. The political dimension appears to have a significant influence in tackling the economic barrier while reinforcing the environmental and social benefits in the macro-environment. At the industry level, biomass availability appears to be a strength of the PPI, while knowledge gaps on technology and the market seem to be barriers. Consequently, cooperation with academia and the chemical industry has to be improved. Human resources issues are indicated as an important premise behind the preceding barrier, along with indications of the PPI's resistance towards biorefinery implementation as an innovation. Further, cellulose-based products are acknowledged for near-term product development, whereas lignin-based products are expected to gain importance in the longer term.
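The abstract does not state how consensus between the two Delphi phases was measured. As a purely illustrative sketch, one common convention is to rate each factor on a Likert scale in every round and to treat an interquartile range of at most 1 as consensus; the factor name, panel size, and ratings below are hypothetical.

```python
import numpy as np

# Hypothetical 5-point Likert ratings of one factor ("biomass availability") by 10 experts
# in two Delphi rounds; the real panel in the study comprised 50 experts.
round_1 = np.array([3, 4, 5, 2, 4, 5, 3, 4, 2, 5])
round_2 = np.array([4, 4, 5, 4, 4, 5, 4, 4, 3, 5])

def iqr(ratings):
    """Interquartile range, a frequently used Delphi consensus indicator."""
    q1, q3 = np.percentile(ratings, [25, 75])
    return q3 - q1

for name, ratings in [("round 1", round_1), ("round 2", round_2)]:
    print(f"{name}: median={np.median(ratings):.1f}, IQR={iqr(ratings):.2f}, "
          f"consensus={'yes' if iqr(ratings) <= 1 else 'no'}")
```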

Keywords: forest biorefinery, pulp and paper, bio-based product, Delphi method

Procedia PDF Downloads 261
13593 Numerical Modelling of Skin Tumor Diagnostics through Dynamic Thermography

Authors: Luiz Carlos Wrobel, Matjaz Hribersek, Jure Marn, Jurij Iljaz

Abstract:

Dynamic thermography has been clinically proven to be a valuable diagnostic technique for skin tumor detection as well as for other medical applications such as breast cancer diagnostics, diagnostics of vascular diseases, fever screening, and various dermatological applications. Thermography for medical screening can be done in two different ways: by observing the temperature response under steady-state conditions (passive or static thermography), or by inducing thermal stresses through cooling or heating of the observed tissue and measuring the thermal response during the recovery phase (active or dynamic thermography). The numerical modelling of heat transfer phenomena in biological tissue during dynamic thermography can aid the technique by improving process parameters or by estimating unknown tissue parameters from measured data. This paper presents a nonlinear numerical model of multilayer skin tissue containing a skin tumor, together with the thermoregulation response of the tissue during the cooling-rewarming processes of dynamic thermography. The model is based on the Pennes bioheat equation and solved numerically by using a subdomain boundary element method which treats the problem as axisymmetric. The paper includes computational tests and numerical results for Clark II and Clark IV tumors, comparing the models using constant and temperature-dependent thermophysical properties, which showed noticeable differences and highlighted the importance of using a local thermoregulation model.
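As context for the underlying bioheat model only, a minimal one-dimensional explicit finite-difference sketch of the Pennes equation during a cooling-rewarming protocol is given below. The paper itself solves an axisymmetric multilayer problem with a subdomain boundary element method and a local thermoregulation model, none of which is reproduced here; all tissue parameters are generic literature-style values, not those of the study.

```python
import numpy as np

# 1-D explicit finite-difference sketch of the Pennes bioheat equation:
# rho*c*dT/dt = k*d2T/dx2 + rho_b*c_b*w_b*(T_a - T) + q_m
L, N = 0.01, 101                     # tissue depth [m], grid points
dx = L / (N - 1)
k = 0.5                              # thermal conductivity [W/m/K]
rho, c = 1050.0, 3600.0              # tissue density [kg/m^3], specific heat [J/kg/K]
rho_b, c_b = 1060.0, 3800.0          # blood density and specific heat
w_b = 1.0e-3                         # blood perfusion rate [1/s]
q_m = 400.0                          # metabolic heat generation [W/m^3]
T_a = 37.0                           # arterial blood temperature [C]

alpha = k / (rho * c)
dt = 0.4 * dx**2 / alpha             # stable time step for the explicit scheme
T = np.full(N, 37.0)                 # initial tissue temperature

def step(T, T_surface):
    """Advance the temperature field by one time step with fixed boundary temperatures."""
    Tn = T.copy()
    Tn[0] = T_surface                # skin surface (imposed by cooling/rewarming)
    Tn[-1] = T_a                     # body-core side
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    perfusion = rho_b * c_b * w_b * (T_a - T[1:-1])
    Tn[1:-1] = T[1:-1] + dt * (k * lap + perfusion + q_m) / (rho * c)
    return Tn

# 60 s of surface cooling at 15 C, then rewarming with the surface exposed to 25 C ambient.
for _ in np.arange(0.0, 60.0, dt):
    T = step(T, 15.0)
for _ in np.arange(0.0, 120.0, dt):
    T = step(T, 25.0)
print("surface temperature after rewarming phase [C]:", round(T[1], 2))
```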

Keywords: boundary element method, dynamic thermography, static thermography, skin tumor diagnostic

Procedia PDF Downloads 92
13592 Discourse Analysis: Where Cognition Meets Communication

Authors: Iryna Biskub

Abstract:

The interdisciplinary approach to modern linguistic studies is exemplified by the merging of various research methods, which sometimes causes complications related to the verification of research results. This methodological confusion can be resolved by creating new techniques of linguistic analysis that combine several scientific paradigms. Modern linguistics has developed productive and efficient methods for investigating cognitive and communicative phenomena, of which language is the central issue. In the field of discourse studies, one of the best examples of such research methods is Critical Discourse Analysis (CDA). CDA can be viewed both as a method of investigation and as a critical multidisciplinary perspective. In CDA, the position of the scholar is crucial, as it exemplifies his or her social and political convictions. The generally accepted approach to obtaining scientifically reliable results is to use a special well-defined scientific method for researching a particular type of language phenomenon: cognitive methods are applied to the exploration of cognitive aspects of language, whereas communicative methods are thought to be relevant only for the investigation of the communicative nature of language. In recent decades, discourse as a sociocultural phenomenon has been the focus of careful linguistic research. The very concept of discourse represents an integral unity of the cognitive and communicative aspects of human verbal activity. Since a human being is never able to discriminate between the cognitive and communicative planes of discourse communication, it does not make much sense to apply cognitive and communicative methods of research in isolation. It is possible to modify the classical CDA procedure by mapping human cognitive procedures onto the strategic communicative planning of discourse communication. The analysis of the electronic petition 'Block Donald J Trump from UK entry. The signatories believe Donald J Trump should be banned from UK entry' (584,459 signatures) and the parliamentary debates on it has demonstrated the ability to map cognitive and communicative levels in the following way: the strategy of discourse modeling (communicative level) overlaps with the extraction of semantic macrostructures (cognitive level); the strategy of discourse management overlaps with the analysis of local meanings in discourse communication; and the strategy of cognitive monitoring of the discourse overlaps with the formation of attitudes and ideologies at the cognitive level. Thus, the experimental data have shown that it is possible to develop a new complex methodology of discourse analysis, where cognition would meet communication, both metaphorically and literally. The same approach may prove productive for the creation of computational models of human-computer interaction, where the automatic generation of a particular type of discourse could be based on the rules of strategic planning involving the cognitive models of CDA.

Keywords: cognition, communication, discourse, strategy

Procedia PDF Downloads 237