Search results for: multicolor magnetic particle imaging
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3868


358 Numerical Simulation on Airflow Structure in the Human Upper Respiratory Tract Model

Authors: Xiuguo Zhao, Xudong Ren, Chen Su, Xinxi Xu, Fu Niu, Lingshuai Meng

Abstract:

Respiratory diseases such as asthma, emphysema, and bronchitis are connected with air pollution, and their incidence tends to increase, which may be attributable to the deposition of toxic aerosols in the human upper respiratory tract or in the bifurcations of the lung. Therapy for these diseases mostly uses pharmaceuticals in aerosol form delivered into the upper respiratory tract or the lung. Understanding the airflow structures in the human upper respiratory tract plays a very important role in analyzing the “filtering” effect in the pharynx/larynx and in obtaining correct air-particle inlet conditions for the lung. Numerical simulation based on CFD (Computational Fluid Dynamics) technology has particular advantages for studying airflow structure in the human upper respiratory tract. In this paper, a representative model of the human upper respiratory tract was built, and CFD was used to investigate the air movement characteristics within it. The airflow movement characteristics, the effect of the airflow on the shear stress distribution, and the probability of wall injury caused by the shear stress are discussed. Experimentally validated computational fluid-aerosol dynamics results showed the following: airflow separation appears near the outer wall of the pharynx and the trachea; a high-velocity zone is created near the inner wall of the trachea; the airflow splits at the divider, and a new boundary layer is generated at the inner wall downstream of the bifurcation, with high velocity near the inner wall of the trachea; and the maximum velocity appears at the exterior of the boundary layer. The secondary swirls and the axial velocity distribution result in high shear stress acting on the inner wall of the trachea and the bifurcation, finally leading to inner-wall injury.
Increasing breathing intensity increases the shear stress acting on the inner wall of the trachea and the bifurcation. If a person maintains high breathing intensity for a long time, not only does the capacity of the trachea and bifurcation to transport and regulate gas decline, but the probability of wall strain and tissue injury also increases.
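As a minimal numerical sketch of the wall-shear-stress quantity discussed above (not the paper's CFD solver), the Newtonian relation tau_w = mu * du/dy can be evaluated from a near-wall velocity and a wall distance; the numbers below are illustrative assumptions, not values from the study.

```python
# First-order estimate of wall shear stress in a Newtonian fluid:
# tau_w = mu * du/dy, the quantity a CFD solver evaluates at each wall face.
MU_AIR = 1.81e-5  # dynamic viscosity of air at ~20 C, in Pa*s

def wall_shear_stress(u_near_wall, dy):
    """Shear stress (Pa) from velocity u (m/s) at distance dy (m) from a no-slip wall."""
    return MU_AIR * u_near_wall / dy

# Hypothetical near-wall sample: 2 m/s measured 0.1 mm from the tracheal wall
tau = wall_shear_stress(2.0, 1e-4)  # about 0.36 Pa
```

A full simulation would take the gradient from the resolved boundary-layer profile; this illustrates only the constitutive relation.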

Keywords: airflow structure, computational fluid dynamics, human upper respiratory tract, wall shear stress, numerical simulation

Procedia PDF Downloads 218
357 Single and Sequential Extraction for Potassium Fractionation and Nano-Clay Flocculation Structure

Authors: Chakkrit Poonpakdee, Jing-Hua Tzen, Ya-Zhen Huang, Yao-Tung Lin

Abstract:

Potassium (K) is a known macronutrient and an essential element for plant growth. Single-leaching and modified sequential extraction schemes have been developed to estimate the relative phase associations of soil samples. The sequential extraction process is a step toward analyzing the partitioning of metals affected by environmental conditions, but it is not a tool for estimating K bioavailability. The traditional single-leaching method, by contrast, has long been used to classify K speciation according to its availability to plants and to set potash fertilizer recommendation rates. Clay minerals in soil are a factor controlling soil fertility, and the change in their micro-structure under various environmental conditions (i.e., swelling or shrinking) can be characterized using Transmission X-Ray Microscopy (TXM). The objectives of this study were to (1) compare the distribution of K speciation between the single-leaching and sequential extraction processes and (2) determine the clay particle flocculation structure before and after suspension with K+ using TXM. Four tropical soil samples were selected: farming without K fertilizer (10 years), farming with long-term K fertilization (10 years; 168-240 kg K2O ha-1 year-1), red soil (450-500 kg K2O ha-1 year-1), and forest soil. The results showed that the amounts of K speciation by the single-leaching method were, in decreasing order: mineral K, HNO3 K, non-exchangeable K, NH4OAc K, exchangeable K, and water-soluble K. The sequential extraction process indicated that most K fractions in soil were associated with the residual, organic matter, Fe or Mn oxide, and exchangeable fractions; the K fraction associated with carbonate was not detected in the tropical soil samples. The soil under long-term K fertilization and the red soil had higher exchangeable K than the soil farmed long-term without K fertilizer and the forest soil.
The results indicated that one way to increase available K (water-soluble and exchangeable K) is to apply K fertilizer together with organic fertilizer. Two-dimensional TXM images of clay particles suspended with K+ show that the clay minerals aggregated into closed-void cellular networks. The porous cellular structure of the soil aggregates had larger empty voids in 1 M KCl solution than in 0.025 M KCl and in deionized water, respectively. TXM nanotomography is a new technique that can be useful in this field as a tool for better understanding clay mineral micro-structure.

Keywords: potassium, sequential extraction process, clay mineral, TXM

Procedia PDF Downloads 264
356 Genetic Improvement Potential for Wood Production in Melaleuca cajuputi

Authors: Hong Nguyen Thi Hai, Ryota Konda, Dat Kieu Tuan, Cao Tran Thanh, Khang Phung Van, Hau Tran Tin, Harry Wu

Abstract:

Melaleuca cajuputi is a moderately fast-growing species and is considered a multi-purpose tree, providing fuelwood, piles and frame poles in construction, leaf essential oil, and honey. It occurs in Australia, Papua New Guinea, and South-East Asia. M. cajuputi plantations can be harvested on 6-7 year rotations for wood products. Its timber can also be used for pulp and paper, fiber and particle board, quality charcoal, and potentially sawn timber. However, most reported M. cajuputi breeding programs have focused on oil production rather than wood production. In this study, a breeding program of M. cajuputi aimed at improving wood production was examined by estimating genetic parameters for growth (tree height, diameter at breast height (DBH), and volume), stem form, stiffness (modulus of elasticity, MOE), bark thickness, and bark ratio in a half-sib progeny trial of 80 families in the Mekong Delta of Vietnam. MOE is one of the key wood properties of interest to the wood industry. Wood stiffness was measured non-destructively and indirectly by acoustic velocity using a FAKOPP Microsecond Timer, a method notably unaffected by bark mass. Narrow-sense heritability for the seven traits ranged from 0.13 to 0.27 at age 7 years. MOE and stem form had positive genetic correlations with growth, while the negative correlation between bark ratio and growth was also favorable. Breeding for simultaneous improvement of multiple traits, namely faster growth with higher MOE and a reduced bark ratio, should therefore be possible in M. cajuputi. Index selection based on volume and MOE showed genetic gains of 31% in volume, 6% in MOE, and 13% in stem form. In addition, heritability and age-age genetic correlations for growth traits increased with time, and the optimal early selection age for growth of M. cajuputi based on DBH alone was 4 years.
Selective thinning increased heritability estimates, owing to a considerable reduction in phenotypic variation with little effect on genetic variation.
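The heritability and gain figures above can be illustrated with a minimal half-sib calculation; the variance components and selection intensity below are hypothetical inputs, not the trial's actual estimates.

```python
# Half-sib analysis sketch: narrow-sense heritability and expected genetic gain.
# For half-sib families, additive variance V_A is approximately 4x the
# between-family variance component.
import math

def narrow_sense_h2_half_sib(var_family, var_residual):
    """h^2 = V_A / V_P with V_A = 4 * var_family, V_P = var_family + var_residual."""
    var_phenotypic = var_family + var_residual
    var_additive = 4.0 * var_family
    return var_additive / var_phenotypic

def genetic_gain(selection_intensity, h2, var_family, var_residual):
    """Breeder's equation: delta_G = i * h^2 * sigma_P."""
    sigma_p = math.sqrt(var_family + var_residual)
    return selection_intensity * h2 * sigma_p

# Hypothetical variance components giving h^2 = 0.24, within the 0.13-0.27 range
h2 = narrow_sense_h2_half_sib(var_family=1.2, var_residual=18.8)
gain = genetic_gain(selection_intensity=1.4, h2=h2, var_family=1.2, var_residual=18.8)
```

The thinning effect noted above enters through var_residual: removing poor phenotypes shrinks phenotypic variance and so inflates the h^2 estimate.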

Keywords: acoustic velocity, age-age correlation, bark thickness, heritability, Melaleuca cajuputi, stiffness, thinning effect

Procedia PDF Downloads 152
355 Estimation of Hydrogen Production from PWR Spent Fuel Due to Alpha Radiolysis

Authors: Sivakumar Kottapalli, Abdesselam Abdelouas, Christoph Hartnack

Abstract:

Spent nuclear fuel exposes the surrounding water to a mixed field of ionizing radiation. This field is generally dominated by gamma rays and a limited flux of fast neutrons, since the fuel cladding effectively attenuates beta and alpha particle radiation. However, a small fraction of spent nuclear fuel exhibits some degree of cladding penetration due to pitting corrosion or mechanical failure, and breaches in the cladding allow small volumes of water in the cask to be exposed to alpha and beta ionizing radiation. The safety of the transport of radioactive material is assured by the package complying with the IAEA Requirements for the Safe Transport of Radioactive Material SSR-6. It is of high interest to avoid the generation of hydrogen inside the cavity, which may lead to an explosive mixture, so the risk of hydrogen production, along with other radiolysis gases, should be analyzed for a typical spent fuel. This work aims to perform a realistic study of hydrogen production by radiolysis assuming the most penalizing initial conditions. It consists of calculating the radionuclide inventory of a pellet, taking into account burnup and decay. Westinghouse 17X17 PWR fuel was chosen, and data were analyzed for different sets of enrichment, burnup, irradiation cycles, and storage conditions. The inventory is the entry point for simulations of hydrogen production using the radiolysis kinetic models of MAKSIMA-CHEMIST. Dose rates decrease strongly within ~45 μm from the fuel surface toward the solution (water) in the case of alpha radiation, while the decrease is slower for beta radiation and slower still for gamma radiation. Calculations are carried out to obtain spectra as a function of time, and the radiation dose-rate profiles are taken as input for the iterative calculations. The hydrogen yield has been found to be around 0.02 mol/L.
Calculations have also been performed for a realistic scenario considering a capsule containing the spent fuel rod, and the hydrogen yield was re-evaluated accordingly. Experiments are in progress to validate the hydrogen production rate using a cyclotron at > 5 MeV (at ARRONAX, Nantes).
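The order of magnitude of such a radiolytic hydrogen yield can be sketched with a zeroth-order G-value calculation (the paper itself uses MAKSIMA-CHEMIST kinetics; the G-value and dose below are generic textbook-style assumptions, not the study's values).

```python
# Zeroth-order radiolytic H2 production from an absorbed dose and a
# radiolytic yield (G-value, in molecules per 100 eV of absorbed energy).
EV_PER_J = 1.0 / 1.602176634e-19   # electronvolts per joule
AVOGADRO = 6.02214076e23           # molecules per mole

def h2_concentration_mol_per_l(g_value_per_100ev, dose_gy):
    """H2 concentration in mol/L for water (density ~1 kg/L); dose in Gy = J/kg."""
    molecules_per_joule = g_value_per_100ev / 100.0 * EV_PER_J
    return molecules_per_joule * dose_gy / AVOGADRO

# Assumed G(H2) ~ 1.3 molecules/100 eV for alpha radiolysis of water and an
# assumed cumulative dose of 1e5 Gy: result is on the order of 0.01 mol/L,
# the same order as the 0.02 mol/L reported above.
c = h2_concentration_mol_per_l(1.3, 1e5)
```

Real kinetic codes track back-reactions and dozens of species; this only converts absorbed energy to product at a fixed yield.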

Keywords: radiolysis, spent fuel, hydrogen, cyclotron

Procedia PDF Downloads 499
354 Commercial Winding for Superconducting Cables and Magnets

Authors: Glenn Auld Knierim

Abstract:

Automated robotic winding of high-temperature superconductors (HTS) addresses the precision, efficiency, and reliability critical to commercializing these products. Today's HTS materials are mature and commercially promising but require manufacturing attention. Given the exaggerated rectangular cross-section (very thin by very wide), winding precision is critical to managing the stress that can crack the fragile ceramic superconductor (SC) layer and destroy its SC properties. Damage potential is highest during peak operation, where winding stress magnifies operational stress. Another challenge is that operational parameters, such as magnetic field alignment, affect design performance. Winding-process performance, including precision, capability for geometric complexity, and efficient repeatability, is required for commercial production of current HTS. Due to winding limitations, current HTS magnets are restricted to simple pancake configurations; HTS motors, generators, MRI/NMR, fusion, and other projects await robotically wound solenoid, planar, and spherical magnet configurations. As with conventional power cables, full transposition winding is required for long-length alternating current (AC) and pulsed-power cables. Robotic production is required for transposition, the periodic swapping of cable conductors into precise positions, which achieves the minimized reactance that power utilities require. A fully transposed SC cable, in theory, has no transmission length limit for AC and variable transient operation, since it has no resistance (a problem with conventional cables), negligible reactance (a problem for helically wound HTS cables), and no long-length manufacturing issues (a problem with both stamped and twisted stacked HTS cables). The Infinity Physics team is solving these manufacturing problems by developing automated manufacturing to produce the first reliable, utility-grade commercial SC cables and magnets.
Robotic winding machines combine mechanical and process design, specialized sensing and observers, and state-of-the-art optimization and control sequencing to carefully manipulate individual fragile SCs, especially HTS, into previously unattainable complex geometries with electrical geometry equivalent to that of commercially available conventional-conductor devices.

Keywords: automated winding manufacturing, high temperature superconductor, magnet, power cable

Procedia PDF Downloads 120
353 The Effect of Air Filter Performance on Gas Turbine Operation

Authors: Iyad Al-Attar

Abstract:

Air filters are widely used in gas turbine applications to ensure that a large mass flow (on the order of 500 kg/s) of clean air reaches the compressor. The continuous demand for high availability and reliability has highlighted the critical role of air filter performance in providing enhanced air quality. In addition to being challenged by different environments (tropical, coastal, hot), gas turbines confront a wide array of atmospheric contaminants with various concentrations and particle size distributions that lead to performance degradation and component deterioration. The role of air filters is therefore of paramount importance, since a fouled compressor can reduce gas turbine power output and availability by over 70% over the course of operation. Consequently, accurate prediction of filter performance is a critical tool in filter selection, given its role in minimizing the economic impact of outages. In fact, the actual performance of Efficient Particulate Air (EPA) filters used in gas turbines tends to deviate from the performance predicted by laboratory results. This experimental work investigates the initial pressure drop and fractional efficiency curves of full-scale pleated V-shaped EPA filters used globally in gas turbines. The investigation examined the effect of operational conditions such as flow rate (500 to 5000 m3/h) and design parameters such as pleat count (28, 30, 32, and 34 pleats per 100 mm). The work highlights the underlying reasons for the reduction in filter permeability with increasing flow rate and pleat density. The surface-area losses of the filtration media are due to one or a combination of the following effects: pleat crowding, deflection of the entire pleated panel, pleat distortion at the pleat corners, and/or compression of the filtration medium.
This paper also demonstrates that increasing the flow rate has a more pronounced effect on filter performance than increasing the pleat density. The work suggests that a valid comparison of pleat densities should be based on the effective surface area, namely the area that actually participates in filtration, and not the total surface area the pleat density provides. Throughout this study, a single optimal pleat count satisfying both the initial pressure drop and the efficiency requirements did not necessarily exist.
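The permeability discussion above can be illustrated with a Darcy-law estimate of the clean (initial) pressure drop, in which a larger effective filtration area lowers the face velocity through the media; all parameter values below are illustrative assumptions, not measurements from this work.

```python
# Darcy-law sketch of the initial pressure drop across a pleated filter panel:
# dp = mu * t * U_media / k, where the media face velocity U_media = Q / A_eff
# falls as the effective (participating) filtration area grows.
MU_AIR = 1.81e-5  # dynamic viscosity of air, Pa*s

def pressure_drop_pa(flow_m3_h, area_eff_m2, thickness_m, permeability_m2):
    """Clean pressure drop (Pa) for volumetric flow through media of given
    thickness and Darcy permeability, using only the effective area."""
    u_media = (flow_m3_h / 3600.0) / area_eff_m2  # face velocity, m/s
    return MU_AIR * thickness_m * u_media / permeability_m2

# Hypothetical panel: 3000 m3/h through 20 m2 of effective media,
# 0.5 mm thick, permeability 1e-11 m2
dp = pressure_drop_pa(flow_m3_h=3000, area_eff_m2=20.0,
                      thickness_m=5e-4, permeability_m2=1e-11)
```

Pleat crowding and panel deflection act here by shrinking area_eff_m2 below the geometric pleated area, which raises U_media and hence dp.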

Keywords: filter efficiency, EPA Filters, pressure drop, permeability

Procedia PDF Downloads 221
352 Treatment of Premalignant Lesions: Curcumin a Promising Non-Surgical Option

Authors: Heba A. Hazzah, Ragwa M. Farid, Maha M. A. Nasra, Mennatallah Zakria, Magda A. El Massik, Ossama Y. Abdallah

Abstract:

Introduction: Curcumin (Cur) is a polyphenol derived from the herbal remedy and dietary spice turmeric. It possesses diverse anti-inflammatory and anti-cancer properties following oral or topical administration. Buccal delivery of curcumin can be useful for both systemic and local disease treatment, including gingivitis, periodontal diseases, oral carcinomas, and precancerous oral lesions. Despite its high activity, its application is limited by low oral bioavailability, poor aqueous solubility, and instability. Aim: Preparation and characterization of curcumin solid lipid nanoparticles (SLN) with a high loading capacity, incorporated into a mucoadhesive gel for buccal application. Methodology: Curcumin was formulated as nanoparticles using different lipids, namely Gelucire 39/01, Gelucire 50/13, Precirol, and Compritol, with Poloxamer 407 as a surfactant. The SLN were dispersed in a mucoadhesive gel matrix for application to the buccal mucosa. All formulations were evaluated for drug content, entrapment efficiency, particle size, in vitro drug release by dialysis, ex vivo mucoadhesion, and ex vivo permeation using chicken buccal mucosa. Clinical evaluation was conducted on 15 patients with oral erythroplakia and erosive lichen planus. Results: Entrapment efficiency was high, reaching almost 90% with Gelucire 50/13. The Cur-SLN-loaded gel showed good adhesion and an in vivo residence time of 25 minutes, in addition to enhanced stability of the Cur powder. None of the formulations showed drug permeation; however, a significant amount of Cur was retained within the mucosal tissue. Pain and lesion sizes were significantly reduced upon topical treatment, and complete healing was observed after 6 weeks. Conclusion: These results open the way for pharmaceutical technology to optimize the use of this golden magical powder and get the best out of it.
In addition, the lack of local anti-inflammatory compounds with reduced side effects intensifies the importance of studying natural products for this purpose.
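The entrapment efficiency reported above is conventionally computed indirectly from the free (unentrapped) drug fraction; a minimal sketch with hypothetical assay numbers follows.

```python
# Indirect entrapment-efficiency calculation for drug-loaded nanoparticles:
# assay the free drug left in the dispersion medium and subtract from the
# total drug added.
def entrapment_efficiency_pct(total_drug_mg, free_drug_mg):
    """EE% = (total - free) / total * 100."""
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

# Hypothetical assay close to the ~90% the abstract reports for Gelucire 50/13
ee = entrapment_efficiency_pct(total_drug_mg=50.0, free_drug_mg=5.2)
```

The direct alternative, dissolving the particles and assaying the entrapped drug itself, gives the same quantity but requires complete particle disruption.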

Keywords: curcumin, erythroplakia, mucoadhesive, pain, solid lipid nanoparticles

Procedia PDF Downloads 431
351 The Ameliorative Effects of Nanoencapsulated Triterpenoids from Petri-Dish Cultured Antrodia cinnamomea on Reproductive Function of Diabetic Male Rats

Authors: Sabri Sudirman, Yuan-Hua Hsu, Zwe-Ling Kong

Abstract:

Male reproductive dysfunction is predominantly due to insulin resistance and hyperglycemia, which result in inflammation and oxidative stress. Nanotechnology provides an alternative approach to improving the bioavailability of natural active food ingredients. Therefore, the aims of this study were to investigate whether nanoencapsulation of triterpenoids from petri-dish cultured Antrodia cinnamomea (PAC) could increase their bioavailability, and whether their anti-inflammatory and anti-oxidative effects could then more effectively ameliorate the reproductive function of diabetic male rats. First, PAC was encapsulated in chitosan-silica nanoparticles (Nano-PAC) prepared by a biosilicification method. Scanning electron micrographs confirmed an average particle size of about 30 nm, and the encapsulation efficiency was 83.7% by HPLC. Diabetes was induced in male Sprague-Dawley rats by a high-fat diet (40% kcal from fat) and streptozotocin (35 mg/kg). Nano-PAC was administered by oral gavage at three doses (4, 8, and 20 mg/kg) for 6 weeks, with metformin (300 mg/kg) and empty nanoparticles (Nano) as the positive and negative controls, respectively. Results indicated that 6 weeks of 4 mg/kg Nano-PAC improved hyperglycemia and insulin resistance and reduced advanced glycation end products in plasma. In addition, 8 mg/kg Nano-PAC ameliorated the morphology of the testicular seminiferous tubules, sperm morphology and motility, reactive oxygen species production, and mitochondrial membrane potential. Moreover, 20 mg/kg Nano-PAC restored reproductive endocrine function and increased the KiSS-1 level in plasma. In plasma and testis, the antioxidant enzymes superoxide dismutase, glutathione peroxidase, and catalase increased, whereas malondialdehyde and the pro-inflammatory cytokines tumor necrosis factor-α, interleukin-6, and interferon-gamma decreased.
Most importantly, 8 mg/kg Nano-PAC down-regulated the oxidative-stress-induced c-Jun N-terminal kinase (JNK) signaling pathway. In summary, this study successfully nanoencapsulated PAC, and low-dose Nano-PAC improved diabetes-induced hyperglycemia, inflammation, and oxidative stress, thereby ameliorating the reproductive function of diabetic male rats.

Keywords: Antrodia cinnamomea, diabetes mellitus, male reproduction, nanoparticles

Procedia PDF Downloads 205
350 A Novel Harmonic Compensation Algorithm for High Speed Drives

Authors: Lakdar Sadi-Haddad

Abstract:

The study of very high-speed electrical drives has seen a resurgence of interest in the past few years; an inventory of the scientific papers and patents dealing with the subject confirms its relevance. The democratization of magnetic-bearing technology is, in fact, at the origin of recent developments in high-speed applications. These machines have as their main advantage a much higher power density than the state of the art, but particular attention must be paid to the design of the inverter as well as to control and command. The surface-mounted permanent-magnet synchronous machine is the most appropriate technology for high-speed applications. However, it has the drawback of using a carbon sleeve to contain the magnets, which could tear under the centrifugal forces generated at the rotor periphery. Carbon fiber is well known for its mechanical properties but conducts heat poorly, resulting in very poor evacuation of the eddy-current losses induced in the magnets by time and space stator harmonics. The three-phase inverter is the main harmonic source causing eddy currents in the magnets. In high-speed applications such harmonics are harmful because, on the one hand, the characteristic impedance is very low and, on the other hand, the ratio between the switching frequency and the fundamental frequency is much lower than in the state of the art. To minimize the impact of these harmonics, a first lever is to use a modulation strategy producing low harmonic distortion, while a second is to introduce a sinus filter between the inverter and the machine to smooth the voltage and current waveforms applied to the machine. Nevertheless, in a very high-speed machine the interaction of the processes mentioned above may introduce particular harmonics that can irreversibly damage the system: harmonics at the resonant frequency, harmonics at the shaft-mode frequency, subharmonics, etc.
Some studies address these issues but treat the phenomena with separate solutions (a specific modulation strategy, active damping methods, etc.). The purpose of this paper is to present a complete new active harmonic compensation algorithm, based on an improvement of standard vector control, as a global solution to all of these issues. The presentation is based on a complete theoretical analysis of the processes leading to the generation of such undesired harmonics. A review of the available solutions is then provided before developing the new active harmonic compensation algorithm. The study is completed by a validation using simulations and a practical case on a high-speed machine.
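One building block of such vector-control-based compensation, selectively extracting a single harmonic so it can be counter-injected, can be sketched by synchronous demodulation; the waveform and frequencies below are synthetic test values, not from the paper's drive.

```python
# Selective harmonic extraction sketch: correlate the sampled current with
# e^{-j*2*pi*h*f1*t} over an integer number of fundamental cycles to recover
# the complex amplitude (phasor) of the h-th harmonic.
import cmath
import math

def harmonic_phasor(samples, h, f_fund, fs):
    """Complex amplitude of harmonic h of a real waveform sampled at fs,
    assuming the sample window spans whole fundamental cycles."""
    acc = 0j
    for n, x in enumerate(samples):
        acc += x * cmath.exp(-2j * math.pi * h * f_fund * n / fs)
    return 2.0 * acc / len(samples)

# Synthetic current: 1.0 p.u. fundamental at 100 Hz plus a 0.1 p.u. 5th harmonic,
# sampled at 10 kHz over exactly 10 fundamental cycles.
fs, f1 = 10000.0, 100.0
i_meas = [math.sin(2 * math.pi * f1 * n / fs)
          + 0.1 * math.sin(2 * math.pi * 5 * f1 * n / fs)
          for n in range(1000)]
a5 = abs(harmonic_phasor(i_meas, 5, f1, fs))  # recovered 5th-harmonic amplitude
```

In a drive, the recovered phasor would feed a regulator that adds the opposing component to the voltage reference; this shows only the detection step.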

Keywords: active harmonic compensation, eddy current losses, high speed machine

Procedia PDF Downloads 371
349 The Ideal Memory Substitute for Computer Memory Hierarchy

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Computer system components such as the CPU, the controllers, and the operating system work together as a team, and storage or memory is, apart from the processor, the essential part of that team. The memory and storage system, including processor caches, main memory, and storage, forms the basic storage component of a computer system. The characteristics of the different types of storage are inherent in the design and the technology employed in manufacturing; these characteristics define the speed, compatibility, cost, volatility, and density of the various storage types. Most computers rely on a hierarchy of storage devices for performance, so the effective and efficient use of the memory hierarchy is the single most important aspect of computer system design and use. The memory hierarchy is becoming a fundamental performance and energy bottleneck, due to the widening gap between the increasing demands of modern computer applications and the limited performance and energy efficiency provided by traditional memory technologies. With the dramatic development of computer systems, computer storage has had a difficult time keeping up with processor speed. Computer architects therefore face constant challenges in developing high-performance, high-speed computer storage that is energy-efficient, cost-effective, and reliable enough to intercept processor requests. It is clear that substantial advances in redesigning the existing physical and logical memory structures to keep up with processor potential are crucial. This research work investigates the importance of the computer memory (storage) hierarchy in the design of computer systems. The constituent storage types of today's hierarchy were investigated with respect to their design technologies and how those technologies affect the memory characteristics: speed, density, stability, and cost.
The investigation considered how these characteristics could best be harnessed for the overall efficiency of the computer system. The research suggests that the best single type of storage, which we refer to as the ideal memory, is a single logical and physical memory that would combine the best attributes of each memory type making up the hierarchy: access speed as high as that of CPU registers, combined with the highest storage capacity; excellent stability in the presence or absence of power, as found in magnetic and optical disks, as against volatile DRAM; and yet a cost far below that of expensive SRAM. The work suggests that overcoming these barriers may require memory manufacturing to depart entirely from present technologies and adopt one that overcomes the challenges associated with traditional memory technologies.
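The performance role of the hierarchy described above is conventionally quantified by the average memory access time (AMAT); a minimal sketch with illustrative latencies and miss rates (not figures from this work) follows.

```python
# Average Memory Access Time across a storage hierarchy:
# AMAT = hit_time(L1) + miss_rate(L1) * (hit_time(L2) + miss_rate(L2) * ...)
# i.e. each level's latency is weighted by the probability of reaching it.
def amat(levels):
    """levels: list of (hit_time_ns, miss_rate), outermost level last with
    miss_rate 0. Returns the expected access time in nanoseconds."""
    expected = 0.0
    reach_probability = 1.0
    for hit_time, miss_rate in levels:
        expected += reach_probability * hit_time
        reach_probability *= miss_rate
    return expected

# Illustrative hierarchy: L1 1 ns / 5% miss, L2 10 ns / 20% miss, DRAM 100 ns.
# 1 + 0.05 * (10 + 0.2 * 100) = 2.5 ns
t = amat([(1.0, 0.05), (10.0, 0.20), (100.0, 0.0)])
```

The "ideal memory" argument above is visible here: collapsing all levels into one register-speed device would drive AMAT to the first hit time alone.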

Keywords: cache, memory-hierarchy, memory, registers, storage

Procedia PDF Downloads 136
348 Deregulation of Thorium for Room Temperature Superconductivity

Authors: Dong Zhao

Abstract:

Extensive research on obtaining applicable room-temperature superconductors has met a major barrier: the record Tc of 135 K achieved in the cuprates has stood for decades. Higher Tc than the cuprates has been accomplished by pressurizing certain compounds composed of light elements, such as LaH10 and metallic hydrogen, but room-temperature superconductivity under ambient pressure is still the preferred approach and is believed to be the ultimate solution for many applications. While the race to reach this room-temperature Tc milestone continues, a report stated the discovery of a possible high-temperature superconductor, the thorium sulfide ThS. Apparently, ThS's Tc can be at room temperature or even higher, because ThS revealed the unusual property of 'coexistence of high electrical conductivity and diamagnetism'. Note that this coexistence is in line with superconductors, meaning ThS is in its superconducting state; surprisingly, then, ThS exhibits superconductivity at least at room temperature and under atmospheric pressure. Further study of ThS's electrical and magnetic properties, in comparison with thorium di-iodide ThI2, concluded its molecular configuration to be [Th4+(e-)2]S. This means the ThS cation is composed of a [Th4+(e-)2]2+ cation core, built from a thorium atom in the +4 oxidation state plus an electron pair on that atom, giving the [Th4+(e-)2]2+ cation core an overall oxidation state of +2. This special construction may lead to ThS's room-temperature superconductivity because of the characteristic electron lone pair residing on the thorium atom. Since the study of thorium chemistry was largely carried out before the 1970s, exploring ThS's possible room-temperature superconductivity would require resynthesizing ThS. This re-preparation would provide samples enabling professionals to verify ThS's room-temperature superconductivity. Regrettably, current regulation prevents almost everyone from gaining access to thorium metal or thorium compounds due to the radioactive nature of thorium-232 (Th-232), even though the radioactivity of Th-232 is extremely low, with a half-life of 14.05 billion years. Consequently, further experimental confirmation of ThS's high-temperature superconductivity will be impossible unless the use of thorium metal and related thorium compounds is deregulated, allowing researchers to obtain the necessary starting materials. Hopefully, confirmation of ThS's room-temperature superconductivity can not only establish a method for obtaining applicable superconductors but also pave the way to fully understanding the mechanism of superconductivity.

Keywords: co-existence of high electrical conductivity and diamagnetism, electron pairing and electron lone pair, room temperature superconductivity, the special molecular configuration of thorium sulfide ThS

Procedia PDF Downloads 29
347 Using Photogrammetric Techniques to Map the Mars Surface

Authors: Ahmed Elaksher, Islam Omar

Abstract:

For many years, the surface of Mars has been a mystery for scientists. Recently, with the help of geospatial data and photogrammetric procedures, researchers have been able to gain insights about this planet. Two of the most important data sources for exploring Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. The MOLA sensor is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images in order to generate a more accurate and reliable surface model of Mars. The MOLA data were interpolated using the kriging technique. Corresponding tie points were digitized from both datasets and employed in co-registering them using GIS analysis tools. We employed three different 3D-to-2D transformation models: the parallel projection (3D affine) model, the extended parallel projection model, and the Direct Linear Transformation (DLT) model. The digitized tie points were split into two sets: Ground Control Points (GCPs), used to estimate the transformation parameters by least-squares adjustment, and check points (ChkPs), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models using the computed parameters. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. For the 2D transformation models, average RMSEs were in the range of five meters.
Increasing the number of GCPs from six to ten improved the accuracy of the results by about two and a half meters; further increasing the number of GCPs did not improve the results significantly. The 3D-to-2D transformation models provided accuracies of two to three meters, with the best results reported for the DLT model; here too, increasing the number of GCPs did not have a substantial effect. The results support the use of the DLT model, as it provides the accuracy required by the ASPRS large-scale mapping standards. However, well-distributed sets of GCPs are key to achieving such accuracy. The model is simple to apply and does not require substantial computation.
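The first of the listed models, the parallel projection (3D affine) transformation x = a1*X + a2*Y + a3*Z + a4 (and likewise for y), can be sketched as a linear least-squares fit from GCPs followed by RMSE evaluation on check points; the implementation below is a generic illustration with synthetic coordinates, not the authors' code or data.

```python
# Parallel-projection (3D affine) fit: solve the normal equations for one
# image coordinate from GCPs, then evaluate RMSE on independent check points.
import math

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[c][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_affine_1d(gcps_xyz, coords):
    """Least-squares [a1, a2, a3, a4] mapping (X, Y, Z) -> one image coordinate."""
    rows = [[X, Y, Z, 1.0] for X, Y, Z in gcps_xyz]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(4)] for i in range(4)]
    Atb = [sum(r[i] * c for r, c in zip(rows, coords)) for i in range(4)]
    return solve(AtA, Atb)

def rmse(params, pts_xyz, coords):
    """Root-mean-square residual of the fitted model on check points."""
    errs = [params[0] * X + params[1] * Y + params[2] * Z + params[3] - c
            for (X, Y, Z), c in zip(pts_xyz, coords)]
    return math.sqrt(sum(e * e for e in errs) / len(errs))
```

The DLT that performed best in the study is fit the same way, only with a projective (rational) model and more parameters per coordinate.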

Keywords: Mars, photogrammetry, MOLA, HiRISE

Procedia PDF Downloads 45
346 Comparing the Gap Formation around Composite Restorations in Three Regions of Tooth Using Optical Coherence Tomography (OCT)

Authors: Rima Zakzouk, Yasushi Shimada, Yuan Zhou, Yasunori Sumi, Junji Tagami

Abstract:

Background and Purpose: Swept-source optical coherence tomography (OCT) is an interferometric imaging technique that has recently been used in cariology. In spite of progress made in adhesive dentistry, composite restorations keep failing due to secondary caries caused by environmental factors in the oral cavity. A precise assessment of the marginal sealing of a restoration is therefore highly desirable. The aim of this study was to evaluate gap formation at the composite/cavity-wall interface, with or without phosphoric acid etching, using SS-OCT. Materials and Methods: Round tapered cavities (2×2 mm) were prepared at three locations, mid-coronal, cervical, and root, of bovine incisor teeth in two groups (SE and PA). Self-etching adhesive (Clearfil SE Bond) was applied to both groups, but Group PA had first been pretreated with phosphoric acid etching (K-Etchant gel). Both groups were then restored with Estelite Flow Quick flowable composite resin. Following 5000 thermal cycles, three cross-sectional images were obtained from each cavity using OCT at a 1310-nm wavelength, at 0°, 60°, and 120°. Scanning was repeated after two months to monitor gap progression. The average percentage of gap length was calculated using image analysis software, and the difference in means between the groups was statistically analyzed by t-test. The results were then confirmed by sectioning and observing representative specimens under a confocal laser scanning microscope (CLSM). Results: Pretreatment with phosphoric acid etching (Group PA) led to significantly larger gaps in the mid-coronal and cervical cavities compared to the SE group, while in the root cavities no significant difference was observed between the groups. On the other hand, the gaps formed in root cavities were significantly larger than those in the mid-coronal and cervical cavities within the same group. 
This study investigated the effect of phosphoric acid on gap length progression in composite restorations. In conclusion, phosphoric acid etching did not reduce gap formation in any of the tooth regions studied. Significance: The cervical region of the tooth was more prone to gap formation than the mid-coronal region, especially when pre-etching treatment was added.
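The gap-percentage and t-test analysis described above can be sketched as follows. The group values are hypothetical placeholders (not the study's data), and Welch's t statistic is computed directly so the sketch needs no external statistics library; 2.776 is the two-sided 5% critical value at a conservative df of 4.

```python
import numpy as np

def gap_percentage(interface_gap_mask):
    """Percent of interface pixels flagged as gap in one OCT cross-section."""
    m = np.asarray(interface_gap_mask, bool)
    return 100.0 * m.sum() / m.size

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    se2 = a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(se2)

# Hypothetical per-cavity mean gap percentages for the two groups.
se_group = [12.1, 9.8, 14.3, 11.0, 10.5]   # self-etch only
pa_group = [21.4, 19.9, 25.1, 22.8, 20.3]  # phosphoric acid pre-etched

t = welch_t(se_group, pa_group)
# |t| above the critical value indicates a significant group difference.
significant = abs(t) > 2.776
```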

Keywords: image analysis, optical coherence tomography, phosphoric acid etching, self-etch adhesives

Procedia PDF Downloads 200
345 Modification of Aliphatic-Aromatic Copolyesters with Polyether Block for Segmented Copolymers with Elastothemoplastic Properties

Authors: I. Irska, S. Paszkiewicz, D. Pawlikowska, E. Piesowicz, A. Linares, T. A. Ezquerra

Abstract:

Due to a number of advantages, such as high tensile strength, sensitivity to hydrolytic degradation, and biocompatibility, poly(lactic acid) (PLA) is one of the most common polyesters for biomedical and pharmaceutical applications. However, PLA is a rigid, brittle polymer with a low heat distortion temperature and a slow crystallization rate. In order to broaden the range of PLA applications, it is necessary to improve these properties. In recent years, a number of new strategies have evolved to obtain PLA-based materials with improved characteristics, including manipulation of crystallinity, plasticization, blending, and incorporation into block copolymers. Among these methods, the synthesis of aliphatic-aromatic copolyesters has attracted considerable attention, as they may combine the mechanical performance of aromatic polyesters with the biodegradability known from aliphatic ones. Given the need for highly flexible biodegradable polymers, in this contribution a series of aliphatic-aromatic copolyesters based on poly(butylene terephthalate) and poly(lactic acid) (PBT-b-PLA), exhibiting superior mechanical properties, were copolymerized with an additional poly(tetramethylene oxide) (PTMO) soft block. The structure and properties of both series were characterized by means of attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR), nuclear magnetic resonance spectroscopy (¹H NMR), differential scanning calorimetry (DSC), wide-angle X-ray scattering (WAXS), and dynamic mechanical thermal analysis (DMTA). Moreover, the related changes in tensile properties were evaluated and discussed. Lastly, the viscoelastic properties of the synthesized poly(ester-ether) copolymers were investigated in detail by step-cycle tensile tests. The block lengths decreased as the reaction advanced, and block-random diblock terpolymers of (PBT-ran-PLA)-b-PTMO were obtained. 
DSC and DMTA analyses confirmed unambiguously that the synthesized poly(ester-ether) copolymers are microphase-separated systems. The introduction of polyether co-units resulted in a decrease in the degree of crystallinity and the melting temperature. X-ray diffraction patterns revealed that only the PBT blocks are able to crystallize. The mechanical properties of the (PBT-ran-PLA)-b-PTMO copolymers result from a unique arrangement of immiscible hard and soft blocks, providing both strength and elasticity.

Keywords: aliphatic-aromatic copolymers, multiblock copolymers, phase behavior, thermoplastic elastomers

Procedia PDF Downloads 115
344 Association between Noise Levels, Particulate Matter Concentrations and Traffic Intensities in a Near-Highway Urban Area

Authors: Mohammad Javad Afroughi, Vahid Hosseini, Jason S. Olfert

Abstract:

Both traffic-generated particles and noise have been associated with the development of cardiovascular diseases, especially in near-highway environments. Although noise and particulate matter (PM) have different dispersion mechanisms, sharing the same emission source in urban areas (road traffic) can result in a similar degree of variability in their levels. This study investigated the temporal variation of, and correlation between, noise levels, PM concentrations, and traffic intensities near a major highway in Tehran, Iran. Tehran's particulate concentration is highly influenced by road traffic, and its ultrafine particles (UFP, PM < 0.1 µm) are mostly emitted by the combustion processes of motor vehicles. This suggests a strong association between traffic-related noise and UFP in near-highway environments of this megacity. The hourly averaged equivalent continuous sound pressure level (Leq), the total number concentration of UFPs, the mass concentrations of PM2.5 and PM10, and traffic count and speed were measured simultaneously over a period of three days in winter. Additionally, meteorological data including temperature, relative humidity, and wind speed and direction were collected at a weather station located 3 km from the monitoring site. Noise levels showed relatively low temporal variability compared to PM concentrations. The hourly average Leq ranged from 63.8 to 69.9 dB(A) (mean ~68 dB(A)), while hourly particle concentrations varied from 30,800 to 108,800 cm⁻³ for UFP (mean ~64,500 cm⁻³), 41 to 75 µg m⁻³ for PM2.5 (mean ~53 µg m⁻³), and 62 to 112 µg m⁻³ for PM10 (mean ~88 µg m⁻³). The Pearson correlation coefficient revealed a strong overall relationship between noise and UFP (r ~0.61). Under downwind conditions, the UFP number concentration showed the strongest association with noise level (r ~0.63), while the coefficient decreased markedly under upwind conditions (r ~0.24) due to the significant role of wind and humidity in UFP dynamics. Furthermore, PM2.5 and PM10 correlated moderately with noise (r ~0.52 and 0.44, respectively). In general, traffic counts were more strongly associated with noise and PM than traffic speeds were. It was concluded that noise levels combined with meteorological data can be used as a proxy to estimate PM concentrations (specifically, UFP number concentration) in near-highway environments of Tehran. However, it is important to measure the joint variability of noise and particles when studying their health effects in epidemiological studies.
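The Pearson coefficients reported above can be computed directly from paired hourly series. A minimal sketch, with hypothetical values standing in for the measured Leq and UFP data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Hypothetical hourly values standing in for the measured series.
leq_dba = [64.1, 65.0, 67.2, 68.4, 69.1, 68.8, 66.0, 64.5]          # noise, dB(A)
ufp_cm3 = [35000, 42000, 61000, 83000, 99000, 90000, 52000, 38000]  # UFP, cm^-3

r = pearson_r(leq_dba, ufp_cm3)
```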

Keywords: noise, particulate matter, PM10, PM2.5, ultrafine particle

Procedia PDF Downloads 169
343 Spatio-Temporal Dynamics of Snow Cover and Melt/Freeze Conditions in Indian Himalayas

Authors: Rajashree Bothale, Venkateswara Rao

Abstract:

The Indian Himalayas, also known as the Third Pole, cover an area of 0.9 million sq km, contain the largest reserve of ice and snow outside the poles, and affect the global climate and water availability in the perennial rivers. Variations in the extent of snow are indicative of climate change. Snow melt is both sensitive to climate warming and an influencing factor on it. A study of the spatio-temporal dynamics of snow cover and melt/freeze conditions was carried out using space-based observations in the visible and microwave bands. The period 2003 to 2015 was selected to identify and map changes and trends in snow cover using Indian Remote Sensing (IRS) Advanced Wide Field Sensor (AWiFS) and Moderate Resolution Imaging Spectroradiometer (MODIS) data. For mapping wet snow, microwave data were used, as they are sensitive to the presence of liquid water in the snow. The present study uses Ku-band scatterometer data from the QuikSCAT and Oceansat satellites. Enhanced-resolution images at 2.25 km from the 13.6 GHz sensor were used to analyze the backscatter response of dry and wet snow for the period 2000-2013 using a threshold method. The study area was divided into three major river basins, namely the Brahmaputra, Ganges, and Indus, which also represent the division of the Himalayas into the Eastern, Central, and Western Himalayas. Topographic variations across the zones show that a majority of the study area lies in the 4000-5500 m elevation range, and the largest share of high-elevation area (>5500 m) lies in the Western Himalayas. The effect of climate change could be seen in the extent of snow cover and also in the melt/freeze status in different parts of the Himalayas. The melt onset day moves later from east (March 11 ± 11 days) to west (May 12 ± 15 days), with large variation in the number of melt days. The Western Himalayas have a shorter melt duration (120 ± 15 days) than the Eastern Himalayas (150 ± 16 days), providing less time for melt. Eastern Himalayan glaciers are prone to enhanced melt due to their long melt duration. The extent of snow cover, coupled with the melt/freeze status indicating solar radiation, can be used as a precursor for monsoon prediction.
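The threshold method mentioned above flags wet snow when Ku-band backscatter drops sharply below a dry-snow winter reference. A minimal sketch of the idea, with a hypothetical 3 dB drop criterion and synthetic daily values (the study's actual reference window and threshold are not specified here):

```python
import numpy as np

def wet_snow_flags(sigma0_db, winter_ref_db, drop_db=3.0):
    """Flag days whose Ku-band backscatter falls drop_db below the
    dry-snow winter reference; liquid water strongly damps backscatter."""
    return np.asarray(sigma0_db, float) < (winter_ref_db - drop_db)

def melt_onset_day(wet):
    """Index (day of series) of the first wet-snow flag, or None."""
    idx = np.flatnonzero(wet)
    return int(idx[0]) if idx.size else None

# Hypothetical daily backscatter series (dB) against a -8 dB winter mean.
series = [-8.0, -8.5, -12.0, -13.1, -12.4, -8.2]
wet = wet_snow_flags(series, winter_ref_db=-8.0)
onset = melt_onset_day(wet)   # first day the 3 dB criterion is met
duration = int(wet.sum())     # number of flagged melt days
```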

Keywords: Indian Himalayas, scatterometer, snow melt/freeze, AWiFS, cryosphere

Procedia PDF Downloads 235
342 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation

Authors: Jonathan Gong

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19. This underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field suggests that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays under the labels COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from those used for training. The images used to train the classification model share an important feature: they are cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture and is used to extract the lung mask from the chest X-ray image. It is trained on 8577 images and validated on a 20% validation split. The models are evaluated on an external dataset, reporting accuracy, precision, recall, F1-score, IoU, and loss. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. 
The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
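The accuracy and IoU figures reported above are standard metrics that can be computed as follows (a generic sketch, not the authors' code):

```python
import numpy as np

def accuracy(pred_labels, true_labels):
    """Fraction of samples whose predicted class matches the ground truth."""
    pred, true = np.asarray(pred_labels), np.asarray(true_labels)
    return float((pred == true).mean())

def iou(pred_mask, true_mask):
    """Intersection-over-union between a predicted and a reference lung mask."""
    p, t = np.asarray(pred_mask, bool), np.asarray(true_mask, bool)
    union = np.logical_or(p, t).sum()
    return float(np.logical_and(p, t).sum() / union) if union else 1.0
```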

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 107
341 Numerical Evaluation of Lateral Bearing Capacity of Piles in Cement-Treated Soils

Authors: Reza Ziaie Moayed, Saeideh Mohammadi

Abstract:

Soft soils are encountered in many civil engineering projects, such as coastal, marine, and road projects. Because of the low shear strength and stiffness of soft soils, large settlements and low bearing capacity occur under superstructure loads, making civil engineering activities more difficult and costly. In the case of soft soils, ground improvement is a suitable method to increase the shear strength and stiffness for engineering purposes. In recent years, the artificial cementation of soil with cement and lime has been used extensively for soft soil improvement. Cement stabilization is a well-established technique for improving soft soils; artificial cementation increases the shear strength and hardness of natural soils. On the other hand, in soft soils it is common to use piles to transfer loads to deeper ground. By using cement-treated soil around the piles, high bearing capacity and low settlement of the piles can be achieved. In the present study, the lateral bearing capacity of short piles in cemented soils is investigated by a numerical approach, using the three-dimensional (3D) finite difference software FLAC 3D. Cement-treated soil exhibits strain hardening-softening behavior because of the breaking of bonds between the cementing agent and the soil particles. To simulate such behavior, a strain hardening-softening constitutive model is used for the cement-treated soft soil, while the conventional elastic-plastic Mohr-Coulomb constitutive model and a linear elastic model are used for the stress-strain behavior of the natural soil and the pile, respectively. To determine the parameters of the constitutive models, and also to verify the numerical model, the results of available triaxial laboratory tests and in-situ pile loading tests in cement-treated soft soil are used. A parametric study is carried out to determine the parameters that govern the bearing capacity of piles in cement-treated soils. In the present paper, the effects of various lengths and heights of the artificially cemented area, different pile diameters and lengths, and the material properties are studied. The effect of the choice of constitutive model for the cement-treated soil on the bearing capacity of the pile is also investigated.

Keywords: bearing capacity, cement-treated soils, FLAC 3D, pile

Procedia PDF Downloads 105
340 Rapid Flood Damage Assessment of Population and Crops Using Remotely Sensed Data

Authors: Urooj Saeed, Sajid Rashid Ahmad, Iqra Khalid, Sahar Mirza, Imtiaz Younas

Abstract:

Pakistan, a flood-prone country, has experienced severe floods in the recent past, which have caused extensive damage to urban and rural areas through loss of lives and damage to infrastructure and agricultural fields. The country's poor flood management system has amplified the risk of damage as floods increase in frequency and magnitude as a consequence of climate change, affecting the national economy directly or indirectly. To meet the needs of flood emergencies, this paper focuses on an approach based on remotely sensed data for rapid mapping and monitoring of flood extent and its damages, so that information can be disseminated quickly from the local to the national level. In this research study, the spatial extent of the flooding caused by the heavy rains of 2014 was mapped using spaceborne data to assess crop damage and the affected population in sixteen districts of Punjab. For this purpose, Moderate Resolution Imaging Spectroradiometer (MODIS) data were used to delineate the flood extent daily using the Normalised Difference Water Index (NDWI). The maximum flood extent was integrated with the LandScan 2014 population dataset, on a 1 km × 1 km grid, to calculate the population in the flood hazard zone. It was estimated that the floods covered an area of 16,870 square kilometers, with 3.0 million people affected. Moreover, to assess the flood damage, Object-Based Image Analysis (OBIA) aided by spectral signatures was applied to a Landsat image to obtain thematic layers of healthy (0.54 million acres) and damaged (0.43 million acres) crops. The study found that the population of Jhang district (28% of its 2.5 million population) was affected the most, whereas, in terms of crops, Jhang and Muzaffargarh were the most heavily damaged districts in the 2014 Punjab floods. This study was completed within 24 hours of the peak flood time and proves to be an effective methodology for rapid assessment of damage due to flood hazards.
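The NDWI flood masking and population overlay described above can be sketched as follows. This assumes the McFeeters formulation of NDWI, (Green - NIR) / (Green + NIR), with a hypothetical threshold of zero, and uses a toy reflectance scene rather than MODIS/LandScan data:

```python
import numpy as np

def ndwi(green, nir, eps=1e-9):
    """Normalised Difference Water Index (McFeeters): (G - NIR) / (G + NIR)."""
    green, nir = np.asarray(green, float), np.asarray(nir, float)
    return (green - nir) / (green + nir + eps)

def flood_mask(green, nir, threshold=0.0):
    """Pixels with NDWI above the threshold are flagged as surface water."""
    return ndwi(green, nir) > threshold

def affected_population(mask, population_grid):
    """Sum gridded population (e.g. LandScan cells) inside the flood mask."""
    return float(np.asarray(population_grid, float)[np.asarray(mask, bool)].sum())

# Toy 2x2 reflectance scene: one water pixel (high green, low NIR).
green = [[0.30, 0.10], [0.12, 0.09]]
nir   = [[0.05, 0.30], [0.35, 0.28]]
pop   = [[1200, 800], [950, 600]]
mask = flood_mask(green, nir)
exposed = affected_population(mask, pop)
```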

Keywords: flood hazard, space borne data, object based image analysis, rapid damage assessment

Procedia PDF Downloads 303
339 CO₂ Conversion by Low-Temperature Fischer-Tropsch

Authors: Pauline Bredy, Yves Schuurman, David Farrusseng

Abstract:

To fulfill climate objectives, the production of synthetic e-fuels using CO₂ as a raw material appears to be part of the solution. In particular, the Power-to-Liquid (PtL) concept combines CO₂ with hydrogen supplied by water electrolysis powered by renewable sources, and is currently gaining interest as it allows the production of sustainable, fossil-free liquid fuels. The process discussed here is an upgrade of the well-known Fischer-Tropsch synthesis. The concept couples two cascade reactions in one pot: first, the conversion of CO₂ into CO via the reverse water-gas shift (RWGS) reaction, followed by the Fischer-Tropsch synthesis (FTS). Instead of using an Fe-based catalyst, which can carry out both reactions, we chose the strategy of decoupling the two functions (RWGS and FT) on two different catalysts within the same reactor. The FTS shifts the equilibrium of the RWGS reaction (which alone would be limited to 15-20% conversion at 250°C) by converting the CO into hydrocarbons. This strategy enables optimization of the catalyst pair and thus allows the reaction temperature to be lowered, thanks to the equilibrium shift, to gain selectivity in the liquid fraction. The challenge lies in maximizing the activity of the RWGS catalyst, but also in making the FT catalyst highly selective. Methane production is the main concern, as the energy barrier of CH₄ formation is generally lower than that of the RWGS reaction, so the goal is to minimize methane selectivity. Here we report a study of different combinations of copper-based RWGS catalysts with different cobalt-based FTS catalysts. We investigated their behavior under mild process conditions using high-throughput experimentation. Our results show that at 250°C and 20 bar, the cobalt catalysts mainly act as methanation catalysts. Indeed, CH₄ selectivity never drops below 80%, despite the addition of various promoters (Nb, K, Pt, Cu) to the catalyst and its coupling with active RWGS catalysts. However, we show that the activity of the RWGS catalyst has an impact and can lead to selectivities toward longer hydrocarbon chains (C₂⁺) of about 10%. We studied the influence of the reduction temperature on the activity and selectivity of the tandem catalyst system; similar selectivity and conversion were obtained at reduction temperatures between 250 and 400°C. This raises the question of the active phase of the cobalt catalysts, which is currently being investigated by magnetic measurements and DRIFTS. Better results are thus expected by coupling the RWGS catalyst with a more selective FT catalyst. This was achieved using a cobalt/iron FTS catalyst: the CH₄ selectivity dropped to 62% at 265°C, 20 bar, and a GHSV of 2500 ml/h/g_cat. We propose that the conditions used for the cobalt catalysts could have favored methanation, because these catalysts are known to perform best around 210°C in classical FTS, whereas iron catalysts are more flexible and are also known to have RWGS activity.
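The CH₄ and C₂⁺ selectivities quoted above are carbon-basis product fractions. A minimal sketch of the bookkeeping, with entirely hypothetical outlet flows (not the study's measurements):

```python
def carbon_selectivity(product_mol_flow, carbon_number):
    """Carbon-basis selectivities from outlet molar flows of hydrocarbons."""
    carbon = {sp: product_mol_flow[sp] * carbon_number[sp] for sp in product_mol_flow}
    total = sum(carbon.values())
    return {sp: carbon[sp] / total for sp in carbon}

# Hypothetical outlet flows (mol/h) for a cobalt-rich run.
flows = {"CH4": 8.0, "C2H6": 0.5, "C3H8": 0.2, "C5+": 0.1}
ncarb = {"CH4": 1, "C2H6": 2, "C3H8": 3, "C5+": 6}  # C5+ lumped at ~C6
sel = carbon_selectivity(flows, ncarb)
c2plus = 1.0 - sel["CH4"]   # everything heavier than methane
```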

Keywords: cobalt-copper catalytic systems, CO₂-hydrogenation, Fischer-Tropsch synthesis, hydrocarbons, low-temperature process

Procedia PDF Downloads 37
338 Segmented Pupil Phasing with Deep Learning

Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan

Abstract:

Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows a large telescope to fit within a reduced volume (JWST) or an even smaller one (a standard CubeSat). CubeSats have tight constraints on the available computational budget and the allowed payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet one of the challenges for wavefront sensing is the non-linearity between the image intensity and the phase aberrations; moreover, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested neural networks (NNs) for wavefront sensing, especially convolutional NNs, which are well known as non-linear, image-friendly problem solvers. Aims: In this paper we study the prospect of using NNs to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without a dedicated wavefront sensor. Methods: We take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution. To reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope's focal-plane detector, used for imaging, serves as the wavefront sensor. In this work, we study a point source, i.e., the point spread function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual wavefront error, which is below lambda/50, for 40-100 nm RMS of input wavefront error) with a relatively fast computation time of less than 30 ms, which translates into a small computational burden. These results motivate further study with higher aberrations and noise.
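The lambda/50 requirement and the ~2 nm residual quoted above can be checked with a piston-removed RMS over the pupil. A minimal sketch, assuming a 550 nm visible wavelength (so lambda/50 = 11 nm) and a toy residual phase map:

```python
import numpy as np

def rms_wfe_nm(phase_map_nm, pupil_mask):
    """Piston-removed RMS wavefront error (nm) over the illuminated pupil."""
    vals = np.asarray(phase_map_nm, float)[np.asarray(pupil_mask, bool)]
    return float(np.sqrt(np.mean((vals - vals.mean()) ** 2)))

wavelength_nm = 550.0                   # visible band (assumption)
requirement_nm = wavelength_nm / 50.0   # lambda/50 = 11 nm

# Toy 2x2 residual phase map after correction, fully illuminated pupil.
residual = [[2.0, -2.0], [2.0, -2.0]]
mask = [[1, 1], [1, 1]]
meets_spec = rms_wfe_nm(residual, mask) < requirement_nm
```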

Keywords: wavefront sensing, deep learning, deployable telescope, space telescope

Procedia PDF Downloads 80
337 An Adjoint-Based Method to Compute Derivatives with Respect to Bed Boundary Positions in Resistivity Measurements

Authors: Mostafa Shahriari, Theophile Chaumont-Frelet, David Pardo

Abstract:

Resistivity measurements are used to characterize the Earth’s subsurface. They are categorized into two different groups: (a) those acquired on the Earth’s surface, for instance, controlled source electromagnetic (CSEM) and Magnetotellurics (MT), and (b) those recorded with borehole logging instruments such as Logging-While-Drilling (LWD) devices. LWD instruments are mostly used for geo-steering purposes, i.e., to adjust dip and azimuthal angles of a well trajectory to drill along a particular geological target. Modern LWD tools measure all nine components of the magnetic field corresponding to three orthogonal transmitter and receiver orientations. In order to map the Earth’s subsurface and perform geo-steering, we invert measurements using a gradient-based method that utilizes the derivatives of the recorded measurements with respect to the inversion variables. For resistivity measurements, these inversion variables are usually the constant resistivity value of each layer and the bed boundary positions. It is well-known how to compute derivatives with respect to the constant resistivity value of each layer using semi-analytic or numerical methods. However, similar formulas for computing the derivatives with respect to bed boundary positions are unavailable. The main contribution of this work is to provide an adjoint-based formulation for computing derivatives with respect to the bed boundary positions. The key idea to obtain the aforementioned adjoint state formulations for the derivatives is to separate the tangential and normal components of the field and treat them differently. This formulation allows us to compute the derivatives faster and more accurately than with traditional finite differences approximations. In the presentation, we shall first derive a formula for computing the derivatives with respect to the bed boundary positions for the potential equation. Then, we shall extend our formulation to 3D Maxwell’s equations. 
Finally, by considering a 1D domain and reducing the dimensionality of the problem, which is a common practice in the inversion of resistivity measurements, we shall derive a formulation to compute the derivatives of the measurements with respect to the bed boundary positions using a 1.5D variational formulation. Then, we shall illustrate the accuracy and convergence properties of our formulations by comparing numerical results with the analytical derivatives for the potential equation. For the 1.5D Maxwell’s system, we shall compare our numerical results based on the proposed adjoint-based formulation vs those obtained with a traditional finite difference approach. Numerical results shall show that our proposed adjoint-based technique produces enhanced accuracy solutions while its cost is negligible, as opposed to the finite difference approach that requires the solution of one additional problem per derivative.
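The cost difference described above, namely one extra forward solve per derivative for finite differences, can be illustrated on a toy 1D potential problem with one bed boundary. The two-layer model and its values are hypothetical stand-ins for the authors' forward solver, not their formulation:

```python
def potential_drop(rho1, rho2, z_bed, thickness=1.0, current_density=1.0):
    """Potential drop across a 1D two-layer resistivity stack whose bed
    boundary sits at depth z_bed (a toy stand-in for the forward solver)."""
    return current_density * (rho1 * z_bed + rho2 * (thickness - z_bed))

def central_difference(f, z, h=1e-6):
    """Finite-difference derivative: costs two extra forward solves per
    boundary, which is what an adjoint formulation avoids."""
    return (f(z + h) - f(z - h)) / (2.0 * h)

rho1, rho2, z_bed = 10.0, 2.0, 0.4
analytic = rho1 - rho2   # dV/dz_bed = J * (rho1 - rho2) with J = 1
numeric = central_difference(lambda z: potential_drop(rho1, rho2, z), z_bed)
```

For this linear toy problem the finite difference matches the analytic derivative almost exactly; the adjoint approach delivers the same derivative from the one forward solution already in hand.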

Keywords: inverse problem, bed boundary positions, electromagnetism, potential equation

Procedia PDF Downloads 160
336 Unlocking Health Insights: Studying Data for Better Care

Authors: Valentina Marutyan

Abstract:

Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of, and approach to, providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining: it enables doctors to intervene early to prevent problems or improve patient outcomes, and it assists in early disease detection and customized treatment planning for each person. Doctors can customize a patient's care by looking at their medical history, genetic profile, and current and previous therapies; in this way, treatments can be more effective and have fewer negative consequences. Beyond helping patients, it improves the efficiency of hospitals, for example by helping them determine the number of beds or doctors they require given the number of patients they expect. This project uses models such as logistic regression, random forests, and neural networks for predicting diseases and analyzing medical images. Patients were clustered by algorithms such as k-means, and connections between treatments and patient responses were identified by association rule mining. Time series techniques helped in resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment. 
Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and work more efficiently. It comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
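The k-means clustering mentioned above groups patients by feature similarity. A minimal sketch with hypothetical two-feature patient vectors (not the project's data or code):

```python
import numpy as np

def kmeans(X, k=2, iters=50, seed=0):
    """Minimal k-means: assign each patient vector to the nearest of k
    centroids, then move each centroid to the mean of its members."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Distance of every point to every center, then nearest-center labels.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # guard against empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical patients: two clearly separated groups of feature vectors.
X = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
labels, centers = kmeans(X, k=2)
```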

Keywords: data mining, healthcare, big data, large amounts of data

Procedia PDF Downloads 44
335 Anthropometric Indices of Obesity and Coronary Artery Atherosclerosis: An Autopsy Study in South Indian population

Authors: Francis Nanda Prakash Monteiro, Shyna Quadras, Tanush Shetty

Abstract:

The association between human physique and the morbidity and mortality resulting from coronary artery disease has been studied extensively over several decades, and multiple studies have examined the correlation between the grade of atherosclerosis, coronary artery disease, and anthropometric measurements. However, autopsy-based studies of this kind remain few. It has been suggested that while subjecting living subjects to imaging modalities such as CT scans and procedures involving contrast media to study mild atherosclerosis would be expensive, difficult, and even harmful, no such harm is encountered in the study of autopsy cases. This autopsy-based study aimed to correlate anthropometric measurements and indices of obesity, such as waist circumference (WC), hip circumference (HC), body mass index (BMI), and waist-hip ratio (WHR), with the degree of atherosclerosis in the right coronary artery (RCA), the main branch of the left coronary artery (LCA), and the left anterior descending artery (LADA) in 95 victims of South Indian origin of both genders, aged between 18 and 75 years. The grading of atherosclerosis was done according to the criteria suggested by the American Heart Association. The study also analysed the correlation of the anthropometric measurements and indices of obesity with the number of coronaries affected by atherosclerosis in an individual. All the anthropometric measurements and derived indices were found to be significantly correlated with each other in both genders, except for age, which was found to have a significant correlation only with the WHR. In both genders, a severe degree of atherosclerosis was most commonly observed in the LADA, followed by the LCA and the RCA. The grade of atherosclerosis in the RCA was significantly related to the WHR in males, and the grade of atherosclerosis in the LCA and LADA was significantly related to the WHR in females. 
A significant relation was also observed between the grade of atherosclerosis in the RCA and both WC and WHR, and between the grade of atherosclerosis in the LADA and HC, in both males and females. Anthropometric measurements and indices of obesity can therefore be an effective means of identifying high-risk cases of atherosclerosis at an early stage, which can help reduce the associated cardiac morbidity and mortality. A person with anthropometric measurements suggestive of mild atherosclerosis can be advised to modify his or her lifestyle and reduce exposure to the other risk factors, while those with measurements suggestive of a higher degree of atherosclerosis can be subjected to confirmatory procedures so that effective treatment can be started.
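For reference, the obesity indices used in this study follow standard definitions; a minimal sketch of their computation (the example values are hypothetical, not data from this study):

```python
# Standard obesity indices; example inputs are hypothetical illustrations.
def bmi(weight_kg, height_m):
    """Body mass index: weight divided by height squared (kg/m^2)."""
    return weight_kg / height_m ** 2

def waist_hip_ratio(waist_cm, hip_cm):
    """Waist-hip ratio: waist circumference over hip circumference."""
    return waist_cm / hip_cm

print(round(bmi(80.0, 1.75), 1))               # e.g. 80 kg at 1.75 m -> 26.1
print(round(waist_hip_ratio(95.0, 100.0), 2))  # 95 cm waist, 100 cm hip -> 0.95
```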

Keywords: atherosclerosis, coronary artery disease, indices, obesity

Procedia PDF Downloads 43
334 Use of Activated Carbon from Olive Stone for CO₂ Capture in Porous Mortars

Authors: A. González-Caro, A. M. Merino-Lechuga, D. Suescum-Morales, E. Fernández-Ledesma, J. R. Jiménez, J. M. Fernández-Rodríguez

Abstract:

Climate change is one of the most significant issues today. Since the 19th century, the rise in temperature has been due not only to natural variability but also, and mainly, to human activities, chiefly the burning of fossil fuels such as coal, oil and gas. The boom in the construction sector in recent years is also one of the main contributors to CO₂ emissions into the atmosphere; for example, for every tonne of cement produced, approximately one tonne of CO₂ is emitted. Most of the research being carried out in this sector is focused on reducing the large environmental impact generated during the manufacturing of building materials. Specifically, this research focuses on the recovery of waste from olive oil mills. Spain is the world's largest producer of olive oil, and this sector generates a large amount of waste and by-products such as olive stones, “alpechín” or “alpeorujo”. Activated carbon can be produced from these olive stones by a pyrolysis process, which develops an extensive network of internal pores in the carbon. This study is based on the manufacture of porous mortars with Portland cement and natural limestone sand, with additions of 5% and 10% of activated carbon. Two curing environments were used: i) a dry chamber, with a humidity of 65 ± 10%, a temperature of 21 ± 2 ºC and an atmospheric CO₂ concentration (approximately 0.04%); ii) an accelerated carbonation chamber, with a humidity of 65 ± 10%, a temperature of 21 ± 2 ºC and a CO₂ concentration of 5%. In addition to eliminating waste from an industry, the aim of this study is to reduce atmospheric CO₂. For this purpose, a physicochemical and mineralogical characterisation of all raw materials was first carried out, using techniques such as X-ray fluorescence and X-ray diffraction. The particle size and specific surface area of the activated carbon were also determined. 
Subsequently, tests were carried out on the hardened mortars: thermogravimetric analysis (to determine the percentage of CO₂ captured), as well as measurements of mechanical properties, density, porosity and water absorption. It was concluded that the activated carbon acts as a sink for CO₂, causing it to be trapped inside the voids. This increases CO₂ capture by 300% with the addition of 10% activated carbon at 7 days of curing. There was also an increase in compressive strength of 17.5% in the CO₂ chamber after 7 days of curing using 10% activated carbon, compared to the dry chamber.
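As a rough illustration of how such a thermogravimetric capture figure can be computed: the mass lost in the decarbonation step is attributed to released CO₂ and expressed as a percentage of sample mass. All numbers below are assumed for illustration only, not the authors' data.

```python
# Sketch: CO2 capture from TGA mass loss across the decarbonation window
# (CaCO3 -> CaO + CO2). All masses are illustrative assumptions.
def co2_captured_pct(mass_before_mg, mass_after_mg, sample_mg):
    """CO2 capture as % of sample mass, from the TGA mass loss across the
    decarbonation window; the lost mass is taken as released CO2."""
    return 100.0 * (mass_before_mg - mass_after_mg) / sample_mg

base = co2_captured_pct(49.0, 48.5, 50.0)  # hypothetical reference mortar
carb = co2_captured_pct(48.0, 46.0, 50.0)  # hypothetical 10% carbon, CO2 chamber
print(round(100 * (carb - base) / base))   # % increase in capture
```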

Keywords: olive stone, activated carbon, porous mortar, CO₂ capture, circular economy

Procedia PDF Downloads 35
333 Investigation of Several New Ionic Liquids’ Behaviour during ²¹⁰Pb/²¹⁰Bi Cherenkov Counting in Waters

Authors: Nataša Todorović, Jovana Nikolov, Ivana Stojković, Milan Vraneš, Jovana Panić, Slobodan Gadžurić

Abstract:

The detection of ²¹⁰Pb levels in aquatic environments evokes interest in various scientific studies. Its precise determination is important not only for the radiological assessment of drinking waters; the distribution of ²¹⁰Pb and ²¹⁰Po in the marine environment is also significant for assessing the removal rates of particles from the ocean, particle fluxes during transport along the coast, and particulate organic carbon export in the upper ocean. Measurement techniques for ²¹⁰Pb determination (gamma spectrometry, alpha spectrometry, or liquid scintillation counting, LSC) are either time-consuming or demand expensive equipment or complicated chemical pre-treatments. However, one other possibility is to measure ²¹⁰Pb on an LS counter, if it is in equilibrium with its progeny ²¹⁰Bi, through the Cherenkov counting method. This method is unaffected by chemical quenching and allows easy sample preparation, but has the drawback of lower counting efficiencies than standard LSC methods, typically from 10% up to 20%. The aim of the research presented in this paper is to investigate a possible increase in the detection efficiency of Cherenkov counting during ²¹⁰Pb/²¹⁰Bi detection on a Quantulus 1220 LS counter. Considering the naturally low levels of ²¹⁰Pb in aqueous samples, the addition of ionic liquids to the counting vials with the analysed samples has the benefit of lowering the detection limit during ²¹⁰Pb quantification. Our results demonstrated that the ionic liquid 1-butyl-3-methylimidazolium salicylate is more efficient at increasing the Cherenkov counting efficiency than the previously explored 2-hydroxypropan-1-amminium salicylate. Consequently, the impact of a few other ionic liquids synthesized with the same cation group (1-butyl-3-methylimidazolium benzoate, 1-butyl-3-methylimidazolium 3-hydroxybenzoate, and 1-butyl-3-methylimidazolium 4-hydroxybenzoate) was explored in order to test their potential influence on the Cherenkov counting efficiency. 
It was confirmed that, among the explored ionic liquids, only those in the form of salicylates exhibit a wavelength-shifting effect. Namely, the addition of a small amount (around 0.8 g) of 1-butyl-3-methylimidazolium salicylate increases the detection efficiency from 16% to more than 70%, consequently reducing the detection threshold by more than four times. Moreover, the addition of ionic liquids could find application in the quantification of other radionuclides besides ²¹⁰Pb/²¹⁰Bi via the Cherenkov counting method.
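The quoted more-than-four-fold reduction in detection threshold is consistent with a detection limit that scales inversely with counting efficiency. A minimal sketch using Currie's expression, MDA = (2.71 + 4.65·√B)/(ε·t); the background B and counting time t below are assumed values for illustration only:

```python
import math

# Sketch: how the detection limit scales with counting efficiency, per
# Currie's expression. Background and counting time are assumptions.
def mda(background_counts, efficiency, time_s):
    """Minimum detectable activity (counts/s equivalent) for background B,
    counting efficiency eps and counting time t."""
    return (2.71 + 4.65 * math.sqrt(background_counts)) / (efficiency * time_s)

before = mda(100.0, 0.16, 3600.0)  # plain Cherenkov counting, eps = 16%
after = mda(100.0, 0.70, 3600.0)   # with the salicylate ionic liquid, eps = 70%
print(before / after)              # improvement factor = 0.70/0.16, about 4.4
```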

Keywords: liquid scintillation counting, ionic liquids, Cherenkov counting, ²¹⁰Pb/²¹⁰Bi in water

Procedia PDF Downloads 80
332 Valorization of Plastic and Cork Wastes in Design of Composite Materials

Authors: Svetlana Petlitckaia, Toussaint Barboni, Paul-Antoine Santoni

Abstract:

Plastic is a revolutionary material. However, the pollution caused by plastics damages the environment, human health and the economies of different countries, so it is important to find new ways to recycle and reuse plastic material. The use of waste materials as filler and as a matrix for composite materials is receiving increasing attention as an approach to increasing the economic value of waste streams. In this study, a new composite material was developed based on high-density polyethylene (HDPE) and polypropylene (PP) wastes from bottle caps and cork powder from unused (virgin) cork, which has a high capacity for thermal insulation. The composites were prepared with virgin and modified cork and were obtained through twin-screw extrusion and injection molding, with proportions of 0%, 5%, 10%, 15% and 20% of cork powder in the polymer matrix, with and without a coupling agent and flame retardant. The composites were characterised in terms of their mechanical, structural and thermal properties, and the effects of the cork fraction, the particle size and the use of flame retardant were examined. The properties of samples prepared with only the polymer and the cork were compared with those prepared with the coupling agent and a commercial flame retardant. It was observed that the morphology of the HDPE/cork and PP/cork composites revealed good distribution and dispersion of cork particles without agglomeration. The results showed that the addition of cork powder to the polymer matrix reduced the density of the composites; however, the incorporation of the natural additive does not have a significant effect on water absorption. Regarding the mechanical properties, the tensile strength decreases with the addition of cork powder, from 30 MPa to 19 MPa for the PP composites and from 19 MPa to 17 MPa for the HDPE composites. 
The thermal conductivity of the HDPE/cork and PP/cork composites is about 0.230 W/mK and 0.170 W/mK, respectively. The flammability of the composites was evaluated using a cone calorimeter. The results of the thermal analysis and fire tests show that it is important to add flame retardants to improve fire resistance: the samples prepared with the coupling agent and flame retardant have better mechanical properties and fire resistance. The feasibility of composites based on cork and on PP and HDPE wastes opens new ways of valorizing plastic waste and virgin cork, although the formulation of the composite materials must still be optimized.

Keywords: composite materials, cork and polymer wastes, flammability, modified cork

Procedia PDF Downloads 57
331 Multiperson Drone Control with Seamless Pilot Switching Using Onboard Camera and Openpose Real-Time Keypoint Detection

Authors: Evan Lowhorn, Rocio Alba-Flores

Abstract:

Traditional classification Convolutional Neural Networks (CNNs) attempt to classify an image in its entirety. This becomes problematic when trying to perform classification with a drone’s camera in real time due to unpredictable backgrounds. Object detectors with bounding boxes can be used to isolate individuals and other items, but the original backgrounds remain within these boxes. Such basic detectors have regularly been used to determine what type of object an item is, such as “person” or “dog.” A more recent advancement in computer vision, particularly for human imaging, is keypoint detection. Human keypoint detection goes beyond bounding boxes to fully isolate humans and plot points, or Regions of Interest (ROIs), on their bodies within an image. ROIs can include shoulders, elbows, knees, heads, etc. These points can then be related to each other and used in deep learning methods such as pose estimation. For drone control based on human motions, poses, or signals using the onboard camera, it is important to have a simple method for pilot identification among multiple individuals while also giving the pilot fine control options for the drone. To achieve this, the OpenPose keypoint detection network was used with body and hand keypoint detection enabled. OpenPose supports combining multiple keypoint detection methods in real time within a single network. Body keypoint detection allows a simple pose to act as the pilot identifier, and hand keypoint detection, with ROIs for each finger, then offers a greater variety of signal options for the pilot once identified. In this work, an individual must raise their non-control arm to be identified as the operator and send commands with the hand of their other arm. The drone ignores all other individuals in the onboard camera feed until the current operator lowers their non-control arm. 
When another individual wishes to operate the drone, they simply raise their arm once the current operator relinquishes control, and they can then begin controlling the drone with their other hand. This is all performed mid-flight, with no landing or script editing required. When using a desktop with a discrete NVIDIA GPU, the drone’s 2.4 GHz Wi-Fi connection, combined with restricting OpenPose to body and hand detection only, allows this control method to perform as intended while maintaining the responsiveness required for practical use.
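The operator hand-off described above can be sketched as a small selection routine over per-person keypoints. The indices below follow OpenPose's standard BODY_25 output layout, but the routine itself is a simplified illustration, not the authors' implementation:

```python
# Simplified sketch of the pilot hand-off logic. Keypoint indices follow
# OpenPose's BODY_25 layout (5: LShoulder, 7: LWrist); everything else is
# an illustration, not the authors' implementation.
L_SHOULDER, L_WRIST = 5, 7

def arm_raised(person):
    """Image y grows downward, so a raised wrist has a smaller y."""
    return person[L_WRIST][1] < person[L_SHOULDER][1]

def select_pilot(people, current):
    """people: per-person dicts mapping keypoint index -> (x, y).
    The current operator keeps control while their non-control (here: left)
    arm stays raised; otherwise the first person raising it takes over."""
    if current is not None and arm_raised(people[current]):
        return current
    for i, person in enumerate(people):
        if arm_raised(person):
            return i
    return None  # nobody is signalling; ignore all gestures
```

In the real system, the (x, y) pairs would come from OpenPose's per-frame keypoint output, and the hand keypoints of the selected pilot would then be decoded into flight commands.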

Keywords: computer vision, drone control, keypoint detection, openpose

Procedia PDF Downloads 158
330 A Rare Cause of Abdominal Pain Post Caesarean Section

Authors: Madeleine Cox

Abstract:

Objective: Discussion of the diagnosis of vernix caseosa peritonitis, recovery, and the subsequent caesarean section. Case: A 30-year-old G4P1 presented in labour at 40 weeks, planning a vaginal birth after a previous caesarean section. She underwent an emergency caesarean section due to concerns for fetal wellbeing on CTG. She was found to have a thin lower segment with a very small area of dehiscence centrally. The operation was uncomplicated, and she recovered and went home 2 days later. She then re-presented to the emergency department on day 6 postpartum feeling very unwell, with significant abdominal pain, tachycardia, and urinary retention. She had a raised white cell count of 13.7 with neutrophils of 11.64 and a CRP of 153. An abdominal ultrasound was poorly tolerated by the patient and did not aid in the diagnosis. Chest and abdominal X-rays were normal. She underwent a CT of the chest and abdomen, which found a small volume of free fluid with no apparent collection. Given that no obvious cause of her symptoms was found and the patient did not improve, she had a repeat CT 2 days later, which showed progression of the free fluid. A diagnostic laparoscopy was performed with the general surgeons, which revealed turbid fluid and an inflamed appendix, which was removed. The patient improved remarkably postoperatively. Histology showed periappendicitis and acute appendicitis with a marked serosal inflammatory reaction to vernix caseosa. Following this, the patient went on to recover well. Four years later, the patient was booked for an elective caesarean section; on entry into the abdomen there were very minimal adhesions, and the surgery and her subsequent recovery were uncomplicated. Discussion: This case represents the diagnostic dilemma of a patient who presents unwell without a clear cause. In this circumstance, multiple modes of imaging did not aid in her diagnosis, and so she underwent diagnostic surgery. 
It is important to evaluate whether or not a patient's symptoms follow the typical course of postoperative pain and to adjust management accordingly. A multidisciplinary team approach can help provide a diagnosis for these patients. Conclusion: Vernix caseosa peritonitis is a rare cause of acute abdomen postpartum. There are few reports in the literature of the initial presentation and no reports on the possible effects on future pregnancies. This patient did not have any complications in her following pregnancy or delivery secondary to her diagnosis of vernix caseosa peritonitis, which may assist in counselling other women who have received this uncommon diagnosis.

Keywords: peritonitis, obstetrics, caesarean section, pain

Procedia PDF Downloads 76
329 Tuning of Indirect Exchange Coupling in FePt/Al₂O₃/Fe₃Pt System

Authors: Rajan Goyal, S. Lamba, S. Annapoorni

Abstract:

The indirect exchange coupled system consists of two ferromagnetic layers separated by a non-magnetic spacer layer. The exchange coupling may be either ferromagnetic or antiferromagnetic depending on the thickness of the spacer layer. In the present work, the strength of the exchange coupling in FePt/Al₂O₃/Fe₃Pt has been investigated by varying the thickness of the Al₂O₃ spacer layer. The FePt/Al₂O₃/Fe₃Pt trilayer structure was fabricated on a Si <100> single crystal substrate using a sputtering technique. The thicknesses of FePt and Fe₃Pt were fixed at 60 nm and 2 nm, respectively, while the thickness of the Al₂O₃ spacer layer was varied from 0 to 16 nm. The normalized hysteresis loops recorded at room temperature in both the in-plane and out-of-plane configurations reveal that the easy axis lies in the plane of the film. It is observed that the hysteresis loop for ts = 0 nm does not exhibit any knee around H = 0, indicating that the hard FePt layer and the soft Fe₃Pt layer are strongly exchange coupled. However, the insertion of an Al₂O₃ spacer layer of thickness ts = 0.7 nm results in the appearance of a minor knee around H = 0, suggesting a weakening of the exchange coupling between FePt and Fe₃Pt. The disappearance of the knee with further increase of the spacer thickness up to 8 nm suggests the co-existence of ferromagnetic (FM) and antiferromagnetic (AFM) exchange interactions between FePt and Fe₃Pt. In addition, the out-of-plane hysteresis loop shows an asymmetry around H = 0. The exchange field Hex = (Hc↑ - Hc↓)/2, where Hc↑ and Hc↓ are the coercive fields estimated from the lower and upper branches of the hysteresis loop, increases from ~150 Oe to ~700 Oe. This behavior may be attributed to uncompensated moments at the interface between the hard FePt layer and the soft Fe₃Pt layer. Better insight into the variation of the indirect exchange coupling has been obtained using recoil curves. 
It is observed that almost closed recoil curves are obtained for ts = 0 nm up to a reverse field of ~5 kOe. On the other hand, the appearance of appreciably open recoil curves at a lower reverse field of ~4 kOe for ts = 0.7 nm indicates that the uncoupled soft phase undergoes irreversible magnetization reversal at a lower reverse field, again suggesting a weakening of the exchange coupling. The openness of the recoil curves decreases with increasing spacer thickness up to 8 nm. This behavior may be attributed to the competition between the FM and AFM exchange interactions: the FM coupling between FePt and Fe₃Pt, arising from the porous nature of the Al₂O₃, decreases much more slowly than the weak AFM coupling due to the interaction between the Fe ions of FePt and Fe₃Pt via the O ions of Al₂O₃. The hysteresis loops have also been simulated using Monte Carlo simulations based on the Metropolis algorithm to investigate the variation in the strength of the exchange coupling in the FePt/Al₂O₃/Fe₃Pt trilayer system.
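The exchange-field definition quoted above is a simple computation over the coercive fields of the two loop branches; a minimal sketch (the example branch fields are hypothetical, chosen only to land near the ~700 Oe scale reported):

```python
# Loop-shift (exchange) field from an asymmetric hysteresis loop, as defined
# in the abstract: Hex = (Hc_up - Hc_down) / 2. Example inputs are hypothetical.
def exchange_field(hc_up_oe, hc_down_oe):
    """Hex in Oe, from the lower-branch (Hc_up) and upper-branch (Hc_down)
    coercive fields of the hysteresis loop."""
    return (hc_up_oe - hc_down_oe) / 2.0

print(exchange_field(900.0, -500.0))  # 700.0 Oe
```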

Keywords: indirect exchange coupling, MH loop, Monte Carlo simulation, recoil curve

Procedia PDF Downloads 171