Search results for: radiation damage
410 Functional Performance of Unpaved Roads Reinforced with Treated Coir Geotextiles
Authors: Priya Jaswal, Vivek, S. K. Sinha
Abstract:
One of the most important and complicated factors influencing the functional performance of unpaved roads is traffic loading. The complexity of traffic loading arises from its variable magnitude and frequency, which cause unpaved roads to fail prematurely. Unpaved roads are low-volume roads, and as peri-urbanization increases, they act as a means of boosting the rural economy. This has also increased traffic on unpaved roads, intensifying the issues of settlement, rutting, and fatigue failure. This is a major concern for unpaved roads built on poor subgrade soil, as excessive rutting caused by heavy loads can lead to driver discomfort, vehicle damage, and increased maintenance costs. Some researchers have found that the rate of deformation of unpaved roads is higher under a sustained static load than under a rapidly changing one. Previously, common methods for overcoming rutting and fatigue failure included chemical stabilisation and fibre reinforcement; however, due to their high cost, engineers' attention has shifted to geotextiles used as reinforcement in unpaved roads. Geotextiles perform the functions of filtration, lateral confinement of the base material, vertical restraint of the subgrade soil, and the tension membrane effect. The use of geotextiles increases the strength of unpaved roads and is economically viable because it reduces the required aggregate thickness, and hence the earthwork needed; it is thus recommended for unpaved road applications. The majority of geotextiles used previously were polymeric, but with growing awareness of sustainable development to preserve the environment, researchers' focus has shifted to natural fibres. Coir is one such natural fibre; it offers the advantages of a higher tensile strength than other bast fibres, eco-friendliness, low cost, and biodegradability.
However, various researchers have discovered that the surface of coir fibre is covered with various impurities, voids, and cracks, which act as planes of weakness and limit the potential application of coir geotextiles. To overcome this limitation, chemical surface modification of coir geotextiles is widely accepted by researchers because it improves their mechanical properties. The current paper reviews the effect of using treated coir geotextiles as reinforcement on the load-deformation behaviour of a two-layered unpaved road model.
Keywords: coir, geotextile, treated, unpaved
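The aggregate-thickness saving that the abstract attributes to geotextile reinforcement can be illustrated with a simplified bearing-capacity check in the spirit of the Giroud-Noiray approach, in which reinforcement raises the subgrade bearing-capacity factor from roughly π (unreinforced, elastic limit) to π + 2 (reinforced, plastic limit with membrane support). The axle load, tire pressure, subgrade strength, and load-spread slope below are illustrative assumptions, not values from the paper.

```python
import math

def required_aggregate_thickness(axle_load_n, tire_pressure_pa, c_u, n_c, spread=0.6):
    """Find the aggregate thickness h (m) at which the wheel load,
    spread through the aggregate at a constant slope, no longer
    exceeds the subgrade bearing capacity n_c * c_u.
    Simplified pyramid load-spread model (illustrative only)."""
    wheel_load = axle_load_n / 2.0  # single wheel of a standard axle
    radius = math.sqrt(wheel_load / (math.pi * tire_pressure_pa))
    h = 0.0
    while True:
        # stress on the subgrade under a circular area widened by spread * h
        sigma = wheel_load / (math.pi * (radius + spread * h) ** 2)
        if sigma <= n_c * c_u:
            return h
        h += 0.001

C_U = 30e3  # undrained shear strength of a weak subgrade, Pa (assumed)
h_unreinforced = required_aggregate_thickness(80e3, 550e3, C_U, math.pi)
h_reinforced = required_aggregate_thickness(80e3, 550e3, C_U, math.pi + 2.0)
print(h_unreinforced, h_reinforced)  # reinforced case needs less aggregate
```

Under these assumed numbers the reinforced layer comes out noticeably thinner, which is the mechanism behind the reduced earthwork mentioned in the abstract.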
Procedia PDF Downloads 95
409 Selecting the Best Risk Exposure to Assess Collision Risks in Container Terminals
Authors: Mohammad Ali Hasanzadeh, Thierry Van Elslander, Eddy Van De Voorde
Abstract:
About 90 percent of world merchandise trade by volume is carried by sea. Maritime transport remains the backbone of international trade and globalization, and all seaborne goods pass through at least two ports, one of origin and one of destination. Among seaborne cargos, container traffic is a prosperous market, accounting for about 16% of volume. Although containerized cargo is smaller in tonnage, containers carry the highest-value cargo of all. That is why efficient handling of containers in ports is very important. Accidents are a foremost cause of port inefficiency and a surge in total transport cost. Even with port safety management systems (PSMS) in place, statistics show that numerous accidents occur in ports. Some of them claim people's lives; others damage goods, vessels, port equipment, and/or the environment. Several accident investigations illustrate that the most common accidents take place during transport operations, sometimes accounting for 68.6% of all events; therefore, providing a safer workplace depends on reducing collision risk. In order to quantify risks in the port area, different variables can be used as exposure measures. One of the main motives for defining and using exposure in studies related to infrastructure is to account for differences in intensity of use, so as to make comparisons meaningful. In various studies related to handling containers in ports and intermodal terminals, different risk exposures and the likelihood of each event have been selected. Vehicle collision within the port area (10⁻⁷ per kilometer of vehicle distance travelled) and dropping containers from cranes, forklift trucks, or rail-mounted gantries (1 × 10⁻⁵ per lift) are some examples.
According to the objective of the current research, three categories of accidents were selected for collision risk assessment: fall of a container during ship-to-shore operation, dropping of a container during transfer operation, and collision between vehicles and objects within the terminal area. Subsequently, the consequences, exposure, and probability were identified for each accident. Hence, reducing collision risk profoundly relies on picking the right risk exposures and probabilities of the selected accidents to prevent collision accidents in container terminals; in the framework of risk calculations, such risk exposures and probabilities can also be useful in assessing the effectiveness of safety programs in ports.
Keywords: container terminal, collision, seaborne trade, risk exposure, risk probability
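The kind of calculation the abstract describes (risk = probability per unit of exposure × amount of exposure × consequence) can be sketched as follows. The per-unit rates are the ones quoted above; the annual exposure volumes and consequence costs are purely illustrative assumptions.

```python
# Expected annual loss = rate per exposure unit * annual exposure * consequence
accidents = {
    # name: (probability per exposure unit, annual exposure, consequence in EUR)
    "vehicle collision": (1e-7, 2_000_000, 50_000),  # per vehicle-km; 2M km/yr assumed
    "container drop":    (1e-5, 150_000, 80_000),    # per lift; 150k lifts/yr assumed
}

def expected_annual_loss(prob_per_unit, annual_exposure, consequence):
    """Classic risk triplet product used to rank and compare accident categories."""
    return prob_per_unit * annual_exposure * consequence

total = sum(expected_annual_loss(*v) for v in accidents.values())
for name, v in accidents.items():
    print(name, expected_annual_loss(*v))
print("total expected annual loss:", total)
```

Comparing such expected losses before and after a safety program is one concrete way the abstract's "assessing the effectiveness of safety programs" could be quantified.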
Procedia PDF Downloads 377
408 Graphene Metamaterials Supported Tunable Terahertz Fano Resonance
Authors: Xiaoyong He
Abstract:
The manipulation of THz waves is still a challenging task due to the lack of natural materials that interact with them strongly. Metamaterials (MMs), designed by tailoring the characteristics of their unit cells (meta-molecules), may solve this problem. However, because of Ohmic and radiation losses, MM devices suffer from dissipation and low quality factors (Q-factors). This dilemma may be circumvented by Fano resonance, which arises from the destructive interference between a bright continuum mode and a dark discrete mode (or a narrow resonance). Unlike the symmetric Lorentzian spectral curve, Fano resonance exhibits a distinctly asymmetric line shape, an ultrahigh quality factor, and steep spectral variations. Fano resonance is usually realized through symmetry breaking. However, if concentric double rings (DR) are placed close to each other, the near-field coupling between them gives rise to two hybridized modes (bright and narrowband dark modes) because of the local asymmetry, resulting in the characteristic Fano line shape. Furthermore, from a practical viewpoint, it is highly desirable to modulate the Fano spectral curves conveniently, which is an important and interesting research topic. In current Fano systems, tunable spectral curves can be realized by adjusting the geometrical structural parameters or the magnetic field biasing a ferrite-based structure. However, due to the limited dispersion properties of active materials, it is still difficult to tailor Fano resonance conveniently with fixed structural parameters. With its favorable properties of extreme confinement and high tunability, graphene is a strong candidate to achieve this goal. The DR structure supports the excitation of so-called "trapped modes," with the merits of structural simplicity and high-quality resonances in thin structures.
By depositing circular graphene DRs on a SiO2/Si/polymer substrate, tunable Fano resonance has been theoretically investigated in the terahertz regime, including the effects of the graphene Fermi level, structural parameters, and operation frequency. The results show that the Fano peak can be efficiently modulated because of the strong coupling between the incident waves and the graphene ribbons. As the Fermi level increases, the peak amplitude of the Fano curve increases, and the resonant peak shifts to higher frequency. The amplitude modulation depth of the Fano curves is about 30% as the Fermi level varies in the range of 0.1-1.0 eV. The optimum gap distance between the DRs is about 8-12 μm, where the figure of merit peaks. As the graphene ribbon width increases, the Fano spectral curves broaden, and the resonant peak blue-shifts. The results are very helpful for developing novel graphene plasmonic devices, e.g. sensors and modulators.
Keywords: graphene, metamaterials, terahertz, tunable
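The asymmetric line shape discussed above follows the standard Fano formula, F(ε) ∝ (q + ε)² / (1 + ε²) with reduced detuning ε = 2(ω − ω₀)/Γ and asymmetry parameter q. A minimal sketch; the resonance frequency, linewidth, and q below are arbitrary illustrative values, not those of the graphene structure in the paper.

```python
def fano_lineshape(omega, omega0, gamma, q):
    """Normalized Fano profile (q + eps)^2 / (1 + eps^2),
    with eps = 2*(omega - omega0)/gamma.
    q -> infinity recovers a symmetric Lorentzian;
    q = 0 gives a symmetric anti-resonance (dip)."""
    eps = 2.0 * (omega - omega0) / gamma
    return (q + eps) ** 2 / (1.0 + eps ** 2)

# Illustrative THz resonance: f0 = 1.0 THz, linewidth 0.05 THz, q = 1
f0, gamma, q = 1.0, 0.05, 1.0
peak = fano_lineshape(f0 + gamma / (2.0 * q), f0, gamma, q)  # maximum 1 + q^2 at eps = 1/q
dip = fano_lineshape(f0 - q * gamma / 2.0, f0, gamma, q)     # zero at eps = -q
```

The peak-dip pair at ε = 1/q and ε = −q is exactly the steep, asymmetric spectral variation the abstract contrasts with a Lorentzian.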
Procedia PDF Downloads 345
407 The Stereotypical Images of Marginalized Women in the Poetry of Rita Dove
Authors: Wafaa Kamal Isaac
Abstract:
This paper attempts to shed light upon the stereotypical images of marginalized black women as shown through the poetry of Rita Dove. Meanwhile, it explores how stereotypical images held by society and public perceptions perpetuate the marginalization of black women. Dove is considered one of the most important African-American poets, one who devoted her writings to exploring the problem of identity that confronted marginalized women in America. Besides tackling the issue of black women's stereotypical images, this paper focuses upon the psychological damage that black women suffered due to their stripped identity. In 'Thomas and Beulah', Dove reflects the black woman's longing for her homeland in order to make up for her lost identity. This poem conveys atavistic feelings through certain recurrent images, both aural and visual, such as the image of Beulah, who represents the African-American woman searching for an identity, as she is denied and humiliated in the newly founded society. In an attempt to protest against the stereotypical mule image that had been imposed upon black women in America, Dove in 'On the Bus with Rosa Parks' tries to ignite the beaten spirits to struggle for their own rights by revitalizing the rebellious nature and strong determination of the historical figure Rosa Parks, who sparked the Civil Rights Movement. In 'Daystar', Dove proves that black women are subjected to double-edged oppression: firstly, in terms of race, as a black woman in an unjust white society that violates her rights due to her black origins, and secondly, in terms of gender, as a member of the female sex that is meant to exist only to serve man's needs. Similarly, in the 'Adolescence' series, Dove focuses on the double marginalization which black women had experienced. It concludes that the marginalization of black women has resulted from the domination of the masculine world and the oppression of the white world.
Moreover, Dove's 'Beauty and the Beast' investigates the African-American women's problem of estrangement and identity crisis in America. It also sheds light upon the psychological consequences that resulted from the violation of marginalized women's identity. Furthermore, this poem shows the black women's self-debasement, helplessness, and double consciousness that emanate from the sense of uprootedness. Finally, this paper finds that the negative, debased, and inferior stereotypical image held by society not only contributed to the marginalization of black women but also silenced and muted their voices.
Keywords: stereotypical images, marginalized women, Rita Dove, identity
Procedia PDF Downloads 166
406 The Microstructural and Mechanical Characterization of Organo-Clay-Modified Bitumen, Calcareous Aggregate, and Organo-Clay Blends
Authors: A. Gürses, T. B. Barın, Ç. Doğar
Abstract:
Bitumen, a viscous organic mixture of varied chemical composition, has been widely used as the binder for aggregate in road pavements due to its good viscoelastic properties. Bitumen is a liquid at high temperatures and becomes brittle at low temperatures; this temperature sensitivity can cause rutting and cracking of the pavement and limit its application. Therefore, the properties of existing asphalt materials need to be enhanced. Pavements with polymer-modified bitumen exhibit greater resistance to rutting and thermal cracking and decreased fatigue damage, stripping, and temperature susceptibility; however, such modifiers are expensive and their application has disadvantages. Bituminous mixtures are composed of very irregular aggregates bound together with hydrocarbon-based asphalt, with a low volume fraction of voids dispersed within the matrix. Montmorillonite (MMT) is an abundant, low-cost layered silicate consisting of tetrahedral silicate and octahedral hydroxide sheets. Recently, layered silicates have been widely used for the modification of polymers, as well as in many other fields; however, there are currently few studies on the preparation of MMT-modified asphalt. In this study, organo-clay-modified bitumen, and calcareous aggregate and organo-clay blends, were prepared by a hot blending method with OMMT, which was synthesized from MMT using a cationic surfactant (cetyltrimethylammonium bromide, CTAB) with a long hydrocarbon chain. When the exchangeable cations in the interlayer region of pristine MMT are exchanged with hydrocarbon-bearing surfactant ions, the MMT becomes organophilic and more compatible with bitumen. The effects of the superhydrophobic OMMT on the microstructural and mechanical properties (Marshall stability and volumetric parameters) of the prepared blends were investigated.
Stability and volumetric parameters of the prepared blends were measured using the Marshall test. Also, in order to investigate the morphological and microstructural properties of the organo-clay-modified bitumen and the calcareous aggregate and organo-clay blends, their SEM and HRTEM images were taken. It was observed that the stability and volumetric parameters of the prepared mixtures improved significantly compared to conventional hot mixes and even the stone matrix mixture. A microstructural analysis based on SEM images indicates that the organo-clay platelets dispersed in the bitumen play a dominant role in enhancing bitumen-aggregate interactions.
Keywords: hot mix asphalt, stone matrix asphalt, organo clay, Marshall test, calcareous aggregate, modified bitumen
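The volumetric parameters reported alongside Marshall stability are conventionally derived from specific gravities. A minimal sketch of the standard relations (air voids Va, voids in mineral aggregate VMA, voids filled with asphalt VFA); the input values are illustrative, not the paper's data.

```python
def marshall_volumetrics(gmb, gmm, gsb, ps):
    """Standard asphalt-mix volumetrics.
    gmb: bulk specific gravity of the compacted mix
    gmm: theoretical maximum specific gravity of the mix
    gsb: bulk specific gravity of the aggregate
    ps:  aggregate content, percent by total mass of mix"""
    va = 100.0 * (1.0 - gmb / gmm)   # air voids, %
    vma = 100.0 - gmb * ps / gsb     # voids in mineral aggregate, %
    vfa = 100.0 * (vma - va) / vma   # voids filled with asphalt, %
    return va, vma, vfa

# Illustrative mix: Gmb = 2.35, Gmm = 2.45, Gsb = 2.65, 95% aggregate by mass
va, vma, vfa = marshall_volumetrics(gmb=2.35, gmm=2.45, gsb=2.65, ps=95.0)
```

Shifts in exactly these three quantities are what "improved volumetric parameters" would be read from when comparing the OMMT blends against conventional and stone matrix mixes.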
Procedia PDF Downloads 239
405 A Theoretical Approach of Tesla Pump
Authors: Cristian Sirbu-Dragomir, Stefan-Mihai Sofian, Adrian Predescu
Abstract:
This paper aims to study Tesla pumps for circulating biofluids, with the goal of making a small pump for this purpose. This type of pump is studied because of the following characteristics: no blades, and hence very low friction forces; low production cost; adaptability to different types of fluids; cavitation approaching zero; low shocks due to the lack of blades; rare maintenance due to low cavitation; very little turbulence in the fluid; few changes in the direction of the fluid (compared to bladed rotors); increased efficiency at low power; fast acceleration; low torque requirement; and no shock loading of blades at sudden starts and stops. All these elements are necessary to be able to make a small pump that could be inserted into the thoracic cavity. The pump will be designed to combat myocardial infarction. Because the pump must be inserted in the thoracic cavity, elements such as low friction forces, shocks as low as possible, low cavitation, and as little maintenance as possible are very important. The operation should be performed once, without having to change the rotor after a certain time. Given the very small size of the pump, the blades of a classic rotor would be very thin, and sudden starts and stops could cause considerable damage or require a very expensive material. At the same time, since this is a medical device, low cost is important so that it remains easily accessible to the population. The absence of the turbulence and vortices caused by a classic rotor is again a key element: in blood circulation, the flow must be laminar and not turbulent, as turbulent flow can even cause a heart attack. Due to these aspects, Tesla's model could be ideal for this work. Usually, the pump is considered to reach an efficiency of 40% when used at very high powers.
However, the author of this type of pump claimed that the maximum efficiency the pump can achieve is 98%. The key element that could help to achieve this efficiency, or one as close to it as possible, is the fact that the pump will be used for low volumes and pressures. The key elements for obtaining the best efficiency with this model are the number of rotors placed in parallel and the distance between them. The distance between them must be small, which helps to make the pump as small as possible. Such a rotor consists of several parallel discs with openings cut at their centers. The space between the discs creates a suction effect, pulling the liquid through the holes in the rotor and throwing it outwards. Also, a very important element is the viscosity of the liquid: it dictates the distance between the discs needed to achieve power transfer with minimal losses.
Keywords: lubrication, temperature, tesla-pump, viscosity
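The link between viscosity and disc spacing can be sketched with a rule of thumb often quoted for Tesla turbomachinery: the gap is chosen on the order of the laminar boundary-layer length sqrt(ν/ω), commonly with a factor around π, so that the two disc boundary layers just fill the gap. Both the factor and the fluid/speed values below are illustrative assumptions, not results from the paper.

```python
import math

def tesla_disc_gap(kinematic_viscosity, rpm, factor=math.pi):
    """Rule-of-thumb disc gap for a Tesla rotor:
    gap ~ factor * sqrt(nu / omega), omega in rad/s.
    The factor ~pi is a commonly quoted heuristic, not a derived optimum."""
    omega = rpm * 2.0 * math.pi / 60.0
    return factor * math.sqrt(kinematic_viscosity / omega)

# Blood: nu ~ 3.3e-6 m^2/s (assumed); small pump spinning at 5000 rpm (assumed)
gap = tesla_disc_gap(3.3e-6, 5000.0)
print(f"suggested disc gap: {gap * 1e6:.0f} um")
```

Note how the gap shrinks as rotational speed rises and grows with viscosity, which is exactly why the abstract singles out viscosity as dictating the disc spacing.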
Procedia PDF Downloads 179
404 Effects of Heat Treatment on the Mechanical Properties of Kenaf Fiber
Authors: Paulo Teodoro De Luna Carada, Toru Fujii, Kazuya Okubo
Abstract:
Natural fibers have a wide variety of uses (e.g., rope, paper, and building materials). One specific application is in the field of composite materials (i.e., green composites). A huge amount of research is being done in this field due to rising concerns about the harmful effects of synthetic materials on the environment. There are several natural fibers used in this field, one of which can be extracted from a plant called kenaf (Hibiscus cannabinus L.). Kenaf fiber is regarded as a good alternative because the plant is easy to grow and the fiber is easy to extract. Additionally, it has good properties. Treatments, which are classified as mechanical or chemical in nature, can be done in order to improve the properties of the fiber. The aim of this study is to assess the effects of heat treatment on kenaf fiber. It specifically aims to observe the effect on the tensile strength and modulus of the fiber. Kenaf fiber bundles with an average diameter of at most 100 μm were used for this purpose. Heat treatment was done using a constant-temperature oven at the following heating temperatures: (1) 160°C, (2) 180°C, and (3) 200°C, for a duration of one hour. As a basis for comparison, tensile tests were first done on kenaf fibers without any heat treatment. For every heating temperature, three groups of samples were prepared. Two groups were used for tensile testing (one group was tested right after heat treatment, while the other was kept inside a closed container with a relative humidity of at least 95% for two days). The third group was used to observe how much moisture the treated fiber would absorb when enclosed in a high-moisture environment for two days. The results showed that kenaf fiber can retain its tensile strength when heated up to a temperature of 160°C. However, when heated at a temperature of about 180°C or higher, the tensile strength decreases significantly. The same behavior was observed for the tensile modulus of the fiber.
Additionally, the fibers that were stored for two days absorbed nearly the same amount of moisture (about 20% of the dried weight) regardless of the heating temperature. Heat treatment might have damaged the fiber in some way. An additional test was done in order to see whether the damage due to heat treatment is attributable to changes in the viscoelastic property of the fiber. The findings showed that kenaf fibers can be heated to at most 160°C while retaining good tensile strength and modulus. Additionally, heating the fiber at high temperature (>180°C) causes changes in its viscoelastic property. The results of this study are significant for processes that require heat treatment, not only of kenaf fiber but potentially of natural fibers in general.
Keywords: heat treatment, kenaf fiber, natural fiber, mechanical properties
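The roughly 20% figure quoted above is moisture regain on a dry basis. A minimal sketch of that calculation, with made-up sample masses rather than the study's measurements:

```python
def moisture_regain_percent(conditioned_mass, dry_mass):
    """Moisture absorbed, expressed as a percentage of the dried weight."""
    return 100.0 * (conditioned_mass - dry_mass) / dry_mass

# Hypothetical fiber bundle: 0.50 g dry, 0.60 g after two days at >=95% RH
regain = moisture_regain_percent(0.60, 0.50)  # about 20% of the dried weight
```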
Procedia PDF Downloads 354
403 The Asymptotic Hole Shape in Long Pulse Laser Drilling: The Influence of Multiple Reflections
Authors: Torsten Hermanns, You Wang, Stefan Janssen, Markus Niessen, Christoph Schoeler, Ulrich Thombansen, Wolfgang Schulz
Abstract:
In long pulse laser drilling of metals, it can be demonstrated that the ablation shape approaches a so-called asymptotic shape, such that it changes only slightly or not at all with further irradiation. These findings are already known from ultrashort pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in long pulse drilling of metals is identified; a model for the description of the asymptotic hole shape is numerically implemented, tested, and clearly confirmed by comparison with experimental data. The model assumes a robust process, in the sense that the characteristics of the melt flow inside the arising melt film do not change qualitatively when the laser or processing parameters are changed. Only robust processes are technically controllable and thus of industrial interest. The condition for a robust process is a threshold for the mass flow density of the assist gas at the hole entrance, which has to be exceeded. Within the robust process regime, the melt flow characteristics can be captured by only one model parameter, namely the intensity threshold. In analogy to USP ablation (where it has long been known that the resulting hole shape follows from a threshold for the absorbed laser fluence), it is demonstrated that in robust long pulse ablation the asymptotic shape forms such that the absorbed heat flux density equals the intensity threshold along the whole contour. The intensity threshold depends on the specific material and radiation properties and has to be calibrated by one reference experiment. The model is implemented in a numerical simulation called AsymptoticDrill, which requires so few resources that it can run on common desktop PCs, laptops, or even smart devices.
Resulting hole shapes can be calculated within seconds, which is a clear advantage over other simulations presented in the literature in the context of everyday industrial usage. Against this background, the software is additionally equipped with a user-friendly GUI that allows intuitive usage. Individual parameters can be adjusted using sliders, while the simulation result appears immediately in an adjacent window. The platform-independent development allows flexible usage: an operator can conveniently adjust the process on a tablet, while a developer can run the tool in the office in order to design new processes. Furthermore, to the best knowledge of the authors, AsymptoticDrill is the first simulation that allows the import of measured real beam distributions and thus calculates the asymptotic hole shape on the basis of the real state of the specific manufacturing system. In this paper, the emphasis is placed on investigating the effect of multiple reflections on the asymptotic hole shape, which gains in importance when drilling holes with large aspect ratios.
Keywords: asymptotic hole shape, intensity threshold, long pulse laser drilling, robust process
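In the single-absorption limit (i.e., deliberately neglecting the multiple reflections this paper focuses on), the threshold condition already fixes the asymptotic hole opening: the wall can only recede where the local intensity exceeds the threshold, so for a Gaussian beam the entry radius satisfies I(r) = I_th. A minimal sketch under that simplification, with illustrative beam and threshold values that are not taken from the paper:

```python
import math

def asymptotic_entry_radius(peak_intensity, threshold, beam_radius):
    """Radius where a Gaussian beam I(r) = I0 * exp(-2 r^2 / w^2)
    drops to the intensity threshold; outside it no ablation occurs.
    Single-absorption picture only (multiple reflections neglected)."""
    if threshold >= peak_intensity:
        return 0.0  # the beam nowhere exceeds the threshold
    return beam_radius * math.sqrt(0.5 * math.log(peak_intensity / threshold))

# Illustrative values: I0 = 1e10 W/m^2, I_th = 1e9 W/m^2, w = 50 um
r = asymptotic_entry_radius(1e10, 1e9, 50e-6)
```

Multiple reflections redistribute absorbed flux deeper into the hole, which is why, for large aspect ratios, the full model of the paper deviates from this simple threshold-at-the-entrance picture.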
Procedia PDF Downloads 214
402 Biocontrol Potential of Trichoderma longibrachiatum as an Entomopathogenic Fungi against Bemisia tabaci
Authors: Waheed Anwar, Kiran Nawaz, Muhammad Saleem Haider, Ahmad Ali Shahid, Sehrish Iftikhar
Abstract:
The whitefly, Bemisia tabaci (Gennadius), is a complex of insect species, including many cryptic species or biotypes. The whitefly damages many ornamental and horticultural crops by feeding directly on phloem sap, resulting in sooty mould and a critical decrease in the photosynthesis rate of many host plants. Biological control has emerged as one of the most important methods for the management of soil-borne plant pathogens. Among the natural enemies of insects, various entomopathogenic fungi are widely used for biological control of pests. The purpose of this research was to find indigenous insect-associated fungi and test their virulence against Bemisia tabaci. A detailed survey of cotton fields for sample collection was conducted during July and August 2013 in the central mixed zone of Punjab, Pakistan. Sabouraud dextrose peptone yeast extract agar (SDAY) medium was used for the isolation of T. longibrachiatum, and the morphological characterization of the isolates was studied using different dichotomous keys. Molecular identification of the pathogen was confirmed by amplifying the internal transcribed spacer (ITS) region; BLASTn analysis showed 100% homology with sequences already reported in the database. For the bioassays, two conidial concentrations of T. longibrachiatum, 4 × 10⁸/mL and 4 × 10⁴/mL, were sprayed in clip cages on nymph and adult B. tabaci, respectively, under controlled environmental conditions. The pathogenicity of T. longibrachiatum was tested on nymph and adult whitefly to check mortality. Mortality of B. tabaci at the nymphal and adult stages was observed at 24-hour intervals. Percentage mortality of nymphs treated with 4 × 10⁴/mL conidia of T. longibrachiatum was 20, 24, 36, and 40% after 48, 72, 96, and 120 hours, respectively. However, no considerable difference in percentage mortality of whitefly was recorded between 120 and 144 hours, whereas there were great variations in the mortality rate after 24, 48, 72, and 96 hours.
The efficacy of T. longibrachiatum as an entomopathogenic fungus was evaluated on the adult and nymphal stages of whitefly. Trichoderma longibrachiatum showed maximum activity on the nymphal stages of whitefly as compared to the adult stages. The percentage of conidial germination was also recorded on the outer surface of the adult and nymphal stages of B. tabaci. The present findings indicate that T. longibrachiatum is an entomopathogenic fungus against B. tabaci; many Trichoderma species have already been reported as antagonistic organisms against a wide range of bacterial and fungal pathogens.
Keywords: efficacy, Trichoderma, virulence, bioassay
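Mortality percentages in bioassays like this are commonly corrected for natural (control) mortality with Abbott's formula before treatments are compared. A minimal sketch, using the treated values quoted above and an assumed control mortality, since the abstract does not report one:

```python
def abbott_corrected_mortality(treated_pct, control_pct):
    """Abbott's formula: corrected % = 100 * (treated - control) / (100 - control)."""
    return 100.0 * (treated_pct - control_pct) / (100.0 - control_pct)

control = 5.0  # assumed natural mortality in untreated controls, %
# Treated nymph mortality at successive observation times, from the abstract
corrected = [abbott_corrected_mortality(m, control) for m in (20.0, 24.0, 36.0, 40.0)]
```

The corrected series preserves the rising trend over time while removing the share of deaths that would have occurred without the fungus.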
Procedia PDF Downloads 288
401 Dynamic Two-Way FSI Simulation for a Blade of a Small Wind Turbine
Authors: Alberto Jiménez-Vargas, Manuel de Jesús Palacios-Gallegos, Miguel Ángel Hernández-López, Rafael Campos-Amezcua, Julio Cesar Solís-Sanchez
Abstract:
An optimal wind turbine blade design must be capable of capturing as much energy as possible from the wind source available at the area of interest. Many times, an optimal design means the use of large quantities of material and complicated processes that make the wind turbine more expensive and, therefore, less cost-effective. In the construction and installation of a wind turbine, the blades may account for up to 20% of the overall price, and they are all the more important because they are part of the rotor system, which transmits the energy from the wind to the power train and where the static and dynamic design loads for the whole wind turbine originate. The aim of this work is the development of a blade fluid-structure interaction (FSI) simulation that allows the identification of the major damage zones during normal production operation, so that better design and optimization decisions can be taken. The simulation is a dynamic case, since we have a time-history wind velocity as the inlet condition instead of a constant wind velocity. The process begins with the freely available software NuMAD (NREL), used to model the blade and assign material properties to it; the 3D model is then exported to the ANSYS Workbench platform, where, before setting up the FSI system, a modal analysis is performed to identify natural frequencies and mode shapes. The FSI analysis is carried out with the two-way technique, which begins with a CFD simulation to obtain the pressure distribution on the blade surface; these results are then used as the boundary condition for the FEA simulation to obtain the deformation levels for the first time step. For the second time step, the CFD simulation is reconfigured automatically with the next inlet wind velocity and the deformation results from the previous time step. The analysis continues this iterative cycle, solving time step by time step, until the entire load case is completed.
This work is part of a set of projects managed by a national consortium called "CEMIE-Eólico" (Mexican Center in Wind Energy Research), created to strengthen technological and scientific capacities, promote the creation of specialized human resources, and link the academic and private sectors in the national territory. The analysis belongs to the design of a rotor system for a 5 kW wind turbine intended for installation at the Isthmus of Tehuantepec, Oaxaca, Mexico.
Keywords: blade, dynamic, fsi, wind turbine
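The two-way, time-step-by-time-step coupling described above can be caricatured with a one-degree-of-freedom structural model driven by a velocity-dependent aerodynamic load: each step exchanges the flow result with the structure until the pair converges, then the converged deformation is carried into the next step. Everything here (the load law, stiffness, relaxation factor) is an illustrative stand-in for the CFD/FEA exchange, not the authors' setup.

```python
def two_way_fsi_toy(wind_history, k=500.0, c_load=6.0, relax=0.5, iters=20):
    """March through a time history of wind speeds. At each time step,
    inner iterations exchange load and deformation until converged:
      'CFD' stand-in:  load = c_load * v^2 * (1 - x)  (deformation feeds back)
      'FEA' stand-in:  x    = load / k                (linear spring)
    Returns the deformation at each time step."""
    deformations = []
    x = 0.0  # deformation carried from the previous time step
    for v in wind_history:
        for _ in range(iters):
            load = c_load * v ** 2 * (1.0 - x)  # flow solve on the deformed shape
            x_new = load / k                    # structural solve under that load
            x += relax * (x_new - x)            # under-relaxed coupling update
        deformations.append(x)
    return deformations

defs = two_way_fsi_toy([5.0, 8.0, 12.0, 7.0])  # time-history inlet velocities (made up)
```

The under-relaxed inner loop mirrors why two-way schemes iterate within a time step: the load depends on the shape and the shape on the load, so neither solver can be run just once.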
Procedia PDF Downloads 482
400 Electrophoretic Light Scattering Based on Total Internal Reflection as a Promising Diagnostic Method
Authors: Ekaterina A. Savchenko, Elena N. Velichko, Evgenii T. Aksenov
Abstract:
The development of pathological processes, such as cardiovascular and oncological diseases, is accompanied by changes in molecular parameters in cells, tissues, and serum. The study of the behavior of protein molecules in solution is of primary importance for the diagnosis of such diseases. Various physical and chemical methods are used to study molecular systems. With the advent of the laser and advances in electronics, methods such as scanning electron microscopy, sedimentation analysis, nephelometry, and static and dynamic light scattering have become the most universal, informative, and accurate tools for estimating the parameters of nanoscale objects. Electrophoretic light scattering is the most effective technique. It has high potential in the study of biological solutions and their properties. This technique allows one to investigate the processes of aggregation and dissociation of different macromolecules and to obtain information on their shapes, sizes, and molecular weights. Electrophoretic light scattering is an analytical method for registering the motion of microscopic particles under the influence of an electric field by means of quasi-elastic light scattering in a homogeneous solution, with subsequent registration of the spectral or correlation characteristics of the light scattered by the moving objects. We modified the technique by using the regime of total internal reflection with the aim of increasing its sensitivity and reducing the volume of the sample to be investigated, which opens the prospect of automating simultaneous multiparameter measurements. In addition, the method of total internal reflection allows one to study biological fluids at the level of single molecules, which also makes it possible to increase the sensitivity and informativeness of the results, because the data obtained from an individual molecule are not averaged over an ensemble, which is important in the study of biomolecular fluids.
To the best of our knowledge, the study of electrophoretic light scattering in the regime of total internal reflection is proposed here for the first time; latex microspheres 1 μm in size were used as test objects. In this study, the total internal reflection regime was realized on a quartz prism on which the free electrophoresis regime was set. A semiconductor laser with a wavelength of 655 nm was used as the radiation source, and the light scattering signal was registered by a PIN diode; the photodetector signal was then transmitted to a digital oscilloscope and a computer. The autocorrelation functions and fast Fourier transforms, both in the regime of Brownian motion and under the applied field, were calculated to obtain the parameters of the objects investigated. The main result of the study was the dependence of the autocorrelation function on the concentration of microspheres and the applied field magnitude. The effect of heating became more pronounced with increasing sample concentration and electric field. The results obtained in our study demonstrate the applicability of the method to the examination of liquid solutions, including biological fluids.
Keywords: light scattering, electrophoretic light scattering, electrophoresis, total internal reflection
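The signal processing described above ultimately extracts an electrophoretic mobility from the Doppler shift of the scattered light: f_D = q·v/(2π), with scattering vector q = (4πn/λ)·sin(θ/2) and drift velocity v = μE. A minimal sketch of that inversion; the scattering angle, refractive index, field strength, and measured frequency are illustrative assumptions (only the 655 nm wavelength comes from the abstract).

```python
import math

def scattering_vector(n, wavelength, theta_deg):
    """q = (4*pi*n / lambda) * sin(theta/2), in 1/m."""
    return 4.0 * math.pi * n / wavelength * math.sin(math.radians(theta_deg) / 2.0)

def mobility_from_doppler(f_doppler, q, e_field):
    """Drift velocity v = 2*pi*f_D / q; electrophoretic mobility mu = v / E."""
    v = 2.0 * math.pi * f_doppler / q
    return v / e_field

q = scattering_vector(n=1.33, wavelength=655e-9, theta_deg=20.0)  # 655 nm source
mu = mobility_from_doppler(f_doppler=150.0, q=q, e_field=2000.0)  # 150 Hz shift (assumed)
```

In practice, the Doppler frequency would be read off as the peak of the FFT (or the oscillation period of the autocorrelation function) of the photodetector signal recorded under the applied field.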
Procedia PDF Downloads 216
399 Commercial Winding for Superconducting Cables and Magnets
Authors: Glenn Auld Knierim
Abstract:
Automated robotic winding of high-temperature superconductors (HTS) addresses the precision, efficiency, and reliability critical to the commercialization of products. Today's HTS materials are mature and commercially promising but require manufacturing attention. Owing in particular to the exaggerated rectangular cross-section (very thin by very wide), winding precision is critical to managing the stress that can crack the fragile ceramic superconductor (SC) layer and destroy the SC properties. Damage potential is highest during peak operations, where winding stress magnifies operational stress. Another challenge is that operational parameters such as magnetic field alignment affect design performance. Winding process performance, including precision, capability for geometric complexity, and efficient repeatability, is required for commercial production of current HTS. Due to winding limitations, current HTS magnets focus on simple pancake configurations; HTS motors, generators, MRI/NMR, fusion, and other projects await robotically wound solenoid, planar, and spherical magnet configurations. As with conventional power cables, full-transposition winding is required for long-length alternating current (AC) and pulsed power cables. Robotic production is required for transposition: periodically swapping cable conductors and placing them into precise positions, which minimizes the reactance that power utilities require. A full-transposition SC cable, in theory, has no transmission length limits for AC and variable transient operation, owing to zero resistance (a problem with conventional cables), negligible reactance (a problem for helically wound HTS cables), and no long-length manufacturing issues (a problem with both stamped and twisted stacked HTS cables). The Infinity Physics team is solving these manufacturing problems by developing automated manufacturing to produce the first reliable, utility-grade commercial SC cables and magnets.
Robotic winding machines combine mechanical and process design, specialized sensing and observers, and state-of-the-art optimization and control sequencing to carefully manipulate individual fragile SCs, especially HTS, into previously unattainable, complex geometries with electrical geometry equivalent to commercially available conventional conductor devices.
Keywords: automated winding manufacturing, high temperature superconductor, magnet, power cable
Procedia PDF Downloads 141
398 Diagnostic Yield of CTPA and Value of Pre-Test Assessments in Predicting the Probability of Pulmonary Embolism
Authors: Shanza Akram, Sameen Toor, Heba Harb Abu Alkass, Zainab Abdulsalam Altaha, Sara Taha Abdulla, Saleem Imran
Abstract:
Acute pulmonary embolism (PE) is a common disease and can be fatal. The clinical presentation is variable and nonspecific, making accurate diagnosis difficult. Testing of patients with suspected acute PE has increased dramatically. However, the overuse of some tests, particularly CT and D-dimer measurement, may not improve care while potentially leading to patient harm and unnecessary expense. CTPA is the investigation of choice for PE. Its easy availability, accuracy, and ability to provide an alternative diagnosis have lowered the threshold for performing it, resulting in its overuse. Guidelines have recommended the use of clinical pretest probability tools such as the Wells score to assess the risk of suspected PE. Unfortunately, implementation of guidelines in clinical practice is inconsistent. This has led to low-risk patients being subjected to unnecessary imaging, exposure to radiation, and possible contrast-related complications. Aim: To study the diagnostic yield of CTPA and the clinical pretest probability of patients according to the Wells score, and to determine whether or not there was overuse of CTPA in our service. Methods: CT scans performed on patients with suspected PE in our hospital from 1 January 2014 to 31 December 2014 were retrospectively reviewed. Medical records were reviewed to study demographics, clinical presentation, and final diagnosis, and to establish whether the Wells score and D-dimer were used correctly in predicting the probability of PE and the need for subsequent CTPA. Results: 100 patients (51 male) underwent CTPA in this period. Mean age was 57 years (24-91 years). The majority of patients presented with shortness of breath (52%); other presenting symptoms included chest pain (34%), palpitations (6%), collapse (5%), and haemoptysis (5%). A D-dimer test was done in 69%. Overall, the Wells score was low (<2) in 28%, moderate (2-6) in 47%, and high (>6) in 15% of patients. The Wells score was documented in the medical notes of only 20% of patients.
PE was confirmed in 12% (8 male) of patients; 4 had bilateral PEs. In the high-risk group (Wells >6; n=15), there were 5 diagnosed PEs; in the moderate-risk group (Wells 2-6; n=47), there were 6; and in the low-risk group (Wells <2; n=28), one case of PE was confirmed. CT scans negative for PE showed pleural effusion in 30, consolidation in 20, atelectasis in 15, and a pulmonary nodule in 4 patients; 31 scans were completely normal. Conclusion: The yield of CT for pulmonary embolism was low in our cohort at 12%. A significant number of our patients who underwent CTPA had low Wells scores, suggesting that CTPA is overutilized in our institution. The Wells score was poorly documented in medical notes. CTPA was able to detect alternative pulmonary abnormalities explaining the patients' clinical presentations. CTPA requires concomitant pretest clinical probability assessment to be an effective diagnostic tool for confirming or excluding PE. Clinicians should use validated clinical prediction rules to estimate pretest probability in patients in whom acute PE is being considered. Combining Wells scores with clinical and laboratory assessment may reduce the need for CTPA.
Keywords: CTPA, D-dimer, pulmonary embolism, Wells score
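The Wells stratification used in the abstract can be sketched as a small scoring function. The criterion weights below follow the commonly published Wells criteria for PE, and the three-level cut-offs mirror those stated in the abstract; the function and criterion names are illustrative, not a clinical tool.

```python
# Commonly published Wells criteria weights for pulmonary embolism.
WELLS_CRITERIA = {
    "clinical_signs_of_dvt": 3.0,
    "pe_most_likely_diagnosis": 3.0,
    "heart_rate_over_100": 1.5,
    "immobilization_or_recent_surgery": 1.5,
    "previous_dvt_or_pe": 1.5,
    "haemoptysis": 1.0,
    "malignancy": 1.0,
}

def wells_score(positive_findings):
    """Sum the weights of the positive findings (a set of criterion names)."""
    return sum(WELLS_CRITERIA[f] for f in positive_findings)

def risk_group(score):
    """Three-level stratification as in the study: <2 low, 2-6 moderate, >6 high."""
    if score < 2:
        return "low"
    if score <= 6:
        return "moderate"
    return "high"

score = wells_score({"heart_rate_over_100", "haemoptysis"})
group = risk_group(score)  # a score of 2.5 falls in the moderate group
```

Recording such a score in the notes before requesting imaging is exactly the documentation step the abstract found missing in 80% of cases.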
Procedia PDF Downloads 233
397 Teaching Material, Books, Publications versus the Practice: Myths and Truths about Installation and Use of Downhole Safety Valve
Authors: Robson da Cunha Santos, Caio Cezar R. Bonifacio, Diego Mureb Quesada, Gerson Gomes Cunha
Abstract:
The paper relates to the safety of oil wells and environmental preservation, which require great attention and commitment from oil companies and the people who work with this equipment, from the drilling of the well until its abandonment, in order to safeguard the environment and prevent possible damage. The main objective of the project was to compare books, articles, and publications with information gathered during technical visits to Petrobras operational bases. After the visits, information on current methods of utilization and management, which was not available before, was made available to the general audience. As a result, a large flow of incorrect and out-of-date information is observed, encompassing not only bibliographic archives but also academic resources and materials. While gathering more in-depth information on the manufacturing, assembly, and use of downhole safety valves (DHSVs), several issues previously regarded as correct and customary were found to be uncertain or outdated. Important findings concern, for example, the depth of valve installation: valves were formerly installed 30 meters below the seabed (mud line), whereas the installation depth should instead vary according to the ideal depth for avoiding the zone most prone to hydrate formation, as determined by temperature and pressure. Regarding valves with a nitrogen chamber, according to the literature their use is tied to water depths of 700 meters or more, but in Brazilian exploratory fields they are used from water depths of 600 meters. The valves used in Brazilian fields can be inserted into the production column and are self-equalizing, but the use of valves screwed into the production column and equalized externally is predominant.
Although these valves are more expensive to acquire, they are more reliable and efficient, have a longer service life, and do not restrict fluid flow. It follows that, based on research and on theoretical information confronted with the practices used in the field, the present project is important and relevant. It will serve as a source of updated, consistent information connecting the academic environment with real exploratory situations, and will enrich future research and academic work with precise, easy-to-understand information.
Keywords: downhole safety valve, security devices, installation, oil wells
Procedia PDF Downloads 273
396 Luteolin Exhibits Anti-Diabetic Effects by Increasing Oxidative Capacity and Regulating Anti-Oxidant Metabolism
Authors: Eun-Young Kwon, Myung-Sook Choi, Su-Jung Cho, Ji-Young Choi, So Young Kim, Youngji Han
Abstract:
Overweight and obesity have been linked to a low-grade chronic inflammatory response and an increased risk of developing metabolic syndrome, including insulin resistance, type 2 diabetes mellitus, and certain types of cancer. Luteolin is a dietary flavonoid with anti-inflammatory, anti-oxidant, anti-cancer, and anti-diabetic properties. However, little is known about the detailed mechanism underlying the effect of luteolin on inflammation-related obesity and its complications. The aim of the present study was to reveal the anti-diabetic effect of luteolin in diet-induced obese mice using a transcriptomics tool. Thirty-nine male C57BL/6J mice (4 weeks old) were randomly divided into 3 groups and fed a normal diet, a high-fat diet (HFD, 20% fat), or HFD + 0.005% (w/w) luteolin for 16 weeks. Luteolin improved insulin resistance, as measured by HOMA-IR and glucose tolerance, along with preservation of pancreatic β-cells, compared to the HFD group. Luteolin significantly decreased the plasma levels of leptin and ghrelin, which play a pivotal role in energy balance, and of the macrophage low-grade inflammation marker sCD163 (soluble CD antigen 163). Activities of hepatic anti-oxidant enzymes (catalase and glutathione peroxidase) were increased, while the levels of plasma transaminases (GOT and GPT) and oxidative damage markers (hepatic mitochondrial H2O2 and TBARS) were markedly decreased by luteolin supplementation. In addition, luteolin increased oxidative capacity and fatty acid utilization, as reflected in the enzyme activities of citrate synthase, cytochrome c oxidase, and β-hydroxyacyl-CoA dehydrogenase and in UCP3 gene expression, compared to the high-fat diet. Moreover, our microarray results in muscle revealed that gene expressions associated with the TCA cycle that were down-regulated by HFD were restored to normal levels by luteolin treatment.
Taken together, our results indicate that luteolin is a bioactive component that improves insulin resistance by increasing oxidative capacity, modulating anti-oxidant metabolism, and suppressing inflammatory signaling cascades in diet-induced obese mice. These results suggest possible therapeutic targets for the prevention and treatment of diet-induced obesity and its complications.
Keywords: anti-oxidant metabolism, diabetes, luteolin, oxidative capacity
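HOMA-IR, used above as the measure of insulin resistance, is a simple index computed from fasting glucose and insulin. The sketch below uses the standard published formula (glucose in mmol/L times insulin in µU/mL, divided by 22.5); the example values are generic illustrations, not data from this study.

```python
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    """HOMA-IR index: fasting glucose (mmol/L) x fasting insulin (microU/mL) / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

# Illustrative values only: glucose 5.0 mmol/L, insulin 9.0 microU/mL.
index = homa_ir(5.0, 9.0)  # 2.0
```

Higher values indicate greater insulin resistance, which is why a fall in HOMA-IR under luteolin supplementation is read as improved insulin sensitivity.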
Procedia PDF Downloads 338
395 Modelling Flood Events in Botswana (Palapye) for Protecting Road Structures against Floods
Authors: Thabo M. Bafitlhile, Adewole Oladele
Abstract:
Botswana has long been affected by floods and is still experiencing these tragic events. Flooding occurs mostly in the North-West, North-East, and parts of the Central district due to the heavy rainfalls experienced in these areas. Torrential rains have destroyed homes and roads, flooded dams and fields, and destroyed livestock and livelihoods. Palapye is one area in the Central district that has experienced floods ever since 1995, when its greatest flood on record occurred. Heavy storms result in floods and inundation, exacerbated by poor or absent drainage structures. Since floods are a part of nature, they have existed and will continue to exist, bringing further destruction; floods and highways also play a major role in the erosion and destruction of road structures. Already today, many culverts, trenches, and other drainage facilities lack the capacity to deal with the current frequency of extreme flows. Future changes in the pattern of hydroclimatic events will have implications for the design and maintenance costs of roads, and increases in rainfall and severe weather events can raise the demand for emergency responses. Flood forecasting and warning are therefore prerequisites for successful mitigation of flood damage. In flood-prone areas like Palapye, preventive measures should be taken to reduce the possible adverse effects of floods on the environment, including road structures. This paper therefore estimates the return periods associated with storms of different magnitudes from recorded historical rainfall depths using statistical methods. The method of annual maxima was used to select data sets for the rainfall analysis. The Type I extreme value (Gumbel), log-normal, and log-Pearson Type III distributions were all applied to the annual maximum series for the Palapye area to produce IDF curves.
The Kolmogorov-Smirnov and chi-squared tests were used to confirm the appropriateness of the fitted distributions for the location, and the data do fit the distributions used to predict expected frequencies. This will be a beneficial tool for urgent flood forecasting and water resource administration, as drainage will be designed on the basis of the estimated flood events, helping to reclaim and protect road structures from the adverse impacts of floods.
Keywords: drainage, estimate, evaluation, floods, flood forecasting
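The annual-maxima / Gumbel step described above can be sketched as follows. The rainfall series here is synthetic (drawn from a Gumbel distribution with assumed parameters), so the fitted values and the 50-year design depth are purely illustrative, not Palapye data.

```python
import numpy as np
from scipy import stats

# Synthetic annual maximum daily rainfall depths (mm), one value per year.
rng = np.random.default_rng(1)
annual_maxima = stats.gumbel_r.rvs(loc=60.0, scale=15.0, size=40, random_state=rng)

# Fit the Type I extreme value (Gumbel) distribution to the annual maxima.
loc, scale = stats.gumbel_r.fit(annual_maxima)

def design_depth(return_period_years, loc, scale):
    """Rainfall depth with annual exceedance probability 1/T (T-year event)."""
    non_exceedance = 1.0 - 1.0 / return_period_years
    return stats.gumbel_r.ppf(non_exceedance, loc=loc, scale=scale)

depth_50yr = design_depth(50, loc, scale)

# Goodness of fit, as in the study: Kolmogorov-Smirnov test on the fitted model.
ks_stat, p_value = stats.kstest(annual_maxima, "gumbel_r", args=(loc, scale))
```

Repeating `design_depth` over a grid of durations and return periods is what produces the IDF curves the abstract refers to.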
Procedia PDF Downloads 373
394 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas
Authors: Sahithi Yarlagadda
Abstract:
The design of an antenna is constrained by mathematical and geometrical parameters. Although there are diverse antenna structures with a wide range of feeds, there are many geometries to be tried that cannot be accommodated by predefined computational methods. Antenna design and optimization qualify for an evolutionary algorithmic approach, since the antenna parameter weights depend directly on geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: we randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations, but antenna parameters and geometries are too varied to fit into a single function. So, weight coefficients are obtained for all possible antenna electrical parameters and geometries, and their variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariance coefficients of the corresponding parameters are logged as datasets for learning and future use. This paper drafts an approach to obtaining the requirements to study and systematize the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters such as gain and directivity are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to obtain maxima and minima for a given frequency band; the boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities incurred during simulation. HFSS was chosen for simulations and results.
MATLAB is used to generate the computations and combinations and to log data; it is also used to apply machine learning algorithms and to plot the data for designing the algorithm. Because the number of combinations is too large to test manually, the HFSS API is used to call HFSS functions from MATLAB itself, and the MATLAB Parallel Computing Toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software such as HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters such as slotline characteristic impedance, stripline impedance, slotline width, flare aperture size, and dielectric constant; K-means clustering and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data are logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and a machine learning approach to automated antenna optimization for the Vivaldi antenna.
Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm
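The evolutionary loop described above (fitness evaluation, selection, recombination, mutation) can be reduced to a minimal sketch. Everything here is an assumption for illustration: the fitness function is a stand-in for the simulator-evaluated antenna figure of merit (in practice each evaluation would be an HFSS run), and the operator choices (truncation selection, arithmetic crossover, Gaussian mutation) are one simple configuration among many.

```python
import random

def evolve(fitness, bounds, pop_size=30, generations=60, mutation=0.1, seed=0):
    """Minimal real-coded genetic algorithm maximizing `fitness` over box bounds."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]                # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # arithmetic crossover
            for i, (lo, hi) in enumerate(bounds):        # Gaussian mutation, clipped
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, mutation * (hi - lo))))
            children.append(child)
        pop = parents + children                          # elitist replacement
    return max(pop, key=fitness)

# Stand-in fitness with its peak at (1.0, 2.0); a real run would score
# antenna geometry vectors (e.g. slot width, flare size) via the simulator.
best = evolve(lambda p: -((p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2),
              bounds=[(0.0, 3.0), (0.0, 3.0)])
```

Keeping the parents in each generation makes the best candidate monotonically non-decreasing, which matters when each fitness evaluation is an expensive electromagnetic simulation.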
Procedia PDF Downloads 111
393 Assessment of Bisphenol A and 17 α-Ethinyl Estradiol Bioavailability in Soils Treated with Biosolids
Authors: I. Ahumada, L. Ascar, C. Pedraza, J. Montecino
Abstract:
The addition of biosolids to soil has been found to benefit soil health, enriching the soil with essential nutrient elements. Although this sludge has properties that improve the physical features and productivity of agricultural and forest soils and aid the recovery of degraded soils, it also contains trace elements, organic trace contaminants, and pathogens that can damage the environment. The application of these biosolids to land without full reclamation, and of treated wastewater, can transfer these compounds into terrestrial and aquatic environments, giving rise to potential accumulation in plants. The general aim of this study was to evaluate the bioavailability of bisphenol A (BPA) and 17 α-ethinyl estradiol (EE2) in a soil-biosolid system using wheat (Triticum aestivum) plant assays, and to determine whether a predictive extraction method using a solution of hydroxypropyl-β-cyclodextrin (HPCD) is a reliable surrogate for this bioassay. Two soils were obtained from the central region of Chile (Lo Prado and Chicauma), and biosolids were obtained from a regional wastewater treatment plant. The soils were amended with biosolids at 90 Mg ha-1. Soils treated with biosolids and spiked with 10 mg kg-1 of EE2 and with 15 mg kg-1 and 30 mg kg-1 of BPA were also included. The BPA and EE2 concentrations were determined in biosolids, soils, and plant samples through ultrasound-assisted extraction, solid phase extraction (SPE), and gas chromatography coupled to mass spectrometry (GC/MS). The bioavailable fraction found in each soil cultivated with wheat plants was compared with the results obtained through the cyclodextrin biosimulator method. The total concentrations found in biosolids from a treatment plant were 0.150 ± 0.064 mg kg-1 of EE2 and 12.8 ± 2.9 mg kg-1 of BPA. BPA and EE2 bioavailability is affected by the organic matter content and the physical and chemical properties of the soil.
The bioavailability of both compounds in the two soils varied with the EE2 and BPA concentrations. For EE2, wheat plants contained higher concentrations in the roots than in the shoots, and the concentration of EE2 increased with increasing biosolid rate. For BPA, on the other hand, a higher concentration was found in the shoots than in the roots. The predictive capability of the HPCD extraction was assessed using a simple linear correlation test for both compounds in wheat plants. The correlation coefficient between the EE2 values obtained from the HPCD extraction and those obtained from the wheat plants was r = 0.99 (p ≤ 0.05); in the case of BPA, by contrast, no correlation was found. The methodology was therefore validated with respect to wheat plant bioassays only for EE2. Acknowledgments: The authors thank FONDECYT 1150502.
Keywords: emerging compounds, bioavailability, biosolids, endocrine disruptors
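The simple linear correlation test used above to validate the HPCD extraction against the plant bioassay can be sketched with scipy. The paired concentration values below are illustrative placeholders, not the study's measurements; only the test itself (Pearson r with its p-value) matches the analysis described.

```python
from scipy import stats

# Illustrative paired EE2 concentrations (mg/kg): HPCD extraction vs. wheat uptake.
hpcd_extract = [0.8, 1.4, 2.1, 2.9, 3.6]
plant_uptake = [0.7, 1.5, 2.0, 3.0, 3.5]

r, p_value = stats.pearsonr(hpcd_extract, plant_uptake)
# Criterion analogous to the study's r = 0.99, p <= 0.05 finding for EE2.
surrogate_validated = r > 0.95 and p_value < 0.05
```

For BPA the same test would return a low r (no correlation), which is exactly why the surrogate was validated for EE2 only.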
Procedia PDF Downloads 147
392 Engineering Photodynamic with Radioactive Therapeutic Systems for Sustainable Molecular Polarity: Autopoiesis Systems
Authors: Moustafa Osman Mohammed
Abstract:
This paper introduces Luhmann's autopoietic social systems, starting with the original concept of autopoiesis developed by biologists and scientists, including the modification of general systems based on socialized medicine. A specific type of autopoietic system is explained for the three existing groups of ecological phenomena: interaction, social, and medical sciences. This hypothesized model nevertheless has a nonlinear interaction with its natural environment, an 'interactional cycle' for the exchange of photon energy with molecules without any changes in topology. The external forces in the system's environment might be concomitant with the influence of natural fluctuations (e.g., radioactive radiation, electromagnetic waves). The cantilever sensor provides insights for a future chip processor for the prevention of social metabolic systems. Thus, circuits with resonant electric and optical properties are prototyped on board as an intra-chip/inter-chip transmission, drawing approximately 1.7 mA at 3.3 V, to service detection in locomotion with the least significant power losses. Nowadays, therapeutic systems assimilate materials from embryonic stem cells to aggregate multiple functions of the vessels' natural de-cellular structure for replenishment. Meanwhile, the interior actuators deploy base-pair complementarity of nucleotides for the symmetric arrangement, in particular bacterial nanonetworks of the sequence cycle, creating double-stranded DNA strings. The DNA strands must be sequenced, assembled, and decoded in order to reconstruct the original source reliably.
The exterior actuators are designed to sense different variations in the corresponding patterns of beat-to-beat heart rate variability (HRV) for spatial autocorrelation of molecular communication, which draws on human electromagnetic, piezoelectric, electrostatic, and electrothermal energy to monitor and transfer the dynamic changes of all the cantilevers simultaneously, in a real-time workspace with high precision. A prototype dynamic energy sensor has been investigated in the laboratory for the inclusion of nanoscale devices in the architecture, with fuzzy logic control for the detection of thermal and electrostatic changes and optoelectronic devices to interpret the uncertainty associated with signal interference. Ultimately, the controversial aspects of molecular frictional properties are adjusted to each other and form unique spatial structure modules, providing the environment's mutual contribution to the investigation of mass temperature changes due to the pathogenic archival architecture of clusters.
Keywords: autopoiesis, nanoparticles, quantum photonics, portable energy, photonic structure, photodynamic therapeutic system
Procedia PDF Downloads 126
391 A Novel Harmonic Compensation Algorithm for High Speed Drives
Authors: Lakdar Sadi-Haddad
Abstract:
In the past few years, the study of very-high-speed electrical drives has seen a resurgence of interest; an inventory of the scientific papers and patents dealing with the subject confirms its relevance. The democratization of magnetic bearing technology is at the origin of recent developments in high-speed applications. These machines have as their main advantage a much higher power density than the state of the art. Nevertheless, particular attention must be paid to the design of the inverter as well as to control and command. The surface-mounted permanent magnet synchronous machine is the most appropriate technology for high-speed applications. However, it has the drawback of using a carbon sleeve to contain the magnets, which could otherwise tear because of the centrifugal forces generated at the rotor periphery. Carbon fiber is well known for its mechanical properties but is a poor heat conductor, resulting in very poor evacuation of the eddy current losses induced in the magnets by the time and space harmonics of the stator. The three-phase inverter is the main harmonic source causing eddy currents in the magnets. In high-speed applications such harmonics are harmful because, on the one hand, the characteristic impedance is very low and, on the other hand, the ratio between the switching frequency and the fundamental frequency is much lower than in the state of the art. To minimize the impact of these harmonics, a first lever is to use a modulation strategy producing low harmonic distortion, while a second is to introduce a sinus filter between the inverter and the machine to smooth the voltage and current waveforms applied to the machine. Nevertheless, in very-high-speed machines the interaction of the processes mentioned above may introduce particular harmonics that can irreversibly damage the system: harmonics at the resonant frequency, harmonics at the shaft mode frequency, subharmonics, etc.
Some studies address these issues but treat the phenomena with separate solutions (a specific modulation strategy, active damping methods, etc.). The purpose of this paper is to present a complete new active harmonic compensation algorithm, based on an improvement of standard vector control, as a global solution to all these issues. The presentation is based on a complete theoretical analysis of the processes leading to the generation of such undesired harmonics. A state of the art of available solutions is then provided before the content of the new active harmonic compensation algorithm is developed. The study is completed by a validation using simulations and a practical case study on a high-speed machine.
Keywords: active harmonic compensation, eddy current losses, high speed machine
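The harmonic content that such a compensation scheme targets is conventionally quantified via a discrete Fourier transform of the phase current. The sketch below computes total harmonic distortion (THD) of a synthetic waveform; the 1 kHz fundamental and the 5th/7th harmonic amplitudes are illustrative assumptions, not drive measurements, and this is an analysis helper, not the proposed algorithm.

```python
import numpy as np

def harmonic_magnitudes(waveform, fs, fundamental_hz, n_harmonics=10):
    """Amplitudes of the fundamental and its first harmonics via the FFT."""
    spec = np.abs(np.fft.rfft(waveform)) * 2.0 / len(waveform)
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / fs)
    bins = [np.argmin(np.abs(freqs - k * fundamental_hz))
            for k in range(1, n_harmonics + 1)]
    return spec[bins]

def thd(waveform, fs, fundamental_hz):
    """Total harmonic distortion relative to the fundamental."""
    mags = harmonic_magnitudes(waveform, fs, fundamental_hz)
    return np.sqrt(np.sum(mags[1:] ** 2)) / mags[0]

# Synthetic phase current: 1 kHz fundamental plus 5th and 7th harmonics,
# the components a sinus filter or active compensation would attenuate.
fs, f0 = 100_000.0, 1_000.0
t = np.arange(0, 0.01, 1.0 / fs)  # exactly 10 fundamental periods
i_phase = (np.sin(2 * np.pi * f0 * t)
           + 0.10 * np.sin(2 * np.pi * 5 * f0 * t)
           + 0.05 * np.sin(2 * np.pi * 7 * f0 * t))
distortion = thd(i_phase, fs, f0)
```

Tracking such per-harmonic magnitudes online is the kind of observer information an active compensation loop needs in order to inject the counteracting voltage references.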
Procedia PDF Downloads 395
390 Neuro-Epigenetic Changes on Diabetes-Induced Synaptic Fidelity in Brain
Authors: Valencia Fernandes, Dharmendra Kumar Khatri, Shashi Bala Singh
Abstract:
Background and Aim: Epigenetic marks are the silent signatures of several pathological processes in the brain. This study examines the influence of DNA methylation, a major epigenetic modification, in the prefrontal cortex and hippocampus of the diabetic brain and its notable effect on cellular chaperones and synaptic proteins. Method: Chronic high-fat-diet- and STZ-induced diabetic mice were studied for cognitive dysfunction, and global DNA methylation as well as DNA methyltransferase (DNMT) activity were assessed. Further, the cellular chaperones and synaptic proteins were examined using the DNMT inhibitor 5-aza-2′-deoxycytidine (5-aza-dC), administered via intracerebroventricular injection. The % methylation of these synaptic proteins was also studied in order to correlate the epigenetic involvement, and their interaction with the DNMT enzyme was studied computationally using bioinformatic tools. Histological studies of morphological alterations and neuronal degeneration were also carried out. Neurogenesis, a characteristic marker for new learning and memory formation, was assessed via BrdU staining. Finally, behavioral studies, including the Morris water maze, Y maze, passive avoidance, and novel object recognition tests, were performed to assess cognitive function. Results: Altered global DNA methylation and increased levels of DNMTs within the nucleus were confirmed in the cortex and hippocampus of the diseased mice, suggesting hypermethylation at the genetic level. Treatment with 5-aza-dC, a global DNA demethylating agent, ameliorated the protein and gene expression of the cellular chaperones and synaptic fidelity. Furthermore, the methylation analysis profile showed hypermethylation of hsf1, a master regulator of chaperones, confirming the epigenetic involvement in the diseased brain.
Morphological improvements and decreased neurodegeneration, along with enhanced neurogenesis in the treatment group, suggest that epigenetic modulations do participate in learning and memory; this is supported by the improved behavioral test battery in the treatment group. Conclusion: DNA methylation may contribute to dysregulating memory-associated proteins at chronic stages of type 2 diabetes. This suggests a substantial contribution to the underlying pathophysiology of several features of the metabolic syndrome, such as insulin resistance and obesity, and to the transmission of this damage centrally, manifesting as cognitive dysfunction.
Keywords: epigenetics, cognition, chaperones, DNA methylation
Procedia PDF Downloads 205
389 Assessment of Biochemical Marker Profiles and Their Impact on Morbidity and Mortality of COVID-19 Patients in Tigray, Ethiopia
Authors: Teklay Gebrecherkos, Mahmud Abdulkadir
Abstract:
The emergence and subsequent rapid worldwide spread of the COVID-19 pandemic posed a global crisis, with a tremendously increasing burden of infection, morbidity, and mortality. Recent studies have suggested that severe cases of COVID-19 are characterized by massive biochemical, hematological, and inflammatory alterations whose synergistic effect can progress to multiple organ damage and failure. In this regard, biochemical monitoring of COVID-19 patients, based on comprehensive laboratory assessments and findings, is expected to play a crucial role in effective clinical management and in improving patients' survival rates. However, biochemical markers that can inform COVID-19 patient risk stratification and predict clinical outcomes are currently scarce. This study investigates the profiles of common biochemical markers and their influence on the severity of COVID-19 infection in Tigray, Ethiopia. Methods: A laboratory-based cross-sectional study was conducted from July to August 2020 at the Quiha College of Engineering, Mekelle University COVID-19 isolation and treatment center. Sociodemographic and clinical data were collected using a structured questionnaire. Whole blood was collected from each study participant, and serum samples were separated after delivery to the laboratory. Hematological biomarkers were analyzed by FACS count, while organ function tests and serum electrolytes were analyzed using ion-selective electrode methods on a Cobas 6000 series machine. Data were analyzed using SPSS v20. Results: A total of 120 SARS-CoV-2 patients were enrolled during the study. Participants ranged between 18 and 91 years, with a mean age of 52 (±108.8). The largest group (40%) of participants was aged 60 and above. Patients with multiple comorbidities developed severe COVID-19, though the association was not statistically significant (p=0.34).
Mann-Whitney U test analysis showed that laboratory values such as the neutrophil count (p=0.003), AST level (p=0.050), serum creatinine (p=0.000), and serum sodium (p=0.015) were significantly associated with severe COVID-19 disease as compared with non-severe disease. Conclusion: The severity of COVID-19 was associated with higher age, the organ function markers AST and creatinine, serum Na+, and an elevated total neutrophil count. Further studies are needed to evaluate the alterations of biochemical biomarkers and their impact on COVID-19.
Keywords: COVID-19, biomarkers, mortality, Tigray, Ethiopia
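The Mann-Whitney U comparison used above (a nonparametric test for two independent groups) can be sketched as follows. The creatinine values for the severe and non-severe groups are illustrative placeholders, not the study's measurements; only the test procedure matches the analysis described.

```python
from scipy import stats

# Illustrative serum creatinine values (mg/dL) for the two severity groups.
severe = [1.4, 1.8, 2.1, 1.6, 2.4, 1.9, 1.7, 2.0]
non_severe = [0.8, 1.0, 0.9, 1.1, 0.7, 1.0, 0.9, 0.8]

u_stat, p_value = stats.mannwhitneyu(severe, non_severe, alternative="two-sided")
significant = p_value < 0.05  # same significance threshold as in the study
```

The U statistic counts, over all cross-group pairs, how often a severe-group value exceeds a non-severe one, so it makes no normality assumption about the biomarker distributions.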
Procedia PDF Downloads 45
388 Investigation of the Effects of 10-Week Nordic Hamstring Exercise Training and Subsequent Detraining on Plasma Viscosity and Oxidative Stress Levels in Healthy Young Men
Authors: H. C. Ozdamar, O. Kilic-Erkek, H. E. Akkaya, E. Kilic-Toprak, M. Bor-Kucukatay
Abstract:
Nordic hamstring exercise (NHE) is used to increase hamstring muscle strength and prevent injuries. The aim of this study was to reveal the acute and long-term effects of 10 weeks of NHE, followed by 5 and 10 weeks of detraining, on anthropometric measurements, flexibility, anaerobic power, muscle architecture, muscle damage, fatigue, oxidative stress, plasma viscosity (PV), and blood lactate levels. 40 sedentary, healthy male volunteers underwent 10 weeks of progressive NHE followed by 5 and 10 weeks of detraining. Muscle architecture was determined by ultrasonography and stiffness by strain elastography. Anaerobic power was assessed by double-foot standing long jump and vertical jump tests, and flexibility by sit-and-reach and hamstring flexibility tests. Creatine kinase activity and oxidant/antioxidant parameters were measured from venous blood using commercial kits, whereas PV was determined using a cone-plate viscometer. Blood lactate was measured from the fingertip. NHE allowed subjects to lose weight; this effect was reversed by 5 weeks of detraining. Exercise caused an increase in knee angles measured by a goniometer, which was not affected by detraining. The 10-week NHE program caused an increase in anaerobic performance that was partially reversed upon detraining. NHE resulted in an increase in biceps femoris long head area and pennation angle, which was reversed by 10 weeks of detraining. Blood lactate levels, muscle pain, and fatigue were increased after each exercise session. NHE did not change oxidant/antioxidant parameters; 5 weeks of detraining resulted in an increase in total oxidant capacity (TOC) and oxidative stress index (OSI), while 10 weeks of detraining caused a reduction of these parameters. Acute exercise caused a reduction in PV at weeks 1 through 10. Pre-exercise PV measured in the 10th week was lower than the basal value. Detraining increased PV. The results may guide the selection of exercise type to increase performance and muscle strength.
Knowing how much of the gains will be lost after a period of detraining can help raise awareness of the importance of exercise continuity. This work was supported by the PAU Scientific Research Projects Coordination Unit (Project number: 2018SABE034).
Keywords: anaerobic power, detraining, Nordic hamstring exercise, oxidative stress, plasma viscosity
Procedia PDF Downloads 127
387 Web-Based Instructional Program to Improve Professional Development: Recommendations and Standards for Radioactive Facilities in Brazil
Authors: Denise Levy, Gian M. A. A. Sordi
Abstract:
This web-based project focuses on continuing corporate education and improving workers' skills in Brazilian radioactive facilities throughout the country. The potential of Information and Communication Technologies (ICTs) can contribute to improving global communication in this very large country, where ensuring high-quality professional information for as many people as possible is a serious challenge. The main objective of this system is to provide Brazilian radioactive facilities with a complete web-based repository - in Portuguese - for research, consultation, and information, offering conditions for learning and improving professional and personal skills. UNIPRORAD is a web-based system offering unified programs and inter-related information about radiological protection programs. The content includes best practices for radioactive facilities in order to meet both national standards and international recommendations published by different organizations over the past decades: the International Commission on Radiological Protection (ICRP), the International Atomic Energy Agency (IAEA), and the National Nuclear Energy Commission (CNEN). The website covers concepts, definitions, and theory regarding optimization and ionizing radiation monitoring procedures. Moreover, the content presents further discussion of some national and international recommendations, such as potential exposure, which is currently one of the most important research fields in radiological protection. Only two ICRP publications develop the issue in depth, and knowledge of failure probabilities is still lacking, for there remain uncertainties in finding effective ways to quantify probabilistically the occurrence of potential exposures and the probability of reaching a certain dose level. To respond to this challenge, this project discusses and introduces potential exposures in a more quantitative way than national and international recommendations do.
Articulating valid ICRP and IAEA recommendations and official reports, in addition to scientific papers published at major international congresses, the website discusses and suggests a number of effective actions towards safety which can be incorporated into labor practice. The web platform was created according to corporate public needs, taking into account the development of a robust but flexible system which can be easily adapted to future demands. ICTs provide a vast array of new communication capabilities and allow information to be spread to as many people as possible at low cost and with high communication quality. This initiative shall provide opportunities for employees to increase professional skills, stimulating development in this large country where it is an enormous challenge to ensure effective and updated information to geographically distant facilities, minimizing costs and optimizing results.
Keywords: distance learning, information and communication technology, nuclear science, radioactive facilities
Procedia PDF Downloads 200
386 Contribution of PALB2 and BLM Mutations to Familial Breast Cancer Risk in BRCA1/2 Negative South African Breast Cancer Patients Detected Using High-Resolution Melting Analysis
Authors: N. C. van der Merwe, J. Oosthuizen, M. F. Makhetha, J. Adams, B. K. Dajee, S-R. Schneider
Abstract:
Women representing high-risk breast cancer families who tested negative for pathogenic mutations in BRCA1 and BRCA2 are four times more likely to develop breast cancer compared to women in the general population. Sequencing of genes involved in genomic stability and DNA repair led to the identification of novel contributors to familial breast cancer risk, including BLM and PALB2. Bloom's syndrome is a rare homozygous autosomal recessive chromosomal instability disorder with a high incidence of various types of neoplasia; in the heterozygous state, BLM mutations are associated with breast cancer. PALB2, on the other hand, binds to BRCA2, and together they actively partake in DNA damage repair. Archived DNA samples of 66 BRCA1/2-negative high-risk breast cancer patients were retrospectively selected based on the presence of an extensive family history of the disease (>3 affected members per family). All coding regions and splice-site boundaries of both genes were screened using High-Resolution Melting Analysis. Samples exhibiting variation were sequenced bidirectionally using automated Sanger sequencing. The clinical significance of each variant was assessed using various in silico and splice-site prediction algorithms. Comprehensive screening identified a total of 11 BLM and 26 PALB2 variants. The variants detected ranged from globally common to rare and included three novel mutations. Three BLM and two PALB2 likely pathogenic mutations were identified that could account for the disease in these extensive breast cancer families in the absence of BRCA mutations (BLM c.11T>A, p.V4D; BLM c.2603C>T, p.P868L; BLM c.3961G>A, p.V1321I; PALB2 c.421C>T, p.Gln141Ter; PALB2 c.508A>T, p.Arg170Ter). Conclusion: The study confirmed the contribution of pathogenic mutations in BLM and PALB2 to the familial breast cancer burden in South Africa. It explained the presence of the disease in 7.5% of the BRCA1/2-negative families with an extensive family history of breast cancer.
Segregation analysis will be performed to confirm the clinical impact of these mutations in each of these families. These results justify the inclusion of both genes in a comprehensive breast and ovarian next-generation sequencing cancer panel; they should be screened simultaneously with BRCA1 and BRCA2, as this might explain a significant percentage of familial breast and ovarian cancer in South Africa.
Keywords: Bloom Syndrome, familial breast cancer, PALB2, South Africa
Procedia PDF Downloads 236
385 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, assessing these parameters requires human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests, where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The first dataset poses a protein regression problem, while the second poses a variety classification problem. Deep convolutional neural networks (CNNs) have the potential to exploit the spatio-spectral correlations within a hyperspectral image to estimate qualitative and quantitative parameters simultaneously. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required by classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and these results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested.
These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to form the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
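Two of the chemometric preprocessing steps named above, SNV and SG filtering, can be sketched in a few lines. This is a minimal illustration on a synthetic spectrum, not the paper's pipeline; the wavelength grid and band shape are invented for the example, and NumPy/SciPy are assumed.

```python
# Hedged sketch of two preprocessing techniques named in the abstract:
# Savitzky-Golay (SG) smoothing followed by standard normal variate (SNV).
# The spectrum below is synthetic; only the 900-1700 nm range is from the text.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
wavelengths = np.linspace(900, 1700, 256)              # nm, range per the abstract
spectrum = np.exp(-((wavelengths - 1300) / 150) ** 2)  # synthetic absorption band
spectrum += rng.normal(0.0, 0.01, wavelengths.size)    # simulated sensor noise

def snv(x: np.ndarray) -> np.ndarray:
    """Standard normal variate: center and scale each individual spectrum."""
    return (x - x.mean()) / x.std()

smoothed = savgol_filter(spectrum, window_length=11, polyorder=2)  # SG smoothing
preprocessed = snv(smoothed)  # mean ~0, std ~1 after SNV
```

SNV normalizes each spectrum independently (removing multiplicative scatter effects), which is why it pairs naturally with a smoothing filter applied beforehand.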
Procedia PDF Downloads 100
384 Innovative Strategies for Chest Wall Reconstruction Following Resection of Recurrent Breast Carcinoma
Authors: Sean Yao Zu Kong, Khong Yik Chew
Abstract:
Introduction: We describe a case report of the successful use of advanced surgical techniques in a patient with recurrent breast cancer who underwent a wide resection including the hemi-sternum, clavicle, multiple ribs, and a lobe of the lung due to tumor involvement. This extensive resection exposed critical structures, requiring a creative approach to reconstruction. To address this complex chest wall reconstruction, a free fibula flap and a 4-zone rectus abdominis musculocutaneous flap were successfully utilized. The use of a free vascularized bone flap allowed for rapid osteointegration and resistance against osteoradionecrosis after adjuvant radiation, while the four-zone TRAM flap allowed for reconstruction of both the chest wall and the breast mound. Although limited recipient vessels made free flaps challenging, the free fibula flap served as both a bony reconstruction and a vascular conduit, supercharged with the distal peroneal artery and its accompanying veins from the fibula graft. Our approach highlights the potential of advanced surgical techniques to improve outcomes in complex cases of chest wall reconstruction in patients with recurrent breast cancer, which is becoming increasingly relevant as breast cancer incidence rates increase. Case presentation: This report describes a successful reconstruction in a patient with recurrent breast cancer who required extensive resection, including the anterior chest wall, clavicle, and sternoclavicular joint. Challenges arose due to the loss of accessory muscles and the non-rigid rib cage, which could lead to compromised ventilation and instability. A free fibula osteocutaneous flap and a four-zone TRAM flap with vascular supercharging were utilized to achieve long-term stability and function. The patient has since fully recovered; at review, both flaps remained viable, and the breast mound reconstruction was satisfactory.
A planned nipple/areolar reconstruction was offered pending the patient’s decision after adjuvant radiotherapy. Conclusion: This case report highlights the successful use of innovative surgical techniques in addressing a complex case of recurrent breast cancer requiring extensive resection and radical reconstruction. Our approach, utilizing a combination of a free fibula flap and a 4-zone rectus abdominis musculocutaneous flap, demonstrates the potential of advanced techniques in chest wall reconstruction to minimize complications and ensure long-term stability and function. As the incidence of breast cancer continues to rise, it is crucial that healthcare professionals explore and utilize innovative techniques to improve patient outcomes and quality of life.
Keywords: free fibula flap, rectus abdominis musculocutaneous flap, post-adjuvant radiotherapy, reconstructive surgery, malignancy
Procedia PDF Downloads 63
383 Comparison of the Chest X-Ray and Computerized Tomography Scans Requested from the Emergency Department
Authors: Sahabettin Mete, Abdullah C. Hocagil, Hilal Hocagil, Volkan Ulker, Hasan C. Taskin
Abstract:
Objectives and Goals: An emergency department is a place where people can come for a multitude of reasons 24 hours a day; it is easily accessible thanks to the self-sacrificing people who work there. However, the workload and overcrowding of emergency departments are increasing day by day. Under these circumstances, it is important to choose a quick, easily accessible, and effective test for diagnosis; laboratory and imaging tests account for more than 40% of all emergency department costs. Despite all of the technological advances in imaging methods and the availability of computerized tomography (CT), chest X-ray, the older imaging method, has not lost its appeal and effectiveness for nearly all emergency physicians. Advances in imaging methods are very convenient, but physicians should consider radiation dose, cost, and effectiveness, and imaging methods should be carefully selected and used. The aim of the study was to investigate the effectiveness of chest X-ray for immediate diagnosis against the advancing technology by comparing the chest X-ray and chest CT results of patients in the emergency department. Methods: Patients who presented to the emergency department of Bulent Ecevit University Faculty of Medicine between 1 September 2014 and 28 February 2015 were investigated retrospectively. Data were obtained via MIAMED (Clear Canvas Image Server v6.2, Toronto, Canada), the information management system in which patients’ files are stored electronically in the clinic, and were retrospectively reviewed. The study included 199 patients who were 18 or older and had both chest X-ray and chest CT imaging. Chest X-ray images were evaluated by the emergency medicine senior assistant in the emergency department, and the findings were recorded on the study form. CT findings were obtained from the reports already issued by the radiology department of the clinic. Chest X-rays were evaluated with seven questions in terms of technique and dose adequacy.
Patients’ age, gender, presenting complaints, comorbid diseases, vital signs, physical examination findings, diagnosis, chest X-ray findings, and chest CT findings were evaluated. Data were recorded and statistical analyses performed using SPSS 19.0 for Windows, with p < 0.05 accepted as statistically significant. Results: 199 patients were included in the study. Pneumonia was the most common diagnosis, found in 38.2% (n=76) of all patients. The chest X-ray imaging technique was appropriate in 31% (n=62) of all patients. There was no statistically significant difference (p > 0.05) between the two imaging methods (chest X-ray and chest CT) in terms of determining the rates of displacement of the trachea, pneumothorax, parenchymal consolidation, increased cardiothoracic ratio, lymphadenopathy, diaphragmatic hernia, free air levels in the abdomen (in sections including the image), pleural thickening, parenchymal cyst, parenchymal mass, parenchymal cavity, parenchymal atelectasis, and bone fractures. Conclusions: When imaging findings of cases requiring rapid diagnosis were investigated, chest X-ray and chest CT findings matched at a high rate in patients imaged with an appropriate technique. However, chest X-rays evaluated in the emergency department were frequently taken with an inappropriate technique.
Keywords: chest x-ray, chest computerized tomography, chest imaging, emergency department
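Because chest X-ray and chest CT were obtained in the same 199 patients, the per-finding comparisons above involve paired binary data. The abstract does not name the test used; an exact McNemar test is one reasonable choice for such data, sketched below with invented discordant-pair counts purely for illustration.

```python
# Hedged sketch of a paired comparison of two imaging methods on the same
# patients. The abstract does not specify the test; an exact McNemar test
# (via a binomial test on discordant pairs) is shown here as one option.
from scipy.stats import binomtest

# For one finding (e.g., parenchymal consolidation), count discordant pairs:
# b = CT-positive / X-ray-negative, c = X-ray-positive / CT-negative.
b, c = 7, 4  # illustrative counts, not the study's data

# Under H0 (methods agree), discordant pairs split 50/50 between b and c.
result = binomtest(b, b + c, 0.5, alternative="two-sided")
print(f"p = {result.pvalue:.3f}")
```

With counts this small the p-value stays well above 0.05, mirroring the abstract's finding of no significant difference between the methods.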
Procedia PDF Downloads 193
382 The Connection between De Minimis Rule and the Effect on Trade
Authors: Pedro Mario Gonzalez Jimenez
Abstract:
The novelties introduced by the latest Notice on agreements of minor importance tighten the application of the ‘de minimis’ safe harbour in the European Union. At the same time, however, the undetermined legal concept of effect on trade between the Member States gains importance. Therefore, the analysis that a jurist must currently carry out in the European Union to determine whether an agreement appreciably restricts competition under Article 101 of the Treaty on the Functioning of the European Union is twofold. Hence, it is necessary to know how to balance significance for competition against significance for the effect on trade between the Member States. This is a crucial issue because the negative delimitation of a restriction of competition affects the positive one. The methodology of this research is rather simple. Beginning with a historical approach to the ‘de minimis’ rule, its main problems and uncertainties are identified. Then, after an analysis of normative documents and the jurisprudence of the Court of Justice of the European Union, some ‘lege ferenda’ proposals are offered. These proposals try to overcome the contradictions and questions that currently exist in the European Union as a consequence of the current legal regime for agreements of minor importance. The main findings of this research are the following: Firstly, the effect on trade is a way of analyzing the importance of an agreement distinct from the ‘de minimis’ rule. In point of fact, this concept is singularly suited to assessing agreements that have as their object the prevention, restriction, or distortion of competition, as observed in the most famous European Union case law. Thanks to the effect-on-trade criterion, as long as the proper requirements are met, there is no restriction of competition under Article 101 of the Treaty on the Functioning of the European Union, even if the agreement has an anti-competitive object.
These requirements are an aggregate market share lower than 5% on any of the relevant markets affected by the agreement and a turnover lower than 40 million euros. Secondly, as the Notice itself says, it ‘is also intended to give guidance to the courts and competition authorities of the Member States in their application of Article 101 of the Treaty, but it has no binding force for them’. This reality makes possible the existence of different positions among the different Member States and a confused perception of what a restriction of competition is. Ultimately, damage to trade between the Member States could result for this reason. The main conclusion is that a significant effect on trade between Member States is irrelevant for agreements that restrict competition by their effects but crucial for agreements that restrict competition by their object. Thus, the Member States should propose the incorporation of a similar concept into their legal orders in order to apply the content of the Notice. Otherwise, the significance of the restrictive agreement for competition would not be properly assessed.
Keywords: De minimis rule, effect on trade, minor importance agreements, safe harbour
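The two quantitative requirements quoted above (aggregate market share below 5% on any affected relevant market, turnover below 40 million euros) can be encoded as a simple check. This is only a toy sketch of the numeric thresholds; a real effect-on-trade assessment involves far more than these two figures, and the helper function name is our own invention.

```python
# Hedged sketch: only the two numeric thresholds quoted in the abstract for
# the absence of an appreciable effect on trade. A genuine legal assessment
# is much broader; this function is a hypothetical illustration.
def meets_effect_on_trade_thresholds(market_shares: list[float],
                                     turnover_eur: float) -> bool:
    """True if the aggregate share is below 5% on every affected relevant
    market and turnover is below EUR 40 million (both conditions cumulative)."""
    return max(market_shares) < 0.05 and turnover_eur < 40_000_000

# An agreement with 3% / 4% shares and EUR 25M turnover meets both conditions
print(meets_effect_on_trade_thresholds([0.03, 0.04], 25_000_000))  # True
```

Making both conditions cumulative reflects the abstract's wording that the requirements must be met together for the agreement to fall outside Article 101.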
Procedia PDF Downloads 183
381 Protective Role of Autophagy Challenging the Stresses of Type 2 Diabetes and Dyslipidemia
Authors: Tanima Chatterjee, Maitree Bhattacharyya
Abstract:
The global challenge of type 2 diabetes mellitus is a major health concern in this millennium, and researchers are continuously exploring new targets to develop novel therapeutic strategies. Type 2 diabetes mellitus (T2DM) is often coupled with dyslipidemia, increasing the risks of cardiovascular (CVD) complications. Enhanced oxidative and nitrosative stresses appear to be the major risk factors underlying insulin resistance, dyslipidemia, β-cell dysfunction, and T2DM pathogenesis. Autophagy is emerging as a promising defense mechanism against stress-mediated cell damage, regulating tissue homeostasis, cellular quality control, and energy production and promoting cell survival. In this study, we have attempted to explore the pivotal role of autophagy in T2DM subjects with or without dyslipidemia in peripheral blood mononuclear cells (PBMCs) and insulin-resistant HepG2 cells, utilizing a flow cytometric platform, confocal microscopy, and molecular biology techniques such as western blotting, immunofluorescence, and real-time polymerase chain reaction. In T2DM with dyslipidemia, a higher population of autophagy-positive cells was detected compared with T2DM alone, which might result from higher stress. Autophagy was observed to be triggered by both oxidative and nitrosative stress, a novel finding of our research. LC3 puncta were observed in peripheral blood mononuclear cells and at the periphery of HepG2 cells under diabetic and diabetic-dyslipidemic conditions. Increased expression of ATG5, LC3B, and Beclin supports the autophagic pathway in both PBMCs and insulin-resistant HepG2 cells. Upon blocking autophagy with 3-methyladenine (3-MA), the apoptotic cell population increased significantly, as observed by caspase‐3 cleavage and reduced expression of Bcl2. Autophagy has also been evidenced to control oxidative stress-mediated up-regulation of inflammatory markers like IL-6 and TNF-α.
To conclude, this study elucidates a protective role of autophagy in diabetes mellitus with dyslipidemia. In the present scenario, this study could have a significant impact on the development of new therapeutic strategies for diabetic-dyslipidemic subjects through enhancement of autophagic activity.
Keywords: autophagy, apoptosis, dyslipidemia, reactive oxygen species, reactive nitrogen species, type 2 diabetes
Procedia PDF Downloads 131