Search results for: Josef Novak
33 Reorientation of Anisotropic Particles in Free Liquid Microjets
Authors: Mathias Schlenk, Susanne Seibt, Sabine Rosenfeldt, Josef Breu, Stephan Foerster
Abstract:
Thin liquid jets on the micrometer scale play an important role in processing, such as fiber fabrication and inkjet printing, but also in sample delivery for modern synchrotron X-ray devices. In all these cases the liquid jets contain solvents and dissolved materials such as polymers, nanoparticles, fibers, pigments or proteins. As liquid flow in liquid jets differs significantly from flow in capillaries and microchannels, particle localization and orientation will also differ. This is of critical importance for applications that depend on a well-defined, homogeneous particle and fiber distribution and orientation in liquid jets. Investigations of particle orientation in liquid microjets of dilute solutions have been rare, despite their importance. With the advent of micro-focused X-ray beams it has become possible to scan across samples with micrometer resolution to locally analyse the structure and orientation of the samples. In the present work, we used this method to scan across liquid microjets to determine the local distribution and orientation of anisotropic particles. We chose wormlike block copolymer micelles as an example of long flexible fibrous structures, hectorite materials as a model of extended nanosheet structures, and gold nanorods as an illustration of short stiff cylinders, to cover all relevant anisotropic geometries. We find that, due to the different velocity profile in the liquid jet, which resembles plug flow, the orientation of the particles generated in the capillary is lost or changed into non-oriented or biaxially oriented states, depending on the geometrical shape of the particle.
Keywords: anisotropic particles, liquid microjets, reorientation, SAXS
Procedia PDF Downloads 339
32 Identification Algorithm of Critical Interface, Modelling Perils on Critical Infrastructure Subjects
Authors: Jiří. J. Urbánek, Hana Malachová, Josef Krahulec, Jitka Johanidisová
Abstract:
The paper deals with the investigation and modelling of crisis situations within critical infrastructure organizations. Every crisis situation originates in the occurrence of an emergency event, especially in organizations of the energy critical infrastructure. These emergency events can be either expected, in which case crisis scenarios can be pre-prepared by the pertinent organizational crisis management authorities to cope with them, or unexpected (the Black Swan effect), without a pre-prepared scenario, requiring operational coping with the crisis situation. The forms, characteristics, behaviour and utilization of crisis scenarios vary in quality, depending on the prevention and training processes of the given critical infrastructure organization. The aim is always better organizational security and continuity. The objective of this paper is to find and investigate critical/crisis zones and functions in models of critical situations of a critical infrastructure organization. The DYVELOP (Dynamic Vector Logistics of Processes) method is able to identify problematic critical zones and functions, displaying critical interfaces among the actors of crisis situations on DYVELOP maps called Blazons. For this purpose, it is first necessary to derive and create an identification algorithm of critical interfaces. The locations of critical interfaces are the flags of a crisis situation in a real critical infrastructure organization. Finally, the model of a critical interface is demonstrated on a real organization of the Czech energy critical infrastructure in a blackout peril environment. The Blazons require a live PowerPoint presentation for better comprehension of this paper's mission.
Keywords: algorithm, crisis, DYVELOP, infrastructure
Procedia PDF Downloads 410
31 Clean Sky 2 Project LiBAT: Light Battery Pack for High Power Applications in Aviation – Simulation Methods in Early Stage Design
Authors: Jan Dahlhaus, Alejandro Cardenas Miranda, Frederik Scholer, Maximilian Leonhardt, Matthias Moullion, Frank Beutenmuller, Julia Eckhardt, Josef Wasner, Frank Nittel, Sebastian Stoll, Devin Atukalp, Daniel Folgmann, Tobias Mayer, Obrad Dordevic, Paul Riley, Jean-Marc Le Peuvedic
Abstract:
Electrical and hybrid aerospace technologies pose very challenging demands on the battery pack, especially with respect to weight and power. In the Clean Sky 2 research project LiBAT (funded by the EU), the consortium is currently building an ambitious prototype with state-of-the-art cells that shows the potential of an intelligent pack design with a high level of integration, especially with respect to thermal management and power electronics. For the latter, innovative multi-level inverter technology is used to realize the required power-converting functions with reduced equipment. In this talk the key approaches and methods of the LiBAT project will be presented and central results shown. Special focus will be placed on the simulation methods used to support the early design and development stages from an overall system perspective. The applied methods can efficiently handle multiple domains and deal with different time and length scales, thus allowing the analysis and optimization of overall- or sub-system behaviour. It will be shown how these simulations provide valuable information and insights for the efficient evaluation of concepts. As a result, the construction and iteration of hardware prototypes has been reduced and development cycles shortened.
Keywords: electric aircraft, battery, Li-ion, multi-level inverter, Novec
Procedia PDF Downloads 168
30 Use of Locally Effective Microorganisms in Conjunction with Biochar to Remediate Mine-Impacted Soils
Authors: Thomas F. Ducey, Kristin M. Trippe, James A. Ippolito, Jeffrey M. Novak, Mark G. Johnson, Gilbert C. Sigua
Abstract:
The Oronogo-Duenweg mining belt, approximately 20 square miles around the Joplin, Missouri area, is a designated United States Environmental Protection Agency Superfund site due to soil and groundwater contaminated with lead by former mining and smelting operations. Over almost a century of mining (from 1848 to the late 1960s), an estimated ten million tons of cadmium-, lead-, and zinc-containing material were deposited on approximately 9,000 acres. At sites that have undergone remediation, in which the O, A, and B horizons have been removed along with the lead contamination, the exposed C horizon remains recalcitrant to revegetation efforts. These sites also suffer from poor soil microbial activity, as measured by soil extracellular enzymatic assays, though 16S ribosomal ribonucleic acid (rRNA) analysis indicates that microbial diversity is equal to that of sites that have avoided mine-related contamination. Soil analysis reveals low soil organic carbon along with high levels of bioavailable zinc, which reflect the poor soil fertility conditions and low microbial activity. Our study examined the use of several materials to restore and remediate these sites, with the goal of improving soil health. The materials, and their purposes for incorporation into the study, were as follows: manure-based biochar for binding zinc and other heavy metals responsible for phytotoxicity; locally sourced biosolids and compost to incorporate organic carbon into the depleted soils; and effective microorganisms harvested from nearby pristine sites to provide a stable community for nutrient cycling in the newly composited 'soil material'. Our results indicate that all four materials used in conjunction provide the greatest benefit to these mine-impacted soils, based on above-ground biomass, microbial biomass, and soil enzymatic activities.
Keywords: locally effective microorganisms, biochar, remediation, reclamation
Procedia PDF Downloads 217
29 Non-Invasive Techniques of Analysis of Painting in Forensic Fields
Authors: Radka Sefcu, Vaclava Antuskova, Ivana Turkova
Abstract:
A growing market for modern artworks at high prices leads to the creation and sale of artwork counterfeits. Material analysis is an important part of the process of assessing authenticity, and knowledge of the materials and techniques used by the original authors is also necessary. This contribution presents the possibilities of non-invasive methods of structural analysis in research on paintings. It was proved that unambiguous identification of many art materials is feasible without sampling. The combination of Raman spectroscopy with FTIR external reflection enabled the identification of pigments and binders on selected artworks of prominent Czech painters from the first half of the 20th century: Josef Čapek, Emil Filla, Václav Špála and Jan Zrzavý. Raman spectroscopy confirmed the presence of a wide range of white pigments (lead white, zinc white, titanium white, barium white) and also Freeman's white as a special white painting pigment. Good results were obtained for red, blue and most of the yellow areas. Identification of green pigments was often impossible due to strong fluorescence. Oil was confirmed as the binding medium on most of the analyzed artworks via FTIR external reflection. The collected data provide a valuable background for determining the art materials characteristic of each painter (his palette) and their development over time. The obtained results will further serve as comparative material for the authentication of artworks. This work has been financially supported by the project of the Ministry of the Interior of the Czech Republic: The Development of a Strategic Cluster for Effective Instrumental Technological Methods of Forensic Authentication of Modern Artworks (VJ01010004).
Keywords: non-invasive analysis, Raman spectroscopy, FTIR external reflection, forgeries
Procedia PDF Downloads 172
28 Computational Study of Composite Films
Authors: Rudolf Hrach, Stanislav Novak, Vera Hrachova
Abstract:
Composite and nanocomposite films represent a class of promising materials and are frequent objects of study due to their mechanical, electrical and other properties. The most interesting are probably the composite metal/dielectric structures consisting of a metal component embedded in an oxide or polymer matrix. The behaviour of composite films varies with the amount of the metal component inside, characterized by the so-called filling factor. For small filling factors the structures contain individual metal particles or nanoparticles completely insulated by the dielectric matrix, and the films have essentially dielectric properties. The conductivity of the films increases with increasing filling factor, until finally a transition into a metallic state occurs. The behaviour of composite films near the percolation threshold, where the charge transport mechanism changes from thermally activated tunnelling between individual metal objects to ohmic conductivity, is especially important. The physical properties of composite films are determined not only by the concentration of the metal component but also by the spatial and size distributions of the metal objects, which are influenced by the technology used. In our contribution, a study of composite structures was performed with the methods of computational physics. The study consists of two parts:
-Generation of simulated composite and nanocomposite films. Techniques based on hard-sphere or soft-sphere models as well as on atomic modelling are used here, followed by characterization of the prepared composite structures by image analysis of their sections or projections. However, the performance of various morphological methods must be analysed, as the standard algorithms based on the theory of mathematical morphology lose their sensitivity when applied to composite films.
-Study of charge transport in the composites by the kinetic Monte Carlo method, as there is a close connection between the structural and electric properties of composite and nanocomposite films. It was found that near the percolation threshold the paths of tunnel current form so-called fuzzy clusters.
The main aim of the present study was to establish the correlation between the morphological properties of composites/nanocomposites and the structure of conducting paths in them, in dependence on the technology of composite film preparation.
Keywords: composite films, computer modelling, image analysis, nanocomposite films
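The dielectric-to-metal transition described above can be illustrated with a minimal site-percolation sketch. Note the assumptions: metal sites occupy lattice cells at random with probability equal to the filling factor, which is a toy stand-in for the authors' more elaborate hard-sphere and kinetic Monte Carlo models. A conducting path exists when occupied sites connect the top and bottom edges of the sample:

```python
import random

def random_composite(n, fill, rng):
    # Each lattice site is "metal" with probability `fill` (the filling factor).
    return [[rng.random() < fill for _ in range(n)] for _ in range(n)]

def spans(grid):
    # Depth-first search from metal sites on the top row; the sample
    # "conducts" if a connected metal cluster reaches the bottom row.
    n = len(grid)
    seen = {(0, j) for j in range(n) if grid[0][j]}
    stack = list(seen)
    while stack:
        i, j = stack.pop()
        if i == n - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and grid[a][b] and (a, b) not in seen:
                seen.add((a, b))
                stack.append((a, b))
    return False

rng = random.Random(42)
for fill in (0.1, 0.45, 0.6, 0.9):
    print(fill, spans(random_composite(60, fill, rng)))
```

For a 2-D square lattice the spanning probability switches sharply near a site occupancy of about 0.59, a toy analogue of the percolation threshold discussed above.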
Procedia PDF Downloads 393
27 The Consistency of Gerhard Kittel's "Christian" Antisemitism in His "Die Judenfrage" and "Meine Verteidigung"
Authors: Catherine Harrison
Abstract:
Faced with arrest, imprisonment and the denazification process in 1945, Tübingen University's Professor of Theology, Gerhard Kittel, refused to abandon the “Christian” antisemitism which he had first expounded in his Die Judenfrage [The Jewish Question] (1933 and 1934). At the heart of this paper is a critical engagement with Die Judenfrage, the first in English. Putting Die Judenfrage into dialogue with Kittel's Meine Verteidigung [My Defence] (1945-6) exposes the remarkable consistency of Kittel's idiosyncratic but closely argued Christian theology of antisemitism. Girdling his career as a foremost theologian, antisemite and enthusiastic supporter of Hitler and the NSDAP, the consistency between Die Judenfrage and Meine Verteidigung attests Kittel's consistent and authentic intellectual position. In both texts, he claims to be advancing Christian, as opposed to “vulgar” or racial, antisemitism. Yet, in the thirteen years which divide them, Kittel had mediated contact with Nazi illuminati Rudolf Hess, Alfred Rosenberg, Winifred Wagner, Josef Goebbels and Baldur von Schirach through his publications in various antisemitic journals. The paper argues that Die Judenfrage, as both a text and a theme, is axiomatic to Kittel's defence statement, and that Die Judenfrage constitutes the template of Kittel's arcane, personal “Christian” antisemitism, of which Meine Verteidigung is a faithful impression. Both are constructed on the same theologically chimeric and abstruse hypotheses regarding Volk, Spätjudentum [late Judaism] and Heilsgeschichte [salvation history]. Problematising these and other definitional vagaries that make up Kittel's “Christian” antisemitism highlights the remarkable theoretical consistency between Die Judenfrage and Meine Verteidigung.
It is concluded that a deadly synergy of Nazi racial antisemitism and New Testament antisemitism shaped Kittel's judgement to the degree that, despite the slipstream of concentration camp footage which was shaking the foundations of post-war German academia, Meine Verteidigung is a simple restatement of the antisemitism conveyed in Die Judenfrage.
Keywords: Gerhard Kittel, Third Reich theology, the Jewish Question, Nazi antisemitism
Procedia PDF Downloads 167
26 Effects of Different Types of Perioperative Analgesia on Minimal Residual Disease Development After Colon Cancer Surgery
Authors: Lubomir Vecera, Tomas Gabrhelik, Benjamin Tolmaci, Josef Srovnal, Emil Berta, Petr Prasil, Petr Stourac
Abstract:
Cancer is the second leading cause of death worldwide, and colon cancer is the second most common type of cancer. Currently, there are only a few studies evaluating the effect of postoperative analgesia on the prognosis of patients undergoing radical colon cancer surgery. Postoperative analgesia in these patients is usually managed in one of two ways: with strong opioids (morphine, piritramide) or with epidural analgesia. In our prospective study, we evaluated the effect of postoperative analgesia on the presence of circulating tumor cells, or minimal residual disease, after colon cancer surgery. A total of 60 patients who underwent radical colon cancer surgery were enrolled in this prospective, randomized, two-center study. Patients were randomized into three groups: piritramide, morphine and postoperative epidural analgesia. We evaluated the presence of carcinoembryonic antigen (CEA) and cytokeratin 20 (CK-20) mRNA-positive circulating tumor cells in peripheral blood before surgery, immediately after surgery, on postoperative day two and one month after surgery. The presence of circulating tumor cells was assessed by quantitative real-time reverse transcriptase-polymerase chain reaction (qRT-PCR). In the piritramide postoperative analgesia group, the presence of CEA mRNA-positive cells was significantly lower on postoperative day two compared to the other groups (p=0.04). The number of CK-20 mRNA-positive cells was the same in all groups on all days. In all groups, both types of circulating tumor cells returned to normal levels one month after surgery. Demographic and baseline clinical characteristics were similar in all groups. Compared with morphine and epidural analgesia, piritramide significantly reduces the number of CEA mRNA-positive circulating tumor cells after radical colon cancer surgery.
Keywords: cancer progression, colon cancer, minimal residual disease, perioperative analgesia
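Relative qRT-PCR quantification of the kind reported above is commonly carried out with the 2^-ΔΔCt method. The sketch below is a generic illustration of that calculation, not the study's actual analysis pipeline, and the cycle-threshold (Ct) values are invented for demonstration:

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method: the target transcript
    (e.g. CEA or CK-20 mRNA) is first normalized to a reference gene,
    then compared with the control (e.g. pre-surgery) sample."""
    dct_sample = ct_target - ct_ref          # normalize sample to reference gene
    dct_control = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(dct_sample - dct_control)

# Hypothetical Ct values: relative to the reference gene, the target
# amplifies 2 cycles later than in the control sample, i.e. a
# four-fold reduction of the marker transcript.
print(fold_change(24.0, 20.0, 22.0, 20.0))  # 0.25
```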
Procedia PDF Downloads 190
25 Computer-Assisted Management of Building Climate and Microgrid with Model Predictive Control
Authors: Vinko Lešić, Mario Vašak, Anita Martinčević, Marko Gulin, Antonio Starčić, Hrvoje Novak
Abstract:
With 40% of total world energy consumption, building systems are developing into technically complex large energy consumers, suitable for the application of sophisticated power management approaches that largely increase energy efficiency and even make buildings active energy market participants. A centralized control system for building heating and cooling managed by economically optimal model predictive control shows promising results, with an estimated 30% increase in energy efficiency. The research is focused on the implementation of such a method in a case study performed on two floors of our faculty building, with corresponding wireless sensor data acquisition, remote heating/cooling units and a central climate controller. Building walls are mathematically modelled with the corresponding material types, surface shapes and sizes. The models are then exploited to predict thermal characteristics and changes in different building zones. Exterior influences such as environmental conditions and the weather forecast, occupant behaviour and comfort demands are all taken into account in deriving price-optimal climate control. Finally, a DC microgrid with photovoltaics, a wind turbine, a supercapacitor, batteries and fuel cell stacks is added to make the building a unit capable of active participation in a price-varying energy market. The computational burden of applying model predictive control to such a complex system is relaxed through a hierarchical decomposition of the microgrid and climate control: the former is designed as the higher hierarchical level with pre-calculated price-optimal power flow control, and the latter as the lower-level control responsible for ensuring thermal comfort and exploiting the optimal supply conditions enabled by microgrid energy flow management.
Such an approach is expected to enable the inclusion of more complex building subsystems in order to further increase energy efficiency.
Keywords: price-optimal building climate control, microgrid power flow optimisation, hierarchical model predictive control, energy-efficient buildings, energy market participation
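The price-optimal receding-horizon idea can be sketched in miniature. All numbers, the one-zone linear thermal model and the brute-force search below are illustrative assumptions, not the hierarchical controller described above (which optimizes a far richer model):

```python
import itertools

def mpc_step(T0, T_out, prices, a=0.9, b=0.8, c=0.1,
             comfort=(20.0, 23.0), u_levels=(0.0, 1.0, 2.0)):
    """One receding-horizon step: enumerate heating sequences over a short
    horizon, keep those that stay inside the comfort band, pick the
    cheapest, and return its first input (re-optimized at every step)."""
    best_cost, best_u0 = float("inf"), None
    for seq in itertools.product(u_levels, repeat=len(prices)):
        T, cost, feasible = T0, 0.0, True
        for u, price in zip(seq, prices):
            T = a * T + b * u + c * T_out   # simple linear zone model
            cost += price * u               # energy cost at market price
            if not comfort[0] <= T <= comfort[1]:
                feasible = False
                break
        if feasible and cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0, best_cost

u0, cost = mpc_step(21.0, 5.0, prices=[1.0, 1.0, 1.0])
print(u0, cost)
```

In the paper's hierarchical scheme, a step like this would sit at the lower level, with the price signal and supply conditions pre-computed by the microgrid layer.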
Procedia PDF Downloads 466
24 Microscopic Analysis of Interfacial Transition Zone of Cementitious Composites Prepared by Various Mixing Procedures
Authors: Josef Fládr, Jiří Němeček, Veronika Koudelková, Petr Bílý
Abstract:
The mechanical parameters of cementitious composites differ quite significantly based on the composition of the cement matrix. They are also influenced by mixing times and procedure. The research presented in this paper was aimed at identifying differences in the microstructure of normal strength (NSC) and differently mixed high strength (HSC) cementitious composites. Scanning electron microscopy (SEM) investigation together with energy-dispersive X-ray spectroscopy (EDX) phase analysis of NSC and HSC samples was conducted. The interfacial transition zone (ITZ) between the aggregate and the cement matrix was evaluated: its volume share, thickness, porosity and composition were studied. In the case of HSC, samples obtained by several different mixing procedures were compared in order to find the most suitable procedure. In the case of NSC, an ITZ was identified around 40-50% of aggregate grains, and its thickness typically ranged between 10 and 40 µm. Higher porosity and a lower share of clinker were observed in this area as a result of the increased water-to-cement ratio (w/c) and the lack of fine particles improving the grading curve of the aggregate. A typical ITZ with lower Ca content was observed in only one HSC sample, where it developed around less than 15% of aggregate grains. The typical thickness of the ITZ in this sample was similar to that in NSC (between 5 and 40 µm). In the remaining four HSC samples, no ITZ was observed. In general, the share of ITZ in HSC samples was found to be significantly smaller than in NSC samples. As the ITZ is the weakest part of the material, this result explains to a large extent the improved mechanical properties of HSC compared to NSC.
Based on the comparison of the ITZ characteristics in HSC samples prepared by different mixing procedures, the most suitable mixing procedure from the point of view of ITZ properties was identified.
Keywords: energy-dispersive X-ray spectroscopy, high strength concrete, interfacial transition zone, normal strength concrete, scanning electron microscopy
Procedia PDF Downloads 292
23 Computational Assistance of the Research, Using Dynamic Vector Logistics of Processes for Critical Infrastructure Subjects Continuity
Authors: Urbánek Jiří J., Krahulec Josef, Urbánek Jiří F., Johanidesová Jitka
Abstract:
This paper deals with computational assistance for the research and modelling of critical infrastructure subjects' continuity. It enables the use of the prevailing office software MS Office (SmartArt) for mathematical models built with the DYVELOP (Dynamic Vector Logistics of Processes) method, which serves for the investigation and modelling of crisis situations within critical infrastructure organizations. In the first part of the paper, the entities, operators and actors of the DYVELOP method are introduced. The method uses just three operators of Boolean algebra and four types of entities: the Environments, the Process Systems, the Cases and the Controlling. The Process Systems (PrS) have five "brothers": Management PrS, Transformation PrS, Logistic PrS, Event PrS and Operation PrS. The Cases have three "sisters": Process Cell Case, Use Case and Activity Case. They all need special Ctrl actors for the controlling of their functions, except the Environment, which can do without Ctrl. The model maps, named Blazons, can express mathematically and graphically the relationships among entities, actors and processes. In the second part of the paper, the rich Blazons of the DYVELOP method are used for discovering and modelling cycling cases and their phases; the Blazons require a live PowerPoint presentation for better comprehension of this paper's mission. The crisis management of an energy critical infrastructure organization is obliged to use cycles for successful coping with crisis situations. Repeated cycling of these cases is a necessary condition for encompassing both the emergency event and the mitigation of the organization's damages. An uninterrupted and continuous cycling process brings fruitfulness to crisis management, and it is a good indicator and controlling actor of organizational continuity and of advanced possibilities for its sustainable development.
Reliable research rules are derived for the safe and reliable continuity of an energy critical infrastructure organization in a crisis situation.
Keywords: blazons, computational assistance, DYVELOP method, critical infrastructure
Procedia PDF Downloads 384
22 The Effects of Nano Zerovalent Iron (nZVI) and Magnesium Oxide Nanoparticles on Methane Production during Anaerobic Digestion of Waste Activated Sludge
Authors: Passkorn Khanthongthip, John T. Novak
Abstract:
Many studies have reported that nZVI and MgO nanoparticles (NPs) are often found in waste activated sludge (WAS). However, little is known about the impact of those NPs on WAS stabilization. The aims of this study were to investigate the effects of both NPs on WAS anaerobic digestion for methane production and to examine the change in the methanogenic population under these different environments using qPCR. Four dosages (2, 50, 100, and 200 mg/g-TSS) of MgO NPs were added to four different bottles containing WAS to investigate the impact of MgO NPs on methane production during WAS anaerobic digestion. The effects of nZVI on methane production were studied in another four bottles using the same methods, except that the MgO NPs were replaced by nZVI. A bottle of WAS anaerobic digestion without nanoparticle addition was also operated to serve as a control. It was found that the relative amounts of methane production, compared to the control system, in the WAS anaerobic digestion bottles with 2, 50, 100 and 200 mg/g-TSS MgO NPs were 98, 62, 28, and 14%, respectively. This suggests that higher MgO NP dosages result in lower methane production. The batch-test data on the effects of the corresponding released Mg2+ indicated that 50 mg/g-TSS MgO NPs or more could inhibit methane production by at least 25%. Moreover, the volatile fatty acid (VFA) concentration was 328, 384, 928, 3,684, and 7,848 mg/L for the control and the four WAS anaerobic digestion bottles with 2, 50, 100 and 200 mg/g-TSS MgO NPs, respectively. A higher VFA concentration could reduce pH and subsequently decrease methanogen growth, resulting in lower methane production. The relative numbers of total gene copies of methanogens in samples taken from the WAS anaerobic digestion bottles were approximately 99, 68, 38, and 24% of the control for the addition of 2, 50, 100, and 200 mg/g-TSS, respectively.
Obviously, the more MgO NPs present in the sludge anaerobic digestion system, the fewer methanogens remained. In contrast, the relative amounts of methane production found in the four WAS anaerobic digestion bottles with 2, 50, 100, and 200 mg/g-TSS nZVI were 102, 128, 112, and 104% of the control, respectively. The measurement of the methanogenic population indicated that the relative contents of methanogen gene copies were 101, 132, 120, and 112% of those found in the control, respectively. Additionally, the cumulative VFA was 320, 234, 308, and 330 mg/L, respectively. This reveals that nZVI addition could help increase the methanogenic population; the larger number of methanogens accelerated VFA degradation for greater methane production, resulting in lower VFA accumulation in the digesters. Moreover, the batch-test data on the effects of the corresponding released Fe2+ suggest that the addition of approximately 50 mg/g-TSS nZVI increased methane production by 20%. In conclusion, the presence of MgO NPs appears to diminish methane production during WAS anaerobic digestion, with higher MgO NP dosages resulting in stronger inhibition. In contrast, nZVI addition promoted the methanogenic population, which facilitated methane production.
Keywords: magnesium oxide nanoparticles, methane production, methanogenic population, nano zerovalent iron
Procedia PDF Downloads 296
21 Stress Corrosion Cracking Tests of Candidate Materials in Support of the Development of the European Small Modular Supercritical Water Cooled Reactor Concept
Authors: Radek Novotny, Michal Novak, Daniela Marusakova, Monika Sipova, Hugo Fuentes, Peter Borst
Abstract:
This research has been conducted within the European HORIZON 2020 project ECC-SMART. The main objective is to assess whether it is feasible to design and develop a small modular reactor (SMR) cooled by supercritical water (SCW). One of the main objectives of the material research concerns the corrosion of the candidate cladding materials. The experimental part has been conducted in support of the qualification procedure for future SCW-SMR constructional materials; a further objective was to identify the gaps in current norms and guidelines. Apart from corrosion resistance testing of candidate materials, stress corrosion cracking (SCC) susceptibility tests have been performed in supercritical water. This paper describes part of these tests, in particular slow strain rate tensile loading applied to tangential ring-shaped specimens of two candidate materials, Alloy 800H and 310S stainless steel. These ring tensile tests are one of the methods used for tensile testing of nuclear cladding: full circular heads with dimensions roughly equal to the inner diameter of the sample are used, and the gage sections are placed parallel to the applied load. Slow strain rate tensile tests have been conducted in 380 or 500°C supercritical water, applying two different elongation rates, 1x10-6 and 1x10-7 s-1. The effect of temperature and dissolved oxygen content on the SCC susceptibility of Alloy 800H and 310S stainless steel was investigated by applying two different temperatures and concentrations of dissolved oxygen in supercritical water. The post-fracture analysis includes fractographic analysis of the fracture surfaces using SEM as well as cross-sectional analysis of the occurrence of secondary cracks. The effect of the environment and of the dissolved oxygen content was assessed by comparison with the results of reference tests performed in air and under N2 gas overpressure.
The effect of high temperature on creep and its role in the initiation of SCC was assessed as well. It has been concluded that the applied test method can be very useful for the investigation of the stress corrosion cracking susceptibility of candidate cladding materials in supercritical water.
Keywords: stress corrosion cracking, ring tensile tests, supercritical water, Alloy 800H, 310S stainless steel
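To give a feel for why these elongation rates make SCC tests long-running, the time to reach a given strain at a constant strain rate is simply strain divided by rate. The 15% strain target below is an illustrative assumption, not a figure from the study:

```python
def time_to_strain(strain, rate_per_s):
    """Seconds needed to reach a given engineering strain
    at a constant applied strain rate."""
    return strain / rate_per_s

for rate in (1e-6, 1e-7):  # the two elongation rates used above
    days = time_to_strain(0.15, rate) / 86400.0
    print(f"{rate:.0e} 1/s -> {days:.1f} days to 15 % strain")
```

At 1x10-7 s-1 a single test to 15% strain runs for more than two weeks, which is why slow strain rate testing is typically limited to a few selected conditions.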
Procedia PDF Downloads 88
20 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers
Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran
Abstract:
With the exponential growth of social networks, video streaming and increasing demands on data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increased amount of data that has to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed. It stems from the fact, the connections in data centers are typically realized within a short distance, and the application of MM fibers and components considerably reduces costs. On the other hand, the usage of MM components brings specific requirements for installation service conditions. Moreover, it has to be taken into account that MM fiber components have a higher production tolerance for parameters like core and cladding diameters, eccentricity, etc. Due to the high demands for the reliability of data center components, the determination of properly excited optical field inside the MM fiber core belongs to the key parameters while designing such an MM optical system architecture. Appropriately excited mode field of the MM fiber provides optimal power budget in connections, leads to the decrease of insertion losses (IL) and achieves effective modal bandwidth (EMB). The main parameter, in this case, is the encircled flux (EF), which should be properly defined for variable optical sources and consequent different mode-field distribution. In this paper, we present detailed investigation and measurements of the mode field distribution for short MM links purposed in particular for data centers with the emphasis on reliability and safety. These measurements are essential for large MM network design. The various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges. 
Furthermore, we focused on particular defects and errors that can realistically occur, such as eccentricity, connector shift, or dust; these were simulated and measured, and their influence on EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at the two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
Keywords: optical fiber, multi-mode, data centers, encircled flux
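The EF discussed above is the cumulative fraction of optical power contained within a given radius of the fiber core. As a minimal numerical sketch, it can be computed from a sampled radial near-field intensity profile; the Gaussian-like launch condition and evaluation radii below are hypothetical, not measured data from this work:

```python
import math

def encircled_flux(r, intensity, r_eval):
    """Fraction of total optical power within radius r_eval, from a sampled
    radial near-field intensity profile I(r). Power in a thin annulus
    scales as I(r) * 2*pi*r * dr, so we accumulate that sum radially."""
    dr = r[1] - r[0]
    cumulative = []
    total = 0.0
    for ri, ii in zip(r, intensity):
        total += ii * 2.0 * math.pi * ri * dr
        cumulative.append(total)
    # first sample at or beyond the evaluation radius
    for ri, p in zip(r, cumulative):
        if ri >= r_eval:
            return p / cumulative[-1]
    return 1.0

# Hypothetical Gaussian-like launch over a 25 um core radius grid
n = 500
r = [25.0 * k / (n - 1) for k in range(n)]
intensity = [math.exp(-(ri / 12.0) ** 2) for ri in r]
ef_at_4_5 = encircled_flux(r, intensity, 4.5)   # near-axis power fraction
ef_at_19 = encircled_flux(r, intensity, 19.0)   # power fraction near the core edge
```

In practice the measured EF curve would be checked against the template limits defined for standardized launch conditions rather than against arbitrary radii as here.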
Procedia PDF Downloads 377
19 Liquid Illumination: Fabricating Images of Fashion and Architecture
Authors: Sue Hershberger Yoder, Jon Yoder
Abstract:
“The appearance does not hide the essence, it reveals it; it is the essence.”—Jean-Paul Sartre, Being and Nothingness Three decades ago, transarchitect Marcos Novak developed an early form of algorithmic animation he called “liquid architecture.” In that project, digitally floating forms morphed seamlessly in cyberspace without claiming to evolve or improve. Change itself was seen as inevitable. And although some imagistic moments certainly stood out, none was hierarchically privileged over another. That project challenged longstanding assumptions about creativity and artistic genius by posing infinite parametric possibilities as inviting alternatives to traditional notions of stability, originality, and evolution. Through ephemeral processes of printing, milling, and projecting, the exhibition “Liquid Illumination” destabilizes the solid foundations of fashion and architecture. The installation is neither worn nor built in the conventional sense, but—like the sensual art forms of fashion and architecture—it is still radically embodied through the logics and techniques of design. Appearances are everything. Surface pattern and color are no longer understood as minor afterthoughts or vapid carriers of dubious content. Here, they become essential but ever-changing aspects of precisely fabricated images. Fourteen silk “colorways” (a term from the fashion industry) are framed selections from ongoing experiments with intricate pattern and complex color configurations. Whether these images are printed on fabric, milled in foam, or illuminated through projection, they explore and celebrate the untapped potentials of the surficial and superficial. Some components of individual prints appear to float in front of others through stereoscopic superimpositions; some figures appear to melt into others due to subtle changes in hue without corresponding changes in value; and some layers appear to vibrate via moiré effects that emerge from unexpected pattern and color combinations. 
The liturgical atmosphere of Liquid Illumination is intended to acknowledge that, like the simultaneously sacred and superficial qualities of rose windows and illuminated manuscripts, artistic and religious ideologies are also always malleable. The intellectual provocation of this paper pushes the boundaries of current thinking concerning viable applications for fashion print designs and architectural images—challenging traditional boundaries between fine art and design. The opportunistic installation of digital printing, CNC milling, and video projection mapping in a gallery that is normally reserved for fine art exhibitions raises important questions about cultural/commercial display, mass customization, digital reproduction, and the increasing prominence of surface effects (color, texture, pattern, reflection, saturation, etc.) across a range of artistic practices and design disciplines.
Keywords: fashion, print design, architecture, projection mapping, image, fabrication
Procedia PDF Downloads 88
18 A Delphi Study of Factors Affecting the Forest Biorefinery Development in the Pulp and Paper Industry: The Case of Bio-Based Products
Authors: Natasha Gabriella, Josef-Peter Schöggl, Alfred Posch
Abstract:
Being a mature industry, the pulp and paper industry (PPI) possesses strengths arising from its existing infrastructure, technology know-how, and abundant availability of biomass. However, the declining trend in wood-based product sales sends a clear signal to the industry to transform its business model in order to increase its profitability. With the emerging global attention on the bio-based economy and circular economy, coupled with the low price of fossil feedstock, the PPI has started to integrate biorefinery as a value-added business model to keep the industry competitive. Nonetheless, biorefinery as an innovation confronts the PPI with several barriers, of which uncertainty about the most promising products is one of the major hurdles. This study aims to assess the factors that affect the diffusion and development of forest biorefinery in the PPI, including drivers, barriers, advantages, and disadvantages, as well as the most promising bio-based products of forest biorefinery. The study examines the identified factors according to the layers of the business environment: the macro-environment, industry, and strategic group levels. In addition, an overview of the future state of the identified factors is elaborated to map the improvements necessary for implementing forest biorefinery. A two-phase Delphi method is used to collect the empirical data for the study, comprising an online survey and interviews. The Delphi method is an effective communication tool for eliciting ideas from a group of experts in order to reach a consensus when forecasting future trends. With a panel of 50 experts, the study reveals that influential factors are found in every layer of the PPI's business environment. The political dimension appears to have a significant influence in tackling the economic barrier while reinforcing the environmental and social benefits in the macro-environment. 
At the industry level, biomass availability appears to be a strength of the PPI, while knowledge gaps in technology and markets appear to be barriers. Consequently, cooperation with academia and the chemical industry has to be improved. Human resources are indicated as one important premise behind the preceding barrier, along with indications of the PPI's resistance towards biorefinery implementation as an innovation. Further, cellulose-based products are acknowledged for near-term product development, whereas lignin-based products are expected to gain importance in the long term.
Keywords: forest biorefinery, pulp and paper, bio-based product, Delphi method
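A common way to quantify how far a Delphi panel has converged is Kendall's coefficient of concordance W over the experts' rankings. The sketch below uses a made-up three-expert panel, not the study's actual 50-expert data:

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance W for a list of expert rank
    vectors (one list of ranks per expert, no tied ranks). W = 1 means
    full consensus among the panel; W near 0 means no agreement."""
    m = len(rankings)        # number of experts
    n = len(rankings[0])     # number of items being ranked
    totals = [sum(col) for col in zip(*rankings)]   # rank sum per item
    mean = sum(totals) / n
    s = sum((t - mean) ** 2 for t in totals)        # spread of rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Three hypothetical experts ranking four factors (1 = most important)
panel = [[1, 2, 3, 4],
         [1, 3, 2, 4],
         [2, 1, 3, 4]]
w = kendalls_w(panel)   # fairly high agreement, short of full consensus
```

In a two-phase Delphi, W is typically computed after each round; rising W between rounds is taken as evidence that feedback is driving the panel towards consensus.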
Procedia PDF Downloads 278
17 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem
Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly
Abstract:
We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits compared to other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore all lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) if the problem is possible to solve using AQO, 2) if it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. 
For testing and validation, a D-Wave 2X device was used, as well as QxBranch’s QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but does not scale well and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices are able to show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints on to those architectures is needed to realize those commercial benefits.
Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard
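The two transformations named above, integer range and 1-hot encoding, can be illustrated in miniature. The sketch below shows generic encodings of a bounded integer variable into binary (qubit-like) variables; it illustrates the general technique only, and is not the authors' exact formulation or the D-Wave embedding itself:

```python
import math

def binary_encoding(lo, hi):
    """Integer-range encoding: x in [lo, hi] is represented as
    lo + sum(2^i * q_i) over ceil(log2(hi - lo + 1)) binary variables.
    Values above hi would need an extra penalty or clamping on hardware."""
    n_bits = max(1, math.ceil(math.log2(hi - lo + 1)))
    def decode(bits):
        return lo + sum(b << i for i, b in enumerate(bits))
    return n_bits, decode

def one_hot_encoding(lo, hi):
    """1-hot encoding: one binary variable per admissible value; exactly
    one must be set, which on hardware is enforced by a penalty term."""
    values = list(range(lo, hi + 1))
    def decode(bits):
        assert sum(bits) == 1, "one-hot constraint violated"
        return values[bits.index(1)]
    return len(values), decode
```

The trade-off the paper's two methods navigate is visible even here: the range encoding uses logarithmically few variables but complicates constraints, while 1-hot uses one variable per value but keeps each constraint a simple penalty.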
Procedia PDF Downloads 527
16 Spray Nebulisation Drying: Alternative Method to Produce Microparticulated Proteins
Authors: Josef Drahorad, Milos Beran, Ondrej Vltavsky, Marian Urban, Martin Fronek, Jiri Sova
Abstract:
Engineering efforts by researchers of the Food Research Institute Prague and the Czech Technical University in spray drying technologies led to the introduction of the demonstrator ATOMIZER and a new technology of Carbon Dioxide-Assisted Spray Nebulization Drying (CASND). The equipment combines spray drying technology, in which the liquid to be dried is atomized by a rotary atomizer, with the Carbon Dioxide Assisted Nebulization - Bubble Dryer (CAN-BD) process in an original way. A solution, emulsion, or suspension is saturated with carbon dioxide at pressures up to 80 bar before the drying process. The atomization process takes place in two steps. In the first step, primary droplets are produced at the outlet of a rotary atomizer of special construction. In the second step, the primary droplets are divided into secondary droplets by CO2 expanding from inside the primary droplets. The secondary droplets, usually in the form of microbubbles, are rapidly dried by a warm air stream at temperatures up to 60 °C, and solid particles are formed in a drying chamber. Powder particles are separated from the drying air stream in a high-efficiency fine powder separator. The product is frequently in the form of submicron hollow spheres. The CASND technology has been used to produce microparticulated protein concentrates for human nutrition from alternative plant sources - hemp and canola seed filtration cakes. Alkali extraction was used to extract the proteins from the filtration cakes. The protein solutions after alkali extraction were dried with the demonstrator ATOMIZER. Aerosol particle size distribution and concentration in the drying chamber were determined by two different on-line aerosol spectrometers, SMPS (Scanning Mobility Particle Sizer) and APS (Aerodynamic Particle Sizer). The protein powders were in the form of hollow spheres with an average particle diameter of about 600 nm. The particles were characterized by SEM. 
The functional properties of the microparticulated protein concentrates were compared with those of the same protein concentrates dried by the conventional spray drying process. The microparticulated proteins proved to have improved foaming and emulsifying properties and water and oil absorption capacities, and they formed long-term stable water dispersions. This work was supported by research grant TH03010019 of the Technology Agency of the Czech Republic.
Keywords: carbon dioxide-assisted spray nebulization drying, canola seed, hemp seed, microparticulated proteins
Procedia PDF Downloads 169
15 Biodegradable Self-Supporting Nanofiber Membranes Prepared by Centrifugal Spinning
Authors: Milos Beran, Josef Drahorad, Ondrej Vltavsky, Martin Fronek, Jiri Sova
Abstract:
While most nanofibers are produced by electrospinning, this technique suffers from several drawbacks, such as the requirement for specialized equipment, high electrical potential, and electrically conductive targets. Consequently, recent years have seen the emergence of novel strategies for generating nanofibers at larger scale and higher throughput. Centrifugal spinning is a simple, cheap, and highly productive technology for nanofiber production. In principle, the drawing of a solution filament into nanofibers by centrifugal spinning is achieved through the controlled manipulation of the centrifugal force, viscoelasticity, and mass transfer characteristics of the spinning solutions. Engineering efforts by researchers of the Food Research Institute Prague and the Czech Technical University in the field of centrifugal nozzleless spinning led to the introduction of the pilot plant demonstrator NANOCENT. The main advantages of the demonstrator are lower investment cost - thanks to a simpler construction compared to widely used electrospinning equipment - higher production speed, new application possibilities, and easy maintenance. Centrifugal nozzleless spinning is especially suitable for producing submicron fibers from polymeric solutions in highly volatile solvents, such as chloroform, DCM, THF, or acetone. To date, submicron fibers have been prepared from PS, PUR, and biodegradable polyesters, such as PHB, PLA, PCL, or PBS. The products are in the form of 3D structures or nanofiber membranes. Unique self-supporting nanofiber membranes were prepared from the biodegradable polyesters in different mixtures. The nanofiber membranes have been tested for different applications. Filtration efficiencies for water solutions and for aerosols in air were evaluated. 
Different active inserts were added to the solutions before the spinning process, such as inorganic nanoparticles, organic precursors of metal oxides, antimicrobial and wound-healing compounds, or photocatalytic phthalocyanines. Sintering can subsequently be carried out to remove the polymeric material and convert the organic precursors to metal oxides, such as SiO2 or photocatalytic ZnO and TiO2, to obtain inorganic nanofibers. Electrospinning is a more suitable technology than centrifugal nozzleless spinning for producing membranes for filtration applications, because it forms more homogeneous nanofiber layers and fibers with smaller diameters. The self-supporting nanofiber membranes prepared from the biodegradable polyesters are especially suitable for medical applications, such as wound or burn dressings or tissue engineering scaffolds. This work was supported by research grant TH03020466 of the Technology Agency of the Czech Republic.
Keywords: polymeric nanofibers, self-supporting nanofiber membranes, biodegradable polyesters, active inserts
Procedia PDF Downloads 166
14 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking
Authors: Trevor Toy, Josef Langerman
Abstract:
Around a quarter of the world’s data is generated by the financial sector, with global non-cash transactions estimated at 708.5 billion. With Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders to openly, legitimately, and consensually share the data required to enable it. Integration and sharing of anonymised transactional data are still operated in silos and centralised among the large corporate entities in the ecosystem that have the resources to do so. Smaller fintechs generating data, and businesses looking to consume data, are largely excluded from the process. There is therefore a growing demand for accessible transactional data for analytical purposes and to support the rapid global adoption of Open Banking. This research provides a solution framework that aims to offer a secure decentralised marketplace for 1) data providers to list their transactional data, 2) data consumers to find and access that data, and 3) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to themselves. The platform also provides an integrated system for downstream transaction-related data from merchants, enriching the data product available to build a comprehensive view of a data subject’s spending habits. A robust and sustainable data market can be developed by providing a more accessible mechanism for data producers to monetise their data investments and by encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers. 
This core component of the platform is developed as a decentralised blockchain contract with a market layer that manages the transaction, user, pricing, payment, tagging, contract, control, and lineage features that pertain to user interactions on the platform. One of the platform’s key features is enabling the participation and management of personal data by the individuals from whom the data is being generated. The framework was developed into a proof of concept on the Ethereum blockchain, where an individual can securely manage access to their own personal data and to that individual’s identifiable relationship to the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour in correlation with key demographic information. This platform solution can ultimately support the growth, prosperity, and development of economies, businesses, communities, and individuals by providing accessible and relevant transactional data for big data analytics and open banking.
Keywords: big data markets, open banking, blockchain, personal data management
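The marketplace logic described above (providers list data, subjects grant or revoke consent, consumers purchase access) can be sketched off-chain as a plain class. All names, the token format, and the 50/50 revenue split between provider and data subject are illustrative assumptions, not details from the paper:

```python
class DataMarket:
    """Minimal in-memory sketch of the marketplace mechanism: providers
    list datasets, data subjects grant or revoke consent, and consumers
    may purchase access only while consent is active."""

    def __init__(self):
        self.listings = {}    # dataset_id -> provider, subject, price
        self.consents = set() # (dataset_id, consumer) pairs with consent
        self.balances = {}    # party -> accumulated revenue

    def list_dataset(self, dataset_id, provider, subject, price):
        self.listings[dataset_id] = {"provider": provider,
                                     "subject": subject,
                                     "price": price}

    def grant(self, dataset_id, consumer):
        self.consents.add((dataset_id, consumer))

    def revoke(self, dataset_id, consumer):
        self.consents.discard((dataset_id, consumer))

    def purchase(self, dataset_id, consumer):
        if (dataset_id, consumer) not in self.consents:
            raise PermissionError("data subject has not granted consent")
        item = self.listings[dataset_id]
        # split the payment between provider and data subject (50/50 here)
        for party in (item["provider"], item["subject"]):
            self.balances[party] = self.balances.get(party, 0.0) + item["price"] / 2
        return f"access-token:{dataset_id}:{consumer}"
```

On-chain, the consent set and balances would live in contract storage and the revocation check would run inside the purchase transaction; the control flow, however, is the same.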
Procedia PDF Downloads 74
13 Conceptualizing of Priorities in the Dynamics of Public Administration Contemporary Reforms
Authors: Larysa Novak-Kalyayeva, Aleksander Kuczabski, Orystlava Sydorchuk, Nataliia Fersman, Tatyana Zemlinskaia
Abstract:
The article presents the results of a creative analysis and comparison of trends in the development of public administration theory from the second half of the 20th century to the beginning of the 21st century. The process of conceptualizing public administration priorities in the dynamics of reform is examined as it unfolded under the influence of factors such as globalization, integration, informational and technological change, and human rights. The priorities of the social state in the concepts of the second half of the 20th century are studied. Specific approaches to determining the priorities of public administration in the countries of "Soviet dictatorship" in Central and Eastern Europe in the same period are outlined. Particular attention is paid to the priorities of public administration regarding the interaction between public power and society and to the development of conceptual foundations for the modern managerial process. It is argued that the dynamics of the formation of concepts of European governance are characterized by a sequence of priorities: from socio-economic and moral-ethical to organizational-procedural and non-hierarchical ones. The priorities of the "welfare state" were focused on a decent level of material well-being for the population. At the same time, the conception of the "minimal state" emphasized individual responsibility for one's own fate under conditions of minimal state protection. Later on, the emphasis was placed on horizontal ties and the redistribution of powers and competences of the "effective state", with its developed procedures and limits of responsibility at all levels of government and in close cooperation with civil society. 
The priorities of the contemporary period are concentrated on human rights in the concept of "good governance" and all the following ones, which recognize the absolute priority of public administration in complying with, providing, and protecting human rights. The article argues that civilizational changes taking place under the influence of informational and technological imperatives also entail changes in priorities, a redistribution of emphases, and updated principles of managerial concepts based on publicity, transparency, a departure from traditional forms of hierarchy and control in favor of interactivity and inter-sectoral interaction, and the decentralization and humanization of managerial processes. The need to carry out reorganization permanently - establishing interaction between different participants of public power and social relations, and striking a balance between political forces and social interests on the basis of mutual trust and understanding - determines changes in the social, political, economic, and humanitarian paradigms of public administration and their theoretical comprehension. Further studies of the theoretical foundations of modern public administration in interdisciplinary discourse, in the context of the ambiguous consequences of the globalization and integration processes of modern European state-building, would be advisable. This is especially true during periods of political transformation and economic crisis, which are characteristic of contemporary Europe, especially of democratic transition countries.
Keywords: concepts of public administration, democratic transition countries, human rights, the priorities of public administration, theory of public administration
Procedia PDF Downloads 175
12 Carbonyl Iron Particles Modified with Pyrrole-Based Polymer and Electric and Magnetic Performance of Their Composites
Authors: Miroslav Mrlik, Marketa Ilcikova, Martin Cvek, Josef Osicka, Michal Sedlacik, Vladimir Pavlinek, Jaroslav Mosnacek
Abstract:
Magnetorheological elastomers (MREs) are a unique type of material consisting of two components: a magnetic filler and an elastomeric matrix. Their properties can be tailored upon application of an external magnetic field. The change in viscoelastic properties (viscoelastic moduli, complex viscosity) is influenced by two crucial factors. The first is the magnetic performance of the particles, and the second is the off-state stiffness of the elastomeric matrix. The former factor strongly depends on the intended application; however, the general rule is that higher magnetic performance of the particles provides a higher MR performance of the MRE. Since magnetic particles possess low stability against elevated temperature and acidic environments, several methods to mitigate these drawbacks have been developed. In most cases, the preparation of core-shell structures was employed as a suitable method for protecting the magnetic particles against thermal and chemical oxidation. However, if the shell material is not a single-layer substance but a polymer, the magnetic performance is significantly suppressed: with the in situ polymerization technique it is very difficult to control the polymerization rate, and the polymer shell becomes too thick. The second factor is the off-state stiffness of the elastomeric matrix. Since the MR effectivity is calculated as the elastic modulus upon magnetic field application divided by the elastic modulus in the absence of the external field, tuneability of the cross-linking reaction is also highly desired. Therefore, this study is focused on the controllable modification of magnetic particles using a novel monomeric system based on 2-(1H-pyrrol-1-yl)ethyl methacrylate. In this case, short polymer chains of different chain lengths and low polydispersity index will be prepared, and thus tailorable stability properties can be achieved. 
Since relatively thin polymer chains will be grafted onto the surface of the magnetic particles, their magnetic performance will be affected only slightly. Furthermore, the cross-linking density will also be affected, due to the presence of the short polymer chains. From the application point of view, such MREs can be utilized in magnetoresistors, piezoresistors, or pressure sensors, especially when a conducting shell is created on the magnetic particles. Therefore, the selection of the pyrrole-based monomer is crucial, and a controllably thin layer of conducting polymer can be prepared. Finally, such composite particles, consisting of a magnetic core and a conducting shell and dispersed in an elastomeric matrix, can also find utilization in electromagnetic wave shielding applications.
Keywords: atom transfer radical polymerization, core-shell, particle modification, electromagnetic waves shielding
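The MR effectivity defined above (the elastic modulus under an applied field divided by the off-state modulus) can be sketched as a one-line calculation; the moduli below are made-up values chosen only to illustrate why a low off-state stiffness favours a large relative effect:

```python
def mr_effect(g_on, g_off):
    """Relative MR effect: elastic (storage) modulus under an applied
    magnetic field divided by the off-state modulus, per the definition
    in the abstract. Values > 1 indicate field-induced stiffening."""
    if g_off <= 0:
        raise ValueError("off-state modulus must be positive")
    return g_on / g_off

# Illustrative (hypothetical) storage moduli in kPa for two matrices:
soft_matrix = mr_effect(g_on=95.0, g_off=40.0)    # low off-state stiffness
stiff_matrix = mr_effect(g_on=110.0, g_off=90.0)  # high off-state stiffness
```

Even though the stiffer matrix reaches a higher absolute field-on modulus here, the softer matrix gives the larger relative effect, which is exactly why tuneability of the cross-linking reaction matters.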
Procedia PDF Downloads 211
11 Residual Plastic Deformation Capacity in Reinforced Concrete Beams Subjected to Drop Weight Impact Test
Authors: Morgan Johansson, Joosef Leppanen, Mathias Flansbjer, Fabio Lozano, Josef Makdesi
Abstract:
Concrete is commonly used for protective structures, and how impact loading affects different types of concrete structures is an important issue. Often, the knowledge gained from static loading is also used in the design of impulse-loaded structures. A large plastic deformation capacity is essential to obtain large energy absorption in an impulse-loaded structure. However, the structural response of an impact-loaded concrete beam may be very different from that of a statically loaded beam. Consequently, the plastic deformation capacity and failure modes of the concrete structure can differ when it is subjected to dynamic loads, and hence it is not certain that observations obtained from static loading are also valid for dynamic loading. The aim of this paper is to investigate the residual plastic deformation capacity in reinforced concrete beams subjected to drop weight impact tests. A test series consisting of 18 simply supported beams (0.1 x 0.1 x 1.18 m, ρs = 0.7%) with a span length of 1.0 m, loaded at the beam mid-point, was carried out. 2 x 6 beams were first subjected to drop weight impact tests and thereafter statically tested until failure. The drop weight had a mass of 10 kg and was dropped from 2.5 m or 5.0 m. During the impact tests, a high-speed camera was used at 5,000 fps, and for the static tests, a camera was used at 0.5 fps. Digital image correlation (DIC) analyses were conducted, and from these the velocities of the beam and the drop weight, as well as the deformations and crack propagation of the beam, were effectively measured. Additionally, for the static tests, the applied load and midspan deformation were measured. The load-deformation relations for the beams subjected to an impact load were compared with those of 6 reference beams that were subjected to static loading only. The crack patterns obtained were compared using DIC, and it was concluded that the resulting crack formation depended strongly on the test method used. 
For the static tests, only bending cracks occurred. For the impact-loaded beams, though, distinctive diagonal shear cracks also formed below the zone of impact, and narrower shear cracks were observed in the region halfway to the support. Furthermore, due to wave propagation effects, bending cracks developed in the upper part of the beam during initial loading. The results showed that the plastic deformation capacity increased for beams subjected to drop weight impact tests from the high drop height of 5.0 m. For beams subjected to an impact from the low drop height of 2.5 m, though, the plastic deformation capacity was of the same order of magnitude as for the statically loaded reference beams. The beams tested were designed to fail in bending when subjected to a static load. However, one of the impact-tested beams exhibited a shear failure at a significantly reduced load level when it was tested statically, indicating that there might be a risk of reduced residual load capacity for impact-loaded structures.
Keywords: digital image correlation (DIC), drop weight impact, experiments, plastic deformation capacity, reinforced concrete
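Two quantities underlying the tests above lend themselves to a short sketch: the nominal impact velocity from the drop height (v = sqrt(2gh), neglecting friction in the guide), and a central-difference velocity estimate of the kind a DIC displacement trace supports. The displacement samples below are illustrative, not measured data from the study:

```python
import math

def impact_velocity(drop_height, g=9.81):
    """Free-fall impact velocity v = sqrt(2 g h), neglecting friction."""
    return math.sqrt(2.0 * g * drop_height)

def velocity_from_dic(displacements, fps):
    """Central-difference velocity from a DIC displacement trace sampled
    at a fixed frame rate (the impact tests above used 5,000 fps)."""
    dt = 1.0 / fps
    return [(displacements[i + 1] - displacements[i - 1]) / (2.0 * dt)
            for i in range(1, len(displacements) - 1)]

v_low = impact_velocity(2.5)   # roughly 7.0 m/s for the 2.5 m drop
v_high = impact_velocity(5.0)  # roughly 9.9 m/s for the 5.0 m drop
```

Doubling the drop height doubles the impact energy (mgh) but raises the impact velocity only by a factor of sqrt(2), which is worth keeping in mind when comparing the 2.5 m and 5.0 m test series.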
Procedia PDF Downloads 149
10 The Role of Time-Dependent Treatment of Exogenous Salicylic Acid on Endogenous Phytohormone Levels under Salinity Stress
Authors: Hülya Torun, Ondřej Novák, Jaromír Mikulík, Miroslav Strnad, Faik A. Ayaz
Abstract:
The world's climate is changing. Millions of people still face chronic undernourishment that prevents a healthy life, and the world's population is growing steadily. To meet this growing demand, agriculture and food systems must adapt to the adverse effects of climate change and become more resilient, productive, and sustainable. From this perspective, identifying cultivars tolerant to adverse environmental conditions will be necessary for sustainable food production. Among abiotic stresses, soil salinity is one of the most detrimental global factors restricting plant resources. The development of salt-tolerant lines is required in order to increase crop productivity and quality in salt-affected lands. Therefore, the objective of this study was to investigate the morphological and physiological responses of barley cultivars to salinity stress imposed by NaCl. For this purpose, we aimed to determine the crosstalk between endogenous phytohormones and exogenous salicylic acid (SA) in two vegetative parts (leaves and roots) of barley (Hordeum vulgare L.; Poaceae; 2n=14; Ince-04), a cultivar identified as salt-tolerant. The effects of SA on growth parameters, leaf relative water content (RWC), and endogenous phytohormones, including indole-3-acetic acid (IAA), cytokinins (CKs), abscisic acid (ABA), jasmonic acid (JA), and ethylene, were investigated under salinity stress. SA was applied to 17-day-old barley seedlings in two different ways: before (pre-treatment for 24 h) and simultaneously with the NaCl stress treatment. NaCl (0, 150, 300 mM) exposure in the hydroponic system was associated with a rapid decrease in growth parameters and in RWC, an indicator of plant water status, and resulted in a strong up-regulation of ABA as a stress indicator. Roots were more dramatically affected than leaves. Water conservation in 150 mM NaCl-treated barley plants did not change but decreased in 300 mM NaCl-treated plants. 
Neither pre-treatment nor simultaneous treatment with SA significantly altered the growth parameters or RWC. ABA, JA, and ethylene are known to be stress-related. In the present work, ethylene also increased, similarly to ABA, but not with the same intensity. While ABA and ethylene increased with increasing salt concentration, JA levels rapidly decreased, especially in roots. Both pre- and simultaneous SA applications alleviated the salt-induced decreases at 300 mM NaCl that had resulted in increased ABA levels. CKs and IAA are related to cell growth and development. At high salinity (300 mM NaCl), CK (cZ+cZR) contents increased in both vegetative organs, while IAA levels remained at the level of the control groups. However, IAA increased and cZ+cZR rapidly decreased in leaves of barley plants treated with SA before the salt applications (pre-SA-treated groups). Simultaneous application of SA decreased CK levels in both leaves and roots of the cultivar. With SA treatments, decreases in ABA, JA, and ethylene content and increases in CKs and IAA were recorded at increasing NaCl concentrations. In summary, in view of all the phytohormones tested, exogenous SA induced greater tolerance to salinity, particularly when applied before the salinity stress.
Keywords: Barley, Hordeum vulgare, phytohormones, salicylic acid, salinity
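RWC, used above as the indicator of plant water status, is conventionally computed from fresh, turgid, and dry leaf masses as RWC = (FW - DW) / (TW - DW) x 100. The sketch below uses hypothetical masses, not the study's measurements:

```python
def relative_water_content(fresh_w, turgid_w, dry_w):
    """Leaf relative water content (%) via the standard formula
    RWC = (FW - DW) / (TW - DW) * 100, where FW is fresh mass,
    TW fully turgid (rehydrated) mass, and DW oven-dry mass."""
    if not (dry_w < fresh_w <= turgid_w):
        raise ValueError("expected DW < FW <= TW")
    return 100.0 * (fresh_w - dry_w) / (turgid_w - dry_w)

# Hypothetical leaf masses in grams:
rwc_control = relative_water_content(0.48, 0.52, 0.10)  # well-watered
rwc_salt = relative_water_content(0.38, 0.52, 0.10)     # 300 mM NaCl
```

A drop in RWC under 300 mM NaCl, as in this hypothetical pair, is the kind of change the abstract reports for the high-salinity treatment.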
Procedia PDF Downloads 229
9 Irradion: Portable Small Animal Imaging and Irradiation Unit
Authors: Josef Uher, Jana Boháčová, Richard Kadeřábek
Abstract:
In this paper, we present a multi-robot imaging and irradiation research platform referred to as Irradion, with full capabilities of portable arbitrary-path computed tomography (CT). Irradion is an imaging and irradiation unit entirely based on robotic arms for research on cancer treatment with ion beams on small animals (mice or rats). The platform comprises two subsystems that combine several imaging modalities, such as 2D X-ray imaging, CT, and particle tracking, with precise positioning of a small animal for imaging and irradiation. Computed Tomography: The CT subsystem of the Irradion platform is equipped with two 6-joint robotic arms that position a photon-counting detector and an X-ray tube independently and freely around the scanned specimen and allow image acquisition by computed tomography. Irradion realises nearly all conventional 2D and 3D X-ray imaging trajectories with precisely calibrated and repeatable geometrical accuracy, leading to a spatial resolution of up to 50 µm. In addition, the photon-counting detectors allow X-ray photon energy discrimination, which can suppress scattered radiation, thus improving image contrast. They can also measure absorption spectra and recognize different material (tissue) types. X-ray video recording and real-time imaging options can be applied for studies of dynamic processes, including in vivo specimens. Moreover, Irradion opens the door to exploring new 2D and 3D X-ray imaging approaches. We demonstrate in this publication various novel scan trajectories and their benefits. Proton Imaging and Particle Tracking: The Irradion platform allows combining several imaging modules with any required number of robots. The proton tracking module comprises another two robots, each holding particle tracking detectors with position-, energy-, and time-sensitive Timepix3 sensors. Timepix3 detectors can track particles entering and exiting the specimen and allow accurate guiding of photon/ion beams for irradiation.
In addition, quantifying the energy losses before and after the specimen brings essential information for precise irradiation planning and verification. Work on the small animal research platform Irradion involved advanced software and hardware development that will offer researchers a novel way to investigate new approaches in (i) radiotherapy, (ii) spectral CT, (iii) arbitrary-path CT, and (iv) particle tracking. The robotic platform for imaging and radiation research developed for the project is an entirely new product on the market. Preclinical research systems combining precision robotic irradiation with photon/ion beams and multimodality high-resolution imaging do not currently exist. The researched technology therefore has the potential to be a significant leap forward compared to current first-generation devices.
Keywords: arbitrary path CT, robotic CT, modular, multi-robot, small animal imaging
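To illustrate the kind of scan geometry the robotic arms reproduce, the sketch below generates source/detector positions for a plain circular CT orbit around the isocentre. This is a hedged illustration only: Irradion's arms support arbitrary paths, and the function and its parameters are hypothetical, not part of the platform's software.

```python
import math

def circular_ct_poses(n_views, radius_source, radius_detector):
    """Source/detector positions (2D, isocentre at origin) for a circular orbit.

    Hypothetical sketch: the X-ray tube and the photon-counting detector are
    kept diametrically opposed while stepping through n_views equal angles.
    """
    poses = []
    for k in range(n_views):
        a = 2.0 * math.pi * k / n_views
        src = (radius_source * math.cos(a), radius_source * math.sin(a))
        det = (-radius_detector * math.cos(a), -radius_detector * math.sin(a))
        poses.append((src, det))
    return poses

# e.g. 4 views with the tube at 100 mm and the detector at 150 mm
orbit = circular_ct_poses(4, 100.0, 150.0)
```

An arbitrary-path scan would simply replace the circular angle schedule with any reachable sequence of arm poses.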
Procedia PDF Downloads 91
8 Detection, Analysis and Determination of the Origin of Copy Number Variants (CNVs) in Intellectual Disability/Developmental Delay (ID/DD) Patients and Autistic Spectrum Disorders (ASD) Patients by Molecular and Cytogenetic Methods
Authors: Pavlina Capkova, Josef Srovnal, Vera Becvarova, Marie Trkova, Zuzana Capkova, Andrea Stefekova, Vaclava Curtisova, Alena Santava, Sarka Vejvalkova, Katerina Adamova, Radek Vodicka
Abstract:
ASDs are heterogeneous and complex developmental diseases with a significant genetic background. Recurrent CNVs are known to be a frequent cause of ASD. These CNVs can, however, have variable expressivity, which results in a spectrum of phenotypes from asymptomatic to ID/DD/ASD. ASD is associated with ID in ~75% of individuals. Various platforms are used to detect pathogenic mutations in the genomes of these patients. The present study focused on determining the frequency of pathogenic mutations in a group of ASD patients and a group of ID/DD patients using various strategies, along with a comparison of their detection rates. The possible role of the origin of these mutations in the aetiology of ASD was assessed. The study included 35 individuals with ASD and 68 individuals with ID/DD (64 males and 39 females in total), who underwent rigorous genetic, neurological and psychological examinations. Screening for pathogenic mutations involved karyotyping, screening for FMR1 mutations and for metabolic disorders, a targeted MLPA test with probe mixes Telomeres 3 and 5, Microdeletion 1 and 2, Autism 1, MRX, and a chromosomal microarray analysis (CMA) (Illumina or Affymetrix). Chromosomal aberrations were revealed by karyotyping in 7 individuals (1 in the ASD group). FMR1 mutations were discovered in 3 individuals (1 in the ASD group). The detection rate of pathogenic mutations in ASD patients with a normal karyotype was 15.15% by both MLPA and CMA. In ID/DD patients with a normal karyotype, the frequencies of pathogenic mutations were 25.0% by MLPA and 35.0% by CMA. CNVs inherited from asymptomatic parents were more abundant than de novo changes in ASD patients (11.43% vs. 5.71%), in contrast to the ID/DD group, where de novo mutations prevailed over inherited ones (26.47% vs. 16.18%). ASD patients shared their mutations with their fathers more frequently than patients from the ID/DD group (8.57% vs. 1.47%).
Maternally inherited mutations predominated in the ID/DD group in comparison with the ASD group (14.7% vs. 2.86%). CNVs of unknown significance were found in 10 patients by CMA and in 3 patients by MLPA. Although the detection rate is highest when using CMA, recurrent CNVs can be easily detected by MLPA. CMA proved to be more efficient in the ID/DD group, where a larger spectrum of rare pathogenic CNVs was revealed. This study determined that maternally inherited highly penetrant mutations and de novo mutations more often resulted in ID/DD without ASD. Paternally inherited mutations could, however, be a source of the greater variability in the genomes of ASD patients and contribute to the polygenic character of the inheritance of ASD. As the number of subjects in the group is limited, a larger cohort is needed to confirm this conclusion. Inherited CNVs have a role in the aetiology of ASD, possibly in combination with additional genetic factors: mutations elsewhere in the genome. The identification of these interactions constitutes a challenge for the future. Supported by MH CZ – DRO (FNOl, 00098892), IGA UP LF_2016_010, TACR TE02000058 and NPU LO1304.
Keywords: autistic spectrum disorders, copy number variant, chromosomal microarray, intellectual disability, karyotyping, MLPA, multiplex ligation-dependent probe amplification
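The inheritance frequencies quoted above can be reproduced from the group sizes (35 ASD, 68 ID/DD patients). The sketch below back-calculates the reported percentages from assumed integer patient counts; the counts themselves are inferred from the percentages, not taken from the paper.

```python
def rate(count, group_size):
    """Percentage of patients in a group carrying a given CNV class."""
    return round(100.0 * count / group_size, 2)

# counts back-calculated from the reported percentages (assumption)
asd_inherited = rate(4, 35)    # 11.43 %: inherited from asymptomatic parents
asd_de_novo = rate(2, 35)      # 5.71 %: de novo changes
idd_de_novo = rate(18, 68)     # 26.47 %: de novo changes
idd_inherited = rate(11, 68)   # 16.18 %: inherited
```

That the reported figures fall out of whole patient counts is a simple consistency check on the abstract's statistics.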
Procedia PDF Downloads 352
7 Approach for the Mathematical Calculation of the Damping Factor of Railway Bridges with Ballasted Track
Authors: Andreas Stollwitzer, Lara Bettinelli, Josef Fink
Abstract:
The expansion of the high-speed rail network over the past decades has resulted in new challenges for engineers, including traffic-induced resonance vibrations of railway bridges. Excessive resonance-induced, speed-dependent accelerations of railway bridges during high-speed traffic can lead to negative consequences such as fatigue symptoms, distortion of the track, destabilisation of the ballast bed, and potentially even derailment. A realistic prognosis of bridge vibrations during high-speed traffic must rely not only on the right choice of an adequate calculation model for both bridge and train but first and foremost on the use of dynamic model parameters that reflect reality appropriately. However, comparisons between measured and calculated bridge vibrations are often characterised by considerable discrepancies, as dynamic calculations overestimate the actual responses and therefore lead to uneconomical results. This gap between measurement and calculation constitutes a complex research issue and can be traced to several causes. One major cause lies in the dynamic properties of the ballasted track, more specifically in the persisting, substantial uncertainties regarding the consideration of the ballasted track (mechanical model and input parameters) in dynamic calculations. Furthermore, the discrepancy is particularly pronounced for the damping values of the bridge, as conservative values have to be used in the calculations due to normative specifications and a lack of knowledge. Using a large-scale test facility, the analysis of the dynamic behaviour of ballasted track has been a major research topic at the Institute of Structural Engineering/Steel Construction at TU Wien in recent years. This highly specialised test facility is designed for isolated research of the ballasted track's dynamic stiffness and damping properties, independent of the bearing structure.
Several mechanical models of the ballasted track, consisting of one or more continuous spring-damper elements, were developed based on the knowledge gained. These mechanical models can subsequently be integrated into bridge models for dynamic calculations. Furthermore, based on measurements at the test facility, model-dependent stiffness and damping parameters were determined for these mechanical models. As a result, realistic mechanical models of the railway bridge with different levels of detail and sufficiently precise characteristic values are available to bridge engineers. Besides that, this contribution also presents another practical application of such a bridge model: based on the bridge model, determination equations for the damping factor (as Lehr's damping factor) can be derived. This approach constitutes a first-time method that makes the damping factor of a railway bridge calculable. A comparison of this mathematical approach with measured dynamic parameters of existing railway bridges illustrates, on the one hand, the apparent deviation between normatively prescribed and in-situ measured damping factors. On the other hand, it shows that the new approach, which makes it possible to calculate the damping factor, provides results that are close to reality and thus offers potential for minimising the discrepancy between measurement and calculation.
Keywords: ballasted track, bridge dynamics, damping, model design, railway bridges
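For context, Lehr's damping factor is commonly identified from a measured free-decay record via the logarithmic decrement. The sketch below shows that textbook identification, not the determination equations derived in the contribution itself; the amplitude values are hypothetical.

```python
import math

def lehr_damping_from_decay(a_0, a_n, n_cycles):
    """Lehr's damping factor (zeta) from the amplitude decay of a free vibration.

    Logarithmic decrement delta = ln(a_0 / a_n) / n, then
    zeta = delta / sqrt(4*pi**2 + delta**2). Textbook identification only.
    """
    delta = math.log(a_0 / a_n) / n_cycles
    return delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)

# hypothetical decay: amplitude halves over 10 cycles -> zeta of about 1.1 %
zeta = lehr_damping_from_decay(1.0, 0.5, 10)
```

For the lightly damped bridges discussed here, zeta is small, so the exact expression is close to the familiar approximation delta / (2*pi).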
Procedia PDF Downloads 164
6 Evaluation of Alternative Approaches for Additional Damping in Dynamic Calculations of Railway Bridges under High-Speed Traffic
Authors: Lara Bettinelli, Bernhard Glatz, Josef Fink
Abstract:
Planning engineers and researchers use various calculation models with different levels of complexity, calculation efficiency and accuracy in dynamic calculations of railway bridges under high-speed traffic. When choosing a vehicle model to depict the dynamic loading on the bridge structure caused by passing high-speed trains, different goals are pursued: on the one hand, the selected vehicle models should allow the calculation of a bridge's vibrations as realistically as possible. On the other hand, the computational efficiency and manageability of the models should preferably be high to enable a wide range of applications. The commonly adopted and straightforward vehicle model is the moving load model (MLM), which simplifies the train to a sequence of static axle loads moving at a constant speed over the structure. However, the MLM can significantly overestimate the structure's vibrations, especially when resonance events occur. More complex vehicle models, which depict the train as a system of oscillating and coupled masses, can reproduce the interaction dynamics between the vehicle and the bridge superstructure to some extent and enable the calculation of more realistic bridge accelerations. At the same time, such multi-body models require significantly greater processing capacities and precise knowledge of various vehicle properties. The European standards allow applying the so-called additional damping method when simple load models, such as the MLM, are used in dynamic calculations. An additional damping factor depending on the bridge span, which is intended to account for the vibration-reducing benefits of the vehicle-bridge interaction, is assigned to the supporting structure in the calculations.
However, numerous studies show that when the current standard specifications are applied, the calculated bridge accelerations are in many cases still too high compared to the measured ones, while in other cases they are not on the safe side. To address this issue, a proposal was developed to calculate the additional damping based on extensive dynamic calculations for a parametric field of simply supported bridges with ballasted track. In this contribution, several different approaches to determining the additional damping of the supporting structure, considering the vehicle-bridge interaction when using the MLM, are compared with one another. Besides the standard specifications, this includes the proposal mentioned above and two recently published alternative formulations derived from analytical approaches. For a catalogue of 65 existing bridges in Austria in steel, concrete or composite construction, calculations are carried out with the MLM for two different high-speed trains and the different approaches for additional damping. The results are compared with the calculation results obtained by applying a more sophisticated multi-body model of the trains used. The evaluation and comparison of the results allow assessing the benefits of the different calculation concepts for additional damping regarding their accuracy and possible applications. The evaluation shows that by applying one of the recently published redesigned additional damping methods, the calculation results reflect the influence of the vehicle-bridge interaction on the design-relevant structural accelerations considerably more reliably than the normative specifications do.
Keywords: Additional Damping Method, Bridge Dynamics, High-Speed Railway Traffic, Vehicle-Bridge-Interaction
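For reference, the span-dependent additional damping commonly quoted from EN 1991-2 for the additional damping method has the closed form sketched below. The coefficients are reproduced from memory of the code text and should be treated as an assumption to verify against the current standard; the additional damping is taken as zero for spans of 30 m or more.

```python
def additional_damping_percent(span_m):
    """Additional damping (in %) vs. span, per the formula commonly quoted
    from EN 1991-2 (assumption: verify coefficients against the code text)."""
    L = span_m
    if L >= 30.0:
        return 0.0  # no additional damping credited for longer spans
    num = 0.0187 * L - 0.00064 * L ** 2
    den = 1.0 - 0.0441 * L - 0.0044 * L ** 2 + 0.000255 * L ** 3
    return num / den

# a short simply supported span receives a modest damping bonus
bonus = additional_damping_percent(15.0)
```

The study's point is precisely that this single span-dependent curve can be either too generous or too conservative compared with measured vehicle-bridge interaction effects.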
Procedia PDF Downloads 161
5 Evaluation of Forensic Pathology Practice Outside Germany – Experiences From 20 Years of Second Look Autopsies in Cooperation with the Institute of Legal Medicine Munich
Authors: Michael Josef Schwerer, Oliver Peschel
Abstract:
Background: The sense and purpose of forensic postmortem examinations are undoubtedly the same in Institutes of Legal Medicine all over the world. The cause and manner of death must be determined, persons responsible for unnatural deaths must be brought to justice, and accidents demand changes to the respective scenarios to avoid future mishaps. The latter particularly concerns aircraft accidents, not only regarding consequences under criminal or civil law but also in pursuance of the International Civil Aviation Organization's regulations, which demand that lessons from mishap investigations improve flight safety. Irrespective of the distinct circumstances of a given casualty or the respective questions in subsequent death investigations, a forensic autopsy is the basis for all further casework, the clue to otherwise hidden solutions, and the crucial limitation for final success when not all possible findings have been properly collected. This also implies that the targeted work of police forces and expert witnesses strongly depends on the quality of forensic pathology practice. Deadly events in foreign countries, which lead to investigations not only abroad but also in Germany, can be challenging in this context. Frequently, second-look autopsies after the repatriation of the deceased to Germany are requested by the legal authorities to ensure proper and profound documentation of all relevant findings. Aims and Methods: To validate forensic postmortem practice abroad, a retrospective study was carried out using the findings of the corresponding second-look autopsies performed at the Institute of Legal Medicine Munich over the last 20 years. New findings unreported in the previous autopsy were recorded and judged for their relevance to solving the respective case. Further, the condition of the corpse at the time of the second autopsy was rated to discuss artifacts mimicking evidence or the possibility of findings lost to, e.g., decomposition.
Recommendations for the future handling of death cases abroad and efficient autopsy practice were pursued. Results and Discussion: Our re-evaluation confirmed a high quality of autopsy practice abroad in the vast majority of cases. However, in some casework, incomplete documentation of pathology findings was revealed, along with either insufficient or misconducted dissection of organs. Further, some of the bodies were missing parts of some organs, most probably as a result of sampling for histology studies during the first postmortem. For the aeromedical evaluation of a decedent's health status prior to an aviation mishap, lost or obscured findings in the heart, lungs, and brain in particular impeded expert testimony. Moreover, incomplete fixation of the body or body parts for repatriation was seen in several cases. This particularly involved previously dissected organs deposited back into the body cavities at the end of the first autopsy. Conclusions and Recommendations: Detailed preparation in the first forensic autopsy avoids the necessity of a second-look postmortem in the majority of cases. To limit decomposition changes during repatriation from abroad, special care must be taken to include pre-dissected organs in the chemical fixation process, particularly when they are separated from the blood vessels and simply deposited back into the body cavities.
Keywords: autopsy practice, second-look autopsy, retrospective study, quality standards, decomposition changes, repatriation
Procedia PDF Downloads 51
4 Antibiotic Prophylaxis Habits in Oral Implant Surgery in the Netherlands: A Cross-Sectional Survey
Authors: Fabio Rodriguez Sanchez, Josef Bruers, Iciar Arteagoitia, Carlos Rodriguez Andres
Abstract:
Background: Oral implants are a routine treatment to replace lost teeth. Although they have a high rate of success, implant failures do occur. Perioperative antibiotics have been suggested to prevent postoperative infections and dental implant failures, but they remain a controversial treatment in healthy patients. The objective of this study was to determine whether antibiotic prophylaxis in conjunction with oral implant surgery among healthy patients is a common treatment in the Netherlands among general dentists, maxillofacial surgeons, periodontists and implantologists, and to assess the nature of the antibiotic prescriptions in order to evaluate whether any consensus has been reached and whether the current recommendations are being followed. Methodology: An observational cross-sectional study based on a web survey, reported according to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines. A validated questionnaire, developed by Deeb et al. (2015), was translated and slightly adjusted to the circumstances in the Netherlands; it was used with the explicit permission of the authors. The questionnaire contained both close-ended and some open-ended questions on the following topics: demographics, qualification, antibiotic type, prescription duration and dosage. In February 2018, an email was sent to a sample of 600 general dentists and to all 302 oral implantologists, periodontists and maxillofacial surgeons recognized by the Dutch Association of Oral Implantology (NVOI) as oral health care providers placing oral implants. The email included a brief introduction to the study objectives and a link to the web questionnaire, which could be filled in anonymously. Overall, 902 questionnaires were sent. However, 29 questionnaires were not correctly received due to incorrect email addresses, so a total of 873 professionals were reached. Collected data were analyzed using SPSS (IBM Corp., released 2012, Armonk, NY).
Results: The questionnaire was sent back by a total of 218 participants (response rate = 24.2%): 45 female (20.8%) and 171 male (79.2%). Two respondents were excluded from the study group because they were not currently working as oral health providers. Overall, 151 (69.9%) placed oral implants on a regular basis. Of these participants, 79 (52.7%) prescribed antibiotics only in certain situations, 66 (44.0%) always prescribed antibiotics, and 5 dentists (3.3%) did not prescribe antibiotics at all when placing oral implants. Of the participants who prescribed antibiotics, 83 did so both pre- and postoperatively (58.5%), 12 exclusively postoperatively (8.5%), and 47 followed an exclusively preoperative regimen (33.1%). A single dose of 2,000 mg amoxicillin orally 1 hour before treatment was the most prescribed preoperative regimen. The most frequently prescribed postoperative regimen was 500 mg amoxicillin three times daily for 7 days after surgery. On average, oral health professionals prescribed 6,923 mg of antibiotics in conjunction with oral implant surgery, varying from 500 to 14,600 mg. Conclusions: Antibiotic prophylaxis in conjunction with oral implant surgery is prescribed in the Netherlands on a rather large scale. Dutch professionals may prescribe antibiotics more cautiously than those in other countries, and there seems to be a narrower range of antibiotic types and regimens being prescribed. Nevertheless, recommendations based on the latest published evidence are frequently not being followed.
Keywords: clinical decision making, infection control, antibiotic prophylaxis, dental implants
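The dosage figures above can be sanity-checked by totalling the two most frequently reported regimens. The sketch below is illustrative arithmetic only; the function name and structure are hypothetical, not part of the study's analysis.

```python
def total_dose_mg(dose_mg, times_per_day, days):
    """Total antibiotic load of a regimen in milligrams."""
    return dose_mg * times_per_day * days

# the two regimens reported most frequently in the survey
preoperative = total_dose_mg(2000, 1, 1)   # single 2,000 mg amoxicillin, 1 h before surgery
postoperative = total_dose_mg(500, 3, 7)   # 500 mg amoxicillin three times daily for 7 days
combined = preoperative + postoperative    # both regimens together
```

The combined pre- plus postoperative load (12,500 mg) falls within the reported 500 to 14,600 mg range, consistent with the stated average of 6,923 mg across all regimens.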
Procedia PDF Downloads 141