Search results for: Matthias Goerke
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 66

36 Disrupted or Discounted Cash Flow: Impact of Digitisation on Business Valuation

Authors: Matthias Haerri, Tobias Huettche, Clemens Kustner

Abstract:

This article discusses the impact of digitisation on business valuation. In order to become and remain ‘digital’, investments are necessary whose return on investment (ROI) often remains vague. This uncertainty is problematic for valuation approaches that rely on predictable cash flows, fixed capital structures and a steady state. Digitisation does not make company valuation impossible, but traditional approaches must be reconsidered. The authors identify four areas that are changing: (1) Tools instead of intuition - In the future, company valuation will be neither art nor science, but craft. This requires not intuition, but experience and good tools. Digital valuation tools beyond Excel will therefore gain in importance. (2) Real-time instead of deadline - At present, company valuations are always carried out on a case-by-case basis and for a specific key date. This will change with digitisation and the introduction of web-based valuation tools. Company valuations can thus not only be carried out faster and more efficiently, but can also be offered more frequently. Instead of calculating the value for a past key date, current, real-time valuations can be carried out. (3) Predictive planning instead of analysis of the past - Past data will still be needed in the future, but its use will not be limited to monovalent time series or key-figure analyses. Images such as the ‘black swan’ and the ‘turkey illusion’ have made clear that we build forecasts on too few data points from the past and underestimate the power of chance. Predictive planning can help here. (4) Convergence instead of residual value - Digital transformation shortens the lifespan of viable business models. If companies want to live forever, they have to change forever. For company valuation, this means that the business model valid on the valuation date has only a limited service life.
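
As a rough illustration of point (4), the following sketch (with invented figures, not taken from the article) contrasts a classic DCF with a perpetual terminal value against a valuation that assumes the business model has a finite lifespan and no residual value:

```python
def dcf_terminal(cashflow, growth, wacc, horizon):
    """Classic two-stage DCF with a perpetual (Gordon) terminal value."""
    pv = sum(cashflow * (1 + growth) ** t / (1 + wacc) ** t
             for t in range(1, horizon + 1))
    terminal = cashflow * (1 + growth) ** horizon * (1 + growth) / (wacc - growth)
    return pv + terminal / (1 + wacc) ** horizon

def dcf_finite(cashflow, growth, wacc, lifespan):
    """DCF for a business model with a limited service life and no residual."""
    return sum(cashflow * (1 + growth) ** t / (1 + wacc) ** t
               for t in range(1, lifespan + 1))

cf, g, wacc = 100.0, 0.01, 0.08  # illustrative cash flow, growth, discount rate
print(f"terminal-value DCF (5y + perpetuity): {dcf_terminal(cf, g, wacc, 5):.0f}")
print(f"finite-lifespan DCF (10y, no residual): {dcf_finite(cf, g, wacc, 10):.0f}")
```

With these invented numbers, dropping the perpetual residual value roughly halves the computed enterprise value, which is the practical weight of the ‘convergence instead of residual value’ argument.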

Keywords: business valuation, corporate finance, digitisation, disruption

Procedia PDF Downloads 98
35 Hydraulic Optimization of an Adjustable Spiral-Shaped Evaporator

Authors: Matthias Feiner, Francisco Javier Fernández García, Michael Arneman, Martin Kipfmüller

Abstract:

To ensure reliability in miniaturized devices or processes with increased heat fluxes, very efficient cooling methods have to be employed to cope with the small cooling surfaces available. To address this problem, a particular type of evaporator/heat exchanger was developed: it is called a swirl evaporator due to its flow characteristic. The swirl evaporator consists of a concentrically eroded screw geometry, with an inner diameter between one and three millimeters, in which a capillary tube is guided; the assembly is inserted into a pocket hole in components with a high heat load. The liquid refrigerant R32 is sprayed through the capillary tube, which is aligned in the center of the bore, onto the end face of the pocket hole and is sucked off against the injection direction through the screw geometry. Because the refrigerant is extracted along a helical path (twisted flow), it is accelerated against the hot wall (centrifugal acceleration). This results in an increase in the critical heat flux of up to 40%, so more heat can be dissipated on the same surface/available installation space. This enables a wide range of technical applications. To optimize the design for the needs of various fields of industry, such as internal tool cooling when machining nickel-based alloys like Inconel 718, a correlation-based model of the swirl evaporator was developed. The model is separated into three subgroups covering five flow regimes in total. Pressure drop and heat transfer are calculated separately, and an approach to determine the location of phase change in the capillary and the swirl was implemented. A test stand has been developed to verify the simulation.
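
A minimal sketch of what one segment of such a regime-based correlation model could look like is shown below. The Blasius and Dittus-Boelter correlations serve only as generic single-phase placeholders for the authors' actual regime correlations, and all property values are rough, illustrative figures:

```python
import math

def blasius_friction(re):
    """Blasius correlation for the turbulent friction factor (placeholder)."""
    return 0.316 * re ** -0.25

def dittus_boelter(re, pr):
    """Dittus-Boelter Nusselt correlation for heating (placeholder)."""
    return 0.023 * re ** 0.8 * pr ** 0.4

def swirl_segment(m_dot, d, length, rho, mu, cp, k):
    """One flow-regime segment: pressure drop and heat transfer are
    evaluated separately, as in the correlation-based model."""
    u = m_dot / (rho * math.pi * d ** 2 / 4)   # mean velocity (m/s)
    re = rho * u * d / mu                      # Reynolds number
    pr = mu * cp / k                           # Prandtl number
    dp = blasius_friction(re) * (length / d) * rho * u ** 2 / 2  # Pa
    h = dittus_boelter(re, pr) * k / d                           # W/(m^2 K)
    return dp, h

# Rough liquid-R32-like property values, for illustration only
dp, h = swirl_segment(m_dot=2e-3, d=2e-3, length=0.05,
                      rho=1000.0, mu=1.2e-4, cp=1800.0, k=0.13)
print(f"pressure drop = {dp:.0f} Pa, heat transfer coeff. = {h:.0f} W/m2K")
```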

Keywords: helically-shaped, oil-free, R-32, swirl-evaporator, twist-flow

Procedia PDF Downloads 81
34 Localized Detection of ᴅ-Serine by Using an Enzymatic Amperometric Biosensor and Scanning Electrochemical Microscopy

Authors: David Polcari, Samuel C. Perry, Loredano Pollegioni, Matthias Geissler, Janine Mauzeroll

Abstract:

ᴅ-serine acts as an endogenous co-agonist for N-methyl-ᴅ-aspartate receptors in neuronal synapses. This makes it a key component in the development and function of a healthy brain, especially given its role in several neurodegenerative diseases such as Alzheimer’s disease and dementia. Despite such clear research motivations, the primary site and mechanism of ᴅ-serine release are still unclear. For this reason, we are developing a biosensor for the detection of ᴅ-serine utilizing a microelectrode in combination with a ᴅ-amino acid oxidase enzyme, which produces stoichiometric quantities of hydrogen peroxide in response to ᴅ-serine. For the fabrication of a biosensor with good selectivity, we use a permselective poly(meta-phenylenediamine) film to ensure that only the target molecule reacts, according to the size-exclusion principle. In this work, we investigated the effect of the electrodeposition conditions on the biosensor’s response time and selectivity. Careful optimization of the fabrication process enhanced the biosensor response time. This allowed for real-time sensing of ᴅ-serine in bulk solution and also provided a means to map the efflux of ᴅ-serine in real time. This was done using scanning electrochemical microscopy (SECM) with the optimized biosensor to measure localized release of ᴅ-serine from an agar-filled glass capillary sealed in an epoxy puck, which acted as a model system. The SECM area scan simultaneously provided information on the rate of ᴅ-serine flux from the model substrate and on the size of the substrate itself. This SECM methodology, which provides high spatial and temporal resolution, could be useful to investigate the primary site and mechanism of ᴅ-serine release in other biological samples.

Keywords: ᴅ-serine, enzymatic biosensor, microelectrode, scanning electrochemical microscopy

Procedia PDF Downloads 203
33 Systematic Analysis of Immune Response to Biomaterial Surface Characteristics

Authors: Florian Billing, Soren Segan, Meike Jakobi, Elsa Arefaine, Aliki Jerch, Xin Xiong, Matthias Becker, Thomas Joos, Burkhard Schlosshauer, Ulrich Rothbauer, Nicole Schneiderhan-Marra, Hanna Hartmann, Christopher Shipp

Abstract:

The immune response plays a major role in implant biocompatibility, but an understanding of how to design biomaterials for specific immune responses is yet to be achieved. We aimed to better understand how changing certain material properties can drive immune responses. To this end, we tested the immune response to experimental implant coatings that vary in specific characteristics. A layer-by-layer approach was employed to vary surface charge and wettability. Human-based in vitro models (THP-1 macrophages and primary peripheral blood mononuclear cells (PBMCs)) were used to assess immune responses using multiplex cytokine analysis, flow cytometry (CD molecule expression) and microscopy (cell morphology). We observed dramatic differences in immune response due to specific alterations in coating properties. For example, altering the surface charge of coating A from anionic to cationic resulted in substantial elevation of the pro-inflammatory molecules IL-1beta, IL-6, TNF-alpha and MIP-1beta, while the pro-wound-healing factor VEGF was significantly down-regulated. We also observed changes in cell surface marker expression in relation to altered coating properties, such as CD16 on NK cells and HLA-DR on monocytes. We furthermore observed changes in the morphology of THP-1 macrophages following cultivation on different coatings. Work to correlate these morphological changes with the cytokine expression profiles is ongoing. Targeted changes in biomaterial properties can thus produce vast differences in immune response. The properties of the coatings examined here may, therefore, offer a means to direct specific biological responses in order to improve implant biocompatibility.

Keywords: biomaterials, coatings, immune system, implants

Procedia PDF Downloads 151
32 Chiral Amine Synthesis and Recovery by Using High Molecular Weight Amine Donors

Authors: Claudia Matassa, Matthias Hohne, Dominic Ormerod, Yamini Satyawali

Abstract:

Chiral amines form the backbone of several active pharmaceutical ingredients (APIs) used in modern medicine for the treatment of a vast range of diseases. Despite the demand, their synthesis remains challenging. Besides a range of chemical and enzymatic methods, chiral amine synthesis using transaminases (EC 2.6.1.W) represents a useful alternative route to this important class of compounds. Even though transaminases exhibit excellent stereo- and regioselectivity and the potential for high yield, the reaction suffers from a number of challenges, including an unfavourable thermodynamic equilibrium, product inhibition, and low substrate solubility. In this work, we demonstrate a membrane-assisted strategy for addressing these challenges. It involves the use of high molecular weight (HMW) amine donors for the transaminase-catalyzed synthesis of 4-phenyl-2-butylamine in both aqueous and organic solvent media. In contrast to common amine donors such as alanine or isopropylamine, these large molecules, provided in excess to shift the thermodynamic equilibrium, are easily retained by commercial nanofiltration membranes; thus, selective permeation of the desired smaller product amine is possible. The enzymatic transamination in aqueous media, combined with selective product removal, shifted the equilibrium and enhanced substrate conversion by an additional 25% compared to the control reaction. However, along with very efficient amine product removal, there was an undesirable loss of ketone substrate, and only a low product concentration was achieved. The system was therefore further improved by performing the reaction in an organic solvent (n-heptane). Coupling this reaction system with membrane-assisted product removal resulted in a highly concentrated and relatively pure ( > 97%) product solution. Moreover, a product yield of 60% was reached, compared to 15% without product removal.

Keywords: amine donor, chiral amines, in situ product removal, transamination

Procedia PDF Downloads 115
31 Natural and Construction/Demolition Waste Aggregates: A Comparative Study

Authors: Debora C. Mendes, Matthias Eckert, Claudia S. Moço, Helio Martins, Jean-Pierre Gonçalves, Miguel Oliveira, Jose P. Da Silva

Abstract:

Disposal of construction and demolition waste (C&DW) in embankments in the periphery of cities causes both environmental and social problems. To manage C&DW effectively, a detailed analysis of the properties of these materials is needed. In this work, we report a comparative study of the physical, chemical and environmental properties of natural and C&DW aggregates from 25 different origins. Assays were performed according to European Standards. Analyses of heavy metals and organic compounds, namely polycyclic aromatic hydrocarbons (PAHs) and polychlorinated biphenyls (PCBs), were performed. Finally, properties of concrete prepared with C&DW aggregates are reported. Physical analyses of C&DW aggregates indicated lower-quality properties than natural aggregates, particularly for concrete preparation and unbound layers of road pavements. Chemical properties showed that most samples (80%) meet the values required by European regulations for concrete and unbound layers of road pavements. Analyses of the heavy metals Cd, Cr, Cu, Pb, Ni, Mo and Zn in the C&DW leachates showed levels below the limits established by the Council Decision of 19 December 2002. Identification and quantification of PCBs and PAHs indicated that only a few samples contain these compounds, and the measured levels are also below the limits. Other compounds identified in the C&DW leachates include phthalates and diphenylmethanol. The characterized C&DW aggregates thus show lower-quality properties than natural aggregates, but most samples proved to be environmentally safe. Continuous monitoring of heavy metals and organic compounds should be carried out to ensure the supply of safe C&DW aggregates, which provide a good economic and environmental alternative to natural aggregates.

Keywords: concrete preparation, construction and demolition waste, heavy metals, organic pollutants

Procedia PDF Downloads 327
30 Coupling Static Multiple Light Scattering Technique With the Hansen Approach to Optimize Dispersibility and Stability of Particle Dispersions

Authors: Guillaume Lemahieu, Matthias Sentis, Giovanni Brambilla, Gérard Meunier

Abstract:

Static Multiple Light Scattering (SMLS) has been shown to be a straightforward technique for the characterization of colloidal dispersions without dilution, as multiply scattered light in backscattered and transmitted modes is directly related to the concentration and size of the scatterers present in the sample. Accordingly, the use of SMLS for stability measurement of various dispersion types has already been widely described in the literature: starting from a homogeneous dispersion, the variation of backscattered or transmitted light can be attributed to destabilization phenomena such as migration (sedimentation, creaming) or particle size variation (flocculation, aggregation). To further investigate the dispersibility of colloidal suspensions, an experimental set-up for ‘at-line’ SMLS experiments has been developed to understand the impact of formulation parameters on particle size and dispersibility. The SMLS experiment is performed at a high acquisition rate (up to 10 measurements per second), without dilution, and under direct agitation. Using this experimental device, SMLS detection can be combined with the Hansen approach to optimize the dispersing and stabilizing properties of TiO₂ particles. It appears that the dispersibility and stability spheres generated are clearly separated, indicating that lower stability is not necessarily a consequence of poor dispersibility. Beyond this clarification, the combined SMLS-Hansen approach is a major step toward optimizing the dispersibility and stability of colloidal formulations by finding solvents offering the best compromise between dispersing and stabilizing properties. Such studies can help to find better dispersion media and greener, cheaper solvents to optimize particle suspensions, reduce the content of costly stabilizing additives, or satisfy evolving regulatory requirements in the various industrial fields that use suspensions (paints & inks, coatings, cosmetics, energy).
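
For reference, the Hansen distance underlying such dispersibility and stability spheres can be computed as in the sketch below. The particle parameters and interaction radius are invented for illustration, while the ethanol values are standard literature figures:

```python
import math

def hansen_distance(s1, s2):
    """Hansen distance Ra between two sets of HSP (dD, dP, dH) in MPa^0.5:
    Ra^2 = 4*(dD1-dD2)^2 + (dP1-dP2)^2 + (dH1-dH2)^2."""
    dD, dP, dH = (s1[i] - s2[i] for i in range(3))
    return math.sqrt(4 * dD ** 2 + dP ** 2 + dH ** 2)

particle = (17.0, 8.0, 9.0)   # assumed HSP of a treated TiO2 surface
solvent = (15.8, 8.8, 19.4)   # ethanol (literature HSP values)
R0 = 7.0                      # assumed interaction-sphere radius

Ra = hansen_distance(solvent, particle)
RED = Ra / R0                 # RED < 1: inside the sphere, RED > 1: outside
print(f"Ra = {Ra:.2f} MPa^0.5, RED = {RED:.2f}")
```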

Keywords: dispersibility, stability, Hansen parameters, particles, solvents

Procedia PDF Downloads 70
29 RPM-Synchronous Non-Circular Grinding: An Approach to Enhance Efficiency in Grinding of Non-Circular Workpieces

Authors: Matthias Steffan, Franz Haas

Abstract:

Grinding is one of the last steps in a value-added manufacturing chain. Within this step, workpiece geometry and surface roughness are determined. Up to this process stage, considerable costs and energy have already been spent on the components. According to the current state of the art, large safety reserves are therefore calculated in order to guarantee process capability. Especially for non-circular grinding, this leads to considerable losses of process efficiency. With present technology, the various non-circular geometries on a workpiece must be ground sequentially in an oscillating process in which the X- and Q-axes of the machine are coupled. With the approach of RPM-Synchronous Non-Circular Grinding, such workpieces can be machined in an ordinary plunge grinding process, with the rotational speeds of the workpiece and the grinding wheel held in a fixed ratio. A non-circular grinding wheel is used to transfer its geometry onto the workpiece. The authors use a machine tool, unique worldwide, that was designed especially for this technology. Very high rotational speeds on the workpiece spindle (up to 4500 rpm) are mandatory for the success of this grinding process. The process is performed in two steps. For roughing, a highly porous vitrified-bonded grinding wheel with a medium grain size is used. It ensures high specific material removal rates for efficiently producing the non-circular geometry on the workpiece. This process step is governed by a force control algorithm, which uses data acquired from a three-component force sensor located in the dead centre of the tailstock. For finishing, a grinding wheel with a fine grain size is used. Roughing and finishing are performed consecutively in the same clamping of the workpiece with two locally separated grinding spindles. The approach of RPM-Synchronous Non-Circular Grinding shows great efficiency enhancement in non-circular grinding. For the first time, three-dimensional non-circular shapes can be ground, which opens up various fields of application. The automotive industry in particular shows great interest in this emerging trend in finishing machining.

Keywords: efficiency enhancement, finishing machining, non-circular grinding, rpm-synchronous grinding

Procedia PDF Downloads 256
28 Influence of Ammonia Emissions on Aerosol Formation in Northern and Central Europe

Authors: A. Aulinger, A. M. Backes, J. Bieser, V. Matthias, M. Quante

Abstract:

High concentrations of particles pose a threat to human health. Thus, legal maximum concentrations of PM10 and PM2.5 in ambient air have been steadily decreased over the years. In central Europe, the inorganic species ammonium sulphate and ammonium nitrate make up a large fraction of fine particles. Many studies investigate the influence of emission reductions of sulphur and nitrogen oxides on aerosol concentrations. Here, we focus on the influence of ammonia (NH3) emissions. While emissions of sulphur and nitrogen oxides are quite well known, ammonia emissions are subject to high uncertainty. This is due to uncertainty about the location, amount and timing of fertilizer application in agriculture, and about the storage and treatment of manure from animal husbandry. For this study, we implemented a crop growth model into the SMOKE emission model. Depending on temperature, local legislation and crop type, individual temporal profiles for fertilizer and manure application are calculated for each model grid cell. Additionally, the diffusion from soils and plants and the direct release from open and closed barns are determined. The emission data were used as input for the Community Multiscale Air Quality (CMAQ) model. Comparisons with observations from the EMEP measurement network indicate that the new ammonia emission module leads to better agreement between model and observation (for both ammonia and ammonium). Finally, the ammonia emission model was used to create emission scenarios, including emissions based on future European legislation as well as a dynamic evaluation of the influence of different agricultural sectors on particle formation. It was found that a 50% reduction of ammonia emissions leads to a 24% reduction of total PM2.5 concentrations during wintertime in the model domain, driven mainly by reduced formation of ammonium nitrate. Moreover, emission reductions during winter had a larger impact than during the rest of the year.
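
The following sketch illustrates the idea of temperature- and legislation-dependent temporal profiles for a single grid cell. The temperature threshold and the closed-season window are invented assumptions for illustration, not the actual SMOKE implementation:

```python
import numpy as np

def daily_nh3_profile(annual_total, daily_mean_temp, allowed):
    """Distribute an annual NH3 total (kg) over 365 days, weighting by
    temperature above an assumed 5 degC application threshold and masking
    days on which spreading is legally prohibited."""
    weights = np.where(allowed & (daily_mean_temp > 5.0),
                       np.maximum(daily_mean_temp, 0.0), 0.0)
    if weights.sum() == 0:
        return np.zeros_like(weights)
    return annual_total * weights / weights.sum()

rng = np.random.default_rng(0)
temps = 10 + 10 * np.sin(np.linspace(0, 2 * np.pi, 365)) + rng.normal(0, 2, 365)
allowed = np.ones(365, dtype=bool)
allowed[305:365] = False  # e.g. an assumed closed season in late autumn

profile = daily_nh3_profile(1000.0, temps, allowed)
print(f"peak-day emission: {profile.max():.1f} kg")
```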

Keywords: ammonia, ammonia abatement strategies, ctm, seasonal impact, secondary aerosol formation

Procedia PDF Downloads 320
27 Determinants of Carbon-Certified Small-Scale Agroforestry Adoption in the Rural Mount Kenya Region

Authors: Emmanuel Benjamin, Matthias Blum

Abstract:

Purpose – We address smallholder farmers’ restricted possibilities to adopt sustainable technologies that have direct and indirect benefits. Smallholders often have little asset endowment due to small farm sizes and insecure property rights, and therefore experience constraints in adopting agricultural innovations. A program involving payments for ecosystem services (PES) benefits poor smallholder farmers in developing countries in many ways and has been suggested as a means of easing their financial constraints. PES may also provide an additional mainstay, which can eventually result in more favorable credit contract terms due to the availability of a collateral substitute. The results of this study may help in understanding the barriers, motives and incentives for smallholders’ participation in PES and in designing a strategy to foster participation in beneficial programs. Design/methodology/approach – This paper uses a random utility model and a logistic regression approach to investigate the factors that influence agroforestry adoption. We investigate non-monetary factors, such as information spillover, that influence the decision to adopt such conservation strategies. We collected original data from non-government-run agroforestry mitigation programs with PES that have been implemented in the Mount Kenya region. Preliminary findings – We find that the spread of information, existing networks and peer involvement in such programs drive participation. Conversely, participation by smallholders does not seem to be influenced by education, land or asset endowment. Contrary to some of the existing literature, we found only weak evidence for a positive correlation between the adoption of agroforestry with PES and the age of the smallholder (i.e., one increasing with the other) in the Mount Kenya region. Research implications – Poverty alleviation policies for developing countries should target social capital to increase the adoption rate of modern technologies amongst smallholders.
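
A minimal sketch of such a logistic adoption model is given below, using synthetic data and hypothetical covariates (peer involvement, information, age, farm size) in place of the actual survey variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data: effect sizes and variable names are hypothetical.
rng = np.random.default_rng(42)
n = 300
X = np.column_stack([
    rng.integers(0, 2, n).astype(float),  # peer involved in a PES program
    rng.integers(0, 2, n).astype(float),  # received program information
    rng.normal(45, 12, n),                # age of household head
    rng.normal(1.5, 0.7, n),              # farm size (ha)
])
# Adoption driven mainly by information and peer effects, per the findings
logit = -1.5 + 1.2 * X[:, 0] + 1.0 * X[:, 1] + 0.01 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
for name, beta in zip(["peer", "info", "age", "farm_size"], model.coef_[0]):
    print(f"{name}: odds ratio = {np.exp(beta):.2f}")
```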

Keywords: agriculture innovation, agroforestry adoption, smallholders, payment for ecosystem services, Sub-Saharan Africa

Procedia PDF Downloads 343
26 Determination of Influence Lines for Train Crossings on a Tied Arch Bridge to Optimize the Construction of the Hangers

Authors: Martin Mensinger, Marjolaine Pfaffinger, Matthias Haslbeck

Abstract:

The maintenance and expansion of the railway network represent a central task for future transport planning. In addition to the ultimate limit states, aspects of resource conservation and sustainability increasingly need to be included in the basic engineering. Therefore, as part of the AiF research project ‘Integrated assessment of steel and composite railway bridges in accordance with sustainability criteria’, the entire lifecycle of engineering structures is considered in planning and evaluation, offering a way to optimize the design of steel bridges. In order to reduce life cycle costs and increase the profitability of steel structures, it is particularly necessary to consider the fatigue demands on the hanger connections. To obtain an accurate analysis, a number of simulations were conducted as part of the research project on a finite element model of a reference bridge, which gives an indication of the internal forces in the individual structural components of a tied arch bridge, depending on the loads imposed by various types of trains. The calculations were carried out on a detailed FE model, which allows extraordinarily accurate modeling of the stiffness of all parts of the construction, as it is made up of surface elements. The results point, on the one hand, to a large impact of detailing on fatigue-relevant stress variations and, on the other hand, reveal construction-specific characteristics over the course of loading. Comparative calculations with varied axle-load distributions also provide information about the sensitivity of the results to the assumed loading and axle configuration. The calculated diagrams help to achieve an optimized hanger connection design with improved durability, which helps to reduce the maintenance costs of rail networks, and they give practical guidance for detailing.
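
As a generic illustration of the influence-line concept used here, the sketch below evaluates the response at a fixed section while a unit load moves across the structure, then superposes axle loads. The closed-form ordinates are for a simply supported beam; in the project, the ordinates would instead come from the detailed FE model of the tied arch bridge, and the axle group is an invented example:

```python
import numpy as np

L, c = 30.0, 12.0                 # span and section of interest (m), assumed
x = np.linspace(0.0, L, 301)      # positions of the moving unit load
# Influence ordinates for the bending moment at section c (simply supported)
eta = np.where(x <= c, x * (L - c) / L, c * (L - x) / L)

# Response to a train: superpose axle loads P_i (N) at positions x_i (m)
axles = {5.0: 225e3, 7.6: 225e3, 21.0: 225e3, 23.6: 225e3}  # hypothetical
M = sum(P * np.interp(xi, x, eta) for xi, P in axles.items())
print(f"bending moment at c from axle group: {M / 1e6:.2f} MNm")
```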

Keywords: fatigue, influence line, life cycle, tied arch bridge

Procedia PDF Downloads 298
25 Physical, Chemical and Environmental Properties of Natural and Construction/Demolition Recycled Aggregates

Authors: Débora C. Mendes, Matthias Eckert, Cláudia S. Moço, Hélio Martins, Jean-Pierre P. Gonçalves, Miguel Oliveira, José P. Da Silva

Abstract:

Uncontrolled disposal of construction and demolition waste (C&DW) in embankments in the periphery of cities causes both environmental and social problems, namely erosion, deforestation, water contamination and human conflicts. One of the milestones of the EU Horizon 2020 Programme is the management of waste as a resource. To achieve this purpose for C&DW, a detailed analysis of the properties of these materials is needed. In this work, we report the physical, chemical and environmental properties of C&DW aggregates from 25 different origins. The results are compared with those of common natural aggregates used in construction. Assays were performed according to European Standards. Additional analyses of heavy metals and organic compounds such as polycyclic aromatic hydrocarbons (PAHs) and polychlorinated biphenyls (PCBs) were performed to evaluate their environmental impact. Finally, properties of concrete prepared with C&DW aggregates are also reported. Physical analyses of C&DW aggregates indicated lower-quality properties than natural aggregates, particularly for concrete preparation and unbound layers of road pavements. Chemical properties showed that most samples (80%) meet the values required by European regulations for concrete and unbound layers of road pavements. Analyses of the heavy metals Cd, Cr, Cu, Pb, Ni, Mo and Zn in the C&DW leachates showed levels below the limits established by the Council Decision of 19 December 2002. Identification and quantification of PCBs and PAHs indicated that only a few samples contain these compounds, and the measured levels are also below the limits. Other compounds identified in the C&DW leachates include phthalates and diphenylmethanol. In conclusion, the characterized C&DW aggregates show lower-quality properties than natural aggregates, but most samples proved to be environmentally safe. Continuous monitoring of heavy metals and organic compounds should be carried out to ensure the supply of safe C&DW aggregates, which provide a good economic and environmental alternative to natural aggregates.

Keywords: concrete preparation, construction and demolition waste, heavy metals, organic pollutants

Procedia PDF Downloads 318
24 Systematic Analysis of Logistics Location Search Methods under Aspects of Sustainability

Authors: Markus Pajones, Theresa Steiner, Matthias Neubauer

Abstract:

Selecting a logistics location is vital for logistics providers, food retailers and other trading companies, since the selection is an essential factor for economic success. Various location search methods, such as cost-benefit analysis, are therefore well known and in use. However, the development of a logistics location can entail considerable negative effects on the ecosystem, such as surface sealing, loss of biodiversity, or CO2 and noise emissions generated by freight and commuting traffic. The increasing importance of sustainability demands an informed decision when selecting a logistics location for the future. Sustainability covers economic, ecological and social aspects, which should be integrated equally in the location search process. The objectives of this paper are to describe various methods that support the selection of sustainable logistics locations and to generate knowledge about the suitability, assets and limitations of these methods within the selection process. The paper investigates the role of economic, ecological and social aspects when searching for new logistics locations; related work on location search is analyzed with respect to the sustainability aspects it encodes. In addition, this research aims to establish how aspects of sustainability can be included in order to take an informed decision when searching for a logistics location. As a result, a decomposition of the various location search methods into their components leads to a comparative analysis in the form of a matrix. The comparison within a matrix enables a transparent overview of the assets and limitations of the methods and their suitability for selecting sustainable logistics locations. A further result is knowledge on how to combine the separate methods into a new method for a more efficient selection of logistics locations in the context of sustainability. Future work will especially investigate this combination of location search methods. The objective is to develop an innovative instrument that supports the search for logistics locations with a focus on balanced sustainability (economy, ecology, social). Through an ideal selection of logistics locations, induced traffic should be reduced and a modal shift to rail and public transport facilitated.
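
A minimal sketch of such a matrix-based comparison, reduced to a weighted scoring of candidate sites across the three sustainability dimensions, could look as follows (criteria, weights and scores are invented):

```python
# Equal weighting of the three sustainability dimensions (an assumption)
criteria_weights = {"economy": 1 / 3, "ecology": 1 / 3, "social": 1 / 3}

# Hypothetical candidate sites, scored 0..10 per dimension
sites = {
    "Site A (greenfield, motorway)": {"economy": 9, "ecology": 3, "social": 5},
    "Site B (brownfield, rail link)": {"economy": 7, "ecology": 8, "social": 7},
    "Site C (urban fringe)": {"economy": 6, "ecology": 6, "social": 4},
}

ranking = sorted(
    ((sum(w * scores[dim] for dim, w in criteria_weights.items()), name)
     for name, scores in sites.items()),
    reverse=True,
)
for total, name in ranking:
    print(f"{total:.2f}  {name}")
```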

Keywords: commuting traffic, freight traffic, logistics location search, location search method

Procedia PDF Downloads 291
23 Innovation Management in State-Owned-Enterprises in the Digital Transformation: An Empirical Case Study of Swiss Post

Authors: Jiayun Shen, Lorenz Wyss, Thierry Golliard, Matthias Finger

Abstract:

Innovation is widely recognized as the key for private enterprises to succeed in market competition. State-owned enterprises likewise need to be innovative to compete in the market after privatization. However, there is a lack of research on how state-owned enterprises manage innovation to create new products and services. Swiss Post, a Swiss state-owned enterprise, has established a department to transform the corporate culture and foster innovation in order to achieve digital transformation. This paper describes the innovation management process at Swiss Post, analyzes the impact of its instruments and organizational structure, and explores the barriers to innovation. The study used qualitative methods based on a review of the literature on innovation management and semi-structured interviews. In the more than five years since its creation, Swiss Post’s innovation management department has built a software-assisted, modularized platform with systematic instruments that support internal employees through the different innovation processes. It guides innovators from idea creation to piloting in markets and provides support with a separate financing source, with knowledge inputs and coaching, and with connections to external partners through the open innovation and venturing team. The platform is also tailored to the various operational business units within the corporation. The separate financing instruments enabled the creation and further development of new ideas; the coaching services contribute greatly to the transformation of teams’ innovation culture by providing new knowledge, thinking methods and use cases for inspiration. The platform also facilitates organizational learning to help the whole corporation with the digital transformation. However, it faces a twofold challenge: internally, disruptive projects often struggle to overcome the obstacles of long-established operational processes in the traditional business units; externally, the expectations of the public and restrictions from the federal government have become high hurdles for the company to stay and compete in the innovation track.

Keywords: empirical case study, innovation management, state-owned-enterprise, Swiss Post

Procedia PDF Downloads 96
22 IL4/IL13 STAT6 Mediated Macrophage Polarization During Acute and Chronic Pancreatitis

Authors: Hager Elsheikh, Juliane Glaubitz, Frank Ulrich Weiss, Matthias Sendler

Abstract:

Aim: Acute pancreatitis (AP) and chronic pancreatitis (CP) are both accompanied by a prominent immune response which influences the course of the disease. Whereas during AP a pro-inflammatory immune response dominates, during CP a fibroinflammatory response regulates organ remodeling. The transcription factor signal transducer and activator of transcription 6 (STAT6) is a crucial part of the type 2 immune response. Here we investigate the role of STAT6 in mouse models of AP and CP. Material and Methods: AP was induced by hourly repetitive i.p. injections of caerulein (50 µg/kg body weight) in C57BL/6J and STAT6-/- mice. CP was induced by repetitive caerulein injections 6 times a day, 3 days a week, over 4 weeks. Disease severity was evaluated by serum amylase/lipase measurement and H&E staining of the pancreas. The pancreatic infiltrate was characterized by immunofluorescent labeling of CD68, CD206, CCR2, CD4 and CD8, and pancreatic fibrosis was evaluated by azan blue staining. qRT-PCR was performed for Arg1, Nos2, Il6, Il1b, Col3a, Socs3 and Ym1. Affymetrix chip array analyses were done to illustrate IL4/IL13/STAT6 signaling in bone marrow-derived macrophages (BMDMs). Results: AP severity was mitigated in STAT6-/- mice, as shown by decreased serum amylase and lipase as well as reduced histological damage. Surprisingly, CP mice showed only slightly reduced fibrosis of the pancreas. Staining of CD206, a classical marker of alternatively activated macrophages, likewise showed no decrease of M2-like polarization in the absence of STAT6. In contrast, transcription profile analysis in BMDMs showed complete blockade of the IL4/IL13 pathway in STAT6-/- animals. Conclusion: The STAT6 signaling pathway is protective during AP and mitigates pancreatic damage. During chronic pancreatitis, the IL4/IL13 – STAT6 axis is involved in organ fibrogenesis. Notably, fibrosis does not depend on a single signaling pathway, and alternative macrophage activation is likewise complex, involving different subclasses (M2a, M2b, M2c and M2d) which could be independent of the IL4/IL13-STAT6 axis.

Keywords: chronic pancreatitis, macrophages, IL4/IL13, type 2 immune response

Procedia PDF Downloads 18
21 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection Using Machine Learning

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Manufacturing companies face global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality enables product quality to be secured through data-supported predictions, using machine learning models as a basis for decisions on test results. Machine learning methods are able to process large amounts of data, deal with unfavourable row-column ratios, detect dependencies between the covariates and the given target, and assess the multidimensional influence of all input variables on the target. Real production data, however, are often subject to highly fluctuating boundary conditions and unbalanced data sets. Changes in production data manifest themselves in trends, systematic shifts and seasonal effects. Machine learning applications therefore require intensive pre-processing and feature selection. Data pre-processing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, time periods with comparable production conditions can be identified by applying concept drift detection. Furthermore, a classification model is developed to evaluate the feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. In this research, the AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from assembly, and hydraulic measurement data from end-of-line testing. In addition, the most suitable methods are selected, and accurate quality predictions are achieved.
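
A minimal sketch of the described classifier-plus-feature-selection idea, on synthetic stand-in data rather than the Bosch production data, could look like this:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic data standing in for gauge, mating and end-of-line features
rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 6))
y = (0.8 * X[:, 0] - 0.6 * X[:, 2] + rng.normal(0, 0.5, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"all features, accuracy: {clf.score(X_te, y_te):.2f}")

# Feature selection: keep only features above an importance threshold
keep = np.flatnonzero(clf.feature_importances_ > 0.10)
clf_small = AdaBoostClassifier(n_estimators=200, random_state=0)
clf_small.fit(X_tr[:, keep], y_tr)
print(f"{len(keep)} of {X.shape[1]} features, "
      f"accuracy: {clf_small.score(X_te[:, keep], y_te):.2f}")
```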

Keywords: classification, machine learning, predictive quality, feature selection

Procedia PDF Downloads 133
20 Characterizing Solid Glass in Bending, Torsion and Tension: High-Temperature Dynamic Mechanical Analysis up to 950 °C

Authors: Matthias Walluch, José Alberto Rodríguez, Christopher Giehl, Gunther Arnold, Daniela Ehgartner

Abstract:

Dynamic mechanical analysis (DMA) is a powerful method for characterizing viscoelastic properties and phase transitions in a wide range of materials. It is often used to characterize polymers and their temperature-dependent behavior, including thermal transitions like the glass transition temperature Tg, via determination of the storage and loss moduli in tension (Young’s modulus, E), in shear or torsion (shear modulus, G), or in other testing modes. While production and application temperatures for polymers are often limited to several hundred degrees, the material properties of glasses usually require characterization at temperatures exceeding 600 °C. This contribution highlights a high-temperature setup for rotational and oscillatory rheometry as well as for DMA in different modes. The implemented standard convection oven enables the characterization of glass in different loading modes at temperatures up to 950 °C. Three-point bending, tension and torsion measurements on different glasses, with the E and G moduli as functions of frequency and temperature, are presented. Additional tests include superimposing several frequencies in a single temperature sweep (‘multiwave’). This type of test considerably reduces the experiment time and allows structural changes of the material and their frequency dependence to be evaluated. Furthermore, DMA in torsion and tension was performed to determine the complex Poisson’s ratio as a function of frequency and temperature within a single test definition. Tests were performed in a frequency range from 0.1 to 10 Hz and at temperatures up to the glass transition. While variations in frequency did not reveal significant changes in the complex Poisson’s ratio of the glass, a monotonic increase of this parameter was observed with increasing temperature. This contribution outlines the possibilities of DMA in bending, tension and torsion over an extended temperature range. It allows precise mechanical characterization of material behavior from room temperature up to the glass transition and the softening temperature interval. Compared to other thermo-analytical methods, such as Differential Scanning Calorimetry (DSC), where mechanical stress is neglected, the frequency dependence links measurement results (e.g., relaxation times) to real applications.
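
For an isotropic linear-viscoelastic material, the complex Poisson's ratio follows from the moduli measured in tension and torsion at the same frequency and temperature via nu* = E*/(2 G*) - 1. The sketch below applies this relation with invented, roughly glass-like numbers:

```python
import cmath

def complex_modulus(storage, loss):
    """Build a complex modulus M* = M' + i M''."""
    return complex(storage, loss)

# Assumed values from tension (E', E'') and torsion (G', G'') DMA, in Pa
E = complex_modulus(70e9, 0.7e9)
G = complex_modulus(29e9, 0.35e9)

nu = E / (2 * G) - 1  # complex Poisson's ratio (isotropic relation)
print(f"nu' = {nu.real:.3f}, nu'' = {nu.imag:.4f}, "
      f"|nu| = {abs(nu):.3f}, phase = {cmath.phase(nu):.4f} rad")
```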

Keywords: dynamic mechanical analysis, oscillatory rheometry, Poisson's ratio, solid glass, viscoelasticity

Procedia PDF Downloads 49
19 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Predictive quality exploits the great potential for reducing necessary quality control through the data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data pre-processing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets from which stable features can be extracted. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance; competitive leaders claim to have mastered their processes, so much of the real data has relatively low variance. Training prediction models, however, requires the highest possible generalisability, which this limited data variability makes more difficult. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science, and, as in any process, the cost of eliminating errors increases significantly with each advancing phase. For the quality prediction of hydraulic test steps of directional control valves, the question therefore arises in the initial phase whether a regression or a classification is more suitable. In this work, the initial phase of CRISP-DM, the business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
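
The business-understanding question can be prototyped as in the sketch below, comparing a regressor on the leakage volume flow with a classifier on the pass/fail decision. The data, features and inspection limit are synthetic assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 800
X = rng.normal(size=(n, 5))                    # stand-in process features
leakage = np.abs(0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.4, n))
passed = (leakage < 0.6).astype(int)           # assumed inspection limit

r2 = cross_val_score(RandomForestRegressor(random_state=0), X, leakage,
                     cv=5, scoring="r2").mean()
acc = cross_val_score(RandomForestClassifier(random_state=0), X, passed,
                      cv=5, scoring="accuracy").mean()
print(f"regression R^2 = {r2:.2f} vs classification accuracy = {acc:.2f}")
```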

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 109
18 In situ Grazing Incidence Small Angle X-Ray Scattering Study of Permalloy Thin Film Growth on Nanorippled Si

Authors: Sarathlal Koyiloth Vayalil, Stephan V. Roth, Gonzalo Santoro, Peng Zhang, Matthias Schwartzkopf, Bjoern Beyersdorff

Abstract:

Nanostructured magnetic thin films have gained significant relevance due to their applications in magnetic storage and recording media. Self-organized arrays of nanoparticles and nanowires can be produced by depositing metal thin films on nano-rippled substrates. The substrate topography strongly affects the film growth, giving rise to anisotropic properties (optical, magnetic, electronic transport). The ion-beam erosion (IBE) method can provide large-area patterned substrates with the valuable possibility of widely modifying the pattern length scale simply by acting on the ion beam parameters (energy, ion species, geometry, etc.). In this work, the growth mechanism of Permalloy thin films on such nano-rippled Si (100) substrates was investigated using in situ grazing incidence small angle X-ray scattering (GISAXS) measurements. In situ GISAXS measurements during thin film deposition were carried out at the P03/MiNaXS beamline of the PETRA III storage ring at DESY, Hamburg. Nano-rippled Si substrates prepared by low-energy ion beam sputtering, with an average ripple wavelength of 33 nm and an amplitude of 1 nm, were used as templates. It was found that the film replicates the template morphology up to large thickness regimes and that the growth is highly anisotropic along and normal to the ripple wave vector; various growth regimes were observed. Further, magnetic measurements were performed using the magneto-optical Kerr effect while rotating the sample in the azimuthal direction. A strong uniaxial magnetic anisotropy with its easy axis normal to the ripple wave vector was observed, and its strength was found to decrease with increasing film thickness. The mechanism of this strong uniaxial magnetic anisotropy and its dependence on film thickness are explained by correlation with the GISAXS results. In conclusion, we have performed a detailed growth analysis of Permalloy thin films deposited on nano-rippled Si templates and explained the correlation between structure and morphology and the observed magnetic properties.

Keywords: grazing incidence small angle x-ray scattering, magnetic thin films, magnetic anisotropy, nanoripples

Procedia PDF Downloads 287
17 Considering Uncertainties of Input Parameters on Energy, Environmental Impacts and Life Cycle Costing by Monte Carlo Simulation in the Decision Making Process

Authors: Johannes Gantner, Michael Held, Matthias Fischer

Abstract:

The refurbishment of the building stock in terms of energy supply and efficiency is one of the major challenges of the German turnaround in energy policy. As the building sector accounts for 40% of Germany’s total energy demand, additional insulation is key for energy-efficient refurbished buildings. Nevertheless, despite the energetic benefits, the environmental and economic performance of insulation materials is often questioned. The methods of Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) can form a standardized basis for addressing these doubts, and they are becoming increasingly important for material producers due to efforts such as the Product Environmental Footprint (PEF) or Environmental Product Declarations (EPDs). With the increasing use of LCA and LCC information for decision support, the robustness and resilience of the results become crucial, especially for decision and policy makers. LCA and LCC results are based on models which depend on technical parameters like efficiencies, material and energy demand, product output, etc. Nevertheless, the influence of parameter uncertainties on lifecycle results is usually not considered, or studied only superficially, even though it cannot be neglected: in the example of an exterior wall, the overall lifecycle results vary by a factor of more than three. As a consequence, the simple best-case/worst-case analyses used in practice are not sufficient. Such analyses allow a first rough view of the results but do not take effects such as error propagation into account, so LCA practitioners cannot provide further guidance for decision makers. Probabilistic analyses enable LCA practitioners to gain a deeper understanding of LCA and LCC results and to provide better decision support. Within this study, the environmental and economic impacts of an exterior wall system over its whole lifecycle are illustrated, and the effect of different uncertainty analyses on the interpretation in terms of resilience and robustness is shown. The approaches of error propagation and Monte Carlo simulation are applied and combined with statistical methods in order to allow deeper understanding and interpretation. All in all, this study emphasizes the need for a more detailed probabilistic evaluation based on statistical methods. Only in this way can misleading interpretations be avoided and the results be used for resilient and robust decisions.
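
A minimal Monte Carlo sketch of such an uncertainty propagation is shown below. The simplified wall model (production plus heating-related emissions over the service life) and all parameter distributions are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed input distributions for one square metre of insulated wall
thickness = rng.normal(0.16, 0.02, n)        # insulation thickness (m)
gwp_material = rng.normal(80.0, 15.0, n)     # kg CO2e per m3 insulation
lam = rng.normal(0.035, 0.003, n)            # thermal conductivity (W/mK)
service_life = rng.uniform(30, 50, n)        # years
grid_factor = rng.normal(0.25, 0.05, n)      # kg CO2e per kWh heat

u_value = lam / thickness                    # simplified: insulation only
heat_kwh = u_value * 75.0 * service_life     # assumed climate constant
gwp_total = thickness * gwp_material + heat_kwh * grid_factor

lo, mid, hi = np.percentile(gwp_total, [5, 50, 95])
print(f"GWP per m2 wall: median {mid:.0f} kg CO2e, "
      f"90% interval [{lo:.0f}, {hi:.0f}]")
```

Unlike a best-case/worst-case pair, the resulting distribution shows how likely the extreme outcomes actually are, which is the basis for the resilience and robustness discussion above.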

Keywords: uncertainty, life cycle assessment, life cycle costing, Monte Carlo simulation

Procedia PDF Downloads 258
16 Ultra-deformable Drug-free Sequessome™ Vesicles (TDT 064) for the Treatment of Joint Pain Following Exercise: A Case Report and Clinical Data

Authors: Joe Collins, Matthias Rother

Abstract:

Background: Oral non-steroidal anti-inflammatory drugs (NSAIDs) are widely used for the relief of joint pain during and after exercise. However, oral NSAIDs increase the risk of systemic side effects, even in healthy individuals, and retard recovery from muscle soreness. TDT 064 (Flexiseq®), a topical formulation containing ultra-deformable drug-free Sequessome™ vesicles, has demonstrated efficacy equivalent to oral celecoxib in reducing osteoarthritis-associated joint pain and stiffness, and does not cause NSAID-related adverse effects. We describe clinical study data and a case report on the effectiveness of TDT 064 in reducing joint pain after exercise. Methods: Participants with a pain score ≥3 (10-point scale) 12–16 hours post-exercise were randomized to receive TDT 064 plus oral placebo, TDT 064 plus oral ketoprofen, or ketoprofen in ultra-deformable phospholipid vesicles plus oral placebo. Results: In the 168 study participants, pain scores in the 7 days post-exercise were significantly higher with oral ketoprofen plus TDT 064 than with TDT 064 plus placebo (P = 0.0240), and recovery from muscle soreness took significantly longer (P = 0.0262). There was a low incidence of adverse events. These data are supported by clinical experience. A 24-year-old male professional rugby player suffered a traumatic Lisfranc fracture in March 2014 and underwent operative reconstruction. He had no relevant medical history and was not receiving concomitant medications; he had undergone anterior cruciate ligament reconstruction in 2008. The patient reported restricted training due to pain (score 7/10), stiffness (score 9/10) and poor function, as well as pain when changing direction and when running on consecutive days. In July 2014 he started using TDT 064 twice daily at the recommended dose. In November 2014 he noted reduced pain on running (score 2–3/10), decreased morning stiffness (score 4/10) and improved joint mobility, and was able to return to competitive rugby without restrictions. No side effects of TDT 064 were reported. Conclusions: TDT 064 shows efficacy against exercise- and injury-induced joint pain, as well as against osteoarthritis-associated pain. Unlike an oral NSAID, it does not retard recovery from muscle soreness after exercise, making it an alternative approach for the treatment of joint pain during and post-exercise.

Keywords: exercise, joint pain, TDT 064, phospholipid vesicles

Procedia PDF Downloads 453
15 Characterization of DOTA-Girentuximab Conjugates for Radioimmunotherapy

Authors: Tais Basaco, Stefanie Pektor, Josue A. Moreno, Matthias Miederer, Andreas Türler

Abstract:

Radiopharmaceuticals based on monoclonal antibodies (mAbs) functionalized with chelators via chemical linkers have become a potential tool in nuclear medicine because of their specificity and the large variability and availability of therapeutic radiometals. It is important to identify the conjugation sites and the number of chelators attached per mAb in order to obtain radioimmunoconjugates with the required immunoreactivity and radiostability. The girentuximab antibody (G250) is a potential candidate for radioimmunotherapy of clear cell renal cell carcinomas (RCCs) because it is reactive with the CAIX antigen, a transmembrane glycoprotein overexpressed on the cell surface of most ( > 90%) RCCs. G250 was conjugated with the bifunctional chelating agent DOTA (1,4,7,10-tetraazacyclododecane-N,N’,N’’,N’’’-tetraacetic acid) via an isothiocyanatobenzyl linker (p-SCN-Bn-DOTA). DOTA-G250 conjugates were analyzed by size exclusion chromatography (SE-HPLC) and by electrophoresis (SDS-PAGE). Potential site-specific conjugation was identified by liquid chromatography-tandem mass spectrometry (LC-MS/MS), and the number of linkers per molecule of mAb was calculated from the molecular weight (MW) measured by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). The average number obtained for the conjugates under non-reducing conditions was 8-10 molecules of DOTA per molecule of mAb; under reducing conditions, it was 1-2 and 3-4 molecules of DOTA per light chain (LC) and heavy chain (HC), respectively. Potential DOTA modification sites were identified at lysine residues. The biological activity of the conjugates was evaluated by flow cytometry (FACS) using CAIX-negative (SK-RC-18) and CAIX-positive (SK-RC-52) cells. The DOTA-G250 conjugates were labelled with 177Lu with a radiochemical yield > 95%, reaching specific activities of 12 MBq/µg. The in vitro stability of different types of radioconstructs was analyzed in human serum albumin (HSA). The radiostability of 177Lu-DOTA-G250 at high specific activity was increased by the addition of sodium ascorbate after labelling. The immunoreactivity was evaluated in vitro and in vivo. Binding to CAIX-positive cells (SK-RC-52) at different specific activities was higher for conjugates with lower DOTA content. The protein dose was optimized in mice with subcutaneously growing SK-RC-52 tumors using different amounts of 177Lu-DOTA-G250.
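
The chelator-to-antibody ratio can be estimated from the MALDI-TOF mass shift as in the back-of-the-envelope sketch below. The masses are assumed values chosen to land in the reported range; thiourea coupling of p-SCN-Bn-DOTA adds roughly the full reagent mass (~551 Da) per site:

```python
# Assumed masses in Da; only the calculation scheme is taken from the abstract
MW_MAB = 148_000.0         # intact girentuximab (typical IgG mass, assumed)
MW_CONJUGATE = 152_950.0   # measured conjugate mass (assumed)
MW_ADDED_PER_DOTA = 551.0  # p-SCN-Bn-DOTA, approximate free-base mass

n_dota = (MW_CONJUGATE - MW_MAB) / MW_ADDED_PER_DOTA
print(f"average DOTA per mAb: {n_dota:.1f}")  # ~9, within the reported 8-10
```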

Keywords: mass spectrometry, monoclonal antibody, radiopharmaceuticals, radioimmunotheray, renal cancer

Procedia PDF Downloads 270
14 CertifHy: Developing a European Framework for the Generation of Guarantees of Origin for Green Hydrogen

Authors: Frederic Barth, Wouter Vanhoudt, Marc Londo, Jaap C. Jansen, Karine Veum, Javier Castro, Klaus Nürnberger, Matthias Altmann

Abstract:

Hydrogen is expected to play a key role in the transition towards a low-carbon economy, especially within the transport sector, the energy sector and the (petro)chemical industry. However, the production and use of hydrogen only make sense if production and transportation are carried out with minimal impact on natural resources, and if greenhouse gas emissions are reduced in comparison to conventional hydrogen or conventional fuels. The CertifHy project, supported by a wide range of key European industry leaders (gas companies, the chemical industry, energy utilities, green hydrogen technology developers and automobile manufacturers, as well as other leading industrial players), therefore aims to: 1. Establish a widely acceptable definition of green hydrogen. 2. Determine how a robust Guarantee of Origin (GoO) scheme for green hydrogen should be designed and implemented throughout the EU. It is divided into the following work packages (WPs). 1. Generic market outlook for green hydrogen: evidence of existing industrial markets and the potential development of new energy-related markets for green hydrogen in the EU, overview of the segments and their future trends, drivers and market outlook (WP1). 2. Definition of ‘green’ hydrogen: a step-by-step consultation approach leading to a consensus on the definition of green hydrogen within the EU (WP2). 3. Review of existing platforms and interactions between existing GoO schemes and green hydrogen: lessons learnt and mapping of interactions (WP3). 4. Definition of a framework of guarantees of origin for ‘green’ hydrogen: technical specifications, rules and obligations for the GoO, impact analysis (WP4). 5. Roadmap for the implementation of an EU-wide GoO scheme for green hydrogen: the project implementation plan will be presented to the FCH JU and the European Commission as the key outcome of the project and shared with stakeholders before finalisation (WP5 and WP6). Definition of green hydrogen: CertifHy Green hydrogen is hydrogen from renewable sources that is also CertifHy Low-GHG-emissions hydrogen. Hydrogen from renewable sources is hydrogen belonging to the share of production equal to the share of renewable energy sources (as defined in the EU RES Directive) in the energy consumed for hydrogen production, excluding ancillary functions. CertifHy Low-GHG hydrogen is hydrogen with emissions lower than the defined CertifHy Low-GHG-emissions threshold, i.e. 36.4 gCO2eq/MJ, produced in a plant where the average emissions intensity of the non-CertifHy Low-GHG hydrogen production (based on an LCA approach), since sign-up or over the past 12 months, does not exceed the emissions intensity of the benchmark process (SMR of natural gas), i.e. 91.0 gCO2eq/MJ.
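
The classification logic stated in this definition can be summarized in a short sketch. The thresholds are the abstract's figures, while the renewable-share and plant-average handling is deliberately simplified here:

```python
LOW_GHG_THRESHOLD = 36.4  # gCO2eq/MJ, CertifHy Low-GHG-emissions threshold
BENCHMARK_SMR = 91.0      # gCO2eq/MJ, benchmark: SMR of natural gas

def certifhy_class(is_renewable, batch_intensity, plant_avg_intensity):
    """is_renewable: batch belongs to the renewable share of production;
    intensities in gCO2eq/MJ (LCA-based). Simplified classification."""
    low_ghg = (batch_intensity <= LOW_GHG_THRESHOLD
               and plant_avg_intensity <= BENCHMARK_SMR)
    if low_ghg and is_renewable:
        return "CertifHy Green hydrogen"
    if low_ghg:
        return "CertifHy Low-GHG hydrogen"
    return "not certifiable under CertifHy"

print(certifhy_class(True, 20.0, 85.0))   # -> CertifHy Green hydrogen
print(certifhy_class(False, 30.0, 85.0))  # -> CertifHy Low-GHG hydrogen
```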

Keywords: green hydrogen, cross-cutting, guarantee of origin, certificate, DG energy, bankability

Procedia PDF Downloads 453
13 Life Cycle Assessment of Today's and Future Electricity Grid Mixes of EU27

Authors: Johannes Gantner, Michael Held, Rafael Horn, Matthias Fischer

Abstract:

At the United Nations Climate Change Conference 2015, a global agreement on limiting climate change was reached, stating CO₂ reduction targets for all countries; the EU, for instance, targets a 40 percent reduction in emissions by 2030 compared to 1990. In order to achieve this ambitious goal, the environmental performance of the different European electricity grid mixes is crucial. First, electricity is directly needed in everyone’s daily life (e.g., heating, plug loads, mobility), so a reduction of the environmental impacts of the electricity grid mix reduces the overall environmental impacts of a country. Secondly, the manufacturing of every product depends on electricity, so a reduction of the environmental impacts of the electricity mix results in a further decrease of the environmental impacts of every product. As a result, meeting the two-degree goal depends strongly on the decarbonization of the European electricity mixes. Currently, the production of electricity in the EU27 is largely based on fossil fuels and therefore bears a high GWP impact per kWh. Because of the importance of the environmental impacts of the electricity mix, both today and in the future, time-dynamic Life Cycle Assessment models for all EU27 countries were set up within the European research projects CommONEnergy and Senskin. Methodologically, scenario modeling was combined with life cycle assessment according to ISO 14040 and ISO 14044. Based on EU27 trends regarding energy, transport and buildings, the national electricity mixes were investigated, taking into account future changes such as the amount of electricity generated in each country, changes in electricity carriers, the efficiency of the power plants, distribution losses, and imports and exports. As results, time-dynamic environmental profiles for the electricity mixes of each country, and for Europe overall, were established. For each European country, the decarbonization strategy of the electricity mix is critically investigated in order to identify decisions that can lead to negative environmental effects, for instance on the global warming potential of the electricity mix. For example, the withdrawal from the nuclear energy program in Germany, with the missing energy compensated by non-renewable energy carriers like lignite and natural gas, resulted in an increase in the global warming potential of the electricity grid mix; only after two years was this increase counterbalanced by the higher share of renewable energy carriers such as wind power and photovoltaics. Finally, as an outlook, a first qualitative picture is provided, illustrating from an environmental perspective which countries have the highest potential for low-carbon electricity production and therefore how investments in an interconnected European electricity grid could decrease the environmental impacts of the electricity mix in Europe.
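
A minimal sketch of a time-dynamic grid-mix GWP calculation is given below: the impact per kWh in each scenario year is the share-weighted sum of carrier emission factors. The shares and factors are illustrative stand-ins for the project's EU27 datasets:

```python
EMISSION_FACTORS = {  # kg CO2e per kWh, typical literature magnitudes
    "lignite": 1.1, "gas": 0.45, "nuclear": 0.012, "wind": 0.011, "pv": 0.045,
}

scenario = {  # hypothetical carrier shares per year (each year sums to 1)
    2015: {"lignite": 0.35, "gas": 0.20, "nuclear": 0.25, "wind": 0.12, "pv": 0.08},
    2025: {"lignite": 0.20, "gas": 0.25, "nuclear": 0.10, "wind": 0.28, "pv": 0.17},
    2035: {"lignite": 0.05, "gas": 0.20, "nuclear": 0.05, "wind": 0.45, "pv": 0.25},
}

for year, shares in scenario.items():
    gwp = sum(share * EMISSION_FACTORS[c] for c, share in shares.items())
    print(f"{year}: {gwp:.3f} kg CO2e/kWh")
```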

Keywords: electricity grid mixes, EU27 countries, environmental impacts, future trends, life cycle assessment, scenario analysis

Procedia PDF Downloads 158
12 Using a Card Game as a Tool for Developing a Design

Authors: Matthias Haenisch, Katharina Hermann, Marc Godau, Verena Weidner

Abstract:

Over the past two decades, international music education has been characterized by a growing interest in informal learning for formal contexts and a "compositional turn" that has moved from closed to open forms of composing. This change occurs under social and technological conditions that permeate 21st-century musical practices. It forms the background of Musical Communities in the (Post)Digital Age (MusCoDA), a four-year joint research project of the University of Erfurt (UE) and the University of Education Karlsruhe (PHK), funded by the German Federal Ministry of Education and Research (BMBF). Both subprojects explore songwriting processes as an example of collective creativity in (post)digital communities, one in formal and the other in informal learning contexts. Collective songwriting is studied from a network perspective that allows us to view boundaries between online and offline as well as formal, informal and hybrid contexts as permeable, and to reconstruct musical learning practices. By comparing these songwriting processes, possibilities for a pedagogical-didactic interweaving of different educational worlds are highlighted. The subproject of the University of Erfurt investigates school music lessons with the help of interviews, videography, and network maps in order to analyze new digital pedagogical and didactic possibilities. In a first step, the international literature on songwriting in the music classroom was examined for design development. The analysis focused on the question of which methods and practices circulate in the current literature. Results from this stage of the project form the basis for the first instructional design, which will help teachers plan regular music classes and subsequently allow us to reconstruct musical learning practices under these conditions. In analyzing the literature, we noticed certain structural methods and concepts that recur, such as the Building Blocks method and the pre-structuring of the songwriting process. From these findings, we developed a deck of cards that both captures the current state of research and serves as a method for design development. With this deck, both teachers and students can plan their individual songwriting lessons by independently selecting and arranging topic, structure, and action cards. In terms of science communication, music educators' interactions with the card game provide us with essential insights for developing the first design. The overall goal of MusCoDA is to develop an empirical model of collective musical creativity and learning and an instructional design for teaching music in the postdigital age.

Keywords: card game, collective songwriting, community of practice, network, postdigital

Procedia PDF Downloads 33
11 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains

Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe

Abstract:

The increasing digitalization of value chains can help companies to handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of smart value chains for the purpose of improving productivity, handling increasing time and cost pressure and meeting the need for individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant-flexible production and constantly changing market and environmental conditions. To lift the performance limits inherent in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies, and there is no practicable framework that guides the transformation of current value chains into digitally pervasive value chains. Current research shows that a connection between lean production and digitalization exists; this link is based on factors such as people, technology and organization. The method for the determination of digitally pervasive value chains introduced in this paper takes the factors people, technology and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step, 'target definition', describes the target situation and defines the depth of the analysis with regard to the inspection area and the level of detail. The second step, 'analysis of the value chain', verifies the lean-ability of processes and places a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. Furthermore, the 'digital evaluation process' ensures the usefulness of digital adaptations regarding their practicability and their integrability into the existing production system; a simple scoring sketch of this step is given below. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. The validation and optimization of the proposed method in a German company from the electronics industry show that the digital transformation of current value chains based on lean production raises their inherent performance limits.
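
To make the 'digital evaluation process' concrete, here is a minimal Python sketch of a weighted scoring of candidate technologies for practicability and integrability. The criteria, weights, threshold and example technologies are our own assumptions for illustration; the paper does not specify a scoring formula.

```python
from dataclasses import dataclass

@dataclass
class DigitalTechnology:
    name: str
    practicability: float   # 0..1, ease of use on the shop floor (assumed scale)
    integrability: float    # 0..1, fit with the existing production system

def evaluate(tech, w_pract=0.5, w_integ=0.5, threshold=0.6):
    """Weighted score plus an adopt/reject decision against a threshold."""
    score = w_pract * tech.practicability + w_integ * tech.integrability
    return score, score >= threshold

# Hypothetical candidates for a value-chain analysis
candidates = [
    DigitalTechnology("RFID material tracking", 0.8, 0.7),
    DigitalTechnology("AGV fleet", 0.5, 0.4),
]
for tech in candidates:
    score, adopt = evaluate(tech)
    print(f"{tech.name}: score={score:.2f}, adopt={adopt}")
```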

Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain

Procedia PDF Downloads 273
10 The Current Application of BIM - An Empirical Study Focusing on the BIM-Maturity Level

Authors: Matthias Stange

Abstract:

Building Information Modelling (BIM) is one of the most promising methods in the building design process and plays an important role in the digitalization of the Architecture, Engineering, and Construction (AEC) industry. The application of BIM is seen as the key enabler for increasing productivity in the construction industry. Model-based collaboration using the BIM method is intended to significantly reduce cost increases, schedule delays, and quality problems in the planning and construction of buildings. Numerous qualitative studies based on expert interviews support this theory and report perceived benefits from the use of BIM in terms of achieving project objectives related to cost, schedule, and quality. However, there is a large research gap in analysing quantitative data collected from real construction projects regarding the actual benefits of applying BIM, based on a representative sample size and covering different application regions and project typologies. In particular, the influence of the project-related BIM maturity level is completely unexplored. This research project examines primary data from 105 construction projects worldwide using quantitative research methods. Projects from the areas of residential, commercial, and industrial construction as well as infrastructure and hydraulic engineering were examined in the application regions North America, Australia, Europe, Asia, the MENA region, and South America. First, a descriptive data analysis of six independent project variables (BIM maturity level, application region, project category, project type, project size, and BIM level) was carried out using statistical methods. With the help of statistical data analyses, the influence of the project-related BIM maturity level on six dependent project variables (deviation in planning time, deviation in construction time, number of planning collisions, frequency of rework, number of RFIs and number of changes) was investigated; the sketch below illustrates the type of model involved. The study revealed that most of the benefits of using BIM perceived in numerous qualitative studies could not be confirmed. The results of the examined sample show that the application of BIM did not have an improving influence on the dependent project variables, especially regarding the quality of the planning itself and adherence to schedule targets. The quantitative research suggests that the BIM planning method in its current application has not (yet) led to a recognizable increase in productivity within the planning and construction process. The empirical findings indicate that this is due to the overall low level of BIM maturity in the projects of the examined sample. As a quintessence, the author suggests that the further implementation of BIM should primarily focus on an application-oriented and consistent development of the project-related BIM maturity level instead of implementing BIM for its own sake. Apparently, there are still significant difficulties in the interweaving of people, processes, and technology.
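
As an illustration of the kind of statistical analysis described, the following Python sketch regresses one dependent variable on the BIM maturity level while controlling for other project variables. The file name and column names are hypothetical; the study's actual dataset and model specification are not reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset of projects; one row per project, columns assumed.
df = pd.read_csv("bim_projects.csv")

# Does BIM maturity explain construction-time deviation once region,
# category and size are controlled for? One OLS model per dependent variable.
model = smf.ols(
    "construction_time_deviation ~ C(bim_maturity) + C(region)"
    " + C(project_category) + project_size",
    data=df,
).fit()
print(model.summary())
```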

Keywords: AEC-process, building information modeling, BIM maturity level, project results, productivity of the construction industry

Procedia PDF Downloads 48
9 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need for quantifying and reducing uncertainties in environmental data, including observations of carbon, water, and energy. The global eddy covariance flux tower network (FLUXNET) and its regional counterparts (i.e. OzFlux, AmeriFlux, ChinaFLUX, etc.) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys and remote sensing assessments, there are some serious concerns regarding the challenges associated with the technique, e.g. data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux records, avoiding the limitations of a single algorithm and thereby reducing the error and uncertainty associated with the gap-filling process. In this study, data from five towers in the OzFlux network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, combining five feedforward neural networks (FFNNs) with different structures with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 provided by the best FFNN). The most significant improvement occurred in the estimation of extreme diurnal values (around midday and sunrise) as well as nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. The towers, as well as seasonality, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. Moreover, the performance difference between the ensemble model and its individual components was more pronounced during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep), due to the higher amount of photosynthesis, which leads to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Ensemble machine learning models are therefore potentially capable of improving data estimation and regression outcomes where there seems to be no more room for improvement with a single algorithm; a sketch of the two-layer architecture follows.
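
The two-layer design described above corresponds to a stacked ensemble. Below is a minimal Python sketch using scikit-learn and xgboost; the hidden-layer sizes, hyperparameters, feature set and placeholder data are our assumptions, not the configuration used in the study.

```python
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

# First layer: five FFNNs with different structures; second layer: XGB
# meta-learner trained on their outputs, mirroring the design described above.
ffnns = [
    (f"ffnn_{i}", MLPRegressor(hidden_layer_sizes=sizes, max_iter=2000, random_state=i))
    for i, sizes in enumerate([(16,), (32,), (64,), (32, 16), (64, 32)])
]
ensemble = StackingRegressor(
    estimators=ffnns,
    final_estimator=XGBRegressor(n_estimators=300),
)

# X: meteorological drivers (e.g. radiation, air temperature, VPD, soil
# moisture); y: observed CO2 flux. Gaps are later predicted from drivers alone.
X, y = np.random.rand(1000, 4), np.random.rand(1000)  # placeholder data
ensemble.fit(X, y)
co2_flux_filled = ensemble.predict(X[:10])
```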

Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 107
8 Sustainable Production of Pharmaceutical Compounds Using Plant Cell Culture

Authors: David A. Ullisch, Yantree D. Sankar-Thomas, Stefan Wilke, Thomas Selge, Matthias Pump, Thomas Leibold, Kai Schütte, Gilbert Gorr

Abstract:

Plants have been considered a source of natural substances for ages. Secondary metabolites from plants are utilized especially in medical applications and are of growing interest as cosmetic ingredients and in the field of nutraceuticals. However, the supply of compounds from natural harvest can be limited by numerous factors, e.g. endangered species, low product content, climate impacts and cost-intensive extraction. Especially in the pharmaceutical industry, the ability to provide sufficient amounts of product at high quality is an additional requirement which in some cases is difficult to fulfill by plant harvest. Whereas in many cases the complexity of secondary metabolites precludes chemical synthesis on a reasonable commercial basis, plant cells contain the biosynthetic pathway – a natural chemical factory – for a given compound. A promising approach for the sustainable production of natural products is therefore plant cell fermentation (PCF®). A thoroughly accomplished development process comprises the identification of a high-producing cell line, the optimization of growth and production conditions, the development of a robust and reliable production process, and its scale-up. In order to ensure persistent, long-lasting production, the development of cryopreservation protocols and the generation of working cell banks are further important requirements to be considered. So far, the most prominent example of a PCF® process is the production of the anticancer compound paclitaxel. To demonstrate the power of plant suspension cultures, we present three case studies: 1) For more than 17 years, Phyton has produced paclitaxel at industrial scale, i.e. at up to 75,000 L. With 60 g/kg dw, this fully controlled process, which is applied according to GMP, results in outstandingly high yields. 2) Thapsigargin is another anticancer compound, currently isolated from the seeds of Thapsia garganica. Thapsigargin is a powerful cytotoxin – a SERCA inhibitor – and the precursor of the derivative ADT, the key ingredient of the investigational prodrug Mipsagargin (G-202), which is in several clinical trials. Phyton has successfully generated plant cell lines capable of expressing this compound. Here we present data on the screening for high-producing cell lines. 3) The third case study covers ingenol-3-mebutate. This compound is found at very low concentrations in the milky sap of intact plants of the Euphorbiaceae family. Ingenol-3-mebutate is used in Picato®, which is approved against actinic keratosis. The generation of cell lines expressing significant amounts of ingenol-3-mebutate is another example underlining the strength of plant cell culture. The authors gratefully acknowledge Inspyr Therapeutics for funding.

Keywords: ingenol-3-mebutate, plant cell culture, sustainability, thapsigargin

Procedia PDF Downloads 214
7 Preparedness is Overrated: Community Responses to Floods in a Context of (Perceived) Low Probability

Authors: Kim Anema, Matthias Max, Chris Zevenbergen

Abstract:

For any flood risk manager, the 'safety paradox' is a familiar concept: low probability leads to a sense of safety, which leads to more investment in the area, which leads to higher potential consequences, keeping the aggregated risk (probability × consequences) at the same level. It is therefore important to mitigate potential consequences apart from probability. However, when the (perceived) probability is so low that there is no recognizable trend for society to adapt to, addressing the potential consequences will always be the lagging point on the agenda. Preparedness programs fail for lack of interest and urgency, policy makers are distracted by their day-to-day business, and there is always a more urgent issue to spend the taxpayer's money on. The leading question in this study was how to address the social consequences of flooding in a context of (perceived) low probability. Disruptions of everyday urban life, large or small, can be caused by a variety of (un)expected events – of which flooding is only one possibility. Variability like this is typically addressed with resilience, and we used the concept of community resilience as the framework for this study. Drawing on face-to-face interviews, an extensive questionnaire and publicly available statistical data, we explored the 'whole society response' to two recent urban flood events: the Brisbane floods (AUS) in 2011 and the Dresden floods (GE) in 2013. In Brisbane, we studied how the societal impacts of the floods were counteracted by both authorities and the public, and in Dresden we were able to validate our findings. A large part of the reactions, both public and institutional, to these two urban flood events was not fuelled by preparedness or proper planning. Instead, the more important success factors in counteracting social impacts such as demographic changes in neighborhoods and (non-)economic losses were dynamics like community action, flexibility and creativity from authorities, leadership, informal connections and a shared narrative. These proved to be the determining factors for the quality and speed of recovery in both cities. The resilience of the community in Brisbane was good, due to (i) the approachability of (local) authorities, (ii) a big group of 'secondary victims' and (iii) clear leadership. All three of these elements were amplified by the use of social media and/or web 2.0 by both the communities and the authorities involved. The numerous contacts and social connections made through the web were fast, need-driven and, in their own way, orderly. Similarly, in Dresden, large groups of 'unprepared', ad hoc organized citizens managed to work together with authorities in a way that was effective and sped up recovery. The concept of community resilience is better suited than 'social adaptation' to dealing with the potential consequences of an (im)probable flood. Community resilience is built on capacities and dynamics that are part of everyday life and that can be invested in pre-event to minimize the social impact of urban flooding. Investing in these might even have beneficial trade-offs in other policy fields.

Keywords: community resilience, disaster response, social consequences, preparedness

Procedia PDF Downloads 324