Search results for: Bernd Friedrich
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 52

22 Improvement of the Parallel Compressor Model in Dealing with Unequal Outlet Pressure Distribution

Authors: Kewei Xu, Jens Friedrich, Kevin Dwinger, Wei Fan, Xijin Zhang

Abstract:

The Parallel Compressor Model (PCM) is a simplified approach to predicting compressor performance with inlet distortions. In PCM calculations, it is assumed that the sub-compressors' outlet static pressure is uniform, which simplifies the calculation procedure. However, if the compressor's outlet duct is not long and straight, this assumption frequently induces errors ranging from 10% to 15%. This paper provides a revised PCM calculation method that corrects this error. The revised method employs the energy, momentum and continuity equations to obtain the required parameters and replace the equal-static-pressure assumption. Based on the revised method, PCM is applied to two compression systems with different blade types. Their performance under non-uniform inlet conditions is predicted with the revised calculation method and used to evaluate the method's efficiency. When validated against experimental data, the calculated results agree well with the experiments despite small deviations, with errors ranging from 0.1% to 3%. This demonstrates that the revised PCM calculation method has clear advantages in predicting the performance of a distorted compressor with a limited exhaust duct.
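
As an illustration of the kind of relations such a revision relies on, the quasi-one-dimensional duct balances below could replace the equal-static-pressure closure between a sub-compressor exit (station 1) and the common duct exit (station 2); the notation is assumed here for illustration and is not taken from the paper.

```latex
% Illustrative duct balances replacing p_2 = const. for each sub-compressor stream
\begin{align}
  \rho_1 A_1 V_1 &= \rho_2 A_2 V_2                                && \text{(continuity)} \\
  p_1 A_1 + \rho_1 A_1 V_1^2 + F_w &= p_2 A_2 + \rho_2 A_2 V_2^2  && \text{(momentum, with duct wall force } F_w\text{)} \\
  h_1 + \tfrac{1}{2} V_1^2 &= h_2 + \tfrac{1}{2} V_2^2            && \text{(energy, adiabatic duct)}
\end{align}
```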

Keywords: parallel compressor model (pcm), revised calculation method, inlet distortion, outlet unequal pressure distribution

Procedia PDF Downloads 301
21 Combustion Chamber Sizing for Energy Recovery from Furnace Process Gas: Waste to Energy

Authors: Balram Panjwani, Bernd Wittgens, Jan Erik Olsen, Stein Tore Johansen

Abstract:

The Norwegian ferroalloy industry is a world leader in the sustainable production of ferrosilicon, silicon and manganese alloys with the lowest global specific energy consumption. One of the byproducts of the metal reduction process is energy-rich off-gas, and usually this energy is not harnessed. A novel concept for sustainable energy recovery from ferroalloy off-gas is discussed. The concept is founded on the idea of introducing a combustion chamber in the off-gas section in which the energy-rich off-gas, mainly consisting of CO, will be combusted. This will provide an additional degree of freedom for optimizing energy recovery. A well-controlled and high off-gas temperature will assure a significant increase in energy recovery and a reduction of emissions to the atmosphere. Design and operation of the combustion chamber depend on many parameters, including the total power capacity of the combustion chamber and sufficient residence time for combusting complex Polycyclic Aromatic Hydrocarbons (PAH) and NOx, as well as converting other potential pollutants. The design criteria for the combustion chamber have been identified and discussed, and sizing of the combustion chamber has been carried out considering these design criteria. Computational Fluid Dynamics (CFD) has been utilized extensively for sizing the combustion chamber. The results from our CFD simulations of the flow in the combustion chamber, exploring different off-gas fuel compositions, are presented. In brief, the paper covers all aspects that impact the sizing of the combustion chamber, including insulation thickness, choice of insulating material, heat transfer through extended surfaces, multi-staging and secondary air injection.
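
A minimal sizing relation of the kind such a design study builds on links the chamber volume to the required residence time via the ideal-gas flue density; the symbols are illustrative assumptions, not values from the paper.

```latex
% Residence-time sizing sketch: tau_min is the time needed to burn out PAH and CO
\begin{equation}
  V_c \;=\; \frac{\dot m \,\tau_{\min}}{\rho(T_c)}, \qquad
  \rho(T_c) \;=\; \frac{p\, M}{R\, T_c}
\end{equation}
```

Here ṁ is the off-gas mass flow, T_c the chamber temperature, p the operating pressure, M the mean molar mass of the flue gas and V_c the resulting chamber volume.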

Keywords: CFD, combustion chamber, arc furnace, energy recovery

Procedia PDF Downloads 294
20 Describing Professional Purchasers' Performance Applying the 'Big Five Inventory': Findings from a Survey in Austria

Authors: Volker Koch, Sigrid Swobodnik, Bernd M. Zunk

Abstract:

The success of companies on globalized markets is significantly influenced by the performance of purchasing departments and, of course, the individuals employed as professional purchasers. Although this is generally accepted in practice, in the literature as well as in empirical research only insufficient attention has been given to the relationship between the personality of professional purchasers and their individual performance. This paper aims to describe this relationship against the background of the 'Big Five Inventory'. Based on the five dimensions of personality (openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism), a research model was designed. The research model divides the individual performance of professional purchasers into two major dimensions: operational and strategic. The operational dimension consists of the items 'cost', 'quality', 'delivery' and 'flexibility'; the strategic dimension comprises the items 'innovation', 'supplier satisfaction' as well as 'purchasing and supply management integration in the organization'. To test the research model, a survey study was performed, and an online questionnaire was sent to purchasing professionals in Austrian companies. The data collected from 78 responses was used to test the research model by applying a group comparison. The comparison points out that there is (i) an influence of the purchasers' personality on their individual performance and (ii) a link between purchasers' personality and a high or low individual performance of professional purchasers. The findings of this study may help human resource managers during staff recruitment processes to identify the 'right performing personality' for an operational and/or a strategic position in purchasing departments.

Keywords: big five inventory, individual performance, personality, purchasing professionals

Procedia PDF Downloads 141
19 Using Data Mining in Automotive Safety

Authors: Carine Cridelich, Pablo Juesas Cano, Emmanuel Ramasso, Noureddine Zerhouni, Bernd Weiler

Abstract:

Safety is one of the most important considerations when buying a new car. While active safety aims at avoiding accidents, passive safety systems such as airbags and seat belts protect the occupant in case of an accident. In addition to legal regulations, organizations like Euro NCAP provide consumers with an independent assessment of the safety performance of cars and drive the development of safety systems in the automobile industry. Those ratings are mainly based on injury assessment reference values derived from physical parameters measured in dummies during a car crash test. The components and sub-systems of a safety system are designed to achieve the required restraint performance. Sled tests and other types of tests are then carried out by car makers and their suppliers to confirm the protection level of the safety system. A Knowledge Discovery in Databases (KDD) process is proposed in order to minimize the number of tests. The KDD process is based on data emerging from sled tests according to Euro NCAP specifications. About 30 parameters of the passive safety systems from different data sources (crash data, dummy protocol) are first analysed together with experts' opinions. A procedure is proposed to manage missing data and is validated on real data sets. Finally, a procedure is developed to estimate a set of rough initial parameters of the passive system before testing, aiming at reducing the number of tests.
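
A minimal sketch of how such a missing-data step might look in practice, assuming a tabular set of sled-test parameters; the column threshold, imputation strategy and file name are hypothetical and not taken from the paper.

```python
import pandas as pd
from sklearn.impute import SimpleImputer

def preprocess_sled_tests(df: pd.DataFrame, max_missing_ratio: float = 0.4) -> pd.DataFrame:
    """Drop sparsely populated parameters, then impute remaining gaps with the median."""
    keep = df.columns[df.isna().mean() <= max_missing_ratio]   # per-parameter missing ratio
    reduced = df[keep]
    imputer = SimpleImputer(strategy="median")                 # numeric parameters assumed
    return pd.DataFrame(imputer.fit_transform(reduced),
                        columns=reduced.columns, index=reduced.index)

# Hypothetical usage on a table of ~30 restraint-system parameters per sled test:
# sled = pd.read_csv("sled_tests.csv")
# clean = preprocess_sled_tests(sled)
```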

Keywords: KDD process, passive safety systems, sled test, dummy injury assessment reference values, frontal impact

Procedia PDF Downloads 353
18 Design Study on a Contactless Material Feeding Device for Electro Conductive Workpieces

Authors: Oliver Commichau, Richard Krimm, Bernd-Arno Behrens

Abstract:

A growing demand on the production rate of modern presses leads to higher stroke rates. Commonly used material feeding devices for presses, like grippers and roll-feeding systems, can achieve high stroke rates only together with high gripper forces to avoid stick-slip. These forces are limited by the sensitivity of the workpiece surfaces. Stick-slip leads to scratches on the surface and false positioning of the workpiece. In this paper, a new contactless feeding device is presented, which develops a higher feeding force without damaging the surface of the workpiece through gripping forces. It is based on the principle of the linear induction motor. A primary part creates a magnetic field and induces eddy currents in the electrically conductive material. A Lorentz force acts on the workpiece in the feeding direction as a mutual reaction between the eddy currents and the magnetic induction. In this study, the FEA model of this approach is presented. The calculations with this model were used to identify the influence of various design parameters on the performance of the feeder, thus showing the promising capabilities and limits of this technology. In order to validate the study, a prototype of the feeding device has been built. An experimental setup was used to measure pulling forces and placement accuracy of the experimental feeder in order to give an outlook on a potential industrial application of this approach.
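
The force generation described can be summarized by the standard linear-induction relations below; the notation is assumed for illustration and does not come from the paper.

```latex
% Eddy-current density in the workpiece and resulting feeding force (illustrative notation)
\begin{equation}
  \mathbf{J} = \sigma\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right), \qquad
  \mathbf{F} = \int_{V}\mathbf{J}\times\mathbf{B}\,\mathrm{d}V
\end{equation}
```

Here σ is the electrical conductivity of the workpiece, v its velocity relative to the travelling magnetic field, and the integral runs over the workpiece volume.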

Keywords: conductive material, contactless feeding, linear induction, Lorentz-Force

Procedia PDF Downloads 156
17 Experimental and Numerical Evaluation of a Shaft Failure Behaviour Using Three-Point Bending Test

Authors: Bernd Engel, Sara Salman Hassan Al-Maeeni

Abstract:

A substantial amount of natural resources is nowadays consumed at a growing rate, as humans all over the world use materials obtained from the Earth. The machinery manufacturing industry is one of the major resource consumers on a global scale. Despite the continual discovery of new materials, metals, and resources, it is urgent for the industry to develop methods to use the Earth's resources intelligently and more sustainably than before. Re-engineering of machine tools regarding design and failure analysis is an approach whereby out-of-date machines are upgraded and returned to useful life. To ensure the reliable future performance of used machine components, it is essential to investigate machine component failure through material, design, and surface examinations. This paper presents an experimental approach aimed at inspecting the shaft of a rotary draw bending machine as a case study. The testing methodology, which is based on the principle of the three-point bending test, allows assessing the shaft's elastic behavior under loading. Furthermore, the shaft's elastic characteristics, including the maximum linear deflection and the maximum bending stress, were determined using an analytical approach and a finite element (FE) analysis. In the end, the results were compared with the ones obtained by the experimental approach. In conclusion, the measured bending deflection and bending stress were well within the permissible design values. Therefore, the shaft can work in a second life cycle. However, based on previous surface tests conducted, the shaft needs surface treatments, including re-carburizing and refining processes, to ensure reliable surface performance.
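
For reference, the closed-form expressions that an analytical check of this kind typically uses, assuming a simply supported shaft of solid circular cross-section (diameter d), span L and a central load F; the symbols are assumptions for illustration.

```latex
% Three-point bending of a solid circular shaft under a central load
\begin{equation}
  \delta_{\max} = \frac{F L^{3}}{48\,E I}, \qquad
  \sigma_{\max} = \frac{M_{\max}}{W} = \frac{F L/4}{\pi d^{3}/32} = \frac{8 F L}{\pi d^{3}},
  \qquad I = \frac{\pi d^{4}}{64}
\end{equation}
```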

Keywords: deflection, FE analysis, shaft, stress, three-point bending

Procedia PDF Downloads 122
16 Nietzsche's 'Will to Power' as a Potentially Irrational-Rational Psychopathology: How and Why Amor Fati May Prove to Be Its 'Horse Whisperer'

Authors: Nikolai David Blaskow

Abstract:

Nietzsche scholarship in the main has never quite resolved its deeply divided, at times self-contradictory responses to what Friedrich Nietzsche might have actually meant by his notion of the 'will to power'. Yet, in the context of the current global pandemic and climate change crisis, never has there been a more urgent need to investigate and resolve that contradiction. This paper argues for the 'will to power' as a potentially irrational-rational psychopathology, one that can properly be understood only by means of Nietzsche's agonistic insights into another psychopathology, that of ressentiment. The argument also makes a case for the contention that amor fati (Nietzsche's positive affirmation of life) may prove to be ressentiment's cure. In addition, as an integral part of the methodology, the lens of René Girard's (1923-2015) mimetic and scapegoat theory is brought to bear on resolving the contradiction. Ressentiment and mimetic theory prove to be key players in the investigation, inasmuch as they expose the reasons for a modernity in crisis. The major finding of this study is that when the explanatory power of the two theories is applied, an understanding of the dynamics of the crisis in which we find ourselves emerges. The keys to that insight include: (1) how these two psychopathologies closely resemble the contemporary neurologically defined 'borderline conditions' and their implications for culture, (2) how identity politics stifle exemplary leadership and so create toxic cultures, and (3) a critical assessment of Achille Mbembe's (2019) re-working of Frantz Fanon's 'ethics of the passerby' and its resonances with Nietzsche's amor fati.

Keywords: agon, amor fati, borderline conditions, ethics of the passer by, exemplary leadership, identity politics, mimesis, ressentiment, scapegoat mechanism

Procedia PDF Downloads 224
15 Understanding Profit Shifting by Multinationals in the Context of Cross-Border M&A: A Methodological Exploration

Authors: Michal Friedrich

Abstract:

Cross-border investment has never been easier than in today's global economy. Despite recent initiatives tightening the international tax landscape, profit shifting and tax optimization by multinational entities (MNEs) in the context of cross-border M&A remain persistent and complex phenomena that warrant in-depth exploration. By synthesizing the outcomes of existing research, this study first aims to provide a methodological framework for identifying MNEs' profit-shifting behavior and quantifying its fiscal impacts via various macroeconomic and microeconomic approaches. The study also proposes additional methods and qualitative/quantitative measures for extracting insight into the profit-shifting behavior of MNEs in the context of their M&A activities at industry and entity levels. To develop the proposed methods, this study applies knowledge of international tax laws and known profit-shifting conduits (incl. dividends, interest, and royalties) to several model cases/types of cross-border acquisitions and post-acquisition integration activities by MNEs and highlights important factors that encourage or discourage tax optimization. Follow-up research is envisaged to apply the methods outlined in this study to published data on real-world M&A transactions to gain practical country-by-country, industry- and entity-level insights. In conclusion, this study seeks to contribute to the ongoing discourse on profit shifting by providing a methodological toolkit for exploring the profit-shifting tendencies of MNEs in connection with their M&A activities and to serve as a backbone for further research. The study is expected to provide valuable insight to policymakers, tax authorities, and tax professionals alike.

Keywords: BEPS, cross-border M&A, international taxation, profit shifting, tax optimization

Procedia PDF Downloads 42
14 Stress-Controlled Senescence and Development in Arabidopsis thaliana by Root Associated Factor (RAF), a NAC Transcription Regulator

Authors: Iman Kamranfar, Gang-Ping Xue, Salma Balazadeh, Bernd Mueller-Roeber

Abstract:

Adverse environmental conditions such as salinity stress, high temperature and drought limit plant growth and typically lead to precocious tissue degeneration and leaf senescence, a process by which nutrients from photosynthetic organs are recycled for the formation of flowers and seeds to secure reaching the next generation under such harmful conditions. In addition, abiotic stress affects developmental patterns that help the plant to withstand unfavourable environmental conditions. We discovered an NAC (for NAM, ATAF1, 2, and CUC2) transcription factor (TF), called RAF in the following, which plays a central role in abiotic drought stress-triggered senescence and the control of developmental adaptations to stressful environments. RAF is an abscisic acid (ABA)-responsive TF; RAF overexpressors are hypersensitive to ABA and exhibit precocious senescence, while knock-out mutants show delayed senescence. To explore the RAF gene regulatory network (GRN), we determined its preferred DNA binding sites by a binding site selection assay (BSSA) and performed microarray-based expression profiling using inducible RAF overexpression lines and chromatin immunoprecipitation (ChIP)-PCR. Our studies identified several direct target genes, including those encoding catabolic enzymes active during stress-induced senescence. Furthermore, we identified various genes controlling drought stress-related developmental changes. Based on our results, we conclude that RAF functions as a central transcriptional regulator that coordinates developmental programs with stress-related inputs from the environment. To explore the potential agricultural applications of our findings, we are currently extending our studies towards crop species.

Keywords: abiotic stress, Arabidopsis, development, transcription factor

Procedia PDF Downloads 162
13 Foucault and the Archaeology of Transhumanism

Authors: Michel Foucault, Friedrich Nietzsche, Max More, Natasha Vita-More, Francesca Ferrando

Abstract:

During the early part of his intellectual and academic career (1950s and 1960s), Michel Foucault developed an interest in what we can call the 'anthropological question', or how our modernity deals with human nature from an epistemological standpoint. The great originality of Foucault's thought here lies in the fact that he approaches this question not from the perspective of the 'sovereign subject' (that has characterized the history of Western thought), which he wishes to disclose and 'denounce', but rather at the level of discourses and the way they constitute who we are, so to speak. This led him, in turn, to formulate a series of thought-provoking statements during his so-called 'archaeological period' of the 1960s concerning what we call 'man' in the West, such as that he is an 'invention of recent date' (as a proper object of concern and reflection) and, perhaps more importantly, that he might disappear in the near future, 'like a face drawn in sand at the edge of the sea'. Foucault is following in the footsteps of Nietzsche in that regard, who had famously announced in the 19th century the 'death of God' and the need for future generations to surpass, so to speak, the traditional 'Christian-centred' Western conception of the human. While Foucault exposed such insights more than half a century ago, they appear more relevant than ever today with the development and rising popularity of intellectual movements such as Transhumanism and Posthumanism, which seek to question and propose an alternative to the concepts of 'man' or 'human nature' in our culture. They rely on the same assumption as Foucault and Nietzsche, namely that those concepts (and the meaning we attribute to them) have become 'obsolete' as they stand and thus must be overcome (at a conceptual, but also a more practical, level). Hence, those movements not only echo the important Foucauldian reflection of the 1950s and 1960s on the 'anthropological question' but seem to have been literally announced by it, so to speak. The aim of this paper is therefore to show the relevance of Foucault (and in particular his archaeological method) in understanding the nature of Transhumanism (and Posthumanism), for instance, by analysing and assessing it as a form of discourse that is literally reshaping the way we understand ourselves as human beings in our (post)modern age, drawing on a number of key texts, including Foucault's early works.

Keywords: foucault, nietzsche, archaeology, transhumanism, posthumanism

Procedia PDF Downloads 41
12 Kriging-Based Global Optimization Method for Bluff Body Drag Reduction

Authors: Bingxi Huang, Yiqing Li, Marek Morzynski, Bernd R. Noack

Abstract:

We propose a Kriging-based global optimization method for active flow control with multiple actuation parameters. This method is designed to converge quickly and to avoid getting trapped in local minima. We follow the model-free explorative gradient method (EGM) in alternating between explorative and exploitive steps. This facilitates a convergence similar to a gradient-based method and the parallel exploration of potentially better minima. In contrast to EGM, both kinds of steps are performed with a Kriging surrogate model built from the available data. The explorative step maximizes the expected improvement, i.e., it favors regions of large uncertainty. The exploitive step identifies the best location of the cost function from the Kriging surrogate model for a subsequent weight-biased linear-gradient descent search. To verify the effectiveness and robustness of the improved Kriging-based optimization method, we have examined several comparative test problems of varying dimensions with limited evaluation budgets. The results show that the proposed algorithm significantly outperforms model-free optimization algorithms like the genetic algorithm and the differential evolution algorithm, with quicker convergence for a given budget. We have also performed direct numerical simulations of the fluidic pinball (N. Deng et al. 2020 J. Fluid Mech.), i.e., three circular cylinders in an equilateral-triangular arrangement immersed in an incoming flow at Re=100. The optimal cylinder rotations lead to 44.0% net drag power saving with 85.8% drag reduction and 41.8% actuation power. The optimal actuation for this configuration achieves a boat-tailing mechanism by employing Coanda forcing and wake stabilization by delaying separation and minimizing the wake region.
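
A minimal sketch of the surrogate-based explorative step described (Kriging fit plus expected-improvement maximization); the kernel choice, bounds, optimizer and cost function are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import differential_evolution
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(x, gp, y_best):
    """Expected improvement at x under a minimization convention."""
    mu, sigma = gp.predict(np.atleast_2d(x), return_std=True)
    mu, sigma = mu[0], max(sigma[0], 1e-12)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def kriging_explorative_step(X, y, bounds):
    """Fit the Kriging surrogate and return the next sample maximizing EI."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    res = differential_evolution(lambda x: -expected_improvement(x, gp, y.min()), bounds)
    return res.x, gp

# Hypothetical usage with two actuation parameters in [-1, 1] and a placeholder cost:
# X = np.random.uniform(-1, 1, (10, 2)); y = np.array([cost(x) for x in X])
# x_next, surrogate = kriging_explorative_step(X, y, bounds=[(-1, 1), (-1, 1)])
```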

Keywords: direct numerical simulations, flow control, kriging, stochastic optimization, wake stabilization

Procedia PDF Downloads 79
11 Projected Uncertainties in Herbaceous Production Result from Unpredictable Rainfall Pattern and Livestock Grazing in a Humid Tropical Savanna Ecosystem

Authors: Daniel Osieko Okach, Joseph Otieno Ondier, Gerhard Rambold, John Tenhunen, Bernd Huwe, Dennis Otieno

Abstract:

Increased human activities such as grazing, logging, and agriculture, alongside unpredictable rainfall patterns, have been detrimental to ecosystem service delivery, therefore compromising the productivity potential of these ecosystems. This study aimed at simulating the impact of drought (50%) and enhanced rainfall (150%) on future herbaceous CO2 uptake, biomass production and soil C:N dynamics in a humid savanna ecosystem influenced by livestock grazing. Rainfall patterns were simulated using manipulation experiments set up to reduce (50%) and increase (150%) ambient (100%) rainfall amounts in grazed and non-grazed plots. The impact of the manipulated rainfall regime on herbaceous CO2 fluxes, biomass production and soil C:N dynamics was measured against volumetric soil water content (VWC) logged every 30 minutes using 5TE (Decagon Devices Inc., Washington, USA) soil moisture sensors installed at 20 cm soil depth in every plot. Herbaceous biomass was estimated using a destructive method augmented by standardized photographic imaging. CO2 fluxes were measured using the ecosystem chamber method, and the gas was analysed using an LI-820 gas analyzer (USA). The C:N ratio was calculated from the soil carbon and nitrogen contents (analyzed using EA2400CHNS/O and EA2410 N elemental analyzers, respectively) of the different plots under study. The patterning of VWC was directly influenced by the rainfall amount, with lower VWC observed in the grazed compared to the non-grazed plots. Rainfall variability, grazing and their interaction significantly affected changes in VWC (p < 0.05) and subsequently total biomass and CO2 fluxes. VWC had a strong influence on CO2 fluxes under 50% rainfall reduction in the grazed plots (r2 = 0.91; p < 0.05) and under ambient rainfall in the ungrazed plots (r2 = 0.77; p < 0.05). The dependence of biomass on VWC across plots was stronger under grazed (r2 = 0.78 - 0.87; p < 0.05) than under ungrazed (r2 = 0.44 - 0.85; p < 0.05) conditions. The C:N ratio was, however, not correlated with VWC across plots. This study provides insight into how humid savannas will respond to changes driven by rainfall variability and livestock grazing and, consequently, into the sustainable management of such ecosystems.

Keywords: CO2 fluxes, rainfall manipulation, soil properties, sustainability

Procedia PDF Downloads 103
10 History of Pediatric Renal Pathology

Authors: Mostafa Elbaba

Abstract:

Because childhood renal diseases differ greatly from adult diseases, pediatric nephrology was founded as a specialty in 1965. Renal pathology was introduced as a specialty at the London Ciba Symposium in 1961. The history of renal pathology can be divided into two eras: one starting in the 1650s with the invention of the microscope, the second in the 1950s with the implementation of the renal biopsy and the advent of electron microscopy and immunofluorescence studies. Prior to the 1950s, the study of diseased human kidneys was restricted to postmortem examination by gross pathology. In 1827, Richard Bright first described his triad of kidney disease, which was confirmed by morbid kidney changes at autopsy. In 1905, Friedrich Mueller coined the term "nephrosis" to describe the degenerative, non-inflammatory form of kidney disease, and later F. Munk added the term "lipoid nephrosis". The most profound influence on the classification of renal diseases came from the publication of Volhard and Fahr in 1914. In 1899, Carl Max Wilhelm Wilms described Wilms' tumor of the kidneys in children. Chronic pyelonephritis was a popular renal diagnosis and the most common cause of uremia until the 1960s. Although kidney biopsy had been used early in the 1930s for renal tumors, the earliest reports of its use in the diagnosis of medical kidney disease were by Iversen and Brun in 1951, followed by Alwall in 1952 and then by Pardo in 1953. The earliest intentional renal biopsies were done in 1944 by Nils Alwall, but the procedure was abandoned after the death of one of his 13 biopsied patients. In 1950, Antonino Perez-Ara attempted renal biopsies, but his results went largely unnoticed because they were published in a little-known journal. In 1951, Claus Brun and Poul Iversen developed the biopsy procedure using an aspiration technique. Popularizing renal biopsy practice is credited to Robert Kark, who published his distinct work in 1954. He perfected the technique of renal biopsy in the prone position using the Vim-Silverman needle and used intravenous pyelography to improve localization of the kidney.

Keywords: history, medicine, nephrology, pediatrics, pathology

Procedia PDF Downloads 35
9 Effects of Active Muscle Contraction in a Car Occupant in Whiplash Injury

Authors: Nisha Nandlal Sharma, Julaluk Carmai, Saiprasit Koetniyom, Bernd Markert

Abstract:

Whiplash injuries are usually associated with car accidents. The sudden forward or backward jerk of the head causes neck strain, which is the result of damage to the muscles or tendons. Neck pain and headaches are the two most common symptoms of whiplash. Symptoms of whiplash are commonly reported in studies, but the injury mechanism is poorly understood. The neck muscles are the most important factor in studying neck injury. This study focuses on the development of a finite element (FE) model of the human neck muscles to study the whiplash injury mechanism and the effect of active muscle contraction on occupant kinematics. A detailed study of the injury mechanism will promote the development and evaluation of new safety systems in cars, hence reducing the occurrence of severe injuries to the occupant. In the present study, an active human finite element (FE) model with a 3D neck muscle model is developed. The neck muscles were modeled with a combination of solid tetrahedral elements and 1D beam elements. Active muscle properties were represented by beam elements, whereas passive properties were represented by solid tetrahedral elements. To generate muscular force according to the inputted activation levels, a Hill-type muscle model was applied to the beam elements. To simulate the non-linear passive properties of muscle, the solid elements were modeled with a rubber/foam material model. Material properties were assigned from published experimental tests. Some important muscles were then inserted into the THUMS (Total Human Model for Safety) 50th percentile male pedestrian model. To reduce the simulation time required, the THUMS lower body parts were not included. After muscle insertion, THUMS was given boundary conditions similar to those of the experimental tests. The model was exposed to 4g and 7g rear impacts, as these load levels are close to the low-speed impacts causing whiplash. The effect of the muscle activation level on occupant kinematics during whiplash was analyzed.
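
The active force carried by the beam elements follows the generic form of a Hill-type muscle model; the notation below is a sketch for illustration, not the specific formulation used in the paper.

```latex
% Hill-type muscle force: active (beam) and passive (solid) contributions
\begin{equation}
  F^{M}(t) \;=\; F_{\max}\Bigl[\,a(t)\, f_{L}\!\left(\ell^{M}\right) f_{V}\!\left(v^{M}\right)
  \;+\; f_{P}\!\left(\ell^{M}\right)\Bigr]
\end{equation}
```

Here a(t) ∈ [0, 1] is the activation level, f_L and f_V are the normalized force-length and force-velocity relations, and f_P is the passive contribution, carried in this model by the solid elements.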

Keywords: finite element model, muscle activation, neck muscle, whiplash injury prevention

Procedia PDF Downloads 326
8 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection using Machine Learning

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Manufacturing companies are facing global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality enables product quality to be secured through data-supported predictions, using machine learning models as a basis for decisions on test results. Furthermore, machine learning methods are able to process large amounts of data, deal with unfavourable row-column ratios and detect dependencies between the covariates and the given target, as well as assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets. Changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Thus, machine learning applications require intensive pre-processing and feature selection. Data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, time periods with comparable production conditions can be identified by applying a concept drift method. Furthermore, a classification model is developed to evaluate the feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. In this research, the AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from the assembly, and hydraulic measurement data from end-of-line testing. In addition, the most suitable methods are selected and accurate quality predictions are achieved.
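
A rough sketch of the described pipeline, assuming merged machining, assembly and end-of-line data in tabular form; the selection threshold, hyperparameters and data layout are hypothetical, not the authors' configuration.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

def build_leakage_model() -> Pipeline:
    """Importance-based feature selection followed by the final AdaBoost classifier."""
    return Pipeline([
        ("select", SelectFromModel(AdaBoostClassifier(n_estimators=100, random_state=0),
                                   threshold="median")),   # keep the more informative half
        ("clf", AdaBoostClassifier(n_estimators=200, random_state=0)),
    ])

def evaluate(X, y):
    """Cross-validated balanced accuracy; X merges machining, assembly and EOL features,
    y is the binary leakage label from the final inspection (hypothetical layout)."""
    return cross_val_score(build_leakage_model(), X, y, cv=5, scoring="balanced_accuracy")
```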

Keywords: classification, machine learning, predictive quality, feature selection

Procedia PDF Downloads 135
7 Cleaning of Polycyclic Aromatic Hydrocarbons (PAH) Obtained from a Ferroalloy Plant

Authors: Stefan Andersson, Balram Panjwani, Bernd Wittgens, Jan Erik Olsen

Abstract:

Polycyclic Aromatic Hydrocarbons (PAH) are organic compounds consisting only of hydrogen and carbon arranged in aromatic rings. PAH are neutral, non-polar molecules that are produced due to incomplete combustion of organic matter. These compounds are carcinogenic and interact with biological nucleophiles to inhibit the normal metabolic functions of cells. In Norway, the most important sources of PAH pollution are considered to be aluminum plants, the metallurgical industry, offshore oil activity, transport, and wood burning. Stricter governmental regulations regarding emissions to the outer and internal environment, combined with increased awareness of the potential health effects, have motivated Norwegian metal industries to increase their efforts to reduce emissions considerably. One of the objectives of the ongoing "SCORE" project, supported by industry and the Norwegian Research Council, is to reduce potential PAH emissions from the off-gas stream of a ferroalloy furnace through controlled combustion in a dedicated combustion chamber. The sizing and configuration of the combustion chamber depend on the combined properties of the bulk gas stream and the properties of the PAH themselves. In order to achieve efficient and complete combustion, the residence time and minimum temperature need to be optimized. For this design approach, reliable kinetic data for the individual PAH species and/or groups thereof are necessary. However, kinetic data on the combustion of PAH are difficult to obtain, and there is only a limited number of studies. This paper presents an evaluation of kinetic data for some of the PAH obtained from the literature. In the present study, the oxidation is modelled both for pure PAH and for PAH mixed with process gas. Using a perfectly stirred reactor (PSR) modelling approach, the oxidation is modelled with detailed reaction kinetics to study the influence of residence time and temperature on the conversion of PAH to CO2 and water. A Chemical Reactor Network (CRN) approach is developed to understand the oxidation of PAH inside the combustion chamber. Chemical reactor network modeling has been found to be a valuable tool in the evaluation of the oxidation behavior of PAH under various conditions.
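
For reference, the steady-state balance that a perfectly stirred reactor model of this kind solves for each species k; the notation is standard and assumed here for illustration.

```latex
% PSR species balance and mean residence time
\begin{equation}
  \dot m\left(Y_{k,\mathrm{in}} - Y_{k}\right) + \dot\omega_{k}\, W_{k}\, V = 0, \qquad
  \tau = \frac{\rho V}{\dot m}
\end{equation}
```

Here Y_k are species mass fractions, ω̇_k the net molar production rate from the kinetic mechanism, W_k the molar mass, V the reactor volume and τ the mean residence time.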

Keywords: PAH, PSR, energy recovery, ferro alloy furnace

Procedia PDF Downloads 243
6 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through the application of predictive quality, the great potential for saving necessary quality control can be exploited by means of data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes. As a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which is made more difficult by this data availability. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science. As in any process, the costs of eliminating errors increase significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In the context of this work, the initial phase of CRISP-DM, the business understanding, is critically examined for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
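
A minimal sketch of how the two framings could be compared on the same data, assuming a leakage volume-flow target and a pass/fail threshold; the models, threshold and metrics are illustrative assumptions, not the configuration used in the paper.

```python
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

def compare_framings(X, leakage_flow, threshold):
    """Score the regression framing (leakage volume flow) and the classification
    framing (inspection decision) with 5-fold cross-validation."""
    y_class = (leakage_flow > threshold).astype(int)   # 1 = fails the hydraulic test
    r2 = cross_val_score(GradientBoostingRegressor(random_state=0),
                         X, leakage_flow, cv=5, scoring="r2")
    acc = cross_val_score(GradientBoostingClassifier(random_state=0),
                          X, y_class, cv=5, scoring="balanced_accuracy")
    return r2.mean(), acc.mean()
```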

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 114
5 Gabriel Marcel and Friedrich Nietzsche: Existence and Death of God

Authors: Paolo Scolari

Abstract:

Nietzschean thought flows like a current throughout Marcel's philosophy. Marcel is in constant dialogue with him and wants to pay homage to him, making him one of the most eminent representatives of existential thought. His enthusiasm is triggered by Nietzsche's phrase 'God is dead', the fil rouge that ties together all of the Nietzschean references scattered through Marcelian texts. The death of God is the theme which emphasises both the greatness and, simultaneously, the tragedy of Nietzsche. Marcel wants to restore to the idea 'God is dead' its original meaning: a tragic existential character that imitators of Nietzsche seem to have blurred. It is an interpretation that Marcel achieves by aiming at a double target. On the one hand, he removes the heavy metaphysical suit from Nietzsche's aphorisms on the death of God, the suit that his interpreters have made them wear, Heidegger especially. On the other hand, he removes a stratum of trivialisation which takes the aphorisms out of context and transforms them into advertising slogans; here Sartre becomes the target. In the lecture Nietzsche: l'homme devant la mort de dieu, Marcel hurls himself against Heidegger's metaphysical interpretation of the death of God: a hermeneutical proposal that is definitely original, but also a bit too abstract, an interpretation without bite that does not grasp the tragic existential weight of the original Nietzschean idea. 'We are probably on the wrong road,' he announces, 'when at all costs, like Heidegger, we want to make a metaphysic out of Nietzsche.' Marcel also criticizes Sartre, who lands in Geneva and greets the journalists by saying: 'Gentlemen, God is dead'. Marcel only needs this impromptu exclamation to understand how Sartre misinterprets the meaning of the death of God. Sartre mistakes and loses the existential sense of this idea in favour of sensationalism and trivialisation. Marcel then wipes the slate clean of these two limited interpretations of the declaration of the death of God. It is much more than a metaphysical quarrel and not at all comparable to any advertising slogan. Behind the cry 'God is dead' there is the existence of an anguished man who experiences in his solitude the actual death of God, a man who has killed God with his own hands and is haunted by the chill that from now on he will have to live in a completely different way. The death of God, however, is not the end. Marcel spots a new beginning at the point at which nihilism is overcome and the Übermensch is born. Dialoguing with Nietzsche, he realises that he is in the presence of a great spirit who has contributed to the renewal of a spiritual horizon. He descends to the most profound depths of his thought, aware that the way out lies really far below, in the remotest areas of existence. The ambivalence of Nietzsche does not scare him. Rather, such a thought, characterised by contradiction, will simultaneously be infinitely dangerous and infinitely healthy.

Keywords: Nietzsche's Death of God, Gabriel Marcel, Heidegger, Sartre

Procedia PDF Downloads 197
4 A Differential Scanning Calorimetric Study of Frozen Liquid Egg Yolk Thawed by Different Thawing Methods

Authors: Karina I. Hidas, Csaba Németh, Anna Visy, Judit Csonka, László Friedrich, Ildikó Cs. Nyulas-Zeke

Abstract:

Egg yolk is a popular ingredient in the food industry due to its gelling, emulsifying, colouring, and coagulating properties. Because of the heat sensitivity of its proteins, egg yolk can only be heat treated at low temperatures, so its shelf life, even with the addition of a preservative, is only a few weeks. Freezing can increase the shelf life of liquid egg yolk up to 1 year, but below -6 °C it undergoes gelation, which is an irreversible phenomenon. The degree of gelation depends on the time and temperature of freezing and is influenced by the thawing process. Therefore, in our experiment, we examined egg yolks thawed in different ways. In this study, unpasteurized, industrially broken, separated, and homogenized liquid egg yolk was used. Freshly produced samples were frozen in plastic containers at -18 °C in a laboratory freezer. Frozen storage was carried out for 90 days. Samples were analysed at day zero (unfrozen) and after frozen storage for 1, 7, 14, 30, 60 and 90 days. Samples were thawed in two ways (at 5 °C for 24 hours and at 30 °C for 3 hours) before testing. Calorimetric properties were examined by differential scanning calorimetry, where heat flow curves were recorded. Denaturation enthalpy values were calculated by fitting a linear baseline, and denaturation temperature values were evaluated. In addition, the dry matter content of the samples was measured by the oven method, drying at 105 °C to constant weight. For statistical analysis, a two-way ANOVA (α = 0.05) was employed, with thawing mode and freezing time as the fixed factors. Denaturation enthalpy values decreased from 1.1 to 0.47 by the end of the storage experiment, which represents a reduction of about 60%. The effect of freezing time on these values was significant; the enthalpy of samples stored frozen for only 1 day was already significantly reduced. However, the mode of thawing did not significantly affect the denaturation enthalpy of the samples, and no interaction was seen between the two factors. The denaturation temperature and the dry matter content did not change significantly with either freezing time or thawing mode. The results of our study show that slow freezing and frozen storage at -18 °C greatly reduce the amount of protein that can be denatured in egg yolk, indicating that the proteins have undergone aggregation, denaturation or other protein conversions regardless of how they were thawed.
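
A minimal sketch of the two-way ANOVA described, assuming a long-format table of enthalpy measurements; the column names and factor codings are hypothetical, not the authors' dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def enthalpy_anova(df: pd.DataFrame) -> pd.DataFrame:
    """df columns (assumed): 'enthalpy' (denaturation enthalpy), 'thawing'
    ('5C_24h' or '30C_3h'), 'days' (frozen storage duration, 0..90)."""
    model = smf.ols("enthalpy ~ C(thawing) * C(days)", data=df).fit()
    return anova_lm(model, typ=2)   # main effects and the interaction term

# anova_table = enthalpy_anova(measurements)
```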

Keywords: denaturation enthalpy, differential scanning calorimetry, liquid egg yolk, slow freezing

Procedia PDF Downloads 98
3 Investigations of the Service Life of Different Material Configurations in Solid-Lubricated Rolling Bearings

Authors: Bernd Sauer, Michel Werner, Stefan Emrich, Michael Kopnarski, Oliver Koch

Abstract:

Friction reduction is an important aspect in the context of sustainability and the energy transition. Rolling bearings are used in many applications in which components move relative to each other. Conventionally lubricated rolling bearings are used in a wide range of applications but are not suitable under certain conditions. Conventional lubricants such as grease or oil cannot be used at very high or very low temperatures. In addition, these lubricants evaporate at very low ambient pressure, e.g. in a high-vacuum environment, making the use of solid-lubricated bearings unavoidable. With the use of solid-lubricated bearings, predicting the service life becomes more complex. While the end of the service life of bearings with conventional lubrication is mainly caused by the failure of the bearing components due to material fatigue, solid-lubricated bearings fail at the moment when the lubrication layer is worn away and the rolling elements come into direct contact with the raceway during operation. In order to extend the service life of these bearings beyond the life of the initial coating, the use of transfer lubrication is recommended: the balls run against cage pockets or sacrificial cages, pick up lubricant from them, and thus keep it available for lubrication in the tribological contact. This contribution presents the results of wear and service life tests on solid-lubricated rolling bearings with sacrificial cage pockets. The cage of the bearing consists of a polyimide (PI) matrix with 15% molybdenum disulfide (MoS2) and serves as a lubrication depot alongside the silver-coated balls. The bearings are tested under high vacuum (pE < 10⁻² Pa) at a temperature of 300 °C on a four-bearing test rig. First, investigations of the bearing system within the bearing service life are presented, and the torque curve, the wear mass and surface analyses are discussed. With regard to wear, the bearing rings tend to gain mass over the service life of the bearing, while the balls and the cage tend to lose mass. With regard to the elemental surface properties, the surfaces of the bearing rings and balls are examined in terms of the mass of the elements present on them. Furthermore, service life investigations with different material pairings are presented; here the focus is on the service life achieved, in addition to the torque curve, wear development and surface analysis. It was shown that MoS2 in the cage leads to a longer service life, while a silver (Ag) coating on the balls has no positive influence on the service life and even appears to reduce it in combination with MoS2.

Keywords: ball bearings, molybdenum disulfide, solid lubricated bearings, solid lubrication mechanisms

Procedia PDF Downloads 11
2 Disrupting Traditional Industries: A Scenario-Based Experiment on How Blockchain-Enabled Trust and Transparency Transform Nonprofit Organizations

Authors: Michael Mertel, Lars Friedrich, Kai-Ingo Voigt

Abstract:

Based on principal-agent theory, an information asymmetry exists in the traditional donation process: consumers cannot verify whether nonprofit organizations (NPOs) use the raised funds for the designated cause after the transaction has taken place (hidden action). Therefore, charity organizations have tried to appear transparent and gain trust by using the same marketing instruments for decades (e.g., releasing project success reports). However, none of these measures can guarantee consumers that charities will use their donations for the intended purpose. With awareness of the misuse of donations rising due to the Ukraine conflict (e.g., funding of crime), consumers are increasingly concerned about the destination of their charitable contributions. Therefore, innovative charities like the Human Rights Foundation have started to offer donations via blockchain. Blockchain technology has the potential to establish profound trust and transparency in the donation process: consumers can publicly track the progress of their donation at any time after deciding to donate. This ensures that the charity is not using donations against their original intent. Hence, the aim is to investigate the effect of blockchain-enabled transactions on the willingness to donate. Sample and design: To investigate consumers' behavior, we use a scenario-based experiment. After removing participants (e.g., due to failed attention checks), 3192 potential donors participated (47.9% female, 62.4% with a bachelor's degree or above). Procedure: We randomly assigned the participants to one of two scenarios. In all conditions, the participants read a scenario about a fictive charity organization called "Helper NPO." Afterward, the participants answered questions regarding their perception of the charity. Manipulation: The first scenario (n = 1405) represents a typical donation process, in which consumers donate money without any option to track and trace. The second scenario (n = 1787) represents a donation process via blockchain, in which consumers can track and trace their donations. Using t-statistics, the findings demonstrate a positive effect of donating via blockchain on participants' willingness to donate (mean difference = 0.667, p < .001, Cohen's d effect size = 0.482). A mediation analysis shows significant effects for the mediation of transparency (estimate = 0.199, p < .001), trust (estimate = 0.144, p < .001), and transparency and trust (estimate = 0.158, p < .001). The total effect of blockchain usage on participants' willingness to donate (estimate = 0.690, p < .001) consists of the direct effect (estimate = 0.189, p < .001) and the indirect effects of transparency and trust (estimate = 0.501, p < .001). Furthermore, consumers' affinity for technology moderates the direct effect of blockchain usage on participants' willingness to donate (estimate = 0.150, p < .001). Donating via blockchain is a promising way for charities to engage consumers for several reasons: (1) charities can emphasize trust and transparency in their advertising campaigns; (2) established charities can target new customer segments by specifically engaging technology-affine consumers in the future; (3) charities can raise international funds without previous barriers (e.g., setting up bank accounts). Nevertheless, increased transparency can also backfire (e.g., through the disclosure of costs). Such cases require further research.
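
A minimal sketch of the reported group comparison on the willingness-to-donate scores, assuming two independent samples; the use of Welch's test and the pooled-SD form of Cohen's d are assumptions for illustration, not necessarily the authors' exact procedure.

```python
import numpy as np
from scipy.stats import ttest_ind

def group_comparison(donate_blockchain: np.ndarray, donate_traditional: np.ndarray):
    """Welch's t-test plus Cohen's d for the two scenario groups (illustrative)."""
    t, p = ttest_ind(donate_blockchain, donate_traditional, equal_var=False)
    n1, n2 = len(donate_blockchain), len(donate_traditional)
    pooled_sd = np.sqrt(((n1 - 1) * donate_blockchain.var(ddof=1) +
                         (n2 - 1) * donate_traditional.var(ddof=1)) / (n1 + n2 - 2))
    d = (donate_blockchain.mean() - donate_traditional.mean()) / pooled_sd
    return t, p, d
```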

Keywords: blockchain, social sector, transparency, trust

Procedia PDF Downloads 65
1 Assessment and Characterization of Dual-Hardening Adhesion Promoter for Self-Healing Mechanisms in Metal-Plastic Hybrid System

Authors: Anas Hallak, Latifa Seblini, Juergen Wilde

Abstract:

In mechatronics and sensor technology, plastic housings are used to protect sensitive components from harmful environmental influences, such as moisture, media, or reactive substances. Connections, preferably in the form of metallic lead-frame structures, through the housing wall are required for their electrical supply or control. In this system, the insufficient connection between the plastic component, e.g., Polyamide 66, and the metal surface, e.g., copper, due to their incompatibility is the dominant problem. As a result, leakage paths can occur along the plastic-metal interface. Since adhesive bonding has been established as one of the most important joining processes and its use has expanded significantly, driven by the development of improved high-performance adhesives and bonding techniques, this technology has also been applied to metal-plastic hybrid structures. In this study, an epoxy bonding agent from DELO (DUALBOND LT2266) has been used to improve the mechanical and chemical bonding between the metal and the polymer. It is an adhesion promoter with two reaction stages. The first stage provides fixation to the lead frame directly after the coating step, which can be achieved by UV exposure for a few seconds. In the second stage, the material is thermally hardened during injection molding. To analyze the two reaction stages of the primer, dynamic DSC experiments were carried out and correlated with Fourier-transform infrared spectroscopy measurements. Furthermore, the number of crosslinking bonds formed in the system in each reaction stage has been estimated by rheological characterization. These investigations have been performed with different UV exposure times of 12 and 96 s and in an industrially relevant temperature range from -20 to 175 °C. The shear viscosity of the primer has been measured as a function of temperature and exposure time. For further interpretation, the storage modulus values have been calculated and the so-called Booij–Palmen plot has been sketched. The next aspect of this study is the self-healing mechanism in the hybrid system, in which the primer should flow into micro-damage such as interface cracks, inhibit it from growing, and close it. The ability of the primer to flow into and penetrate defined capillaries made in Ultramid was investigated. Holes with a diameter of 0.3 mm were produced in injection-molded A3EG7 plates of 4 mm thickness. A copper substrate coated with the DUALBOND was placed on the A3EG7 plate and pressed with a defined force. Metallographic analyses were carried out to verify the degree of filling, which showed an almost 95% filling ratio of the capillaries. Finally, to assess the self-healing mechanism in metal-plastic hybrid systems, characterizations have been carried out on a simple geometry with a metal inlay developed by the Institute of Polymer Technology at Friedrich-Alexander-University. The specimens were modified with a tungsten wire, which was pulled out after injection molding to create a micro-hole in the specimen at the interface between the primer and the polymer. The capability of the primer to heal these micro-cracks upon heating, pressing, and thermal aging has been characterized through metallographic analyses.

Keywords: hybrid structures, self-healing, thermoplastic housing, adhesive

Procedia PDF Downloads 161