Search results for: Alexander L. Dmitriev
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 93

33 Health Care Ethics in Vulnerable Populations: Clinical Research through the Patient's Eyes

Authors: Alexander V. Libin, Manon Schladen, Assya Pascalev, Nawar Shara, Miriam Philmon, Yuri Millo, Joseph Verbalis

Abstract:

Chronic conditions carry with them strong emotions and often lead to charged relationships between patients and their health providers and, by extension, patients and health researchers. Persons are both autonomous and relational, and a purely cognitive model of autonomy neglects the social and relational basis of chronic illness. Ensuring genuine informed consent in research requires a thorough understanding of how participants perceive a study and their reasons for participation. Surveys may not capture the complexities of reasoning that underlie study participation. Contradictory reasons for participation, for instance an initial claim of altruism as rationale and a subsequent claim of personal benefit (therapeutic misconception), affect the quality of informed consent. Individuals apply principles through the filter of personal values and lived experience. Authentic autonomy, and hence authentic consent to research, occurs within the context of patients' unique life narratives and illness experiences.

Keywords: ethical dilemmas, open source technology, patient education, psychology of decision making

32 Fast Adjustable Threshold for Uniform Neural Network Quantization

Authors: Alexander Goncharenko, Andrey Denisov, Sergey Alyamkin, Evgeny Terentev

Abstract:

Neural network quantization is a highly desirable procedure to perform before running neural networks on mobile devices. Quantization without fine-tuning leads to an accuracy drop of the model, whereas the commonly used training with quantization is done on the full set of labeled data and is therefore both time- and resource-consuming. Real-life applications require a simplified and accelerated quantization procedure that maintains the accuracy of the full-precision neural network, especially for modern mobile architectures such as MobileNet-v1, MobileNet-v2 and MNAS. Here we present a method that significantly optimizes the training-with-quantization procedure by introducing trained scale factors for the discretization thresholds, separate for each filter. Using the proposed technique, we quantize modern mobile neural network architectures with a training set of only ∼10% of the total ImageNet 2012 sample. Such a reduction of the training dataset size, together with the small number of trainable parameters, allows the network to be fine-tuned within several hours while maintaining the high accuracy of the quantized model (the accuracy drop was less than 0.5%). Ready-to-use models and code are available in the GitHub repository.
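
To illustrate the core idea of per-filter trainable quantization thresholds, here is a minimal PyTorch sketch. The module name, bit width, initialization, and the straight-through estimator for rounding are illustrative assumptions, not the authors' released implementation.

```python
import torch

class PerFilterFakeQuant(torch.nn.Module):
    """Per-filter (per-output-channel) fake quantization with one trainable
    threshold per filter; rounding uses a straight-through estimator (assumption)."""
    def __init__(self, num_filters, num_bits=8):
        super().__init__()
        self.qmax = 2 ** (num_bits - 1) - 1
        # one trainable threshold (scale) per filter, initialised to 1.0
        self.threshold = torch.nn.Parameter(torch.ones(num_filters))

    def forward(self, w):
        # w: weight tensor with shape (num_filters, ...)
        scale = self.threshold.abs().clamp(min=1e-8) / self.qmax
        scale = scale.view(-1, *([1] * (w.dim() - 1)))   # broadcast one scale per filter
        q = w / scale
        q = q + (torch.round(q) - q).detach()            # straight-through rounding
        q = torch.clamp(q, -self.qmax - 1, self.qmax)
        return q * scale                                  # "fake-quantized" weights

# usage sketch: fake-quantize a conv layer's weights during fine-tuning
conv = torch.nn.Conv2d(16, 32, kernel_size=3)
fq = PerFilterFakeQuant(num_filters=32)
w_q = fq(conv.weight)   # differentiable w.r.t. both the weights and the thresholds
```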

Keywords: Distillation, machine learning, neural networks, quantization.

31 Environmental Efficiency of Electric Power Industry of the United States: A Data Envelopment Analysis Approach

Authors: Alexander Y. Vaninsky

Abstract:

The importance of the environmental efficiency of the electric power industry stems from the high demand for energy combined with global warming concerns. It is especially essential for the world's largest economies, such as that of the United States. The paper introduces a Data Envelopment Analysis (DEA) model of environmental efficiency using indicators of fossil fuel utilization, emissions rate, and electric power losses. Using DEA is advantageous in this situation over other approaches due to its nonparametric nature. The paper analyzes data for the period 1990-2006 by comparing actual yearly levels in each dimension with the best values of the partial indicators over the period. Positive factors of efficiency include the decline in emissions rates starting in 2000 and in electric power losses starting in 2004, together with an increasing trend in fuel utilization starting in 1999. As a result, the dynamics of environmental efficiency have been positive since 2002. The main concern is the decline in fossil fuel utilization in 2006. This negative change should be reversed to comply with ecological and economic requirements.
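
For readers unfamiliar with DEA, below is a minimal sketch of a generic input-oriented CCR efficiency score computed with scipy, treating each year as a decision-making unit. The indicator values are placeholders, and the paper's exact DEA formulation may differ from this textbook variant.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(inputs, outputs, unit):
    """Input-oriented CCR efficiency of one decision-making unit (DMU).
    inputs:  (n_units, n_inputs) array, e.g. emissions rate, power losses ("bads")
    outputs: (n_units, n_outputs) array, e.g. fuel utilization ("goods")
    Returns theta in (0, 1]; 1 means the unit lies on the efficient frontier."""
    n, m = inputs.shape
    _, s = outputs.shape
    c = np.zeros(n + 1)          # decision variables: [theta, lambda_1 ... lambda_n]
    c[0] = 1.0                   # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):           # sum_j lambda_j * x_ij <= theta * x_i,unit
        A_ub.append(np.concatenate(([-inputs[unit, i]], inputs[:, i]))); b_ub.append(0.0)
    for r in range(s):           # sum_j lambda_j * y_rj >= y_r,unit
        A_ub.append(np.concatenate(([0.0], -outputs[:, r]))); b_ub.append(-outputs[unit, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# toy example: 3 years (DMUs), 2 "bad" indicators as inputs, 1 "good" indicator as output
x = np.array([[0.60, 0.08], [0.55, 0.07], [0.58, 0.09]])   # placeholders
y = np.array([[0.33], [0.35], [0.34]])                     # placeholder
print([round(dea_ccr_efficiency(x, y, j), 3) for j in range(3)])
```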

Keywords: Environmental efficiency, electric power industry, DEA, United States.

30 A Novel Nucleus-Based Classifier for Discrimination of Osteoclasts and Mesenchymal Precursor Cells in Mouse Bone Marrow Cultures

Authors: Andreas Heindl, Alexander K. Seewald, Martin Schepelmann, Radu Rogojanu, Giovanna Bises, Theresia Thalhammer, Isabella Ellinger

Abstract:

Bone remodeling occurs by the balanced action of bone-resorbing osteoclasts (OC) and bone-building osteoblasts. Increased bone resorption by excessive OC activity contributes to malignant and non-malignant diseases including osteoporosis. To study OC differentiation and function, OC formed in in vitro cultures are currently counted manually, a tedious procedure which is prone to inter-observer differences. Aiming for an automated OC-quantification system, classification of OC and precursor cells was done on fluorescence microscope images based on the distinct appearance of fluorescent nuclei. Following ellipse fitting to the nuclei, a combination of eight features enabled clustering of OC and precursor cell nuclei. After evaluating different machine-learning techniques, LOGREG achieved 74% correctly classified OC and precursor cell nuclei, outperforming human experts (best expert: 55%). In combination with the automated detection of total cell areas, this system makes it possible to measure various cell parameters and, most importantly, to quantify proteins involved in osteoclastogenesis.
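
A minimal sketch of the classification step is given below, assuming nuclei have already been segmented into a binary mask. The particular feature set and library choices (OpenCV for ellipse fitting, scikit-learn for logistic regression) are illustrative assumptions, not the authors' exact pipeline.

```python
import cv2
import numpy as np
from sklearn.linear_model import LogisticRegression

def nucleus_features(binary_mask):
    """Fit an ellipse to every segmented nucleus and return per-nucleus shape features."""
    contours, _ = cv2.findContours(binary_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    feats = []
    for cnt in contours:
        if len(cnt) < 5:                       # fitEllipse needs at least 5 points
            continue
        (cx, cy), axes, angle = cv2.fitEllipse(cnt)
        major, minor = max(axes), min(axes)
        area = cv2.contourArea(cnt)
        perimeter = cv2.arcLength(cnt, True)
        ellipse_area = np.pi * (major / 2) * (minor / 2)
        feats.append([
            area, perimeter, major, minor,
            major / max(minor, 1e-6),                       # elongation
            area / max(ellipse_area, 1e-6),                 # fit of ellipse to blob
            4 * np.pi * area / max(perimeter ** 2, 1e-6),   # circularity
            angle,
        ])
    return np.array(feats)

# Hypothetical training data: X = features of annotated nuclei, y = 1 (OC) / 0 (precursor)
clf = LogisticRegression(max_iter=1000)
# clf.fit(nucleus_features(train_mask), train_labels)
# pred = clf.predict(nucleus_features(new_mask))
```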

Keywords: osteoclasts, machine learning, ellipse fitting.

29 The Emoji Method: An Approach for Identifying and Formulating Problem Ideas

Authors: Thorsten Herrmann, Alexander Laukemann, Hansgeorg Binz, Daniel Roth

Abstract:

For the analysis of already identified and existing problems, the pertinent literature provides a comprehensive collection of approaches and methods for analyzing the problems in detail. But coming up with problems that are assets worth pursuing further is often challenging. However, the importance of well-formulated problem ideas and their influence on subsequent creative processes are incontestable and proven. In order to meet these challenges, the Institute for Engineering Design and Industrial Design (IKTD) developed the Emoji Method. This paper presents the Emoji Method, which supports designers in generating problem ideas in a structured way. Drawing on research findings from knowledge management and innovation management, research into emojis and emoticons reveals insights for identifying and formulating problem ideas within the early design phase. The simple application and the great supporting potential of the Emoji Method within the early design phase are only a few of the many positive results of the evaluation conducted. The Emoji Method encourages designers to identify problem ideas and describe them in a structured way, in order to start generating solution ideas for the revealed problem ideas in a focused manner.

Keywords: Emojis, problem ideas, innovation management, knowledge management.

28 Awareness about HIV-Infection among HIV-Infected Individuals Attending Medical Moscow Center, Russia

Authors: Marina Nosik, Irina Rymanova, Sergei Sevostyanihin, Natalya Sergeeva, Alexander Sobkin

Abstract:

This paper presents the results of a survey regarding awareness of HIV/AIDS among HIV-infected individuals. A questionnaire covering various aspects of HIV infection was conducted among 110 HIV-infected individuals who attended the G.A. Zaharyan Moscow Tuberculosis Clinic, Department for treatment of TB patients with HIV. The questionnaire included questions about modes of HIV transmission and preventive measures against HIV/AIDS, as well as questions about age, gender, education and employment status. The survey revealed that the respondents on the whole had good knowledge regarding modes of HIV transmission and preventive measures against HIV/AIDS: about 83.6% of male respondents and 85.7% of female respondents gave accurate answers regarding HIV infection. However, the overwhelming majority of the study participants, that is, 88.5% of men and 98% of women, were quite ignorant about the risk of acquiring HIV through the saliva or toothbrush of an HIV-infected individual. Though that risk is rather insignificant, it is still biologically possible, and this gap in knowledge needs to be filled. Another point of concern revealed by the study was that, despite knowledge of the HIV transmission risk through unprotected sex, about 40% of HIV-positive men and 25% of HIV-positive women did not insist on using condoms with their sexual partners. These findings indicate that there are still some aspects of HIV infection which need to be clarified and explained through more detailed and specific educational programs.

Keywords: AIDS, HIV transmission risks, HIV misconceptions, risk behavior.

27 The Contribution of Diet and Lifestyle Factors in the Prevalence of Irritable Bowel Syndrome

Authors: Alexander Dao, Oscar Wambuguh

Abstract:

Irritable Bowel Syndrome (IBS) is a heterogeneous functional bowel disease that is characterized by chronic visceral abdominal pain and abnormal bowel function and habits. Its multifactorial pathophysiology and mechanisms are still largely a mystery to the contemporary biomedical community, although there are many hypotheses to try to explain IBS's presumed physiological, psychosocial, genetic, and environmental etiologies. IBS's symptomatic presentation is varied and divided into four major subtypes: IBS-C, IBS-D, IBS-M, and IBS-U. Given its diverse presentation and unclear mechanisms, diagnosis is done through a combination of positive identification utilizing the "Rome IV Irritable Bowel Syndrome Criteria" (Rome IV) diagnostic criteria while also excluding other potential conditions with similar symptoms. Treatment of IBS is focused on the management of symptoms using an assortment of pharmaceuticals, lifestyle changes, and dietary changes, with future potential in microbial treatment and psychotherapy as other therapy methods. Its chronic, heterogeneous nature and disruptive gastrointestinal (GI) symptoms are negatively impactful on patients' daily lives, health systems, and society. However, with a better understanding of the gaps in knowledge and technological advances in IBS's pathophysiology, management, and treatment options, there is optimism for the millions of people worldwide who are suffering from the debilitating effects of IBS.

Keywords: Irritable bowel syndrome, lifestyle, diet, functional gastrointestinal disorder.

26 Automated Testing of Workshop Robot Behavior

Authors: Arne Hitzmann, Philipp Wentscher, Alexander Gabel, Reinhard Gerndt

Abstract:

Autonomous mobile robots can be found in a wide field of applications. Their types range from household robots over workshop robots to autonomous cars and many more. All of them undergo a number of testing steps during development, production and maintenance. This paper describes an approach to improve the testing of robot behavior. It was inspired by the RoboCup @work competition, which itself reflects a robotics benchmark for industrial robotics. There, scaled-down versions of mobile industrial robots have to navigate through a workshop-like environment or operation area and have to perform tasks of manipulating and transporting work pieces. This paper introduces an approach for automated vision-based testing of the behavior of the so-called youBot robot, which is the most widely used robot platform in the RoboCup @work competition. The proposed system allows automated testing of multiple tries of the robot to perform a specific mission, and it allows for the flexibility of the robot, e.g. selecting different paths between two tasks within a mission. The approach is based on a multi-camera setup using off-the-shelf cameras and optical markers. It has been applied for test-driven development (TDD) and maintenance-like verification of the robot behavior and performance.

Keywords: Supervisory control, Testing, Markers, Mono Vision, Automation.

25 Analysis of Translational Ship Oscillations in a Realistic Environment

Authors: Chen Zhang, Bernhard Schwarz-Röhr, Alexander Härting

Abstract:

To acquire accurate ship motions at the center of gravity, a single low-cost inertial sensor is utilized and applied on board to measure the ship's oscillating motions. As observations, the three-axis accelerations and three-axis rotational rates provided by the sensor are used. The mathematical model for processing the observation data includes determination of the distance vector between the sensor and the center of gravity in the x, y, and z directions. After setting up the transfer matrix from the sensor's own coordinate system to the ship's body frame, an extended Kalman filter is applied to deal with the nonlinearities between the ship motion in the body frame and the observation information in the sensor's frame. As a side effect, the method eliminates sensor noise and other unwanted errors. Results are not only roll and pitch, but also linear motions, in particular heave and surge at the center of gravity. For testing, we resort to measurements recorded on a small vessel in a well-defined sea state. With response amplitude operators computed numerically by commercial software (Seaway), motion characteristics are estimated. These agree well with the measurements after processing with the suggested method.
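
For orientation, a generic extended Kalman filter predict/update cycle in numpy is sketched below. The state layout and the process/measurement models are placeholders for the ship-motion formulation described above, not the authors' actual equations.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One predict/update cycle of an extended Kalman filter.
    x, P : state estimate and covariance
    u, z : control input (e.g. rotational rates) and measurement (e.g. accelerations)
    f, h : nonlinear process and measurement models; F_jac, H_jac are their Jacobians
    Q, R : process and measurement noise covariances"""
    # predict
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # update
    H = H_jac(x_pred)
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```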

Keywords: Extended Kalman filter, nonlinear estimation, sea trial, ship motion estimation.

24 Application of a Similarity Measure for Graphs to Web-based Document Structures

Authors: Matthias Dehmer, Frank Emmert Streib, Alexander Mehler, Jürgen Kilian, Max Mühlhauser

Abstract:

Due to the tremendous amount of information provided by the World Wide Web (WWW), developing methods for mining the structure of web-based documents is of considerable interest. In this paper we present a similarity measure for graphs representing web-based hypertext structures. Our similarity measure is mainly based on a novel representation of a graph as linear integer strings, whose components represent structural properties of the graph. The similarity of two graphs is then defined as the optimal alignment of the underlying property strings. In this paper we apply the well-known technique of sequence alignment for solving a novel and challenging problem: measuring the structural similarity of generalized trees. In other words, we first transform our graphs, considered as high-dimensional objects, into linear structures. Then we derive similarity values from the alignments of the property strings in order to measure the structural similarity of generalized trees. Hence, we transform a graph similarity problem into a string similarity problem in order to develop an efficient graph similarity measure. We demonstrate that our similarity measure captures important structural information by applying it to two different test sets consisting of graphs representing web-based document structures.
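
The toy sketch below illustrates the general idea only: it derives one very simple property string per graph (out-degree sequence ordered by breadth-first level) and scores a global alignment of the two strings. The actual measure uses richer structural properties and a tuned alignment scoring; the scoring constants here are assumptions.

```python
import networkx as nx

def property_string(G, root):
    """Out-degree sequence of the graph ordered by BFS level from the root (toy property string)."""
    levels = nx.single_source_shortest_path_length(G, root)
    order = sorted(levels, key=lambda v: (levels[v], v))
    return [G.out_degree(v) for v in order]

def align_score(a, b, match=1.0, gap=-0.5):
    """Needleman-Wunsch style global alignment of integer strings; higher means more similar."""
    m, n = len(a), len(b)
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1): D[i][0] = i * gap
    for j in range(1, n + 1): D[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = match if a[i - 1] == b[j - 1] else -abs(a[i - 1] - b[j - 1])
            D[i][j] = max(D[i - 1][j - 1] + sub, D[i - 1][j] + gap, D[i][j - 1] + gap)
    return D[m][n] / max(m, n)   # normalise by string length

G1 = nx.DiGraph([(0, 1), (0, 2), (1, 3), (2, 4)])
G2 = nx.DiGraph([(0, 1), (0, 2), (1, 3)])
print(align_score(property_string(G1, 0), property_string(G2, 0)))
```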

Keywords: Graph similarity, hierarchical and directed graphs, hypertext, generalized trees, web structure mining.

23 Estimating Bridge Deterioration for Small Data Sets Using Regression and Markov Models

Authors: Yina F. Muñoz, Alexander Paz, Hanns De La Fuente-Mella, Joaquin V. Fariña, Guilherme M. Sales

Abstract:

The primary approach for estimating bridge deterioration uses Markov-chain models and regression analysis. Traditional Markov models have problems in estimating the required transition probabilities when a small sample size is used. Often, reliable bridge data have not been collected over long periods, thus large data sets may not be available. This study presents an important change to the traditional approach by using the Small Data Method to estimate transition probabilities. The results illustrate that the Small Data Method and the traditional approach both provide similar estimates; however, the former method provides results that are more conservative. That is, the Small Data Method provided slightly lower bridge condition ratings than the traditional approach. Considering that bridges are critical infrastructure, the Small Data Method, which uses more information and provides more conservative estimates, may be more appropriate when the available sample size is small. In addition, regression analysis was used to calculate bridge deterioration. Condition ratings were determined for bridge groups, and the best regression model was selected for each group. The results obtained were very similar to those obtained when using Markov chains; however, it is desirable to use more data for better results.
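
As background for the Markov-chain part, here is a minimal numpy sketch of how a transition matrix propagates a bridge-condition distribution over time. The five condition states, the condition-rating scale, and the transition probabilities are illustrative placeholders, not estimates from this study.

```python
import numpy as np

# Hypothetical 5-state condition scale (state 0 = best, 4 = worst); a bridge either
# stays in its state or deteriorates by one state per year (upper-triangular matrix).
P = np.array([
    [0.85, 0.15, 0.00, 0.00, 0.00],
    [0.00, 0.88, 0.12, 0.00, 0.00],
    [0.00, 0.00, 0.90, 0.10, 0.00],
    [0.00, 0.00, 0.00, 0.92, 0.08],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # all bridges start in the best condition
ratings = np.array([9, 8, 7, 6, 5])           # condition rating attached to each state (assumed)

for year in range(0, 31, 10):
    print(f"year {year:2d}: expected condition rating = {state @ ratings:.2f}")
    state = state @ np.linalg.matrix_power(P, 10)   # advance 10 years
```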

Keywords: Concrete bridges, deterioration, Markov chains, probability matrix.

22 Measuring the Structural Similarity of Web-based Documents: A Novel Approach

Authors: Matthias Dehmer, Frank Emmert Streib, Alexander Mehler, Jürgen Kilian

Abstract:

Most known methods for measuring the structural similarity of document structures are based on, e.g., tag measures, path metrics and tree measures in terms of their DOM-Trees. Other methods measure the similarity in the framework of the well-known vector space model. In contrast to these, we present a new approach to measuring the structural similarity of web-based documents represented by so-called generalized trees, which are more general than DOM-Trees, which represent only directed rooted trees. We design a new similarity measure for graphs representing web-based hypertext structures. Our similarity measure is mainly based on a novel representation of a graph as strings of linear integers, whose components represent structural properties of the graph. The similarity of two graphs is then defined as the optimal alignment of the underlying property strings. In this paper we apply the well-known technique of sequence alignment to solve a novel and challenging problem: measuring the structural similarity of generalized trees. More precisely, we first transform our graphs, considered as high-dimensional objects, into linear structures. Then we derive similarity values from the alignments of the property strings in order to measure the structural similarity of generalized trees. Hence, we transform a graph similarity problem into a string similarity problem. We demonstrate that our similarity measure captures important structural information by applying it to two different test sets consisting of graphs representing web-based documents.

Keywords: Graph similarity, hierarchical and directed graphs, hypertext, generalized trees, web structure mining.

21 Optimization of Technical and Technological Solutions for the Development of Offshore Hydrocarbon Fields in the Kaliningrad Region

Authors: Pavel Shcherban, Viktoria Ivanova, Alexander Neprokin, Vladislav Golovanov

Abstract:

Currently, LLC «Lukoil-Kaliningradmorneft» is implementing a comprehensive program for the development of offshore fields of the Kaliningrad region. This is largely associated with the depletion of the onshore resource base in the region, as well as with the positive results of geological investigation of the surrounding Baltic Sea area and the data on the volume of hydrocarbon recovery from the single offshore field currently operating in the Kaliningrad region – D-6 «Kravtsovskoye». The article analyzes the main stages of LLC «Lukoil-Kaliningradmorneft»'s program for the development of the hydrocarbon resources of the region's shelf and suggests an optimization algorithm that allows managing the multi-criteria process of developing shelf deposits. The algorithm is formed on the basis of the problem of sequential decision making, which is a branch of dynamic programming. Application of the algorithm during the consolidation of the initial data, the elaboration of project documentation, and the further exploration and development of offshore fields will make it possible to optimize the complex of technical and technological solutions and increase the economic efficiency of the field development project implemented by LLC «Lukoil-Kaliningradmorneft».

Keywords: Offshore fields of hydrocarbons of the Baltic Sea, Development of offshore oil and gas fields, Optimization of the field development scheme, Solution of multi-criteria tasks in the oil and gas complex, Quality management of technical and technological processes.

20 Rank-Based Chain-Mode Ensemble for Binary Classification

Authors: Chongya Song, Kang Yen, Alexander Pons, Jin Liu

Abstract:

In the field of machine learning, ensembles have been employed as a common methodology to improve performance over multiple base classifiers. However, the true predictions are often canceled out by the false ones during consensus due to a phenomenon called the “curse of correlation”, which manifests as strong interference among the predictions produced by the base classifiers. In addition, the existing practices are still not able to effectively mitigate the problem of imbalanced classification. Based on the analysis of our experimental results, we conclude that the two problems are caused by some inherent deficiencies in the consensus approach. Therefore, we create an enhanced ensemble algorithm which adopts a designed rank-based chain-mode consensus to overcome the two problems. In order to evaluate the proposed ensemble algorithm, we employ the well-known benchmark data set NSL-KDD (the improved version of the KDDCup99 dataset produced by the University of New Brunswick) to make comparisons between the proposed algorithm and 8 common ensemble algorithms. In particular, each compared ensemble classifier uses the same 22 base classifiers, so that the differences in terms of the improvements in accuracy and reliability over the base classifiers can be truly revealed. As a result, the proposed rank-based chain-mode consensus is shown to be a more effective ensemble solution than the traditional consensus approach, outperforming the 8 ensemble algorithms by 20% on almost all compared metrics, which include accuracy, precision, recall, F1-score and area under the receiver operating characteristic curve.
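
The chain-mode consensus itself is not fully specified in the abstract; the sketch below therefore shows only a generic rank-based consensus baseline (averaging each sample's score rank across base classifiers), i.e. the family of approaches being improved upon, not the authors' algorithm.

```python
import numpy as np
from scipy.stats import rankdata

def rank_consensus(score_matrix):
    """Generic rank-based consensus for binary classification.
    score_matrix: (n_classifiers, n_samples) positive-class scores from base classifiers.
    Scores are converted to ranks per classifier (reducing scale/calibration differences),
    ranks are averaged, and samples above the median averaged rank are labelled positive."""
    ranks = np.vstack([rankdata(s) for s in score_matrix])
    mean_rank = ranks.mean(axis=0)
    return (mean_rank > np.median(mean_rank)).astype(int), mean_rank

# toy example: 3 base classifiers scoring 6 samples
scores = np.array([
    [0.1, 0.8, 0.4, 0.9, 0.2, 0.7],
    [0.2, 0.6, 0.5, 0.8, 0.1, 0.9],
    [0.3, 0.7, 0.2, 0.95, 0.15, 0.6],
])
labels, consensus = rank_consensus(scores)
print(labels, consensus)
```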

Keywords: Consensus, curse of correlation, imbalanced classification, rank-based chain-mode ensemble.

19 The Low-Cost Design and 3D Printing of Structural Knee Orthotics for Athletic Knee Injury Patients

Authors: Alexander Hendricks, Sean Nevin, Clayton Wikoff, Melissa Dougherty, Jacob Orlita, Rafiqul Noorani

Abstract:

Knee orthotics play an important role in aiding the recovery of those with knee injuries, especially athletes. However, structural knee orthotics are often very expensive, ranging between $300 and $800. The primary reason for this project was to answer the question: can 3D-printed orthotics represent a viable and cost-effective alternative to present structural knee orthotics? The primary objective of this research project was to design a knee orthotic for athletes with knee injuries for a low cost under $100 and evaluate its effectiveness. The initial design for the orthotic was done in SolidWorks, a computer-aided design (CAD) software available at Loyola Marymount University. After this design was completed, finite element analysis (FEA) was utilized to understand how normal stresses placed upon the knee affected the orthotic. The knee orthotic was then adjusted and redesigned to meet a specified factor of safety of 3.25 based on the data gathered during FEA and literature sources. Once the FEA was completed and the orthotic was redesigned based on the data gathered, the next step was to 3D-print the first design of the knee brace. Subsequently, physical therapy movement trials were used to evaluate physical performance. Using the data from these movement trials, the CAD design of the brace was refined to accommodate the design requirements. The final goal of this research is to explore the possibility of replacing high-cost, outsourced knee orthotics with a readily available low-cost alternative.
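
As a small illustration of the factor-of-safety check used to iterate the design, the snippet below compares an assumed material yield strength against an assumed peak FEA stress; the actual values come from the FEA study and literature sources, not from this sketch.

```python
# Hypothetical factor-of-safety check for the 3D-printed brace material (values are assumptions).
yield_strength_mpa = 45.0      # e.g. a typical printable polymer, assumed
peak_fea_stress_mpa = 12.0     # peak stress reported by an FEA run, assumed
target_fos = 3.25              # design requirement cited in the abstract

fos = yield_strength_mpa / peak_fea_stress_mpa
print(f"factor of safety = {fos:.2f}", "OK" if fos >= target_fos else "redesign needed")
```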

Keywords: Knee Orthotics, 3D printing, finite element analysis.

18 A Multi-Science Study of Modern Synergetic War and Its Information Security Component

Authors: Alexander G. Yushchenko

Abstract:

From a multi-science point of view, we analyze threats to security resulting from the globalization of the international information space and the information and communication aggression of Russia. A definition of Ruschism is formulated as an ideology supporting the aggressive actions of modern Russia against the Euro-Atlantic community. Stages of the hybrid war Russia is waging against Ukraine are described, including the elements of subversive activity of the special services, the activation of the military phase and the gradual shift of the focus of confrontation to the realm of information and communication technologies. We reveal an emerging threat to democratic states resulting from the destabilizing impact of a target state's mass media and social networks being exploited by Russian secret services under a freedom-of-speech disguise. Thus, we underline the vulnerability of the cyber- and information security of the network society with regard to hybrid war. We propose to define the latter as a synergetic war. Our analysis is supported by long-term qualitative monitoring of the representation of top state officials on popular TV channels and Facebook. From the memetics point of view, we have detected a destructive psycho-information technology used by the Kremlin, a kind of information catastrophe, the essence of which is explained in detail. In the conclusion, a comprehensive plan for the information protection of the public consciousness and mentality of Euro-Atlantic citizens from the aggression of the enemy is proposed.

Keywords: Cyber and information security, psycho-information technology, hybrid war, synergetic war, WWIII, Ruschism.

17 Stimulation of Stevioside Accumulation on Stevia rebaudiana (Bertoni) Shoot Culture Induced with Red LED Light in TIS RITA® Bioreactor System

Authors: Vincent Alexander, Rizkita Esyanti

Abstract:

Leaves of Stevia rebaudiana contain steviol glycosides, which mainly comprise stevioside, a natural sweetener compound that is 100-300 times sweeter than sucrose. The current cultivation method of Stevia rebaudiana in Indonesia has yet to reach its optimum efficiency and productivity to produce stevioside as a safe sugar-substitute sweetener for people with diabetes. An alternative method that is not limited by environmental factors is the in vitro temporary immersion system (TIS) culture method using the recipient for automated immersion (RITA®) bioreactor. The aim of this research was to evaluate the effect of red LED light induction on shoot growth and stevioside accumulation in the TIS RITA® bioreactor system, as an endeavour to increase secondary metabolite synthesis. The results showed that the stevioside accumulation in the TIS RITA® bioreactor system induced with red LED light for one hour during the night was higher than that in the TIS RITA® bioreactor system without red LED light induction, i.e. 71.04 ± 5.36 μg/g and 42.92 ± 5.40 μg/g respectively. The biomass growth rate reached as high as 0.072 ± 0.015/day for the red-LED-induced TIS RITA® bioreactor system, whereas the TIS RITA® bioreactor system without induction reached only 0.046 ± 0.003/day. The productivity of Stevia rebaudiana shoots induced with red LED light was 0.065 g/L medium/day, whilst that of shoots without any induction was 0.041 g/L medium/day. Sucrose, salt, and inorganic consumption in both bioreactor media increased as biomass increased. It can be concluded that Stevia rebaudiana shoots in the TIS RITA® bioreactor induced with red LED light produce more biomass and accumulate a higher stevioside concentration, in comparison to the bioreactor without any light induction.
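
For readers unfamiliar with the reported quantities, the small sketch below shows how a specific growth rate and a volumetric productivity of this kind are typically computed; the biomass values, culture duration, and medium volume are placeholders, not the measured data.

```python
import math

# Hypothetical dry-weight measurements for one bioreactor run (placeholders)
w_initial_g, w_final_g = 1.00, 2.05   # biomass at start and end of culture
days = 10.0
medium_volume_l = 0.5

specific_growth_rate = math.log(w_final_g / w_initial_g) / days          # 1/day
productivity = (w_final_g - w_initial_g) / medium_volume_l / days        # g / L medium / day
print(f"mu = {specific_growth_rate:.3f}/day, productivity = {productivity:.3f} g/L/day")
```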

Keywords: LED, Stevia rebaudiana, Stevioside, TIS RITA.

16 Context Detection in Spreadsheets Based on Automatically Inferred Table Schema

Authors: Alexander Wachtel, Michael T. Franzen, Walter F. Tichy

Abstract:

Programming requires years of training. With natural language and end-user development methods, programming could become available to everyone. It enables end users to program their own devices and extend the functionality of the existing system without any knowledge of programming languages. In this paper, we describe an Interactive Spreadsheet Processing Module (ISPM), a natural language interface to spreadsheets that allows users to address ranges within the spreadsheet based on an inferred table schema. Using the ISPM, end users are able to search for values in the schema of the table and to address the data in spreadsheets implicitly. Furthermore, it enables them to select and sort the spreadsheet data by using natural language. ISPM uses a machine learning technique to automatically infer areas within a spreadsheet, including different kinds of headers and data ranges. Since ranges can be identified from natural language queries, end users can query the data using natural language. During the evaluation, 12 undergraduate students were asked to perform operations (sum, sort, group and select) using the system and also Excel without the ISPM interface, and the time taken for task completion was compared across the two systems. Only for the selection task did users take less time in Excel (since they directly selected the cells using the mouse) than in ISPM. By using natural language for end-user software engineering, the approach aims to overcome the present bottleneck of professional developers.
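
To make one of the sub-tasks above concrete, here is a toy heuristic for inferring which spreadsheet row is the header by checking where text cells give way to numeric cells. The ISPM itself uses a trained machine-learning model rather than this hand-written rule.

```python
def looks_numeric(cell):
    try:
        float(str(cell).replace(",", ""))
        return True
    except ValueError:
        return False

def infer_header_row(rows):
    """Return the index of the most likely header row: a mostly-textual row
    immediately followed by mostly-numeric data rows (simple heuristic)."""
    for i in range(len(rows) - 1):
        text_ratio = sum(not looks_numeric(c) for c in rows[i]) / max(len(rows[i]), 1)
        numeric_below = sum(looks_numeric(c) for c in rows[i + 1]) / max(len(rows[i + 1]), 1)
        if text_ratio > 0.5 and numeric_below > 0.5:
            return i
    return 0

sheet = [["Region", "Q1", "Q2"], ["North", 120, 135], ["South", 90, 88]]
print(infer_header_row(sheet))   # -> 0
```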

Keywords: Natural language processing, end user development, natural language interfaces, human computer interaction, data recognition, dialog systems, spreadsheet.

15 Analysis and Remediation of Fecal Coliform Bacteria Pollution in Selected Surface Water Bodies of Enugu State of Nigeria

Authors: Chime Charles C., Ikechukwu Alexander Okorie, Ekanem E.J., Kagbu J. A.

Abstract:

The assessment of surface waters in the Enugu metropolis for fecal coliform bacteria was undertaken. Enugu urban was divided into three areas (A1, A2 and A3), and fecal coliform bacteria were analysed in the surface waters found in these areas for four years (2005-2008). The plate count method was used for the analyses. Data generated were subjected to statistical tests involving: normality test, homogeneity of variance test, correlation test, and tolerance limit test. The influence of seasonality and pollution trends were investigated using time series plots. Results from the tolerance limit test at 95% coverage with 95% confidence, and with respect to the EU maximum permissible concentration, show that the three areas suffer from fecal coliform pollution. To this end, a remediation procedure involving the use of saw-dust extracts from three woods, namely Chlorophora-Excelsa (C-Excelsa), Khayan-Senegalensis (K-Senegalensis) and Erythrophylum-Ivorensis (E-Ivorensis), in controlling the coliforms was studied. Results show that the mixture of the acetone extracts of the woods shows the most effective antibacterial inhibitory activity (26.00 mm zone of inhibition) against E. coli. The methanol extract mixture of the three woods gave the best inhibitory activity (26.00 mm zone of inhibition) against S. aureus, and a 25.00 mm zone of inhibition against E. aerogenes. The aqueous extract mixture gave acceptable zones of inhibition against the three bacterial organisms.

Keywords: Coliform bacteria, Pollution, Remediation, Saw-dust

14 Social Media Idea Ontology: A Concept for Semantic Search of Product Ideas in Customer Knowledge through User-Centered Metrics and Natural Language Processing

Authors: Martin Häusl, Maximilian Auch, Johannes Forster, Peter Mandl, Alexander Schill

Abstract:

In order to survive on the market, companies must constantly develop improved and new products. These products are designed to serve the needs of their customers in the best possible way. The creation of new products is also called innovation and is primarily driven by a company's internal research and development department. However, a new approach has been taking place for some years now, involving external knowledge in the innovation process. This approach is called open innovation and identifies customer knowledge as the most important source in the innovation process. This paper presents a concept for using social media posts as an external source to support the open innovation approach in its initial phase, the Ideation phase. For this purpose, the social media posts are semantically structured with the help of an ontology, and the authors are evaluated using graph-theoretical metrics such as density. For the structuring and evaluation of relevant social media posts, we also use findings from Natural Language Processing, e.g. Named Entity Recognition, specific dictionaries, Triple Tagger and Part-of-Speech-Tagger. The selection and evaluation of the tools used are discussed in this paper. Using our ontology and metrics to structure social media posts enables users to semantically search these posts for new product ideas and thus gain improved insight into external sources such as customer needs.
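
As a small illustration of the author-evaluation step, the sketch below builds a toy interaction graph of post authors and computes the density metric with networkx; the nodes and edges are placeholders, not data from the study.

```python
import networkx as nx

# Hypothetical interaction graph: nodes are social media authors, edges are replies/mentions.
G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"), ("dave", "alice"),
])

# density = 2|E| / (|V|(|V|-1)) for an undirected graph; one possible user-centred metric
print("density:", nx.density(G))
print("degree centrality:", nx.degree_centrality(G))
```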

Keywords: Idea ontology, innovation management, open innovation, semantic search.

13 Knowledge Transfer among Cross-Functional Teams as a Continual Improvement Process

Authors: Sergio Mauricio Pérez López, Luis Rodrigo Valencia Pérez, Juan Manuel Peña Aguilar, Adelina Morita Alexander

Abstract:

The culture of continuous improvement in organizations is very important, as it represents a source of competitive advantage. This article discusses the transfer of knowledge between companies which formed cross-functional teams and used a dynamic model for knowledge creation as a framework. In addition, the article discusses the structure of cognitive assets in companies and the concept of "stickiness" (which is defined as an obstacle to the transfer of knowledge). The purpose of this analysis is to show that an improvement in the attitude of individual members of an organization creates opportunities, and that an exchange of information and knowledge leads to generating continuous improvements in the company as a whole. This article also discusses the importance of creating the proper conditions for sharing tacit knowledge. By narrowing gaps between people, mutual trust can be created, which in turn contributes to an increase in sharing. The concept of adapting knowledge to new environments will be highlighted, as it is essential for companies to translate and modify information so that such information can fit the context of receiving organizations. Adaptation will ensure that the transfer process is carried out smoothly by preventing "stickiness". When developing the transfer process on cross-functional teams (as opposed to working groups), the team acquires the flexibility and responsiveness necessary to meet objectives. These types of cross-functional teams also generate synergy due to the array of different work backgrounds of their members. When synergy is established, a culture of continuous improvement is created.

Keywords: Knowledge transfer, continuous improvement, teamwork, cognitive assets.

12 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect a change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes they are conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, in order to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Then mass functions are calculated, and related combination rules are applied to combine the mass values among all sensors. Furthermore, we apply the method to estimate the minimum number of sensors needed for combination, so that computational efficiency can be improved. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
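
The condensed numpy sketch below shows the three ingredients named above in isolation: KL divergence of an estimated distribution from pre-/post-change models, a Dempster-style combination of two sensors' mass functions over the frame {pre, post}, and a CUSUM statistic. The frame, the example mass values and the threshold are illustrative only, not the paper's configuration.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D(p || q) between two discrete distributions."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

def dempster_combine(m1, m2):
    """Combine two mass functions over {pre, post, unknown} (unknown = full frame)."""
    keys = ("pre", "post", "unknown")
    combined = {k: 0.0 for k in keys}
    conflict = 0.0
    for a in keys:
        for b in keys:
            w = m1[a] * m2[b]
            if {a, b} == {"pre", "post"}:
                conflict += w                      # contradictory evidence
            else:
                target = a if a != "unknown" else b
                combined[target] += w              # intersection of the focal sets
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

def cusum(scores, threshold=5.0):
    """Cumulative sum test on a sequence of log-likelihood-ratio-like scores."""
    s = 0.0
    for t, x in enumerate(scores):
        s = max(0.0, s + x)
        if s > threshold:
            return t                               # declared change time
    return None

m_sensor1 = {"pre": 0.2, "post": 0.6, "unknown": 0.2}
m_sensor2 = {"pre": 0.1, "post": 0.7, "unknown": 0.2}
print(dempster_combine(m_sensor1, m_sensor2))
print(kl_divergence([0.5, 0.5], [0.8, 0.2]))
```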

Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data.

11 Methane versus Carbon Dioxide: Mitigation Prospects

Authors: Alexander J. Severinsky, Allen L. Sessoms

Abstract:

Atmospheric carbon dioxide (CO2) has dominated the discussion around the causes of climate change. This is a reflection of the 100-year time horizon for all greenhouse gases that has become the norm. The 100-year time horizon is much too long – and yet, almost all mitigation efforts, including those set in the near-term frame of within 30 years, are still geared toward it. In this paper, we show that for a 30-year time horizon, methane (CH4) is the greenhouse gas whose radiative forcing exceeds that of CO2. In our analysis, we use the radiative forcing of greenhouse gases in the atmosphere, because it directly affects the rise in temperature on Earth. We found that in 2019, the radiative forcing (RF) of methane was ~2.5 W/m2 and that of carbon dioxide was ~2.1 W/m2. Under a business-as-usual (BAU) scenario until 2050, such forcing would be ~2.8 W/m2 and ~3.1 W/m2 respectively. There is a substantial spread in the data for anthropogenic and natural methane (CH4) emissions, along with leakages of natural gas (which is primarily CH4) from industrial production to consumption. For this reason, we estimate the minimum and maximum effects of a reduction of these leakages, and assume an effective immediate reduction of 80%. Such action may serve to reduce the annual radiative forcing of all CH4 emissions by ~15% to ~30%. This translates into a reduction of RF by 2050 from ~2.8 W/m2 to ~2.5 W/m2 in the case of the minimum effect that can be expected, and to ~2.15 W/m2 in the case of the maximum effort to reduce methane leakages. Under BAU, we find that the RF of CO2 will increase from ~2.1 W/m2 now to ~3.1 W/m2 by 2050. We assume a linear reduction of 50% in anthropogenic emissions over the course of the next 30 years, which would reduce the radiative forcing of CO2 from ~3.1 W/m2 to ~2.9 W/m2. In the case of "net zero," the reduction of the other 50% of anthropogenic CO2 emissions would be limited to removal either from the emitting sources or directly from the atmosphere. In this instance, the total reduction would be from ~3.1 W/m2 to ~2.7 W/m2, or ~0.4 W/m2. To achieve the same radiative forcing as in the scenario of maximum reduction of methane leakages, ~2.15 W/m2, an additional reduction of the radiative forcing of CO2 of approximately 2.7 - 2.15 = 0.55 W/m2 would be needed. In total, one would need to remove ~660 GT of CO2 from the atmosphere in order to match the maximum reduction of current methane leakages, and ~270 GT of CO2 from emitting sources, to reach "negative emissions". This amounts to over 900 GT of CO2.

Keywords: Methane Leakages, Methane Radiative Forcing, Methane Mitigation, Methane Net Zero.

10 A Comparative Study on Biochar from Slow Pyrolysis of Corn Cob and Cassava Wastes

Authors: Adilah Shariff, Nurhidayah Mohamed Noor, Alexander Lau, Muhammad Azwan Mohd Ali

Abstract:

Biomass such as corn and cassava wastes, if left to decay, will release significant quantities of greenhouse gases (GHG), including carbon dioxide and methane. The biomass wastes can be converted into biochar via a thermochemical process such as slow pyrolysis. This approach can reduce the biomass wastes as well as preserve their carbon content. Biochar has the potential to be used as a carbon sequester and soil amendment. The aim of this study is to investigate the characteristics of corn cob, cassava stem, and cassava rhizome in order to identify their potential as pyrolysis feedstocks for biochar production. This was achieved by using proximate and elemental analyses as well as calorific value and lignocellulosic determination. The second objective is to investigate the effect of pyrolysis temperature on the biochar produced. A fixed-bed slow pyrolysis reactor was used to pyrolyze the corn cob, cassava stem, and cassava rhizome. The pyrolysis temperatures were varied between 400 °C and 600 °C, while the heating rate and the holding time were fixed at 5 °C/min and 1 hour, respectively. Corn cob, cassava stem, and cassava rhizome were found to be suitable feedstocks for the pyrolysis process because they contained a high percentage of volatile matter, more than 80 mf wt.%. All three feedstocks contained low nitrogen and sulphur content, less than 1 mf wt.%. Therefore, during the pyrolysis process, the feedstocks give off very low rates of GHGs such as nitrogen oxides and sulphur oxides. Independent of the type of biomass, the percentage of biochar yield is inversely proportional to the pyrolysis temperature. The highest biochar yield for each studied temperature is from slow pyrolysis of cassava rhizome, as this feedstock contained the highest percentage of ash compared to the other two feedstocks. The percentage of fixed carbon in all the biochars increased as the pyrolysis temperature increased. The increase of pyrolysis temperature from 400 °C to 600 °C increased the fixed carbon of corn cob biochar, cassava stem biochar and cassava rhizome biochar by 26.35%, 10.98%, and 6.20% respectively. Irrespective of the pyrolysis temperature, all the biochars produced were found to contain more than 60 mf wt.% fixed carbon content, much higher than their feedstocks.

Keywords: Biochar, biomass, cassava wastes, corn cob, pyrolysis.

9 Nonlinear Transformation of Laser Generated Ultrasonic Pulses in Geomaterials

Authors: Elena B. Cherepetskaya, Alexander A. Karabutov, Natalia B. Podymova, Ivan Sas

Abstract:

Nonlinear evolution of broadband ultrasonic pulses passed through rock specimens is studied using the apparatus “GEOSCAN-02M”. Ultrasonic pulses are excited by the pulses of a Q-switched Nd:YAG laser with a time duration of 10 ns and an energy of 260 mJ. This energy can be reduced to 20 mJ by light filters. The laser beam radius did not exceed 5 mm. As a result of the absorption of the laser pulse in the special material – the optoacoustic generator – pulses of longitudinal ultrasonic waves are excited with a time duration of 100 ns and a maximum pressure amplitude of 10 MPa. The immersion technique is used to measure the parameters of these ultrasonic pulses passed through a specimen; the immersion liquid is distilled water. The reference pulse passed through the cell with water has compression and rarefaction phases. The amplitude of the rarefaction phase is five times lower than that of the compression phase. The spectral range of the reference pulse reaches 10 MHz. Cube-shaped specimens of Karelian gabbro are studied, with a rib length of 3 cm. The ultimate strength of the specimens under uniaxial compression is (300±10) MPa. As the reference pulse passes through an area of the specimen without cracks, the compression phase decreases and the rarefaction one increases due to diffraction and scattering of ultrasound, so the ratio of these phases becomes 2.3:1. After preloading, some horizontal cracks appear in the specimens. Their location is found by one-sided scanning of the specimen using backward-mode detection of the ultrasonic pulses reflected from the structural defects. Using computer processing of these signals, images are obtained of the cross-sections of the specimens with cracks. With the increase of the reference pulse amplitude from 0.1 MPa to 5 MPa, the nonlinear transformation of the ultrasonic pulse passed through the specimen with horizontal cracks results in a decrease by 2.5 times of the amplitude of the rarefaction phase and an increase of its duration by 2.1 times. With the increase of the reference pulse amplitude from 5 MPa to 10 MPa, time splitting of the phases is observed for the bipolar pulse passed through the specimen. The compression and rarefaction phases propagate with different velocities. These features of the powerful broadband ultrasonic pulses passed through the rock specimens can be described by the Preisach-Mayergoyz hysteresis model and can be used for the location of cracks in optically opaque materials.

Keywords: Cracks, geological materials, nonlinear evolution of ultrasonic pulses, rock.

8 An E-Maintenance IoT Sensor Node Designed for Fleets of Diverse Heavy-Duty Vehicles

Authors: George Charkoftakis, Panagiotis Liosatos, Nicolas-Alexander Tatlas, Dimitrios Goustouridis, Stelios M. Potirakis

Abstract:

E-maintenance is a relatively recent concept, generally referring to maintenance management by monitoring assets over the Internet. One of the key links in the chain of an e-maintenance system is data acquisition and transmission. Specifically for the case of a fleet of heavy-duty vehicles, where the main challenge is the diversity of the vehicles and of the vehicle-embedded self-diagnostic/reporting technologies, the design of the data acquisition and transmission unit is a demanding task. This is clear if one takes into account that a heavy-vehicle fleet assortment may range from vehicles with only a limited number of analog sensors monitored by dashboard light indicators and gauges to vehicles with a plethora of sensors monitored by a vehicle computer producing digital reporting. The present work proposes an adaptable Internet of Things (IoT) sensor node that is capable of addressing this challenge. The proposed sensor node architecture is based on the increasingly popular single-board computer – expansion boards approach. In the proposed solution, the expansion boards undertake the tasks of position identification, cellular connectivity, connectivity to the vehicle computer, and connectivity to analog and digital sensors by means of a specially targeted expansion board design. Specifically, the latter offers a number of adaptability features to cope with the diverse sensor types employed in different vehicles. In standard mode, the IoT sensor node communicates with the data center through the cellular network, transmitting all digital/digitized sensor data, the IoT device identity and the position. Moreover, the proposed IoT sensor node offers connectivity, through WiFi and an appropriate application, to smartphones or tablets, allowing the registration of additional vehicle- and driver-specific information; these data are also forwarded to the data center. All control and communication tasks of the IoT sensor node are performed by dedicated firmware.

Keywords: IoT sensor nodes, e-maintenance, single-board computers, sensor expansion boards, on-board diagnostics

7 Vibroacoustic Modulation of Wideband Vibrations and Its Possible Application for Windmill Blade Diagnostics

Authors: Abdullah Alnutayfat, Alexander Sutin, Dong Liu

Abstract:

Wind turbines have become one of the most popular energy production methods. However, failure of blades and maintenance costs are significant issues in the wind power industry, so it is essential to detect initial blade defects to avoid the collapse of the blades and the structure. This paper aims to apply modulation of high-frequency blade vibrations by the low-frequency blade rotation, which is close to the known Vibro-Acoustic Modulation (VAM) method. The high-frequency wideband blade vibration is produced by the interaction of the blade surfaces with the ambient air turbulence, and the low-frequency modulation is produced by alternating bending stress due to gravity. The low-frequency load of rotating wind turbine blades ranges between 0.2 and 0.4 Hz and can reach up to 2 Hz for strong wind. The main difference between this study and previous ones on VAM methods is the use of a wideband vibration signal from the blade's natural vibrations. Different features of the VAM are considered using a simple model of a breathing crack. This model considers a simple mechanical oscillator, where the parameters of the oscillator are varied due to the low-frequency blade rotation. During the blade's operation, the internal stress caused by the weight of the blade modifies the crack's elasticity and damping. A laboratory experiment using steel samples demonstrates the possibility of VAM using a probe wideband noise signal. A cyclic load with a small amplitude was used as a pump wave to damage the tested sample, and a small transducer generated a wideband probe wave. Demodulation of the received signal was conducted using the Detecting of Envelope Modulation on Noise (DEMON) approach. In addition, the experimental results were compared with the modulation index (MI) technique using a harmonic pump wave. The wideband and traditional VAM methods demonstrated similar sensitivity for early detection of invisible cracks. Importantly, employing a wideband probe signal with the DEMON approach speeds up and simplifies testing, since it eliminates the need to conduct tests repeatedly for various harmonic probe frequencies and to adjust the probe frequency.
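
A minimal scipy sketch of envelope demodulation in the spirit of DEMON is given below: band-pass the wideband vibration, take the Hilbert envelope, and look for spectral lines near the low pump frequency. The sampling rate, filter band, modulation depth and synthetic signal are assumptions, not the experimental settings of the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 10_000.0                                    # sampling rate [Hz], assumed
t = np.arange(0, 10, 1 / fs)
f_pump = 0.3                                     # low-frequency load (blade rotation), assumed
carrier = np.random.randn(t.size)                # wideband "probe" vibration (noise)
signal = (1 + 0.2 * np.sin(2 * np.pi * f_pump * t)) * carrier   # crack modulates the noise

# 1) band-pass the wideband carrier (2-4 kHz band assumed for the natural blade vibrations)
b, a = butter(4, [2000 / (fs / 2), 4000 / (fs / 2)], btype="bandpass")
band = filtfilt(b, a, signal)

# 2) envelope via the analytic signal
envelope = np.abs(hilbert(band))
envelope -= envelope.mean()

# 3) spectrum of the envelope: a line near f_pump indicates modulation (possible damage)
spec = np.abs(np.fft.rfft(envelope)) / envelope.size
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
mask = (freqs > 0.1) & (freqs < 2.0)
print(f"strongest envelope line between 0.1 and 2 Hz: {freqs[mask][np.argmax(spec[mask])]:.2f} Hz")
```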

Keywords: Damage detection, turbine blades, Vibro-Acoustic Structural Health Monitoring, SHM, Detecting of Envelope Modulation on Noise.

6 IntelligentLogger: A Heavy-Duty Vehicles Fleet Management System Based on IoT and Smart Prediction Techniques

Authors: D. Goustouridis, A. Sideris, I. Sdrolias, G. Loizos, N.-Alexander Tatlas, S. M. Potirakis

Abstract:

Both daily and long-term management of a heavy-duty vehicle and construction machinery fleet is an extremely complicated and hard-to-solve issue. This is mainly due to the diversity of the fleet vehicles – machinery, which concerns not only the vehicle types, but also their age/efficiency, as well as the fleet volume, which is often of the order of hundreds or even thousands of vehicles/machineries. In the present paper we present “IntelligentLogger”, a holistic heavy-duty fleet management system covering a wide range of diverse fleet vehicles. It is based on specifically designed hardware and software for automated vehicle health status and operational cost monitoring, for smart maintenance. IntelligentLogger is characterized by high adaptability that permits it to be tailored to practically any heavy-duty vehicle/machinery (of different technologies – modern or legacy – and of dissimilar uses). Contrary to conventional logistic systems, which are characterized by raised operational costs and frequent errors, IntelligentLogger provides a cost-effective and reliable integrated solution for the e-management and e-maintenance of the fleet members. The IntelligentLogger system offers the following unique features that guarantee successful heavy-duty vehicle/machinery fleet management: (a) Recording and storage of operating data of motorized construction machinery, in a reliable way and in real time, using specifically designed Internet of Things (IoT) sensor nodes that communicate through the available network infrastructures, e.g., 3G/LTE; (b) Use on any machine, regardless of its age, in a universal way; (c) Flexibility and complete customization both in terms of data collection and integration with 3rd-party systems, as well as in terms of processing and drawing conclusions; (d) Validation, error reporting and correction, as well as updating of the system's database; (e) Artificial intelligence (AI) software for processing information in real time, identifying out-of-normal behavior and generating alerts; (f) A MicroStrategy-based enterprise BI for modeling information and producing reports, dashboards, and alerts focusing on optimal vehicle–machinery usage, as well as maintenance and scrapping policies; (g) A modular structure that allows low implementation costs in the basic fully functional version, but offers scalability without requiring a complete system upgrade.

Keywords: E-maintenance, predictive maintenance, IoT sensor nodes, cost optimization, artificial intelligence, heavy-duty vehicles.

5 Greenhouse Gasses’ Effect on Atmospheric Temperature Increase and the Observable Effects on Ecosystems

Authors: Alexander J. Severinsky

Abstract:

Radiative forces of greenhouse gases (GHG) increase the temperature of the Earth's surface, more on land and less in the oceans, due to their thermal capacities. Given this inertia, the temperature increase is delayed over time. Air temperature, however, is not delayed, as the thermal capacity of air is much lower. In this study, through analysis and synthesis of multidisciplinary science and data, an estimate of the atmospheric temperature increase is made. Then, this estimate is used to shed light on current observations of ice and snow loss, desertification and forest fires, and increased extreme air disturbances. The reason for this inquiry is the author's skepticism that current changes can be explained by a "~1 °C" global average surface temperature rise within the last 50-60 years. The only other plausible cause to explore for understanding is that of an atmospheric temperature rise. The study utilizes an analysis of the air temperature rise from three different scientific disciplines: thermodynamics, climate science experiments, and climatic historical studies. The results coming from these diverse disciplines are nearly the same, within ±1.6%. The direct radiative force of GHGs with a high level of scientific understanding is near 4.7 W/m2 on average over the Earth's entire surface in 2018, as compared to that in pre-industrial times in the mid-1700s. The additional radiative force of fast feedbacks coming from various forms of water gives approximately an additional ~15 W/m2. In 2018, these radiative forces heated the atmosphere by approximately 5.1 °C, which will create a thermal-equilibrium average ground surface temperature increase of 4.6 °C to 4.8 °C by the end of this century. After 2018, the temperature will continue to rise without any additional increases in the concentration of the GHGs, primarily of carbon dioxide and methane. These findings on the radiative force of GHGs in 2018 were applied to estimates of effects on major Earth ecosystems. This additional force of nearly 20 W/m2 causes an increase in ice melting by an additional rate of over 90 cm/year, a green leaf temperature increase of nearly 5 °C, and a work energy increase of air of approximately 40 Joules/mole. This explains the observed high rates of ice melting at all altitudes and latitudes, the spread of deserts and increases in forest fires, as well as the increased energy of tornadoes, typhoons, hurricanes, and extreme weather, much more plausibly than the 1.5 °C increase in average global surface temperature over the same time interval. Planned mitigation and adaptation measures might prove to be much more effective when directed toward the reduction of existing GHGs in the atmosphere.

Keywords: GHG radiative forces, GHG air temperature, GHG thermodynamics, GHG historical, GHG experimental, GHG radiative force on ice, GHG radiative force on plants, GHG radiative force in air.

4 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements

Authors: Alexander Buhr, Klaus Ehrenfried

Abstract:

Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving-model experiments is lower compared to full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to get a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been realized in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness as well as the momentum thickness and the form factor are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the train wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by using larger roughness, especially when applied at a height close to the measuring plane. The roughness elements also cause high fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
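
For reference, the small numpy sketch below evaluates the integral boundary-layer quantities mentioned above (displacement thickness, momentum thickness, and the form/shape factor) from a velocity profile u(y). The profile here is a synthetic 1/7-power law used as a stand-in for a PIV-measured profile.

```python
import numpy as np

def trapz(f, x):
    """Simple trapezoidal integration of samples f over grid x."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x) / 2.0))

def boundary_layer_parameters(y, u, u_inf):
    """Integral parameters of a boundary-layer velocity profile u(y).
    Returns (delta_star, theta, H): displacement thickness, momentum thickness, form factor."""
    ratio = u / u_inf
    delta_star = trapz(1.0 - ratio, y)          # displacement thickness
    theta = trapz(ratio * (1.0 - ratio), y)     # momentum thickness
    return delta_star, theta, delta_star / theta

# synthetic 1/7-power-law profile as a stand-in for a measured one
u_inf = 1.0
delta = 0.05                                    # boundary-layer thickness [m], assumed
y = np.linspace(0.0, delta, 200)
u = u_inf * (y / delta) ** (1.0 / 7.0)

d_star, theta, H = boundary_layer_parameters(y, u, u_inf)
print(f"delta* = {d_star*1e3:.2f} mm, theta = {theta*1e3:.2f} mm, H = {H:.2f}")
```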

Keywords: Boundary layer, high-speed PIV, ICE3, moving train model, roughness elements.
