Search results for: continuous speed profile data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 30081

26361 Efficient Frequent Itemset Mining Methods over Real-Time Spatial Big Data

Authors: Hamdi Sana, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, the use of spatio-temporal applications in which data and queries move continuously has increased sharply. As a result, the need to process real-time spatio-temporal data is clear, and real-time stream data management has become a hot topic. The sliding window model and frequent itemset mining over dynamic data are among the most important problems in data mining. The sliding window model is widely used for frequent itemset mining over data streams because of its emphasis on recent data and its bounded memory requirement. Existing methods rely on the traditional transaction-based sliding window model, in which the window size is defined by a fixed number of transactions. This model assumes that transactions arrive at a constant rate, which does not hold for real-time applications, and its use in such applications degrades their performance. Based on these observations, this paper relaxes the notion of window size and proposes a timestamp-based sliding window model. In the proposed frequent itemset mining algorithm, support conditions are used to distinguish frequent from infrequent patterns, and a tree structure incrementally maintains the essential information. The preliminary evaluation results are quite promising.
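
As an illustration of the timestamp-based window idea described above, the following is a minimal Python sketch in which transactions are (timestamp, itemset) pairs and the window keeps only the last window_seconds of data; the tree structure and weighted supports of the paper are replaced here by a simple itemset counter, so this is an assumption-laden sketch rather than the authors' algorithm.

from collections import Counter, deque
from itertools import combinations

class TimestampSlidingWindowMiner:
    """Illustrative sketch: frequent itemsets over a time-based (not count-based) window."""
    def __init__(self, window_seconds, min_support):
        self.window_seconds = window_seconds
        self.min_support = min_support          # minimum count to call a pattern frequent
        self.window = deque()                   # (timestamp, itemset) pairs
        self.counts = Counter()                 # itemset -> occurrence count

    def _update(self, itemset, delta):
        for size in (1, 2):                     # count 1- and 2-itemsets only, for brevity
            for combo in combinations(sorted(itemset), size):
                self.counts[combo] += delta

    def add(self, timestamp, itemset):
        self.window.append((timestamp, itemset))
        self._update(itemset, +1)
        # expire transactions older than the time window, not a fixed number of transactions
        while self.window and timestamp - self.window[0][0] > self.window_seconds:
            _, old = self.window.popleft()
            self._update(old, -1)

    def frequent(self):
        return {p: c for p, c in self.counts.items() if c >= self.min_support}

miner = TimestampSlidingWindowMiner(window_seconds=60, min_support=2)
miner.add(0, {"a", "b"}); miner.add(30, {"a", "c"}); miner.add(100, {"a", "b"})
print(miner.frequent())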

Keywords: real-time spatial big data, frequent itemset, transaction-based sliding window model, timestamp-based sliding window model, weighted frequent patterns, tree, stream query

Procedia PDF Downloads 161
26360 The Extent of Big Data Analysis by the External Auditors

Authors: Iyad Ismail, Fathilatul Abdul Hamid

Abstract:

This research mainly investigates the extent of big data analysis by external auditors. The paper adopts grounded theory as a framework for a series of semi-structured interviews with eighteen external auditors. The findings describe the extent to which big data is available and the extent to which big data analysis is used by external auditors in the Gaza Strip, Palestine. The outcomes of the study point to a series of auditing procedures that can improve external auditing techniques and thereby lead to a higher-quality audit process. The research is also valuable to auditing firms, offering insight into the mechanisms they can use to identify the strategies most important for achieving competitive audit quality. The results aim to guide academic and professional auditing institutions in developing big data analysis techniques for external auditors, and they provide useful input for decision-making as well as a source of future information on technology-driven auditing.

Keywords: big data analysis, external auditors, audit reliance, internal audit function

Procedia PDF Downloads 70
26359 Influence of Propeller Blade Lift Distribution on Whirl Flutter Stability Characteristics

Authors: J. Cecrdle

Abstract:

This paper deals with the whirl flutter of turboprop aircraft structures and focuses on the influence of the span-wise distribution of blade lift on whirl flutter stability. First, the overall theoretical background of the whirl flutter phenomenon is given. The solution of the propeller blade forces and the options for modelling the blade lift are then described. The problem is demonstrated on the example of a twin turboprop aircraft structure. The influences are evaluated first with respect to the propeller aerodynamic derivatives and finally with respect to the whirl flutter speed and the whirl flutter margin, respectively.

Keywords: aeroelasticity, flutter, propeller blade force, whirl flutter

Procedia PDF Downloads 536
26358 A Model of Teacher Leadership in History Instruction

Authors: Poramatdha Chutimant

Abstract:

The objective of the research was to propose a model of teacher leadership in history instruction for practical utilization. Everett M. Rogers' Diffusion of Innovations Theory is applied as the theoretical framework. A qualitative method is used, with an interview protocol as the instrument for collecting primary data from best-practice teachers recognized with awards by the Office of the National Education Commission (ONEC). Open-ended questions are used in the interview protocol in order to gather a wide range of data. Information on the international context of history instruction serves as secondary data to support the summarizing process (content analysis). A dendrogram is the key tool for interpreting and synthesizing the primary data, with the secondary data used for explanation and elaboration. In-depth interviews are used to collect information from seven experts in the educational field. The final focal point is to validate a draft model in terms of its future utilization.

Keywords: history study, nationalism, patriotism, responsible citizenship, teacher leadership

Procedia PDF Downloads 279
26357 Simulation of Behaviour Dynamics and Optimization of the Energy System

Authors: Iva Dvornik, Sandro Božić, Žana Božić Brkić

Abstract:

System-dynamics simulation modelling is one of the most appropriate and successful scientific methods for studying complex, non-linear, natural, technical and organizational systems. In recent practice its methodology has proved efficient in solving problems of control, behaviour, sensitivity and flexibility of highly complex system dynamics, all by computer simulation, i.e. "under laboratory conditions", and thus without any danger for the observed real systems. This work investigates the dynamic behaviour of a gas turbine, simulates the operation of the pump units and the transformation of gas energy into hydraulic energy, and studies the mathematical model of the whole system (gas turbine - centrifugal pumps - pipeline pressure system - storage vessel).
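
For illustration only, the following is a minimal system-dynamics sketch of a chain of this kind, assuming a drastically simplified lumped-parameter model (first-order turbine power response, pump flow proportional to turbine power, storage vessel as an integrating stock); all coefficients and the explicit-Euler step are illustrative assumptions, not the authors' model.

import numpy as np

# Illustrative lumped parameters (assumed, not from the paper)
dt, t_end = 0.1, 60.0                 # time step and horizon, s
tau_turbine = 8.0                     # turbine power time constant, s
p_set = 1.0                           # normalized turbine power set-point
k_pump = 0.05                         # pump flow per unit turbine power, m^3/s
a_vessel = 20.0                       # storage vessel cross-section, m^2
q_demand = 0.03                       # constant downstream demand, m^3/s

p, level = 0.0, 1.0                   # initial turbine power and vessel level (m)
history = []
for t in np.arange(0.0, t_end, dt):
    dp = (p_set - p) / tau_turbine            # first-order turbine response
    q_pump = k_pump * p                       # pump flow driven by turbine power
    dlevel = (q_pump - q_demand) / a_vessel   # stock: vessel level integrates net inflow
    p += dp * dt
    level += dlevel * dt
    history.append((t, p, q_pump, level))

print("final power %.2f, final vessel level %.3f m" % (p, level))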

Keywords: system dynamics, modelling, centrifugal pump, turbine, gases, continuous and discrete simulation, heuristic optimisation

Procedia PDF Downloads 108
26356 The Effect of Institutions on Economic Growth: An Analysis Based on Bayesian Panel Data Estimation

Authors: Mohammad Anwar, Shah Waliullah

Abstract:

This study investigated panel data regression models. Bayesian and classical methods were used to study the impact of institutions on economic growth with data from 1990-2014, focusing on developing countries. Under both the classical and the Bayesian methodology, two panel data models were estimated: the common effects model and the fixed effects model. For the Bayesian approach, prior information is used, with a normal-gamma prior specified for the panel data models. The analysis was carried out in the WinBUGS14 software. The estimated results show that panel data models are valid models under the Bayesian methodology. In the Bayesian approach, all independent variables had positive and significant effects on the dependent variable. Based on the standard errors of all models, the fixed effect model is the best model in the Bayesian estimation of panel data models; it also has the lowest standard error compared to the other models.
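
A minimal sketch of a Bayesian fixed-effects panel regression, assuming a Gibbs sampler in Python with an independent normal prior on the coefficients and a gamma prior on the error precision, rather than the WinBUGS14 setup used in the paper; the simulated data, hyperparameters and variable names are illustrative.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative panel: n countries, t years, one regressor standing in for an institutions index
n, t = 10, 25
country = np.repeat(np.arange(n), t)
x = rng.normal(size=n * t)
alpha_true = rng.normal(size=n)                      # country fixed effects
y = alpha_true[country] + 0.8 * x + rng.normal(scale=0.5, size=n * t)

# Design matrix: country dummies (fixed effects) plus the regressor
X = np.column_stack([np.eye(n)[country], x])
k = X.shape[1]

# Priors: beta ~ N(b0, B0^-1), error precision tau ~ Gamma(a0, d0)
b0, B0 = np.zeros(k), np.eye(k) * 0.01
a0, d0 = 2.0, 1.0

beta, tau = np.zeros(k), 1.0
draws = []
for it in range(2000):                               # Gibbs sampler
    # draw beta | tau, y
    prec = B0 + tau * X.T @ X
    cov = np.linalg.inv(prec)
    mean = cov @ (B0 @ b0 + tau * X.T @ y)
    beta = rng.multivariate_normal(mean, cov)
    # draw tau | beta, y
    resid = y - X @ beta
    tau = rng.gamma(a0 + 0.5 * len(y), 1.0 / (d0 + 0.5 * resid @ resid))
    if it >= 500:                                     # discard burn-in
        draws.append(beta[-1])                        # slope on the regressor

print("posterior mean slope: %.3f" % np.mean(draws))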

Keywords: Bayesian approach, common effect, fixed effect, random effect, Dynamic Random Effect Model

Procedia PDF Downloads 68
26355 Stabilized Halogen Based Biocides for RO Membrane Application

Authors: Harshada Lohokare

Abstract:

Biofouling is a major issue in the operation of reverse osmosis (RO) membranes. Addressing biofouling in raw water as well as in wastewater recycle/reuse applications requires an effective biofouling control programme. Current biocides (2,2-dibromo-3-nitrilopropionamide, isothiazolinone) are costly and hence often under-dosed. In this work, both the membrane compatibility and the microbiological efficiency of an RO membrane biocide were studied. The biocide product and its dosage were selected based on the biofouling potential, and it was found that these products need to be applied as continuous as well as intermittent dosages depending on the microbiological load. This study shows that, depending on the application and the microbiological fouling potential, products can be chosen to mitigate biofouling issues and improve RO membrane performance.

Keywords: reverse osmosis membrane, biofouling, biocide, stabilized halogen

Procedia PDF Downloads 69
26354 Aminopeptidase P (DAP) Expression Pattern in Drosophila Melanogaster

Authors: Suneeta Gireesh Panicker

Abstract:

Aim: Aminopeptidase P (APP) is a metallo-aminopeptidase with specificity for proline that can specifically cleave Xaa-Pro peptides. The bonds adjacent to the imino acid proline are difficult for many peptidases to cleave, but APP can specifically break peptide bonds involving proline. The enzyme exists in two forms: a membrane-bound form and a cytosolic form. The exact physiological function of APP remains unclear, and the present work attempts to determine it. Methods: In the present study, the expression pattern of cytosolic aminopeptidase P (DAP) was determined in all embryonic and larval stages of wild-type Drosophila using polyclonal monospecific antibodies. To show the presence of DAP RNA in embryonic and larval stages, RNA in situ hybridization was performed. A DAP promoter-LacZ fusion reporter gene vector was used to construct transgenic embryos to study the regulation pattern of DAP. To study the DAP expression profile, a transgenic fly was constructed carrying the DAP promoter in front of β-gal and GFP reporter genes. Results: DAP protein expression was observed in neuroectodermal cells, the posterior midgut primordium, the proctodeum, ventral neuroblasts and the primordial stomatogastric nervous system. It was observed in the ventral cord and midgut at stage 12, and fully developed embryos showed intense expression in the ventral cord and gut region. The eye-antennal disc, wing disc and leg disc also showed the presence of DAP protein. LacZ expression in transgenic embryos showed the same pattern. Conclusion: Like various known multi-functional proteins, DAP could be a protein with different functions at different stages and in different cells. The data presented here indicate that DAP functions in early embryonic and imaginal disc differentiation and development, suggesting that it may be required for the metabolism of proteins such as neuropeptides and tachykinins.

Keywords: aminopeptidase P, in situ hybridization, transgenic fly, embryonic stages

Procedia PDF Downloads 85
26353 Application of Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Multipoint Optimal Minimum Entropy Deconvolution in Railway Bearings Fault Diagnosis

Authors: Yao Cheng, Weihua Zhang

Abstract:

Although the measured vibration signal contains rich information on machine health conditions, white noise interference and the discrete harmonics coming from blades, shafts and gear mesh make the fault diagnosis of rolling element bearings difficult. In order to overcome the interference of these useless signals, a new fault diagnosis method combining Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Multipoint Optimal Minimum Entropy Deconvolution (MOMED) is proposed for the fault diagnosis of high-speed train bearings. Firstly, the CEEMDAN technique is applied to adaptively decompose the raw vibration signal into a series of finite intrinsic mode functions (IMFs) and a residue. Compared with Ensemble Empirical Mode Decomposition (EEMD), CEEMDAN provides an exact reconstruction of the original signal and a better spectral separation of the modes, which improves the accuracy of fault diagnosis. An effective sensitivity index based on the Pearson correlation coefficients between the IMFs and the raw signal is adopted to select the sensitive IMFs that contain bearing fault information, and the composite signal of the sensitive IMFs is used for further fault identification. Next, in order to identify the fault information precisely, MOMED is utilized to enhance the periodic impulses in the composite signal. As a non-iterative method, MOMED has better deconvolution performance than classical deconvolution methods such as Minimum Entropy Deconvolution (MED) and Maximum Correlated Kurtosis Deconvolution (MCKD). Third, envelope spectrum analysis is applied to detect the existence of a bearing fault. Simulated bearing fault signals with white noise and discrete harmonic interference are used to validate the effectiveness of the proposed method. Finally, the superiority of the proposed method is further demonstrated on high-speed train bearing fault datasets measured on a test rig. The analysis results indicate that the proposed method has strong practical applicability.
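
A minimal sketch of the first two steps of the chain described above (decomposition and sensitive-IMF selection) plus the envelope spectrum, assuming the PyEMD package for CEEMDAN and SciPy's Hilbert transform; the MOMED enhancement step is omitted, and the synthetic signal, trial count and sensitivity threshold are illustrative choices.

import numpy as np
from PyEMD import CEEMDAN              # pip install EMD-signal
from scipy.signal import hilbert

fs = 12000                             # sampling rate, Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)
# Synthetic bearing-like signal: periodic impulses + a discrete harmonic + white noise
impulses = np.sin(2 * np.pi * 3000 * t) * (np.sin(2 * np.pi * 90 * t) > 0.99)
signal = impulses + 0.5 * np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(len(t))

# 1) Adaptive decomposition into IMFs (fewer ensemble trials to keep the sketch fast)
imfs = CEEMDAN(trials=20)(signal)

# 2) Select sensitive IMFs by Pearson correlation with the raw signal
corr = np.array([np.corrcoef(imf, signal)[0, 1] for imf in imfs])
sensitive = imfs[corr > 0.2]           # 0.2 is an illustrative threshold
composite = sensitive.sum(axis=0)

# 3) Envelope spectrum of the composite signal
envelope = np.abs(hilbert(composite))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
print("dominant envelope frequency: %.1f Hz" % freqs[spectrum.argmax()])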

Keywords: bearing, complete ensemble empirical mode decomposition with adaptive noise, fault diagnosis, multipoint optimal minimum entropy deconvolution

Procedia PDF Downloads 374
26352 Solar Energy Potential Studies of Sindh Province, Pakistan for Power Generation

Authors: M. Akhlaque Ahmed, Sidra A. Shaikh, Maliha Afshan Siddiqui

Abstract:

Solar radiation studies of Sindh province have been carried out to evaluate the solar energy potential of the area. Global and diffuse solar radiation on a horizontal surface over five cities of Sindh province, namely Karachi, Hyderabad, Nawabshah, Chore and Padidan, were estimated using sunshine hour data of the area to assess the feasibility of solar energy utilization. The results show a large variation of the direct and diffuse components of solar radiation between the winter and summer months. For Karachi and Hyderabad, roughly 50% direct and 50% diffuse solar radiation was observed, whereas for Chore the diffuse radiation in the summer months of July and August is about 33-39%. For the other areas of Sindh, such as Nawabshah and Padidan, the contribution of direct solar radiation is high throughout the year. The Kt (clearness index) values for Nawabshah and Padidan indicate a clear sky almost throughout the year; in the Nawabshah area the percentage of diffuse radiation does not exceed 29%, and the appearance of cloud is rare even in the monsoon months of July and August. Karachi, Hyderabad and Chore, in contrast, have low solar potential during the monsoon months, during which Karachi and Hyderabad can utilize hybrid systems with wind power, as wind speeds are higher. From the point of view of power generation, the estimated values indicate that Karachi, Hyderabad and Chore have low solar potential in July and August, while Nawabshah and Padidan have high solar potential throughout the year.
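
As an illustration of estimating global radiation and the diffuse fraction from sunshine-hour data, the following is a minimal sketch assuming the Angström-Prescott relation with generic coefficients (a = 0.25, b = 0.50) and a Page-type correlation for the diffuse fraction; the coefficients, extraterrestrial radiation and day length below are illustrative assumptions, not values from this study.

def global_radiation(h0, n_sunshine, day_length, a=0.25, b=0.50):
    """Angstrom-Prescott relation: H = H0 * (a + b * n/N), radiation in MJ/m^2/day."""
    return h0 * (a + b * n_sunshine / day_length)

def diffuse_fraction(h, h0):
    """Page-type correlation: Hd/H = 1.00 - 1.13 * Kt, with Kt = H/H0 (clearness index)."""
    kt = h / h0
    return kt, max(0.0, 1.00 - 1.13 * kt)

# Illustrative monthly values for one station (not measured data)
h0, n_sunshine, day_length = 35.0, 9.5, 12.0
h = global_radiation(h0, n_sunshine, day_length)
kt, fd = diffuse_fraction(h, h0)
print(f"H = {h:.1f} MJ/m2/day, Kt = {kt:.2f}, diffuse share = {fd * 100:.0f}%")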

Keywords: global and diffuse solar radiation, province of Sindh, solar energy potential, solar radiation studies for power generation

Procedia PDF Downloads 259
26351 Characterization of Current–Voltage (I–V) and Capacitance–Voltage–Frequency (C–V–f) Features of Au/GaN Schottky Diodes

Authors: Abdelaziz Rabehi

Abstract:

The current–voltage (I–V) characteristics of Au/GaN Schottky diodes were measured at room temperature. In addition, the capacitance–voltage–frequency (C–V–f) characteristics are investigated, taking the interface states (Nss) into account, over the frequency range of 100 kHz to 1 MHz. From the I–V characteristics of the Schottky diode, ideality factor (n) and barrier height (Φb) values of 1.22 and 0.56 eV, respectively, were obtained from the forward-bias I–V plot. In addition, the interface state distribution profile as a function of (Ess − Ev) was extracted from the forward-bias I–V measurements by taking into account the bias dependence of the effective barrier height (Φe) of the Schottky diode. The C–V curves gave a barrier height value higher than that obtained from the I–V measurements; this discrepancy is due to the different nature of the I–V and C–V measurement techniques.
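
A minimal sketch of extracting the ideality factor and barrier height from forward-bias I–V data using the standard thermionic-emission relations; the diode area, Richardson constant and synthetic data below are illustrative assumptions, not the measured Au/GaN values reported above.

import numpy as np

k_B = 8.617e-5        # Boltzmann constant, eV/K
T = 300.0             # temperature, K
A = 7.85e-3           # diode area, cm^2 (assumed)
A_star = 26.4         # Richardson constant for GaN, A cm^-2 K^-2 (literature value)

# Synthetic forward-bias I-V data following I = I0 * exp(V / (n*kT)), for illustration only
n_true, I0_true = 1.22, 1e-9
V = np.linspace(0.15, 0.45, 30)
I = I0_true * np.exp(V / (n_true * k_B * T))

# Ideality factor from the slope of ln(I) vs V; saturation current from the intercept
slope, intercept = np.polyfit(V, np.log(I), 1)
n = 1.0 / (slope * k_B * T)
I0 = np.exp(intercept)

# Barrier height from I0 = A * A_star * T^2 * exp(-phi_b / (kT))
phi_b = k_B * T * np.log(A * A_star * T**2 / I0)
print(f"n = {n:.2f}, phi_b = {phi_b:.2f} eV")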

Keywords: Schottky diodes, frequency dependence, barrier height, interface states

Procedia PDF Downloads 302
26350 Diagnosis of the Heart Rhythm Disorders by Using Hybrid Classifiers

Authors: Sule Yucelbas, Gulay Tezel, Cuneyt Yucelbas, Seral Ozsen

Abstract:

In this study, we attempted to identify several heart rhythm disorders from electrocardiography (ECG) data taken from the MIT-BIH arrhythmia database by extracting the required features and presenting them to artificial neural network (ANN), artificial immune system (AIS), artificial-immune-system-based artificial neural network (AIS-ANN) and particle swarm optimization based artificial neural network (PSO-ANN) classifier systems. The main purpose of this study is to evaluate the performance of the hybrid AIS-ANN and PSO-ANN classifiers with respect to ANN and AIS. For this purpose, RR-interval data were obtained for normal sinus rhythm (NSR), atrial premature contraction (APC), sinus arrhythmia (SA), ventricular trigeminy (VTI), ventricular tachycardia (VTK) and atrial fibrillation (AF). The data were then arranged in pairs (NSR-APC, NSR-SA, NSR-VTI, NSR-VTK and NSR-AF), the discrete wavelet transform was applied to each of the two groups in a pair, and after data reduction two different data sets with 9 and 27 features were obtained from each of them. The data were first randomly mixed within themselves, and then the 4-fold cross-validation method was applied to create the training and testing data. The training and testing accuracy rates and the training times were compared with each other. As a result, the performance of the hybrid classification systems, AIS-ANN and PSO-ANN, was seen to be close to that of the ANN system, and the results of the hybrid systems were much better than those of AIS. However, ANN had a much shorter training time than the other systems; in terms of training time, ANN was followed by PSO-ANN, AIS-ANN and AIS, respectively. The features extracted from the data also affected the classification results significantly.
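
A minimal sketch of the feature-extraction and cross-validation pipeline described above, assuming PyWavelets for the discrete wavelet transform and a scikit-learn neural-network classifier with 4-fold cross-validation; the synthetic two-class RR-interval data, wavelet choice and network size are illustrative and stand in for the MIT-BIH-derived data sets.

import numpy as np
import pywt
from sklearn.model_selection import cross_val_score, KFold
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def dwt_features(rr_segment, wavelet="db4", level=3):
    """Statistical features of the DWT coefficients of one RR-interval segment."""
    coeffs = pywt.wavedec(rr_segment, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats += [c.mean(), c.std(), np.abs(c).max()]
    return feats

# Synthetic two-class example standing in for an NSR vs. arrhythmia pair
normal = rng.normal(0.8, 0.02, size=(100, 64))          # fairly regular RR intervals (s)
abnormal = rng.normal(0.8, 0.15, size=(100, 64))        # irregular RR intervals
X = np.array([dwt_features(seg) for seg in np.vstack([normal, abnormal])])
y = np.array([0] * 100 + [1] * 100)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, y, cv=KFold(n_splits=4, shuffle=True, random_state=0))
print("4-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))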

Keywords: AIS, ANN, ECG, hybrid classifiers, PSO

Procedia PDF Downloads 442
26349 Demographic Assessment and Evaluation of Degree of Lipid Control in High Risk Indian Dyslipidemia Patients

Authors: Abhijit Trailokya

Abstract:

Background: Cardiovascular diseases (CVDs) are the major cause of morbidity and mortality in both developed and developing countries. Many clinical trials have demonstrated that lowering low-density lipoprotein cholesterol (LDL-C) reduces the incidence of coronary and cerebrovascular events across a broad spectrum of patients at risk. Guidelines for the management of patients at risk have been established in Europe and North America and have advocated progressively lower LDL-C targets and more aggressive use of statin therapy. In Indian patients, comprehensive data on dyslipidemia management and its treatment outcomes are inadequate: there is a lack of information on existing treatment patterns, the profile of patients being treated, and the factors that determine treatment success or failure in achieving desired goals. Purpose: The present study was planned to determine the lipid control status in high-risk dyslipidemic patients treated with lipid-lowering therapy in India. Methods: This cross-sectional, non-interventional, single-visit program was conducted across 483 sites in India and included male and female patients with high-risk dyslipidemia aged 18 to 65 years who had visited their respective physician at a hospital or healthcare centre for a routine health check-up. The percentage of high-risk dyslipidemic patients achieving an adequate LDL-C level (< 70 mg/dL) on lipid-lowering therapy and the association of lipid parameters with patient characteristics, comorbid conditions, and lipid-lowering drugs were analysed. Results: 3089 patients were enrolled in the study, of whom 64% were males. LDL-C data were available for 95.2% of the patients; only 7.7% of these patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which may be due to inability to follow therapeutic plans, poor compliance, or inadequate counselling by the physician. The physician's lack of awareness of recent treatment guidelines might also contribute to patients' poor adherence, as might failing to explain adequately the benefits and risks of a medication and giving no consideration to the patient's lifestyle and the cost of medication. Statins were the most commonly used anti-dyslipidemic drugs across the population. A higher proportion of patients had the comorbid conditions of CVD and diabetes mellitus across all dyslipidemic patients. Conclusion: As per the European Society of Cardiology guidelines, the ideal LDL-C level in high-risk dyslipidemic patients should be less than 70 mg/dL. In the present study, only 7.7% of the patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which is very low. Most high-risk dyslipidemic patients in India are on a suboptimal dosage of statin, so more aggressive, higher-dosage statin therapy may be required to achieve target LDL-C levels in high-risk Indian dyslipidemic patients.

Keywords: cardiovascular disease, diabetes mellitus, dyslipidemia, LDL-C, lipid lowering drug, statins

Procedia PDF Downloads 201
26348 Topic Modelling Using Latent Dirichlet Allocation and Latent Semantic Indexing on SA Telco Twitter Data

Authors: Phumelele Kubheka, Pius Owolawi, Gbolahan Aiyetoro

Abstract:

Twitter is one of the most popular social media platforms, where users can share their opinions on different subjects. As of 2010, the Twitter platform generated more than 12 terabytes of data daily, approximately 4.3 petabytes in a single year. For this reason, Twitter is a great source for big data mining. Many industries, such as telecommunication companies, can leverage the availability of Twitter data to better understand their markets and make appropriate business decisions. This study performs topic modeling on Twitter data using Latent Dirichlet Allocation (LDA). The obtained results are benchmarked against another topic modeling technique, Latent Semantic Indexing (LSI). The study aims to retrieve topics from a Twitter dataset containing user tweets on South African Telcos. The results show that LSI is much faster than LDA; however, LDA yields better results, with a topic coherence higher by 8% for the best-performing model (represented in Table 1). A higher topic coherence score indicates better performance of the model.
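
A minimal sketch of benchmarking LDA against LSI by topic coherence, assuming the gensim library and a tiny hand-made corpus standing in for the pre-processed telco tweets; the corpus, topic count and coherence measure (c_v) are illustrative choices.

from gensim import corpora
from gensim.models import LdaModel, LsiModel, CoherenceModel

# Tiny illustrative corpus standing in for tokenized, pre-processed telco tweets
tweets = [
    ["network", "down", "no", "signal", "vodacom"],
    ["data", "bundle", "expired", "mtn", "refund"],
    ["great", "customer", "service", "telkom", "fast"],
    ["slow", "internet", "speed", "fibre", "complaint"],
    ["airtime", "purchase", "failed", "cellc", "help"],
]

dictionary = corpora.Dictionary(tweets)
corpus = [dictionary.doc2bow(doc) for doc in tweets]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=10, random_state=0)
lsi = LsiModel(corpus=corpus, id2word=dictionary, num_topics=2)

# Benchmark both models with the c_v topic-coherence measure
for name, model in [("LDA", lda), ("LSI", lsi)]:
    cm = CoherenceModel(model=model, texts=tweets, dictionary=dictionary, coherence="c_v")
    print(name, "coherence:", round(cm.get_coherence(), 3))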

Keywords: big data, latent Dirichlet allocation, latent semantic indexing, telco, topic modeling, twitter

Procedia PDF Downloads 150
26347 Enhance the Power of Sentiment Analysis

Authors: Yu Zhang, Pedro Desouza

Abstract:

Since big data has become substantially more accessible and manageable due to the development of powerful tools for dealing with unstructured data, people are eager to mine information from social media resources that could not be handled in the past. Sentiment analysis, as a novel branch of text mining, has in the last decade become increasingly important in marketing analysis, customer risk prediction and other fields. Scientists and researchers have undertaken significant work in creating and improving their sentiment models. In this paper, we present a concept of selecting appropriate classifiers based on the features and qualities of data sources by comparing the performances of five classifiers with three popular social media data sources: Twitter, Amazon Customer Reviews, and Movie Reviews. We introduced a couple of innovative models that outperform traditional sentiment classifiers for these data sources, and provide insights on how to further improve the predictive power of sentiment analysis. The modelling and testing work was done in R and Greenplum in-database analytic tools.
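
As an illustration of comparing several sentiment classifiers across labelled text data, the following is a minimal sketch using scikit-learn pipelines in Python rather than the R/Greenplum in-database tools used by the authors; the tiny corpus and the particular classifiers are illustrative stand-ins for the five classifiers and three data sources of the paper.

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Tiny labelled corpus standing in for tweets / product or movie reviews
texts = [
    "I love this phone, great battery", "Terrible service, never again",
    "Amazing movie, would watch twice", "Worst purchase I have ever made",
    "Quick delivery and friendly support", "The plot was boring and predictable",
    "Absolutely fantastic experience", "Broken on arrival, very disappointed",
] * 5                                  # repeated to allow cross-validation
labels = [1, 0, 1, 0, 1, 0, 1, 0] * 5

classifiers = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "naive Bayes": MultinomialNB(),
    "linear SVM": LinearSVC(),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    scores = cross_val_score(pipe, texts, labels, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")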

Keywords: sentiment analysis, social media, Twitter, Amazon, data mining, machine learning, text mining

Procedia PDF Downloads 353
26346 Importance of Different Spatial Parameters in Water Quality Analysis within Intensive Agricultural Area

Authors: Marina Bubalo, Davor Romić, Stjepan Husnjak, Helena Bakić

Abstract:

Even though European Council Directive 91/676/EEC, known as the Nitrates Directive, was adopted in 1991, the issue of water quality preservation in areas of intensive agricultural production still persists all over Europe. High nitrate nitrogen concentrations in surface water and groundwater originating from diffuse sources are one of the most important environmental problems in modern intensive agriculture. The fate of nitrogen in soil, surface water and groundwater in agricultural areas is mostly affected by anthropogenic activity (i.e. agricultural practice) and by hydrological and climatological conditions. The aim of this study was to identify the impact of land use, soil type, soil vulnerability to pollutant percolation, and natural aquifer vulnerability on nitrate occurrence in surface water and groundwater within an intensive agricultural area. The study was set in Varaždin County (northern Croatia), which is under the significant influence of the large rivers Drava and Mura; consequently, the entire area is dominated by alluvial soil with a shallow active profile, mainly on a gravel base. The negative agricultural impact on water quality in this area is evident, and therefore half of the selected county is part of the delineated nitrate vulnerable zones (NVZ). Data on water quality were collected from 7 surface-water and 8 groundwater monitoring stations in the county. A recent study of the area also included a detailed inventory of agricultural production and fertilizer use, with the aim of producing a new agricultural land use database as one of the dominant parameters. The analysis of this database, done using ArcGIS 10.1, showed that 52.7% of the total county area is agricultural land and that 59.2% of the agricultural land is used for intensive agricultural production. On the other hand, 56% of the soil within the county is classified as vulnerable to pollutant percolation. The situation is similar with natural aquifer vulnerability: the northern part of the county ranges from high to very high aquifer vulnerability. Statistical analysis of the water quality data was done using SPSS 13.0. Cluster analysis grouped both the surface-water and the groundwater stations into two groups according to nitrate nitrogen concentrations. The mean nitrate nitrogen concentration in surface water ranges from 4.2 to 5.5 mg/l in group 1 and from 24 to 42 mg/l in group 2. The results are similar, but evidently higher, in the groundwater samples, where the mean nitrate nitrogen concentration ranges from 3.9 to 17 mg/l in group 1 and from 36 to 96 mg/l in group 2. ANOVA confirmed the statistical significance between stations classified in the same group. The previously listed parameters (land use, soil type, etc.) were used in factorial correspondence analysis (FCA) to detect the importance of each parameter for local water quality. Since these parameters mostly cannot be altered, there is an obvious need for more precise and better-adapted land management under such conditions.
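
A minimal sketch of the clustering and ANOVA steps described above, assuming scikit-learn and SciPy in Python instead of SPSS 13.0; the station concentrations below are illustrative values, not the measured data from the county monitoring network.

import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import f_oneway

# Illustrative mean nitrate-N concentrations (mg/l) at monitoring stations (not the measured data)
stations = np.array([4.1, 5.0, 9.8, 16.5, 40.2, 55.0, 71.3, 95.0]).reshape(-1, 1)

# Group stations into two clusters by concentration, as in the cluster analysis above
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(stations)
group1 = stations[km.labels_ == 0].ravel()
group2 = stations[km.labels_ == 1].ravel()

# One-way ANOVA between the two groups of stations
f_stat, p_value = f_oneway(group1, group2)
print("group means: %.1f vs %.1f mg/l, F = %.1f, p = %.4f"
      % (group1.mean(), group2.mean(), f_stat, p_value))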

Keywords: agricultural area, nitrate, factorial correspondence analysis, water quality

Procedia PDF Downloads 259
26345 Real-Time Big-Data Warehouse a Next-Generation Enterprise Data Warehouse and Analysis Framework

Authors: Abbas Raza Ali

Abstract:

Big Data technology is gradually becoming a dire need of large enterprises. These enterprises generate massively large amounts of off-line and streaming data in both structured and unstructured formats on a daily basis. It is a challenging task to extract useful insights effectively from such large-scale datasets; sometimes it even becomes a technological constraint to manage a transactional data history of more than a few months. This paper presents a framework to efficiently manage massively large and complex datasets. The framework has been tested on a communication service provider producing massively large, complex streaming data in binary format. The communication industry is bound by regulators to keep a history of their subscribers' call records, where every action of a subscriber generates a record. Managing and analyzing transactional data also allows service providers to better understand their customers' behaviour; for example, deep packet inspection requires transactional internet usage data to explain the internet usage behaviour of the subscribers. However, current relational database systems limit service providers to maintaining history only at a semantic level, aggregated at the subscriber level. The framework addresses these challenges by leveraging Big Data technology, which optimally manages and allows deep analysis of complex datasets. The framework has been applied to offload the service provider's existing Intelligent Network Mediation and relational Data Warehouse onto Big Data. The service provider has a subscriber base of more than 50 million with yearly growth of 7-10%. The end-to-end process, which involves binary-to-ASCII decoding of call detail records, stitching of all the interrogations belonging to a call (transformations) and aggregation of all the call records of a subscriber, takes no more than 10 minutes.
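
A minimal sketch of the stitching and per-subscriber aggregation stages described above, assuming already-decoded call detail records stored as Parquet and processed with PySpark; the file paths, column names and schema are illustrative assumptions, and the binary-to-ASCII decoding stage is omitted.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cdr-offload-sketch").getOrCreate()

# Decoded call detail records, one row per interrogation (path and schema are assumed)
cdr = spark.read.parquet("/data/decoded_cdr/")   # columns: call_id, msisdn, leg_duration, charge

# Stitch all interrogations belonging to the same call into one call record
calls = (cdr.groupBy("call_id", "msisdn")
            .agg(F.sum("leg_duration").alias("duration"),
                 F.sum("charge").alias("charge")))

# Aggregate all calls of a subscriber for the warehouse layer
per_subscriber = (calls.groupBy("msisdn")
                       .agg(F.count("*").alias("calls"),
                            F.sum("duration").alias("total_duration"),
                            F.sum("charge").alias("total_charge")))

per_subscriber.write.mode("overwrite").parquet("/warehouse/subscriber_usage/")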

Keywords: big data, communication service providers, enterprise data warehouse, stream computing, Telco IN Mediation

Procedia PDF Downloads 175
26344 Investigating Effects of Vehicle Speed and Road PSDs on Response of a 35-Ton Heavy Commercial Vehicle (HCV) Using Mathematical Modelling

Authors: Amal G. Kurian

Abstract:

The use of mathematical modeling has seen a considerable boost in recent times with the development of many advanced algorithms and mathematical modeling capabilities. The advantages this method has over other methods are that it stays much closer to standard physics theories and thus represents a better theoretical model, it takes less solving time, and it offers the ability to change various parameters for optimization, which is a big advantage, especially in the automotive industry. This thesis work focuses on a thorough investigation of the effects of vehicle speed and road roughness on the ride and structural dynamic responses of a heavy commercial vehicle. Since commercial vehicles are kept in operation continuously for long periods of time, it is important to study the effects of various physical conditions on the vehicle and its user. For this purpose, various experimental as well as simulation methodologies are adopted, ranging from experimental transfer path analysis to simulations of various road scenarios. To effectively investigate and eliminate several causes of unwanted responses, an efficient and robust technique is needed. Carrying this motivation forward, the present work focuses on the development of a mathematical model of a 4-axle heavy commercial vehicle (HCV) capable of calculating the responses of the vehicle for different road PSD inputs and vehicle speeds. Outputs from the model include response transfer functions and PSDs as well as the wheel forces experienced. A MATLAB code is developed to implement the objectives in a robust and flexible manner, which can be exploited further in studies of responses for various suspension parameters, loading conditions and vehicle dimensions. The thesis work resulted in quantifying the effect of various physical conditions on the ride comfort of the vehicle. An increase in discomfort is seen with increasing velocity, and the road profile also has a considerable effect on driver comfort. Details of the dominant modes at each frequency are analysed and reported. The reduction in ride height, i.e. the deflection of tires and suspension with loading, along with the load on each axle, is analysed; it is seen that the front axle supports a greater portion of the vehicle weight, while more of the payload weight is carried by the fourth and third axles. The deflection of the vehicle is seen to be well within acceptable limits.
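
As an illustration of how a road PSD and a vehicle speed combine into a response PSD, the following is a minimal sketch in Python (rather than the MATLAB code mentioned above) using a 2-DOF quarter-car model as a stand-in for the full 4-axle HCV model; the masses, stiffnesses, damping and the ISO 8608-style roughness coefficient are illustrative assumptions, not the thesis values.

import numpy as np

# Quarter-car parameters (illustrative, not the 35-ton HCV values)
ms, mu = 4500.0, 500.0          # sprung / unsprung mass per wheel, kg
ks, cs = 3.0e5, 2.0e4           # suspension stiffness (N/m) and damping (N s/m)
kt = 1.8e6                      # tyre stiffness, N/m

v = 20.0                        # vehicle speed, m/s
Gd_n0, n0 = 64e-6, 0.1          # ISO 8608-style roughness coefficient (m^3) at reference n0 (cycles/m)
f = np.linspace(0.2, 30.0, 600) # frequency range of interest, Hz
w = 2 * np.pi * f

# Road displacement PSD in the temporal-frequency domain: Gd(f) = Gd(n0)*(n/n0)^-2 / v, with n = f/v
n = f / v
S_road = Gd_n0 * (n / n0) ** -2 / v

# Sprung-mass acceleration transfer function from road displacement (2-DOF quarter car)
H = np.empty_like(w, dtype=complex)
for i, wi in enumerate(w):
    A = np.array([[-ms * wi**2 + 1j * wi * cs + ks, -(1j * wi * cs + ks)],
                  [-(1j * wi * cs + ks), -mu * wi**2 + 1j * wi * cs + ks + kt]])
    b = np.array([0.0, kt])
    zs, zu = np.linalg.solve(A, b)       # displacement responses to unit road input
    H[i] = -wi**2 * zs                   # sprung-mass acceleration per unit road displacement

S_acc = np.abs(H) ** 2 * S_road          # response PSD = |H|^2 * input PSD
rms_acc = np.sqrt(np.sum(S_acc) * (f[1] - f[0]))   # integrate the response PSD over frequency
print("RMS sprung-mass acceleration: %.2f m/s^2" % rms_acc)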

Keywords: mathematical modeling, HCV, suspension, ride analysis

Procedia PDF Downloads 259
26343 Programming with Grammars

Authors: Peter M. Maurer

Abstract:

DGL is a context-free-grammar-based tool for generating random data. Many types of simulator input data require some computation to be placed in the proper format. For example, it might be necessary to generate ordered triples in which the third element is the sum of the first two, or it might be necessary to generate random numbers in some sorted order. Although DGL is universal in computational power, generating these types of data with it is extremely difficult. To overcome this problem, we have enhanced DGL to include features that permit direct computation within the structure of a context-free grammar. The features have been implemented as special types of productions, preserving the context-free flavor of DGL specifications.
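
As an illustration of the general idea of embedding computation in grammar productions, the following is a minimal Python sketch that does not use DGL's actual syntax: ordinary productions expand to random alternatives, while a special "computed" production derives its value from previously generated symbols, producing the ordered-triple example mentioned above.

import random

# Toy grammar: plain productions map to alternatives; callables act as
# "computation productions" that use the values generated so far.
grammar = {
    "triple": ["<a> <b> <sum>"],
    "a": [str(i) for i in range(10)],
    "b": [str(i) for i in range(10)],
    "sum": lambda env: str(int(env["a"]) + int(env["b"])),   # computed, not random
}

def generate(symbol, env):
    rule = grammar[symbol]
    if callable(rule):                       # computation embedded in the grammar
        value = rule(env)
    else:
        chosen = random.choice(rule)
        parts = []
        for token in chosen.split():
            if token.startswith("<") and token.endswith(">"):
                parts.append(generate(token[1:-1], env))
            else:
                parts.append(token)
        value = " ".join(parts)
    env[symbol] = value
    return value

random.seed(0)
print(generate("triple", {}))                # e.g. "6 6 12": the third element is the sum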

Keywords: DGL, Enhanced Context Free Grammars, Programming Constructs, Random Data Generation

Procedia PDF Downloads 147
26342 Study on Impact of Road Loads on Full Vehicle Squeak and Rattle Performance

Authors: R. Praveen, B. R. Chandan Ravi, M. Harikrishna

Abstract:

Squeak and rattle noises are among the most annoying transient vehicle noises, produced under different terrain conditions. The identification and prevention of squeak and rattle noises are dominant aspects of vehicle refinement. This paper describes a computer-aided engineering (CAE) approach to evaluating full-vehicle squeak and rattle performance, with the measured road surface profile applied as enforced excitation at the tire patch points. The E-line methodology has been used to predict the relative displacement at the interface points, and the risk areas were identified. Squeak and rattle performance has been evaluated at different speeds and under different road conditions to understand the vehicle characteristics. The competence of the process in predicting the risk and the root cause of the problems showed a pleasing agreement between the physical testing and the CAE simulation results.

Keywords: e-line, enforced excitation, full vehicle, squeak and rattle, road excitation

Procedia PDF Downloads 142
26341 A Model Architecture Transformation with Approach by Modeling: From UML to Multidimensional Schemas of Data Warehouses

Authors: Ouzayr Rabhi, Ibtissam Arrassen

Abstract:

To provide a complete analysis of the organization and to support decision-making, leaders need relevant data; data warehouses (DW) are designed to meet such needs. However, designing a DW is not trivial, and there is no formal method to derive a multidimensional schema from heterogeneous databases. In this article, we present a model-driven approach to the design of data warehouses. We describe a multidimensional meta-model and specify a set of transformations starting from a Unified Modeling Language (UML) metamodel. In this approach, the UML metamodel and the multidimensional one are both considered as platform-independent models (PIM). The first meta-model is mapped into the second one through transformation rules carried out by the Query/View/Transformation (QVT) language. The proposal is validated by applying our approach to generating a multidimensional schema for a Balanced Scorecard (BSC) DW. We are interested in the BSC perspectives, which are highly linked to the vision and the strategies of an organization.
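
As a simplified illustration of this kind of metamodel-to-metamodel mapping, the following Python sketch stands in for the QVT transformation rules: UML classes become dimensions, their attributes become dimension attributes, and a designated measure-carrying class with its associations becomes the fact table; the class names and the mapping rule are illustrative assumptions, not the paper's rule set.

import json
from dataclasses import dataclass, field

@dataclass
class UmlClass:
    name: str
    attributes: list = field(default_factory=list)
    associations: list = field(default_factory=list)   # names of associated classes

# Simplified source model: a measure-carrying class linked to descriptive classes
model = {
    "Sale": UmlClass("Sale", ["amount", "quantity"], ["Customer", "Product", "Date"]),
    "Customer": UmlClass("Customer", ["name", "segment"]),
    "Product": UmlClass("Product", ["label", "category"]),
    "Date": UmlClass("Date", ["day", "month", "year"]),
}

def to_multidimensional(model, fact_class):
    """Transformation rule: fact_class -> fact table, associated classes -> dimensions."""
    fact = model[fact_class]
    return {
        "fact": {"name": fact.name, "measures": fact.attributes,
                 "dimension_keys": [f"{d.lower()}_id" for d in fact.associations]},
        "dimensions": [{"name": d, "attributes": model[d].attributes}
                       for d in fact.associations],
    }

print(json.dumps(to_multidimensional(model, "Sale"), indent=2))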

Keywords: data warehouse, meta-model, model-driven architecture, transformation, UML

Procedia PDF Downloads 160
26340 The Effect of Six Weeks Aerobic Training and Taxol Consumption on Interleukin 8 and Plasminogen Activator Inhibitor-1 on Mice with Cervical Cancer

Authors: Alireza Barari, Maryam Firoozi, Maryam Ebrahimzadeh, Romina Roohan Ardeshiri, Maryam Kamarloeei

Abstract:

Background: The purpose of this study was to evaluate the effect of six weeks of aerobic training and taxol consumption on interleukin-8 and plasminogen activator inhibitor-1 (PAI-1) in mice with cervical cancer. Material and method: In this experimental study, 40 female C57 mice, eight weeks old, were randomly divided into 4 groups (cancer, cancer + taxol supplement, cancer + training, and cancer + training + taxol supplement), with 10 mice in each group. Cancerous tumours were implanted under the skin of the upper pelvis. The training groups completed an endurance training protocol of 3 sessions per week, 50 minutes per session, at a speed of 14-18 m/s, for six weeks. A dose of 60 mg/kg/day of a pure extract of Taxol was injected intraperitoneally. Data were analyzed by t-test, one-way ANOVA and post hoc Bonferroni tests at the significance level P < 0.05. Results: The results showed a significant difference between the mean values of interleukin-8 (P < 0.05, F = 12.25) and of plasminogen activator inhibitor-1 (P < 0.05, P = 0.10737) among the four groups. A significance level of less than 0.05 in the Tukey test for both variables also showed a significant difference between the "control" group and the supplemented "exercise" group; that is, six weeks of aerobic training along with taxol has a significant effect on the levels of plasminogen activator inhibitor-1 and interleukin-8 in mice with cervical cancer. Conclusion: Considering the effect of training on these variables, this type of exercise can be used as a complementary therapeutic approach along with other therapies for cervical cancer.

Keywords: cervical cancer, taxol, endurance training, interleukin 8, plasminogen activator inhibitor-1

Procedia PDF Downloads 183
26339 Experimental Analysis of Supersonic Combustion Induced by Shock Wave at the Combustion Chamber of the 14-X Scramjet Model

Authors: Ronaldo de Lima Cardoso, Thiago V. C. Marcos, Felipe J. da Costa, Antonio C. da Oliveira, Paulo G. P. Toro

Abstract:

The 14-X is a strategic project of the Brazilian Air Force Command to develop a technological demonstrator of a hypersonic air-breathing propulsion system based on supersonic combustion, programmed to fly in the Earth's atmosphere at an altitude of 30 km and Mach number 10. The 14-X is under development at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics of the Institute of Advanced Studies. The program began in 2007 and was planned in three stages: development of the waverider configuration, development of the scramjet configuration and, finally, the ground tests in the hypersonic shock tunnel T3. The installation of the model, based on the scramjet of the 14-X, in the test section of the hypersonic shock tunnel was designed to reproduce and test the flight conditions at the inlet of the combustion chamber. Experimental studies with a hypersonic shock tunnel require special data acquisition techniques. To measure the pressure along the geometry of the experimental model, 30 pressure transducers, model 122A22 from PCB®, were used; the piezoelectric crystals of a pressure transducer produce an electric charge when subjected to pressure variation (PCB® Piezotronics, 2016). The signals of the pressure transducers were read with an oscilloscope. After the studies had begun, we observed that the pressure inside the combustion chamber was lower than expected, and one solution to raise the pressure in the combustion chamber was to install an obstacle that provides higher temperature and pressure. To confirm whether combustion occurs, the emission spectroscopy technique was selected; the region analyzed by the emission spectroscopy system is the edge of the obstacle installed inside the combustion chamber. The technique was used to observe the emission of OH*, confirming or not the combustion of the mixture between atmospheric air at supersonic speed and the hydrogen fuel inside the combustion chamber of the model. This paper shows the results of the experimental studies of supersonic combustion induced by shock wave performed at the Hypersonic Shock Tunnel T3 using the scramjet 14-X model. The paper also provides important data for the combustion studies using the model based on the engine of the 14-X (second stage of the 14-X program), indicating possible corrections to be made in the next stages of the program or in other models for experimental study.

Keywords: 14-X, experimental study, ground tests, scramjet, supersonic combustion

Procedia PDF Downloads 387
26338 Secured Embedding of Patient’s Confidential Data in Electrocardiogram Using Chaotic Maps

Authors: Butta Singh

Abstract:

This paper presents a chaotic-map-based approach for the secured embedding of a patient's confidential data in the electrocardiogram (ECG) signal. The chaotic map generates predefined locations through the use of selective control parameters. The sample value difference method effectively hides the confidential data in ECG sample pairs at these predefined locations. Evaluation of the proposed method on all 48 records of the MIT-BIH arrhythmia ECG database demonstrates that the embedding does not alter the diagnostic features of the cover ECG. The imperceptibility of the secret data in the stego-ECG is evident through various statistical and clinical performance measures; the statistical metrics comprise the Percentage Root Mean Square Difference (PRD) and the Peak Signal to Noise Ratio (PSNR). Further, a comparative analysis between the proposed method and existing approaches was also performed, and the results clearly demonstrate the superiority of the proposed method.
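
A minimal sketch of the chaotic-location idea, assuming a logistic map as the chaotic generator and a simple parity rule on the difference of each selected sample pair in place of the paper's exact sample value difference method; the key parameters and the synthetic ECG are illustrative.

import numpy as np

def logistic_locations(n_samples, n_bits, x0=0.7, r=3.99):
    """Chaotic (logistic map) selection of distinct sample-pair locations."""
    x, locations = x0, []
    while len(locations) < n_bits:
        x = r * x * (1.0 - x)                      # logistic map iteration
        idx = int(x * (n_samples // 2 - 1)) * 2    # even index -> pair (idx, idx+1)
        if idx not in locations:
            locations.append(idx)
    return locations

def embed(ecg, bits, key=(0.7, 3.99)):
    stego = ecg.copy()
    locs = logistic_locations(len(ecg), len(bits), *key)
    for bit, i in zip(bits, locs):
        diff = stego[i + 1] - stego[i]
        if diff % 2 != bit:                        # encode the bit in the parity of the pair difference
            stego[i + 1] += 1
    return stego

def extract(stego, n_bits, key=(0.7, 3.99)):
    locs = logistic_locations(len(stego), n_bits, *key)
    return [int((stego[i + 1] - stego[i]) % 2) for i in locs]

rng = np.random.default_rng(7)
ecg = rng.integers(900, 1100, size=2000)           # synthetic digitized ECG samples
secret = [1, 0, 1, 1, 0, 0, 1, 0]                  # patient data as bits
stego = embed(ecg, secret)
assert extract(stego, len(secret)) == secret
print("max sample change:", int(np.abs(stego - ecg).max()))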

Keywords: chaotic maps, ECG steganography, data embedding, electrocardiogram

Procedia PDF Downloads 195
26337 Used MATLAB Code to Study the Vehicle Bridge Coupling Vibration Based On the Method of Newmark-β

Authors: Saidi Abdelkrim, Hamouine Abdelmadjid, Abdellatif Megnounif

Abstract:

The study of the interaction between vehicles and bridge structures has become extremely important. Large deflections and vibrations induced by heavy and high-speed vehicles significantly affect the safety and serviceability of bridges. The vibration of a bridge caused by the passage of vehicles is one of the most imperative considerations in the design of a bridge, a common type of transportation structure. A major goal of this study is to create a simplified model of a vehicle-bridge system in MATLAB. The model is then used to study the influence of different parameters on vehicle-bridge vibrations.
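
As an illustration of the Newmark-β time integration underlying the study, the following is a minimal sketch in Python (rather than the MATLAB code of the paper) for a single-degree-of-freedom oscillator excited by a half-sine moving-load pattern; the average-acceleration parameters (β = 1/4, γ = 1/2) are standard, while the bridge properties and load values are illustrative assumptions rather than a coupled vehicle-bridge model.

import numpy as np

def newmark_beta(M, C, K, force, dt, beta=0.25, gamma=0.5, u0=0.0, v0=0.0):
    """Newmark-beta time integration of M*u'' + C*u' + K*u = f(t) (SDOF sketch)."""
    n = len(force)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    u[0], v[0] = u0, v0
    a[0] = (force[0] - C * v0 - K * u0) / M
    k_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    for i in range(n - 1):
        rhs = (force[i + 1]
               + M * (u[i] / (beta * dt**2) + v[i] / (beta * dt) + (1 / (2 * beta) - 1) * a[i])
               + C * (gamma / (beta * dt) * u[i] + (gamma / beta - 1) * v[i]
                      + dt * (gamma / (2 * beta) - 1) * a[i]))
        u[i + 1] = rhs / k_eff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2) - v[i] / (beta * dt)
                    - (1 / (2 * beta) - 1) * a[i])
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return u, v, a

# Bridge mid-span modelled as an SDOF oscillator crossed by a moving load (illustrative numbers)
M, C, K = 2.0e5, 1.0e4, 8.0e6            # kg, N s/m, N/m
dt, T = 0.005, 4.0
t = np.arange(0.0, T, dt)
P, crossing_time = 3.5e5, 2.0            # axle load (N) and time on the bridge (s)
force = P * np.sin(np.pi * t / crossing_time) * (t <= crossing_time)   # half-sine load pattern
u, v, a = newmark_beta(M, C, K, force, dt)
print("peak mid-span deflection: %.1f mm" % (1000 * np.abs(u).max()))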

Keywords: vehicle-bridge interaction, Newmark-β, MATLAB code

Procedia PDF Downloads 618
26336 Characteristics of PET-Based Conductive Fiber

Authors: Chung-Yang Chuang, Chi-Lung Chen, Hui-Min Wang, Chang-Jung Chang

Abstract:

Conductive fiber is the key material for e-textiles and wearable devices. However, the durability of conductive fiber after the washing process is an important issue for its application in e-textiles. It is therefore necessary for conductive fiber to maintain good electrical conductivity throughout the product life cycle. In this research, a PET-based conductive fiber was prepared by continuous coating with silver conductive ink. The conductive fiber showed a low fiber resistance (10⁻¹-10 Ω/cm), and its conductive behaviour still performed well (fiber resistance: 10⁻¹-10 Ω/cm, percentage change in fiber resistance: <60%) after the water wash durability test (AATCC-135, 30 wash cycles). This research provides a better solution to the problem of resistance increase after washing caused by damage to the conductive fiber structure.

Keywords: PET, conductive fiber, e-textiles, wearable devices

Procedia PDF Downloads 101
26335 Investigation and Perfection of Centrifugal Compressor Stages by CFD Methods

Authors: Y. Galerkin, L. Marenina

Abstract:

The stator elements «vane diffuser + crossover + return channel» of stages with different specific speeds were investigated by CFD calculations. A regime parameter was introduced to present the efficiency and loss coefficient performance of all elements together. The flow structure revealed advantages and disadvantages of the design. Flow separation in the crossovers was eliminated by modifying their shape, and the efficiency increased visibly. The calculated CFD performances are in acceptable correlation with those predicted by the engineering design method. The information obtained is useful for better calibration of the design method.

Keywords: vane diffuser, return channel, crossover, efficiency, loss coefficient, inlet flow angle

Procedia PDF Downloads 430
26334 The New Propensity Score Method and Assessment of Propensity Score: A Simulation Study

Authors: Azam Najafkouchak, David Todem, Dorothy Pathak, Pramod Pathak, Joseph Gardiner

Abstract:

Propensity score (PS) methods have recently become the standard analysis tool for causal inference in observational studies where exposure is not randomly assigned and confounding can therefore bias the estimation of the treatment effect on the outcome. Because of the dangers of discretizing continuous variables, the focus of this paper is on how the variation in cut-points or boundaries affects the average treatment effect when the stratification approach to the PS method is used. In this study, we develop a new methodology to improve the efficiency of PS analysis through stratification and a simulation study. We also explore the properties of the empirical distribution of the average treatment effect theoretically, including its asymptotic distribution, variance estimation and 95% confidence intervals.
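
A minimal sketch of propensity score stratification on simulated observational data, assuming scikit-learn for the PS model; the data-generating process, the quintile cut-points (the very boundaries whose variation the paper studies) and the weighting are illustrative choices, not the paper's methodology.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated observational data: confounders X drive both treatment and outcome
n = 5000
X = rng.normal(size=(n, 3))
p_treat = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
T = rng.binomial(1, p_treat)
y = 2.0 * T + X @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=n)   # true effect = 2

# 1) Estimate propensity scores
ps = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]

# 2) Stratify on the PS; the choice of cut-points is exactly the sensitivity studied above
cuts = np.quantile(ps, [0.2, 0.4, 0.6, 0.8])           # quintile boundaries (one possible choice)
stratum = np.digitize(ps, cuts)

# 3) Average treatment effect: size-weighted average of within-stratum mean differences
ate, total = 0.0, 0
for s in np.unique(stratum):
    m = stratum == s
    if T[m].min() == T[m].max():
        continue                                       # skip strata without both treatment groups
    diff = y[m & (T == 1)].mean() - y[m & (T == 0)].mean()
    ate += diff * m.sum()
    total += m.sum()
print("stratified ATE estimate: %.2f" % (ate / total))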

Keywords: propensity score, stratification, empirical distribution, average treatment effect

Procedia PDF Downloads 97
26333 Reliability Analysis of Dam under Quicksand Condition

Authors: Manthan Patel, Vinit Ahlawat, Anshh Singh Claire, Pijush Samui

Abstract:

This paper focuses on the analysis of the quicksand condition for a dam foundation. The quicksand condition occurs in cohesionless soil when the effective stress of the soil becomes zero. In a dam, the saturated sediment may appear quite solid until a sudden change in pressure or a shock initiates liquefaction; this causes the sand to form a suspension and lose strength, resulting in failure of the dam. A soil profile shows different properties at different points, and the values obtained are uncertain, so a reliability analysis is performed. Reliability is defined as the probability of safety of a system under given environmental and loading conditions and is assessed as a reliability index. The reliability analysis of the dam under the quicksand condition is carried out using Gaussian process regression (GPR): the reliability index and the factor of safety relating to liquefaction of the soil are analysed using GPR. The results of the reliability analysis by GPR are compared to those of the conventional method, and it is demonstrated that applying GPR to the probabilistic analysis reduces the computational time and effort.
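
A minimal sketch of a GPR-based reliability workflow, assuming scikit-learn's Gaussian process regressor as a surrogate for an "expensive" factor-of-safety model and Monte Carlo sampling on the surrogate; the limit state (critical hydraulic gradient for upward seepage), the parameter distributions and the training design are illustrative assumptions, not the paper's dam model.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def factor_of_safety(params):
    """Illustrative limit state: FS = i_critical / i_exit for upward seepage (quicksand)."""
    gamma_sat, hydraulic_gradient = params[:, 0], params[:, 1]
    i_crit = (gamma_sat - 9.81) / 9.81          # critical hydraulic gradient
    return i_crit / hydraulic_gradient

# A small design-of-experiments sample used to train the GPR surrogate
X_train = np.column_stack([rng.uniform(17.0, 21.0, 40),    # saturated unit weight, kN/m^3
                           rng.uniform(0.4, 1.2, 40)])      # exit hydraulic gradient
y_train = factor_of_safety(X_train)
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 0.2]),
                               normalize_y=True).fit(X_train, y_train)

# Monte Carlo on the cheap surrogate instead of the full model
X_mc = np.column_stack([rng.normal(19.0, 0.8, 20_000),
                        rng.normal(0.8, 0.15, 20_000)])
fs = gpr.predict(X_mc)
pf = np.mean(fs < 1.0)                           # probability of failure
beta = -norm.ppf(pf)                             # reliability index
print("mean FS = %.2f, Pf = %.4f, beta = %.2f" % (fs.mean(), pf, beta))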

Keywords: factor of safety, GPR, reliability index, quicksand

Procedia PDF Downloads 482
26332 Detection Efficient Enterprises via Data Envelopment Analysis

Authors: S. Turkan

Abstract:

In this paper, data on Turkey's Top 500 Industrial Enterprises in 2014 were analyzed by data envelopment analysis. Data envelopment analysis is used to detect efficient decision-making units, such as universities, hospitals and schools, on the basis of their inputs and outputs. The decision-making units in this study are enterprises. To detect efficient enterprises, several financial ratios related to the productivity of the enterprises were chosen as inputs and outputs. The efficient foreign weighted owned capital enterprises were detected via the super-efficiency model. According to the results, Mercedes-Benz is the most efficient foreign weighted owned capital enterprise in Turkey.
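
A minimal sketch of the basic input-oriented CCR data envelopment analysis model solved as a linear program with SciPy; the financial ratios below are invented illustrative figures, not the Top 500 data, and the plain CCR model is shown rather than the super-efficiency variant used in the study.

import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, unit):
    """Input-oriented CCR DEA efficiency of one decision-making unit (DMU)."""
    n, m = inputs.shape          # n DMUs, m inputs
    s = outputs.shape[1]         # s outputs
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta
    c = np.r_[1.0, np.zeros(n)]
    A_ub, b_ub = [], []
    for i in range(m):           # sum_j lambda_j * x_ij <= theta * x_i,unit
        A_ub.append(np.r_[-inputs[unit, i], inputs[:, i]])
        b_ub.append(0.0)
    for r in range(s):           # sum_j lambda_j * y_rj >= y_r,unit
        A_ub.append(np.r_[0.0, -outputs[:, r]])
        b_ub.append(-outputs[unit, r])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[0]

# Illustrative financial ratios for 5 enterprises:
# inputs, e.g. assets and employees; outputs, e.g. sales and profit
inputs = np.array([[20, 300], [30, 150], [40, 200], [20, 200], [10, 100.0]])
outputs = np.array([[100, 10], [90, 12], [120, 9], [80, 8], [50, 5.0]])
for j in range(len(inputs)):
    print(f"DMU {j}: efficiency = {ccr_efficiency(inputs, outputs, j):.3f}")

The super-efficiency variant mentioned in the abstract would additionally exclude the evaluated DMU's own column from the lambda terms, which allows efficient units to score above 1 and thus be ranked among themselves.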

Keywords: data envelopment analysis, super efficiency, logistic regression, financial ratios

Procedia PDF Downloads 324