Search results for: Model Driven Architecture
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8373

4713 The Effectiveness of Mineral Fertilization of Winter Wheat by Nitrogen in the Soil and Climatic Conditions in the Czech Republic

Authors: Václav Voltr, Jan Leština

Abstract:

The study is based on a survey of 500 observations from the years 2002-2010, selected according to homogeneity of land cover, in which 1090 revenues were evaluated. A multicriteria regression function was obtained for the achieved yields of winter wheat, depending on the major factors influencing nitrogen consumption. The coefficient of determination of the established model is 0.722. The efficiency of fertilization is increased by the supply of organic nutrients, tillage, soil pH, past weather, the humus content in the subsoil and the content of grains up to 0.001 mm. The efficiency is decreased mainly by the total dose of mineral nitrogen, even when divided into multiple doses, by the proportion of loamy particles up to 0.01 mm, and by rainy or, conversely, dry weather during the growing season. Nitrogen efficiency was found to be lowest on undeveloped soils and highest on chernozem and alluvial soils.

Keywords: Nitrogen efficiency, winter wheat, regression model

4712 Adaptive Shape Parameter (ASP) Technique for Local Radial Basis Functions (RBFs) and Their Application for Solution of Navier-Stokes Equations

Authors: A. Javed, K. Djidjeli, J. T. Xing

Abstract:

The concept of adaptive shape parameters (ASP) is presented for the solution of the incompressible Navier-Stokes equations using mesh-free local Radial Basis Functions (RBFs). The aim is to avoid ill-conditioning of the coefficient matrices of RBF weights and inaccuracies in RBF interpolation resulting from a non-optimized shape of the basis functions in cases where data points (or nodes) are not distributed uniformly throughout the domain. Unlike conventional approaches, which assume globally similar values of the RBF shape parameter, the presented ASP technique calculates the shape parameter exclusively for each data point (or node) based on the distribution of data points within its own influence domain. This ensures interpolation accuracy while still maintaining a well-conditioned system of equations for the RBF weights. The performance and accuracy of the ASP technique have been tested by evaluating derivatives and the Laplacian of a known function using RBFs in finite difference mode (RBF-FD), with and without the use of adaptivity in the shape parameters. The application of adaptive shape parameters to the solution of the incompressible Navier-Stokes equations is demonstrated by solving the lid-driven cavity flow problem on a mesh-free domain using RBF-FD. The results are compared for fixed and adaptive shape parameters. Improved accuracy is achieved with the use of ASP in RBF-FD, especially in regions where larger gradients of the field variables exist.
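
To illustrate the idea behind per-node shape parameters, the sketch below builds RBF-FD weights for the Laplacian with a Gaussian RBF whose shape parameter is scaled by the local node spacing. This is a minimal illustration under assumed choices (a Gaussian basis, a 9-node stencil, and the scaling rule eps_i = alpha / mean local spacing), not the authors' formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def gaussian_rbf(r, eps):
    """Gaussian RBF phi(r) = exp(-(eps*r)^2)."""
    return np.exp(-(eps * r) ** 2)

def gaussian_rbf_laplacian(r, eps):
    """2D Laplacian of the Gaussian RBF evaluated at distance r."""
    return 4.0 * eps**2 * ((eps * r) ** 2 - 1.0) * np.exp(-(eps * r) ** 2)

def rbf_fd_laplacian_weights(nodes, n_neighbors=9, alpha=1.0):
    """For each node, build RBF-FD weights for the Laplacian on a local
    stencil, using a node-specific (adaptive) shape parameter."""
    tree = cKDTree(nodes)
    weights, stencils = [], []
    for xi in nodes:
        dist, idx = tree.query(xi, k=n_neighbors)
        local = nodes[idx]
        # Adaptive shape parameter: scale with the local node spacing
        # (assumed rule for illustration; the paper derives its own).
        eps_i = alpha / np.mean(dist[1:])
        r = np.linalg.norm(local[:, None, :] - local[None, :, :], axis=-1)
        A = gaussian_rbf(r, eps_i)               # local interpolation matrix
        b = gaussian_rbf_laplacian(dist, eps_i)  # operator applied to RBFs at xi
        weights.append(np.linalg.solve(A, b))
        stencils.append(idx)
    return weights, stencils

# Usage: approximate the Laplacian of f(x, y) = x^2 + y^2 (exact value 4).
rng = np.random.default_rng(0)
pts = rng.random((400, 2))
f = pts[:, 0] ** 2 + pts[:, 1] ** 2
w, st = rbf_fd_laplacian_weights(pts)
lap_f = np.array([wi @ f[si] for wi, si in zip(w, st)])
```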

Keywords: CFD, Meshless Particle Method, Radial Basis Functions, Shape Parameters

4711 The Excess Loop Delay Calibration in Bandpass Continuous-Time Delta-Sigma Modulators Based on a Q-Enhanced LC Filter

Authors: Sorore Benabid

Abstract:

Q-enhanced LC filters are the most widely used architecture in bandpass (BP) continuous-time (CT) delta-sigma (ΣΔ) modulators because of their operation at high frequencies, their higher linearity compared with active filters, and the high quality factor obtained by the Q-enhancement technique. This technique consists of using a negative resistance that compensates for the ohmic losses in the on-chip inductor. However, it introduces a zero in the filter transfer function, which affects the modulator performance in terms of dynamic range (DR), stability and in-band noise (signal-to-noise ratio, SNR). In this paper, we study the effect of this zero and demonstrate that a calibration of the excess loop delay (ELD) is required to ensure the best performance of the modulator. System-level simulations are carried out for a second-order BP CT ΣΔ modulator at a center frequency of 300 MHz. Simulation results indicate that the optimal ELD should be reduced by 13% to achieve the maximum SNR and DR compared to the ideal LC-based ΣΔ modulator.

Keywords: Continuous-time bandpass delta-sigma modulators, excess loop delay, on-chip inductor, Q-enhanced LC filter.

4710 Interference Reduction Technique in Multistage Multiuser Detector for DS-CDMA System

Authors: Lokesh Tharani, R. P. Yadav

Abstract:

This paper presents results related to an interference reduction technique in a multistage multiuser detector for an asynchronous DS-CDMA system. To meet the real-time requirements of asynchronous multiuser detection, a bit-streaming, cascaded architecture is used. Asynchronous multiuser detection involves block-based computations and matrix inversions. The paper covers iterative suboptimal schemes that have been studied to decrease the computational complexity, eliminate the need for matrix inversions, decrease the execution time, and reduce the memory requirements, and that use a joint estimation and detection process which gives better performance than independent parameter estimation methods. The stages of the iteration are cascaded, and bits are processed in a streaming fashion. The simulation has been carried out for an asynchronous DS-CDMA system by varying one parameter, i.e., the number of users. The simulation results show that the system achieves its optimum bit error rate (BER) at the third stage for 15 users.
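
As a simplified illustration of the interference cancellation stages mentioned above, the sketch below runs multistage parallel interference cancellation (PIC) on a toy synchronous CDMA model with a known cross-correlation matrix; the asynchronous, bit-streaming implementation in the paper is more involved, and the matrix R, amplitudes A and noise level here are assumptions for demonstration only.

```python
import numpy as np

def multistage_pic(y, R, A, n_stages=3):
    """Multistage parallel interference cancellation for a toy synchronous
    CDMA model: y = R A b + n, with matched-filter outputs y, cross-correlation
    matrix R (unit diagonal) and diagonal amplitude matrix A.
    Returns the bit decisions after each stage."""
    b_hat = np.sign(y)                      # stage 0: conventional detector
    decisions = [b_hat]
    for _ in range(n_stages):
        # Reconstruct and subtract the multiple-access interference (MAI)
        # seen by every user in parallel, then re-decide the bits.
        mai = (R - np.eye(len(y))) @ (A @ b_hat)
        b_hat = np.sign(y - mai)
        decisions.append(b_hat)
    return decisions

# Usage with 3 users and modest cross-correlations (illustrative values).
rng = np.random.default_rng(1)
R = np.array([[1.0, 0.30, 0.20],
              [0.30, 1.0, 0.25],
              [0.20, 0.25, 1.0]])
A = np.eye(3)
b = rng.choice([-1.0, 1.0], size=3)
y = R @ A @ b + 0.1 * rng.standard_normal(3)
print(multistage_pic(y, R, A)[-1], b)
```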

Keywords: Multi-user detection (MUD), multiple access interference (MAI), near-far effect, decision feedback detector, successive interference cancellation (SIC) detector and parallel interference cancellation (PIC) detector.

4709 A Study of the Role of Perceived Risk and User Characteristics in Internet Purchase Intention

Authors: Ali Hajiha, Farhad Ghaffari, Nooshin Gholamali Tehrani

Abstract:

This study aims at investigating the empirical relationships of the user characteristics (risk preference, internet preference, and internet knowledge) and of customers' perceived risk with internet purchase intention. In order to test the relationships between the variables of the model, 174 questionnaires were collected from students with previous online shopping experience. For data analysis, confirmatory factor analysis (CFA) and structural equation modeling (SEM) were used. The test results show that perceived risk affects internet purchase intention: an increase or decrease in perceived risk influences purchase intention when the customer shops online. Other factors, such as internet preference, knowledge of the internet, and risk preference, also affect internet purchase intention.

Keywords: Perceived risk, Internet preference, Internet knowledge, Risk preference, Internet purchase intention

4708 Generic Data Warehousing for Consumer Electronics Retail Industry

Authors: S. Habte, K. Ouazzane, P. Patel, S. Patel

Abstract:

The dynamic and highly competitive nature of the consumer electronics retail industry means that businesses in this industry face a range of decision-making challenges relating to pricing, inventory control, consumer satisfaction and product offerings. To overcome the challenges facing retailers and create opportunities, we propose a generic data warehousing solution that can be applied to a wide range of consumer electronics retailers with minimal configuration. The solution includes a dimensional data model, a template SQL script, a high-level architectural description, an ETL tool developed using C#, a set of APIs, and data access tools. It has been successfully applied at ASK Outlets Ltd (UK), resulting in improved productivity and enhanced sales growth.

Keywords: Consumer electronics retail, dimensional data model, data analysis, generic data warehousing, reporting.

4707 Image Modeling Using Gibbs-Markov Random Field and Support Vector Machines Algorithm

Authors: Refaat M. Mohamed, Ayman El-Baz, Aly A. Farag

Abstract:

This paper introduces a novel approach to estimating the clique potentials of Gibbs-Markov random field (GMRF) models using the Support Vector Machines (SVM) algorithm and Mean Field (MF) theory. The proposed approach is based on modeling the potential function associated with each clique shape of the GMRF model as a Gaussian-shaped kernel. In turn, the energy function of the GMRF takes the form of a weighted sum of Gaussian kernels. This formulation of the GMRF model motivates the use of the SVM, with Mean Field theory applied for its learning, to estimate the energy function. The approach has been tested on synthetic texture images and is shown to provide satisfactory results in retrieving the synthesizing parameters.

Keywords: Image Modeling, MRF, Parameters Estimation, SVM Learning.

4706 Biosorption of Azo Dye Reactive Black B onto Nonviable Biomass of Cladosporium cladosporioides LM1: Thermodynamic, Kinetic and Equilibrium Modeling

Authors: L. A. S. Dionel, B. A. P. Santos, V. C. P. Lopes, L. G. Vasconcelos, M. A. Soares, E. B. Morais

Abstract:

This study investigated the biosorption of the azo dye Reactive Black B (RBB) from aqueous solution using nonviable biomass of Cladosporium cladosporioides LM1. The biosorption experiments were carried out in batch mode under different conditions of initial pH, contact time, temperature, initial dye concentration and biosorbent dosage. A higher removal rate of RBB was obtained at pH 2. The biosorption data were successfully described by the pseudo-second-order kinetic model and the Langmuir isotherm model, with the maximum monolayer biosorption capacity estimated at 71.43 mg/g. The values of the thermodynamic parameters ∆G°, ∆H° and ∆S° indicated that the biosorption of RBB onto the fungal biomass was spontaneous and exothermic in nature. It can be concluded that nonviable biomass of Cladosporium cladosporioides LM1 may be an attractive low-cost biosorbent for the removal of the azo dye RBB from aqueous solution.
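
For reference, the two models reported to describe the data have the standard textbook forms below (general forms, not equations reproduced from the paper), where q_e is the amount biosorbed at equilibrium (mg/g), C_e the equilibrium dye concentration (mg/L), q_max the monolayer capacity, K_L the Langmuir constant, q_t the amount biosorbed at time t, and k_2 the pseudo-second-order rate constant:

```latex
% Langmuir isotherm (equilibrium) and pseudo-second-order kinetics (linearized form)
q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e},
\qquad
\frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e}
```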

Keywords: Color removal, isotherms and kinetics models, thermodynamic studies, fungus.

4705 Bail-in Capital: The New Box

Authors: Manu Krishnan, Phil Jacoby

Abstract:

In this paper, we discuss the paradigm shift in bank capital from the "gone concern" to the "going concern" mindset. We then propose a methodology for pricing a product of this shift called Contingent Capital Notes ("CoCos"). The Merton model can determine a price for credit risk by treating the firm's equity value as a call option on its assets. Our pricing methodology for CoCos also uses the credit spread implied by the Merton model in a subsequent derivative form created by John Hull et al. Here, a market-implied asset volatility is calculated using observed market CDS spreads. This implied asset volatility is then used to estimate the probability of triggering a predetermined "contingency event" given the distance-to-trigger (DTT). The paper then investigates the effect of varying DTTs and recovery assumptions on the CoCo yield. We conclude with an investment rationale.
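
Under the lognormal asset dynamics of the Merton framework, the probability of breaching a predetermined trigger level B at horizon T can be written in the familiar distance-to-default style shown below. This is the standard textbook form, given only to make the role of the CDS-implied asset volatility and the distance-to-trigger explicit; it is not a formula quoted from the paper.

```latex
P\left(A_T < B\right)
  = \Phi\!\left(\frac{\ln\left(B/A_0\right) - \left(\mu - \tfrac{1}{2}\sigma_A^{2}\right)T}
                     {\sigma_A\sqrt{T}}\right)
```

Here A_0 is the current asset value, mu the asset drift, sigma_A the implied asset volatility and Phi the standard normal CDF; a larger distance-to-trigger corresponds to a smaller probability of the contingency event.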

Keywords: CoCo, Contingent capital, Bank capital, Tier 1 capital

4704 A Game Design Framework for Vocational Education

Authors: Heide Lukosch, Roy Van Bussel, Sebastiaan Meijer

Abstract:

Serious games have proven to be a useful instrument for engaging learners and increasing motivation. Nevertheless, a broadly accepted, practical instructional design approach to serious games does not exist. In this paper, we introduce the use of an instructional design model that has not yet been applied to serious games and that has some advantages compared to other design approaches. We present a case from mechanics and mechatronics education to illustrate the close match between the timing and role of knowledge and information that the instructional design model prescribes and how this has been translated into a rigidly structured game design. The structured approach answers the target group's need for applicable knowledge. It combines the advantages of simulations with the strengths of entertainment games to foster learners' motivation in the best possible way. A prototype of the game will be evaluated with a well-established evaluation method in an advanced test setting including a test group and a control group.

Keywords: Serious Gaming, Simulation, Complex Learning.

4703 What is the Key Element for the Territory's State of Development?

Authors: J. Lonska, V. Boronenko

Abstract:

The result of the process of a territory's development is the territory's state of development (TSoD), which is aimed at providing and improving people's living conditions. The authors propose measuring the TSoD according to their own model. Using the available statistical data on the values of the model's elements, the authors show empirically which element mainly determines the TSoD. The findings of the research show that the key elements of the TSoD are the "material welfare of people" and "people's health". A deeper statistical analysis of the correlation between these elements revealed that it is not necessary for a country to focus solely on increasing the material growth of a territory, because a relatively high life expectancy at birth can also be ensured by much more modest material resources. On the other hand, the economic feedback of a longer lifespan in countries with lower material performance is also relatively low.

Keywords: Development indices, health, territory's state of development, wealth.

4702 Smart Energy Consumers: An Empirical Investigation on the Intention to Adopt Innovative Consumption Behaviour

Authors: Cecilia Perri, Vincenzo Corvello

Abstract:

The aim of the present study is to investigate the determinants of consumers' intention to adopt Smart Grid solutions and technologies. Ajzen's Theory of Planned Behaviour (TPB) model is applied and tested to explain the formation of such adoption intention. An exogenous variable accounting for individuals' resistance to change was added to the basic model. The elicitation study allowed salient modal beliefs to be obtained, which were used, with the support of the literature, to design the questionnaire. After the screening phase, data collected from the main survey were analysed to evaluate the measurement model's reliability and validity. Consistent with the theory, the results of the structural equation analysis revealed that attitude, subjective norm, and perceived behavioural control positively affected the adoption intention. Specifically, the variable with the highest estimated loading factor was found to be perceived behavioural control, and the most important belief related to each construct was determined (e.g., energy saving was observed to be the most significant belief linked with attitude). Further investigation indicated that the added exogenous variable has a negative influence on intention; this finding partially confirmed the hypothesis, since the influence was indirect: the relationship was mediated by attitude. Implications and suggestions for future research are discussed.

Keywords: Adoption of innovation, consumers behaviour, energy management, smart grid, theory of planned behaviour.

4701 Mean Velocity Modeling of Open-Channel Flow with Submerged Rigid Vegetation

Authors: M. Morri, A. Soualmia, P. Belleudy

Abstract:

Vegetation affects the mean and turbulent flow structure and may increase flood risk and sediment transport. It is therefore important to develop analytical approaches for the bed shear stress on a vegetated bed in order to predict the resistance caused by vegetation. In recent years, both experimental and numerical models have been developed to describe the effects of submerged vegetation on open-channel flow. In this paper, different analytical models are compared and tested using deviation criteria to explore their capacity for predicting the mean velocity and to select a suitable one to be applied to real rivers. The comparison between data measured in a vegetated flume and the simulated mean velocities indicated good performance in the case of rigid vegetation, with the Huthoff model showing the best agreement, a high coefficient of determination (R² = 80%), and the smallest error in the prediction of the average velocities.

Keywords: Analytic Models, Comparison, Mean Velocity, Vegetation.

4700 Network Based Intrusion Detection and Prevention Systems in IP-Level Security Protocols

Authors: R. Kabila

Abstract:

IPsec has become a standard information security technology throughout the Internet society. It provides a well-defined architecture that takes into account confidentiality, authentication, integrity, secure key exchange and protection against replay attacks. For connectionless security services on a per-packet basis, the IETF IPsec Working Group has standardized two extension headers (AH and ESP) together with key exchange and authentication protocols. It is also working on a lightweight key exchange protocol and MIBs for security management. IPsec has been implemented on various platforms in IPv4 and IPv6, gradually replacing old application-specific security mechanisms. IPv4 and IPv6 are not directly compatible, so programs and systems designed for one standard cannot communicate with those designed for the other. We propose the design and implementation of a controlled Internet security system, an IPsec-based Internet information security system for IPv4/IPv6 networks, and we present performance measurement data. With features such as improved scalability and routing, security, ease of configuration, and the higher performance of IPv6, the controlled Internet security system provides a consistent security policy and integrated security management on an IPsec-based Internet security system.

Keywords: IDS, IPS, IP-Sec, IPv6, IPv4, VPN.

4699 Parameter Estimation for Viewing Rank Distribution of Video-on-Demand

Authors: Hyoup-Sang Yoon

Abstract:

Video-on-demand (VOD) systems are designed using content delivery networks (CDNs) to minimize the overall operational cost and to maximize scalability. Estimation of the viewing pattern (i.e., the relationship between the number of viewings and the ranking of VOD contents) plays an important role in minimizing the total operational cost and maximizing the performance of VOD systems. In this paper, we have analyzed a large body of commercial VOD viewing data and found that the viewing rank distribution fits well with the parabolic fractal distribution. A weighted linear model fitting function is used to estimate the parameters (coefficients) of the parabolic fractal distribution. This paper presents an analytical basis for designing an optimal hierarchical VOD content distribution system in terms of its cost and performance.
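
A compact way to see the fitting step: the parabolic fractal law states that the logarithm of the number of viewings is a quadratic function of the logarithm of the rank, so the coefficients can be obtained with a weighted polynomial least-squares fit. The sketch below is a minimal illustration with synthetic data and an assumed 1/rank weighting; the paper's weighted linear model fitting function may differ.

```python
import numpy as np

def fit_parabolic_fractal(views, weights=None):
    """Fit log10(views) = a + b*log10(rank) + c*log10(rank)^2 by weighted
    least squares. 'views' must be sorted in descending order so that
    rank 1 is the most viewed title. Returns (a, b, c)."""
    rank = np.arange(1, len(views) + 1)
    x = np.log10(rank)
    y = np.log10(views)
    if weights is None:
        # Example weighting: emphasize popular titles (assumed scheme).
        weights = 1.0 / rank
    # np.polyfit minimizes sum(|w_i * (y_i - p(x_i))|^2), hence sqrt(weights).
    c, b, a = np.polyfit(x, y, deg=2, w=np.sqrt(weights))
    return a, b, c

# Usage with synthetic viewing counts roughly following the law.
rng = np.random.default_rng(2)
r = np.arange(1, 1001)
true_a, true_b, true_c = 5.0, -0.8, -0.1
views = 10 ** (true_a + true_b * np.log10(r) + true_c * np.log10(r) ** 2
               + 0.05 * rng.standard_normal(r.size))
print(fit_parabolic_fractal(views))
```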

Keywords: VOD, CDN, parabolic fractal distribution, viewing rank, weighted linear model fitting

4698 Transient Stress Analysis on Medium Modules Spur Gear by Using Mode Superposition Technique

Authors: Ali Raad Hassan

Abstract:

The natural frequencies and dynamic response of a spur gear sector are investigated using a two-dimensional finite element model that offers significant advantages for dynamic gear analyses. The gear teeth are analyzed for different operating speeds. A primary feature of this modeling is the determination of mesh forces using a detailed contact analysis for each time step as the gears roll through the mesh. ANSYS software has been used on the proposed model to find the natural frequencies by the Block Lanczos technique and the displacements and dynamic stresses by the transient mode superposition method. The effect of the rotational speed of the gear on the dynamic response of the gear tooth has been studied and design limits have been discussed.

Keywords: Spur gear, Natural frequency, transient analysis, Mode superposition technique.

4697 Robust Regression and its Application in Financial Data Analysis

Authors: Mansoor Momeni, Mahmoud Dehghan Nayeri, Ali Faal Ghayoumi, Hoda Ghorbani

Abstract:

This research aims to describe the application of robust regression and its advantages over the least squares regression method in analyzing financial data. To do this, the relationship between earnings per share, book value of equity per share and share price (the price model), and the relationship between earnings per share, the annual change in earnings per share and stock return (the return model), are examined using both robust and least squares regressions, and the outcomes are compared. Comparing the results of the robust regression and the least squares regression shows that the former provides the possibility of a better and more realistic analysis owing to eliminating or reducing the contribution of outliers and influential data. Therefore, robust regression is recommended for obtaining more precise results in financial data analysis.
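
The contrast between the two estimators can be sketched with statsmodels as below, using synthetic data shaped like the price model; the variable names mirror the abstract, and the data, injected outliers and Huber norm are illustrative assumptions, not the authors' dataset or settings.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic "price model" style data: share price explained by earnings per
# share (EPS) and book value per share (BVPS), with a few injected outliers.
rng = np.random.default_rng(3)
n = 200
eps_ = rng.normal(2.0, 0.5, n)
bvps = rng.normal(10.0, 2.0, n)
price = 1.0 + 3.0 * eps_ + 0.8 * bvps + rng.normal(0, 1.0, n)
price[:5] += 40.0                      # influential outliers

X = sm.add_constant(np.column_stack([eps_, bvps]))

ols_fit = sm.OLS(price, X).fit()
# Robust regression (M-estimation with Huber's T norm) down-weights the
# outliers instead of letting them dominate the fit.
rlm_fit = sm.RLM(price, X, M=sm.robust.norms.HuberT()).fit()

print("OLS coefficients   :", ols_fit.params)
print("Robust coefficients:", rlm_fit.params)
```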

Keywords: Financial data analysis, Influential data, Outliers, Robust regression.

4696 Software Development for the Kinematic Analysis of a Lynx 6 Robot Arm

Authors: Baki Koyuncu, Mehmet Güzel

Abstract:

The kinematics of manipulators is a central problem in the automatic control of robot manipulators. The theoretical background for the kinematic analysis of the 5-DOF Lynx-6 educational robot arm is presented in this paper. The kinematics problem is defined as the transformation from the Cartesian space to the joint space and vice versa. The Denavit-Hartenberg (D-H) representation is used to model the robot links and joints in this study. Both forward and inverse kinematics solutions for this educational manipulator are presented, and an effective method is suggested to reduce the number of multiple solutions in the inverse kinematics. A visual software package, named MSG, has also been developed for testing the motional characteristics of the Lynx-6 robot arm. The kinematics solutions of the software package were found to be identical to the robot arm's physical motion behavior.
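
A minimal sketch of the forward-kinematics step using the standard D-H convention is shown below; the 5-DOF table entries are placeholders for illustration, not the actual Lynx-6 link parameters, and the inverse kinematics and the MSG package are not reproduced here.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    """Chain the per-joint D-H transforms; returns the 4x4 base-to-end-effector
    transform. dh_table rows are (d, a, alpha) for each joint."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Illustrative 5-DOF table (link lengths/offsets are placeholder values).
dh_table = [
    (0.07, 0.00, np.pi / 2),
    (0.00, 0.12, 0.0),
    (0.00, 0.12, 0.0),
    (0.00, 0.00, np.pi / 2),
    (0.13, 0.00, 0.0),
]
q = np.radians([0, 45, -30, 20, 0])
T = forward_kinematics(q, dh_table)
print("End-effector position:", T[:3, 3])
```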

Keywords: Lynx 6, robot arm, forward kinematics, inverse kinematics, software, D-H parameters, 5 DOF, SSC-32, simulator.

4695 A Metric-Set and Model Suggestion for Better Software Project Cost Estimation

Authors: Murat Ayyıldız, Oya Kalıpsız, Sırma Yavuz

Abstract:

Software project effort estimation is frequently seen as complex and expensive by individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure it, and you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. The selection of the metric set has a vital role in software cost estimation studies, yet its importance has been ignored, especially in neural network based studies. In this study, we have explored the reasons for those disappointing results and implemented different neural network models using augmented new metrics. The results obtained are compared with previous studies using traditional metrics. To be able to make comparisons, two types of data have been used. The first part of the data is taken from the Constructive Cost Model (COCOMO'81), which is commonly used in previous studies, and the second part was collected according to the new metrics in a leading international company in Turkey. The accuracy of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on the Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is the fact that data collection requires time and care. To make more thorough use of the samples collected, the k-fold cross-validation method is also implemented. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied in software cost estimation studies with success.
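
The modeling pipeline described above, an MLP regressor evaluated with k-fold cross-validation, can be sketched with scikit-learn as follows; the synthetic features, network size and scoring choice are assumptions for illustration, not the paper's metric set or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import KFold, cross_val_score

# Synthetic stand-in for a cost-estimation dataset: rows are projects,
# columns are metrics, target is effort (values invented for illustration).
rng = np.random.default_rng(4)
X = rng.normal(size=(60, 6))
effort = 5.0 + X @ np.array([3.0, 1.5, 0.8, 0.0, 2.2, 0.5]) \
         + 0.3 * rng.standard_normal(60)

# MLP with feature scaling; a small hidden layer because the sample is small.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)

# k-fold cross-validation to make fuller use of the limited samples.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, effort, cv=cv,
                         scoring="neg_mean_absolute_error")
print("Mean absolute error per fold:", -scores)
```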

Keywords: Software Metrics, Software Cost Estimation, Neural Network.

4694 Adoption and Diffusion of E-Government Services in India: The Impact of User Demographics and Service Quality

Authors: Sayantan Khanra, Rojers P. Joseph

Abstract:

This study attempts to analyze the impact of demography and service quality on the adoption and diffusion of e-Government services in the context of India. The objective of this paper is to study the users' perception about e-Government services and investigate the key variables that are most salient to the Indian populace. At the completion of this study, a research model that would help to understand the relationship involving the demographic variables and service quality dimensions, and the willingness to adopt e-Government services is expected to be developed. Dedicated authorities, particularly those in developing economies, may use that model or its augmented versions to design and update e-Government services and promote their use among citizens. After all, enhanced public participation is required to improve efficiency, engagement and transparency in the implementation of the aforementioned services.

Keywords: Adoption and diffusion of e-Government services, demographic variables, hierarchical regression analysis, service quality dimensions.

4693 Deep-Learning Based Approach to Facial Emotion Recognition Through Convolutional Neural Network

Authors: Nouha Khediri, Mohammed Ben Ammar, Monji Kherallah

Abstract:

Recently, facial emotion recognition (FER) has become increasingly important for understanding the state of the human mind. However, accurately classifying emotion from the face is a challenging task. In this paper, we present a facial emotion recognition approach named CV-FER that benefits from deep learning, especially CNNs and VGG16. First, the data are pre-processed with data cleaning and data rotation. Then, we augment the data and proceed to our FER model, which contains five convolution layers and five pooling layers. Finally, a softmax classifier is used in the output layer to recognize emotions. The paper also reviews related work on facial emotion recognition based on deep learning. Experiments show that our model outperforms the other methods using the same FER2013 database and yields a recognition rate of 92%. We also put forward some suggestions for future work.
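
A model in the spirit of the one described (five convolution layers, each followed by pooling, and a softmax output over the seven FER2013 emotion classes) could be sketched with Keras as below; the filter counts, kernel sizes, dense head and optimizer are illustrative assumptions rather than the paper's exact CV-FER configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_cv_fer_like(input_shape=(48, 48, 1), n_classes=7):
    """Five Conv2D + MaxPooling2D stages ending in a softmax classifier."""
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for filters in (32, 64, 128, 128, 256):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cv_fer_like()
model.summary()
```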

Keywords: CNN, deep-learning, facial emotion recognition, machine learning.

4692 Jitter Transfer in High Speed Data Links

Authors: Tsunwai Gary Yip

Abstract:

Phase-locked loops for data links operating at 10 Gb/s or faster are low phase noise devices designed to operate with a low-jitter reference clock. Characterization of their jitter transfer function is difficult because the intrinsic noise of the device is comparable to the random noise level in the reference clock signal. A linear model is proposed to account for the intrinsic noise of a PLL. Intrinsic noise data for a PLL for 10 Gb/s links are presented. The jitter transfer function of a PLL in a test chip for 12.8 Gb/s data links was determined in experiments using the 400 MHz reference clock as the source of simultaneous excitations over a wide range of frequencies. The result shows that the PLL jitter transfer function can be approximated by a second-order linear model.
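
For context, the classic second-order linear PLL model has the low-pass jitter transfer function below (the standard textbook form, not the specific fit extracted from the 12.8 Gb/s test chip):

```latex
H(s) = \frac{\theta_{\mathrm{out}}(s)}{\theta_{\mathrm{in}}(s)}
     = \frac{2\zeta\omega_n s + \omega_n^{2}}{s^{2} + 2\zeta\omega_n s + \omega_n^{2}}
```

Here omega_n is the loop natural frequency and zeta the damping factor; reference-clock jitter inside the loop bandwidth passes to the output, while higher-frequency jitter is attenuated.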

Keywords: Intrinsic phase noise, jitter in data link, PLL jitter transfer function, high speed clocking in electronic circuit

4691 Distortion of Flow Measurement and Cavitation Occurrence Due to Orifice Inlet Velocity Profiles

Authors: Byung-Soo Shin, Nam-Seok Kim, Sang-Kyu Lee, O-Hyun Keum

Abstract:

This analysis investigates the distortion of flow measurement and the increase of cavitation along an orifice flowmeter. A numerical (CFD) analysis, checked for convergence and grid dependency, was used to assess the distortion of flow measurement caused by the inlet velocity profile. The realizable k-ε model was selected, and y+ was about 50 in this numerical analysis. The analysis also estimated the susceptibility to cavitation caused by the inlet velocity profile. The investigation concludes that an inclined inlet velocity profile alters the pressure measured at the pressure tap near the pipe wall, distorting the measured pressure values by between -3.8% and 5.3% near the orifice plate and increasing cavitation. The investigation recommends a fully developed inlet velocity profile as beneficial for accurate flow measurement with orifice flowmeters.

Keywords: Orifice, k-ε model, CFD

4690 Using Dempster-Shafer Theory in XML Information Retrieval

Authors: F. Raja, M. Rahgozar, F. Oroumchian

Abstract:

XML is a markup language that is becoming the standard format for information representation and data exchange. A major purpose of XML is the explicit representation of the logical structure of a document. Much research has been performed on exploiting the logical structure of documents in information retrieval in order to precisely extract the user's information need from large collections of XML documents. In this paper, we describe an XML information retrieval weighting scheme that tries to find the most relevant elements in XML documents in response to a user query. We present this weighting model for information retrieval systems that utilize plausible inferences to infer the relevance of elements in XML documents. We also add to this model the Dempster-Shafer theory of evidence, to express the uncertainty in plausible inferences, and the Dempster-Shafer rule of combination, to combine evidence derived from different inferences.
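
Dempster's rule of combination, used here to merge evidence from different plausible inferences, can be sketched in a few lines; the mass assignments in the example (a relevant / non-relevant frame with title and body evidence) are invented for illustration and are not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over a frame of
    discernment. Masses are dicts mapping frozenset hypotheses to belief mass."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb            # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Totally conflicting evidence; combination undefined.")
    # Normalize by the non-conflicting mass (1 - K).
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Toy example: two pieces of evidence about whether an XML element is
# relevant (R), non-relevant (N), or unknown ({R, N}).
R, N = frozenset({"R"}), frozenset({"N"})
RN = R | N
m_title = {R: 0.6, RN: 0.4}          # evidence from a title-element inference
m_body = {R: 0.3, N: 0.2, RN: 0.5}   # evidence from a body-element inference
print(dempster_combine(m_title, m_body))
```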

Keywords: Dempster-Shafer theory, plausible inferences, XML information retrieval.

4689 Estimation Model of Dry Docking Duration Using Data Mining

Authors: Isti Surjandari, Riara Novita

Abstract:

Maintenance is one of the most important activities in the shipyard industry. However, it is sometimes not supported by adequate services from the shipyard, and inaccuracy in estimating the duration of ship maintenance is still common. This makes the estimation of ship maintenance duration crucial. This study uses a data mining approach, i.e., CART (Classification and Regression Trees), to estimate the duration of the part of ship maintenance that is limited to dock works, known as dry docking. Using the volume of dock works as an input to estimate the maintenance duration, four classes of dry docking duration were obtained, with a different linear model and job criteria for each class. These linear models can then be used to estimate the duration of dry docking based on the job criteria.
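
The partitioning step can be illustrated with a shallow regression tree: constraining the tree to four leaves mirrors the four duration classes reported, although the per-class linear models described in the abstract are not fitted in this minimal sketch. The data, feature name and thresholds are synthetic assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Synthetic stand-in: each row is a dry docking job, the single feature is
# the volume of dock works, the target is the maintenance duration in days.
rng = np.random.default_rng(5)
volume = rng.uniform(10, 500, size=(300, 1))
duration = 5 + 0.06 * volume[:, 0] + rng.normal(0, 2, 300)

# A shallow regression tree splits the jobs into a small number of
# duration classes; max_leaf_nodes=4 mirrors the four reported classes.
tree = DecisionTreeRegressor(max_leaf_nodes=4, random_state=0)
tree.fit(volume, duration)
print(export_text(tree, feature_names=["dock_work_volume"]))

# Estimate the dry docking duration for a new job.
print(tree.predict([[250.0]]))
```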

Keywords: Classification and regression tree (CART), data mining, dry docking, maintenance duration.

4688 Investigation of the Cooling and Uniformity Effectiveness in a Sinter Packed Bed

Authors: Uzu-Kuei Hsu, Chang-Hsien Tai, Kai-Wun Jin

Abstract:

When sinters are fed into the cooler from the sintering machine, their non-uniform distribution leads to uneven cooling. This causes the temperature difference of the sinters leaving the cooler to be so large that the conveyors are deformed by the heat. The present work applies a CFD method to investigate the thermo-flowfield phenomena in a sinter cooler using a porous media model. The experimental data obtained are used to determine the porosity (ε), permeability (κ), inertial coefficient (F), specific heat (Cp) and effective thermal conductivity (keff) of the sinter packed beds. The physical model is a similar geometry whose Darcy number (Da) is similar to that of the sinter cooler. The Cooling Index (CI) and Uniformity Index (UI) are used to analyze the thermo-flowfield in the sinter packed bed and to obtain the cooling performance of the sinter cooler.

Keywords: Porous media, sinter, cooling index, uniformity index, CFD.

4687 A Linearization and Decomposition Based Approach to Minimize the Non-Productive Time in Transfer Lines

Authors: Hany Osman, M. F. Baki

Abstract:

We address the balancing problem of transfer lines in this paper to find the optimal line balancing that minimizes the non-productive time. We focus on the tool change time and the face orientation change time, both of which influence the makespan. We consider machine capacity limitations and technological constraints associated with the manufacturing process of auto cylinder heads. The problem is represented by a mixed integer programming model that aims at distributing the design features to workstations and sequencing the machining processes at a minimum non-productive time. The proposed model is solved by an algorithm established using linearization schemes and the Benders' decomposition approach. The experiments show the efficiency of the algorithm in reaching the exact solution of small and medium problem instances in reasonable time.

Keywords: Transfer line balancing, Benders' decomposition, Linearization.

4686 A Survey on Quasi-Likelihood Estimation Approaches for Longitudinal Set-ups

Authors: Naushad Mamode Khan

Abstract:

The Com-Poisson (CMP) model is one of the most popular discrete generalized linear models (GLMs), handling equi-, over- and under-dispersed data. In the longitudinal context, an integer-valued autoregressive (INAR(1)) process that incorporates covariate specification has been developed to model longitudinal CMP counts. However, the joint CMP likelihood function is difficult to specify and thus restricts likelihood-based estimation methodology. The joint generalized quasi-likelihood approach (GQL-I) was instead considered, but it is rather computationally intensive and may not even estimate the regression effects due to a complex and frequently ill-conditioned covariance structure. This paper proposes a new GQL approach for estimating the regression parameters (GQL-III) that is based on a single score vector representation. The performance of GQL-III is compared with GQL-I and separate marginal GQLs (GQL-II) through simulation experiments and is shown to yield estimates as efficient as those of GQL-I while being far more computationally stable.

Keywords: Longitudinal, Com-Poisson, Ill-conditioned, INAR(1), GLMs, GQL.

4685 FPGA Implementation of Adaptive Clock Recovery for TDMoIP Systems

Authors: Semih Demir, Anil Celebi

Abstract:

Circuit-switched networks, widely used until the end of the 20th century, have been transformed into packet-switched networks. Time Division Multiplexing over Internet Protocol (TDMoIP) is a system that enables Time Division Multiplexing (TDM) traffic to be carried over packet switched networks (PSN). In TDMoIP systems, devices that send TDM data to the PSN and receive it from the network must operate with the same clock frequency. This study aimed to implement the clock synchronization process in Field Programmable Gate Array (FPGA) chips using timing information attached to the packets received from the PSN. The designed hardware is verified using datasets obtained for different carrier types and by comparing the results with the software model. Field tests are also performed using a real-time TDMoIP system.

Keywords: Clock recovery on TDMoIP, FPGA, MATLAB reference model, clock synchronization.

4684 End-to-End Pyramid Based Method for MRI Reconstruction

Authors: Omer Cahana, Maya Herman, Ofer Levi

Abstract:

Magnetic Resonance Imaging (MRI) is a lengthy medical scan because of its long acquisition time. Its length is mainly due to the traditional sampling theorem, which defines a lower bound for sampling. However, it is still possible to accelerate the scan by using a different approach such as Compressed Sensing (CS) or Parallel Imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. To achieve that, two conditions must be satisfied: i) the signal must be sparse under a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm must be applied to recover the signal. While the rapid advances in Deep Learning (DL) have brought tremendous successes in various computer vision tasks, the field of MRI reconstruction is still in its early stages. In this paper, we present an end-to-end method for MRI reconstruction from k-space to image. Our method contains two parts. The first is sensitivity map estimation (SME), a small yet effective network that can easily be extended to a variable number of coils. The second is reconstruction, a top-down architecture with lateral connections developed for building high-level refinement at all scales. Our method achieves state-of-the-art results on the fastMRI benchmark, the largest and most diverse benchmark for MRI reconstruction.
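
The two conditions above lead to the standard compressed-sensing reconstruction problem for undersampled k-space data, written below in its generic regularized form for context (the textbook formulation, not the specific objective optimized by the proposed network):

```latex
\hat{x} = \arg\min_{x} \; \tfrac{1}{2}\,\lVert F_u x - y \rVert_2^{2}
          + \lambda \,\lVert \Psi x \rVert_1
```

Here y is the acquired undersampled k-space data, F_u the undersampled Fourier (encoding) operator, Psi a sparsifying transform and lambda a regularization weight; deep-learning reconstruction methods typically replace or unroll this nonlinear recovery step.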

Keywords: Accelerate MRI scans, image reconstruction, pyramid network, deep learning.
