Search results for: The linear quantum hydrodynamic model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8693


7463 Optimizing Network Latency with Fast Path Assignment for Incoming Flows

Authors: Qing Lyu, Hang Zhu

Abstract:

Various flows in a network are required to pass through different types of middleboxes. Improper middlebox placement and flow path assignment can greatly increase network latency and degrade network performance. Minimizing the total end-to-end latency of all flows requires assigning a path to each incoming flow. In this paper, the flow path assignment problem is studied with regard to the placement of various kinds of middleboxes. The problem is first formulated as a linear programming problem, which is optimal but very time-consuming to solve. A naive greedy algorithm is also studied; it is very fast but incurs much higher latency than the linear programming solution. Finally, the paper presents a heuristic algorithm named FPA, which takes bottleneck-link information and estimated bandwidth occupancy into consideration and achieves near-optimal latency in much less time. Evaluation results validate the effectiveness of the proposed algorithm.
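
A minimal sketch of the linear-programming formulation described above (not the FPA heuristic itself): each incoming flow is assigned one of its candidate paths so that the total end-to-end latency is minimized. All flow names, candidate paths, and latency values below are hypothetical, and the bottleneck-link capacity constraints that the paper also considers are omitted for brevity.

    # Hedged LP sketch (continuous relaxation) of assigning paths to flows.
    import numpy as np
    from scipy.optimize import linprog

    latency = {("f1", "p1"): 4.0, ("f1", "p2"): 7.0,
               ("f2", "p1"): 6.0, ("f2", "p3"): 5.0}   # per-path latency (ms), illustrative
    flows = ["f1", "f2"]
    variables = list(latency)                          # one decision variable per (flow, path)
    c = np.array([latency[v] for v in variables])      # objective: total latency

    # each flow must be assigned exactly one path: sum of its variables equals 1
    A_eq = np.array([[1.0 if v[0] == f else 0.0 for v in variables] for f in flows])
    b_eq = np.ones(len(flows))

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(variables))
    print(res.fun, dict(zip(variables, res.x)))        # optimum: f1 -> p1, f2 -> p3, 9.0 ms total

Capacity constraints on bottleneck links would enter as additional inequality rows; the FPA heuristic in the paper approximates this optimum far faster.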

Keywords: Latency, Fast path assignment, Bottleneck link.

7462 A Super-Efficiency Model for Evaluating Efficiency in the Presence of Time Lag Effect

Authors: Yanshuang Zhang, Byungho Jeong

Abstract:

In many cases, there is a time lag between the consumption of inputs and the production of outputs. This time lag effect should be considered when evaluating the performance of organizations. Recently, a couple of DEA models were developed to account for the time lag effect in the efficiency evaluation of research activities. The multi-period input (MpI) and multi-period output (MpO) models are integrated models that calculate simple efficiency while considering the time lag effect. However, these models cannot discriminate among efficient DMUs because, as in the basic DEA model, efficiency scores are capped at 1; all efficient DMUs therefore receive the same score. Thus, this paper suggests a super-efficiency model for efficiency evaluation under consideration of the time lag effect, based on the MpO model. A case example using a long-term research project is given to compare the suggested model with the MpO model.
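
For context only, a minimal statement of the standard radial, input-oriented super-efficiency formulation that such models build on (the MpO time-lag structure itself is not reproduced here); for the DMU under evaluation, indexed o, with inputs x and outputs y:

    \min_{\theta,\,\lambda}\ \theta
    \quad \text{s.t.} \quad
    \sum_{j \neq o} \lambda_j x_{ij} \le \theta\, x_{io}, \;\; i = 1,\dots,m, \qquad
    \sum_{j \neq o} \lambda_j y_{rj} \ge y_{ro}, \;\; r = 1,\dots,s, \qquad
    \lambda_j \ge 0 .

Because DMU o is excluded from its own reference set, efficient units can obtain scores above 1, which is what allows them to be ranked.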

Keywords: DEA, Super-efficiency, Time Lag.

7461 Determination of Sea Transport Route for Staple Food Distribution to Achieve Food Security in the Eastern Indonesia

Authors: Kuncoro Harto Widodo, Yandra Rahadian Perdana, Iwan Puja Riyadi

Abstract:

Effective and efficient food distribution is necessary to maintain food security in a region. Food supply varies among regions depending on their production capacity; therefore, it is necessary to regulate food distribution. Sea transportation can play a major role in the food distribution system. To fulfill this role and support transportation needs in Eastern Indonesia, sea transportation must be supported by a fleet that is adequate and reliable, both in terms of capacity and seaworthiness. This research uses the Linear Programming (LP) method to analyze the food distribution pattern in order to determine the optimal distribution system. In this research, transshipment points have been selected for regions in one province. Comparison between the modeling results and the existing shipping routes reveals that, of the 369 existing routes, 54 are used for transporting rice, corn, green beans, peanuts, soybeans, sweet potatoes, and cassava.

Keywords: Distribution, Sea Transportation, Eastern Indonesia (KTI), Linear Programming (LP).

7460 Design of a CMOS Highly Linear Front-end IC with Auto Gain Controller for a Magnetic Field Transceiver

Authors: Yeon-kug Moon, Kang-Yoon Lee, Yun-Jae Won, Seung-Ok Lim

Abstract:

This paper describes a low-voltage and low-power channel selection analog front end with continuous-time low pass filters and highly linear programmable gain amplifier (PGA). The filters were realized as balanced Gm-C biquadratic filters to achieve a low current consumption. High linearity and a constant wide bandwidth are achieved by using a new transconductance (Gm) cell. The PGA has a voltage gain varying from 0 to 65dB, while maintaining a constant bandwidth. A filter tuning circuit that requires an accurate time base but no external components is presented. With a 1-Vrms differential input and output, the filter achieves -85dB THD and a 78dB signal-to-noise ratio. Both the filter and PGA were implemented in a 0.18um 1P6M n-well CMOS process. They consume 3.2mW from a 1.8V power supply and occupy an area of 0.19mm2.

Keywords: Channel selection filters, DC offset, programmable gain amplifier, tuning circuit.

7459 Improved Algorithms for Construction of Interface Agent Interaction Model

Authors: Huynh Quyet Thang, Le Hai Quan

Abstract:

The interaction model plays an important role in the model-based intelligent interface agent architecture for developing intelligent user interfaces. In this paper, we present improvements to the algorithms for developing the interaction model of an interface agent, including the action segmentation algorithm, the action pair selection algorithm, the final action pair selection algorithm, the interaction graph construction algorithm, and the probability calculation algorithm. An analysis of the algorithms is also presented. At the end of the paper, we introduce an experimental program called "Personal Transfer System".

Keywords: interface agent, interaction model, user model.

7458 Kinetic Spectrophotometric Determination of Ramipril in Commercial Dosage Forms

Authors: Nafisur Rahman, Habibur Rahman, Syed Najmul Hejaz Azmi

Abstract:

This paper presents a simple and sensitive kinetic spectrophotometric method for the determination of ramipril in commercial dosage forms. The method is based on the reaction of the drug with 1-chloro-2,4-dinitrobenzene (CDNB) in dimethylsulfoxide (DMSO) at 100 ± 1 ºC. The reaction is followed spectrophotometrically by measuring the rate of change of the absorbance at 420 nm. Fixed-time (ΔA) and equilibrium methods are adopted for constructing the calibration curves. Both calibration curves were found to be linear over the concentration range 20-220 μg/ml. Regression analysis of the calibration data yielded the linear equations ΔA = 6.30 × 10⁻⁴ + 1.54 × 10⁻³ C and A = 3.62 × 10⁻⁴ + 6.35 × 10⁻³ C for the fixed-time (ΔA) and equilibrium methods, respectively. The limits of detection (LOD) for the fixed-time and equilibrium methods are 1.47 and 1.05 μg/ml, respectively. The method has been successfully applied to the determination of ramipril in commercial dosage forms. Statistical comparison of the results shows that there is no significant difference between the proposed methods and Abdellatef's spectrophotometric method.
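
As a small, purely illustrative example of how the reported fixed-time calibration would be applied (the measured value below is hypothetical, not from the study):

    # Invert the reported fixed-time calibration, dA = 6.30e-4 + 1.54e-3 * C,
    # to recover the ramipril concentration C (ug/ml) from a measured dA.
    intercept, slope = 6.30e-4, 1.54e-3
    dA = 0.200                              # hypothetical measured absorbance change
    C = (dA - intercept) / slope
    print(round(C, 1))                      # about 129.5 ug/ml, inside the 20-220 ug/ml range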

Keywords: Equilibrium method, Fixed-time (ΔA) method, Ramipril, Spectrophotometry.

7457 A Constitutive Model of Ligaments and Tendons Accounting for Fiber-Matrix Interaction

Authors: Ratchada Sopakayang, Gerhard A. Holzapfel

Abstract:

In this study, a new constitutive model is developed to describe the hyperelastic behavior of collagenous tissues with a parallel arrangement of collagen fibers such as ligaments and tendons. The model is formulated using a continuum approach incorporating the structural changes of the main tissue components: collagen fibers, proteoglycan-rich matrix and fiber-matrix interaction. The mechanical contribution of the interaction between the fibers and the matrix is simply expressed by a coupling term. The structural change of the collagen fibers is incorporated in the constitutive model to describe the activation of the fibers under tissue straining. Finally, the constitutive model can easily describe the stress-stretch nonlinearity which occurs when a ligament/tendon is axially stretched. This study shows that the interaction between the fibers and the matrix contributes to the mechanical tissue response. Therefore, the model may lead to a better understanding of the physiological mechanisms of ligaments and tendons under axial loading.
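
A generic sketch of the additive strain-energy decomposition such a model implies (the specific functional forms and the coupling term chosen by the authors are not reproduced here; I_1 is the first invariant of the right Cauchy-Green tensor and I_4 the squared fiber stretch):

    \Psi(I_1, I_4) \;=\; \Psi_{\text{matrix}}(I_1) \;+\; \Psi_{\text{fiber}}(I_4) \;+\; \Psi_{\text{fiber-matrix}}(I_1, I_4),

with the last term carrying the fiber-matrix interaction that the study identifies as a genuine contributor to the tissue response.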

Keywords: Hyperelasticity, constitutive model, fiber-matrix interaction, ligament, tendon.

7456 Adaptive Digital Watermarking Integrating Fuzzy Inference HVS Perceptual Model

Authors: Sherin M. Youssef, Ahmed Abouelfarag, Noha M. Ghatwary

Abstract:

An adaptive fuzzy inference perceptual model is proposed for the watermarking of digital images. The model depends on the human visual characteristics of image sub-regions in the frequency multi-resolution wavelet domain. In the proposed model, a multi-variable fuzzy-based architecture has been designed to produce a perceptual membership degree for both the candidate embedding sub-regions and the watermark embedding strength factor. Benchmark images of different sizes, with watermarks of different sizes, have been applied to the model. Several experimental attacks, such as JPEG compression, noise, and rotation, have been applied to assess the robustness of the scheme. In addition, the model has been compared with different watermarking schemes. The proposed model showed robustness to attacks while achieving a high level of imperceptibility.

Keywords: Watermarking, The human visual system (HVS), Fuzzy Inference System (FIS), Local Binary Pattern (LBP), Discrete Wavelet Transform (DWT).

7455 Application of Generalized Autoregressive Score Model to Stock Returns

Authors: Katleho Daniel Makatjane, Diteboho Lawrence Xaba, Ntebogang Dinah Moroke

Abstract:

The current study investigates the behaviour of time-varying parameters that are based on the score function of the predictive model density at time t. The mechanism for updating the parameters over time is the scaled score of the likelihood function. The results revealed high persistence of the time-varying parameters, as the location parameter is higher and the skewness parameter implies a departure of the scale parameter from normality, with an unconditional parameter of 1.5. The results also revealed persistence of leptokurtic behaviour in the stock returns, which implies that the returns are heavy-tailed. Prior to model estimation, the White Neural Network test indicated that the stock price can be modelled by a GAS model. Finally, we propose further research to model the time-varying parameters with a more detailed model that captures the heavy-tailed distribution of the series and computes the risk measure associated with the returns.
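
For reference, the generic GAS(1,1) update that drives such time-varying parameters, in the standard notation (the particular observation density and scaling matrix used in this study are not shown):

    f_{t+1} \;=\; \omega + A\, s_t + B\, f_t,
    \qquad
    s_t \;=\; S_t\, \nabla_t,
    \qquad
    \nabla_t \;=\; \frac{\partial \log p(y_t \mid f_t)}{\partial f_t},

where S_t is a scaling matrix, commonly taken as the inverse of the Fisher information.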

Keywords: Generalized autoregressive score model, stock returns, time-varying.

7454 Specialized Reduced Models of Dynamic Flows in 2-Stroke Engines

Authors: S. Cagin, X. Fischer, E. Delacourt, N. Bourabaa, C. Morin, D. Coutellier, B. Carré, S. Loumé

Abstract:

The complexity of scavenging by ports and its impact on engine efficiency create the need to understand and model it as realistically as possible. However, there are few empirical scavenging models, and these are highly specialized. In a design optimization process, they are very restricted and their field of use is limited. This paper presents a comparison of two methods to establish and reduce a model of the scavenging process in 2-stroke diesel engines. To address the lack of scavenging models, a CFD model has been developed and is used as the reference case. However, its large size requires a reduction. Two techniques have been tested, depending on their fields of application: the NTF method and neural networks. Both prove highly suitable, drastically reducing the model's size (over 90% reduction) with a low relative error (under 10%). Furthermore, each method produces a reduced model which can be used in a distinct specialized field of application: the distribution of a quantity (mass fraction, for example) in the cylinder at each time step (pseudo-dynamic model), or the qualification of scavenging at the end of the process (pseudo-static model).

Keywords: Diesel engine, Design optimization, Model reduction, Neural network, NTF algorithm, Scavenging.

7453 A New Shock Model for Systems Subject to Random Threshold Failure

Authors: A. Rangan, A. Tansu

Abstract:

This paper generalizes Yeh Lam's shock model for renewal shock arrivals and random threshold. Several interesting statistical measures are explicitly obtained. A few special cases and an optimal replacement problem are also discussed.

Keywords: shock model, optimal replacement, random threshold, shocks.

7452 Comparison between Minimum Direct and Indirect Jerks of Linear Dynamic Systems

Authors: Tawiwat Veeraklaew, Nathasit Phathana-im, Songkit Heama

Abstract:

Both minimum energy consumption and smoothness, which is quantified as a function of jerk, are generally needed in many dynamic systems, such as automobiles and pick-and-place robot manipulators that handle fragile equipment. Nevertheless, many researchers focus either solely on minimum energy consumption or on minimum-jerk trajectories. This paper proposes a simple yet interesting relationship between the minimum direct-jerk and minimum indirect-jerk approaches to designing time-dependent systems, yielding an alternative optimal solution. Extremal solutions for the direct-jerk and indirect-jerk cost functions are found using dynamic optimization methods together with numerical approximation. This allows us to simulate and compare, visually and statistically, the time histories of the control inputs produced by the minimum direct-jerk and indirect-jerk designs. For the minimum indirect-jerk problem, the numerical solution becomes much easier and yields results similar to those of the minimum direct-jerk problem.
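
One common way to write the two cost functionals being compared, for a single coordinate x(t) driven by a control input u(t); this reading of "direct" versus "indirect" jerk is an assumption for illustration, not taken from the paper:

    J_{\text{direct}} \;=\; \int_{t_0}^{t_f} \dddot{x}(t)^{2}\, dt,
    \qquad
    J_{\text{indirect}} \;=\; \int_{t_0}^{t_f} \dot{u}(t)^{2}\, dt .

For a double-integrator-like plant with \ddot{x} = u the two integrands coincide, which is consistent with the similar results reported here.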

Keywords: Optimization, Dynamic, Linear Systems, Jerks.

7451 Super Harmonic Nonlinear Lateral Vibration of an Axially Moving Beam with Rotating Prismatic Joint

Authors: M. Najafi, S. Bab, F. Rahimi Dehgolan

Abstract:

The motion of an axially moving beam with a rotating prismatic joint and a tip mass on the end is analyzed to investigate the nonlinear vibration and dynamic stability of the beam. The beam moves with axial and rotational velocities that vary harmonically about a constant mean velocity. A time-dependent partial differential equation and boundary conditions describing the lateral deflection of the beam are derived with the aid of Hamilton's principle. After the partial differential equation is discretized by the Galerkin method, the method of multiple scales is applied to obtain analytical solutions. Frequency response curves are plotted for the super-harmonic resonances of the first and second modes. The effects of the nonlinear term and the mean velocity on the steady-state response of the axially moving beam are investigated. The results are validated with numerical simulations.

Keywords: Axially moving beam, Galerkin method, non-linear vibration, super harmonic resonances.

7450 Combining Minimum Energy and Minimum Direct Jerk of Linear Dynamic Systems

Authors: V. Tawiwat, P. Jumnong

Abstract:

Both minimum energy consumption and smoothness, which is quantified as a function of jerk, are generally needed in many dynamic systems, such as automobiles and pick-and-place robot manipulators that handle fragile equipment. Nevertheless, many researchers focus either solely on minimum energy consumption or on minimum-jerk trajectories. This paper proposes a simple yet interesting approach that combines the minimum-energy and minimum indirect-jerk criteria in designing time-dependent systems, yielding an alternative optimal solution. Extremal solutions for the minimum-energy cost function, the minimum-jerk cost function, and their combination are found using dynamic optimization methods together with numerical approximation. This allows us to simulate and compare, visually and statistically, the time histories of the state inputs produced by the combined minimum-energy and minimum-jerk designs. The numerical solutions of the minimum direct-jerk and energy problems are exactly the same; moreover, the solutions of the minimum-energy problem alone are similar, especially in terms of their overall tendency.

Keywords: Optimization, Dynamic, Linear Systems, Jerks.

7449 Analysis of Model in Pregnant and Non-Pregnant Dengue Patients

Authors: R. Kongnuy, P. Pongsumpun

Abstract:

We use a mathematical model to study the transmission of dengue disease. The model is developed with the human population separated into two groups: pregnant and non-pregnant humans. The dynamical analysis method is used to analyze this modified model. Two equilibrium states are found, and the conditions for the stability of these two equilibrium states are established. Numerical results are shown for each equilibrium state. The basic reproduction numbers are found and compared using numerical simulations.

Keywords: Basic reproductive number, dengue disease, equilibrium states, pregnancy.

7448 Modelling of Soil Structure Interaction of Integral Abutment Bridges

Authors: Thevaneyan K. David, John P. Forth

Abstract:

Integral Abutment Bridges (IAB) are defined as single- or multiple-span bridges in which the bridge deck is cast monolithically with the abutment walls. These bridges are becoming very popular due to several advantages, such as good response under seismic loading, low initial costs, elimination of bearings, and reduced maintenance. However, the main issue in the analysis of this type of structure is the soil-structure interaction of the abutment walls and the supporting piles. Researchers have used various soil constitutive models in studies of soil-structure interaction for these structures. This paper reviews the implementation of various finite element models that explicitly incorporate nonlinear soil and linear structural response, considering different soil constitutive models and finite element meshes.

Keywords: Constitutive Models, FEM, Integral Abutment Bridges, Soil-Structure Interaction.

7447 ANN Based Model Development for Material Removal Rate in Dry Turning in Indian Context

Authors: Mangesh R. Phate, V. H. Tatwawadi

Abstract:

This paper develops an artificial neural network (ANN) based model of material removal rate (MRR) in the turning of ferrous and nonferrous materials in an Indian small-scale industry. The MRR predicted by the formulated model was validated against testing data, and the ANN model was developed for the analysis and prediction of the relationship between the input and output parameters during the turning of ferrous and nonferrous materials. The input parameters of this model are the operator, workpiece, cutting process, cutting tool, machine, and environment.

The ANN model consists of a three-layer feedforward back-propagation neural network. The network is trained with pairs of independent/dependent datasets generated when machining ferrous and nonferrous materials. Very good performance of the neural network, in terms of agreement with experimental data, was achieved. The model may be used for testing and forecasting the complex relationship between the dependent and independent parameters in turning operations.
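
A minimal sketch of a three-layer feedforward back-propagation regressor for MRR prediction (not the authors' field-data model); the input features, training data, and network size below are hypothetical.

    # Hedged illustration: small MLP regressor mapping turning parameters to MRR.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    # columns: cutting speed (m/min), feed (mm/rev), depth of cut (mm), hardness (BHN)
    X = np.array([[120, 0.10, 0.5, 180],
                  [160, 0.15, 1.0, 180],
                  [200, 0.20, 1.5, 220],
                  [240, 0.25, 2.0, 220]], dtype=float)
    y = np.array([6.0, 24.0, 60.0, 120.0])           # MRR in cm^3/min (illustrative)

    Xs = StandardScaler().fit_transform(X)            # scale inputs before training
    net = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                       solver="lbfgs", max_iter=5000, random_state=0)
    net.fit(Xs, y)
    print(net.predict(Xs))                            # approximately reproduces the targets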

Keywords: Field data based model, Artificial neural network, Simulation, Convectional Turning, Material removal rate.

7446 Parameters Extraction for Pseudomorphic HEMTs Using Genetic Algorithms

Authors: Mazhar B. Tayel, Amr H. Yassin

Abstract:

A small-signal model with proposed parameters for a pseudomorphic high electron mobility transistor (PHEMT) is presented. Both the extrinsic and intrinsic circuit elements of the small-signal model are determined using a genetic algorithm (GA) as a stochastic global search and optimization tool. Parameter extraction for the small-signal model is performed on a 200-μm gate-width AlGaAs/InGaAs PHEMT. The equivalent circuit elements of the proposed 18-element model are determined directly from the measured S-parameters. The GA is used to extract the parameters of the proposed small-signal model from 0.5 up to 18 GHz.
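
A toy sketch of the GA-based extraction idea (not the authors' 18-element PHEMT model): a population of candidate parameter vectors is evolved until the model response matches "measured" data over the 0.5-18 GHz sweep. The two-parameter RC model, bounds, and data below are all hypothetical.

    # Hedged GA sketch: fit (C, R) of a toy parallel-RC magnitude model to data.
    import numpy as np

    rng = np.random.default_rng(0)
    freq = np.linspace(0.5e9, 18e9, 50)                        # 0.5-18 GHz sweep
    true = np.array([2.0e-12, 15.0])                           # C (F), R (ohm)
    def model(p): return p[1] / np.sqrt(1.0 + (2*np.pi*freq*p[0]*p[1])**2)
    measured = model(true)                                     # synthetic "measurements"

    def fitness(p): return -np.mean((model(p) - measured)**2)  # higher is better

    lo, hi = np.array([0.1e-12, 1.0]), np.array([10e-12, 100.0])
    pop = rng.uniform(lo, hi, size=(40, 2))                    # random initial population
    for _ in range(100):
        f = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(f)[-20:]]                     # truncation selection
        kids = 0.5 * (parents[rng.integers(0, 20, 20)] + parents[rng.integers(0, 20, 20)])
        kids += rng.normal(0.0, 0.02, kids.shape) * (hi - lo)  # Gaussian mutation
        pop = np.clip(np.vstack([parents, kids]), lo, hi)
    best = pop[np.argmax([fitness(p) for p in pop])]
    print(best)                                                # should land near `true`

In the real extraction, the fitness would compare modeled and measured S-parameters of the 18-element equivalent circuit rather than this toy response.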

Keywords: PHEMT, Genetic Algorithms, small signal modeling, optimization.

7445 Performance Evaluation of Task Scheduling Algorithm on LCQ Network

Authors: Zaki Ahmad Khan, Jamshed Siddiqui, Abdus Samad

Abstract:

The scheduling and mapping of tasks on a set of processors is a critical problem in parallel and distributed computing systems. This paper deals with the problem of dynamic scheduling on a special type of multiprocessor architecture known as the Linear Crossed Cube (LCQ) network. This multiprocessor is a hybrid network that combines the features of linear architectures with those of cube-based architectures. Two standard dynamic scheduling schemes, namely the Minimum Distance Scheduling (MDS) and Two Round Scheduling (TRS) schemes, are implemented on the LCQ network. Parallel tasks are mapped, and the load imbalance is evaluated on different sets of processors in the LCQ network. The simulation results are evaluated, and a thorough analysis is carried out to obtain the best solution for the given network in terms of residual load imbalance and execution time. Other performance metrics, such as speedup and efficiency, are also evaluated for the given dynamic algorithms.

Keywords: Dynamic algorithm, Load imbalance, Mapping, Task scheduling.

7444 Kinetic Study of Gluconic Acid Batch Fermentation by Aspergillus niger

Authors: Akbarningrum Fatmawati, Rudy Agustriyanto, Lindawati

Abstract:

Gluconic acid is a chemical product of interest in industries such as detergents, leather, photography, and textiles, and especially in the food and pharmaceutical industries. Fermentation is an advantageous process for producing gluconic acid. Mathematical modeling is important in the design and operation of fermentation processes, and kinetic data must be available for modeling. In this research, the kinetic parameters of gluconic acid production by Aspergillus niger in batch culture were studied at initial substrate concentrations of 150, 200, and 250 g/l. The kinetic models used were the logistic equation for growth, the Luedeking-Piret equation for gluconic acid formation, and a Luedeking-Piret-like equation for glucose consumption. The kinetic parameters in the models were obtained by non-linear least-squares curve fitting.
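
A minimal sketch of one of the fitting steps: estimating the logistic growth parameters by non-linear least squares (the biomass data points below are hypothetical, not the study's; the Luedeking-Piret product and substrate equations would be fitted analogously).

    # Hedged illustration: fit X(t) = X0*Xm*exp(mu*t) / (Xm - X0 + X0*exp(mu*t)).
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, X0, Xm, mu):
        return X0 * Xm * np.exp(mu * t) / (Xm - X0 + X0 * np.exp(mu * t))

    t = np.array([0, 12, 24, 36, 48, 60, 72], dtype=float)    # time (h)
    X = np.array([0.5, 1.1, 2.3, 4.0, 5.2, 5.8, 6.0])         # biomass (g/l), illustrative

    popt, _ = curve_fit(logistic, t, X, p0=[0.5, 6.0, 0.1])
    X0, Xm, mu = popt
    print(f"X0={X0:.2f} g/l, Xm={Xm:.2f} g/l, mu={mu:.3f} 1/h")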

Keywords: Aspergillus niger, fermentation, gluconic acid, kinetic.

7443 Cost and Profit Analysis of Markovian Queuing System with Two Priority Classes: A Computational Approach

Authors: S. S. Mishra, D. K. Yadav

Abstract:

This paper focuses on the cost and profit analysis of a single-server Markovian queuing system with two priority classes. Functions for the total expected cost, revenue, and profit of the system are constructed and optimized with respect to the service rates of the lower and higher priority classes. A computing algorithm based on a fast-converging numerical method has been developed to solve the system of non-linear equations arising from the mathematical analysis. A novel performance measure for the cost and profit analysis, together with its economic interpretation for the system with priority classes, is discussed. On the basis of the computed tables, observations are drawn to illustrate the effect of varying the model parameters.

Keywords: Cost and Profit, Computing, Expected Revenue, Priority classes

7442 Effects of Crushed Waste Aggregate from the Manufacture of Clay Bricks on Rendering Cement Mortar Performance

Authors: Benmalek M. Larbi, R. Harbi, S. Boukor

Abstract:

This paper reports an experimental work that aimed to investigate the effects of clay brick waste, used as part of the fine aggregate, on rendering mortar performance. The brick, in crushed form, came from a local brick manufacturer and had been rejected as off-standard. It was used to replace 33.33%, 50%, 66.66%, and 100% by weight of the quarry sand in the mortar. The effects of the brick replacement on the key properties of mortar intended for wall plastering were investigated: workability, compressive strength, flexural strength, linear shrinkage, and water absorption by total immersion and by capillary suction. The results showed that as the brick replacement level increased, the mortar workability decreased. The linear shrinkage increases over time and decreases with the introduction of brick waste. The compressive and flexural strengths decrease as the brick waste content increases because of its high water absorption.

Keywords: Clay brick waste, mortar, properties, quarry sand.

7441 Conflation Methodology Applied to Flood Recovery

Authors: E. L. Suarez, D. E. Meeroff, Y. Yong

Abstract:

Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being by nuisance flooding and its long-term effects on communities are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (&FR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The &FR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The &FR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The &FR model is more accurate than averaging individual observations before calculating the mean and variance or averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distribution’s means without the additional information provided by each individual distribution variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources, severe flooding events and nuisance flooding events.
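
A minimal numerical sketch of the conflation step itself: the conflated density is the normalized product of the individual densities, so it is pulled toward the lower-variance input. The two recovery-time distributions below are hypothetical Gaussians chosen purely to make that effect visible; they are not the exponential distributions fitted in the study.

    # Hedged illustration: conflate two recovery-time densities on a grid.
    import numpy as np

    t = np.linspace(0.0, 30.0, 3001)                              # recovery time (days)
    dt = t[1] - t[0]
    f1 = np.exp(-(t - 10.0)**2 / (2 * 2.0**2)) / (2.0 * np.sqrt(2 * np.pi))  # ~N(10, 2^2)
    f2 = np.exp(-(t - 4.0)**2 / (2 * 1.0**2)) / (1.0 * np.sqrt(2 * np.pi))   # ~N(4, 1^2)

    conflated = f1 * f2
    conflated /= conflated.sum() * dt                             # normalize the product of pdfs

    mean_recovery = (t * conflated).sum() * dt
    print(round(mean_recovery, 2))   # ~5.2 days: between the parents, nearer the low-variance one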

Keywords: Community resilience, conflation, flood risk, nuisance flooding.

7440 A Novel Low Power Digitally Controlled Oscillator with Improved Linear Operating Range

Authors: Nasser Erfani Majd, Mojtaba Lotfizad

Abstract:

In this paper, an ultra-low-power and low-jitter 12-bit CMOS digitally controlled oscillator (DCO) design is presented, based on a ring oscillator implemented with low-power Schmitt-trigger-based inverters. Simulation of the proposed DCO using the 32 nm CMOS Predictive Technology Model (PTM) achieves a controllable frequency range of 550 MHz-830 MHz with wide linearity and high resolution. Monte Carlo simulation demonstrates that the time-period jitter due to random power supply fluctuation is under 31 ps, and the power consumption is 0.5677 mW at 750 MHz with a 1.2 V power supply and 0.53 ps resolution. The proposed DCO shows good robustness to voltage and temperature variations and better linearity compared to the conventional design.

Keywords: Digitally controlled oscillator (DCO), low power, jitter, linearity, robustness.

7439 The Relationship between Business-model Innovation and Firm Value: A Dynamic Perspective

Authors: Yung C. Ho, Hui C. Fang, Ming J. Hsieh

Abstract:

While consistently innovative business models can give companies a competitive advantage, longitudinal empirical research that reflects dynamic business-model changes has yet to prove a definitive connection. This study therefore employs a dynamic perspective in conjunction with innovation theory to examine the relationship between types of business-model innovation and firm value. It examines various types of business-model innovation in high-end and low-end technology industries, using HTC and the 7-Eleven chain stores as cases, with research periods of 14 and 32 years, respectively. The empirical results suggest that adopting radical business-model innovation, in addition to expanding into new target markets, can successfully lead to a competitive advantage. Sustained advanced technological competences and service/product innovation are the key success factors in high-end and low-end technology industry business models, respectively. In sum, business-model innovation can yield higher market and financial value in high-end technology industries than in low-end ones.

Keywords: Business-model, Dynamic Perspective, Firm Value, Innovation

7438 Manual Testing of Web Software Systems Supported by Direct Guidance of the Tester Based On Design Model

Authors: Karel Frajtak, Miroslav Bures, Ivan Jelinek

Abstract:

Software testing is an important stage of the software development cycle. The current testing process involves the tester and electronic documents with test case scenarios. In this paper, we focus on a new approach to the testing process using automated test case generation and tester guidance through the system, based on a model of the system. Test case generation and model-based testing are not possible without a proper system model. We aim to provide better feedback from the testing process, thus eliminating unnecessary paperwork.

Keywords: Model based testing, test automation, test generating, tester support.

7437 Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game

Authors: Steven W. Carruthers

Abstract:

The purpose of the present research is to equate two test forms as part of a study to evaluate the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With the sample size n=134, test parameters indicated “good” model fit but low Test Information Functions and more acute than expected equating parameters. Therefore, the researcher applied equipercentile equating and linear equating to raw scores and compared the equated form parameters and effect sizes from each method. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-slope adjustment from the anchor items, which was used to scale the score on the new form (Form R) to the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-slope adjustment, which was applied to all core items on the new form. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college level art history course (n=134) and counterbalancing design to distribute both forms on the pre- and posttests. The Experimental Group (n=82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the Control Group (n=52) did not play the game. The researcher examined between group differences from post-test scores on test Form Q and Form R by full-factorial Two-Way ANOVA. The raw score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The researcher repeated the between group differences analysis with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%. Mean-sigma equating with a small sample may have resulted in inaccurate equating parameters. Equipercentile equating aligned test means and standard deviations, but resultant skewness and kurtosis worsened compared to raw score parameters. Form had a 3.18% direct effect. Linear equating produced the lowest Form effect, approaching 0%. Using linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between group effect size for the Control Group versus Experimental Group participants who completed the game was 14.39% with a 4.77% effect size attributed to pre-test score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pre- to post test. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and the respective  assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, the application of a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments for future study.
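
A minimal sketch of the linear-equating step on raw scores (the score vectors are hypothetical; mean-sigma equating proper applies the same slope-intercept idea to anchor-item IRT parameters rather than to raw scores):

    # Hedged illustration: place new-form (R) raw scores on the reference-form (Q) scale.
    import numpy as np

    q = np.array([12, 15, 18, 20, 22, 25, 28], dtype=float)   # reference-form scores
    r = np.array([10, 13, 15, 17, 19, 22, 24], dtype=float)   # new-form scores

    slope = q.std(ddof=1) / r.std(ddof=1)          # sigma ratio
    intercept = q.mean() - slope * r.mean()        # aligns the two means
    equated = slope * r + intercept                # Form R scores expressed on the Form Q scale
    print(round(slope, 3), round(intercept, 3), equated.round(2))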

Keywords: Effectiveness, equipercentile equating, IRT, learning games, linear equating, mean-sigma equating.

7436 An Extension of Multi-Layer Perceptron Based on Layer-Topology

Authors: Jānis Zuters

Abstract:

Many extensions have been made to the classic multi-layer perceptron (MLP) model. A notable number of them have been designed to hasten the learning process without considering the quality of generalization. This paper proposes a new MLP extension based on exploiting the topology of the input layer of the network. Experimental results show that the extended model improves generalization capability in certain cases. The new model requires additional computational resources compared to the classic model; nevertheless, the loss in efficiency is not regarded as significant.

Keywords: Learning algorithm, multi-layer perceptron, topology.

7435 Influence of the Low Frequency Ultrasound on the Cadmium (II) Biosorption by an Ecofriendly Biocomposite (Extraction Solid Waste of Ammi visnaga / Calcium Alginate): Kinetic Modeling

Authors: L. Nouri Taiba, Y. Bouhamidi, F. Kaouah, Z. Bendjama, M. Trari

Abstract:

In the present study, an ecofriendly biocomposite, namely calcium alginate immobilized Ammi visnaga (Khella) extraction waste (SWAV/CA), was prepared by the electrostatic extrusion method and used for cadmium biosorption from the aqueous phase, with and without the assistance of ultrasound, under batch conditions. The influence of low-frequency ultrasound (37 and 80 kHz) on the cadmium biosorption kinetics was studied. The obtained results show that ultrasonic irradiation significantly improves the efficiency of cadmium removal. The pseudo-first-order, pseudo-second-order, intraparticle diffusion, and Elovich models were evaluated using non-linear curve fitting. Modeling of the kinetic results shows that the biosorption process is best described by the pseudo-second-order and Elovich models, in both the absence and presence of ultrasound.
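
For reference, the non-linear forms of the kinetic models evaluated here are commonly written as follows, where q_t is the amount sorbed at time t and q_e the amount at equilibrium (standard textbook notation, not reproduced from the paper):

    q_t = q_e\left(1 - e^{-k_1 t}\right) \ \text{(pseudo-first-order)}, \qquad
    q_t = \frac{q_e^{2} k_2 t}{1 + q_e k_2 t} \ \text{(pseudo-second-order)}, \qquad
    q_t = \frac{1}{\beta}\ln\!\left(1 + \alpha\beta t\right) \ \text{(Elovich)} .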

Keywords: Biocomposite, biosorption, cadmium, non-linear analysis, ultrasound.

7434 Cascaded H-Bridge Five Level Inverter Based Selective Harmonic Eliminated Pulse Width Modulation for Harmonic Elimination

Authors: S. Selvaperumal, M. S. Sivagamasundari

Abstract:

In this paper, the selective harmonic elimination pulse width modulation (SHE-PWM) technique is employed to eliminate lower-order harmonics, such as the third, by solving a set of non-linear equations. The cascaded H-bridge five-level inverter is driven by the Peripheral Interface Controller (PIC) microcontroller 16F877A. The performance of the single-phase cascaded H-bridge five-level inverter, with respect to harmonics and for a variety of switches, with a solar cell as its input source, is simulated using MATLAB/Simulink. A hardware model is developed to verify the performance of the developed system.
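
A minimal numerical sketch of the SHE equation-solving step for a five-level waveform with two switching angles: the third harmonic is driven to zero while the fundamental hits a per-unit target m (the definition of m and the target value are assumptions for illustration).

    # Hedged illustration: solve the two SHE-PWM transcendental equations.
    import numpy as np
    from scipy.optimize import fsolve

    m = 0.8                                            # per-unit fundamental target (assumed)

    def she(angles):
        a1, a2 = angles
        return [np.cos(a1) + np.cos(a2) - 2.0 * m,     # fundamental amplitude condition
                np.cos(3 * a1) + np.cos(3 * a2)]       # third harmonic forced to zero

    a1, a2 = fsolve(she, x0=[0.2, 0.9])                # initial guess in radians
    print(np.degrees([a1, a2]))                        # roughly [7.5, 52.5] degrees

These angles would then be loaded into the PIC-based gating logic; the paper's hardware and simulation details are not reproduced here.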

Keywords: Multilevel inverter, cascaded H-Bridge multilevel inverter, total harmonic distortion, selective harmonic elimination pulse width modulation, MATLAB.
