Search results for: the linear regression model.
8675 Modified Buck Boost Circuit for Linear and Non-Linear Piezoelectric Energy Harvesting
Authors: I Made Darmayuda, Chai Tshun Chuan Kevin, Je Minkyu
Abstract:
Many studies have reported techniques for harvesting energy from piezoelectric transducers. Earlier work mainly reported linear energy harvesting techniques, in which the interface circuitry is designed to have an input impedance matched to that of the piezoelectric transducer. In recent years, non-linear techniques have become more popular. The non-linear technique employs voltage waveform manipulation to boost the energy available for extraction at the time of energy transfer. The fact that non-linear extraction provides more available energy does not make linear extraction obsolete; in some scenarios, such as when no initial power is available, linear extraction is still preferred. This paper reports a modified Buck Boost circuit capable of harvesting piezoelectric energy using both linear and non-linear techniques. An efficiency of at least 64% can be achieved with this circuit. For linear extraction, the modified Buck Boost circuit is controlled with a fixed-frequency, fixed-duty-cycle clock. A voltage sensor and a pulse generator are added as the controller for the non-linear extraction technique.
Keywords: Buck boost, energy harvester, linear energy harvester, non-linear energy harvester, piezoelectric, synchronized charge extraction.
8674 Optimal Control Strategies for Speed Control of Permanent-Magnet Synchronous Motor Drives
Authors: Roozbeh Molavi, Davood A. Khaburi
Abstract:
The permanent magnet synchronous motor (PMSM) is very useful in many applications, and vector control is a popular way of controlling it. In this paper, an optimal vector control for the PMSM is first designed and its results are compared with those of conventional vector control. It is then assumed that the measurements are noisy, and linear quadratic Gaussian (LQG) methodology is used to filter the noise. The results of the noisy optimal vector control and the filtered optimal vector control are compared with each other. The nonlinearity of the PMSM and the presence of an inverter in its control circuit make the system nonlinear and time-variant. By deriving an average model, the system becomes nonlinear but time-invariant, and the nonlinear system is then converted to a linear one by linearizing the model around the average values. This model is used to optimize the vector control, and the two optimal vector controls are compared with each other. Simulation results show that the performance and the robustness to noise of the control system are highly improved.
Keywords: Kalman filter, linear quadratic Gaussian (LQG), linear quadratic regulator (LQR), permanent-magnet synchronous motor (PMSM).
8673 Predictive Models for Compressive Strength of High Performance Fly Ash Cement Concrete for Pavements
Authors: S. M. Gupta, Vanita Aggarwal, Som Nath Sachdeva
Abstract:
The work reported in this paper is an experimental study conducted on High Performance Concrete (HPC) with superplasticizer, with the aim of developing models suitable for predicting the compressive strength of HPC mixes. In this study, the effect of varying proportions of fly ash (0% to 50%, in 10% increments) on the compressive strength of high performance concrete has been evaluated. The mix designs studied were M30, M40 and M50, to compare the effect of fly ash addition on the properties of these concrete mixes. Of the eighteen concrete mixes designed, three were conventional concretes for the three grades under discussion and fifteen were HPC with varying percentages of fly ash. The concrete mix design was carried out in accordance with the Indian standard recommended guidelines. All the concrete mixes were studied in terms of compressive strength at 7, 28, 90, and 365 days. All the materials were kept the same throughout the study to allow a direct comparison of results. The models for compressive strength prediction have been developed using the Linear Regression (LR), Artificial Neural Network (ANN) and Leave-One-Out Validation (LOOV) methods.
Keywords: ANN, concrete mixes, compressive strength, fly ash, high performance concrete, linear regression, strength prediction models.
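For readers who want to reproduce this kind of strength model, the sketch below fits a linear regression to hypothetical mix data (fly ash percentage, curing age) and scores it with leave-one-out validation; the feature choice and values are illustrative assumptions, not the authors' data set.

```python
# Minimal sketch: linear regression with leave-one-out validation for
# compressive strength prediction. Data values are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Illustrative mix records: [fly ash %, curing age in days]
X = np.array([[0, 28], [10, 28], [20, 28], [30, 28], [40, 28], [50, 28],
              [0, 90], [10, 90], [20, 90], [30, 90], [40, 90], [50, 90]])
# Hypothetical compressive strengths (MPa)
y = np.array([38.2, 37.5, 36.1, 34.8, 32.9, 30.5,
              44.0, 44.6, 45.1, 44.2, 42.0, 39.3])

model = LinearRegression()
# Leave-one-out: each mix is predicted by a model trained on all the others
y_loo = cross_val_predict(model, X, y, cv=LeaveOneOut())
mae = np.mean(np.abs(y - y_loo))

model.fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("leave-one-out MAE (MPa): %.2f" % mae)
```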
8672 Forecasting Stock Indexes Using Bayesian Additive Regression Tree
Authors: Darren Zou
Abstract:
Forecasting the stock market is a very challenging task. Various economic indicators such as GDP, exchange rates, interest rates, and unemployment have a substantial impact on the stock market. Time series models are the traditional methods used to predict stock market changes. In this paper, a machine learning method, the Bayesian Additive Regression Tree (BART), is used to predict stock market indexes based on multiple economic indicators. BART can be used to model heterogeneous treatment effects and therefore works well when models are misspecified. It also has the capability to handle non-linear main effects and multi-way interactions without much input from financial analysts. In this research, BART is proposed to provide reliable predictions of day-to-day stock market activity. A comparison of the results from BART with those of the time series method shows that BART performs well and has better prediction capability than the traditional methods.
Keywords: Bayesian, Forecast, Stock, BART.
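A rough sketch of the multi-indicator forecasting setup described above. BART itself requires a dedicated library (for example the R BART package or a Python port), so a scikit-learn gradient-boosting regressor is used here purely as a stand-in; the indicators, the synthetic data and the train/test split are all assumptions.

```python
# Sketch of a multi-indicator index-forecasting workflow.
# NOTE: a gradient-boosting regressor is a stand-in here; an actual BART model
# would come from a dedicated package and slot into the same place.
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500  # trading days (synthetic)

# Hypothetical daily indicators (stand-ins for GDP growth, exchange rate,
# interest rate, unemployment)
X = rng.normal(size=(n, 4))
# Synthetic next-day index return driven non-linearly by the indicators
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] ** 2 + 0.2 * X[:, 2] * X[:, 3] \
    + rng.normal(scale=0.1, size=n)

split = int(0.8 * n)          # keep time order: train on the past, test on the future
model = HistGradientBoostingRegressor(max_depth=3)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("out-of-sample MAE:", mean_absolute_error(y[split:], pred))
```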
8671 Stochastic Model Predictive Control for Linear Discrete-Time Systems with Random Dither Quantization
Authors: Tomoaki Hashimoto
Abstract:
Recently, feedback control systems using random dither quantizers have been proposed for linear discrete-time systems. However, the constraints imposed on the state and control variables have not yet been taken into account in the design of feedback control systems with random dither quantization. Model predictive control is a form of optimal feedback control in which control performance over a finite future is optimized using a performance index with a moving initial and terminal time; an important advantage of model predictive control is its ability to handle constraints imposed on state and control variables. Based on the model predictive control approach, this paper presents a control method, in other words a method for solving the optimal control problems, that satisfies probabilistic state constraints for linear discrete-time feedback control systems with random dither quantization.
Keywords: Optimal control, stochastic systems, discrete-time systems, probabilistic constraints, random dither quantization.
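For readers unfamiliar with random dither quantization, the short numeric sketch below implements the standard dithered uniform quantizer (not the MPC design itself) and checks that, on average, the quantized control equals the unquantized one; the step size and control value are arbitrary illustrative choices.

```python
# Random dither quantization: add uniform dither on [-Delta/2, Delta/2] before
# uniform quantization, so the quantization error has zero mean regardless of
# the input (the property exploited by these feedback controllers).
import numpy as np

def dithered_quantize(u, delta, rng):
    d = rng.uniform(-delta / 2, delta / 2, size=np.shape(u))
    return delta * np.round((u + d) / delta)

rng = np.random.default_rng(42)
delta = 0.5
u = 0.37                       # an arbitrary control value between quantizer levels
samples = dithered_quantize(np.full(100000, u), delta, rng)
print("mean of quantized control:", samples.mean())   # close to 0.37
print("quantizer levels used:", np.unique(samples))   # 0.0 and 0.5
```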
8670 A Robust LS-SVM Regression
Authors: József Valyon, Gábor Horváth
Abstract:
In Least Squares SVM (LS-SVM), the nonlinear solution is obtained by first mapping the input vector to a high-dimensional kernel space in a nonlinear fashion, where the solution is calculated from a set of linear equations. In comparison to the original SVM, which involves a quadratic programming task, LS-SVM simplifies the required computation, but unfortunately the sparseness of the standard SVM is lost. Another problem is that LS-SVM is only optimal if the training samples are corrupted by Gaussian noise. In this paper, a geometric view of the kernel space is introduced, which enables us to develop a new formulation that achieves a sparse and robust estimate.
Keywords: Support Vector Machines, Least Squares Support Vector Machines, regression, sparse approximation.
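A minimal sketch of the standard (non-robust, non-sparse) LS-SVM regression that the abstract starts from: the solution comes from one linear system rather than a quadratic program. The RBF kernel, its width and the regularization value are arbitrary illustrative choices, and the robust/sparse extension proposed in the paper is not shown.

```python
# Standard LS-SVM regression: solve the linear system
#   [ 0      1^T        ] [ b     ]   [ 0 ]
#   [ 1   K + I/gamma   ] [ alpha ] = [ y ]
# and predict with f(x) = sum_i alpha_i * k(x, x_i) + b.
import numpy as np

def rbf_kernel(A, B, sigma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(1)
X = np.linspace(-3, 3, 60)[:, None]
y = np.sinc(X[:, 0]) + 0.05 * rng.normal(size=60)   # noisy target

gamma = 100.0                      # regularization constant (illustrative)
K = rbf_kernel(X, X)
n = len(y)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
rhs = np.concatenate(([0.0], y))
sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

X_test = np.array([[0.0], [1.5]])
y_hat = rbf_kernel(X_test, X).dot(alpha) + b
print("predictions at x=0 and x=1.5:", y_hat)
```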
8669 Analyzing the Factors Effecting the Passenger Car Breakdowns using Com-Poisson GLM
Authors: N. Mamode Khan, V. Jowaheer
Abstract:
The number of breakdowns experienced by a machine is a highly under-dispersed count random variable, and its value can be attributed to factors related to the mechanical input and output of that machine. Analyzing such under-dispersed count observations as a function of explanatory factors has been a challenging problem. In this paper, we estimate the effects of various factors on the number of breakdowns experienced by a passenger car, based on a study performed in Mauritius over one year. We observe that the number of passenger car breakdowns is highly under-dispersed. These data are therefore modelled and analyzed using a Com-Poisson regression model, with the parameters estimated by a quasi-likelihood approach. The under-dispersion parameter is estimated to be 2.14, justifying the appropriateness of the Com-Poisson distribution for modelling the under-dispersed count responses recorded in this study.
Keywords: Breakdowns, under-dispersion, Com-Poisson, generalized linear model, quasi-likelihood estimation.
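A small sketch of the distribution behind this model: the Conway-Maxwell-Poisson pmf, whose dispersion parameter nu > 1 yields under-dispersion. The nu value follows the abstract's estimate; the rate parameter lambda is an arbitrary illustrative choice, and the regression/quasi-likelihood machinery is not shown.

```python
# Conway-Maxwell-Poisson (Com-Poisson): P(Y=y) = lam**y / (y!)**nu / Z(lam, nu),
# where nu > 1 gives under-dispersion (variance < mean).
import numpy as np
from scipy.special import gammaln

def com_poisson_pmf(y, lam, nu, y_max=200):
    ys = np.arange(y_max + 1)
    log_w = ys * np.log(lam) - nu * gammaln(ys + 1)   # unnormalized log-weights
    log_Z = np.logaddexp.reduce(log_w)                # truncated normalizing constant
    return np.exp(y * np.log(lam) - nu * gammaln(y + 1) - log_Z)

lam, nu = 3.0, 2.14          # nu as estimated in the study; lam is illustrative
ys = np.arange(0, 30)
p = com_poisson_pmf(ys, lam, nu)
mean = (ys * p).sum()
var = ((ys - mean) ** 2 * p).sum()
print("mean = %.3f, variance = %.3f (under-dispersed: var < mean)" % (mean, var))
```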
8668 Application of an Analytical Model to Obtain Daily Flow Duration Curves for Different Hydrological Regimes in Switzerland
Authors: Ana Clara Santos, Maria Manuela Portela, Bettina Schaefli
Abstract:
This work assesses the performance of an analytical model framework to generate daily flow duration curves, FDCs, based on the climatic characteristics of the catchments and on their streamflow recession coefficients. According to the analytical model framework, precipitation is considered to be a stochastic process, modeled as a marked Poisson process, and recession is considered to be deterministic, with parameters that can be computed using different models. The analytical model framework was tested on three case studies with different hydrological regimes located in Switzerland: pluvial, snow-dominated and glacier. For that purpose, five time intervals were analyzed (the four meteorological seasons and the civil year), and two developments of the model were tested: one considering a linear recession model and the other adopting a nonlinear recession model. These developments were combined with recession coefficients obtained from two different approaches: forward and inverse estimation. The performance of the analytical framework with forward parameter estimation is poor in comparison with inverse estimation for both the linear and nonlinear models. For the pluvial catchment, the inverse estimation shows exceptionally good results, especially for the nonlinear model, clearly suggesting that the model has the ability to describe FDCs. For the snow-dominated and glacier catchments, the seasonal results are better than the annual ones, suggesting that the model can describe streamflows under those conditions and that future efforts should focus on improving and combining seasonal curves instead of considering single annual ones.
Keywords: Analytical streamflow distribution, stochastic process, linear and non-linear recession, hydrological modelling, daily discharges.
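A conceptual sketch of the model ingredients named in the abstract: daily precipitation as a marked Poisson process (exponentially distributed event depths), a linear storage/recession for streamflow, and the FDC built from the simulated discharges. All parameter values are illustrative, not the Swiss catchment values, and the analytical (closed-form) FDC derivation is replaced here by plain simulation.

```python
# Marked Poisson precipitation -> linear reservoir -> empirical flow duration curve.
# All parameters are illustrative; at most one rain event per day is a simple
# approximation of the Poisson arrivals.
import numpy as np

rng = np.random.default_rng(7)
n_days = 3650
lam = 0.3        # mean number of rain events per day (Poisson rate)
alpha = 8.0      # mean event depth (mm), exponentially distributed marks
k = 0.05         # linear recession coefficient (1/day)

rain = np.where(rng.random(n_days) < lam,
                rng.exponential(alpha, n_days), 0.0)

S = 0.0
Q = np.empty(n_days)
for t in range(n_days):
    S += rain[t]          # recharge from the day's precipitation
    Q[t] = k * S          # linear reservoir: discharge proportional to storage
    S -= Q[t]

# FDC: sort discharges in descending order and pair them with rank/(n+1),
# the empirical probability of exceedance.
Q_sorted = np.sort(Q)[::-1]
print("Q exceeded 5%% of the time:  %.2f mm/day" % Q_sorted[int(0.05 * n_days)])
print("Q exceeded 95%% of the time: %.2f mm/day" % Q_sorted[int(0.95 * n_days)])
```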
8667 Comparison of Different Data Acquisition Techniques for Shape Optimization Problems
Authors: Attila Vámosi, Tamás Mankovits, Dávid Huri, Imre Kocsis, Tamás Szabó
Abstract:
Non-linear FEM calculations are indispensable when important technical information, such as the operating performance of a rubber component, is needed. For example, rubber bumpers built into air-spring structures may undergo large deformations under load, which in itself is non-linear behavior. The changing contact range between the parts and the incompressibility of the rubber increase this non-linear behavior further. The material characterization of an elastomeric component is also a demanding engineering task. The shape optimization problem of rubber parts led to the study of FEM-based calculation processes; this type of problem has been posed and investigated by several authors. In this paper, the time demand of certain calculation methods is studied and possibilities for time reduction are presented.
Keywords: Rubber bumper, data acquisition, finite element analysis, support vector regression.
8666 Estimating Reaction Rate Constants with Neural Networks
Authors: Benedek Kovacs, Janos Toth
Abstract:
Solutions are proposed for the central problem of estimating the reaction rate coefficients in homogeneous kinetics. The first is based upon the fact that the right-hand side of a kinetic differential equation is linear in the rate constants, whereas the second uses the technique of neural networks. The second approach is discussed in depth, and its advantages, disadvantages and conditions of applicability are analyzed in comparison with the first. Numerical analysis was carried out on practical models using simulated data and our programs written in Mathematica.
Keywords: Neural networks, parameter estimation, linear regression, kinetic models, reaction rate coefficients.
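The first approach mentioned above (the right-hand side is linear in the rate constants) can be illustrated on a hypothetical two-step mechanism A -> B -> C: estimate the time derivatives from the concentration data and recover the rate constants by ordinary least squares. The mechanism, noise level and parameter values below are illustrative assumptions, not the authors' examples.

```python
# Mass-action kinetics is linear in the rate constants, so k can be recovered
# by least squares from concentration data. Illustrative: A --k1--> B --k2--> C.
import numpy as np
from scipy.integrate import solve_ivp

k_true = np.array([0.7, 0.3])

def rhs(t, c, k1, k2):
    A, B, C = c
    return [-k1 * A, k1 * A - k2 * B, k2 * B]

t = np.linspace(0, 10, 101)
sol = solve_ivp(rhs, (0, 10), [1.0, 0.0, 0.0], t_eval=t, args=tuple(k_true))
conc = sol.y.T + np.random.default_rng(3).normal(scale=1e-3, size=sol.y.T.shape)

# Finite-difference estimate of dc/dt at the sampling times
dcdt = np.gradient(conc, t, axis=0)

# Stack the design so that dc/dt = Theta(c) @ [k1, k2]
A_col, B_col = conc[:, 0], conc[:, 1]
Theta = np.zeros((3 * len(t), 2))
Theta[0::3, 0] = -A_col          # dA/dt = -k1*A
Theta[1::3, 0] = A_col           # dB/dt = +k1*A - k2*B
Theta[1::3, 1] = -B_col
Theta[2::3, 1] = B_col           # dC/dt = +k2*B
rhs_vec = dcdt.reshape(-1)

k_est, *_ = np.linalg.lstsq(Theta, rhs_vec, rcond=None)
print("true k:", k_true, "estimated k:", k_est)
```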
8665 Comparison of Neural Network and Logistic Regression Methods to Predict Xerostomia after Radiotherapy
Authors: Hui-Min Ting, Tsair-Fwu Lee, Ming-Yuan Cho, Pei-Ju Chao, Chun-Ming Chang, Long-Chang Chen, Fu-Min Fang
Abstract:
To evaluate the ability to predict xerostomia after radiotherapy, we constructed and compared neural network and logistic regression models. In this study, 61 patients who completed a questionnaire about their quality of life (QoL) before and after a full course of radiation therapy were included. Based on this questionnaire, statistical data about the condition of the patients’ salivary glands were obtained and used as the inputs of the neural network and logistic regression models to predict the probability of xerostomia. Seven variables were selected from the statistical data according to Cramer’s V and point-biserial correlation values and were used to train each model. The respective outputs were 0.88 and 0.89 for AUC, 9.20 and 7.65 for SSE, and 13.7% and 19.0% for MAPE. These results demonstrate that both the neural network and logistic regression methods are effective for predicting the condition of the parotid glands.
Keywords: NPC, ANN, logistic regression, xerostomia.
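A minimal sketch of the logistic-regression half of such a comparison: seven predictors, a predicted probability of xerostomia, and an AUC on held-out patients. The data here are synthetic placeholders, not the study's questionnaire-derived variables.

```python
# Logistic regression on seven predictors -> probability of xerostomia,
# evaluated by AUC. All values are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n, p = 61, 7                       # 61 patients, 7 selected variables
X = rng.normal(size=(n, p))
logit = X @ np.array([1.2, -0.8, 0.5, 0.0, 0.3, -0.4, 0.6]) - 0.2
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                          stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]   # predicted probability of xerostomia
print("test AUC: %.2f" % roc_auc_score(y_te, prob))
```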
8664 Development of a Comprehensive Electricity Generation Simulation Model Using a Mixed Integer Programming Approach
Authors: Erik Delarue, David Bekaert, Ronnie Belmans, William D'haeseleer
Abstract:
This paper presents the development of an electricity simulation model that takes electrical network constraints into account, applied to the Belgian power system. The core of the model is the optimization of an extensive Unit Commitment (UC) problem through Mixed Integer Linear Programming (MILP). Electrical constraints are incorporated through the implementation of a DC load flow. The model covers the Belgian power system at the 220-380 kV high-voltage level (i.e., 93 power plants and 106 nodes). The model features the use of pumped-storage facilities as well as the inclusion of spinning reserves in a single optimization process. Solution times of the model remain within reasonable limits.
Keywords: Electricity generation modeling, Unit Commitment (UC), Mixed Integer Linear Programming (MILP), DC load flow.
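To make the UC-as-MILP idea concrete, here is a toy two-unit, three-hour commitment problem in PuLP. The network (DC load flow), spinning reserves and pumped storage from the Belgian model are omitted, and all costs, limits and demands are made-up illustrative numbers.

```python
# Toy unit commitment MILP (two units, three hours), in the spirit of the
# model described above but without network constraints or reserves.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

units = ["U1", "U2"]
hours = [0, 1, 2]
p_min = {"U1": 50, "U2": 20}       # MW
p_max = {"U1": 200, "U2": 100}     # MW
c_var = {"U1": 20, "U2": 35}       # EUR/MWh
c_fix = {"U1": 500, "U2": 100}     # EUR per committed hour
demand = {0: 120, 1: 250, 2: 180}  # MW

prob = LpProblem("toy_unit_commitment", LpMinimize)
u = LpVariable.dicts("on", (units, hours), cat=LpBinary)   # commitment decision
p = LpVariable.dicts("p", (units, hours), lowBound=0)      # dispatched power

prob += lpSum(c_var[g] * p[g][t] + c_fix[g] * u[g][t]
              for g in units for t in hours)
for t in hours:
    prob += lpSum(p[g][t] for g in units) == demand[t]     # load balance
    for g in units:
        prob += p[g][t] <= p_max[g] * u[g][t]               # capacity if committed
        prob += p[g][t] >= p_min[g] * u[g][t]               # minimum if committed

prob.solve()
for t in hours:
    print(t, {g: (int(value(u[g][t])), value(p[g][t])) for g in units})
print("total cost:", value(prob.objective))
```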
8663 Performance Analysis of Proprietary and Non-Proprietary Tools for Regression Testing Using Genetic Algorithm
Authors: K. Hema Shankari, R. Thirumalaiselvi, N. V. Balasubramanian
Abstract:
The present paper addresses research in the area of regression testing, with emphasis on automated tools as well as prioritization of test cases. The uniqueness of regression testing and its cyclic nature are pointed out. The difference in approach between industry, with a business model as its basis, and academia, with its focus on data mining, is highlighted. Test metrics are discussed as a prelude to our formula for prioritization, and a case study is discussed to illustrate this methodology. An industrial case study is also described, where the number of test cases is so large that they have to be grouped into test suites. In such situations, a genetic algorithm proposed by us can be used to reconfigure these test suites in each cycle of regression testing. A comparison is made between a proprietary tool and an open-source tool using the above-mentioned metrics. Our approach is clarified through several tables.
Keywords: APFD metric, genetic algorithm, regression testing, RFT tool, test case prioritization, Selenium tool.
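For readers unfamiliar with the APFD metric listed in the keywords, the short sketch below computes it for a test-case ordering; the fault matrix and the two orderings are made-up illustrations, not the paper's case-study data.

```python
# APFD (Average Percentage of Faults Detected) for a test-case ordering:
#   APFD = 1 - (TF1 + ... + TFm) / (n * m) + 1 / (2 * n)
# where n = number of test cases, m = number of faults, and TFi is the
# 1-based position of the first test case in the ordering that reveals fault i.
def apfd(order, fault_matrix):
    n = len(order)
    m = len(fault_matrix[0])
    first_pos = []
    for fault in range(m):
        for pos, test in enumerate(order, start=1):
            if fault_matrix[test][fault]:
                first_pos.append(pos)
                break
    return 1 - sum(first_pos) / (n * m) + 1 / (2 * n)

# Hypothetical matrix: fault_matrix[test][fault] is True if the test finds it
faults = [
    [True,  False, False, False],   # test 0
    [False, True,  False, False],   # test 1
    [False, False, True,  False],   # test 2
    [True,  True,  True,  True ],   # test 3
]
print("original order [0,1,2,3]:", apfd([0, 1, 2, 3], faults))   # 0.5
print("prioritized    [3,0,1,2]:", apfd([3, 0, 1, 2], faults))   # 0.875
```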
8662 A Linear Regression Model for Estimating Anxiety Index Using Wide Area Frontal Lobe Brain Blood Volume
Authors: Takashi Kaburagi, Masashi Takenaka, Yosuke Kurihara, Takashi Matsumoto
Abstract:
Major depressive disorder (MDD) is one of the most common mental illnesses today. It is believed to be caused by a combination of several factors, including stress. Stress can be quantitatively evaluated using the State-Trait Anxiety Inventory (STAI), one of the best indices for evaluating anxiety. Although STAI scores are widely used in applications ranging from clinical diagnosis to basic research, the scores are calculated from a self-reported questionnaire. An objective evaluation is required because the subject may intentionally change his/her answers if multiple tests are carried out. In this article, we present a modified index called the “multi-channel Laterality Index at Rest (mc-LIR)”, obtained by recording brain activity from a wider area of the frontal lobe using multi-channel functional near-infrared spectroscopy (fNIRS). The presented index is measured at multiple positions near Fpz, as defined by the international 10-20 positioning system. Using 24 subjects, the dependence of the mc-LIR on the number of measuring points and its correlation coefficients with the STAI scores are reported. Furthermore, a simple linear regression was performed to estimate the STAI scores from the mc-LIR, and the cross-validation error is also reported. The experimental results show that using multiple positions near Fpz improves the correlation coefficients and the estimation compared with using only two positions.
Keywords: Stress, functional near-infrared spectroscopy, frontal lobe, state-trait anxiety inventory score.
8661 Currency Exchange Rate Forecasts Using Quantile Regression
Authors: Yuzhi Cai
Abstract:
In this paper, we discuss a Bayesian approach to quantile autoregressive (QAR) time series model estimation and forecasting. Together with a forecast-combining technique, we then predict USD to GBP currency exchange rates. Combined forecasts contain all the information captured by the fitted QAR models at different quantile levels and are therefore better than those obtained from individual models. Our results show that an unequally weighted combining method performs better than other forecasting methodologies. We found that a median AR model can perform well in point forecasting when the predictive density functions are symmetric. However, in practice, using the median AR model alone may involve a loss of information about the data captured by the other QAR models. We recommend that combined forecasts be used whenever possible.
Keywords: Exchange rate, quantile regression, combining forecasts.
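The paper estimates the QAR models in a Bayesian framework; as a rough frequentist stand-in, the sketch below fits an AR(1) quantile regression at several levels with statsmodels and combines the one-step-ahead quantile forecasts with unequal weights. The synthetic return series, the quantile grid and the weights are all illustrative assumptions.

```python
# Stand-in sketch: quantile AR(1) models at several levels (frequentist QuantReg
# rather than the paper's Bayesian estimation) and a weighted combination of
# their one-step-ahead forecasts. Data and weights are made up.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 400
r = np.empty(n)
r[0] = 0.0
for t in range(1, n):                       # synthetic exchange-rate returns
    r[t] = 0.4 * r[t - 1] + rng.standard_t(df=4) * 0.1

y = r[1:]
X = sm.add_constant(r[:-1])                 # AR(1) design: [1, r_{t-1}]

quantiles = [0.1, 0.25, 0.5, 0.75, 0.9]
weights = [0.1, 0.2, 0.4, 0.2, 0.1]         # unequal weights, most on the median
x_last = np.array([1.0, r[-1]])

forecasts = []
for q in quantiles:
    fit = sm.QuantReg(y, X).fit(q=q)
    forecasts.append(float(fit.params @ x_last))

combined = float(np.dot(weights, forecasts))
print("quantile forecasts:", dict(zip(quantiles, np.round(forecasts, 4))))
print("combined one-step forecast:", round(combined, 4))
```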
8660 Comparison of Multivariate Adaptive Regression Splines and Random Forest Regression in Predicting Forced Expiratory Volume in One Second
Authors: P. V. Pramila, V. Mahesh
Abstract:
Pulmonary function tests are important non-invasive diagnostic tests for assessing respiratory impairments and provide quantifiable measures of lung function. Spirometry is the most frequently used measure of lung function and plays an essential role in the diagnosis and management of pulmonary diseases. However, the test requires considerable patient effort and cooperation, markedly related to the age of the patients, resulting in incomplete data sets. This paper presents nonlinear models built using Multivariate Adaptive Regression Splines (MARS) and random forest regression to predict the missing spirometric features. Random-forest-based feature selection is used to enhance both the generalization capability and the interpretability of the models. In the present study, flow-volume data were recorded for N = 198 subjects. The ranked feature importance indices calculated by the random forest model show that the spirometric features FVC, FEF25, PEF, FEF25-75, FEF50 and the demographic parameter height are the important descriptors. A comparison of the performance of both models shows that the prediction ability of MARS with the top two ranked features, FVC and FEF25, is higher, yielding model fits of R2 = 0.96 and R2 = 0.99 for normal and abnormal subjects, respectively. The root mean square error analysis of the RF and MARS models also shows that the latter is capable of predicting the missing values of FEV1 with notably lower errors of 0.0191 (normal subjects) and 0.0106 (abnormal subjects) using the aforementioned input features. It is concluded that combining feature selection with a prediction model provides a minimal subset of predominant features for training the model, as well as better prediction performance. This analysis can assist clinicians as an intelligent decision-support system in medical diagnosis and the improvement of clinical care.
Keywords: FEV1, Multivariate Adaptive Regression Splines, Pulmonary Function Test, Random Forest.
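A sketch of the random-forest part of such a workflow: rank the spirometric features by importance, then predict FEV1 from the top two. The feature names follow the abstract, but the data are synthetic placeholders; the MARS model would require an additional package (for example a py-earth-style implementation) and is not shown here.

```python
# Random-forest feature ranking followed by prediction of FEV1 from the top
# two features. Data are synthetic placeholders, not the study's spirometry set.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(9)
n = 198
features = ["FVC", "FEF25", "PEF", "FEF25-75", "FEF50", "height"]
X = rng.normal(size=(n, len(features)))
fev1 = 0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 5] + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, fev1, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

ranking = sorted(zip(rf.feature_importances_, features), reverse=True)
print("importance ranking:", [(name, round(imp, 3)) for imp, name in ranking])

top2 = [features.index(name) for _, name in ranking[:2]]
rf2 = RandomForestRegressor(n_estimators=300, random_state=0)
rf2.fit(X_tr[:, top2], y_tr)
print("R2 with top-2 features:",
      round(r2_score(y_te, rf2.predict(X_te[:, top2])), 3))
```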
8659 A Non-Linear Eddy Viscosity Model for Turbulent Natural Convection in Geophysical Flows
Authors: J. P. Panda, K. Sasmal, H. V. Warrior
Abstract:
Eddy viscosity models in turbulence modeling can be broadly classified as linear or nonlinear. Linear formulations are simple and require fewer computational resources, but have the disadvantage that they cannot predict the actual flow pattern in complex geophysical flows where streamline curvature and swirling motion are predominant. A constitutive equation for the Reynolds stress anisotropy is adopted for the formulation of the eddy viscosity, including all possible higher-order terms quadratic in the mean velocity gradients, and a simplified model is developed for actual oceanic flows where only the vertical velocity gradients are important. The new model is incorporated into the one-dimensional General Ocean Turbulence Model (GOTM). Two realistic oceanic test cases (OWS Papa and FLEX'76) have been investigated. The new model predictions match well with the observational data and are better than those of the two-equation k-epsilon model. The proposed model can easily be incorporated in the three-dimensional Princeton Ocean Model (POM) to simulate a wide range of oceanic processes. Practically, this model can be implemented in coastal regions, where transverse shear induces higher vorticity, and for the prediction of flow in estuaries and lakes, where the depth is comparatively small. The model predictions of marine turbulence and related quantities (e.g., sea surface temperature, surface heat flux and vertical temperature profile) can be utilized in short-term ocean and climate forecasting and warning systems.
Keywords: Eddy viscosity, turbulence modeling, GOTM, CFD.
8658 Robust Variogram Fitting Using Non-Linear Rank-Based Estimators
Authors: Hazem M. Al-Mofleh, John E. Daniels, Joseph W. McKean
Abstract:
In this paper, numerous robust fitting procedures are considered for estimating spatial variograms. In spatial statistics, the conventional variogram fitting procedure (non-linear weighted least squares) suffers from the same outlier problem that has plagued this method since its inception. Even a three-parameter model like the variogram can be adversely affected by a single outlier. This paper uses Hogg-type adaptive procedures to select an optimal score function for a rank-based estimator of these non-linear models. Numerical examples and simulation studies demonstrate the robustness, utility, efficiency, and validity of these estimates.
Keywords: Asymptotic relative efficiency, non-linear rank-based, robust, rank estimates, variogram.
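The paper's estimators are rank-based; as a simpler illustrative stand-in, the sketch below fits a three-parameter exponential variogram to binned empirical semivariances with a robust soft-L1 loss in SciPy, so a single outlying bin has limited influence. The variogram form, the synthetic semivariances and the outlier are all assumptions.

```python
# Illustrative stand-in for robust variogram fitting: a 3-parameter exponential
# variogram fitted with a robust soft-L1 loss (the paper itself uses Hogg-type
# rank-based estimators instead of a robust loss).
import numpy as np
from scipy.optimize import least_squares

def exp_variogram(h, nugget, sill, rng_):
    return nugget + sill * (1.0 - np.exp(-h / rng_))

# Synthetic empirical semivariances with one gross outlier
h = np.linspace(1, 20, 15)
true = exp_variogram(h, nugget=0.5, sill=2.0, rng_=5.0)
gamma_emp = true + np.random.default_rng(2).normal(0, 0.05, h.size)
gamma_emp[7] += 3.0                       # a single outlying bin

def residuals(theta):
    return exp_variogram(h, *theta) - gamma_emp

fit_ls = least_squares(residuals, x0=[0.1, 1.0, 1.0])                 # plain LS
fit_rb = least_squares(residuals, x0=[0.1, 1.0, 1.0], loss="soft_l1")  # robust
print("ordinary LS estimate :", np.round(fit_ls.x, 2))
print("robust (soft_l1)     :", np.round(fit_rb.x, 2))
print("true parameters      : [0.5 2.  5. ]")
```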
8657 Simulating Action Potential as a Linear Combination of Gating Dynamics
Authors: S. H. Sabzpoushan
Abstract:
In this research, we show that the dynamics of an action potential in a cell can be modeled with a linear combination of the dynamics of the gating state variables, and that the modeling error is negligible. Our findings can be used for simplifying cell models and reducing the computational burden, i.e., they are useful for simulating action potential propagation in large-scale computations such as tissue modeling. We have verified our finding using several cell models.
Keywords: Linear model, Action potential, gating dynamics.
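The core idea can be illustrated with a least-squares fit of V(t) to a weighted sum of gating variables. The traces below are synthetic stand-ins for state variables taken from a real cell model, so the weights and the residual are purely illustrative.

```python
# Approximate the action potential V(t) as a linear combination of gating
# state variables and report the residual error. Synthetic traces stand in
# for variables recorded from a cell model.
import numpy as np

t = np.linspace(0, 400, 2000)                       # ms
m = 1.0 / (1.0 + np.exp(-(t - 50) / 5))             # fast activation-like trace
h = 1.0 / (1.0 + np.exp((t - 150) / 30))            # slower inactivation-like trace
n = np.exp(-((t - 200) / 120) ** 2)                 # slow gate-like trace
V = -85 + 110 * m * h + 20 * n                      # "recorded" action potential

# Design matrix: gating variables plus a constant offset
G = np.column_stack([m, h, n, np.ones_like(t)])
w, *_ = np.linalg.lstsq(G, V, rcond=None)
V_fit = G @ w

rmse = np.sqrt(np.mean((V - V_fit) ** 2))
print("weights:", np.round(w, 2))
print("RMSE of the linear-combination fit: %.2f mV" % rmse)
```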
8656 Optimal Planning of Waste-to-Energy through Mixed Integer Linear Programming
Authors: S. T. Tan, H. Hashim, W. S. Ho, C. T. Lee
Abstract:
Rapid economic development and population growth in Malaysia have accelerated the generation of solid waste. This puts pressure on the effective management of municipal solid waste (MSW) in Malaysia due to the increased cost of landfill. This paper discusses the optimal planning of waste-to-energy (WTE) using a combined simulation and optimization model based on a mixed integer linear programming (MILP) approach. The proposed multi-period model is tested on Iskandar Malaysia (IM) as a case study for a period of 12 years (2011-2025) to illustrate the economic potential and trade-offs involved. Three scenarios are used to demonstrate the applicability of the model: (1) an incineration scenario, (2) a landfill scenario and (3) an optimal scenario. The model revealed that the minimum cost of electricity generation from 9,995,855 tonnes of MSW is estimated at USD 387 million, with a total electricity generation of 50 MW/yr in the optimal scenario.
Keywords: Mixed Integer Linear Programming (MILP), optimization, solid waste management (SWM), waste-to-energy (WTE).
8655 Nonlinear Model Predictive Control for Solid Oxide Fuel Cell System Based On Wiener Model
Authors: T. H. Lee, J. H. Park, S. M. Lee, S. C. Lee
Abstract:
In this paper, we consider a Wiener nonlinear model for a solid oxide fuel cell (SOFC). The Wiener model of the SOFC consists of a linear dynamic block followed by a static output nonlinearity, in which the linear part is approximated by a state-space model and the nonlinear part is identified in a polynomial form. To control the SOFC system, various aspects have to be considered, such as operating conditions, other constraint conditions and changes in load current. A change in load current is the most significant of these for good performance of the SOFC system. In order to keep the stack terminal voltage constant under changing load current, a nonlinear model predictive control (MPC) is proposed in this paper. After a primary control method is designed to keep the fuel utilization at a proper constant value, a nonlinear model predictive control based on the Wiener model is developed to control the stack terminal voltage of the SOFC system. Simulation results verify the suitability of the proposed Wiener model and MPC method for the control of the SOFC system.
Keywords: SOFC, model predictive control, Wiener model.
8654 Analysis of Attention to the Confucius Institute from Domestic and Foreign Mainstream Media
Authors: Wei Yang, Xiaohui Cui, Weiping Zhu, Liqun Liu
Abstract:
The rapid development of the Confucius Institute is attracting more and more attention from mainstream media around the world, and mainstream media play a large role in public information dissemination and public opinion. This study analyzes the correlation and functional relationships between domestic and foreign mainstream media by analyzing the number of reports on the Confucius Institute. Three correlation measures, the Pearson correlation coefficient (PCC), the Spearman correlation coefficient (SCC), and the Kendall rank correlation coefficient (KCC), were applied to analyze the correlations among mainstream media from three regions: the mainland of China; Hong Kong and Macao (the two special administrative regions of China, denoted as SARs); and overseas countries excluding China, such as the United States, England, and Canada. Further, the paper measures the functional relationships among the regions using a regression model. The experimental analyses found high correlations among mainstream media from the different regions. Additionally, we found a linear relationship between the mainstream media of overseas countries and those of the SARs by analyzing the number of reports on the Confucius Institute, based on a data set obtained by crawling the websites of 106 mainstream media outlets during the years 2004 to 2014.
Keywords: Confucius Institute, correlation analysis, mainstream media, regression model.
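The three correlation coefficients and the linear relationship can be computed directly with SciPy and NumPy, as the sketch below shows for two report-count series. The series here are made up, not the crawled 2004-2014 data.

```python
# Pearson, Spearman and Kendall correlations plus a simple linear regression
# between two report-count series. The counts are made-up placeholders.
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau, linregress

rng = np.random.default_rng(4)
overseas = rng.poisson(lam=30, size=48)                 # reports per month
sars = np.round(0.6 * overseas + rng.normal(0, 3, 48))  # loosely linearly related

print("Pearson :", pearsonr(overseas, sars)[0])
print("Spearman:", spearmanr(overseas, sars)[0])
print("Kendall :", kendalltau(overseas, sars)[0])

reg = linregress(overseas, sars)    # functional (linear) relationship
print("sars ~ %.2f * overseas + %.2f, R^2 = %.2f"
      % (reg.slope, reg.intercept, reg.rvalue ** 2))
```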
8653 A Model for Test Case Selection in the Software-Development Life Cycle
Authors: Adtha Lawanna
Abstract:
Software maintenance is one of the essential processes of the software-development life cycle. The main purposes of maintaining software are the correction of errors, the revision of code, the prevention of future errors, and improvements in performance and capacity. While adjustments are being made, the software has to be retested to increase the level of assurance that it will behave according to the requirements. Accordingly, test cases must be selected for exercising the revised modules and the whole software. Approaches to this problem include regression test selection techniques such as retest-all selection, random/ad-hoc selection and safe regression test selection. In particular, the traditional techniques rely on a mapping between the test cases in a test suite and the lines of code they execute. However, the lines of code are not the only requirement that can affect the size of the test suite; the number of functions and of faulty versions matter as well. Therefore, a model for test case selection is developed to cover these three requirements through an integral technique, which can produce a smaller set of test cases compared with the traditional regression selection techniques.
Keywords: Software maintenance, regression test selection, test case.
8651 Mathematical Modeling to Predict Surface Roughness in CNC Milling
Authors: Ab. Rashid M.F.F., Gan S.Y., Muhammad N.Y.
Abstract:
Surface roughness (Ra) is one of the most important requirements in a machining process. In order to obtain better surface roughness, the proper setting of the cutting parameters is crucial before the process takes place. This research presents the development of a mathematical model for surface roughness prediction before the milling process, in order to evaluate the fitness of the machining parameters: spindle speed, feed rate and depth of cut. 84 samples were run in this study on a FANUC CNC milling machine (α-T14iE). These samples were randomly divided into two data sets: the training set (m = 60) and the testing set (m = 24). ANOVA analysis showed that at least one of the population regression coefficients was not zero. The multiple regression method was used to determine the correlation between the criterion variable and a combination of predictor variables. It was established that the surface roughness is most influenced by the feed rate. Using the multiple regression equation, the average percentage deviation was 9.8% for the testing set and 9.7% for the training set. This shows that the statistical model can predict the surface roughness with about 90.2% accuracy for the testing data set and 90.3% accuracy for the training data set.
Keywords: Surface roughness, regression analysis.
8650 Improving the Analytical Power of Dynamic DEA Models, by the Consideration of the Shape of the Distribution of Inputs/Outputs Data: A Linear Piecewise Decomposition Approach
Authors: Elias K. Maragos, Petros E. Maravelakis
Abstract:
In Dynamic Data Envelopment Analysis (DDEA), which is a subfield of Data Envelopment Analysis (DEA), the productivity of Decision Making Units (DMUs) is considered in relation to time. In this case, as accepted by most researchers, there are outputs produced by a DMU that are used as inputs at a future time; those outputs are known as intermediates. The common DDEA models do not take into account the shape of the distribution of these input, output or intermediate data, assuming that the distribution of their virtual values does not deviate from linearity. This weakness limits the accuracy and the analytical power of the traditional DDEA models. In this paper, the authors, using the concept of piecewise linear inputs and outputs, propose an extended DDEA model. The proposed model increases the flexibility of the traditional DDEA models and improves the measurement of the dynamic performance of DMUs.
Keywords: Data envelopment analysis, Dynamic DEA, Piecewise linear inputs, Piecewise linear outputs.
8649 Functional Decomposition Based Effort Estimation Model for Software-Intensive Systems
Authors: Nermin Sökmen
Abstract:
An effort estimation model is needed for software-intensive projects that consist of hardware, embedded software or some combination of the two, as well as high-level software solutions. This paper first focuses on functional decomposition techniques to measure the functional complexity of a computer system and investigates its impact on system development effort. It then examines the effects of technical difficulty and design team capability factors in order to construct the best effort estimation model. Using the traditional regression analysis technique, the study develops a system development effort estimation model that takes functional complexity, technical difficulty and design team capability factors as input parameters. Finally, the assumptions of the model are tested.
Keywords: Functional complexity, functional decomposition, development effort, technical difficulty, design team capability, regression analysis.
8648 Application of Company Financial Crisis Early Warning Model- Use of “Financial Reference Database”
Authors: Chiung-ying Lee, Chia-hua Chang
Abstract:
On July 1, 2007, the Taiwan Stock Exchange (TWSE) added a new "Financial reference database" to its Market Observation Post System (MOPS) for investors to use as an investment reference. This database publishes the financial information of publicly offered listed companies as a warning signal and originally contained eight indicators. In this paper, the indicators provided by this database are applied in a company financial crisis early warning model to verify whether forecasts of financial crises based on these indicators achieve a high accuracy rate, in line with the positive results reported by domestic and foreign scholars. A logistic regression model is used to build the financial early warning model, with a first model that excludes the additional conditions and a second model that includes them. Companies that experienced a financial crisis were taken as research samples, and sample data from the periods T-1 and T-2 before the financial crisis point were used for the empirical analysis. The results show that, among the variables provided by this database, the debt ratio and net value per share are the best forecast variables.
Keywords: Financial reference database, financial early warning model, logistic regression.
8647 Analytical Subthreshold Drain Current Model Incorporating Inversion Layer Effective Mobility Model for Pocket Implanted Nano Scale n-MOSFET
Authors: Muhibul Haque Bhuyan, Quazi D. M. Khosru
Abstract:
Carrier scattering in the inversion channel of a MOSFET dominates the carrier mobility and hence the drain current. This paper presents an analytical model of the subthreshold drain current incorporating an effective electron mobility model for the pocket implanted nano scale n-MOSFET. The model is developed by assuming two linear pocket profiles at the source and drain edges at the surface and by using the conventional drift-diffusion equation. The effective electron mobility model includes three scattering mechanisms, namely Coulomb, phonon and surface roughness scattering, as well as ballistic phenomena in the pocket implanted n-MOSFET. The model is simulated for various pocket profile and device parameters as well as for various bias conditions. Simulation results show that the subthreshold drain current data match the experimental data already published in the literature.
Keywords: Linear Pocket Profile, Pocket Implanted n-MOSFET, Subthreshold Drain Current and Effective Mobility Model.
8646 Statistics of Exon Lengths in Animals, Plants, Fungi, and Protists
Authors: Alexander Kaplunovsky, Vladimir Khailenko, Alexander Bolshoy, Shara Atambayeva, Anatoliy Ivashchenko
Abstract:
Eukaryotic protein-coding genes are interrupted by spliceosomal introns, which are removed from the RNA transcripts before translation into a protein. The exon-intron structures of different eukaryotic species are quite different from each other, and the evolution of such structures raises many questions. We try to address some of these questions using statistical analysis of whole genomes. We go through all the protein-coding genes in a genome and study correlations between the net length of all the exons in a gene, the number of exons, and the average length of an exon. We also take average values of these features for each chromosome and study correlations between those averages at the chromosomal level. Our data show universal features of exon-intron structures common to animals, plants, and protists (specifically, Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Cryptococcus neoformans, Homo sapiens, Mus musculus, Oryza sativa, and Plasmodium falciparum). We have verified a linear correlation between the number of exons in a gene and the length of the protein encoded by the gene: the protein length increases in proportion to the number of exons. On the other hand, the average length of an exon always decreases with the number of exons. Finally, chromosome clustering based on average chromosome properties and on the parameters of the linear regression between the number of exons in a gene and the net length of those exons demonstrates that these average chromosome properties are genome-specific features.
Keywords: Comparative genomics, exon-intron structure, eukaryotic clustering, linear regression.