Search results for: the closed form method of variance estimation
10130 Confidence Intervals for the Coefficients of Variation with Bounded Parameters
Authors: Jeerapa Sappakitkamjorn, Sa-aat Niwitpong
Abstract:
In many practical applications in areas such as engineering, science, and social science, it is known that there exist bounds on the values of unknown parameters. Examples include measurements for controlling machines in an industrial process, the weight or height of subjects, the blood pressure of patients, and the retirement ages of public servants. When interval estimation is considered in a situation where the parameter to be estimated is bounded, it has been argued that the classical Neyman procedure for setting confidence intervals is unsatisfactory, because the information regarding the restriction is simply ignored. It is therefore of significant interest to construct confidence intervals that incorporate the additional information that the parameter values are bounded, so as to enhance the accuracy of the interval estimation. In this paper, we propose a new confidence interval for the coefficient of variation where the population mean and standard deviation are bounded. The proposed interval is evaluated in terms of coverage probability and expected length via Monte Carlo simulation.
Keywords: Bounded parameters, coefficient of variation, confidence interval, Monte Carlo simulation.
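As a sketch of how such a simulation study proceeds, the following Python fragment estimates the coverage probability and expected length of a confidence interval for the coefficient of variation under normal sampling. The interval here is the standard unrestricted normal-theory one, built from a textbook approximation to the standard error of the sample CV; the bounded-parameter interval proposed in the paper is not specified in the abstract, and all constants are illustrative.

```python
import numpy as np
from scipy import stats

def cv_ci(x, alpha=0.05):
    """Approximate normal-theory CI for the coefficient of variation.

    Uses a textbook asymptotic standard error of the sample CV for
    normal data; this is a stand-in baseline, not the paper's
    bounded-parameter interval."""
    n = len(x)
    cv = x.std(ddof=1) / x.mean()
    se = cv * np.sqrt(1.0 / (2 * (n - 1)) + cv**2 / n)
    z = stats.norm.ppf(1 - alpha / 2)
    return cv - z * se, cv + z * se

def coverage_and_length(mu=10.0, sigma=2.0, n=30, reps=20_000, seed=1):
    """Monte Carlo estimate of coverage probability and expected length."""
    rng = np.random.default_rng(seed)
    true_cv = sigma / mu
    hits, lengths = 0, 0.0
    for _ in range(reps):
        x = rng.normal(mu, sigma, n)
        lo, hi = cv_ci(x)
        hits += lo <= true_cv <= hi
        lengths += hi - lo
    return hits / reps, lengths / reps

print(coverage_and_length())  # coverage near the nominal 0.95, mean length
```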
10129 Estimation of Time Loss and Costs of Traffic Congestion: The Contingent Valuation Method
Authors: Amira Mabrouk, Chokri Abdennadher
Abstract:
Reducing the road congestion inherent in vehicle use is an obvious priority for public authorities. Assessing an individual's willingness to pay to save trip time therefore amounts to estimating the price change that would result from a new transport policy intended to increase network fluidity and improve social welfare. This study takes an innovative perspective: it initiates an economic calculation aimed at estimating the monetized value of time for trips made in Sfax. The research pursues three objectives: i) to estimate the monetized value of an hour devoted to trips, ii) to determine whether or not the consumer considers the environmental variables to be significant, and iii) to analyze the impact of public congestion management through a city toll imposed on urban dwellers. The article builds on a rich field survey conducted in the city of Sfax. Using the contingent valuation method, we analyze the declared time preferences of 450 drivers during rush hours. Taking due account of the biases attributed to the method, we highlight the sensitivity of this approach to the elicitation mode and questioning techniques, following the NOAA panel recommendations, with the exception of the valuation point, and compare it with similar studies on the estimation of transport externalities.
Keywords: Willingness to pay, value of time, contingent valuation, time value, city toll, transport.
10128 Adaptive WiFi Fingerprinting for Location Approximation
Authors: Mohd Fikri Azli bin Abdullah, Khairul Anwar bin Kamarul Hatta, Esther Jeganathan
Abstract:
WiFi has become an essential and widely used technology, popular for its convenience with mobile devices and relied on by Internet users worldwide. Many of today's location-based services use Wireless Fidelity (WiFi) signal fingerprinting; Foursquare is a common example that is gaining popularity. In this work, the WiFi signal is used to estimate the user's (client's) location. As with GPS, the fingerprinting method needs a floor plan to increase the accuracy of location estimation. Still, the inconsistency of the WiFi signal makes the estimate differ across time intervals, so an adaptive method is needed to obtain the most accurate signal at all times. WiFi signals are heavily distorted by external factors such as physical objects, radio frequency interference, electrical interference, and environmental conditions, to name a few. To cope with these factors, this work reduces the signal noise and estimates location using the Nearest Neighbour method based on past signal activity, raising the signal accuracy to more than 80%. The repository further increases accuracy through Artificial Neural Network (ANN) pattern matching; it acts as the server and supports the decisions of the client-side application. Numerous previous works have adopted methods of collecting signal strengths in a repository over the years, but these were mostly static. In this work, we highlight proposed solutions for adaptively matching the received signal to the data in the repository, which allows location estimation to be done more accurately. Adaptive updating stores the latest location fingerprint in the repository; redundant location fingerprints are removed so that only the updated version of each fingerprint is kept. How the user's location is predicted is detailed in the proposed solution section. A study of previous works found the Artificial Neural Network to be the most feasible method for updating the repository and making it adaptive; its function is to pattern-match the WiFi signal against the existing data in the repository.
Keywords: Adaptive Repository, Artificial Neural Network, Location Estimation, Nearest Neighbour Euclidean Distance, WiFi RSSI Fingerprinting.
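A minimal sketch of the nearest-neighbour matching step, assuming a hypothetical fingerprint database of mean RSSI vectors with surveyed coordinates; the calibration, noise reduction, adaptive repository updates, and ANN pattern matching described above are not reproduced:

```python
import numpy as np

# Hypothetical fingerprint database: rows are reference points, columns
# are mean RSSI values (dBm) from a fixed set of access points; db_xy
# holds the surveyed (x, y) position of each reference point.
db_rssi = np.array([[-40, -62, -75],
                    [-55, -48, -70],
                    [-68, -58, -51]], dtype=float)
db_xy = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 8.0]])

def estimate_position(sample_rssi, k=2):
    """k-nearest-neighbour estimate in RSSI space (Euclidean distance),
    averaging the coordinates of the k closest fingerprints."""
    d = np.linalg.norm(db_rssi - sample_rssi, axis=1)
    nearest = np.argsort(d)[:k]
    return db_xy[nearest].mean(axis=0)

print(estimate_position(np.array([-50.0, -52.0, -72.0])))
```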
10127 Enhancing the Performance of H.264/AVC in Adaptive Group of Pictures Mode Using Octagon and Square Search Pattern
Authors: S. Sowmyayani, P. Arockia Jansi Rani
Abstract:
This paper integrates the Octagon and Square Search pattern (OCTSS) motion estimation algorithm into the H.264/AVC (Advanced Video Coding) video codec in Adaptive Group of Pictures (AGOP) mode. The AGOP structure is computed from scene changes in the video sequence, and the octagon-and-square-search block-based motion estimation method is implemented in the inter-prediction process of H.264/AVC. These two methods reduce bit rate and computational complexity, respectively, while maintaining the quality of the video sequence. Experiments are conducted on different types of video sequences. The results show that the bit rate, computation time, and PSNR gain achieved by the proposed method are better than those of the existing H.264/AVC with fixed GOP and AGOP. With a marginal quality gain of 0.28 dB and an average bit-rate gain of 132.87 kbps, the proposed method reduces the average computation time by 27.31 minutes compared to the existing state-of-the-art H.264/AVC video codec.
Keywords: Block distortion measure, block matching algorithms, H.264/AVC, motion estimation, search patterns, shot cut detection.
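The sketch below illustrates the square-pattern half of such a search on a synthetic smooth frame, using a sum-of-absolute-differences (SAD) cost; it is a generic pattern-based block matcher, not the paper's OCTSS implementation inside the H.264/AVC inter-prediction loop:

```python
import numpy as np

def sad(block, cand):
    # Sum of absolute differences between two equal-sized blocks.
    return np.abs(block.astype(int) - cand.astype(int)).sum()

def square_search(ref, cur, bx, by, bs=8, step=4):
    """Square-pattern search: evaluate the centre and the eight points
    of a square of radius `step`, recentre on the best point, halve the
    step, and stop when the step falls below 1 (illustrative only)."""
    block = cur[by:by + bs, bx:bx + bs]
    cx, cy = bx, by
    while step >= 1:
        best = (sad(block, ref[cy:cy + bs, cx:cx + bs]), cx, cy)
        for dx in (-step, 0, step):
            for dy in (-step, 0, step):
                x, y = cx + dx, cy + dy
                if 0 <= x <= ref.shape[1] - bs and 0 <= y <= ref.shape[0] - bs:
                    best = min(best, (sad(block, ref[y:y + bs, x:x + bs]), x, y))
        _, cx, cy = best
        step //= 2
    return cx - bx, cy - by  # motion vector

# Smooth synthetic frame so the SAD surface is well behaved.
xx, yy = np.meshgrid(np.arange(64), np.arange(64))
ref = (128 + 60 * np.sin(xx / 6.0) + 60 * np.cos(yy / 7.0)).astype(np.uint8)
cur = np.roll(ref, (2, 3), axis=(0, 1))          # synthetic (2, 3) shift
print(square_search(ref, cur, 24, 24))            # expected near (-3, -2)
```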
10126 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models
Authors: I. V. Pinto, M. R. Sooriyarachchi
Abstract:
Data arising in many settings frequently have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling such data. When multilevel modelling is combined with a binary response, the estimation methods become complex, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset, so checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, it is equally important to confirm beforehand that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary-response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes, and intra-cluster correlations. The test maintained the desired Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. The power of the test was adequate for most of the combinations under all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset, and the performance of the test was compared for each model.
Keywords: Goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, type-I error, penalized quasi-likelihood, power, quasi-likelihood.
10125 Multiscale Blind Image Restoration with a New Method
Authors: Alireza Mallahzadeh, Hamid Dehghani, Iman Elyasi
Abstract:
A new method, based on normal shrink and a modified version of Katsaggelos and Lay, is proposed for multiscale blind image restoration. The method deals with both the noise and the blur in the images. It is shown that normal shrink gives the highest S/N (signal-to-noise ratio) in the image denoising process. The multiscale blind image restoration is divided into two parts: the first part of this paper proposes normal shrink for image denoising, and the second part proposes a modified version of Katsaggelos and Lay for blur estimation; the two are then combined to achieve multiscale blind image restoration.
Keywords: Multiscale blind image restoration, image denoising, blur estimation.
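A hedged sketch of NormalShrink-style denoising with soft thresholding, as commonly published (per-subband threshold T = beta * sigma_n^2 / sigma_y, with the noise level estimated from the finest diagonal subband); the exact constants and the blur-estimation half of the paper's method are not reproduced:

```python
import numpy as np
import pywt

def normal_shrink_denoise(img, wavelet="db4", levels=3):
    """Per-subband soft thresholding in the NormalShrink style;
    constants follow the commonly published formula and should be
    treated as assumptions, not the paper's exact tuning."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise from HH1
    out = [coeffs[0]]
    for (cH, cV, cD) in coeffs[1:]:
        bands = []
        for c in (cH, cV, cD):
            beta = np.sqrt(np.log(c.size / levels))
            sigma_y = max(c.std(), 1e-12)                  # subband std
            T = beta * sigma_n**2 / sigma_y
            bands.append(pywt.threshold(c, T, mode="soft"))
        out.append(tuple(bands))
    return pywt.waverec2(out, wavelet)

rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, 3, 128)), np.cos(np.linspace(0, 3, 128)))
noisy = clean + rng.normal(0, 0.1, clean.shape)
print(np.std(normal_shrink_denoise(noisy) - clean))  # residual vs clean
```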
10124 Technical Aspects of Closing the Loop in Depth-of-Anesthesia Control
Authors: Gorazd Karer
Abstract:
When performing a diagnostic procedure or surgery under general anesthesia (GA), the proper introduction and dosing of anesthetic agents is one of the main tasks of the anesthesiologist. That being said, depth of anesthesia (DoA) also seems to be a suitable process for closed-loop control implementation. To implement such a system, one must be able to acquire the relevant signals online and in real time, as well as stream the calculated control signal to the infusion pump. However, during a procedure, patient monitors and infusion pumps are purposely unable to connect to an external (possibly medically unapproved) device for safety reasons, thus preventing closed-loop control. This paper proposes a conceptual solution to this problem. First, it presents some important aspects of contemporary clinical practice. Next, it introduces the closed-loop control system structure and the relevant information flow. Focusing on transferring the data from the patient to the computer, it presents a non-invasive image-based system for signal acquisition from a patient monitor for online depth-of-anesthesia assessment. Furthermore, it introduces a User-Datagram-Protocol-based (UDP-based) communication method that can be used for transmitting the calculated anesthetic inflow to the infusion pump. The proposed system is independent of the medical-device manufacturer and is implemented in MATLAB-Simulink, which can conveniently be used for DoA control implementation. The proposed scheme has been tested in a simulated GA setting and is ready to be evaluated in an operating theatre. However, it is only a step towards a proper closed-loop control system for DoA that could routinely be used in clinical practice.
Keywords: Closed-loop control, depth of anesthesia (DoA), optical signal acquisition, Patient State Index (PSI), UDP communication protocol.
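A minimal sketch of the UDP leg of the information flow: the control computer streams the calculated anesthetic inflow to a pump-side listener. The address, port, and packet layout below are assumptions for illustration; the paper does not specify them in the abstract.

```python
import socket
import struct
import time

# Hypothetical pump-side endpoint; in practice this would be the bridge
# that relays rate commands to the infusion pump.
PUMP_ADDR = ("192.168.0.50", 5005)

def send_infusion_rate(rate_ml_per_h: float) -> None:
    """Send one rate update as a timestamped binary datagram
    (two big-endian doubles: UNIX time, rate in ml/h)."""
    packet = struct.pack("!dd", time.time(), rate_ml_per_h)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(packet, PUMP_ADDR)

send_infusion_rate(12.5)  # e.g. inflow computed by the DoA controller
```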
10123 Correlation to Predict Thermal Performance According to Working Fluids of Vertical Closed-Loop Pulsating Heat Pipe
Authors: Niti Kammuang-lue, Kritsada On-ai, Phrut Sakulchangsatjatai, Pradit Terdtoon
Abstract:
The objectives of this paper are to investigate the effects of dimensionless numbers on the thermal performance of the vertical closed-loop pulsating heat pipe (VCLPHP) and to establish a correlation to predict its thermal performance. The CLPHPs were made of long copper capillary tubes with inner diameters of 1.50, 1.78, and 2.16 mm, bent into 26 turns, with both ends connected together to form a loop. The evaporator, adiabatic, and condenser section lengths were 50 and 150 mm. R123, R141b, acetone, ethanol, and water were chosen as the working fluids, with a constant filling ratio of 50% by total volume. The inlet temperature of the heating medium and the adiabatic section temperature were held constant at 80 °C and 50 °C, respectively. Thermal performance is represented in terms of the Kutateladze number (Ku). It can be concluded that when the Prandtl number of the liquid working fluid (Prl) and the Karman number (Ka) increase, thermal performance increases; on the contrary, when the Bond number (Bo), Jakob number (Ja), and aspect ratio (Le/Di) increase, thermal performance decreases. Moreover, a correlation for more precise prediction of thermal performance has been successfully established by analyzing all the dimensionless numbers that affect the thermal performance of the VCLPHP.
Keywords: Vertical closed-loop pulsating heat pipe, working fluid, thermal performance, dimensionless parameter.
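One common way to build such a correlation is to posit a power-law form, Ku = C * Pr^a * Ka^b * Bo^c * Ja^d * (Le/Di)^e, and fit the exponents by linear least squares in log space. The sketch below does this on synthetic data; the functional form is an assumption, as the paper's correlation is not given explicitly in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
# Columns: Prl, Ka, Bo, Ja, Le/Di (synthetic stand-ins for measured groups)
X = rng.uniform(0.5, 5.0, (n, 5))
true = np.array([0.4, 0.3, -0.2, -0.1, -0.5])
ku = 0.01 * np.prod(X**true, axis=1) * rng.lognormal(0, 0.05, n)

# ln Ku = ln C + a ln Prl + b ln Ka + c ln Bo + d ln Ja + e ln(Le/Di)
A = np.column_stack([np.ones(n), np.log(X)])
coef, *_ = np.linalg.lstsq(A, np.log(ku), rcond=None)
print("C =", np.exp(coef[0]), "exponents =", coef[1:])
```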
10122 Image Features Comparison-Based Position Estimation Method Using a Camera Sensor
Authors: Jinseon Song, Yongwan Park
Abstract:
In this paper, we propose a method that estimates a user's position from a single camera, based on a pre-built image database. Previous positioning systems calculate distance from the arrival time of signals, as in GPS (Global Positioning System) and RF (Radio Frequency) systems. However, these methods suffer from large error ranges due to signal interference. A camera sensor offers a solution, but a single camera makes it difficult to obtain relative position data, while a stereo camera struggles to provide real-time positions because of the large volume of image data. First, we build an image database, using a single camera, of the space in which the positioning service is to be provided. Next, we judge similarity by matching the image transmitted by the user against the database images. Finally, we determine the user's position from the position of the most similar database image. To verify the proposed method, we experimented in real indoor and outdoor environments. The proposed method covers a wide positioning range and can determine not only the user's position but also the direction.
Keywords: Positioning, distance, camera, features, SURF (Speeded-Up Robust Features), database, estimation.
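A sketch of the database-retrieval step using feature matching in OpenCV. The paper uses SURF, which sits in OpenCV's non-free contrib module, so ORB serves here as a freely available stand-in with the same workflow; the database structure is a hypothetical list of (image, position) pairs built during the survey phase.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def descriptors(img):
    # Detect keypoints and compute binary descriptors for one image.
    _, des = orb.detectAndCompute(img, None)
    return des

def best_database_match(query_img, database):
    """Return the index of the database image most similar to the query.

    `database` is a list of (image, position) pairs; the position of the
    best match serves as the user's position estimate."""
    q = descriptors(query_img)
    scores = []
    for img, _pos in database:
        d = descriptors(img)
        if q is None or d is None:
            scores.append(0)
            continue
        scores.append(len(bf.match(q, d)))   # more matches = more similar
    return int(np.argmax(scores))
```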
10121 Weakly Generalized Closed Map
Authors: R. Parimelazhagan, N. Nagaveni
Abstract:
In this paper, we introduce a new class of mg-continuous mappings and study some of their basic properties. We obtain some characterizations of such functions. Moreover, we define the sub-minimal structure and further study certain properties of mg-closed sets.
Keywords: M-structure, mg-continuous mapping, minimal structure, mg-T2 space, sub-minimal structure, T1/2 space, mg-compact set.
10120 Estimation of Synchronous Machine Synchronizing and Damping Torque Coefficients
Authors: Khaled M. EL-Naggar
Abstract:
Synchronizing and damping torque coefficients of a synchronous machine give a quite clear picture of machine behavior during transients, and they are used as a measure of power system transient stability. In this paper, a crow search optimization algorithm is presented and implemented to study power system stability during transients. The algorithm uses the machine responses to perform the stability study in the time domain. The problem is formulated as a dynamic estimation problem, with an objective function that minimizes the squared error in the estimated coefficients. The method is tested on a practical system with different study cases. Results are reported and thoroughly discussed. The study illustrates that the proposed method can estimate the stability coefficients in critically stable cases where other methods may fail, and the tests prove that it is an accurate and reliable tool for estimating machine coefficients for the assessment of power system stability.
Keywords: Optimization, estimation, synchronous machine, crow search.
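The underlying estimation problem can be sketched as follows: the electrical torque deviation is modelled as dTe = Ks*d_delta + Kd*d_omega, and the coefficients are chosen to minimize the squared error over the recorded response. The snippet uses ordinary least squares on synthetic data for brevity; the paper minimizes the same kind of squared-error objective with crow search.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 400)
d_delta = 0.1 * np.exp(-0.8 * t) * np.sin(2 * np.pi * 1.2 * t)  # rotor angle dev.
d_omega = np.gradient(d_delta, t)                               # speed deviation
Ks_true, Kd_true = 1.4, 0.12
d_te = Ks_true * d_delta + Kd_true * d_omega + rng.normal(0, 1e-3, t.size)

# Least-squares fit of dTe = Ks*d_delta + Kd*d_omega
A = np.column_stack([d_delta, d_omega])
(Ks, Kd), *_ = np.linalg.lstsq(A, d_te, rcond=None)
print(f"Ks = {Ks:.3f}, Kd = {Kd:.3f}")  # should recover ~1.4 and ~0.12
```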
10119 Analysis of Aiming Performance for Games Using Mapping Method of Corneal Reflections Based on Two Different Light Sources
Authors: Yoshikazu Onuki, Itsuo Kumazawa
Abstract:
The fundamental motivation of this paper is how gaze estimation can be utilized effectively in games. In games, precise point-of-gaze (POG) estimation is not always essential for aiming at targets; the ability to move a cursor to an aiming target accurately is just as significant. Moreover, from a game-production point of view, expressing head movement and gaze movement separately can be advantageous for conveying a sense of presence; panning a background image with head movement while moving a cursor with gaze movement is a representative example. The widely used technique for POG estimation is based on the relative position between the center of the corneal reflection of infrared light sources and the center of the pupil. However, calculating the center of the pupil requires relatively complicated image processing, so the resulting delay is a concern, since minimizing input delay is one of the most important requirements in games. In this paper, a method to estimate head movement using only the corneal reflections of two infrared light sources in different locations is proposed, together with a method to control a cursor using both gaze movement and head movement. The proposed methods are evaluated with game-like applications; as a result, performance similar to conventional methods is confirmed, and aiming control with lower computational cost and stress-free, intuitive operation is obtained.
Keywords: Point-of-gaze, gaze estimation, head movement, corneal reflections, two infrared light sources, game.
10118 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model
Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You
Abstract:
The separation of speech signals has become a research hotspot in signal processing in recent years, with many applications in teleconferencing, hearing aids, machine speech recognition, and so on. The sounds received are usually noisy, and identifying the sounds of interest and recovering clear sounds in such an environment is a problem worth exploring, namely blind source separation. This paper focuses on under-determined blind source separation (UBSS), for which sparse component analysis is generally used. The approach has two parts: first, a clustering algorithm estimates the mixing matrix from the observed signals; then the signals are separated using the known mixing matrix. This paper studies the mixing matrix estimation problem and proposes an improved algorithm for speech signals in the UBSS model. The traditional potential-function algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response, this paper develops an improved potential-function method that not only avoids the influence of insufficient prior information in traditional clustering algorithms, but also improves the estimation accuracy of the mixing matrix. The mixing of four speech signals into two channels is taken as an example. Simulation results show that the approach not only improves the accuracy of estimation, but also applies to any mixing matrix.
Keywords: Clustering algorithm, potential function, speech signal, the UBSS model.
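A sketch of the standard sparse-component idea the paper builds on: with two mixtures of sparse sources, the observation vectors cluster along the directions of the mixing-matrix columns, so clustering their angles recovers the matrix. A crude histogram-peak clustering stands in below for the paper's improved potential-function method; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_src = 10_000, 4
# Sparse sources: Laplacian amplitudes, active only ~10% of the time.
S = rng.laplace(0, 1, (n_src, n)) * (rng.random((n_src, n)) < 0.1)
A_true = np.array([[np.cos(a) for a in (0.3, 0.9, 1.5, 2.1)],
                   [np.sin(a) for a in (0.3, 0.9, 1.5, 2.1)]])
X = A_true @ S                                   # two observed mixtures

keep = np.linalg.norm(X, axis=0) > 0.5           # drop near-silent frames
theta = np.mod(np.arctan2(X[1, keep], X[0, keep]), np.pi)
hist, edges = np.histogram(theta, bins=180)
peaks = np.argsort(hist)[-n_src:]                # crude peak picking
angles = np.sort((edges[peaks] + edges[peaks + 1]) / 2)
A_est = np.vstack([np.cos(angles), np.sin(angles)])
print(np.round(A_est, 2))  # columns should be close to those of A_true
```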
10117 A Mean–Variance–Skewness Portfolio Optimization Model
Authors: Kostas Metaxiotis
Abstract:
Portfolio optimization is one of the most important topics in finance. This paper proposes a mean–variance–skewness (MVS) portfolio optimization model. Traditionally, the portfolio optimization problem is solved within the mean–variance (MV) framework. In this study, we formulate the proposed model as a three-objective optimization problem in which the portfolio's expected return and skewness are maximized while the portfolio risk is minimized. To solve the proposed three-objective portfolio optimization model, we apply an adapted version of the non-dominated sorting genetic algorithm (NSGA-II). Finally, we use a real dataset from the FTSE-100 to validate the proposed model.
Keywords: Evolutionary algorithms, portfolio optimization, skewness, stock selection.
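A minimal sketch of the three objectives the adapted NSGA-II trades off, computed from a return history R (rows are periods, columns are assets) and a weight vector w; the data are synthetic placeholders, and the evolutionary search itself is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(0.001, 0.02, (500, 10))        # hypothetical daily returns

def mvs_objectives(w, R):
    """Return (mean, variance, skewness) of the portfolio return series:
    mean and skewness are maximized, variance is minimized."""
    r = R @ w
    mu = r.mean()
    var = r.var(ddof=1)
    skew = ((r - mu) ** 3).mean() / r.std(ddof=0) ** 3
    return mu, var, skew

w = np.full(10, 0.1)                           # equally weighted example
print(mvs_objectives(w, R))
```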
10116 Feature Vector Fusion for Image Based Human Age Estimation
Authors: D. Karthikeyan, G. Balakrishnan
Abstract:
Human faces, as important visual signals, convey a significant amount of nonverbal information in human-to-human communication, and age is among the most significant of these attributes. Automated human age estimation from facial image analysis has numerous potential real-world applications. In this paper, an automated age estimation framework is presented. A Support Vector Regression (SVR) strategy is utilized for age prediction. The paper describes feature extraction based on the Gray Level Co-occurrence Matrix (GLCM), which can be utilized in a robust face recognition framework: the GLCM operation is applied to extract texture features from the face image, and Active Appearance Models (AAMs) are used to assess the human age from the image. A fused-feature technique and SVR with GA optimization are proposed to lessen the error in age estimation.
Keywords: Support vector regression, feature extraction, gray level co-occurrence matrix, active appearance models.
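A sketch of the GLCM-feature-plus-SVR pipeline, with random arrays standing in for cropped face images and age labels; the AAM features, feature fusion, and GA tuning of the SVR are not reproduced:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVR

def glcm_features(img):
    """Texture features from a grey-level co-occurrence matrix at two
    angles and unit distance."""
    g = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                     levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(g, p).ravel() for p in props])

rng = np.random.default_rng(0)
faces = rng.integers(0, 256, (40, 64, 64), dtype=np.uint8)  # stand-in faces
ages = rng.uniform(5, 80, 40)                               # stand-in labels

X = np.array([glcm_features(f) for f in faces])
model = SVR(kernel="rbf", C=10.0).fit(X, ages)
print(model.predict(X[:3]).round(1))
```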
10115 Distributed Frequency Synchronization for Global Synchronization in Wireless Mesh Networks
Authors: Jung-Hyun Kim, Jihyung Kim, Kwangjae Lim, Dong Seung Kwon
Abstract:
In this paper, our focus is to assure global frequency synchronization in OFDMA-based wireless mesh networks using only local information. To acquire global synchronization in a distributed manner, we propose a novel distributed frequency synchronization (DFS) method. In DFS, the carrier frequencies of distributed nodes converge to a common value through repeated estimation, averaging, and sharing steps. Experimental results show that DFS achieves a noticeably better synchronization success probability than existing schemes in OFDMA-based mesh networks where estimation error is present.
Keywords: OFDMA systems, Frequency synchronization, Distributed networks, Multiple groups.
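A sketch of the averaging idea behind DFS: each node repeatedly replaces its carrier-frequency offset with the mean of its own and its neighbours' estimates, so all offsets converge to a common value. The ring topology and the noisy estimation step below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes = 8
offsets = rng.uniform(-500, 500, n_nodes)            # initial CFOs in Hz

for _ in range(30):                                  # estimate/average/share
    estimates = offsets + rng.normal(0, 5, n_nodes)  # noisy local estimation
    new = np.empty(n_nodes)
    for i in range(n_nodes):
        neigh = [(i - 1) % n_nodes, i, (i + 1) % n_nodes]
        new[i] = estimates[neigh].mean()             # average with ring neighbours
    offsets = new

print(offsets.round(1))  # all nodes close to a common frequency offset
```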
10114 Applying Element Free Galerkin Method on Beam and Plate
Authors: Mahdad M’hamed, Belaidi Idir
Abstract:
This paper develops a meshless approach, the Element Free Galerkin (EFG) method, which is based on the weak form of the governing partial differential equations and employs Moving Least Squares (MLS) interpolation to construct the meshless shape functions. The variational weak form is used in the EFG, where the trial and test functions are approximated by the MLS approximation. Since the shape functions constructed by this discretization have the weight-function property based on the randomly distributed points, the essential boundary conditions can be implemented easily. The local weak form of the governing partial differential equations is obtained by the weighted residual method within a simple local quadrature domain, and a spline function with high continuity is used as the weight function. The presently developed EFG method is a truly meshless method, as it requires no mesh, either for the construction of the shape functions or for the integration of the local weak form. Several numerical examples of two-dimensional static structural analysis are presented to illustrate the performance of the present EFG method; they show that it is highly efficient to implement and highly accurate in computation. The present method is used to analyze the static deflection of beams and plates with holes.
Keywords: Numerical computation, element-free Galerkin, moving least squares, meshless methods.
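A one-dimensional sketch of the MLS approximation at the heart of EFG: with linear basis p(x) = [1, x] and a cubic-spline weight, the shape functions are phi_i(x) = p(x)^T A(x)^{-1} B_i(x), built from scattered nodes only. The node layout and support size are illustrative.

```python
import numpy as np

nodes = np.linspace(0.0, 1.0, 11)
support = 0.25                        # radius of the weight support

def cubic_spline_weight(r):
    """C^2 cubic spline weight on normalised distance r = |x - x_i| / support."""
    r = np.abs(r)
    return np.where(r <= 0.5, 2/3 - 4*r**2 + 4*r**3,
           np.where(r <= 1.0, 4/3 - 4*r + 4*r**2 - (4/3)*r**3, 0.0))

def mls_shape_functions(x):
    """phi_i(x) = p(x)^T A(x)^{-1} B_i(x) with linear basis p = [1, x]."""
    p = np.array([1.0, x])
    w = cubic_spline_weight((x - nodes) / support)
    P = np.column_stack([np.ones_like(nodes), nodes])   # rows p(x_i)
    A = (P * w[:, None]).T @ P                          # moment matrix
    B = (P * w[:, None]).T                              # 2 x n
    return p @ np.linalg.solve(A, B)                    # phi_i(x), i = 1..n

phi = mls_shape_functions(0.42)
print(phi.sum(), phi @ nodes)   # consistency: 1.0 and 0.42 (reproduces 1 and x)
```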
10113 Seat Assignment Model for Student Admissions Process at Saudi Higher Education Institutions
Authors: Mohammed Salem Alzahrani
Abstract:
In this paper, the student admission process is studied in order to optimize the assignment of vacant seats with three main objectives: utilizing all vacant seats, satisfying the admission requirements of all programs of study, and maintaining fairness among all candidates. The Seat Assignment Method (SAM) is used to build the model and solve the optimization problem with the help of the Northwest Corner Method and the Least Cost Method. A closed-form formula is derived for the priority of assigning a seat to a candidate based on SAM.
Keywords: Admission Process Model, Assignment Problem, Hungarian Method, Least Cost Method, Northwest Corner Method, Seat Assignment Method (SAM).
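A minimal sketch of the Northwest Corner Method used to seed the assignment, phrased as a generic transportation problem; in the paper's setting, seats per program would play the role of supply and candidate groups the role of demand.

```python
import numpy as np

def northwest_corner(supply, demand):
    """Initial feasible allocation: start at the top-left cell, allocate
    as much as possible, then move right or down as supply/demand is
    exhausted."""
    s, d = list(supply), list(demand)
    alloc = np.zeros((len(s), len(d)), dtype=int)
    i = j = 0
    while i < len(s) and j < len(d):
        q = min(s[i], d[j])
        alloc[i, j] = q
        s[i] -= q
        d[j] -= q
        if s[i] == 0:
            i += 1
        else:
            j += 1
    return alloc

print(northwest_corner([20, 30, 25], [10, 35, 30]))
```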
10112 An Insurer’s Investment Model with Reinsurance Strategy under the Modified Constant Elasticity of Variance Process
Authors: K. N. C. Njoku, Chinwendu Best Eleje, Christian Chukwuemeka Nwandu
Abstract:
One of the problems facing most insurance companies is how best to manage the burden of paying claims to policy holders whenever the need arises. The insurer therefore needs to buy a reinsurance contract in order to reduce risk, enabling it to share the financial burden with the reinsurer. In this paper, the insurer's and reinsurer's strategies are investigated under the modified constant elasticity of variance (M-CEV) process with proportional administrative charges. The insurer considers investment in one risky asset and one risk-free asset, where the risky asset is modeled by the M-CEV process, an extension of the constant elasticity of variance (CEV) process. A nonlinear partial differential equation in the form of the Hamilton-Jacobi-Bellman equation is then obtained by the dynamic programming approach. Using a power transformation technique and a change of variables, explicit solutions for the optimal investment strategy and the optimal reinsurance strategy are obtained. Finally, numerical simulations of the sensitive parameters are presented and discussed in detail; we observe that the modification factor affects only the optimal investment strategy and not the reinsurance strategy for an insurer with an exponential utility function.
Keywords: Reinsurance strategy, Hamilton-Jacobi-Bellman equation, power transformation, M-CEV process, exponential utility.
10111 A Study of Adaptive Fault Detection Method for GNSS Applications
Authors: Je Young Lee, Hee Sung Kim, Kwang Ho Choi, Joonhoo Lim, Sebum Chun, Hyung Keun Lee
Abstract:
The purpose of this study is to develop an efficient fault detection method for Global Navigation Satellite System (GNSS) applications based on adaptive noise covariance estimation. Owing to their dependence on radio frequency signals, GNSS measurements are dominated by systematic errors in the receiver's operating environment. In the proposed method, the pseudorange and carrier-phase measurement noise covariances are obtained at the time propagation and measurement update steps of Carrier-Smoothed Code (CSC) filtering, respectively, and the test statistics for fault detection are generated from the estimated measurement noise covariances. To evaluate the fault detection capability, intentional faults were added to field-collected measurements. The experimental results show that the proposed method is efficient in detecting unhealthy measurements and improves GNSS positioning accuracy in the presence of faults.
Keywords: Adaptive estimation, fault detection, GNSS, residual.
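A sketch of the Carrier-Smoothed Code (Hatch) filter underlying the method: noisy pseudoranges are smoothed with the much less noisy carrier-phase deltas. The adaptive covariance estimation and the fault-detection statistics themselves are not reproduced; the data are synthetic.

```python
import numpy as np

def hatch_filter(code, phase, window=100):
    """Carrier-Smoothed Code filter: blend the current pseudorange with
    the previous smoothed value propagated by the carrier-phase delta.
    `code` and `phase` are per-epoch ranges in metres."""
    smoothed = np.empty_like(code)
    smoothed[0] = code[0]
    for k in range(1, len(code)):
        n = min(k + 1, window)
        predicted = smoothed[k - 1] + (phase[k] - phase[k - 1])
        smoothed[k] = code[k] / n + predicted * (n - 1) / n
    return smoothed

# Synthetic demo: constant-velocity range, noisy code, quiet phase.
rng = np.random.default_rng(0)
truth = 2.0e7 + 100.0 * np.arange(600)
code = truth + rng.normal(0, 3.0, 600)       # ~3 m code noise
phase = truth + rng.normal(0, 0.02, 600)     # ~2 cm phase noise
print(np.std(hatch_filter(code, phase) - truth))  # well below 3 m
```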
10110 Management Control Systems in Post-Incubation: An Investigation of Closed Down High-Technology Start-Ups
Authors: Jochen Edmund Kerschenbauer, Roman Salinger, Daniel Strametz
Abstract:
Insufficient informal communication systems can lead to the first crisis ('Crisis of Leadership') for start-ups. Management Control Systems (MCS) are one way for high-technology start-ups to successfully overcome such problems. So far, the literature has investigated the incubation of start-ups but has focused less on the post-incubation stage. This paper focuses on the use of MCS in post-incubation and, for start-ups that were closed down, on how MCS were used. We conducted 14 semi-structured interviews for this purpose. The overall conclusion is that the majority of the companies were closed down due to a combination of strategic, operative, and financial reasons.
Keywords: Closed down, high-technology, incubation, Levers of Control, management control systems, post-incubation, start-ups.
10109 Zero Truncated Strict Arcsine Model
Authors: Y. N. Phang, E. F. Loh
Abstract:
The zero-truncated model is usually used to model count data without zeros; it is the opposite of the zero-inflated model. Zero-truncated Poisson and zero-truncated negative binomial models have been discussed and used by researchers in analyzing the abundance of rare species and hospital stays, and zero-truncated models serve as the base for developing hurdle models. In this study, we develop a new model, the zero-truncated strict arcsine model, which can be used as an alternative for modeling count data without zeros and with extra variation. Two simulated data sets and one real-life data set are fitted to the developed model, and the results show that the model provides a good fit to the data. The maximum likelihood method is used to estimate the parameters.
Keywords: Hurdle models, maximum likelihood estimation method, positive count data.
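The zero-truncation construction, P(X = k | X > 0) = f(k) / (1 - f(0)), and its maximum likelihood fit can be sketched with the Poisson case for concreteness, since the strict arcsine pmf is not given in the abstract:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

def zt_poisson_negloglik(lam, data):
    # log P(X=k | X>0) = log f(k) - log(1 - f(0)), with f(0) = exp(-lam)
    return -(poisson.logpmf(data, lam) - np.log1p(-np.exp(-lam))).sum()

rng = np.random.default_rng(0)
raw = rng.poisson(2.3, 5000)
data = raw[raw > 0]                     # observed counts contain no zeros

res = minimize_scalar(zt_poisson_negloglik, bounds=(1e-6, 50),
                      args=(data,), method="bounded")
print(round(res.x, 3))                  # close to the true rate 2.3
```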
10108 Enhanced Weighted Centroid Localization Algorithm for Indoor Environments
Authors: I. Nižetić Kosović, T. Jagušt
Abstract:
Lately, with the increasing number of location-based applications, the demand for highly accurate and reliable indoor localization has become urgent. This is a challenging problem due to measurement variance, a consequence of factors such as obstacles, equipment properties, and environmental changes in complex indoor environments. In this paper, we propose a low-cost custom-setup infrastructure solution and a localization algorithm based on the Weighted Centroid Localization (WCL) method. Localization accuracy is increased by several enhancements: calibration of the RSSI values obtained from the wireless nodes, repeated RSSI measurements to exclude deviating values from the position estimate, and consideration of the orientation of the device with respect to the wireless nodes. We conducted several experiments to evaluate the proposed algorithm, achieving a high accuracy of about 1 m.
Keywords: Indoor environment, received signal strength indicator, weighted centroid localization, wireless localization.
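A sketch of the core WCL estimate: the position is the weighted mean of the anchor-node coordinates, with weights derived from RSSI through a log-distance inversion. The calibration constants below are assumptions; the paper's enhancements (RSSI calibration, repeated measurements, orientation handling) sit on top of this.

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
rssi = np.array([-48.0, -61.0, -70.0, -57.0])        # measured at the device

def wcl_position(anchors, rssi, rssi_0=-40.0, path_loss_n=2.5, g=1.0):
    """rssi_0 (RSSI at 1 m) and path_loss_n are assumed calibration
    constants of the log-distance model."""
    d = 10 ** ((rssi_0 - rssi) / (10 * path_loss_n))  # estimated distances
    w = 1.0 / d**g                                    # closer anchor, larger weight
    return (w[:, None] * anchors).sum(axis=0) / w.sum()

print(wcl_position(anchors, rssi))
```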
10107 Fuzzy Numbers and MCDM Methods for Portfolio Optimization
Authors: Thi T. Nguyen, Lee N. Gordon-Brown
Abstract:
This paper demonstrates a new deployment of two multiple criteria decision making (MCDM) techniques for portfolio allocation: Simple Additive Weighting (SAW) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). Rather than referring exclusively to mean and variance, as in the traditional mean-variance method, the criteria used here are the first four moments of the portfolio distribution. Each asset is evaluated based on its marginal impacts on the portfolio's higher moments, which are characterized by trapezoidal fuzzy numbers. Centroid-based defuzzification is then applied to convert the fuzzy numbers into crisp numbers, on which SAW and TOPSIS can be deployed. Experimental results suggest that these MCDM approaches are similarly efficient at selecting dominant assets for an optimal portfolio under higher moments. The proposed approaches allow investors to flexibly adjust their risk preferences regarding higher moments via different schemes adapted to various kinds of investors, from conservative to risky. Another significant advantage is that, compared to mean-variance analysis, the portfolio weights obtained by SAW and TOPSIS are consistently well diversified.
Keywords: Fuzzy numbers, SAW, TOPSIS, portfolio optimization, higher moments, risk management.
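A sketch of the SAW step on defuzzified criteria: each asset's marginal impact on a portfolio moment is a trapezoidal fuzzy number (a, b, c, d), centroid defuzzification (computed numerically here) turns it into a crisp score, and SAW ranks assets by the weighted sum of normalized scores. The numbers, weights, and sign convention for cost criteria are illustrative simplifications.

```python
import numpy as np

def centroid(a, b, c, d, grid=2001):
    """Numeric centroid of a trapezoidal membership function on [a, d]."""
    x = np.linspace(a, d, grid)
    mu = np.interp(x, [a, b, c, d], [0.0, 1.0, 1.0, 0.0])
    return (x * mu).sum() / mu.sum()

# rows = assets; columns = fuzzy impact on (return, variance, skewness, kurtosis)
fuzzy = [
    [(0.02, 0.03, 0.04, 0.05), (0.1, 0.2, 0.3, 0.4), (0.0, 0.1, 0.2, 0.3), (1, 2, 3, 4)],
    [(0.01, 0.02, 0.025, 0.035), (0.05, 0.1, 0.15, 0.2), (0.1, 0.2, 0.3, 0.4), (2, 3, 4, 5)],
]
crisp = np.array([[centroid(*t) for t in row] for row in fuzzy])
crisp = crisp / crisp.max(axis=0)                 # simple normalisation
weights = np.array([0.4, 0.3, 0.2, 0.1])          # investor preferences
benefit = np.array([1, -1, 1, -1])                # maximise return/skewness,
saw_scores = (crisp * weights * benefit).sum(axis=1)  # penalise variance/kurtosis
print(np.argsort(saw_scores)[::-1])               # asset ranking
```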
10106 Modelling Hydrological Time Series Using Wakeby Distribution
Authors: Ilaria Lucrezia Amerise
Abstract:
The statistical modelling of precipitation data for a given portion of territory is fundamental for monitoring climatic conditions and for Hydrogeological Management Plans (HMP). This modelling is rendered particularly complex by changes in the frequency and intensity of precipitation, presumably attributable to global climate change. This paper applies the Wakeby distribution (with five parameters) as a theoretical reference model. The number and quality of its parameters indicate that this distribution may be an appropriate choice for interpolating hydrological variables; moreover, the Wakeby is particularly suitable for describing phenomena with heavy tails. The proposed estimation methods for the Wakeby parameters are those used for density functions with heavy tails. The commonly used procedure is the classic method of probability weighted moments (PWM), although this has often shown difficulty of convergence, or convergence to an inappropriate parameter configuration. In this paper, we analyze the likelihood estimation of a random variable expressed through its quantile function. The method of maximum likelihood is, in this case, more demanding than in more usual estimation settings. The effort is justified by the sampling and asymptotic properties of maximum likelihood estimators, which improve the estimates by providing indications of their variability and, therefore, of their accuracy and reliability. These features are highly appreciated in contexts where poor decisions, attributable to an inefficient or incomplete information base, can cause serious damage.
Keywords: Generalized extreme values (GEV), likelihood estimation, precipitation data, Wakeby distribution.
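The Wakeby distribution is defined directly through its quantile function, which is exactly what makes likelihood estimation non-standard. A sketch of the quantile function in Hosking's parameterisation and inverse-CDF sampling of a heavy-tailed synthetic series (parameter values are illustrative):

```python
import numpy as np

def wakeby_quantile(F, xi, alpha, beta, gamma, delta):
    """Wakeby quantile function:
    x(F) = xi + (alpha/beta)*(1-(1-F)**beta) - (gamma/delta)*(1-(1-F)**(-delta))
    With delta > 0 the upper tail is heavy."""
    F = np.asarray(F, dtype=float)
    return (xi + (alpha / beta) * (1 - (1 - F) ** beta)
               - (gamma / delta) * (1 - (1 - F) ** (-delta)))

# Inverse-CDF sampling of a synthetic "precipitation" series
rng = np.random.default_rng(0)
u = rng.uniform(0, 1, 10_000)
x = wakeby_quantile(u, xi=0.0, alpha=5.0, beta=0.9, gamma=1.2, delta=0.25)
print(x.mean(), np.quantile(x, 0.99))   # note the stretched upper tail
```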
10105 The Results of the Fetal Weight Estimation of the Infants Delivered in the Delivery Room at Dan Khunthot Hospital by Johnson's Method
Authors: Nareelux Suwannobol, JintanaTapin, Khuanchanok Narachan
Abstract:
The objective of this study was to determine the accuracy of fetal weight estimation by Johnson's method and to compare the estimates with actual birth weights. The sample group comprised 126 infants delivered at Dan Khun Thot hospital from January to March 2012. Fetal weight was estimated by measuring fundal height according to Johnson's method. The information was collected from historical delivery records and analyzed using frequency, percentage, mean, and standard deviation, with differences assessed by a paired t-test. The average actual birth weight was 3,093.57 ± 391.03 g (mean ± SD), while the average fetal weight estimated by Johnson's method was 3,455 ± 454.55 g, 384.09 g higher than the average actual birth weight. When the infants were classified by birth weight, the actual birth weight was less than the estimated fetal weight in the low birth weight (<2,500 g) and appropriate birth weight (2,500-3,999 g) groups, but greater than the estimated fetal weight in the high birth weight (>4,000 g) group. The difference between actual birth weight and estimated fetal weight was smallest in the high birth weight (>4,000 g) group, followed by the appropriate (2,500-3,999 g) and low (<2,500 g) birth weight groups, respectively. The rate of estimated fetal weight falling within 10% of actual birth weight was 35.7%. Comparing actual birth weights with the estimates, the difference was statistically significant (p < .001). Johnson's method can provide an initial estimate of fetal weight before resorting to special examinations, which may be very costly. A variety of methods should be employed to estimate fetal weight more precisely, which will help in planning care for the mother's and infant's safety.
Keywords: Johnson's method, fetal weight estimate, delivery room, student nurse.
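For reference, Johnson's formula itself is a one-line computation: estimated fetal weight in grams is (fundal height in cm - n) * 155, where the offset n depends on the station of the presenting part (commonly 11, 12, or 13; conventions vary slightly between sources, so n is left as a parameter here).

```python
def johnson_fetal_weight(fundal_height_cm: float, n: int = 12) -> float:
    """Johnson's formula: EFW (g) = (fundal height in cm - n) * 155.

    n is the station offset of the presenting part; 12 is a common
    choice for a vertex at the ischial spines, but conventions vary."""
    return (fundal_height_cm - n) * 155.0

# Example: fundal height 34 cm, vertex at the ischial spines (n = 12)
print(johnson_fetal_weight(34, n=12))  # 3410.0 g
```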
10104 High Performance of Direct Torque and Flux Control of a Double Stator Induction Motor Drive with a Fuzzy Stator Resistance Estimator
Authors: K. Kouzi
Abstract:
To obtain stable, high-performance direct torque and flux control (DTFC) of a double star induction motor drive (DSIM), proper on-line adaptation of the stator resistance is very important. This is due to the variation of the stator resistance under operating conditions, which introduces error into the estimated flux position and the magnitude of the stator flux. Error in the estimated stator flux deteriorates the performance of the DTFC drive, and the effect of this error is especially important at low speed. Our aim is therefore to overcome the sensitivity of the DTFC to stator resistance variation by proposing on-line fuzzy estimation of the stator resistance. The fuzzy estimation method is based on an on-line stator resistance correction driven by the stator current estimation error and its variation, with the fuzzy logic controller giving the future stator resistance increment at its output. The main advantage of the suggested control algorithm is that it avoids the drive instability that may occur in certain situations and ensures tracking of the actual stator resistance. The validity of the technique and the improvement of the whole system performance are proved by the results.
Keywords: Direct torque control, dual stator induction motor, fuzzy logic estimation, stator resistance adaptation.
10103 Electric Load Forecasting Using Genetic Based Algorithm, Optimal Filter Estimator and Least Error Squares Technique: Comparative Study
Authors: Khaled M. EL-Naggar, Khaled A. AL-Rumaih
Abstract:
This paper presents a performance comparison of three estimation techniques used for peak load forecasting in power systems: genetic algorithms (GA), least error squares (LS), and least absolute value filtering (LAVF). The problem is formulated as an estimation problem, and different forecasting models are considered. Actual recorded data are used to perform the study, the performance of the three optimal estimation techniques is examined, and the advantages of each algorithm are reported and discussed.
Keywords: Forecasting, least error squares, least absolute value, genetic algorithms.
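A sketch of the least-error-squares branch of the comparison: fit a quadratic peak-load growth model L(t) = a + b*t + c*t^2 by minimizing the squared residuals over historical records. The data are synthetic; the paper applies the GA and LAV estimators to the same kind of models.

```python
import numpy as np

years = np.arange(2000, 2015)
t = years - years[0]
rng = np.random.default_rng(0)
peak_load = 900 + 35 * t + 0.8 * t**2 + rng.normal(0, 15, t.size)  # MW

# Least-squares solution of the normal equations for [a, b, c]
A = np.column_stack([np.ones_like(t), t, t**2]).astype(float)
coef, *_ = np.linalg.lstsq(A, peak_load, rcond=None)
forecast = coef @ np.array([1.0, 15.0, 15.0**2])   # predict year 2015
print(coef.round(2), round(float(forecast), 1))
```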
10102 Wavelet Based Identification of Second Order Linear System
Authors: Sudipta Majumdar, Harish Parthasarathy
Abstract:
In this paper, a wavelet-based method is proposed to identify the constant coefficients of a second-order linear system, and it is compared with the least squares method. The proposed method shows improved accuracy of parameter estimation compared to the least squares method. Additionally, it has the advantage of smaller data and storage requirements than the least squares method.
Keywords: Least squares method, linear system, system identification, wavelet transform.
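A sketch of the least squares baseline the paper compares against: identify the constant coefficients of a second-order discrete model y[k] = a1*y[k-1] + a2*y[k-2] + b*u[k] from input/output data via the normal equations (synthetic data; the wavelet method projects the same relation onto wavelet coefficients).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
u = rng.normal(0, 1, N)                    # excitation input
a1, a2, b = 1.5, -0.7, 0.5                 # true (stable) coefficients
y = np.zeros(N)
for k in range(2, N):
    y[k] = a1*y[k-1] + a2*y[k-2] + b*u[k] + rng.normal(0, 0.01)

# Regressor matrix: columns are y[k-1], y[k-2], u[k]
Phi = np.column_stack([y[1:-1], y[:-2], u[2:]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(theta.round(3))                      # approximately [1.5, -0.7, 0.5]
```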
10101 Seven Step Adams Type Block Method with Continuous Coefficient for Periodic Ordinary Differential Equation
Authors: Olusheye Akinfenwa
Abstract:
We consider the development of an eighth-order Adams-type method whose A-stability is discussed by expressing it as a one-step method in higher dimension, which makes it suitable for solving a variety of initial-value problems. The main method and the additional methods are obtained from the same continuous scheme, derived via interpolation and collocation procedures. The methods are then applied in block form as simultaneous numerical integrators over non-overlapping intervals. Numerical results obtained using the proposed block form reveal that it is highly competitive with existing methods in the literature.
Keywords: Block Adams-type method, periodic ordinary differential equation, stability.
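To illustrate the Adams family the block method generalises, the sketch below runs the classical 4-step Adams-Bashforth predictor with an Adams-Moulton corrector on the periodic test problem y'' = -y, written as a first-order system. The paper's method is an eighth-order block scheme applied over non-overlapping intervals; this classical pair is shown only as a reference point.

```python
import numpy as np

def f(t, y):                       # y = [position, velocity], so y'' = -y
    return np.array([y[1], -y[0]])

h, n = 0.05, 400
t = np.arange(n + 1) * h
y = np.zeros((n + 1, 2))
y[0] = [1.0, 0.0]
for k in range(3):                 # start-up values from RK4
    k1 = f(t[k], y[k]); k2 = f(t[k] + h/2, y[k] + h/2*k1)
    k3 = f(t[k] + h/2, y[k] + h/2*k2); k4 = f(t[k] + h, y[k] + h*k3)
    y[k+1] = y[k] + h/6*(k1 + 2*k2 + 2*k3 + k4)
F = [f(t[k], y[k]) for k in range(4)]
for k in range(3, n):
    # Adams-Bashforth 4 predictor, then Adams-Moulton corrector
    pred = y[k] + h/24*(55*F[k] - 59*F[k-1] + 37*F[k-2] - 9*F[k-3])
    F.append(f(t[k+1], pred))
    y[k+1] = y[k] + h/24*(9*F[k+1] + 19*F[k] - 5*F[k-1] + F[k-2])
    F[k+1] = f(t[k+1], y[k+1])     # refresh derivative after correction
print(abs(y[-1, 0] - np.cos(t[-1])))   # small global error vs exact cos(t)
```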