Search results for: time complexity.
6608 Fast Algorithm of Shot Cut Detection
Authors: Lenka Krulikovská, Jaroslav Polec, Tomáš Hirner
Abstract:
In this paper we present a novel method which reduces the computational complexity of abrupt cut detection. We propose a fast algorithm in which the similarity of frames within a defined step is evaluated instead of comparing successive frames. Based on the results of simulations on a large video collection, the proposed fast algorithm achieves an 80% reduction in the number of frame comparisons compared with currently used methods, without degrading shot cut detection accuracy.
Keywords: Abrupt cut, fast algorithm, shot cut detection, Pearson correlation coefficient.
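A minimal sketch of the stride-based idea described in the abstract: frames a fixed step apart are compared first, and successive comparisons are performed only inside suspicious intervals. The step size, threshold and the use of Pearson correlation on grayscale frames are illustrative assumptions, not the authors' exact parameters.

```python
import numpy as np

def pearson_similarity(frame_a, frame_b):
    """Pearson correlation between two grayscale frames (2-D arrays)."""
    a = frame_a.ravel().astype(float)
    b = frame_b.ravel().astype(float)
    return np.corrcoef(a, b)[0, 1]

def detect_cuts(frames, step=10, threshold=0.6):
    """Compare frames `step` apart; refine only inside suspicious intervals.

    Returns indices i such that a cut is suspected between frame i and i+1.
    """
    cuts = []
    for start in range(0, len(frames) - step, step):
        end = start + step
        # One comparison per interval instead of `step` successive comparisons.
        if pearson_similarity(frames[start], frames[end]) < threshold:
            # Fall back to successive comparisons only inside this interval.
            for i in range(start, end):
                if pearson_similarity(frames[i], frames[i + 1]) < threshold:
                    cuts.append(i)
    return cuts
```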
6607 Numerical Studies of Galerkin-type Time-discretizations Applied to Transient Convection-diffusion-reaction Equations
Authors: Naveed Ahmed, Gunar Matthies
Abstract:
We deal with the numerical solution of time-dependent convection-diffusion-reaction equations. We combine the local projection stabilization method for the space discretization with two different time discretization schemes: the continuous Galerkin-Petrov (cGP) method and the discontinuous Galerkin (dG) method of polynomial degree k. We establish optimal error estimates and present numerical results which show that the cGP(k) and dG(k) methods are accurate of order k+1 in the whole time interval. Moreover, the cGP(k) method is superconvergent of order 2k and the dG(k) method of order 2k+1 at the discrete time points. Furthermore, the dependence of the results on the choice of the stabilization parameter is discussed and compared.
Keywords: Convection-diffusion-reaction equations, stabilized finite elements, discontinuous Galerkin, continuous Galerkin-Petrov.
6606 Improved Robust Stability Criteria for Discrete-time Neural Networks
Authors: Zixin Liu, Shu Lü, Shouming Zhong, Mao Ye
Abstract:
In this paper, the robust exponential stability problem of uncertain discrete-time recurrent neural networks with time-varying delay is investigated. By constructing a new augmented Lyapunov-Krasovskii functional, some new improved stability criteria are obtained in the form of linear matrix inequalities (LMIs). Compared with some recent results in the literature, the conservatism of the new criteria is reduced notably. Two numerical examples are provided to demonstrate the reduced conservatism and the effectiveness of the proposed results.
Keywords: Robust exponential stability, delay-dependent stability, discrete-time neural networks, time-varying delays.
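The paper's delay-dependent LMI criteria require the augmented Lyapunov-Krasovskii functional, but the underlying exponential stability notion can be illustrated on a delay-free linear recurrence. A minimal sketch, assuming a hypothetical system x_{k+1} = A x_k and using a discrete Lyapunov equation rather than the authors' LMIs:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical system matrix of a delay-free discrete-time recurrence x_{k+1} = A x_k.
A = np.array([[0.5, 0.1],
              [-0.2, 0.7]])

# Solve A^T P A - P = -Q with Q = I; stability holds if P is positive definite.
Q = np.eye(2)
P = solve_discrete_lyapunov(A.T, Q)

eigenvalues = np.linalg.eigvalsh(P)
print("P positive definite (hence exponentially stable):", bool(np.all(eigenvalues > 0)))
```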
6605 Application of Generalized Autoregressive Score Model to Stock Returns
Authors: Katleho Daniel Makatjane, Diteboho Lawrence Xaba, Ntebogang Dinah Moroke
Abstract:
The current study investigates the behaviour of time-varying parameters that are based on the score function of the predictive model density at time t. The mechanism to update the parameters over time is the scaled score of the likelihood function. The results revealed high persistence of the time-varying parameters, as the location parameter is high and the skewness parameter implied a departure of the scale parameter from normality, with the unconditional parameter equal to 1.5. The results also revealed a persistence of leptokurtic behaviour in stock returns, which implies that the returns are heavy-tailed. Prior to model estimation, the White neural network test showed that the stock price can be modelled by a GAS model. Finally, we propose further research, specifically to model the time-varying parameters with a more detailed model that accounts for the heavy-tailed distribution of the series and computes the risk measure associated with the returns.
Keywords: Generalized autoregressive score model, stock returns, time-varying.
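For orientation, a score-driven (GAS) update for a time-varying location under a Gaussian observation density can be sketched as below; the parameter values and the Gaussian choice are illustrative assumptions, not the specification estimated in the paper.

```python
import numpy as np

def gas_location_filter(y, omega=0.0, A=0.1, B=0.95, sigma=1.0):
    """Score-driven (GAS) recursion for a time-varying mean mu_t.

    Observation density: y_t ~ N(mu_t, sigma^2).
    Update: mu_{t+1} = omega + A * s_t + B * mu_t, with s_t the scaled score.
    """
    mu = np.zeros(len(y) + 1)
    for t, y_t in enumerate(y):
        score = (y_t - mu[t]) / sigma**2      # gradient of the log-density w.r.t. mu
        scaled_score = sigma**2 * score       # scale by the inverse Fisher information
        mu[t + 1] = omega + A * scaled_score + B * mu[t]
    return mu[1:]

# Example on simulated returns with a slowly drifting mean.
rng = np.random.default_rng(0)
returns = 0.001 * np.arange(500) / 500 + rng.normal(scale=0.01, size=500)
filtered_mean = gas_location_filter(returns, sigma=0.01)
```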
6604 Quantifying Uncertainties in an Archetype-Based Building Stock Energy Model by Use of Individual Building Models
Authors: Morten Brøgger, Kim Wittchen
Abstract:
Focus on reducing energy consumption in existing buildings at large scale, e.g. in cities or countries, has been increasing in recent years. In order to reduce energy consumption in existing buildings, political incentive schemes are put in place and large-scale investments are made by utility companies. Prioritising these investments requires a comprehensive overview of the energy consumption in the existing building stock, as well as of the potential energy savings. However, a building stock comprises thousands of buildings with different characteristics, making it difficult to model energy consumption accurately. Moreover, the complexity of the building stock makes it difficult to convey model results to policymakers and other stakeholders. In order to manage this complexity, building archetypes are often employed in building stock energy models (BSEMs). Building archetypes are formed by segmenting the building stock according to specific characteristics. Segmenting the building stock according to building type and building age is common, among other things because this information is often easily available. This segmentation also makes it easy to convey results to non-experts. However, using a single archetypical building to represent all buildings in a segment of the building stock is associated with a loss of detail: thermal characteristics are aggregated, while other characteristics that could affect the energy efficiency of a building are disregarded. Thus, using a simplified representation of the building stock could come at the expense of the accuracy of the model. The present study evaluates the accuracy of a conventional archetype-based BSEM that segments the building stock according to building type and age. The accuracy is evaluated in terms of the archetypes' ability to accurately emulate the average energy demands of the buildings they are meant to represent. This is done for the buildings' energy demands as a whole as well as for relevant sub-demands, and both are evaluated in relation to the type and the age of the building. This should provide researchers who use archetypes in BSEMs with an indication of the expected accuracy of the conventional archetype model, as well as of the accuracy lost in specific parts of the calculation due to the use of the archetype method.
Keywords: Building stock energy modelling, energy-savings, archetype.
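A minimal sketch of the archetype segmentation step, assuming a hypothetical table of individual building records with columns building_type, construction_period and heat_demand_kwh_per_m2; the archetype value is simply the segment average, which is the aggregation whose accuracy the study evaluates.

```python
import pandas as pd

# Hypothetical individual building records (in practice, thousands of rows).
buildings = pd.DataFrame({
    "building_type": ["detached", "detached", "apartment", "apartment", "apartment"],
    "construction_period": ["1961-1972", "1961-1972", "1973-1978", "1973-1978", "1961-1972"],
    "heat_demand_kwh_per_m2": [165.0, 142.0, 118.0, 104.0, 131.0],
})

# One archetype per (type, age) segment: the segment-average demand.
archetypes = (buildings
              .groupby(["building_type", "construction_period"])["heat_demand_kwh_per_m2"]
              .mean()
              .rename("archetype_demand_kwh_per_m2"))

# Per-building error introduced by replacing each building with its archetype.
error = (buildings
         .join(archetypes, on=["building_type", "construction_period"])
         .assign(error_kwh_per_m2=lambda d: d["archetype_demand_kwh_per_m2"]
                                            - d["heat_demand_kwh_per_m2"]))
print(error[["building_type", "construction_period", "error_kwh_per_m2"]])
```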
6603 Artificial Accelerated Ageing Test of 22 kV XLPE Cable for Distribution System Applications in Thailand
Authors: A. Rawangpai, B. Maraungsri, N. Chomnawang
Abstract:
This paper presents the experimental results of an artificial ageing test of 22 kV XLPE cable for distribution system applications in Thailand. The XLPE insulating material of the 22 kV cable was sliced to 60-70 μm in thickness and was subjected to AC high voltage at 23 °C, 60 °C and 75 °C. The testing voltage was applied constantly to the specimen until breakdown. Breakdown voltage and time to breakdown were used to evaluate the lifetime of the insulating material. Furthermore, the physical model by J. P. Crine for predicting the lifetime of XLPE insulating material was adopted as the lifetime model and was calculated in order to compare with the experimental results. Acceptable lifetime results were obtained from Crine's model compared with the experimental results. In addition, Fourier transform infrared spectroscopy (FTIR) for chemical analysis and scanning electron microscopy (SEM) for physical analysis were conducted on the tested specimens.
Keywords: Artificial accelerated ageing test, XLPE cable, distribution system, insulating material, life time, life time model.
6602 Dual Construction of Stern-based Signature Scheme
Authors: Pierre-Louis Cayrel, Sidi Mohamed El Yousfi Alaoui
Abstract:
In this paper, we propose a dual version of the first threshold ring signature scheme based on error-correcting codes proposed by Aguilar et al. in [1]. Our scheme uses an improvement of the Véron zero-knowledge identification scheme, which provides smaller public and private key sizes and better computational complexity than the Stern one. This scheme is secure in the random oracle model.
Keywords: Stern algorithm, Véron algorithm, threshold ring signature, post-quantum cryptography.
6601 Real-Time Image Analysis of Capsule Endoscopy for Bleeding Discrimination in Embedded System Platform
Authors: Yong-Gyu Lee, Gilwon Yoon
Abstract:
Image processing for capsule endoscopy requires large memory, and diagnosis takes hours since the operation time is normally more than 8 hours. A real-time analysis algorithm for capsule images can therefore be clinically very useful. It can differentiate abnormal tissue from healthy structures and provide correlation information among the images. Bleeding is our interest in this regard, and we propose a method of detecting frames with potential bleeding in real time. Our detection algorithm is based on statistical analysis and the shapes of bleeding spots. We tested our algorithm with 30 cases of capsule endoscopy of the digestive tract. Results were excellent: a sensitivity of 99% and a specificity of 97% were achieved in detecting the image frames with bleeding spots.
Keywords: Bleeding, capsule endoscopy, image processing, real time analysis.
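As a rough illustration of statistics-based bleeding screening (not the authors' algorithm, whose exact features are not given in the abstract), a frame can be flagged when the proportion of strongly red-dominant pixels exceeds a threshold; the ratios and thresholds below are assumptions.

```python
import numpy as np

def is_potential_bleeding_frame(frame_rgb, red_ratio=1.8, pixel_fraction=0.02):
    """Flag a frame whose red-dominant pixel fraction exceeds a threshold.

    frame_rgb: H x W x 3 uint8 array.
    A pixel is 'red-dominant' when R exceeds both G and B by `red_ratio`.
    """
    rgb = frame_rgb.astype(float) + 1e-6          # avoid division by zero
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    red_dominant = (r / g > red_ratio) & (r / b > red_ratio)
    return red_dominant.mean() > pixel_fraction

# Usage: iterate over decoded frames and keep the indices of suspicious ones.
# suspicious = [i for i, f in enumerate(frames) if is_potential_bleeding_frame(f)]
```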
6600 Self-Tuning Method of Fuzzy System: An Application on Greenhouse Process
Authors: M. Massour El Aoud, M. Franceschi, M. Maher
Abstract:
The approach proposed here is oriented towards fuzzy systems for the analysis and synthesis of intelligent climate controllers. The simulation of the internal climate of the greenhouse is achieved by a linear model whose coefficients are obtained by identification. The use of fuzzy logic controllers for the regulation of climate variables represents a powerful way to minimize the energy cost. Strategies of reduction and optimization are adopted to facilitate the tuning and to reduce the complexity of the controller.
Keywords: Greenhouse, fuzzy logic, optimization, gradient descent.
6599 A Real-time 4M Collecting Method for Production Information System
Authors: Seung Woo Lee, So Jeong Nam, Jai-Kyung Lee
Abstract:
It can be said that the business sector is faced with a range of challenges, such as a rapidly changing business environment, an increase and diversification of customers' demands, and the consequent need for quick response, all of which call for flexible management and production information systems. As a matter of fact, many manufacturers have adopted production information management systems such as MES and ERP. Nevertheless, managers have difficulties obtaining ever-changing production process information in real time, or responding quickly to any change in production-related needs on the basis of such information. This is because they rely on poor production information systems which are not capable of providing real-time factory settings. If the manufacturer does not have the capacity to collect or digitalize the 4 Ms (Man, Machine, Material, Method), which are the resources for production, on a real-time basis, it might be difficult to effectively maintain the information on the production process. In this regard, this paper introduces some new alternatives to the existing methods of collecting the 4 Ms in real time, which currently comprise the production field.
Keywords: 4M, Acquisition of Data on shop-floor, Real-time machine interface
6598 A Novel Application of Network Equivalencing Method in Time Domain to Precise Calculation of Dead Time in Power Transmission Lines
Authors: J. Moshtagh, L. Eslami
Abstract:
Various studies have shown that about 90% of single line-to-ground faults occurring on high-voltage transmission lines are transient in nature. This type of fault is cleared by a temporary outage (by the single-phase auto-reclosure). The interval between opening and reclosing of the faulted phase circuit breakers is named the "dead time", which varies around several hundred milliseconds. For the adjustment of traditional single-phase auto-reclosures, which usually are not intelligent, it is necessary to calculate the dead time precisely in the off-line condition. If the dead time used in the adjustment of the single-phase auto-reclosure is less than the real dead time, the reclosing of the circuit breakers threatens the power system seriously. So, in this paper, a novel approach for precise calculation of the dead time in power transmission lines based on network equivalencing in the time domain is presented. This approach has considerably higher precision in comparison with the traditional method based on the Thevenin equivalent circuit. For comparison between the proposed approach and the traditional method, a comprehensive simulation with EMTP-ATP is performed on an extensive power network.
Keywords: Dead Time, Network Equivalencing, High Voltage Transmission Lines, Single Phase Auto-Reclosure.
6597 Design of Real Time Early Response Systems for Natural Disaster Management Based On Automation and Control Technologies
Authors: C. Pacheco, A. Cipriano
Abstract:
A new concept of response system is proposed for filling the gap that exists in reducing vulnerability during the immediate response to natural disasters. Real Time Early Response Systems (RTERSs) incorporate real-time information as feedback data for closing the control loop and for generating real-time situation assessments. A review of the state of the art on works that fit the concept of RTERS is presented, and it is found that they are mainly focused on man-made disasters. At the same time, in the response phase of natural disaster management many works are involved in creating early warning systems, but only few efforts have been put into deciding what to do once an alarm is activated. In this context, a RTERS arises as a useful tool for supporting people in their decision-making process during natural disasters after an event is detected, and also as an innovative context for applying well-known automation technologies and automatic control concepts and tools.
Keywords: Disaster management, emergency response system, natural disasters, real time.
6596 A Comparison of Software Analysis and Design Methods for Real Time Systems
Authors: Anthony Spiteri Staines
Abstract:
This paper examines and compares several of the most common real-time methods. These methods are CORE, YSM, MASCOT, JSD, DARTS, RTSAD, ADARTS, CODARTS, HOOD, HRT-HOOD, ROOM, UML and UML-RT. The methods are compared using attributes like i) usability, ii) compositionality and iii) availability of proper RT notations. Finally, some comparison results are given and discussed.
Keywords: Software engineering methods, method comparison, real time analysis and design.
6595 Optimum Time Coordination of Overcurrent Relays using Two Phase Simplex Method
Authors: Prashant P. Bedekar, Sudhir R. Bhide, Vijay S. Kale
Abstract:
Overcurrent (OC) relays are the major protection devices in a distribution system. The operating times of the OC relays are to be coordinated properly to avoid mal-operation of the backup relays. OC relay time coordination in ring-fed distribution networks is a highly constrained optimization problem which can be stated as a linear programming problem (LPP). The purpose is to find an optimum relay setting that minimizes the time of operation of the relays while keeping the relays properly coordinated to avoid mal-operation. This paper presents a two-phase simplex method for optimum time coordination of OC relays. The method is based on the simplex algorithm, which is used to find the optimum solution of an LPP. The method introduces artificial variables to get an initial basic feasible solution (IBFS). The artificial variables are removed using the iterative process of the first phase, which minimizes the auxiliary objective function. The second phase minimizes the original objective function and gives the optimum time coordination of the OC relays.
Keywords: Constrained optimization, LPP, overcurrent relay coordination, two-phase simplex method.
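To illustrate the LP formulation (using SciPy's general-purpose LP routine rather than the paper's two-phase simplex implementation), consider two relays where relay 2 backs up relay 1 and must operate at least a coordination time interval (CTI) later; the numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: operating times [t1, t2] of the primary and backup relays (seconds).
# Objective: minimize t1 + t2.
c = np.array([1.0, 1.0])

# Coordination constraint: t2 - t1 >= CTI  ->  t1 - t2 <= -CTI.
CTI = 0.3
A_ub = np.array([[1.0, -1.0]])
b_ub = np.array([-CTI])

# Minimum operating times imposed by the relay characteristics (illustrative bounds).
bounds = [(0.1, None), (0.1, None)]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("Optimal operating times [t1, t2]:", result.x)   # expected: [0.1, 0.4]
```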
6594 Organization of the Purchasing Function for Innovation
Authors: Jasna Prester, Ivana Rašić Bakarić, Božidar Matijević
Abstract:
Innovations not only contribute to the competitiveness of the company but also have positive effects on revenues. On average, product innovations account for 14 percent of companies' sales. Innovation management has changed substantially during the last decade because of a growing reliance on external partners. As a consequence, a new task for purchasing arises, as firms need to understand which suppliers actually have high potential to contribute to the innovativeness of the firm and which do not. Proper organization of the purchasing function is important, since the majority of manufacturing companies deal with substantial material costs which pass through the purchasing function. In the past the purchasing function was largely seen as a transaction-oriented, clerical function, but today purchasing is the intermediary with supply chain partners contributing to innovations, be they product or process innovations. Therefore, the purchasing function has to be organized differently to enable the firm's innovation potential. However, innovations are inherently risky. There are behavioural risks (that some partner will take advantage of the other party), technological risks in terms of the complexity of products, manufacturing processes and incoming materials, and finally market risks, which in fact judge the value of the innovation. These risks are investigated in this work. Specifically, technological risks, which deal with the complexity of the products and processes, are investigated more thoroughly. Buying components of such high-end technologies necessitates careful investigation of technical features and is therefore usually conducted by a team of experts. Therefore, it is hypothesized that the higher the technological risk, the higher will be the centralization of the purchasing function as an interface with other supply chain members. The main contribution of this research lies in the fact that the analysis was performed on a large data set of 1493 companies from 25 countries, collected in the GMRG 4 survey. Most analyses of the purchasing function are done by case study analysis of innovative firms. Therefore, this study contributes with empirical evaluations that can be generalized.
Keywords: Purchasing function organization, innovation, technological risk, GMRG 4 survey.
6593 Graphene Oxide Fiber with Different Exfoliation Time and Activated Carbon Particle
Authors: Nuray Uçar, Mervin Ölmez, Özge Alptoğa, Nilgün K. Yavuz, Ayşen Önen
Abstract:
In recent years, research on continuous graphene oxide fibers has intensified, and many factors of the production stages are being studied. In this study, the effect of exfoliation time and of the presence of activated carbon particles (ACP) on the properties of graphene oxide fiber has been analyzed. It has been seen that the cross-sectional appearance of the sample with ACP is rough and porous because of the ACP. The addition of ACP did not change the electrical conductivity; however, it resulted in an enormous decrease in mechanical properties. A longer exfoliation time results in a higher degree of crystallinity, a higher C/O ratio and a smaller d-spacing between layers. The breaking strength and electrical conductivity of the sample with the shorter exfoliation time are somewhat higher than those of the sample with the longer exfoliation time.
Keywords: Activated carbon, coagulation by wet spinning, exfoliation, graphene oxide fiber.
6592 Computational Intelligence Hybrid Learning Approach to Time Series Forecasting
Authors: Chunshien Li, Jhao-Wun Hu, Tai-Wei Chiang, Tsunghan Wu
Abstract:
Time series forecasting is an important and widely popular topic in the research of system modeling. This paper describes how to use the hybrid PSO-RLSE neuro-fuzzy learning approach for the problem of time series forecasting. The PSO algorithm is used to update the premise parameters of the proposed prediction system, and the RLSE is used to update the consequence parameters. Thanks to the hybrid learning (HL) approach for the neuro-fuzzy system, the prediction performance is excellent and the speed of learning convergence is much faster than that of other compared approaches. In the experiments, we use the well-known Mackey-Glass chaotic time series. According to the experimental results, the prediction performance and accuracy in time series forecasting by the proposed approach are much better than those of other compared approaches, as shown in Table IV. Excellent prediction performance by the proposed approach has been observed.
Keywords: Forecasting, hybrid learning (HL), neuro-fuzzy system (NFS), particle swarm optimization (PSO), recursive least-squares estimator (RLSE), time series.
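The Mackey-Glass benchmark mentioned above can be generated as follows; the sketch uses a simple Euler discretization of the delay differential equation with commonly used parameters (tau = 17, beta = 0.2, gamma = 0.1, n = 10), which are assumptions about the benchmark setup rather than values taken from the paper.

```python
import numpy as np

def mackey_glass(n_samples=1000, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
    """Generate the Mackey-Glass chaotic series via Euler integration.

    dx/dt = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t)
    """
    history = int(tau / dt)
    x = np.full(n_samples + history, x0)
    for t in range(history, n_samples + history - 1):
        x_delayed = x[t - history]
        x[t + 1] = x[t] + dt * (beta * x_delayed / (1.0 + x_delayed**n) - gamma * x[t])
    return x[history:]

series = mackey_glass()
# Typical forecasting setup: predict x(t + 6) from [x(t), x(t - 6), x(t - 12), x(t - 18)].
```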
6591 Research on Weakly Hard Real-Time Constraints and Their Boolean Combination to Support Adaptive QoS
Authors: Xiangbin Zhu
Abstract:
Advances in computing applications in recent years have prompted the demand for more flexible scheduling models for QoS demand. Moreover, in practical applications, partly violated temporal constraints can be tolerated if the violation meets a certain distribution. So we need to extend the traditional Liu and Layland model to adapt to these circumstances. There are two such extensions, the (m, k)-firm model and the window-constrained model. This paper researches weakly hard real-time constraints and their Boolean combination to support QoS. The fact that a practical application can tolerate some violations of temporal constraints under a certain distribution is employed to support adaptive QoS on an open real-time system. The experimental results show that these approaches are effective compared to traditional scheduling algorithms.
Keywords: Weakly hard real-time, real-time, scheduling, quality of service.
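As a small sketch of what an (m, k)-firm weakly hard constraint means in code (illustrative, not the paper's scheduling algorithm): among any k consecutive job instances, at least m must meet their deadlines.

```python
def satisfies_mk_firm(deadline_met, m, k):
    """Check the (m, k)-firm constraint on a history of job outcomes.

    deadline_met: sequence of booleans, True if the job met its deadline.
    Returns True if every window of k consecutive jobs contains at least m hits.
    Histories shorter than k are trivially accepted here.
    """
    if len(deadline_met) < k:
        return True
    return all(sum(deadline_met[i:i + k]) >= m
               for i in range(len(deadline_met) - k + 1))

# Example: tolerate at most 1 miss in any window of 4 jobs, i.e. a (3, 4)-firm constraint.
history = [True, True, False, True, True, True, False, True]
print(satisfies_mk_firm(history, m=3, k=4))   # True: no window of 4 has more than 1 miss
```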
6590 Revealing Nonlinear Couplings between Oscillators from Time Series
Authors: B.P. Bezruchko, D.A. Smirnov
Abstract:
Quantitative characterization of nonlinear directional couplings between stochastic oscillators from data is considered. We suggest coupling characteristics that are readily interpreted from a physical viewpoint, together with their estimators. An expression for a statistical significance level is derived analytically that allows reliable coupling detection from a relatively short time series. The performance of the technique is demonstrated in numerical experiments.
Keywords: Nonlinear time series analysis, directional couplings, coupled oscillators.
6589 Existence and Stability Analysis of Discrete-time Fuzzy BAM Neural Networks with Delays and Impulses
Authors: Chao Wang, Yongkun Li
Abstract:
In this paper, the discrete-time fuzzy BAM neural network with delays and impulses is studied. Sufficient conditions are obtained for the existence and global stability of a unique equilibrium of this class of fuzzy BAM neural networks with Lipschitzian activation functions, without assuming their boundedness, monotonicity or differentiability, and subject to impulsive state displacements at fixed instants of time. Some numerical examples are given to demonstrate the effectiveness of the obtained results.
Keywords: Discrete-time fuzzy BAM neural networks, impulses, global exponential stability, global asymptotical stability, equilibrium point.
6588 Towards a Suitable and Systematic Approach for Component Based Software Development
Authors: Kuljit Kaur, Parminder Kaur, Jaspreet Bedi, Hardeep Singh
Abstract:
Software crisis refers to the situation in which developers are not able to complete projects within time and budget constraints, and moreover these over-schedule and over-budget projects are of low quality as well. Several methodologies have been adopted from time to time to overcome this situation, and the focus is now on component-based software engineering. In this approach, the emphasis is on reuse of already existing software artifacts. But the results cannot be achieved just by preaching the principles; they need to be practiced as well. This paper highlights some of the very basic elements of this approach which have to be in place to get the desired goals of high-quality, low-cost software products with shorter time-to-market.
Keywords: Component model, software components, software repository, process models.
6587 Least Square-SVM Detector for Wireless BPSK in Multi-Environmental Noise
Authors: J. P. Dubois, Omar M. Abdul-Latif
Abstract:
The Support Vector Machine (SVM) is a statistical learning tool developed on the basis of the concept of structural risk minimization (SRM). In this paper, SVM is applied to signal detection in communication systems in the presence of channel noise in various environments, in the form of Rayleigh fading, additive white Gaussian background noise (AWGN), and interference noise generalized as additive colour Gaussian noise (ACGN). The structure and performance of the SVM in terms of the bit error rate (BER) metric are derived and simulated for these advanced stochastic noise models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of the SVM is then compared to a conventional binary signaling optimal model-based detector driven by binary phase shift keying (BPSK) modulation. We show that the SVM performance is superior to that of conventional matched filter-, innovation filter-, and Wiener filter-driven detectors, even in the presence of random Doppler carrier deviation, especially for low SNR (signal-to-noise ratio) ranges. For large SNR, the performance of the SVM was similar to that of the classical detectors. However, the convergence between SVM and maximum likelihood detection occurred at a higher SNR as the noise environment became more hostile.
Keywords: Colour noise, Doppler shift, innovation filter, least square-support vector machine, matched filter, Rayleigh fading, Wiener filter.
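A minimal sketch of SVM-based detection of BPSK in AWGN, in a deliberately simplified setting: no fading, coloured noise or Doppler, and scikit-learn's SVC in place of the least-squares SVM used in the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def bpsk_awgn(n_bits, snr_db):
    """Return (received samples, transmitted bits) for BPSK over AWGN."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 2.0 * bits - 1.0                       # map {0, 1} -> {-1, +1}
    noise_std = 10 ** (-snr_db / 20.0)               # unit symbol energy
    received = symbols + rng.normal(scale=noise_std, size=n_bits)
    return received.reshape(-1, 1), bits

# Train the SVM detector on one noisy block, test on another.
x_train, y_train = bpsk_awgn(2000, snr_db=3)
x_test, y_test = bpsk_awgn(20000, snr_db=3)

svm_detector = SVC(kernel="rbf").fit(x_train, y_train)
ber_svm = np.mean(svm_detector.predict(x_test) != y_test)

# Conventional threshold (matched-filter) decision for comparison.
ber_threshold = np.mean((x_test.ravel() > 0).astype(int) != y_test)

print(f"BER (SVM): {ber_svm:.4f}   BER (threshold): {ber_threshold:.4f}")
```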
6586 An Algorithm Proposed for FIR Filter Coefficients Representation
Authors: Mohamed Al Mahdi Eshtawie, Masuri Bin Othman
Abstract:
Finite impulse response (FIR) filters have the advantages of linear phase, guaranteed stability, fewer finite precision errors, and efficient implementation. In contrast, they have the major disadvantage of requiring a higher order (more coefficients) than their IIR counterparts for comparable performance. The high order demand imposes more hardware requirements, arithmetic operations, area usage, and power consumption when designing and fabricating the filter. Therefore, minimizing or reducing these parameters is a major goal in the digital filter design task. This paper presents an algorithm proposed for modifying the values and the number of non-zero coefficients used to represent the FIR digital pulse shaping filter response. With this algorithm, the FIR filter frequency and phase response can be represented with a minimum number of non-zero coefficients, thereby reducing the arithmetic complexity needed to compute the filter output. Consequently, the system characteristics, i.e., power consumption, area usage, and processing time, are also reduced. The proposed algorithm is more powerful when integrated with multiplierless algorithms such as distributed arithmetic (DA) in designing high-order digital FIR filters. Here the use of DA eliminates the need for multipliers when implementing the multiply-and-accumulate (MAC) unit, and the proposed algorithm reduces the number of adders and addition operations needed to compute the filter output through the minimization of the non-zero coefficient values.
Keywords: Pulse shaping Filter, Distributed Arithmetic, Optimization algorithm.
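The effect of forcing small taps to zero can be previewed with standard tools; the sketch below designs a generic linear-phase low-pass FIR filter, zeroes taps below a magnitude threshold, and compares the frequency responses. The filter specification and threshold are illustrative assumptions, and this is not the coefficient-representation algorithm proposed in the paper.

```python
import numpy as np
from scipy.signal import firwin, freqz

# Generic linear-phase FIR design (illustrative stand-in for a pulse-shaping filter).
num_taps, cutoff = 101, 0.2
h = firwin(num_taps, cutoff)

# Zero all taps whose magnitude is below a fraction of the largest tap.
threshold = 0.01 * np.max(np.abs(h))
h_sparse = np.where(np.abs(h) >= threshold, h, 0.0)
print(f"non-zero taps: {np.count_nonzero(h)} -> {np.count_nonzero(h_sparse)}")

# Compare frequency responses to see the cost of the sparser representation.
w, H = freqz(h)
_, H_sparse = freqz(h_sparse)
max_dev_db = np.max(np.abs(20 * np.log10(np.abs(H) + 1e-12)
                           - 20 * np.log10(np.abs(H_sparse) + 1e-12)))
print(f"max deviation of magnitude response: {max_dev_db:.2f} dB")
```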
6585 Impact of ISO 9000 on Time-based Performance: An Event Study
Authors: Chris K. Y. Lo, Andy C. L. Yeung, T. C. Edwin Cheng
Abstract:
ISO 9000 is the most popular and widely adopted meta-standard for quality and operational improvements. However, only limited empirical research has been conducted to examine the impact of ISO 9000 on operational performance based on objective and longitudinal data. To reveal any causal relationship between the adoption of ISO 9000 and operational performance, we examined the timing and magnitude of changes in time-based performance as a result of ISO 9000 adoption. We analyzed the changes in operating cycle, inventory days, and accounts receivable days before and after the implementation of ISO 9000 in 695 publicly listed manufacturing firms. We found that ISO 9000 certified firms shortened their operating cycle time by 5.28 days one year after the implementation of ISO 9000. In the long run (3 years after certification), certified firms showed continuous improvement in time-based efficiency and experienced an operating cycle time 11 days shorter than that of non-certified firms. There was an average 6.5% improvement in operating cycle time for ISO 9000 certified firms. Both inventory days and accounts receivable days showed similar significant improvements after the implementation of ISO 9000.
Keywords: ISO 9000, Operating Cycle, Time-based efficiency.
6584 Modelling Conditional Volatility of Saving Rate by a Time-Varying Parameter Model
Authors: Katleho D. Makatjane, Kalebe M. Kalebe
Abstract:
The present paper used time-varying parameters, which are based on the score function of a probability density at time t, to model the volatility of the saving rate. We used a scaled likelihood function to update the parameters of the model over time. Our results revealed high persistence of the time-varying parameter, since the location parameter is greater than zero. Furthermore, we discovered a leptokurtic condition in the saving rate's distribution. The Kapetanios-Shin-Snell nonlinear augmented Dickey-Fuller (KSS-NADF) test showed that the saving rate has a nonlinear unit root; therefore, it can be modelled by a generalised autoregressive score (GAS) model. Additionally, value at risk (VaR) and conditional tail expectation (CTE) indicate that 99% of the time people in Lesotho are saving more than spending. This puts the economy at high risk of not expanding. Therefore, the Monetary Policy Committee (MPC) of Lesotho should revise its monetary policies towards this high saving-rate risk.
Keywords: Generalized autoregressive score, time-varying, saving rate, Lesotho.
6583 Using Genetic Algorithms to Outline Crop Rotations and a Cropping-System Model
Authors: Nicolae Bold, Daniel Nijloveanu
Abstract:
The idea of a cropping system is a method used by farmers. It is an environmentally friendly method, protecting the natural resources (soil, water, air, nutritive substances) and increasing production at the same time, taking some crop particularities into account. The combination of this powerful method with the concepts of genetic algorithms results in the possibility of generating sequences of crops in order to form a rotation. The usage of this type of algorithm has been efficient in solving problems related to optimization, and their polynomial complexity allows them to be used for solving more difficult and varied problems. In our case, the optimization consists in finding the most profitable rotation of cultures. One of the expected results is to optimize the usage of resources, in order to minimize the costs and maximize the profit. In order to achieve these goals, a genetic algorithm was designed. This algorithm ensures the finding of several optimized solutions of cropping-system possibilities which have the highest profit and, thus, which minimize the costs. The algorithm uses genetic-based methods (mutation, crossover) and structures (genes, chromosomes). A cropping-system possibility is considered a chromosome, and a crop within the rotation is a gene within a chromosome. Results about the efficiency of this method are presented in a special section. The implementation of this method would bring benefits to the activity of the farmers by giving them hints and helping them to use the resources efficiently.
Keywords: Genetic algorithm, chromosomes, genes, cropping, agriculture.
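A compact sketch of the encoding described above (a rotation as a chromosome, a crop as a gene); the crop list, profit values, the penalty for planting the same crop in consecutive years, and all GA settings are illustrative assumptions, not the paper's model.

```python
import random

CROPS = ["wheat", "maize", "soybean", "sunflower"]                       # hypothetical crop set
PROFIT = {"wheat": 300, "maize": 450, "soybean": 400, "sunflower": 350}  # per hectare, assumed
ROTATION_YEARS, POP_SIZE, GENERATIONS = 6, 40, 200

def fitness(rotation):
    """Total profit, penalizing the same crop in two consecutive years."""
    profit = sum(PROFIT[c] for c in rotation)
    penalty = sum(200 for a, b in zip(rotation, rotation[1:]) if a == b)
    return profit - penalty

def crossover(parent_a, parent_b):
    cut = random.randint(1, ROTATION_YEARS - 1)
    return parent_a[:cut] + parent_b[cut:]

def mutate(rotation, rate=0.1):
    return [random.choice(CROPS) if random.random() < rate else gene for gene in rotation]

population = [[random.choice(CROPS) for _ in range(ROTATION_YEARS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]                    # simple truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("Best rotation:", best, "profit score:", fitness(best))
```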
6582 LiDAR Based Real Time Multiple Vehicle Detection and Tracking
Authors: Zhongzhen Luo, Saeid Habibi, Martin v. Mohrenschildt
Abstract:
Self-driving vehicles require a high level of situational awareness in order to maneuver safely when driving in real-world conditions. This paper presents a LiDAR-based real-time perception system that is able to process raw sensor data for multiple-target detection and tracking in a dynamic environment. The proposed algorithm is nonparametric and deterministic; that is, no assumptions or a priori knowledge are needed about the input data, and no initializations are required. Additionally, the proposed method works directly on the three-dimensional data generated by the LiDAR, without sacrificing the rich information contained in the 3D domain. Moreover, a fast and efficient real-time clustering algorithm based on a radially bounded nearest neighbor (RBNN) is applied. The Hungarian algorithm and adaptive Kalman filtering are used for data association and tracking. The proposed algorithm is able to run in real time with an average run time of 70 ms per frame.
Keywords: LiDAR, real-time system, clustering, tracking, data association.
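A minimal sketch of the data-association step using the Hungarian algorithm (SciPy's implementation); the detections, predicted track positions and gating distance are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Hypothetical predicted track positions and new detections (x, y in metres).
tracks = np.array([[10.0, 2.0], [25.0, -3.0], [40.0, 1.0]])
detections = np.array([[10.4, 2.1], [24.6, -2.8], [55.0, 8.0]])

# Cost matrix: Euclidean distance between every track and every detection.
cost = cdist(tracks, detections)

# Hungarian algorithm: minimum-cost one-to-one assignment.
track_idx, det_idx = linear_sum_assignment(cost)

# Gate out implausible matches; unmatched detections would start new tracks.
GATE_M = 2.0
matches = [(t, d) for t, d in zip(track_idx, det_idx) if cost[t, d] < GATE_M]
print("associated (track, detection) pairs:", matches)   # tracks 0 and 1 get matched
```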
6581 A Semi-One Time Pad Using Blind Source Separation for Speech Encryption
Authors: Long Jye Sheu, Horng-Shing Chiou, Wei Ching Chen
Abstract:
We propose a new perspective on speech communication using blind source separation. The original speech is mixed with key signals which consist of the mixing matrix, chaotic signals and a random noise. However, parts of the keys (the mixing matrix and the random noise) are not necessary for decryption. In a practical implementation, one can encrypt the speech by changing the noise signal every time. Hence, the present scheme obtains the advantages of a one-time pad encryption while avoiding its drawbacks in key exchange. It is demonstrated that the proposed scheme is immune to traditional attacks.
Keywords: One time pad, blind source separation, independent component analysis, speech encryption.
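For context, blind source separation by independent component analysis can be demonstrated with scikit-learn's FastICA; the two synthetic sources and the random mixing matrix below are illustrative assumptions, not the paper's chaotic key signals or encryption scheme.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(42)
t = np.linspace(0, 1, 4000)

# Two sources: a "speech-like" tone sum and a noise-like key signal.
speech = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)
key = rng.laplace(size=t.size)
sources = np.column_stack([speech, key])

# Mix with a random (secret) mixing matrix: the "encrypted" observations.
mixing = rng.normal(size=(2, 2))
observations = sources @ mixing.T

# Blind source separation: recover the independent components without knowing `mixing`.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observations)

# The recovered components match the sources up to permutation and scaling.
corr = np.corrcoef(recovered.T, sources.T)[:2, 2:]
print(np.round(np.abs(corr), 2))
```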
6580 Real-time Detection of Space Manipulator Self-collision
Authors: Zhang Xiaodong, Tang Zixin, Liu Xin
Abstract:
In order to avoid self-collision of space manipulators during the operation process, a real-time detection method is proposed in this paper. The manipulator is fitted into a cylinder-enveloping surface, and a detection algorithm for collisions between cylinders is then analyzed. Collisions between the links of the space manipulator can be detected by using this algorithm in real time during the operation process. To ensure the security of the operation, a safety threshold is designed. The simulation and experiment results verify the effectiveness of the proposed algorithm for a 7-DOF space manipulator.
Keywords: Space manipulator, collision detection, self-collision, real-time collision detection.
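The cylinder-versus-cylinder test reduces to comparing the distance between the two cylinder axes (line segments) with the sum of the radii plus a safety threshold. The sketch below approximates the axis-to-axis distance by sampling points along each segment, which is a coarse illustrative stand-in for an exact segment-distance routine; the link geometry and margin are assumptions.

```python
import numpy as np

def segment_distance(p0, p1, q0, q1, samples=50):
    """Approximate the minimum distance between segments p0-p1 and q0-q1 by sampling."""
    s = np.linspace(0.0, 1.0, samples)[:, None]
    pts_p = p0 + s * (p1 - p0)                 # points along the first axis
    pts_q = q0 + s * (q1 - q0)                 # points along the second axis
    diff = pts_p[:, None, :] - pts_q[None, :, :]
    return np.min(np.linalg.norm(diff, axis=-1))

def cylinders_collide(p0, p1, r_p, q0, q1, r_q, safety_margin=0.05):
    """Link-enveloping cylinders collide if their axes come closer than r_p + r_q + margin."""
    return segment_distance(p0, p1, q0, q1) < r_p + r_q + safety_margin

# Example: two non-adjacent links of a manipulator (coordinates in metres, assumed).
link_a = (np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), 0.10)
link_b = (np.array([0.5, 0.2, 0.0]), np.array([0.5, 1.0, 0.0]), 0.10)
print(cylinders_collide(link_a[0], link_a[1], link_a[2],
                        link_b[0], link_b[1], link_b[2]))   # True: axes come within ~0.2 m
```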
6579 Matching Facial Images using Age Related Morphing Changes
Authors: Udeni Jayasinghe, Anuja Dharmaratne
Abstract:
Each year many people are reported missing in most countries of the world, for various reasons, and arrangements have to be made to find these people after some time. The investigating agencies are thus compelled to identify these people using manpower. But in many cases, investigations carried out to find a person who has been missing for a long time may not be successful. In such situations it may be difficult to identify these people by examining their old photographs, because their facial appearance might have changed, mainly due to the natural aging process. On some occasions in forensic medicine, if a dead body is found, investigations must be held to make sure that the corpse belongs to the person who disappeared some time ago. With the passage of time the face of the person might have changed, and there should be a mechanism to reveal the person's identity. In order to make this process easy, we must estimate how the person would look by now. To address this problem, this paper presents a way of synthesizing a facial image with aging effects.
Keywords: Cranio-facial growth model, eigenfaces, eigenvectors, Face Anthropometry.