Search results for: time based DNA codes


14160 Cost Effective Real-Time Image Processing Based Optical Mark Reader

Authors: Amit Kumar, Himanshu Singal, Arnav Bhavsar

Abstract:

In this modern era of automation, most academic and competitive exams are based on Multiple Choice Questions (MCQs). The responses to these MCQ-based exams are recorded on Optical Mark Reader (OMR) sheets. Evaluation of an OMR sheet requires separate, specialized machines for scanning and marking. The sheets used by these machines are special and cost more than normal sheets. The existing process is therefore uneconomical and depends on paper thickness, scanning quality, paper orientation, special hardware, and customized software. This study tackles the problem of evaluating OMR sheets without any special hardware, making the whole process economical. We propose an image processing based algorithm that can read and evaluate scanned OMR sheets with no special hardware and eliminates the need for a special OMR sheet: responses recorded on a normal sheet are enough for evaluation. The proposed system handles variations in color, brightness, rotation, and small imperfections in the OMR sheet images.
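
As a rough illustration of the detection pipeline named in the keywords (Hough circle transform plus binary thresholding), the sketch below marks filled answer bubbles on a scanned sheet. It is not the authors' implementation; the file name, radius range and fill threshold are assumptions.

```python
import cv2
import numpy as np

# Minimal sketch: detect answer bubbles with the Hough circle transform and
# decide which are filled via Otsu binary thresholding. All numeric values
# (radii, distances, fill threshold) are illustrative assumptions.
img = cv2.imread("omr_sheet.png")                      # hypothetical scan
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                           param1=100, param2=30, minRadius=8, maxRadius=20)

# Invert so pencil marks become white; Otsu picks the threshold automatically.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

marked = []
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        roi = binary[max(y - r, 0):y + r, max(x - r, 0):x + r]
        if roi.size and roi.mean() / 255.0 > 0.5:      # mostly dark => filled bubble
            marked.append((x, y))
print(f"{len(marked)} filled bubbles detected")
```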

Keywords: OMR, image processing, Hough circle transform, interpolation, detection, binary thresholding.

14159 Continuous Feature Adaptation for Non-Native Speech Recognition

Authors: Y. Deng, X. Li, C. Kwan, B. Raj, R. Stern

Abstract:

The current speech interfaces in many military applications may be adequate for native speakers. However, the recognition rate drops considerably for non-native speakers (people with foreign accents), mainly because non-native speakers exhibit large temporal and intra-phoneme variations when they pronounce the same words. The problem is further complicated by the presence of strong environmental noise such as tank noise, helicopter noise, etc. In this paper, we propose a novel continuous acoustic feature adaptation algorithm for on-line accent and environmental adaptation. Implemented with incremental singular value decomposition (SVD), the algorithm captures local acoustic variation and runs in real time. This feature-based adaptation method is then integrated with the conventional model-based maximum likelihood linear regression (MLLR) algorithm. Extensive experiments have been performed on the NATO non-native speech corpus with a baseline acoustic model trained on native American English. The proposed feature-based adaptation algorithm improved the average recognition accuracy by 15%, while the MLLR model-based adaptation achieved an 11% improvement. The corresponding word error rate (WER) reductions were 25.8% and 2.73%, compared to the system without adaptation. The combined adaptation achieved an overall recognition accuracy improvement of 29.5% and a WER reduction of 31.8%, compared to the system without adaptation.
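
As a generic illustration of SVD-based feature adaptation (not the paper's incremental algorithm), the sketch below estimates the dominant directions of recent acoustic variation over a sliding window and removes them from each incoming feature vector; the window size and rank are assumed values.

```python
import numpy as np

def svd_feature_adaptation(frames, window=200, rank=5):
    """Yield adapted feature vectors: each frame has the strongest local
    variation directions (accent/environment drift) projected out.

    A true on-line system would update the SVD incrementally instead of
    recomputing it per frame; the batch recomputation keeps the sketch short.
    """
    buffer = []
    for x in frames:                        # frames: iterable of e.g. MFCC vectors
        buffer.append(np.asarray(x, float))
        if len(buffer) > window:
            buffer.pop(0)
        X = np.vstack(buffer)
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        V = Vt[:rank].T                     # dominant local variation subspace
        centered = buffer[-1] - mean
        yield centered - V @ (V.T @ centered)

# Example with random 13-dimensional "features":
rng = np.random.default_rng(0)
adapted = list(svd_feature_adaptation(rng.standard_normal((50, 13))))
```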

Keywords: speaker adaptation, environment adaptation, robust speech recognition, SVD, non-native speech recognition.

14158 On the Solution of the Towers of Hanoi Problem

Authors: Hayedeh Ahrabian, Comfar Badamchi, Abbass Nowzari-Dalini

Abstract:

In this paper, two versions of an iterative loopless algorithm for the classical Towers of Hanoi problem with O(1) storage complexity and O(2^n) time complexity are presented. Based on this algorithm, the number of moves made on each peg, together with its direction, is formulated.
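
For comparison, the textbook iterative (non-recursive) solution with O(1) extra storage and 2^n - 1 moves is sketched below; the paper's two loopless variants and the per-peg move counts are not reproduced here.

```python
def hanoi(n, pegs=("A", "B", "C")):
    """Iterative Towers of Hanoi for n disks, moving the tower from A to C.

    Yields (disk, from_peg, to_peg) for moves 1 .. 2**n - 1; disk 1 is the
    smallest. Only O(1) extra storage beyond the emitted moves is used.
    """
    for k in range(1, 1 << n):
        d = (k & -k).bit_length()            # disk moved = 1 + trailing zeros of k
        j = (k + (1 << (d - 1))) >> d        # this is disk d's j-th move
        step = 2 if (n - d) % 2 == 0 else 1  # cyclic direction for this disk
        src = ((j - 1) * step) % 3
        dst = (j * step) % 3
        yield d, pegs[src], pegs[dst]

# Example: list(hanoi(3)) reproduces the 7 optimal moves from peg A to peg C.
```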

Keywords: Loopless algorithm, Binary tree, Towers of Hanoi.

14157 The Explanation for Dark Matter and Dark Energy

Authors: Richard Lewis

Abstract:

The following assumptions of the Big Bang theory are challenged and found to be false: the cosmological principle, the assumption that all matter formed at the same time, and the assumption regarding the cause of the cosmic microwave background radiation. The evolution of the universe is described based on the conclusion that the universe is finite with a space boundary. This conclusion is reached by ruling out the possibility of an infinite universe or a universe which is finite with no boundary. In a finite universe, the centre of the universe can be located with reference to our home galaxy (the Milky Way) using the speed relative to the Cosmic Microwave Background (CMB) rest frame and Hubble's law. This places our home galaxy at a distance of approximately 26 million light years from the centre of the universe. Because we are making observations from a point relatively close to the centre of the universe, the universe appears to be isotropic and homogeneous, but this is not the case. The CMB is coming from a source located within the event horizon of the universe, and there is sufficient mass in the universe to create an event horizon at the Schwarzschild radius. Galaxies form over time due to the energy released by the expansion of space. Conservation of energy must consider total energy, which is mass (+ve) plus energy (+ve) plus spacetime curvature (-ve), so that the total energy of the universe is always zero. The predominant location of galaxy formation moves over time from the centre of the universe towards the boundary, so that today the majority of new galaxy formation is taking place beyond our horizon of observation at 14 billion light years.
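
As a rough check of the quoted distance under Hubble's law (the ~600 km/s CMB-frame speed and H0 ≈ 70 km/s/Mpc used here are standard reference values, not figures taken from the abstract):

$$ d = \frac{v_{\mathrm{CMB}}}{H_0} \approx \frac{600\ \mathrm{km\,s^{-1}}}{70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}} \approx 8.6\ \mathrm{Mpc} \approx 2.8\times 10^{7}\ \text{light years}, $$

which is of the same order as the 26 million light years stated.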

Keywords: Cosmic microwave background, dark energy, dark matter, evolution of the universe.

14156 Highly Scalable, Reversible and Embedded Image Compression System

Authors: Federico Pérez González, Iñaki Goiricelaia Ordorika, Pedro Iriondo Bengoa

Abstract:

A new method for low-complexity image coding is presented that permits different settings and great scalability in the generation of the final bit stream. The coder is a continuous-tone still image compression system that combines lossy and lossless compression making use of finite-arithmetic reversible transforms. Both the color-space transformation and the wavelet transformation are reversible. The transformed coefficients are coded by means of a coding system based on a subdivision into smaller components (CFDS), similar to the bit importance codification. The subcomponents so obtained are reordered by a highly configurable alignment system that, depending on the application, makes it possible to reconfigure the elements of the image and obtain different levels of importance from which the bit stream will be generated. The subcomponents of each level of importance are coded using a variable-length entropy coding system (VBLm) that permits the generation of an embedded bit stream. This bit stream by itself encodes a compressed still image. However, applying a packing system to the bit stream after the VBLm stage allows a final, highly scalable bit stream to be built from a basic image level and one or several enhancement levels.
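
As a small illustration of the kind of reversible, finite-arithmetic transform such a coder relies on, the sketch below implements one level of an integer-to-integer Haar wavelet via lifting; the paper's actual filters, color transform and CFDS/VBLm stages are not reproduced.

```python
import numpy as np

def haar_lift_forward(x):
    """One level of an integer-to-integer Haar wavelet via lifting steps."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    detail = odd - even                     # prediction step
    approx = even + (detail >> 1)           # update step (integer average)
    return approx, detail

def haar_lift_inverse(approx, detail):
    even = approx - (detail >> 1)
    odd = detail + even
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([12, 15, 200, 203, 90, 91, 7, 5])
a, d = haar_lift_forward(x)
assert np.array_equal(haar_lift_inverse(a, d), x)   # perfect reconstruction
```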

Keywords: Image compression, wavelet transform, highly scalable, reversible transform, embedded, subcomponents.

14155 Interfacing C and TMS320C6713 Assembly Language (Part-I)

Authors: Abdullah A. Wardak

Abstract:

This paper describes the interfacing of C with TMS320C6713 assembly language, which is crucially important for many real-time applications. Similarly, interfacing of C with the assembly language of a conventional microprocessor such as the MC68000 is presented for comparison. However, it should be noted that the way the C compiler passes arguments among functions in the TMS320C6713-based environment is totally different from the way arguments are passed on a conventional microprocessor such as the MC68000. Therefore, it is very important for a user of the TMS320C6713-based system to properly understand and follow the register conventions when interfacing C with a TMS320C6713 assembly language subroutine. It should also be noted that in some cases (examples 6-9) the endian mode of the board needs to be taken into consideration. In this paper, one method is presented in great detail. Other methods will be presented in the future.

Keywords: Assembly language, high level language, interfacing, stack, arguments.

14154 Construction of Recombinant E. coli Expressing a Fusion Protein to Produce 1,3-Propanediol

Authors: Rosarin Rujananon, Poonsuk Prasertsan, Amornrat Phongdara, Tanate Panrat, Jibin Sun, Sugima Rappert, An-Ping Zeng

Abstract:

In this study, a synthetic pathway was created by assembling genes from Clostridium butyricum and Escherichia coli in different combinations. Among the genes were dhaB1 and dhaB2 from C. butyricum VPI1718, coding for glycerol dehydratase (GDHt) and its activator (GDHtAc), respectively, which are involved in the conversion of glycerol to 3-hydroxypropionaldehyde (3-HPA). The yqhD gene from E. coli BL21 was also included, which codes for an NADPH-dependent 1,3-propanediol oxidoreductase isoenzyme (PDORI) reducing 3-HPA to 1,3-propanediol (1,3-PD). Molecular modeling analysis indicated that the conformation of the fusion protein of YQHD and DHAB1 was favorable for direct molecular channeling of the intermediate 3-HPA. According to the simulation results, the yqhD and dhaB1 genes were assembled upstream of dhaB2 to express a fusion protein, yielding the recombinant strain E. coli BL21 (DE3)//pET22b+::yqhD-dhaB1_dhaB2 (strain BP41Y3). Strain BP41Y3 gave a 10-fold higher 1,3-PD concentration than the strain expressing the recombinant enzymes simultaneously but in a non-fusion mode (strain BP31Y2). This is the first report using a gene fusion approach to enhance the biological conversion of glycerol to the value-added compound 1,3-PD.

Keywords: Recombinant E. coli, 1,3-propanediol, glycerol, fusion protein.

14153 Fast Forecasting of Stock Market Prices by using New High Speed Time Delay Neural Networks

Authors: Hazem M. El-Bakry, Nikos Mastorakis

Abstract:

Fast forecasting of stock market prices is very important for strategic planning. In this paper, a new approach for fast forecasting of stock market prices is presented. The algorithm uses new high speed time delay neural networks (HSTDNNs). The operation of these networks relies on performing cross correlation in the frequency domain between the input data and the input weights of the neural networks. It is proved mathematically and practically that the number of computation steps required by the presented HSTDNNs is less than that needed by traditional time delay neural networks (TTDNNs). Simulation results using MATLAB confirm the theoretical computations.
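
The core computational idea, cross-correlation evaluated in the frequency domain, can be sketched as follows; the vector lengths are illustrative and the surrounding network layers are omitted.

```python
import numpy as np

def fft_cross_correlation(x, w):
    """Cross-correlation of an input window x with a weight vector w, computed
    in the frequency domain: O(N log N) instead of the O(N*m) sliding sum."""
    n = len(x) + len(w) - 1
    X = np.fft.rfft(x, n)
    W = np.fft.rfft(w, n)
    return np.fft.irfft(X * np.conj(W), n)   # correlation = product with conjugate

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)                # e.g. a window of past price returns
w = rng.standard_normal(16)                  # input weights of one time-delay neuron
direct = np.correlate(x, w, mode="valid")    # time-domain sliding correlation
fast = fft_cross_correlation(x, w)[:direct.size]
assert np.allclose(direct, fast)             # same result, fewer operations for large N
```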

Keywords: Fast Forecasting, Stock Market Prices, Time Delay Neural Networks, Cross Correlation, Frequency Domain.

14152 Switching Rule for the Exponential Stability and Stabilization of Switched Linear Systems with Interval Time-varying Delays

Authors: Kreangkri Ratchagit

Abstract:

This paper is concerned with exponential stability and stabilization of switched linear systems with interval time-varying delays. The time delay is any continuous function belonging to a given interval, in which the lower bound of delay is not restricted to zero. By constructing a suitable augmented Lyapunov-Krasovskii functional combined with the Leibniz-Newton formula, a switching rule for the exponential stability and stabilization of switched linear systems with interval time-varying delays and new delay-dependent sufficient conditions for the exponential stability and stabilization of the systems are first established in terms of LMIs. Numerical examples are included to illustrate the effectiveness of the results.
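
For readers unfamiliar with LMI-based conditions, the sketch below poses the simplest such feasibility problem, a plain (delay-free) Lyapunov inequality, as a semidefinite program in CVXPY; the paper's delay-dependent Lyapunov-Krasovskii conditions are considerably richer and are not reproduced.

```python
import cvxpy as cp
import numpy as np

# Feasibility of the LMI  A'P + PA < 0,  P > 0  certifies exponential
# stability of dx/dt = A x. The subsystem matrix below is an example.
A = np.array([[-2.0, 1.0],
              [ 0.0, -1.5]])
n = A.shape[0]
eps = 1e-3

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)     # 'optimal' => the LMI is feasible => stability certified
print(P.value)
```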

Keywords: Switching design, exponential stability and stabilization, switched linear systems, interval delay, Lyapunov function, linear matrix inequalities.

14151 Signal Driven Sampling and Filtering: A Promising Approach for Time-Varying Signal Processing

Authors: Saeed Mian Qaisar, Laurent Fesquet, Marc Renaudin

Abstract:

Mobile systems are powered by batteries, and reducing system power consumption is key to increasing their autonomy. Such systems mostly deal with time-varying signals. We therefore aim to achieve power efficiency by smartly adapting the system's processing activity to the local characteristics of the input signal. This is done by completely rethinking the processing chain and adopting signal-driven sampling and processing. In this context, a signal-driven filtering technique based on level-crossing sampling is devised. It adapts the sampling frequency and the filter order by analysing the local variations of the input signal, thereby correlating the processing activity with the signal variations. This leads to a drastic computational gain of the proposed technique compared to the classical one.
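
A minimal sketch of level-crossing sampling is given below: a sample is kept only when the signal crosses a new uniformly spaced level, so quiet segments generate few samples. The quantum and the test signal are illustrative assumptions, and the adaptive-order filtering stage is omitted.

```python
import numpy as np

def level_crossing_sample(x, delta=0.1):
    """Indices at which the signal crosses into a new level band of width delta.
    Quiet segments yield few samples; active segments yield many."""
    kept = [0]
    level = np.floor(x[0] / delta)
    for i in range(1, len(x)):
        new_level = np.floor(x[i] / delta)
        if new_level != level:
            kept.append(i)
            level = new_level
    return np.asarray(kept)

t = np.linspace(0, 1, 2000)
x = np.where(t < 0.5, 0.05 * np.sin(2 * np.pi * 2 * t),   # low-activity half
             np.sin(2 * np.pi * 40 * t))                  # high-activity half
idx = level_crossing_sample(x, delta=0.1)
print(f"{len(idx)} of {len(x)} samples kept")             # most lie in the active half
```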

Keywords: Level Crossing Sampling, Activity Selection, Adaptive Rate Filtering, Computational Complexity.

14150 Cryogenic Freezing Process Optimization Based On Desirability Function on the Path of Steepest Ascent

Authors: R. Uporn, P. Luangpaiboon

Abstract:

This paper presents a comparative study of statistical methods for the multi-response surface optimization of a cryogenic freezing process. Taguchi design and analysis and the steepest ascent method based on the desirability function were conducted to ascertain the influential factors of the cryogenic freezing process and their optimal levels. The preferred levels of the set point, exhaust fan speed, retention time, and flow direction are -90 °C, 20 Hz, 18 minutes, and counter-current, respectively. The overall desirability level is 0.7044.
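
The desirability approach behind the reported 0.7044 combines one desirability value per response into a geometric mean; the sketch below shows the mechanics with made-up response values and ranges, since the study's actual responses are not given in the abstract.

```python
import numpy as np

def desirability_larger_is_better(y, low, high, weight=1.0):
    """Derringer-Suich 'larger-is-better' individual desirability in [0, 1]."""
    d = np.clip((y - low) / (high - low), 0.0, 1.0)
    return d ** weight

# Overall desirability = geometric mean of the individual desirabilities.
# The (response, low, high) triples below are illustrative placeholders.
d_values = [desirability_larger_is_better(y, lo, hi)
            for y, lo, hi in [(7.2, 5, 10), (0.82, 0.5, 1.0), (43, 30, 50)]]
overall = float(np.prod(d_values)) ** (1.0 / len(d_values))
print(round(overall, 4))
```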

Keywords: Cryogenic Freezing Process, Taguchi Design and Analysis, Response Surface Method, Steepest Ascent Method and Desirability Function Approach.

14149 The Effect of Processing Parameters of the Vinyl Ester Matrix Nanocomposites Based On Layered Silicate on the Level of Exfoliation

Authors: A. I. Alateyah, H. N. Dhakal, Z. Y. Zhang

Abstract:

The effect of the processing parameters on the level of intercalation between the layered silicate and the polymer was studied for two different methodologies. X-ray diffraction, Scanning Electron Microscopy, Energy Dispersive X-ray Spectrometry, and Transmission Electron Microscopy were utilized to examine the intercalation level of the nanocomposites produced by both methodologies. It was found that drying the clay prior to mixing with the polymer, the mixing time and speed, the degassing time, and the curing method caused major changes in the level of distribution of the nanocomposite structure. In methodology 1, aggregation layers were observed at only 2.5 wt.% clay loading, whereas in methodology 2 aggregation layers were found at higher clay loading (i.e., 5 wt.%).

Keywords: Vinyl ester, nanocomposites, layered silicate, characterisations, aggregation layers, intercalation, exfoliation.

14148 Adaptive Impedance Control for Unknown Time-Varying Environment Position and Stiffness

Authors: Norsinnira Zainul Azlan, Hiroshi Yamaura

Abstract:

This study is concerned with a new adaptive impedance control strategy to compensate for unknown time-varying environment stiffness and position. The uncertainties are expressed by Function Approximation Technique (FAT), which allows the update laws to be derived easily using Lyapunov stability theory. Computer simulation results are presented to validate the effectiveness of the proposed strategy.
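
As a loose illustration of the FAT idea only (not the controller or update laws derived in the paper), the sketch below writes an unknown time-varying environment stiffness as a weighted sum of a fixed Fourier basis and adapts the constant weights with a gradient-type law; every numeric value is an assumption.

```python
import numpy as np

# Function Approximation Technique (FAT) sketch: the unknown time-varying
# quantity k_e(t) is represented as w^T z(t) with known basis z(t) and
# constant weights w, and only w is adapted on-line.
T, dt, n_basis = 10.0, 1e-3, 11

def basis(t):                                   # truncated Fourier basis on [0, T]
    k = np.arange(1, (n_basis - 1) // 2 + 1)
    return np.concatenate(([1.0], np.sin(2*np.pi*k*t/T), np.cos(2*np.pi*k*t/T)))

k_true = lambda t: 800 + 200 * np.sin(0.8 * t)  # "unknown" stiffness, for simulation only
w_hat = np.zeros(n_basis)                       # adapted weights
gamma = 50.0                                    # adaptation gain (illustrative)

err = []
for t in np.arange(0.0, T, dt):
    z = basis(t)
    k_hat = w_hat @ z                           # FAT estimate of k_e(t)
    e = k_true(t) - k_hat                       # estimation error drives the update
    w_hat += gamma * e * z * dt                 # gradient-type update law
    err.append(abs(e))
print(f"mean |error| over the last second: {np.mean(err[-1000:]):.1f} N/m")
```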

Keywords: Adaptive Impedance Control, Function Approximation Technique (FAT), unknown time-varying environment position and stiffness.

14147 Risk Assessment for Aerial Package Delivery

Authors: Haluk Eren, Ümit Çelik

Abstract:

Recent developments in unmanned aerial vehicles (UAVs) have begun to attract intense interest. UAVs are now used for many different applications, from military to civilian, and some online retailers and logistics companies are testing UAV delivery. UAVs have great potential to reduce the cost and time of deliveries and to respond to emergencies quickly. Despite these positives, only a few works have addressed the routing of UAVs for package delivery. Transporting goods from one place to another may involve many hazards along the delivery route, such as a falling package striking ground objects or the vehicle encountering air obstacles. This situation falls under a wide-ranging insurance concept; for this reason, deliveries made with drones come within the scope of shipping insurance. In addition, air traffic rules were previously framed without unmanned aerial vehicles in mind, but UAVs are now a reality in the airspace. In this study, the main goal is to conduct a risk analysis of drone-based package delivery services along their delivery routes.

Keywords: Drone risk assessment, drone package delivery.

14146 Determination of the Gain in Learning the Free-Fall Motion of Bodies by Applying the Resource of Previous Concepts

Authors: Ricardo Merlo

Abstract:

In this paper, we analyzed the different didactic proposals available online for teaching the free-fall motion of bodies. An important aspect was the interpretation of the direction and sense of the acceleration of gravity and of the falling velocity of a body; we found different uses of the Cartesian reference system as well as different graphical presentations of the velocity as a function of time and of the distance traveled vertically by the body after it was dropped from a height h0. In this framework, a survey of previous concepts was applied to a voluntary group of first-year university students of an Engineering degree, before and after the development of the class on the subject in question. Hake's index was then determined (0.52), indicating an average learning gain resulting from the meaningful use of the reference system and the respective graphs of velocity versus time and height versus time.
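
For reference, Hake's normalized gain compares the class-average pre-test and post-test scores; the example percentages below are illustrative, chosen only to land near the reported value:

$$ \langle g \rangle = \frac{\%\,\text{post} - \%\,\text{pre}}{100 - \%\,\text{pre}}, \qquad \text{e.g.}\ \frac{66 - 30}{100 - 30} \approx 0.51. $$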

Keywords: Didactic gain, free–fall, physics teaching, previous knowledge.

14145 The Optimum Aeration Time of Wastewater Treatment by Surface Aerators in Suan Sunandha Rajabhat University

Authors: Anat Thapinta

Abstract:

This research studied the efficiency of wastewater treatment by comparing different aeration times of surface aerators in Suan Sunandha Rajabhat University. The operation of the surface aerators was divided into two groups: 8 hours (8-0/opened-closed) and 4 hours (2-2/opened-closed) of aeration time per day. The study found that the efficiency of wastewater treatment in terms of DO, BOD, turbidity and NO2- for the 8-hour (8-0/opened-closed) and 4-hour (2-2/opened-closed) aeration times was not statistically different [Sig. = .644, .488, .716 and .054 > α (.05)], while the efficiency in terms of NO3- and P was significantly different at the .01 level [Sig. = .001 and .000 < α (.01)].

Keywords: Aeration time, Surface aerator, Wastewater treatment.

14144 Fractal Patterns for Power Quality Detection Using Color Relational Analysis Based Classifier

Authors: Chia-Hung Lin, Mei-Sung Kang, Cong-Hui Huang, Chao-Lin Kuo

Abstract:

This paper proposes fractal patterns for power quality (PQ) detection using a color relational analysis (CRA) based classifier. An iterated function system (IFS) uses non-linear interpolation in the maps, together with similarity maps, to construct various fractal patterns of power quality disturbances, including harmonics, voltage sag, voltage swell, voltage sag involving harmonics, voltage swell involving harmonics, and voltage interruption. The non-linear interpolation functions (NIFs) with fractal dimension (FD) make the fractal patterns more distinguishable between normal and abnormal voltage signals. The classifier based on CRA discriminates the disturbance events in a power system. Compared with wavelet neural networks, the test results show accurate discrimination, good robustness, and faster processing time for detecting disturbance events.

Keywords: Power Quality (PQ), Color Relational Analysis (CRA), Iterated Function System (IFS), Non-linear Interpolation Function (NIF), Fractal Dimension (FD).

14143 Perforation Analysis of the Aluminum Alloy Sheets Subjected to High Rate of Loading and Heated Using Thermal Chamber: Experimental and Numerical Approach

Authors: A. Bendarma, T. Jankowiak, A. Rusinek, T. Lodygowski, M. Klósak, S. Bouslikhane

Abstract:

An analysis of the mechanical characteristics and dynamic behavior of aluminum alloy sheets in perforation tests, based on experiments coupled with numerical simulation, is presented. Impact problems (penetration and perforation) of metallic plates have been of interest for a long time; experimental, analytical and numerical studies have been carried out to analyze the perforation process in detail, and based on these approaches the ballistic properties of the material have been studied. A laser sensor is used during the experiments to measure the initial and residual velocities, from which the ballistic curve and the ballistic limit are obtained. The energy balance is also reported, together with the energy absorbed by the aluminum. A high speed camera helps to estimate the failure time and to calculate the impact force. A wide range of initial impact velocities, from 40 up to 180 m/s, has been covered during the tests. The mass of the conical nose shaped projectile is 28 g, its diameter is 12 mm, and the thickness of the aluminum sheet is 1.0 mm. The ABAQUS/Explicit finite element code has been used to simulate the perforation processes. The numerically obtained ballistic curve is compared with and verified against the experimental one, and the failure patterns are presented using optimal mesh densities that provide stable results. A good agreement between the numerical and experimental results is observed.
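
For orientation, the energy absorbed by the sheet in such a test follows directly from the measured velocities; the 28 g projectile mass is from the abstract, while the velocity pair below is illustrative only:

$$ E_{\mathrm{abs}} = \tfrac{1}{2}\, m_p \left( V_0^{2} - V_r^{2} \right) = \tfrac{1}{2}\,(0.028\ \mathrm{kg})\left(120^{2} - 100^{2}\right)\mathrm{m^{2}\,s^{-2}} \approx 62\ \mathrm{J}. $$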

Keywords: Aluminum alloy, ballistic behavior, failure criterion, numerical simulation.

14142 Comparison of Particle Swarm Optimization and Genetic Algorithm for TCSC-based Controller Design

Authors: Sidhartha Panda, N. P. Padhy

Abstract:

Recently, genetic algorithms (GA) and particle swarm optimization (PSO) technique have attracted considerable attention among various modern heuristic optimization techniques. Since the two approaches are supposed to find a solution to a given objective function but employ different strategies and computational effort, it is appropriate to compare their performance. This paper presents the application and performance comparison of PSO and GA optimization techniques, for Thyristor Controlled Series Compensator (TCSC)-based controller design. The design objective is to enhance the power system stability. The design problem of the FACTS-based controller is formulated as an optimization problem and both the PSO and GA optimization techniques are employed to search for optimal controller parameters. The performance of both optimization techniques in terms of computational time and convergence rate is compared. Further, the optimized controllers are tested on a weakly connected power system subjected to different disturbances, and their performance is compared with the conventional power system stabilizer (CPSS). The eigenvalue analysis and non-linear simulation results are presented and compared to show the effectiveness of both the techniques in designing a TCSC-based controller, to enhance power system stability.
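
To make the comparison concrete, a plain global-best PSO loop is sketched below on a toy objective; the actual TCSC controller parameters and the eigenvalue-based damping objective of the paper are not reproduced, and a GA would differ only in using selection, crossover and mutation instead of the velocity update.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best particle swarm optimization over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))          # candidate parameter sets
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()                     # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Toy objective standing in for the controller-tuning cost function.
best, fbest = pso(lambda p: np.sum((p - 1.0) ** 2), bounds=[(-5, 5)] * 3)
print(best, fbest)       # converges near [1, 1, 1] for this toy problem
```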

Keywords: Thyristor Controlled Series Compensator, genetic algorithm, particle swarm optimization, Phillips-Heffron model, power system stability.

14141 VaR Forecasting in Times of Increased Volatility

Authors: Ivo Jánský, Milan Rippel

Abstract:

The paper evaluates several hundred one-day-ahead VaR forecasting models in the period between 2004 and 2009 on data from six world stock indices - DJI, GSPC, IXIC, FTSE, GDAXI and N225. The models describe the mean using ARMA processes with up to two lags and the variance with one of the GARCH, EGARCH or TARCH processes with up to two lags. The models are estimated on data from the in-sample period, and their forecasting accuracy is evaluated on the out-of-sample data, which are more volatile. The main aim of the paper is to test whether a model estimated on data with lower volatility can be used in periods with higher volatility. The evaluation is based on the conditional coverage test and is performed on each stock index separately. The primary result of the paper is that volatility is best modelled using a GARCH process and that an ARMA pattern cannot be found in the analyzed time series.
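
The variance part of such a model can be sketched with a plain GARCH(1,1) recursion and a normal quantile; the parameter values and the simulated "returns" below are placeholders, and the ARMA mean equation and the EGARCH/TARCH variants are omitted.

```python
import numpy as np

def garch11_var_forecast(returns, omega, alpha, beta, mu=0.0, coverage=0.01):
    """One-day-ahead VaR from a GARCH(1,1) volatility recursion:
    sigma2_{t+1} = omega + alpha * eps_t**2 + beta * sigma2_t."""
    eps = returns - mu
    sigma2 = np.var(returns)                        # initialise with sample variance
    for e in eps:
        sigma2 = omega + alpha * e**2 + beta * sigma2
    z = -2.326 if coverage == 0.01 else -1.645      # normal quantile (1% / 5%)
    return mu + z * np.sqrt(sigma2)                 # VaR expressed as a return

rng = np.random.default_rng(1)
returns = 0.01 * rng.standard_normal(500)           # placeholder for index log-returns
print(garch11_var_forecast(returns, omega=1e-6, alpha=0.08, beta=0.90))
```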

Keywords: VaR, risk analysis, conditional volatility, GARCH, EGARCH, TARCH, moving average process, autoregressive process.

14140 Energy Loss Reduction in Oil Refineries through Flare Gas Recovery Approaches

Authors: Majid Amidpour, Parisa Karimi, Marzieh Joda

Abstract:

In recent years, the release of undesirable combustion by-products has become a challenging issue in the oil industry. Flaring, one of the main sources of air contamination, has detrimental and long-lasting effects on human health and is considered a substantial cause of energy losses worldwide. This research studies the implications of two main flare gas recovery methods at three oil refineries, all in Iran, referred to as case I, case II, and case III in order of increasing production capacity. In the proposed methods, flare gases are converted into more valuable products before combustion by the flare networks. The first approach involves collecting, compressing and converting the flare gas into smokeless fuel which can be used in the fuel gas system of the refineries. The other scenario includes utilizing the flare gas as a feed into the liquefied petroleum gas (LPG) production unit already established in the refineries. The processes of these scenarios are simulated, and the capital investment is calculated for each procedure. The cumulative profits of the scenarios are evaluated using the Net Present Value method. Furthermore, a sensitivity analysis based on the total propane and butane mole fraction is carried out to make a rational comparison for the LPG production approach, and the results are illustrated for different mole fractions of propane and butane. As the mole fraction of propane and butane contained in LPG differs between summer and winter, the results corresponding to the LPG scenario are presented for each season. The simulation results show that the cumulative profit in the fuel gas production scenario and the LPG production rate increase with the capacity of the refineries. Moreover, the investment return time in the LPG production method experiences a decline, followed by a rising trend, with an increase in C3 and C4 content. The minimum return time occurs at propane plus butane concentration values of 0.7, 0.6, and 0.7 in cases I, II, and III, respectively. Based on a comparison of the investment return time and cumulative profit, fuel gas production is the superior scenario for the three case studies.
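
The economic comparison rests on the standard Net Present Value calculation sketched below; the cash-flow figures and discount rate are illustrative placeholders, not the refineries' data.

```python
def npv(cash_flows, rate):
    """Net Present Value of a series of yearly cash flows.
    cash_flows[0] is the (negative) capital investment at t = 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Illustrative numbers only: a 10 M$ investment in a flare gas recovery unit
# followed by 2.5 M$/year of fuel-gas savings over eight years.
flows = [-10.0] + [2.5] * 8
print(round(npv(flows, rate=0.10), 2))   # positive NPV => the scenario is profitable
```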

Keywords: Flare gas reduction, liquefied petroleum gas, fuel gas, net present value method, sensitivity analysis.

14139 Fault-Tolerant Control Study and Classification: Case Study of a Hydraulic-Press Model Simulated in Real-Time

Authors: Jorge Rodriguez-Guerra, Carlos Calleja, Aron Pujana, Iker Elorza, Ana Maria Macarulla

Abstract:

Society demands more reliable manufacturing processes capable of producing high quality products in shorter production cycles. New control algorithms have been studied to satisfy this paradigm, in which Fault-Tolerant Control (FTC) plays a significant role. It is suitable for detecting, isolating and adapting a system when a harmful or faulty situation appears. In this paper, a general overview of FTC characteristics is given, highlighting the properties a system must ensure to be considered faultless. In addition, the main FTC techniques are identified and classified, based on their characteristics, into two main groups: Active Fault-Tolerant Controllers (AFTCs) and Passive Fault-Tolerant Controllers (PFTCs). AFTC encompasses the techniques capable of re-configuring the process control algorithm after the fault has been detected, while PFTC comprises the algorithms robust enough to bypass the fault without further modifications. The mentioned re-configuration requires two stages, one focused on detection, isolation and identification of the fault source and the other in charge of re-designing the control algorithm by one of two approaches: fault accommodation or control re-design. From the algorithms studied, one has been selected and applied to a case study based on an industrial hydraulic press. The developed model has been embedded in a real-time validation platform, which allows testing the FTC algorithms and analysing how the system responds when a fault arises, in conditions similar to those a machine would face on the factory floor. One AFTC approach has been selected as the methodology the system follows in the fault recovery process. In a first stage, the fault is detected, isolated and identified by means of a neural network. In a second stage, the control algorithm is re-configured to overcome the fault and continue working without human interaction.

Keywords: Fault-tolerant control, electro-hydraulic actuator, fault detection and isolation, control re-design, real-time.

14138 Model Order Reduction of Discrete-Time Systems Using Fuzzy C-Means Clustering

Authors: Anirudha Narain, Dinesh Chandra, Ravindra K. S.

Abstract:

A computationally simple approach to model order reduction for single-input single-output (SISO), linear time-invariant discrete systems modeled in the frequency domain is proposed in this paper. The denominator of the reduced order model is determined using fuzzy C-means clustering, while the numerator parameters are found by matching the time moments and Markov parameters of the high order system.
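
A plain, self-contained fuzzy C-means routine is sketched below to show the clustering step used for the reduced denominator; the pole data and cluster count are made up, and the subsequent time-moment/Markov-parameter matching is omitted.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means (Bezdek): returns cluster centres and memberships."""
    X = np.asarray(X, dtype=float).reshape(len(X), -1)
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                                   # membership columns sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / dist ** (2.0 / (m - 1.0))
        U /= U.sum(axis=0)
    return centers, U

# Made-up magnitudes of a high-order system's poles; the two cluster centres
# act as representative values from which a reduced denominator could be built.
poles = np.array([0.91, 0.88, 0.35, 0.33, 0.30])
centers, _ = fuzzy_c_means(poles, c=2)
print(np.sort(centers.ravel()))
```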

Keywords: Model Order reduction, Discrete-time system, Fuzzy C-Means Clustering, Padé approximation.

14137 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System

Authors: Cheima Ben Soltane, Ittansa Yonas Kelbesa

Abstract:

Speaker Identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of vector quantization (VQ) and a Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of the feature extraction yields a better and more robust automatic speaker identification system. Investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initialization of the GMM, for estimating the underlying parameters in the EM step, also improved the convergence rate and the system's performance. The system further uses a relative index as a confidence measure in case of contradiction between the GMM and VQ identification results. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
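
A skeleton of the MFCC + GMM branch of such a system is sketched below with librosa and scikit-learn; the file names are hypothetical, and the VQ codebooks, LBG initialization, VAD front-end and confidence-based fusion described in the paper are omitted.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(path, sr=16000, n_mfcc=13):
    """MFCC frames for one utterance (VAD and other pre-processing omitted)."""
    y, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # (frames, n_mfcc)

# One GMM per enrolled speaker; identification by maximum average log-likelihood.
train_files = {"alice": ["alice_01.wav"], "bob": ["bob_01.wav"]}   # hypothetical paths
models = {}
for speaker, files in train_files.items():
    feats = np.vstack([mfcc_features(f) for f in files])
    models[speaker] = GaussianMixture(n_components=16, covariance_type="diag",
                                      max_iter=200).fit(feats)

test = mfcc_features("unknown.wav")                                # hypothetical path
scores = {spk: gmm.score(test) for spk, gmm in models.items()}     # mean log-likelihood
print(max(scores, key=scores.get))                                 # identified speaker
```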

Keywords: Feature Extraction, Speaker Modeling, Feature Matching, Mel Frequency Cepstrum Coefficient (MFCC), Gaussian Mixture Model (GMM), Vector Quantization (VQ), Linde-Buzo-Gray (LBG), Expectation Maximization (EM), pre-processing, Voice Activity Detection (VAD), Short Time Energy (STE), Background Noise Statistical Modeling, Closed-Set Text-Independent Speaker Identification System (CISI).

14136 Real Time Object Tracking in H.264/AVC Using Polar Vector Median and Block Coding Modes

Authors: T. Kusuma, K. Ashwini

Abstract:

This paper presents a real-time video surveillance system capable of tracking multiple objects in real time using the Polar Vector Median (PVM) and Block Coding Modes (BCM) with Global Motion Compensation (GMC). The strategy works in the compressed domain and utilizes the motion vectors and BCMs from the compressed bit stream to perform real-time object tracking. We propose to do this on the basis of the neighboring Motion Vectors (MVs), using a method called PVM. Since global motion adds to the object's native motion, for accurate tracking it is important to remove global motion from the MV field prior to further processing. The proposed method is tested on a number of standard sequences, and the results show its advantages over some current state-of-the-art methods.
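
The median idea can be illustrated with a plain Cartesian vector median of the neighbouring MVs (the paper's PVM operates on the polar magnitude/angle representation instead); the example vectors are made up.

```python
import numpy as np

def vector_median(mvs):
    """Vector median of a set of motion vectors: the member that minimises the
    summed Euclidean distance to all the others, hence robust to outliers."""
    mvs = np.asarray(mvs, dtype=float)
    dists = np.linalg.norm(mvs[:, None, :] - mvs[None, :, :], axis=2).sum(axis=1)
    return mvs[np.argmin(dists)]

# Neighbouring MVs around a tracked block (one outlier), in quarter-pel units.
neigh = [(4, 1), (5, 1), (4, 2), (30, -12), (5, 2)]
print(vector_median(neigh))       # a representative MV, unaffected by the outlier
```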

Keywords: Block coding mode, global motion compensation, object tracking, polar vector median, video surveillance.

14135 Applications of Building Information Modeling (BIM) in Knowledge Sharing and Management in Construction

Authors: Shu-Hui Jan, Shih-Ping Ho, Hui-Ping Tserng

Abstract:

Construction knowledge can be referred to and reused by the project managers and jobsite engineers involved, to alleviate problems on a construction jobsite and to reduce the time and cost of solving problems related to constructability. This paper proposes a new methodology for sharing construction knowledge using the Building Information Modeling (BIM) approach. The main characteristics of BIM include illustrating 3D CAD-based presentations, keeping information in a digital format, and facilitating easy updating and transfer of information in the 3D BIM environment. Using the BIM approach, project managers and engineers can gain knowledge related to 3D BIM and obtain feedback provided by jobsite engineers for future reference. This study addresses the application of knowledge sharing management in the construction phase of construction projects and proposes a BIM-based Knowledge Sharing Management (BIMKSM) system for project managers and engineers. The BIMKSM system is then applied in a selected case study of a construction project in Taiwan to verify the proposed methodology and demonstrate the effectiveness of sharing knowledge in the BIM environment. The results demonstrate that the BIMKSM system can be used as a visual, BIM-based knowledge sharing management platform utilizing the BIM approach and web technology.

Keywords: Construction knowledge management, building information modeling, project management, web-based information system.

14134 Improving University Operations with Data Mining: Predicting Student Performance

Authors: Mladen Dragičević, Mirjana Pejić Bach, Vanja Šimičević

Abstract:

The purpose of this paper is to develop models for predicting student success. Such models could improve the allocation of students among colleges and optimize the newly introduced model of government subsidies for higher education. To collect data, an anonymous survey was carried out among final-year undergraduate students using a random sampling method. Decision trees were created, of which the two most successful in predicting student success were chosen based on two criteria: Grade Point Average (GPA) and the time a student needs to finish the undergraduate program (time-to-degree). Decision trees have been shown to be a good method for classifying student success, and they could be further improved by increasing the survey sample and by developing specialized decision trees for each type of college. These methods have a big potential for use in decision support systems.
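
A skeleton of such a predictive model with scikit-learn is sketched below; the file name, feature columns and tree settings are hypothetical placeholders rather than the study's actual survey variables.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Decision tree classifying student success (e.g. high vs. low GPA) from
# survey answers; all column names below are hypothetical.
df = pd.read_csv("student_survey.csv")
X = df[["entrance_score", "hours_studied", "employed", "scholarship"]]
y = df["gpa_class"]                      # e.g. "high" / "low"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=20).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, tree.predict(X_te)))
```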

Keywords: Data mining, knowledge discovery in databases, prediction models, student success.

14133 Educating the Educators: Interdisciplinary Approaches to Enhance Science Teaching

Authors: Denise Levy, Anna Lucia C. H. Villavicencio

Abstract:

In a rapidly changing world, science teachers face considerable challenges. In addition to the basic curriculum, several transversal themes must be included, which demand creative and innovative strategies to be arranged and integrated into traditional disciplines. In Brazil, nuclear science is still a controversial theme, and teachers themselves seem to be unaware of the issue, most often perpetuating prejudice, errors and misconceptions. This article presents the authors' experience in the development of an interdisciplinary pedagogical proposal to include nuclear science in the basic curriculum in a transversal and integrating way. The methodology applied was based on the analysis of several normative documents that define the requirements of essential learning, competences and skills of basic education for all schools in Brazil. The didactic materials and resources were developed according to best practices to improve learning processes, favouring constructivist educational techniques, with emphasis on active learning, collaborative learning and learning through research. The material consists of an illustrated book for students, a book for teachers and a manual with activities that can articulate nuclear science with different disciplines: Portuguese, mathematics, science, art, English, history and geography. The content maintains high scientific rigor and articulates nuclear technology with topics of interest to society in the most diverse spheres, such as food supply, public health, food safety and foreign trade. Moreover, this pedagogical proposal takes advantage of the potential of digital technologies, implementing QR codes that excite and challenge students of all ages, improving interaction and engagement. The expected results include the education of the educators for nuclear science communication in a transversal and integrating way, demystifying nuclear technology in a contextualized and meaningful approach. It is expected that the interdisciplinary pedagogical proposal will contribute to improving attitudes towards knowledge construction, privileging reconstructive questioning, fostering a culture of systematic curiosity and encouraging critical thinking skills.

Keywords: Science education, interdisciplinary learning, nuclear science, scientific literacy.

14132 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore

Authors: Ronal Muresano, Andrea Pagano

Abstract:

Nowadays, mathematical and statistical applications are developed with greater complexity and accuracy. However, this precision and complexity mean that applications need more computational power in order to execute faster. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications, since they allow more parallelism to be included inside a node. However, taking advantage of this parallelism is not an easy task, because we have to deal with problems such as core communication, data locality, memory sizes (cache and RAM), synchronization, data dependencies in the model, etc. These issues become more important when we wish to improve the application's performance and scalability. Hence, this paper describes an optimization method developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool developed by the European Commission, which is based on analyzing the application's weaknesses in order to exploit the advantages of the multicore architecture. All these improvements are made in an automatic and transparent manner with the aim of improving the performance metrics of the tool. Finally, experimental evaluations show the effectiveness of the new optimized version, which achieves a considerable improvement in execution time: the time has been reduced by around 96% in the best case tested, comparing the original serial version with the automatic parallel version.
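
For orientation, a 96% reduction in execution time corresponds to a speedup of

$$ S = \frac{T_{\mathrm{serial}}}{T_{\mathrm{parallel}}} = \frac{1}{1 - 0.96} = 25\times $$

over the original serial version.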

Keywords: Algorithm optimization, Bank Failures, OpenMP, Parallel Techniques, Statistical tool.

14131 Mediation in Turkish Health Law for Healthcare Disputes

Authors: V. Durmus, M. Uydaci

Abstract:

In order to prevent overburdened courts, rising litigation costs, and lengthy trial resolutions, the Law on Mediation for Civil Disputes, aimed at defining the procedure and guiding principles for dispute resolution under civil law, was enacted in 2012. This "Mediation Code" also applies to civil healthcare disputes in Turkey. Aside from mediation, reconciliation, governed by Articles 253-255 of the Criminal Procedure Law, has emerged as an alternative way to resolve criminal medical disputes, although the difference between mediation and reconciliation is mostly procedural. This article deals with mediation in Turkish health law and with aspects of medical malpractice mediation in Turkey. In addition, this study examines the issue of mediation in health law from both a legal and a normative point of view, including codes of mediation which regulate both the structural and the professional practice of mediation providers. Although there are no official records of the success rate of medical malpractice litigation and malpractice mediation in Turkey, it is widely accepted that the success rate for medical malpractice cases is relatively low compared to other personal injury cases, even though medical malpractice case filings are generally considered to have gradually increased in recent years. According to the Justice Ministry's Department of Mediation in Turkey, 719 civil disputes have been referred to mediators since 2013 (when the first mediation law came into force), with a 98% success rate.

Keywords: Malpractice mediation, medical disputes, reconciliation, health litigation, Turkish Health Law.
