Search results for: time complexity measurements.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7692

7362 A Novel In-Place Sorting Algorithm with O(n log z) Comparisons and O(n log z) Moves

Authors: Hanan Ahmed-Hosni Mahmoud, Nadia Al-Ghreimil

Abstract:

In-place sorting algorithms play an important role in many fields such as very large database systems, data warehouses, and data mining. Such algorithms maximize the size of data that can be processed in main memory without input/output operations. In this paper, a novel in-place sorting algorithm is presented. The algorithm comprises two phases: the first rearranges the unsorted input array in place, in linear time, into segments that are ordered relative to each other but whose elements are not yet sorted; in the second phase, the elements of each segment are sorted in place in O(z log z) time, where z is the size of the segment, using O(1) auxiliary storage. In the worst case, for an array of size n, the algorithm performs O(n log z) element comparisons and O(n log z) element moves. Further, no auxiliary arithmetic operations with indices are required. Besides these theoretical achievements, the algorithm is of practical interest because of its simplicity. Experimental results also show that it outperforms other in-place sorting algorithms. Finally, the analysis of time and space complexity and of the required number of moves is presented, along with the auxiliary storage requirements of the proposed algorithm.
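
A minimal sketch of this two-phase structure follows. It is not the authors' algorithm: heapsort stands in for the in-place O(z log z), O(1)-storage segment sort, and the quickselect-based phase 1 below is only order-correct, not linear-time as in the paper.

```python
def heapsort_segment(a, lo, hi):
    """Phase 2 stand-in: in-place O(z log z) sort of a[lo:hi] with O(1) storage."""
    def sift_down(end, root):
        while True:
            child = lo + 2 * (root - lo) + 1           # left child of root
            if child >= end:
                return
            if child + 1 < end and a[child + 1] > a[child]:
                child += 1                             # pick the larger child
            if a[root] >= a[child]:
                return
            a[root], a[child] = a[child], a[root]
            root = child
    n = hi - lo
    for r in range(lo + n // 2 - 1, lo - 1, -1):       # heapify the segment
        sift_down(hi, r)
    for end in range(hi - 1, lo, -1):                  # repeatedly extract the max
        a[lo], a[end] = a[end], a[lo]
        sift_down(end, lo)

def quickselect(a, lo, hi, k):
    """Place the element whose sorted position is k; smaller values end up left."""
    while hi - lo > 1:
        pivot = a[(lo + hi) // 2]
        i, j = lo, hi - 1
        while i <= j:
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i, j = i + 1, j - 1
        if k <= j:
            hi = j + 1
        elif k >= i:
            lo = i
        else:
            return

def two_phase_sort(a, z):
    """Phase 1: make size-z segments mutually ordered; phase 2: sort each in place."""
    n = len(a)
    lo = 0
    for b in range(z, n, z):
        quickselect(a, lo, n, b)                       # order-correct segment boundary
        lo = b
    for s in range(0, n, z):
        heapsort_segment(a, s, min(s + z, n))

data = [9, 4, 7, 1, 8, 2, 6, 3, 5, 0]
two_phase_sort(data, z=4)
assert data == list(range(10))
```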

Keywords: Auxiliary storage sorting, in-place sorting, sorting.

7361 Risk Allocation in Public-Private Partnership (PPP) Projects for Wastewater Treatment Plants

Authors: Samuel Capintero, Ole H. Petersen

Abstract:

This paper examines the utilization of public-private partnerships (PPPs) for the building and operation of wastewater treatment plants. Our research focuses on risk allocation in this kind of project. Our analysis builds on more than one hundred wastewater treatment plants built and operated through PPP projects in Aragon (Spain). The paper illustrates the consequences of inadequate management of construction risk and an unsuitable transfer of demand risk in wastewater treatment plants. It also shows that the involvement of many public bodies at the local, regional, and national levels further increases the complexity of these projects and makes time delays more likely.

Keywords: Wastewater, treatment plants, PPP, construction.

7360 Intelligent Aid-Analysis Based on the Use of Digital Twin: Application to Electronic Warfare System

Authors: L. Chaussy, M. Nouvel

Abstract:

The workload of system engineers during the Integration, Validation and Verification process of Electronic Warfare Systems (EWS) grows with the complexity of the systems and with the diversity of tested cases (the diversity of operational scenarios presented to the EWS). Even if the use of a Digital Twin makes the conception and development phases easier in terms of planning and test equipment availability, the time needed to analyze test results is still too long, and the analysis too complex. The idea for reducing the system engineers' workload and improving test coverage is to introduce intelligent aid-analysis algorithms to improve this step.

Keywords: Analysis tools, automatic testing, digital twin, electronic warfare system.

7359 A Simulation Software for DNA Computing Algorithms Implementation

Authors: M. S. Muhammad, S. M. W. Masra, K. Kipli, N. Zamhari

Abstract:

The captured gel electrophoresis image represents the output of a DNA computing algorithm. Before this image is captured, the DNA computation involves parallel overlap assembly (POA) and polymerase chain reaction (PCR), which form the core of the computing algorithm. However, the design of the DNA oligonucleotides that represent a problem is quite complicated and prone to errors. In order to reduce these errors during the design stage, before the actual in-vitro experiment is carried out, a simulation software package capable of simulating the POA and PCR processes was developed. The capability of this simulation software is unlimited, in that problems of any size and complexity can be simulated, thus saving the cost of possible errors during the design process. Information regarding the DNA sequences during the computing process, as well as the computing output, can be extracted at the same time using the simulation software.
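
As an illustration of the POA step such a simulator reproduces, the following minimal sketch (with hypothetical strands; the abstract does not describe the software's actual data structures) anneals two single strands at a complementary 3' overlap and extends them into a longer product:

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(strand):
    """Reverse complement of a 5'->3' DNA strand."""
    return strand.translate(COMPLEMENT)[::-1]

def overlap_extend(s1, s2, min_overlap=4):
    """One parallel-overlap-assembly step: if the 3' end of s1 hybridizes with
    the 3' end of s2 over at least min_overlap bases, polymerase extension
    yields the full-length top strand."""
    top2 = revcomp(s2)                    # s2's pairing region in top-strand sense
    for k in range(min(len(s1), len(s2)), min_overlap - 1, -1):
        if s1[-k:] == top2[:k]:           # complementary 3' overlap of k bases
            return s1 + top2[k:]          # extended assembly product
    return None                           # no stable hybridization

s1 = "ATTGCCGAT"                # hypothetical oligonucleotide
s2 = revcomp("CGATTACG")        # bottom strand overlapping s1 by 4 bases
print(overlap_extend(s1, s2))   # -> ATTGCCGATTACG
```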

Keywords: DNA computing, PCR, POA, simulation software.

7358 Isobaric Vapor-Liquid Equilibrium of Binary Mixture of Methyl Acetate with Isopropylbenzene at 97.3 kPa

Authors: Seema Kapoor, Baljinder K. Gill, V. K. Rattan

Abstract:

Isobaric vapor-liquid equilibrium measurements are reported for the binary mixture of methyl acetate and isopropylbenzene at 97.3 kPa. The measurements were performed using a vapor-recirculating type (modified Othmer's) equilibrium still. The mixture shows positive deviation from ideality and does not form an azeotrope. The activity coefficients have been calculated taking into consideration the vapor-phase nonideality. The data satisfy the thermodynamic consistency tests of Herington and Black. The activity coefficients have been satisfactorily correlated by means of the Margules, NRTL, and Black equations. The activity coefficients obtained from the experimental data have also been compared with the predictions of the UNIFAC model.
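
For reference, the two-parameter Margules correlation mentioned here has the standard textbook form (the paper's fitted parameter values are not given in the abstract):

```latex
\ln \gamma_1 = x_2^{2}\,\bigl[A_{12} + 2\,(A_{21}-A_{12})\,x_1\bigr], \qquad
\ln \gamma_2 = x_1^{2}\,\bigl[A_{21} + 2\,(A_{12}-A_{21})\,x_2\bigr]
```

where x1 and x2 are the liquid-phase mole fractions and A12, A21 are the constants fitted to the measured data; the NRTL and Black equations are fitted analogously.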

Keywords: Binary mixture, Isopropylbenzene, Methyl acetate, Vapor-liquid equilibrium.

7357 Exploiting Machine Learning Techniques for the Enhancement of Acceptance Sampling

Authors: Aikaterini Fountoulaki, Nikos Karacapilidis, Manolis Manatakis

Abstract:

This paper proposes an innovative methodology for Acceptance Sampling by Variables, which is a particular category of Statistical Quality Control dealing with the assurance of product quality. Our contribution lies in the exploitation of machine learning techniques to address the complexity and remedy the drawbacks of existing approaches. More specifically, the proposed methodology exploits Artificial Neural Networks (ANNs) to aid decision making about the acceptance or rejection of an inspected sample. For any type of inspection, ANNs are trained on data from the corresponding tables of a standard's sampling plan schemes. Once trained, ANNs can give closed-form solutions for any acceptance quality level and sample size, thus automating the reading of the sampling plan tables without any need to compromise with the tabulated values of the specific standard chosen each time. The proposed methodology provides quality control engineers with enough flexibility during the inspection of their samples, allowing specific needs to be taken into consideration, while it also reduces the time and cost required for these inspections. Its applicability and advantages are demonstrated through two numerical examples.
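
A minimal sketch of the idea follows, assuming hypothetical sampling-plan rows (the actual standard's tables and the paper's network topology are not given in the abstract): an ANN trained on (AQL, lot size) → (sample size n, acceptance number c) rows learns to interpolate between tabulated entries.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical rows in the spirit of a single-sampling plan table:
# inputs (AQL %, lot size) -> outputs (sample size n, acceptance number c)
X = np.array([[0.65, 500], [0.65, 1200], [1.0, 500], [1.0, 1200],
              [1.5, 500], [1.5, 1200], [2.5, 500], [2.5, 1200]], dtype=float)
y = np.array([[50, 1], [80, 1], [50, 1], [80, 2],
              [50, 2], [80, 3], [50, 3], [80, 5]], dtype=float)

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 16),
                                 max_iter=5000, random_state=0))
net.fit(X, y)                     # the ANN learns a smooth reading of the table

# "Closed-form" answer for a quality level / lot size between tabulated rows:
print(net.predict([[1.2, 800]]))  # approximate (n, c)
```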

Keywords: Acceptance Sampling, Neural Networks, Statistical Quality Control.

7356 Effect of Twelve Weeks Brisk Walking on Blood Pressure, Body Mass Index, and Anthropometric Circumference of Obese Males

Authors: Kaukab Azeem

Abstract:

Introduction: Obesity is a major health-risk issue in present-day life for everyone globally. Obesity is one of the major concerns for public health, given recent increasing trends in obesity-related diseases such as Type 2 diabetes (Kazuya, 1994) and hyperlipidemia (Sakata, 1990), which are more prevalent in Japanese adults with body mass index (BMI) values ≥ 25 kg/m2 (Japanese Ministry of Health and Welfare, 1997). The purpose of the study was to assess the effect of twelve weeks of brisk walking on the blood pressure, body mass index, and anthropometric measurements of obese males. Method: Thirty obese males (BMI above 30), aged 18 to 22 years, were selected from King Fahd University of Petroleum & Minerals, Saudi Arabia. The subjects' height (cm) was measured using a stadiometer, and body mass (kg) was measured with an electronic weighing machine. BMI was subsequently calculated (kg/m2). Blood pressure was measured with a standardized sphygmomanometer in mm Hg. All the measurements were taken twice before and twice after the experimental period. The pre- and post-experiment anthropometric measurements of waist and hip circumference were taken with a steel tape, in cm. The subjects underwent a walking schedule twice a week for 12 weeks. The 45-minute sessions of brisk walking were undertaken at an average intensity of 65% to 85% of maximum heart rate (HRmax, calculated as 220 − age). Results & Discussion: Statistical findings revealed significant changes from pre-test to post-test in both systolic and diastolic blood pressure in the walking group. Results also showed a significant decrease in body mass index and in the anthropometric measurements (waist and hip circumference). Conclusion: It was concluded that twelve weeks of brisk walking is beneficial for lowering the blood pressure, body mass index, and anthropometric circumferences of obese males.
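
The two formulas the protocol relies on are easy to restate in code (a trivial sketch of the study's stated calculations, not of its statistical analysis):

```python
def bmi(mass_kg, height_m):
    """Body mass index in kg/m^2."""
    return mass_kg / height_m ** 2

def brisk_walking_hr_band(age, low=0.65, high=0.85):
    """65-85% of HRmax, with HRmax = 220 - age, as used in the study."""
    hr_max = 220 - age
    return low * hr_max, high * hr_max

print(bmi(98.0, 1.75))            # 32.0 -> obese by the study's BMI > 30 criterion
print(brisk_walking_hr_band(20))  # (130.0, 170.0) beats/min for a 20-year-old
```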

Keywords: Anthropometric, Blood pressure, Body mass index.

7355 Wind Tunnel Investigation of the Turbulent Flow around the Panorama Giustinelli Building for VAWT Application

Authors: M. Raciti Castelli, S. Mogno, S. Giacometti, E. Benini

Abstract:

A boundary layer wind tunnel facility has been adopted in order to conduct experimental measurements of the flow field around a model of the Panorama Giustinelli Building, Trieste (Italy). Information on the main flow structures has been obtained by means of flow visualization techniques and has been compared with numerical predictions of the vortical structures spreading over the top of the roof, in order to investigate the optimal positioning of a vertical-axis wind energy conversion system. A good agreement between experimental measurements and numerical predictions is registered.

Keywords: Boundary layer wind tunnel, flow around buildings, atmospheric flow field, vertical-axis wind turbine (VAWT).

7354 The Study of the Intelligent Fuzzy Weighted Input Estimation Method Combined with the Experiment Verification for the Multilayer Materials

Authors: Ming-Hui Lee, Tsung-Chien Chen, Tsu-Ping Yu, Horng-Yuan Jang

Abstract:

The innovative intelligent fuzzy weighted input estimation method (FWIEM) can be applied to the inverse heat conduction problem (IHCP) to estimate the unknown time-varying heat flux of multilayer materials, as presented in this paper. The feasibility of the method is verified by a temperature measurement experiment. The experimental module is designed using a copper sample stacked on four aluminum samples of different thicknesses. The bottom of the copper sample is heated by a standard heat source, and the temperatures on the tops of the aluminum samples are measured by thermocouples. The temperature measurements are then used as inputs to the presented method to estimate the heat flux at the bottom of the copper sample. The influences on the estimation of the temperature measurements of samples with different thicknesses, the process noise covariance Q, the weighting factor γ, the sampling time interval Δt, and the space discretization interval Δx are investigated through the experimental verification. The results show that the method is efficient and robust in estimating the unknown time-varying heat input of multilayer materials.
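
The forward problem underlying such an inverse estimation can be sketched as a 1-D explicit finite-difference conduction model (an illustrative forward model only; the paper's fuzzy weighted estimator is not reproduced here). The inverse method then seeks the boundary flux history that reproduces the measured temperatures.

```python
import numpy as np

def step_temperature(T, q_in, dt, dx, alpha, k):
    """One explicit (FTCS) step of 1-D conduction. T: nodal temperatures,
    q_in: heat flux imposed at node 0 (W/m^2), alpha: diffusivity (m^2/s),
    k: conductivity (W/m.K). Stable for alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx ** 2
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
    Tn[0] = T[0] + r * (2 * T[1] - 2 * T[0] + 2 * dx * q_in / k)  # flux boundary
    Tn[-1] = Tn[-2]                                               # insulated far end
    return Tn

T = np.full(50, 25.0)                 # initial temperature field, deg C
for _ in range(1000):                 # heat the bottom face with a constant flux
    T = step_temperature(T, q_in=2e4, dt=1e-3, dx=1e-3, alpha=1e-4, k=200.0)
print(T[0], T[-1])                    # heated face vs far face
```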

Keywords: Multilayer Materials, Input Estimation Method, IHCP, Heat Flux.

7353 A Robust Al-Hawalees Gaming Automation using Minimax and BPNN Decision

Authors: Ahmad Sharieh, R Bremananth

Abstract:

Artificial-intelligence-based gaming is an interesting topic in state-of-the-art technology. This paper presents the automation of a traditional Omani game, called Al-Hawalees. Its related issues are resolved and implemented using an artificial intelligence approach. An AI approach called the minimax procedure is incorporated to explore the diverse moves of the on-line game. As the number of moves increases, the time complexity increases proportionally. In order to tackle the time and space complexities, we have employed a back propagation neural network (BPNN), trained off-line, to make decisions about the resources required to fulfill the automation of the game. We have utilized Levenberg-Marquardt training in order to get a rapid response during gaming. A set of optimal moves is determined by the on-line back propagation training combined with alpha-beta pruning. The results and analyses reveal that the proposed scheme can easily be incorporated in an on-line scenario with one player against the system.
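
The minimax procedure with alpha-beta pruning mentioned here has a standard generic form; the sketch below assumes game-specific callbacks (moves, apply_move, evaluate) for Al-Hawalees that the abstract does not specify:

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Generic minimax with alpha-beta pruning; returns (score, best_move)."""
    legal = moves(state, maximizing)
    if depth == 0 or not legal:
        return evaluate(state), None
    best_move = None
    if maximizing:
        value = float("-inf")
        for m in legal:
            score, _ = alphabeta(apply_move(state, m), depth - 1,
                                 alpha, beta, False, moves, apply_move, evaluate)
            if score > value:
                value, best_move = score, m
            alpha = max(alpha, value)
            if alpha >= beta:
                break            # beta cutoff: the opponent avoids this branch
    else:
        value = float("inf")
        for m in legal:
            score, _ = alphabeta(apply_move(state, m), depth - 1,
                                 alpha, beta, True, moves, apply_move, evaluate)
            if score < value:
                value, best_move = score, m
            beta = min(beta, value)
            if alpha >= beta:
                break            # alpha cutoff
    return value, best_move
```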

Keywords: Artificial neural network, back propagation, gaming, Levenberg-Marquardt, minimax procedure.

7352 Behavior of Generated Gas in Lost Foam Casting

Authors: M. Khodai, S. M. H. Mirbagheri

Abstract:

In the lost foam casting process, the melting point of the metal, as well as the volume and rate of foam degradation, have a significant effect on the mold filling pattern. Therefore, gas generation capacity and gas gap length are two important parameters for modeling the mold filling time of lost foam casting processes. In this paper, the gas gap length at the liquid-foam interface for a low melting point alloy (aluminum) and a high melting point alloy (carbon steel) is investigated by the photography technique. Results of the photography technique indicated that the gas gap length and the mold filling time increase with increased coating thickness and foam density. The gas gap lengths measured for aluminum and carbon steel depend on the foam density and were approximately 4-5 mm and 25-60 mm, respectively. Using a new system, the gas generation capacity for aluminum and steel was measured. The measurements indicated that the gas generation in aluminum and carbon-steel lost foam casting was about 50 cc/g and 3200 cc/g of polystyrene, respectively.

Keywords: gas gap, lost foam casting, photography technique.

7351 Modeling and Simulation of Robotic Arm Movement using Soft Computing

Authors: V. K. Banga, Jasjit Kaur, R. Kumar, Y. Singh

Abstract:

In this research paper, we present a control architecture for robotic arm movement and trajectory planning using Fuzzy Logic (FL) and Genetic Algorithms (GAs). This architecture is used to compensate for uncertainties such as movement, friction, and settling time in robotic arm movement. The genetic algorithms and fuzzy logic are used to meet the objective of optimal control of robotic arm movement. The proposed technique represents a general model for redundant structures and may be extended to other structures. Results show optimal angular movement of the joints as a result of the evolutionary process. This technique has an edge over other techniques, as it involves minimal mathematical complexity.
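
A minimal GA sketch of the idea follows, assuming a planar two-link arm with unit link lengths as a stand-in for the paper's redundant structure (the paper's actual fitness function and fuzzy component are not reproduced):

```python
import math
import random

def fk(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm (illustrative stand-in)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def ga_reach(target, pop_size=60, gens=200, sigma=0.1):
    """Minimal GA: fitness is the negative end-effector error to the target."""
    pop = [(random.uniform(-math.pi, math.pi), random.uniform(-math.pi, math.pi))
           for _ in range(pop_size)]
    def err(ind):
        x, y = fk(*ind)
        return math.hypot(x - target[0], y - target[1])
    for _ in range(gens):
        pop.sort(key=err)
        parents = pop[:pop_size // 4]                    # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)            # crossover
            child = tuple(t + random.gauss(0, sigma) for t in child)  # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=err)

best = ga_reach((1.2, 0.7))
print(best, fk(*best))   # evolved joint angles and the resulting end-effector position
```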

Keywords: Kinematics, Genetic algorithms (GAs), Fuzzy logic (FL), Optimal control.

7350 Wind Speed Data Analysis using Wavelet Transform

Authors: S. Avdakovic, A. Lukac, A. Nuhanovic, M. Music

Abstract:

Renewable energy systems are becoming a topic of great interest and investment around the world. In recent years, wind power generation has experienced very fast development worldwide. For the planning and successful implementation of good wind power plant projects, wind potential measurements are required. In these projects, the effective choice of the micro-location for wind potential measurements, the installation of the measurement station with appropriate measuring equipment, its maintenance, and the analysis of the gained data on wind potential characteristics are of great importance. In this paper, a wavelet transform is applied to analyze wind speed data, to gain insight into the characteristics of the wind and to aid the selection of suitable locations that could be the subject of wind farm construction. This approach shows that the wavelet transform can be a useful tool in the investigation of wind potential.
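
A minimal sketch of the analysis, assuming a synthetic wind speed series and the PyWavelets package (the paper's mother wavelet and decomposition depth are not stated in the abstract):

```python
import numpy as np
import pywt  # PyWavelets

# Hypothetical 10-minute average wind speed series (m/s)
rng = np.random.default_rng(0)
wind = 8 + 2 * np.sin(np.linspace(0, 20 * np.pi, 4096)) + rng.normal(0, 0.8, 4096)

# Multilevel discrete wavelet decomposition: cA4 holds the slow trend,
# cD4..cD1 hold progressively faster fluctuations (gusts, turbulence).
coeffs = pywt.wavedec(wind, "db4", level=4)

# Energy per scale is a simple summary of the wind variability structure
for name, c in zip(["cA4", "cD4", "cD3", "cD2", "cD1"], coeffs):
    print(name, len(c), float(np.sum(c ** 2)))
```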

Keywords: Wind potential, Wind speed data, Wavelet transform.

7349 Counter-Policies by Industrial Countries to Tackle Global Warming, from Perspective of the Kyoto Protocol

Authors: Yau-Ting, Sung, Hsueh-Chih, Chen, Hui-Peng, Hsiung, Hsun-Tsum, Huang

Abstract:

In accordance with the environmental impacts addressed in the Kyoto Protocol, this study aims to explore the different administrative and non-administrative measures that industrial countries, such as the United States, Germany, Japan, Korea, the Netherlands, and Britain, take to confront the increasing global warming phenomenon. By and large, these measures span versatile dimensions, including education and advocacy, economic instruments, research and development initiatives, regulatory instruments, voluntary contracts, exchangeable permits for carbon release, and public investments. The discussion covers both the economic impacts on, and the reforms undertaken by, nations affected by the Kyoto Protocol, as well as human responses to the changing global environment in the age of the Kyoto Protocol.

Keywords: Global warming, Kyoto protocol.

7348 PM10 Prediction and Forecasting Using CART: A Case Study for Pleven, Bulgaria

Authors: Snezhana G. Gocheva-Ilieva, Maya P. Stoimenova

Abstract:

Ambient air pollution with fine particulate matter (PM10) is a persistent, systemic problem in many countries around the world. The accumulation of a large number of measurements of both the PM10 concentrations and the accompanying atmospheric factors allows for their statistical modeling to detect dependencies and forecast future pollution. This study applies the classification and regression trees (CART) method for building and analyzing PM10 models. In the empirical study, average daily air data for the city of Pleven, Bulgaria over a period of 5 years are used. Predictors in the models are seven meteorological variables and time variables, as well as lagged PM10 variables and some lagged meteorological variables, delayed by 1 or 2 days with respect to the initial time series. The degree of influence of the predictors in the models is determined. The selected best CART models are used to forecast PM10 concentrations two days ahead of the last date in the modeling procedure and show very accurate results.
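
A minimal sketch of the modeling setup, with hypothetical data standing in for the Pleven series (the lag construction and cross-validation follow the abstract; the paper's actual predictor set is richer):

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical daily frame: PM10 plus two meteorological covariates
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "pm10": rng.gamma(4.0, 10.0, 400),
    "temp": rng.normal(10.0, 8.0, 400),
    "wind": rng.rayleigh(3.0, 400),
})
# Lagged predictors delayed by 1 and 2 days, as in the paper's setup
for lag in (1, 2):
    df[f"pm10_lag{lag}"] = df["pm10"].shift(lag)
    df[f"temp_lag{lag}"] = df["temp"].shift(lag)
df = df.dropna()

X, y = df.drop(columns="pm10"), df["pm10"]
tree = DecisionTreeRegressor(max_depth=6, min_samples_leaf=10, random_state=0)
print(cross_val_score(tree, X, y, cv=5, scoring="r2").mean())  # CV as keyworded
```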

Keywords: Cross-validation, decision tree, lagged variables, short-term forecasting.

7347 Measurement of Greenhouse Gas Emissions from Sugarcane Plantation Soil in Thailand

Authors: Wilaiwan Sornpoon, Sébastien Bonnet, Savitri Garivait

Abstract:

Continuous measurements of greenhouse gases (GHGs) emitted from soils are required to understand diurnal and seasonal variations in soil emissions and the related mechanisms. This understanding plays an important role in the appropriate quantification and assessment of the overall change in soil carbon flow and budget. This study monitors GHG emissions from soil under sugarcane cultivation in Thailand. The measurements were conducted over 379 days. The results showed that the total net amount of GHGs emitted from sugarcane plantation soil amounts to 36 Mg CO2eq ha-1. Carbon dioxide (CO2) and nitrous oxide (N2O) were found to be the main contributors to the emissions. For methane (CH4), the net emission was found to be almost zero. The measurement results also confirmed that soil moisture content and GHG emissions are positively correlated.

Keywords: Soil, GHG emission, Sugarcane, Agriculture, Thailand.

7346 Performance Analysis of Selective Adaptive Multiple Access Interference Cancellation for Multicarrier DS-CDMA Systems

Authors: Maged Ahmed, Ahmed El-Mahdy

Abstract:

In this paper, a Selective Adaptive Parallel Interference Cancellation (SA-PIC) technique is presented for the Multicarrier Direct Sequence Code Division Multiple Access (MC DS-CDMA) scheme. The motivation for using SA-PIC is that it gives high performance while reducing the computational complexity required to perform interference cancellation. An upper-bound expression for the bit error rate (BER) of SA-PIC under Rayleigh fading channel conditions is derived. Moreover, the implementation complexities of SA-PIC and Adaptive Parallel Interference Cancellation (APIC) are discussed and compared. The performance of SA-PIC is investigated analytically and validated via computer simulations.

Keywords: Adaptive interference cancellation, communication systems, multicarrier signal processing, spread spectrum.

7345 Hardware Centric Machine Vision for High Precision Center of Gravity Calculation

Authors: Xin Cheng, Benny Thörnberg, Abdul Waheed Malik, Najeem Lawal

Abstract:

We present a hardware-oriented method for real-time measurement of an object's position in video. The targeted application area is light spots used as references for robotic navigation. Different algorithms for dynamic thresholding are explored in combination with component labeling and Center Of Gravity (COG) computation, for the highest possible precision versus Signal-to-Noise Ratio (SNR). The method was developed with low hardware cost in focus, requiring only one convolution operation for data preprocessing.
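
A software sketch of the processing chain (dynamic thresholding, component labeling, COG) follows; the paper's hardware pipeline is not reproduced, and the mean-plus-k-sigma threshold below is just one stand-in for the dynamic-thresholding variants explored:

```python
import numpy as np
from scipy import ndimage

def light_spot_cogs(img, k=3.0):
    """Threshold dynamically at mean + k*std, label connected components,
    and return each spot's intensity-weighted (sub-pixel) center of gravity."""
    mask = img > img.mean() + k * img.std()
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(img, labels, index=range(1, n + 1))

img = np.zeros((64, 64))
img[20:23, 40:43] = [[1, 2, 1], [2, 5, 2], [1, 2, 1]]  # synthetic light spot
print(light_spot_cogs(img))                            # ~ [(21.0, 41.0)]
```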

Keywords: Dynamic thresholding, segmentation, position measurement, sub-pixel precision, center of gravity.

7344 Ice Load Measurements on Known Structures Using Image Processing Methods

Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka

Abstract:

This study employs a method based on image analysis and structure information to detect ice accumulated on known structures. The icing of marine vessels and offshore structures causes significant reductions in their efficiency and creates unsafe working conditions. Image processing methods are used to measure ice loads automatically. Most image processing methods are developed based on the analysis of captured images. In this method, ice loads on structures are calculated by defining structure coordinates and processing captured images. A pyramidal structure with nine cylindrical bars is designed as the known structure of the experimental setup. Asymmetric ice accumulated on the structure in a cold room represents the actual experimental case. Camera intrinsic and extrinsic parameters are used to define the structure coordinates in the image coordinate system according to the camera location and angle. A thresholding method is applied to the captured images to detect the iced structure in a binary image. The ice thickness of each element is calculated by combining the information from the binary image and the structure coordinates. Averaging the ice diameters from different camera views yields the ice thicknesses of the structure elements. Comparison between ice load measurements using this method and the actual ice loads shows positive correlation with an acceptable range of error. The method can be applied to complex structures by defining the structure and camera coordinates.

Keywords: Camera calibration, Ice detection, ice load measurements, image processing.

7343 Low Jitter ADPLL based Clock Generator for High Speed SoC Applications

Authors: Moorthi S., Meganathan D., Janarthanan D., Praveen Kumar P., J. Raja paul perinbam

Abstract:

An efficient architecture for a low-jitter All Digital Phase Locked Loop (ADPLL) suitable for high-speed SoC applications is presented in this paper. The ADPLL is designed using standard cells and described in a Hardware Description Language (HDL). Implemented in a 90 nm CMOS process, the ADPLL can operate from 10 to 200 MHz and achieves worst-case frequency acquisition in 14 reference clock cycles. The simulation results show that the PLL has a cycle-to-cycle jitter of 164 ps and a period jitter of 100 ps at 100 MHz. Since the digitally controlled oscillator (DCO) can achieve both high resolution and a wide frequency range, it can meet the demands of system-level integration. The proposed ADPLL can easily be ported to different processes in a short time, reducing the design time and design complexity of the ADPLL and making it very suitable for System-on-Chip (SoC) applications.

Keywords: All Digital Phase Locked Loop (ADPLL), System-on-Chip (SoC), Phase Locked Loop (PLL), Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL), Digitally Controlled Oscillator (DCO), Phase Frequency Detector (PFD), and Voltage Controlled Oscillator (VCO).

7342 Moving Object Detection Using Histogram of Uniformly Oriented Gradient

Authors: Wei-Jong Yang, Yu-Siang Su, Pau-Choo Chung, Jar-Ferr Yang

Abstract:

Moving object detection (MOD) is an important issue in advanced driver assistance systems (ADAS). Two important classes of moving objects in ADAS are pedestrians and scooters. In real-world systems, there are two important challenges for MOD: computational complexity and detection accuracy. Histogram of oriented gradient (HOG) features can easily detect the edges of an object with invariance to changes in illumination and shadowing. However, to reduce the execution time for real-time systems, the image should be down-sampled, which increases the influence of outliers. For this reason, we propose histogram of uniformly-oriented gradient (HUG) features to obtain a more accurate description of the contour of the human body. In the testing phase, a support vector machine (SVM) with a linear kernel function is used. Experimental results show the correctness and effectiveness of the proposed method. With SVM classifiers, the real testing results show that the proposed HUG features achieve better classification performance than the HOG ones.
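
For context, a baseline HOG-plus-linear-SVM pipeline looks like the following sketch (with random stand-in data; the paper's HUG re-binning is not reproduced here):

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(gray):
    """Standard HOG descriptor (the baseline the paper's HUG variant improves on)."""
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

# Hypothetical training data: 64x128 grayscale windows, label 1 = pedestrian
rng = np.random.default_rng(0)
windows = rng.random((20, 128, 64))
labels = np.tile([0, 1], 10)

X = np.array([hog_features(w) for w in windows])
clf = LinearSVC(C=1.0).fit(X, labels)   # linear kernel, as in the paper
print(clf.predict(X[:3]))
```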

Keywords: Moving object detection, histogram of oriented gradient, histogram of uniformly-oriented gradient, linear support vector machine.

7341 Improvement of GVPI Insulation System Characteristics by Curing Process Modification

Authors: M. Shadmand

Abstract:

The curing process of the insulation system for electrical machines plays a determinative role in its durability and reliability. The polar structure of the insulating resin molecules and of the filler used in the insulation system can be leveraged to enhance the overall mechanical and electrical characteristics of the insulation system. The curing regime plays an important role in the mechanical and electrical characteristics of the insulation system by governing the polymerization of the resin's chain structure. In this research, the effect of applying an electric field during curing of the Global Vacuum Pressurized Impregnation (GVPI) insulation system of a traction motor was considered by performing dissipation factor, polarization and depolarization current (PDC), and voltage endurance (aging) measurements on sample test objects. The results showed a clear improvement in the mechanical strength of the insulation system, as well as better electrical characteristics in routine and long-term (aging) electrical tests. Taken together, polarization of the insulation system during the curing process would enhance the machine's lifetime.

Keywords: Insulation system, GVPI, PDC, aging.

7340 Measurement Scheme Improving for State Estimation Using Stochastic Tabu Search

Authors: T. Kerdchuen

Abstract:

This paper proposes a stochastic tabu search (STS) for improving the measurement scheme for power system state estimation. If the original measurement scheme is not observable, additional measurements, in minimum number, are added into the system by STS so that there is no critical measurement pair. Random bit-flipping and bit-exchanging perturbations are used for generating the neighborhood solutions in STS. The Pδ observability concept is used to determine the network observability. Test results for 10-bus, IEEE 14-bus, and IEEE 30-bus systems show that STS can improve the original measurement scheme to be observable without a critical measurement pair. Moreover, the results of STS are superior to those of deterministic tabu search (DTS) in terms of the best-solution hit rate.
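
The two perturbations can be sketched directly (a sketch of the neighborhood generation only; the Pδ observability test and the tabu bookkeeping are problem-specific and omitted):

```python
import random

def neighbors(scheme):
    """STS-style perturbations of a 0/1 measurement-placement vector:
    a random bit flip (add or remove one measurement) and a random bit
    exchange (relocate one measurement)."""
    flip = scheme[:]
    flip[random.randrange(len(flip))] ^= 1
    swap = scheme[:]
    ones = [i for i, b in enumerate(swap) if b == 1]
    zeros = [i for i, b in enumerate(swap) if b == 0]
    if ones and zeros:
        i, j = random.choice(ones), random.choice(zeros)
        swap[i], swap[j] = 0, 1
    return flip, swap

print(neighbors([1, 0, 1, 1, 0, 0]))  # two candidate neighbor schemes
```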

Keywords: Measurement Scheme, Power System State Estimation, Network Observability, Stochastic Tabu Search (STS).

7339 Enhancing the Performance of H.264/AVC in Adaptive Group of Pictures Mode Using Octagon and Square Search Pattern

Authors: S. Sowmyayani, P. Arockia Jansi Rani

Abstract:

This paper integrates the Octagon and Square Search pattern (OCTSS) motion estimation algorithm into the H.264/AVC (Advanced Video Coding) video codec in Adaptive Group of Pictures (AGOP) mode. The AGOP structure is computed based on scene changes in the video sequence. The octagon and square search pattern block-based motion estimation method is implemented in the inter-prediction process of H.264/AVC. Together, these methods reduce bit rate and computational complexity while maintaining the quality of the video sequence. Experiments are conducted for different types of video sequences. The results substantially prove that the bit rate, computation time, and PSNR gains achieved by the proposed method are better than those of the existing H.264/AVC with fixed GOP and AGOP. With a marginal quality gain of 0.28 dB and an average bit-rate gain of 132.87 kbps, the proposed method reduces the average computation time by 27.31 minutes compared to the existing state-of-the-art H.264/AVC video codec.
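
Pattern-based block matching of this kind reduces the number of candidate positions evaluated per block. The sketch below uses an illustrative octagon-then-square refinement with SAD cost (the paper's exact OCTSS offsets are not given in the abstract):

```python
import numpy as np

def sad(block, cand):
    """Sum of absolute differences between the current block and a candidate."""
    return np.abs(block.astype(int) - cand.astype(int)).sum()

# Illustrative search patterns (offsets relative to the current best point)
OCTAGON = [(0, 0), (-2, -1), (-2, 1), (-1, -2), (-1, 2),
           (1, -2), (1, 2), (2, -1), (2, 1)]
SQUARE = [(0, 0), (-1, -1), (-1, 0), (-1, 1), (0, -1),
          (0, 1), (1, -1), (1, 0), (1, 1)]

def pattern_search(ref, block, y, x, pattern):
    """Evaluate SAD at each pattern offset around (y, x); return the best position."""
    h, w = block.shape
    best, best_cost = (y, x), float("inf")
    for dy, dx in pattern:
        cy, cx = y + dy, x + dx
        if 0 <= cy <= ref.shape[0] - h and 0 <= cx <= ref.shape[1] - w:
            cost = sad(block, ref[cy:cy + h, cx:cx + w])
            if cost < best_cost:
                best, best_cost = (cy, cx), cost
    return best

def octss_motion_vector(ref, block, y0, x0):
    """Coarse octagon stages until the center wins, then one fine square stage."""
    y, x = y0, x0
    while True:
        ny, nx = pattern_search(ref, block, y, x, OCTAGON)
        if (ny, nx) == (y, x):
            break
        y, x = ny, nx
    return pattern_search(ref, block, y, x, SQUARE)
```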

Keywords: Block Distortion Measure, Block Matching Algorithms, H.264/AVC, Motion estimation, Search patterns, Shot cut detection.

7338 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements

Authors: Henok Hailemariam, Frank Wuttke

Abstract:

Collapsible soils are weak soils that appear to be stable in their natural state, normally a dry condition, but rapidly deform under saturation (wetting), thus generating large and unexpected settlements which often have disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for soil subsidence prediction, this model does not require moisture content as an input parameter, thus providing the opportunity to obtain accurate estimates of the relative subsidence of collapsible soils using dielectric measurement only. The prediction model is developed based on an existing relative subsidence prediction model (which is dependent on soil moisture condition) and an advanced theoretical frequency- and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original relative subsidence prediction model). For large-scale sub-surface soil exploration purposes, spatial sub-surface soil dielectric data over wide areas and to great depths of weak (collapsible) soil deposits can be obtained using non-destructive high-frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small-scale in-situ measurements, techniques such as an open-ended coaxial line with widely applicable time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small- or large-scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement. Among the resulting benefits are the preservation of the undisturbed nature of the soil as well as a reduction in the investigation costs and analysis time for the identification of weak (problematic) soils. The accuracy of prediction of the presented model is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions, and a good match between the model predictions and experimental results is obtained.

Keywords: Collapsible soil, relative subsidence, dielectric permittivity, moisture content.

7337 Optimizing the Capacity of a Convolutional Neural Network for Image Segmentation and Pattern Recognition

Authors: Yalong Jiang, Zheru Chi

Abstract:

In this paper, we study the factors which determine the capacity of a Convolutional Neural Network (CNN) model and propose ways to evaluate and adjust the capacity of a CNN model to best match it to a specific pattern recognition task. Firstly, a scheme is proposed to adjust the number of independent functional units within a CNN model to make it better fitted to a task. Secondly, the number of independent functional units in a capsule network is adjusted to fit it to the training dataset. Thirdly, a method based on Bayesian GAN is proposed to enrich the variance in the current dataset to increase its complexity. Experimental results on the PASCAL VOC 2010 Person Part dataset and the MNIST dataset show that, in both conventional CNN models and capsule networks, the number of independent functional units is an important factor that determines the capacity of a network model. By adjusting the number of functional units, the capacity of a model can better match the complexity of a dataset.
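
The capacity knob studied here can be illustrated with a toy CNN whose per-layer channel count plays the role of the number of independent functional units (a PyTorch sketch, not the paper's architecture):

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Toy CNN whose capacity is set by `units`, standing in for the
    number of independent functional units per layer."""
    def __init__(self, units=32, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, units, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(units, 2 * units, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(2 * units * 7 * 7, n_classes)  # 28x28 inputs (MNIST)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Matching capacity to task complexity: sweep `units` and validate each setting
for units in (8, 16, 32, 64):
    model = SmallCNN(units)
    logits = model(torch.zeros(1, 1, 28, 28))
    print(units, sum(p.numel() for p in model.parameters()), logits.shape)
```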

Keywords: CNN, capsule network, capacity optimization, character recognition, data augmentation, semantic segmentation.

7336 Performance Comparison of Real Time EDAC Systems for Applications On-Board Small Satellites

Authors: Y. Bentoutou

Abstract:

On-board Error Detection and Correction (EDAC) devices aim to secure the data transmitted between the central processing unit (CPU) of a satellite's onboard computer and its local memory. This paper presents a comparison of the performance of four low-complexity EDAC techniques for application in Random Access Memories (RAMs) on board small satellites. The performance of a newly proposed EDAC architecture is measured and compared with three different EDAC strategies, using the same FPGA technology. A statistical analysis of single-event upset (SEU) and multiple-bit upset (MBU) activity in commercial memories on board Alsat-1 is given for a period of 8 years.
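
The simplest building block of the kind such EDAC architectures are built from is a Hamming single-error-correcting code; a minimal sketch follows (the paper's FPGA-based architecture is more elaborate and is not reproduced):

```python
def hamming74_encode(nibble):
    """Encode 4 data bits as a 7-bit Hamming codeword (positions 1..7,
    parity bits at positions 1, 2 and 4)."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]          # covers positions 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]          # covers positions 3, 6, 7
    p4 = d[1] ^ d[2] ^ d[3]          # covers positions 5, 6, 7
    bits = [p1, p2, d[0], p4, d[1], d[2], d[3]]
    return sum(b << i for i, b in enumerate(bits))

def hamming74_correct(word):
    """Recompute the parities; a nonzero syndrome is the flipped position."""
    b = [(word >> i) & 1 for i in range(7)]
    s = (b[0] ^ b[2] ^ b[4] ^ b[6]) \
        | (b[1] ^ b[2] ^ b[5] ^ b[6]) << 1 \
        | (b[3] ^ b[4] ^ b[5] ^ b[6]) << 2
    return word ^ (1 << (s - 1)) if s else word

cw = hamming74_encode(0b1011)
assert hamming74_correct(cw ^ (1 << 5)) == cw   # a single-event upset, corrected
```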

Keywords: Error detection and correction, on-board computer, small satellite missions.

7335 Analysis and Research of Two-Level Scheduling Profile for Open Real-Time System

Authors: Yongxian Jin, Jingzhou Huang

Abstract:

In an open real-time system environment, the coexistence of different kinds of real-time and non-real-time applications confronts the system scheduling mechanism with new requirements and challenges. A two-level scheduling scheme for open real-time systems is introduced, and it is pointed out that, because hard and soft real-time applications are scheduled indistinguishably as the same type of real-time application, Quality of Service (QoS) cannot be guaranteed. The scheme has two flaws. First, it cannot differentiate the scheduling priorities of hard and soft real-time applications; that is to say, it neglects the characteristic differences between hard real-time applications and soft ones, so it does not suit a more complex real-time environment. Second, the worst-case execution time of soft real-time applications cannot be predicted exactly, so it is not worthwhile to spend much effort to ensure that no soft real-time application misses its deadline, and doing so may waste resources. In order to solve this problem, a novel two-level real-time scheduling mechanism (comprising a scheduling profile and a scheduling algorithm) which adds a process for dealing with soft real-time applications is proposed. Finally, we verify the real-time scheduling mechanism from the two aspects of theory and experiment. The results indicate that our scheduling mechanism achieves the following objectives. (1) It reflects the difference in priority when scheduling hard and soft real-time applications. (2) It ensures the schedulability of hard real-time applications; that is, their rate of missed deadlines is 0. (3) The overall rate of missed deadlines of soft real-time applications can be kept less than 1. (4) The deadline of a non-real-time application is not set, whereas the scheduling algorithm used by server S0 can avoid the "starvation" of jobs and increase QoS. By doing this, our scheduling mechanism is more compatible with different types of applications and can be applied more widely.
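
A minimal dispatch sketch of a two-level profile with the distinction argued for here (assumed semantics for illustration, not the paper's exact profile):

```python
import heapq

def dispatch(hard, soft, non_rt):
    """Pick the next job: hard real-time first (EDF; deadlines guaranteed),
    then soft real-time (EDF; occasional misses tolerated), then
    non-real-time jobs served round-robin so that no job starves."""
    if hard:
        return heapq.heappop(hard)[1]     # (deadline, job) min-heap
    if soft:
        return heapq.heappop(soft)[1]
    if non_rt:
        job = non_rt.pop(0)
        non_rt.append(job)                # rotate: round-robin quantum
        return job
    return None
```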

Keywords: Hard real-time, two-level scheduling profile, open real-time system, non-distinctive schedule, soft real-time.

7334 Finite Element Analysis of Cooling Time and Residual Strains in Cold Spray Deposited Titanium Particles

Authors: Thanh-Duoc Phan, Saden H. Zahiri, S. H. Masood, Mahnaz Jahedi

Abstract:

In this article, using finite element analysis (FEA) and an X-ray diffractometer (XRD), cold-sprayed titanium particles on a steel substrate are investigated in terms of cooling time and the development of residual strains. Three cooling-down models of the sprayed particles after the deposition stage are simulated and discussed: the first model (m1) considers the conduction effect to the substrate only; the second model (m2) considers conduction as well as convection to the environment; and the third model (m3) is the same as the second model but with the substrate heated to near the particle temperature before spraying. Thereafter, the residual strains developed in the third model are compared with experimental measurements of residual strains, which involved a Bruker D8 Advance diffractometer using CuKa radiation (40 kV, 40 mA) monochromatized with a graphite sample monochromator. For the deposition conditions of this study, a good correlation was found to exist between the FEA results and the XRD measurements of residual strains.

Keywords: cold gas dynamic spray, X-ray diffraction, explicit finite element analysis, residual strain, titanium, particle impact, deformation behavior.

7333 Creating or Destroying Objects Plan in the Graphplan Framework

Authors: Wen-xiang Gu, Zeng-yu Cai, Xin-mei Zhang, Gui-dong Jiang

Abstract:

At present, intelligent planning in the Graphplan framework is a focus of artificial intelligence, and Creating or Destroying Objects Planning (CDOP) is one of the unsolved problems of this field, and one of its difficulties, too. In this paper, we study this planning problem and bring forward the idea of transforming objects to propositions, based on which we offer an algorithm, Creating or Destroying Objects in the Graphplan framework (CDOGP). Compared to Graphplan, the new algorithm can solve not only all the problems that Graphplan solves, but also a part of CDOP. This is the first time that the idea of object-proposition has been introduced, and we emphasize the discussion of the representations of creating- or destroying-objects operators and of an algorithm in the Graphplan framework. In addition, we analyze the complexity of this algorithm.

Keywords: Graphplan, object-proposition, creating or destroying objects, CDOGP.
