Search results for: perturb and observe algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4341

1671 The Influence of α-Defensin and Cytokine IL-1β, Molecular Factors of Innate Immune System, on Regulation of Inflammatory Periodontal Diseases in Orthodontic Patients

Authors: G. R. Khaliullina, S. L. Blashkova, I. G. Mustafin

Abstract:

The article presents the results of a study involving 97 patients with different types of orthodontic pathology. The immunological examination of patients included determination of the levels of α-defensin and the cytokine IL-1β in mixed saliva. The study showed that the level of α-defensin serves as a diagnostic marker for determining therapeutic measures in the treatment of inflammatory processes in periodontal tissues. α-Defensins exhibit immunomodulating and antimicrobial activity during inflammatory processes and play an important role in the regulation of periodontal pathology. The obtained data allowed the development of an algorithm for diagnosis and the implementation of immunomodulating therapy in the treatment of periodontal diseases in orthodontic patients.

Keywords: α-defensin, cytokine, orthodontic treatment, periodontal disease, periodontal pathogens

Procedia PDF Downloads 167
1670 Improved Imaging and Tracking Algorithm for Maneuvering Extended UAVs Using High-Resolution ISAR Radar System

Authors: Mohamed Barbary, Mohamed H. Abd El-Azeem

Abstract:

Maneuvering extended object tracking (M-EOT) using high-resolution inverse synthetic aperture radar (ISAR) observations has been gaining momentum recently. This work presents a new robust implementation of the multiple-model (MM) multi-Bernoulli (MB) filter for M-EOT, where the M-EOT's ISAR observations are characterized using a skewed (SK), non-symmetric normal distribution. To cope with possible abrupt changes in kinematic state, extension, and observation distribution when an extended target maneuvers, a multiple-model technique is presented based on an MB track-before-detect (TBD) filter supported by an SK sub-random-matrix model (RMM), or sub-ellipses, framework. Simulation results demonstrate the improved tracking performance of the proposed approach.
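
A minimal sketch of the multiple-model idea underlying such filters (not the paper's MB-TBD implementation): each motion model's probability is updated by how well it predicts the latest observation. All numbers here are illustrative.

```python
import math

def gauss_like(z, pred, sigma):
    """Gaussian likelihood of observation z given a model's prediction."""
    return math.exp(-0.5 * ((z - pred) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def update_model_probs(probs, preds, z, sigma=1.0):
    """Bayes update of model probabilities from one scalar observation z."""
    w = [p * gauss_like(z, m, sigma) for p, m in zip(probs, preds)]
    s = sum(w)
    return [x / s for x in w]

# A constant-velocity model predicts 10.0 and a manoeuvre model predicts 13.0;
# an observation near 13 shifts the probability mass toward the manoeuvre model.
probs = update_model_probs([0.5, 0.5], [10.0, 13.0], 12.7)
print([round(p, 3) for p in probs])
```

A full MM filter repeats this update at every scan and mixes the per-model state estimates with these weights.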

Keywords: maneuvering extended objects, ISAR, skewed normal distribution, sub-RMM, MM-MB-TBD filter

Procedia PDF Downloads 70
1669 Towards a Resources Provisioning for Dynamic Workflows in the Cloud

Authors: Fairouz Fakhfakh, Hatem Hadj Kacem, Ahmed Hadj Kacem

Abstract:

Cloud computing offers a new model of service provisioning for workflow applications, thanks to its elasticity and pay-per-use pricing model. However, it presents various challenges that need to be addressed for it to be used efficiently. The resource provisioning problem for workflow applications has been widely studied. Nevertheless, existing works do not consider changes to workflow instances while they are being executed, a capability that has become a major requirement for dealing with unusual situations and evolution. This paper presents a first step towards resource provisioning for dynamic workflows. We propose a provisioning algorithm which minimizes the overall workflow execution cost while meeting a deadline constraint, and then extend it to support the dynamic addition of tasks. Experimental results show that our proposed heuristic achieves a significant reduction in resource cost by using a consolidation process.
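
A toy sketch of deadline-constrained provisioning in the spirit described above; the VM types, speedups, and cost model are illustrative assumptions, not the paper's heuristic.

```python
# Hypothetical sketch: pick the cheapest VM type that lets a sequential
# workflow finish before its deadline.

def provision(task_times, vm_types, deadline):
    """task_times: base durations (hours on a 1x-speed machine).
    vm_types: list of (name, speedup, cost_per_hour) tuples.
    Returns (name, cost) of the cheapest type meeting the deadline, or None."""
    best = None
    for name, speedup, cost_per_hour in vm_types:
        runtime = sum(task_times) / speedup
        if runtime <= deadline:
            cost = runtime * cost_per_hour
            if best is None or cost < best[1]:
                best = (name, cost)
    return best

vms = [("small", 1.0, 0.10), ("medium", 2.0, 0.25), ("large", 4.0, 0.60)]
print(provision([2.0, 3.0, 1.0], vms, deadline=4.0))
```

A real deadline/cost heuristic would additionally distribute tasks across several instances and consolidate under-used ones, but the cheapest-feasible-choice structure is the same.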

Keywords: cloud computing, resources provisioning, dynamic workflow, workflow applications

Procedia PDF Downloads 281
1668 A New Floating Point Implementation of Base 2 Logarithm

Authors: Ahmed M. Mansour, Ali M. El-Sawy, Ahmed T. Sayed

Abstract:

Logarithms reduce products to sums and powers to products; they play an important role in signal processing, communication, and information theory. They are primarily used in hardware calculations, handling multiplications, divisions, powers, and roots effectively. There are three commonly used bases for logarithms: the base-10 common logarithm, the base-e natural logarithm, and the base-2 binary logarithm. This paper demonstrates different methods of calculating log2, shows the complexity of each, identifies the most accurate and efficient, and gives insights into their hardware design. We present a new method, called Floor Shift, for fast calculation of log2, and then combine this algorithm with a Taylor series to improve the accuracy of the output; we illustrate this with two examples. We finally compare the algorithms and conclude with our remarks.
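
The floor-shift-plus-Taylor idea can be sketched in software: extract the binary exponent (here with `math.frexp`, standing in for a hardware shift), then refine the mantissa's logarithm with a truncated Taylor series. This is an illustration of the general approach, not the authors' hardware design.

```python
import math

def log2_floor_shift(x, terms=30):
    """Approximate log2(x) for x > 0: the binary exponent gives the
    integer ('floor shift') part; a truncated Taylor series for
    ln(1 + u) refines the mantissa's contribution."""
    m, e = math.frexp(x)            # x = m * 2**e with m in [0.5, 1)
    m, e = 2.0 * m, e - 1           # renormalise so m is in [1, 2)
    if m > math.sqrt(2.0):          # keep |u| small for fast convergence
        m, e = m / 2.0, e + 1
    u = m - 1.0                     # ln(1 + u) = u - u^2/2 + u^3/3 - ...
    ln_m = sum((-1) ** (n + 1) * u ** n / n for n in range(1, terms + 1))
    return e + ln_m / math.log(2.0)

print(round(log2_floor_shift(10.0), 6))   # matches math.log2(10) to 6 places
```

The range reduction to m <= sqrt(2) keeps |u| below about 0.42, so the truncated series converges quickly; a hardware design would trade the number of series terms against lookup-table size.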

Keywords: logarithms, log2, floor, iterative, CORDIC, Taylor series

Procedia PDF Downloads 524
1667 Mobile Application Tool for Individual Maintenance Users on High-Rise Residential Buildings in South Korea

Authors: H. Cha, J. Kim, D. Kim, J. Shin, K. Lee

Abstract:

Since the 1980s, rapid economic growth has produced a large number of aging apartment buildings in South Korea. Nevertheless, building maintenance practice remains insufficient. In this study, to facilitate building maintenance, the authors classified building defects into three levels according to their level of performance and developed a mobile application tool providing appropriate feedback for each level. The feedback structure consists of a 'maintenance manual phase', an 'online feedback phase', and a 'repair work phase of the specialty contractors'. To implement each phase, the authors devised the necessary databases and created a prototype system that can evolve on its own. The authors expect that building users will be able to maintain their buildings easily by using this application.

Keywords: building defect, maintenance practice, mobile application, system algorithm

Procedia PDF Downloads 184
1666 Method of Cluster Based Cross-Domain Knowledge Acquisition for Biologically Inspired Design

Authors: Shen Jian, Hu Jie, Ma Jin, Peng Ying Hong, Fang Yi, Liu Wen Hai

Abstract:

Biologically inspired design inspires inventions and new technologies in the field of engineering by mimicking functions, principles, and structures in the biological domain. To overcome the obstacles of cross-domain knowledge acquisition in the existing biologically inspired design process, functional semantic clustering (based on functional-feature semantic correlation) and environmental constraint clustering composition (based on the constraining adaptability of environmental characteristics) are proposed. A knowledge-cell clustering algorithm and a corresponding prototype system are developed. Finally, the effectiveness of the method is verified through the design of a visual prosthetic device.

Keywords: knowledge clustering, knowledge acquisition, knowledge based engineering, knowledge cell, biologically inspired design

Procedia PDF Downloads 422
1665 Solving Process Planning and Scheduling with Number of Operation Plus Processing Time Due-Date Assignment Concurrently Using a Genetic Search

Authors: Halil Ibrahim Demir, Alper Goksu, Onur Canpolat, Caner Erden, Melek Nur

Abstract:

Traditionally, process planning, scheduling, and due-date assignment are performed sequentially and separately. The high interrelation between these functions makes their integration very useful. Although there are numerous works on integrated process planning and scheduling, and many on scheduling with due-date assignment, only a few address the integration of all three functions. Here we tested different integration levels of these three functions and found the fully integrated version to be the best. We applied both genetic search and random search; genetic search performed better. We penalized earliness, tardiness, and due-date-related costs; since all three are undesirable, it is beneficial to penalize each of them.
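
A compact illustration of a genetic search for sequencing with earliness/tardiness penalties; the due-date rule, weights, and GA operators here are simplifying assumptions, not the paper's exact formulation.

```python
import random

# Illustrative GA for single-machine sequencing with earliness/tardiness
# penalties; due dates follow an assumed processing-time-based rule, not
# the authors' number-of-operations-plus-processing-time formulation.
random.seed(0)

PROC = [4, 2, 7, 3, 5]                      # processing times
DUE = [2 * p for p in PROC]                 # assumed due-date rule
W_E, W_T, W_D = 1.0, 2.0, 0.1               # earliness/tardiness/due-date weights

def cost(seq):
    t, total = 0, 0.0
    for j in seq:
        t += PROC[j]
        total += W_E * max(DUE[j] - t, 0) + W_T * max(t - DUE[j], 0) + W_D * DUE[j]
    return total

def crossover(a, b):
    """Order crossover: keep a slice of parent a, fill the rest from b."""
    i, k = sorted(random.sample(range(len(a)), 2))
    child = [j for j in b if j not in a[i:k]]
    return child[:i] + a[i:k] + child[i:]

def ga(pop_size=30, gens=100):
    pop = [random.sample(range(len(PROC)), len(PROC)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]        # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            c = crossover(*random.sample(elite, 2))
            if random.random() < 0.2:       # swap mutation
                i, k = random.sample(range(len(c)), 2)
                c[i], c[k] = c[k], c[i]
            children.append(c)
        pop = elite + children
    return min(pop, key=cost)

best = ga()
print(best, cost(best))
```

With elitism the best cost is non-increasing across generations, which is what makes the genetic search outperform pure random sampling on this kind of objective.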

Keywords: process planning, scheduling, due-date assignment, genetic algorithm, random search

Procedia PDF Downloads 369
1664 The Role of User Participation on Social Sustainability: A Case Study on Four Residential Areas

Authors: Hasan Taştan, Ayşen Ciravoğlu

Abstract:

The rapid growth of the human population and the environmental degradation associated with increased consumption of resources raise concerns about sustainability. Social sustainability constitutes one of the three dimensions of sustainability, together with the environmental and economic dimensions. Even though there is no agreement on what social sustainability consists of, it is well known that it necessitates user participation. Therefore, this study aims to observe and analyze the role of user participation in social sustainability. In this paper, the links between user participation and indicators of social sustainability have been examined. To achieve this, a literature review on social sustainability was first conducted; the information obtained was then used to evaluate projects carried out in developing countries with user participation. These examples are taken as role models, with their pros and cons, for the development of a checklist for the evaluation of the case studies. Furthermore, a case study of post-earthquake residential settlements in Turkey has been conducted. The case study projects were selected considering different building scales (differing numbers of residential units), the scale of the problem (post-earthquake settlements, rehabilitation of shanty dwellings), and the variety of users (differing socio-economic dimensions). The decision-making, design, building, and usage processes of the selected projects, and the actors in these processes, have been investigated in the context of social sustainability. The cases investigated include New Gourna Village by Hassan Fathy; the Quinta Monroy dwelling units in Chile by Alejandro Aravena; and the Beyköy and Beriköy projects in Turkey, which aimed to solve the housing problem that arose after the 1999 earthquake.
The study identifies possible links between social sustainability indicators and user participation, and between user participation and the peculiarities of place. Results are compared and discussed in order to find possible ways of fostering social sustainability through user participation. They show that social sustainability depends on a community's characteristics, socio-economic conditions, and user profile, but that user participation has positive effects on some social sustainability indicators such as user satisfaction, a sense of belonging, and social stability.

Keywords: housing projects, residential areas, social sustainability, user participation

Procedia PDF Downloads 384
1663 Liraglutide Augments Extra Body Weight Loss after Sleeve Gastrectomy without Change in Intrahepatic and Intra-Pancreatic Fat in Obese Individuals: Randomized, Controlled Study

Authors: Ashu Rastogi, Uttam Thakur, Jimmy Pathak, Rajesh Gupta, Anil Bhansali

Abstract:

Introduction: Liraglutide is known to induce weight loss and metabolic benefits in obese individuals. However, its effects after sleeve gastrectomy are not known. Methods: People with obesity (BMI > 27.5 kg/m2) underwent laparoscopic sleeve gastrectomy (LSG). Subsequently, participants were randomized to receive either 0.6 mg liraglutide subcutaneously daily from 6 weeks post-surgery, continued till 24 weeks (L-L group), or placebo (L-P group). Patients were assessed before surgery (baseline) and at 6, 12, 18, and 24 weeks after surgery for height, weight, waist and hip circumference, BMI, body fat percentage, HbA1c, fasting C-peptide, fasting insulin, HOMA-IR, HOMA-β, and GLP-1 levels (after a standard OGTT). MRI of the abdomen was performed prior to surgery and at 24 weeks post-operatively to estimate intra-pancreatic and intra-hepatic fat content. Outcome measures: Primary outcomes were changes in the metabolic variables of fasting and stimulated GLP-1 levels, insulin, C-peptide, and plasma glucose. Secondary variables were the insulin-resistance indices HOMA-IR and Matsuda index, and pancreatic and hepatic steatosis. Results: Thirty-eight patients undergoing LSG were screened and 29 participants were enrolled. Two patients withdrew consent and one patient died of an acute coronary event; 26 patients were randomized and their data analysed. Median BMI was 40.73±3.66 and 46.25±6.51, and EBW was 49.225±11.14 and 651.48±4.85, in the L-P and L-L groups, respectively. Baseline FPG was 132±51.48 and 125±39.68; fasting insulin 21.5±13.99 and 13.15±9.20; fasting GLP-1 2.4±0.37 and 2.4±0.32; AUC GLP-1 340.78±44 and 332.32±44.1; HOMA-IR 7.0±4.2 and 4.42±4.5 in the L-P and L-L groups, respectively. EBW loss was 47±13.20 versus 65.59±24.20 (p<0.05) in the placebo and liraglutide groups, respectively. However, we did not observe any inter-group differences in metabolic parameters, in spite of significant intra-group changes after 6 months of LSG.
Intra-pancreatic fat prior to surgery was 3.21±1.7 and 2.2±0.9 (p=0.38), decreasing to 2.14±1.8 and 1.06±0.8 (p=0.25) at 6 months in the L-P and L-L groups, respectively. Similarly, intra-hepatic fat was 1.97±0.27 and 1.88±0.36 (p=0.361) at baseline, decreasing to 1.14±0.44 and 1.36±0.47 (p=0.465) at 6 months in the L-P and L-L groups, respectively. Conclusion: Liraglutide augments extra body weight loss after sleeve gastrectomy. A decrease in intra-pancreatic and intra-hepatic fat is noticed after bariatric surgery, without additive benefit from liraglutide administration.

Keywords: sleeve gastrectomy, liraglutide, intra-pancreatic fat, insulin

Procedia PDF Downloads 188
1662 High Capacity Reversible Watermarking through Interpolated Error Shifting

Authors: Hae-Yeoun Lee

Abstract:

Reversible watermarking, which not only protects the copyright but also preserves the original quality of the digital content, has been intensively studied, and the demand for it has increased. In this paper, we propose a reversible watermarking scheme based on interpolation-error shifting and error precompensation. The intensity of a pixel is interpolated from the intensities of neighbouring pixels, and the histogram of the differences between the interpolated and original intensities is obtained and modified to embed the watermark message. By restoring the difference histogram, the embedded watermark is extracted, and the original image is recovered by compensating for the interpolation error. Overflow and underflow are prevented by error precompensation. To show the performance of the method, the proposed algorithm is compared with other methods on various test images.
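
A toy 1-D version of interpolation-error embedding conveys the reversibility idea; real schemes, including the one described above, add histogram shifting and precompensation for overflow/underflow. Everything here is an illustrative simplification.

```python
# Toy sketch of interpolation-error expansion embedding: odd-index samples
# carry one bit each in their expanded interpolation error, and the
# untouched even-index samples let the decoder recompute the predictions.

def embed(pixels, bits):
    """Embed one bit into each odd-index pixel via its interpolation error."""
    out = list(pixels)
    for i, bit in zip(range(1, len(pixels) - 1, 2), bits):
        pred = (pixels[i - 1] + pixels[i + 1]) // 2   # neighbour interpolation
        err = pixels[i] - pred
        out[i] = pred + 2 * err + bit                 # expand error, append bit
    return out

def extract(marked):
    bits, restored = [], list(marked)
    for i in range(1, len(marked) - 1, 2):
        pred = (marked[i - 1] + marked[i + 1]) // 2   # even pixels untouched
        err2 = marked[i] - pred
        bits.append(err2 % 2)
        restored[i] = pred + (err2 - err2 % 2) // 2   # undo the expansion
    return bits, restored

orig = [100, 102, 104, 101, 99, 97, 95]
marked = embed(orig, [1, 0, 1])
bits, restored = extract(marked)
print(bits, restored == orig)                         # [1, 0, 1] True
```

Because the prediction pixels are never modified, extraction recomputes the exact same predictions, which is what makes the scheme perfectly reversible.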

Keywords: reversible watermarking, high capacity, high quality, interpolated error shifting, error precompensation

Procedia PDF Downloads 317
1661 Deployment of Matrix Transpose in Digital Image Encryption

Authors: Okike Benjamin, Garba E J. D.

Abstract:

Encryption is used to conceal information from prying eyes. Presently, information and data encryption are common due to the volume of data and information in transit across the globe on a daily basis. Image encryption has yet to receive the attention from researchers that it deserves; in other words, video and multimedia documents remain exposed to unauthorized access. The authors propose image encryption using matrix transposition, and an algorithm implementing it is developed. In the proposed technique, the image to be encrypted is split into parts based on the image size, and each part is encrypted separately using matrix transposition. The actual encryption operates on the picture elements (pixels) that make up the image. After encrypting each part of the image, the positions of the encrypted parts are swapped before transmission takes place. Swapping the positions of the parts makes the encrypted image harder for a cryptanalyst to decrypt.
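
The block-transpose-and-swap idea can be sketched as follows; the quadrant split and the particular swap pattern are illustrative assumptions. Conveniently, this toy version is self-inverse, so applying it twice restores the image.

```python
def transpose(block):
    return [list(row) for row in zip(*block)]

def encrypt(img):
    """Transpose each quadrant of a square image, then swap quadrant
    positions (0<->3, 1<->2). Both steps are self-inverse, so applying
    encrypt twice restores the original image."""
    n = len(img) // 2
    quads = [[row[:n] for row in img[:n]], [row[n:] for row in img[:n]],
             [row[:n] for row in img[n:]], [row[n:] for row in img[n:]]]
    quads = [transpose(b) for b in quads]
    quads[0], quads[3] = quads[3], quads[0]           # swap part positions
    quads[1], quads[2] = quads[2], quads[1]
    top = [quads[0][i] + quads[1][i] for i in range(n)]
    bottom = [quads[2][i] + quads[3][i] for i in range(n)]
    return top + bottom

img = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
enc = encrypt(img)
print(enc)
print(encrypt(enc) == img)                            # True
```

A practical scheme would key the swap pattern and block sizes rather than fix them, since a fixed permutation offers no secrecy on its own.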

Keywords: image encryption, matrices, pixel, matrix transpose

Procedia PDF Downloads 414
1660 Optimal Maintenance and Improvement Policies in Water Distribution System: Markov Decision Process Approach

Authors: Jong Woo Kim, Go Bong Choi, Sang Hwan Son, Dae Shik Kim, Jung Chul Suh, Jong Min Lee

Abstract:

A Markov Decision Process (MDP) based methodology is implemented in order to establish the optimal maintenance schedule, i.e., the one which minimizes cost. The MDP problem is formulated using information about the current state of each pipe, the improvement cost, the failure cost, and a pipe deterioration model. The objective function and the detailed dynamic programming (DP) algorithm are modified because of the difficulty of implementing conventional DP approaches. The optimal schedule derived from the suggested model is compared to several alternative policies via Monte Carlo simulation. The validity of the solution and the improvement in computational time are demonstrated.
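
A minimal pipe-maintenance MDP solved by value iteration illustrates the approach; the states, costs, and transition probabilities below are invented for illustration, not taken from the paper.

```python
# Hypothetical pipe-maintenance MDP: states are deterioration levels 0-3,
# actions are 'keep' or 'replace'; value iteration finds the min-cost policy.

P_DETERIORATE = 0.3                          # chance of degrading per period
COST_FAIL, COST_REPLACE, GAMMA = 100.0, 20.0, 0.95
STATES = range(4)

def step_cost(s, a):
    if a == "replace":
        return COST_REPLACE
    return COST_FAIL if s == 3 else 0.0      # failed pipes cost every period

def next_dist(s, a):
    """Transition distribution {next_state: probability}."""
    if a == "replace":
        return {0: 1.0}
    if s == 3:
        return {3: 1.0}                      # stays failed until replaced
    return {s: 1 - P_DETERIORATE, s + 1: P_DETERIORATE}

def value_iteration(tol=1e-8):
    v = [0.0] * 4
    while True:
        new_v = [min(step_cost(s, a) + GAMMA * sum(p * v[t] for t, p in next_dist(s, a).items())
                     for a in ("keep", "replace")) for s in STATES]
        if max(abs(a - b) for a, b in zip(new_v, v)) < tol:
            return new_v
        v = new_v

v = value_iteration()
policy = ["replace" if step_cost(s, "replace") + GAMMA * v[0] <
          step_cost(s, "keep") + GAMMA * sum(p * v[t] for t, p in next_dist(s, "keep").items())
          else "keep" for s in STATES]
print([round(x, 1) for x in v], policy)
```

With these numbers the optimal policy replaces only failed pipes; raising the failure cost or deterioration rate shifts the replacement threshold earlier, which is exactly the trade-off the MDP formulation captures.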

Keywords: Markov decision processes, dynamic programming, Monte Carlo simulation, periodic replacement, Weibull distribution

Procedia PDF Downloads 416
1659 Aerobic Bioprocess Control Using Artificial Intelligence Techniques

Authors: M. Caramihai, Irina Severin

Abstract:

This paper deals with the design of an intelligent control structure for a bioprocess of Hansenula polymorpha yeast cultivation. The objective of the process control is to produce biomass in a desired physiological state. The work demonstrates that the designed hybrid control techniques (HCT) are able to recognize specific bioprocess evolution trajectories using neural networks trained for this purpose, in order to estimate the model parameters and to adjust the overall bioprocess evolution through an expert system and a fuzzy structure. The design of the control algorithm, as well as its tuning through realistic simulations, is presented. Taking into consideration the synergy of different paradigms (fuzzy logic, neural networks, and symbolic artificial intelligence), we present a complete, implemented intelligent control architecture with application in bioprocess control.

Keywords: bioprocess, intelligent control, neural nets, fuzzy structure, hybrid techniques

Procedia PDF Downloads 411
1658 Channel Estimation for LTE Downlink

Authors: Rashi Jain

Abstract:

LTE systems employ Orthogonal Frequency Division Multiplexing (OFDM) as the multiple-access technology for the downlink channels. For enhanced performance, accurate channel estimation is required. Various algorithms, such as Least Squares (LS), Minimum Mean Square Error (MMSE), and Recursive Least Squares (RLS), can be employed for this purpose. This paper proposes a channel estimation algorithm based on the Kalman filter for the LTE downlink. Using the frequency-domain pilots, the initial channel response is obtained with the LS criterion; a Kalman filter is then employed to track the channel variations in the time domain. To suppress the noise within a symbol, threshold processing is employed. The paper compares LS, MMSE, RLS, and Kalman filtering for channel estimation, with Bit Error Rate (BER), Mean Square Error (MSE), and run time as the evaluation parameters.
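
The LS-initialised Kalman tracking step can be sketched for a scalar channel; the noise variances and channel drift below are illustrative assumptions, and real LTE estimation operates per subcarrier on complex-valued channels.

```python
import random

# Toy scalar-channel sketch: an LS estimate at the first pilot initialises
# a Kalman filter that tracks a slowly varying channel gain.
random.seed(1)

Q, R = 1e-4, 0.01               # process and measurement noise variances

def track(pilots, received):
    h_est, p = received[0] / pilots[0], 1.0   # LS initialisation: h = y / x
    estimates = [h_est]
    for x, y in zip(pilots[1:], received[1:]):
        p += Q                                # predict (random-walk channel)
        z = y / x                             # per-pilot LS observation
        k = p / (p + R)                       # Kalman gain
        h_est += k * (z - h_est)              # update
        p *= (1 - k)
        estimates.append(h_est)
    return estimates

true_h = [0.8 + 0.001 * t for t in range(200)]        # slowly drifting channel
pilots = [1.0] * 200
rx = [h * x + random.gauss(0, R ** 0.5) for h, x in zip(true_h, pilots)]
est = track(pilots, rx)
print(round(est[-1], 2))
```

The gain k settles to a steady state set by the Q/R ratio, trading noise suppression against tracking lag; the thresholding step mentioned above would further suppress residual noise per symbol.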

Keywords: LTE, channel estimation, OFDM, RLS, Kalman filter, threshold

Procedia PDF Downloads 349
1657 Black-Box-Base Generic Perturbation Generation Method under Salient Graphs

Authors: Dingyang Hu, Dan Liu

Abstract:

Deep neural network (DNN) models are widely used in classification, prediction, and other task scenarios. To address the difficulty of generating generic adversarial perturbations for deep learning models under black-box conditions, a generic adversarial perturbation generation method based on a saliency map (CJsp) is proposed, which obtains salient image regions by quantifying how the input features of an image influence the output results. The method can be understood as a saliency-map attack algorithm that induces false classification results by reducing the weights of salient feature points. Experiments also demonstrate that the method achieves a high success rate in transfer attacks and serves as a batch adversarial-sample generation method.
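
A toy saliency-guided perturbation on a linear scorer illustrates the general idea of attacking the most influential features; this is not the paper's CJsp method, and the model and numbers are invented.

```python
# Toy sketch: perturb only the k most salient input features of a linear
# scorer to push the predicted class down (illustrative, not CJsp).

def scores(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def saliency_attack(W, x, k=2, eps=0.5):
    """Lower the top class's score by nudging the k most salient features."""
    c = max(range(len(W)), key=lambda i: scores(W, x)[i])
    grad = W[c]                                   # d(score_c)/dx is exact here
    salient = sorted(range(len(x)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    x_adv = list(x)
    for i in salient:
        x_adv[i] -= eps * (1 if grad[i] > 0 else -1)   # move against the gradient
    return x_adv

W = [[2.0, 0.1, -1.0], [0.5, 1.5, 0.2]]           # two classes, three features
x = [1.0, 0.2, 0.1]
adv = saliency_attack(W, x)
print(scores(W, x), scores(W, adv))
```

In the black-box setting the gradient is not available, so saliency must instead be estimated by probing how output scores change with the inputs, which is the harder problem the paper addresses.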

Keywords: adversarial sample, gradient, probability, black box

Procedia PDF Downloads 91
1656 Meta-Magnetic Properties of LaFe₁₂B₆ Type Compounds

Authors: Baptiste Vallet-Simond, Léopold V. B. Diop, Olivier Isnard

Abstract:

The antiferromagnetic itinerant-electron compound LaFe₁₂B₆ occupies a special place among rare-earth iron-rich intermetallic; it presents exotic magnetic and physical properties. The unusual amplitude-modulated spin configuration defined by a propagation vector k = (¼, ¼, ¼), remarkably weak Fe magnetic moment (0.43 μB) in the antiferromagnetic ground state, especially low magnetic ordering temperature TN = 36 K for an Fe-rich phase, a multicritical point in the complex magnetic phase diagram, both normal and inverse magnetocaloric effects, and huge hydrostatic pressure effects can be highlighted as the most relevant. Both antiferromagnetic (AFM) and paramagnetic (PM) states can be transformed into the ferromagnetic (FM) state via a field-induced first-order metamagnetic transition. Of particular interest is the low-temperature magnetization process. This process is discontinuous and evolves unexpected huge metamagnetic transitions consisting of a succession of steep magnetization jumps separated by plateaus, giving rise to an unusual avalanche-like behavior. The metamagnetic transition is accompanied by giant magnetoresistance and large magnetostriction. In the present work, we report on the intrinsic magnetic properties of the La₁₋ₓPrₓFe₁₂B₆ series of compounds exhibiting sharp metamagnetic transitions. The study of the structural, magnetic, magneto-transport, and magnetostrictive properties of the La₁₋ₓPrₓFe₁₂B₆ system was performed by combining a wide variety of measurement techniques. Magnetic measurements were performed up to µ0H = 10 T. It was found that the proportion of Pr had a strong influence on the magnetic properties of this series of compounds. At x=0.05, the ground state at 2K is that of an antiferromagnet, but the critical transition field Hc has been lowered from Hc = 6T at x = 0 to Hc = 2.5 Tat x=0.05. And starting from x=0.10, the ground state of this series of compounds is a coexistence of AFM and FM parts. 
At x=0.30, the AFM order has completely vanished, and only the FM part is left. However, we still observe meta-magnetic transitions at higher temperatures (above 100 K for x=0.30) from the paramagnetic (P) state to a forced FM state. And, of course, such transitions are accompanied by strong magneto-caloric, magnetostrictive, and magnetoresistance effects. The Curie temperatures for the probed compositions going from x=0.05 to x=0.30 were spread over the temperature range of 40 K up to 100 K.

Keywords: metamagnetism, RMB intermetallic, magneto-transport effect, metamagnetic transitions

Procedia PDF Downloads 59
1655 Solving Optimal Control of Semilinear Elliptic Variational Inequalities Obstacle Problems using Smoothing Functions

Authors: El Hassene Osmani, Mounir Haddou, Naceurdine Bensalem

Abstract:

In this paper, we investigate optimal control problems governed by semilinear elliptic variational inequalities involving constraints on the state, and more precisely, the obstacle problem. We present a relaxed formulation of the problem using smoothing functions. Since we adopt a numerical point of view, we first relax the feasible domain of the problem; then, using both mathematical programming methods and penalization methods, we obtain optimality conditions with smooth Lagrange multipliers. Numerical experiments using the IPOPT (Interior Point OPTimizer) solver are presented to verify the efficiency of our approach.
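
One common smoothing device for complementarity constraints is the smoothed Fischer-Burmeister function; the paper's specific smoothing functions may differ, so treat this as a generic illustration.

```python
import math

# Smoothed Fischer-Burmeister function: phi_mu(a, b) = 0 approximates the
# complementarity conditions a >= 0, b >= 0, a * b = 0, and is smooth
# everywhere for mu > 0 (illustrative of the smoothing idea, not the
# paper's exact functions).

def phi_mu(a, b, mu):
    return a + b - math.sqrt(a * a + b * b + 2 * mu * mu)

# As mu -> 0 this approaches the nonsmooth Fischer-Burmeister function,
# which vanishes exactly on complementary pairs such as (3, 0):
for mu in (1.0, 1e-2, 1e-6):
    print(mu, phi_mu(3.0, 0.0, mu))
```

Replacing the nonsmooth complementarity system with phi_mu = 0 is what allows an interior-point solver like IPOPT to work with smooth constraints and well-defined Lagrange multipliers, at the price of solving a sequence of problems with decreasing mu.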

Keywords: complementarity problem, IPOPT, Lagrange multipliers, mathematical programming, optimal control, smoothing methods, variational inequalities

Procedia PDF Downloads 164
1654 Charcoal Production from Invasive Species: Suggested Shift for Increased Household Income and Forest Plant Diversity in Nepal

Authors: Kishor Prasad Bhatta, Suman Ghimire, Durga Prasad Joshi

Abstract:

Invasive Alien Species (IAS) are considered waste forest resources in Nepal. The rapid expansion of IAS is one of the nine main drivers of forest degradation, though the extent and distribution of these species are not well known. Furthermore, the impact of IAS removal on forest plant diversity is hardly known, and the possibilities for income generation from IAS among grass-roots communities are rarely documented. Systematic sampling of 1%, with nested circular plots of 500 square meters, was performed in IAS-removed and non-removed areas, each of 30 hectares, in the Udayapur Community Forest User Group (CFUG), Chitwan, central Nepal, to observe whether the removal of IAS contributed to an increase in plant diversity. In addition, ten entrepreneurs of the Udayapur CFUG involved in charcoal production, briquette making, and marketing were interviewed, and their record-keeping booklets were reviewed, to understand whether charcoal production contributed to their income and employment. The average annual precipitation and temperature of the study area are 2100 mm and 34 degrees Celsius, respectively, with Shorea robusta as the main tree species and Eupatorium odoratum as the dominant IAS. All the interviewed households were from the 'below-poverty-line' category as per the Community Forestry Guidelines. A higher Shannon-Wiener plant diversity index at the regeneration level was observed in IAS-removed areas (2.43) than in the control site (1.95). Furthermore, the numbers of tree seedlings and saplings in the IAS-harvested blocks were significantly higher (p < 0.005) than in the unharvested one. The sale of charcoal produced through the pyrolysis of IAS in 'bio-energy kilns' contributed an average income increase of 30.95% (31,000 Nepalese rupees) for the involved households. Despite these benefits, some operational policy hurdles related to charcoal transport and taxation remain at the field level.
This study suggests that plant diversity could be increased through the removal of IAS, and that considerable economic benefits could be achieved if charcoal is substantially produced and utilized.
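
For reference, the Shannon-Wiener index compared above is computed from species counts as H = -Σ pᵢ ln pᵢ; the counts below are hypothetical.

```python
import math

def shannon_index(counts):
    """Shannon-Wiener diversity index from raw per-species counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# A more even community yields a higher index:
print(round(shannon_index([25, 25, 25, 25]), 3))   # ln 4 = 1.386
print(round(shannon_index([85, 5, 5, 5]), 3))
```

The index rises with both species richness and evenness, which is why IAS removal that frees space for varied regeneration shows up as a higher value (2.43 vs. 1.95 in the study).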

Keywords: briquette, economic benefits, pyrolysis, regeneration

Procedia PDF Downloads 268
1653 Bayesian Structural Identification with Systematic Uncertainty Using Multiple Responses

Authors: André Jesus, Yanjie Zhu, Irwanda Laory

Abstract:

Structural health monitoring (SHM) is one of the most promising technologies for averting structural risk and achieving economic savings. Analysts often have to deal with a considerable variety of uncertainties that arise during a monitoring process. Notably, the widespread application of numerical (model-based) approaches is accompanied by widespread concern about quantifying the uncertainties prevailing in their use. Some of these uncertainties are related to the deterministic nature of the model (code uncertainty), others to the variability of its inputs (parameter uncertainty) and to the discrepancy between model and experiment (systematic uncertainty). The actual process always exhibits random behaviour (observation error), even when conditions are set identically (residual variation). Bayesian inference assumes that the parameters of a model are random variables with an associated PDF, which can be inferred from experimental data. However, in many Bayesian methods the determination of systematic uncertainty can be problematic. In this work, systematic uncertainty is associated with a discrepancy function, and the numerical model and the discrepancy function are approximated by Gaussian processes (surrogate models). Finally, to avoid the computational burden of a fully Bayesian approach, the parameters that characterise the Gaussian processes are estimated in a four-stage process (the modular Bayesian approach). This methodology has been successfully applied in fields such as geoscience, biomedicine, and particle physics, but never in the SHM context. The approach considerably reduces the computational burden, although the extent of the considered uncertainties is lower (second-order effects are neglected). To successfully identify the considered uncertainties, the formulation was extended to handle multiple responses. The efficiency of the algorithm has been tested on a small-scale aluminium bridge structure subjected to thermal expansion induced by infrared heaters.
A comparison of its performance across responses measured at different points of the structure, and the associated degrees of identifiability, is also carried out. A numerical FEM model of the structure was developed, and the stiffness of its supports is treated as a parameter to calibrate. Results show that the modular Bayesian approach performed best when responses of the same type had the lowest spatial correlation. Based on previous literature, using different types of responses (strain, acceleration, and displacement) should also improve the identifiability problem. Uncertainties due to parametric variability, observation error, residual variability, code variability, and systematic uncertainty were all recovered. For this example, the algorithm's performance was stable and considerably quicker than Bayesian methods that account for the full extent of uncertainties. Future research with real-life examples is required to fully assess the advantages and limitations of the proposed methodology.
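
A minimal Gaussian-process surrogate (squared-exponential kernel, posterior mean only) sketches the basic building block used here; the modular approach additionally places a second GP on the model/experiment discrepancy. The data and hyperparameters below are invented.

```python
import math

# Minimal GP surrogate in pure Python: fit noisy observations of a
# response and predict at a new input (illustrative only).

def kernel(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance between scalar inputs a and b."""
    return var * math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, y):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_mean(x_train, y_train, x_new, noise=1e-6):
    """Posterior mean: k(x_new, X) @ (K + noise*I)^{-1} @ y."""
    K = [[kernel(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(x_train)] for i, a in enumerate(x_train)]
    alpha = solve(K, y_train)
    return sum(kernel(x_new, a) * w for a, w in zip(x_train, alpha))

xs = [0.0, 1.0, 2.0, 3.0]          # e.g. temperatures applied to the structure
ys = [0.0, 0.8, 0.9, 0.1]          # corresponding measured responses
print(round(gp_mean(xs, ys, 1.5), 3))
```

In the modular scheme, one such GP emulates the FEM model and another absorbs the systematic discrepancy, with their hyperparameters fitted stage by stage rather than jointly.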

Keywords: Bayesian, calibration, numerical model, system identification, systematic uncertainty, Gaussian process

Procedia PDF Downloads 322
1652 Road Vehicle Recognition Using Magnetic Sensing Feature Extraction and Classification

Authors: Xiao Chen, Xiaoying Kong, Min Xu

Abstract:

This paper presents a road vehicle detection approach for intelligent transportation systems. The approach uses a low-cost magnetic sensor and an associated data collection system to collect magnetic signals. The system measures changes in the magnetic field, and it can also detect and count vehicles. We extend Mel-frequency cepstral coefficients to analyze vehicle magnetic signals. Vehicle-type features are extracted using representations of the cepstrum, frame energy, and gap cepstrum of the magnetic signals. We design a 2-dimensional map algorithm using vector quantization to classify vehicle magnetic features into four vehicle types typical of Australian suburbs: sedan, van, truck, and bus. Experimental results show that our approach achieves a high level of accuracy for vehicle detection and classification.
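
Vector-quantization classification can be sketched with one codebook vector per class; the 2-D "features" below are made-up stand-ins for the cepstral features described above.

```python
# Sketch of vector-quantization classification: each vehicle class keeps a
# codebook vector (here simply the mean of its training features), and a
# new signal is assigned to the class with the nearest codeword.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(codebook, feature):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(codebook, key=lambda label: dist2(codebook[label], feature))

train = {
    "sedan": [[1.0, 0.2], [1.1, 0.3]],
    "van":   [[2.0, 0.8], [2.2, 0.7]],
    "truck": [[3.5, 1.5], [3.6, 1.4]],
    "bus":   [[4.8, 1.1], [5.0, 1.2]],
}
codebook = {label: centroid(v) for label, v in train.items()}
print(classify(codebook, [2.1, 0.75]))   # → van
```

A real VQ classifier would learn several codewords per class (e.g. via k-means over the cepstral features) rather than a single mean, but the nearest-codeword decision rule is the same.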

Keywords: vehicle classification, signal processing, road traffic model, magnetic sensing

Procedia PDF Downloads 312
1651 Research on ARQ Transmission Technique in Mars Detection Telecommunications System

Authors: Zhongfei Cai, Hui He, Changsheng Li

Abstract:

This paper studies the automatic repeat request (ARQ) transmission technique in a Mars detection telecommunications system and proposes an ARQ method applied to the Proximity-1 space link protocol. To ensure efficient and reliable data transmission, the method combines the characteristics of different ARQ schemes. Considering the Mars detection communication environment, the paper analyzes the saturation throughput rate, packet dropping probability, average delay, and energy efficiency of different ARQ algorithms. Combining these results with the theory of ARQ transmission, an ARQ transmission scheme for a Mars detection telecommunications system is established. The simulation results show that the algorithm achieves excellent saturation throughput and energy efficiency with low complexity.
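
A back-of-envelope stop-and-wait ARQ model shows why the long round-trip delay dominates throughput on such links; the parameters are illustrative, not Proximity-1 values.

```python
# Back-of-envelope stop-and-wait ARQ throughput for a long-delay link:
# each frame waits out the round trip, and losses multiply the expected
# number of transmissions.

def saw_throughput(frame_bits, rate_bps, rtt_s, loss_prob):
    """Effective throughput: useful bits per expected delivery time."""
    t_frame = frame_bits / rate_bps
    tries = 1.0 / (1.0 - loss_prob)          # expected transmissions per frame
    return frame_bits / ((t_frame + rtt_s) * tries)

# On a 2 s round trip, the RTT (not the 256 kbps data rate) dominates:
print(round(saw_throughput(8192, 256_000, 2.0, 0.01), 1))
```

This kind of delay penalty is why practical deep-space ARQ designs pipeline many frames per round trip (go-back-N or selective repeat) instead of stopping and waiting, and why the paper evaluates throughput, delay, and energy jointly.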

Keywords: ARQ, Mars, CCSDS, Proximity-1, deep space

Procedia PDF Downloads 331
1650 Assessing the Effect of Waste-based Geopolymer on Asphalt Binders

Authors: Amani A. Saleh, Maram M. Saudy, Mohamed N. AbouZeid

Abstract:

Asphalt cement concrete is a very commonly used material in road construction. It has many advantages, such as ease of use and high user satisfaction in terms of comfort and safety on the road. However, asphalt cement concrete also has drawbacks, such as a high carbon footprint, which makes it environmentally unfriendly; in addition, pavements require frequent maintenance, which can be costly and uneconomical. The aim of this research is to study the effect of mixing waste-based geopolymers with asphalt binders. Geopolymer mixes were prepared by combining alumino-silicate sources such as fly ash, silica fume, and metakaolin with alkali activators. The purpose of mixing geopolymers with the asphalt binder is to enhance the rheological and microstructural properties of the asphalt. This was done in two phases: the first phase developed an optimum mix design of the geopolymer additive itself, and the following phase tested the geopolymer-modified asphalt binder after the optimum geopolymer mix was added to it. The modified binder is tested according to the Superpave procedures, which include the dynamic shear rheometer to measure parameters such as rutting and fatigue cracking, and the rotational viscometer to measure workability. In addition, the microstructural properties of the modified binder are studied using environmental scanning electron microscopy (ESEM). In the testing phase, the aim is to observe whether the addition of different geopolymer percentages to the asphalt binder enhances the properties of the binder and yields desirable results. Furthermore, the tests on the geopolymer-modified binder were carried out at fixed time intervals; curing time was therefore the main parameter examined in this research.
It was observed that the addition of geopolymers to the asphalt binder improves its performance over time. It is worth mentioning that carbon emissions are expected to be reduced, since geopolymers are environmentally friendly materials that minimize carbon emissions and lead to a more sustainable environment. Additionally, the use of industrial by-products such as fly ash and silica fume is beneficial in that they are recycled into geopolymer production instead of accumulating in landfills and wasting space.

Keywords: geopolymer, rutting, superpave, fatigue cracking, sustainability, waste

Procedia PDF Downloads 122
1649 Density-based Denoising of Point Cloud

Authors: Faisal Zaman, Ya Ping Wong, Boon Yian Ng

Abstract:

Point cloud data used for surface reconstruction is usually contaminated with noise and outliers. To overcome this, we present a novel approach that uses a modified kernel density estimation (KDE) technique with bilateral filtering to remove noisy points and outliers. First, we present a method for estimating the optimal bandwidth of a multivariate KDE using particle swarm optimization, which ensures robust density estimation. We then use the mean-shift algorithm to find the local maxima of the density estimate, which give the cluster centroids, and compute the distance of each point from its centroid. Points identified as outliers are then removed by an automatic thresholding scheme, which yields an accurate and economical point surface. Experimental results show that our approach is comparably robust and efficient.
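A minimal sketch of the KDE/mean-shift idea follows, with a fixed scalar bandwidth in place of the paper's PSO-estimated bandwidth, and a simple density-quantile cut standing in for the distance-thresholding step; all parameter values are illustrative:

```python
import numpy as np

def gaussian_kde(points, query, h):
    """Multivariate Gaussian KDE with scalar bandwidth h, evaluated at `query`."""
    d = points.shape[1]
    sq = ((query[:, None, :] - points[None, :, :])**2).sum(axis=2) / h**2
    norm = len(points) * (2 * np.pi)**(d / 2) * h**d
    return np.exp(-0.5 * sq).sum(axis=1) / norm

def mean_shift_mode(points, x, h, n_iter=30):
    """Follow the mean-shift vector from x to a local maximum of the KDE."""
    for _ in range(n_iter):
        w = np.exp(-0.5 * ((points - x)**2).sum(axis=1) / h**2)
        x = (w[:, None] * points).sum(axis=0) / w.sum()
    return x

def denoise(points, h=0.5, quantile=0.05):
    """Drop the lowest-density points as outliers (a simplified stand-in
    for the paper's centroid-distance thresholding scheme)."""
    dens = gaussian_kde(points, points, h)
    return points[dens > np.quantile(dens, quantile)]
```

The quantile cut is crude but shows the mechanism: isolated outliers sit in low-density regions of the KDE and are discarded before surface reconstruction.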

Keywords: point preprocessing, outlier removal, surface reconstruction, kernel density estimation

Procedia PDF Downloads 339
1648 New Approaches for the Handwritten Digit Image Features Extraction for Recognition

Authors: U. Ravi Babu, Mohd Mastan

Abstract:

This paper proposes a novel approach to handwritten digit recognition. It extracts digit image features based on a distance measure and derives an algorithm to classify the digit images. The distance measure is computed on the thinned image; thinning is one of the standard preprocessing techniques in image processing. The paper concentrates mainly on extracting features from the digit image for effective recognition of the numeral. To assess its effectiveness, the proposed method was tested on the MNIST, CENPARMI, and CEDAR databases, as well as newly collected data. The method was applied to more than 100,000 digit images and achieved good comparative results, with a recognition rate of about 97.32%.
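One plausible instance of a distance-measure feature (not necessarily the authors' exact definition) is the mean centroid-to-pixel distance per angular sector of the thinned binary image, paired with a minimum-distance classifier:

```python
import numpy as np

def distance_features(img, n_sectors=8):
    """Distance-based features from a (thinned) binary digit image:
    mean centroid-to-pixel distance in each angular sector."""
    ys, xs = np.nonzero(img)
    cy, cx = ys.mean(), xs.mean()
    dy, dx = ys - cy, xs - cx
    dist = np.hypot(dy, dx)
    ang = np.arctan2(dy, dx)                       # range [-pi, pi]
    sector = ((ang + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    feats = np.zeros(n_sectors)
    for s in range(n_sectors):
        mask = sector == s
        if mask.any():
            feats[s] = dist[mask].mean()
    return feats

def nearest_class(feats, prototypes):
    """Minimum-distance classifier over per-class prototype feature vectors."""
    d = np.linalg.norm(prototypes - feats, axis=1)
    return int(np.argmin(d))
```

The prototypes would be averaged feature vectors per digit class from training data; thinning itself (e.g. Zhang-Suen) is assumed to have been applied beforehand.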

Keywords: handwritten digit recognition, distance measure, MNIST database, image features

Procedia PDF Downloads 454
1647 Image Compression Using Block Power Method for SVD Decomposition

Authors: El Asnaoui Khalid, Chawki Youness, Aksasse Brahim, Ouanan Mohammed

Abstract:

In recent decades, rapid growth in the development of and demand for multimedia products has strained device bandwidth and network storage capacity. Consequently, data compression has become more significant for reducing data redundancy and saving transmission and storage resources. In this context, this paper addresses the problem of lossless and near-lossless image compression. The proposed method is based on the block SVD power method, which overcomes the disadvantages of MATLAB's built-in SVD function. Experimental results show that the proposed algorithm offers better compression performance than existing compression algorithms that rely on MATLAB's SVD function. In addition, the proposed approach is simple and can provide different degrees of error resilience, yielding better image compression in a short execution time.
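A sketch of a block (subspace) power iteration for a rank-k SVD, the general technique the paper builds on, is shown below; the iteration count and QR-based orthonormalization are implementation choices, not taken from the paper:

```python
import numpy as np

def block_svd(A, k, n_iter=50, seed=0):
    """Rank-k truncated SVD via block power (subspace) iteration with QR
    re-orthonormalization, avoiding a full SVD of A."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Q = np.linalg.qr(rng.standard_normal((n, k)))[0]   # random start block
    for _ in range(n_iter):
        Z = np.linalg.qr(A @ Q)[0]       # orthonormal basis for range(A Q)
        Q = np.linalg.qr(A.T @ Z)[0]     # orthonormal basis for range(A^T Z)
    B = A @ Q                            # m x k projection of A
    U, s, Wt = np.linalg.svd(B, full_matrices=False)
    V = Q @ Wt.T
    return U, s, V

def compress(A, k):
    """Rank-k reconstruction, i.e. the lossy 'compressed' image."""
    U, s, V = block_svd(A, k)
    return (U * s) @ V.T
```

Storing only U, s, and V (k(m + n + 1) numbers instead of mn) is what realizes the compression; larger k trades compression ratio for fidelity.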

Keywords: image compression, SVD, block SVD power method, lossless compression, near lossless

Procedia PDF Downloads 377
1646 Optimal Production and Maintenance Policy for a Partially Observable Production System with Stochastic Demand

Authors: Leila Jafari, Viliam Makis

Abstract:

In this paper, the joint optimization of the economic manufacturing quantity (EMQ), safety stock level, and condition-based maintenance (CBM) policy is presented for a partially observable, deteriorating system subject to random failure. The demand is stochastic and is described by a Poisson process. The stochastic model is developed and the optimization problem is formulated in the semi-Markov decision process framework. A modification of the policy iteration algorithm is developed to find the optimal policy. A numerical example is presented to compare the optimal policy with a policy that assumes zero safety stock.
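The policy iteration step can be sketched for an ordinary finite MDP (the paper's semi-Markov, partially observable setting is considerably more involved); the toy transition matrices and rewards below are invented for illustration:

```python
import numpy as np

def policy_iteration(P, R, gamma=0.95):
    """Standard policy iteration for a finite MDP.
    P[a] is the S x S transition matrix under action a; R[a] is the
    length-S reward vector under action a."""
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = np.stack([P[policy[s]][s] for s in range(n_states)])
        r_pi = np.array([R[policy[s]][s] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: greedy one-step lookahead.
        Q = np.stack([R[a] + gamma * P[a] @ v for a in range(n_actions)])
        new_policy = Q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy
```

The paper's modification would replace the evaluation step with semi-Markov sojourn-time dynamics and expand the state to include the belief about the unobservable deterioration level.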

Keywords: condition-based maintenance, economic manufacturing quantity, safety stock, stochastic demand

Procedia PDF Downloads 458
1645 Numerical Simulation of Rayleigh Benard Convection and Radiation Heat Transfer in Two-Dimensional Enclosure

Authors: Raoudha Chaabane, Faouzi Askri, Sassi Ben Nasrallah

Abstract:

A new numerical algorithm is developed to solve coupled convection-radiation heat transfer in a two-dimensional enclosure. Radiative heat transfer in a participating medium is computed using the control volume finite element method (CVFEM). The radiative transfer equation (RTE) is formulated for an absorbing, emitting, and scattering medium. The density, velocity, and temperature fields are calculated using a double-population lattice Boltzmann equation (LBE). To test the efficiency of the developed method, Rayleigh-Benard convection with and without radiative heat transfer is analyzed. The obtained results are validated against available work in the literature, and the proposed method is found to be efficient, accurate, and numerically stable.
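The flow half of a double-population LBE can be sketched as a single-population D2Q9 BGK step (the temperature population and the radiation coupling are omitted here); the lattice size and relaxation time are arbitrary choices for illustration:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and their weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """D2Q9 second-order equilibrium distribution."""
    cu = 3.0 * np.einsum('qd,xyd->qxy', c, u)
    usq = 1.5 * (u**2).sum(axis=-1)
    return w[:, None, None] * rho * (1 + cu + 0.5 * cu**2 - usq)

def lbm_step(f, tau=0.6):
    """One BGK collision + periodic streaming step of the LBE."""
    rho = f.sum(axis=0)
    u = np.einsum('qd,qxy->xyd', c, f) / rho[..., None]
    f = f - (f - equilibrium(rho, u)) / tau        # BGK collision
    for q in range(9):                             # streaming (periodic)
        f[q] = np.roll(f[q], tuple(c[q]), axis=(0, 1))
    return f
```

In the double-population scheme a second distribution evolves the temperature field with an analogous step, and the CVFEM radiation solve would add a source term to it; both are left out of this sketch.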

Keywords: participating media, LBM, CVFEM, radiation coupled with convection

Procedia PDF Downloads 398
1644 Efficient Neural and Fuzzy Models for the Identification of Dynamical Systems

Authors: Aouiche Abdelaziz, Soudani Mouhamed Salah, Aouiche El Moundhe

Abstract:

This paper addresses the use of artificial neural networks (ANNs) and fuzzy inference systems (FISs) for the identification and control of dynamical systems with some degree of uncertainty. Because ANNs and FISs have an inherent ability to approximate functions and to adapt to changes in inputs and parameters, they can be used to control systems too complex for linear controllers. In this work, we show how ANNs and FISs can be arranged to form networks that learn from external data. We then present input structures that can be used with ANNs and FISs to model non-linear systems. Four systems were used to test the identification and control of the proposed structures. The results show that the ANNs and FISs, trained with the back-propagation algorithm, were effective in modeling and controlling the non-linear plants.

Keywords: non-linear systems, fuzzy set models, neural networks, control law

Procedia PDF Downloads 206
1643 Traffic Signal Control Using Citizens’ Knowledge through the Wisdom of the Crowd

Authors: Aleksandar Jovanovic, Katarina Kukic, Ana Uzelac, Dusan Teodorovic

Abstract:

Wisdom of the Crowd (WoC) is a decentralized method that uses the collective intelligence of humans. Individual guesses may be far from the target, but considered as a group they converge on good solutions to a given problem. We utilize WoC to address the challenge of controlling traffic lights at intersections on the streets of Kragujevac, Serbia, a problem that falls within the category of NP-hard problems. We employ an algorithm that leverages the swarm intelligence of bees, Bee Colony Optimization (BCO). Data on traffic signal timing at a single intersection are gathered from citizens through a survey, and the results obtained in that manner are compared to the BCO results for different traffic scenarios. We use Vissim traffic simulation software to compare the performance of the bees' and the humans' collective intelligence.
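The aggregation idea can be sketched with median pooling of citizens' green-time guesses, evaluated against a toy Webster-style uniform-delay model of a two-phase signal; the flows, cycle length, and guesses below are invented numbers, not survey data from the study:

```python
import numpy as np

def crowd_green_split(guesses):
    """Aggregate citizens' green-time guesses; the median is a robust
    'wisdom of the crowd' estimator."""
    return float(np.median(guesses))

def total_delay(g1, cycle=90.0, q1=0.4, q2=0.3):
    """Toy flow-weighted uniform delay for a two-phase signal: phase 1
    gets g1 seconds of green, phase 2 the remainder (illustrative only;
    ignores the overflow term of Webster's formula)."""
    g2 = cycle - g1
    d1 = cycle * (1 - g1 / cycle)**2 / 2   # uniform delay, approach 1
    d2 = cycle * (1 - g2 / cycle)**2 / 2   # uniform delay, approach 2
    return q1 * d1 + q2 * d2
```

The WoC claim is precisely that the delay at the pooled estimate tends to beat the delay of the average individual guess, which a BCO run would then try to improve upon.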

Keywords: wisdom of the crowd, traffic signal control, combinatorial optimization, bee colony optimization

Procedia PDF Downloads 103
1642 Phenomena-Based Approach for Automated Generation of Process Options and Process Models

Authors: Parminder Kaur Heer, Alexei Lapkin

Abstract:

Due to the global challenges of increased competition and demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through process intensification (PI), existing and new processes may attain higher efficiency. However, very few PI options are generally considered, because processes are typically analysed at the unit operation level, which limits the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space: PI can be pursued at the unit operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all intensification options can be described by their enhancement. The objective of the current work is therefore the generation of numerous phenomena-based process alternatives and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them. For example, separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena that can overcome the difficulties or drawbacks of the current process, or can enhance its effectiveness, are added to the list. For instance, a catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense, and screening is therefore carried out to discard the combinations that are meaningless.
For example, phase change phenomena need the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e. it might perform reaction alone or reaction with separation. The combinations are then allotted to the functions needed for the process, which creates a series of options for carrying out each function. Combining these options for the different functions leads to a superstructure of process options. These process options, each defined by a list of phenomena per function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then chosen subject to a performance criterion, for example product purity, or via multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the higher product yield. The current methodology can identify, produce, and evaluate process intensification options from which the optimal process can be determined, and it can be applied to any chemical or biochemical process because of its generic nature.
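The combine-then-screen step can be sketched with a co-occurrence constraint of the kind described (phase change requiring energy transfer); the phenomenon names and the `requires` map are illustrative, not the paper's actual phenomena library:

```python
from itertools import combinations

def feasible_combinations(phenomena, requires, max_size=3):
    """Enumerate phenomena combinations up to max_size and screen out
    infeasible ones. `requires` maps a phenomenon to the set of phenomena
    that must co-occur with it (e.g. phase change needs energy transfer)."""
    options = []
    for r in range(1, max_size + 1):
        for combo in combinations(phenomena, r):
            s = set(combo)
            if all(requires.get(p, set()) <= s for p in combo):
                options.append(s)
    return options
```

Each surviving set would then be encoded as a 0/1 vector over the phenomena list and handed to the model generation algorithm, as described above.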

Keywords: phenomena, process intensification, process models, process options

Procedia PDF Downloads 226