Search results for: heuristic method
5165 Location Based Clustering in Wireless Sensor Networks
Authors: Ashok Kumar, Narottam Chand, Vinod Kumar
Abstract:
Due to the limited energy resources, energy-efficient operation of sensor nodes is a key issue in wireless sensor networks. Clustering is an effective method to prolong the lifetime of an energy-constrained wireless sensor network. However, clustering in wireless sensor networks faces several challenges, such as selection of an optimal group of sensor nodes as a cluster, optimum selection of the cluster head, an energy-balanced strategy for rotating the role of cluster head within a cluster, maintaining intra- and inter-cluster connectivity, and optimal data routing in the network. In this paper, we propose a protocol supporting energy-efficient clustering, cluster head selection/rotation and data routing to prolong the lifetime of the sensor network. Simulation results demonstrate that the proposed protocol prolongs network lifetime due to the use of efficient clustering, cluster head selection/rotation and data routing.
Keywords: Wireless sensor networks, clustering, energy efficient, localization.
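For illustration only, the sketch below shows a generic energy-aware cluster-head selection and cluster assignment round in Python; the node fields, the weighting factor alpha and the base-station position are invented assumptions, and this is not the specific protocol proposed in the paper.

import math, random

def select_cluster_heads(nodes, base_station, num_heads, alpha=0.7):
    # Rank nodes by normalised residual energy (favoured) and distance to the
    # base station (penalised); the top `num_heads` become cluster heads.
    max_e = max(n["energy"] for n in nodes)
    max_d = max(math.dist((n["x"], n["y"]), base_station) for n in nodes)
    def score(n):
        e = n["energy"] / max_e
        d = math.dist((n["x"], n["y"]), base_station) / max_d
        return alpha * e - (1 - alpha) * d
    return sorted(nodes, key=score, reverse=True)[:num_heads]

def assign_to_clusters(nodes, heads):
    # Each non-head node joins the nearest cluster head.
    clusters = {id(h): [] for h in heads}
    for n in nodes:
        if any(n is h for h in heads):
            continue
        nearest = min(heads, key=lambda h: math.dist((n["x"], n["y"]), (h["x"], h["y"])))
        clusters[id(nearest)].append(n)
    return clusters

random.seed(1)
nodes = [{"x": random.uniform(0, 100), "y": random.uniform(0, 100),
          "energy": random.uniform(0.2, 1.0)} for _ in range(50)]
heads = select_cluster_heads(nodes, base_station=(50.0, 120.0), num_heads=5)
clusters = assign_to_clusters(nodes, heads)
print(len(heads), "cluster heads;", sum(len(c) for c in clusters.values()), "member nodes")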
5164 Effect of Composite Material on Damping Capacity Improvement of Cutting Tool in Machining Operation Using Taguchi Approach
Authors: S. Ghorbani, N. I. Polushin
Abstract:
Chatter vibrations occurring during the cutting process cause vibration between the cutting tool and workpiece, which deteriorates surface roughness and reduces tool life. The purpose of this study is to investigate the influence of cutting parameters and tool construction on surface roughness and vibration in turning of aluminum alloy AA2024. A new design of cutting tool is proposed, which is filled with epoxy granite in order to improve the damping capacity of the tool. Experiments were performed on a lathe using a carbide cutting insert coated with TiC and two different cutting tools made of AISI 5140 steel. A Taguchi L9 orthogonal array was applied to the design of experiments and the optimization of cutting conditions. With the help of the signal-to-noise ratio and analysis of variance, the optimal cutting conditions and the effect of the cutting parameters on surface roughness and vibration were determined. The effectiveness of the Taguchi method was verified by a confirmation test. It was revealed that the new cutting tool with epoxy granite reduced vibration and surface roughness due to the high damping properties of the epoxy granite in the toolholder.
Keywords: ANOVA, damping capacity, surface roughness, Taguchi method, vibration.
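As a side note, the smaller-the-better signal-to-noise ratio that Taguchi analysis applies to responses such as surface roughness or vibration can be computed as in the short sketch below; the response values are made-up placeholders, not the paper's data.

import numpy as np

def sn_smaller_the_better(y):
    # S/N = -10 * log10(mean(y^2)); a larger S/N means a better (smaller) response.
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# One row of an L9 array corresponds to one cutting condition with repeated measurements.
trial_roughness = [0.82, 0.79, 0.85]   # hypothetical Ra values in micrometres
print(f"S/N ratio: {sn_smaller_the_better(trial_roughness):.2f} dB")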
5163 Optimal Design of Composite Patch for a Cracked Pipe by Utilizing Genetic Algorithm and Finite Element Method
Authors: Mahdi Fakoor, Seyed Mohammad Navid Ghoreishi
Abstract:
Composite patching is a common way of reinforcing cracked pipes and cylinders. The effects of composite patch reinforcement on the fracture parameters of a cracked pipe depend on a variety of parameters such as the number of layers and the angle, thickness, and material of each layer. Therefore, stacking sequence optimization of the composite patch becomes crucial for applications to cracked pipes. In this study, in order to obtain the optimal stacking sequence for a composite patch that has minimum weight and maximum resistance to crack propagation, a coupled Multi-Objective Genetic Algorithm (MOGA) and Finite Element Method (FEM) process is proposed. This optimization process has been carried out for longitudinal and transverse semi-elliptical cracks, and the optimal stacking sequences and Pareto fronts for each kind of crack are presented. The proposed algorithm is validated against results collected from the existing literature.
Keywords: Multi objective optimization, Pareto front, composite patch, cracked pipe.
5162 Optimization of Distribution Network Configuration for Loss Reduction Using Artificial Bee Colony Algorithm
Authors: R. Srinivasa Rao, S.V.L. Narasimham, M. Ramalingaraju
Abstract:
Network reconfiguration in a distribution system is realized by changing the status of sectionalizing switches to reduce the power loss in the system. This paper presents a new method which applies an artificial bee colony (ABC) algorithm for determining the sectionalizing switches to be operated in order to solve the distribution system loss minimization problem. The ABC algorithm is a new population-based metaheuristic approach inspired by the intelligent foraging behavior of a honeybee swarm. One advantage of the ABC algorithm is that it does not require external parameters such as the crossover rate and mutation rate, as in genetic algorithms and differential evolution, which are hard to determine a priori. The other advantage is that the global search ability of the algorithm is implemented by introducing a neighborhood source production mechanism, which is similar to the mutation process. To demonstrate the validity of the proposed algorithm, computer simulations are carried out on 14-, 33-, and 119-bus systems and compared with different approaches available in the literature. The proposed method outperforms the other methods in terms of solution quality and computational efficiency.
Keywords: Distribution system, Network reconfiguration, Loss reduction, Artificial Bee Colony Algorithm.
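For readers unfamiliar with ABC, the following minimal sketch implements the generic algorithm (employed, onlooker and scout bee phases with a neighborhood source production step) on a continuous test function; the paper's application to discrete switch selection would require a problem-specific encoding, so everything below is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)

def objective(x):                       # sphere function as a stand-in objective
    return float(np.sum(x ** 2))

def abc_minimise(dim=5, n_sources=20, limit=30, max_cycles=200, bound=5.0):
    sources = rng.uniform(-bound, bound, (n_sources, dim))
    costs = np.array([objective(s) for s in sources])
    trials = np.zeros(n_sources, dtype=int)

    def neighbour(i):
        # Perturb one dimension towards a randomly chosen other source
        # (the mutation-like neighborhood source production step).
        k = rng.integers(n_sources)
        while k == i:
            k = rng.integers(n_sources)
        j = rng.integers(dim)
        cand = sources[i].copy()
        cand[j] += rng.uniform(-1, 1) * (sources[i, j] - sources[k, j])
        return np.clip(cand, -bound, bound)

    def greedy_update(i, cand):
        c = objective(cand)
        if c < costs[i]:
            sources[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(max_cycles):
        for i in range(n_sources):                  # employed bee phase
            greedy_update(i, neighbour(i))
        fitness = 1.0 / (1.0 + costs)               # onlooker bee phase
        probs = fitness / fitness.sum()
        for i in rng.choice(n_sources, n_sources, p=probs):
            greedy_update(i, neighbour(i))
        for i in np.where(trials > limit)[0]:       # scout bee phase: abandon stale sources
            sources[i] = rng.uniform(-bound, bound, dim)
            costs[i], trials[i] = objective(sources[i]), 0
    best = int(np.argmin(costs))
    return sources[best], costs[best]

best_x, best_cost = abc_minimise()
print("best cost:", best_cost)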
5161 Predicting Bankruptcy using Tabu Search in the Mauritian Context
Authors: J. Cheeneebash, K. B. Lallmamode, A. Gopaul
Abstract:
Throughout this paper, a relatively new technique, the Tabu search variable selection model, is elaborated, showing how it can be efficiently applied within the financial world whenever researchers come across the selection of a subset of variables from a whole set of descriptive variables under analysis. In the field of financial prediction, researchers often have to select a subset of variables from a larger set to solve different types of problems such as corporate bankruptcy prediction, personal bankruptcy prediction, mortgage and credit scoring, and the Arbitrage Pricing Model (APM). Consequently, to demonstrate how the method operates and to illustrate its usefulness as well as its superiority compared to other commonly used methods, the Tabu search algorithm for variable selection is compared to two main alternative search procedures, namely stepwise regression and the maximum R² improvement method. The Tabu search is then implemented in finance, where it attempts to predict corporate bankruptcy by selecting the most appropriate financial ratios and thus creating its own prediction score equation. In comparison to other methods, most notably the Altman Z-Score model, the Tabu search model produces a higher success rate in correctly predicting the failure of firms or the continued operation of existing entities.
Keywords: Predicting Bankruptcy, Tabu Search
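A minimal sketch of Tabu search for variable subset selection is given below; it scores candidate subsets by the in-sample R² of a least-squares fit on synthetic data, whereas the paper scores subsets for bankruptcy prediction, so the data, tenure and scoring choices here are assumptions.

import numpy as np

rng = np.random.default_rng(42)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 7] + rng.normal(scale=0.5, size=n)

def r_squared(subset):
    if not subset:
        return 0.0
    A = np.column_stack([np.ones(n), X[:, sorted(subset)]])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def tabu_select(iterations=50, tenure=5):
    current = set()
    best, best_score = set(current), r_squared(current)
    tabu = {}                                  # feature index -> iteration until which it is tabu
    for it in range(iterations):
        moves = []
        for j in range(p):                     # neighbourhood: toggle one feature in/out
            cand = current ^ {j}
            score = r_squared(cand)
            allowed = tabu.get(j, -1) < it or score > best_score   # aspiration criterion
            if allowed:
                moves.append((score, j, cand))
        score, j, cand = max(moves)            # best admissible move
        current = cand
        tabu[j] = it + tenure
        if score > best_score:
            best, best_score = set(cand), score
    return sorted(best), best_score

subset, score = tabu_select()
print("selected features:", subset, "R^2 = %.3f" % score)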
5160 On the Algorithmic Iterative Solutions of Conjugate Gradient, Gauss-Seidel and Jacobi Methods for Solving Systems of Linear Equations
Authors: H. D. Ibrahim, H. C. Chinwenyi, H. N. Ude
Abstract:
In this paper, efforts were made to examine and compare the algorithmic iterative solutions of the conjugate gradient method against other methods such as the Gauss-Seidel and Jacobi approaches for solving systems of linear equations of the form Ax = b, where A is a real n x n symmetric and positive definite matrix. We performed the algorithmic iterative steps and obtained analytical solutions of a typical 3 x 3 symmetric and positive definite matrix using the three methods described in this paper (Gauss-Seidel, Jacobi and conjugate gradient methods), respectively. From the results obtained, we found that the conjugate gradient method converges to the exact solutions in fewer iterative steps than the other two methods, which required many more iterations and much more time while only tending towards the exact solutions.
Keywords: conjugate gradient, linear equations, symmetric and positive definite matrix, Gauss-Seidel, Jacobi, algorithm
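The contrast between the methods can be illustrated with the short sketch below, which runs Jacobi iteration and the conjugate gradient method on a small symmetric positive definite system; the matrix is an arbitrary example, not the 3 x 3 system worked in the paper.

import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])

def jacobi(A, b, tol=1e-10, max_iter=500):
    x = np.zeros_like(b)
    D = np.diag(A)
    R = A - np.diagflat(D)
    for k in range(1, max_iter + 1):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, k
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x, max_iter

for name, solver in [("Jacobi", jacobi), ("Conjugate gradient", conjugate_gradient)]:
    x, iters = solver(A, b)
    print(f"{name}: {iters} iterations, residual {np.linalg.norm(b - A @ x):.2e}")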
5159 Village Construction under China's Rapid Urbanization: The Role and Strategy of Planning in the Rural Areas
Authors: Chen Zhang, Jiwu Wang
Abstract:
With China's urbanization continuing to accelerate, a large number of rural people have flooded into China's cities in recent years, and the issue of agriculture, rural areas and farmers is becoming more and more serious. In 2005, the Chinese government put forward a plan for "the construction of new rural villages" in order to coordinate the development of both urban and rural areas. The planning methods for rural regions differ sharply from those for urban areas, as do village social structures and farmers' habits of life, so studies that consider the special needs of village construction in China are absolutely essential. This paper explores the current situation and problems existing in the construction of China's new rural villages, such as the growing gap between urban and rural areas, excessive new construction projects, the disappearance of traditional village styles, and so on. It analyzes the underlying reasons for the present situation of villages in terms of the legal system, industrial structure, financial sources and planning methods. It then provides a guide for developing policies and procedures promoting the development of China's rural areas.
Keywords: Rural areas, village construction, physical planning, law system, financial sources, public participation, China.
5158 DODR: Delay On-Demand Routing
Authors: Dong Wan-li, Gu Nai-jie, Tu Kun, Bi Kun, Liu Gang
Abstract:
As originally designed for wired networks, the TCP (Transmission Control Protocol) congestion control mechanism is triggered into action when packet loss is detected. The implicit assumption that packet loss is mostly due to network congestion does not hold well in a Mobile Ad Hoc Network (MANET), where there is a comparatively high likelihood of packet loss due to channel errors, node mobility, etc. Such non-congestion packet loss, when dealt with by the congestion control mechanism, causes poor TCP performance in MANET. In this study, we continue to investigate the impact of the interaction between transport protocols and on-demand routing protocols on the performance and stability of 802.11 multihop networks. We evaluate the important wireless networking events that cause routing changes, and propose a cross-layer method to delay unnecessary routing changes, which only requires adding a sensitivity parameter α that represents the on-demand routing protocol's reaction to link failures at the MAC layer. Our proposal is applicable to the plain 802.11 networking environment, and the simulation results show that this method can remarkably improve the stability and performance of TCP without any modification to the TCP and MAC protocols.
Keywords: Mobile ad hoc networks (MANET), on-demand routing, performance, transmission control protocol (TCP).
5157 Hippocampus Segmentation using a Local Prior Model on its Boundary
Authors: Dimitrios Zarpalas, Anastasios Zafeiropoulos, Petros Daras, Nicos Maglaveras
Abstract:
Segmentation techniques based on Active Contour Models have benefited strongly from the use of prior information during their evolution. Shape prior information is captured from a training set and is introduced in the optimization procedure to restrict the evolution to allowable shapes. In this way, the evolution converges onto regions even with weak boundaries. Although significant effort has been devoted to different ways of capturing and analyzing prior information, very little thought has been devoted to the way of combining image information with prior information. This paper focuses on a more natural way of incorporating the prior information in the level set framework. For proof of concept the method is applied to hippocampus segmentation in T1-MR images. Hippocampus segmentation is a very challenging task, due to the multivariate surrounding region and the missing boundary with the neighboring amygdala, whose intensities are identical. The proposed method mimics the human way of segmenting and thus shows enhancements in segmentation accuracy.
Keywords: Medical imaging and processing, brain MRI segmentation, hippocampus segmentation, hippocampus-amygdala missing boundary, weak boundary segmentation, region based segmentation, prior information, local weighting scheme in level sets, spatial distribution of labels, gradient distribution on boundary.
5156 Tsunami Inundation Modeling in a Boundary Fitted Curvilinear Grid Model Using the Method of Lines Technique
Authors: M. Ashaque Meah, M. Shah Noor, M. Asif Arefin, Md. Fazlul Karim
Abstract:
A numerical technique in a boundary-fitted curvilinear grid model is developed to simulate the extent of inland inundation along the coastal belts of Peninsular Malaysia and Southern Thailand due to the 2004 Indian Ocean tsunami. Tsunami propagation and run-up are also studied in this paper. The vertically integrated shallow water equations are solved by using the method of lines (MOL). For this purpose the boundary-fitted grids are generated along the coastal and island boundaries and the other open boundaries of the model domain. A transformation is applied to the governing equations so that the transformed physical domain is converted into a rectangular one. The MOL technique is applied to the transformed shallow water equations and the boundary conditions so that the equations are converted into an initial value problem of ordinary differential equations. Finally the 4th order Runge-Kutta method is used to solve these ordinary differential equations. The moving boundary technique is applied instead of a fixed sea-side wall or fixed coastal boundary to allow movement of the coastal boundary. The extent of intrusion of water and the associated tsunami propagation are simulated for the 2004 Indian Ocean tsunami along the west coast of Peninsular Malaysia and southern Thailand. The simulated results are compared with the results obtained from a finite difference model and the data available on the USGS website. All simulations show a better approximation than earlier research and also show excellent agreement with the observed data.
Keywords: Open boundary condition, moving boundary condition, boundary-fitted curvilinear grids, far field tsunami, Shallow Water Equations, tsunami source, Indonesian tsunami of 2004.
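To illustrate only the MOL plus Runge-Kutta idea, the sketch below semi-discretises the 1-D linearised shallow water equations with central differences and advances them with the classical 4th-order Runge-Kutta scheme; the paper solves the full 2-D equations on boundary-fitted curvilinear grids with a moving boundary, and the depth, domain and initial hump used here are assumed values.

import numpy as np

g, H = 9.81, 100.0            # gravity, assumed constant depth (m)
L, nx = 200e3, 201            # domain length (m), grid points
dx = L / (nx - 1)

def rhs(state):
    # Spatial semi-discretisation of eta_t = -H u_x, u_t = -g eta_x.
    eta, u = state[:nx], state[nx:]
    deta = np.zeros(nx)
    du = np.zeros(nx)
    deta[1:-1] = -H * (u[2:] - u[:-2]) / (2 * dx)   # central differences in the interior
    du[1:-1] = -g * (eta[2:] - eta[:-2]) / (2 * dx)
    # end points are held fixed (simple reflective walls), unlike the paper's moving boundary
    return np.concatenate([deta, du])

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.linspace(0, L, nx)
eta0 = np.exp(-((x - L / 2) / 10e3) ** 2)       # Gaussian hump as an assumed initial "source"
state = np.concatenate([eta0, np.zeros(nx)])
dt = 0.5 * dx / np.sqrt(g * H)                  # respect the CFL limit
for _ in range(200):
    state = rk4_step(state, dt)
print("max surface elevation after integration:", state[:nx].max())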
5155 Enhancing Human Mobility Exoskeleton Comfort Using Admittance Controller
Authors: Alexandre Rabaseda, Emelie Seguin, Marc Doumit
Abstract:
Human mobility exoskeletons have been in development for several years and are becoming increasingly efficient. Unfortunately, user comfort was not always a priority design criterion throughout their development. To further improve this technology, exoskeletons should operate and deliver assistance without causing discomfort to the user. For this, improvements are necessary from an ergonomic point of view. The device's control method is important when endeavoring to enhance user comfort. Exoskeleton and rehabilitation device controllers use interaction control methods (admittance and impedance control). This paper proposes an extended version of an admittance controller to enhance user comfort. The control method used consists of adding an inner loop that is controlled by a proportional-integral-derivative (PID) controller. This allows the interaction force to be kept as close as possible to the desired force trajectory. The force-tracking admittance controller modifies the actuation force of the system in order to follow both the desired motion trajectory and the desired relative force between the user and the exoskeleton.
Keywords: Mobility assistive device, exoskeleton, force-tracking admittance controller, user comfort.
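The sketch below is a conceptual 1-DOF illustration of a force-tracking admittance scheme: an outer admittance law turns the interaction-force error into a compliant reference trajectory and an inner PID loop drives the actuator to follow it. The user is crudely modelled as a spring, and all masses, gains and the desired force are invented assumptions rather than the authors' controller or parameters.

import numpy as np

dt, steps = 0.001, 5000
Ma, Ba = 2.0, 20.0              # virtual admittance mass and damping (assumed)
m_exo = 1.5                     # exoskeleton segment mass (assumed)
k_user = 400.0                  # user modelled as a spring (assumed)
Kp, Ki, Kd = 600.0, 100.0, 40.0 # inner-loop PID gains (assumed)
f_desired = 5.0                 # desired interaction force (N)

x = v = 0.0                     # exoskeleton position / velocity
x_ref = v_ref = 0.0             # admittance reference trajectory
e_int = e_prev = 0.0

for _ in range(steps):
    f_int = k_user * x                        # interaction force felt by the user (toy model)
    f_err = f_desired - f_int
    # outer admittance loop: Ma * dv_ref/dt + Ba * v_ref = f_err
    v_ref += ((f_err - Ba * v_ref) / Ma) * dt
    x_ref += v_ref * dt
    # inner PID loop: track the compliant reference position
    e = x_ref - x
    e_int += e * dt
    u = Kp * e + Ki * e_int + Kd * (e - e_prev) / dt
    e_prev = e
    # exoskeleton dynamics: mass * acceleration = actuator force - reaction from the user
    a = (u - f_int) / m_exo
    v += a * dt
    x += v * dt

print(f"steady-state interaction force: {k_user * x:.2f} N (target {f_desired} N)")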
5154 Fuzzy Risk-Based Life Cycle Assessment for Estimating Environmental Aspects in EMS
Authors: Kevin Fong-Rey Liu, Ken Yeh, Cheng-Wu Chen, Han-Hsi Liang
Abstract:
Environmental aspects play a central role in an environmental management system (EMS) because they are the basis for the identification of an organization's environmental targets. The existing methods for the assessment of environmental aspects are grouped into three categories: risk assessment-based (RA-based), LCA-based and criterion-based methods. To combine the benefits of these three categories of research, this study proposes an integrated framework, combining RA-, LCA- and criterion-based methods. The integrated framework incorporates LCA techniques for the identification of the causal linkage for aspect, pathway, receptor and impact, uses fuzzy logic to assess aspects, considers fuzzy conditions in likelihood assessment, and employs a new multi-criteria decision analysis method - multi-criteria and multi-connection comprehensive assessment (MMCA) - to estimate significant aspects in EMS. The proposed model is verified using a real case study, and the results show that this method successfully prioritizes the environmental aspects.
Keywords: Environmental management system, environmental aspect, risk assessment, life cycle assessment.
5153 A Novel RLS Based Adaptive Filtering Method for Speech Enhancement
Authors: Pogula Rakesh, T. Kishore Kumar
Abstract:
Speech enhancement is a long-standing problem with numerous applications like teleconferencing, VoIP, hearing aids and speech recognition. The motivation behind this research work is to obtain a clean speech signal of higher quality by applying the optimal noise cancellation technique. Real-time adaptive filtering algorithms seem to be the best candidates among all categories of speech enhancement methods. In this paper, we propose a speech enhancement method based on a Recursive Least Squares (RLS) adaptive filter for speech signals. Experiments were performed on noisy data which was prepared by adding AWGN, babble and pink noise to clean speech samples at -5 dB, 0 dB, 5 dB and 10 dB SNR levels. We then compare the noise cancellation performance of the proposed RLS algorithm with the existing NLMS algorithm in terms of Mean Squared Error (MSE), Signal to Noise Ratio (SNR) and SNR loss. Based on the performance evaluation, the proposed RLS algorithm was found to be a better optimal noise cancellation technique for speech signals.
Keywords: Adaptive filter, Adaptive Noise Canceller, Mean Squared Error, Noise reduction, NLMS, RLS, SNR, SNR Loss.
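A minimal RLS adaptive noise canceller can be sketched as below: the reference input is the noise source, the primary input is the clean signal plus noise passed through an unknown path, and the a priori error of the RLS filter is the enhanced output. The signals, filter length and forgetting factor are illustrative assumptions, not the paper's experimental setup.

import numpy as np

rng = np.random.default_rng(0)
N, M = 5000, 8                      # samples, filter taps
lam = 0.999                         # forgetting factor

t = np.arange(N)
clean = np.sin(2 * np.pi * 0.01 * t)              # stand-in "speech"
noise = rng.normal(size=N)                        # reference noise (e.g. AWGN)
h_path = np.array([0.6, -0.3, 0.2])               # unknown noise path to the primary microphone
primary = clean + np.convolve(noise, h_path)[:N]  # noisy observation

w = np.zeros(M)
P = 1000.0 * np.eye(M)              # large initial inverse-correlation matrix
enhanced = np.zeros(N)

for n in range(M, N):
    x = noise[n - M + 1:n + 1][::-1]          # reference tap vector (most recent sample first)
    d = primary[n]                            # desired signal = primary input
    Px = P @ x
    k = Px / (lam + x @ Px)                   # gain vector
    e = d - w @ x                             # a priori error = enhanced speech sample
    w = w + k * e
    P = (P - np.outer(k, Px)) / lam
    enhanced[n] = e

mse = np.mean((enhanced[M:] - clean[M:]) ** 2)
print(f"output MSE vs clean signal: {mse:.4f}")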
5152 Performance Analysis of a Discrete-time GeoX/G/1 Queue with Single Working Vacation
Authors: Shan Gao, Zaiming Liu
Abstract:
This paper treats a discrete-time batch arrival queue with single working vacation. The main purpose of this paper is to present a performance analysis of this system by using the supplementary variable technique. For this purpose, we first analyze the Markov chain underlying the queueing system and obtain its ergodicity condition. Next, we present the stationary distributions of the system length as well as some performance measures at random epochs by using the supplementary variable method. Thirdly, still based on the supplementary variable method, we give the probability generating function (PGF) of the number of customers at the beginning of a busy period and give a stochastic decomposition formula for the PGF of the stationary system length at the departure epochs. Additionally, we investigate the relation between our discrete-time system and its continuous counterpart. Finally, some numerical examples show the influence of the parameters on some crucial performance characteristics of the system.
Keywords: Discrete-time queue, batch arrival, working vacation, supplementary variable technique, stochastic decomposition.
5151 Maximizer of the Posterior Marginal Estimate for Noise Reduction of JPEG-compressed Image
Authors: Yohei Saika, Yuji Haraguchi
Abstract:
We constructed a method of noise reduction for JPEG-compressed images based on Bayesian inference using the maximizer of the posterior marginal (MPM) estimate. In this method, we tried the MPM estimate using two kinds of likelihood, both of which are applied to grayscale images degraded by lossy JPEG image compression. One is a deterministic model of the likelihood and the other is a probabilistic one expressed by the Gaussian distribution. Then, using the Monte Carlo simulation for grayscale images, such as the 256-grayscale standard image "Lena" with 256 × 256 pixels, we examined the performance of the MPM estimate based on a performance measure using the mean square error. We clarified that the MPM estimate via the Gaussian probabilistic model of the likelihood is effective for reducing noises, such as the blocking artifacts and the mosquito noise, if we set parameters appropriately. On the other hand, we found that the MPM estimate via the deterministic model of the likelihood is not effective for noise reduction due to the low acceptance ratio of the Metropolis algorithm.
Keywords: Noise reduction, JPEG-compressed image, Bayesian inference, maximizer of the posterior marginal estimate.
5150 Mathematical Modeling to Predict Surface Roughness in CNC Milling
Authors: Ab. Rashid M.F.F., Gan S.Y., Muhammad N.Y.
Abstract:
Surface roughness (Ra) is one of the most important requirements in the machining process. In order to obtain better surface roughness, the proper setting of cutting parameters is crucial before the process takes place. This research presents the development of a mathematical model for surface roughness prediction before the milling process in order to evaluate the fitness of the machining parameters: spindle speed, feed rate and depth of cut. 84 samples were run in this study using a FANUC CNC milling machine (α-Τ14ιE). Those samples were randomly divided into two data sets: the training set (m = 60) and the testing set (m = 24). ANOVA analysis showed that at least one of the population regression coefficients was not zero. The multiple regression method was used to determine the correlation between a criterion variable and a combination of predictor variables. It was established that the surface roughness is most influenced by the feed rate. Using the multiple regression equation, the average percentage deviation was 9.8% for the testing set and 9.7% for the training set. This showed that the statistical model could predict the surface roughness with about 90.2% accuracy for the testing data set and 90.3% accuracy for the training data set.
Keywords: Surface roughness, regression analysis.
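The regression step itself can be sketched as below: a least-squares fit of Ra against spindle speed, feed rate and depth of cut, followed by the average percentage deviation used as the accuracy measure. The data are synthetic placeholders, not the 84 experimental samples of the study.

import numpy as np

rng = np.random.default_rng(3)
n = 60
speed = rng.uniform(1000, 4000, n)     # spindle speed (rpm)
feed = rng.uniform(50, 400, n)         # feed rate (mm/min)
depth = rng.uniform(0.2, 2.0, n)       # depth of cut (mm)
# synthetic roughness response with noise (arbitrary coefficients)
ra = 0.4 + 0.004 * feed - 0.0001 * speed + 0.1 * depth + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), speed, feed, depth])
coeffs, *_ = np.linalg.lstsq(X, ra, rcond=None)
pred = X @ coeffs
avg_pct_dev = np.mean(np.abs((ra - pred) / ra)) * 100

print("coefficients (b0..b3):", np.round(coeffs, 5))
print(f"average percentage deviation on training data: {avg_pct_dev:.1f}%")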
5149 A New Source Code Auditing Algorithm for Detecting LFI and RFI in PHP Programs
Authors: Seyed Ali Mir Heydari, Mohsen Sayadiharikandeh
Abstract:
Static analysis of source code is used for auditing web applications to detect vulnerabilities. In this paper, we propose a new algorithm to analyze PHP source code for detecting potential LFI and RFI vulnerabilities. In our approach, we first define some patterns for finding functions which have the potential to be abused because of unhandled user inputs. More precisely, we use regular expressions as a fast and simple method to define patterns for the detection of vulnerabilities. As inclusion functions can also be used in a safe way, many false positives (FPs) can occur. The first cause of these FPs could be that the function does not use a user-supplied variable as an argument. So, we extract a list of user-supplied variables to be used for detecting vulnerable lines of code. On the other hand, as a vulnerability can spread among variables, for example by multi-level assignment, we also try to extract the hidden user-supplied variables. We use the resulting list to decrease the false positives of our method. Finally, as there exist some ways to prevent the vulnerability of inclusion functions, we also define patterns to detect them and further decrease our false positives.
Keywords: User-supplied variables, hidden user-supplied variables, PHP vulnerabilities.
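A much-simplified sketch of the regex-based idea is shown below: it flags include/require calls whose argument contains a superglobal or a variable tainted through one level of assignment. The patterns and the sample PHP snippet are illustrative assumptions; the authors' algorithm handles many more cases to reduce false positives.

import re

INCLUDE_CALL = re.compile(r'\b(include_once|include|require_once|require)\s*\(?\s*([^;]+);')
SUPERGLOBAL = re.compile(r'\$_(GET|POST|REQUEST|COOKIE)\b')
ASSIGNMENT = re.compile(r'(\$\w+)\s*=\s*([^;]+);')

def find_lfi_rfi(php_source):
    # collect variables that receive user-supplied data (one assignment level deep)
    tainted = set()
    for var, rhs in ASSIGNMENT.findall(php_source):
        if SUPERGLOBAL.search(rhs) or any(t in rhs for t in tainted):
            tainted.add(var)
    findings = []
    for lineno, line in enumerate(php_source.splitlines(), start=1):
        m = INCLUDE_CALL.search(line)
        if not m:
            continue
        arg = m.group(2)
        if SUPERGLOBAL.search(arg) or any(t in arg for t in tainted):
            findings.append((lineno, line.strip()))
    return findings

sample = """<?php
$page = $_GET['page'];
$path = $page . '.php';
include($path);
require 'static/header.php';
"""
for lineno, code in find_lfi_rfi(sample):
    print(f"potential LFI/RFI at line {lineno}: {code}")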
5148 Temperature Dependence of Relative Permittivity: A Measurement Technique Using Split Ring Resonators
Authors: Sreedevi P. Chakyar, Jolly Andrews, V. P. Joseph
Abstract:
A compact method for measuring the relative permittivity of a dielectric material at different temperatures using a single circular Split Ring Resonator (SRR) metamaterial unit working as a test probe is presented in this paper. The dielectric constant of a material is dependent upon its temperature, and the LC resonance of the SRR depends on its dielectric environment. Hence, the temperature of the dielectric material in contact with the resonator influences its resonant frequency. A single SRR placed between transmitting and receiving probes connected to a Vector Network Analyser (VNA) is used as a test probe. The dependence of the resonant frequency of the SRR on temperature between 30 °C and 60 °C is analysed. The relative permittivity ‘ε’ of a test sample at different temperatures is extracted from a calibration graph drawn between the relative permittivity of samples of known dielectric constant and their corresponding resonant frequencies. This method is found to be an easy and efficient technique for analysing the temperature-dependent permittivity of different materials.
Keywords: Metamaterials, negative permeability, permittivity measurement techniques, split ring resonators, temperature dependent dielectric constant.
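The calibration step can be sketched as follows: fit a curve between the resonant frequency and the known relative permittivity of reference samples, then invert it to extract the permittivity of a test sample from its measured resonance at each temperature. The calibration points and measured frequencies below are invented for illustration.

import numpy as np

# (relative permittivity, SRR resonant frequency in GHz) for reference samples (invented)
calibration = np.array([
    [1.0, 4.20],
    [2.1, 4.05],
    [3.0, 3.93],
    [4.4, 3.78],
])
eps_known, f_known = calibration[:, 0], calibration[:, 1]

# linear fit eps = a*f + b over the calibration range
a, b = np.polyfit(f_known, eps_known, deg=1)

def permittivity_from_resonance(f_measured_ghz):
    return a * f_measured_ghz + b

# a test sample measured at two temperatures: the shift in resonance maps to a
# shift in the extracted (temperature-dependent) permittivity
for temp_c, f_meas in [(30, 3.99), (60, 3.95)]:
    print(f"{temp_c} degC: f = {f_meas} GHz -> eps_r ~ {permittivity_from_resonance(f_meas):.2f}")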
5147 Military Attack Helicopter Selection Using Distance Function Measures in Multiple Criteria Decision Making Analysis
Authors: C. Ardil
Abstract:
This paper aims to select the best military attack helicopter for purchase by the Armed Forces, to provide greater reconnaissance and offensive combat capability in military operations. For this purpose, a multiple criteria decision analysis method integrated with the variance weight procedure was applied to the military attack helicopter selection problem. A real military aviation case problem is conducted to support the Armed Forces' decision-making process and contribute to the better performance of the Armed Forces. Application of the methodology resulted in ranking lists for ordering and prioritizing attack helicopters, providing transparency and simplicity to the decision-making process. Nine military attack helicopter models were analyzed in the light of strategic, tactical, and operational criteria. The selected military attack helicopter would be used for fire support and reconnaissance activities required by Armed Forces operations. This study makes a valuable contribution to the problem of military attack helicopter selection, as it represents a state-of-the-art application of the MCDMA method to the solution of a real problem of the Armed Forces. The methodology presented in this paper can be used to solve a wide variety of real problems, especially strategic, tactical and operational ones, and is therefore a very useful method for decision making.
Keywords: aircraft selection, military attack helicopter selection, attack helicopter fleet planning, MCDMA, multiple criteria analysis, multiple criteria decision making analysis, distance function measure
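A generic sketch of a distance-based ranking with variance-derived criterion weights, in the spirit of the approach described above, is given below; the 4 x 4 decision matrix of hypothetical helicopters and benefit-type criteria is invented and is not the paper's data.

import numpy as np

D = np.array([          # rows: alternatives, columns: benefit-type criteria
    [7.0, 8.5, 6.0, 9.0],
    [8.0, 7.0, 7.5, 8.0],
    [6.5, 9.0, 8.0, 7.0],
    [9.0, 6.5, 7.0, 8.5],
])

# normalise each criterion column, then weight by its relative variance
norm = D / np.linalg.norm(D, axis=0)
var = norm.var(axis=0)
weights = var / var.sum()
V = norm * weights

# distance-function measure: distance to the ideal and anti-ideal points
ideal, anti = V.max(axis=0), V.min(axis=0)
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)

print("variance weights:", np.round(weights, 3))
for rank, i in enumerate(np.argsort(-closeness), start=1):
    print(f"rank {rank}: alternative {i + 1} (closeness {closeness[i]:.3f})")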
5146 Swarmed Discriminant Analysis for Multifunction Prosthesis Control
Authors: Rami N. Khushaba, Ahmed Al-Ani, Adel Al-Jumaily
Abstract:
One of the approaches enabling people with amputated limbs to establish some sort of interface with the real world includes the utilization of the myoelectric signal (MES) from the remaining muscles of those limbs. The MES can be used as a control input to a multifunction prosthetic device. In this control scheme, known as myoelectric control, a pattern recognition approach is usually utilized to discriminate between the MES signals that belong to different classes of forearm movements. Since the MES is recorded using multiple channels, the feature vector size can become very large. In order to reduce the computational cost and enhance the generalization capability of the classifier, a dimensionality reduction method is needed to identify an informative yet moderately sized feature set. This paper proposes a new fuzzy version of the well-known Fisher's Linear Discriminant Analysis (LDA) feature projection technique. Furthermore, based on the fact that certain muscles might contribute more to the discrimination process, a novel feature weighting scheme is also presented by employing Particle Swarm Optimization (PSO) for estimating the weight of each feature. The new method, called PSOFLDA, is tested on real MES datasets and compared with other techniques to prove its superiority.
Keywords: Discriminant Analysis, Pattern Recognition, Signal Processing.
5145 Linear Programming Application in Unit Commitment of Wind Farms with Considering Uncertainties
Authors: M. Esmaeeli Shahrakht, A. Kazemi
Abstract:
Due to the uncertainty of wind velocity, wind power generators do not have a deterministic output power. Utilizing wind power generation and thermal power plants together creates new concerns for operation engineers of power systems. In this paper, a model is presented to incorporate the uncertainty of load and generated wind power, which can be utilized in power system operation planning. The stochastic behavior of parameters is simulated by generating scenarios that can be solved by a deterministic method. A mixed-integer linear programming method is used for solving the deterministic generation scheduling problem. The proposed approach is applied to a 12-unit test system including 10 thermal units and 2 wind farms. The results show the effectiveness of the piecewise linear model in unit commitment problems. Using linear programming also causes a considerable reduction in calculation times and guarantees convergence to the global optimum. Neglecting the uncertainty of wind velocity leads to a higher assessed cost of generation scheduling.
Keywords: Load uncertainty, linear programming, scenario generation, unit commitment, wind farm.
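A toy mixed-integer unit commitment sketch using the PuLP modelling library is shown below, with two thermal units, three periods and wind treated as a fixed forecast that reduces net demand; the paper's model is far richer (scenario generation, piecewise linear costs, 10 units and 2 wind farms), so all numbers here are assumptions.

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

periods = range(3)
units = ["U1", "U2"]
pmin = {"U1": 50, "U2": 20}
pmax = {"U1": 200, "U2": 100}
cost = {"U1": 20.0, "U2": 35.0}          # $/MWh, linear for simplicity
fixed = {"U1": 500.0, "U2": 200.0}       # $/h no-load cost when committed
demand = [180, 260, 220]                 # MW
wind = [30, 10, 40]                      # MW forecast

prob = LpProblem("toy_unit_commitment", LpMinimize)
u = {(i, t): LpVariable(f"u_{i}_{t}", cat="Binary") for i in units for t in periods}
p = {(i, t): LpVariable(f"p_{i}_{t}", lowBound=0) for i in units for t in periods}

# total generation cost
prob += lpSum(cost[i] * p[i, t] + fixed[i] * u[i, t] for i in units for t in periods)

for t in periods:
    prob += lpSum(p[i, t] for i in units) + wind[t] >= demand[t]    # power balance
    for i in units:
        prob += p[i, t] >= pmin[i] * u[i, t]                        # minimum output if committed
        prob += p[i, t] <= pmax[i] * u[i, t]                        # maximum output if committed

prob.solve()
for t in periods:
    print(f"t={t}:", {i: p[i, t].value() for i in units})
print("total cost:", value(prob.objective))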
5144 Bond Graph and Bayesian Networks for Reliable Diagnosis
Authors: Abdelaziz Zaidi, Belkacem Ould Bouamama, Moncef Tagina
Abstract:
The Bond Graph, as a unified multidisciplinary tool, is widely used not only for dynamic modelling but also for Fault Detection and Isolation because of its structural and causal properties. A binary Fault Signature Matrix is systematically generated, but making the final binary decision is not always feasible because of the problems revealed by such a method. The purpose of this paper is to introduce a methodology for the improvement of the classical binary method of decision-making, so that unknown and identical failure signatures can be treated to improve robustness. This approach consists of associating the evaluated residuals and the component reliability data to build a Hybrid Bayesian Network. This network is used in two distinct inference procedures: one for the continuous part and the other for the discrete part. The continuous nodes of the network are the prior probabilities of the component failures, which are used by the inference procedure on the discrete part to compute the posterior probabilities of the failures. The developed methodology is applied to a real steam generator pilot process.
Keywords: Redundancy relations, decision-making, Bond Graph, reliability, Bayesian Networks.
5143 Efficacy of Combined CHAp and Lanthanum Carbonate in Therapy for Hyperphosphatemia
Authors: Andreea Cârâc, Elena Moroşan, Ana Corina Ioniță, Rica Boscencu, Geta Cârâc
Abstract:
Although lanthanum carbonate has not been approved by the FDA for the treatment of hyperphosphatemia, we prospectively evaluated the efficacy of the combination of calcium hydroxyapatite (CHAp) and lanthanum carbonate (LaC) for the treatment of hyperphosphatemia in mice. CHAp was prepared by a co-precipitation method using Ca(OH)2, H3PO4 and NH4OH with calcination at 1200 ºC. Lanthanum carbonate was prepared by a chemical method using NaHCO3 and LaCl3 in a low-pH environment, below 4.0. The structures were characterized by FTIR spectra and SEM-EDX analysis. The study group included 16 subjects (mice) divided into four groups according to the administered substance: lanthanum carbonate (group A), CHAp (group B), lanthanum carbonate + CHAp (group C) and salt water (group D). The results indicate a phosphate decrease when subjects (mice) were treated with CHAp and lanthanum carbonate (0.5% CMC) in a single dose of 1500 mg/kg. Serum phosphate concentration decreased (from 4.5 ± 0.8 mg/dL to 4.05 ± 0.2 mg/dL, P < 0.01) in group A and (to 3.6 ± 0.2 mg/dL) in group C at 12 hours after administration. The combination of CHAp and lanthanum carbonate is a suitable regimen for hyperphosphatemia treatment because it avoids both the hypercalcemia of CaCO3 and the adverse effects of CHAp.
Keywords: Calcium hydroxyapatite, hyperphosphatemia, lanthanum carbonate, phosphate binder, structures.
5142 Detecting and Locating Wormhole Attacks in Wireless Sensor Networks Using Beacon Nodes
Authors: He Ronghui, Ma Guoqing, Wang Chunlei, Fang Lan
Abstract:
This paper focuses on wormhole attack detection in wireless sensor networks. The wormhole attack is particularly challenging to deal with since the adversary does not need to compromise any nodes and can use laptops or other wireless devices to send the packets on a low latency channel. This paper introduces an easy and effective method to detect and locate the wormholes: since beacon nodes are assumed to know their coordinates, the straight-line distance between each pair of them can be calculated and then compared with the corresponding hop distance, which in this paper equals the hop count × the node's transmission range R. A dramatic difference may emerge because of an existing wormhole. Our detection mechanism is based on this. The approximate location of the wormhole can also be derived in further steps based on this information. To the best of our knowledge, our method is much easier than other wormhole detecting schemes which also use beacon nodes, and compared to those that have special requirements on each node (e.g., GPS receivers, tightly synchronized clocks or directional antennas), ours is more economical. Simulation results show that the algorithm is successful in detecting and locating wormholes when the density of beacon nodes reaches 0.008 per m².
Keywords: Beacon node, wireless sensor network, wormhole attack.
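The core check can be sketched as below: for each pair of beacon nodes, the Euclidean distance computed from their known coordinates is compared with the hop distance (hop count × transmission range R), and a straight-line distance exceeding the hop distance indicates a possible wormhole shortcut. The coordinates, range and hop counts are invented for illustration.

import math
from itertools import combinations

R = 30.0                      # node transmission range (m), assumed

beacons = {                   # beacon id -> (x, y) known coordinates (invented)
    "B1": (0.0, 0.0),
    "B2": (25.0, 10.0),
    "B3": (200.0, 180.0),
}
# measured hop counts between beacon pairs; B1-B3 is suspiciously short because
# the packets were tunnelled through a wormhole in this toy scenario
hops = {("B1", "B2"): 1, ("B1", "B3"): 2, ("B2", "B3"): 9}

def detect_wormholes(beacons, hops, R):
    suspects = []
    for a, b in combinations(sorted(beacons), 2):
        straight = math.dist(beacons[a], beacons[b])
        hop_distance = hops[(a, b)] * R       # the farthest the packets could legitimately travel
        if straight > hop_distance:           # geometrically impossible without a tunnel
            suspects.append((a, b, straight, hop_distance))
    return suspects

for a, b, d, hd in detect_wormholes(beacons, hops, R):
    print(f"possible wormhole on path {a}-{b}: straight-line {d:.0f} m > hop distance {hd:.0f} m")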
5141 Multiple Criteria Decision Making Analysis for Selecting and Evaluating Fighter Aircraft
Authors: C. Ardil, A. M. Pashaev, R.A. Sadiqov, P. Abdullayev
Abstract:
In this paper, a multiple criteria decision making analysis technique is presented for ranking and selection of a set of determined alternatives (fighter aircraft) which are associated with a set of decision factors. In fighter aircraft design, conflicting decision criteria, disciplines, and technologies are always involved in the design process. Multiple criteria decision making analysis techniques can be helpful to effectively deal with such situations and make wise design decisions. Multiple criteria decision making analysis theory is a systematic mathematical approach for dealing with problems which contain uncertainties in decision making. The feasibility and contributions of applying the multiple criteria decision making analysis technique to fighter aircraft selection analysis are explored. In this study, an integrated framework incorporating the multiple criteria decision making analysis technique in fighter aircraft analysis is established using the entropy objective weighting method. An improved integrated multiple criteria decision making analysis method is utilized to aggregate the multiple decision criteria into one composite figure of merit, which serves as an objective function in the decision process. Therefore, it is demonstrated that a suitable multiple criteria decision making analysis method with a decision solution provides an effective objective function for the decision making analysis. Considering that the inherent uncertainties and the weighting factors have crucial impacts on the fighter aircraft evaluation, seven fighter aircraft models are constructed for the multiple design criteria in terms of the weighting factors. The proposed multiple criteria decision making analysis model is based on an integrated entropy index procedure and additive multiple criteria decision making analysis theory. Hence, the applicability of the proposed technique for the fighter aircraft selection problem is considered. The constructed multiple criteria decision making analysis model can provide an efficient decision analysis approach for uncertainty assessment of the decision problem. Consequently, the fighter aircraft alternatives are ranked based on their final evaluation scores, and a sensitivity analysis is conducted.
Keywords: Fighter Aircraft, Fighter Aircraft Selection, Multiple Criteria Decision Making, Multiple Criteria Decision Making Analysis, MCDMA
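The two main ingredients named above, entropy objective weighting and additive aggregation, can be sketched as follows; the 3 x 4 decision matrix of hypothetical aircraft and benefit-type criteria is invented for illustration.

import numpy as np

D = np.array([          # rows: alternatives, columns: benefit-type criteria
    [2100.0, 8.5, 1200.0, 7.0],
    [1900.0, 9.0, 1500.0, 8.0],
    [2300.0, 7.5, 1100.0, 9.0],
])

# entropy weights: criteria whose values differ more across alternatives
# carry more information and therefore receive more weight
P = D / D.sum(axis=0)
m = D.shape[0]
entropy = -(P * np.log(P)).sum(axis=0) / np.log(m)
diversity = 1.0 - entropy
weights = diversity / diversity.sum()

# additive aggregation on a normalised matrix gives one composite score
N = D / D.max(axis=0)
scores = (N * weights).sum(axis=1)

print("entropy weights:", np.round(weights, 3))
for rank, i in enumerate(np.argsort(-scores), start=1):
    print(f"rank {rank}: aircraft {i + 1}, score {scores[i]:.3f}")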
5140 Specialized Reduced Models of Dynamic Flows in 2-Stroke Engines
Authors: S. Cagin, X. Fischer, E. Delacourt, N. Bourabaa, C. Morin, D. Coutellier, B. Carré, S. Loumé
Abstract:
The complexity of scavenging by ports and its impact on engine efficiency create the need to understand and to model it as realistically as possible. However, there are few empirical scavenging models and these are highly specialized. In a design optimization process, they appear very restricted and their field of use is limited. This paper presents a comparison of two methods to establish and reduce a model of the scavenging process in 2-stroke diesel engines. To address the lack of scavenging models, a CFD model has been developed and is used as the reference case. However, its large size requires a reduction. Two techniques have been tested depending on their fields of application: the NTF method and neural networks. They both appear highly appropriate, drastically reducing the model’s size (over 90% reduction) with a low relative error rate (under 10%). Furthermore, each method produces a reduced model which can be used in distinct specialized fields of application: the distribution of a quantity (mass fraction for example) in the cylinder at each time step (pseudo-dynamic model) or the qualification of scavenging at the end of the process (pseudo-static model).
Keywords: Diesel engine, Design optimization, Model reduction, Neural network, NTF algorithm, Scavenging.
5139 Sustainable Energy Production with Closed-Loop Methods: Evaluating the Influence of Power Plant Age on Production Efficiency and Environmental Impact
Authors: Bujar Ismaili, Bahti Ismajli, Venhar Ismaili, Skender Ramadani
Abstract:
In Kosovo, the problem with the electricity supply is significant, and supply does not meet the demands of consumers. Most of the energy is produced by older thermal power plants, which are regarded as big environmental polluters. Our experiment is based on producing electricity using a closed method that avoids environmental pollution by using waste, which is itself considered an environmental pollutant, as fuel. The experiment was carried out in the village of Godanc, municipality of Shtime, Kosovo. In the experiment, a production line was designed that produces electricity and central heating at the same time. The results show the benefit of electricity generation as well as heat release for heating, with minimal expense and with 0% release of gases into the atmosphere. During this experiment, coal, plastic, waste from wood processing, and agricultural wastes were used as raw materials. The method utilized in the experiment allows for the release of gas through pipes and filters during the top-to-bottom combustion of the raw material in the boiler, followed by gas filtration through waste from wood processing (sawdust). During this process, the final product, gas, is obtained. This gas passes through the carburetor, enabling the internal combustion engine and the generator to operate and produce electricity without releasing gases into the atmosphere. The results show that the system provides energy stability without environmental pollution from toxic substances and waste, as well as low production costs. The final results show that coal fuel yielded the most electricity and the highest heat release, followed by plastic waste, which also gave good results. The results obtained during these experiments prove that the current problems of lack of electricity and heating can be addressed at a lower cost while maintaining a clean environment and proper waste management.
Keywords: Energy, heating, atmosphere, waste management, gasification.
5138 Wavelet Enhanced CCA for Minimization of Ocular and Muscle Artifacts in EEG
Authors: B. S. Raghavendra, D. Narayana Dutt
Abstract:
Electroencephalogram (EEG) recordings are often contaminated with ocular and muscle artifacts. In this paper, canonical correlation analysis (CCA) is used as a blind source separation (BSS) technique (BSS-CCA) to decompose the artifact-contaminated EEG into component signals. We combine the BSS-CCA technique with a wavelet filtering approach to minimize both ocular and muscle artifacts simultaneously, and refer to the proposed method as wavelet enhanced BSS-CCA. In this approach, after careful visual inspection, the muscle artifact components are discarded and the ocular artifact components are subjected to wavelet filtering to retain high-frequency cerebral information, and then clean EEG is reconstructed. The performance of the proposed wavelet enhanced BSS-CCA method is tested on real EEG recordings contaminated with ocular and muscle artifacts, for which power spectral density is used as a quantitative measure. Our results suggest that the proposed hybrid approach minimizes ocular and muscle artifacts effectively, while minimally affecting the underlying cerebral activity in the EEG recordings.
Keywords: Blind source separation, Canonical correlation analysis, Electroencephalogram, Muscle artifact, Ocular artifact, Power spectrum, Wavelet threshold.
5137 Prediction of Dissolved Oxygen in Rivers Using a Wang-Mendel Method – Case Study of Au Sable River
Authors: Mahmoud R. Shaghaghian
Abstract:
The amount of dissolved oxygen in a river has a great direct effect on aquatic macroinvertebrates and indirectly influences the regional ecosystem. In this paper, dissolved oxygen in rivers is predicted by employing a simple fuzzy logic modeling approach, the Wang-Mendel method. This model just uses previous records to estimate upcoming values. For this purpose, daily and hourly records of eight stations in the Au Sable watershed in Michigan, United States are employed for periods of 12 years and 50 days, respectively. Calculations indicate that for long-period prediction it is better to increase the input intervals, but for filling missing data it is advisable to decrease the interval. Increasing the partitioning of input and output features has little influence on accuracy but makes the model too time-consuming. Increasing the number of input data acts similarly to increasing the number of partitions. A large amount of training data does not essentially modify accuracy, so an optimum training length should be selected.
Keywords: Dissolved oxygen, Au Sable, fuzzy logic modeling, Wang Mendel.
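A compact sketch of the Wang-Mendel procedure for one-step-ahead prediction from previous records is given below: triangular fuzzy partitions are placed over the variable range, one rule is generated per training pair, conflicting rules are resolved by rule degree, and prediction uses a firing-strength weighted average of rule consequents. The synthetic series stands in for dissolved oxygen records, and the number of partitions is an assumption.

import numpy as np

def tri_memberships(value, centers):
    # Membership of `value` in each triangular set centred at `centers`.
    mu = np.zeros(len(centers))
    step = centers[1] - centers[0]
    for i, c in enumerate(centers):
        mu[i] = max(0.0, 1.0 - abs(value - c) / step)
    return mu

def wang_mendel_train(x, y, centers):
    rules = {}                                    # antecedent region -> (consequent region, degree)
    for xv, yv in zip(x, y):
        mx, my = tri_memberships(xv, centers), tri_memberships(yv, centers)
        a, c = int(np.argmax(mx)), int(np.argmax(my))
        degree = mx[a] * my[c]
        if a not in rules or degree > rules[a][1]:
            rules[a] = (c, degree)                # keep the highest-degree rule per antecedent
    return rules

def wang_mendel_predict(xv, rules, centers):
    mu = tri_memberships(xv, centers)
    num = sum(mu[a] * centers[c] for a, (c, _) in rules.items())
    den = sum(mu[a] for a in rules)
    return num / den if den > 0 else float(np.mean(centers))

# synthetic stand-in "dissolved oxygen" series and a one-step-ahead dataset
rng = np.random.default_rng(0)
series = 8 + np.sin(np.linspace(0, 12, 400)) + rng.normal(0, 0.1, 400)
x, y = series[:-1], series[1:]
centers = np.linspace(series.min(), series.max(), 7)    # 7 fuzzy partitions (assumed)

rules = wang_mendel_train(x[:300], y[:300], centers)
preds = [wang_mendel_predict(v, rules, centers) for v in x[300:]]
print("mean absolute error:", np.mean(np.abs(np.array(preds) - y[300:])))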
5136 Thermoelastic Waves in Anisotropic Plates Using Normal Mode Expansion Method with Thermal Relaxation Time
Authors: K.L. Verma
Abstract:
An analysis of generalized thermoelastic Lamb waves, which propagate in anisotropic thin plates in generalized thermoelasticity, is presented employing the normal mode expansion method. The displacement and temperature fields are expressed by a summation of the symmetric and antisymmetric thermoelastic modes in an orthotropic plate free of surface thermal stresses and thermal gradients; therefore the theory is particularly appropriate for waveform analyses of Lamb waves in thin anisotropic plates. The transient waveforms excited by the thermoelastic expansion are analyzed for an orthotropic thin plate. The obtained results show that the theory provides a quantitative analysis to characterize the anisotropic thermoelastic stiffness properties of plates by wave detection. Finally, numerical calculations have been presented for a NaF crystal, and the dispersion curves for the lowest modes of the symmetric and antisymmetric vibrations are represented graphically at different values of the thermal relaxation time. The methods can, however, be used for other materials as well.
Keywords: Anisotropic, dispersion, frequency, normal, thermoelasticity, wave modes.