Search results for: total capacity algorithm
14767 Comparative Study on Structural Behaviour of Circular Hollow Steel Tubular, Concrete Filled Steel Tubular, and Reinforced Cement Concrete Stub Columns under Pure Axial Compression
Authors: Niladri Roy, M. Longshithung Patton
Abstract:
This paper studies the structural response of circular hollow steel tubular (HST), concrete filled steel tubular (CFST), and reinforced cement concrete (RCC) stub columns subjected to pure axial compression, and examines their comparative behaviour using finite element (FE) models. The FE results are also compared with the respective experimental results. The FE software package ABAQUS 6.14 was used for the parametric studies, in which a total of 108 FE models were built. The diameters of the HST, CFST, and RCC stub columns were kept at 100, 140, 180, and 220 mm, with the length-to-diameter ratio fixed at 3 to avoid end effects and flexural failure. To keep the same percentage of steel (by volume), the thicknesses of the steel tubes in the HST and CFST columns were varied in response to the change in diameter of the main reinforcement bars in the RCC columns. M25 grade concrete was used throughout. The objective is to compare the structural behaviour of HST, CFST, and RCC stub columns on the basis of their axial compressive load carrying capacity and failure modes. The studies show that filling the circular HST columns with concrete increases the ultimate load Pu of the resulting CFST columns by 2.97 times. It was also observed that Pu (HST) is about 0.72 times Pu (RCC) on average, and Pu (CFST) is about 2.08 times Pu (RCC) on average. The analysis and comparison show that CFST columns have a much higher load carrying capacity than HST and RCC columns and provide the same strength at a much smaller sectional size.
Keywords: HST columns, stub columns, CFST columns, RCC columns, finite element modeling, ABAQUS
Procedia PDF Downloads 100
14766 Effect of the Truss System to the Flexural Behavior of the External Reinforced Concrete Beams
Authors: Rudy Djamaluddin, Yasser Bachtiar, Rita Irmawati, Abd. Madjid Akkas, Rusdi Usman Latief
Abstract:
The aesthetic qualities and versatility of reinforced concrete have made it a popular choice for many architects and structural engineers. As a result, the exploitation of natural materials such as gravel, sand, and limestone for cement production keeps increasing, which inevitably affects the environment; concrete materials should therefore be used as efficiently as possible. By nature, concrete is strong in compression and weak in tension, so the contribution of the tensile stresses in the concrete to the flexural capacity of a beam is neglected. However, removing the concrete in the tension zone reduces the flexural capacity. Introducing the strut action of truss structures may be an alternative to compensate for this reduction. A series of specimens was prepared to clarify the effect of truss structures in concrete beams without concrete in the tension zone. Results indicated that the truss system is necessary for externally reinforced concrete beams. The truss-system beam without concrete in the tension zone (BR) developed almost the same capacity as the normal beam (BN). It was also observed that specimen BR exhibited fewer cracks than specimen BN, which may be caused by the absence of bond along the tensile reinforcement of specimen BR to distribute the cracks.
Keywords: external reinforcement, truss, concrete beams, flexural behavior
Procedia PDF Downloads 446
14765 Recursive Parametric Identification of a Doubly Fed Induction Generator-Based Wind Turbine
Authors: A. El Kachani, E. Chakir, A. Ait Laachir, A. Niaaniaa, J. Zerouaoui
Abstract:
This document presents an adaptive controller based on recursive parametric identification, applied to a wind turbine based on the doubly-fed induction generator (DFIG), to compensate for faults and guarantee efficient operation of the DFIG. The proposed adaptive controller is based on the recursive least squares algorithm, which takes the best estimate of the parameter vector to be the vector x minimizing a quadratic criterion. This method can improve the rapidity and precision of a model-based controller. The proposed controller is validated via simulation on a 5.5 kW DFIG-based wind turbine. The results obtained are good and show the advantages of an adaptive controller based on the recursive least squares algorithm.
Keywords: adaptive controller, recursive least squares algorithm, wind turbine, doubly fed induction generator
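The abstract names the standard recursive least squares (RLS) estimator but gives no implementation details. As a rough illustration of the technique (not the authors' controller), here is a minimal NumPy sketch of one RLS update with a forgetting factor, run on a toy identification problem:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least squares step: update the parameter estimate
    theta from regressor phi and measurement y (lam = forgetting factor)."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)   # gain vector
    err = y - float(phi.T @ theta)          # prediction error
    theta = theta + k * err                 # parameter update
    P = (P - k @ phi.T @ P) / lam           # covariance update
    return theta, P

# toy usage: identify y = 2*u1 - 0.5*u2 from noisy samples
rng = np.random.default_rng(0)
theta, P = np.zeros((2, 1)), 1e3 * np.eye(2)
for _ in range(200):
    phi = rng.normal(size=2)
    y = 2.0 * phi[0] - 0.5 * phi[1] + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, phi, y)
print(theta.ravel())  # converges to approximately [2.0, -0.5]
```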
Procedia PDF Downloads 288
14764 Space Telemetry Anomaly Detection Based On Statistical PCA Algorithm
Authors: Bassem Nassar, Wessam Hussein, Medhat Mokhtar
Abstract:
The crucial concern of satellite operations is to ensure the health and safety of satellites. The worst case in this respect is probably the loss of a mission, but the more common interruptions of satellite functionality can still compromise mission objectives. All the data acquired from the spacecraft are known as telemetry (TM), which contains a wealth of information related to the health of all its subsystems. Each item of information is contained in a telemetry parameter, which represents a time-variant property (i.e. a status or a measurement) to be checked. Consequently, TM monitoring systems are continuously improved to reduce the time required to respond to changes in a satellite's state of health: a fast picture of the current state of the satellite is very important in order to respond to occurring failures. Statistical multivariate latent techniques are among the vital learning tools used to tackle this problem coherently, although information extraction from such rich data sources is challenging due to the massive volume of data. To solve this problem, this paper presents an unsupervised learning algorithm based on the Principal Component Analysis (PCA) technique, applied to an actual remote sensing spacecraft. Data from the Attitude Determination and Control System (ADCS) were acquired under two operating conditions: normal and faulty states. Models were built and tested under these conditions, and the results show that the algorithm successfully differentiates between the two operating conditions. Furthermore, the algorithm provides competent prediction information and adds insight and physical interpretation to ADCS operation.
Keywords: space telemetry monitoring, multivariate analysis, PCA algorithm, space operations
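For readers unfamiliar with PCA-based monitoring, a common detection statistic is the squared prediction error (Q statistic) of a sample against the principal subspace learned from normal telemetry. A minimal sketch under that assumption (the abstract does not specify the exact statistic used):

```python
import numpy as np

def fit_pca(X_train, n_comp):
    """Fit PCA on normal telemetry (rows = samples, cols = parameters)."""
    mu = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    return mu, Vt[:n_comp]                  # mean and principal directions

def spe(X, mu, V):
    """Squared prediction error (Q statistic): residual energy after
    projecting onto the retained subspace; large values flag anomalies."""
    Xc = X - mu
    resid = Xc - (Xc @ V.T) @ V
    return (resid ** 2).sum(axis=1)

# usage: threshold SPE at, e.g., a high percentile of the training scores
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))        # stand-in for normal telemetry
mu, V = fit_pca(X_train, n_comp=3)
threshold = np.percentile(spe(X_train, mu, V), 99)
```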
Procedia PDF Downloads 415
14763 Numerical Investigation on Load Bearing Capacity of Pervious Concrete Piles as an Alternative to Granular Columns
Authors: Ashkan Shafee, Masoud Ghodrati, Ahmad Fahimifar
Abstract:
Pervious concrete combines considerable permeability with adequate strength, which makes it very beneficial in pavement construction and in ground improvement projects. In this paper, a single pervious concrete pile subjected to vertical and lateral loading is analysed using a verified three-dimensional finite element code. A parametric study was carried out to investigate the load bearing capacity of a single unreinforced pervious concrete pile in saturated soft soil and to gain insight into the failure mechanism of this rather new soil improvement technique. The results show that the concrete damaged plasticity constitutive model can accurately simulate the highly brittle nature of the pervious concrete material, and based on the computed vertical and horizontal load bearing capacities, some suggestions are made for ground improvement projects.
Keywords: concrete damaged plasticity, ground improvement, load-bearing capacity, pervious concrete pile
Procedia PDF Downloads 229
14762 Breaking Stress Criterion that Changes Everything We Know About Materials Failure
Authors: Ali Nour El Hajj
Abstract:
Background: The perennial deficiencies of failure models in the materials field have profoundly impacted all associated technical fields that depend on accurate failure predictions. Many preeminent scientists from an earlier era of groundbreaking discoveries attempted to solve the issue of material failure; however, a thorough understanding of material failure has remained frustratingly elusive. Objective: The heart of this study is the presentation of a methodology that identifies a newly derived one-parameter criterion as the only general failure theory for noncompressible, homogeneous, and isotropic materials subjected to multiaxial states of stress and various boundary conditions, providing the solution to this longstanding problem. This theory is the counterpart and companion piece to the theory of elasticity and is in a formalism suitable for broad application. Methods: Utilizing advanced finite-element analysis, the maximum internal breaking stress corresponding to the maximum applied external force is identified as a unified and universal material failure criterion for determining the structural capacity of any system, regardless of its geometry or architecture. Results: A comparison between the proposed criterion and methodology and current design codes reveals that current provisions may underestimate the structural capacity by 2.17 times or overestimate it by 2.096 times. It also shows that existing standards may underestimate the structural capacity by 1.4 times or overestimate it by 2.49 times. Conclusion: The proposed failure criterion and methodology will pave the way for a new era in designing unconventional structural systems composed of unconventional materials.
Keywords: failure criteria, strength theory, failure mechanics, materials mechanics, rock mechanics, concrete strength, finite-element analysis, mechanical engineering, aeronautical engineering, civil engineering
Procedia PDF Downloads 79
14761 A Fast Version of the Generalized Multi-Directional Radon Transform
Authors: Ines Elouedi, Atef Hammouda
Abstract:
This paper presents a new fast version of the generalized multi-directional Radon transform. The new method uses the inverse fast Fourier transform to compute the generalized Radon projections faster. We show that the fast algorithm gives almost the same results as the original one, but at a considerably lower computational cost. The projection result of the fast method is a parameterized Radon space in which a high-valued pixel indicates the detection of a curve in the original image. The proposed fast inversion algorithm yields an exact reconstruction of the initial image from the Radon space. We show examples of the impact of this algorithm on the pattern recognition domain.
Keywords: fast generalized multi-directional Radon transform, curve, exact reconstruction, pattern recognition
Procedia PDF Downloads 279
14760 Heuristic Search Algorithm (HSA) for Enhancing the Lifetime of Wireless Sensor Networks
Authors: Tripatjot S. Panag, J. S. Dhillon
Abstract:
The lifetime of a wireless sensor network can be effectively increased by scheduling operations. Once the sensors are randomly deployed, the task at hand is to find the largest number of disjoint sets of sensors such that every set provides complete coverage of the target area. At any instant, only one of these disjoint sets is switched on, while all the others are switched off. This paper proposes a heuristic search method to find the maximum number of disjoint sets that completely cover the region. A population of randomly initialized members explores the solution space, and a set of heuristics guides each member to a possible solution in its neighborhood; the heuristics accelerate the convergence of the algorithm. The best solution explored by the population is recorded and continuously updated. The proposed algorithm has been tested on applications that require sensing of multiple target points, referred to as point coverage applications. Results show that the proposed algorithm outperforms existing algorithms: it always finds the optimum solution, and with fewer fitness function evaluations than the existing approaches.
Keywords: coverage, disjoint sets, heuristic, lifetime, scheduling, wireless sensor networks, WSN
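The abstract does not describe the heuristics themselves. As a baseline illustration of the underlying problem (peeling off disjoint covers of the target set), here is a simple greedy sketch with sensor coverage given as sets of target points; the greedy ordering is an assumption, not the paper's method:

```python
def greedy_disjoint_covers(sensors, targets):
    """sensors: dict id -> set of targets it covers. Repeatedly peel off a
    disjoint group of sensors that together cover all targets; each group
    can then be scheduled as one active shift."""
    unused, covers = dict(sensors), []
    while True:
        chosen, uncovered = [], set(targets)
        for sid, cov in sorted(unused.items(), key=lambda kv: -len(kv[1])):
            if cov & uncovered:              # sensor still adds coverage
                chosen.append(sid)
                uncovered -= cov
            if not uncovered:
                break
        if uncovered:                        # no further full cover exists
            return covers
        covers.append(chosen)
        for sid in chosen:                   # keep the groups disjoint
            del unused[sid]

sensors = {1: {'a', 'b'}, 2: {'c'}, 3: {'a', 'c'}, 4: {'b'}}
print(greedy_disjoint_covers(sensors, {'a', 'b', 'c'}))  # [[1, 3]] here
```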
Procedia PDF Downloads 452
14759 The Behaviour of Laterally Loaded Piles Installed in the Sand with Enlarged Bases
Authors: J. Omer, H. Haroglu
Abstract:
Base enlargement in piles was devised to enhance pile resistance under downward loading, but the contribution of an enlarged base to the lateral load resistance of a pile has not been fully exploited or understood. This paper presents a laboratory investigation of the lateral capacity and deformation response of small-scale steel piles with enlarged bases installed in dry sand. Static loading tests were performed on 24 model piles having different base-to-shaft diameter ratios. The piles were installed in a box filled with dry sand, and lateral loads were applied to the pile tops using a pulley system. The test piles had shaft diameters of 20 mm, 16 mm, and 10 mm and base diameters of 900 mm, 700 mm, and 500 mm. As a control, a pile without base enlargement was tested to allow comparison with the enlarged-base piles. Incremental maintained loads were applied until pile failure was approached, while pile head deflections were recorded with high-precision dial gauges. The results showed that the lateral capacity increased with base diameter, albeit by different percentages depending on the shaft diameter and embedment length in the sand; the lateral capacity always increased with embedment length. It was also observed that an enlarged-base pile deflected less at a given load than the control pile. The research therefore demonstrated the lateral capacity and stability benefits of enlarging a pile base.
Keywords: pile foundations, enlarged base, lateral loading
Procedia PDF Downloads 155
14758 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks
Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo
Abstract:
In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in sync with the facility/product layout, as well as on the optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that optimizes system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located in several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model comprises two echelons. The first sets the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second deals with the assignment of products to workstations and flow racks, aiming to achieve maximal throughput of picked products over the entire system, given picker capacities and budget constraints. The solutions of the two echelons interact to balance the overall load in the flow racks and maximize overall efficiency. We have developed an operations research model within each echelon. In the first echelon, calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin. The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with a non-standard min-max criterion, in which the workload maximum is taken across all workstations in the center and the exterior minimum across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard; consequently, we develop heuristic and approximation solution algorithms based on exploiting and improving local optima. The LC model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm
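As a concrete illustration of the first-echelon formulation, here is the classic first-fit-decreasing heuristic for bin packing (workstations as bins of equal picker capacity). The loads, capacity, and tie-breaking below are illustrative assumptions, not taken from the paper:

```python
def first_fit_decreasing(loads, capacity):
    """Pack product loads into the fewest workstations (bins) of equal
    capacity: sort loads descending, place each in the first bin it fits."""
    bins = []                                   # remaining capacity per bin
    assignment = {}
    for item, load in sorted(loads.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(bins):
            if load <= free:
                bins[i] -= load
                assignment[item] = i
                break
        else:                                   # open a new workstation
            bins.append(capacity - load)
            assignment[item] = len(bins) - 1
    return len(bins), assignment

# toy instance: four product loads, picker capacity 10 -> 2 workstations
print(first_fit_decreasing({'A': 7, 'B': 5, 'C': 4, 'D': 3}, capacity=10))
```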
Procedia PDF Downloads 228
14757 A Hybrid Classical-Quantum Algorithm for Boundary Integral Equations of Scattering Theory
Authors: Damir Latypov
Abstract:
A hybrid classical-quantum algorithm for solving the boundary integral equations (BIE) arising in problems of electromagnetic and acoustic scattering is proposed. The quantum speed-up comes from a quantum linear system algorithm (QLSA). The original QLSA of Harrow et al. provides an exponential speed-up over the best-known classical algorithms, but only for sparse systems; due to the non-local nature of integral operators, the matrices arising from the discretization of BIEs are dense. A QLSA for dense matrices was introduced in 2017. Its runtime as a function of the system size N is bounded by O(√N polylog(N)), while the runtime of the best-known classical algorithm for an arbitrary dense matrix scales as O(N².³⁷³). Instead of the exponential speed-up available for sparse matrices, this is only a polynomial speed-up; nevertheless, the sufficiently high power of this polynomial, ~4.7, should make the QLSA an appealing alternative. Unfortunately for the QLSA, the asymptotic separability of the Green's function makes the BIE matrices highly compressible. Classical fast algorithms such as the multilevel fast multipole method (MLFMM) take advantage of this fact and reduce the runtime to O(N log(N)), so the QLSA is only quadratically faster than the MLFMM. To be truly impactful for computational electromagnetics and acoustics engineers, the QLSA must provide a more substantial advantage. We propose a computational scheme that combines elements of the classical fast algorithms with the QLSA to achieve the required performance.
Keywords: quantum linear system algorithm, boundary integral equations, dense matrices, electromagnetic scattering theory
Procedia PDF Downloads 155
14756 Comparison of Crossover Types to Obtain Optimal Queries Using Adaptive Genetic Algorithm
Authors: Wafa’ Alma'Aitah, Khaled Almakadmeh
Abstract:
This study presents an information retrieval system that uses a genetic algorithm to increase retrieval efficiency. Using the vector space model, information retrieval is based on the similarity between a query and the documents: documents with high similarity to the query are judged more relevant and should be retrieved first. Using genetic algorithms, each query is represented by a chromosome; these chromosomes are fed into the genetic operators of selection, crossover, and mutation until an optimized query chromosome is obtained for document retrieval. Results show that information retrieval with adaptive crossover probability, single-point crossover, and roulette wheel selection gives the highest recall. The proposed approach is verified using 242 proceedings abstracts collected from the Saudi Arabian national conference.
Keywords: genetic algorithm, information retrieval, optimal queries, crossover
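The two operators the study singles out, roulette wheel selection and single-point crossover, are standard and can be sketched directly. The weight-vector encoding of a query chromosome below is an assumption, since the abstract does not give the exact encoding:

```python
import random

def roulette(pop, fitness):
    """Roulette-wheel selection: pick a chromosome proportional to fitness."""
    total = sum(fitness)
    r, acc = random.uniform(0, total), 0.0
    for chrom, fit in zip(pop, fitness):
        acc += fit
        if acc >= r:
            return chrom
    return pop[-1]

def single_point_crossover(a, b):
    """Cut both parents at one random point and swap the tails."""
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

# chromosomes here are lists of query-term weights (a hypothetical encoding)
p1, p2 = [0.2, 0.9, 0.4, 0.7], [0.8, 0.1, 0.6, 0.3]
print(single_point_crossover(p1, p2))
```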
Procedia PDF Downloads 293
14755 Evaluation of Real-Time Background Subtraction Technique for Moving Object Detection Using Fast-Independent Component Analysis
Authors: Naoum Abderrahmane, Boumehed Meriem, Alshaqaqi Belal
Abstract:
Background subtraction is a widely used technique for detecting moving objects in video surveillance by extracting foreground objects from a reference background image. A good background subtraction algorithm faces many challenges, such as changes in illumination, dynamic backgrounds (swinging leaves, rain, snow), and changes in the background itself, for example vehicles that move and then stop. In this paper, we propose an efficient and accurate background subtraction method for moving object detection in video surveillance. The main idea is to use a developed fast independent component analysis (ICA) algorithm to separate the background, noise, and foreground masks from an image sequence in practical environments. The fast-ICA algorithm is adapted and adjusted with a matrix calculation and a search for an optimal non-quadratic function, making it faster and more robust. Moreover, to estimate the parameters of the de-mixing matrix and the denoising de-mixing matrix, we propose converting all images to the YCrCb color space, where the luma component Y (the brightness of the color) gives suitable results. The proposed technique has been verified on the publicly available CDnet 2012 and CDnet 2014 datasets, and experimental results show that our algorithm detects moving objects competently and accurately in challenging conditions, at a real-time frame rate, compared with other methods in the literature in terms of quantitative and qualitative evaluation.
Keywords: background subtraction, moving object detection, fast-ICA, de-mixing matrix
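The paper's fast-ICA variant is not specified in detail. As a generic stand-in (not the authors' implementation), scikit-learn's FastICA can unmix an image sequence into candidate background/foreground/noise layers; the sketch below works in grayscale, whereas the authors operate on the Y channel of YCrCb:

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_layers(frames, n_components=3):
    """frames: list of equally sized grayscale frames (2-D arrays).
    Treat each frame as a mixture of independent layers (background,
    foreground, noise) and unmix them with FastICA."""
    h, w = frames[0].shape
    X = np.stack([f.ravel() for f in frames], axis=1)   # (pixels, frames)
    ica = FastICA(n_components=n_components, random_state=0, max_iter=500)
    S = ica.fit_transform(X)                            # (pixels, components)
    return [S[:, k].reshape(h, w) for k in range(n_components)]

# toy usage: 5 synthetic frames; real input would be a video sequence
rng = np.random.default_rng(0)
frames = [rng.random((8, 8)) + 0.1 * k for k in range(5)]
layers = separate_layers(frames)
print(layers[0].shape)  # (8, 8)
```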
Procedia PDF Downloads 96
14754 Methaheuristic Bat Algorithm in Training of Feed-Forward Neural Network for Stock Price Prediction
Authors: Marjan Golmaryami, Marzieh Behzadi
Abstract:
Recent developments in stock exchanges highlight the need for an efficient and accurate method that helps stockholders make better decisions. Since stock markets fluctuate heavily over time and depend on many parameters, making good decisions is difficult. The purpose of this study is to employ an artificial neural network (ANN), which can deal with time series data and nonlinear relations among variables, to forecast the next day's stock price. Unlike other evolutionary algorithms previously utilized in stock exchange prediction, we trained our proposed neural network with the metaheuristic bat algorithm, which has fast and powerful convergence, and applied it to stock price prediction for the first time. To demonstrate the performance of the proposed method, this research selected a 7-year dataset of Parsian Bank stocks and, after data preprocessing, used three types of ANN (back propagation ANN, particle swarm optimization ANN, and bat ANN) to predict the closing price of the stocks. The three ANNs were simulated in MATLAB and scored by the mean absolute percentage error (MAPE). The results may be adapted to other companies' stocks too.
Keywords: artificial neural network (ANN), bat algorithm, particle swarm optimization algorithm (PSO), stock exchange
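A compact version of the bat algorithm applied to training a one-hidden-layer network can be sketched as follows. The network size, loudness, pulse rate, and the synthetic regression data are illustrative stand-ins for the paper's MATLAB setup, not its actual configuration:

```python
import numpy as np
rng = np.random.default_rng(1)

def mse(w, X, y, h=5):
    """Unpack flat vector w into a 1-hidden-layer net and return its MSE."""
    d = X.shape[1]
    W1 = w[:d*h].reshape(d, h); b1 = w[d*h:d*h+h]
    W2 = w[d*h+h:d*h+h+h];      b2 = w[-1]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.mean((pred - y) ** 2))

def bat_optimize(f, dim, n=20, iters=200, fmin=0.0, fmax=2.0):
    """Minimal bat algorithm: frequency-tuned velocities pull bats toward
    the best solution; loudness A and pulse rate r are fixed for brevity."""
    x = rng.uniform(-1, 1, (n, dim)); v = np.zeros((n, dim))
    fit = np.array([f(xi) for xi in x])
    best = x[fit.argmin()].copy(); A, r = 0.9, 0.5
    for _ in range(iters):
        for i in range(n):
            freq = fmin + (fmax - fmin) * rng.random()
            v[i] += (x[i] - best) * freq
            cand = x[i] + v[i]
            if rng.random() > r:                  # local walk around best
                cand = best + 0.01 * rng.normal(size=dim)
            fc = f(cand)
            if fc < fit[i] and rng.random() < A:  # accept, gated by loudness
                x[i], fit[i] = cand, fc
                if fc < f(best):
                    best = cand.copy()
    return best

# toy regression in place of the 7-year stock series: y = sin(x)
X = rng.uniform(-2, 2, (64, 1)); y = np.sin(X[:, 0])
w = bat_optimize(lambda w: mse(w, X, y), dim=1*5 + 5 + 5 + 1)
print("final MSE:", mse(w, X, y))
```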
Procedia PDF Downloads 548
14753 Classification Rule Discovery by Using Parallel Ant Colony Optimization
Authors: Waseem Shahzad, Ayesha Tahir Khan, Hamid Hussain Awan
Abstract:
The Ant-Miner algorithm, which belongs to the family of ACO algorithms, is used to extract knowledge from data in the form of rules. A variant of Ant-Miner named cAnt-MinerPB generates a list of rules using the Pittsburgh approach in order to maintain the interaction among the rules that are generated. In this paper, we propose a parallel Ant-MinerPB in which the ant colony optimization algorithm runs in parallel. In this technique, a data set is divided vertically (i.e., by attributes) into different subsets, which are created based on the correlation among attributes measured by mutual information (MI). Rules are generated in a parallel manner and then merged to form a final list of rules. The results show that the proposed technique achieves higher accuracy than the original cAnt-MinerPB, and the execution time is also reduced.
Keywords: ant colony optimization, parallel Ant-MinerPB, vertical partitioning, classification rule discovery
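The attribute-grouping step can be illustrated with a small sketch: compute pairwise mutual information between (discretized) attributes and greedily grow vertical partitions. The seeding and greedy rule below are assumptions; the paper does not spell out its exact grouping procedure:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mi_partitions(X, n_parts):
    """Group the columns (attributes) of X so that attributes sharing high
    mutual information land in the same vertical subset. X is assumed to
    hold integer-coded (discretized) attribute values."""
    n_attr = X.shape[1]
    mi = np.zeros((n_attr, n_attr))
    for i in range(n_attr):
        for j in range(i + 1, n_attr):
            mi[i, j] = mi[j, i] = mutual_info_score(X[:, i], X[:, j])
    order = list(np.argsort(-mi.sum(axis=1)))        # most-connected first
    parts = [[int(order.pop(0))] for _ in range(n_parts)]  # seed each subset
    for a in order:                                  # join the subset with
        best = max(range(n_parts),                   # the highest average MI
                   key=lambda p: mi[a, parts[p]].mean())
        parts[best].append(int(a))
    return parts

rng = np.random.default_rng(0)
X = rng.integers(0, 4, (200, 6))                     # toy discretized data
print(mi_partitions(X, 2))
```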
Procedia PDF Downloads 295
14752 Investigation of Effective Parameters on Pullout Capacity in Soil Nailing with Special Attention to International Design Codes
Authors: R. Ziaie Moayed, M. Mortezaee
Abstract:
An important and influential factor in the design and safety factor of soil nailing is the ultimate pullout capacity, or, in other words, the bond strength. This important parameter depends on several factors, such as the material and soil texture, the installation method, the excavation diameter, the friction angle between the nail and the soil, the grouting pressure, the nail depth (overburden pressure), the drilling angle, and the degree of saturation of the soil. The Federal Highway Administration (FHWA) guideline, a customary reference in nailing design, considers only the effect of the soil (or rock) type and the installation method in determining the bond strength, which results in uneconomical designs. The other codes are similar in kind, each leaving some of the parameters affecting bond resistance out of account. Therefore, the present paper first presents the relationships and tables given by several valid codes for estimating the ultimate pullout capacity, and then studies the effect of several important factors on it. It was determined that the effects of overburden pressure (in pressure grouting), soil dilatancy, and the roughness of the drilling surface on pullout strength are incremental, while the effect of the degree of soil saturation on pullout strength increases up to a certain degree of saturation and then decreases. It is therefore better to draw on nail pullout test results and numerical modeling to evaluate the effect of parameters such as overburden pressure, dilatancy, and degree of saturation, so as to reach an optimal and economical design.
Keywords: soil nailing, pullout capacity, Federal Highway Administration (FHWA), grout
Procedia PDF Downloads 152
14751 Comparison of Entropy Coefficient and Internal Resistance of Two (Used and Fresh) Cylindrical Commercial Lithium-Ion Battery (NCR18650) with Different Capacities
Authors: Sara Kamalisiahroudi, Zhang Jianbo, Bin Wu, Jun Huang, Laisuo Su
Abstract:
The temperature rise within a battery cell depends on the level of heat generation, the thermal properties, and the heat transfer around the cell. Temperature rise is a serious problem for lithium-ion batteries, and the internal resistance of the battery is the main cause of this heating, so the heat generation rate of the batteries is an important factor in battery pack design. The power delivered by a battery is directly related to its capacity; a decrease in battery capacity reflects the growth of the solid electrolyte interface (SEI) layer, formed by deposits of lithium from the electrolyte, which increases the internal resistance of the battery. In this study, two identical cylindrical lithium-ion (NCR18650) batteries from the same company with a noticeable difference in capacity (a fresh and a used battery) were compared, focusing on their heat generation parameters (entropy coefficient and internal resistance) according to the Bernardi model, using the potentiometric method for the entropy coefficient and the EIS method for the internal resistance. The results clarify the effect of the capacity difference on the electrical (R) and thermal (dU/dT) parameters of the cell, which can be very important for battery pack design and safety.
Keywords: heat generation, solid electrolyte interface (SEI), potentiometric method, entropy coefficient
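The potentiometric method amounts to resting the cell at a fixed state of charge at several temperatures, recording the open-circuit voltage (OCV), and taking the slope dU/dT. A minimal sketch with illustrative (not measured) numbers; the reversible-heat expression q_rev = I·T·(dU/dT) is the usual form, with sign depending on the current convention:

```python
import numpy as np

# OCV recorded at several temperatures at one fixed state of charge
T = np.array([10.0, 20.0, 30.0, 40.0]) + 273.15   # K (illustrative data)
U = np.array([3.6512, 3.6518, 3.6524, 3.6530])    # V (illustrative data)

dUdT, _ = np.polyfit(T, U, 1)                     # slope = entropy coefficient
print(f"dU/dT = {dUdT * 1e3:.3f} mV/K")

# reversible (entropic) heat at current I and operating temperature T_op
I_amp, T_op = 1.5, 298.15
print(f"q_rev ~ {I_amp * T_op * dUdT:.4f} W")
```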
Procedia PDF Downloads 473
14750 Optimization Analysis of a Concentric Tube Heat Exchanger with Field Synergy Principle
Abstract:
The paper investigates optimization of heat exchanger design, mainly with the response surface method and a genetic algorithm, to explore the relationship between the optimal fluid flow velocity and the temperature of the heat exchanger using the field synergy principle. First, the finite volume method is used to calculate the flow temperature and flow rate distributions for the numerical analysis. The most suitable fitted equations are then identified by response surface methodology. Furthermore, a genetic algorithm is applied to optimize the relationship between fluid flow velocity and flow temperature of the heat exchanger. The results show that the field synergy angle plays a vital role in the performance of the heat exchanger.
Keywords: optimization analysis, field synergy, heat exchanger, genetic algorithm
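The field synergy principle quantifies the local angle between the velocity vector and the temperature gradient; a smaller angle means better synergy and stronger convective heat transfer. A minimal sketch of that computation (the vectors are illustrative values):

```python
import numpy as np

def synergy_angle(u, grad_t):
    """Local field synergy angle beta between velocity u and temperature
    gradient grad_t: cos(beta) = (u . grad_t) / (|u| |grad_t|)."""
    cosb = np.dot(u, grad_t) / (np.linalg.norm(u) * np.linalg.norm(grad_t))
    return np.degrees(np.arccos(np.clip(cosb, -1.0, 1.0)))

print(synergy_angle(np.array([1.0, 0.2, 0.0]),
                    np.array([0.8, 0.1, 0.0])))   # angle in degrees
```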
Procedia PDF Downloads 307
14749 A Novel Breast Cancer Detection Algorithm Using Point Region Growing Segmentation and Pseudo-Zernike Moments
Authors: Aileen F. Wang
Abstract:
Mammography has been one of the most reliable methods for early detection and diagnosis of breast cancer. However, mammography misses about 17% and up to 30% of breast cancers due to the subtle and unstable appearance of breast cancer in its early stages. Recent computer-aided diagnosis (CADx) technology using Zernike moments has improved detection accuracy, but it has several drawbacks: it uses manual segmentation, Zernike moments are not robust, and it still has a relatively high false negative rate (FNR) of 17.6%. This project focuses on the development of a novel breast cancer detection algorithm that automatically segments the breast mass and further reduces the FNR. The algorithm consists of automatic segmentation of a single breast mass using point region growing segmentation, reconstruction of the segmented breast mass using pseudo-Zernike moments, and classification of the breast mass using the root mean square (RMS). A comparative study of the various algorithms for segmentation and reconstruction of breast masses was performed on randomly selected mammographic images. The results demonstrated that the newly developed algorithm is the best in terms of accuracy and cost-effectiveness. More importantly, the new RMS classifier has the lowest FNR, 6%.
Keywords: computer aided diagnosis, mammography, point region growing segmentation, pseudo-Zernike moments, root mean square
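Point region growing, the segmentation step named in the abstract, can be sketched generically as a flood fill constrained by an intensity tolerance around the growing region's mean; the 4-connectivity and tolerance below are assumptions, not the paper's settings:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=10.0):
    """Grow a region from a seed pixel, absorbing 4-connected neighbours
    whose intensity lies within tol of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    q = deque([seed]); mean = float(img[seed]); n = 1
    while q:
        y, x = q.popleft()
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - mean) <= tol):
                mask[ny, nx] = True
                mean = (mean * n + float(img[ny, nx])) / (n + 1); n += 1
                q.append((ny, nx))
    return mask

img = np.full((5, 5), 100.0); img[3:, 3:] = 200.0   # toy image
print(region_grow(img, (0, 0)).sum())               # 21 pixels grown
```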
Procedia PDF Downloads 453
14748 Faulty Sensors Detection in Planar Array Antenna Using Pelican Optimization Algorithm
Authors: Shafqat Ullah Khan, Ammar Nasir
Abstract:
Planar antenna arrays (PAA) used in radar, broadcasting, satellite, and sonar systems for target detection help provide instant beam pattern control. High flexibility and adaptability are achieved through multiple beam steering with a planar array, which is particularly needed in real-life scenarios where several high-directivity beams are required. Faulty sensors in planar arrays generate asymmetry, which leads to service degradation, radiation pattern distortion, and increased sidelobe levels. The pelican optimization algorithm (POA), a nature-inspired optimization algorithm, accurately determines faulty sensors within an array, enhancing the reliability and performance of planar array antennas, as shown through extensive simulations and experiments. The analysis was done for different types of faults in 7 x 7 and 8 x 8 planar arrays in MATLAB.
Keywords: planar antenna array, pelican optimization algorithm, faulty sensor, antenna arrays
Procedia PDF Downloads 81
14747 Algorithm for Quantification of Pulmonary Fibrosis in Chest X-Ray Exams
Authors: Marcela de Oliveira, Guilherme Giacomini, Allan Felipe Fattori Alves, Ana Luiza Menegatti Pavan, Maria Eugenia Dela Rosa, Fernando Antonio Bacchim Neto, Diana Rodrigues de Pina
Abstract:
It is estimated that each year one death every 10 seconds (about 2 million deaths) in the world is attributed to tuberculosis (TB). Even after effective treatment, TB leaves sequelae, such as pulmonary fibrosis, that compromise the quality of life of patients. Evaluations of these sequelae are usually performed subjectively by radiology specialists, and subjective evaluation is prone to inter- and intra-observer variation. X-ray examination is the most widely used diagnostic imaging method for monitoring patients diagnosed with TB, and the least costly for the institution. The application of computational algorithms is of utmost importance for a more objective quantification of pulmonary impairment in individuals with tuberculosis. The purpose of this research is to use computer algorithms to quantify pulmonary impairment pre- and post-treatment in patients with pulmonary TB. The x-ray images of 10 patients with a TB diagnosis confirmed by sputum smear examination were studied. First, the total lung area was segmented (posteroanterior and lateral views) and then restricted to the region compromised by the pulmonary sequelae. Through morphological operators and the application of a signal-to-noise tool, it was possible to determine the compromised lung volume. The largest pre- to post-treatment difference found was 85.85% and the smallest was 54.08%.
Keywords: algorithm, radiology, tuberculosis, x-rays exam
Procedia PDF Downloads 419
14746 Extracting the Antioxidant Compounds of Medicinal Plant Limoniastrum guyonianum
Authors: Assia Belfar, Mohamed Hadjadj, Messaouda Dakmouche, Zineb Ghiaba, Mahdi Belguidoum
Abstract:
Introduction: This study aims at phytochemical screening, extraction of the active compounds, and estimation of the antioxidant effectiveness of the desert medicinal plant Limoniastrum guyonianum (Zeïta) from southern Algeria. Methods: Total phenolic content and total flavonoid content were determined using the Folin-Ciocalteu and aluminum chloride colorimetric methods, respectively. The total antioxidant capacity was estimated by the following methods: DPPH (1,1-diphenyl-2-picrylhydrazyl radical) and the reducing power assay. Results: Phytochemical screening of the plant part reveals the presence of phenols, saponins, flavonoids, and tannins, while alkaloids and terpenoids were absent. The methanolic extract of L. guyonianum was extracted successively with ethyl acetate and butanol. Extraction yields varied widely, ranging from 1.315% to 4.218%, with the butanol fraction giving the highest yield. The higher content of phenols was recorded in the butanol fraction (311.81 ± 0.02 mg GAE/g DW), as was the higher content of flavonoids (9.58 ± 0.33 mg QE/g DW). The IC50 for inhibition of the DPPH radical in the ethyl acetate fraction was 0.05 ± 0.01 µg/ml, equal in effectiveness to BHT. All extracts showed good ferric reducing power activity, the highest being in the butanol fraction (16.16 ± 0.05 mM). Conclusions: This study demonstrated that the methanolic extract of L. guyonianum contains a considerable quantity of phenolic compounds and possesses good antioxidant activity. It can be used as an easily accessible source of natural antioxidants, as a possible food supplement, and in the pharmaceutical industry.
Keywords: flavonoid compound, L. guyonianum, medicinal plants, phenolic compounds, phytochemical screening
Procedia PDF Downloads 305
14745 Fasted and Postprandial Response of Serum Physiological Response, Hepatic Antioxidant Abilities and Hsp70 Expression in M. amblycephala Fed Different Dietary Carbohydrate
Authors: Chuanpeng Zhou
Abstract:
The effect of dietary carbohydrate (CHO) level on the serum physiological response, hepatic antioxidant abilities, and heat shock protein 70 (HSP70) expression of Wuchang bream (Megalobrama amblycephala) was studied. Two isonitrogenous (28.56% crude protein) and isolipidic (5.28% crude lipid) diets were formulated to contain 30% or 53% wheat starch. The diets were fed for 90 days to fish in triplicate tanks (28 fish per tank). At the end of the feeding trial, significantly higher serum triglyceride, insulin, cortisol, and malondialdehyde (MDA) levels were observed in fish fed the 53% CHO diet, while significantly lower serum total protein content, alkaline phosphatase (AKP) activity, superoxide dismutase (SOD) activity, and total antioxidative capacity (T-AOC) were found in fish fed the 53% CHO diet compared with those fed the 30% diet. The relative level of hepatic heat shock protein 70 mRNA was significantly higher in the 53% CHO group than in the 30% CHO group at 6, 12, and 48 h after feeding. The results of this study indicated that ingestion of 53% dietary CHO impaired the nonspecific immune ability of Megalobrama amblycephala and caused metabolic stress.
Keywords: Megalobrama amblycephala, carbohydrate, fasted and postprandial response, immunity, Hsp70
Procedia PDF Downloads 459
14744 Using LTE-Sim in New Handover Decision Algorithm for 2-Tier Macrocell-Femtocell LTE Network
Authors: Umar D. M., Aminu A. M., Izaddeen K. Y.
Abstract:
Deployments of miniature base stations, referred to as femtocells, improve the quality of service for indoor and outdoor users. Nevertheless, mobility management remains a key issue in their deployment. This paper addresses this issue, with an in-depth focus on the most important aspect of mobility management: handover. Making a handover decision in the LTE two-tier macrocell-femtocell network is a crucial research area. Decision algorithms in this research are classified and comparatively analyzed according to received signal strength, user equipment speed, cost function, and interference. It was observed that most of the discussed decision algorithms fail to consider cell selection with a hybrid access policy in a single-macrocell, multiple-femtocell scenario, and that a majority of them lack a user equipment residence-time parameter; omitting this parameter increases the number of unnecessary handovers. To deal with these issues, a handover decision algorithm is proposed that considers the user's velocity, the received signal strength, the residence time, and the femtocell base station's access policy. Simulation results show that the proposed algorithm reduces the number of unnecessary handovers compared to the conventional received-signal-strength-based handover decision algorithm.
Keywords: user-equipment, radio signal service, long term evolution, mobility management, handoff
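A rule-of-thumb version of such decision logic, combining the four inputs the paper names (signal strength, speed, residence time, access policy), might look as follows; all thresholds are illustrative assumptions, not the paper's values:

```python
def handover_decision(rss_serving, rss_target, speed, residence_est,
                      target_admits_user=True,
                      hysteresis=3.0, speed_max=15.0, t_min=4.0):
    """Hand over only if the target femtocell is sufficiently stronger
    (RSS margin in dB), the user is slow enough to benefit, the estimated
    residence time in the target cell is worthwhile (avoids ping-pong
    handovers), and the femtocell's access policy admits the user."""
    return (target_admits_user
            and rss_target > rss_serving + hysteresis   # dB margin
            and speed < speed_max                       # m/s
            and residence_est > t_min)                  # s

print(handover_decision(-95.0, -88.0, speed=3.0, residence_est=30.0))  # True
```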
Procedia PDF Downloads 125
14743 Design of Digital IIR Filter Using Opposition Learning and Artificial Bee Colony Algorithm
Authors: J. S. Dhillon, K. K. Dhaliwal
Abstract:
In almost all digital filtering applications, digital infinite impulse response (IIR) filters are preferred over finite impulse response (FIR) filters because they provide much better performance, lower computational cost, and smaller memory requirements for similar magnitude specifications. However, digital IIR filters are generally multimodal with respect to the filter coefficients, and therefore reliable methods that can provide globally optimal solutions are required. The artificial bee colony (ABC) algorithm is one such recently introduced metaheuristic optimization algorithm, but in some cases it searches the solution space insufficiently, resulting in a weak exchange of information, and hence fails to return better solutions. To overcome this deficiency, an opposition-based learning strategy is incorporated into ABC, and a modified version called the oppositional artificial bee colony (OABC) algorithm is proposed in this paper. Duplication of members is avoided during the run, which also augments the exploration ability. The developed algorithm is then applied to the design of optimal and stable digital IIR filter structures, where low-pass (LP) and high-pass (HP) filters are designed. Fuzzy theory is applied to maximize satisfaction of the minimum-magnitude-error and stability constraints. To check the effectiveness of OABC, the results are compared with some well-established filter design techniques, and it is observed that in most cases OABC returns better or at least comparable results.
Keywords: digital infinite impulse response filter, artificial bee colony optimization, opposition based learning, digital filter design, multi-parameter optimization
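Opposition-based learning, the key modification here, means also evaluating the "opposite" of each candidate solution within the search bounds and keeping the fitter half. A minimal sketch on a toy objective (the full ABC loop and the filter design details are omitted):

```python
import numpy as np

def opposite(x, lo, hi):
    """Opposition-based learning: the opposite of x within [lo, hi] is
    lo + hi - x; evaluating both x and its opposite doubles the chance
    of starting near a good region of the search space."""
    return lo + hi - x

rng = np.random.default_rng(3)
lo, hi = np.full(4, -1.0), np.full(4, 1.0)
pop = rng.uniform(lo, hi, (6, 4))                 # candidate food sources
opp = opposite(pop, lo, hi)                       # their opposites
sphere = lambda X: (X ** 2).sum(axis=1)           # toy objective to minimize
both = np.vstack([pop, opp])
keep = both[np.argsort(sphere(both))[:6]]         # retain the fittest half
print(sphere(keep).min())
```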
Procedia PDF Downloads 478
14742 Generation of Photo-Mosaic Images through Block Matching and Color Adjustment
Authors: Hae-Yeoun Lee
Abstract:
Mosaic refers to a technique that creates an image by assembling many small pieces of material in various colours. This paper presents an automatic algorithm that generates a photomosaic image from photos. The algorithm is composed of four steps: partition and feature extraction, block matching, redundancy removal, and colour adjustment. The input image is partitioned into small blocks from which features are extracted. Each block is matched to a similar photo in the database by comparing the Euclidean difference between blocks. The intensity of each block is adjusted to enhance the similarity of the image by replacing its light and dark values with those of the matched block. Further, the quality of the image is improved by minimizing the redundancy of tiles in adjacent blocks. Experimental results show that the proposed algorithm performs well in both quantitative and qualitative analysis.
Keywords: photomosaic, Euclidean distance, block matching, intensity adjustment
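The block matching and intensity adjustment steps can be sketched generically; comparing blocks by mean RGB is a simplifying assumption, since the abstract does not state the exact feature used:

```python
import numpy as np

def best_tile(block, tiles):
    """Pick the database tile with the smallest Euclidean distance to the
    block, compared here on mean RGB (a hypothetical feature choice)."""
    feat = block.reshape(-1, 3).mean(axis=0)
    dists = [np.linalg.norm(feat - t.reshape(-1, 3).mean(axis=0))
             for t in tiles]
    return int(np.argmin(dists))

def intensity_adjust(tile, block):
    """Shift the tile's brightness so its mean matches the block's."""
    return np.clip(tile + (block.mean() - tile.mean()), 0, 255)

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (16, 16, 3)).astype(float)
tiles = [rng.integers(0, 256, (16, 16, 3)).astype(float) for _ in range(5)]
print(best_tile(block, tiles))
```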
Procedia PDF Downloads 279
14741 Modification of RK Equation of State for Liquid and Vapor of Ammonia by Genetic Algorithm
Authors: S. Mousavian, F. Mousavian, V. Nikkhah Rashidabad
Abstract:
Cubic equations of state like the Redlich-Kwong (RK) EOS have proved to be very reliable tools for the prediction of phase behavior. Despite their good performance in compositional calculations, they usually suffer from weaknesses in the prediction of saturated liquid density. In this research, the RK equation was modified using a genetic algorithm. The results of this study show that the modified equation is in good agreement with experimental data.
Keywords: equation of state, modification, ammonia, genetic algorithm
Procedia PDF Downloads 382
14740 Implementation and Performance Analysis of Data Encryption Standard and RSA Algorithm with Image Steganography and Audio Steganography
Authors: S. C. Sharma, Ankit Gambhir, Rajeev Arya
Abstract:
In today's era, data security is an important concern and one of the most demanding issues, essential for people using online banking, e-shopping, reservations, etc. The two major techniques used for secure communication are cryptography and steganography. Cryptographic algorithms scramble the data so that an intruder will not be able to retrieve it, whereas steganography hides the data in a cover file so that the presence of communication itself is concealed. This paper presents the implementation of the Rivest-Shamir-Adleman (RSA) algorithm with image and audio steganography, and of the Data Encryption Standard (DES) algorithm with image and audio steganography. Both algorithms were coded in MATLAB, and it was observed that the combined techniques performed better than the individual techniques, alleviating the risk of unauthorized access up to a certain extent. These techniques could be used in banks, intelligence agencies (e.g., RAW), and other settings where highly confidential data is transferred. Finally, a tabular comparison of the two techniques is also given.
Keywords: audio steganography, data security, DES, image steganography, intruder, RSA, steganography
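The steganography stage is commonly implemented by least-significant-bit (LSB) embedding. A toy grayscale sketch follows; the ciphertext produced by RSA or DES would supply the payload bits, and this is not the paper's MATLAB code:

```python
import numpy as np

def embed_lsb(cover, payload_bits):
    """Hide payload bits in the least-significant bits of a grayscale
    cover image (uint8); one payload bit per leading pixel."""
    flat = cover.ravel().copy()
    n = len(payload_bits)
    flat[:n] = (flat[:n] & 0xFE) | payload_bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Read back the first n_bits least-significant bits."""
    return stego.ravel()[:n_bits] & 1

cover = np.random.default_rng(7).integers(0, 256, (8, 8), dtype=np.uint8)
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = embed_lsb(cover, bits)
assert (extract_lsb(stego, 8) == bits).all()
```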
Procedia PDF Downloads 290
14739 Analysis of Genomics Big Data in Cloud Computing Using Fuzzy Logic
Authors: Mohammad Vahed, Ana Sadeghitohidi, Majid Vahed, Hiroki Takahashi
Abstract:
In the genomics field, huge amounts of data are produced by next-generation sequencers (NGS). Data volumes are growing very rapidly: it is postulated that more than one billion bases will be produced per year in 2020, a growth rate much faster than Moore's law in computer technology. This makes it more difficult to deal with genomics data: storing the data, searching for information, and finding hidden information. An analysis platform for genomics big data is therefore required. Newly developed cloud computing enables us to deal with big data more efficiently, and Hadoop is one of the distributed computing frameworks that form the core of Big Data as a Service (BDaaS). Although many services (e.g., Amazon's) have adopted this technology, there are few applications in the biology field. Here, we propose a new algorithm to deal with genomics big data, e.g., sequencing data, more efficiently. Our algorithm consists of two parts: first, BDaaS is applied for handling the data more efficiently; second, a hybrid method of MapReduce and fuzzy logic is applied for data processing, a step that can be parallelized in implementation. Our algorithm has great potential in the computational analysis of genomics big data, e.g., de novo genome assembly and sequence similarity search. We discuss our algorithm and its feasibility.
Keywords: big data, fuzzy logic, MapReduce, Hadoop, cloud computing
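The MapReduce-plus-fuzzy-logic idea can be illustrated on a toy k-mer counting task: map reads to partial counts, reduce by merging them, then apply a fuzzy membership to the counts. Everything below (the k-mer task, thresholds, and membership shape) is an illustrative assumption, not the authors' pipeline:

```python
from functools import reduce
from collections import Counter

def mapper(read, k=4):
    """Map step: emit (k-mer, count) pairs from one sequencing read."""
    return Counter(read[i:i+k] for i in range(len(read) - k + 1))

def reducer(acc, part):
    """Reduce step: merge partial counts (associative, so the partials
    can be computed on different cluster nodes, as in Hadoop)."""
    acc.update(part)
    return acc

def fuzzy_frequent(count, lo=2, hi=8):
    """Toy fuzzy membership: degree to which a k-mer is 'frequent',
    rising linearly from 0 at lo to 1 at hi (illustrative thresholds)."""
    return min(1.0, max(0.0, (count - lo) / (hi - lo)))

reads = ["ACGTACGTGG", "CGTACGTACG", "TTACGTACGA"]
counts = reduce(reducer, map(mapper, reads), Counter())
print({kmer: round(fuzzy_frequent(c), 2)
       for kmer, c in counts.most_common(3)})
```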
Procedia PDF Downloads 299
14738 Iterative Method for Lung Tumor Localization in 4D CT
Authors: Sarah K. Hagi, Majdi Alnowaimi
Abstract:
In the last decade, there have been immense advancements in medical imaging modalities, which can now scan the whole lung volume in high-resolution images within a short time. With this performance, physicians can clearly identify the complicated anatomical and pathological structures of the lung, which opens large opportunities for all types of lung cancer treatment and should increase the survival rate. However, lung cancer is still one of the major causes of death, involving around 19% of all cancer patients. Several factors may affect the survival rate; one serious factor is the breathing process, which can affect the accuracy of diagnosis and of the lung tumor treatment plan. We have therefore developed a semi-automated algorithm to localize the 3D lung tumor position across all respiratory phases during respiratory motion. The algorithm can be divided into two stages. First, the lung tumor is segmented in the first phase of the 4D computed tomography (CT) data using an active contour method. Then, the 3D tumor position is localized across all subsequent phases using a 12-degree-of-freedom affine transformation. Two data sets were used in this study: a simulated 4D CT based on the extended cardiac-torso (XCAT) phantom, and clinical 4D CT data sets. The localization error is reported as the root mean square error (RMSE); the average error across the data sets is 0.94 ± 0.36 mm. Finally, an evaluation and quantitative comparison of the results with a state-of-the-art registration algorithm is presented. The results obtained from the proposed localization algorithm show promise for localizing a lung tumor in 4D CT data.
Keywords: automated algorithm, computed tomography, lung tumor, tumor localization
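A 12-degree-of-freedom affine transformation in 3D is simply a 3x3 matrix (9 parameters) plus a translation vector (3 parameters); applying an estimated transform to tumor points is then one line, as in this sketch with illustrative values:

```python
import numpy as np

def affine_12dof(points, A, t):
    """Apply a 3-D affine transform (9 matrix entries + 3 translations
    = 12 degrees of freedom) to an array of tumor points of shape (N, 3)."""
    return points @ A.T + t

pts = np.array([[10.0, 12.0, 5.0], [11.0, 12.5, 5.2]])   # mm, illustrative
A = np.eye(3) + 0.02 * np.random.default_rng(2).normal(size=(3, 3))
t = np.array([0.5, -0.3, 1.1])                           # mm shift per phase
print(affine_12dof(pts, A, t))
```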
Procedia PDF Downloads 602