Search results for: Particle Swarm Optimization algorithm

2226 Dynamic Stall Vortex Formation of OA-209 Airfoil at Low Reynolds Number

Authors: Aung Myo Thu, Sang Eon Jeon, Yung Hwan Byun, Soo Hyung Park

Abstract:

The unsteady flow field around an oscillating OA-209 airfoil at a Reynolds number of 3.5×10⁵ was investigated. Three different reduced frequencies were tested in order to see how they affect the hysteresis loop of the airfoil. At a reduced frequency of 0.05 the deep dynamic stall phenomenon was observed. Lift overshoot was observed as a result of dynamic stall vortex (DSV) shedding. Further investigation was carried out to find the cause of DSV formation and shedding over the airfoil. Particle image velocimetry (PIV) and CFD tools were used, and it was found that dynamic stall separation (DSS), which is distinct from leading edge separation (LES) and trailing edge separation (TES), triggered the dynamic stall vortex (DSV).

Keywords: Airfoil Flow, CFD, PIV, Dynamic Stall, Flow Separation.

2225 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling

Authors: M. Almutairi, S. Hadjiloucas

Abstract:

The harmonic distortion of voltage is important in relation to power quality due to the interaction between the large diffusion of non-linear and time-varying single-phase and three-phase loads and power supply systems. However, harmonic distortion levels can be reduced by improving the design of polluting loads or by applying corrective arrangements and adding filters. The application of passive filters is an effective solution for achieving harmonic mitigation, mainly because such filters offer high efficiency and simplicity and are economical. Additionally, their different possible frequency response characteristics can be used to achieve the required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single tuned passive filter that works best in distribution networks, in order to economically limit violations caused at a given point of common coupling (PCC). This article suggests that a single tuned passive filter could be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both the harmonic voltages and the harmonic currents in the power system to an acceptable level, and thus improve the load power factor. The optimization technique works to minimize the voltage total harmonic distortion (VTHD) and current total harmonic distortion (ITHD), while maintaining the power factor within a specified range. According to IEEE Standard 519, both indices are viewed as constraints for the optimal passive filter design problem. The performance of this technique is discussed using numerical examples taken from previous publications.
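
As a rough illustration of the constrained-sizing idea described above, the following Python sketch sizes a single tuned filter (decision variable: its reactive power Qc) by minimizing a weighted sum of VTHD and ITHD subject to a power-factor band. All numerical values, the simplified harmonic source/filter impedance model and the 0.95-0.99 power-factor band are assumptions made for the example, not data from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative per-unit system data (assumed, not from the paper)
V1 = 1.0             # fundamental PCC voltage, pu
P_load, Q_load = 1.0, 0.75   # load active / reactive power, pu
X_s = 0.05           # source (short-circuit) reactance at fundamental, pu
harmonics = np.array([5, 7, 11, 13])
I_h = np.array([0.20, 0.14, 0.09, 0.07])   # injected harmonic currents, pu of I1
h_t, Qf = 4.7, 40.0  # tuning harmonic and quality factor (assumed)

def filter_reactances(Qc):
    Xc = V1**2 / Qc * (h_t**2 / (h_t**2 - 1.0))   # capacitor reactance at fundamental
    Xl = Xc / h_t**2                              # series reactor reactance
    R = Xl * h_t / Qf                             # series resistance from quality factor
    return Xc, Xl, R

def distortion(Qc):
    Xc, Xl, R = filter_reactances(Qc)
    Zf = R + 1j * (harmonics * Xl - Xc / harmonics)   # filter impedance per harmonic
    Zs = 1j * harmonics * X_s                          # system impedance per harmonic
    Vh = np.abs(I_h * Zs * Zf / (Zs + Zf))             # harmonic voltages at the PCC
    Ih_sys = np.abs(I_h * Zf / (Zs + Zf))              # harmonic currents into the source
    vthd = np.sqrt(np.sum(Vh**2)) / V1
    ithd = np.sqrt(np.sum(Ih_sys**2))                  # relative to a 1 pu fundamental
    return vthd, ithd

def objective(x):
    vthd, ithd = distortion(x[0])
    return vthd + ithd                                 # equal weights, as an example

def pf_constraint(x):
    pf = P_load / np.hypot(P_load, Q_load - x[0])      # power factor after compensation
    return [pf - 0.95, 0.99 - pf]                      # keep PF within an assumed band

res = minimize(objective, x0=[0.5], bounds=[(0.05, 1.2)],
               constraints={"type": "ineq", "fun": pf_constraint})
print("Filter size Qc =", res.x[0], "pu; (VTHD, ITHD) =", distortion(res.x[0]))
```

In a real study the harmonic injections, system impedance and distortion limits would come from measurements and the IEEE 519 tables rather than the assumed values above.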

Keywords: Harmonics, passive filter, power factor, power quality.

2224 Game Theory Based Diligent Energy Utilization Algorithm for Routing in Wireless Sensor Network

Authors: X. Mercilin Raajini, R. Raja Kumar, P. Indumathi, V. Praveen

Abstract:

Many cluster-based routing protocols have been proposed in the field of wireless sensor networks, in which groups of nodes are formed into clusters. A cluster head is selected from among those nodes based on residual energy, coverage area and number of hops; that cluster head performs data gathering from the various sensor nodes and forwards the aggregated data to the base station or to a relay node (another cluster head), which forwards the packet along with its own data packet to the base station. Here a Game Theory based Diligent Energy Utilization Algorithm (GTDEA) for routing is proposed. In GTDEA, the cluster head selection is done with the help of game theory, a decision-making process, which selects a cluster head based on three parameters: residual energy (RE), Received Signal Strength Indicator (RSSI) and Packet Reception Rate (PRR). Finding a feasible path to the destination with minimum utilization of the available energy improves the network lifetime and is achieved by the proposed approach. In GTDEA, packets are forwarded towards the base station using an inter-cluster routing technique, in which each cluster head relays the aggregated data onward until it reaches the base station. Simulation results reveal that GTDEA improves the network performance in terms of throughput, lifetime, and power consumption.
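
As a hedged illustration of the cluster-head selection step described above, the sketch below ranks candidate nodes by a simple weighted payoff over RE, RSSI and PRR; the weights, the linear payoff form and the node values are assumptions, since the abstract does not give the exact game formulation.

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    re: float    # residual energy, normalised to [0, 1]
    rssi: float  # received signal strength, normalised to [0, 1]
    prr: float   # packet reception rate in [0, 1]

def payoff(n: Node, w_re=0.5, w_rssi=0.25, w_prr=0.25) -> float:
    # Linear payoff; a full game-theoretic formulation would also weigh the
    # cost of serving as cluster head before declaring the equilibrium choice.
    return w_re * n.re + w_rssi * n.rssi + w_prr * n.prr

def elect_cluster_head(cluster):
    # The node with the highest payoff "wins" the cluster-head game
    return max(cluster, key=payoff)

cluster = [Node(1, 0.9, 0.6, 0.80), Node(2, 0.7, 0.9, 0.90), Node(3, 0.4, 0.8, 0.95)]
print("Elected cluster head:", elect_cluster_head(cluster).node_id)
```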

Keywords: Cluster head, Energy utilization, Game Theory, LEACH, Sensor network.

2223 Properties of MWCNTs/PAN Nanofiber Sheet Prepared from Chemically Modified MWCNTs

Authors: M. Seneewong-Na-Ayuttaya, T. Pongprayoon

Abstract:

A nanofiber sheet of multiwall carbon nanotube (MWCNTs)/polyacrylonitrile (PAN) composites was fabricated from electrospun nanofibers. First, the surface of the MWCNTs was chemically modified, comparing two different techniques, admicellar polymerization and functionalization, to improve the dispersion and prevent aggregation in the PAN matrix. The modified MWCNTs were characterized by their dispersion in dimethylformamide (DMF) solvent, laser particle size analysis, and FT-Raman spectroscopy. Finally, the DSC, SEM and mechanical properties of the nanofiber sheet were examined. The results show that the mechanical properties of the nanofiber sheet prepared from admicellar polymerization-modified MWCNTs were higher than those of the others.

Keywords: Multiwall carbon nanotube, admicellar polymerization, functionalization, nanofiber sheet.

2222 Sperm Identification Using Elliptic Model and Tail Detection

Authors: Vahid Reza Nafisi, Mohammad Hasan Moradi, Mohammad Hosain Nasr-Esfahani

Abstract:

The conventional assessment of human semen is highly subjective, with considerable intra- and inter-laboratory variability. Computer-Assisted Sperm Analysis (CASA) systems provide a rapid and automated assessment of sperm characteristics, together with improved standardization and quality control. However, the outcome of CASA systems is sensitive to the method of experimentation. While conventional CASA systems use digital microscopes with phase-contrast accessories, producing higher-contrast images, we have used raw semen samples (no staining materials) and a regular light microscope, with a digital camera attached directly to its eyepiece, to ensure cost benefits and simple assembly of the system. However, since accurately finding the sperms in the semen image is the first step in the examination and analysis of the semen, any error in this step can affect the outcome of the analysis. This article introduces and explains an algorithm for finding sperms in low-contrast images: First, an image enhancement algorithm is applied to remove extra particles from the image. Then, the foreground particles (including sperms and round cells) are segmented from the background. Finally, based on certain features and criteria, sperms are separated from the other cells.
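
A minimal sketch of the three-stage pipeline (enhancement, segmentation, feature-based selection) is shown below using generic SciPy image operations; the thresholds, blob-size limits and elongation criterion are illustrative assumptions rather than the authors' actual parameters.

```python
import numpy as np
from scipy import ndimage

def find_sperm_candidates(gray, min_area=40, max_area=400, max_axis_ratio=3.0):
    # 1) enhancement: background flattening plus light denoising
    background = ndimage.uniform_filter(gray.astype(float), size=51)
    enhanced = ndimage.gaussian_filter(gray - background, sigma=1.0)
    # 2) segmentation: global threshold separates foreground particles
    mask = enhanced > enhanced.mean() + 2.0 * enhanced.std()
    labels, n = ndimage.label(mask)
    # 3) feature-based selection: keep blobs whose size and elongation match an
    #    elliptic head model, rejecting round cells and debris
    heads = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        area = ys.size
        if not (min_area <= area <= max_area):
            continue
        cov = np.cov(np.vstack([xs, ys]))
        eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
        ratio = np.sqrt(eigvals[0] / max(eigvals[1], 1e-9))
        if ratio <= max_axis_ratio:          # very elongated blobs (tails) handled separately
            heads.append((xs.mean(), ys.mean()))
    return heads

rng = np.random.default_rng(0)
print(len(find_sperm_candidates(rng.integers(0, 255, (256, 256)))))
```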

Keywords: Computer-Assisted Sperm Analysis (CASA), Sperm identification, Tail detection, Elliptic shape model.

2221 Integrating Computational Intelligence Techniques and Assessment Agents in E-Learning Environments

Authors: Konstantinos C. Giotopoulos, Christos E. Alexakos, Grigorios N. Beligiannis, Spiridon D. Likothanassis

Abstract:

In this contribution an innovative platform is presented that integrates intelligent agents and evolutionary computation techniques in legacy e-learning environments. It introduces the design and development of a scalable and interoperable integration platform supporting: I) various assessment agents for e-learning environments, II) a specific resource retrieval agent for the provision of additional information from Internet sources matching the needs and profile of the specific user, and III) a genetic algorithm (GA) designed to extract efficient information (classifying rules) based on the students' answering input data. The agents are implemented in order to provide intelligent assessment services based on computational intelligence techniques such as Bayesian Networks and Genetic Algorithms. The idea of using a GA to fulfil this difficult task came from the fact that GAs have been widely used in applications involving classification of unknown data. The utilization of new and emerging technologies like web services allows integrating the provided services into any web-based legacy e-learning environment.
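
The following sketch illustrates, under stated assumptions, how a GA can evolve classifying rules from answer data as item III describes: a chromosome encodes one rule ("don't care" or a required answer per question plus a class), and fitness rewards accuracy weighted by coverage. The encoding, operators and toy data are assumptions, not the platform's actual design.

```python
import random

random.seed(1)
N_QUESTIONS, CLASSES = 6, ("pass", "fail")
# toy answer data: each record is (tuple of 0/1 answers, class label)
data = [(tuple(random.randint(0, 1) for _ in range(N_QUESTIONS)),
         random.choice(CLASSES)) for _ in range(200)]

def random_rule():
    # one gene per question: None = "don't care", otherwise the required answer
    return ([random.choice([None, 0, 1]) for _ in range(N_QUESTIONS)],
            random.choice(CLASSES))

def fitness(rule):
    conds, label = rule
    covered = [(a, c) for a, c in data
               if all(g is None or a[i] == g for i, g in enumerate(conds))]
    if not covered:
        return 0.0
    correct = sum(c == label for _, c in covered)
    # accuracy on covered records, damped by coverage so trivial rules lose out
    return correct / len(covered) * (len(covered) / len(data)) ** 0.5

def mutate(rule):
    conds, label = list(rule[0]), rule[1]
    conds[random.randrange(N_QUESTIONS)] = random.choice([None, 0, 1])
    return (conds, label)

pop = [random_rule() for _ in range(50)]
for _ in range(30):                 # simple (mu + lambda) evolutionary loop
    pop = sorted(pop + [mutate(r) for r in pop], key=fitness, reverse=True)[:50]
print("best rule:", pop[0], "fitness:", round(fitness(pop[0]), 3))
```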

Keywords: Bayesian Networks, Computational Intelligence techniques, E-learning legacy systems, Service Oriented Integration, Intelligent Agents, Genetic Algorithms.

2220 Factors Affecting Aluminum Dissolution from Acidified Water Purification Sludge

Authors: Wen Po Cheng, Chi Hua Fu, Ping Hung Chen, Ruey Fang Yu

Abstract:

Recovering resources from water purification sludge (WPS) has gradually been stipulated in environmental protection laws and regulations in many nations. Hence, reusing WPS is becoming an important topic, and recovering alum from WPS is one of the many practical alternatives. Most previous research efforts have studied the amphoteric characteristic of aluminum hydroxide to find the optimum pH range for dissolving the Al(III) species from WPS, but discussion of the related reaction kinetics or mechanisms has been lacking. Therefore, in this investigation, the water purification sludge (WPS) solution was broken up by ultrasound to make the particle size of the reactants smaller and the specific surface area larger. According to the reaction kinetics, these phenomena increase the quantity of dissolved aluminum salt and accelerate the reaction rate.

Keywords: Aluminum, Acidification, Sludge, Recovery.

2219 An Experimental Procedure for Design and Construction of Monocopter and Its Control Using Optical and GPS-Aided AHRS Sensors

Authors: A. Safaee, M. S. Mehrabani, M. B. Menhaj, V. Mousavi, S. Z. Moussavi

Abstract:

The monocopter is a single-wing rotary flying vehicle which has the capability of hovering. This flying vehicle consists of two dynamic parts, and higher efficiency can be expected than from other micro UAVs due to the extended area of the wing compared to its fuselage. Low cost and a simple mechanism, in comparison to other vehicles such as the helicopter, are the most important characteristics of this flying vehicle. The previous paper discussed the introduction of the final system; in this paper, the experimental design process of the monocopter and its control algorithm are investigated in general, the editorial errors in the previous article are corrected, and some translational ambiguities are resolved. Initially, by constructing several prototypes and carrying out many flight tests, the main design parameters of this air vehicle were obtained by experimental measurement. Eventually the main monocopter required for this project was constructed. After construction of the monocopter, in order to design, implement and test the control algorithms, a simple optical system was first used for determining the heading angle. After numerous tests on the test stand, the control algorithm was designed and the timing of applying the control inputs was adjusted. Then the other control parameters of the system were tuned in flight tests. Eventually the final control system was designed and implemented using the AHRS sensor, and the final operational tests were performed successfully.

Keywords: Monocopter, Flap, Heading Angle, AHRS, Cyclic, Photo Diode.

2218 Properties of Cement Pastes with Different Particle Size Fractions of Metakaolin

Authors: M. Boháč, R. Novotný, F. Frajkorová, R. S. Yadav, T. Opravil, M. Palou

Abstract:

Properties of Portland cement mixtures with various fractions of metakaolin were studied. 10% of Portland cement CEM I 42.5 R was replaced by different fractions of high-reactivity metakaolin with defined chemical and mineralogical properties. The various fractions of metakaolin were prepared by a jet mill classifying system. There is a clear trend between the fineness of metakaolin and hydration heat development. Due to the presence of metakaolin in the mixtures, the compressive strength development of the mortars is somewhat slower for coarser fractions, but the 28-day flexural strengths are improved for all fractions of metakaolin used in the mixtures compared to the reference sample of pure Portland cement. The yield point, plastic viscosity and adhesion of the fresh pastes are considerably influenced by the fineness of the metakaolin used in the cement pastes.

Keywords: Calorimetry, cement, metakaolin fineness, rheology, strength.

2217 Laser Ultrasonic Imaging Based on Synthetic Aperture Focusing Technique Algorithm

Authors: Sundara Subramanian Karuppasamy, Che Hua Yang

Abstract:

In this work, the laser ultrasound technique has been used for analyzing and imaging inner defects in metal blocks. To detect defects in blocks, researchers have traditionally used piezoelectric transducers for the generation and reception of ultrasonic signals. These transducers can be configured into sparse and phased arrays, but both configurations have drawbacks, including the requirement for many transducers, time-consuming calculations, limited bandwidth, and confined image resolution. Here, we focus on a non-contact method for generating and receiving the ultrasound to examine inner defects in aluminum blocks. A Q-switched pulsed laser is used for generation, and reception is done using a Laser Doppler Vibrometer (LDV). Based on the Doppler effect, the LDV provides a rapid and high-spatial-resolution way of sensing ultrasonic waves. From the LDV, a series of scanning points is selected, which serve as the phased-array elements. A side-drilled hole of 10 mm diameter at a depth of 25 mm was introduced, and the defect is interrogated by the linear array of scanning points obtained from the LDV. With the aid of the Synthetic Aperture Focusing Technique (SAFT) algorithm, based on the time-shifting principle, the inspection images are generated from the A-scan data acquired from the 1-D linear phased-array elements. Thus the defect can be precisely detected with good resolution.
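
The time-shifting (delay-and-sum) principle behind SAFT can be sketched as below: for each image point, the two-way travel time to every scanning point selects the A-scan sample to be summed coherently. The sampling rate, wave speed, aperture and synthetic A-scans are assumptions for illustration only.

```python
import numpy as np

def saft(ascans, x_elems, fs, c, x_grid, z_grid):
    """ascans: (n_elems, n_samples); returns image of shape (len(z_grid), len(x_grid))."""
    image = np.zeros((len(z_grid), len(x_grid)))
    for iz, z in enumerate(z_grid):
        for ix, x in enumerate(x_grid):
            # two-way travel time from each scanning point to the image point and back
            t = 2.0 * np.hypot(x_elems - x, z) / c
            idx = np.round(t * fs).astype(int)
            valid = idx < ascans.shape[1]
            image[iz, ix] = ascans[valid, idx[valid]].sum()   # coherent summation
    return image

fs, c = 100e6, 6320.0                        # sampling rate [Hz], wave speed in Al [m/s]
x_elems = np.linspace(0, 20e-3, 41)          # 41 scanning points over a 20 mm aperture
ascans = np.random.default_rng(0).normal(size=(41, 4096))   # stand-in A-scan data
img = saft(ascans, x_elems, fs, c,
           x_grid=np.linspace(0, 20e-3, 64), z_grid=np.linspace(5e-3, 30e-3, 64))
print(img.shape)
```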

Keywords: Laser ultrasonics, linear phased array, nondestructive testing, synthetic aperture focusing technique, ultrasonic imaging.

2216 Effect of Cr and Fe Doping on the Structural and Optical Properties of ZnO Nanostructures

Authors: Prakash Chand, Anurag Gaur, Ashavani Kumar

Abstract:

In the present study, we have synthesized Cr- and Fe-doped zinc oxide (ZnO) nanostructures (Zn1−δCraFebO, where δ = a + b = 20%, with a = 5, 6, 8 and 10% and b = 15, 14, 12 and 10%) via the sol-gel method at different doping concentrations. The synthesized samples were characterized for structural properties by X-ray diffractometer and field emission scanning electron microscopy, and the optical properties were studied through photoluminescence and UV-visible spectroscopy. The particle size calculated through field emission scanning electron microscopy varies from 41 to 96 nm for the samples synthesized at different doping concentrations. The optical band gaps calculated through UV-visible spectroscopy are found to decrease from 3.27 to 3.02 eV as the doping concentration of Cr increases and that of Fe decreases.

Keywords: Nanostructures, Optical Properties, Sol-gel method.

2215 Optimization of Springback Prediction in U-Channel Process Using Response Surface Methodology

Authors: Muhamad Sani Buang, Shahrul Azam Abdullah, Juri Saedon

Abstract:

There is little effective guidance on the selection of design parameters for springback of advanced high-strength steel sheet metal in the U-channel cold forming process. This paper presents the development of a predictive model for springback in the U-channel process on advanced high-strength steel sheet employing Response Surface Methodology (RSM). The experiments were performed on dual-phase steel sheet, DP590, in the U-channel forming process, while a design of experiments (DoE) approach was used to investigate the effects of four factors, namely blank holder force (BHF), clearance (C), punch travel (Tp) and rolling direction (R), used as input parameters at two levels by applying a full factorial design (2⁴). A statistical analysis of variance (ANOVA) showed that blank holder force (BHF), clearance (C) and punch travel (Tp) have a significant effect on the springback of the flange angle (β2) and wall opening angle (β1), while the rolling direction (R) factor is insignificant. The significant parameters are optimized in order to reduce the springback behavior using a Central Composite Design (CCD) in RSM, and the optimum parameters were determined. A regression model for springback was developed. The effect of the individual parameters and their responses was also evaluated. The results obtained from the optimum model are in agreement with the experimental values.
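
A small sketch of the 2⁴ coded design matrix and a first-order model with two-factor interactions is given below; the factor names follow the abstract, but the response values are synthetic, so the fitted coefficients only illustrate the workflow, not the paper's results.

```python
import itertools
import numpy as np

factors = ["BHF", "C", "Tp", "R"]
design = np.array(list(itertools.product([-1, 1], repeat=4)))   # 16 coded runs
rng = np.random.default_rng(2)
# synthetic springback response: BHF, C and Tp matter, rolling direction R does not
beta_true = np.array([5.0, -0.8, 0.6, -0.5, 0.02])
y = beta_true[0] + design @ beta_true[1:] + rng.normal(0, 0.1, 16)

# model matrix: intercept + main effects + two-factor interactions
cols = [np.ones(16)] + [design[:, i] for i in range(4)]
names = ["1"] + factors
for i, j in itertools.combinations(range(4), 2):
    cols.append(design[:, i] * design[:, j])
    names.append(f"{factors[i]}*{factors[j]}")
X = np.column_stack(cols)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares regression coefficients
for n, b in zip(names, beta):
    print(f"{n:8s} {b:+.3f}")
```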

Keywords: Advanced high strength steel, U-channel process, Springback, Design of Experiment, Optimization, Response Surface Methodology (RSM).

2214 Transportation Under the Threat of Influenza

Authors: Yujun Zheng, Qin Song, Haihe Shi, and Jinyun Xue

Abstract:

There are a number of different cars for transferring hundreds of close contacts of swine influenza patients to hospital, and we need to carefully assign the passengers to those cars in order to minimize the risk of influenza spreading during transportation. The paper presents an approach to obtain the optimal solution of the relaxed problems in a straightforward way, and develops two iterative improvement algorithms to effectively tackle the general problem.
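
The flavour of such an iterative improvement algorithm can be sketched as follows: starting from a feasible assignment of contacts to cars, pairwise swaps are accepted whenever they reduce the total within-car transmission risk. The risk model and problem sizes are assumptions; the paper's actual objective and relaxation are not reproduced here.

```python
import random

random.seed(3)
N_PASSENGERS, N_CARS = 30, 6
# assumed pairwise transmission-risk values between close contacts
risk = [[random.random() if i != j else 0.0 for j in range(N_PASSENGERS)]
        for i in range(N_PASSENGERS)]

def cost(assign):
    """Total within-car pairwise risk (the quantity to be minimised)."""
    total = 0.0
    for car in range(N_CARS):
        members = [p for p, c in enumerate(assign) if c == car]
        total += sum(risk[a][b] for a in members for b in members if a < b)
    return total

assign = [p % N_CARS for p in range(N_PASSENGERS)]       # feasible, equal-sized start
best = cost(assign)
improved = True
while improved:                                           # iterative improvement by swaps
    improved = False
    for a in range(N_PASSENGERS):
        for b in range(a + 1, N_PASSENGERS):
            if assign[a] == assign[b]:
                continue
            assign[a], assign[b] = assign[b], assign[a]   # trial swap of the two cars
            c = cost(assign)
            if c < best - 1e-12:
                best, improved = c, True                  # keep the improving swap
            else:
                assign[a], assign[b] = assign[b], assign[a]   # revert
print("within-car risk after local search:", round(best, 3))
```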

Keywords: Influenza spread, discrete optimization, stationary point, iterative improvement

2213 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements

Authors: Alexander Buhr, Klaus Ehrenfried

Abstract:

Trackside-induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving-model experiments is lower compared to full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving-model rig. Small rectangular roughness shapes are used to obtain a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been carried out in the moving-model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness as well as the momentum thickness and the form factor are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by using larger roughness elements, especially when they are applied at a height close to the measuring plane. The roughness elements also cause large fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
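
For reference, the integral boundary-layer quantities mentioned above can be computed from a measured velocity profile as in the short sketch below; the profile used here is a synthetic 1/7-power law with assumed free-stream speed and thickness, standing in for the HS-PIV data.

```python
import numpy as np

def trap(f, x):                                   # simple trapezoidal integration
    return float(np.sum((f[:-1] + f[1:]) * np.diff(x) / 2.0))

y = np.linspace(0.0, 0.05, 200)                   # wall-normal coordinate [m]
U_inf = 70.0                                      # assumed free-stream speed [m/s]
delta = 0.03                                      # assumed boundary-layer thickness [m]
u = U_inf * np.clip(y / delta, 0, 1) ** (1 / 7)   # 1/7-power-law profile as a stand-in

ratio = u / U_inf
delta_star = trap(1.0 - ratio, y)                 # displacement thickness
theta = trap(ratio * (1.0 - ratio), y)            # momentum thickness
H = delta_star / theta                            # form (shape) factor, ~1.3 when turbulent
print(f"delta* = {delta_star*1e3:.2f} mm, theta = {theta*1e3:.2f} mm, H = {H:.2f}")
```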

Keywords: Boundary layer, high-speed PIV, ICE3, moving train model, roughness elements.

2212 Classification of Business Models of Italian Bancassurance by Balance Sheet Indicators

Authors: Andrea Bellucci, Martina Tofi

Abstract:

The aim of this paper is to analyze the business models of bancassurance in Italy for the life business. The life insurance business is very developed in the Italian market, and bank branches hold 80% of the market share. Given its maturity, the life insurance market needs to consolidate its organizational form to allow for the development of the non-life business, which nowadays collects few premiums but represents a great opportunity to enlarge the market share of bancassurance, using its strength in the distribution channel while the market share of independent agents is decreasing. Starting with the main business model of bancassurance for the life business, this paper analyzes the performance of life companies in the Italian market by balance sheet indicators and by the main discriminant variables of business models. The study observes trends from 2013 to 2015 for the Italian market by exploiting a database managed by the Associazione Nazionale delle Imprese di Assicurazione (ANIA). The applied approach is based on a bottom-up analysis, starting with variables and indicators to define the business models' classification. The statistical classification algorithm proposed by Ward is employed to design business model profiles. The results of the analysis are a representation of the main business models described by their profiles of indicators. In this way, an unsupervised analysis is developed that has the limitation of its judgmental dimension based on the researchers' opinion, but it makes it possible to obtain a design of effective business models.
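
A compact sketch of the Ward-based classification step is given below using standardized balance-sheet indicators; the companies, indicator choices and values are invented placeholders, and only the clustering procedure mirrors the approach described.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

companies = ["Co_A", "Co_B", "Co_C", "Co_D", "Co_E", "Co_F"]
# columns: premiums / total assets, expense ratio, ROE (illustrative indicators only)
X = np.array([[0.35, 0.12, 0.08],
              [0.33, 0.11, 0.07],
              [0.10, 0.25, 0.03],
              [0.12, 0.27, 0.02],
              [0.22, 0.18, 0.05],
              [0.21, 0.17, 0.06]])

Z = linkage(zscore(X, axis=0), method="ward")     # Ward minimum-variance linkage
labels = fcluster(Z, t=3, criterion="maxclust")   # cut into 3 business-model profiles
for name, lab in zip(companies, labels):
    print(name, "-> business model", lab)
```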

Keywords: Balance sheet indicators, Bancassurance, business models, Ward algorithm.

2211 Feature Based Unsupervised Intrusion Detection

Authors: Deeman Yousif Mahmood, Mohammed Abdullah Hussein

Abstract:

The goal of a network-based intrusion detection system is to classify the activities of network traffic into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS), using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps in improving the efficiency, performance and prediction rate of the proposed approach. This paper applies the unsupervised K-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we have used the new NSL-KDD dataset, which is a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification has been implemented (Normal, Attack). The Weka framework, a Java-based open-source software suite consisting of a collection of machine learning algorithms for data mining tasks, has been used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and it takes less learning time in comparison with using the full features of the dataset with the same algorithm.
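
The pipeline described above can be sketched as follows with information-gain-style feature ranking followed by two-cluster K-means; synthetic data stands in for NSL-KDD, and mutual information is used as the information-gain proxy, so the numbers are illustrative only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.cluster import KMeans

# synthetic stand-in for NSL-KDD: 41 features, binary normal/attack label
X, y = make_classification(n_samples=2000, n_features=41, n_informative=8,
                           random_state=0)
ig = mutual_info_classif(X, y, random_state=0)     # information-gain proxy per feature
top = np.argsort(ig)[::-1][:10]                    # keep the 10 highest-ranked features

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X[:, top])
# map each cluster to its majority class so the unsupervised labels can be scored
pred = np.zeros_like(km.labels_)
for c in (0, 1):
    members = km.labels_ == c
    pred[members] = np.bincount(y[members]).argmax()
print("detection accuracy:", (pred == y).mean())
```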

Keywords: Information Gain (IG), Intrusion Detection System (IDS), K-means Clustering, Weka.

2210 Rank-Based Chain-Mode Ensemble for Binary Classification

Authors: Chongya Song, Kang Yen, Alexander Pons, Jin Liu

Abstract:

In the field of machine learning, the ensemble has been employed as a common methodology to improve performance over multiple base classifiers. However, the true predictions are often canceled out by the false ones during consensus due to a phenomenon called the “curse of correlation”, which manifests as strong interference among the predictions produced by the base classifiers. In addition, existing practices are still not able to effectively mitigate the problem of imbalanced classification. Based on the analysis of our experimental results, we conclude that the two problems are caused by some inherent deficiencies in the consensus approach. Therefore, we create an enhanced ensemble algorithm which adopts a designed rank-based chain-mode consensus to overcome the two problems. In order to evaluate the proposed ensemble algorithm, we employ the well-known benchmark data set NSL-KDD (the improved version of the KDDCup99 dataset produced by the University of New Brunswick) to make comparisons between the proposed algorithm and 8 common ensemble algorithms. In particular, each compared ensemble classifier uses the same 22 base classifiers, so that the differences in terms of the improvements in accuracy and reliability over the base classifiers can be truly revealed. As a result, the proposed rank-based chain-mode consensus is shown to be a more effective ensemble solution than the traditional consensus approach, outperforming the 8 ensemble algorithms by 20% on almost all compared metrics, which include accuracy, precision, recall, F1-score and area under the receiver operating characteristic curve.

Keywords: Consensus, curse of correlation, imbalanced classification, rank-based chain-mode ensemble.

2209 Micro Environmental Concrete

Authors: M. Lanez, M. N. Oudjit, A. Bali

Abstract:

Reactive powder concretes (RPC) are characterized by particle diameters not exceeding 600 μm and by very high compressive and tensile strengths. This paper describes a new generation of micro concrete which has high initial, as well as final, physico-mechanical performance. To achieve this, we replaced the Portland cement (15% by weight) with materials rich in silica (slag and dune sand). The results obtained from tests carried out on the RPC show that the compressive and tensile strengths increase when the additions are included, thus improving the compactness of the mixtures via filler and pozzolanic effects. With a reduction of the aggregate phase in the RPC and the abundance of dune sand (southern Algeria) and slag (an industrial by-product of the blast furnace), the use of RPC will allow Algeria to fulfil economic as well as ecological requirements.

Keywords: High mechanical strength, Reactive Powder Concrete, rheology, superplasticizer, workability

2208 Measurements of Radial Velocity in Fixed Fluidized Bed for Fischer-Tropsch Synthesis Using LDV

Authors: Xiaolai Zhang, Haitao Zhang, Qiwen Sun, Weixin Qian, Weiyong Ying

Abstract:

The high-temperature Fischer-Tropsch synthesis process uses a fixed fluidized bed as the reactor. In order to better understand the flow behavior in the fluidized bed, research into how the radial velocity affects the entire flow field is necessary. Laser Doppler Velocimetry (LDV) was used to study the radial velocity distribution of the particles along the diameter of the bed cross-section in a fixed fluidized bed. The velocity in the cross-section fluctuates within a small range, and its direction is random. Except near r/R = 1, the axial velocity is more than six times the radial velocity, so the radial velocity has little impact on the axial velocity in a fixed fluidized bed.

Keywords: LDV, fixed fluidized bed, velocity, Fischer-Tropsch synthesis.

2207 Fast Approximate Bayesian Contextual Cold Start Learning (FAB-COST)

Authors: Jack R. McKenzie, Peter A. Appleby, Thomas House, Neil Walton

Abstract:

Cold-start is a notoriously difficult problem which can occur in recommendation systems, and arises when there is insufficient information to draw inferences for users or items. To address this challenge, a contextual bandit algorithm – the Fast Approximate Bayesian Contextual Cold Start Learning algorithm (FAB-COST) – is proposed, which is designed to provide improved accuracy compared to the traditionally used Laplace approximation in the logistic contextual bandit, while controlling both algorithmic complexity and computational cost. To this end, FAB-COST uses a combination of two moment-projection variational methods: Expectation Propagation (EP), which performs well at the cold start but becomes slow as the amount of data increases; and Assumed Density Filtering (ADF), which has slower growth of computational cost with data size but requires more data to obtain an acceptable level of accuracy. By switching from EP to ADF when the dataset becomes large, it is able to exploit their complementary strengths. The empirical justification for FAB-COST is presented, and it is systematically compared to other approaches on simulated data. In a benchmark against the Laplace approximation on real data consisting of over 670,000 impressions from autotrader.co.uk, FAB-COST demonstrates at one point an increase of over 16% in user clicks. On the basis of these results, it is argued that FAB-COST is likely to be an attractive approach to cold-start recommendation systems in a variety of contexts.

Keywords: Cold-start, expectation propagation, multi-armed bandits, Thompson sampling, variational inference.

2206 A Static Android Malware Detection Based on Actual Used Permissions Combination and API Calls

Authors: Xiaoqing Wang, Junfeng Wang, Xiaolan Zhu

Abstract:

The Android operating system has been recognized by most application developers because of its open-source nature and good compatibility, which greatly enrich the categories of applications. However, it has become the target of malware attackers due to the lack of strict security supervision mechanisms, which leads to the rapid growth of malware and thus brings serious safety hazards to users. Therefore, it is critical to detect Android malware effectively. Generally, the permissions declared in the AndroidManifest.xml reflect the function and behavior of the application to a large extent. Since the current Android system places no restrictions on the number of permissions that an application can request, developers tend to apply for more permissions than actually needed in order to ensure the successful running of the application, which results in the abuse of permissions. However, some traditional detection methods only consider the requested permissions and ignore whether they are actually used, which leads to the incorrect identification of some malware. Therefore, a machine learning detection method based on the actually used permission combinations and API calls is put forward in this paper. Several experiments are conducted to evaluate our methodology. The results show that it can detect unknown malware effectively with a higher true positive rate and accuracy while maintaining a low false positive rate. The AdaBoostM1 (J48) classification algorithm based on the information gain feature selection algorithm gives the best detection result, achieving an accuracy of 99.8%, a true positive rate of 99.6% and a lowest false positive rate of 0.
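
A hedged sketch of the learning stage is shown below: binary vectors of used permissions and API calls, information-gain feature selection, and a boosted decision-tree classifier. AdaBoost over a CART tree (scikit-learn ≥ 1.2 API) is used only as a stand-in for Weka's AdaBoostM1 with J48, and the data is synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# synthetic stand-in: rows = apps, columns = used-permission / API-call flags
X, y = make_classification(n_samples=3000, n_features=300, n_informative=25,
                           random_state=0)
X = (X > 0).astype(int)                             # binarise to used / not used
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

selector = SelectKBest(mutual_info_classif, k=60).fit(X_tr, y_tr)  # info-gain proxy
clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=3),
                         n_estimators=100, random_state=0)
clf.fit(selector.transform(X_tr), y_tr)
print("accuracy:", clf.score(selector.transform(X_te), y_te))
```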

Keywords: Android, permissions combination, API calls, machine learning.

2205 Image Transmission via Iterative Cellular-Turbo System

Authors: Ersin Gose, Kenan Buyukatak, Onur Osman, Osman N. Ucan

Abstract:

To compress 2D images while improving bit error performance and also enhancing them, a new scheme, called the Iterative Cellular-Turbo System (IC-TS), is introduced. In IC-TS, the original image is partitioned into 2^N quantization levels, where N denotes the number of bit planes. Each of the N bit-planes is then coded by a Turbo encoder and transmitted over an Additive White Gaussian Noise (AWGN) channel. At the receiver side, the bit-planes are re-assembled taking into consideration the neighborhood relationship of pixels in 2-D images. Each of the noisy bit-plane values of the image is evaluated iteratively using the IC-TS structure, which is composed of an equalization block, the Iterative Cellular Image Processing Algorithm (ICIPA) and a Turbo decoder. In IC-TS, there is an iterative feedback link between ICIPA and the Turbo decoder. ICIPA uses the mean and standard deviation of the estimated values of each pixel neighborhood. It yields highly satisfactory results in both Bit Error Rate (BER) and image enhancement performance for Signal-to-Noise Ratio (SNR) values below -1 dB, compared to the traditional turbo coding scheme and 2-D filtering applied separately. Compression can also be achieved by using IC-TS systems: less memory storage is used and the data rate is increased by up to N-1 times by simply choosing any number of bit slices, sacrificing resolution. Hence, it is concluded that the IC-TS system will be a promising approach for 2-D image transmission, recovery of noisy signals and image compression.
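
The bit-plane step at the heart of IC-TS can be sketched as below: an 8-bit image is split into N = 8 planes (each of which would be turbo-encoded and transmitted), reassembled by weighted summation, and optionally truncated to fewer planes for compression. Turbo coding, the AWGN channel and ICIPA are omitted from this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in 8-bit image

N = 8
planes = [((img >> b) & 1).astype(np.uint8) for b in range(N)]   # LSB ... MSB planes

# reassembly at the receiver (here without channel noise): weighted sum of planes
recon = sum((p.astype(np.uint16) << b) for b, p in enumerate(planes)).astype(np.uint8)
assert np.array_equal(recon, img)

# dropping the two least significant planes trades resolution for rate, as in the
# compression mode described above
coarse = sum((p.astype(np.uint16) << b) for b, p in enumerate(planes) if b >= 2)
print("max absolute error with 6 of 8 planes:",
      int(np.abs(img.astype(int) - coarse).max()))
```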

Keywords: Iterative Cellular Image Processing Algorithm (ICIPA), Turbo Coding, Iterative Cellular Turbo System (IC-TS), Image Compression.

2204 EZW Coding System with Artificial Neural Networks

Authors: Saudagar Abdul Khader Jilani, Syed Abdul Sattar

Abstract:

Image compression plays a vital role in today's communication. The limitation in allocated bandwidth leads to slower communication, so to improve the rate of transmission within the limited bandwidth, the image data must be compressed before transmission. Basically there are two types of compression: 1) lossy compression and 2) lossless compression. Although lossy compression gives more compression than lossless compression, the accuracy of retrieval is lower for lossy compression than for lossless compression. The JPEG and JPEG2000 image compression systems follow Huffman coding for image compression. The JPEG2000 coding system uses the wavelet transform, which decomposes the image into different levels, where the coefficients in each sub-band are uncorrelated with the coefficients of the other sub-bands. Embedded Zerotree Wavelet (EZW) coding exploits the multi-resolution properties of the wavelet transform to give a computationally simple algorithm with better performance compared to existing wavelet transforms. For further improvement of compression applications, other coding methods have recently been suggested. An ANN-based approach is one such method. Artificial Neural Networks have been applied to many problems in image processing and have demonstrated their superiority over classical methods when dealing with noisy or incomplete data in image compression applications. A performance analysis on different images is presented, together with an analysis of the EZW coding system with the error backpropagation algorithm. The implementation and analysis show approximately 30% more accuracy in the retrieved image compared to the existing EZW coding system.

Keywords: Accuracy, Compression, EZW, JPEG2000, Performance.

2203 Unveiling the Mathematical Essence of Machine Learning: A Comprehensive Exploration

Authors: Randhir Singh Baghel

Abstract:

In this study, the fundamental ideas guiding the dynamic area of machine learning, where models thrive and algorithms evolve over time, are traced back to their innate mathematical roots. The study explores the fundamental ideas that drive the development of intelligent systems, shedding light on the mutually beneficial link between mathematics and machine learning.

Keywords: Machine Learning, deep learning, Neural Network, optimization.

2202 Combination of Geological, Geophysical and Reservoir Engineering Analyses in Field Development: A Case Study

Authors: Atif Zafar, Fan Haijun

Abstract:

A sequence of different reservoir engineering methods and tools for reservoir characterization and field development is presented in this paper. The real data of the Jin Gas Field of the L-Basin of Pakistan are used. The basic concept behind this work is to highlight the importance of well test analysis in a broader sense (i.e., reservoir characterization and field development) rather than just determining the permeability and skin parameters. Normally, for reservoir characterization we rely on well test analysis to some extent, but for field development planning, well test analysis has become a forgotten tool, specifically for locating new development wells. This paper describes the successful implementation of well test analysis in the Jin Gas Field, where the main uncertainties were identified during the initial stage of field development, when the location of the new development well had been marked only on the basis of G&G (geologic and geophysical) data. The seismic interpretation could not detect one of the boundaries (a fault, sub-seismic fault or heterogeneity) near the main and only producing well of the Jin Gas Field, whereas the results of the model from the well test analysis played a crucial role in proposing the location of the second well of the newly discovered field. The results from different methods of well test analysis of the Jin Gas Field are also integrated with, and supported by, other reservoir engineering tools, i.e. the material balance method and the volumetric method. In this way, a comprehensive workflow and algorithm are obtained for integrating the well test analyses with the geological and geophysical analyses for reservoir characterization and field development. On the basis of this workflow and algorithm, it was shown that the proposed location of the new development well was not justified and that it should instead be located in a direction other than the south.
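
As a worked illustration of the material balance method cited as a supporting tool, the sketch below fits the classical p/Z straight line to estimate original gas in place; all pressures, Z-factors and cumulative production figures are assumed values, not Jin Gas Field data.

```python
import numpy as np

# assumed depletion history of a volumetric dry-gas reservoir
Gp = np.array([0.0, 2.1, 4.0, 6.2, 8.5])            # cumulative production, Bscf
p = np.array([3200., 3010., 2840., 2640., 2430.])   # reservoir pressure, psia
Z = np.array([0.91, 0.90, 0.89, 0.88, 0.87])        # gas deviation factor at each pressure

# material balance: p/Z = (p_i/Z_i) * (1 - Gp/G), a straight line in Gp
p_over_z = p / Z
slope, intercept = np.polyfit(Gp, p_over_z, 1)
G = -intercept / slope                               # p/Z reaches 0 when Gp = G (OGIP)
print(f"estimated original gas in place ~ {G:.1f} Bscf")
```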

Keywords: Field development, reservoir characterization, reservoir engineering, well test analysis.

2201 Enhanced Planar Pattern Tracking for an Outdoor Augmented Reality System

Authors: L. Yu, W. K. Li, S. K. Ong, A. Y. C. Nee

Abstract:

In this paper, a scalable augmented reality framework for handheld devices is presented. The framework is enabled by a server-client data communication structure, in which the search for tracking targets among a database of images is performed on the server side, while pixel-wise 3D tracking is performed on the client side, which, in this case, is a handheld mobile device. Image search on the server side adopts a residual-enhanced image descriptor representation that gives the framework its scalability. The tracking algorithm on the client side is based on a gravity-aligned feature descriptor, which takes advantage of a sensor-equipped mobile device, and an optimized intensity-based image alignment approach that ensures the accuracy of 3D tracking. Automatic content streaming is achieved by using a key-frame selection algorithm, client working-phase monitoring, and standardized rules for content communication between the server and the client. A recognition accuracy test performed on a standard dataset shows that the method adopted in the presented framework outperforms the Bag-of-Words (BoW) method that has been used in some previous systems. Experimental tests conducted on a set of video sequences indicate real-time performance of the tracking system, with a frame rate of 15-30 frames per second. The presented framework is shown to be functional in practical situations with a demonstration application on a campus walk-around.

Keywords: Augmented reality framework, server-client model, vision-based tracking, image search.

2200 Optimization Approach on Flapping Aerodynamic Characteristics of Corrugated Airfoil

Authors: Wei-Hsin Sun, Jr-Ming Miao, Chang-Hsien Tai, Chien-Chun Hung

Abstract:

The development of biomimetic micro aerial vehicles (MAVs) with flapping wings is a future trend in the military and domestic fields. The successful flight of MAVs is strongly related to understanding the unsteady aerodynamic performance of low-Reynolds-number airfoils under dynamic flapping motion. This study explored the effects of flapping frequency, stroke amplitude, and the inclined angle of the stroke plane on the lift force and thrust force of a bio-inspired corrugated airfoil using a 3³ full factorial design of experiments and ANOVA analysis. Unsteady vorticity flows over a corrugated thin airfoil executing flapping motion are computed with the time-dependent two-dimensional laminar incompressible Reynolds-averaged Navier-Stokes equations on a conformal hybrid mesh. The tested freestream Reynolds number, based on the chord length of the airfoil as the characteristic length, is fixed at 10³. The dynamic mesh technique is applied to model the flapping motion of the corrugated airfoil. Instantaneous vorticity contours over a complete flapping cycle clearly reveal that the flow mechanisms for lift force generation are dynamic stall, rotational circulation, and wake capture. The thrust force is produced as the leading-edge vortex sheds from the trailing edge of the airfoil to form a reverse von Karman vortex street. Results also indicated that the inclined angle is the most significant factor for both the lift force and the thrust force. There are strong interactions between the tested factors, which means an optimization study of the parameters should be conducted in further runs.
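
The 3³ full factorial treatment described above can be sketched as follows, computing a main-effect sum of squares for each factor from the 27 runs; the response values are synthetic placeholders for the CFD results, constructed so that the inclined angle dominates, as the abstract reports.

```python
import itertools
import numpy as np

levels = [-1, 0, 1]
design = np.array(list(itertools.product(levels, repeat=3)))    # 27 coded runs
rng = np.random.default_rng(4)
# synthetic mean-lift response (frequency f, amplitude A, inclined angle beta)
lift = (0.3 * design[:, 0] + 0.2 * design[:, 1] + 1.0 * design[:, 2]
        + 0.15 * design[:, 0] * design[:, 2] + rng.normal(0, 0.05, 27))

grand = lift.mean()
for name, col in zip(["frequency", "amplitude", "inclined angle"], design.T):
    # main-effect sum of squares: 9 runs at each level of a factor
    ss = sum(9 * (lift[col == lv].mean() - grand) ** 2 for lv in levels)
    print(f"SS_{name:15s} = {ss:6.2f}")   # larger sum of squares = stronger factor
```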

Keywords: biomimetic, MAVs, aerodynamic, ANOVA analysis.

2199 Optimization of Quercus cerris Bark Liquefaction

Authors: Luísa P. Cruz-Lopes, Hugo Costa e Silva, Idalina Domingos, José Ferreira, Luís Teixeira de Lemos, Bruno Esteves

Abstract:

The liquefaction of cork-based tree barks has attracted increasing interest due to its potential for innovation in the lumber and wood industries. In this particular study the bark of Quercus cerris (Turkish oak) is used due to its appreciable amount of cork tissue, although of inferior quality compared to the cork provided by other Quercus trees. This study aims to optimize the alkaline-catalysis liquefaction conditions with regard to several parameters. To better understand the chemical characteristics of the bark of Quercus cerris, a complete chemical analysis was performed. The liquefaction process was performed in a double-jacket reactor heated with oil, using glycerol and a mixture of glycerol/ethylene glycol as solvents and potassium hydroxide as the catalyst, and varying the temperature, liquefaction time and granulometry. Due to the low liquefaction efficiency resulting from the first experimental procedures, a study was made of different washing techniques after the filtration process, using methanol and methanol/water. The chemical analysis showed that the bark of Quercus cerris is mostly composed of suberin (ca. 30%) and lignin (ca. 24%), as well as hemicelluloses insoluble in hot water (ca. 23%). In the liquefaction stage, the conditions that led to the highest yields were a methanol/ethylene glycol mixture as the reagent and a time and temperature of 120 minutes and 200 ºC, respectively. It is concluded that using a granulometry of <80 mesh leads to better results, even if this parameter barely influences the liquefaction efficiency. Regarding the filtration stage, washing the residue with methanol and then distilled water leads to a considerable increase in the final liquefaction percentages, which shows that this procedure is effective at recovering the liquefied suberin and lignocellulosic fractions.

Keywords: Liquefaction, alkaline catalysis, optimization, Quercus cerris bark.

2198 Synthesis of Bimetallic Fe/Cu Nanoparticles with Different Copper Loading Ratios

Authors: May Thant Zin, Josephine Borja, Hirofumi Hinode, Winarto Kurniawan

Abstract:

Nanotechnology offers multiple and enormous advantages across applications. This research was therefore carried out to synthesize and characterize bimetallic iron/copper nanoparticles. After synthesizing nano zero-valent iron by reduction of ferric chloride with sodium borohydride under a nitrogen purging environment, bimetallic iron/copper nanoparticles were synthesized by varying the loading of copper chloride. Due to the different standard potential (E°) values of copper and iron, copper couples with iron at Cu-to-Fe ratios of 1:5, 1:6.7, 1:10 and 1:20. It is found that the resulting bimetallic Fe/Cu nanoparticles comprise phases of iron and copper. According to the diffraction patterns, which indicate the state of chemical combination of the bimetallic nanoparticles, the particles are well combined and the crystallite sizes are less than 1000 Å (100 nm). Specifically, the particle sizes of the synthesized bimetallic Fe/Cu nanoparticles range from 44.583 nm to 85.149 nm.

Keywords: Bimetallic Fe/Cu nanoparticles, Loading ratio, Synthesis.

2197 Evolutionary Approach for Automated Discovery of Censored Production Rules

Authors: Kamal K. Bharadwaj, Basheer M. Al-Maqaleh

Abstract:

In the recent past, there has been an increasing interest in applying evolutionary methods to Knowledge Discovery in Databases (KDD), and a number of successful applications of Genetic Algorithms (GA) and Genetic Programming (GP) to KDD have been demonstrated. The most predominant representation of the discovered knowledge is the standard Production Rule (PR) in the form If P Then D. PRs, however, are unable to handle exceptions and do not exhibit variable precision. Censored Production Rules (CPRs), an extension of PRs proposed by Michalski and Winston, exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form: If P Then D Unless C, where C (the censor) is an exception to the rule. Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. By using a rule of this type, we are free to ignore the exception condition when the resources needed to establish its presence are scarce or there is simply no information available as to whether it holds. Thus, the 'If P Then D' part of the CPR expresses important information, while the Unless C part acts only as a switch that changes the polarity of D to ~D. This paper presents a classification algorithm based on an evolutionary approach that discovers comprehensible rules with exceptions in the form of CPRs. The proposed approach has a flexible chromosome encoding, where each chromosome corresponds to a CPR. Appropriate genetic operators are suggested, and a fitness function is proposed that incorporates the basic constraints on CPRs. Experimental results are presented to demonstrate the performance of the proposed algorithm.
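
The CPR form 'If P Then D Unless C' can be illustrated with the small sketch below, which also shows the variable-precision behaviour: when resources are tight the censor check can be skipped and the rule degrades gracefully to a plain PR. The attribute names and the example rule are invented; only the rule structure follows the cited formulation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

Record = Dict[str, object]

@dataclass
class CPR:
    premise: Callable[[Record], bool]          # P
    decision: str                              # D
    censor: Callable[[Record], bool]           # C (exception)

    def infer(self, rec: Record, check_censor: bool = True) -> Optional[str]:
        """Return D, 'not ' + D when the censor fires, or None if P fails.
        With check_censor=False the rule acts as a plain PR (cheaper, less precise)."""
        if not self.premise(rec):
            return None
        if check_censor and self.censor(rec):
            return "not " + self.decision       # censor switches the polarity of D
        return self.decision

rule = CPR(premise=lambda r: r["has_feathers"],            # If P
           decision="can_fly",                             # Then D
           censor=lambda r: r.get("is_penguin", False))    # Unless C

print(rule.infer({"has_feathers": True}))                        # -> can_fly
print(rule.infer({"has_feathers": True, "is_penguin": True}))    # -> not can_fly
print(rule.infer({"has_feathers": True, "is_penguin": True},
                 check_censor=False))                            # censor skipped -> can_fly
```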

Keywords: Censored Production Rule, Data Mining, Machine Learning, Evolutionary Algorithms.
