Search results for: variable separation method
7202 PeliGRIFF: A Parallel DEM-DLM/FD Method for DNS of Particulate Flows with Collisions
Authors: Anthony Wachs, Guillaume Vinay, Gilles Ferrer, Jacques Kouakou, Calin Dan, Laurence Girolami
Abstract:
An original Direct Numerical Simulation (DNS) method to tackle the problem of particulate flows at moderate to high concentration and finite Reynolds number is presented. Our method is built on the framework established by Glowinski and his coworkers [1] in the sense that we use their Distributed Lagrange Multiplier/Fictitious Domain (DLM/FD) formulation and their operator-splitting idea, but differs in the treatment of particle collisions. The novelty of our contribution lies in replacing the simple artificial repulsive force based collision model usually employed in the literature by an efficient Discrete Element Method (DEM) granular solver. The use of our DEM solver enables us to consider particles of arbitrary (at least convex) shape and to account for actual contacts, in the sense that particles actually touch each other, in contrast with the simple repulsive force based collision model. We recently upgraded our serial code, GRIFF 1 [2], to full MPI capabilities. Our new code, PeliGRIFF 2, is developed under the framework of the full MPI open source platform PELICANS [3]. The new MPI capabilities of PeliGRIFF open new perspectives in the study of particulate flows and significantly increase the number of particles that can be considered in a full DNS approach: O(100000) in 2D and O(10000) in 3D. Results on the 2D/3D sedimentation/fluidization of isometric polygonal/polyhedral particles with collisions are presented.
Keywords: Particulate flow, distributed lagrange multiplier/fictitious domain method, discrete element method, polygonal shape, sedimentation, distributed computing, MPI
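For readers unfamiliar with how a DEM granular solver resolves actual contacts (as opposed to an artificial repulsive force acting at a distance), the following minimal Python sketch shows a linear spring-dashpot normal contact force between two spheres. The stiffness, damping and particle values are illustrative assumptions, not parameters of PeliGRIFF.

import numpy as np

def normal_contact_force(x_i, x_j, v_i, v_j, r_i, r_j, kn=1.0e5, cn=5.0):
    # Linear spring-dashpot normal contact force on particle i from particle j.
    # Returns zero when the spheres do not overlap (no repulsion at a distance).
    d = x_j - x_i                      # center-to-center vector
    dist = np.linalg.norm(d)
    overlap = (r_i + r_j) - dist       # positive only when the particles actually touch
    if overlap <= 0.0:
        return np.zeros(3)
    n = d / dist                       # unit normal from i to j
    vn = np.dot(v_i - v_j, n)          # relative normal approach velocity
    fn = kn * overlap + cn * vn        # elastic + dissipative contributions
    return -fn * n                     # pushes particle i away from j

# Example: two unit-radius spheres overlapping by 0.05
f = normal_contact_force(np.array([0., 0., 0.]), np.array([1.95, 0., 0.]),
                         np.zeros(3), np.zeros(3), 1.0, 1.0)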
7201 F-IVT Actuation System to Power Artificial Knee Joint
Authors: Alò Roberta, Bottiglione Francesco, Mantriota Giacomo
Abstract:
The efficiency of the actuation system of exoskeletons and active orthoses for lower limbs is a significant aspect of the design of such devices because it affects their efficacy. The F-IVT is an innovative actuation system to power an artificial knee joint with energy recovery capabilities. Its key and non-conventional elements are a flywheel, which acts as a mechanical energy storage system, and an Infinitely Variable Transmission (IVT). The design of the F-IVT can be optimized for a certain walking condition, resulting in a large reduction of both the electric energy consumption and the electric peak power. In this work, by means of simulations of level ground walking at different speeds, it is demonstrated that the F-IVT remains an advantageous actuator, which saves energy and allows the electric motor to be downsized even when it does not work in nominal conditions.
Keywords: Active orthoses, actuators, lower extremity exoskeletons, knee joint.
7200 Hybrid Weighted Multiple Attribute Decision Making Handover Method for Heterogeneous Networks
Authors: Mohanad Alhabo, Li Zhang, Naveed Nawaz
Abstract:
Small cell deployment in 5G networks is a promising technology to enhance capacity and coverage. However, unplanned deployment may cause high interference levels and a high number of unnecessary handovers, which in turn result in an increase in the signalling overhead. To guarantee service continuity, minimize unnecessary handovers and reduce signalling overhead in heterogeneous networks, it is essential to properly model the handover decision problem. In this paper, we model the handover decision problem using a Multiple Attribute Decision Making (MADM) method, specifically the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), and propose a hybrid TOPSIS method to control the handover in heterogeneous networks. The proposed method adopts a hybrid weighting policy, which is a combination of entropy and standard deviation weighting. A hybrid weighting control parameter is introduced to balance the impact of the standard deviation and entropy weighting on the network selection process and the overall performance. Our proposed method shows better performance, in terms of the number of frequent handovers and the mean user throughput, compared to the existing methods.
Keywords: Handover, HetNets, interference, MADM, small cells, TOPSIS, weight.
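As a rough illustration of the TOPSIS ranking step described above, the sketch below scores candidate cells from a decision matrix and blends two weight vectors with a control parameter. The attribute names, weights and example numbers are assumptions for illustration, not the paper's actual data.

import numpy as np

def topsis(D, w, benefit):
    # Rank alternatives (rows of D) by relative closeness to the ideal solution.
    R = D / np.linalg.norm(D, axis=0)          # vector-normalize each attribute column
    V = R * w                                  # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)             # higher is better

# Hypothetical attributes per candidate cell: [SINR (dB), throughput (Mbps), load, delay (ms)]
D = np.array([[18., 35., 0.3, 10.],
              [12., 20., 0.1, 15.],
              [20., 50., 0.7,  8.]])
w_entropy = np.array([0.3, 0.3, 0.2, 0.2])
w_std     = np.array([0.25, 0.25, 0.25, 0.25])
alpha = 0.5                                    # hybrid weighting control parameter
w = alpha * w_entropy + (1 - alpha) * w_std
scores = topsis(D, w, benefit=np.array([True, True, False, False]))
best_cell = int(np.argmax(scores))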
7199 Effect of Inclusions on the Shape and Size of Crack Tip Plastic Zones by Element Free Galerkin Method
Authors: A. Jameel, G. A. Harmain, Y. Anand, J. H. Masoodi, F. A. Najar
Abstract:
The present study investigates the effect of inclusions on the shape and size of crack tip plastic zones in engineering materials subjected to static loads by employing the element free Galerkin method (EFGM). The modeling of the discontinuities produced by cracks and inclusions becomes independent of the grid chosen for analysis. The standard displacement approximation is modified by adding additional enrichment functions, which introduce the effects of different discontinuities into the formulation. The level set method has been used to represent different discontinuities present in the domain. The effect of inclusions on the extent of crack tip plastic zones is investigated by solving some numerical problems by the EFGM.
Keywords: EFGM, stress intensity factors, crack tip plastic zones, inclusions.
7198 Thermal Buckling of Rectangular FGM Plate with Variation Thickness
Authors: Mostafa Raki, Mahdi Hamzehei
Abstract:
Equilibrium and stability equations of a thin rectangular plate with length a, width b, and thickness h(x) = C1x + C2, made of functionally graded materials under thermal loads, are derived based on the first order shear deformation theory. It is assumed that the material properties vary as a power function of the thickness coordinate z. The derived equilibrium and buckling equations are then solved analytically for a plate with simply supported boundary conditions. Two types of thermal loading, a uniform temperature rise and a temperature gradient through the thickness, are considered, and the buckling temperatures are derived. The influences of the plate aspect ratio, the relative thickness, the gradient index and the transverse shear on the buckling temperature difference are all discussed.
Keywords: Stability of plate, thermal buckling, rectangular plate, functionally graded material, first order shear deformation theory.
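A minimal sketch of the through-thickness material grading and linearly varying thickness described in the abstract. The power-law form and the example constituent properties are standard FGM assumptions chosen for illustration, not values taken from the paper.

def fgm_property(z, h, p_bottom, p_top, k):
    # Power-law variation of a material property through the thickness coordinate z,
    # measured from the mid-plane (-h/2 <= z <= h/2); k is the gradient index.
    return p_bottom + (p_top - p_bottom) * (z / h + 0.5) ** k

def thickness(x, c1, c2):
    # Linearly varying plate thickness h(x) = C1*x + C2, as stated in the abstract.
    return c1 * x + c2

# Example: Young's modulus of an assumed metal/ceramic FGM plate at the mid-plane
E_mid = fgm_property(z=0.0, h=0.01, p_bottom=70e9, p_top=380e9, k=2.0)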
7197 Development of Motor and Controller for VVA Module of Gasoline Vehicle
Authors: Joon Sung Park, Jun-Hyuk Choi, Jin-Hong Kim, In-Soung Jung
Abstract:
Due to environmental concerns, the recent regulation on automobile fuel economy has been strengthened. The market demand for efficient vehicles is growing, and automakers have been devoting considerable effort to improving engine fuel efficiency. To improve fuel efficiency, it is necessary to reduce engine losses or to improve combustion efficiency. VVA (Variable Valve Actuation) technology enhances the engine's intake air flow and reduces pumping and mechanical friction losses. VVA technology also allows an appropriate valve lift to be applied in both low-speed and high-speed engine operation, which improves the performance of the engine over the entire operating range. This paper presents a design procedure for a DC motor and drive for a VVA system and shows the validity of the design through experimental results on a prototype.
Keywords: DC motor, Inverter, VVA, Electric Drive.
7196 A Method to Calculate Frenet Apparatus of W-Curves in the Euclidean 6-Space
Authors: Süha Yılmaz, Melih Turgut
Abstract:
In this work, a regular unit speed curve in six-dimensional Euclidean space, whose Frenet curvatures are constant, is considered. Thereafter, a method to calculate the Frenet apparatus of this curve is presented.
Keywords: Classical Differential Geometry, Euclidean 6-space, Frenet Apparatus of the curves.
7195 Technological Value of Selected Spring Wheat Cultivars Depending on the Sowing Date
Authors: Marta Wyzińska, Jerzy Grabiński, Alicja Sułek
Abstract:
The grain quality is a decisive factor in its use. In Poland, spring wheat is characterized by more favorable quality parameters than the winter form of this species. In the present study, the effects of three different sowing dates (autumn, delayed autumn, and spring) and cultivar (Tybalt, Cytra, Bombona, Monsun, and Parabola) on selected technological value parameters of spring wheat were studied over three years. The field trials were carried out in two locations (Bezek, Czesławice) in the Lubelskie Voivodeship, Poland. It was found that the falling number of spring wheat grain from the autumn sowing dates was at a similar level to that of wheat sown in spring. The amount of wet gluten in the grain varied between years, and its quality was better in wheat sown in spring. The sedimentation index depended on the cultivar.
Keywords: Sowing term, spring wheat, technological value, quality.
7194 Identification of Disease Causing DNA Motifs in Human DNA Using Clustering Approach
Authors: G. Tamilpavai, C. Vishnuppriya
Abstract:
Studying DNA (deoxyribonucleic acid) sequences is useful in biological processes and is applied in fields such as diagnostic and forensic research. DNA carries the hereditary information in humans and almost all other organisms and is passed on to their offspring. Early detection of a defective DNA sequence may lead to many developments in the field of bioinformatics. Nowadays, various tedious techniques are used to identify defective DNA. The proposed work analyzes and identifies the cancer-causing DNA motif in a given sequence. Initially, the human DNA sequence is separated into k-mers using a k-mer separation rule. The separated k-mers are clustered using a Self-Organizing Map (SOM). Using the Levenshtein distance measure, the cancer-associated DNA motif is identified from the k-mer clusters. Experimental results of this work indicate the presence or absence of the cancer-causing DNA motif: if the cancer-associated DNA motif is found in the DNA, it is declared a cancer-disease-causing DNA sequence; otherwise, the input human DNA is declared a normal sequence. Finally, the elapsed time for finding the cancer-causing DNA motif using cluster formation is calculated and compared with the normal process of finding the motif; locating the cancer-associated motif is easier with the cluster formation process than without it. The proposed work is intended as an initial aid for research on genetic diseases.
Keywords: Bioinformatics, cancer motif, DNA, k-mers, Levenshtein distance, SOM.
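A minimal sketch of the two sequence-handling steps named in the abstract, k-mer separation and the Levenshtein distance. The example sequence, k value and motif are hypothetical, and the SOM clustering step is omitted.

def kmers(seq, k):
    # Split a DNA sequence into overlapping k-mers.
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def levenshtein(a, b):
    # Edit distance between two strings via dynamic programming (two-row version).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Hypothetical query: does any 6-mer lie within edit distance 1 of a candidate motif?
sequence, motif = "ATGCGTACGTTAGC", "CGTACG"
hits = [m for m in kmers(sequence, 6) if levenshtein(m, motif) <= 1]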
7193 Road Extraction Using Stationary Wavelet Transform
Authors: Somkait Udomhunsakul
Abstract:
In this paper, a novel road extraction method using the Stationary Wavelet Transform is proposed. To detect road features from color aerial satellite imagery, Mexican hat wavelet filters are used by applying the Stationary Wavelet Transform in a multiresolution, multi-scale sense and forming products of the wavelet coefficients at different scales to locate and identify road features at a few scales. In addition, the shifting of road feature locations across multiple scales is considered for robust road extraction in the case of asymmetric road feature profiles. The experimental results show that the proposed method provides a useful technique to form the basis of road feature extraction. The method is also general and can be applied to other features in imagery.
Keywords: Road extraction, Multiresolution, Stationary Wavelet Transform, Multi-scale analysis
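The sketch below is only loosely analogous to the method above: it forms a product of multi-scale Laplacian-of-Gaussian (Mexican-hat-like) responses with SciPy instead of an undecimated wavelet transform, so the filter, library calls and scales are assumptions made purely for illustration of the multi-scale product idea.

import numpy as np
from scipy.ndimage import gaussian_laplace

def multiscale_response(gray, scales=(1.0, 2.0, 4.0)):
    # Multiply Mexican-hat-like (LoG) responses across scales so that features
    # persisting over several scales (e.g. road edges) are reinforced.
    responses = [np.abs(gaussian_laplace(gray.astype(float), sigma=s)) for s in scales]
    product = np.prod(responses, axis=0)
    return product / (product.max() + 1e-12)   # normalize to [0, 1]

# Example on a synthetic image with a bright horizontal "road" stripe
img = np.zeros((64, 64)); img[30:34, :] = 1.0
response = multiscale_response(img)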
7192 Face Recognition Using Principal Component Analysis, K-Means Clustering, and Convolutional Neural Network
Authors: Zukisa Nante, Wang Zenghui
Abstract:
Face recognition is the problem of identifying or recognizing individuals in an image. This paper investigates a possible method to bring a solution to this problem. The method proposes an amalgamation of Principal Component Analysis (PCA), K-Means clustering, and a Convolutional Neural Network (CNN) for a face recognition system. It is trained and evaluated using the ORL dataset, which consists of 400 face images in 40 classes with 10 images per class. Firstly, PCA enables the usage of a smaller network, which reduces the training time of the CNN; we thus get rid of the redundancy and preserve the variance with a smaller number of coefficients. Secondly, the K-Means clustering model is trained using the PCA-compressed data, which selects K-Means clustering centers with better characteristics. Lastly, the K-Means characteristics, or features, serve as initial values for the CNN and act as its input data. The accuracy and the performance of the proposed method were tested in comparison to other face recognition (FR) techniques, namely PCA, Support Vector Machine (SVM), and k-Nearest Neighbour (kNN). During experimentation, our suggested method achieved the highest performance after 90 epochs: 99% accuracy, a 99% F1-score, 99% precision, and 99% recall in 463.934 seconds. It outperformed PCA, which obtained 97%, and kNN, which obtained 84%, during the conducted experiments. Therefore, this method proved to be efficient in identifying faces in images.
Keywords: Face recognition, Principal Component Analysis, PCA, Convolutional Neural Network, CNN, Rectified Linear Unit, ReLU, feature extraction.
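A minimal scikit-learn sketch of the first two stages described above (PCA compression followed by K-Means on the compressed features). The data here are random placeholders standing in for the ORL images, the component and cluster counts are assumptions, and the CNN stage is not shown.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((400, 92 * 112))            # placeholder for 400 flattened ORL-sized face images
y = np.repeat(np.arange(40), 10)           # 40 classes, 10 images per class (used later by the CNN stage)

pca = PCA(n_components=50)                 # keep most of the variance with fewer coefficients
X_pca = pca.fit_transform(X)

kmeans = KMeans(n_clusters=40, n_init=10, random_state=0).fit(X_pca)
centers = kmeans.cluster_centers_          # cluster centres that would seed/feed the CNN stage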
7191 A Perceptual Image Coding method of High Compression Rate
Authors: Fahmi Kammoun, Mohamed Salim Bouhlel
Abstract:
In the framework of image compression by wavelet transforms, we propose a perceptual method that incorporates Human Visual System (HVS) characteristics in the quantization stage. Indeed, human eyes do not have equal sensitivity across the frequency bandwidth. Therefore, the clarity of the reconstructed images can be improved by weighting the quantization according to the Contrast Sensitivity Function (CSF), and the visual artifacts at low bit rate are minimized. To evaluate our method, we use the Peak Signal to Noise Ratio (PSNR) and a new evaluation criterion which takes visual criteria into account. The experimental results illustrate that our technique improves image quality at the same compression ratio.
Keywords: Contrast Sensitivity Function, Human Visual System, Image compression, Wavelet transforms.
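A rough sketch of CSF-weighted quantization of wavelet subbands using PyWavelets. The wavelet choice, base quantization step and per-level CSF weights are illustrative assumptions rather than the paper's calibrated values.

import numpy as np
import pywt

def csf_quantize(img, wavelet="bior4.4", levels=3, base_step=8.0,
                 csf_weights=(0.4, 0.7, 1.0)):       # coarse -> fine detail levels (assumed)
    # Quantize detail subbands with a step inversely proportional to an assumed CSF weight,
    # so that bands the eye is less sensitive to are quantized more coarsely.
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
    out = [coeffs[0]]                                # keep the approximation band untouched
    for lvl, bands in enumerate(coeffs[1:]):         # coeffs[1] is the coarsest detail level
        step = base_step / csf_weights[lvl]
        out.append(tuple(np.round(b / step) * step for b in bands))
    return pywt.waverec2(out, wavelet)

recon = csf_quantize(np.random.default_rng(1).random((128, 128)) * 255)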
7190 Elliptical Features Extraction Using Eigen Values of Covariance Matrices, Hough Transform and Raster Scan Algorithms
Authors: J. Prakash, K. Rajesh
Abstract:
In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme which consists of the eigenvalues of covariance matrices, the circular Hough transform and Bresenham's raster scan algorithm. In this approach, we use the fact that the large and small eigenvalues of the covariance matrices are associated with the major and minor axial lengths of the ellipse. The centre location of the ellipse can be identified using the circular Hough transform (CHT). A sparse matrix technique is used to perform the CHT: since sparse matrices squeeze out zero elements and contain only a small number of nonzero elements, they provide an advantage in matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using the raster scan algorithm, which exploits the geometrical symmetry property. This method does not require the evaluation of tangents or the curvature of edge contours, which are generally very sensitive to noisy working conditions. The proposed method has the advantages of small storage, high speed and accuracy in identifying the feature. The new method has been tested on both synthetic and real images, and several experiments have been conducted on various images with considerable background noise to reveal its efficacy and robustness. Experimental results on the accuracy of the proposed method and comparisons with the Hough transform, its variants and other tangent-based methods are reported.
Keywords: Circular Hough transform, covariance matrix, Eigen values, ellipse detection, raster scan algorithm.
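A brief numerical check, under simplifying assumptions, of the covariance-eigenvalue idea described above: for points sampled (parameter-uniformly) on an ellipse boundary, the covariance eigenvalues equal a²/2 and b²/2, so the semi-axes are recovered as sqrt(2λ). The synthetic ellipse parameters are hypothetical, and the Hough and raster-scan stages are not shown.

import numpy as np

# Synthetic edge points of an ellipse with semi-axes a=5, b=2, rotated 30 degrees
a, b, phi = 5.0, 2.0, np.radians(30)
t = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
x = a * np.cos(t); y = b * np.sin(t)
pts = np.column_stack([x * np.cos(phi) - y * np.sin(phi),
                       x * np.sin(phi) + y * np.cos(phi)])

cov = np.cov(pts, rowvar=False)                  # 2x2 covariance of the edge points
eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
minor_semi, major_semi = np.sqrt(2.0 * eigvals)  # ~2 and ~5 for boundary-sampled points
orientation = np.degrees(np.arctan2(eigvecs[1, -1], eigvecs[0, -1]))  # ~30 degrees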
7189 Enhancing Student Evaluation Through Student Idol
Authors: M. S. Roslina, M.O. Syahrul Hakimah Ong, S. F. Syarifah Fazlin
Abstract:
Since the historic moment of Malaysia's Independence Day in 1957, the government has been trying hard to find the most efficient methods in learning. However, it is hard to properly assess and evaluate which students should be called excellent students, because in practice a student who is considered excellent is one who excels only academically. This evaluation becomes a problem because it is not balanced with real life, in the sense of identifying students who are excellent across all areas of curricular and co-curricular involvement. To overcome this scenario, we propose a method called Student Idol to evaluate students through three categories: academic, co-curriculum and leadership. Each category has its own merit points. Using this method, students are evaluated more accurately than before, and teachers can easily evaluate their students without emotional factors, relationship factors and the like. In conclusion, this method helps make student evaluation more accurate and valid.
Keywords: evaluation, curriculum, co-curriculum, idol.
7188 A Study on Changing of Energy-Saving Performance of GHP Air Conditioning System with Time-Series Variation
Authors: Ying Xin, Shigeki Kametani
Abstract:
This paper examines how the energy-saving performance of GHP (gas engine heat pump) air conditioning systems has improved with time-series variation. There are two types of air conditioning systems: VRF (variable refrigerant flow) and central cooling and heating systems. VRF is classified into EHP (electric driven heat pump) and GHP. EHP drives the compressor with an electric motor, whereas GHP drives the compressor with a gas engine; the electric consumption of GHP is less than one tenth of that of EHP.
In this study, energy consumption data of the GHP units installed in junior high schools were collected. The annual and monthly energy consumption per rated thermal output power of each apparatus was calculated, and their energy efficiency was then analyzed. From these data, we investigated the improvement in the energy saving of the GHP air conditioning system across equipment generations.
Keywords: Energy-saving, VRF, GHP, EHP, Air Conditioning System.
7187 Study on Position Polarity Compensation for Permanent Magnet Synchronous Motor Based on High Frequency Signal Injection
Authors: Gu Shan-Mao, He Feng-You, Ye Sheng-Wen, Ma Zhi-Xun
Abstract:
The application of the high frequency signal injection method as a speed and position observer in PMSM drives has been a research focus. At present, the precision of this method is nearly as good as that of a ten-bit encoder, but some questions remain in estimating the position polarity. Based on high frequency signal injection, this paper presents a method to compensate the position polarity for a permanent magnet synchronous motor (PMSM). Experiments were performed to test the effectiveness of the proposed algorithm, and the results demonstrate good performance.
Keywords: permanent magnet synchronous motor, sensorless, high-frequency signal injection, magnetic pole position.
7186 Design of Bayesian MDS Sampling Plan Based on the Process Capability Index
Authors: Davood Shishebori, Mohammad Saber Fallah Nezhad, Sina Seifi
Abstract:
In this paper, a variable multiple dependent state (MDS) sampling plan is developed based on the process capability index using a Bayesian approach. The optimal parameters of the developed sampling plan with respect to constraints on the consumer's and producer's risks are presented. Two comparison studies have been done. First, the double sampling model, the sampling plan for resubmitted lots and the repetitive group sampling (RGS) plan are elaborated, and the average sample numbers of the developed MDS plan and these classical methods are compared. Second, a comparison study between the developed MDS plan based on the Bayesian approach and that based on the exact probability distribution is carried out.
Keywords: MDS sampling plan, RGS plan, sampling plan for resubmitted lots, process capability index, average sample number, Bayesian approach.
7185 Estimating Regression Effects in Com Poisson Generalized Linear Model
Authors: Vandna Jowaheer, Naushad A. Mamode Khan
Abstract:
The Com Poisson distribution is capable of modeling count responses irrespective of their mean-variance relation, and the parameters of this distribution, when fitted to simple cross-sectional data, can be efficiently estimated using the maximum likelihood (ML) method. In the regression setup, however, ML estimation of the parameters of the Com Poisson based generalized linear model is computationally intensive. In this paper, we propose to use the quasi-likelihood (QL) approach to estimate the effect of the covariates on the Com Poisson counts and investigate the performance of this method with respect to the ML method. QL estimates are consistent and almost as efficient as ML estimates. The simulation studies show that the efficiency loss in the estimation of all the parameters using the QL approach as compared to the ML approach is quite negligible, whereas the QL approach is less involved than the ML approach.
Keywords: Com Poisson, Cross-sectional, Maximum Likelihood, Quasi likelihood
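For reference, a small numerically stable sketch of the Com Poisson (Conway-Maxwell-Poisson) probability mass function that underlies the model discussed above. The truncation point of the normalizing constant and the example parameter values are assumptions.

import numpy as np
from scipy.special import gammaln, logsumexp

def com_poisson_logpmf(y, lam, nu, j_max=500):
    # log P(Y = y) = y*log(lam) - nu*log(y!) - log Z(lam, nu),
    # with Z truncated at j_max terms (adequate unless lam is very large and nu very small).
    j = np.arange(j_max)
    log_z = logsumexp(j * np.log(lam) - nu * gammaln(j + 1))
    return y * np.log(lam) - nu * gammaln(y + 1) - log_z

# Example: nu < 1 gives over-dispersion, nu > 1 under-dispersion, nu = 1 recovers the Poisson
p3 = np.exp(com_poisson_logpmf(3, lam=2.5, nu=0.8))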
7184 The Extraction and Stripping of Hg (II) from Produced Water via Hollow Fiber Contactor
Authors: Dolapop Sribudda, Ura Pancharoen
Abstract:
The separation of Hg (II) from produced water by hollow fiber contactors (HFC) was investigated. The system consisted of two hollow fiber modules connected in series: the first module was used for the extraction reaction and the second module for the stripping reaction. The Aliquat336 extractant was fed from the organic reservoir into the shell side of the first hollow fiber module and continued to the shell side of the second module, and the organic liquid was continuously recirculated back to the reservoir. The feed solution was pumped into the lumen (tube side) of the first hollow fiber module and, simultaneously, the stripping solution was pumped in the same way into the tube side of the second module, so that the feed and stripping solutions flowed countercurrently. Samples were taken at the outlets of the feed and stripping solutions after 1 hour, and the Hg (II) concentration was characterized by Inductively Coupled Plasma Atomic Emission Spectroscopy (ICP-AES). The feed solution was produced water from natural gas production in the Gulf of Thailand. The extractant was Aliquat336 dissolved in a kerosene diluent, and the stripping solutions used were nitric acid (HNO3) and thiourea (NH2CSNH2). The effects of carrier concentration and type of stripping solution were investigated. Results showed that the best conditions were 10% (v/v) Aliquat336 and 1.0 M NH2CSNH2. At the optimum condition, the extraction and stripping of Hg (II) were 98% and 44.2%, respectively.
Keywords: Hg (II), hollow fiber contactor, produced water, wastewater treatment.
7183 Constructing a Fuzzy Net Present Value Method to Evaluating the BOT Sport Facilities
Authors: Huei-Fu Lu
Abstract:
This paper develops a fuzzy net present value (FNPV) method that takes vague cash flows and an imprecise required rate of return into account for evaluating the value of Build-Operate-Transfer (BOT) sport facilities. In order to clearly manifest a more realistic capital budgeting model based on the classical net present value (NPV) method, some uncertain financial elements in the NPV formula are fuzzified as triangular fuzzy numbers. Through careful manipulation of fuzzy set theory, we find that the proposed FNPV model is an explicit extension of the classical (crisp) model and could be more practicable than the non-fuzzy model for financial managers seeking to capture the essence of capital budgeting for sport facilities.
Keywords: Fuzzy sets, Capital budgeting, Sport facility, Net present value (NPV), Build-Operate-Transfer (BOT) scheme
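A minimal sketch, under simple interval-arithmetic assumptions for positive triangular fuzzy numbers, of how a fuzzy NPV could be assembled from fuzzy cash flows and a fuzzy discount rate. The numbers and the (low, mode, high) encoding are hypothetical, not the paper's model.

def fuzzy_npv(investment, cash_flows, rate):
    # Triangular-fuzzy NPV = sum of discounted fuzzy cash flows minus a crisp investment.
    # Each fuzzy quantity is a (low, mode, high) triple with positive values;
    # the lower bound uses the highest rate and the upper bound the lowest rate.
    lo = sum(cf[0] / (1.0 + rate[2]) ** t for t, cf in enumerate(cash_flows, start=1))
    md = sum(cf[1] / (1.0 + rate[1]) ** t for t, cf in enumerate(cash_flows, start=1))
    hi = sum(cf[2] / (1.0 + rate[0]) ** t for t, cf in enumerate(cash_flows, start=1))
    return (lo - investment, md - investment, hi - investment)

# Hypothetical BOT facility: 3 years of vague cash flows and an imprecise required rate of return
npv_low, npv_mode, npv_high = fuzzy_npv(
    investment=100.0,
    cash_flows=[(30.0, 40.0, 50.0), (35.0, 45.0, 55.0), (40.0, 50.0, 60.0)],
    rate=(0.06, 0.08, 0.10))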
7182 Predicting Protein-Protein Interactions from Protein Sequences Using Phylogenetic Profiles
Authors: Omer Nebil Yaveroglu, Tolga Can
Abstract:
In this study, a high accuracy protein-protein interaction prediction method is developed. The importance of the proposed method is that it uses only the sequence information of proteins when predicting interactions. The method extracts phylogenetic profiles of proteins from their sequence information. By combining the phylogenetic profiles of two proteins, based on the existence of homologs in different species, and fitting the combined profile into a statistical model, it is possible to make predictions about the interaction status of the two proteins. For this purpose, we apply a collection of pattern recognition techniques to the dataset of combined phylogenetic profiles of protein pairs. Support Vector Machines, feature extraction using ReliefF, Naive Bayes classification, k-Nearest Neighbor classification, Decision Trees, and Random Forest classification are the methods we applied to find the classification method that best predicts the interaction status of protein pairs. Random Forest classification outperformed all other methods with a prediction accuracy of 76.93%.
Keywords: Protein Interaction Prediction, Phylogenetic Profile, SVM, ReliefF, Decision Trees, Random Forest Classification
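A small scikit-learn sketch of the final classification step reported above (a Random Forest on binary phylogenetic-profile features of protein pairs). The profile data here are random placeholders and the hyperparameters are assumptions, so the accuracy will not reproduce the paper's 76.93%.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_pairs, n_species = 1000, 60
X = rng.integers(0, 2, size=(n_pairs, 2 * n_species))   # combined presence/absence profiles per pair
y = rng.integers(0, 2, size=n_pairs)                     # 1 = interacting, 0 = non-interacting

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()  # ~0.5 on random placeholder data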
7181 Test Data Compression Using a Hybrid of Bitmask Dictionary and 2n Pattern Runlength Coding Methods
Authors: C. Kalamani, K. Paramasivam
Abstract:
In VLSI, testing plays an important role. The major problems in testing are test data volume and test power, and an important solution to reduce test data volume and test time is test data compression. The proposed technique combines the bitmask dictionary and 2n pattern run-length coding methods and provides a substantial improvement in compression efficiency without introducing any additional decompression penalty. The method has been implemented using MATLAB and an HDL to reduce test data volume and memory requirements. It has been applied to various benchmark test sets and the results have been compared with other existing methods. The proposed technique can achieve a compression ratio of up to 86%.
Keywords: Bitmask dictionary, 2n pattern run length code, system-on-chip, SOC, test data compression.
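A simplified illustration of the run-length half of the hybrid scheme described above: plain run-length encoding of a binary test vector. The bitmask-dictionary stage and the specific 2n pattern codes of the paper are not reproduced; this is only a generic sketch.

def run_length_encode(bits):
    # Encode a binary test vector as (bit value, run length) pairs.
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((bits[-1], count))
    return runs

def run_length_decode(runs):
    return "".join(bit * length for bit, length in runs)

vector = "0000001111000000001"
encoded = run_length_encode(vector)          # [('0', 6), ('1', 4), ('0', 8), ('1', 1)]
assert run_length_decode(encoded) == vector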
7180 Secure Image Retrieval Based On Orthogonal Decomposition under Cloud Environment
Authors: Yanyan Xu, Lizhi Xiong, Zhengquan Xu, Li Jiang
Abstract:
In order to protect data privacy, images with sensitive or private information need to be encrypted before being outsourced to the cloud. However, this causes difficulties in image retrieval and data management. A secure image retrieval method based on orthogonal decomposition is proposed in this paper. The image is divided into two different components, for which encryption and feature extraction are executed separately. As a result, the cloud server can extract features from an encrypted image directly and compare them with the features of the queried images, so that the user can obtain the desired image. Unlike other methods, the proposed method places no special requirements on the encryption algorithm. Experimental results prove that the proposed method can achieve better security and better retrieval precision.
Keywords: Secure image retrieval, secure search, orthogonal decomposition, secure cloud computing.
7179 Design and Characteristics of New Test Facility for Flat Plate Boundary Layer Research
Authors: N. Patten, T. M. Young, P. Griffin
Abstract:
Preliminary results for a new flat plate test facility are presented here in the form of Computational Fluid Dynamics (CFD), flow visualisation, pressure measurements and thermal anemometry. The results from the CFD and flow visualisation show the effectiveness of the plate design, with the trailing edge flap anchoring the stagnation point on the working surface and reducing the extent of the leading edge separation. The flow visualisation technique demonstrates the two-dimensionality of the flow in the location where the thermal anemometry measurements are obtained. Measurements of the boundary layer mean velocity profiles compare favourably with the Blasius solution, thereby allowing for comparison of future measurements with the wealth of data available on zero pressure gradient Blasius flows. Results for the skin friction, boundary layer thickness, frictional velocity and wall shear stress are shown to agree well with the Blasius theory, with a maximum experimental deviation from theory of 5%. Two turbulence generating grids have been designed and characterized and it is shown that the turbulence decay downstream of both grids agrees with established correlations. It is also demonstrated that there is little dependence of turbulence on the freestream velocity.
Keywords: CFD, Flow Visualisation, Thermal Anemometry, Turbulence Grids.
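For context, the classical zero-pressure-gradient Blasius correlations that such measurements are compared against can be evaluated as below; the air properties and the example flow conditions are assumed values for illustration.

import numpy as np

def blasius_laminar(x, u_inf, nu=1.5e-5, rho=1.2):
    # Classical flat-plate laminar boundary layer estimates at streamwise location x [m].
    re_x = u_inf * x / nu                     # local Reynolds number
    delta = 5.0 * x / np.sqrt(re_x)           # boundary layer thickness
    cf = 0.664 / np.sqrt(re_x)                # local skin friction coefficient
    tau_w = 0.5 * rho * u_inf**2 * cf         # wall shear stress
    u_tau = np.sqrt(tau_w / rho)              # friction (frictional) velocity
    return re_x, delta, cf, tau_w, u_tau

# Example: 10 m/s freestream, 0.3 m from the leading edge (assumed conditions)
re_x, delta, cf, tau_w, u_tau = blasius_laminar(x=0.3, u_inf=10.0)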
7178 Statistical Optimization of the Enzymatic Saccharification of the Oil Palm Empty Fruit Bunches
Authors: Rashid S. S., Alam M. Z.
Abstract:
A statistical optimization of the saccharification process of oil palm empty fruit bunches (EFB) was studied. The statistical analysis was done by applying a face-centered central composite design (FCCCD) under response surface methodology (RSM). In this investigation, the EFB dose, enzyme dose and saccharification period were examined, and a maximum reducing sugar yield of 53.45% (w/w) was found with 4% (w/v) EFB and 10% (v/v) enzyme after 120 hours of incubation. It can be calculated that the conversion rate of the cellulose content of the substrate is more than 75% (w/w), which can be considered a remarkable achievement. All the linear, quadratic and interaction coefficients were found to be highly significant, other than two coefficients, one quadratic and one interaction. The coefficient of determination (R2) is 0.9898, which confirms a satisfactory fit and indicates that approximately 98.98% of the variability in the dependent variable, the saccharification of EFB, could be explained by this model.
Keywords: Face centered central composite design (FCCCD), Liquid state bioconversion (LSB), Palm oil mill effluent, Trichoderma reesei RUT C-30.
7177 An Improved Data Mining Method Applied to the Search of Relationship between Metabolic Syndrome and Lifestyles
Authors: Yi Chao Huang, Yu Ling Liao, Chiu Shuang Lin
Abstract:
A data cutting and sorting method (DCSM) is proposed to optimize the performance of data mining. DCSM reduces the calculation time by getting rid of redundant data during the data mining process. In addition, DCSM minimizes the computational units by splitting the database and by sorting data with support counts. In the process of searching for the relationship between metabolic syndrome and lifestyles with the health examination database of an electronics manufacturing company, DCSM demonstrates higher search efficiency than the traditional Apriori algorithm in tests with different support counts.
Keywords: Data mining, Data cutting and sorting method, Apriori algorithm, Metabolic syndrome
7176 A Tutorial on Dynamic Simulation of DC Motor and Implementation of Kalman Filter on a Floating Point DSP
Authors: Padmakumar S., Vivek Agarwal, Kallol Roy
Abstract:
With the advent of inexpensive 32-bit floating point digital signal processors in the market, many computationally intensive algorithms such as the Kalman filter have become feasible to implement in real time. Dynamic simulation of a self-excited DC motor using a second-order state variable model and the implementation of a Kalman filter on the floating point DSP TMS320C6713 are presented in this paper, with the objective of introducing and implementing such an algorithm for beginners. A fractional-hp DC motor is simulated in both Matlab® and the DSP, and the results are included. A step-by-step approach for the simulation of the DC motor in Matlab® and the "C" routines in CC Studio® is also given. CC Studio® project file details and environment setting requirements are addressed. This tutorial can be used with the 6713 DSK, which is based on a floating point DSP, and CC Studio® either in hardware mode or in simulation mode.
Keywords: DC motor, DSP, Dynamic simulation, Kalman Filter
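A compact Python sketch of a discrete Kalman filter applied to a second-order DC motor state model (states: armature current and angular speed). The motor parameters, noise covariances and the forward-Euler discretization are assumptions for illustration, not the TMS320C6713 implementation described in the paper.

import numpy as np

# Assumed DC motor parameters: R (ohm), L (H), Ke=Kt=K, J (kg.m^2), B (N.m.s), dt (s)
R, L, K, J, B, dt = 1.0, 0.5, 0.01, 0.01, 0.1, 1e-3

# Continuous model with x = [i, w]: di/dt = (-R*i - K*w + V)/L, dw/dt = (K*i - B*w)/J
A_c = np.array([[-R / L, -K / L],
                [ K / J, -B / J]])
B_c = np.array([[1.0 / L], [0.0]])
A  = np.eye(2) + dt * A_c           # forward-Euler discretization
Bd = dt * B_c
H  = np.array([[1.0, 0.0]])         # measure the armature current only
Q, Rn = 1e-5 * np.eye(2), np.array([[1e-2]])

x_hat, P = np.zeros((2, 1)), np.eye(2)

def kalman_step(x_hat, P, u, z):
    # One predict/update cycle of the discrete Kalman filter.
    x_pred = A @ x_hat + Bd * u
    P_pred = A @ P @ A.T + Q
    S  = H @ P_pred @ H.T + Rn
    Kk = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + Kk @ (z - H @ x_pred)
    P_new = (np.eye(2) - Kk @ H) @ P_pred
    return x_new, P_new

x_hat, P = kalman_step(x_hat, P, u=12.0, z=np.array([[0.5]]))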
7175 Analysis of Physicochemical Properties on Prediction of R5, X4 and R5X4 HIV-1 Coreceptor Usage
Authors: Kai-Ti Hsu, Hui-Ling Huang, Chun-Wei Tung, Yi-Hsiung Chen, Shinn-Ying Ho
Abstract:
Bioinformatics methods for predicting T cell coreceptor usage from the membrane protein of HIV-1 are investigated. In this study, we aim to propose an effective prediction method for dealing with the three-class classification problem of CXCR4 (X4), CCR5 (R5) and CCR5/CXCR4 (R5X4) usage. We made efforts in investigating the coreceptor prediction problem as follows: 1) proposing a feature set of informative physicochemical properties which, in cooperation with an SVM, achieves a high prediction test accuracy of 81.48%, compared with the existing method with an accuracy of 70.00%; 2) establishing a large up-to-date data set by increasing the size from 159 to 1225 sequences to verify the proposed prediction method, where the mean test accuracy is 88.59%; and 3) analyzing the set of 14 informative physicochemical properties to further understand the characteristics of HIV-1 coreceptors.
Keywords: Coreceptor, genetic algorithm, HIV-1, SVM, physicochemical properties, prediction.
7174 An UML Statechart Diagram-Based MM-Path Generation Approach for Object-Oriented Integration Testing
Authors: Ruilian Zhao, Ling Lin
Abstract:
MM-Path, an acronym for Method/Message Path, describes the dynamic interactions between methods in object-oriented systems. This paper discusses the classification of MM-Paths based on the characteristics of object-oriented software; we categorize them according to the generation reasons, the effect scope and the composition of the MM-Path. A formalized representation of MM-Path is also proposed, which takes into account the influence of state on the response method sequences of messages. Moreover, an automatic MM-Path generation approach based on the UML Statechart diagram is presented, with which the difficulties in identifying and generating MM-Paths can be solved. As a result, it provides a solid foundation for further research on test case generation based on MM-Path.
Keywords: MM-Path, Message Sequence, Object-Oriented Integration Testing, Response Method Sequence, UML Statechart Diagram.
7173 Optimization of HALO Structure Effects in 45nm p-type MOSFETs Device Using Taguchi Method
Authors: F. Salehuddin, I. Ahmad, F. A. Hamid, A. Zaharim, H. A. Elgomati, B. Y. Majlis, P. R. Apte
Abstract:
In this study, the Taguchi method was used to optimize the effect of HALO structure or halo implant variations on the threshold voltage (VTH) and leakage current (ILeak) in a 45nm p-type Metal Oxide Semiconductor Field Effect Transistor (MOSFET) device. Besides the halo implant dose, the other process parameters used were the Source/Drain (S/D) implant dose, the oxide growth temperature and the silicide anneal temperature. This work was done using a TCAD simulator consisting of a process simulator, ATHENA, and a device simulator, ATLAS. These two simulators were combined with the Taguchi method to aid in the design and optimization of the process parameters. In this research, the most effective process parameters with respect to VTH and ILeak are the halo implant dose (40%) and the S/D implant dose (52%), respectively, whereas the second-ranking factors affecting VTH and ILeak are the oxide growth temperature (32%) and the halo implant dose (34%), respectively. The results show that, after the optimization approach, VTH is -0.157 V at ILeak = 0.195 mA/μm.
Keywords: Optimization, p-type MOSFETs device, HALO Structure, Taguchi Method.
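For reference, the standard Taguchi signal-to-noise ratios typically used for a smaller-the-better response such as leakage current and a nominal-the-best response such as threshold voltage are sketched below; the sample measurements are hypothetical, not values from the cited simulations.

import numpy as np

def sn_smaller_the_better(y):
    # Taguchi S/N ratio when a smaller response (e.g. leakage current) is better.
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_nominal_the_best(y):
    # Taguchi S/N ratio when the response (e.g. VTH) should sit on a nominal target.
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(np.mean(y) ** 2 / np.var(y, ddof=1))

# Hypothetical replicate measurements from one row of an orthogonal array
sn_ileak = sn_smaller_the_better([0.21, 0.19, 0.20])     # leakage samples, mA/um
sn_vth   = sn_nominal_the_best([-0.156, -0.158, -0.157]) # threshold voltage samples, V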