Search results for: Protected Web-based Applications
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2621


461 Optimizing of Fuzzy C-Means Clustering Algorithm Using GA

Authors: Mohanad Alata, Mohammad Molhim, Abdullah Ramini

Abstract:

Fuzzy C-Means clustering (FCM) is a method frequently used in pattern recognition. It has the advantage of giving good modeling results in many cases, although it cannot specify the number of clusters by itself. Most researchers fix the weighting exponent (m) of the FCM algorithm at the conventional value of 2, which might not be appropriate for all applications. Consequently, the main objective of this paper is to use the subtractive clustering algorithm to provide the optimal number of clusters needed by the FCM algorithm, by optimizing the parameters of the subtractive clustering algorithm with an iterative search approach, and then to find an optimal weighting exponent (m) for the FCM algorithm. To obtain an optimal number of clusters, the iterative search approach is used to find the optimal single-output Sugeno-type Fuzzy Inference System (FIS) model, by optimizing the parameters of the subtractive clustering algorithm that give the minimum least-squares error between the actual data and the Sugeno fuzzy model. Once the number of clusters is optimized, two approaches are proposed to optimize the weighting exponent (m) in the FCM algorithm, namely the iterative search approach and genetic algorithms. The approach is tested on data generated from the original function, and optimal fuzzy models are obtained with minimum error between the real data and the obtained fuzzy models.

Keywords: Fuzzy clustering, Fuzzy C-Means, Genetic Algorithm, Sugeno fuzzy systems.
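As a rough illustration of the second stage, the sketch below pairs the standard FCM updates with a toy mutation-only GA over the weighting exponent m. It is a minimal sketch under stated assumptions: the data and population settings are stand-ins, and the FCM objective J is used as fitness here, whereas the paper scores candidates by the least-squares error against the Sugeno model.

```python
import numpy as np

def fcm(X, c, m, iters=100, eps=1e-9):
    """Fuzzy C-Means; returns cluster centers and the final objective J."""
    rng = np.random.default_rng(0)
    U = rng.dirichlet(np.ones(c), size=len(X))         # memberships, rows sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.T.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
        inv = d ** (-2.0 / (m - 1.0))                  # u_ik proportional to d_ik^(-2/(m-1))
        U = inv / inv.sum(axis=1, keepdims=True)
    J = ((U ** m) * d ** 2).sum()                      # FCM objective function
    return centers, J

def evolve_m(X, c, pop=10, gens=15):
    """Toy mutation-only GA over the weighting exponent m."""
    rng = np.random.default_rng(1)
    ms = rng.uniform(1.1, 3.0, pop)
    for _ in range(gens):
        fit = np.array([fcm(X, c, m)[1] for m in ms])
        best = ms[np.argsort(fit)[: pop // 2]]         # keep the fitter half
        kids = best + rng.normal(0.0, 0.1, best.size)  # Gaussian mutation
        ms = np.clip(np.concatenate([best, kids]), 1.05, 5.0)
    return ms[np.argmin([fcm(X, c, m)[1] for m in ms])]

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 4.0])
print("best m ~", evolve_m(X, c=2))
```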

460 Optimizing the Performance of Thermoelectric for Cooling Computer Chips Using Different Types of Electrical Pulses

Authors: Saleh Alshehri

Abstract:

Thermoelectric technology is currently used in many industrial applications for cooling, heating, and generating electricity. This research mainly focuses on using thermoelectrics to cool high-speed computer chips at different operating conditions. A previously developed and validated three-dimensional model for optimizing and assessing the performance of cascaded and non-cascaded thermoelectrics is used in this study to investigate the possibility of decreasing the hotspot temperature of a computer chip. Additionally, a test assembly is built and tested at steady-state and transient conditions. The optimum thermoelectric current obtained at steady state is used to conduct a number of pulsed (i.e. transient) tests with different pulse shapes to cool the chip hotspots. The steady-state tests showed that at a hotspot heat rate of 15.58 W (5.97 W/cm2), a thermoelectric current of 4.5 A decreased the hotspot temperature at open-circuit condition (89.3 °C) by 50.1 °C. Maximum and minimum hotspot temperatures were affected by the ON and OFF durations of the electrical current pulse: a longer OFF period produced the maximum hotspot temperature, while a longer ON period produced the minimum.

Keywords: Thermoelectric generator, thermoelectric cooler, chip hotspots, electronic cooling.

459 Interactions between Cells and Nanoscale Surfaces of Oxidized Silicon Substrates

Authors: Chung-Yao Yang, Lin-Ya Huang, Tang-Long Shen, J. Andrew Yeh

Abstract:

The importance of manipulating an incorporated scaffold and directing cell behavior is well appreciated in tissue engineering. Here, we developed novel nano-topographic oxidized silicon nanosponges amenable to various chemical modifications, to provide insight into the fundamental biology of how cells interact with their surrounding environment in vitro. A wet etching technique allowed us to fabricate the silicon nanosponges in a high-throughput manner. Furthermore, various organo-silane chemicals were self-assembled on the surfaces by vapor deposition. We found that Chinese hamster ovary (CHO) cells displayed distinguishable morphogenesis, adherent responses, and biochemical properties when cultured on these chemically modified nano-topographic structures compared with planar oxidized silicon counterparts, indicating that cell behavior can be influenced by physical characteristics derived from nano-topography in addition to the hydrophobicity of contact surfaces crucial for cell adhesion and spreading. In particular, predominant nano-actin punches and slender protrusions formed while cells were cultured on the nano-topographic structures. This study points to potential applications of these nano-topographic biomaterials for controlling cell development in tissue engineering and basic cell biology research.

Keywords: Nanosponge, Cell adhesion, Cell morphology

458 Effect of Birks Constant and Defocusing Parameter on Triple-to-Double Coincidence Ratio Parameter in Monte Carlo Simulation-GEANT4

Authors: F. Abubaker, F. Tortorici, M. Capogni, C. Sutera, V. Bellini

Abstract:

This project concerns the detection efficiency of the portable Triple-to-Double Coincidence Ratio (TDCR) system at the National Institute of Metrology of Ionizing Radiation (INMRI-ENEA), which allows direct activity measurement and radionuclide standardization for pure beta-emitter or pure electron-capture radionuclides. The dependence of the simulated detection efficiency of the TDCR, computed with the GEANT4 Monte Carlo simulation code, on the Birks factor (kB) and the defocusing parameter has been examined, especially for low-energy beta-emitting radionuclides such as 3H and 14C, for which this dependence is relevant. The results of this analysis can be used to select the best kB factor and defocusing parameter for computing the theoretical TDCR parameter value. The theoretical results were compared with available ones measured by the ENEA portable TDCR detector for some pure beta-emitting radionuclides. This analysis improved the knowledge of the characteristics of the ENEA TDCR detector, which can be used as a traveling instrument for in-situ measurements, with particular benefits in many applications in the field of nuclear medicine and in the nuclear energy industry.

Keywords: Birks constant, defocusing parameter, GEANT4 code, TDCR parameter.
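For orientation, the sketch below evaluates Birks' law, the quenching relation that the kB scan above tunes. The stopping-power profile and numbers are illustrative stand-ins, and a real TDCR efficiency calculation requires full GEANT4 particle transport.

```python
import numpy as np

def birks_light_yield(dE_dx, kB, S=1.0):
    """Birks' law: dL/dx = S * (dE/dx) / (1 + kB * (dE/dx))."""
    return S * dE_dx / (1.0 + kB * dE_dx)

# Scan candidate kB values (cm/MeV) over an illustrative beta-track
# stopping-power profile and compare the integrated light output.
dE_dx = np.linspace(5.0, 120.0, 50)    # MeV/cm along the track (toy values)
dx = 0.001                             # cm per step
for kB in (0.008, 0.010, 0.012):
    L = (birks_light_yield(dE_dx, kB) * dx).sum()
    print(f"kB = {kB:.3f} cm/MeV -> relative light output {L:.4f}")
```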

457 TOSOM: A Topic-Oriented Self-Organizing Map for Text Organization

Authors: Hsin-Chang Yang, Chung-Hong Lee, Kuo-Lung Ke

Abstract:

The self-organizing map (SOM) is a well-known neural network model with a wide range of applications. The main characteristics of the SOM are two-fold, namely dimension reduction and topology preservation: a high-dimensional data space is mapped to a low-dimensional one while the topological relations among data are preserved. With such characteristics, the SOM is usually applied to data clustering and visualization tasks. However, the SOM has the main disadvantage of requiring the number and structure of neurons to be known prior to training, which are difficult to determine. Several schemes have been proposed to tackle this deficiency, for example growing/expandable SOM, hierarchical SOM, and growing hierarchical SOM. These schemes can dynamically expand the map, and even generate hierarchical maps, during training, and encouraging results have been reported. Basically, they adapt the size and structure of the map according to the distribution of the training data; that is, they are data-driven or data-oriented SOM schemes. In this work, a topic-oriented SOM scheme suitable for document clustering and organization is developed. The proposed SOM automatically adapts the number as well as the structure of the map according to identified topics. Unlike other data-oriented SOMs, our approach expands the map and generates the hierarchies according both to the topics and to the characteristics of the neurons. Preliminary experiments give promising results and demonstrate the plausibility of the method.

Keywords: Self-organizing map, topic identification, learning algorithm, text clustering.
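For reference, here is a minimal sketch of the classical SOM training loop that TOSOM extends. The grid size, decay schedules, and the tf-idf stand-in data are assumptions; the topic-driven growth described above is not reproduced.

```python
import numpy as np

def train_som(X, rows=8, cols=8, epochs=20, lr0=0.5, sigma0=3.0):
    rng = np.random.default_rng(0)
    W = rng.normal(size=(rows * cols, X.shape[1]))       # neuron codebook vectors
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)                   # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)             # shrinking neighborhood
        for x in rng.permutation(X):
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
            h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)               # pull neighbors toward x
    return W

W = train_som(np.random.rand(200, 16))   # e.g. 200 documents as 16-d tf-idf vectors
```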

456 Entropy Based Spatial Design: A Genetic Algorithm Approach (Case Study)

Authors: Abbas Siefi, Mohammad Javad Karimifar

Abstract:

We study the spatial design of experiments, in which a most informative subset of prespecified size is selected from a set of correlated random variables. The problem arises in many applied domains, such as meteorology, environmental statistics, and statistical geology, where observations can be collected at different locations and possibly at different times. In spatial design, when the design region and the set of interest are discrete, the covariance matrix completely describes any objective function, and our goal is to choose a feasible design that minimizes the resulting uncertainty. The problem is recast as maximizing the determinant of the covariance matrix of the chosen subset, which is NP-hard. When using these designs in computer experiments, the design space is in many cases very large and it is not possible to calculate the exact optimal solution. Heuristic optimization methods can discover efficient experiment designs in situations where traditional designs cannot be applied, exchange methods are ineffective, and an exact solution is not possible. We developed a genetic algorithm (GA) to take advantage of its exploratory power, and we demonstrate its successful application on a real design-of-experiments case with a very large design space.

Keywords: Spatial design of experiments, maximum entropy sampling, computer experiments, genetic algorithm.
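A minimal sketch of the underlying optimization follows: a toy GA searching for the size-k subset S maximizing log det(Sigma[S,S]). The covariance matrix, population settings, and swap-mutation operator are illustrative assumptions, not the paper's encoding.

```python
import numpy as np

def logdet(Sigma, idx):
    sign, val = np.linalg.slogdet(Sigma[np.ix_(idx, idx)])
    return val if sign > 0 else -np.inf

def ga_subset(Sigma, k, pop=30, gens=60, rng=np.random.default_rng(0)):
    n = len(Sigma)
    P = [rng.choice(n, k, replace=False) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda s: -logdet(Sigma, s))          # rank by entropy objective
        elite = P[: pop // 2]
        kids = []
        for s in elite:                                  # mutation: swap one site
            child = s.copy()
            child[rng.integers(k)] = rng.choice(list(set(range(n)) - set(child)))
            kids.append(child)
        P = elite + kids
    return max(P, key=lambda s: logdet(Sigma, s))

A = np.random.default_rng(1).normal(size=(40, 40))
Sigma = A @ A.T + 40 * np.eye(40)                        # a valid covariance matrix
print(sorted(ga_subset(Sigma, k=8)))                     # chosen design sites
```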

455 Two-Stage Launch Vehicle Trajectory Modeling for Low Earth Orbit Applications

Authors: Assem M. F. Sallam, Ah. El-S. Makled

Abstract:

This paper presents a study of the trajectory of a two-stage launch vehicle (LV). The study includes the dynamic responses of the motion parameters as well as the variation of the angles affecting the orientation of the LV. LV dynamic characteristics, including state-vector variation with the corresponding altitude and velocity at the separation of the different LV stages, as well as the angle of attack and flight-path angle, are also discussed. The drop zone of the first stage and the jettisoning of the fairing are introduced into the mathematical model to study their effect on the flight trajectory. To increase the accuracy of the LV model, an atmospheric model is used that takes into consideration the geographical location and the solar-flux values for the date and time of launch; an accurate atmospheric model improves the calculation of the Mach number, which affects the drag force on the LV. The mathematical model is implemented in MATLAB/Simulink. Available experimental data are compared with results obtained from the theoretical computation model. The comparison shows good agreement, which supports the validity of the developed simulation model; the maximum error was generally less than 10%, a level that future work can aim to reduce.

Keywords: Launch vehicle modeling, launch vehicle trajectory, mathematical modeling, MATLAB-Simulink.
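To make the kind of model concrete, here is a minimal planar point-mass ascent sketch integrated with SciPy rather than Simulink. All constants (thrust, mass flow, drag coefficient, exponential atmosphere) are illustrative assumptions, not the paper's vehicle data, and drag uses a fixed Cd instead of a Mach-dependent table.

```python
import numpy as np
from scipy.integrate import solve_ivp

g0, Re = 9.81, 6.371e6
def rho(h):                               # simple exponential atmosphere
    return 1.225 * np.exp(-h / 8500.0)

def dynamics(t, y, thrust, mdot, Cd=0.3, A=1.0):
    h, v, gamma, m = y                    # altitude, speed, flight-path angle, mass
    g = g0 * (Re / (Re + h)) ** 2
    D = 0.5 * rho(h) * v ** 2 * Cd * A    # drag force
    dh = v * np.sin(gamma)
    dv = (thrust - D) / m - g * np.sin(gamma)
    # standard gravity-turn relation for the flight-path angle
    dgamma = -(g / max(v, 1.0)) * np.cos(gamma) * (1 - v ** 2 / (g * (Re + h)))
    return [dh, dv, dgamma, -mdot]

# First-stage burn of 120 s, then report the state at staging.
sol = solve_ivp(dynamics, (0, 120), [0, 50, np.radians(85), 50000],
                args=(800e3, 250.0), max_step=0.5)
h, v, gamma, m = sol.y[:, -1]
print(f"staging: h={h/1000:.1f} km, v={v:.0f} m/s, gamma={np.degrees(gamma):.1f} deg")
```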

454 Characterization Study of Aluminium 6061 Hybrid Composite

Authors: U. Achutha Kini, S. S. Sharma, K. Jagannath, P. R. Prabhu, Gowri Shankar M. C.

Abstract:

Aluminium matrix composites with alumina reinforcements offer superior mechanical and physical properties, and their applications in fields such as the automobile, aerospace, defense, sports, electronics, and bio-medical industries have been growing for several decades. In the present work, a hybrid composite was fabricated by the stir casting technique using Al 6061 as the matrix with alumina and silicon carbide (SiC) as reinforcement materials. The weight percentage of alumina is varied from 2 to 4%, while the silicon carbide weight percentage is kept constant at 2%. Hardness and wear tests are performed in the as-cast and heat-treated conditions. Age-hardening treatment was performed on the specimens with solutionizing at 550°C and aging at two temperatures (150 and 200°C) for different durations. Hardness distribution curves are drawn and peak hardness values are recorded. The hardness increase was very sensitive to the decrease in aging temperature, and the wear resistance of the peak-aged material improved when aged at the lower temperature. Increasing the weight percent of alumina also increases wear resistance at the lower aging temperature, but the opposite behavior was seen at the higher one.

Keywords: Hybrid composite, hardness test, wear test, heat treatment, pin on disc wear testing machine.

453 Large Eddy Simulation of Compartment Fire with Combustible Gas

Authors: Mliki Bouchmel, Abbassi Mohamed Ammar, Kamel Geudri, Chrigui Mouldi, Omri Ahmed

Abstract:

The objective of this work is to use the Fire Dynamics Simulator (FDS) to investigate the behavior of a small-scale kerosene fire. FDS is a Computational Fluid Dynamics (CFD) tool developed specifically for fire applications. Throughout its development, FDS has been used both for the resolution of practical problems in fire protection engineering and for the study of fundamental fire dynamics and combustion. Predictions are based on Large Eddy Simulation (LES) with a Smagorinsky turbulence model, the default in FDS and the one used in this study: LES directly computes the large-scale eddies, while the sub-grid-scale dissipative processes are modeled. The numerical predictions are validated by direct comparison of combustion output variables with experimental measurements. The effect of mesh size on the temperature evolution is investigated and an optimum grid size is suggested, and the effect of opening width is examined. Temperature distributions and species flows are presented for different operating conditions. The effect of the composition of the fuel on atmospheric pollution is also a focus of this work. Good predictions are obtained when the size of the computational cells within the fire compartment is less than 1/10th of the characteristic fire diameter.

Keywords: Large eddy simulation, radiation, turbulence, combustion, pollution.
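The closing grid criterion can be made concrete with the usual FDS rule of thumb, sketched below; the heat release rate and ambient values are illustrative assumptions, not this study's fire.

```python
import math

def characteristic_fire_diameter(Q_kW, rho=1.204, cp=1.005, T=293.0, g=9.81):
    """D* = (Q / (rho * cp * T * sqrt(g)))^(2/5), with Q in kW and SI ambient values."""
    return (Q_kW / (rho * cp * T * math.sqrt(g))) ** 0.4

Q = 500.0                                  # heat release rate of the fire, kW
D_star = characteristic_fire_diameter(Q)
print(f"D* = {D_star:.2f} m -> target cell size <= {D_star / 10:.3f} m")
```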

452 Optimal Manufacturing Scheduling for Dependent Details Processing

Authors: Ivan C. Mustakerov, Daniela I. Borissova

Abstract:

The increasing competitiveness in the manufacturing industry is forcing manufacturers to seek effective processing schedules. The paper presents an optimization-based manufacturing scheduling approach for dependent-details processing with given processing sequences and times on multiple machines. By defining the decision variables as the start and end moments of details processing, it is possible to use straightforward variable restrictions to satisfy different technological requirements and to formulate optimization tasks that are easy to understand and solve for arbitrary numbers of details and machines. A case study is solved for seven base moldings for CNC metalworking machines processed on five different machines, with a given processing order among details and machines and known processing times. Solving the resulting linear optimization task yields the optimal manufacturing schedule minimizing the overall processing time. The schedule defines the moments of molding delivery, thus minimizing storage costs, and ensures that mounting due times are met. The proposed optimization approach is based on a real manufacturing plant problem: different schedule variants for different technological restrictions were defined and implemented in practice at the Bulgarian company RAIS Ltd. The approach can be generalized to other job shop scheduling problems for different applications.

Keywords: Optimal manufacturing scheduling, linear programming, metalworking machines production, dependent details processing.
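A minimal sketch of this start/end-moment formulation follows, on a hypothetical three-detail, two-machine instance modeled with the PuLP library; the processing times, machine route, and precedence data are invented for illustration and are not the paper's case study.

```python
from pulp import LpProblem, LpMinimize, LpVariable

proc = {("d1", "m1"): 3, ("d1", "m2"): 2,       # processing times per (detail, machine)
        ("d2", "m1"): 2, ("d2", "m2"): 4,
        ("d3", "m1"): 4, ("d3", "m2"): 1}
order = [("d1", "d2"), ("d2", "d3")]            # technological precedence among details

prob = LpProblem("details_schedule", LpMinimize)
s = {k: LpVariable(f"s_{k[0]}_{k[1]}", lowBound=0) for k in proc}   # start moments
C = LpVariable("makespan", lowBound=0)
prob += C                                        # objective: overall processing time
for k, p in proc.items():
    prob += C >= s[k] + p                        # makespan covers every end moment
for d in ("d1", "d2", "d3"):
    prob += s[(d, "m2")] >= s[(d, "m1")] + proc[(d, "m1")]   # route: m1 then m2
for a, b in order:
    for m in ("m1", "m2"):
        prob += s[(b, m)] >= s[(a, m)] + proc[(a, m)]        # detail precedence
prob.solve()
print("makespan =", C.value())
```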

451 Novel D-Glucose Based Glycomonomers: Synthesis and Characterization

Authors: M.S. Mazăre, A. M. Pană, L. M. Ştefan, M. Silion, M. Bălan, G. Bandur, L. M. Rusnac

Abstract:

In the last decade, carbohydrates have attracted great attention as renewable resources for the chemical industry. Carbohydrates are abundantly found in nature in the form of monomers, oligomers, and polymers, or as components of biopolymers and other naturally occurring substances. As natural products, they play important roles in conferring certain physical, chemical, and biological properties on their carrier molecules. The synthesis of this particular carbohydrate glycomonomer is part of our work to obtain biodegradable polymers. Our current paper describes the synthesis and characterization of a novel carbohydrate glycomonomer starting from D-glucose, in several synthesis steps involving the protection/deprotection of the D-glucose ring via acetylation and tritylation, then selective deprotection of the aromatic-aliphatic protective group, in order to obtain 1,2,3,4-tetra-O-acetyl-6-O-allyl-β-D-glucopyranose. The glycomonomer was then obtained by allylation, under drastic conditions, of 1,2,3,4-tetra-O-acetyl-6-O-allyl-β-D-glucopyranose with allylic alcohol in the presence of stannic chloride, in methylene chloride, at room temperature. The proposed structure of the glycomonomer, 2,3,4-tri-O-acetyl-1,6-di-O-allyl-β-D-glucopyranose, was confirmed by FTIR, NMR and HPLC-MS spectrometry. This glycomonomer will be further submitted to copolymerization with certain acrylic or methacrylic monomers in order to obtain competitive plastic materials for applications in the biomedical field.

Keywords: allylation, D-glucose, glycomonomer, trityl chloride

450 System for Monitoring Marine Turtles Using Unstructured Supplementary Service Data

Authors: Luís Pina

Abstract:

The conservation of marine biodiversity keeps ecosystems in balance and ensures the sustainable use of resources. In this context, technological resources have been used to monitor marine species and allow biologists to obtain data in real time. Various mobile applications have been developed for monitoring data collection, but these systems are designed to be used only on third-generation (3G) phones or smartphones with Internet access, and in rural parts of developing countries Internet services and smartphones are scarce. Thus, the objective of this work is to develop a system for monitoring marine turtles using Unstructured Supplementary Service Data (USSD), which users can access through basic mobile phones. The system aims to improve the data collection mechanism and enhance the effectiveness of current sea turtle monitoring systems using any type of mobile device, without Internet access. The system will report information related to the biological activities of marine turtles and will also serve as a platform through which marine conservation entities can receive reports of illegal sales of sea turtles. It can further be used as an educational tool, providing knowledge and including communities in the process of monitoring marine turtles. This work may therefore contribute information to decision-making and to the implementation of contingency plans for marine conservation programs.

Keywords: GSM, marine biology, marine turtles, USSD.
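As a feel for how such a service is structured, below is a toy USSD menu handler. The menu items, the '*'-separated input convention, and the "CON"/"END" screen prefixes are hypothetical gateway conventions, not the project's actual implementation.

```python
def handle_ussd(text):
    """Map the '*'-separated USSD input string to the next menu screen."""
    steps = text.split("*") if text else []
    if not steps:
        return ("CON Marine Turtle Monitor\n1. Report sighting\n"
                "2. Report nest\n3. Report illegal sale")
    if steps[0] == "1":
        if len(steps) == 1:
            return "CON Enter beach code:"
        return f"END Sighting at beach {steps[1]} recorded. Thank you."
    if steps[0] == "2":
        return "END Nest report received. A field officer will follow up."
    if steps[0] == "3":
        return "END Illegal-sale report forwarded to the conservation authority."
    return "END Invalid option."

print(handle_ussd(""))        # first dial: show the menu
print(handle_ussd("1*K12"))   # user chose 1, then typed beach code K12
```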

449 Motivated Support Vector Regression using Structural Prior Knowledge

Authors: Wei Zhang, Yao-Yu Li, Yi-Fan Zhu, Qun Li, Wei-Ping Wang

Abstract:

It is known that incorporating prior knowledge into support vector regression (SVR) can help to improve the approximation performance. Most research is concerned with the incorporation of knowledge in the form of numerical relationships; little work, however, has been done to incorporate prior knowledge on the structural relationships among the variables (referred to as Structural Prior Knowledge, SPK). This paper explores the incorporation of SPK in SVR by constructing appropriate admissible support vector kernels (SV kernels) based on the properties of the reproducing kernel (R.K). Three levels of SPK specification are studied, with the corresponding sub-levels of prior knowledge that can be considered for the method: Hierarchical SPK (HSPK); Interactional SPK (ISPK), consisting of independence, global and local interaction; and Functional SPK (FSPK), composed of exterior-FSPK and interior-FSPK. A convenient tool for describing the SPK, namely the Description Matrix of SPK, is introduced. Subsequently, a new SVR, namely Motivated Support Vector Regression (MSVR), whose structure is motivated in part by SPK, is proposed. Synthetic examples show that it is possible to incorporate a wide variety of SPK and that doing so helps to improve the approximation performance in complex cases. The benefits of MSVR are finally shown on a real-life military application, air-to-ground battle simulation, which shows the great potential of MSVR for complex military applications.

Keywords: admissible support vector kernel, reproducing kernel, structural prior knowledge, motivated support vector regression
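The kernel-construction idea can be illustrated with a small sketch: kernels are closed under sums and products, so per-group sub-kernels can encode assumed structure. The RBF choice, group structure, and weights below are illustrative assumptions, not the paper's description matrix.

```python
import numpy as np

def rbf(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def structural_kernel(X, Z, groups, weights):
    """Sum of sub-kernels over independent variable groups (ISPK-style independence);
    a product over groups would instead encode global interaction."""
    return sum(w * rbf(X[:, g], Z[:, g]) for g, w in zip(groups, weights))

X = np.random.rand(20, 5)
groups = [[0, 1], [2, 3, 4]]          # variables assumed structurally independent
K = structural_kernel(X, X, groups, weights=[0.5, 0.5])
print(K.shape, np.allclose(K, K.T))   # a valid Gram matrix is symmetric PSD
```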

448 Motivated Support Vector Regression with Structural Prior Knowledge

Authors: Wei Zhang, Yao-Yu Li, Yi-Fan Zhu, Qun Li, Wei-Ping Wang

Abstract:

It is known that incorporating prior knowledge into support vector regression (SVR) can help to improve the approximation performance. Most research is concerned with the incorporation of knowledge in the form of numerical relationships; little work, however, has been done to incorporate prior knowledge on the structural relationships among the variables (referred to as Structural Prior Knowledge, SPK). This paper explores the incorporation of SPK in SVR by constructing appropriate admissible support vector kernels (SV kernels) based on the properties of the reproducing kernel (R.K). Three levels of SPK specification are studied, with the corresponding sub-levels of prior knowledge that can be considered for the method: Hierarchical SPK (HSPK); Interactional SPK (ISPK), consisting of independence, global and local interaction; and Functional SPK (FSPK), composed of exterior-FSPK and interior-FSPK. A convenient tool for describing the SPK, namely the Description Matrix of SPK, is introduced. Subsequently, a new SVR, namely Motivated Support Vector Regression (MSVR), whose structure is motivated in part by SPK, is proposed. Synthetic examples show that it is possible to incorporate a wide variety of SPK and that doing so helps to improve the approximation performance in complex cases. The benefits of MSVR are finally shown on a real-life military application, air-to-ground battle simulation, which shows the great potential of MSVR for complex military applications.

Keywords: admissible support vector kernel, reproducing kernel, structural prior knowledge, motivated support vector regression

447 The Effect of Different Culturing Proportions of Deep Sea Water (DSW) to Surface Sea Water (SSW) on the Reductive Ability and Phenolic Composition of Sargassum cristaefolium

Authors: H. L. Ku, K. C. Yang, S. Y. Jhou, S. C. Lee, C. S. Lin

Abstract:

Characterized by rich mineral content, low temperature, few bacteria, and stability, and with numerous applications in aquaculture, food, drinking water, and leisure, deep sea water (DSW) development has become a new industry in the world. It has been reported that marine algae contain various biologically active compounds. This research focused on the effects of cultivating Sargassum cristaefolium with different proportions of deep sea water (DSW) and surface sea water (SSW). After two and four weeks, the total phenolic contents of the differently cultured Sargassum cristaefolium were compared, and their reductive activity was assayed with potassium ferricyanide. The fresh seaweeds were oven-dried and ground to powder, and the cultured algae were then extracted with water at 90 °C for 1 h. Total phenolic contents were determined using the Folin–Ciocalteu method. The results are as follows: the highest total phenolic content and the best reductive ability were observed for the 1/4 proportion of DSW to SSW cultured for two weeks. Furthermore, the 1/2 proportion of DSW to SSW also showed good reductive ability and plentiful phenolic composition. Finally, we confirmed that the proportion of DSW to SSW is the major factor determining both the total phenolic content and the reductive ability of Sargassum cristaefolium. In the future, we will use this method to mass-produce this marine alga or other microalgae for industrial applications.

Keywords: deep sea water (DSW), surface sea water (SSW), phenolic contents, reductive ability.

446 Multi-Layer Multi-Feature Background Subtraction Using Codebook Model Framework

Authors: Yun-Tao Zhang, Jong-Yeop Bae, Whoi-Yul Kim

Abstract:

Background modeling and subtraction has been widely used in video analysis as an effective method for moving-object detection in many computer vision applications. Recently, a large number of approaches have been developed to tackle different types of challenges in this field; however, dynamic backgrounds and illumination variations are the problems most frequently encountered in practice. This paper presents a two-layer model based on the codebook algorithm incorporating a local binary pattern (LBP) texture measure, targeted at handling dynamic background and illumination variation problems. More specifically, the first layer is a block-based codebook combining an LBP histogram with the mean value of each RGB color channel. Because LBP features are invariant to monotonic gray-scale changes, this layer produces block-wise detection results with considerable tolerance of illumination variations. A pixel-based codebook is then employed to refine the output of the first layer, eliminating further false positives. As a result, the proposed approach greatly improves accuracy under dynamic background and illumination changes. Experimental results on several popular background subtraction datasets demonstrate very competitive performance compared to previous models.

Keywords: Background subtraction, codebook model, local binary pattern, dynamic background, illumination changes.
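A minimal sketch of the 8-neighbor LBP code used by the block layer is given below; the block histogramming and codebook matching themselves are omitted, and the frame is random stand-in data.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: threshold the 8 neighbors against the center pixel."""
    g = gray.astype(np.int16)
    c = g[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = g[1 + dy : g.shape[0] - 1 + dy, 1 + dx : g.shape[1] - 1 + dx]
        code |= (neigh >= c).astype(np.uint8) << bit   # invariant to monotonic
    return code                                        # gray-scale changes

frame = (np.random.rand(120, 160) * 255).astype(np.uint8)
hist = np.bincount(lbp_image(frame).ravel(), minlength=256)  # LBP histogram
```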

445 A Novel VLSI Architecture of Hybrid Image Compression Model based on Reversible Blockade Transform

Authors: C. Hemasundara Rao, M. Madhavi Latha

Abstract:

Image compression can improve the performance of digital systems by reducing the time and cost of image storage and transmission without significant reduction of image quality; the discrete cosine transform has emerged as the state-of-the-art standard for image compression. In this paper, a hybrid image compression technique based on reversible blockade transform coding is proposed. The technique, implemented over regions of interest (ROIs), is based on selecting the coefficients that belong to different transforms depending on the region. This method allows: (1) codification with multiple kernels at various degrees of interest, (2) arbitrarily shaped spectra, and (3) flexible adjustment of the compression quality of the image and the background. No modification of the standard JPEG2000 decoder was required. The method was applied to different types of images, and the results show better performance for the selected regions than when image coding methods were employed on the whole image. We believe this method is an excellent tool for future image compression research, mainly for images where region-based coding is of interest, such as medical imaging modalities and several multimedia applications. Finally, a VLSI implementation of the proposed method is shown, and it is demonstrated that the Hartley and cosine transform kernels give better performance than the other models considered.

Keywords: VLSI, discrete cosine transform, JPEG, Hartley transform, Radon transform.
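Since the scheme builds on block transform coding, here is a minimal 8x8 block DCT sketch with coefficient masking; the mask and data are illustrative, and the reversible blockade transform itself is not reproduced here.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(b): return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

block = np.random.rand(8, 8) * 255
coeffs = dct2(block)
mask = np.zeros((8, 8)); mask[:4, :4] = 1   # ROI-grade: keep the low-frequency quarter
approx = idct2(coeffs * mask)               # background would keep fewer coefficients
print("reconstruction RMSE:", np.sqrt(((block - approx) ** 2).mean()))
```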

444 Active Segment Selection Method in EEG Classification Using Fractal Features

Authors: Samira Vafaye Eslahi

Abstract:

A BCI (Brain Computer Interface) is a communication machine that translates brain messages into computer commands. With the help of computer programs, these machines can recognize imagined tasks. Feature extraction is an important stage of EEG classification that affects both the accuracy and the computation time of signal processing. In this study we process the signal in three steps: active segment selection, fractal feature extraction, and classification. One of the great challenges in BCI applications is to improve classification accuracy and computation time together. In this paper, we have used Student's 2D sample t-statistics on continuous wavelet transforms for active segment selection to reduce the computation time. Next, features are extracted using well-known fractal dimension estimates of the signal, namely the Katz and Higuchi methods. In the classification stage we used the ANFIS (Adaptive Neuro-Fuzzy Inference System) classifier, FKNN (Fuzzy K-Nearest Neighbors), LDA (Linear Discriminant Analysis), and SVM (Support Vector Machines). We found that the active segment selection method reduces the computation time, and that fractal dimension features with ANFIS analysis on the selected active segments perform best among the investigated methods for EEG classification.

Keywords: EEG, Student’s t-statistics, BCI, fractal features, ANFIS, FKNN.
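One of the two features is easy to show: below is a minimal Katz fractal dimension sketch in its common simplified form on amplitude differences. The surrogate signal is a stand-in, and Higuchi's method is omitted.

```python
import numpy as np

def katz_fd(x):
    """Katz FD: log10(n) / (log10(n) + log10(d/L)) for a 1-D signal."""
    x = np.asarray(x, dtype=float)
    dists = np.abs(np.diff(x))          # step lengths (unit time spacing)
    L = dists.sum()                     # total curve length
    d = np.abs(x - x[0]).max()          # max distance from the first sample
    n = dists.size
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

eeg = np.cumsum(np.random.randn(512))   # surrogate EEG segment
print("Katz FD:", katz_fd(eeg))
```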

443 Environmental Modeling of Storm Water Channels

Authors: L. Grinis

Abstract:

Turbulent flow in complex geometries receives considerable attention due to its importance in many engineering applications, among them the design of storm water channels, which requires testing through physical models. The main practical limitation of physical models is the so-called “scale effect”: in many cases only the primary physical mechanisms can be correctly represented, while secondary mechanisms are often distorted. These observations form the basis of our study, which centered on problems associated with the design of storm water channels near the Dead Sea, in Israel. To help reach a final design decision we used different physical models. Our research showed good agreement with the results of laboratory tests and theoretical calculations, and allowed us to study different effects of fluid flow in an open channel. We determined that problems of this nature cannot be solved by theoretical calculation and computer simulation alone. This study demonstrates the use of physical models to help resolve very complicated problems of fluid flow through baffles and similar structures, and applies these models and observations to different constructions and multiphase water flows, among them flows that include sand and stone particles, in a significant attempt to bring laboratory testing into closer association with reality.

Keywords: Baffles, open channel, physical modeling.

442 Recent Developments in Speed Control System of Pipeline PIGs for Deepwater Pipeline Applications

Authors: Mohamad Azmi Haniffa, Fakhruldin Mohd Hashim

Abstract:

Pipeline infrastructures normally represent a high investment, and pipelines must be free from risks that could cause environmental hazards or potential threats to personnel safety. Pipeline integrity monitoring and management are therefore crucial to providing unimpeded transportation and avoiding unnecessary production deferment. Proper cleaning and inspection are the key to safe and reliable pipeline operation, play an important role in pipeline integrity management programs, and have become a standard industry procedure. In view of this, understanding the motion (dynamic behavior) of pipeline PIGs and predicting and controlling their speed are important in executing pigging operations, offering significant benefits such as estimating PIG arrival time at the receiving station, planning suitable pigging operations, and improving the efficiency of pigging tasks. The objective of this paper is to review recent developments in speed control systems of pipeline PIGs. The review serves as a quick industrial reference on such developments, and invites others to add to and update the list in the future, building toward a knowledge base and attracting the active interest of others in sharing their viewpoints.

Keywords: Pipeline Inspection Gauge (PIG), In Line Inspection Tools (ILI), PIG motion, PIG speed control system

441 Impact of Gate Insulation Material and Thickness on Pocket Implanted MOS Device

Authors: Muhibul Haque Bhuyan

Abstract:

This paper studies the impact of varying the gate insulation material and its thickness on different models of a pocket implanted sub-100 nm n-MOS device. The gate materials considered are silicon dioxide (SiO2), aluminum silicate (Al2SiO5), silicon nitride (Si3N4), alumina (Al2O3), hafnium silicate (HfSiO4), tantalum pentoxide (Ta2O5), hafnium dioxide (HfO2), zirconium dioxide (ZrO2), and lanthanum oxide (La2O3), on a p-type silicon substrate. The gate insulation thickness was varied from 2.0 nm to 3.5 nm for a 50 nm channel-length pocket implanted n-MOSFET. Several models are available for this device: we have studied and simulated the threshold voltage model incorporating drain and substrate bias effects, together with the surface potential, inversion layer charge, pinch-off voltage, effective electric field, inversion layer mobility, and subthreshold drain current models, all based on two linear symmetric pocket doping profiles. We varied the two parameters, gate insulation material and thickness, one at a time, fixing the other at its typical value, and then compared and analyzed the simulation results. This study should help designers of nano-scaled MOS devices predict device behavior for various applications.

Keywords: Linear symmetric pocket profile, pocket implanted n-MOS Device, model, impact of gate material, insulator thickness.
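A common first-order way to compare such gate stacks is equivalent oxide thickness, sketched below; the relative permittivities are textbook values (assumptions, not the paper's parameters), and EOT is only one ingredient of the models the paper simulates.

```python
K_SIO2 = 3.9
dielectrics = {"SiO2": 3.9, "Si3N4": 7.5, "Al2O3": 9.0,
               "HfO2": 25.0, "ZrO2": 25.0, "La2O3": 30.0}   # relative permittivity

def eot(t_nm, k):
    """EOT = t_highk * (k_SiO2 / k_highk): the SiO2 thickness with equal capacitance."""
    return t_nm * K_SIO2 / k

for name, k in dielectrics.items():
    print(f"{name:6s} t=2.5 nm -> EOT = {eot(2.5, k):.2f} nm")
```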

440 A Generalization of Planar Pascal’s Triangle to Polynomial Expansion and Connection with Sierpinski Patterns

Authors: Wajdi Mohamed Ratemi

Abstract:

The very well-known stacked sets of numbers referred to as Pascal’s triangle present the coefficients of the binomial expansion of the form (x+y)^n. This paper presents an approach (the Staircase Horizontal Vertical, SHV, method) to the generalization of the planar Pascal’s triangle for polynomial expansions of the form (x+y+z+w+r+⋯)^n. The presented generalization differs from other generalizations of Pascal’s triangle given in the literature. The coefficients of the generalized Pascal’s triangles presented in this work are generated by inspection, using embedded Pascal’s triangles: the coefficients of the I-variable expansion are generated by horizontally laying out the Pascal’s elements of the (I-1)-variable expansion in a staircase manner and multiplying them by the relevant columns of vertically laid-out classical Pascal’s elements, hence avoiding factorial calculations in generating the coefficients of the polynomial expansion. Furthermore, the classical Pascal’s triangle has a pattern built into it regarding its odd and even numbers, known as the Sierpinski triangle. This study presents Sierpinski-like patterns of the generalized Pascal’s triangles. Applications of the coefficients of the binomial expansion (Pascal’s triangle) and of polynomial expansions (generalized Pascal’s triangles) lie in areas such as combinatorics and probability.

Keywords: Generalized Pascal’s triangle, Pascal’s triangle, polynomial expansion, Sierpinski’s triangle, staircase horizontal vertical method.
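In the same factorial-free spirit, the sketch below builds expansion coefficients by reusing lower-order ones through iterated convolution; this is a stand-in illustration, not the SHV staircase layout itself.

```python
import numpy as np

def expansion_coeffs(num_vars, n):
    """Coefficients of (x1+...+xI)^n grouped by total degree along one chain,
    built by convolving one factor at a time (a Pascal-like recursion)."""
    coeffs = np.array([1])
    for _ in range(n):
        # multiplying by (x1+...+xI) corresponds to convolving with I ones
        coeffs = np.convolve(coeffs, np.ones(num_vars, dtype=int))
    return coeffs

print(expansion_coeffs(2, 4))   # row 4 of Pascal's triangle: [1 4 6 4 1]
print(expansion_coeffs(3, 2))   # trinomial row n=2: [1 2 3 2 1]
```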

439 The Reproducibility and Repeatability of Modified Likelihood Ratio for Forensics Handwriting Examination

Authors: O. Abiodun Adeyinka, B. Adeyemo Adesesan

Abstract:

The forensic use of handwriting depends on the analysis, comparison, and evaluation decisions made by forensic document examiners. When using biometric technology in forensic applications, it is necessary to compute the Likelihood Ratio (LR) to quantify the strength of evidence under two competing hypotheses, namely the prosecution and the defense hypotheses, for which a set of assumptions and methods is adopted for a given data set. It is therefore important to know how repeatable and reproducible the estimated LR is. This paper evaluated the accuracy and reproducibility of examiners' decisions. Confidence intervals for the estimated LR are presented so as not to obtain an incorrect estimate that would lead to a wrong judgment in a court of law. The LR estimate is fundamentally a Bayesian concept, and we used two LR estimators in this paper, namely Logistic Regression (LoR) and the Kernel Density Estimator (KDE). The repeatability evaluation was carried out by retesting the initial experiment after an interval of six months to observe whether examiners would repeat their decisions for the estimated LR. The experimental results, based on a handwriting dataset, show that the LR has different confidence intervals, which implies that the LR cannot be estimated with the same certainty everywhere. Though LoR performed better than KDE when tested on the same dataset, the two LR estimators investigated showed a consistent region in which the LR value can be estimated confidently. These two findings advance our understanding of the LR when used to compute the strength of evidence in forensic handwriting examination.

Keywords: Logistic Regression (LoR), Kernel Density Estimator (KDE), handwriting, confidence interval, repeatability, reproducibility.
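For intuition, here is a minimal score-based LR sketch using one of the two named estimators (KDE, via scikit-learn); the score distributions and bandwidth are synthetic assumptions, not the paper's handwriting data.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
same_writer = rng.normal(0.8, 0.10, (200, 1))   # comparison scores under prosecution Hp
diff_writer = rng.normal(0.4, 0.15, (200, 1))   # comparison scores under defense Hd

kde_p = KernelDensity(bandwidth=0.05).fit(same_writer)
kde_d = KernelDensity(bandwidth=0.05).fit(diff_writer)

def likelihood_ratio(score):
    """LR = f(score | Hp) / f(score | Hd); score_samples returns log densities."""
    s = np.array([[score]])
    return np.exp(kde_p.score_samples(s)[0] - kde_d.score_samples(s)[0])

print("LR at score 0.7:", likelihood_ratio(0.7))   # >1 supports Hp, <1 supports Hd
```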

438 Continuous Feature Adaptation for Non-Native Speech Recognition

Authors: Y. Deng, X. Li, C. Kwan, B. Raj, R. Stern

Abstract:

Current speech interfaces in many military applications may be adequate for native speakers, but the recognition rate drops considerably for non-native speakers (people with foreign accents), mainly because non-native speakers exhibit large temporal and intra-phoneme variations when pronouncing the same words. The problem is further complicated by the presence of strong environmental noise such as tank noise, helicopter noise, etc. In this paper, we propose a novel continuous acoustic feature adaptation algorithm for on-line accent and environmental adaptation. Implemented by incremental singular value decomposition (SVD), the algorithm captures local acoustic variation and runs in real time. This feature-based adaptation method is then integrated with the conventional model-based maximum likelihood linear regression (MLLR) algorithm. Extensive experiments have been performed on the NATO non-native speech corpus with a baseline acoustic model trained on native American English. The proposed feature-based adaptation algorithm improved the average recognition accuracy by 15%, while the MLLR model-based adaptation achieved an 11% improvement. The corresponding word error rate (WER) reductions were 25.8% and 2.73%, compared to no adaptation. The combined adaptation achieved an overall recognition accuracy improvement of 29.5% and a WER reduction of 31.8%, compared to no adaptation.

Keywords: speaker adaptation, environment adaptation, robust speech recognition, SVD, non-native speech recognition.
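To convey the flavor of SVD-based feature adaptation, the sketch below refreshes a low-rank projection from a sliding window of frames. The window size, rank, and batch (rather than truly incremental) SVD are simplifying assumptions, and the MLLR integration is not shown.

```python
import numpy as np

class SVDFeatureAdapter:
    def __init__(self, dim, rank=8, window=500):
        self.buf = np.zeros((0, dim))
        self.rank, self.window = rank, window
        self.basis = np.eye(dim)[:, :rank]

    def update(self, frames):
        """Accumulate frames and refresh the basis from the newest window."""
        self.buf = np.vstack([self.buf, frames])[-self.window:]
        _, _, Vt = np.linalg.svd(self.buf - self.buf.mean(0), full_matrices=False)
        self.basis = Vt[: self.rank].T     # directions of local acoustic variation

    def transform(self, frames):
        return frames @ self.basis         # adapted low-dimensional features

adapter = SVDFeatureAdapter(dim=39)        # e.g. 39-d MFCC + delta features
utt = np.random.randn(120, 39)             # one utterance of stand-in frames
adapter.update(utt)
print(adapter.transform(utt).shape)        # (120, 8)
```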

437 Investigation of Thermal and Mechanical Loading on Functional Graded Material Plates

Authors: Mine Uslu Uysal

Abstract:

This paper is concerned with the mechanical deformation behavior of shear-deformable functionally graded ceramic-metal (FGM) plates. The theoretical formulation is based on a power-law description of the functionally graded material: the mechanical properties of the plate are graded in the thickness direction according to a power law. Displacement and stress are obtained using the finite element method (FEM). The load is assumed to be uniformly distributed over the plate surface (the XY plane) and to vary in the thickness direction only. An FGM’s gradation in material properties allows the designer to tailor the material response to meet design criteria; an FGM made of ceramic and metal can provide thermal protection and load-carrying capability in one material, thus eliminating the problem of thermo-mechanical deformation. This work explores the analysis of FGM flat plates and shell panels and their application to structural problems. The FGMs are first characterized as flat plates under pressure in order to understand the effect that the variation of material properties has on the structural response. In addition, results are compared with published results in order to show the accuracy of modeling FGMs using the ABAQUS software.

Keywords: Functionally graded material, finite element method, thermal and structural loading.
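The power-law gradation itself is a one-liner, sketched below; the ceramic/metal moduli and thickness are illustrative assumptions, not the paper's plate data.

```python
import numpy as np

def fgm_property(z, h, P_ceramic, P_metal, p):
    """Through-thickness property for z in [-h/2, h/2]:
    P(z) = (Pc - Pm) * Vc(z) + Pm, with Vc = (z/h + 1/2)^p."""
    Vc = (z / h + 0.5) ** p                # ceramic volume fraction
    return (P_ceramic - P_metal) * Vc + P_metal

h = 0.02                                   # plate thickness, m
z = np.linspace(-h / 2, h / 2, 5)
E = fgm_property(z, h, P_ceramic=380e9, P_metal=70e9, p=2.0)   # e.g. Al2O3/Al
print(np.round(E / 1e9, 1))               # Young's modulus in GPa across thickness
```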

436 Medical Image Watermark and Tamper Detection Using Constant Correlation Spread Spectrum Watermarking

Authors: Peter U. Eze, P. Udaya, Robin J. Evans

Abstract:

Data hiding can be achieved by steganography or invisible digital watermarking. For digital watermarking, both accurate retrieval of the embedded watermark and the integrity of the cover image are important. Medical image security in teleradiology is one application where the embedded patient record needs to be extracted accurately and the integrity of the medical image verified. In this paper, Constant Correlation Spread Spectrum digital watermarking for medical image tamper detection and accurate watermark retrieval is introduced. In the proposed method, a watermark bit from a patient record is spread in a medical image sub-block such that the correlation of every watermarked sub-block with a spreading code, W, has a constant value, p. The constant correlation p, the spreading code W, and the size of the sub-blocks constitute the secret key. Tamper detection is achieved by flagging any sub-block whose correlation value deviates from p by more than a small value, ε. The major features of the new scheme are: (1) improved watermark detection accuracy for high-pixel-depth medical images, reducing the Bit Error Rate (BER) to zero, and (2) block-level tamper detection in a single computational process with simultaneous watermark detection, thereby increasing utility at the same computational cost.

Keywords: Constant correlation, medical image, spread spectrum, tamper detection, watermarking.
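A minimal sketch of the constant-correlation idea follows: each sub-block is shifted along the spreading code until its correlation equals ±p (the sign carrying the bit here), and detection flags blocks whose correlation drifts beyond ε. All parameter values, and the sign-based bit encoding, are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, eps = 64, 4.0, 0.5
W = rng.choice([-1.0, 1.0], N)              # secret spreading code, <W,W>/N = 1

def embed(block, bit):
    target = p if bit else -p
    c = block @ W / N                       # current correlation with W
    return block + (target - c) * W         # after this, block' @ W / N == target

def detect(block):
    c = block @ W / N
    bit = c > 0                             # sign recovers the watermark bit
    tampered = abs(abs(c) - p) > eps        # deviation from the constant p
    return bit, tampered

x = rng.normal(128, 20, N)                  # flattened image sub-block
wx = embed(x, bit=True)
print(detect(wx))                           # (True, False): intact block
wx[:8] += 30 * W[:8]                        # a perturbation that shifts the correlation
print(detect(wx))                           # (True, True): flagged as tampered
```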

435 Evaluating Machine Learning Techniques for Activity Classification in Smart Home Environments

Authors: Talal Alshammari, Nasser Alshammari, Mohamed Sedky, Chris Howard

Abstract:

With the widespread adoption of Internet-connected devices and the prevalence of Internet of Things (IoT) applications, there is increased interest in machine learning techniques that can provide useful and interesting services in the smart home domain. The areas where machine learning can help are varied and ever-evolving; classifying smart home inhabitants’ Activities of Daily Living (ADLs) is one prominent example. The ability of a machine learning technique to find meaningful spatio-temporal relations in high-dimensional data is an important requirement as well. This paper presents a comparative evaluation of state-of-the-art machine learning techniques for classifying ADLs in the smart home domain. Forty-two synthetic datasets and two real-world datasets with multiple inhabitants are used to evaluate and compare the performance of the identified techniques: AdaBoost, the Cortical Learning Algorithm (CLA), Decision Trees, the Hidden Markov Model (HMM), the Multi-layer Perceptron (MLP), the Structured Perceptron, and Support Vector Machines (SVM). Our results show significant performance differences between the evaluated techniques; overall, neural-network-based techniques showed superiority over the other tested techniques.

Keywords: Activities of daily living, classification, internet of things, machine learning, smart home.
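The evaluation loop such a comparison implies can be sketched with scikit-learn, as below; the synthetic data are stand-ins for ADL features, and CLA, HMM, and the Structured Perceptron are omitted because they live outside scikit-learn.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, n_classes=5,
                           n_informative=10, random_state=0)  # stand-in ADL features
models = {"AdaBoost": AdaBoostClassifier(),
          "DecisionTree": DecisionTreeClassifier(),
          "MLP": MLPClassifier(max_iter=1000),
          "SVM": SVC()}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)                 # 5-fold cross-validation
    print(f"{name:12s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```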

434 A Deep Learning Framework for Polarimetric SAR Change Detection Using Capsule Network

Authors: Sanae Attioui, Said Najah

Abstract:

The Earth's surface is constantly changing through forces of nature and human activities. Reliable, accurate, and timely change detection is critical to environmental monitoring, resource management, and planning activities. Recently, interest in deep learning algorithms, especially convolutional neural networks, has increased in the field of image change detection due to their powerful ability to extract multi-level image features automatically. However, these networks have drawbacks that limit their applications: they are unable to capture spatial relationships between image instances, and compensating for this requires a large amount of training data. As an alternative, the Capsule Network has been proposed to overcome these shortcomings. Although its effectiveness in remote sensing image analysis has been experimentally verified, its application to change detection tasks remains very sparse. Motivated by its greater robustness through improved hierarchical object representation, this study applies a capsule network to PolSAR image change detection. The experimental results demonstrate that the proposed method yields a significantly higher detection rate than methods based on convolutional neural networks.

Keywords: Change detection, capsule network, deep network, convolutional neural networks, polarimetric synthetic aperture radar (PolSAR) images.
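The distinctive non-linearity of capsule networks is easy to exhibit; below is a minimal sketch of the "squash" function, which maps capsule vector lengths into (0, 1) while preserving orientation. The capsule dimensions are arbitrary, and routing-by-agreement and the PolSAR pipeline are beyond this snippet.

```python
import numpy as np

def squash(s, eps=1e-9):
    """v = (|s|^2 / (1 + |s|^2)) * (s / |s|), applied per capsule vector."""
    norm2 = (s ** 2).sum(axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

caps = np.random.randn(10, 8)          # 10 capsules with 8-d pose vectors
v = squash(caps)
print(np.linalg.norm(v, axis=1))       # lengths in (0, 1): "probability of presence"
```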

433 Model Order Reduction of Linear Time Variant High Speed VLSI Interconnects using Frequency Shift Technique

Authors: J. V. R. Ravindra, M. B. Srinivas

Abstract:

Accurate modeling of high-speed RLC interconnects has become a necessity for addressing signal integrity issues in current VLSI design. To accurately model a dispersive system of interconnects at higher frequencies, a full-wave analysis is required; however, conventional circuit simulation of interconnects with full-wave models is extremely CPU-expensive. We present an algorithm for reducing large VLSI circuits to much smaller ones with similar input-output behavior. A key feature of our method, called the Frequency Shift Technique, is that it is capable of reducing linear time-varying systems. This enables it to capture frequency-translation and sampling behavior, important in communication subsystems such as mixers, RF components, and switched-capacitor filters. Reduction is obtained by projecting the original system, described by linear differential equations, onto a lower dimension. Experiments carried out using the Cadence design simulator indicate that the proposed technique achieves greater reduction with less CPU time than other model order reduction techniques in the literature. We also present applications to RF circuit subsystems, obtaining size reductions and evaluation speedups of orders of magnitude with insignificant loss of accuracy.

Keywords: Model order reduction, RLC, crosstalk.
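The projection step common to such methods is sketched below: with an orthonormal basis V, the reduced system is (VᵀAV, VᵀB, CV). The random basis is a placeholder assumption; a Krylov/Arnoldi or POD basis would be used in practice, and the frequency-shift handling of time variation is not reproduced.

```python
import numpy as np

def reduce_lti(A, B, C, r, rng=np.random.default_rng(0)):
    """Project an LTI system (A, B, C) onto an r-dimensional subspace."""
    V, _ = np.linalg.qr(rng.normal(size=(A.shape[0], r)))   # orthonormal basis
    return V.T @ A @ V, V.T @ B, C @ V

n, r = 200, 10                         # e.g. an RLC interconnect with 200 states
A = -np.diag(np.linspace(1, 50, n)) + 0.1 * np.random.randn(n, n)
B = np.ones((n, 1)); C = np.ones((1, n))
Ar, Br, Cr = reduce_lti(A, B, C, r)
print(Ar.shape, Br.shape, Cr.shape)    # (10, 10) (10, 1) (1, 10)
```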

432 Comparison of Phylogenetic Trees of Multiple Protein Sequence Alignment Methods

Authors: Khaddouja Boujenfa, Nadia Essoussi, Mohamed Limam

Abstract:

Multiple sequence alignment is a fundamental part of many bioinformatics applications, such as phylogenetic analysis. Many alignment methods have been proposed; each gives a different result for the same data set and consequently generates a different phylogenetic tree, so the chosen alignment method affects the resulting tree. However, the literature contains no evaluation of multiple alignment methods based on the comparison of their phylogenetic trees. This work evaluates the following eight aligners: ClustalX, T-Coffee, SAGA, MUSCLE, MAFFT, DIALIGN, ProbCons, and Align-m, based on the phylogenetic trees (test trees) they produce on a given data set. The Neighbor-Joining method is used to estimate trees. Three criteria, namely the dNNI, the dRF, and the Id_Tree, are established to test the ability of the different alignment methods to produce test trees closer to the reference one (true tree). Results show that the method producing the most accurate alignment gives the test tree nearest to the reference tree. MUSCLE outperforms all aligners with respect to the three criteria and for all datasets, performing particularly well when sequence identities are within 10-20%. It is followed by T-Coffee at lower sequence identity (<10%), Align-m at 20-30% identity, and ClustalX and ProbCons at 30-50% identity. It is also noticed that when sequence identities are higher (>30%), the tree scores of all methods become similar.

Keywords: Multiple alignment methods, phylogenetic trees, Neighbor-Joining method, Robinson-Foulds distance.
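One of the three criteria, the Robinson-Foulds distance (dRF), can be computed in a few lines with the DendroPy library, as sketched below on toy five-taxon trees (not the paper's data).

```python
import dendropy
from dendropy.calculate import treecompare

taxa = dendropy.TaxonNamespace()           # shared namespace so trees are comparable
ref = dendropy.Tree.get(data="((A,B),(C,(D,E)));", schema="newick",
                        taxon_namespace=taxa)
test = dendropy.Tree.get(data="((A,C),(B,(D,E)));", schema="newick",
                         taxon_namespace=taxa)
ref.encode_bipartitions()
test.encode_bipartitions()
print("dRF =", treecompare.symmetric_difference(ref, test))
```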
