Search results for: combined channel
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1572

492 Speed Characteristics of Mixed Traffic Flow on Urban Arterials

Authors: Ashish Dhamaniya, Satish Chandra

Abstract:

Speed and traffic volume data are collected on different sections of four-lane and six-lane roads in three metropolitan cities in India. The speed data are analyzed to fit statistical distributions to individual vehicle speed data and to the combined speed data of all vehicles. It is noted that the speed data of individual vehicles generally follow a normal distribution, but the speed data of all vehicles combined at a section of urban road may or may not follow the normal distribution, depending upon the composition of the traffic stream. A new term, the Speed Spread Ratio (SSR), is introduced in this paper; it is the ratio of the difference between the 85th and 50th percentile speeds to the difference between the 50th and 15th percentile speeds. If the SSR is unity, the speed data are truly normally distributed. It is noted that on six-lane urban roads, speed data follow a normal distribution only when the SSR is in the range of 0.86 – 1.11. This range of SSR is also validated on four-lane roads.
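
As a minimal sketch of the definition above, the SSR can be computed directly from spot-speed observations; the sample data here are hypothetical, not from the study.

```python
import numpy as np

def speed_spread_ratio(speeds):
    """Speed Spread Ratio: (V85 - V50) / (V50 - V15).

    A value of 1.0 indicates a symmetric, normal-like spread; the paper
    reports normality on six-lane roads for SSR between 0.86 and 1.11.
    """
    v15, v50, v85 = np.percentile(speeds, [15, 50, 85])
    return (v85 - v50) / (v50 - v15)

# Hypothetical spot speeds (km/h) drawn from a normal distribution
speeds = np.random.default_rng(0).normal(loc=48.0, scale=8.0, size=500)
print(round(speed_spread_ratio(speeds), 2))  # close to 1.0 for normal data
```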

Keywords: Normal distribution, percentile speed, speed spread ratio, traffic volume.

PDF Downloads: 4218
491 Simulation on the Performance of Carbon Dioxide and HFC-125 Heat Pumps for Medium- and High-Temperature Heating

Authors: Young-Jin Baik, Minsung Kim

Abstract:

In order to compare the performance of carbon dioxide and HFC-125 heat pumps for medium- and high-temperature heating, both heat pump cycles were optimized using a simulation method. To fairly compare the performance of the cycles using different working fluids, each cycle was optimized from the viewpoint of heating COP by two design parameters. The first is the gas cooler exit temperature and the other is the ratio of the overall heat conductance of the gas cooler to the combined overall heat conductance of the gas cooler and the evaporator. The inlet and outlet temperatures of the secondary fluid of the gas cooler were fixed at 40/90°C and 40/150°C. The results show that the HFC-125 heat pump has a 6% higher heating COP than the carbon dioxide heat pump when the heat sink exit temperature is fixed at 90°C, while the latter outperforms the former when the heat sink exit temperature is fixed at 150°C under the simulation conditions considered in the present study.

Keywords: Carbon dioxide, HFC-125, transcritical, heat pump.

PDF Downloads: 1608
490 Modes of Collapse of Compress–Expand Member under Axial Loading

Authors: Shigeyuki Haruyama, Aidil Khaidir Bin Muhamad, Ken Kaminishi, Dai-Heng Chen

Abstract:

In this paper, a study on the modes of collapse of compress-expand members is presented. A compress-expand member is a compact assembly of multiple combined cylinders, proposed as an energy absorber. Previous studies on the compress-expand member have clarified its energy absorption efficiency, proposed an approximate equation to describe its deformation characteristics and also highlighted the improvement that it brings. However, for the member to be practical, the actual range of geometric dimensions over which it maintains its applicability must be investigated. In this study, using virtual materials that obey the bilinear hardening law, finite element method (FEM) analyses of the collapse modes of the compress-expand member have been conducted. Deformation maps that plot the member's collapse modes with respect to its geometric and material parameters are then presented in order to determine the dimensional range of each collapse mode.

Keywords: Axial collapse, compress-expand member, tubular member, finite element method, modes of collapse, thin-walled cylindrical tube.

PDF Downloads: 1995
489 Quantitative Indicator of Abdominal Aortic Aneurysm Rupture Risk Based on its Geometric Parameters

Authors: Guillermo Vilalta, Félix Nieto, Carlos Vaquero, José A. Vilalta

Abstract:

Rupture of abdominal aortic aneurysms (AAAs) is one of the main causes of death in the world. It is a very complex phenomenon that usually occurs "without previous warning". Currently, the criteria used to assess aneurysm rupture risk (peak diameter and growth rate) cannot be considered reliable indicators. In a first approach, the main geometric parameters of aneurysms have been linked into five biomechanical factors. These are combined to obtain a dimensionless rupture risk index, RI(t), which has been validated preliminarily with a clinical case and others from the literature. This quantitative indicator is easy to understand, allows estimation of aneurysm rupture risk, and is expected to be able to identify rupture-prone aneurysms whose peak diameter is below the threshold value. Based on the initial results, a broader study has begun with twelve patients from the Clinic Hospital of Valladolid, Spain, who undergo periodic follow-up examinations.

Keywords: AAA, rupture risk prediction, biomechanical factors, AAA geometric characterization.

PDF Downloads: 1511
488 Fatigue Failure Analysis in AISI 304 Stainless Wind Turbine Shafts

Authors: M. F. V. Montezuma, E. P. Deus, M. C. Carvalho

Abstract:

Wind turbines are equipment of great importance for generating clean energy in countries and regions with abundant winds. However, the complex load fluctuations to which they are subjected can cause premature failure of this equipment through material fatigue. This work evaluates fatigue failures in small AISI 304 stainless steel turbine shafts. Fractographic analysis techniques, chemical analyses using energy dispersive spectrometry (EDS), and hardness tests were used to verify the origin of the failures and to characterize the properties of the components and the material. The nucleation of cracks on the shafts' surface was observed due to a combined effect of variable stresses, geometric stress-concentrating details, and surface wear, leading to crack propagation until catastrophic failure. Beach marks were identified in the macrographic examination, characterizing the probable failure due to fatigue. The sensitization phenomenon was also observed.

Keywords: Fatigue, sensitization phenomenon, stainless steel shafts, wind turbine failure.

PDF Downloads: 669
487 Theoretical Considerations for Software Component Metrics

Authors: V. Lakshmi Narasimhan, Bayu Hendradjaya

Abstract:

We have defined two suites of metrics, which cover static and dynamic aspects of component assembly. The static metrics measure the complexity and criticality of component assembly, wherein complexity is measured using the Component Packing Density and Component Interaction Density metrics. Further, four criticality conditions, namely Link, Bridge, Inheritance and Size criticalities, have been identified and quantified. The complexity and criticality metrics are combined to form a Triangular Metric, which can be used to classify the type and nature of applications. Dynamic metrics are collected during the runtime of a complete application and are useful for identifying super-components and for evaluating the degree of utilisation of various components. In this paper both static and dynamic metrics are evaluated using Weyuker's set of properties. The results show that the metrics provide a valid means to measure issues in component assembly. We relate our metrics suite to McCall's Quality Model and illustrate their impact on product quality and on the management of component-based product development.
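
To illustrate the flavour of the two static complexity metrics named above, the sketch below treats Component Packing Density as constituents per component and Component Interaction Density as the fraction of available interactions actually used; these working definitions and the numbers are assumptions for illustration, not the authors' exact formulations.

```python
def component_packing_density(n_constituents: int, n_components: int) -> float:
    """Average number of constituents (e.g. classes) packed into one component."""
    return n_constituents / n_components

def component_interaction_density(actual: int, available: int) -> float:
    """Share of the available inter-component interactions actually exercised."""
    return actual / available

# Hypothetical assembly: 120 classes in 8 components, 14 of 56 possible links used
print(component_packing_density(120, 8))        # 15.0
print(component_interaction_density(14, 56))    # 0.25
```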

Keywords: Component Assembly, Component Based Software Engineering, CORBA Component Model, Software Component Metrics.

PDF Downloads: 2261
486 An Approach of Quantum Steganography through Special SSCE Code

Authors: Indradip Banerjee, Souvik Bhattacharyya, Gautam Sanyal

Abstract:

Sending encrypted messages frequently draws the attention of third parties, perhaps causing attempts to break and reveal the original messages. Steganography is introduced to hide the existence of the communication by concealing a secret message in an appropriate carrier such as text, image, audio or video. In quantum steganography, the sender (Alice) embeds her steganographic information into the cover and sends it to the receiver (Bob) over a communication channel. Alice and Bob share an algorithm and hide quantum information in the cover. An eavesdropper (Eve) without access to the algorithm cannot find out the existence of the quantum message. In this paper, a text quantum steganography technique is proposed, based on the use of the indefinite articles (a) or (an) in conjunction with nonspecific or non-particular nouns in the English language and a quantum gate truth table. The authors also introduce a new code representation technique (SSCE - Secret Steganography Code for Embedding) at both ends in order to achieve a high level of security. Before the embedding operation, each character of the secret message is converted to its SSCE value and then embedded into the cover text. Finally, the stego text is formed and transmitted to the receiver side. At the receiver side, the reverse operations are carried out to recover the original information.

Keywords: Quantum Steganography, SSCE (Secret Steganography Code for Embedding), Security, Cover Text, Stego Text.

PDF Downloads: 2079
485 Finite Element Method for Calculating Temperature Field of Main Cable of Suspension Bridge

Authors: Heng Han, Zhilei Liang, Xiangong Zhou

Abstract:

In this paper, the finite element method is used to study the temperature field of the main cable of a suspension bridge, and a method for calculating the average temperature of the main cable cross-section, suitable for construction control of the cable system, is proposed. By comparing and analyzing the temperature fields of main cables with five different diameters, a reasonable diameter limit for calculating the average cross-section temperature by the finite element method is proposed. The results show that the maximum error of this method is less than 1 °C, which meets the requirements of construction control accuracy. For main cables with a diameter greater than 400 mm, surface temperature measuring points combined with the finite element method should be used to calculate the average cross-section temperature.
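
A cross-section average temperature consistent with a finite element solution can be expressed, for example, as an area-weighted mean over the elements of the cable cross-section; this is a sketch of the idea in assumed notation, not necessarily the authors' exact formulation:

\bar{T} = \frac{\sum_{e} A_e T_e}{\sum_{e} A_e}

where A_e is the area of element e and T_e its computed temperature.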

Keywords: Suspension bridge, main cable, temperature field, finite element.

PDF Downloads: 310
484 Combining Color and Layout Features for the Identification of Low-resolution Documents

Authors: Ardhendu Behera, Denis Lalanne, Rolf Ingold

Abstract:

This paper proposes a method, combining color and layout features, for identifying documents captured from low-resolution handheld devices. On the one hand, the document image color density surface is estimated and represented by an equivalent ellipse; on the other hand, the document's shallow layout structure is computed and represented hierarchically. The combined color and layout features are arranged in a symbolic file, which is unique for each document and is called the document's visual signature. Our identification method first uses the color information in the signatures in order to focus the search space on documents having a similar color distribution, and then selects the document having the most similar layout structure in the remaining search space. Our experiments consider slide documents, which are often captured using handheld devices.

Keywords: Document color modeling, document visual signature, kernel density estimation, document identification.

PDF Downloads: 1349
483 Sparse Unmixing of Hyperspectral Data by Exploiting Joint-Sparsity and Rank-Deficiency

Authors: Fanqiang Kong, Chending Bian

Abstract:

In this work, we exploit two assumed properties of the abundances of the observed signatures (endmembers) in order to reconstruct the abundances from hyperspectral data. Joint-sparsity is the first property of the abundances, which assumes that adjacent pixels can be expressed as different linear combinations of the same materials. The second property is rank-deficiency, where the number of endmembers participating in the hyperspectral data is very small compared with the dimensionality of the spectral library, which means that the abundance matrix of the endmembers is a low-rank matrix. These assumptions lead to an optimization problem for the sparse unmixing model that requires minimizing a combined l2,p-norm and nuclear norm. We propose a variable splitting and augmented Lagrangian algorithm to solve the optimization problem. Experimental evaluation carried out on synthetic and real hyperspectral data shows that the proposed method outperforms state-of-the-art algorithms with better spectral unmixing accuracy.
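
In compact form, the objective described above can be written as follows, with notation assumed here (spectral library A, observed pixels Y, abundance matrix X) rather than quoted from the paper:

\min_{X \ge 0} \; \tfrac{1}{2}\|AX - Y\|_F^2 + \lambda \|X\|_{2,p} + \tau \|X\|_{*}

where the l2,p term promotes joint (row) sparsity across neighbouring pixels, the nuclear norm \|X\|_{*} enforces rank deficiency, and \lambda, \tau are regularization weights; this composite problem is what the variable-splitting augmented Lagrangian scheme solves.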

Keywords: Hyperspectral unmixing, joint-sparse, low-rank representation, abundance estimation.

PDF Downloads: 752
482 Design and Construction of the Semi-Automatic Sliced Ginger Machine

Authors: J. Chatthong, W. Boonchouytan, R. Burapa

Abstract:

The purpose of this study was to design and construct a semi-automatic ginger slicing machine to reduce production time in the sheet and slice ginger procedures and to reduce the amount of labor needed for slicing and cutting, while taking into consideration the cleanliness and safety of workers and consumers. The machine uses a 1 horsepower motor; the slicing blade rotates at 967 rpm, and the slicing disc, 310 mm in diameter, carries 2 blades for cutting ginger into sheets. Power from the motor is also transferred to the slice-cutter roller, rotating at 440 rpm, which cuts the sheet ginger into line ginger. The conveyor, whose level can be adjusted by the motors, feeds the sheet ginger to the roller for sheet and slice cutting in the next process. The cover of the slicing unit has a channel for one tuber of ginger. The semi-automatic sliced ginger machine produced sheet ginger at 81.8 kg/h (6.2 times the labor rate) and line ginger at 17.9 kg/h (2.5 times the labor rate), compared with manual work, which produced sheet ginger at 13.2 kg/h and line ginger at 7.1 kg/h; over the total timed process, the machine handled 30.86 kg/h against 4.6 kg/h by hand, i.e. 6.7 times the labor rate. The semi-automatic sliced ginger machine is convenient and easy to use and maintain; it reduces bodily fatigue and the risk of accidents in the slicing procedure, which otherwise requires high skill. In addition, the machine can be used with other vegetables, for example potato and carrot.

Keywords: Slicing machine, sliced ginger, line ginger.

PDF Downloads: 3207
481 Automated Detection of Alzheimer Disease Using Region Growing technique and Artificial Neural Network

Authors: B. Al-Naami, N. Gharaibeh, A. AlRazzaq Kheshman

Abstract:

Alzheimer's disease is known as the loss of mental functions such as thinking, memory, and reasoning that is severe enough to interfere with a person's daily functioning. The symptoms of Alzheimer's disease (AD) that appear depend on which part of the brain carries the infection or damage. In this case, MRI is the best biomedical instrument that can be used to discover the existence of AD. Therefore, this paper proposes a fusion method to distinguish between normal and AD MRIs. In this combined method, around 27 MRIs collected from Jordanian hospitals are analyzed using low-pass and morphological filters; statistical outputs are extracted through the intensity histogram and employed in descriptive box plots. The artificial neural network (ANN) is also applied to test the performance of this approach. Finally, the obtained result of the t-test at the 95% confidence level is compared with the classification accuracy of the ANN (100%). The robustness of the developed method makes it effective for diagnosing and determining the type of AD image.

Keywords: Alzheimer disease, Brain MRI analysis, Morphological filter, Box plot, Intensity histogram, ANN.

PDF Downloads: 3116
480 Hybrid Feature and Adaptive Particle Filter for Robust Object Tracking

Authors: Xinyue Zhao, Yutaka Satoh, Hidenori Takauji, Shun'ichi Kaneko

Abstract:

A hybrid-feature-based adaptive particle filter algorithm is presented for object tracking in real scenarios with a static camera. The hybrid feature combines two effective features: the Grayscale Arranging Pairs (GAP) feature and the color histogram feature. The GAP feature has high discriminative ability even under severe illumination variation and dynamic background elements, while the color histogram feature has high reliability for identifying the detected objects. The combination of the two features compensates for the shortcomings of each single feature. Furthermore, we adopt an updating target model so that external problems such as changes in visual angle can be overcome well. An automatic initialization algorithm is introduced which provides precise initial positions of objects. The experimental results show the good performance of the proposed method.
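
As a minimal sketch of the hybrid observation model idea, each particle's weight can be taken as the product of two per-feature likelihoods and then renormalized; the Gaussian likelihoods below are placeholders, not the GAP or color-histogram models used in the paper.

```python
import numpy as np

def update_weights(particles, likelihood_gap, likelihood_color):
    """Combine two per-particle likelihoods and renormalize the weights."""
    w = np.array([likelihood_gap(p) * likelihood_color(p) for p in particles])
    return w / w.sum()

particles = np.linspace(-2.0, 2.0, 5)              # toy 1-D particle states
gap = lambda p: np.exp(-0.5 * (p - 0.3) ** 2)      # placeholder GAP likelihood
color = lambda p: np.exp(-0.5 * (p + 0.1) ** 2)    # placeholder color likelihood
print(update_weights(particles, gap, color))
```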

Keywords: Hybrid feature, adaptive Particle Filter, robust Object Tracking, Grayscale Arranging Pairs (GAP) feature.

PDF Downloads: 1504
479 Effect of Alloying Elements and Hot Forging/Rolling Reduction Ratio on Hardness and Impact Toughness of Heat Treated Low Alloy Steels

Authors: Mahmoud M. Tash

Abstract:

The present study was carried out to investigate the effect of alloying elements and thermo-mechanical treatment (TMT), i.e. hot rolling and forging with different reduction ratios, on the hardness (HV) and impact toughness (J) of heat-treated low alloy steels. By understanding the combined effect of TMT and alloying elements, and by measuring the hardness and impact toughness resulting from different heat treatments following TMT of the low alloy steels, it is possible to determine which conditions yield optimum mechanical properties and a high strength-to-weight ratio. Experimental correlations between hot work reduction ratio, hardness and impact toughness for thermo-mechanically treated and heat-treated low alloy steels are analyzed quantitatively, and both regression and mathematical models for hardness and impact toughness are developed.

Keywords: Hot forging, hot rolling, heat treatment, hardness (HV), impact toughness (J), microstructure, low alloy steels.

PDF Downloads: 3415
478 Phytoadaptation in Desert Soil Prediction Using Fuzzy Logic Modeling

Authors: S. Bouharati, F. Allag, M. Belmahdi, M. Bounechada

Abstract:

In terms of forecasting the ecological effects of desertification, the purpose of this study is to develop a predictive model of the growth and adaptation of species under arid-environment and bioclimatic conditions. The impact of climate change and the desertification phenomenon is the result of the combined effects of the magnitude and frequency of these phenomena. Because the data involved in the phytopathogenic process and in bacterial growth in arid soil occur in an uncertain environment owing to their complexity, it becomes necessary to have a suitable methodology for the analysis of these variables. The basic principles of fuzzy logic are perfectly suited to this process. As input variables, we consider the physical parameters, soil type, bacteria nature, and plant species concerned. The resulting output variable is the adaptability of the species, expressed by the growth rate or extinction. As a conclusion, we present possible strategies for adaptation, with or without shifting plantation areas, and the nature of adequate vegetation.

Keywords: Climate changes, dry soil, Phytopathogenicity, Predictive model, Fuzzy logic.

PDF Downloads: 1848
477 The Estimation of Human Vital Signs Complexity

Authors: L. Bikulciene, E. Venskaityte, G. Jarusevicius

Abstract:

Nonstationary and nonlinear signals generated by living complex systems defy traditional mechanistic approaches, which are based on homeostasis. Our previous studies have shown that evaluating the interactions of physiological signals with special analysis methods is suitable for the observation of physiological processes. We demonstrate the possibility of using a deep physiological model, based on interpreting the changes of the human body's functional states, combined with an analytical method based on matrix theory for physiological signal analysis, applied here to high-risk cardiac patients. It is shown that the evaluation of cardiac signal interactions reveals functional changes peculiar to each individual at the onset of the hemodynamic restoration procedure. Therefore, we suggest that the assessment of alterations in the functional state of the body after patients undergo surgery can be complemented by the data obtained from the suggested approach of evaluating the interactions of functional variables.

Keywords: Cardiac diseases, Complex systems theory, ECG analysis, matrix analysis.

PDF Downloads: 2220
476 An AI-Based Dynamical Resource Allocation Calculation Algorithm for Unmanned Aerial Vehicle

Authors: Zhou Luchen, Wu Yubing, Burra Venkata Durga Kumar

Abstract:

As the scale of the network becomes larger and more complex than before, the density of user devices is also increasing. Unmanned Aerial Vehicle (UAV) networks are able to collect and transform data in an efficient way by using software-defined networking (SDN) technology. This paper proposes a three-layer distributed and dynamic cluster architecture to manage UAVs, using an AI-based resource allocation calculation algorithm to address the network overloading problem. By separating the services of each UAV, the hierarchical UAV cluster system performs the main function of reducing the network load and transferring user requests, with three sub-tasks: data collection, communication channel organization, and data relaying. In each cluster, a head node and a vice head node UAV are selected considering the CPU, RAM, and ROM memory of the devices, battery charge, and capacity. The vice head node acts as a backup that stores all the data held in the head node. The k-means clustering algorithm is used to detect high-load regions and form the layered UAV clusters. The whole process of detecting high-load areas, forming and selecting UAV clusters, and moving the selected UAV cluster to that area is proposed as the traffic offloading algorithm.
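
As an illustrative sketch of the clustering and node-selection steps (the scoring weights and UAV fields below are hypothetical, not taken from the paper), user-device positions can be clustered with k-means to locate high-load regions, and the head and vice head UAVs can be picked by a simple resource score:

```python
import numpy as np
from sklearn.cluster import KMeans

def find_load_regions(device_positions, k):
    """Cluster user-device positions to locate high-load regions."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(device_positions)
    return km.cluster_centers_, km.labels_

def resource_score(uav):
    """Hypothetical weighted score over CPU, RAM, ROM, battery and capacity."""
    return (0.25 * uav["cpu"] + 0.2 * uav["ram"] + 0.15 * uav["rom"]
            + 0.25 * uav["battery"] + 0.15 * uav["capacity"])

def select_head_and_vice(uavs):
    """Best-scoring UAV becomes the head node; the runner-up is its backup."""
    ranked = sorted(uavs, key=resource_score, reverse=True)
    return ranked[0], ranked[1]

positions = np.random.default_rng(1).uniform(0, 1000, size=(200, 2))
centers, labels = find_load_regions(positions, k=4)
uavs = [{"cpu": 0.7, "ram": 0.6, "rom": 0.5, "battery": 0.9, "capacity": 0.8},
        {"cpu": 0.9, "ram": 0.8, "rom": 0.7, "battery": 0.4, "capacity": 0.6}]
head, vice = select_head_and_vice(uavs)
```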

Keywords: k-means, resource allocation, SDN, UAV network, unmanned aerial vehicles.

PDF Downloads: 306
475 Utility Assessment Model for Wireless Technology in Construction

Authors: Y. Abdelrazig, A. Ghanem

Abstract:

Construction projects are information-intensive in nature and involve many interrelated activities. Wireless technologies can be used to improve the accuracy and timeliness of data collected from construction sites and to share it with the appropriate parties. Nonetheless, the construction industry tends to be conservative and shows hesitation in adopting new technologies. A main concern for owners, contractors or anyone in charge of a job site is the cost of the technology in question. Wireless technologies are not cheap; there are many expenses to be taken into consideration, and a study should be completed to make sure that the benefits and savings resulting from the use of this technology are worth the expense. This research attempts to assess the effectiveness of using the appropriate wireless technologies based on criteria such as performance, reliability, and risk. The assessment is based on a utility function model that breaks the selection issue down into the attributes of each alternative. The attributes are then assigned weights and single-attribute utilities are measured. Finally, the single-attribute utilities are combined to develop one aggregate utility index for each alternative.
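
A minimal sketch of such a weighted additive aggregation is shown below; the attribute names, utility values and weights are hypothetical, not the calibrated values of the study.

```python
def aggregate_utility(attribute_utilities, weights):
    """Combine single-attribute utilities (each scaled to 0..1) into one index."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[a] * u for a, u in attribute_utilities.items())

# Hypothetical wireless alternative scored on the three criteria named above
alternative = {"performance": 0.8, "reliability": 0.7, "risk": 0.6}
weights = {"performance": 0.5, "reliability": 0.3, "risk": 0.2}
print(aggregate_utility(alternative, weights))  # 0.73
```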

Keywords: Analytic Hierarchy Process, Utility Function, Wireless Technologies, construction management.

PDF Downloads: 1902
474 An Improved Conjugate Gradient Based Learning Algorithm for Back Propagation Neural Networks

Authors: N. M. Nawi, R. S. Ransing, M. R. Ransing

Abstract:

The conjugate gradient optimization algorithm is combined with the modified back propagation algorithm to yield a computationally efficient algorithm for training multilayer perceptron (MLP) networks (CGFR/AG). The computational efficiency is enhanced by adaptively modifying the initial search direction as described in the following steps: (1) modification of the standard back propagation algorithm by introducing a gain variation term in the activation function, (2) calculation of the gradient of the error with respect to the weight and gain values, and (3) determination of a new search direction using the information calculated in step (2). The performance of the proposed method is demonstrated by comparing accuracy and computation time with the conjugate gradient algorithm used in the MATLAB neural network toolbox. The results show that the computational efficiency of the proposed method is better than that of the standard conjugate gradient algorithm.
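
For orientation only, step (3) resembles the generic conjugate-gradient search-direction update sketched below (Fletcher-Reeves form); the paper's adaptive-gain modification of the gradient in steps (1)-(2) is not reproduced here.

```python
import numpy as np

def cg_direction(grad, prev_grad, prev_dir):
    """New search direction d_k = -g_k + beta_k * d_{k-1} (Fletcher-Reeves beta)."""
    beta = (grad @ grad) / (prev_grad @ prev_grad)
    return -grad + beta * prev_dir

g0 = np.array([0.40, -0.20]); d0 = -g0          # first step is steepest descent
g1 = np.array([0.10, 0.05])
print(cg_direction(g1, g0, d0))                 # next conjugate direction
```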

Keywords: Adaptive gain variation, back-propagation, activation function, conjugate gradient, search direction.

PDF Downloads: 1499
473 Scheduling Multiple Workflow Using De-De Dodging Algorithm and PBD Algorithm in Cloud: Detailed Study

Authors: B. Arun Kumar, T. Ravichandran

Abstract:

Workflow scheduling is an important part of cloud computing; based on different criteria, it determines cost, execution time, and performance. A cloud workflow system is a platform service facilitating the automation of distributed applications based on the new cloud infrastructure. An aspect which differentiates the cloud workflow system from others is its market-oriented business model, an innovation which challenges conventional workflow scheduling strategies. The Time and Cost optimization algorithm for scheduling Hybrid Clouds (TCHC), which decides which resources should be chartered from public providers, is combined with a new De-De algorithm ensuring that every instance of single and multiple workflows works without deadlocks. To this end, two new concepts - the De-De Dodging Algorithm and the Priority Based Decisive Algorithm - address conventional deadlock avoidance issues in one algorithm that maximizes active (not just allocated) resource use and reduces makespan.

Keywords: Workflow Scheduling, cloud workflow, TCHC algorithm, De-De Dodging Algorithm, Priority Based Decisive Algorithm (PBD), Makespan.

PDF Downloads: 2772
472 A Hybrid Particle Swarm Optimization Solution to Ramping Rate Constrained Dynamic Economic Dispatch

Authors: Pichet Sriyanyong

Abstract:

This paper presents the application of an enhanced Particle Swarm Optimization (EPSO) combined with Gaussian Mutation (GM) for solving the Dynamic Economic Dispatch (DED) problem considering the operating constraints of generators. The EPSO consists of the standard PSO and a modified heuristic search approach. Namely, the ability of the traditional PSO is enhanced by applying the modified heuristic search approach to prevent the solutions from violating the constraints. In addition, Gaussian Mutation is aimed at increasing the diversity of the global search, while also preventing trapping in suboptimal points during the search. To illustrate its efficiency and effectiveness, the developed EPSO-GM approach is tested on 3-unit and 10-unit 24-hour systems considering the valve-point effect. From the experimental results, it can be concluded that the proposed EPSO-GM provides accurate solutions, efficiency, and robust computation compared with the other algorithms under consideration.
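
The sketch below illustrates one plausible combination of a standard PSO update with a Gaussian mutation of the position; the coefficients are typical textbook values rather than the settings tuned in the paper, and the dispatch constraints and valve-point cost model are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_gm_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, sigma=0.1, p_mut=0.1):
    """One PSO velocity/position update followed by per-dimension Gaussian mutation."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x_new = x + v_new
    mutate = rng.random(x.shape) < p_mut        # Gaussian mutation adds diversity
    x_new = np.where(mutate, x_new + rng.normal(0.0, sigma, x.shape), x_new)
    return x_new, v_new

# Toy 3-generator dispatch vector (MW); constraint handling is not shown
x = rng.uniform(100.0, 400.0, size=3)
x, v = pso_gm_step(x, np.zeros(3), pbest=x.copy(), gbest=x.copy())
```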

Keywords: Particle Swarm Optimization (PSO), Gaussian Mutation (GM), Dynamic Economic Dispatch (DED).

PDF Downloads: 1770
471 The Use of Dynamically Optimised High Frequency Moving Average Strategies for Intraday Trading

Authors: Abdalla Kablan, Joseph Falzon

Abstract:

This paper is motivated by the role of uncertainty in financial decision making and by how artificial intelligence and soft computing, with their uncertainty-reducing aspects, can be used for algorithmic trading applications that trade at high frequency. The paper presents an optimized high frequency trading system that has been combined with various moving averages to produce a hybrid system that outperforms trading systems relying solely on moving averages. The paper optimizes an adaptive neuro-fuzzy inference system that takes both the price and its moving average as input, learns to predict price movements from training data consisting of intraday data, dynamically switches between the best performing moving averages, and decides when to buy or sell a certain currency at high frequency.
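
One simple way to realize the "dynamic switching" idea is to re-select, at each decision point, the moving-average window that has tracked recent prices with the smallest error; the window lengths and lookback below are illustrative choices, and the ANFIS predictor described in the paper is not reproduced here.

```python
import numpy as np

def moving_average(prices, window):
    """Simple moving average (valid part only)."""
    return np.convolve(prices, np.ones(window) / window, mode="valid")

def best_window(prices, candidates=(5, 10, 20), lookback=50):
    """Pick the MA window whose value tracked the recent prices best."""
    errors = {}
    for w in candidates:
        ma = moving_average(prices, w)
        aligned = prices[w - 1:]                       # price at each MA point
        errors[w] = np.mean((aligned[-lookback:] - ma[-lookback:]) ** 2)
    return min(errors, key=errors.get)

prices = 100 + np.cumsum(np.random.default_rng(2).normal(0, 0.5, 300))
print(best_window(prices))   # window to use for the next trading decision
```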

Keywords: Financial decision making, high frequency trading, adaptive neuro-fuzzy systems, moving average strategy.

PDF Downloads: 5034
470 Unequal Error Protection of Facial Features for Personal ID Images Coding

Authors: T. Hirner, J. Polec

Abstract:

This paper presents an approach for unequal error protection of facial features in the coding of personal ID images. We consider unequal error protection (UEP) strategies for the efficient progressive transmission of embedded image codes over noisy channels. This new method is based on the progressive embedded zerotree wavelet (EZW) image compression algorithm and a UEP technique with a defined region of interest (ROI). In this case, the ROI corresponds to the facial features within the personal ID image. The ROI technique is important in applications where different parts of the image have different importance. In ROI coding, a chosen ROI is encoded with higher quality than the background (BG). Unequal error protection of the image is provided by different coding techniques and by encoding the LL band separately. In our proposed method, the image is divided into two parts (ROI, BG) that consist of more important bytes (MIB) and less important bytes (LIB). The proposed unequal error protection of image transmission has been shown to be more appropriate for low bit-rate applications, producing better output quality for the ROI of the compressed image. The experimental results verify the effectiveness of the design. They compare UEP image transmission with the facial-feature ROI against equal error protection (EEP) over an additive white Gaussian noise (AWGN) channel.

Keywords: Embedded zerotree wavelet (EZW), equal error protection (EEP), facial features, personal ID images, region of interest (ROI), unequal error protection (UEP)

PDF Downloads: 1471
469 Dempster-Shafer Evidence Theory for Image Segmentation: Application in Cells Images

Authors: S. Ben Chaabane, M. Sayadi, F. Fnaiech, E. Brassart

Abstract:

In this paper we propose a new knowledge model using Dempster-Shafer evidence theory for image segmentation and fusion. The proposed method is composed essentially of two steps. First, mass distributions in Dempster-Shafer theory are obtained from the membership degrees of each pixel over the three image components (R, G and B). Each membership degree is determined by applying Fuzzy C-Means (FCM) clustering to the gray levels of the three images. Second, the fusion process consists in defining three frames of discernment associated with the three images to be fused, and then combining them to form a new frame of discernment. The strategy used to define mass distributions in the combined framework is discussed in detail. The proposed fusion method is illustrated in the context of image segmentation. Experimental investigations and comparative studies with previous methods are carried out, showing the robustness and superiority of the proposed method in terms of image segmentation.
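
The combination step relies on Dempster's rule; the sketch below applies it to two toy mass functions over a two-class frame (subsets are represented as frozensets), with the FCM-derived masses of the paper replaced by illustrative numbers.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions over subsets."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                    # mass assigned to the empty set
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

cell, background = frozenset({"cell"}), frozenset({"background"})
both = cell | background                           # total ignorance
m_red   = {cell: 0.6, background: 0.3, both: 0.1}  # e.g. masses from the R component
m_green = {cell: 0.5, background: 0.4, both: 0.1}  # e.g. masses from the G component
print(combine(m_red, m_green))
```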

Keywords: Fuzzy C-means, Color image, data fusion, Dempster-Shafer's evidence theory

PDF Downloads: 2173
468 A Hybrid Gene Selection Technique Using Improved Mutual Information and Fisher Score for Cancer Classification Using Microarrays

Authors: M. Anidha, K. Premalatha

Abstract:

Feature selection is significant in order to perform constructive classification in the area of cancer diagnosis. However, a large number of features compared to the number of samples makes the task of classification computationally very hard and prone to errors in microarray gene expression datasets. In this paper, we present an innovative method for selecting highly informative gene subsets of gene expression data that effectively classify the cancer data into tumorous and non-tumorous classes. The hybrid gene selection technique combines Mutual Information and the Fisher score to select informative genes. The gene selection is validated by classification using a Support Vector Machine (SVM), a supervised learning algorithm capable of solving complex classification problems. The improved Mutual Information and Fisher score with the SVM as classifier produce efficient results.
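
A minimal sketch of this kind of pipeline is given below using scikit-learn; the equal-weight combination of the two normalized rankings and the top-k cutoff are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

def fisher_score(X, y):
    """Per-feature Fisher score: between-class scatter over within-class scatter."""
    classes, mu = np.unique(y), X.mean(axis=0)
    num = np.zeros(X.shape[1]); den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += Xc.shape[0] * (Xc.mean(axis=0) - mu) ** 2
        den += Xc.shape[0] * Xc.var(axis=0)
    return num / (den + 1e-12)

def select_genes(X, y, k=50):
    """Rank genes by normalized MI + Fisher score and keep the top k."""
    mi = mutual_info_classif(X, y, random_state=0)
    fs = fisher_score(X, y)
    score = mi / (mi.max() + 1e-12) + fs / (fs.max() + 1e-12)
    return np.argsort(score)[::-1][:k]

# idx = select_genes(X_train, y_train)
# clf = SVC(kernel="linear").fit(X_train[:, idx], y_train)
```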

Keywords: Gene selection, mutual information, Fisher score, classification, SVM.

PDF Downloads: 1127
467 Action Potential Propagation in Inhomogeneous 2D Mouse Ventricular Tissue Model

Authors:

Abstract:

Heterogeneous repolarization causes dispersion of the T-wave and has been linked to arrhythmogenesis. Such heterogeneities appear due to the differential expression of ionic currents in different regions of the heart, both in healthy and diseased animals and humans. Mice are important animals for the study of heart diseases because of the ability to create transgenic animals. We used our previously reported model of mouse ventricular myocytes to develop a 2D mouse ventricular tissue model consisting of 14,000 cells (apical or septal ventricular myocytes) and to study the stability of action potential propagation and Ca2+ dynamics. The 2D tissue model was implemented as a FORTRAN program for high-performance multiprocessor computers and runs on 36 processors. Our tissue model is able to simulate heterogeneities not only in action potential repolarization, but also in intracellular Ca2+ transients. The multicellular model reproduced experimentally observed velocities of action potential propagation and demonstrated the importance of incorporating realistic Ca2+ dynamics for action potential propagation. The simulations show that relatively sharp gradients of repolarization are predicted to exist in 2D mouse tissue models, and that they are primarily determined by the cellular properties of ventricular myocytes. Abrupt local gradients of channel expression can cause alternans at longer pacing basic cycle lengths than gradual changes, and the development of alternans depends on the site of stimulation.

Keywords: Mouse, cardiac myocytes, computer simulation, action potential

PDF Downloads: 1446
466 A Methodology of Testing Beam to Column Connection under Lateral Impact Load

Authors: A. Al-Rifaie, Z. W. Guan, S. W. Jones

Abstract:

The beam-to-column connection can be considered the most important structural part affecting the response of buildings to progressive collapse. Many studies have been conducted to investigate the beam-to-column connection under accidental loads such as fire, blast and impact in order to characterize the connection response. This study is part of a PhD plan to investigate different types of connections under lateral impact load. Conventional test setups, such as the cruciform setup, were designed to apply shear force and bending moment to the connection, whereas in the lateral impact case the connection is subjected to combined tension and moment. Hence, a review is presented to introduce the previous test setups used to investigate connection behaviour. Then, the design and fabrication of the novel test setup are presented. Finally, some trial test results used to investigate the efficiency of the proposed setup are discussed. The final results indicate that the setup is efficient in terms of simplicity and strength.

Keywords: Connections, impact load, drop hammer, testing methods.

PDF Downloads: 1164
465 Analysis of Different Resins in Web-to-Flange Joints

Authors: W. F. Ribeiro, J. L. N. Góes

Abstract:

The industrial process gives engineered wood products features absent in solid wood: a homogeneous structure with reduced defects, improved physical and mechanical properties, resistance to bio-deterioration and better dimensional stability, improving quality and increasing the reliability of wood structures. These features, combined with the use of fast-growing trees, make them ecological products, ensuring a strong consumer market. Wood I-joists are manufactured industrially by bonding flange and web profiles; an important aspect of the production of wooden I-beams is the adhesive joint that bonds the web to the flange. Adhesives can effectively transfer and distribute stresses, thereby increasing the strength and stiffness of the composite. The objective of this study is to evaluate different resins in shear strain specimens with the aim of identifying the most efficient resin and the possibility of using national products, reducing the manufacturing cost. First, a literature review was conducted to establish the geometry and materials generally used; then eight national resins were selected and analyzed, and six specimens were produced for each.

Keywords: Engineered wood products, structural resin, wood I-joist.

PDF Downloads: 2298
464 Optimal Distributed Generator Sizing and Placement by Analytical Method and PSO Algorithm Considering Optimal Reactive Power Dispatch

Authors: Kyaw Myo Lin, Pyone Lai Swe, Khine Zin Oo

Abstract:

In this paper, an approach combining an analytical method for distributed generator (DG) sizing with a meta-heuristic search for the optimal DG location is presented. The optimal size of the DG at each bus is estimated by the loss sensitivity factor method, while the optimal sites are determined by Particle Swarm Optimization (PSO) based optimal reactive power dispatch for minimizing active power loss. To confirm the proposed approach, it has been tested on the IEEE 30-bus test system. The adjustments of operating constraints and the voltage profile improvements have also been observed. The obtained results show that the allocation of DGs results in a significant loss reduction with good voltage profiles, and that the combined approach is competent in keeping the system voltages within the acceptable limits.

Keywords: Analytical approach, distributed generations, optimal size, optimal location, optimal reactive power dispatch, particle swarm optimization algorithm.

PDF Downloads: 1147
463 Order Reduction of Linear Dynamic Systems using Stability Equation Method and GA

Authors: G. Parmar, R. Prasad, S. Mukherjee

Abstract:

The authors present an algorithm for order reduction of linear dynamic systems using the combined advantages of the stability equation method and error minimization by a genetic algorithm (GA). The denominator of the reduced order model is obtained by the stability equation method, and the numerator terms of the lower order transfer function are determined by minimizing the integral square error between the transient responses of the original and reduced order models using the GA. The reduction procedure is simple and computer oriented. It is shown that the algorithm has several advantages, e.g. the reduced order models retain the steady-state value and stability of the original system. The proposed algorithm has also been extended to the order reduction of linear multivariable systems. Two numerical examples are solved to illustrate the superiority of the algorithm over some existing ones, including one example of a multivariable system.
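
For reference, the cost minimized by the genetic algorithm when fitting the numerator can be written in standard form (notation assumed here, not quoted from the paper) as

ISE = \int_{0}^{\infty} \left[ y(t) - y_r(t) \right]^{2} dt

where y(t) and y_r(t) are the step (transient) responses of the original and reduced-order models, respectively.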

Keywords: Genetic algorithm, Integral square error, Order reduction, Stability equation method.

PDF Downloads: 3161