Search results for: Gaussian operator
Paper Count: 528


78 Estimation of Thermal Conductivity of Nanofluids Using MD-Stochastic Simulation Based Approach

Authors: Sujoy Das, M. M. Ghosh

Abstract:

The thermal conductivity of a fluid can be significantly enhanced by dispersing nano-sized particles in it, and the resultant fluid is termed a "nanofluid". A theoretical model for estimating the thermal conductivity of a nanofluid has been proposed here. It is based on the mechanism that evenly dispersed nanoparticles within a nanofluid undergo Brownian motion, in the course of which the nanoparticles repeatedly collide with the heat source. During each collision a rapid heat transfer occurs owing to the solid-solid contact. Molecular dynamics (MD) simulation of the collision of nanoparticles with the heat source has shown that there is a pulse-like pick-up of heat by the nanoparticles within 20-100 ps, the extent of which depends not only on the thermal conductivity of the nanoparticles, but also on their elastic and other physical properties. After the collision the nanoparticles undergo Brownian motion in the base fluid and release the excess heat to the surrounding base fluid within 2-10 ms. The Brownian motion and associated temperature variation of the nanoparticles have been modeled by stochastic analysis. Repeated occurrence of these events by the suspended nanoparticles contributes significantly to the characteristic thermal conductivity of the nanofluid, which has been estimated by the present model for an ethylene glycol-based nanofluid containing Cu nanoparticles ranging in size from 8 to 20 nm with a Gaussian size distribution. The prediction of the present model shows reasonable agreement with the experimental data available in the literature.

Keywords: Brownian dynamics, Molecular dynamics, Nanofluid, Thermal conductivity.

77 The Application of Queuing Theory in Multi-Stage Production Lines

Authors: Hani Shafeek, Muhammed Marsudi

Abstract:

The purpose of this work is to examine a multi-product, multi-stage battery production line and to improve the performance of the assembly line by determining the efficiency of each workstation. Data were collected from every workstation, comprising the throughput rate, the number of operators, and the number of parts that arrive and leave during processing. At least ten samples of the arrival and departure counts were collected so that the data could be analyzed with the chi-squared goodness-of-fit test and queuing theory. The measures obtained from this model were compared with the standard data available in the company, and the task time values were validated against the company database. Several performance factors for the multi-product, multi-stage battery production line are presented, together with the efficiency of each workstation. The total production time for each part can be determined by adding the total task times of all workstations. Based on the analysis, improvements to reduce queuing time and increase efficiency should be made where possible; one such action is to increase the number of operators who manually run a workstation.
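
As a generic illustration of the kind of queuing-theory calculation described above (not the authors' model or data), the Python sketch below treats a single workstation as an M/M/c queue and derives its utilization and mean waiting time from an assumed arrival rate, service rate per operator, and operator count.

```python
import math

def mmc_metrics(lam, mu, c):
    """Utilization and mean queue waiting time of an M/M/c workstation.
    lam: arrival rate (parts/h), mu: service rate per operator (parts/h), c: operators."""
    rho = lam / (c * mu)                      # utilization, must be < 1 for stability
    a = lam / mu                              # offered load
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    lq = p0 * a**c * rho / (math.factorial(c) * (1 - rho) ** 2)   # mean queue length
    wq = lq / lam                             # mean waiting time in queue (Little's law)
    return rho, wq

# Hypothetical workstation: 45 parts/h arriving, 25 parts/h per operator, 2 operators
rho, wq = mmc_metrics(45.0, 25.0, 2)
print(f"utilization = {rho:.2f}, mean queue wait = {wq * 60:.1f} min")
```

Re-running the sketch with a larger operator count c shows directly how adding an operator lowers both the utilization and the queuing time, which is the improvement action suggested in the abstract.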

Keywords: Production line, manufacturing, performance measurement, queuing theory.

76 Hybrid of Hunting Search and Modified Simplex Methods for Grease Position Parameter Design Optimisation

Authors: P. Luangpaiboon, S. Boonhao

Abstract:

This study proposes a multi-response surface optimization problem (MRSOP) for determining the proper choices in a process parameter design (PPD) decision problem in the noisy environment of a grease position process in the electronics industry. The proposed model attempts to maximize two process responses, the mean parts between failures of the left and right processes. The conventional modified simplex method and its hybridization with the stochastic operator from the hunting search algorithm are applied to determine the proper levels of the controllable design parameters affecting the quality performances. A numerical example demonstrates the feasibility of applying the proposed model to the PPD problem via the two iterative methods, and their advantages are discussed. Numerical results demonstrate that the hybridization is superior to the conventional method. In this study, the mean parts between failures of the left and right lines improved by approximately 39.51%. All experimental data presented in this research have been normalized to disguise actual performance measures, as the raw data are considered confidential.

Keywords: Grease Position Process, Multi-response Surfaces, Modified Simplex Method, Hunting Search Method, Desirability Function Approach.

75 Dynamic Variational Multiscale LES of Bluff Body Flows on Unstructured Grids

Authors: Carine Moussaed, Stephen Wornom, Bruno Koobus, Maria Vittoria Salvetti, Alain Dervieux

Abstract:

The effects of dynamic subgrid scale (SGS) models are investigated in variational multiscale (VMS) LES simulations of bluff body flows. The spatial discretization is based on a mixed finite element/finite volume formulation on unstructured grids. In the VMS approach used in this work, the separation between the largest and the smallest resolved scales is obtained through a variational projection operator and finite volume cell agglomeration. The dynamic versions of the Smagorinsky and WALE SGS models are used to account for the effects of the unresolved scales; in the VMS approach, these effects are modeled only in the smallest resolved scales. The dynamic VMS-LES approach is applied to the simulation of the flow around a circular cylinder at Reynolds numbers 3900 and 20000 and of the flow around a square cylinder at Reynolds numbers 22000 and 175000. As in previous studies, it is observed that the dynamic SGS procedure has a smaller impact on the results within the VMS approach than in LES, but improvements are demonstrated for important features such as the recirculating part of the flow. The global prediction is improved at a small extra computational cost.

Keywords: variational multiscale LES, dynamic SGS model, unstructured grids, circular cylinder, square cylinder.

74 Unequal Error Protection of Facial Features for Personal ID Images Coding

Authors: T. Hirner, J. Polec

Abstract:

This paper presents an approach for the unequal error protection of facial features in personal ID image coding. We consider unequal error protection (UEP) strategies for the efficient progressive transmission of embedded image codes over noisy channels. The method is based on the progressive embedded zerotree wavelet (EZW) image compression algorithm and a UEP technique with a defined region of interest (ROI); in this case, the ROI corresponds to the facial features within the personal ID image. The ROI technique is important in applications where different parts of the image have different importance, and in ROI coding the chosen ROI is encoded with higher quality than the background (BG). Unequal error protection of the image is provided by different coding techniques and by encoding the LL band separately. In the proposed method, the image is divided into two parts (ROI, BG) that consist of more important bytes (MIB) and less important bytes (LIB). The proposed unequal error protection of image transmission is shown to be more appropriate for low-bit-rate applications, producing better quality output for the ROI of the compressed image. The experimental results verify the effectiveness of the design and compare the UEP of image transmission with a ROI defined by facial features against equal error protection (EEP) over an additive white Gaussian noise (AWGN) channel.

Keywords: Embedded zerotree wavelet (EZW), equal error protection (EEP), facial features, personal ID images, region of interest (ROI), unequal error protection (UEP)

73 Artificial Intelligence Model to Predict Surface Roughness of Ti-15-3 Alloy in EDM Process

Authors: Md. Ashikur Rahman Khan, M. M. Rahman, K. Kadirgama, M.A. Maleque, Rosli A. Bakar

Abstract:

Conventionally, the selection of parameters depends heavily on the operator's experience or on conservative technological data provided by the EDM equipment manufacturers, which leads to inconsistent machining performance; the parameter settings given by the manufacturers are only relevant for common steel grades, and a single parameter change influences the process in a complex way. Hence, the present research proposes artificial neural network (ANN) models for the prediction of surface roughness, for the first time, of Ti-15-3 alloy in the electrical discharge machining (EDM) process. The proposed models use peak current, pulse-on time, pulse-off time and servo voltage as input parameters. Multilayer perceptron (MLP) feedforward networks with three hidden layers are applied, and models with distinct hidden-layer configurations are assessed. Training of the models is performed with data from an extensive series of experiments using a copper electrode with positive polarity. The predictions based on the developed models have been verified with another set of experiments and are found to be in good agreement with the experimental results. Besides this, they can serve as valuable tools for EDM process planning.
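
As a minimal sketch of the modelling approach (not the authors' trained network or data), the Python example below fits an MLP with three hidden layers to hypothetical EDM data, using peak current, pulse-on time, pulse-off time and servo voltage as inputs and surface roughness as the target.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical training data: [peak current (A), pulse-on (us), pulse-off (us), servo voltage (V)]
X = np.array([[1, 10, 30, 75], [4, 50, 60, 85], [8, 100, 120, 95],
              [16, 200, 240, 105], [29, 400, 300, 115]], dtype=float)
y = np.array([1.8, 3.1, 4.6, 6.9, 9.4])   # surface roughness Ra (um), illustrative values

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10, 10, 10), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict([[12, 150, 180, 100]]))  # predicted Ra for an unseen parameter setting
```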

Keywords: Ti-15-3, surface roughness, copper, positive polarity, multi-layered perceptron.

72 A Novel SVM-Based OOK Detector in Low SNR Infrared Channels

Authors: J. P. Dubois, O. M. Abdul-Latif

Abstract:

Support Vector Machine (SVM) is a recent class of statistical classification and regression techniques playing an increasing role in detection problems across various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to an infrared (IR) binary communication system with different channel models, including Ricean multipath fading and a partially developed scattering channel, with additive white Gaussian noise (AWGN) at the receiver. The structure and performance of the SVM in terms of the bit error rate (BER) metric are derived and simulated for these stochastic channel models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of the SVM is then compared to classical binary maximum likelihood detection using a matched filter driven by on-off keying (OOK) modulation. We found that the performance of the SVM is superior to that of the traditional optimal detection schemes used in statistical communication, especially for very low signal-to-noise ratio (SNR) ranges, while for large SNR the performance of the SVM is similar to that of the classical detectors. The implication of these results is that SVM can prove very beneficial to IR communication systems, which notoriously suffer from low SNR, at the cost of increased computational complexity.
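
The detection comparison can be illustrated with a minimal AWGN-only sketch (synthetic data and an arbitrary noise level; the fading and scattering channel models of the paper are not reproduced): an SVM is trained on labelled received samples and its BER is compared with a simple amplitude-threshold (matched-filter) detector.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 20000
bits = rng.integers(0, 2, n)                 # OOK symbols: 0 -> off, 1 -> on
sigma = 0.6                                  # noise level chosen to give a low-SNR channel
received = bits * 1.0 + rng.normal(0.0, sigma, n)   # matched-filter outputs in AWGN

# Classical detector: threshold the matched-filter output at half the pulse amplitude
ber_threshold = np.mean((received > 0.5).astype(int) != bits)

# SVM detector trained on a labelled subset of the received samples
m = 2000
svm = SVC(kernel="rbf", gamma="scale").fit(received[:m, None], bits[:m])
ber_svm = np.mean(svm.predict(received[m:, None]) != bits[m:])
print(f"BER threshold: {ber_threshold:.4f}, BER SVM: {ber_svm:.4f}")
```

On a plain AWGN channel the two detectors behave almost identically; the gains reported in the paper concern fading and scattering channels at very low SNR.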

Keywords: Least square-support vector machine, on-off keying, matched filter, maximum likelihood detector, wireless infrared communication.

71 Intelligent Video-Based Monitoring of Freeway Traffic

Authors: Saad M. Al-Garni, Adel A. Abdennour

Abstract:

Freeways are originally designed to provide high mobility to road users. However, the increase in population and vehicle numbers has led to increasing congestion around the world. Daily recurrent congestion substantially reduces the freeway capacity when it is most needed. Building new highways and expanding the existing ones is an expensive solution and impractical in many situations. Intelligent and vision-based techniques can, however, be efficient tools for monitoring highways and increasing the capacity of the existing infrastructure. The crucial step in highway monitoring is vehicle detection, and in this paper we propose one such technique. The approach is based on artificial neural networks (ANN) for vehicle detection and counting. The detection process uses the freeway video images and starts by automatically extracting the image background from the successive video frames. Once the background is identified, subsequent frames are used to detect moving objects through image subtraction. The result is segmented using the Sobel operator for edge detection. The ANN is then used in the detection and counting phase. Applying this technique to the busiest freeway in Riyadh (King Fahd Road) achieved higher than 98% detection accuracy despite light intensity changes, occlusion situations, and shadows.
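
A minimal sketch of the pipeline stages named above (background extraction, image subtraction, Sobel edge detection and blob extraction), using NumPy/SciPy on synthetic frames; the thresholds and the ANN counting stage are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def detect_moving_objects(frames, current, diff_thresh=30, edge_thresh=80):
    """Median background model + frame differencing + Sobel edges.
    frames: stack of grayscale frames (T, H, W); current: grayscale frame (H, W)."""
    background = np.median(frames, axis=0)                # estimate the static background
    motion = np.abs(current.astype(float) - background) > diff_thresh
    sx = ndimage.sobel(current.astype(float), axis=0)
    sy = ndimage.sobel(current.astype(float), axis=1)
    edges = np.hypot(sx, sy) > edge_thresh                # Sobel edge magnitude
    mask = motion & edges                                 # candidate vehicle pixels
    labels, count = ndimage.label(mask)                   # connected components as blobs
    return labels, count                                  # blob features would feed the ANN

# Usage with synthetic data
frames = np.random.randint(0, 60, (10, 120, 160)).astype(np.uint8)
current = frames[-1].copy(); current[40:60, 50:90] += 120   # synthetic "vehicle"
_, n_blobs = detect_moving_objects(frames, current)
print("candidate blobs:", n_blobs)
```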

Keywords: Background Extraction, Neural Networks, Vehicle Detection, Freeway Traffic.

70 Privacy in New Mobile Payment Protocol

Authors: Tan Soo Fun, Leau Yu Beng, Rozaini Roslan, Habeeb Saleh Habeeb

Abstract:

The increasing development of wireless networks and the widespread popularity of handheld devices such as personal digital assistants (PDAs), mobile phones and wireless tablets represent an incredible opportunity to enable mobile devices as a universal payment method for daily financial transactions. Unfortunately, several issues hamper the widespread acceptance of mobile payment, such as accountability properties, privacy protection, and the limitations of wireless networks and mobile devices. Recently, many mobile payment protocols based on public-key cryptography have been proposed. However, the limited capabilities of mobile devices and wireless networks make these protocols unsuitable for mobile networks. Moreover, these protocols were designed to preserve the traditional flow of payment data, which is vulnerable to attack and increases the user's risk. In this paper, we propose a private mobile payment protocol which is based on a client-centric model and employs symmetric-key operations. The proposed mobile payment protocol not only minimizes the computational operations and communication passes between the engaging parties, but also achieves complete privacy protection for the payer. Future work will concentrate on improving the verification solution to support mobile user authentication and authorization for mobile payment transactions.

Keywords: Mobile Network Operator, Mobile payment protocol, Privacy, Symmetric key.

69 Simulation of Laser Structuring by Three Dimensional Heat Transfer Model

Authors: Bassim Bachy, Joerg Franke

Abstract:

In this study, a three-dimensional numerical heat transfer model has been used to simulate the laser structuring of a polymer substrate material in a three-dimensional molded interconnect device (3D MID), which is used in advanced multifunctional applications. A finite element method (FEM) transient thermal analysis is performed using APDL (ANSYS Parametric Design Language) provided by ANSYS. In this model, the surface heat source is modeled with a Gaussian distribution, and the effect of mixed boundary conditions, consisting of convection and radiation heat transfer, is considered in the analysis. The model provides a full description of the temperature distribution and calculates the depth and width of the groove upon material removal for different sets of laser parameters, such as laser power and laser speed. This study also includes an experimental procedure to study the effect of the laser parameters on the depth and width of the removed groove as verification of the modeled results. Good agreement between the experimental and the model results is achieved for a wide range of laser powers. It is found that the quality of the laser structuring process is affected by the laser scan speed and laser power; for high structuring quality, it is suggested to use a high scan speed and moderate to high laser power.
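
A commonly used form of such a Gaussian surface heat source is given below for illustration; the symbols are generic assumptions rather than the authors' notation.

```latex
q(x,y) \;=\; \frac{2\,\alpha P}{\pi w^{2}}
\exp\!\left(-\,\frac{2\left[(x-x_{0})^{2}+(y-y_{0})^{2}\right]}{w^{2}}\right)
```

Here P is the laser power, alpha the absorptivity of the substrate, w the beam radius, and (x0, y0) the instantaneous beam centre, which moves with the scan speed; integrating q over the surface returns the absorbed power alpha*P.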

Keywords: Laser Structuring, Simulation, Finite element analysis, Thermal modeling.

68 Design and Development of an Efficient and Cost-Effective Microcontroller-Based Irrigation Control System to Enhance Food Security

Authors: Robert A. Sowah, Stephen K. Armoo, Koudjo M. Koumadi, Rockson Agyeman, Seth Y. Fiawoo

Abstract:

The development of the agricultural sector in Ghana has been reliant on the use of irrigation systems to ensure food security. However, the manual operation of these systems has not facilitated their maximum efficiency, due to human limitations. This paper seeks to address this problem by designing and implementing an efficient, cost-effective automated system which monitors and controls the water flow of irrigation through communication with an authorized operator via text messages. The automatic control component of the system is timer based, with an Atmega32 microcontroller and a real-time clock from the SM5100B cellular module. For monitoring purposes, the system sends periodic SMS notifications on its operating status to the authorized person(s). Moreover, the GSM-based irrigation monitoring and control system saves time and labour and reduces the cost of operating irrigation systems by saving electricity and conserving water. Field tests conducted have proven its operational efficiency and the ease of assessment of farm irrigation equipment, owing to its cost-effectiveness and data logging capabilities.

Keywords: Agriculture, control system, data logging, food security, irrigation system, microcontroller.

67 Texture Feature-Based Language Identification Using Wavelet-Domain BDIP and BVLC Features and FFT Feature

Authors: Ick Hoon Jang, Hoon Jae Lee, Dae Hoon Kwon, Ui Young Pak

Abstract:

In this paper, we propose texture feature-based language identification using wavelet-domain BDIP (block difference of inverse probabilities) and BVLC (block variance of local correlation coefficients) features and an FFT (fast Fourier transform) feature. In the proposed method, wavelet subbands are first obtained by wavelet transform from a test image and denoised by Donoho's soft-thresholding. The BDIP and BVLC operators are next applied to the wavelet subbands. FFT blocks are also obtained by 2D (two-dimensional) FFT from the blocks into which the test image is partitioned. Some significant FFT coefficients in each block are selected and the magnitude operator is applied to them. Moments for each subband of BDIP and BVLC and for each magnitude of the significant FFT coefficients are then computed and fused into a feature vector. In classification, a stabilized Bayesian classifier, which adopts variance thresholding, searches for the training feature vector most similar to the test feature vector. Experimental results show that the proposed method with the three operations yields excellent language identification even with a rather low feature dimension.
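
For reference, one common formulation of the two block operators can be sketched as follows (NumPy, with np.corrcoef standing in for the local correlation coefficient; the exact block size, shift set and normalization used in the paper are not reproduced).

```python
import numpy as np

def bdip(block):
    """Block Difference of Inverse Probabilities (one common formulation):
    BDIP = M^2 - sum(I) / max(I) for an M x M block."""
    return block.size - block.sum() / (block.max() + 1e-12)

def bvlc(image, i, j, m=4, shifts=((0, 1), (1, 0), (1, 1), (1, -1))):
    """Block Variation of Local Correlation coefficients: max - min of the
    correlation between the block and its shifted versions in four directions."""
    base = image[i:i + m, j:j + m].astype(float)
    rhos = []
    for di, dj in shifts:
        shifted = image[i + di:i + di + m, j + dj:j + dj + m].astype(float)
        if shifted.shape != base.shape:
            continue
        rhos.append(np.corrcoef(base.ravel(), shifted.ravel())[0, 1])
    return max(rhos) - min(rhos)

img = np.random.randint(0, 256, (64, 64)).astype(float)
print("BDIP of first block:", bdip(img[:4, :4]))
print("BVLC at (8, 8):", bvlc(img, 8, 8))
```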

Keywords: BDIP, BVLC, FFT, language identification, texture feature, wavelet transform.

66 Effect of Testing Device Calibration on Liquid Limit Assessment

Authors: M. O. Bayram, H. B. Gencdal, N. O. Fercan, B. Basbug

Abstract:

The liquid limit, which is used as a measure of soil strength, can be determined by the Casagrande and fall-cone testing methods. The two methods differ mainly in their operator dependency. The Casagrande method, applied according to the ASTM D4318-17 standard, may give misleading results, especially if the calibration process is not performed well. In this study, to reveal the effect of calibration of the drop height and of the amount of soil paste placed in the Casagrande cup, a series of tests was carried out by the multipoint method as specified in the ASTM standard. The tests include combinations of 6 mm, 8 mm, 10 mm, and 12 mm drop heights and under-filled, half-filled, and fully filled Casagrande cups with kaolin samples. It was observed that during successive tests the drop height of the cup deteriorated; hence the device was recalibrated before and after each test to ensure the accuracy of the results. Besides, the tests with under-filled and fully filled cups at higher drop heights gave lower liquid limit values than those at lower drop heights. For the half-filled samples, it was clearly seen that the liquid limit values did not change at all as the drop height increased, which explains the purpose of the standard specifications.
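
In the multipoint Casagrande method, the liquid limit is read from the flow curve (water content versus the logarithm of the blow count) at 25 blows; the sketch below illustrates that interpolation with made-up kaolin data.

```python
import numpy as np

# Hypothetical multipoint test data for one kaolin sample
blows = np.array([17, 22, 28, 34])                    # blow counts to close the groove
water_content = np.array([46.1, 44.3, 42.8, 41.5])    # corresponding water contents (%)

# Flow curve: water content is roughly linear in log10(blow count)
slope, intercept = np.polyfit(np.log10(blows), water_content, 1)
liquid_limit = slope * np.log10(25) + intercept
print(f"liquid limit ~ {liquid_limit:.1f} %")
```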

Keywords: Calibration, Casagrande cup method, drop height, kaolin, liquid limit, placing form.

65 Estimation of the Road Traffic Emissions and Dispersion in the Developing Countries Conditions

Authors: Hicham Gourgue, Ahmed Aharoune, Ahmed Ihlal

Abstract:

We present in this work our model of road traffic emissions (line sources) and of the dispersion of these emissions, named DISPOLSPEM (Dispersion of Poly Sources and Pollutants Emission Model). In its emission part, this model was designed to keep the bottom-up and top-down approaches consistent. It also allows emission inventories to be generated from a reduced set of input parameters adapted to the conditions existing in Morocco and in other developing countries. While several simplifications are made, the performance of the model is preserved. A further important advantage of the model is that it allows the uncertainty of the emission rate to be calculated with respect to each of the input parameters. In the dispersion part of the model, an improved line source model has been developed, implemented and tested against a reference solution. It provides an improvement in accuracy over previous line source Gaussian plume formulas, without being too demanding in terms of computational resources. In the case study presented here, the biggest errors were associated with the ends of line source sections; these errors will be canceled by adjacent sections of line sources during the simulation of a road network. In cases where the wind is parallel to the source line, the combination of a discretized source with the analytical line source formula reduces the error remarkably. Because this combination is applied only for a small number of wind directions, it should not excessively increase the calculation time.
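
For illustration only, and not the authors' improved formula, the sketch below approximates a finite line source by summing ground-level Gaussian point-source plumes along the road segment, with crude assumed dispersion coefficients.

```python
import numpy as np

def point_plume(q, u, x, y, sigma_y, sigma_z):
    """Ground-level concentration of a ground-level Gaussian point source."""
    if x <= 0:                      # no contribution upwind of the source
        return 0.0
    return (q / (np.pi * u * sigma_y * sigma_z)) * np.exp(-y**2 / (2 * sigma_y**2))

def line_source(q_per_m, u, receptor, p0, p1, n=200):
    """Approximate a finite line source (p0 -> p1) by n point sources.
    Wind is taken along the +x axis; sigma_y, sigma_z use a crude power law (assumption)."""
    pts = np.linspace(p0, p1, n)
    seg = np.linalg.norm(np.asarray(p1) - np.asarray(p0)) / (n - 1)
    c = 0.0
    for px, py in pts:
        x, y = receptor[0] - px, receptor[1] - py          # downwind / crosswind distances
        if x <= 0:
            continue
        sigma_y, sigma_z = 0.08 * x**0.9, 0.06 * x**0.9    # illustrative coefficients
        c += point_plume(q_per_m * seg, u, x, y, sigma_y, sigma_z)
    return c

# 200 m road segment emitting 1e-3 g/(m.s), wind 3 m/s, receptor 50 m downwind of mid-point
print(line_source(1e-3, 3.0, receptor=(50.0, 0.0), p0=(0.0, -100.0), p1=(0.0, 100.0)))
```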

Keywords: Air pollution, dispersion, emissions, line sources, road traffic, urban transport.

64 Development of an Indoor Drone Designed for the Needs of the Creative Industries

Authors: V. Santamarina Campos, M. de Miguel Molina, S. Kröner, B. de Miguel Molina

Abstract:

With this contribution, we want to show how the AiRT system could change the future way of working of part of the creative industry and what new economic opportunities could arise for it. Remotely Piloted Aircraft Systems (RPAS), more commonly known as drones, are now essential tools used by many different companies for their creative outdoor work. However, using this very flexible tool indoors is almost impossible, since safe navigation cannot be guaranteed by the operator due to the lack of a reliable and affordable indoor positioning system which ensures a stable flight, among other issues. Here we present the first results of a European project which consists of developing an indoor drone for professional footage, especially designed for the creative industries. One of the main achievements of this project is the successful involvement of the end-users in the overall design process from the very beginning. To ensure safe flight in confined spaces, our drone incorporates a positioning system based on ultra-wide band technology, an RGB-D (depth) camera for 3D environment reconstruction and the possibility to fully pre-program automatic flights. Since we also want to offer this tool to inexperienced pilots, we have focused on user-friendly handling of the whole system throughout the entire process.

Keywords: Virtual reality, 3D reconstruction, indoor positioning system, UWB, RPAS, aerial film, intelligent navigation, advanced safety measures, creative industries.

63 Challenges for Interface Designers in Designing Sensor Dashboards in the Context of Industry 4.0

Authors: Naveen Kumar, Shyambihari Prajapati

Abstract:

Industry 4.0 is the fourth industrial revolution, which focuses on the interconnectivity of machine to machine, human to machine and human to human via the Internet of Things (IoT). The technologies of Industry 4.0 facilitate communication between human and machine through IoT and form the cyber-physical production system (CPPS). In a CPPS, sensor data from multiple shop floors are connected through IoT and displayed to the operator through sensor dashboards. These sensor dashboards present an enormous amount of information, which makes monitoring, controlling and interpretation tasks complex for operators. Designing handheld sensor dashboards for supervision tasks is therefore becoming a challenge for interface designers. This paper reports on the emerging technologies of Industry 4.0, the increasing information complexity across consecutive industrial revolutions, and the upcoming design challenges for interface designers in the context of Industry 4.0. The authors conclude that the information complexity of sensor dashboard design has increased with consecutive industrial revolutions and that such designs impose a cognitive load on users; designing these complex dashboard interfaces in the Industry 4.0 context will be a main challenge for interface designers.

Keywords: Industry 4.0, sensor dashboard design, Cyber-physical production system, Interface designer.

62 Text-independent Speaker Identification Based on MAP Channel Compensation and Pitch-dependent Features

Authors: Jiqing Han, Rongchun Gao

Abstract:

One major source of performance decline in speaker recognition systems is the channel mismatch between training and testing. This paper focuses on improving the channel robustness of a speaker recognition system in two respects: channel compensation techniques and channel-robust features. The system is a text-independent speaker identification system based on two-stage recognition. For channel compensation, this paper applies the MAP (maximum a posteriori probability) channel compensation technique, previously used in speech recognition, to the speaker recognition system. For channel-robust features, this paper introduces pitch-dependent features and a pitch-dependent speaker model for the second-stage recognition. Based on the first-stage recognition of the test speech using a GMM (Gaussian mixture model), the system uses the GMM scores to decide whether a second recognition stage is needed. If it is, the system selects a few speakers from all the speakers who participated in the first-stage recognition for the second-stage recognition. For each selected speaker, the system obtains three pitch-dependent results from the speaker's pitch-dependent model, and then uses an ANN (artificial neural network) to combine the three pitch-dependent results and the GMM score into a fused result. The system makes the second-stage decision based on these fused results. The experiments show that the correct identification rate of the two-stage recognition system based on the MAP channel compensation technique and pitch-dependent features is 41.7% better than that of the baseline system in a closed-set test.

Keywords: Channel Compensation, Channel Robustness, MAP, Speaker Identification

61 Producing Outdoor Design Conditions Based on the Dependency between Meteorological Elements: Copula Approach

Authors: Zhichao Jiao, Craig Farnham, Jihui Yuan, Kazuo Emura

Abstract:

It is common to use outdoor design weather data to select the air-conditioning capacity at the building design stage. The meteorological elements of outdoor design weather data are usually selected separately based on their exceedance frequency, while the dependency between the elements is not well considered. This means that the simultaneous occurrence probability of these elements is smaller than the original exceedance frequency, which may lead to overestimation when selecting the air-conditioning capacity. Therefore, the copula approach, which can capture the dependency between multivariate data, was used to model the joint distributions of meteorological elements such as air temperature and global solar radiation. We suggest a method for selecting more credible outdoor design conditions based on the specific simultaneous occurrence probability of these two elements. The hourly weather data at 12 noon from 2001 to 2010 in Tokyo, Japan are used to analyze the dependency structure and joint distribution; the Gaussian copula represents the dependence of the data best. Calculating the air temperature and global solar radiation at a specific simultaneous occurrence probability and comparing them with the conventional exceedance-based values shows that both elements based on the simultaneous occurrence probability are lower than those based on the conventional method at the same probability.
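
A minimal sketch of the idea with synthetic data (SciPy-based, not the authors' fitting procedure): transform each margin to normal scores, estimate the Gaussian-copula correlation, and compare the joint exceedance probability at a given marginal level with the product of the marginal exceedances.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic noon observations: air temperature (C) and global solar radiation (W/m2),
# generated with positive dependence to mimic the kind of data described above.
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=3650)
temp = 28 + 4 * z[:, 0]
rad = 600 + 180 * z[:, 1]

def normal_scores(x):
    """Map empirical ranks to standard normal scores (pseudo-observations)."""
    u = stats.rankdata(x) / (len(x) + 1)
    return stats.norm.ppf(u)

# Gaussian copula: correlation of the normal scores
rho = np.corrcoef(normal_scores(temp), normal_scores(rad))[0, 1]

# Joint exceedance P(T > t, R > r) under the fitted copula, at the 1% marginal level
p = 0.01
q = stats.norm.ppf(1 - p)
joint = stats.multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).cdf([-q, -q])
print(f"copula correlation {rho:.2f}; joint exceedance {joint:.4f} vs independent {p * p:.4f}")
```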

Keywords: Copula approach, Design weather database, energy conservation, HVAC.

60 Fused Structure and Texture (FST) Features for Improved Pedestrian Detection

Authors: Hussin K. Ragb, Vijayan K. Asari

Abstract:

In this paper, we present a pedestrian detection descriptor called Fused Structure and Texture (FST) features, based on the combination of local phase information with texture features. Since the phase of a signal conveys more structural information than the magnitude, the phase congruency concept is used to capture the structural features. On the other hand, the Center-Symmetric Local Binary Pattern (CSLBP) approach is used to capture the texture information of the image. The dimensionless quantity of the phase congruency and the robustness of the CSLBP operator on flat images, as well as to blur and illumination changes, make the proposed descriptor more robust and less sensitive to light variations. The proposed descriptor is formed by extracting the phase congruency and the CSLBP value of each pixel of the image with respect to its neighborhood. The histogram of the oriented phase and the histogram of the CSLBP values for the local regions in the image are computed and concatenated to construct the FST descriptor. Several experiments were conducted on the INRIA and the low-resolution DaimlerChrysler datasets to evaluate the detection performance of the pedestrian detection system based on the FST descriptor. A linear Support Vector Machine (SVM) is used to train the pedestrian classifier. These experiments showed that the proposed FST descriptor has better detection performance than a set of state-of-the-art feature extraction methodologies.
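
As an illustration of the texture half of the descriptor, a generic CSLBP operator (not the authors' implementation) compares the four center-symmetric neighbour pairs of each pixel and packs the comparisons into a 4-bit code whose histogram describes the local texture:

```python
import numpy as np

def cslbp(image, threshold=0.01):
    """Center-Symmetric LBP: compare the 4 center-symmetric neighbour pairs
    of each pixel and pack the comparisons into a 4-bit code (0..15)."""
    img = image.astype(float)
    # neighbour offsets: (di, dj) and its opposite (-di, -dj)
    pairs = [((-1, -1), (1, 1)), ((-1, 0), (1, 0)), ((-1, 1), (1, -1)), ((0, 1), (0, -1))]
    h, w = img.shape
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, ((di1, dj1), (di2, dj2)) in enumerate(pairs):
        a = img[1 + di1:h - 1 + di1, 1 + dj1:w - 1 + dj1]
        b = img[1 + di2:h - 1 + di2, 1 + dj2:w - 1 + dj2]
        code |= ((a - b) > threshold).astype(np.uint8) << bit
    return code

patch = np.random.rand(16, 16)
hist, _ = np.histogram(cslbp(patch), bins=16, range=(0, 16))   # 16-bin texture histogram
print(hist)
```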

Keywords: Pedestrian detection, phase congruency, local phase, LBP features, CSLBP features, FST descriptor.

59 Data Recording for Remote Monitoring of Autonomous Vehicles

Authors: Rong-Terng Juang

Abstract:

Autonomous vehicles offer the possibility of significant benefits to social welfare. However, fully automated cars might not appear in the near future. To speed the adoption of self-driving technologies, many governments worldwide are passing laws requiring data recorders for the testing of autonomous vehicles. Currently, a self-driving vehicle (e.g., a shuttle bus) has to be monitored from a remote control center. When an autonomous vehicle encounters an unexpected driving environment, such as road construction or an obstruction, it should request assistance from a remote operator. Nevertheless, large amounts of data, including images, radar and lidar data, etc., have to be transmitted from the vehicle to the remote center. Therefore, this paper proposes a data compression method for in-vehicle networks for the remote monitoring of autonomous vehicles. Firstly, the time-series data are rearranged into a multi-dimensional signal space. Upon arrival, new controller area network (CAN) data are mapped onto a time-data two-dimensional space associated with the specific CAN identifier. Secondly, the data are sampled based on differential sampling. Finally, the whole set of data is encoded using existing algorithms such as Huffman, arithmetic and codebook encoding methods. To evaluate system performance, the proposed method was deployed on an in-house built autonomous vehicle. The testing results show that the amount of data can be reduced to as little as 1/7 of the raw data.
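
A minimal sketch of the encoding chain, differential sampling followed by Huffman coding, on a made-up slowly varying CAN signal; this illustrates the general technique rather than the authors' implementation.

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table for a symbol stream (returns {symbol: bitstring})."""
    freq = Counter(symbols)
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}      # prepend the new branch bit
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (w1 + w2, nxt, merged))
        nxt += 1
    return heap[0][2], freq

# Hypothetical slowly varying CAN signal (e.g., wheel-speed samples for one CAN identifier)
signal = [1000, 1001, 1001, 1003, 1004, 1004, 1004, 1006, 1007, 1007] * 100
deltas = [signal[0]] + [b - a for a, b in zip(signal, signal[1:])]   # differential sampling

codes, freq = huffman_codes(deltas)
compressed_bits = sum(freq[s] * len(codes[s]) for s in freq)
print(f"raw: {16 * len(signal)} bits (16-bit samples assumed), "
      f"delta+Huffman: {compressed_bits} bits")
```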

Keywords: Autonomous vehicle, data recording, remote monitoring, controller area network.

58 Design and Fabrication of a Programmable Stiffness-Sensitive Gripper for Object Handling

Authors: Mehdi Modabberifar, Sanaz Jabary, Mojtaba Ghodsi

Abstract:

Stiffness sensing is an important issue in medical diagnostics, robotic surgery, and the safe handling and grasping of objects in production lines. Detecting and characterizing lumps embedded in soft tissue, and safely removing and handling the detected lumps, is needed in surgery. Also in industry, grasping and handling an object without damage, in places that a human operator cannot access, is very important. In this paper, a method for object handling is presented. It is based on the use of an intelligent gripper to detect the object stiffness and then set a programmable force for grasping the object to move it. The main components of this system include sensors (for measuring force and displacement), electrical components (electrical and electronic circuits, tactile data processing and the force control system), mechanical components (the gripper mechanism and its driving system) and the display unit. The system uses a rotary potentiometer for measuring gripper displacement. A microcontroller, using the feedback received from the load cell mounted on the finger of the gripper, calculates the stiffness and then commands the gripper motor to apply a certain force on the object. Results of experiments on samples with different stiffness show that the gripper works successfully. The gripper can be used in haptic interfaces or in robotic systems for object handling.
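
The control idea can be illustrated with a minimal sketch (hypothetical sensor readings and force thresholds, not the authors' firmware): stiffness is estimated as the slope of load-cell force versus potentiometer displacement, and a grip force is then chosen from that estimate.

```python
import numpy as np

def estimate_stiffness(displacement_mm, force_n):
    """Least-squares slope of force vs. displacement during the probing squeeze."""
    slope, _ = np.polyfit(displacement_mm, force_n, 1)
    return slope                     # N/mm

def grip_force_for(stiffness):
    """Map estimated stiffness to a programmed grip force (illustrative thresholds)."""
    if stiffness < 0.5:
        return 2.0     # very soft object: gentle grasp (N)
    if stiffness < 5.0:
        return 6.0     # medium stiffness
    return 15.0        # rigid object: firm grasp

# Hypothetical probing data from the potentiometer (mm) and load cell (N)
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
f = np.array([0.0, 0.9, 2.1, 3.0, 4.1])
k = estimate_stiffness(x, f)
print(f"stiffness ~ {k:.2f} N/mm -> commanded grip force {grip_force_for(k)} N")
```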

Keywords: Gripper, haptic, stiffness, robotic.

57 Modelling Dengue Fever (DF) and Dengue Haemorrhagic Fever (DHF) Outbreak Using Poisson and Negative Binomial Model

Authors: W. Y. Wan Fairos, W. H. Wan Azaki, L. Mohamad Alias, Y. Bee Wah

Abstract:

Dengue fever has become a major concern for health authorities all over the world, particularly in tropical countries, which are experiencing the most worrying outbreaks of dengue fever (DF) and dengue haemorrhagic fever (DHF). The DF and DHF epidemics have thus become the main causes of hospital admissions and deaths in Malaysia. This paper therefore attempts to examine the environmental factors that may influence the recent dengue outbreaks. The aim of this study is twofold: firstly, to establish a statistical model to describe the relationship between the number of dengue cases and a range of explanatory variables and, secondly, to identify the lags at which the explanatory variables affect dengue incidence the most. The explanatory variables involved include the level of cloud cover, percentage of relative humidity, amount of rainfall, maximum temperature, minimum temperature and wind speed. Poisson and negative binomial regression analyses were used in this study. The results of the analyses of 915 observations (daily data from July 2006 to December 2008) reveal that the climatic factors comprising daily temperature and wind speed significantly influence the incidence of dengue fever 2 and 3 weeks after their occurrence. The effect of humidity, on the other hand, appears to be significant only after 2 weeks.
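
A minimal statsmodels sketch of the modelling approach, fitting Poisson and negative binomial regressions with lagged weather covariates on synthetic daily data; the variable names, the 14- and 21-day lags and the coefficients are illustrative assumptions, not the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 915
# Synthetic daily series standing in for the weather covariates and dengue counts
df = pd.DataFrame({
    "temp": 27 + 2 * rng.standard_normal(n),
    "wind": 8 + 2 * rng.standard_normal(n),
    "humidity": 75 + 5 * rng.standard_normal(n),
})
df["cases"] = rng.poisson(np.exp(0.5 + 0.05 * df["temp"].shift(14).fillna(27)))

# Lag the covariates by 2 and 3 weeks, as suggested by the study
X = pd.concat({"temp_lag14": df["temp"].shift(14),
               "wind_lag21": df["wind"].shift(21),
               "hum_lag14": df["humidity"].shift(14)}, axis=1).dropna()
y = df.loc[X.index, "cases"]
X = sm.add_constant(X)

poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
negbin_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
print(poisson_fit.summary().tables[1])
print("NegBin AIC:", negbin_fit.aic, " Poisson AIC:", poisson_fit.aic)
```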

Keywords: Dengue Fever, Dengue Hemorrhagic Fever, Negative Binomial Regression model, Poisson Regression model.

56 Automation of the Maritime UAV Command, Control, Navigation Operations, Simulated in Real-Time Using Kinect Sensor: A Feasibility Study

Authors: Regius Asiimwe, Amir Anvar

Abstract:

This paper describes the process used in the automation of maritime UAV commands using the Kinect sensor. The AR Drone is a quadrocopter manufactured by Parrot [1], designed to be controlled using Apple devices such as iPhones and iPads. However, this project uses the Microsoft Kinect SDK and Microsoft Visual Studio C# (C sharp) software, which are compatible with the Windows operating system, for the automation of the navigation and control of the AR Drone. The navigation and control software for the quadrocopter runs on a Windows 7 computer. The project is divided into two sections: the quadrocopter control system and the Kinect sensor control system. The Kinect sensor is connected to the computer using a USB cable, through which commands can be exchanged with the sensor. The AR Drone has Wi-Fi capability, through which it can be connected to the computer to enable the transfer of commands to and from the quadrocopter. The project was implemented in C#, a programming language that is commonly used in automation systems. The language was chosen because more libraries are already established in C# for both the AR Drone and the Kinect sensor. The study will contribute toward research in the automation of systems using the quadrocopter and the Kinect sensor for navigation involving a human operator in the loop. The prototype created has numerous applications, among which are the inspection of vessels such as ships and airplanes, and of areas that are not accessible to human operators.

Keywords: UAV, AR drone, Kinect Sensors, Automation, Real time, C sharp, Microsoft Kinect SDK.

55 Stock Price Forecast by Using Neuro-Fuzzy Inference System

Authors: Ebrahim Abbasi, Amir Abouec

Abstract:

In this research, a model was designed to investigate the trend of the stock price of the IRAN KHODRO Corporation on the Tehran Stock Exchange using an adaptive neuro-fuzzy inference system (ANFIS). For the long-term period, a neuro-fuzzy model with two triangular membership functions and four independent variables, namely trade volume, dividend per share (DPS), price-to-earnings ratio (P/E) and closing price, with stock price fluctuation as the dependent variable, was selected as the optimal model. For the short-term period, a neuro-fuzzy model with two triangular membership functions for the first quarter of the year, two trapezoidal membership functions for the second quarter, two Gaussian combination membership functions for the third quarter and two trapezoidal membership functions for the fourth quarter was selected as the optimal model for stock price forecasting. In this case, three independent variables, namely trade volume, price-to-earnings ratio and closing stock price, and the dependent variable of stock price fluctuation were used. The findings of the research demonstrate that the trend of the stock price can be forecasted with a low level of error.
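
For reference, the membership-function shapes mentioned above (triangular, trapezoidal and two-sided Gaussian combination) can be written as simple functions; the sketch below uses generic parameters, not the fitted ANFIS parameters of the study.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

def trapmf(x, a, b, c, d):
    """Trapezoidal membership function with feet a, d and shoulders b, c."""
    return np.clip(np.minimum.reduce([(x - a) / (b - a), np.ones_like(x), (d - x) / (d - c)]), 0, 1)

def gauss2mf(x, s1, c1, s2, c2):
    """Two-sided Gaussian combination: left Gaussian below c1, right Gaussian above c2."""
    left = np.exp(-0.5 * ((x - c1) / s1) ** 2)
    right = np.exp(-0.5 * ((x - c2) / s2) ** 2)
    y = np.ones_like(x)
    y[x < c1] = left[x < c1]
    y[x > c2] = right[x > c2]
    return y

x = np.linspace(0, 10, 101)
print(trimf(x, 2, 5, 8)[50], trapmf(x, 1, 3, 7, 9)[50], gauss2mf(x, 1.0, 4, 1.5, 6)[50])
```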

Keywords: Stock Price forecast, membership functions, Adaptive Neuro-Fuzzy Inference System, trade volume, P/E, DPS.

54 New Wavelet Indices to Assess Muscle Fatigue during Dynamic Contractions

Authors: M. González-Izal, I. Rodríguez-Carreño, F. Mallor-Giménez, A. Malanda, M. Izquierdo

Abstract:

The purpose of this study was to evaluate and compare new indices based on the discrete wavelet transform with other spectral parameters proposed in the literature, such as mean average voltage, median frequency and ratios between spectral moments, applied to estimate acute exercise-induced changes in power output, i.e., to assess peripheral muscle fatigue during a dynamic fatiguing protocol. Fifteen trained subjects performed 5 sets of 10 leg presses, with 2 minutes of rest between sets. Surface electromyography was recorded from the vastus medialis (VM) muscle. Several surface electromyographic parameters were compared for detecting peripheral muscle fatigue: mean average voltage (MAV), median spectral frequency (Fmed), the Dimitrov spectral index of muscle fatigue (FInsm5), and five other parameters obtained from the discrete wavelet transform (DWT) as ratios between different scales. The new wavelet indices achieved the best Pearson correlation coefficients with power output changes during acute dynamic contractions, and their regressions were significantly different from those of MAV and Fmed. On the other hand, they showed the highest robustness in the presence of additive white Gaussian noise for different signal-to-noise ratios (SNRs). Therefore, peripheral impairments assessed by sEMG wavelet indices may be a relevant factor involved in the loss of power output after a dynamic high-load fatiguing task.
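
A generic sketch of a wavelet-based fatigue index (the ratio of energy at coarse, low-frequency scales to energy at fine, high-frequency scales of an sEMG epoch), computed here with PyWavelets on a synthetic signal; the exact scale combinations used in the study are not reproduced.

```python
import numpy as np
import pywt

fs = 1000                                    # sampling rate (Hz), illustrative
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(3)
semg = rng.standard_normal(t.size) * np.hanning(t.size)   # synthetic sEMG epoch

coeffs = pywt.wavedec(semg, "db5", level=5)  # [cA5, cD5, cD4, cD3, cD2, cD1]
energies = [np.sum(c ** 2) for c in coeffs]

# Illustrative wavelet index: low-frequency detail energy over high-frequency detail energy
wavelet_index = sum(energies[1:3]) / sum(energies[4:6])
print(f"scale energies: {[f'{e:.1f}' for e in energies]}, index = {wavelet_index:.3f}")
```

As fatigue shifts sEMG energy toward lower frequencies, an index of this form increases, which is what makes scale-energy ratios usable as fatigue markers.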

Keywords: Median Frequency, EMG, wavelet transform, muscle fatigue

53 Information Requirements for Vessel Traffic Service Operations

Authors: Fan Li, Chun-Hsien Chen, Li Pheng Khoo

Abstract:

Operators of a vessel traffic service (VTS) center provide three different types of services to vessels, namely information service, navigational assistance and traffic organization. To provide these services, operators monitor vessel traffic through a computer interface and provide navigational advice based on information integrated from multiple sources, including the automatic identification system (AIS), the radar system, and the closed-circuit television (CCTV) system. This information is therefore crucial in VTS operation. However, what information the VTS operator actually needs to offer these services efficiently and properly is unclear. The aim of this study is to investigate the information requirements for VTS operation. To achieve this aim, a field observation was carried out to elicit the information requirements. The study revealed that the most frequent and important tasks were handling arrival vessel reports, potential conflict control and abeam vessel reports. Current location and vessel name were used in all tasks. Hazardous cargo information was particularly required when operators handled arrival vessel reports. The speed, course, and distance of two or several vessels were used only in potential conflict control. The information requirements identified in this study can be utilized in designing a human-computer interface that takes into consideration what information should be displayed and when, and might further be used to build the foundation of a decision support system for VTS.

Keywords: Vessel traffic service, information requirements, hierarchical task analysis, field observation.

52 Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis

Authors: Petr Gurný

Abstract:

One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. In this paper, the possibility of determining a financial institution's PD according to credit-scoring models is discussed. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression and probit regression) from a sample of almost three hundred US commercial banks. Afterwards, these models are compared and verified on a control sample with a view to choosing the best one. The second part of the paper is aimed at the application of the chosen model to a portfolio of three key Czech banks to estimate their present financial stability. However, it is no less important to be able to estimate the evolution of PD in the future. For this reason, the second task in this paper is to estimate the probability distribution of the future PD for the Czech banks. Thus, the values of the particular indicators are sampled randomly and the distribution of PD is estimated, assuming that the indicators are distributed according to a multidimensional subordinated Lévy model (the Variance Gamma model and the Normal Inverse Gaussian model, in particular). Although the obtained results show that all the banks are relatively healthy, there is still a high chance, in terms of probability, that a "financial crisis" will occur, as indicated by various quantiles of the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.
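
As a minimal illustration of estimating PD with logit and probit credit-scoring models (synthetic indicators and coefficients, not the paper's data), using statsmodels:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
# Synthetic bank indicators standing in for the financial ratios used in credit scoring
X = np.column_stack([
    rng.normal(0.08, 0.03, n),   # capital adequacy proxy
    rng.normal(0.01, 0.02, n),   # return on assets proxy
    rng.normal(0.60, 0.15, n),   # loans-to-deposits proxy
])
logit_true = -1.0 - 20 * X[:, 0] - 30 * X[:, 1] + 2 * X[:, 2]
default = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))   # simulated default flags

Xc = sm.add_constant(X)
logit_model = sm.Logit(default, Xc).fit(disp=False)
probit_model = sm.Probit(default, Xc).fit(disp=False)

# PD of a new "bank" given its indicator values
new_bank = sm.add_constant(np.array([[0.05, -0.01, 0.80]]), has_constant="add")
print("logit PD:", logit_model.predict(new_bank)[0],
      "probit PD:", probit_model.predict(new_bank)[0])
```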

Keywords: Credit-scoring Models, Multidimensional Subordinated Lévy Model, Probability of Default.

51 Image Transmission via Iterative Cellular-Turbo System

Authors: Ersin Gose, Kenan Buyukatak, Onur Osman, Osman N. Ucan

Abstract:

To compress 2D images, improve their bit error performance and also enhance them, a new scheme called the Iterative Cellular-Turbo System (IC-TS) is introduced. In IC-TS, the original image is partitioned into 2^N quantization levels, where N is the number of bit planes. Each of the N bit planes is then coded by a turbo encoder and transmitted over an additive white Gaussian noise (AWGN) channel. At the receiver side, the bit planes are reassembled taking into consideration the neighborhood relationship of pixels in 2D images. Each of the noisy bit-plane values of the image is evaluated iteratively using the IC-TS structure, which is composed of an equalization block, the Iterative Cellular Image Processing Algorithm (ICIPA) and a turbo decoder, with an iterative feedback link between ICIPA and the turbo decoder. ICIPA uses the mean and standard deviation of the estimated values of each pixel neighborhood. The scheme gives highly satisfactory results in both bit error rate (BER) and image enhancement performance for signal-to-noise ratio (SNR) values below -1 dB, compared to the traditional turbo coding scheme and 2D filtering applied separately. Compression can also be achieved with IC-TS systems: less memory storage is used and the data rate is increased up to N-1 times by simply choosing any number of bit slices, sacrificing resolution. Hence, it is concluded that the IC-TS system is a promising approach for 2D image transmission, recovery of noisy signals and image compression.
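
The bit-plane decomposition step can be sketched as follows (NumPy, illustrative only; the turbo coding, channel and ICIPA stages are not reproduced):

```python
import numpy as np

def to_bit_planes(image, n_bits=8):
    """Split an n_bits-per-pixel image into its bit planes (MSB first)."""
    return [(image >> b) & 1 for b in range(n_bits - 1, -1, -1)]

def from_bit_planes(planes):
    """Reassemble bit planes (MSB first) back into pixel values."""
    n_bits = len(planes)
    return sum(p.astype(np.uint16) << (n_bits - 1 - i) for i, p in enumerate(planes))

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
planes = to_bit_planes(img)                 # each plane would be turbo-encoded separately
assert np.array_equal(from_bit_planes(planes).astype(np.uint8), img)

# Coarse compression: keep only the k most significant planes, zeroing the rest
k = 4
coarse = from_bit_planes(planes[:k] + [np.zeros_like(img)] * (8 - k))
print("max reconstruction error with 4 planes:", int(np.abs(coarse.astype(int) - img).max()))
```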

Keywords: Iterative Cellular Image Processing Algorithm (ICIPA), Turbo Coding, Iterative Cellular Turbo System (IC-TS), Image Compression.

50 Secure Protocol for Short Message Service

Authors: Shubat S. Ahmeda, Ashraf M. Ali Edwila

Abstract:

Short Message Service (SMS) has grown in popularity over the years and has become a common way of communication; it is a service provided through the Global System for Mobile Communications (GSM) that allows users to send text messages to others. SMS is usually used to transport unclassified information, but with the rise of mobile commerce it has become a popular tool for transmitting sensitive information between businesses and their clients. By default, SMS does not guarantee confidentiality and integrity of the message content. In mobile communication systems, the security (encryption) offered by the network operator applies only to the wireless link; data delivered through the mobile core network may not be protected. Existing end-to-end security mechanisms are provided at the application level and are typically based on public-key cryptosystems. The main concern in a public-key setting is the authenticity of the public key; this issue can be resolved by identity-based (ID-based) cryptography, where the public key of a user can be derived from public information that uniquely identifies the user. This paper presents an encryption mechanism based on an ID-based scheme using elliptic curves to provide end-to-end security for SMS. This mechanism has been implemented over the standard SMS network architecture, and the encryption overhead has been estimated and compared with the RSA scheme. This study indicates that the ID-based mechanism has advantages over the RSA mechanism in key distribution and in scalability when increasing the security level for mobile services.

Keywords: Elliptic Curve Cryptography (ECC), End-to-end Security, Identity-based Cryptography, Public Key, RSA, SMS Protocol.

49 Probabilistic Method of Wind Generation Placement for Congestion Management

Authors: S. Z. Moussavi, A. Badri, F. Rastegar Kashkooli

Abstract:

Wind farms (WFs) with high penetration levels are being established in power systems worldwide more rapidly than other renewable resources. The Independent System Operator (ISO), as a policy maker, should propose appropriate places for WF installation in order to maximize the benefits for investors. There is also the possibility of congestion relief through new WF installations, which the ISO should take into account when proposing locations. In this context, an efficient wind farm placement method is proposed in order to reduce the burden on congested lines. Since wind speed is a random variable and load forecasts also contain uncertainties, probabilistic approaches are used for this type of study. An AC probabilistic optimal power flow (P-OPF) is formulated and solved using Monte Carlo simulation (MCS). In order to reduce the computation time, point estimate methods (PEM) are introduced as an efficient alternative to the time-demanding MCS. Subsequently, the optimal WF placement is determined using generation shift distribution factors (GSDF), considering a new parameter entitled the wind availability factor (WAF). In order to obtain more realistic results, N-1 contingency analysis is employed to find the optimal WF size by means of line outage distribution factors (LODF). The IEEE 30-bus test system is used to demonstrate and compare the accuracy of the proposed methodology.
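
As a generic illustration of a two-point estimate method (Hong's 2m scheme restricted to zero-skewness inputs, applied to an arbitrary toy function rather than an actual AC power flow), compared against Monte Carlo simulation:

```python
import numpy as np

def two_point_estimate(func, means, stds):
    """Hong's 2m point estimate scheme, simplified to zero-skewness inputs:
    for each of the m inputs, evaluate func at mean +/- sqrt(m)*std (others at means)."""
    m = len(means)
    xi, w = np.sqrt(m), 1.0 / (2 * m)
    e1 = e2 = 0.0
    for k in range(m):
        for sign in (+1, -1):
            x = np.array(means, dtype=float)
            x[k] += sign * xi * stds[k]
            y = func(x)
            e1 += w * y
            e2 += w * y**2
    return e1, np.sqrt(max(e2 - e1**2, 0.0))   # mean and std of the output

# Toy stand-in for a power-flow quantity driven by wind speed and load (not an actual OPF)
def line_flow(x):
    wind, load = x
    return 0.5 * load - 0.08 * wind**2          # arbitrary nonlinear mapping

means, stds = [8.0, 100.0], [2.5, 10.0]          # wind speed (m/s), load (MW): illustrative
mu_pem, sd_pem = two_point_estimate(line_flow, means, stds)

rng = np.random.default_rng(5)
samples = np.array([line_flow(rng.normal(means, stds)) for _ in range(20000)])
print(f"PEM: {mu_pem:.2f} +/- {sd_pem:.2f}; MCS: {samples.mean():.2f} +/- {samples.std():.2f}")
```

The point estimate uses only 2m function evaluations (here 4) instead of the thousands required by MCS, which is the computational saving the abstract refers to.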

Keywords: Probabilistic optimal power flow, Wind power, Point estimate methods, Congestion management
