Search results for: marketing communication approach.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6377

617 Using Satellite Images Datasets for Road Intersection Detection in Route Planning

Authors: Fatma El-zahraa El-taher, Ayman Taha, Jane Courtney, Susan Mckeever

Abstract:

Understanding road networks plays an important role in navigation applications such as self-driving vehicles and route planning for individual journeys. Intersections of roads are essential components of road networks. Understanding the features of an intersection, from a simple T-junction to larger multi-road junctions, is critical to decisions such as crossing roads or selecting the safest routes. The identification and profiling of intersections from satellite images is a challenging task. While deep learning approaches offer state-of-the-art performance in image classification and detection, the availability of training datasets is a bottleneck for this approach. In this paper, a labelled satellite image dataset for the intersection recognition problem is presented. It consists of 14,692 satellite images of Washington DC, USA. To support other users of the dataset, an automated download and labelling script is provided for dataset replication. The challenges of construction and fine-grained feature labelling of a satellite image dataset are examined, including the issue of how to address features that are spread across multiple images. Finally, the accuracy of detection of intersections in satellite images is evaluated.

Keywords: Satellite images, remote sensing images, data acquisition, autonomous vehicles, robot navigation, route planning, road intersections.

616 Pectoral Muscles Suppression in Digital Mammograms Using Hybridization of Soft Computing Methods

Authors: I. Laurence Aroquiaraj, K. Thangavel

Abstract:

Breast region segmentation is an essential prerequisite in computerized analysis of mammograms. It aims at separating the breast tissue from the background of the mammogram and includes two independent segmentations. The first segments the background region, which usually contains annotations, labels and frames, from the whole breast region, while the second removes the pectoral muscle portion (present in Medio Lateral Oblique (MLO) views) from the rest of the breast tissue. In this paper we propose a hybridization of Connected Component Labeling (CCL), fuzzy, and straight-line methods. The proposed methods performed well in separating the pectoral region. After removal of the pectoral muscle from the mammogram, further processing is confined to the breast region alone. To demonstrate the validity of our segmentation algorithm, it is extensively tested using 322 mammographic images from the Mammographic Image Analysis Society (MIAS) database. The segmentation results were evaluated using the Mean Absolute Error (MAE), Hausdorff Distance (HD), Probabilistic Rand Index (PRI), Local Consistency Error (LCE) and Tanimoto Coefficient (TC). The hybridization of the fuzzy and straight-line methods yielded more than 96% of the curve segmentations rated adequate or better. In addition, a comparison with similar approaches from the state of the art has been given, obtaining slightly improved results. Experimental results demonstrate the effectiveness of the proposed approach.

Keywords: X-ray Mammography, CCL, Fuzzy, Straight line.

615 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation

Authors: Aicha Majda, Abdelhamid El Hassani

Abstract:

Lung CT image segmentation is a prerequisite in lung CT image analysis. Most of the conventional methods need post-processing to deal with abnormal lung CT scans, such as those containing lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm consists of directly comparing the pixel values of the two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor transformations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined based on a patch-based similarity measurement instead of the simple intensity measurement in the standard method. The weights between each pixel and its neighboring pixels are based on the obtained new term. The graph is then created using these weights between its nodes. Finally, the segmentation is completed with the minimum-cut/max-flow algorithm. Experimental results show that the proposed method is accurate and efficient, and can directly provide explicit lung regions without any post-processing operations, in contrast to the standard method.
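
As an illustration only (not the authors' implementation), the sketch below shows one way a patch-based boundary weight between two neighbouring pixels could be computed with NumPy; the patch radius and the scale parameter sigma are assumed values.

```python
import numpy as np

def patch_weight(image, p, q, radius=2, sigma=10.0):
    """Boundary weight between neighbouring pixels p and q based on the
    similarity of the patches centred on them (illustrative sketch)."""
    def patch(center):
        r, c = center
        return image[max(r - radius, 0):r + radius + 1,
                     max(c - radius, 0):c + radius + 1].astype(float)

    pp, pq = patch(p), patch(q)
    # Crop to a common shape near image borders.
    h, w = min(pp.shape[0], pq.shape[0]), min(pp.shape[1], pq.shape[1])
    diff = pp[:h, :w] - pq[:h, :w]
    # Patch-based dissimilarity replaces the single-pixel intensity difference.
    return np.exp(-np.mean(diff ** 2) / (2.0 * sigma ** 2))

if __name__ == "__main__":
    img = np.random.randint(0, 255, (64, 64))
    print(patch_weight(img, (10, 10), (10, 11)))
```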

Keywords: Graph cuts, lung CT scan, lung parenchyma segmentation, patch based similarity metric.

614 An Exploration of Sense of Place as Informative for Spatial Planning Guidelines: A Case Study of the Vredefort Dome World Heritage Site, South Africa

Authors: Karen Puren, Ernst Drewes, Vera Roos

Abstract:

This paper explores the sense of place in the Vredefort Dome World Heritage Site, South Africa, as an essential input for the formulation of spatial planning proposals for the area. Intangible aspects such as personal and symbolic meanings of sites are currently not integrated in spatial planning in South Africa. This may have a detrimental effect on local inhabitants who have a long history with the site and have built up a strong place identity. Involving local inhabitants at an early stage of the planning process and incorporating their attitudes and opinions in future interventions in the area may also contribute to the acceptance of the legitimacy of future policy. An interdisciplinary and mixed-method research approach was followed in this study in order to identify possible ways to anchor spatial planning proposals in the identity of the place. In essence, the qualitative study revealed that inhabitants reflect a deep and personal relationship with and within the area, which contributes significantly to their sense of emotional security and self-identity. Results include a strong conservation-orientated attitude with regard to the natural rural character of the site, especially in the inner core.

Keywords: Place identity, Sense of Place, Spatial Planning, Vredefort Dome World Heritage Site.

613 A Trainable Neural Network Ensemble for ECG Beat Classification

Authors: Atena Sajedin, Shokoufeh Zakernejad, Soheil Faridi, Mehrdad Javadi, Reza Ebrahimpour

Abstract:

This paper illustrates the use of a combined neural network model for classification of electrocardiogram (ECG) beats. We present a trainable neural network ensemble approach to develop a customized electrocardiogram beat classifier in an effort to further improve the performance of ECG processing and to offer individualized health care. We apply a three-stage technique for the detection of premature ventricular contractions (PVC) among normal beats and other heart conditions, consisting of denoising, feature extraction and classification. First, we investigate the application of the stationary wavelet transform (SWT) for noise reduction of the ECG signals. The feature extraction module then extracts 10 ECG morphological features and one timing interval feature. Next, a number of multilayer perceptron (MLP) neural networks with different topologies are designed. The performance of the different combination methods as well as the efficiency of the whole system is presented. Among them, stacked generalization, as the proposed trainable combined neural network model, possesses the highest recognition rate of around 95%. Therefore, this network proves to be a suitable candidate for ECG signal diagnosis systems. ECG samples of the different beat types were extracted from the MIT-BIH arrhythmia database for the study.
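
For readers who want to experiment with the ensemble idea, the following sketch combines several MLPs by stacked generalization using scikit-learn; the topologies, the logistic-regression meta-learner and the placeholder feature matrix are assumptions for illustration, not the authors' exact setup.

```python
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# X: (n_beats, 11) matrix of 10 morphological + 1 timing feature; y: beat labels.
X, y = np.random.rand(200, 11), np.random.randint(0, 2, 200)   # placeholder data

# Base MLPs with different topologies, combined by stacked generalization.
base_learners = [
    ("mlp_10", MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000)),
    ("mlp_20", MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000)),
    ("mlp_15_5", MLPClassifier(hidden_layer_sizes=(15, 5), max_iter=2000)),
]
ensemble = StackingClassifier(estimators=base_learners,
                              final_estimator=LogisticRegression())
ensemble.fit(X, y)
print("training accuracy:", ensemble.score(X, y))
```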

Keywords: ECG beat classification, combining classifiers, premature ventricular contraction (PVC), multilayer perceptrons, wavelet transform.

612 Developing Leadership and Teamwork Skills of Pre-Service Teacher through Learning Camp

Authors: Sirimanee Banjong

Abstract:

This study aimed to 1) develop pre-service teachers' leadership skills through camp-based learning, and 2) develop pre-service teachers' teamwork skills through camp-based learning. An applied research methodology was used. The target group was derived from a purposive selection. It involved 32 fourth-year students in the Early Childhood Education Program enrolled in a course entitled Seminar in Early Childhood Education provided during the second semester of the 2013 academic year. The treatment was camp-based learning activities which applied a PDCA process including four stages: 1) plan, 2) do, 3) check, and 4) act. Research instruments were a learning camp program, a camp-based learning management plan, a 5-level assessment form for leadership skills and a 5-level assessment form for teamwork skills. Data were analyzed using descriptive statistics. Results were: 1) pre-service teachers' leadership skills yielded a before-treatment average score of x̄ = 3.4, S.D. = 0.62 and an after-treatment average score of x̄ = 4.29, S.D. = 0.66; 2) pre-service teachers' teamwork skills yielded a before-treatment average score of x̄ = 3.31, S.D. = 0.60 and an after-treatment average score of x̄ = 4.42, S.D. = 0.66. Both differences were statistically significant at the .05 level. Thus, the pre-service teachers' leadership and teamwork skills were significantly improved through the camp-based learning approach.

Keywords: Learning camp, leadership skills, teamwork skills.

611 MPPT Operation for PV Grid-connected System using RBFNN and Fuzzy Classification

Authors: A. Chaouachi, R. M. Kamel, K. Nagasaka

Abstract:

This paper presents a novel methodology for Maximum Power Point Tracking (MPPT) of a grid-connected 20 kW photovoltaic (PV) system using a neuro-fuzzy network. The proposed method predicts the reference PV voltage guaranteeing optimal power transfer between the PV generator and the main utility grid. The neuro-fuzzy network is composed of a fuzzy rule-based classifier and three Radial Basis Function Neural Networks (RBFNN). The inputs of the network (irradiance and temperature) are classified before they are fed into the appropriate RBFNN for either the training or the estimation process, while the output is the reference voltage. The main advantage of the proposed methodology, compared to a conventional single neural network-based approach, is its distinct generalization ability with regard to the nonlinear and dynamic behavior of a PV generator. In fact, the neuro-fuzzy network is a neural network-based multi-model machine learning scheme that defines a set of local models emulating the complex and non-linear behavior of a PV generator under a wide range of operating conditions. Simulation results under several rapid irradiance variations show that the proposed MPPT method achieves the highest efficiency compared to a conventional single neural network.
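
The routing idea (classify the operating point, then query the corresponding RBFNN) can be sketched as below; the crisp class boundaries, the RBF centres and width, and the synthetic training data are assumptions standing in for the fuzzy classifier and the real PV data.

```python
import numpy as np

class RBFNN:
    """Minimal Gaussian radial basis function network (illustrative)."""
    def __init__(self, centers, width=1.0):
        self.centers, self.width, self.w = centers, width, None

    def _phi(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centers[None, :, :], axis=2)
        return np.exp(-(d / self.width) ** 2)

    def fit(self, X, y):
        self.w, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.w

def classify(irradiance, temperature):
    # Crisp stand-in for the fuzzy rule-based classifier: pick an operating region.
    if irradiance < 300:
        return 0          # low irradiance
    return 1 if temperature < 40 else 2

# One RBFNN per operating region; training data are placeholders.
rng = np.random.default_rng(0)
nets = []
for _ in range(3):
    X = rng.uniform([0, 10], [1000, 70], size=(50, 2))   # (irradiance, temperature)
    y = 0.03 * X[:, 0] - 0.1 * X[:, 1] + 20              # synthetic reference voltage
    nets.append(RBFNN(centers=X[:10], width=200.0).fit(X, y))

sample = np.array([[650.0, 35.0]])
v_ref = nets[classify(*sample[0])].predict(sample)[0]
print("predicted reference PV voltage:", v_ref)
```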

Keywords: MPPT, neuro-fuzzy, RBFN, grid-connected, photovoltaic.

610 A Comprehensive Survey on RAT Selection Algorithms for Heterogeneous Networks

Authors: Abdallah AL Sabbagh, Robin Braun, Mehran Abolhasan

Abstract:

Due to the coexistence of different Radio Access Technologies (RATs), Next Generation Wireless Networks (NGWN) are predicted to be heterogeneous in nature. The coexistence of different RATs requires Common Radio Resource Management (CRRM) to support the provision of Quality of Service (QoS) and the efficient utilization of radio resources. RAT selection algorithms are part of the CRRM algorithms. Simply put, their role is to verify whether an incoming call can be accommodated in a heterogeneous wireless network and to decide which of the available RATs is most suitable to serve the incoming call and admit it. The goal of a RAT selection algorithm is to guarantee the QoS requirements of all accepted calls while providing the most efficient utilization of the available radio resources. Conventional call admission control algorithms are designed for homogeneous wireless networks and do not provide a solution suited to the heterogeneous wireless networks that will constitute the NGWN. Therefore, there is a need to develop RAT selection algorithms for heterogeneous wireless networks. In this paper, we propose an approach for RAT selection which includes receiving different criteria, assessing them and making decisions, then selecting the most suitable RAT for incoming calls. A comprehensive survey of different RAT selection algorithms for heterogeneous wireless networks is also presented.

Keywords: Heterogeneous Wireless Network, RAT selection algorithms, Next Generation Wireless Network (NGWN), Beyond 3G Network, Common Radio Resource Management (CRRM).

609 Transformations of Spatial Distributions of Bio-Polymers and Nanoparticles in Water Suspensions Induced by Resonance-Like Low Frequency Electrical Fields

Authors: A. A. Vasin, N. V. Klassen, A. M. Likhter

Abstract:

Water suspensions of inorganic (metals and oxides) and organic nano-objects (chitosan and collagen) were subjected to the treatment of direct and alternating electrical fields. In addition to quasi-periodical spatial patterning, resonance-like behavior of the spatial distributions of these suspensions has been found at low frequencies of the alternating electrical field. These resonances are explained as the result of the creation of equilibrium states of groups of charged nano-objects with opposite signs of charge at interparticle distances where the forces of Coulomb attraction are compensated by the repulsion forces induced by the relatively negative polarization of the hydrated regions surrounding the nanoparticles with respect to pure water. The low frequencies of these resonances are explained by the comparatively large distances between the particles and their large masses with respect to the masses of atoms constituting molecules with high resonance frequencies. These new resonances open a new approach to detailed modeling and understanding of the mechanisms of the influence of electrical fields on the functioning of the internal organs of living organisms at the level of cells and neurons.

Keywords: Bio-polymers, chitosan, collagen, nanoparticles, coulomb attraction, polarization repulsion, periodical patterning, electrical low frequency resonances, transformations.

608 Face Recognition Using Double Dimension Reduction

Authors: M. A Anjum, M. Y. Javed, A. Basit

Abstract:

In this paper a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient with better recognition results. In pattern recognition techniques, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results improve with increasing face image resolution and level off at a certain resolution level. In the proposed model of face recognition, an image decimation algorithm is first applied to the face image for dimension reduction to the resolution level which provides the best recognition results. Due to its computational speed and feature extraction potential, the Discrete Cosine Transform (DCT) is then applied to the face image. A subset of DCT coefficients from low to mid frequencies that represents the face adequately and provides the best recognition results is retained. A trade-off between the decimation factor, the number of DCT coefficients retained and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. The new model has been tested on different databases, including the ORL database, the Yale database and a color database. The proposed technique has performed much better compared to other techniques. The significance of the model is twofold: (1) dimension reduction up to an effective and suitable face image resolution, and (2) appropriate DCT coefficients are retained to achieve the best recognition results with varying image poses, intensity and illumination levels.
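
A minimal sketch of the double dimension reduction, assuming a decimation factor of 2 and an 8x8 block of retained low-to-mid-frequency DCT coefficients (the paper determines its optimal values experimentally):

```python
import numpy as np
from scipy.fft import dctn

def face_features(image, decimation=2, block=8):
    """Decimate the face image, take the 2-D DCT, and keep an upper-left block
    of low-to-mid frequency coefficients (illustrative sketch)."""
    small = image[::decimation, ::decimation].astype(float)   # simple decimation
    coeffs = dctn(small, norm="ortho")                        # 2-D DCT
    return coeffs[:block, :block].ravel()                     # retained coefficients

if __name__ == "__main__":
    face = np.random.rand(112, 92)        # ORL-sized image as a placeholder
    print(face_features(face).shape)      # (64,) feature vector
```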

Keywords: Biometrics, DCT, Face Recognition, Feature extraction.

607 Applying Element Free Galerkin Method on Beam and Plate

Authors: Mahdad M’hamed, Belaidi Idir

Abstract:

This paper develops a meshless approach, called the Element Free Galerkin (EFG) method, which is based on the weak form of the governing partial differential equations and employs Moving Least Squares (MLS) interpolation to construct the meshless shape functions. The variational weak form is used in the EFG method, where the trial and test functions are approximated by the MLS approximation. Since the shape functions constructed by this discretization have the weight function property based on the randomly distributed points, the essential boundary conditions can be implemented easily. The local weak form of the governing partial differential equations is obtained by the weighted residual method within a simple local quadrature domain. A spline function with high continuity is used as the weight function. The presently developed EFG method is a truly meshless method, as it does not require a mesh, either for the construction of the shape functions or for the integration of the local weak form. Several numerical examples of two-dimensional static structural analysis are presented to illustrate the performance of the present EFG method. They show that the EFG method is highly efficient in implementation and highly accurate in computation. The present method is used to analyze the static deflection of beams and of a plate with a hole.

Keywords: Numerical computation, element-free Galerkin, moving least squares, meshless methods.

606 MATLAB-based System for Centralized Monitoring and Self Restoration against Fiber Fault in FTTH

Authors: Mohammad Syuhaimi Ab-Rahman, Boonchuan Ng, Kasmiran Jumari

Abstract:

This paper presents a MATLAB-based system named Smart Access Network Testing, Analyzing and Database (SANTAD), designed for in-service transmission surveillance and self-restoration against fiber faults in fiber-to-the-home (FTTH) access networks. The developed program is installed with the optical line terminal (OLT) at the central office (CO) to monitor the status and detect any fiber fault that occurs in the FTTH network downstream from the CO towards residential customer locations. SANTAD is interfaced with an optical time domain reflectometer (OTDR) to accumulate every network testing result to be displayed on a single computer screen for further analysis. The program identifies and presents the parameters of each optical fiber line, such as the line status (working or non-working condition), the magnitude of loss at each point, the failure location, and other details shown on the OTDR screen. The failure status is delivered to field engineers for prompt action, while the failed line is diverted to a protection line to keep traffic flowing continuously. This approach has a bright prospect of improving survivability and reliability as well as increasing the efficiency and monitoring capabilities of FTTH.

Keywords: MATLAB, SANTAD, in-service transmission surveillance, self restoration, fiber fault, FTTH

605 Upgraded Cuckoo Search Algorithm to Solve Optimisation Problems Using Gaussian Selection Operator and Neighbour Strategy Approach

Authors: Mukesh Kumar Shah, Tushar Gupta

Abstract:

An upgraded Cuckoo Search Algorithm is proposed here to solve optimization problems, building on improvements made in earlier versions of the Cuckoo Search Algorithm. Shortcomings of the earlier versions, such as slow convergence and trapping in local optima, are addressed in the proposed version by random initialization of solutions using an Improved Lambda Iteration Relaxation method, a Random Gaussian Distribution Walk to improve local search, a Greedy Selection step to accelerate convergence to the optimized solution, and a “Study Nearby Strategy” to improve global search performance by avoiding entrapment in local optima. A crossover operation is further proposed to generate better solutions. The strategy used in the proposed algorithm shows superiority in terms of high convergence speed over several classical algorithms. Three standard algorithms were tested on a 6-generator standard test system, and the results presented clearly demonstrate the proposed algorithm's superiority over other established algorithms. The algorithm is also capable of handling larger unit systems.
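
A generic illustration of the Gaussian-walk local search with greedy selection and nest abandonment, applied to a simple sphere test function rather than the economic dispatch problem with prohibited zones and ramp limits; all step sizes and bounds are assumed values.

```python
import numpy as np

def upgraded_cuckoo_sketch(objective, dim=6, n_nests=25, iters=200,
                           step=0.1, pa=0.25, seed=0):
    """Illustrative cuckoo-search-style loop: Gaussian random walk for local
    search, greedy replacement, and abandonment of a fraction pa of nests."""
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5, 5, (n_nests, dim))
    fitness = np.array([objective(x) for x in nests])
    for _ in range(iters):
        # Gaussian-distribution random walk around each nest (local search).
        trial = nests + step * rng.normal(size=nests.shape)
        trial_fit = np.array([objective(x) for x in trial])
        improved = trial_fit < fitness            # greedy selection
        nests[improved], fitness[improved] = trial[improved], trial_fit[improved]
        # Abandon a fraction pa of the worst nests (global exploration).
        n_bad = int(pa * n_nests)
        worst = np.argsort(fitness)[-n_bad:]
        nests[worst] = rng.uniform(-5, 5, (n_bad, dim))
        fitness[worst] = np.array([objective(x) for x in nests[worst]])
    best = np.argmin(fitness)
    return nests[best], fitness[best]

if __name__ == "__main__":
    x_best, f_best = upgraded_cuckoo_sketch(lambda x: np.sum(x ** 2))
    print(f_best)   # should approach 0 on the sphere test function
```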

Keywords: Economic dispatch, Gaussian selection operator, prohibited operating zones, ramp rate limits, upgraded cuckoo search.

604 Material Analysis for Temple Painting Conservation in Taiwan

Authors: Chen-Fu Wang, Lin-Ya Kung

Abstract:

For traditional painting materials, artisans used to combine pigments with different binders to create colors. As time went by, the materials used for painting evolved from natural to chemical materials. The vast variety of ingredients used in chemical materials has complicated restoration work and makes conservation more difficult. Conservation work also becomes harder when the materials cannot be easily identified; therefore, it is essential to take a more scientific approach to assist conservation work. Painting materials are high-molecular-weight polymers, and their analysis is complicated; in addition, contamination such as smoke and dirt can interfere with the analysis of the material. Current methods for the composition analysis of painting materials include Fourier transform infrared spectroscopy (FT-IR), mass spectrometry, Raman spectroscopy, and X-ray diffraction spectroscopy (XRD), each of which has its own limitations. In this study, FT-IR was used to analyze the components of the paint coating. We took the most commonly seen materials as samples and artificially aged them. The ageing information was then used to build a database for examining temple painting materials. By observing the FT-IR changes over time, we can tell that all of the painting materials deteriorate under UV light, differing only in the speed of degradation. From the deterioration experiment, acrylic resin resists ageing better than the other materials. After collecting the ageing information of the painting materials with FT-IR, we performed tests on paintings in the temples. It was found that most artisans used tung oil as the painting medium, while some other paintings used chemical materials. This method is now working successfully for identifying painting materials. However, the method is destructive and costly. In future work, we will investigate how to identify painting materials more efficiently.

Keywords: Temple painting, painting material, conservation, FT-IR.

603 Evaluation of Deformable Boundary Condition Using Finite Element Method and Impact Test for Steel Tubes

Authors: Abed Ahmed, Mehrdad Asadi, Jennifer Martay

Abstract:

Stainless steel pipelines are crucial components for transportation and storage in the oil and gas industry. However, the rise of random attacks and vandalism on these pipes for their valuable contents has led to greater demand for security and protection against incoming surface impacts. These surface impacts can lead to large global deformations of the pipe and place the pipe under strain, causing the eventual failure of the pipeline. Therefore, understanding how these surface impact loads affect the pipes is vital to improving the pipes' security and protection. In this study, experimental tests and finite element analysis (FEA) have been carried out on EN3B stainless steel specimens to study the impact behaviour. Low-velocity impact tests at 9 m/s with a 16 kg dome impactor were used to simulate high-momentum impact leading to localised failure. FE models with clamped and deformable boundaries were built to study the effect of the boundary conditions on the pipe's impact behaviour and impact resistance. Comparison of the experiments and FE simulations shows good correlation for the deformable boundaries, validating the robustness of the FE model for implementation in pipe models with complex anisotropic structure.

Keywords: Dynamic impact, deformable boundary conditions, finite element modeling (FEM), LS-DYNA, stainless steel pipe.

602 A Comparison of Experimental Data with Monte Carlo Calculations for Optimisation of the Source-to-Detector Distance in Determining the Efficiency of a LaBr3:Ce (5%) Detector

Authors: H. Aldousari, T. Buchacher, N. M. Spyrou

Abstract:

Cerium-doped lanthanum bromide LaBr3:Ce(5%) crystals are considered to be among the most advanced scintillator materials used in PET scanning, combining a high light yield, fast decay time and excellent energy resolution. Apart from the correct choice of scintillator, it is also important to optimise the detector geometry, not least in terms of source-to-detector distance, in order to obtain reliable measurements and efficiency. In this study a commercially available 25 mm x 25 mm BrilLanCe™ 380 LaBr3:Ce(5%) detector was characterised in terms of its efficiency at varying source-to-detector distances. Gamma-ray spectra of 22Na, 60Co, and 137Cs were separately acquired at distances of 5, 10, 15, and 20 cm. As a result of the change in solid angle subtended by the detector, the geometric efficiency decreased with increasing distance. High efficiencies at short distances can cause pulse pile-up when subsequent photons are detected before previously detected events have decayed. To reduce this systematic error, the source-to-detector distance should balance efficiency against pulse pile-up suppression, since otherwise pile-up corrections would be necessary at short distances. In addition to the experimental measurements, Monte Carlo simulations have been carried out for the same setup, allowing a comparison of results. The advantages and disadvantages of each approach are highlighted.
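
For the geometric part only, the on-axis solid-angle fraction subtended by a 25 mm diameter detector face at the four distances can be estimated with the textbook relation Omega/4*pi = 0.5*(1 - d/sqrt(d^2 + r^2)); this is a simple approximation, not the simulation used in the paper.

```python
import numpy as np

def geometric_efficiency(distance_cm, radius_cm=1.25):
    """Fraction of 4*pi solid angle subtended by a circular detector face
    for a point source on the detector axis."""
    d, r = distance_cm, radius_cm
    return 0.5 * (1.0 - d / np.sqrt(d ** 2 + r ** 2))

for d in (5, 10, 15, 20):
    print(f"{d:2d} cm : {geometric_efficiency(d):.4f}")
```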

Keywords: BrilLanCe™ 380 LaBr3:Ce(5%), coincidence summing, GATE simulation, geometric efficiency.

601 Maximum Common Substructure Extraction in RNA Secondary Structures Using Clique Detection Approach

Authors: Shih-Yi Chao

Abstract:

The similarity comparison of RNA secondary structures is important in studying the functions of RNAs. In recent years, most existing tools have represented secondary structures by tree-based presentations and calculated the similarity by tree alignment distance. Different from previous approaches, we propose a new method based on a maximum clique detection algorithm to extract the maximum common structural elements in the compared RNA secondary structures. A new graph-based similarity measurement and maximum common subgraph detection procedure for comparing RNA secondary structures is introduced. Given two RNA secondary structures, the proposed algorithm consists of a process to determine the score of the structural similarity, followed by a comparison of vertex labels, labelled edges and the exact degree of each vertex. The proposed algorithm also consists of a process to extract the common structural elements between the compared secondary structures based on a proposed maximum clique formulation of the problem. This graph-based model can also work with the NC-IUB code to perform pattern-based searching. Therefore, it can be used to identify functional RNA motifs in a database or to extract common substructures between complex RNA secondary structures. We have demonstrated the performance of the proposed algorithm through experimental results. It provides a new way of comparing RNA secondary structures. This tool is helpful to those who are interested in structural bioinformatics.
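
One standard way to cast maximum common subgraph extraction as clique detection is via the modular (compatibility) product of the two labelled graphs; the sketch below uses NetworkX and toy graphs with a single dummy label, not the authors' RNA-specific representation.

```python
import networkx as nx
from itertools import product

def max_common_subgraph(g1, g2):
    """Maximum common induced subgraph of two labelled graphs via maximum
    clique detection on their modular product (illustrative sketch)."""
    prod = nx.Graph()
    # Vertices of the product: pairs of vertices carrying the same label.
    for u, v in product(g1.nodes, g2.nodes):
        if g1.nodes[u].get("label") == g2.nodes[v].get("label"):
            prod.add_node((u, v))
    # Edges: pairs that are simultaneously adjacent or simultaneously non-adjacent.
    for (u1, v1), (u2, v2) in product(prod.nodes, repeat=2):
        if u1 != u2 and v1 != v2:
            if g1.has_edge(u1, u2) == g2.has_edge(v1, v2):
                prod.add_edge((u1, v1), (u2, v2))
    # The largest maximal clique gives the largest common structural element.
    return max(nx.find_cliques(prod), key=len)

g1 = nx.path_graph(4)            # toy stand-ins for secondary-structure graphs
g2 = nx.cycle_graph(4)
for g in (g1, g2):
    nx.set_node_attributes(g, "N", "label")   # single dummy base label
print(max_common_subgraph(g1, g2))
```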

Keywords: Clique detection, labeled vertices, RNA secondary structures, subgraph, similarity.

600 Seamless Multicast Handover in Fmipv6-Based Networks

Authors: Moneeb Gohar, Seok Joo Koh, Tae-Won Um, Hyun-Woo Lee

Abstract:

This paper proposes a fast tree join scheme to provide seamless multicast handover in mobile networks based on Fast Mobile IPv6 (FMIPv6). In the existing FMIPv6-based multicast handover schemes, either bi-directional tunnelling or remote subscription is employed, with packet forwarding from the previous access router (AR) to the new AR. In general, the remote subscription approach is preferred to bi-directional tunnelling, since with remote subscription we can exploit an optimized multicast path from a multicast source to many mobile receivers. However, in the remote subscription scheme, if the tree joining operation takes a long time, the amount of data packets to be forwarded and buffered for multicast handover will increase, and thus the corresponding buffer may overflow, which results in severe packet losses. In order to reduce these costs associated with packet forwarding and buffering, this paper proposes a fast join to the multicast tree, in which the new AR joins the multicast tree as fast as possible so that new multicast data packets can also arrive at the new AR, thereby reducing the packet forwarding and buffering costs. Numerical analysis shows that the proposed scheme gives better performance than the existing FMIPv6-based multicast handover schemes in terms of multicast packet delivery costs.

Keywords: Mobile Multicast, FMIPv6, Seamless Handover, Fast Tree Join.

599 Recycled Plastic Fibers for Minimizing Plastic Shrinkage Cracking of Cement Based Mortar

Authors: B.S. Al-Tulaian, M. J. Al-Shannag, A.M. Al-Hozaimy

Abstract:

The development of new construction materials using recycled plastic is important to both the construction and the plastic recycling industries. Manufacturing fibers from industrial or post-consumer plastic waste is an attractive approach, with such benefits as concrete performance enhancement and reduced need for landfilling. The main objective of this study is to investigate the effect of plastic fibers obtained locally from recycled waste on the plastic shrinkage cracking of ordinary cement-based mortar. Parameters investigated include fiber length, ranging from 20 to 50 mm, and fiber volume fraction, ranging from 0% to 1.5% by volume. The test results showed significant improvement in the crack-arresting mechanism and a substantial reduction in the surface area of cracks for the mortar reinforced with recycled plastic fibers compared to plain mortar. Furthermore, the test results indicated a slight decrease in the compressive strength of mortar reinforced with different lengths and contents of recycled fibers compared to plain mortar. This study suggests that adding more than 1% of RP fibers to mortar can be used effectively for controlling plastic shrinkage cracking of cement-based mortar, and thus results in waste reduction and resource conservation.

Keywords: Mortar, plastic, shrinkage cracking, compressive strength, RF recycled fibers.

598 Improving Worm Detection with Artificial Neural Networks through Feature Selection and Temporal Analysis Techniques

Authors: Dima Stopel, Zvi Boger, Robert Moskovitch, Yuval Shahar, Yuval Elovici

Abstract:

Computer worm detection is commonly performed by antivirus software tools that rely on prior explicit knowledge of the worm's code (detection based on code signatures). We present an approach for detecting the presence of computer worms based on Artificial Neural Networks (ANN) using the computer's behavioral measures. Identification of significant features, which describe the activity of a worm within a host, is commonly acquired from security experts. We suggest acquiring these features by applying feature selection methods. We compare three different feature selection techniques for dimensionality reduction and identification of the most prominent features in order to capture efficiently the computer behavior in the context of worm activity. Additionally, we explore three different temporal representation techniques for the most prominent features. In order to evaluate the different techniques, several computers were infected with five different worms and 323 different features of the infected computers were measured. We evaluated each technique by preprocessing the dataset accordingly and training the ANN model with the preprocessed data. We then evaluated the ability of the model to detect the presence of a new computer worm, in particular during heavy user activity on the infected computers.
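
A hedged sketch of the overall pipeline (univariate feature selection followed by an ANN classifier) using scikit-learn; the filter, the k value, the network topology and the placeholder behavioural data are assumptions for illustration, not the techniques compared in the paper.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Placeholder behavioural data: 500 time windows x 323 host measurements,
# labelled 1 when a worm was active and 0 otherwise.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 323)), rng.integers(0, 2, 500)

# A univariate filter keeps the most prominent features before the ANN.
model = make_pipeline(SelectKBest(score_func=f_classif, k=20),
                      MLPClassifier(hidden_layer_sizes=(30,), max_iter=1000))
model.fit(X, y)
print("training accuracy after feature selection:", model.score(X, y))
```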

Keywords: Artificial Neural Networks, Feature Selection, Temporal Analysis, Worm Detection.

597 Low Resolution Single Neural Network Based Face Recognition

Authors: Jahan Zeb, Muhammad Younus Javed, Usman Qayyum

Abstract:

This research paper deals with the implementation of face recognition using a neural network (as the recognition classifier) on low-resolution images. The proposed system contains two parts, preprocessing and face classification. The preprocessing part converts original images into blurry images using an average filter and equalizes the histogram of those images (lighting normalization). A bi-cubic interpolation function is applied to the equalized image to obtain a resized image. The resized image is a low-resolution image, providing faster processing for training and testing. The preprocessed image becomes the input to the neural network classifier, which uses the back-propagation algorithm to recognize familiar faces. The crux of the proposed algorithm is its use of a single neural network as the classifier, which produces a straightforward approach to face recognition. The single neural network consists of three layers with log-sigmoid, hyperbolic tangent sigmoid and linear transfer functions, respectively. The training function incorporated in our work is gradient descent with momentum (adaptive learning rate) back-propagation. The proposed algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images per subject. The empirical results provide accuracies of 94.50%, 93.00% and 90.25% for 20, 30 and 40 subjects, respectively, with a time delay of 0.0934 s per image.
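
An approximation of the preprocessing chain using OpenCV (average filter, histogram equalization, bicubic resize); the kernel size and the target resolution are assumed values, not those tuned in the paper.

```python
import cv2
import numpy as np

def preprocess_face(gray, target=(23, 28), blur_kernel=(3, 3)):
    """Average filtering, histogram equalization and bicubic resizing,
    approximating the preprocessing chain described above."""
    blurred = cv2.blur(gray, blur_kernel)                 # average (box) filter
    equalized = cv2.equalizeHist(blurred)                 # lighting normalization
    resized = cv2.resize(equalized, target,
                         interpolation=cv2.INTER_CUBIC)   # bicubic interpolation
    # Flatten to a low-resolution input vector for the neural network classifier.
    return resized.astype(np.float32).ravel() / 255.0

if __name__ == "__main__":
    face = np.random.randint(0, 256, (112, 92), dtype=np.uint8)  # ORL-sized image
    print(preprocess_face(face).shape)
```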

Keywords: Average filtering, Bicubic Interpolation, Neurons, vectorization.

596 The Wavelet-Based DFT: A New Interpretation, Extensions and Applications

Authors: Abdulnasir Hossen, Ulrich Heute

Abstract:

In 1990 [1] the subband-DFT (SB-DFT) technique was proposed. This technique uses Hadamard filters in the decomposition step to split the input sequence into low-pass and high-pass sequences. In the next step, either two DFTs are applied to both bands to compute the full-band DFT, or one DFT is applied to one of the two bands to compute an approximate DFT. A combination network with correction factors is applied after the DFTs. Another approach was proposed in 1997 [2] for using a special discrete wavelet transform (DWT) to compute the discrete Fourier transform (DFT). In the first step of that algorithm, the input sequence is decomposed, in a similar manner to the SB-DFT, into two sequences using wavelet decomposition with Haar filters. The second step is to perform DFTs on both bands to obtain the full-band DFT, or to obtain a fast approximate DFT by implementing pruning at both the input and output sides. In this paper, the wavelet-based DFT (W-DFT) with Haar filters is interpreted as the SB-DFT with Hadamard filters; the only difference is a constant factor in the combination network. This result is important for completing the analysis of the W-DFT, since all the results concerning the accuracy and approximation errors of the SB-DFT become applicable. An application example in spectral analysis is given for both the SB-DFT and the W-DFT (with different filters). The adaptive capability of the SB-DFT is included in the W-DFT algorithm to select the band of most energy as the band to be computed. Finally, the W-DFT is extended to the two-dimensional case. An application in image transformation is given using two different types of wavelet filters.
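
The half-band decomposition and combination network can be checked numerically: with a[n] = x[2n] + x[2n+1] and b[n] = x[2n] - x[2n+1], the full-band DFT follows as X[k] = 0.5[(1 + W_N^k) A[k mod N/2] + (1 - W_N^k) B[k mod N/2]], where W_N = exp(-j2*pi/N) and A, B are the N/2-point DFTs of the two sequences. The sketch below verifies this identity against a direct FFT; the correction factors match the combination network described above up to the constant factor discussed in the paper.

```python
import numpy as np

def subband_dft(x):
    """Full-band DFT computed from Hadamard/Haar half-band sequences,
    illustrating the combination network with correction factors."""
    x = np.asarray(x, dtype=complex)
    N = len(x)                       # N assumed even
    a = x[0::2] + x[1::2]            # low-pass (sum) sequence
    b = x[0::2] - x[1::2]            # high-pass (difference) sequence
    A, B = np.fft.fft(a), np.fft.fft(b)
    k = np.arange(N)
    W = np.exp(-2j * np.pi * k / N)  # correction (twiddle) factors
    return 0.5 * ((1 + W) * A[k % (N // 2)] + (1 - W) * B[k % (N // 2)])

x = np.random.rand(16)
print(np.allclose(subband_dft(x), np.fft.fft(x)))   # True
```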

Keywords: Image Transform, Spectral Analysis, Sub-Band DFT, Wavelet DFT.

595 Computational Investigation of Air-Gas Venturi Mixer for Powered Bi-Fuel Diesel Engine

Authors: Mofid Gorjibandpy, Mehdi Kazemi Sangsereki

Abstract:

In a bi-fuel diesel engine, the carburetor plays a vital role in switching from gas-fuel to petrol mode operation and vice versa. The carburetor is the most important part of the fuel system of such an engine, and all such engines carry variable venturi mixer carburetors. The basic operation of the carburetor mainly depends on the restriction barrel called the venturi: when air flows through the venturi, its speed increases and its pressure decreases. The main challenge is designing a mixing device which mixes the supplied gas with the incoming air at an optimum ratio. In order to surmount the identified problems, the way fuel gas and air flow in the mixer has to be analyzed. In this case, the Computational Fluid Dynamics (CFD) approach is applied to the design of the prototype mixer. The present work is aimed at further understanding the air and fuel flow structure by performing CFD studies using a software code. In this study, several mixers have been designed for mixing air and gas under the conditions mentioned above, and the optimum mixer has been selected using computational fluid dynamics. The results indicated that a mixer with 12 holes produces a more homogeneous mixture than the 8-hole and 6-hole mixers. The results also showed that if the inlet convergence is smoother than the outlet divergence, the mixture becomes more homogeneous, owing to the increased turbulence in the outlet divergence.

Keywords: Computational Fluid Dynamics, Venturi mixer, Air-fuel ratio, Turbulence.

594 Seismic Performance of Slopes Subjected to Earthquake Mainshock Aftershock Sequences

Authors: Alisha Khanal, Gokhan Saygili

Abstract:

It is commonly observed that aftershocks follow the mainshock. Aftershocks continue over a period of time with decreasing frequency, and typically there is not sufficient time for repair and retrofit between a mainshock and its aftershocks. Usually, aftershocks are smaller in magnitude; however, aftershock ground motion characteristics such as intensity and duration can be greater than those of the mainshock due to changes in the earthquake mechanism and location with respect to the site. The seismic performance of slopes is typically evaluated based on the sliding displacement predicted to occur along a critical sliding surface. Various empirical models are available that predict sliding displacement as a function of seismic loading parameters, ground motion parameters, and site parameters, but these models do not include aftershocks. The seismic risk associated with post-mainshock slopes ('damaged slopes') subjected to aftershocks is significant. This paper extends the empirical sliding displacement models to flexible slopes subjected to earthquake mainshock-aftershock sequences (a multi-hazard approach). A dataset was developed using 144 pairs of as-recorded mainshock-aftershock sequences from the Pacific Earthquake Engineering Research Center (PEER) database. The results reveal that the combination of mainshock and aftershock increases the seismic demand on slopes relative to the mainshock alone; thus, seismic risks are underestimated if aftershocks are neglected.

Keywords: Seismic slope stability, sliding displacement, mainshock, aftershock, landslide, earthquake.

593 On the Catalytic Combustion Behaviors of CH4 in a MCFC Power Generation System

Authors: Man Young Kim

Abstract:

Catalytic combustion is generally accepted as an environmentally preferred alternative for the generation of heat and power from fossil fuels, mainly due to its advantages related to stable combustion under very lean conditions with low emissions of NOx, CO, and UHC at temperatures lower than those occurring in conventional flame combustion. Despite these advantages, the commercial application of catalytic combustion has been delayed because of complicated reaction processes and the difficulty of developing appropriate catalysts with the required stability and durability. To develop catalytic combustors, detailed studies of the combustion characteristics of catalytic combustion should be conducted. To this end, in the current research, quantitative numerical studies on the combustion characteristics of catalytic combustors with a Pd-based catalyst for MCFC power generation systems have been conducted. In addition, data from experimental studies of variations in outlet temperature and fuel conversion, taken under various operating conditions, have been used to validate the present numerical approach. After introducing the governing equations for mass, momentum, and energy as well as a description of the catalytic combustion kinetics, the effects of the excess air ratio, space velocity, and inlet gas temperature on the catalytic combustion characteristics are extensively investigated. Quantitative comparisons are also made with previous experimental data. Finally, some concluding remarks are presented.

Keywords: Catalytic combustion, Methane, BOP, MCFC power generation system, Inlet temperature, Excess air ratio, Space velocity.

592 Laser Data Based Automatic Generation of Lane-Level Road Map for Intelligent Vehicles

Authors: Zehai Yu, Hui Zhu, Linglong Lin, Huawei Liang, Biao Yu, Weixin Huang

Abstract:

With the development of intelligent vehicle systems, a high-precision road map is increasingly needed in many respects. Automatic lane line extraction and modeling are the most essential steps in the generation of a precise lane-level road map. In this paper, an automatic lane-level road map generation system is proposed. To extract the road markings on the ground, the multi-region Otsu thresholding method is applied, which calculates the intensity value of the laser data that maximizes the variance between background and road markings. The extracted road marking points are then projected onto a raster image and clustered using a two-stage clustering algorithm. Lane lines are subsequently recognized from these clusters by the shape features of their minimum bounding rectangles. To ensure the storage efficiency of the map, the lane lines are approximated by cubic polynomial curves using a Bayesian estimation approach. The proposed lane-level road map generation system has been tested under urban and expressway conditions in Hefei, China. The experimental results on the datasets show that our method achieves excellent extraction and clustering performance, and the fitted lines reach a high positional accuracy with an error of less than 10 cm.
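
The two numerical ingredients, per-region Otsu thresholding of the intensity raster and cubic fitting of clustered lane points, are sketched below; np.polyfit is a simple least-squares stand-in for the paper's Bayesian estimate, and the tiling scheme and data are placeholders.

```python
import numpy as np
from skimage.filters import threshold_otsu

def multi_region_otsu(intensity, n_rows=4, n_cols=4):
    """Threshold laser intensity per image region so that road markings are
    separated from the background locally (illustrative sketch)."""
    mask = np.zeros_like(intensity, dtype=bool)
    h, w = intensity.shape
    for i in range(n_rows):
        for j in range(n_cols):
            rs, re = i * h // n_rows, (i + 1) * h // n_rows
            cs, ce = j * w // n_cols, (j + 1) * w // n_cols
            tile = intensity[rs:re, cs:ce]
            if tile.size and tile.max() > tile.min():
                mask[rs:re, cs:ce] = tile > threshold_otsu(tile)
    return mask

def fit_lane(points_xy):
    """Approximate one clustered lane line by a cubic polynomial y = f(x)."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    return np.polyfit(x, y, deg=3)

intensity = np.random.rand(200, 200)             # placeholder raster intensity
x = np.linspace(0, 50, 100)
pts = np.column_stack([x, 0.002 * x ** 3])       # placeholder lane cluster
print(multi_region_otsu(intensity).sum(), fit_lane(pts))
```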

Keywords: Curve fitting, lane-level road map, line recognition, multi-thresholding, two-stage clustering.

591 Shifting Paradigms of Culture: Rise of Secular Sensibility in Indian Literature

Authors: Nidhi Chouhan

Abstract:

The burgeoning demand for 'secularism' has shaken the pillars of cultural studies in contemporary literature. The perplexity of the culturally estranged term 'secular' gives rise to temporal ideologies across the world. Hence, it is high time to scan this concept in the context of the Indian lifestyle, which is a blend of assimilated cultures woven in multiple religious fabrics. The infliction of such a secular taste is depicted in literary productions like 'Satanic Verses' and 'An Area of Darkness'. The paper conceptually makes a cross-cultural analysis of anti-religious Indian literary texts, assessing their revitalization in current times. Further, this paper studies the increasing popularity of secular sensibility in contemporary times. The mushrooming elements of secularism, such as abstraction, spirituality, liberation and individualism, give rise to a seemingly newer idea, i.e. 'plurality', making the literature highly hybrid. This approach has been used to study Indian modernity as reflected in its literature. Seminal works of stalwarts are used to understand the consequences of this cultural synthesis. In conclusion, this theoretical research inspects the efficiency of secular culture, intertwined with internal coherence, and throws light on the plurality of texts in Indian literature.

Keywords: Culture, Indian, literature, plurality, religion, secular, secularism.

590 An Ontological Approach to Existentialist Theatre and Theatre of the Absurd in the Works of Jean-Paul Sartre and Samuel Beckett

Authors: Gülten Silindir Keretli

Abstract:

The aim of this study is to analyse the works of the playwrights within the framework of existential philosophy and to observe ontological existence in the plays No Exit and Endgame. The literary works are discussed separately in each section of this study. The despair of the post-war generation of Europe problematized the 'human condition' in every field of literature, which is the very product of social upheaval. With this concern in mind, Sartre's creative works portrayed man as a lonely being, burdened with a terrifying freedom to choose and create his own meaning in an apparently meaningless world. Traces of existential thought are to be found throughout the history of philosophy and literature. On the other hand, the theatre of the absurd is a form of drama showing the absurdity of the human condition, and it is heavily influenced by existential philosophy. Beckett is the most influential playwright of the theatre of the absurd, and the themes and thoughts in his plays share many tenets of the existential philosophy. Existential philosophy posits the meaninglessness of existence and regards man as being thrown into the universe and into desolate isolation. To overcome loneliness and isolation, the human ego needs recognition from other people. Sartre calls this need for recognition the need for 'the Look' (le regard) from the Other. In this paper, existentialist philosophy and existentialist angst are elaborated, and then the works of existentialist theatre and the theatre of the absurd are discussed within the framework of existential philosophy.

Keywords: Consciousness, existentialism, the notion of absurd, the other.

589 Artificial Intelligent Approach for Machining Titanium Alloy in a Nonconventional Process

Authors: Md. Ashikur Rahman Khan, M. M. Rahman, K. Kadirgama

Abstract:

Artificial neural networks (ANN) are used in many research fields and professions and are developed through the cooperation of scientists in different disciplines such as computer engineering, electronics, structural engineering and biology, among many other branches of science. Many models correlating the process parameters and the outputs in electrical discharge machining (EDM) have been built for different types of materials. Up till now, a model of the electrical discharge machining performance characteristics of the Ti-5Al-2.5Sn alloy has not been developed. Therefore, in the present work, an attempt is made to generate a model of the material removal rate (MRR) for Ti-5Al-2.5Sn material by means of an Artificial Neural Network. The experimentation is performed according to a design of experiments (DOE) based on response surface methodology (RSM). To generate the DOE, four parameters, namely peak current, pulse-on time, pulse-off time and servo voltage, and one output, the MRR, are considered. The Ti-5Al-2.5Sn alloy is machined with a positive-polarity copper electrode. Finally, the developed model is tested with a confirmation test, which yields an error within the acceptable limit. To investigate the effect of the parameters on performance, a sensitivity analysis is also carried out, which reveals that the peak current has the greatest effect on EDM performance.
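
A hedged sketch of an ANN regression model mapping the four DOE factors to MRR using scikit-learn; the network topology, parameter ranges and synthetic data are illustrative assumptions, not the experimental results reported in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder DOE-style data: columns are peak current (A), pulse-on time (us),
# pulse-off time (us) and servo voltage (V); target is MRR (mm^3/min).
rng = np.random.default_rng(1)
X = rng.uniform([5, 10, 10, 40], [30, 400, 400, 100], size=(60, 4))
y = 0.02 * X[:, 0] * np.log(X[:, 1]) - 0.001 * X[:, 2]      # synthetic response

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8, 4), max_iter=5000))
model.fit(X, y)
print("predicted MRR:", model.predict([[20, 200, 100, 60]])[0])
```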

Keywords: Ti-5Al-2.5Sn, material removal rate, copper tungsten, positive polarity, artificial neural network, multi-layer perceptron.

588 Heat Transfer and Entropy Generation in a Partial Porous Channel Using LTNE and Exothermicity/Endothermicity Features

Authors: Mohsen Torabi, Nader Karimi, Kaili Zhang

Abstract:

This work provides a comprehensive study of the heat transfer and entropy generation rates of a horizontal channel partially filled with a porous medium which experiences internal heat generation or consumption due to an exothermic or endothermic chemical reaction. The focus is on the local thermal non-equilibrium (LTNE) model. The LTNE approach delivers more accurate data regarding the temperature distribution within the system and accordingly provides more accurate Nusselt numbers and entropy generation rates. The Darcy-Brinkman model is used for the momentum equations, and constant heat flux is assumed as the boundary condition on both the upper and lower surfaces. Analytical solutions are provided for both the velocity and temperature fields. By incorporating the obtained velocity and temperature formulas into the fundamental equations for entropy generation, both local and total entropy generation rates are plotted for a number of cases. Bifurcation phenomena regarding the temperature distribution and the interface heat flux ratio are observed. It is found that the exothermicity or endothermicity characteristic of the channel has a considerable impact on the temperature fields and entropy generation rates.

Keywords: Entropy generation, exothermicity, endothermicity, forced convection, local thermal non-equilibrium, analytical modeling.
