Search results for: minimum noise

200 Laser Data Based Automatic Generation of Lane-Level Road Map for Intelligent Vehicles

Authors: Zehai Yu, Hui Zhu, Linglong Lin, Huawei Liang, Biao Yu, Weixin Huang

Abstract:

With the development of intelligent vehicle systems, a high-precision road map is increasingly needed in many aspects. The automatic extraction and modeling of lane lines are the most essential steps for the generation of a precise lane-level road map. In this paper, an automatic lane-level road map generation system is proposed. To extract the road markings on the ground, the multi-region Otsu thresholding method is applied, which calculates the intensity value of the laser data that maximizes the variance between background and road markings. The extracted road marking points are then projected onto the raster image and clustered using a two-stage clustering algorithm. Lane lines are subsequently recognized from these clusters by the shape features of their minimum bounding rectangle. To ensure the storage efficiency of the map, the lane lines are approximated by cubic polynomial curves using a Bayesian estimation approach. The proposed lane-level road map generation system has been tested under urban and expressway conditions in Hefei, China. The experimental results on the datasets show that our method achieves excellent extraction and clustering performance, and the fitted lines reach a high position accuracy with an error of less than 10 cm.
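As a point of reference, the sketch below shows the single-region Otsu step that the multi-region method builds on: it picks the laser-intensity threshold that maximizes the between-class variance between ground and markings. The array names and the synthetic intensity values are illustrative only, not the authors' data or their per-region partitioning.

```python
import numpy as np

def otsu_threshold(intensities, bins=256):
    """Return the intensity value that maximizes the between-class variance."""
    hist, edges = np.histogram(intensities, bins=bins)
    p = hist.astype(float) / hist.sum()           # probability per intensity bin
    omega = np.cumsum(p)                          # cumulative weight of the "background" class
    mu = np.cumsum(p * np.arange(bins))           # cumulative mean (in bin-index units)
    mu_t = mu[-1]                                 # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))  # between-class variance
    k = np.nanargmax(sigma_b)
    return edges[k + 1]                           # threshold separating markings from ground

# Usage on synthetic laser intensity returns (illustrative values only)
rng = np.random.default_rng(0)
ground = rng.normal(40, 10, 5000)        # low-reflectivity asphalt
markings = rng.normal(180, 15, 500)      # high-reflectivity paint
print(otsu_threshold(np.concatenate([ground, markings])))
```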

Keywords: Curve fitting, lane-level road map, line recognition, multi-thresholding, two-stage clustering.

199 Numerical Study on CO2 Pollution in an Ignition Chamber by Oxygen Enrichment

Authors: Zohreh Orshesh

Abstract:

In this study, a 3D combustion chamber was simulated using FLUENT 6.32. The aim was to obtain accurate information about the combustion profile in the furnace and to examine the effect of oxygen enrichment on the combustion process. Oxygen enrichment is an effective way to reduce combustion pollutants. The air to fuel flow rate ratio is varied as 1.3, 3.2 and 5.1, and the oxygen-enriched flow rates are 28, 54 and 68 lit/min. Combustion simulations typically involve the solution of turbulent flows with heat transfer, species transport and chemical reactions. It is common to use the Reynolds-averaged form of the governing equations in conjunction with a suitable turbulence model. The 3D Reynolds Averaged Navier Stokes (RANS) equations with the standard k-ε turbulence model are solved together by the Fluent 6.3 software. A first order upwind scheme is used to discretize the governing equations, and the SIMPLE algorithm is used for pressure-velocity coupling. Species mass fractions at the wall are assumed to have zero normal gradients. Results show that the minimum mole fraction of CO2 occurs when the air to fuel flow rate ratio is 5.1. Additionally, in a fixed oxygen enrichment condition, increasing the air to fuel ratio increases the temperature peak. As a result, oxygen enrichment can reduce the CO2 emission of this kind of furnace at high air to fuel ratios.

Keywords: Combustion chamber, Oxygen enrichment, Reynolds-Averaged Navier-Stokes, CO2 emission

198 Iris Recognition Based On the Low Order Norms of Gradient Components

Authors: Iman A. Saad, Loay E. George

Abstract:

The iris pattern is an important biological feature of the human body; it has become a very hot topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient and fast method is introduced to extract a set of discriminatory features using a first order gradient operator applied to grayscale images. The gradient based features are robust, to a certain extent, against the variations that may occur in the contrast or brightness of iris image samples; these variations mostly occur due to lighting differences and camera changes. At first, the iris region is located; after that it is remapped to a rectangular area of size 360x60 pixels. Also, a new method is proposed for detecting eyelash and eyelid points; it relies on statistical analysis of the image to mark the eyelash and eyelid as noise points. In order to cover the feature localization (variation), the rectangular iris image is partitioned into N overlapped sub-images (blocks); then from each block a set of different average directional gradient density values is calculated to be used as a texture feature vector. The applied gradient operators are taken along the horizontal, vertical and diagonal directions. The low order norms of the gradient components were used to establish the feature vector. A Euclidean distance based classifier was used as a matching metric for determining the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database; the attained recognition accuracy reached up to 99.92%.
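To make the feature pipeline concrete, the following hedged sketch computes block-wise low-order (p < 1) norms of directional gradient components over the unwrapped 360x60 iris rectangle and matches them with a Euclidean-distance classifier. The block grid, the exponent p, and the use of non-overlapping blocks are simplifying assumptions; the paper itself uses overlapped blocks and its own operator set.

```python
import numpy as np

def gradient_norm_features(iris_rect, blocks=(6, 36), p=0.5):
    """Block-wise low-order (p < 1) norm densities of directional gradients
    on the unwrapped 360x60 iris rectangle (illustrative block layout)."""
    img = iris_rect.astype(float)
    gy, gx = np.gradient(img)                            # vertical / horizontal components
    gdiag = np.roll(np.roll(img, -1, 0), -1, 1) - img    # simple diagonal difference
    h, w = img.shape
    bh, bw = h // blocks[0], w // blocks[1]
    feats = []
    for comp in (gx, gy, gdiag):
        for i in range(blocks[0]):
            for j in range(blocks[1]):
                blk = comp[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
                feats.append(np.mean(np.abs(blk) ** p))  # low-order norm density of the block
    return np.asarray(feats)

def match(query, templates):
    """Return the index of the stored template with minimum Euclidean distance."""
    d = np.linalg.norm(templates - query, axis=1)
    return int(np.argmin(d)), float(d.min())
```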

Keywords: Iris recognition, contrast stretching, gradient features, texture features, Euclidean metric.

197 Effect on the Performance of the Nano-Particulate Graphite Lubricant in the Turning of AISI 1040 Steel under Variable Machining Conditions

Authors: S. Srikiran, Dharmala Venkata Padmaja, P. N. L. Pavani, R. Pola Rao, K. Ramji

Abstract:

Technological advancements in the development of cutting tools and coolant/lubricant chemistry have enhanced the machining capabilities of hard materials under higher machining conditions. The generation of high temperatures at the cutting zone during machining is one of the most important and pertinent problems, which adversely affects the tool life and surface finish of the machined components. Generally, cutting fluids and solid lubricants are used to overcome the problem of heat generation, but they do not address the problem effectively. With technological advancements in the field of tribology, nano-level particulate solid lubricants are nowadays being used in machining operations, especially in the areas of turning and grinding. The present investigation analyses the effect of using nano-particulate graphite powder as a lubricant in the turning of AISI 1040 steel under variable machining conditions and studies its effect on the cutting forces, tool temperature and surface roughness of the machined component. Experiments revealed that the cutting forces and tool temperature increase, resulting in a decrease in surface quality, as the size of the nano-particulate graphite powder used as lubricant decreases.

Keywords: Solid lubricant, graphite, minimum quantity lubrication, nanoparticles.

196 Replicating Brain’s Resting State Functional Connectivity Network Using a Multi-Factor Hub-Based Model

Authors: B. L. Ho, L. Shi, D. F. Wang, V. C. T. Mok

Abstract:

The brain’s functional connectivity, while temporally non-stationary, does express consistency at a macro spatial level. The study of stable resting state connectivity patterns hence provides opportunities for the identification of diseases if such stability is severely perturbed. A mathematical model replicating the brain’s spatial connections will be useful for understanding the brain’s representative geometry and complements the empirical model where it falls short. Empirical computations tend to involve large matrices and become infeasible with fine parcellation. However, the proposed analytical model has no such computational problems. To improve replicability, data from 92 subjects are obtained from two open sources. The proposed methodology, inspired by financial theory, uses multivariate regression to find relationships of every cortical region of interest (ROI) with some pre-identified hubs. These hubs act as representatives for the entire cortical surface. A variance-covariance framework of all ROIs is then built based on these relationships to link up all the ROIs. The result is a high level of match between model and empirical correlations, in the range of 0.59 to 0.66 after adjusting for sample size; an increase of almost forty percent. More significantly, the model framework provides an intuitive way to delineate between systemic drivers and idiosyncratic noise while reducing dimensions by more than 30 fold, hence providing a way to conduct attribution analysis. Due to its analytical nature and simple structure, the model is useful as a standalone toolkit for network dependency analysis or as a module for other mathematical models.
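A minimal sketch of the hub-based idea, assuming a (time x ROI) data matrix and a hand-picked hub list: each ROI is regressed on the hub time series, and a model covariance is rebuilt from the hub covariance (systemic part) plus residual variances (idiosyncratic part). The function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def hub_model_correlation(ts, hub_idx):
    """ts: (T, N) array of ROI time series; hub_idx: indices of pre-identified hubs.
    Returns a model correlation matrix built from hub loadings + residual variance."""
    T, N = ts.shape
    X = np.column_stack([np.ones(T), ts[:, hub_idx]])        # intercept + hub regressors
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)             # (1+H, N) multivariate regression
    resid = ts - X @ beta
    B = beta[1:, :]                                           # drop the intercept row
    cov_hubs = np.cov(ts[:, hub_idx], rowvar=False)           # systemic drivers
    model_cov = B.T @ cov_hubs @ B + np.diag(resid.var(axis=0))   # + idiosyncratic noise
    d = np.sqrt(np.diag(model_cov))
    return model_cov / np.outer(d, d)

# usage (hub indices are hypothetical): corr = hub_model_correlation(bold_ts, [3, 17, 42, 88])
```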

Keywords: Functional magnetic resonance imaging, multivariate regression, network hubs, resting state functional connectivity.

195 Obtaining High-Dimensional Configuration Space for Robotic Systems Operating in a Common Environment

Authors: U. Yerlikaya, R. T. Balkan

Abstract:

In this research, a method is developed to obtain a high-dimensional configuration space for path planning problems. In typical cases, path planning problems are solved directly in the 3-dimensional (3-D) workspace. However, this method is inefficient in handling robots with various geometrical and mechanical restrictions. To overcome these difficulties, path planning may be formalized and solved in a new space which is called the configuration space. The number of dimensions of the configuration space comes from the degrees of freedom of the system of interest. The method can be applied in two ways. In the first way, the point clouds of all the bodies of the system and the interactions between them are used. The second way is performed by using the clearance function of simulation software, where the minimum distances between the surfaces of bodies are simultaneously measured. A double-turret system is considered in the scope of this study. The 4-D configuration space of the double-turret system is obtained in these two ways. As a result, the difference between the two methods is around 1%, depending on the density of the point cloud; the disparity steadily decreases as the point cloud density increases. At the end of the study, in order to verify the obtained 4-D configuration space, the 4-D path planning problem was realized as 2-D + 2-D and a sample path planning was carried out using the A* algorithm. Then, the accuracy of the configuration space was proved using the obtained paths on the simulation model of the double-turret system.
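For the verification step described above (4-D planning realized as 2-D + 2-D), a standard A* search over a 2-D occupancy grid looks roughly as follows; the toy grid and Manhattan heuristic are placeholders for the actual C-space slices of the double-turret system.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2-D occupancy grid (0 = free, 1 = obstacle in C-space)."""
    def h(a, b):                                   # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    open_set = [(h(start, goal), 0, start, None)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, cost, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue
        came_from[node] = parent
        if node == goal:                           # rebuild path by walking parents back
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and cost + 1 < g.get(nxt, float("inf"))):
                g[nxt] = cost + 1
                heapq.heappush(open_set, (g[nxt] + h(nxt, goal), g[nxt], nxt, node))
    return None                                    # no collision-free path exists

# usage on a toy 2-D slice of the turret C-space (values are illustrative)
grid = [[0, 0, 0, 1], [1, 1, 0, 1], [0, 0, 0, 0], [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))
```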

Keywords: A* Algorithm, autonomous turrets, high-dimensional C-Space, manifold C-Space, point clouds.

194 Multi-Stage Multi-Period Production Planning in Wire and Cable Industry

Authors: Mahnaz Hosseinzadeh, Shaghayegh Rezaee Amiri

Abstract:

This paper presents a methodology for the serial production planning problem in the wire and cable manufacturing process that addresses the problem of input-output imbalance in different consecutive stations, with the aim of minimizing the halting of machines in each stage. To this end, a linear Goal Programming (GP) model is developed, in which four main categories of constraints are considered: the number of runs per machine, the machines' sequences, the acceptable inventories of machines at the end of each period, and the necessity of fulfilling the customers' orders. The model is formulated based upon real data obtained from IKO TAK Company, an important supplier of wire and cable for the oil, gas and automotive industries in Iran. By solving the model in the GAMS software, the optimal number of runs, the end-of-period inventories, and the minimum possible idle time for each machine are calculated. The application of the numerical results in the target company has shown the efficiency of the proposed model and solution in decreasing the lead time of end-product delivery to the customers by 20%. Accordingly, the developed model could easily be applied in wire and cable companies for optimal production planning to reduce the halting of machines in the manufacturing stages.
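A minimal goal-programming sketch in the spirit of the model, using scipy's linprog: an input-output balance goal between two consecutive machines is softened with deviation variables whose sum is minimized, while the customer order is kept as a hard constraint. The coefficients and the demand figure are invented for illustration, and integrality of the number of runs is ignored.

```python
from scipy.optimize import linprog

# x1, x2 = runs on two consecutive machines; dn, dp = under-/over-achievement of the
# input-output balance goal a1*x1 - a2*x2 = 0; minimizing dn + dp minimizes the
# imbalance that would otherwise force a machine to halt. Numbers are illustrative.
a1, a2 = 40.0, 55.0                       # output units produced per run on each machine
demand = 440.0                            # customer order (hard constraint)

c = [0.0, 0.0, 1.0, 1.0]                  # variables [x1, x2, dn, dp]; minimize dn + dp
A_eq = [[a1, -a2, 1.0, -1.0]]             # balance goal with deviation variables
b_eq = [0.0]
A_ub = [[0.0, -a2, 0.0, 0.0]]             # -a2*x2 <= -demand  (i.e. a2*x2 >= demand)
b_ub = [-demand]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
print(res.x)                              # runs per machine and the residual imbalance
```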

Keywords: Serial manufacturing process, production planning, wire and cable industry, goal programming approach.

193 Comparison of Different Channel Modeling Techniques Used in BPLC Systems

Authors: Justinian Anatory, Nelson Theethayi

Abstract:

The paper compares different channel models used for modeling the Broadband Power-Line Communication (BPLC) system. The models compared are those of Zimmermann and Dostert, Philipps, Anatory et al., and the generalized Transmission Line (TL) model of Anatory et al. The validity of each model was compared in the time domain with the ATP-EMTP software, which uses a transmission line approach. It is found that for a power-line network with a minimum number of branches all the models give similar signal/pulse time responses compared with the ATP-EMTP software; however, the Zimmermann and Dostert model indicates the same amplitude but a different time delay. It is observed that when the number of branches is increased, only the generalized TL theory approach gives results comparable with the ATP-EMTP results. Also, the Multi-Carrier Spread Spectrum (MC-SS) system was applied to check the implication of such behavior on the modulation schemes. It is observed that using the Philipps model on the underground cable can predict a performance up to 25 dB better than the other channel models, which can misread the actual performance of the system. Also, the modified Zimmermann and Dostert model under multipath can predict a performance about 5 dB better than that actually predicted by the generalized TL theory. It is therefore suggested that, for realistic BPLC system design and analysis, the model based on the generalized TL theory be used.
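For readers unfamiliar with the compared models, the widely cited Zimmermann-Dostert multipath formulation can be evaluated as below; the path gains, path lengths, and attenuation parameters in the example are placeholders rather than values fitted to any of the networks studied in the paper.

```python
import numpy as np

def zd_channel_response(freqs, g, d, a0=0.0, a1=7.8e-10, k=1.0, v_p=1.5e8):
    """Zimmermann-Dostert multipath transfer function
    H(f) = sum_i g_i * exp(-(a0 + a1*f^k)*d_i) * exp(-j*2*pi*f*d_i/v_p).
    Path gains g, path lengths d (m), attenuation parameters and propagation
    speed v_p are illustrative placeholders, not fitted values."""
    freqs = np.asarray(freqs, dtype=float)[:, None]
    g = np.asarray(g)[None, :]
    d = np.asarray(d, dtype=float)[None, :]
    att = np.exp(-(a0 + a1 * freqs ** k) * d)           # frequency-dependent attenuation
    delay = np.exp(-1j * 2 * np.pi * freqs * d / v_p)   # propagation delay of each path
    return np.sum(g * att * delay, axis=1)

# usage: a hypothetical 4-path channel over 1-30 MHz
f = np.linspace(1e6, 30e6, 500)
H = zd_channel_response(f, g=[0.64, 0.38, -0.15, 0.05], d=[200, 222, 244, 268])
```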

Keywords: Broadband Power Line Channel Models, load impedance, branched network.

192 Non-Overlapping Hierarchical Index Structure for Similarity Search

Authors: Mounira Taileb, Sid Lamrous, Sami Touati

Abstract:

In order to accelerate similarity search in high-dimensional databases, we propose a new hierarchical indexing method. It is composed of offline and online phases, and our contribution concerns both. In the offline phase, after gathering the whole of the data into clusters and constructing a hierarchical index, the main originality of our contribution is a method to construct bounding forms of clusters that avoid overlapping. For the online phase, our idea considerably improves the performance of similarity search; for this second phase, we have also developed an adapted search algorithm. Our method, named NOHIS (Non-Overlapping Hierarchical Index Structure), uses Principal Direction Divisive Partitioning (PDDP) as the clustering algorithm. The principle of PDDP is to divide data recursively into two sub-clusters; the division is done by using the hyper-plane orthogonal to the principal direction derived from the covariance matrix and passing through the centroid of the cluster to be divided. The data of each of the two resulting sub-clusters are enclosed by a minimum bounding rectangle (MBR). The two MBRs are directed according to the principal direction; consequently, non-overlap between the two forms is assured. Experiments use databases containing image descriptors. Results show that the proposed method outperforms sequential scan and the SR-tree in processing k-nearest neighbor queries.
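One PDDP split step, as described above, can be sketched as follows: the centered data are projected on the leading principal direction, split at the centroid hyperplane, and each side is enclosed in a bounding rectangle aligned with the principal axes. This is a schematic rendering of the published algorithm, not the NOHIS code.

```python
import numpy as np

def pddp_split(X):
    """One PDDP step: split X by the hyperplane orthogonal to the principal
    direction through the centroid; return both sub-clusters with their
    principal-direction-aligned minimum bounding rectangles."""
    centroid = X.mean(axis=0)
    Xc = X - centroid
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # rows of Vt = principal directions
    u = Vt[0]                                           # leading principal direction
    side = Xc @ u                                       # signed distance to the splitting hyperplane
    left, right = X[side <= 0], X[side > 0]

    def mbr(cluster):                                   # bounds in the principal-axis frame
        proj = (cluster - centroid) @ Vt.T
        return proj.min(axis=0), proj.max(axis=0)

    return (left, mbr(left)), (right, mbr(right))
```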

Keywords: K-nearest neighbour search, multi-dimensional indexing, multimedia databases, similarity search.

191 Analysis of Precipitation and Temperature Trends in Sefid-Roud Basin

Authors: Amir Gandomkar, Tahereh Soltani Gord faramarzi, Parisa Safaripour Chafi, Abdol-Reza Amani

Abstract:

Temperature, humidity and precipitation are parameters proved to be influential in the climate of an area, and one should recognize them in order to determine that climate. Climate changes are of primary importance in climatology and, in recent years, have been of great concern to researchers and even politicians and organizations, for they can play an important role in social, political and economic activities. Even though the real cause of climate changes or their stability is not yet fully recognized, they are a matter of concern to researchers, and their importance for countries has prompted investigation of climate changes at different levels, especially at the regional, national and continental levels. This issue has been investigated less in our country; however, in recent years there have been some researches and conferences on climate changes. This study is in line with such researches and tries to investigate and analyze the trends of climate changes (temperature and precipitation) in the Sefid-Roud (the name of a river) basin. The parameters of mean annual precipitation, temperature, and maximum and minimum temperatures, recorded at 36 synoptic and climatology stations of the Sefid-Roud basin over a statistical period of 49 years (1956-2005), were analyzed by the Mann-Kendall test. The results obtained by the data analysis show that the climate changes are short term and have a trend. The analysis of mean temperature revealed that the changes have a significantly rising trend, while precipitation has a significantly falling trend.
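The Mann-Kendall statistic used for these series is straightforward to compute; a minimal version (without the tie correction that station data may require) is sketched below.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction): returns S, Z and the two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p          # |Z| > 1.96 indicates a significant trend at the 5% level

# usage: s, z, p = mann_kendall(annual_mean_temperature)   # one station's yearly series
```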

Keywords: Trend, Climate changes, Sefid-roud, Mann-Kendall

190 The Impact of Transaction Costs on Rebalancing an Investment Portfolio in Portfolio Optimization

Authors: B. Marasović, S. Pivac, S. V. Vukasović

Abstract:

Constructing a portfolio of investments is one of the most significant financial decisions facing individuals and institutions. In accordance with modern portfolio theory, maximization of return at minimal risk should be the investment goal of any successful investor. In addition, the costs incurred when setting up a new portfolio or rebalancing an existing portfolio must be included in any realistic analysis. In this paper, rebalancing an investment portfolio in the presence of transaction costs on the Croatian capital market is analyzed. The model applied in the paper is an extension of the standard portfolio mean-variance optimization model in which transaction costs are incurred to rebalance an investment portfolio. This model allows different costs for different securities, and different costs for buying and selling. In order to find an efficient portfolio using this model, first the solution of a quadratic programming problem of similar size to the Markowitz model, and then the solution of a linear programming problem, have to be found. Furthermore, the impact of transaction costs on the efficient frontier is investigated. Moreover, it is shown that the global minimum variance portfolio on the efficient frontier always has the same level of risk regardless of the amount of transaction costs. Although the position of the efficient frontier depends on both the amount of transaction costs and the initial portfolio, it can be concluded that the extreme right portfolio on the efficient frontier always contains only one stock, the one with the highest expected return and the highest risk.
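As a baseline for the claim about the global minimum variance (GMV) portfolio, the classical cost-free Markowitz GMV weights can be computed in closed form; the covariance matrix in the example is invented, and the authors' extended model with transaction costs and its quadratic-then-linear programming steps are not reproduced here.

```python
import numpy as np

def gmv_weights(cov):
    """Global minimum variance portfolio of the standard (cost-free) Markowitz model:
    w = inv(Sigma) 1 / (1' inv(Sigma) 1). This is the frontier point whose risk,
    per the paper, is unaffected by the amount of transaction costs."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# usage with an illustrative 3-stock covariance matrix
cov = np.array([[0.040, 0.006, 0.012],
                [0.006, 0.025, 0.008],
                [0.012, 0.008, 0.060]])
w = gmv_weights(cov)
print(w, np.sqrt(w @ cov @ w))    # weights and the minimum attainable portfolio risk
```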

Keywords: Croatian capital market, Fractional quadratic programming, Markowitz model, Portfolio optimization, Transaction costs.

189 Seamless Flow of Voluminous Data in High Speed Network without Congestion Using Feedback Mechanism

Authors: T. Sheela, J. Raja

Abstract:

Continuously growing needs for Internet applications that transmit massive amounts of data have led to the emergence of high speed networks. Data transfer must take place without any congestion, and hence feedback parameters must be transferred from the receiver end to the sender end so as to restrict the sending rate in order to avoid congestion. Even though TCP tries to avoid congestion by restricting the sending rate and window size, it never informs the sender of the capacity of the data to be sent, and it reduces the window size by half at the time of congestion, therefore resulting in decreased throughput, low utilization of the bandwidth and maximum delay. In this paper, the XCP protocol is used and feedback parameters are calculated based on the arrival rate, service rate, traffic rate and queue size; the receiver then informs the sender about the throughput, the capacity of the data to be sent and the window size adjustment, resulting in no drastic decrease in window size and a better increase in sending rate, because of which there is a continuous flow of data without congestion. As a result, there is a maximum increase in throughput, high utilization of the bandwidth and minimum delay. The result of the proposed work is presented as graphs based on throughput, delay and window size. Thus, in this paper, the XCP protocol is well illustrated and the various parameters are thoroughly analyzed and adequately presented.
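For orientation, an XCP-style aggregate feedback computation looks roughly like the sketch below, with the stability constants from the original XCP proposal; the paper's exact use of arrival rate, service rate and traffic rate to size the feedback is not reproduced, so treat this as an assumption-laden illustration.

```python
def xcp_aggregate_feedback(capacity_bps, input_rate_bps, queue_bytes, rtt_s,
                           alpha=0.4, beta=0.226):
    """XCP-style aggregate feedback computed once per control interval (~ average RTT):
    phi = alpha * rtt * spare_bandwidth - beta * persistent_queue.
    alpha/beta follow the original XCP proposal; the per-flow apportioning of phi
    into each packet's feedback field is omitted here."""
    spare = capacity_bps / 8.0 - input_rate_bps / 8.0     # unused capacity in bytes/s
    return alpha * rtt_s * spare - beta * queue_bytes      # bytes to add/remove per interval

# usage: positive feedback lets senders grow their windows, negative shrinks them
phi = xcp_aggregate_feedback(capacity_bps=1e9, input_rate_bps=0.8e9,
                             queue_bytes=50_000, rtt_s=0.08)
print(phi)
```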

Keywords: Bandwidth-Delay Product, Congestion Control, Congestion Window, TCP/IP

188 Many-Sided Self Risk Analysis Model for Information Asset to Secure Stability of the Information and Communication Service

Authors: Jin-Tae Lee, Jung-Hoon Suh, Sang-Soo Jang, Jae-Il Lee

Abstract:

Information and communication service providers (ICSP) that are significant in size and provide Internet-based services take administrative, technical, and physical protection measures via the information security check service (ISCS). These protection measures are the minimum action necessary to secure the stability and continuity of the information and communication services (ICS) that they provide. Thus, information assets are essential to providing ICS, and deciding the relative importance of target assets for protection is a critical procedure. The risk analysis model designed to decide the relative importance of information assets, which is described in this study, evaluates information assets from many angles, in order to choose which ones should be given priority when it comes to protection. Many-sided risk analysis (MSRS) grades the importance of information assets, based on evaluation of major security check items, evaluation of the dependency on the information and communication facility (ICF) and influence on potential incidents, and evaluation of major items according to their service classification, in order to identify the ISCS target. MSRS could be an efficient risk analysis model to help ICSPs to identify their core information assets and take information protection measures first, so that stability of the ICS can be ensured.

Keywords: Information Asset, Information Communication Facility, Evaluation, ISCS (Information Security Check Service), Grade.

187 Effect of Different Media and Mannitol Concentrations on Growth and Development of Vandopsis lissochiloides (Gaudich.) Pfitz. under Slow Growth Conditions

Authors: J. Linjikao, P. Inthima, A. Kongbangkerd

Abstract:

In vitro conservation of orchid germplasm provides an effective technique for the ex situ conservation of orchid diversity. In this study, an efficient protocol for the in vitro conservation of Vandopsis lissochiloides (Gaudich.) Pfitz. plantlets under slow growth conditions was investigated. Plantlets were cultured on different strengths of Vacin and Went medium (½VW and ¼VW) supplemented with different concentrations of mannitol (0, 2, 4, 6 and 8%), sucrose (0 and 3%), 50 g/L potato extract and 150 mL/L coconut water. The cultures were incubated at 25±2 °C and maintained under 20 µmol/m2s light intensity for 24 weeks without subculture. At the end of the preservation period, the plantlets were subcultured to fresh medium for growth recovery. The results showed that the highest leaf number per plantlet was observed on ¼VW medium without added sucrose and mannitol, while the highest root number per plantlet was found on ½VW medium with 3% sucrose and without mannitol, after 24 weeks of in vitro storage. The maximum numbers of leaves (5.8 leaves) and roots (5.0 roots) of the preserved plantlets were produced on ¼VW medium without added sucrose and mannitol. Therefore, ¼VW medium without added sucrose and mannitol provided the best minimal growth conditions for the medium-term storage of V. lissochiloides plantlets.

Keywords: Preservation, Vandopsis, germplasm, in vitro.

186 Hiding Data in Images Using PCP

Authors: Souvik Bhattacharyya, Gautam Sanyal

Abstract:

In recent years, everything is trending toward digitalization, and with the rapid development of Internet technologies, digital media need to be transmitted conveniently over the network. Attacks, misuse or unauthorized access of information is of great concern today, which makes the protection of documents through digital media a priority problem. This urges us to devise new data hiding techniques to protect and secure data of vital significance. In this respect, steganography often comes to the fore as a tool for hiding information. Steganography is a process that involves hiding a message in an appropriate carrier like an image or audio. It is of Greek origin and means "covered or hidden writing". The goal of steganography is covert communication: the carrier can be sent to a receiver such that no one except the authenticated receiver knows of the existence of the information. A considerable amount of work has been carried out by different researchers on steganography. In this work, the authors propose a novel steganographic method for hiding information within the spatial domain of the grayscale image. The proposed approach works by selecting the embedding pixels using some mathematical function, then finding the 8-neighborhood of each selected pixel and mapping each bit of the secret message to a neighbor pixel coordinate position in a specified manner. Before embedding, a check is made to find out whether the selected pixel or its neighbor lies at the boundary of the image or not. This solution is independent of the nature of the data to be hidden and produces a stego image with minimum degradation.
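The sketch below illustrates the general flavor of neighbourhood-based embedding with a boundary check; the pixel-selection function and the bit-to-neighbour mapping are stand-ins invented for illustration, not the authors' PCP mapping.

```python
import numpy as np

def embed_bits_neighbourhood(img, bits, key=7, offset=3):
    """Hedged sketch of neighbourhood-based embedding in a grayscale image:
    a simple arithmetic rule (hypothetical, not the paper's PCP function) picks
    the embedding pixels, and each secret bit is written into the LSB of one of
    the 8 neighbours, skipping pixels on the image boundary."""
    stego = img.copy()
    h, w = img.shape
    neigh = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    k, written = 0, 0
    while written < len(bits):
        r = (k * key + offset) % h                  # illustrative selection rule
        c = (k * key * key + offset) % w
        k += 1
        if 1 <= r < h - 1 and 1 <= c < w - 1:       # boundary check before embedding
            dr, dc = neigh[written % 8]             # neighbour chosen by bit index
            nr, nc = r + dr, c + dc
            stego[nr, nc] = (stego[nr, nc] & 0xFE) | bits[written]   # LSB write
            written += 1
    return stego

# usage: stego = embed_bits_neighbourhood(gray_image, bits=[1, 0, 1, 1, 0, 0, 1, 0])
```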

Keywords: Cover Image, LSB, Pixel Coordinate Position (PCP), Stego Image.

185 Optimum Design of Steel Space Frames by Hybrid Teaching-Learning Based Optimization and Harmony Search Algorithms

Authors: Alper Akın, İbrahim Aydoğdu

Abstract:

This study presents a hybrid metaheuristic algorithm to obtain optimum designs for steel space buildings. The optimum design problem of three-dimensional steel frames is mathematically formulated according to the provisions of LRFD-AISC (Load and Resistance Factor Design of the American Institute of Steel Construction). Design constraints such as the strength requirements of structural members, the displacement limitations, the inter-story drift and the other structural constraints are derived from the LRFD-AISC specification. In this study, a hybrid algorithm using teaching-learning based optimization (TLBO) and harmony search (HS) algorithms is employed to solve the stated optimum design problem. These algorithms are two of the recent additions to the metaheuristic techniques of numerical optimization and have been efficient tools for solving discrete programming problems. Using these two algorithms in collaboration creates a more powerful tool and mitigates each other's weaknesses. To demonstrate the powerful performance of the presented hybrid algorithm, the optimum design of a large scale steel building is presented and the results are compared to the previously obtained results available in the literature.
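As an illustration of one half of the hybrid, a single TLBO teacher phase can be written as below; the sphere objective stands in for the actual LRFD-AISC constrained weight function, and the HS component and constraint handling are omitted.

```python
import numpy as np

def tlbo_teacher_phase(pop, fitness, objective):
    """One TLBO teacher phase: every learner moves toward the current teacher
    (best design) and away from the class mean; greedy selection keeps improvements.
    pop: (n, d) design matrix, fitness: (n,) objective values (minimization)."""
    n, d = pop.shape
    teacher = pop[np.argmin(fitness)]
    mean = pop.mean(axis=0)
    tf = np.random.randint(1, 3)                          # teaching factor, 1 or 2
    new_pop = pop + np.random.rand(n, d) * (teacher - tf * mean)
    new_fit = np.apply_along_axis(objective, 1, new_pop)
    improved = new_fit < fitness                          # greedy replacement
    pop[improved], fitness[improved] = new_pop[improved], new_fit[improved]
    return pop, fitness

# usage on a toy placeholder objective (sphere function), not a frame-weight model
obj = lambda x: np.sum(x ** 2)
pop = np.random.uniform(-5, 5, (20, 4))
fit = np.apply_along_axis(obj, 1, pop)
pop, fit = tlbo_teacher_phase(pop, fit, obj)
```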

Keywords: Optimum structural design, hybrid techniques, teaching-learning based optimization, harmony search algorithm, minimum weight, steel space frame.

184 Speaker Identification by Joint Statistical Characterization in the Log Gabor Wavelet Domain

Authors: Suman Senapati, Goutam Saha

Abstract:

Real world Speaker Identification (SI) applications differ from ideal or laboratory conditions, causing perturbations that lead to a mismatch between the training and testing environments and degrade the performance drastically. Many strategies have been adopted to cope with acoustical degradation; the wavelet based Bayesian marginal model is one of them. But Bayesian marginal models cannot model the inter-scale statistical dependencies of different wavelet scales. Simple nonlinear estimators for wavelet based denoising assume that the wavelet coefficients in different scales are independent in nature. However, wavelet coefficients have significant inter-scale dependency. This paper exploits this inter-scale dependency property through a Circularly Symmetric Probability Density Function (CS-PDF) related to the family of Spherically Invariant Random Processes (SIRPs) in the Log Gabor Wavelet (LGW) domain, and the corresponding joint shrinkage estimator is derived by Maximum a Posteriori (MAP) estimation. A framework is proposed based on these to denoise the speech signal for automatic speaker identification problems. The robustness of the proposed framework is tested for the Text Independent Speaker Identification application on 100 speakers of the POLYCOST and 100 speakers of the YOHO speech databases in three different noise environments. Experimental results show that the proposed estimator yields a higher improvement in identification accuracy compared to other estimators on the popular Gaussian Mixture Model (GMM) based speaker model with Mel-Frequency Cepstral Coefficient (MFCC) features.

Keywords: Speaker Identification, Log Gabor Wavelet, Bayesian Bivariate Estimator, Circularly Symmetric Probability Density Function, SIRP.

183 Game-Theory-Based Downlink Spectrum Allocation in Two-Tier Networks

Authors: Yu Zhang, Ye Tian, Fang Ye, Yixuan Kang

Abstract:

The capacity of conventional cellular networks has reached its upper bound, and this can be well handled by introducing low-cost and easy-to-deploy femtocells. The spectrum interference issue becomes more critical in pace with the value-added multimedia services growing increasingly in two-tier cellular networks. Spectrum allocation is one of the effective methods in interference mitigation technology. This paper proposes a game-theory-based OFDMA downlink spectrum allocation scheme aiming at reducing co-channel interference in two-tier femtocell networks. The framework is formulated as a non-cooperative game, wherein the femto base stations are players and the available frequency channels are strategies. The scheme takes full account of competitive behavior and fairness among stations. In addition, the utility function essentially reflects the interference from the standpoint of channels. This work focuses on co-channel interference and puts forward a negative logarithm interference function on the distance weight ratio, aiming at suppressing co-channel interference within the same layer of the network. This scenario is more suitable for actual network deployment, and the system possesses high robustness. According to the proposed mechanism, interference exists only when players employ the same channel for data communication. This paper focuses on implementing spectrum allocation in a distributed fashion. Numerical results show that the signal to interference and noise ratio can be obviously improved through the spectrum allocation scheme and the users' quality of service in the downlink can be satisfied. Besides, the average spectrum efficiency in the cellular network can be significantly promoted, as the simulation results show.

Keywords: Femtocell networks, game theory, interference mitigation, spectrum allocation.

182 A Proposal of a Method to Measure the Satisfaction Indicator of the Local Community Concerning Tourism: A Case Study of Jalapão State Park, Tocantins

Authors: Veruska C. Dutra, Mary L. G. S. Senna, Afonso R. Aquino

Abstract:

Tourism brings many benefits to a local community, encouraging it to be involved in that activity; however, it may also have detrimental effects like garbage, noise, violence, external culture and damage to the natural environment, among others, which may promote community dissatisfaction. The contact between tourists and the local community is a concern, especially when the community is located near protected areas. In this case, the community must know the tourist destination well, so it can collaborate in the tourism development without harming the environment. In this context, the present article aims to demonstrate the results of a research study conducted as part of a doctorate program in Sciences at the University of Sao Paulo, Brazil. Its objective was to elaborate a methodology proposal to measure the local community satisfaction indicator, with applicability to a case study in the Mateiros community located in the surrounding area of the Parque Estadual do Jalapão (PEJ) conservation unit in the state of Tocantins, Brazil. This is a study of an interdisciplinary nature that had the deductive method as its guide. The indicator result is presented in this study. It pointed out as negative factors that there is no involvement between the local community and the tourism sector, and that there is also dissatisfaction with regard to the town's basic services. The study showed as positive the local community's knowledge about the various attractions in the surrounding area and that the group recognizes the importance of tourism for the town and its life. Concerning the methodology that was used, the results showed that it can collaborate in seeking actions for improvement and for the involvement of the community in the planning and development of local tourism. It comes out as an efficient analysis tool, thus enabling the perception of the local community's point of view.

Keywords: Satisfaction indicator, tourism, community, Jalapão.

181 Experimental Study on Strength and Durability Properties of Bio-Self-Cured Fly Ash Based Concrete under Aggressive Environments

Authors: R. Malathy

Abstract:

High performance concrete is characterized not only by its high strength, workability, and durability but also by its smartness in performance without human care since the first day. If the concrete can cure on its own, without external curing and without compromising its strength and durability, then it is said to be high performance self-curing concrete. In this paper, an attempt is made to study the performance of internally cured concrete using biomaterials, namely Spinacea oleracea and Calatropis gigantea, as self-curing agents, and it is compared with the performance of concrete with an existing self-cure chemical, namely polyethylene glycol. The present paper focuses on the workability, strength, and durability of M20, M30, and M40 grade concretes in which 30% of the cement is replaced by fly ash. The optimum dosages of Spinacea oleracea, Calatropis gigantea, and polyethylene glycol were taken as 0.6%, 0.24%, and 0.3% by weight of cement from earlier research studies. From the slump tests performed, it was found that there is minimal variation between conventional concrete and self-cured concrete. The strength activity index is determined by taking the 28-day compressive strength of conventionally cured concrete as unity; for self-cured concrete it is more than 1 after 28 days and more than 1.15 after 56 days because of the secondary reaction of fly ash. The performance of the concretes in aggressive environments such as acid attack, sea water attack, and chloride attack was studied, and the results are positive and encouraging for the bio-self-cured concretes, which are eco-friendly, cost effective, and high performance materials.

Keywords: Biomaterials, Calatropis gigantea, polyethylene glycol, Spinacea oleracea, self-curing concrete.

180 Effect of Wood Vinegar on Controlling Housefly (Musca domestica L.)

Authors: U. Pangnakorn, S. Kanlaya, C. Kuntha

Abstract:

Raw wood vinegar was purified by both standing and filtering methods. Toxicity tests were conducted under laboratory conditions by the topical application method (contact poison) and the feeding method (stomach poison). The larvicidal activities of wood vinegar at different concentrations (10, 15, 20, 25 and 30%) were studied against second instar larvae of the housefly (Musca domestica L.). Four replicates were maintained for all treatments and controls. Larval mortality was recorded up to 96 hours and compared with larval survivability for the two methods of larvicidal bioassay. Percent pupation and percent adult emergence were observed in treated M. domestica. The study revealed that the feeding method gave higher efficiency compared with the topical application method. Larval mortality increased with increasing concentration of wood vinegar and duration of exposure. No mortality was found in treated M. domestica larvae at the minimum concentration of 10% wood vinegar throughout the experiments; the treated larvae were maintained up to pupation and adult emergence. At the maximum concentration of 30%, the larval duration was extended to 11 days in M. domestica for the topical application method and to 9 days for the feeding method. Similarly, the pupal durations also increased with increasing concentrations of the treatments (16 and 24 days for the topical application method and the feeding method, respectively, at the 30% concentration).

Keywords: Housefly (Musca domestica L.), wood vinegar, mortality, topical application, feeding

179 Pragati Node Popularity (PNP) Approach to Identify Congestion Hot Spots in MPLS

Authors: E. Ramaraj, A. Padmapriya

Abstract:

In large Internet backbones, Service Providers typically have to explicitly manage the traffic flows in order to optimize the use of network resources. This process is often referred to as Traffic Engineering (TE). Common objectives of traffic engineering include balancing traffic distribution across the network and avoiding congestion hot spots. Raj P H and S V K Raja designed the Bayesian network approach to identify congestion hot spots in MPLS. In this approach, for every node in the network the Conditional Probability Distribution (CPD) is specified, and based on the CPD the congestion hot spots are identified. Then the traffic can be distributed so that no link in the network is either over-utilized or under-utilized. Although the Bayesian network approach has been implemented in operational networks, it has a number of well known scaling issues. This paper proposes a new approach, which we call the Pragati (meaning Progress) Node Popularity (PNP) approach, to identify the congestion hot spots from the network topology alone. In the new Pragati Node Popularity approach, IP routing runs natively over the physical topology rather than depending on the CPD of each node as in the Bayesian network. We first illustrate our approach with a simple network, then present a formal analysis of the Pragati Node Popularity approach. For any given network, the PNP approach identifies exactly the same hot spots as the Bayesian approach with minimum effort. We further extend the result to a more generic one: it holds for any network topology, even when the network is loopy. A theoretical insight of our result is that the optimal routing is always shortest path routing with respect to some considerations of hot spots in the networks.

Keywords: Conditional Probability Distribution, Congestion hotspots, Operational Networks, Traffic Engineering.

178 Controlling of Multi-Level Inverter under Shading Conditions Using Artificial Neural Network

Authors: Abed Sami Qawasme, Sameer Khader

Abstract:

This paper describes the effects of photovoltaic voltage changes on a multi-level inverter (MLI) due to solar irradiation variations, and methods to overcome these changes. The irradiation variation affects the generated voltage, which in turn varies the switching angles required to turn on the inverter power switches in order to obtain minimum harmonic content in the output voltage profile. A Genetic Algorithm (GA) is used to solve the harmonic elimination equations of an eleven-level inverter with equal and non-equal dc sources. After that, an artificial neural network (ANN) algorithm is proposed to generate the appropriate set of switching angles for the MLI at any level of the input dc source voltage, minimizing the total harmonic distortion (THD) to an acceptable limit. The MATLAB/Simulink platform is used as the simulation tool, and Fast Fourier Transform (FFT) analyses are carried out on the output voltage profile to verify the reliability and accuracy of the applied technique for controlling the MLI harmonic distortion. According to the simulation results, the obtained THD for equal dc sources is 9.38%, while for variable or unequal dc sources it varies between 10.26% and 12.93% as the input dc voltage varies between 4.47 V and 11.43 V, respectively. The proposed ANN algorithm provides satisfactory simulation results that match the results obtained by alternative algorithms.
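The harmonic content being minimized follows from the Fourier series of the stepped output; for a cascaded topology with equal dc sources the odd-harmonic magnitudes and THD can be evaluated as below. The five switching angles in the example are illustrative, not the GA/ANN solutions reported in the paper.

```python
import numpy as np

def mli_harmonics(theta_deg, vdc, n_max=49):
    """Fourier magnitudes of a cascaded multilevel inverter output with equal dc
    sources: V_n = (4*Vdc/(n*pi)) * sum_k cos(n*theta_k) for odd n. Returns the
    odd-harmonic magnitudes and the THD of the stepped waveform (up to n_max)."""
    theta = np.radians(np.asarray(theta_deg))
    n = np.arange(1, n_max + 1, 2)                        # odd harmonics only
    vn = (4 * vdc / (n * np.pi)) * np.cos(np.outer(n, theta)).sum(axis=1)
    thd = np.sqrt(np.sum(vn[1:] ** 2)) / abs(vn[0])       # harmonics relative to fundamental
    return n, vn, thd

# usage: five switching angles (illustrative) for an eleven-level inverter with 11.43 V sources
n, vn, thd = mli_harmonics([6.57, 18.94, 27.18, 45.14, 62.24], vdc=11.43)
print(f"THD = {100 * thd:.2f} %")
```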

Keywords: Multi level inverter, genetic algorithm, artificial neural network, total harmonic distortion.

177 An Algorithm Proposed for FIR Filter Coefficients Representation

Authors: Mohamed Al Mahdi Eshtawie, Masuri Bin Othman

Abstract:

Finite impulse response (FIR) filters have the advantages of linear phase, guaranteed stability, fewer finite precision errors, and efficient implementation. In contrast, they have the major disadvantage of requiring a higher order (more coefficients) than an IIR counterpart with comparable performance. The high order demand imposes more hardware requirements, arithmetic operations, area usage, and power consumption when designing and fabricating the filter. Therefore, minimizing or reducing these parameters is a major goal or target in the digital filter design task. This paper presents an algorithm proposed for modifying the values and the number of non-zero coefficients used to represent the FIR digital pulse shaping filter response. With this algorithm, the FIR filter frequency and phase response can be represented with a minimum number of non-zero coefficients, thereby reducing the arithmetic complexity needed to compute the filter output. Consequently, the system characteristics, i.e. power consumption, area usage, and processing time, are also reduced. The proposed algorithm is more powerful when integrated with multiplierless algorithms such as distributed arithmetic (DA) in designing high order digital FIR filters. Here the DA usage eliminates the need for multipliers when implementing the multiply and accumulate unit (MAC), and the proposed algorithm reduces the number of adders and addition operations needed, through the minimization of the non-zero valued coefficients, to compute the filter output.
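To illustrate the coefficient-count versus accuracy trade-off the algorithm targets, the hedged sketch below zeroes small taps of a reference pulse-shaping filter and compares frequency responses; the simple thresholding rule is a stand-in for the proposed algorithm, not a reproduction of it.

```python
import numpy as np
from scipy.signal import firwin, freqz

# Reference high-order FIR filter (cutoff is normalized to Nyquist); the threshold
# rule below is an illustrative placeholder for the paper's coefficient-reduction step.
h = firwin(numtaps=101, cutoff=0.25)
threshold = 0.01 * np.max(np.abs(h))
h_sparse = np.where(np.abs(h) >= threshold, h, 0.0)   # reduced set of non-zero taps

w, H = freqz(h, worN=1024)
_, Hs = freqz(h_sparse, worN=1024)
print("non-zero taps:", np.count_nonzero(h_sparse), "of", h.size)
print("max magnitude error (dB):",
      np.max(np.abs(20 * np.log10(np.abs(Hs) + 1e-12) - 20 * np.log10(np.abs(H) + 1e-12))))
```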

Keywords: Pulse shaping Filter, Distributed Arithmetic, Optimization algorithm.

176 A Novel VLSI Architecture for Image Compression Model Using Low power Discrete Cosine Transform

Authors: Vijaya Prakash A. M., K. S. Gurumurthy

Abstract:

In image processing, image compression can improve the performance of digital systems by reducing the cost and time of image storage and transmission without significant reduction of image quality. This paper describes a low complexity hardware architecture of the Discrete Cosine Transform (DCT) for image compression [6]. In this DCT architecture, common computations are identified and shared to remove redundant computations in the DCT matrix operation. Vector processing is the method used for the implementation of the DCT. This reduction in the computational complexity of the 2D DCT reduces power consumption. The 2D DCT is performed on an 8x8 matrix using two 1-Dimensional Discrete Cosine Transform blocks and a transposition memory [7]. The Inverse Discrete Cosine Transform (IDCT) is performed to obtain the image matrix and reconstruct the original image. The proposed image compression algorithm is implemented using MATLAB code. The VLSI design of the architecture is implemented using Verilog HDL. The proposed hardware architecture for image compression employing the DCT was synthesized using RTL Compiler and mapped using 180 nm standard cells. The simulation is done using ModelSim, and the simulation results from MATLAB and Verilog HDL are compared. A detailed analysis of power and area was done using RTL Compiler from CADENCE. The power consumption of the DCT core is reduced to 1.027 mW with minimum area [1].
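The row-column decomposition used by the architecture (two 1-D DCT blocks plus a transposition memory) corresponds to the following reference computation; this is a floating-point sanity check, not the fixed-point Verilog implementation.

```python
import numpy as np
from scipy.fft import dct, idct

def dct2(block):
    """2-D DCT of an 8x8 block computed as column-wise then row-wise 1-D DCTs,
    mirroring the two 1-D DCT blocks plus transposition memory in the architecture."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    return idct(idct(coeffs, axis=1, norm="ortho"), axis=0, norm="ortho")

# usage: round-trip an 8x8 image block
block = np.arange(64, dtype=float).reshape(8, 8)
coeffs = dct2(block)
print(np.allclose(idct2(coeffs), block))     # True: the IDCT reconstructs the block
```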

Keywords: Discrete Cosine Transform (DCT), Inverse Discrete Cosine Transform (IDCT), Joint Photographic Experts Group (JPEG), Low Power Design, Very Large Scale Integration (VLSI).

175 Earth Station Neural Network Control Methodology and Simulation

Authors: Hanaa T. El-Madany, Faten H. Fahmy, Ninet M. A. El-Rahman, Hassen T. Dorrah

Abstract:

Renewable energy resources are inexhaustible and clean as compared with conventional resources. They are also used to supply regions with no grid, no telephone lines, and often with difficult accessibility by common transport. Satellite earth stations located in remote areas are among the most important applications of renewable energy. Neural control is a branch of the general field of intelligent control, which is based on the concept of artificial intelligence. This paper presents the mathematical modeling of the satellite earth station power system, which is required for simulating the system. Aswan is selected as the site under consideration because it is a region rich in solar energy. The complete power system is simulated using MATLAB-SIMULINK. An artificial neural network (ANN) based model has been developed for the optimum operation of the earth station power system. The ANN is trained using back propagation with the Levenberg-Marquardt algorithm. The best validation performance is obtained for the minimum mean square error. The regression between the network output and the corresponding target is equal to 96%, which means high accuracy. The neural network controller architecture gives satisfactory results with a small number of neurons, and hence is better in terms of the memory and time required for its implementation. The results indicate that the proposed control unit using an ANN can be successfully used for controlling the satellite earth station power system.

Keywords: Satellite, neural network, MATLAB, power system.

174 Artificial Neural Network Modeling and Genetic Algorithm Based Optimization of Hydraulic Design Related to Seepage under Concrete Gravity Dams on Permeable Soils

Authors: Muqdad Al-Juboori, Bithin Datta

Abstract:

Hydraulic structures such as gravity dams are classified as essential structures and have a vital role in providing strong and safe water resource management. Three major aspects must be considered to achieve an effective design of such a structure: 1) the building cost, 2) safety, and 3) accurate analysis of seepage characteristics. Due to the complexity and non-linearity of the relationships in the seepage process, many approximation theories have been developed; however, the application of these theories results in noticeable errors. The analytical solution, which includes the difficult conformal mapping procedure, could be applied to a simple and symmetrical problem only. Therefore, the objectives of this paper are to: 1) develop a surrogate model based on numerically simulated data using the SEEPW software to approximately simulate the seepage process related to a hydraulic structure, and 2) develop and solve a linked simulation-optimization model based on the developed surrogate model to describe the seepage occurring under a concrete gravity dam, in order to obtain an optimum and safe design at minimum cost. The results show that the linked simulation-optimization model provides an efficient and optimum design of concrete gravity dams.

Keywords: Artificial neural network, concrete gravity dam, genetic algorithm, seepage analysis.

173 Formation of Protective Aluminum-Oxide Layer on the Surface of Fe-Cr-Al Sintered-Metal-Fibers via Multi-Stage Thermal Oxidation

Authors: Loai Ben Naji, Osama M. Ibrahim, Khaled J. Al-Fadhalah

Abstract:

The objective of this paper is to investigate the formation and adhesion of a protective aluminum-oxide (Al2O3, alumina) layer on the surface of Iron-Chromium-Aluminum alloy (Fe-Cr-Al) sintered-metal-fibers. The oxide-scale layer was developed via multi-stage thermal oxidation at 930 °C for 1 hour, followed by 1 hour at 960 °C, and finally 2 hours at 990 °C. Scanning Electron Microscope (SEM) images show that the multi-stage thermal oxidation resulted in the formation of predominantly platelet-like Al2O3 and whiskers. SEM images also reveal non-uniform oxide-scale growth on the surface of the fibers. Furthermore, peeling/spalling of the alumina protective layer occurred after minimum handling, which indicates weak adhesion forces between the protective layer and the base metal alloy. Energy Dispersive Spectroscopy (EDS) analysis of the heat-treated Fe-Cr-Al sintered-metal-fibers confirmed the high aluminum content on the surface of the protective layer, and the low aluminum content on the exposed base metal alloy surface. In conclusion, the failure of the oxide-scale protective layer exposes the base metal alloy to further oxidation, and the fragile non-uniform oxide-scale is not suitable as a support for catalysts.

Keywords: High-temperature oxidation, alumina protective layer, iron-chromium-aluminum alloy, sintered-metal-fibers.

172 Comparative Study of Seismic Isolation as Retrofit Method for Historical Constructions

Authors: Carlos H. Cuadra

Abstract:

Seismic isolation can be used as a retrofit method for historical buildings, with the advantage that minimum intervention in the superstructure is required. However, the selection of isolation devices depends on the weight and stiffness of the upper structure. In this study, two buildings are analyzed to evaluate the applicability of this retrofitting methodology. Both buildings are located in Akita prefecture in the northern part of Japan. One building is a wooden structure that corresponds to the old council meeting hall of Noshiro city. The second building is a brick masonry structure that was used as the house of a foreign mining engineer and is located in Ani town. Ambient vibration measurements were performed on both buildings to estimate their dynamic characteristics. Then, a target period of vibration of 3 seconds is selected for the isolated systems in order to estimate the required stiffness of the isolation devices. For the wooden structure, which is a light construction, it was found that natural rubber isolators in combination with friction bearings are suitable for seismic isolation. In the case of the masonry building, elastomeric isolators can be used for its seismic isolation. Lumped mass systems are used for the seismic response analysis, and it is verified in both cases that seismic isolation can be used as a retrofitting method for historical constructions. However, in the case of the light building, most of the weight corresponds to the reinforced concrete slab that is required to install the isolation devices.
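The stiffness sizing from a target period follows from the single-degree-of-freedom relation T = 2π√(m/k); a small helper is shown below with illustrative masses, since the measured building weights are not given in the abstract.

```python
import numpy as np

def isolation_stiffness(mass_kg, target_period_s=3.0):
    """Required total horizontal stiffness of the isolation layer for a target
    period, from T = 2*pi*sqrt(m/k)  =>  k = m*(2*pi/T)^2 (lumped-mass model)."""
    return mass_kg * (2 * np.pi / target_period_s) ** 2

# usage with purely illustrative superstructure masses (N/m of total stiffness)
print(isolation_stiffness(40_000))     # light wooden hall
print(isolation_stiffness(350_000))    # masonry house
```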

Keywords: Historical building, finite element method, masonry structure, seismic isolation, wooden structure.

171 Performance Analysis of MIMO Based Multi-User Cooperation Diversity Over Various Fading Channels

Authors: Zuhaib Ashfaq Khan, Imran Khan, Nandana Rajatheva

Abstract:

In this paper, a hybrid FDMA-TDMA access technique in a cooperative distributed fashion is analyzed, introducing and implementing a modified version of the protocol introduced in [1], termed the Power and Cooperation Diversity Gain Protocol (PCDGP). The wireless network consists of two user terminals, two relays and a destination terminal equipped with two antennas. The relays operate in amplify-and-forward (AF) mode with a fixed gain. Two operating modes, a cooperation-gain mode and a power-gain mode, are exploited from the source terminals to the relays, as the scheme works with best channel selection. Vertical BLAST (Bell Laboratories Layered Space Time), or V-BLAST, with minimum mean square error (MMSE) nulling is used at the relays to perfectly detect the joint signals from the multiple source terminals. The performance is analyzed using the binary phase shift keying (BPSK) modulation scheme and investigated over independent and identically distributed (i.i.d.) Rayleigh, Ricean-K and Nakagami-m fading environments. Simulation results show that the proposed scheme can provide better signal quality for uplink users in a cooperative communication system using the hybrid FDMA-TDMA technique.
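The MMSE nulling step at the relays amounts to applying W = (H^H H + σ²I)⁻¹H^H to the received vector; the sketch below shows this one-shot linear detector (without the ordered successive cancellation of full V-BLAST) on an illustrative 2x2 channel.

```python
import numpy as np

def mmse_nulling_matrix(H, noise_var):
    """MMSE nulling matrix W = (H^H H + sigma^2 I)^(-1) H^H used to separate the
    jointly received source signals; detection is y_hat = W @ r."""
    n_tx = H.shape[1]
    return np.linalg.solve(H.conj().T @ H + noise_var * np.eye(n_tx), H.conj().T)

# usage: illustrative 2x2 relay channel, BPSK symbols from the two source terminals
rng = np.random.default_rng(1)
H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
s = np.array([1, -1])                                   # BPSK symbols
r = H @ s + 0.1 * (rng.normal(size=2) + 1j * rng.normal(size=2))
W = mmse_nulling_matrix(H, noise_var=0.02)
print(np.sign((W @ r).real))                            # detected symbols
```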

Keywords: Cooperation Diversity, Best Channel Selection scheme, MIMO relay networks, V-BLAST, QR decomposition, and MMSE.
