Search results for: Inverse fast Fourier transform (IFFT)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1831

331 Model-free Prediction based on Tracking Theory and Newton Form of Polynomial

Authors: Guoyuan Qi, Yskandar Hamam, Barend Jacobus van Wyk, Shengzhi Du

Abstract:

The majority of existing predictors for time series are model-dependent and therefore require some prior knowledge for the identification of complex systems, usually involving system identification, extensive training, or online adaptation in the case of time-varying systems. Additionally, since a time series is usually generated by complex processes such as the stock market or other chaotic systems, identification, modeling or the online updating of parameters can be problematic. In this paper a model-free predictor (MFP) for a time series produced by an unknown nonlinear system or process is derived using tracking theory. An identical derivation of the MFP using the property of the Newton form of the interpolating polynomial is also presented. The MFP is able to accurately predict future values of a time series, is stable, has few tuning parameters and is desirable for engineering applications due to its simplicity, fast prediction speed and extremely low computational load. The performance of the proposed MFP is demonstrated using the prediction of the Dow Jones Industrial Average stock index.
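
For illustration, a minimal sketch of one-step-ahead prediction by extrapolating the Newton forward-difference polynomial through the most recent samples (Python; the window length is an assumption, and the tracking-theory derivation in the paper is not reproduced):

    import numpy as np

    def newton_predict(series, order=3):
        """Extrapolate the Newton forward-difference polynomial through the
        last (order + 1) samples: the next value equals the sum of the last
        entry of each forward-difference row."""
        window = np.asarray(series[-(order + 1):], dtype=float)
        prediction, diffs = 0.0, window
        for _ in range(order + 1):
            prediction += diffs[-1]
            diffs = np.diff(diffs)
        return prediction

    print(newton_predict([1, 4, 9, 16], order=2))  # quadratic trend, predicts 25.0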

Keywords: Forecast, model-free predictor, prediction, time series

330 Effect of the Internet on Social Capital

Authors: Safaee Safiollah, Javadi Alimohammad, Javadi Maryam

Abstract:

Internet access is a vital part of the modern world and an important tool in the education of our children. It is present in schools, homes and even shopping malls. Mastering the use of the internet is likely to be an important skill for those entering the job markets of the future. An internet user can be anyone he or she wants to be in an online chat room, or can play thrilling and challenging games against other players from all corners of the globe. It seems that, at present or in the near future, relationships in the real world may be neglected by many people as those in the virtual world increase in importance. The Internet provides a fast mode of communication that frees people from family bonds and enables mixing with different cultures and new communities. This research is an attempt to study the effect of the Internet on social capital. For this purpose, a survey of a sample of 168 students of Payame Noor University in the city of Kermanshah, Iran, was conducted. The degree of social capital found was moderate. Multivariable regression showed that the variables attractiveness of Iranian messages and interest in the internet had significant positive effects, while the variable creating a cordial atmosphere had a significant negative effect.

Keywords: Internet, Social Capital, social participation, social trust

329 Students, Knowledge and Employability

Authors: James Moir

Abstract:

Citizens are increasingly provided with choice and customization in public services, and this has now also become a key feature of higher education in terms of policy roll-outs on personal development planning (PDP) and, more generally, as part of the employability agenda. The goal here is to transform people, in this case graduates, into active, responsible citizen-workers. A key part of this rhetoric and logic is the inculcation of graduate attributes within students. However, there has also been a concern with the issue of student lack of engagement and perseverance with their studies. This paper sets out to explore some of these conceptions that link graduate attributes with citizenship as well as the notion of how identity is forged through the higher education process. Examples are drawn from a quality enhancement project that is being operated within the context of the Scottish higher education system. This is further framed within the wider context of competing and conflicting demands on higher education, exacerbated by the current worldwide economic climate. There are now pressures on students to develop their employability skills as well as their capacity to engage with global issues such as behavioural change in the light of environmental concerns. It is argued that these pressures, in effect, lead to a form of personalization that is concerned with how graduates develop their sense of identity as something that is engineered and re-engineered to meet these demands.

Keywords: students, higher education, employability, knowledge, personal development

328 Study of Two MPPTs for Photovoltaic Systems Using Controllers Based in Fuzzy Logic and Sliding Mode

Authors: N. Ouldcherchali, M. S. Boucherit, L. Barazane, A. Morsli

Abstract:

In this study, we propose two techniques to track the maximum power point (MPP) of a photovoltaic system. The first is an intelligent control technique, and the second is a robust technique used for variable-structure systems. The I-V and P-V characteristics of the photovoltaic generator depend on the solar irradiance and temperature. These climate changes cause the maximum power point to fluctuate, so a maximum power point tracking (MPPT) technique is required to maximize the output power. For this we adopt a fuzzy logic controller (FLC), known for its stability and robustness, and a sliding mode controller (SMC), widely used for variable-structure systems. The system comprises a photovoltaic panel (PV) and a DC-DC converter, which is considered as an adaptation stage between the PV and the load. The modelling and simulation of the system are developed using MATLAB/Simulink. The SMC technique provides good tracking speed under rapidly changing irradiation, while when the irradiation changes slowly or is constant, the panel power obtained with the FLC technique is much smoother, with fewer fluctuations.
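
As a rough illustration of an adaptive MPPT update, the sketch below uses a perturb-and-observe style rule with a variable step standing in for the fuzzy controller; it is not the authors' FLC or SMC, and it assumes a boost-type converter where raising the duty cycle lowers the panel voltage:

    def mppt_update(v, i, v_prev, p_prev, duty, base_step=0.005):
        """One duty-cycle update from panel voltage v and current i."""
        p = v * i
        dp, dv = p - p_prev, v - v_prev
        # larger relative power change -> larger correction, mimicking fuzzy rules
        step = base_step * min(1.0 + abs(dp) / max(abs(p_prev), 1e-6), 5.0)
        if dp * dv > 0:              # left of the MPP: raise the panel voltage
            duty -= step
        else:                        # right of the MPP (or no change): lower it
            duty += step
        return min(max(duty, 0.0), 1.0), p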

Keywords: Fuzzy logic controller, maximum power point, photovoltaic system, tracker, sliding mode controller.

327 A Business Model Design Process for Social Enterprises: The Critical Role of the Environment

Authors: Hadia Abdel Aziz, Raghda El Ebrashi

Abstract:

Business models are shaped by their design space, or the environment they are designed to be implemented in. The rapidly changing economic, technological, political, regulatory and market external environment severely affects business logic. This is particularly true for social enterprises, whose core mission is to transform their environments, and thus their whole business logic revolves around the interchange between the enterprise and the environment. The context in which a social business operates imposes different business design constraints while, at the same time, opening up new design opportunities. It is also affected to a great extent by the impact that successful enterprises generate: a continuous loop of interaction that needs to be managed through a dynamic capability in order to generate a lasting, powerful impact. This conceptual research synthesizes and analyzes literature on social enterprise, social enterprise business models, business model innovation, business model design, and open system view theory to propose a new business model design process for social enterprises that takes into account the critical role of environmental factors. This process would help the social enterprise develop a dynamic capability that ensures the alignment of its business model with its environmental context, thus maximizing its probability of success.

Keywords: Social enterprise, business model, business model design.

326 Key Frame Based Video Summarization via Dependency Optimization

Authors: Janya Sainui

Abstract:

With the rapid growth of digital video and data communications, video summarization, which provides a shorter version of a video for fast browsing and retrieval, is necessary. Key frame extraction is one of the mechanisms used to generate a video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most existing approaches select key frames heuristically; hence, the selected key frames may not be the most distinct frames and/or may not cover the entire content of the video. In this paper, we propose a video summarization method that provides reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual information as our objective function for maximizing the coverage of the entire video content as well as minimizing the redundancy among selected key frames. The proposed key frame extraction algorithm finds key frames by solving an optimization problem. Through experiments, we demonstrate that the proposed video summarization approach produces a video summary with better coverage of the entire video content and less redundancy among key frames compared to state-of-the-art approaches.
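
A minimal sketch of the greedy coverage-versus-redundancy selection idea (Python; a simple histogram-intersection similarity stands in for the quadratic mutual information used in the paper):

    import numpy as np

    def select_key_frames(frames, k=5, alpha=0.5, bins=32):
        """Greedily pick k frames maximizing coverage of all frames while
        penalizing redundancy among the selected set (frames: 8-bit arrays)."""
        hists = np.array([np.bincount(f.ravel() // (256 // bins), minlength=bins)
                          for f in frames], dtype=float)
        hists /= hists.sum(axis=1, keepdims=True)
        sim = np.minimum(hists[:, None, :], hists[None, :, :]).sum(axis=2)
        selected = []
        for _ in range(k):
            best, best_score = None, -np.inf
            for cand in range(len(frames)):
                if cand in selected:
                    continue
                trial = selected + [cand]
                coverage = sim[:, trial].max(axis=1).mean()
                redundancy = sim[np.ix_(trial, trial)].mean() if len(trial) > 1 else 0.0
                score = coverage - alpha * redundancy
                if score > best_score:
                    best, best_score = cand, score
            selected.append(best)
        return selected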

Keywords: Video summarization, key frame extraction, dependency measure, quadratic mutual information, optimization.

325 A Mapping Approach of Code Generation for Arinc653-Based Avionics Software

Authors: Lu Zou, Dianfu MA, Ying Wang, Xianqi Zhao

Abstract:

Avionics software architecture has transitioned from a federated architecture to Integrated Modular Avionics (IMA). ARINC 653 (Avionics Application Standard Software Interface) is a software specification for space and time partitioning in safety-critical avionics real-time operating systems. Methods to transform abstract avionics application logic into executable models have been proposed, but with little consideration of the code-generation input and output models specific to the ARINC 653 platform or of inter-task synchronous dynamic interaction ordering. In this paper, we propose an AADL-based model-driven design methodology for automatically generating a Cµ executable model on the ARINC 653 platform from an ARINC 653 architecture defined as AADL653, in order to facilitate the development of avionics software built on an ARINC 653 OS. The paper presents the mapping rules between AADL653 elements and the elements of the Cµ language, defines the code-generation rules, and designs an automatic Cµ code generator. A case study illustrates the approach, followed by related work and future research directions.

Keywords: IMA, ARINC653, AADL653, code generation.

324 On the Efficient Implementation of a Serial and Parallel Decomposition Algorithm for Fast Support Vector Machine Training Including a Multi-Parameter Kernel

Authors: Tatjana Eitrich, Bruno Lang

Abstract:

This work deals with aspects of support vector machine learning for large-scale data mining tasks. Based on a decomposition algorithm for support vector machine training that can be run in serial as well as shared-memory parallel mode, we introduce a transformation of the training data that allows for the usage of an expensive generalized kernel without additional costs. We present experiments for the Gaussian kernel, but usage of other kernel functions is possible, too. In order to further speed up the decomposition algorithm, we analyze the critical problem of working set selection for large training data sets. In addition, we analyze the influence of the working set sizes on the scalability of the parallel decomposition scheme. Our tests and conclusions led to several modifications of the algorithm and the improvement of overall support vector machine learning performance. Our method allows for using extensive parameter search methods to optimize classification accuracy.
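
One common way to obtain a multi-parameter Gaussian kernel at no extra training cost, plausibly in the spirit of the data transformation mentioned in the abstract, is per-feature rescaling: K(x, z) = exp(-Σ_d (x_d - z_d)² / (2σ_d²)) equals the ordinary RBF kernel applied to features divided by σ_d. A hedged sketch with hypothetical widths:

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = np.c_[rng.normal(0, 1.0, 400), rng.normal(0, 50.0, 400)]   # toy data
    y = (X[:, 0] + X[:, 1] / 50.0 > 0).astype(int)

    sigmas = np.array([1.0, 50.0])        # assumed per-feature kernel widths
    X_scaled = X / sigmas                 # rescaling realizes the multi-parameter kernel
    clf = SVC(kernel='rbf', gamma=0.5).fit(X_scaled, y)
    print('training accuracy:', clf.score(X_scaled, y))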

Keywords: Support Vector Machine Training, Multi-Parameter Kernels, Shared Memory Parallel Computing, Large Data

323 3D Face Modeling based on 3D Dense Morphable Face Shape Model

Authors: Yongsuk Jang Kim, Sun-Tae Chung, Boogyun Kim, Seongwon Cho

Abstract:

A realistic 3D face model is more precise in representing the pose, illumination, and expression of a face than a 2D face model, so it can be usefully utilized in various applications such as face recognition, games, avatars, and animations. In this paper, we propose a 3D face modeling method based on a 3D dense morphable shape model. The proposed method first constructs a 3D dense morphable shape model from 3D face scan data obtained using a 3D scanner. Next, it extracts and matches facial landmarks from a 2D image sequence containing the face to be modeled, and then reconstructs the 3D vertex coordinates of the landmarks using a factorization-based SfM technique. The method then obtains a 3D dense shape model of the face by fitting the constructed 3D dense morphable shape model to the reconstructed 3D vertices. It also builds a cylindrical texture map from the 2D face image sequence. Finally, it generates a 3D face model by rendering the 3D dense face shape model with the cylindrical texture map. The model-building process shows that the proposed method is relatively easy, fast and precise.
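
For the fitting step, a minimal sketch of least-squares estimation of morphable-model coefficients from reconstructed 3D landmarks (Python; the factorization-based SfM and texture-mapping stages are not shown, and the ridge regularization is an added assumption):

    import numpy as np

    def fit_morphable_shape(mean_shape, basis, landmark_idx, landmarks_3d, reg=1e-3):
        """mean_shape: (3N,), basis: (3N, K) shape modes, landmark_idx: indices of
        the observed coordinates, landmarks_3d: observed values at those indices."""
        A = basis[landmark_idx]                          # model restricted to landmarks
        b = landmarks_3d - mean_shape[landmark_idx]
        coeff = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ b)
        dense_shape = mean_shape + basis @ coeff         # full dense 3D face shape
        return coeff, dense_shape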

Keywords: 3D Face Modeling, 3D Morphable Shape Model, 3D Reconstruction, 3D Correspondence.

322 An Experimental Consideration of the Hybrid Architecture Based on the Situated Action Generator

Authors: Serin Lee, Takashi Kubota, Ichiro Nakatani

Abstract:

Approaches for making an agent generate intelligent actions in the AI field can be roughly categorized into two kinds: classical planning and situated action systems. It is well known that each system has its own strengths and weaknesses, and each also has its own application field. In particular, most situated action systems do not directly deal with logical problems. This paper first briefly describes a novel action generator that situatedly extracts a set of actions likely to help achieve the goal in the current situation within a relaxed logical space. After performing the action set, the agent should recognize the new situation to decide the next likely action set. However, since each extracted action is only an approximation of an action that helps achieve the goal, the agent can become caught in a deadlock. This paper proposes a newly developed hybrid architecture to solve this problem, which combines the novel situated action generator with a conventional planner. Empirical results in several planning domains show that the quality of the resulting path to the goal is mostly acceptable while retaining fast response times, and suggest a correlation between the structure of the problem and the organization of the system that generates the actions.
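
A minimal sketch of the hybrid control loop described (Python; all callables are hypothetical interfaces, and the deadlock test is simplified to a stall counter):

    def hybrid_agent(state, goal_test, situated_actions, planner, apply_action,
                     heuristic, max_stall=3):
        """Act reactively with the situated action generator; fall back to the
        deliberative planner when no progress is made (a deadlock)."""
        stall = 0
        while not goal_test(state):
            progressed = False
            for action in situated_actions(state):       # fast, approximate action set
                candidate = apply_action(state, action)
                if heuristic(candidate) < heuristic(state):
                    state, progressed = candidate, True
                    break
            stall = 0 if progressed else stall + 1
            if stall >= max_stall:                        # deadlock: deliberate
                for action in planner(state):             # conventional planner's plan
                    state = apply_action(state, action)
                stall = 0
        return state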

Keywords: Situated reasoning, situated action, planning, hybrid architecture

321 A New Source Code Auditing Algorithm for Detecting LFI and RFI in PHP Programs

Authors: Seyed Ali Mir Heydari, Mohsen Sayadiharikandeh

Abstract:

Static analysis of source code is used for auditing web applications to detect vulnerabilities. In this paper, we propose a new algorithm to analyze PHP source code for detecting potential LFI and RFI vulnerabilities. In our approach, we first define patterns for finding functions that have the potential to be abused because of unhandled user inputs. More precisely, we use regular expressions as a fast and simple method for defining vulnerability-detection patterns. As inclusion functions can also be used in a safe way, many false positives (FPs) can occur. The first cause of these FPs can be that the function does not use a user-supplied variable as an argument, so we extract a list of user-supplied variables to be used for detecting vulnerable lines of code. On the other hand, since a vulnerability can spread among variables, for example through multi-level assignment, we also try to extract hidden user-supplied variables. We use the resulting list to decrease the false positives of our method. Finally, as there exist ways to prevent the vulnerability of inclusion functions, we also define patterns to detect them and further decrease our false positives.
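
A toy version of the pattern-plus-taint-propagation idea (Python; the regular expressions and the lack of sanitization handling are far simpler than the algorithm in the paper):

    import re

    TAINT_SOURCE = re.compile(r'\$_(GET|POST|REQUEST|COOKIE)\b')
    ASSIGNMENT   = re.compile(r'(\$\w+)\s*=\s*(.+);')
    INCLUSION    = re.compile(r'\b(include|include_once|require|require_once)\b\s*\(?\s*([^;]+)')

    def scan_php(source):
        """Flag include/require calls whose argument can carry user input."""
        tainted, findings = set(), []
        for lineno, line in enumerate(source.splitlines(), 1):
            m = ASSIGNMENT.search(line)
            if m and (TAINT_SOURCE.search(m.group(2)) or
                      any(v in m.group(2) for v in tainted)):
                tainted.add(m.group(1))                   # direct or multi-level taint
            m = INCLUSION.search(line)
            if m and (TAINT_SOURCE.search(m.group(2)) or
                      any(v in m.group(2) for v in tainted)):
                findings.append((lineno, line.strip()))
        return findings

    demo = '$page = $_GET["p"];\n$path = $page . ".php";\ninclude($path);'
    print(scan_php(demo))   # flags line 3 via the hidden user-supplied variable $path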

Keywords: User-supplied Variables, hidden user-supplied variables, PHP vulnerabilities.

320 Pension Plan Member’s Investment Strategies with Transaction Cost and Couple Risky Assets Modelled by the O-U Process

Authors: Udeme O. Ini, Edikan E. Akpanibah

Abstract:

This paper studies the optimal investment strategies for a plan member (PM) in a defined contribution (DC) pension scheme with transaction costs, taxes on invested funds and a couple of risky assets (stocks) under the Ornstein-Uhlenbeck (O-U) process. The PM's portfolio is assumed to consist of a risk-free asset and two risky assets, where the two risky assets are driven by the O-U process. The Legendre transformation and dual theory are used to transform the resulting optimal control problem, a nonlinear partial differential equation (PDE), into a linear PDE, which is then solved for explicit solutions of the optimal investment strategies for a PM exhibiting constant absolute risk aversion (CARA) using a change-of-variable technique. Furthermore, theoretical analysis is used to study the influence of some sensitive parameters on the optimal investment strategies, with the observation that the optimal investment strategies for the two risky assets increase with the dividend and decrease with increases in the tax on the invested funds, the risk-aversion coefficient, the initial fund size and the transaction cost.
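
The abstract does not spell out the asset dynamics or the utility; for orientation only, a generic Ornstein-Uhlenbeck specification, the CARA utility, and the Legendre (dual) transform commonly used to linearize such HJB problems take the following forms (illustrative, not necessarily the authors' exact model):

    dS_i(t) = \kappa_i (\theta_i - S_i(t))\, dt + \sigma_i\, dW_i(t), \qquad i = 1, 2
    U(x) = -\tfrac{1}{q} e^{-q x}, \qquad q > 0 \quad \text{(constant absolute risk aversion)}
    \widehat{V}(t, z) = \sup_{x > 0} \{ V(t, x) - z x \}, \qquad z > 0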

Keywords: Ornstein-Uhlenbeck process, portfolio management, Legendre transforms, CARA utility.

319 Ontology-based Concept Weighting for Text Documents

Authors: Hmway Hmway Tar, Thi Thi Soe Nyaunt

Abstract:

Document clustering has become an essential technology with the popularity of the Internet, which means that fast, high-quality document clustering techniques play a central role. Text clustering, or simply clustering, is about discovering semantically related groups in an unstructured collection of documents. Clustering has been very popular for a long time because it provides unique ways of digesting and generalizing large amounts of information. One of the issues in clustering is extracting proper features (concepts) of a problem domain. Existing clustering technology mainly focuses on term weight calculation. To achieve more accurate document clustering, more informative features, including concept weights, are important. Feature selection is important for the clustering process because irrelevant or redundant features may misguide the clustering results. To counteract this issue, the proposed system introduces concept weights for a text clustering system based on the k-means algorithm and the principles of ontology, so that the importance of the words in a cluster can be identified by the weight values. To a certain extent, this has resolved the semantic problem in specific areas.
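
A minimal sketch of concept weighting layered on tf-idf before k-means (Python with scikit-learn; the concept list and boost factors are hypothetical stand-ins for weights that would come from the ontology):

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    docs = ["heart disease and cardiac treatment", "cardiac surgery for heart patients",
            "stock market and equity crash", "equity prices and stock trends"]
    concept_boost = {"heart": 2.0, "cardiac": 2.0, "stock": 2.0, "equity": 2.0}

    vec = TfidfVectorizer()
    X = vec.fit_transform(docs).toarray()
    for term, boost in concept_boost.items():
        if term in vec.vocabulary_:
            X[:, vec.vocabulary_[term]] *= boost          # concept weight on top of tf-idf

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(labels)   # e.g. [0, 0, 1, 1]: medical vs. financial documents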

Keywords: Clustering, Concept Weight, Document clustering, Feature Selection, Ontology

318 A Semi-Fragile Signature based Scheme for Ownership Identification and Color Image Authentication

Authors: M. Hamad Hassan, S.A.M. Gilani

Abstract:

In this paper, a novel scheme is proposed for ownership identification and authentication of color images, deploying cryptography and digital watermarking as the underlying technologies. The former is used to compute a content-based hash and the latter to embed the watermark. The host image whose owner will claim rightful ownership is first transformed from RGB to the YST color space, designed exclusively for watermarking-based applications. Geometrically, YS ⊥ T, and the T channel corresponds to the chrominance component of the color image and is therefore suitable for embedding the watermark. The T channel is divided into 4×4 non-overlapping blocks; the block size is important for enhanced localization, security and low computation. Each block, along with the ownership information, is then processed by SHA160, a one-way hash function, to compute the content-based hash, which is always unique and resistant against birthday attacks, instead of using MD5, which may lead to the collision condition H(m)=H(m'). The watermark payload varies from block to block and is computed from the variance factor α. The quality of the watermarked images is quite high both subjectively and objectively. Our scheme is blind, computationally fast and exactly locates the tampered region.
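
A minimal sketch of the per-block content hash (Python; SHA160 is taken here to mean the 160-bit SHA-1 digest, and the payload computation from the variance factor is omitted):

    import hashlib
    import numpy as np

    def block_hashes(channel, owner_id, block=4):
        """Hash each non-overlapping block of one image channel together with
        the ownership information."""
        h, w = channel.shape
        digests = {}
        for r in range(0, h - h % block, block):
            for c in range(0, w - w % block, block):
                blk = channel[r:r + block, c:c + block]
                digests[(r, c)] = hashlib.sha1(blk.tobytes() + owner_id.encode()).hexdigest()
        return digests

    t_channel = np.random.default_rng(0).integers(0, 256, size=(16, 16), dtype=np.uint8)
    print(len(block_hashes(t_channel, "owner-42")))   # 16 blocks for a 16x16 channel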

Keywords: Hash Collision, LSB, MD5, PSNR, SHA160.

317 Data Hiding by Vector Quantization in Color Image

Authors: Yung-Gi Wu

Abstract:

With the growth of computers and networks, digital data can spread anywhere in the world quickly. In addition, digital data can also be copied or tampered with easily, so security has become an important topic in the protection of digital data. Digital watermarking is a method to protect the ownership of digital data, although embedding the watermark inevitably influences the image quality. In this paper, Vector Quantization (VQ) is used to embed the watermark into the image to fulfill the goal of data hiding. This kind of watermarking is invisible, which means that users will not notice the embedded watermark even though the watermarked image differs slightly from the original image. Meanwhile, VQ imposes a heavy computational burden, so we adopt a fast VQ encoding scheme using partial distortion search (PDS) and a mean approximation scheme to speed up the data hiding process. The watermarks we hide in the image can be gray-level, bi-level or color images, and text can also be embedded as a watermark. In order to test the robustness of the system, we use Photoshop to perform sharpening, cropping and other alterations and check whether the extracted watermark is still recognizable. Experimental results demonstrate that the proposed system can resist these three kinds of tampering in general cases.
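
A minimal sketch of partial distortion search (Python): the squared error is accumulated dimension by dimension, and a codeword is abandoned as soon as its partial sum exceeds the best distortion found so far.

    def pds_encode(vector, codebook):
        """Return (index, distortion) of the nearest codeword using early abandoning."""
        best_idx, best_dist = -1, float('inf')
        for idx, code in enumerate(codebook):
            dist = 0.0
            for x, c in zip(vector, code):
                dist += (x - c) ** 2
                if dist >= best_dist:          # partial distortion already too large
                    break
            else:                              # completed: this codeword is the best so far
                best_idx, best_dist = idx, dist
        return best_idx, best_dist

    print(pds_encode([1.0, 2.0], [[0.0, 0.0], [1.0, 2.5], [5.0, 5.0]]))   # (1, 0.25)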

Keywords: Data hiding, vector quantization, watermark.

316 Rapid Method for Low Level 90Sr Determination in Seawater by Liquid Extraction Technique

Authors: S. Visetpotjanakit, N. Nakkaew

Abstract:

The determination of low-level 90Sr in seawater has been widely developed for the purposes of environmental monitoring and radiological research, because 90Sr is one of the most hazardous radionuclides released into the atmosphere during the testing of nuclear weapons, from waste discharged in the generation of nuclear energy, and from nuclear accidents occurring at power plants. A liquid extraction technique using bis-2-ethylhexyl-phosphoric acid to separate and purify yttrium, followed by Cherenkov counting with a liquid scintillation counter to determine 90Y in secular equilibrium with 90Sr, was developed to monitor 90Sr in the Asia Pacific Ocean. The analytical performance was validated against accuracy, precision, and trueness criteria. For statistical evaluation, 90Sr determination in seawater was performed using 30-liter spiked seawater samples at various low concentrations in the range 0.01-1 Bq/L and a 0.5-liter IAEA-RML-2015-01 proficiency test sample. The results had a relative bias in the range from 3.41% to 12.28%, which is below the accepted relative bias of ±25% and passes the criteria, confirming that our analytical approach for the determination of low levels of 90Sr in seawater is acceptable. Moreover, the approach is economical, non-laborious and fast.

Keywords: Proficiency test, radiation monitoring, seawater, strontium determination.

315 Faster FPGA Routing Solution using DNA Computing

Authors: Manpreet Singh, Parvinder Singh Sandhu, Manjinder Singh Kahlon

Abstract:

There are many classical algorithms for finding routings in FPGAs, but using DNA computing the routes can be found efficiently and quickly. The run-time complexity of DNA algorithms is much lower than that of the classical algorithms used for solving routing in FPGAs. Research in DNA computing is at an early stage. The high information density of DNA molecules and the massive parallelism involved in DNA reactions make DNA computing a powerful tool, and many research accomplishments have shown that any procedure that can be programmed on a silicon computer can be realized as a DNA computing procedure. In this paper we propose a two-tier approach to the FPGA routing problem. First, the geometric FPGA detailed routing task is transformed into a Boolean satisfiability equation with the property that any assignment of input variables that satisfies the equation specifies a valid routing: a satisfying assignment for a particular route results in a valid routing, and the absence of a satisfying assignment implies that the layout is unroutable. In the second step, a DNA search algorithm is applied to this Boolean equation to resolve the routing alternatives, utilizing the properties of DNA computation. The simulated results are satisfactory and indicate the applicability of DNA computing to the FPGA routing problem.
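
A toy illustration of the first tier, routing as Boolean satisfiability (Python; a brute-force search stands in for the DNA search algorithm, and the two-net, two-track instance is hypothetical):

    from itertools import product

    # Variables: x1/x2 = net A uses track 1/2, y1/y2 = net B uses track 1/2.
    # Clauses: each net gets a track; no track is shared by both nets.
    clauses = [(('x1', True), ('x2', True)),
               (('y1', True), ('y2', True)),
               (('x1', False), ('y1', False)),
               (('x2', False), ('y2', False))]
    variables = ['x1', 'x2', 'y1', 'y2']

    def satisfiable(clauses, variables):
        for bits in product([False, True], repeat=len(variables)):
            assign = dict(zip(variables, bits))
            if all(any(assign[v] == want for v, want in clause) for clause in clauses):
                return assign                    # a satisfying assignment = a valid routing
        return None                              # unsatisfiable = the layout is unroutable

    print(satisfiable(clauses, variables))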

Keywords: FPGA, Routing, DNA Computing.

314 Splitting Modified Donor-Cell Schemes for Spectral Action Balance Equation

Authors: Tanapat Brikshavana, Anirut Luadsong

Abstract:

The spectral action balance equation is used to simulate short-crested wind-generated waves in shallow-water areas such as coastal regions and inland waters. The equation involves two spatial dimensions, wave direction and wave frequency, and can be solved by the finite difference method. When this equation, with dominating propagation velocity terms, is discretized using central differences, stability problems occur if the grid spacing is chosen too coarse. In this paper, we introduce the splitting modified donor-cell scheme to avoid these stability problems and prove that it is consistent with the modified donor-cell scheme and has the same accuracy. The splitting modified donor-cell scheme is adopted to split the wave spectral action balance equation into four one-dimensional problems, each of which yields an independent tridiagonal linear system. Each smaller system can be solved by direct or iterative methods, and the systems can be solved at the same time, which is very fast when performed on a multi-core computer.
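
Each one-dimensional sub-problem reduces to a tridiagonal system, which can be solved directly with the Thomas algorithm; a minimal sketch (Python):

    import numpy as np

    def thomas(a, b, c, d):
        """Solve a tridiagonal system: sub-diagonal a (n-1), diagonal b (n),
        super-diagonal c (n-1), right-hand side d (n)."""
        n = len(b)
        cp, dp = np.zeros(n - 1), np.zeros(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):
            m = b[i] - a[i - 1] * cp[i - 1]
            if i < n - 1:
                cp[i] = c[i] / m
            dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
        x = np.zeros(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    print(thomas([1.0, 1.0], [4.0, 4.0, 4.0], [1.0, 1.0], [5.0, 6.0, 5.0]))  # [1. 1. 1.]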

Keywords: donor-cell scheme, parallel algorithm, spectral action balance equation, splitting method.

313 Conjugate Mixed Convection Heat Transfer and Entropy Generation of Cu-Water Nanofluid in an Enclosure with Thick Wavy Bottom Wall

Authors: Sanjib Kr Pal, S. Bhattacharyya

Abstract:

Mixed convection of a Cu-water nanofluid in an enclosure with a thick wavy bottom wall has been investigated numerically. A coordinate transformation method is used to transform the computational domain into an orthogonal coordinate system. The governing equations in the computational domain are solved through a pressure-correction based iterative algorithm. The fluid flow and heat transfer characteristics are analyzed for a wide range of Richardson number (0.1 ≤ Ri ≤ 5), nanoparticle volume concentration (0.0 ≤ ϕ ≤ 0.2), amplitude (0.0 ≤ α ≤ 0.1) of the thick wavy bottom wall and the wave number (ω) at a fixed Reynolds number. The results show that the heat transfer rate increases remarkably when nanoparticles are added. The heat transfer rate depends on the wavy wall amplitude and wave number and decreases with increasing Richardson number for a fixed amplitude and wave number. The Bejan number and the entropy generation are determined to analyze the thermodynamic optimization of the mixed convection.

Keywords: Entropy generation, mixed convection, conjugate heat transfer, numerical, nanofluid, wall waviness.

312 Rapid Frequency Response Measurement of Power Conversion Products with Coherence-Based Confidence Analysis

Authors: Tomi Roinila, Aki Taskinen, Matti Vilkko

Abstract:

Switched-mode converters now play a significant role in modern society. Their operation is often crucial in various electrical applications affecting everyday life; therefore, the quality of the converters needs to be reliably verified. Recent studies have shown that converters can be fully characterized by a set of frequency responses, which can be efficiently used to validate their proper operation. Consequently, several methods have been proposed to measure the frequency responses quickly and accurately, most often based on correlation techniques. These measurement methods are highly sensitive to external errors and system nonlinearities, a fact that has often been overlooked, so the necessary uncertainty analysis of the measured responses has been neglected. This paper presents a simple approach to analyzing the noise and nonlinearities in frequency-response measurements of switched-mode converters. Coherence analysis is applied to form a confidence interval characterizing the noise and nonlinearities involved in the measurements. The presented method is verified by practical measurements from a high-frequency switched-mode converter.
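
A minimal sketch of attaching a coherence-based confidence figure to a measured frequency response (Python with SciPy; the single-pole plant, the noise level and the standard random-error formula ε ≈ sqrt(1-γ²)/(|γ|·sqrt(2·n_avg)) are illustrative assumptions, not the authors' exact procedure):

    import numpy as np
    from scipy import signal

    fs, n = 100_000, 2 ** 16
    rng = np.random.default_rng(0)
    u = rng.standard_normal(n)                                  # broadband excitation
    y = signal.lfilter([0.1], [1.0, -0.9], u) + 0.05 * rng.standard_normal(n)

    f, gamma2 = signal.coherence(u, y, fs=fs, nperseg=1024)     # magnitude-squared coherence
    n_avg = n // 1024                                           # roughly the number of averages
    eps = np.sqrt(1.0 - gamma2) / (np.sqrt(np.clip(gamma2, 1e-12, 1.0)) * np.sqrt(2.0 * n_avg))
    print('worst-case relative error of the |H| estimate:', eps.max())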

Keywords: Switched-mode converters, Frequency analysis, Coherence Analysis.

311 Evaluation of Groundwater Unit Hydrograph of Kavar-Maharloo Aquifer

Authors: Mohammad Hosein Hojati, Fardin Boustani

Abstract:

Groundwater is one of the most important water resources in Fars province. Based on this study, 95 percent of the total annual water consumption in Fars is used for agriculture, whereas the shares for domestic and industrial uses are 4 and 1 percent, respectively. Population growth, urban and industrial growth, and agricultural development in Fars have created a condition of water stress. In this province, farmers and other users are pumping groundwater faster than its natural replenishment rate, causing a continuous drop in groundwater tables and depletion of this resource. In this research, the variation of the groundwater level, its effects and ways to help control groundwater levels in the aquifer of the Kavar-Maharloo plain in Fars province were evaluated. Excessive exploitation of groundwater in this aquifer has caused groundwater levels to fall too fast or to unacceptable levels. The average drawdown of the groundwater level in this plain was 17 meters from 1995 to 2006. The purpose of this study is to evaluate water level changes in the Kavar-Maharloo aquifer in Fars province in order to determine the areas of greatest depletion and the causes of depletion, and to predict the remaining life of the aquifer.

Keywords: Aquifer, groundwater depletion, water table

310 Automated Optic Disc Detection in Retinal Images of Patients with Diabetic Retinopathy and Risk of Macular Edema

Authors: Arturo Aquino, Manuel Emilio Gegundez, Diego Marin

Abstract:

In this paper, a new automated methodology to detect the optic disc (OD) in retinal images from patients at risk of Diabetic Retinopathy (DR) and Macular Edema (ME) is presented. The detection procedure comprises two independent methodologies. On one hand, a location methodology obtains a pixel that belongs to the OD using image contrast analysis and structure filtering techniques; on the other hand, a boundary segmentation methodology estimates a circular approximation of the OD boundary by applying mathematical morphology, edge detection techniques and the Circular Hough Transform. The methodologies were tested on a set of 1200 images composed of 229 retinographies from patients affected by DR with risk of ME, 431 with DR and no risk of ME, and 540 images of healthy retinas. The location methodology obtained a 98.83% success rate, whereas the OD boundary segmentation methodology obtained a good circular OD boundary approximation in 94.58% of cases. The average computational time measured over the total set was 1.67 seconds for OD location and 5.78 seconds for OD boundary segmentation.
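
For the boundary stage, a minimal Circular Hough Transform sketch using scikit-image (the file name and the radius range are hypothetical; the contrast analysis, structure filtering and morphology steps of the paper are not reproduced):

    import numpy as np
    from skimage import io, color, feature, transform

    img = color.rgb2gray(io.imread('retina.png'))        # hypothetical retinography
    edges = feature.canny(img, sigma=2.0)
    radii = np.arange(30, 80, 2)                         # assumed plausible OD radii (pixels)
    hspaces = transform.hough_circle(edges, radii)
    accums, cx, cy, rad = transform.hough_circle_peaks(hspaces, radii, total_num_peaks=1)
    print('optic disc centre:', (cx[0], cy[0]), 'radius:', rad[0])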

Keywords: Diabetic retinopathy, macular edema, optic disc, automated detection, automated segmentation.

309 Study of the Particle Size Effect on Bubble Rise Velocities in a Three-Phase Bubble Column

Authors: Weiling Li, Wenqi Zhong, Baosheng Jin, Rui Xiao, Yong Lu, Tingting He

Abstract:

Experiments were performed in a three-phase bubble column to study variations of bubble rise velocities. The dynamic gas disengagement (DGD) technique and fast-response pressure transducers were utilized to investigate bubble rise in the column. The superficial gas velocities of large and small bubbles and the rise velocities of the large and small bubble fractions were studied considering the effect of particle size. The results show that the superficial gas velocity associated with large bubbles increases linearly with increasing overall superficial gas velocity. Particle size has little effect on both the large and small bubble superficial gas velocities. The rise velocities of the large bubble fractions are larger than those of the small bubble fractions, and they show different tendencies at low and high superficial gas velocities when the particle size is changed. The rise velocities of the small bubble fractions first increased and then tended to decrease as the particle size became larger.

Keywords: Bubble rise velocity, gas–liquid–solid, particle size effect, three–phase bubble column.

308 A Force-directed Graph Drawing based on the Hierarchical Individual Timestep Method

Authors: T. Matsubayashi, T. Yamada

Abstract:

In this paper, we propose a fast and efficient method for drawing very large-scale graph data. The conventional force-directed method proposed by Fruchterman and Reingold (the FR method) is well known. It defines repulsive forces between every pair of nodes and attractive forces between nodes connected by an edge, and calculates the corresponding potential energy. An optimal layout is obtained by iteratively updating node positions to minimize the potential energy. Here, the positions of all nodes are updated at the same time at every global timestep. In the proposed method, each node has its own individual time and time step, and nodes are updated at different frequencies depending on the local situation. The proposed method is inspired by the hierarchical individual time step method used for high-accuracy calculations of dense particle fields, such as star clusters, in astrophysical dynamics. Experiments show that the proposed method outperforms the original FR method in both speed and accuracy. We implement the proposed method on the MDGRAPE-3 PCI-X special-purpose parallel computer and realize a speed enhancement of several hundred times.
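
A rough sketch of a force-directed layout in which each node keeps its own step size (Python; this simple per-node cooling rule only illustrates the idea of individual updates, not the paper's hierarchical individual-timestep scheme):

    import numpy as np

    def fr_layout_individual_steps(adj, n_iter=200, k=1.0, seed=0):
        """adj: dense 0/1 adjacency matrix; returns 2D node positions."""
        rng = np.random.default_rng(seed)
        n = adj.shape[0]
        pos = rng.random((n, 2))
        step = np.full(n, 0.1)                               # per-node step size
        for _ in range(n_iter):
            disp = np.zeros((n, 2))
            for i in range(n):
                d = pos[i] - pos
                dist = np.linalg.norm(d, axis=1) + 1e-9
                rep = (k * k / dist ** 2)[:, None] * d       # repulsion from every node
                att = -adj[i][:, None] * (dist / k)[:, None] * d   # attraction along edges
                disp[i] = (rep + att).sum(axis=0)
            norm = np.linalg.norm(disp, axis=1) + 1e-9
            pos += disp / norm[:, None] * np.minimum(norm, step)[:, None]
            step = np.where(norm < step, step * 0.9, step)   # nearly-settled nodes cool down
        return pos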

Keywords: visualization, graph drawing, Internet Map

307 A Frugal Bidding Procedure for Replicating WWW Content

Authors: Samee Ullah Khan, C. Ardil

Abstract:

Fine-grained data replication over the Internet allows duplication of frequently accessed data objects, as opposed to entire sites, to certain locations so as to improve the performance of large-scale content distribution systems. In a distributed system, agents representing their sites try to maximize their own benefit, since they are driven by different goals such as minimizing their communication costs, latency, etc. In this paper, we use game-theoretical techniques, and in particular auctions, to identify a bidding mechanism that encapsulates the selfishness of the agents while having a controlling hand over them. In essence, the proposed game-theory based mechanism is the study of what happens when independent agents act selfishly and how to control them to maximize the overall performance. A bidding mechanism asks how one can design systems so that agents' selfish behavior results in the desired system-wide goals. Experimental results reveal that this mechanism provides excellent solution quality while maintaining fast execution times. The comparisons are recorded against some well-known techniques such as greedy, branch and bound, game-theoretical auctions and genetic algorithms.

Keywords: Internet, data content replication, static allocation, mechanism design, equilibrium.

306 Robust Digital Cinema Watermarking

Authors: Sadi Vural, Hiromi Tomii, Hironori Yamauchi

Abstract:

With the advent of digital cinema and digital broadcasting, copyright protection of video data has become one of the most important issues. We present a novel method of watermarking for video image data based on hardware and digital wavelet transform techniques and name it "traceable watermarking", because the watermarked data is constructed before the transmission process and traced after it has been received by an authorized user. In our method, we embed the watermark into the lowest part of each image frame of the decoded video using a hardware LSI. Digital cinema is an important application for traceable watermarking, since digital cinema systems make use of watermarking technology during content encoding, encryption, transmission, decoding and all intermediate processes. The watermark is embedded into randomly selected movie frames using hash functions, and the embedded watermark information can be extracted from the decoded video data without access to the original movie data. Our experimental results show that the proposed traceable watermarking method for digital cinema systems is much better than conventional watermarking techniques in terms of robustness, image quality, speed, simplicity and structural robustness.

Keywords: Decoder, Digital content, JPEG2000 Frame, System-On-Chip, traceable watermark, Hash Function, CRC-32.

305 Dimensionality Reduction of PSSM Matrix and its Influence on Secondary Structure and Relative Solvent Accessibility Predictions

Authors: Rafał Adamczak

Abstract:

State-of-the-art methods for secondary structure (Porter, Psi-PRED, SAM-T99sec, Sable) and solvent accessibility (Sable, ACCpro) predictions use evolutionary profiles represented by the position-specific scoring matrix (PSSM). It has been demonstrated that evolutionary profiles are the most important features in the feature space for these predictions. Unfortunately, applying the PSSM matrix leads to high-dimensional feature spaces that may create problems with parameter optimization and generalization. Several recently published studies suggested that applying feature extraction to the PSSM matrix may result in improvements in secondary structure predictions. However, none of the top-performing methods considered here utilizes dimensionality reduction to improve generalization. In the present study, we used simple and fast methods for feature selection (t-statistics, information gain) that allow us to decrease the dimensionality of the PSSM matrix by 75% and improve generalization in the case of secondary structure prediction compared to the Sable server.
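
A minimal sketch of the feature-selection step using information gain (mutual information) in scikit-learn; the window size, feature count and data are hypothetical, and the 75% reduction mirrors the figure quoted in the abstract:

    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 300))          # e.g. a flattened 20x15 PSSM window per residue
    y = rng.integers(0, 3, size=5000)         # 3-state secondary structure labels

    selector = SelectKBest(mutual_info_classif, k=75)   # keep the top 25% of features
    X_reduced = selector.fit_transform(X, y)
    print(X.shape, '->', X_reduced.shape)               # (5000, 300) -> (5000, 75)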

Keywords: Secondary structure prediction, feature selection, position specific scoring matrix.

304 Study of Two Writing Schemes for a Magnetic Tunnel Junction Based On Spin Orbit Torque

Authors: K. Jabeur, L. D. Buda-Prejbeanu, G. Prenat, G. Di Pendina

Abstract:

MRAM technology provides a combination of fast access time, non-volatility, data retention and endurance. While growing interest is given to two-terminal Magnetic Tunnel Junctions (MTJs) based on Spin-Transfer Torque (STT) switching as the potential candidate for a universal memory, their reliability is dramatically decreased because of the shared writing/reading path. The three-terminal MTJ based on the Spin-Orbit Torque (SOT) approach revives the hope of an ideal MRAM: it can overcome the reliability barrier encountered in current two-terminal MTJs by separating the reading and writing paths. In this paper, we study two possible writing schemes for the SOT-MTJ device based on recently fabricated samples. While the first is based on precessional switching, the second requires the presence of a permanent magnetic field. Based on an accurate Verilog-A model, we simulate the two writing techniques and highlight the advantages and drawbacks of each. Using the second technique, pioneering logic circuits based on the three-terminal architecture of the SOT-MTJ described in this work are under development, with preliminary attractive results.

Keywords: Spin orbit Torque, Magnetic Tunnel Junction, MRAM, Spintronic, Circuit simulation.

303 A Materialized View Approach to Support Aggregation Operations over Long Periods in Sensor Networks

Authors: Minsoo Lee, Julee Choi, Sookyung Song

Abstract:

The increasing interest in processing data created by sensor networks has evolved into approaches that implement sensor networks as databases. The aggregation operator, which calculates a value such as an average or a sum from a large group of data, is an essential function that needs to be provided when implementing such sensor network databases. This work proposes adding a DURING clause to TinySQL to calculate values over a specific long period and suggests a way to implement the aggregation service in sensor networks by applying the materialized view and incremental view maintenance techniques used in data warehouses. In sensor networks, data values are passed from child nodes to parent nodes, and an aggregation value is computed at the root node. Since such root nodes need to be memory-efficient and low-powered, recomputing aggregate values from all past and current data becomes a problem. Therefore, applying incremental view maintenance techniques can reduce memory consumption and support fast computation of aggregate values.
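
A minimal sketch of the incremental-maintenance idea for an average (Python; the TinySQL DURING clause and the in-network routing are not modelled, only the root node's state):

    class RunningAverage:
        """The root node keeps only (sum, count) and folds in each new reading
        instead of recomputing the aggregate over all past data."""
        def __init__(self):
            self.total, self.count = 0.0, 0

        def insert(self, value):
            self.total += value
            self.count += 1

        def delete(self, value):              # e.g. a reading leaving the DURING window
            self.total -= value
            self.count -= 1

        def value(self):
            return self.total / self.count if self.count else None

    view = RunningAverage()
    for reading in (21.5, 22.0, 22.4):
        view.insert(reading)
    print(view.value())                       # 21.966...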

Keywords: Aggregation, Incremental View Maintenance, Materialized view, Sensor Network.

302 Gray Level Image Encryption

Authors: Roza Afarin, Saeed Mozaffari

Abstract:

The aim of this paper is image encryption using a Genetic Algorithm (GA). The proposed encryption method consists of two phases. In the modification phase, pixel locations are altered to reduce the correlation among adjacent pixels. Then, pixel values are changed in the diffusion phase to encrypt the input image. Both phases are performed by a GA with binary chromosomes. For the modification phase, these binary patterns are generated by the Local Binary Pattern (LBP) operator, while for the diffusion phase the binary chromosomes are obtained by Bit Plane Slicing (BPS). The initial population of the GA consists of the rows and columns of the input image. Instead of subjective selection of parents from this initial population, a random generator with a predefined key is utilized, since it is necessary to be able to decrypt the coded image and reconstruct the initial input image. The fitness function is defined as the average number of 0-to-1 transitions in the LBP image and the histogram uniformity in the modification and diffusion phases, respectively. The randomness of the encrypted image is measured by entropy, correlation coefficients and histogram analysis. Experimental results show that the proposed method is fast enough and can be used effectively for image encryption.
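
Two of the randomness metrics mentioned can be computed directly; a minimal sketch (Python, on a random stand-in for a cipher image):

    import numpy as np

    def entropy(img):
        """Shannon entropy of an 8-bit image; close to 8 bits for a well-mixed cipher image."""
        p = np.bincount(img.ravel(), minlength=256) / img.size
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def adjacent_correlation(img):
        """Correlation of horizontally adjacent pixels; near 0 after good encryption."""
        a = img[:, :-1].ravel().astype(float)
        b = img[:, 1:].ravel().astype(float)
        return float(np.corrcoef(a, b)[0, 1])

    cipher = np.random.default_rng(0).integers(0, 256, size=(256, 256), dtype=np.uint8)
    print(entropy(cipher), adjacent_correlation(cipher))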

Keywords: Correlation coefficients, Genetic algorithm, Image encryption, Image entropy.
