Search results for: Fast Haar Transform
376 Fighter Aircraft Evaluation and Selection Process Based on Triangular Fuzzy Numbers in Multiple Criteria Decision Making Analysis Using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS)
Authors: C. Ardil
Abstract:
This article presents a multiple criteria evaluation approach to uncertainty, vagueness, and imprecision analysis for ranking alternatives with fuzzy data for decision making using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The fighter aircraft evaluation and selection decision making problem is modeled in a fuzzy environment with triangular fuzzy numbers. The fuzzy decision information related to the fighter aircraft selection problem is taken into account in ordering the alternatives and selecting the best candidate. The basic fuzzy TOPSIS procedure steps transform fuzzy decision matrices into matrices of alternatives evaluated according to all decision criteria. A practical numerical example illustrates the proposed approach to the fighter aircraft selection problem.
Keywords: triangular fuzzy number (TFN), multiple criteria decision making analysis, decision making, aircraft selection, MCDMA, fuzzy TOPSIS
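As a rough illustration of the fuzzy TOPSIS steps summarized above, the sketch below follows the common Chen-style formulation with triangular fuzzy numbers (l, m, u): normalize the fuzzy decision matrix, weight it, measure vertex distances to fuzzy positive and negative ideal solutions, and rank by closeness coefficient. The aircraft ratings, weights and criteria in the example are hypothetical and not taken from the paper.

```python
import numpy as np

def fuzzy_topsis(ratings, weights, benefit):
    """Rank alternatives from a triangular-fuzzy decision matrix.

    ratings: array of shape (n_alt, n_crit, 3) with TFNs (l, m, u)
    weights: array of shape (n_crit, 3) with fuzzy weights (l, m, u)
    benefit: boolean list, True for benefit criteria, False for cost
    """
    ratings = np.asarray(ratings, dtype=float)
    weights = np.asarray(weights, dtype=float)
    n_alt, n_crit, _ = ratings.shape

    # Linear-scale normalization of each criterion column.
    norm = np.empty_like(ratings)
    for j in range(n_crit):
        col = ratings[:, j, :]
        if benefit[j]:
            norm[:, j, :] = col / col[:, 2].max()
        else:
            norm[:, j, :] = col[:, 0].min() / col[:, ::-1]   # (l/u, l/m, l/l)

    # Weighted normalized fuzzy matrix (element-wise TFN product).
    v = norm * weights[None, :, :]

    # Fuzzy positive/negative ideal solutions (common simplification).
    fpis = np.array([1.0, 1.0, 1.0])
    fnis = np.array([0.0, 0.0, 0.0])

    def vertex_dist(a, b):
        return np.sqrt(np.mean((a - b) ** 2, axis=-1))

    d_plus = vertex_dist(v, fpis).sum(axis=1)    # distance to ideal
    d_minus = vertex_dist(v, fnis).sum(axis=1)   # distance to anti-ideal
    return d_minus / (d_plus + d_minus)          # closeness coefficient

# Three hypothetical aircraft rated on two benefit criteria.
ratings = [[[5, 7, 9], [3, 5, 7]],
           [[7, 9, 10], [5, 7, 9]],
           [[3, 5, 7], [7, 9, 10]]]
weights = [[0.5, 0.7, 0.9], [0.3, 0.5, 0.7]]
print(fuzzy_topsis(ratings, weights, benefit=[True, True]))
```

A higher closeness coefficient indicates an alternative nearer to the fuzzy ideal solution.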
375 An Active Mixer with Vertical Flow Placement via a Series of Inlets for Micromixing
Authors: Pil Woo Heo, In Sub Park
Abstract:
Flows in a microchannel are laminar, which means that mixing depends only on inter-diffusion. A micromixer plays an important role in obtaining fast diagnosis results in the fields of µ-TAS (micro total analysis systems), Bio-MEMS and LOC (lab-on-a-chip).
In this paper, we propose a new active mixer with vertical flow placement via a series of inlets for micromixing. It has two inlets on the same axis, one located before the other. The sample introduced through the first inlet flows into the lower position, while the sample from the second inlet flows into the upper position. In the experiment, the samples were stacked vertically in up-down positions in a micro chamber. A PZT element was attached below the chamber, and ultrasonic waves were radiated upwards towards the samples in the micro chamber in order to accelerate the mixing. The mixing process was measured through the change of color in the micro chamber using phenolphthalein and NaOH. The results of the experiment showed that the samples in the microchamber were efficiently mixed and that our new active mixer was superior to horizontal-type active mixers in terms of grey levels and standard deviation.
Keywords: Active mixer, vertical flow placement, microchannel, bio-MEMS, LOC.
374 Model-free Prediction based on Tracking Theory and Newton Form of Polynomial
Authors: Guoyuan Qi, Yskandar Hamam, Barend Jacobus van Wyk, Shengzhi Du
Abstract:
The majority of existing predictors for time series are model-dependent and therefore require some prior knowledge for the identification of complex systems, usually involving system identification, extensive training, or online adaptation in the case of time-varying systems. Additionally, since a time series is usually generated by complex processes such as the stock market or other chaotic systems, identification, modeling or the online updating of parameters can be problematic. In this paper a model-free predictor (MFP) for a time series produced by an unknown nonlinear system or process is derived using tracking theory. An identical derivation of the MFP using the property of the Newton form of the interpolating polynomial is also presented. The MFP is able to accurately predict future values of a time series, is stable, has few tuning parameters and is desirable for engineering applications due to its simplicity, fast prediction speed and extremely low computational load. The performance of the proposed MFP is demonstrated using the prediction of the Dow Jones Industrial Average stock index.
Keywords: Forecast, model-free predictor, prediction, time series
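The Newton-form idea behind the predictor can be illustrated with plain polynomial extrapolation: setting the (p+1)-th finite difference of the series to zero gives a one-step-ahead forecast from the last p+1 samples. This is only the underlying extrapolation principle, not the authors' exact MFP derivation; the toy series and order below are illustrative.

```python
from math import comb

def newton_predict(history, order=2):
    """One-step-ahead prediction by extrapolating the Newton-form
    interpolating polynomial through the last (order+1) samples.

    Setting the (order+1)-th finite difference to zero gives
        x[t+1] = sum_{k=1}^{order+1} (-1)^(k+1) * C(order+1, k) * x[t+1-k]
    e.g. order=1 -> 2*x[t] - x[t-1]          (linear extrapolation)
         order=2 -> 3*x[t] - 3*x[t-1] + x[t-2]
    """
    n = order + 1
    if len(history) < n:
        raise ValueError("need at least order+1 past samples")
    recent = history[-n:]                      # x[t-order] ... x[t]
    return sum((-1) ** (k + 1) * comb(n, k) * recent[-k]
               for k in range(1, n + 1))

# Toy series: a quadratic trend is predicted exactly with order=2.
series = [t ** 2 for t in range(10)]
print(newton_predict(series, order=2))        # -> 100 (= 10**2)
```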
373 Effect of the Internet on Social Capital
Authors: Safaee Safiollah, Javadi Alimohammad, Javadi Maryam
Abstract:
Internet access is a vital part of the modern world and an important tool in the education of our children. It is present in schools, homes and even shopping malls. Mastering the use of the internet is likely to be an important skill for those entering the job markets of the future. An internet user can be anyone he or she wants to be in an online chat room, or play thrilling and challenging games against players from all corners of the globe. It seems that, at present or in the near future, many people may neglect relationships in the real world as those in the virtual world increase in importance. The internet also provides a fast mode of communication, bringing freedom from family bonds and mixing with different cultures and new communities. This research is an attempt to study the effect of the internet on social capital. For this purpose, a survey was conducted on a sample of 168 students of Payame Noor University in Kermanshah city, Iran. The degree of social capital was found to be moderate. Multi-variable regression showed that the variables of attractiveness of Iranian messages and interest in the internet had significant positive effects, while the variable of creating a cordial atmosphere had a significant negative effect.
Keywords: Internet, Social Capital, social participation, social trust
372 Students, Knowledge and Employability
Authors: James Moir
Abstract:
Citizens are increasingly provided with choice and customization in public services, and this has now also become a key feature of higher education in terms of policy roll-outs on personal development planning (PDP) and, more generally, as part of the employability agenda. The goal here is to transform people, in this case graduates, into active, responsible citizen-workers. A key part of this rhetoric and logic is the inculcation of graduate attributes within students. However, there has also been a concern with the issue of students' lack of engagement and perseverance with their studies. This paper sets out to explore some of these conceptions that link graduate attributes with citizenship, as well as the notion of how identity is forged through the higher education process. Examples are drawn from a quality enhancement project operated within the context of the Scottish higher education system. This is further framed within the wider context of competing and conflicting demands on higher education, exacerbated by the current worldwide economic climate. There are now pressures on students to develop their employability skills as well as their capacity to engage with global issues such as behavioural change in the light of environmental concerns. It is argued that these pressures, in effect, lead to a form of personalization that is concerned with how graduates develop their sense of identity as something that is engineered and re-engineered to meet these demands.
Keywords: students, higher education, employability, knowledge, personal development
371 Synthesis, Characterization and Performance Study of Newly Developed Amine Polymeric Membrane (APM) for Carbon Dioxide (CO2) Removal
Authors: Rizwan Nasir, Hilmi Mukhtar, Zakaria Man, Dzeti Farhah Mohshim
Abstract:
Carbon dioxide has been well associated with the greenhouse effect, and due to its corrosive nature it is an undesirable compound. A variety of physical-chemical processes are available for the removal of carbon dioxide. Previous attempts in this field have established that the alkanolamine group has the capability to remove carbon dioxide. This study therefore combined a polymeric membrane with alkanolamine solutions to fabricate amine polymeric membranes (APM) for the removal of carbon dioxide (CO2). The study examines the effect of three types of amines: monoethanolamine (MEA), diethanolamine (DEA), and methyldiethanolamine (MDEA). The effect of each alkanolamine on the morphology and performance of polyether sulfone (PES) polymeric membranes was studied. Flat sheet membranes were fabricated by the solvent evaporation method by adding the polymer and the different alkanolamine solutions to the N-Methyl-2-pyrrolidone (NMP) solvent. The final membranes were characterized using Field Emission Scanning Electron Microscopy (FESEM), Fourier Transform Infrared spectroscopy (FTIR), and Thermo-Gravimetric Analysis (TGA), and the membrane separation performance was studied. The PES-DEA and PES-MDEA membranes show a good ability to remove carbon dioxide.
Keywords: Amine Polymeric membrane, Alkanolamine solution, CO2 Removal, Characterization.
370 Study of Two MPPTs for Photovoltaic Systems Using Controllers Based in Fuzzy Logic and Sliding Mode
Authors: N. Ouldcherchali, M. S. Boucherit, L. Barazane, A. Morsli
Abstract:
In this study, we propose two techniques to track the maximum power point (MPP) of a photovoltaic system. The first is an intelligent control technique, and the second is a robust technique used for variable structure systems. In fact, the I-V and P-V characteristics of the photovoltaic generator depend on the solar irradiance and temperature. These climate changes cause the maximum power point to fluctuate, so a maximum power point tracking (MPPT) technique is required to maximize the output power. For this, we adopted a fuzzy logic controller (FLC), known for its stability and robustness, and a sliding mode controller (SMC), widely used for variable structure systems. The system comprises a photovoltaic panel (PV) and a DC-DC converter, which is considered as an adaptation stage between the PV panel and the load. The modelling and simulation of the system are developed using MATLAB/Simulink. The SMC technique provides a good tracking speed under fast-changing irradiation, while when the irradiation changes slowly or is constant, the panel power under the FLC technique presents a much smoother signal with fewer fluctuations.
Keywords: Fuzzy logic controller, maximum power point, photovoltaic system, tracker, sliding mode controller.
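A minimal sketch of a sliding-mode-style MPPT update is given below, assuming the sliding surface s = dP/dV, which vanishes at the maximum power point. The duty-cycle step size, the sign convention (written for a boost-type stage) and the sample values are assumptions for illustration, not the controllers designed in the paper.

```python
def smc_mppt_step(v, i, v_prev, i_prev, duty, step=0.005):
    """One update of a sliding-mode-style MPPT duty-cycle controller.

    Sliding surface: s = dP/dV = i + v*di/dv, zero at the maximum
    power point.  The duty cycle of the DC-DC converter is nudged in
    the direction given by sign(s); the mapping between duty and PV
    voltage assumed here is that of a boost stage (higher duty ->
    lower PV voltage), which is an assumption of this sketch.
    """
    dv = v - v_prev
    di = i - i_prev
    if abs(dv) < 1e-9:                 # avoid division by zero
        s = di                         # fall back to the current change
    else:
        s = i + v * di / dv            # dP/dV with P = V*I
    if s > 0:
        duty -= step                   # move the operating voltage up
    elif s < 0:
        duty += step                   # move the operating voltage down
    return min(max(duty, 0.0), 1.0)    # keep the duty cycle in [0, 1]

# Toy usage with two successive (voltage, current) samples.
duty = smc_mppt_step(v=17.8, i=5.1, v_prev=17.5, i_prev=5.2, duty=0.45)
print(duty)
```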
369 A Business Model Design Process for Social Enterprises: The Critical Role of the Environment
Authors: Hadia Abdel Aziz, Raghda El Ebrashi
Abstract:
Business models are shaped by their design space, or the environment they are designed to be implemented in. The rapidly changing economic, technological, political, regulatory and market external environment severely affects business logic. This is particularly true for social enterprises, whose core mission is to transform their environments; thus, their whole business logic revolves around the interchange between the enterprise and the environment. The context in which a social business operates imposes different business design constraints while, at the same time, opening up new design opportunities. It is also affected to a great extent by the impact that successful enterprises generate, a continuous loop of interaction that needs to be managed through a dynamic capability in order to generate a lasting, powerful impact. This conceptual research synthesizes and analyzes literature on social enterprise, social enterprise business models, business model innovation, business model design, and the open system view theory to propose a new business model design process for social enterprises that takes into account the critical role of environmental factors. This process would help the social enterprise develop a dynamic capability that ensures the alignment of its business model to its environmental context, thus maximizing its probability of success.
Keywords: Social enterprise, business model, business model design.
368 Key Frame Based Video Summarization via Dependency Optimization
Authors: Janya Sainui
Abstract:
With the rapid growth of digital videos and data communications, video summarization, which provides a shorter version of a video for fast browsing and retrieval, has become necessary. Key frame extraction is one of the mechanisms used to generate a video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most of the existing approaches heuristically select key frames; hence, the selected key frames may not be the most distinct frames and/or may not cover the entire content of a video. In this paper, we propose a method of video summarization which provides reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual information as our objective function for maximizing the coverage of the entire video content as well as minimizing the redundancy among selected key frames. The proposed key frame extraction algorithm finds key frames by solving an optimization problem. Through experiments, we demonstrate the success of the proposed video summarization approach, which produces a video summary with better coverage of the entire video content and less redundancy among key frames compared to state-of-the-art approaches.
Keywords: Video summarization, key frame extraction, dependency measure, quadratic mutual information, optimization.
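The selection-as-optimization idea can be sketched with a greedy procedure: score each candidate set by how well the selected frames represent all frames (coverage) minus how similar they are to each other (redundancy). The Gaussian-kernel similarity below is a simple stand-in for the quadratic mutual information measure used in the paper, and the frame features are synthetic.

```python
import numpy as np

def select_key_frames(features, k, lam=0.5, sigma=1.0):
    """Greedy key-frame selection balancing coverage and redundancy.

    features: (n_frames, d) array, e.g. per-frame colour histograms.
    The Gaussian-kernel similarity is a simple proxy for a dependency
    measure; the paper's objective is quadratic mutual information.
    """
    feats = np.asarray(features, dtype=float)
    n = len(feats)
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    sim = np.exp(-d2 / (2.0 * sigma ** 2))      # pairwise frame similarity

    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for cand in range(n):
            if cand in selected:
                continue
            chosen = selected + [cand]
            coverage = sim[:, chosen].max(axis=1).mean()   # represent all frames
            redundancy = 0.0
            if len(chosen) > 1:
                sub = sim[np.ix_(chosen, chosen)]
                redundancy = (sub.sum() - np.trace(sub)) / (len(chosen) * (len(chosen) - 1))
            score = coverage - lam * redundancy
            if score > best_score:
                best, best_score = cand, score
        selected.append(best)
    return sorted(selected)

# Toy "video": 21 synthetic frames drifting through three visual states.
rng = np.random.default_rng(0)
frames = np.concatenate([rng.normal(m, 0.1, (7, 8)) for m in (0.0, 1.0, 2.0)])
print(select_key_frames(frames, k=3))
```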
367 A Mapping Approach of Code Generation for Arinc653-Based Avionics Software
Authors: Lu Zou, Dianfu MA, Ying Wang, Xianqi Zhao
Abstract:
Avionics software architecture has transitioned from a federated architecture to integrated modular avionics (IMA). ARINC 653 (Avionics Application Standard Software Interface) is a software specification for space and time partitioning in safety-critical avionics real-time operating systems. Methods to transform abstract avionics application logic into an executable model have been proposed, but with little consideration of the code-generation input and output models specific to the ARINC 653 platform and the inner-task synchronous dynamic interaction order. In this paper, we propose an AADL-based model-driven design methodology to automatically generate a Cµ executable model on the ARINC 653 platform from the ARINC 653 architecture, defined as AADL653, in order to facilitate the development of avionics software constructed on an ARINC 653 OS. This paper presents the mapping rules between AADL653 elements and elements of the Cµ language, defines the code generation rules, and designs an automatic Cµ code generator. Then, we use a case study to illustrate our approach. Finally, we discuss related work and future research directions.
Keywords: IMA, ARINC653, AADL653, code generation.
366 On the Efficient Implementation of a Serial and Parallel Decomposition Algorithm for Fast Support Vector Machine Training Including a Multi-Parameter Kernel
Authors: Tatjana Eitrich, Bruno Lang
Abstract:
This work deals with aspects of support vector machine learning for large-scale data mining tasks. Based on a decomposition algorithm for support vector machine training that can be run in serial as well as shared memory parallel mode, we introduce a transformation of the training data that allows for the usage of an expensive generalized kernel without additional costs. We present experiments for the Gaussian kernel, but usage of other kernel functions is possible, too. In order to further speed up the decomposition algorithm, we analyze the critical problem of working set selection for large training data sets. In addition, we analyze the influence of the working set sizes on the scalability of the parallel decomposition scheme. Our tests and conclusions led to several modifications of the algorithm and the improvement of overall support vector machine learning performance. Our method allows for using extensive parameter search methods to optimize classification accuracy.
Keywords: Support Vector Machine Training, Multi-Parameter Kernels, Shared Memory Parallel Computing, Large Data
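Working set selection is the critical step discussed above. A generic illustration is the classic "most violating pair" rule used by SMO-type decomposition solvers, sketched below; it is not necessarily the exact selection rule analyzed in the paper.

```python
import numpy as np

def most_violating_pair(grad, y, alpha, C, tol=1e-3):
    """Pick a working set of two indices for SVM decomposition training.

    Implements the classic 'most violating pair' rule: i maximizes
    -y_i * grad_i over indices that can still move up, j minimizes it
    over indices that can still move down.  grad is the gradient of
    the dual objective at the current alpha.
    """
    up = ((y > 0) & (alpha < C)) | ((y < 0) & (alpha > 0))
    down = ((y > 0) & (alpha > 0)) | ((y < 0) & (alpha < C))
    score = -y * grad
    i = np.where(up, score, -np.inf).argmax()
    j = np.where(down, score, np.inf).argmin()
    if score[i] - score[j] < tol:          # KKT conditions nearly satisfied
        return None                        # converged, no working set needed
    return i, j

# Toy call: gradient of the dual at alpha = 0 is -1 for every sample.
y = np.array([1, -1, 1, -1, 1, -1], dtype=float)
alpha = np.zeros(6)
grad = -np.ones(6)
print(most_violating_pair(grad, y, alpha, C=1.0))
```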
365 3D Face Modeling based on 3D Dense Morphable Face Shape Model
Authors: Yongsuk Jang Kim, Sun-Tae Chung, Boogyun Kim, Seongwon Cho
Abstract:
A realistic 3D face model represents the pose, illumination, and expression of a face more precisely than a 2D face model, so it can be usefully utilized in various applications such as face recognition, games, avatars, and animations. In this paper, we propose a 3D face modeling method based on a 3D dense morphable shape model. The proposed method first constructs a 3D dense morphable shape model from 3D face scan data obtained using a 3D scanner. Next, it extracts and matches facial landmarks from a 2D image sequence containing the face to be modeled, and then reconstructs the 3D vertex coordinates of the landmarks using a factorization-based SfM technique. Then, the proposed method obtains a 3D dense shape model of the face to be modeled by fitting the constructed 3D dense morphable shape model to the reconstructed 3D vertices. It also makes a cylindrical texture map using the 2D face image sequence. Finally, the proposed method generates a 3D face model by rendering the 3D dense face shape model using the cylindrical texture map. Through the process of building 3D face models, the proposed method is shown to be relatively easy, fast and precise.
Keywords: 3D Face Modeling, 3D Morphable Shape Model, 3D Reconstruction, 3D Correspondence.
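The fitting step, estimating mode coefficients of the dense morphable shape model from the sparse reconstructed 3D landmarks, can be sketched as a ridge-regularized least-squares problem. The synthetic mesh, basis and regularization weight below are assumptions for illustration; the paper's own fitting procedure may differ.

```python
import numpy as np

def fit_morphable_shape(mean_shape, basis, landmarks, landmark_idx, reg=1e-3):
    """Fit a dense morphable shape model to sparse 3D landmarks.

    mean_shape:   (n_vertices, 3) mean face shape
    basis:        (n_modes, n_vertices, 3) shape variation modes
    landmarks:    (n_landmarks, 3) reconstructed 3D landmark positions
    landmark_idx: vertex index of each landmark in the dense mesh
    Returns the fitted dense shape (n_vertices, 3).
    """
    mu = mean_shape[landmark_idx].reshape(-1)                 # (3*n_landmarks,)
    A = basis[:, landmark_idx, :].reshape(len(basis), -1).T   # (3*n_landmarks, n_modes)
    b = landmarks.reshape(-1) - mu
    coeffs = np.linalg.solve(A.T @ A + reg * np.eye(len(basis)), A.T @ b)
    return mean_shape + np.tensordot(coeffs, basis, axes=1)

# Tiny synthetic example: 50-vertex mesh, 3 modes, 6 landmarks.
rng = np.random.default_rng(2)
mean_shape = rng.normal(size=(50, 3))
basis = rng.normal(size=(3, 50, 3))
dense = mean_shape + np.tensordot(np.array([0.5, -0.2, 0.1]), basis, axes=1)
idx = np.array([1, 7, 13, 22, 35, 44])
fitted = fit_morphable_shape(mean_shape, basis, dense[idx], idx)
print(np.abs(fitted - dense).max())        # small residual
```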
364 An Experimental Consideration of the Hybrid Architecture Based on the Situated Action Generator
Authors: Serin Lee, Takashi Kubota, Ichiro Nakatani
Abstract:
The approaches used to make an agent generate intelligent actions in the AI field can be roughly categorized into two groups: classical planning and situated action systems. It is well known that each system has its own strengths and weaknesses, and each has its own field of application. In particular, most situated action systems do not directly deal with logical problems. This paper first briefly describes a novel action generator that situatedly extracts a set of actions which are likely to help achieve the goal in the current situation in a relaxed logical space. After performing the action set, the agent should recognize the situation in order to decide the next likely action set. However, since the extracted actions are approximations of the actions that help to achieve the goal, the agent can become caught in a deadlock. This paper proposes a newly developed hybrid architecture to solve the problem, which combines the novel situated action generator with a conventional planner. The empirical results in several planning domains show that the quality of the resulting path to the goal is mostly acceptable while a fast response time is achieved, and suggest a correlation between the structure of the problems and the organization of each system that generates the actions.
Keywords: Situated reasoning, situated action, planning, hybrid architecture
363 A New Source Code Auditing Algorithm for Detecting LFI and RFI in PHP Programs
Authors: Seyed Ali Mir Heydari, Mohsen Sayadiharikandeh
Abstract:
Static analysis of source code is used for auditing web applications to detect vulnerabilities. In this paper, we propose a new algorithm to analyze PHP source code for detecting potential LFI and RFI vulnerabilities. In our approach, we first define patterns for finding functions which have the potential to be abused because of unhandled user inputs. More precisely, we use regular expressions as a fast and simple method to define patterns for the detection of vulnerabilities. As inclusion functions can also be used in a safe way, many false positives (FPs) may occur. The first cause of these FPs is that the function does not use a user-supplied variable as an argument. So, we extract a list of user-supplied variables to be used for detecting vulnerable lines of code. On the other side, as a vulnerability can spread among variables, for example by multi-level assignment, we also try to extract the hidden user-supplied variables. We use the resulting list to decrease the false positives of our method. Finally, as there exist some ways to prevent the vulnerability of inclusion functions, we also define patterns to detect them and further decrease our false positives.
Keywords: User-supplied variables, hidden user-supplied variables, PHP vulnerabilities.
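A much-simplified sketch of the pattern-matching and taint-propagation steps is shown below: regular expressions flag include/require calls whose argument contains a variable, and one-level assignments propagate taint from user-supplied superglobals. Real auditing needs full parsing; the patterns and the PHP snippet here are illustrative only.

```python
import re

# Flag PHP inclusion calls whose argument contains a variable.
INCLUDE_CALL = re.compile(
    r'\b(include|include_once|require|require_once)\s*\(?\s*[^;]*\$\w+', re.I)
USER_INPUT = re.compile(r'\$_(GET|POST|REQUEST|COOKIE)\b')
ASSIGNMENT = re.compile(r'\$(\w+)\s*=\s*([^;]+);')

def scan_php(source):
    """Report lines with include/require calls fed by user-supplied data."""
    tainted = set()
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        # Propagate taint through simple one-level assignments.
        m = ASSIGNMENT.search(line)
        if m and (USER_INPUT.search(m.group(2))
                  or any('$' + t in m.group(2) for t in tainted)):
            tainted.add(m.group(1))
        if INCLUDE_CALL.search(line):
            direct = USER_INPUT.search(line) is not None
            via_var = any('$' + t in line for t in tainted)
            if direct or via_var:
                findings.append((lineno, line.strip()))
    return findings

php = """<?php
$page = $_GET['page'];
$target = $page . '.php';
include($target);
include 'static/header.php';
"""
for lineno, code in scan_php(php):
    print(lineno, code)        # flags only the include fed by $_GET
```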
362 Pension Plan Member’s Investment Strategies with Transaction Cost and Couple Risky Assets Modelled by the O-U Process
Authors: Udeme O. Ini, Edikan E. Akpanibah
Abstract:
This paper studies the optimal investment strategies for a plan member (PM) in a defined contribution (DC) pension scheme with transaction cost, taxes on invested funds and a couple of risky assets (stocks) under the Ornstein-Uhlenbeck (O-U) process. The PM's portfolio is assumed to consist of a risk-free asset and two risky assets, where the two risky assets are driven by the O-U process. The Legendre transformation and dual theory are used to transform the resulting optimal control problem, which is a nonlinear partial differential equation (PDE), into a linear PDE, and the resulting linear PDE is then solved for the explicit solutions of the optimal investment strategies for a PM exhibiting constant absolute risk aversion (CARA) using a change-of-variable technique. Furthermore, theoretical analysis is used to study the influence of some sensitive parameters on the optimal investment strategies, with the observation that the optimal investment strategies for the two risky assets increase with an increase in the dividend and decrease with increases in the tax on the invested funds, the risk aversion coefficient, the initial fund size and the transaction cost.
Keywords: Ornstein-Uhlenbeck process, portfolio management, Legendre transforms, CARA utility.
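For reference, the Ornstein-Uhlenbeck dynamics assumed for the risky assets, dX_t = θ(μ − X_t)dt + σ dW_t, can be simulated with a simple Euler-Maruyama scheme. The parameter values below are hypothetical and unrelated to the paper's calibration.

```python
import numpy as np

def simulate_ou(x0, theta, mu, sigma, dt, n_steps, seed=0):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process

        dX_t = theta * (mu - X_t) dt + sigma dW_t

    used here only to illustrate the assumed price dynamics."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))                  # Brownian increment
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dw
    return x

# One year of daily steps with illustrative parameters.
path = simulate_ou(x0=1.0, theta=0.8, mu=1.2, sigma=0.15, dt=1 / 252, n_steps=252)
print(path[-1])
```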
361 Solid Dispersions of Cefixime Using β-Cyclodextrin: Characterization and in vitro Evaluation
Authors: Nagasamy Venkatesh Dhandapani, Amged Awad El-Gied
Abstract:
Cefixime, a BCS class II drug, is insoluble in water but freely soluble in acetone and in alcohol. The aqueous solubility of cefixime is poor, and it exhibits an exceptionally slow intrinsic dissolution rate. In the present study, cefixime and β-Cyclodextrin (β-CD) solid dispersions were prepared with a view to studying the effect and influence of β-CD on the solubility and dissolution rate of this poorly aqueous-soluble drug. The phase solubility profile revealed that the solubility of cefixime was increased in the presence of β-CD and was classified as AL-type. The effect of variables, such as the drug:carrier ratio, was studied. Physical characterization of the solid dispersions was carried out by Fourier transform infrared spectroscopy (FT-IR) and differential scanning calorimetry (DSC). These studies revealed that a distinct loss of drug crystallinity in the solid molecular dispersions ostensibly accounts for the enhancement of the dissolution rate in distilled water. The drug release from the prepared solid dispersions exhibited first-order kinetics. Solid dispersions of cefixime showed a 6.77-fold increase in dissolution rate over the pure drug.
Keywords: Cefixime, β-Cyclodextrin, solid dispersions, kneading method, dissolution, release kinetics.
360 Ontology-based Concept Weighting for Text Documents
Authors: Hmway Hmway Tar, Thi Thi Soe Nyaunt
Abstract:
Document clustering has become an essential technology with the popularity of the Internet, which also means that fast and high-quality document clustering techniques play a central role. Text clustering, or simply clustering, is about discovering semantically related groups in an unstructured collection of documents. Clustering has been very popular for a long time because it provides unique ways of digesting and generalizing large amounts of information. One of the issues in clustering is extracting the proper features (concepts) of a problem domain. The existing clustering technology mainly focuses on term weight calculation. To achieve more accurate document clustering, more informative features, including concept weights, are important. Feature selection is important for the clustering process because irrelevant or redundant features may misguide the clustering results. To counteract this issue, the proposed system introduces concept weights for a text clustering system developed on a k-means algorithm in accordance with the principles of ontology, so that the important words of a cluster can be identified by their weight values. To a certain extent, this resolves the semantic problem in specific areas.
Keywords: Clustering, Concept Weight, Document clustering, Feature Selection, Ontology
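A minimal sketch of concept-weighted clustering: TF-IDF term features are boosted by weights attached to domain concepts before running k-means. The concept-weight table, the documents and the scikit-learn pipeline below are assumptions for illustration; the paper derives the weights from an ontology rather than from a hand-written dictionary.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "the patient received a new drug for heart disease",
    "clinical trial of the drug showed improved heart function",
    "the football team won the league after a late goal",
    "the striker scored a goal in the final match of the league",
]

# Hypothetical ontology-derived concept weights: terms that map to
# domain concepts get boosted before clustering.
concept_weights = {"drug": 2.0, "heart": 2.0, "clinical": 1.5,
                   "goal": 2.0, "league": 2.0, "match": 1.5}

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs).toarray()
terms = vectorizer.get_feature_names_out()
boost = np.array([concept_weights.get(t, 1.0) for t in terms])
X_weighted = X * boost                      # concept-weighted features

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_weighted)
print(list(zip(labels, docs)))              # medical vs. football documents
```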
359 A Semi-Fragile Signature based Scheme for Ownership Identification and Color Image Authentication
Authors: M. Hamad Hassan, S.A.M. Gilani
Abstract:
In this paper, a novel scheme is proposed for ownership identification and authentication of color images by deploying cryptography and digital watermarking as underlying technologies. The former is used to compute the content-based hash and the latter to embed the watermark. The host image, whose rightful ownership is to be established, is first transformed from the RGB to the YST color space, which is exclusively designed for watermarking-based applications. Geometrically YS ⊥ T, and the T channel corresponds to the chrominance component of the color image and is therefore suitable for embedding the watermark. The T channel is divided into 4×4 non-overlapping blocks. The block size is important for enhanced localization, security and low computation. Each block, along with the ownership information, is then processed by SHA160, a one-way hash function, to compute the content-based hash, which is always unique and resistant against birthday attacks, instead of using MD5, which may raise the collision condition H(m) = H(m'). The watermark payload varies from block to block and is computed from the variance factor α. The quality of the watermarked images is quite high both subjectively and objectively. Our scheme is blind, computationally fast and exactly locates the tampered region.
Keywords: Hash Collision, LSB, MD5, PSNR, SHA160.
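The content-based hashing stage can be sketched as follows: each 4×4 block of a channel is hashed together with the ownership string using SHA-1 (a 160-bit digest, in line with the "SHA160" above), and the per-block digests later drive watermark embedding and tamper localization. The single-channel toy image stands in for the T channel of the YST space; the exact payload computation from the variance factor is not reproduced here.

```python
import hashlib
import numpy as np

def block_hashes(channel, owner_id, block=4):
    """Content-based hash per 4x4 block of one image channel.

    Each block is hashed together with the ownership string using
    SHA-1, so any change inside a block changes that block's digest,
    which is what allows tampered regions to be located later.
    """
    h, w = channel.shape
    digests = {}
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            blk = channel[r:r + block, c:c + block]
            msg = owner_id.encode() + blk.tobytes()
            digests[(r, c)] = hashlib.sha1(msg).hexdigest()
    return digests

# Toy 8x8 single-channel "image" standing in for the T channel.
channel = np.arange(64, dtype=np.uint8).reshape(8, 8)
d = block_hashes(channel, owner_id="owner-1234")
print(len(d), d[(0, 0)][:16])
```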
358 Data Hiding by Vector Quantization in Color Image
Authors: Yung-Gi Wu
Abstract:
With the growth of computers and networks, digital data can be spread anywhere in the world quickly. In addition, digital data can be copied or tampered with easily, so security has become an important topic in the protection of digital data. A digital watermark is a method to protect the ownership of digital data, although embedding the watermark certainly influences the image quality. In this paper, Vector Quantization (VQ) is used to embed the watermark into the image to fulfill the goal of data hiding. This kind of watermarking is invisible, which means that users will not be conscious of the existence of the embedded watermark even though the embedded image has tiny differences compared to the original image. Meanwhile, VQ carries a heavy computational burden, so we adopt a fast VQ encoding scheme using partial distortion search (PDS) and a mean approximation scheme to speed up the data hiding process. The watermarks we hide in the image can be gray-level, bi-level or color images; text can also be regarded as a watermark to embed. In order to test the robustness of the system, we use Photoshop to perform sharpening, cropping and altering to check whether the extracted watermark is still recognizable. Experimental results demonstrate that the proposed system can resist the above three kinds of tampering in general cases.
Keywords: Data hiding, vector quantization, watermark.
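The speed-up from partial distortion search (PDS) mentioned above comes from aborting the squared-error accumulation for a candidate codeword as soon as it exceeds the best distortion found so far. The sketch below shows only this nearest-codeword step on a random codebook; how the watermark bit modulates the codeword choice is left out.

```python
import numpy as np

def pds_nearest_codeword(vector, codebook):
    """Nearest-codeword search with Partial Distortion Search (PDS).

    The distortion sum for a candidate codeword is abandoned as soon
    as it exceeds the best distortion found so far, which skips most
    of the arithmetic of a full search while giving the same result.
    """
    best_idx, best_dist = -1, np.inf
    for idx, cw in enumerate(codebook):
        dist = 0.0
        for a, b in zip(vector, cw):
            dist += (a - b) ** 2
            if dist >= best_dist:          # early abort: cannot improve
                break
        else:
            best_idx, best_dist = idx, dist
    return best_idx

# Toy usage: quantize a 4-dimensional image block against a small codebook.
rng = np.random.default_rng(3)
codebook = rng.integers(0, 256, size=(16, 4)).astype(float)
block = np.array([120.0, 64.0, 200.0, 33.0])
print(pds_nearest_codeword(block, codebook))
```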
357 A Frequency Grouping Approach for Blind Deconvolution of Fairly Motionless Sources
Authors: E. S. Gower, T. Tsalaile, E. Rakgati, M. O. J. Hawksford
Abstract:
A frequency grouping approach for multi-channel instantaneous blind source separation (I-BSS) of convolutive mixtures is proposed for a lower net residual inter-symbol interference (ISI) and inter-channel interference (ICI) than the conventional short-time Fourier transform (STFT) approach. Starting in the time domain, STFTs are taken with overlapping windows to convert the convolutive mixing problem into frequency domain instantaneous mixing. Mixture samples at the same frequency but from different STFT windows are grouped together forming unique frequency groups. The individual frequency group vectors are input to the I-BSS algorithm of choice, from which the output samples are dispersed back to their respective STFT windows. After applying the inverse STFT, the resulting time domain signals are used to construct the complete source estimates via the weighted overlap-add method (WOLA). The proposed algorithm is tested for source deconvolution given two mixtures, and simulated along with the STFT approach to illustrate its superiority for fairly motionless sources.
Keywords: Blind source separation, short-time Fourier transform, weighted overlap-add method
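A sketch of the frequency-grouping step, assuming SciPy's STFT: coefficients of all mixture channels at the same frequency bin are collected across overlapping windows into one matrix per bin, which would then be handed to the instantaneous BSS algorithm of choice. Window length, overlap and the toy sinusoidal mixtures are assumptions.

```python
import numpy as np
from scipy.signal import stft

def frequency_groups(mixtures, fs=8000, nperseg=256, noverlap=192):
    """Group STFT coefficients of several mixtures by frequency bin.

    mixtures: (n_channels, n_samples) time-domain signals.
    Returns a list with one entry per frequency bin, each of shape
    (n_channels, n_frames): the 'frequency groups' fed to I-BSS.
    """
    specs = []
    for x in mixtures:
        f, t, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
        specs.append(Z)                          # (n_freqs, n_frames)
    specs = np.stack(specs)                      # (n_channels, n_freqs, n_frames)
    return [specs[:, k, :] for k in range(specs.shape[1])]

# Two toy mixtures of two sinusoids.
t = np.arange(8000) / 8000.0
s1, s2 = np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 880 * t)
mixtures = np.stack([0.7 * s1 + 0.3 * s2, 0.4 * s1 + 0.6 * s2])
groups = frequency_groups(mixtures)
print(len(groups), groups[0].shape)
```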
356 Rapid Method for Low Level 90Sr Determination in Seawater by Liquid Extraction Technique
Authors: S. Visetpotjanakit, N. Nakkaew
Abstract:
Determination of low level 90Sr in seawater has been widely developed for the purpose of environmental monitoring and radiological research because 90Sr is one of the most hazardous radionuclides, released into the atmosphere during the testing of nuclear weapons, in waste discharged from nuclear energy generation, and in nuclear accidents occurring at power plants. A liquid extraction technique using bis-2-ethylhexyl-phosphoric acid to separate and purify yttrium, followed by Cherenkov counting using a liquid scintillation counter to determine 90Y in secular equilibrium with 90Sr, was developed to monitor 90Sr in the Asia Pacific Ocean. The analytical performance was validated against the accuracy, precision, and trueness criteria. For statistical evaluation, 90Sr was determined in 30-liter spiked seawater samples at various low concentrations in the range of 0.01–1 Bq/L and in a 0.5-liter IAEA-RML-2015-01 proficiency test sample. The results had a relative bias in the range from 3.41% to 12.28%, which is below the accepted relative bias of ±25% and passed the criteria, confirming that our analytical approach for the determination of low levels of 90Sr in seawater is acceptable. Moreover, the approach is economical, non-laborious and fast.
Keywords: Proficiency test, radiation monitoring, seawater, strontium determination.
355 Faster FPGA Routing Solution using DNA Computing
Authors: Manpreet Singh, Parvinder Singh Sandhu, Manjinder Singh Kahlon
Abstract:
There are many classical algorithms for FPGA routing, but using DNA computing we can find routes efficiently and quickly. The runtime complexity of DNA algorithms is much lower than that of the classical algorithms used for FPGA routing. Research in DNA computing is still at an early stage. The high information density of DNA molecules and the massive parallelism involved in DNA reactions make DNA computing a powerful tool. Many research accomplishments have shown that any procedure that can be programmed on a silicon computer can be realized as a DNA computing procedure. In this paper, we propose a two-tier approach to the FPGA routing problem. First, the geometric FPGA detailed routing task is transformed into a Boolean satisfiability equation with the property that any assignment of input variables that satisfies the equation specifies a valid routing; the absence of a satisfying assignment implies that the layout is unroutable. In the second step, a DNA search algorithm is applied to this Boolean equation to solve for routing alternatives, utilizing the properties of DNA computation. The simulation results are satisfactory and indicate the applicability of DNA computing to the FPGA routing problem.
Keywords: FPGA, Routing, DNA Computing.
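The first tier can be illustrated on a toy instance: routing choices become Boolean variables and track-sharing conflicts become clauses, so routability reduces to satisfiability. Brute-force enumeration below stands in for the DNA search step, and the two-net, two-track instance is purely illustrative.

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Return a satisfying assignment of n_vars Booleans, or None.

    Clauses use DIMACS-style literals: k means variable k is True,
    -k means variable k is False.  Exhaustive search stands in for
    the DNA-based search of the paper.
    """
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# Variables: x1 = net A on track 1, x2 = net B on track 1
# (each net uses track 2 when its variable is False).
clauses = [
    [1, 2],        # at least one of the nets occupies track 1
    [-1, -2],      # nets A and B may not both occupy track 1
]                  # together: the nets end up on different tracks
print(satisfiable(clauses, n_vars=2))
```

Any satisfying assignment corresponds to a valid routing; if the function returns None, the toy layout is unroutable.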
354 Splitting Modified Donor-Cell Schemes for Spectral Action Balance Equation
Authors: Tanapat Brikshavana, Anirut Luadsong
Abstract:
The spectral action balance equation is an equation used to simulate short-crested wind-generated waves in shallow water areas such as coastal regions and inland waters. This equation involves two spatial dimensions, wave direction, and wave frequency, and can be solved by the finite difference method. When this equation, with dominating propagation velocity terms, is discretized using central differences, stability problems occur if the grid spacing is chosen too coarse. In this paper, we introduce the splitting modified donor-cell scheme to avoid stability problems and prove that it is consistent with the modified donor-cell scheme with the same accuracy. The splitting modified donor-cell scheme splits the spectral action balance equation into four one-dimensional problems, each of which yields independent tridiagonal linear systems. Each smaller system can then be solved by direct or iterative methods at the same time, which is very fast on a multi-core computer.
Keywords: donor-cell scheme, parallel algorithm, spectral action balance equation, splitting method.
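The building block of the splitting approach is a one-dimensional donor-cell sweep applied in turn to each of the four dimensions. A minimal sketch of one explicit first-order upwind (donor-cell) sweep with periodic boundaries is given below; the paper's modified variant, which leads to tridiagonal systems, is not reproduced here.

```python
import numpy as np

def donor_cell_step(q, velocity, dx, dt):
    """One explicit donor-cell (first-order upwind) update in 1D.

    q: cell-averaged action density, velocity: constant propagation
    speed.  The splitting approach applies such a 1D sweep in each of
    the dimensions (x, y, direction, frequency) in turn.
    """
    c = velocity * dt / dx                       # Courant number, |c| <= 1
    if velocity >= 0:
        return q - c * (q - np.roll(q, 1))       # upwind cell is on the left
    return q - c * (np.roll(q, -1) - q)          # upwind cell is on the right

# Advect a square pulse with periodic boundaries for 100 steps.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
q = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)
dx = x[1] - x[0]
for _ in range(100):
    q = donor_cell_step(q, velocity=1.0, dx=dx, dt=0.4 * dx)
print(q.max(), q.sum() * dx)   # pulse smeared by upwinding, total action conserved
```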
353 Conjugate Mixed Convection Heat Transfer and Entropy Generation of Cu-Water Nanofluid in an Enclosure with Thick Wavy Bottom Wall
Authors: Sanjib Kr Pal, S. Bhattacharyya
Abstract:
Mixed convection of Cu-water nanofluid in an enclosure with a thick wavy bottom wall has been investigated numerically. A co-ordinate transformation method is used to transform the computational domain into an orthogonal co-ordinate system. The governing equations in the computational domain are solved through a pressure correction based iterative algorithm. The fluid flow and heat transfer characteristics are analyzed for a wide range of Richardson number (0.1 ≤ Ri ≤ 5), nanoparticle volume concentration (0.0 ≤ ϕ ≤ 0.2), amplitude (0.0 ≤ α ≤ 0.1) of the thick wavy bottom wall and wave number (ω) at a fixed Reynolds number. The obtained results show that the heat transfer rate increases remarkably when nanoparticles are added. The heat transfer rate depends on the wavy wall amplitude and wave number, and decreases with increasing Richardson number for fixed amplitude and wave number. The Bejan number and the entropy generation are determined to analyze the thermodynamic optimization of the mixed convection.
Keywords: Entropy generation, mixed convection, conjugate heat transfer, numerical, nanofluid, wall waviness.
352 Rapid Frequency Response Measurement of Power Conversion Products with Coherence-Based Confidence Analysis
Authors: Tomi Roinila, Aki Taskinen, Matti Vilkko
Abstract:
Switched-mode converters now play a significant role in modern society. Their operation is often crucial in various electrical applications affecting everyday life. Therefore, the quality of the converters needs to be reliably verified. Recent studies have shown that the converters can be fully characterized by a set of frequency responses which can be efficiently used to validate their proper operation. Consequently, several methods have been proposed to measure the frequency responses quickly and accurately. Most often, correlation-based techniques have been applied. The presented measurement methods are highly sensitive to external errors and system nonlinearities. This fact has often been forgotten, and the necessary uncertainty analysis of the measured responses has been neglected. This paper presents a simple approach to analyze the noise and nonlinearities in the frequency-response measurements of switched-mode converters. Coherence analysis is applied to form a confidence interval characterizing the noise and nonlinearities involved in the measurements. The presented method is verified by practical measurements from a high-frequency switched-mode converter.
Keywords: Switched-mode converters, frequency analysis, coherence analysis.
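A sketch of the coherence-based confidence idea, assuming SciPy's Welch-type estimators: the frequency response is estimated as H(f) = S_xy(f)/S_xx(f), and the magnitude-squared coherence marks bins dominated by noise or nonlinear distortion. The first-order low-pass test system and the excitation below are synthetic stand-ins for a switched-mode converter measurement.

```python
import numpy as np
from scipy.signal import csd, welch, coherence, lfilter

def frf_with_coherence(excitation, response, fs, nperseg=1024):
    """Frequency-response estimate plus a coherence-based quality measure.

    H1 estimator: H(f) = S_xy(f) / S_xx(f).  Bins where the coherence
    drops well below 1 are dominated by noise or nonlinearities, which
    is the confidence information discussed in the abstract.
    """
    f, s_xx = welch(excitation, fs=fs, nperseg=nperseg)
    _, s_xy = csd(excitation, response, fs=fs, nperseg=nperseg)
    _, coh = coherence(excitation, response, fs=fs, nperseg=nperseg)
    return f, s_xy / s_xx, coh

# Toy first-order low-pass system excited by broadband noise.
rng = np.random.default_rng(4)
fs, n = 10_000, 2 ** 16
x = rng.normal(size=n)
alpha = 0.1
y = lfilter([alpha], [1.0, -(1.0 - alpha)], x)   # y[k] = (1-a)y[k-1] + a*x[k]
y += 0.01 * rng.normal(size=n)                   # measurement noise
f, H, coh = frf_with_coherence(x, y, fs)
print(f[10], abs(H[10]), coh[10])
```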
351 Evaluation of Groundwater Unit Hydrograph of Kavar-Maharloo Aquifer
Authors: Mohammad Hosein Hojati, Fardin Boustani
Abstract:
Groundwater is one of the most important water resources in Fars province. Based on this study, 95 percent of the total annual water consumption in Fars is used for agriculture, whereas the percentages for domestic and industrial uses are 4 and 1 percent, respectively. Population growth, urban and industrial growth, and agricultural development in Fars have created a condition of water stress. In this province, farmers and other users are pumping groundwater faster than its natural replenishment rate, causing a continuous drop in groundwater tables and depletion of this resource. In this research, variations in the groundwater level, their effects and ways to help control groundwater levels in the aquifer of the Kavar-Maharloo plain in Fars were evaluated. Excessive exploitation of groundwater in this aquifer has caused groundwater levels to fall too fast or to unacceptable levels. The average drawdown of the groundwater level in this plain was 17 meters from 1995 to 2006. The purpose of this study is to evaluate water level changes in the Kavar-Maharloo Aquifer in Fars province in order to determine the areas of greatest depletion and the cause of depletion, and to predict the remaining life of the aquifer.
Keywords: Aquifer, groundwater depletion, water table
350 Automated Optic Disc Detection in Retinal Images of Patients with Diabetic Retinopathy and Risk of Macular Edema
Authors: Arturo Aquino, Manuel Emilio Gegundez, Diego Marin
Abstract:
In this paper, a new automated methodology to detect the optic disc (OD) in retinal images from patients at risk of being affected by Diabetic Retinopathy (DR) and Macular Edema (ME) is presented. The detection procedure comprises two independent methodologies. On one hand, a location methodology obtains a pixel that belongs to the OD using image contrast analysis and structure filtering techniques and, on the other hand, a boundary segmentation methodology estimates a circular approximation of the OD boundary by applying mathematical morphology, edge detection techniques and the Circular Hough Transform. The methodologies were tested on a set of 1200 images composed of 229 retinographies from patients affected by DR with risk of ME, 431 with DR and no risk of ME, and 540 images of healthy retinas. The location methodology obtained a 98.83% success rate, whereas the OD boundary segmentation methodology obtained a good circular OD boundary approximation in 94.58% of cases. The average computational time measured over the total set was 1.67 seconds for OD location and 5.78 seconds for OD boundary segmentation.
Keywords: Diabetic retinopathy, macular edema, optic disc, automated detection, automated segmentation.
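The circular approximation of the boundary-segmentation stage can be illustrated with OpenCV's Circular Hough Transform; OpenCV is an assumed toolkit, not necessarily the authors' implementation. A synthetic bright disc stands in for a real retinography, and all parameter values below are illustrative.

```python
import cv2
import numpy as np

# Synthetic fundus-like image: dark background with one bright disc.
img = np.full((300, 300), 40, dtype=np.uint8)
cv2.circle(img, (110, 150), 35, 220, -1)        # fake optic disc
img = cv2.medianBlur(img, 5)                    # mild smoothing before Hough

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
                           param1=100, param2=20, minRadius=20, maxRadius=60)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)
    print(f"optic disc candidate at ({x}, {y}), radius {r}")
else:
    print("no circular candidate found")
```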
349 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks
Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar
Abstract:
DNA barcodes provide a good source of the information needed to classify living species. The classification problem has to be supported with reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be compared simultaneously using multiple sequence alignment, which is known to be NP-complete; however, the methods in use are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable. In fact, our method avoids the complex problem of form and structure in different classes of organisms. The empirical data and their classification performances are compared with other methods. In this study, we present our system, which consists of three phases. The first phase, called transformation, is composed of three sub-steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of the DNA barcodes, Fourier transform, and power spectrum signal processing. The second phase is an approximation, empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). Finally, the third phase, the classification of DNA barcodes, is realized by applying a hierarchical classification algorithm.
Keywords: DNA Barcode, Electron-Ion Interaction Pseudopotential, Multi Library Wavelet Neural Networks.
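The transformation phase can be sketched directly: each nucleotide is mapped to its Electron-Ion Interaction Pseudopotential value and the resulting numerical signal is turned into a normalized Fourier power spectrum, the feature vector later approximated by the wavelet network. The short barcode sequence below is made up.

```python
import numpy as np

# Electron-Ion Interaction Pseudopotential (EIIP) values commonly used
# for the numerical encoding of DNA sequences.
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def barcode_power_spectrum(sequence):
    """EIIP codification of a DNA barcode followed by its normalized
    Fourier power spectrum (first phase of the pipeline above)."""
    signal = np.array([EIIP[base] for base in sequence.upper() if base in EIIP])
    signal = signal - signal.mean()              # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal)) ** 2  # power spectrum
    return spectrum / spectrum.sum()             # normalize for comparison

seq = "ATGCGTACGTTAGCCGATCGATCGGCTAGCTAGGATCCAT"   # made-up barcode fragment
ps = barcode_power_spectrum(seq)
print(len(ps), ps[:5])
```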
348 Study of the Particle Size Effect on Bubble Rise Velocities in a Three-Phase Bubble Column
Authors: Weiling Li, Wenqi Zhong, Baosheng Jin, Rui Xiao, Yong Lu, Tingting He
Abstract:
Experiments were performed in a three-phase bubble column to study variations of bubble rise velocities. The dynamic gas disengagement (DGD) technique and fast-response pressure transducers were utilized to investigate bubble rise in the column. The superficial gas velocities of large and small bubbles and the rise velocities of the large and small bubble fractions were studied, considering the effect of particle size. The results show that the superficial gas velocity associated with large bubbles increases linearly with the superficial gas velocity. Particle size has little effect on both the large and small bubble superficial gas velocities. The rise velocities of the large bubble fractions are higher than those of the small bubble fractions, and they show different tendencies at low and high superficial gas velocities when the particle size is changed. The rise velocities of the small bubble fractions increased and then tended to decrease as the particle size became larger.
Keywords: Bubble rise velocity, gas–liquid–solid, particle size effect, three–phase bubble column.
347 A Force-directed Graph Drawing based on the Hierarchical Individual Timestep Method
Authors: T. Matsubayashi, T. Yamada
Abstract:
In this paper, we propose a fast and efficient method for drawing very large-scale graph data. The conventional force-directed method proposed by Fruchterman and Reingold (FR method) is well known. It defines repulsive forces between every pair of nodes and attractive forces between nodes connected by an edge, and calculates the corresponding potential energy. An optimal layout is obtained by iteratively updating node positions to minimize the potential energy. Here, the positions of all nodes are updated at the same time at every global timestep. In the proposed method, each node has its own individual time and time step, and nodes are updated at different frequencies depending on the local situation. The proposed method is inspired by the hierarchical individual time step method used for high-accuracy calculations of dense particle fields, such as star clusters, in astrophysical dynamics. Experiments show that the proposed method outperforms the original FR method in both speed and accuracy. We implement the proposed method on the MDGRAPE-3 PCI-X special-purpose parallel computer and realize a speed enhancement of several hundred times.
Keywords: Visualization, graph drawing, Internet map
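A compact sketch of the Fruchterman-Reingold forces described above, with a crude per-node update frequency added as a stand-in for the hierarchical individual timestep idea: nodes feeling only weak forces are updated less often. The constants, cooling schedule and tiny example graph are assumptions, and the real method's hierarchy of individual times is not reproduced.

```python
import numpy as np

def fr_layout(edges, n_nodes, iters=200, seed=0):
    """Fruchterman-Reingold layout with a crude per-node update frequency.

    Repulsive force between any pair: k^2 / d.  Attractive force along an
    edge: d^2 / k.  Nodes with below-median force magnitude are updated
    only every fourth sweep, a simplified stand-in for individual timesteps.
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, size=(n_nodes, 2))
    k = np.sqrt(4.0 / n_nodes)                       # ideal edge length
    adj = np.zeros((n_nodes, n_nodes), dtype=bool)
    for a, b in edges:
        adj[a, b] = adj[b, a] = True

    for it in range(iters):
        delta = pos[:, None, :] - pos[None, :, :]
        dist = np.linalg.norm(delta, axis=-1) + 1e-9
        rep = (k ** 2 / dist ** 2)[:, :, None] * delta            # push apart
        att = np.where(adj[:, :, None], (dist / k)[:, :, None] * -delta, 0.0)
        force = (rep + att).sum(axis=1)
        mag = np.linalg.norm(force, axis=1)
        active = (mag > np.median(mag)) | (it % 4 == 0)           # lazy nodes wait
        temp = 0.1 * (1.0 - it / iters)                           # cooling schedule
        step = np.minimum(mag, temp) / (mag + 1e-9)
        pos[active] += (force * step[:, None])[active]
    return pos

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(fr_layout(edges, n_nodes=4).round(2))
```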