Search results for: DIRECT algorithm
2123 A Review of Methods for 2D/3D Registration
Authors: Panos D. Kotsas, Tony Dodd
Abstract:
2D/3D registration is a special case of medical image registration which is of particular interest to surgeons. Applications of 2D/3D registration include [1] radiotherapy planning and treatment verification, spinal surgery, hip replacement, neurointerventions and aortic stenting. The purpose of this paper is to provide a literature review of the main methods for image registration in the 2D/3D case. At the end of the paper an algorithm is proposed for 2D/3D registration based on a Chebyshev polynomials iteration loop.
Keywords: Medical image registration, review, 2D/3D.
2122 Thermodynamic Attainable Region for Direct Synthesis of Dimethyl Ether from Synthesis Gas
Authors: Thulane Paepae, Tumisang Seodigeng
Abstract:
This paper demonstrates a method of synthesizing process flowsheets using a graphical tool called the GH-plot and, in particular, looks at how it can be used to compare the reactions of a combined simultaneous process with regard to their thermodynamics. The technique uses fundamental thermodynamic principles to allow the mass, energy and work balances to locate the attainable region for chemical processes in a reactor. This provides guidance on which design decisions would be best suited to developing new processes that are more effective and make lower demands on raw material and energy usage.
Keywords: Attainable region, dimethyl ether synthesis, mass balance, optimal reaction networks.
2121 Optical Flow Technique for Supersonic Jet Measurements
Authors: H. D. Lim, Jie Wu, T. H. New, Shengxian Shi
Abstract:
This paper outlines the development of an experimental technique for quantifying supersonic jet flows, in an attempt to avoid the seeding particle problems frequently associated with particle-image velocimetry (PIV) techniques at high Mach numbers. Based on optical flow algorithms, the idea behind the technique involves using high-speed cameras to capture Schlieren images of the supersonic jet shear layers, before they are subjected to an adapted optical flow algorithm based on the Horn-Schunck method to determine the associated flow fields. The proposed method is capable of offering full-field unsteady flow information with potentially higher accuracy and resolution than existing point measurements or PIV techniques. A preliminary study via numerical simulations of a circular de Laval jet nozzle successfully reveals flow and shock structures typically associated with supersonic jet flows, which serve as useful data for subsequent validation of the optical flow based experimental results. For the experimental technique, a Z-type Schlieren setup is proposed with the supersonic jet operated in cold mode, a stagnation pressure of 4 bar and an exit Mach number of 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As the implementation of optical flow techniques for supersonic flows remains rare, the current focus revolves around methodology validation through synthetic images. The results of the validation tests offer valuable insight into how the optical flow algorithm can be refined for better robustness and accuracy. Despite these challenges, however, this supersonic flow measurement technique may potentially offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer.
Keywords: Schlieren, optical flow, supersonic jets, shock shear layer.
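To illustrate the Horn-Schunck scheme the abstract adapts, the sketch below gives a minimal dense optical flow estimator in Python; the function name, the regularisation weight alpha and the iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck optical flow between two grayscale frames.

    im1, im2 : 2-D float arrays (consecutive frames, same shape).
    alpha    : smoothness (regularisation) weight, illustrative default.
    Returns (u, v): horizontal and vertical flow fields.
    """
    im1 = im1.astype(float)
    im2 = im2.astype(float)

    # Image derivatives (simple finite differences).
    Ix = np.gradient(im1, axis=1)   # horizontal spatial derivative
    Iy = np.gradient(im1, axis=0)   # vertical spatial derivative
    It = im2 - im1                  # temporal derivative

    # Kernel averaging the four direct neighbours of each pixel.
    avg = np.array([[0.0, 0.25, 0.0],
                    [0.25, 0.0, 0.25],
                    [0.0, 0.25, 0.0]])

    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        # Jacobi-style update from the Horn-Schunck Euler-Lagrange equations.
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * common
        v = v_bar - Iy * common
    return u, v
```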
2120 A Multi-Modal Virtual Walkthrough of the Virtual Past and Present Based on Panoramic View, Crowd Simulation and Acoustic Heritage on Mobile Platform
Authors: Lim Chen Kim, Tan Kian Lam, Chan Yi Chee
Abstract:
This research presents a multi-modal simulation for the reconstruction of the past and the construction of the present in digital cultural heritage on a mobile platform. For the present life, the virtual environment is generated through a proposed scheme for the rapid and efficient construction of a 360° panoramic view. Then, an acoustical heritage model and a crowd model are presented and integrated into the 360° panoramic view. For the reconstruction of past life, the crowd is simulated and rendered in an old trading port. However, the keystone of this research is a virtual walkthrough that shows the virtual present life in 2D and the virtual past life in 3D, both in an environment of virtual heritage sites in George Town, through a mobile device. Firstly, the 2D crowd is modelled and simulated using OpenGL ES 1.1 on the mobile platform. The 2D crowd is used to portray the present life in a 360° panoramic view of a virtual heritage environment based on an extension of Newtonian laws. Secondly, the 2D crowd is animated and rendered into 3D with improved variety and incorporated into the virtual past life using the Unity3D game engine. The behaviours of the 3D models are then simulated based on an enhancement of the classical Boid algorithm. Finally, a demonstration system is developed and integrated with the models, techniques and algorithms of this research. The virtual walkthrough is demonstrated to a group of respondents and evaluated through a user-centred evaluation in which they navigated around the demonstration system. The results of the evaluation based on the questionnaires show that the presented virtual walkthrough has been successfully deployed through a multi-modal simulation and that such a virtual walkthrough would be particularly useful in virtual tour and virtual museum applications.
Keywords: Boid algorithm, crowd simulation, mobile platform, Newtonian laws, virtual heritage.
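As a point of reference for the classical Boid algorithm mentioned above, here is a minimal Python sketch of the three standard rules (cohesion, separation, alignment); the radii, weights and time step are illustrative assumptions, and the paper's enhancements are not reproduced.

```python
import numpy as np

def boid_step(pos, vel, dt=0.1, r_neigh=2.0, r_sep=0.5,
              w_coh=0.01, w_sep=0.05, w_align=0.05, v_max=1.0):
    """One update of the classical Boid rules for an (N, 2) flock."""
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        neigh = (d < r_neigh) & (d > 0)
        if neigh.any():
            # Cohesion: steer towards the local centre of mass.
            new_vel[i] += w_coh * (pos[neigh].mean(axis=0) - pos[i])
            # Alignment: match the average heading of neighbours.
            new_vel[i] += w_align * (vel[neigh].mean(axis=0) - vel[i])
        close = (d < r_sep) & (d > 0)
        if close.any():
            # Separation: move away from boids that are too close.
            new_vel[i] += w_sep * (pos[i] - pos[close]).sum(axis=0)
        speed = np.linalg.norm(new_vel[i])
        if speed > v_max:                      # clamp the speed
            new_vel[i] *= v_max / speed
    return pos + new_vel * dt, new_vel

# Toy usage: 50 boids wandering in a 10 x 10 area.
rng = np.random.default_rng(0)
pos = rng.random((50, 2)) * 10
vel = rng.normal(0, 0.1, (50, 2))
for _ in range(200):
    pos, vel = boid_step(pos, vel)
```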
2119 Synthesis of Logic Circuits Using Fractional-Order Dynamic Fitness Functions
Authors: Cecília Reis, J. A. Tenreiro Machado, J. Boaventura Cunha
Abstract:
This paper analyses the performance of a genetic algorithm using a new concept, namely a fractional-order dynamic fitness function, for the synthesis of combinational logic circuits. The experiments reveal superior results in terms of speed and convergence to achieve a solution.
Keywords: Circuit design, fractional-order systems, genetic algorithms, logic circuits
2118 The Global Stability Using Lyapunov Function
Authors: R. Kongnuy, E. Naowanich, T. Kruehong
Abstract:
An important technique in stability theory for differential equations is known as the direct method of Lyapunov. In this work we deal with the global stability properties of a Leptospirosis transmission model by age group in Thailand. First, we consider data from the Division of Epidemiology, Ministry of Public Health, Thailand, between 1997 and 2011. Then we construct a mathematical model for leptospirosis transmission with eight age groups. Lyapunov functions are used for our model, which takes the form of an ordinary differential equation system. The global asymptotic stability of the equilibrium states is analyzed.
Keywords: Age Group, Leptospirosis, Lyapunov Function, Ordinary Differential Equation.
2117 Power Control in a Doubly Fed Induction Machine
Authors: A. Ourici
Abstract:
This paper proposes a direct power control for a doubly-fed induction machine for variable speed wind power generation. It provides decoupled regulation of the primary side active and reactive power and is suitable for both electric energy generation and drive applications. In order to control the power flowing between the stator of the DFIG and the network, a decoupled control of active and reactive power is synthesized using PI controllers. The obtained simulation results show the feasibility and the effectiveness of the suggested method.
Keywords: Doubly fed induction machine, decoupled power control, vector control, active and reactive power, PWM inverter.
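In its simplest form, the decoupled active/reactive power regulation described above reduces to two independent PI loops. The Python sketch below is a minimal discrete-time illustration; the gains, sampling time and the mapping of the PI outputs onto rotor voltage references are assumptions for illustration, not the paper's tuning.

```python
class PI:
    """Discrete PI controller (illustrative gains, not from the paper)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Two independent loops: one regulates stator active power P, the other reactive power Q.
pi_p = PI(kp=0.5, ki=20.0, dt=1e-4)
pi_q = PI(kp=0.5, ki=20.0, dt=1e-4)

def control_step(p_ref, q_ref, p_meas, q_meas):
    # Each PI output serves as a rotor voltage reference component
    # (q-axis for active power, d-axis for reactive power in a field-oriented frame).
    v_rq_ref = pi_p.update(p_ref - p_meas)
    v_rd_ref = pi_q.update(q_ref - q_meas)
    return v_rd_ref, v_rq_ref
```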
2116 Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game
Authors: Steven W. Carruthers
Abstract:
The purpose of the present research is to equate two test forms as part of a study to evaluate the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With the sample size n=134, test parameters indicated “good” model fit but low Test Information Functions and more acute than expected equating parameters. Therefore, the researcher applied equipercentile equating and linear equating to raw scores and compared the equated form parameters and effect sizes from each method. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-slope adjustment from the anchor items, which was used to scale the score on the new form (Form R) to the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-slope adjustment, which was applied to all core items on the new form. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college-level art history course (n=134) and a counterbalancing design to distribute both forms on the pre- and post-tests. The Experimental Group (n=82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the Control Group (n=52) did not play the game. The researcher examined between-group differences from post-test scores on test Form Q and Form R by full-factorial Two-Way ANOVA. The raw score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The researcher repeated the between-group differences analysis with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%. Mean-sigma equating with a small sample may have resulted in inaccurate equating parameters. Equipercentile equating aligned test means and standard deviations, but resultant skewness and kurtosis worsened compared to raw score parameters. Form had a 3.18% direct effect. Linear equating produced the lowest Form effect, approaching 0%. Using linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between-group effect size for the Control Group versus Experimental Group participants who completed the game was 14.39% with a 4.77% effect size attributed to pre-test score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pre- to post-test. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and the respective assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, the application of a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments for future study.
Keywords: Effectiveness, equipercentile equating, IRT, learning games, linear equating, mean-sigma equating.
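As a concrete illustration of the mean-sigma step described above, the following Python sketch computes the slope and intercept from anchor-item difficulties and applies them to place new-form parameters on the reference-form scale; the numeric values are hypothetical, not the study's data.

```python
import numpy as np

def mean_sigma_transform(b_anchor_new, b_anchor_ref):
    """Mean-sigma scaling constants from anchor-item difficulties.

    b_anchor_new : difficulties of the anchor items on the new form (Form R).
    b_anchor_ref : difficulties of the same items on the reference form (Form Q).
    Returns slope A and intercept B such that b_ref is approximately A * b_new + B.
    """
    b_new = np.asarray(b_anchor_new, dtype=float)
    b_ref = np.asarray(b_anchor_ref, dtype=float)
    A = b_ref.std(ddof=1) / b_new.std(ddof=1)   # slope from the sigmas
    B = b_ref.mean() - A * b_new.mean()          # intercept from the means
    return A, B

# Hypothetical anchor-item difficulties for illustration only.
A, B = mean_sigma_transform([-0.8, 0.1, 0.9], [-0.6, 0.3, 1.2])
theta_new = np.array([-1.0, 0.0, 1.5])           # abilities on the new-form scale
theta_on_ref_scale = A * theta_new + B           # placed on the reference-form scale
```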
2115 Inheritance Growth: A Biology Inspired Method to Build Structures in P2P
Authors: Panchalee Sukjit, Herwig Unger
Abstract:
IT infrastructures are becoming more and more complex. Therefore, in the first industrial IT systems, the P2P paradigm has replaced the traditional client-server model, and methods of self-organization are gaining more and more importance. It is known from the past that regular structures like grids in particular may significantly improve system behavior and performance. This contribution introduces a new algorithm based on a biological analogue, which may provide for the growth of several regular structures on top of anarchically grown P2P or social network structures.
Keywords: P2P, Pattern generation, Grid, Social network, Inheritance, Reproduction.
2114 Dynamic Study on the Evaluation of the Settlement of Soil under Sea Dam
Authors: Faroudja Meziani, Amar Kahil
Abstract:
In order to study the variation in settlement of soil under a dyke dam, the modelling in our study consists of applying an imposed displacement at the base of the soil mass (consisting of a saturated sand). The imposed displacement follows the evolution of the acceleration of the 2003 Boumerdes earthquake in Algeria. Moreover, the gravity load is taken into consideration by accounting for the specific weight of the materials constituting the dyke. The results obtained show that the gravity loads have a direct influence on the evolution of settlement, especially at the center of the dyke where these loads are higher.
Keywords: Settlement, dynamic analysis, rockfill dam, effect of earthquake, soil dynamics.
2113 A Numerical Algorithm for Positive Solutions of Concave and Convex Elliptic Equation on R2
Authors: Hailong Zhu, Zhaoxiang Li
Abstract:
In this paper we numerically investigate positive solutions of the equation -Δu = λu^q + u^p with Dirichlet boundary conditions in a bounded domain Ω for λ > 0 and 0 < q < 1 < p < 2*. We compute and visualize the range of λ for which this problem admits a numerical solution.
Keywords: positive solutions, concave-convex, sub-super solution method, pseudo arclength method.
2112 Strong Adhesion and High Wettability at Polyetheretherketone-Resin/Titanium-Dioxide Interface Obtained with Crystal-Orientation Control
Authors: Tomio Iwasaki, Yosuke Kawahito
Abstract:
The adhesion strength and wettability at the interfaces between a polyetheretherketone (PEEK) resin and titanium dioxide (TiO2) have become increasingly important because direct joining of PEEK resin and titanium (Ti), whose surface usually carries the oxide (TiO2), is needed not only in vehicles such as airplanes, automobiles, and space vehicles, but also in medical devices such as implants. To realize a strong joint between the PEEK resin and TiO2, the dependence of the adhesion strength and wettability on the crystal orientation of rutile TiO2 was investigated by using molecular simulations. Molecular dynamics simulations were conducted by combining the quantum-mechanical equations of the electrons with Newton’s equations of motion for the nuclear coordinates (atomic coordinates). By putting a PEEK-resin sphere on a rutile TiO2 surface and heating the system to 650 K, the contact angles at the interfaces were calculated to evaluate the wettability. After the system is cooled from 650 K to 300 K, the adhesive fracture energy is calculated, in order to evaluate the adhesion strength, as the difference between the energy of the PEEK-TiO2 attached state and that of the PEEK-TiO2 detached state. The results for the contact angles showed that PEEK resin on the TiO2(100) surface and that on the TiO2(001) surface have low wettability with large contact angles. On the other hand, PEEK resin on the TiO2(110) surface has high wettability with a small contact angle. The results for the adhesive fracture energies showed that the adhesion at the PEEK-resin/TiO2(100) and PEEK-resin/TiO2(001) interfaces is weak. On the other hand, the adhesion at the PEEK-resin/TiO2(110) interface is strong. To clarify why higher wettability and stronger adhesion are obtained at the PEEK/TiO2(110) interface than at the PEEK/TiO2(100) and PEEK/TiO2(001) interfaces, the atomic configurations at the interfaces were visualized. The atomic configuration at the PEEK/TiO2(110) interface showed that a lattice-matched coherent interface is realized and the atomic density is high. On the other hand, the atomic configuration at the PEEK/TiO2(001) interface showed a lattice-unmatched incoherent interface. The atomic configuration at the PEEK/TiO2(100) interface showed that the atomic density is very low although a lattice-matched interface is realized. Therefore, the lattice matching and the high atomic density at the PEEK/TiO2(110) interface are considered to be the dominant factors in the high wettability and strong adhesion.
Keywords: Adhesion, direct joining, PEEK, TiO2, wettability.
2111 ILMI Approach for Robust Output Feedback Control of Induction Machine
Authors: Abdelwahed Echchatbi, Adil Rizki, Ali Haddi, Nabil Mrani, Noureddine Elalami
Abstract:
In this note, the robust static output feedback stabilisation of an induction machine is addressed. The machine is described by a non-homogeneous bilinear model with structural uncertainties, and the feedback gain is computed via an iterative LMI (ILMI) algorithm.
Keywords: Induction machine, Static output feedback, robust stabilisation.
2110 Multi-Objective Optimization of Gas Turbine Power Cycle
Authors: Mohsen Nikaein
Abstract:
Because of the importance of energy, the optimization of power generation systems is necessary. Gas turbine cycles are a suitable means of fast power generation, but their efficiency is relatively low. In order to achieve higher efficiencies, several measures are preferred, such as the recovery of heat from exhaust gases in a regenerator, the utilization of an intercooler in a multistage compressor, steam injection into the combustion chamber, etc. However, thermodynamic optimization of the gas turbine cycle, even with the above components, is still necessary. In this article, multi-objective genetic algorithms are employed for the Pareto-approach optimization of the Regenerative-Intercooling-Gas Turbine (RIGT) cycle. In multi-objective optimization, a number of conflicting objective functions are to be optimized simultaneously. The important objective functions that have been considered for optimization are the entropy generation of the RIGT cycle (Ns), derived using exergy analysis and the Gouy-Stodola theorem, the thermal efficiency and the net output power of the RIGT cycle. These objectives are usually in conflict with each other. The design variables consist of thermodynamic parameters such as the compressor pressure ratio (Rp), the excess air in combustion (EA), the turbine inlet temperature (TIT) and the inlet air temperature (T0). At the first stage, single-objective optimization has been investigated, and the method of the Non-dominated Sorting Genetic Algorithm (NSGA-II) has been used for the multi-objective optimization. Optimization procedures are performed for two and three objective functions and the results are compared for the RIGT cycle. In order to investigate the optimal thermodynamic behavior of two objectives, different sets, each including two of the output objectives, are considered individually. For each set, the Pareto front is depicted. The sets of selected decision variables based on this Pareto front will give the best possible combination of the corresponding objective functions. No point on the Pareto front is superior to another, but all of them are superior to any other point. In the case of three-objective optimization, the results are given in tables.
Keywords: Exergy, Entropy Generation, Brayton Cycle, Design Parameters, Optimization, Genetic Algorithm, Multi-Objective.
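The Pareto approach described above rests on the notion of non-dominance. The short Python sketch below extracts the non-dominated set from a list of candidate designs; it illustrates only the dominance test, not the full NSGA-II machinery, and all names and the sign convention are illustrative.

```python
import numpy as np

def pareto_front(points):
    """Return the indices of non-dominated points, all objectives minimised.

    points : (n, m) array, one row per design, one column per objective
             (e.g. entropy generation Ns, -thermal efficiency, -net power,
              signs flipped so that every objective is to be minimised).
    """
    pts = np.asarray(points, dtype=float)
    keep = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        if not keep[i]:
            continue
        # Point i is dominated if some other point is no worse in every
        # objective and strictly better in at least one.
        dominated = np.all(pts <= pts[i], axis=1) & np.any(pts < pts[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.where(keep)[0]
```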
2109 Social Media as a ‘Service’ for Value Co-Creation by Integrating Sponsoring Companies, Sports Entities and Fans
Authors: Harri Jalonen
Abstract:
Social media has changed the ways we communicate, collaborate and connect with each other. It has also influenced our habits of consuming sports. Social media has allowed direct interaction between sponsoring companies, athletes/players and fans. Drawing on the service-dominant logic of value co-creation, this conceptual paper identifies three operant resources which are beneficial for value co-creation: i) social identity and sense of community, ii) congruence and brand personality, and iii) participatory culture and fan activation. The paper contributes to the theoretical discussion on how social media can be used for value co-creation purposes in the sports industry.
Keywords: Sport, value co-creation, social media, service.
2108 Probabilistic Graphical Model for the Web
Authors: M. Nekri, A. Khelladi
Abstract:
The World Wide Web is a network with a complex topology, the main properties of which are a power-law degree distribution, a low clustering coefficient and a small average distance. Modeling the web as a graph allows information to be located quickly and consequently helps in the construction of search engines. Here, we present a model based on already existing probabilistic graphs with all the aforesaid characteristics. This work consists in studying the web in order to understand its structure, which enables us to model it more easily and to propose a possible algorithm for its exploration.
Keywords: Clustering coefficient, preferential attachment, small world, Web community.
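A power-law degree distribution of the kind mentioned above is commonly reproduced with a preferential-attachment growth rule. The sketch below is a minimal Barabási-Albert-style generator in Python, given only to illustrate that ingredient; it is not the model proposed in the paper.

```python
import random

def preferential_attachment_graph(n, m):
    """Grow a graph in which each new node attaches to m existing nodes with
    probability proportional to their current degree."""
    # Start from a small complete seed of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # Keeping every edge endpoint in a flat list makes degree-proportional
    # sampling a simple uniform choice over that list.
    endpoints = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints.extend((new, t))
    return edges

g = preferential_attachment_graph(1000, 2)   # toy web-like graph
```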
2107 Meta Model Based EA for Complex Optimization
Authors: Maumita Bhattacharya
Abstract:
Evolutionary algorithms are population-based, stochastic search techniques, widely used as efficient global optimizers. However, many real-life optimization problems often require finding optimal solutions to complex, high-dimensional, multimodal problems involving computationally very expensive fitness function evaluations. The use of evolutionary algorithms in such problem domains is thus practically prohibitive. An attractive alternative is to build meta models, i.e. approximations of the actual fitness functions to be evaluated. These meta models are orders of magnitude cheaper to evaluate than the actual function evaluation. Many regression and interpolation tools are available to build such meta models. This paper briefly discusses the architectures and use of such meta-modeling tools in an evolutionary optimization context. We further present two evolutionary algorithm frameworks which involve the use of meta models for fitness function evaluation. The first framework, namely the Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model [14], reduces computation time by the controlled use of meta models (in this case an approximate model generated by Support Vector Machine regression) to partially replace the actual function evaluation by approximate function evaluation. However, the underlying assumption in DAFHEA is that the training samples for the meta model are generated from a single uniform model. This does not take into account uncertain scenarios involving noisy fitness functions. The second model, DAFHEA-II, an enhanced version of the original DAFHEA framework, incorporates a multiple-model based learning approach for the support vector machine approximator to handle noisy functions [15]. Empirical results obtained by evaluating the frameworks using several benchmark functions demonstrate their efficiency.
Keywords: Meta model, Evolutionary algorithm, Stochastic technique, Fitness function, Optimization, Support vector machine.
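To illustrate the general surrogate-assisted idea behind DAFHEA (without reproducing its actual control strategy), the following Python sketch trains a Support Vector Machine regressor on evaluated individuals and uses it to pre-screen offspring so that only promising candidates receive the expensive true evaluation; the test function, mutation scheme and population settings are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR

def expensive_fitness(x):
    """Stand-in for a costly simulation (purely illustrative)."""
    return np.sum((x - 0.5) ** 2, axis=-1)

rng = np.random.default_rng(0)
dim, pop_size = 10, 40

# Initial population evaluated with the true (expensive) fitness.
pop = rng.random((pop_size, dim))
fit = expensive_fitness(pop)
surrogate = SVR(kernel="rbf").fit(pop, fit)

for gen in range(50):
    # Simple mutation-based offspring; the surrogate pre-screens them cheaply.
    offspring = np.clip(pop + rng.normal(0, 0.1, pop.shape), 0, 1)
    approx = surrogate.predict(offspring)
    # Only the most promising offspring get a true, expensive evaluation.
    promising = np.argsort(approx)[: pop_size // 4]
    true_fit = expensive_fitness(offspring[promising])
    # Merge, keep the best, and refresh the surrogate with the new true samples.
    pop = np.vstack([pop, offspring[promising]])
    fit = np.concatenate([fit, true_fit])
    order = np.argsort(fit)[:pop_size]
    pop, fit = pop[order], fit[order]
    surrogate = SVR(kernel="rbf").fit(pop, fit)
```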
2106 A Probabilistic Reinforcement-Based Approach to Conceptualization
Authors: Hadi Firouzi, Majid Nili Ahmadabadi, Babak N. Araabi
Abstract:
Conceptualization strengthens intelligent systems in generalization skill, effective knowledge representation, real-time inference, and managing uncertain and indefinite situations, in addition to facilitating knowledge communication for learning agents situated in the real world. Concept learning introduces a way of abstraction by which the continuous state is formed into entities called concepts which are connected to the action space and thus illustrate, to some extent, the complex action space. Among computational concept learning approaches, action-based conceptualization is favored because of its simplicity and its mirror neuron foundations in neuroscience. In this paper, a new biologically inspired concept learning approach based on a probabilistic framework is proposed. This approach exploits and extends the mirror neuron's role in conceptualization for a reinforcement learning agent in nondeterministic environments. In the proposed method, instead of building a huge numerical knowledge base, the concepts are learnt gradually from rewards through interaction with the environment. Moreover, the probabilistic formation of the concepts is employed to deal with the uncertain and dynamic nature of real problems, in addition to providing the ability of generalization. These characteristics as a whole distinguish the proposed learning algorithm from both a pure classification algorithm and typical reinforcement learning. Simulation results show the advantages of the proposed framework in terms of convergence speed as well as generalization and asymptotic behavior, because both successful and failed attempts are utilized through the received rewards. Experimental results, on the other hand, show the applicability and effectiveness of the proposed method in continuous and noisy environments for a real robotic task such as a maze, as well as the benefits of implementing an incremental learning scenario in artificial agents.
Keywords: Concept learning, probabilistic decision making, reinforcement learning.
2105 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images
Authors: Amit Kr. Happy
Abstract:
This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on infrared (IR) and visible image (VI) fusion for various applications including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (IR) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image and a thermal IR camera acquires the thermal source image. In this paper, some image fusion algorithms based upon a Multi-Scale Transform (MST) and a region-based selection rule with consistency verification have been proposed and presented. This research includes the implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. While deploying our image fusion approaches, we observed several challenges with the popular image fusion methods. While the high computational cost and complex processing steps of image fusion algorithms provide accurate fused results, they also make them hard to deploy in systems and applications that require real-time operation, high flexibility and low computational capability. The methods presented in this paper therefore offer good results with minimum time complexity.
Keywords: Image fusion, IR thermal imager, multi-sensor, Multi-Scale Transform.
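The paper's implementation is in MATLAB with a region-based selection rule; as a simpler baseline of the same multi-scale idea, the Python sketch below fuses two registered images in the wavelet domain using an average rule for the approximation band and a maximum-absolute-value rule for the detail bands (the wavelet choice and decomposition level are illustrative assumptions).

```python
import numpy as np
import pywt

def dwt_max_abs_fusion(visible, infrared, wavelet="db2", level=3):
    """Fuse two registered, same-size grayscale images in a wavelet domain.

    Approximation coefficients are averaged; detail coefficients are selected
    by maximum absolute value - a common baseline rule, simpler than the
    region-based rule with consistency verification used in the paper.
    """
    c_vis = pywt.wavedec2(visible.astype(float), wavelet, level=level)
    c_ir = pywt.wavedec2(infrared.astype(float), wavelet, level=level)

    fused = [(c_vis[0] + c_ir[0]) / 2.0]            # low-pass band: average
    for v_bands, i_bands in zip(c_vis[1:], c_ir[1:]):
        fused.append(tuple(
            np.where(np.abs(v) >= np.abs(i), v, i)   # keep the stronger detail
            for v, i in zip(v_bands, i_bands)
        ))
    return pywt.waverec2(fused, wavelet)
```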
2104 Development of Wave-Dissipating Block Installation Simulation for Inexperienced Worker Training
Authors: Hao Min Chuah, Tatsuya Yamazaki, Ryosui Iwasawa, Tatsumi Suto
Abstract:
In recent years, with the advancement of digital technology, the movement to introduce so-called ICT (Information and Communication Technology), such as computer technology and network technology, to civil engineering construction sites is accelerating. As part of this movement, attempts are being made in various situations to reproduce actual sites inside computers and use them for design and construction planning, as well as for training inexperienced engineers. The installation of wave-dissipating blocks on coasts is a type of work that has been carried out by skilled workers based on their years of experience and is one of the tasks that is difficult for inexperienced workers to carry out on site. Wave-dissipating blocks are structures designed to protect coasts and beaches from erosion by reducing the energy of ocean waves. Wave-dissipating blocks usually weigh more than 1 t and are installed by being suspended from a crane, so it would be time-consuming and costly for inexperienced workers to train on-site. In this paper, therefore, a block installation simulator is developed based on Unity 3D, a game development engine. The simulator computes porosity, defined here as the ratio of the total volume of the wave-dissipating blocks inside the structure to the volume of the final shape of the ideal structure. Using this porosity evaluation, the simulator can determine how well the user is able to install the blocks. A voxelization technique is used to calculate the porosity of the structure, simplifying the calculations. Other techniques, such as raycasting and box overlapping, are employed for accurate simulation. In the near future, the simulator will incorporate an automatic block installation algorithm based on combinatorial optimization and compare the user-demonstrated block installation with the appropriate installation found by the algorithm.
Keywords: 3D simulator, porosity, user interface, voxelization, wave-dissipating blocks.
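The porosity evaluation described above amounts, after voxelization, to comparing occupied voxels against the ideal target envelope. The Python sketch below shows that counting step on toy boolean grids; the grid shapes, voxel size and function name are illustrative assumptions (the simulator itself is built in Unity 3D).

```python
import numpy as np

def packing_ratio(block_occupancy, target_envelope, voxel_size=0.5):
    """Porosity-style metric from two boolean voxel grids of equal shape.

    block_occupancy : True where any placed block occupies a voxel.
    target_envelope : True inside the ideal final structure shape.
    Returns the fraction of the target volume actually filled by blocks.
    """
    voxel_volume = voxel_size ** 3
    filled = np.count_nonzero(block_occupancy & target_envelope) * voxel_volume
    target = np.count_nonzero(target_envelope) * voxel_volume
    return filled / target

# Toy 3-D grids standing in for the voxelised blocks and the ideal envelope.
blocks = np.zeros((20, 20, 20), dtype=bool)
blocks[2:10, 2:10, 0:5] = True
envelope = np.zeros_like(blocks)
envelope[0:15, 0:15, 0:8] = True
print(packing_ratio(blocks, envelope))
```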
2103 Control of Biofilm Formation and Inorganic Particle Accumulation on Reverse Osmosis Membrane by Hypochlorite Washing
Authors: Masaki Ohno, Cervinia Manalo, Tetsuji Okuda, Satoshi Nakai, Wataru Nishijima
Abstract:
Reverse osmosis (RO) membranes have been widely used for desalination to purify water for drinking and other purposes. Although at present most RO membranes have no resistance to chlorine, chlorine-resistant membranes are being developed. Therefore, direct chlorine treatment or chlorine washing will be an option in preventing biofouling on chlorine-resistant membranes. Furthermore, if particle accumulation control is possible by using chlorine washing, expensive pretreatment for particle removal can be removed or simplified. The objective of this study was to determine the effective hypochlorite washing condition required for controlling biofilm formation and inorganic particle accumulation on RO membrane in a continuous flow channel with RO membrane and spacer. In this study, direct chlorine washing was done by soaking fouled RO membranes in hypochlorite solution and fluorescence intensity was used to quantify biofilm on the membrane surface. After 48 h of soaking the membranes in high fouling potential waters, the fluorescence intensity decreased to 0 from 470 using the following washing conditions: 10 mg/L chlorine concentration, 2 times/d washing interval, and 30 min washing time. The chlorine concentration required to control biofilm formation decreased as the chlorine concentration (0.5–10 mg/L), the washing interval (1–4 times/d), or the washing time (1–30 min) increased. For the sample solutions used in the study, 10 mg/L chlorine concentration with 2 times/d interval, and 5 min washing time was required for biofilm control. The optimum chlorine washing conditions obtained from soaking experiments proved to be applicable also in controlling biofilm formation in continuous flow experiments. Moreover, chlorine washing employed in controlling biofilm with suspended particles resulted in lower amounts of organic (0.03 mg/cm2) and inorganic (0.14 mg/cm2) deposits on the membrane than that for sample water without chlorine washing (0.14 mg/cm2 and 0.33 mg/cm2, respectively). The amount of biofilm formed was 79% controlled by continuous washing with 10 mg/L of free chlorine concentration, and the inorganic accumulation amount decreased by 58% to levels similar to that of pure water with kaolin (0.17 mg/cm2) as feed water. These results confirmed the acceleration of particle accumulation due to biofilm formation, and that the inhibition of biofilm growth can almost completely reduce further particle accumulation. In addition, effective hypochlorite washing condition which can control both biofilm formation and particle accumulation could be achieved.
Keywords: Biofouling control, hypochlorite, reverse osmosis, washing condition optimization.
2102 An Alternative Proof for the Topological Entropy of the Motzkin Shift
Authors: Fahad Alsharari, Mohd Salmi Md Noorani
Abstract:
A Motzkin shift is a mathematical model for constraints on genetic sequences. In terms of the theory of symbolic dynamics, the Motzkin shift is nonsofic, and therefore, we cannot use the Perron-Frobenius theory to calculate its topological entropy. The Motzkin shift M(M,N), which comes from language theory, is defined to be the shift system over an alphabet A that consists of N negative symbols, N positive symbols and M neutral symbols. For an x in the full shift, x will be in the Motzkin subshift M(M,N) if and only if every finite block appearing in x has a non-zero reduced form. Therefore, the constraint for x cannot be bounded in length. K. Inoue has shown that the entropy of the Motzkin shift M(M,N) is log(M + N + 1). In this paper, a new direct method of calculating the topological entropy of the Motzkin shift is given without any measure-theoretical discussion.
Keywords: Motzkin shift, topological entropy.
2101 Computing Visibility Subsets in an Orthogonal Polyhedron
Authors: Jefri Marzal, Hong Xie, Chun Che Fung
Abstract:
Visibility problems are central to many computational geometry applications. One of the typical visibility problems is computing the view from a given point. In this paper, a linear time procedure is proposed to compute the visibility subsets from a corner of a rectangular prism in an orthogonal polyhedron. The proposed algorithm could be useful to solve classic 3D problems.
Keywords: Visibility, rectangular prism, orthogonal polyhedron.
2100 Direct Measurements of Wind Data over 100 Meters above the Ground in the Site of Lendinara, Italy
Authors: A. Dal Monte, M. Raciti Castelli, G. B. Bellato, L. Stevanato, E. Benini
Abstract:
The wind resource in the Italian site of Lendinara (RO) is analyzed through a systematic anemometric campaign performed on the top of the bell tower, at an altitude of over 100 m above the ground. Both the average wind speed and the Weibull distribution are computed. The resulting average wind velocity is in accordance with the numerical predictions of the Italian Wind Atlas, confirming the accuracy of the extrapolation of wind data adopted for the evaluation of wind potential at higher altitudes with respect to the commonly placed measurement stations.
Keywords: Anemometric campaign, wind resource, Weibull distribution, wind atlas.
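As an illustration of the two quantities computed in the campaign, the Python sketch below estimates the mean wind speed and fits a two-parameter Weibull distribution with SciPy; the wind-speed values are hypothetical stand-ins for the measured record.

```python
import numpy as np
from scipy import stats

# Hypothetical 10-minute average wind speeds (m/s) standing in for the
# anemometric record collected on top of the bell tower.
wind = np.array([3.2, 4.1, 5.6, 2.8, 6.3, 4.9, 7.2, 3.7, 5.1, 4.4])

mean_speed = wind.mean()

# Fit a two-parameter Weibull distribution (location fixed at zero),
# as is customary in wind-resource assessment.
shape_k, _, scale_c = stats.weibull_min.fit(wind, floc=0)
print(f"mean = {mean_speed:.2f} m/s, k = {shape_k:.2f}, c = {scale_c:.2f} m/s")
```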
2099 Ant Colony Optimization for Feature Subset Selection
Authors: Ahmed Al-Ani
Abstract:
The Ant Colony Optimization (ACO) is a metaheuristic inspired by the behavior of real ants in their search for the shortest paths to food sources. It has recently attracted a lot of attention and has been successfully applied to a number of different optimization problems. Due to the importance of the feature selection problem and the potential of ACO, this paper presents a novel method that utilizes the ACO algorithm to implement a feature subset search procedure. Initial results obtained using the classification of speech segments are very promising.
Keywords: Ant Colony Optimization, ant systems, feature selection, pattern recognition.
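A minimal sketch of how pheromone-guided subset construction can drive feature selection is given below in Python; the pheromone update, fixed subset size and parameter values are illustrative simplifications, not the method evaluated in the paper.

```python
import numpy as np

def aco_feature_selection(score, n_features, n_ants=20, n_iter=30,
                          subset_size=5, rho=0.1, seed=0):
    """Toy ACO search for a feature subset.

    score(subset) must return a non-negative quality to maximise (e.g.
    cross-validated classification accuracy); pheromone deposited on each
    feature biases the subsets built by later ants.
    """
    rng = np.random.default_rng(seed)
    pheromone = np.ones(n_features)
    best_subset, best_score = None, -np.inf
    for _ in range(n_iter):
        for _ in range(n_ants):
            prob = pheromone / pheromone.sum()
            subset = rng.choice(n_features, size=subset_size,
                                replace=False, p=prob)
            s = score(subset)
            if s > best_score:
                best_subset, best_score = subset, s
        # Evaporate, then reinforce the features of the best subset so far.
        pheromone *= (1 - rho)
        pheromone[best_subset] += rho * best_score
    return best_subset, best_score

# Toy usage with a synthetic score that rewards picking low-index features.
toy_score = lambda subset: float(np.sum(np.asarray(subset) < 5))
best, val = aco_feature_selection(toy_score, n_features=20)
```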
2098 The Intersubjective Dynamic Regarding Commercial Failures of Foreign Migration of Brands in Food Industry
Authors: Philippe Fauquet-Alekhine, Elena Fauquet-Alekhine-Pavlovskaia
Abstract:
On the basis of questionnaires and interviews with two samples of subjects (French and Anglo-Saxon), to whom two food products were presented (one from the subject's country and one from the foreign country), we have shown how sensitive consumers can be to the label or brand written on the package of a food product. Furthermore, in the light of Intersubjectivity theory, we have shown the necessity for the consumer to find congruence between the direct and meta perspectives towards the product, for which the producer and especially the marketer is responsible. Taking these findings into account may help to avoid the commercial failure of a brand when it is exported abroad.
Keywords: Brand, failure, food industry.
2097 A Sub Pixel Resolution Method
Authors: S. Khademi, A. Darudi, Z. Abbasi
Abstract:
One of the main limitations on the resolution of optical instruments is the size of the sensor's pixels. In this paper we introduce a new sub-pixel resolution algorithm to enhance the resolution of images. This method is based on the analysis of multiple images which are recorded rapidly during the fine relative motion between the image and the pixel array of the CCD. It is shown that, by applying this method to a sample noise-free image, one can enhance the resolution with an error of the order of 10^-14.
Keywords: Sub Pixel Resolution, Moving Pixels, CCD, Image, Optical Instrument.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 19942096 2-D Realization of WiMAX Channel Interleaver for Efficient Hardware Implementation
Authors: Rizwan Asghar, Dake Liu
Abstract:
The direct implementation of the interleaver functions in WiMAX is not hardware-efficient due to the presence of complex functions. Also, the conventional method, i.e. using memories to store the permutation tables, is silicon-consuming. This work presents a 2-D transformation of the WiMAX channel interleaver functions which reduces the overall hardware complexity needed to compute the interleaver addresses on the fly. A fully reconfigurable architecture for address generation in the WiMAX channel interleaver is presented, which consumes 1.1 k-gates in total. It can be configured for any block size and any modulation scheme in WiMAX. The presented architecture can run at a frequency of 200 MHz, thus fully supporting the high bandwidth requirements of WiMAX.
Keywords: Interleaver, deinterleaver, WiMAX, 802.16e.
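The 2-D view of interleaving can be illustrated with the simplest case, a write-by-row/read-by-column block interleaver whose addresses are generated on the fly; the Python sketch below shows only that idea and deliberately omits the second, modulation-dependent permutation of the actual 802.16e channel interleaver.

```python
def block_interleaver_addresses(n_cbps, d=16):
    """Read addresses of a simple write-by-row / read-by-column block interleaver.

    Data are conceptually written row by row into a (n_cbps // d) x d matrix
    and read out column by column; each address is computed on the fly from
    the read index k, with no stored permutation table.
    """
    rows = n_cbps // d
    return [(k % rows) * d + (k // rows) for k in range(n_cbps)]

# Illustrative block size and column count, not an 802.16e configuration.
addrs = block_interleaver_addresses(n_cbps=192, d=16)
```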
2095 Analysis of Equal Cost Adaptive Routing Algorithms Using Connection-Oriented and Connectionless Protocols
Authors: ER. Yashpaul Singh, A. Swarup
Abstract:
This research paper evaluates and compares the performance of equal-cost adaptive multi-path routing algorithms with the transport protocols TCP (Transmission Control Protocol) and UDP (User Datagram Protocol), using the network simulator ns2, and concludes which one is better.
Keywords: Multi-path routing algorithm, Datagram, Virtual Circuit, Throughput, Network services.
2094 Comparison of Noise Emissions in the Interior of Passenger Cars
Authors: Martin Kendra, Tomas Skrucany, Jaroslav Masek
Abstract:
Noise is one of the negative elements that affect human health. This article presents measurements of the noise emitted by a road vehicle and its parts during operation. The measurements were done in the interior of common passenger cars with a digital sound meter. The results compare the noise levels in different cars with different body shapes, which influence the driver's health. Transport has considerable ecological effects; many of them are detrimental to environmental sustainability. Roads and traffic exert a variety of direct and mostly detrimental effects on nature.
Keywords: Driver, noise measurement, passenger road vehicle, road transport.