Search results for: implementations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 161

131 Developing a Theory for Study of Transformation of Historic Cities

Authors: Sana Ahrar

Abstract:

Cities are undergoing rapid transformation with changes in lifestyle and technological advancements. These transformations may be experienced or physically visible in the built form. This paper focuses on the relationship between the social and physical environment, changes in lifestyle, and the interrelated factors influencing the transformation of any historic city. Shahjahanabad as a city has undergone transformation under various political powers as well as various policy implementations after independence. The visible traces of transformation diffused throughout the city may be due to socio-economic, historic, and political factors and to the globalization process. This study enables the development of a theory for studying the transformation of historic cities such as Shahjahanabad, which has been plundered and rebuilt, and which still thrives as a ‘living heritage city’. The theory developed formalizes the process of studying such transformation and can be used by planners, policy makers, and researchers in different urban contexts.

Keywords: heritage, historic cities, Shahjahanabad, transformation

Procedia PDF Downloads 356
130 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among the possible fields of utilization. In all these fields the amount of collected data is increasing quickly, and with this increase, computation speed becomes the critical factor. Data reduction is one solution to this problem. Removing redundancy in rough sets can be achieved with the reduct. Many algorithms for generating the reduct have been developed, but most of them are software implementations only and therefore have many limitations. A microprocessor uses a fixed word length and consumes considerable time for both fetching and processing instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process the data faster than software. A reduct is a subset of the condition attributes that provides the discernibility of the objects. For a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes. Moreover, every reduct contains all the attributes from the core. In this paper, a hardware implementation of a two-stage greedy algorithm for finding one reduct is presented. The decision table is used as input. The output of the algorithm is a superreduct, which is a reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are most frequent in the decision table.
The algorithm described above has two disadvantages: (i) it generates a superreduct instead of a reduct, and (ii) the additional first stage may be unnecessary if the core is empty. For systems focused on fast computation of the reduct, however, the first disadvantage is not a key problem, and the core calculation can be achieved with a combinational logic block, adding relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit to control the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC, and the execution times of the reduct calculation in hardware and software were compared. The results show an increase in the speed of data processing.
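The two-stage flow (core first, then greedy enrichment) can be sketched in software. The following is a minimal Python sketch of the algorithm's logic only, not the authors' FPGA design; the toy decision table and helper names are illustrative assumptions.

```python
from itertools import combinations

def discerns(table, attrs):
    """True if the attribute subset distinguishes every pair of
    objects that have different decisions."""
    for (c1, d1), (c2, d2) in combinations(table, 2):
        if d1 != d2 and all(c1[a] == c2[a] for a in attrs):
            return False
    return True

def core(table, n_attrs):
    """Stage 1: an attribute is in the core if dropping it breaks
    discernibility of some decision-relevant pair."""
    full = set(range(n_attrs))
    return {a for a in full if not discerns(table, full - {a})}

def superreduct(table, n_attrs):
    """Stage 2: greedily enrich the core with the attributes that
    discern the largest number of still-uncovered pairs."""
    chosen = core(table, n_attrs)
    remaining = set(range(n_attrs)) - chosen
    while not discerns(table, chosen) and remaining:
        counts = {a: 0 for a in remaining}
        for (c1, d1), (c2, d2) in combinations(table, 2):
            if d1 != d2 and all(c1[a] == c2[a] for a in chosen):
                for a in remaining:
                    if c1[a] != c2[a]:
                        counts[a] += 1
        best = max(remaining, key=lambda a: counts[a])
        chosen.add(best)
        remaining.discard(best)
    return chosen

# toy decision table: (condition attribute values, decision)
table = [((0, 0, 1), 0), ((0, 1, 1), 1), ((1, 0, 0), 0), ((1, 1, 0), 1)]
print(sorted(superreduct(table, 3)))
```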

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 189
129 The Content-Based Classroom: Perspectives on Integrating Language and Content

Authors: Mourad Ben Bennani

Abstract:

Views of language and language learning have undergone a tremendous change over the last decades. Language is no longer seen as a set of structured rules; it is rather viewed as a tool of interaction and communication. This shift has changed how language learning is viewed, which gave birth to various approaches and methodologies of language teaching. Two of these approaches are content-based instruction (CBI) and content and language integrated learning (CLIL). These are similar approaches which integrate content and foreign/second language learning through various methodologies and models that have resulted from different implementations around the world. This presentation deals with the sociocultural view of CBI and CLIL. It also defines language and content as vital components of CBI and CLIL. Next, it reviews the origins of CBI and the continuum perspectives, as well as CLIL definitions and models featured in the literature. Finally, it summarizes current research on program evaluation, with a focus on the benefits and challenges of these innovative approaches for second language teaching.

Keywords: CBI, CLIL, CBI continuum, CLIL models

Procedia PDF Downloads 386
128 Capacity Estimation of Hybrid Automated Repeat Request Protocol for Low Earth Orbit Mega-Constellations

Authors: Arif Armagan Gozutok, Alper Kule, Burak Tos, Selman Demirel

Abstract:

A wireless communication chain requires effective ways to keep throughput efficiency high while it suffers location-dependent, time-varying burst errors. Several techniques have been developed to ensure that the receiver recovers the transmitted information without errors. The most fundamental approaches are error checking and correction, together with re-transmission of non-acknowledged packets. In this paper, stop-and-wait (SAW) and chase-combining (CC) hybrid automated repeat request (HARQ) protocols are compared and analyzed in terms of throughput and average delay for the low Earth orbit (LEO) mega-constellation use case. Several assumptions and technological implementations are considered, as well as the use of low-density parity check (LDPC) codes together with several constellation orbit configurations.
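As a rough illustration of why chase combining helps, the Python sketch below estimates the mean number of transmissions per packet for SAW HARQ with and without combining under a deliberately simplified channel model (the k-th combined attempt is assumed to fail with probability p^k); it is not the link model used in the paper, and all parameters are illustrative.

```python
import random

def avg_transmissions(p, combining, trials=20000, seed=1):
    """Monte Carlo estimate of the mean number of transmissions per
    packet for stop-and-wait HARQ. Without combining, every attempt
    fails independently with probability p; with chase combining the
    soft-combining gain is crudely modelled by letting the k-th
    attempt fail with probability p**k."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        k = 1
        while rng.random() < (p ** k if combining else p):
            k += 1
        total += k
    return total / trials

p = 0.3  # illustrative per-attempt packet error probability
saw = avg_transmissions(p, combining=False)
cc = avg_transmissions(p, combining=True)
print(f"SAW: {saw:.2f} tx/packet, CC: {cc:.2f} tx/packet")
```

Fewer transmissions per packet translate directly into higher throughput and lower average delay, which is the trade-off the paper quantifies for LEO links.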

Keywords: HARQ, LEO, satellite constellation, throughput

Procedia PDF Downloads 112
127 Implementation of Distributed Randomized Algorithms for Resilient Peer-to-Peer Networks

Authors: Richard Tanaka, Ying Zhu

Abstract:

This paper studies a few randomized algorithms in application-layer peer-to-peer networks. The significant gain in scalability and resilience that peer-to-peer networks provide has made them widely used and adopted in many real-world distributed systems and applications. The unique properties of peer-to-peer networks make them particularly suitable for randomized algorithms such as random walks and gossip algorithms. Instead of simulations of peer-to-peer networks, we leverage the Docker virtual container technology to develop implementations of the peer-to-peer networks and these distributed randomized algorithms running on top of them. We can thus analyze their behaviour and performance in realistic settings. We further consider the problem of identifying high-risk bottleneck links in the network with the objective of improving the resilience and reliability of peer-to-peer networks. We propose a randomized algorithm to solve this problem and evaluate its performance by simulations.
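For readers unfamiliar with this class of algorithms, the following Python sketch simulates push gossip on a small random overlay; it is a toy stand-in for the Docker-based implementations described above, with the graph construction and all parameters chosen purely for illustration.

```python
import random

def gossip_rounds(n=200, degree=4, fanout=2, seed=7):
    """Simulate push gossip: each round, every informed node forwards
    the message to `fanout` random neighbours. Returns the number of
    rounds until all nodes are informed and the count reached."""
    rng = random.Random(seed)
    # overlay: a ring (keeps the graph connected) plus random chords
    nbrs = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}
    for v in range(n):
        for u in rng.sample([u for u in range(n) if u != v], degree):
            nbrs[v].add(u)
            nbrs[u].add(v)
    informed = {0}
    rounds = 0
    while len(informed) < n and rounds < 10 * n:
        new = set()
        for v in informed:
            for u in rng.sample(sorted(nbrs[v]), min(fanout, len(nbrs[v]))):
                new.add(u)
        informed |= new
        rounds += 1
    return rounds, len(informed)

rounds, reached = gossip_rounds()
print(rounds, reached)
```

The dissemination completes in roughly logarithmically many rounds, which is the scalability property that makes gossip attractive for peer-to-peer networks.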

Keywords: distributed randomized algorithms, peer-to-peer networks, virtual container technology, resilient networks

Procedia PDF Downloads 174
126 Hardware for Genetic Algorithm

Authors: Fariborz Ahmadi, Reza Tati

Abstract:

The genetic algorithm is a soft computing method that works on a set of solutions. These solutions are called chromosomes, and the best one is the absolute solution of the problem. The main problem with this algorithm is that, after passing through some generations, it may produce chromosomes that were already produced in earlier generations, which reduces the convergence speed. From another perspective, most genetic algorithms are implemented in software, and less work has been done on hardware implementation. Our work implements the genetic algorithm in hardware in a way that does not produce chromosomes that have been produced in previous generations. In this work, most genetic operators are implemented without producing repeated chromosomes, and genetic diversity is preserved. This genetic diversity ensures that the algorithm does not merely converge to a local optimum but can reach the global optimum. Without any doubt, the proposed approach is much faster than software implementations. Evaluation results also show that the proposed approach is faster than existing hardware implementations.
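The duplicate-filtering idea can be illustrated in software. Below is a minimal Python sketch of a genetic algorithm that keeps a `seen` set so that no chromosome from an earlier generation is produced again; the paper's hardware design implements this filtering in logic, not with a hash set, and the one-max fitness function here is purely illustrative.

```python
import random

def evolve(fitness, n_bits=16, pop_size=20, generations=40, seed=3):
    """GA sketch that never re-evaluates a chromosome produced in an
    earlier generation: a `seen` set filters offspring, preserving
    genetic diversity (a software analogue of the hardware filter)."""
    rng = random.Random(seed)
    def rand_chrom():
        return tuple(rng.randint(0, 1) for _ in range(n_bits))
    pop = [rand_chrom() for _ in range(pop_size)]
    seen = set(pop)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)  # one-point crossover
            child = list(a[:cut] + b[cut:])
            child[rng.randrange(n_bits)] ^= 1   # single-bit mutation
            child = tuple(child)
            if child not in seen:           # reject repeated chromosomes
                seen.add(child)
                children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve(lambda c: sum(c))  # one-max: the all-ones string is optimal
print(sum(best))
```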

Keywords: hardware, genetic algorithm, computer science, engineering

Procedia PDF Downloads 468
125 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley

Abstract:

Structural integrity is a key performance parameter for cladding products, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure the customer's safety but can give little information about critical behaviours that could help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem: it is a chemical reaction that behaves fluidly and impacts structural integrity. An analysis using both Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis, which imports the updated thermal data, due to the fire's behaviour, into the FEA solver in a series of iterations. In our recent work with Tata Steel U.K., using a two-way coupling methodology to determine fire performance, it has been shown that a program called FDS-2-Abaqus can predict a BS 476-22 furnace test with a degree of accuracy. The test demonstrated the fire performance of the Tata Steel U.K. Trisomet product, a polyisocyanurate (PIR) based sandwich panel used for cladding. Previous works demonstrated the limitations of the current version of the program, the main limitation being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially with the intention to scale up to an LPS 1181-1 test, which includes a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K. PIR sandwich panels. The new developments aim to reduce the computational cost and the error margin compared to experimental data.
One avenue explored is a multi-scale approach in the form of Reduced Order Modelling (ROM). This approach allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-Abaqus program. Validation of the study will come from physical experiments in line with governing-body standards such as BS 476-22 and LPS 1181-1. The physical experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact of the new implementations and discussing the feasibility of scaling up further to a whole warehouse.

Keywords: fire testing, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 39
124 Neuron Dynamics of Single-Compartment Traub Model for Hardware Implementations

Authors: J. C. Moctezuma, V. Breña-Medina, Jose Luis Nunez-Yanez, Joseph P. McGeehan

Abstract:

In this work we perform a bifurcation analysis for a single-compartment representation of the Traub model, one of the most important conductance-based models. The analysis focuses on two principal parameters: current and leakage conductance. Stable and unstable solutions are explored; Hopf bifurcations and the frequency interpretation when the current varies are also examined. This study allows control of the neuron dynamics and of the neuron response when these parameters change. Such analysis is particularly important for several applications, such as tuning parameters in the learning process, neuron excitability tests, and measuring the bursting properties of the neuron. Finally, hardware implementation results were developed to corroborate these findings.

Keywords: Traub model, Pinsky-Rinzel model, Hopf bifurcation, single-compartment models, bifurcation analysis, neuron modeling

Procedia PDF Downloads 285
123 A High-Level Co-Evolutionary Hybrid Algorithm for the Multi-Objective Job Shop Scheduling Problem

Authors: Aydin Teymourifar, Gurkan Ozturk

Abstract:

In this paper, a hybrid distributed algorithm is suggested for the multi-objective job shop scheduling problem. Many new approaches are used in the design steps of the distributed algorithm. The co-evolutionary structure of the algorithm and the competition between different communicating hybrid algorithms, which are executed simultaneously, lead to an efficient search. Using several machines to distribute the algorithms, at the iteration and solution levels, increases the computational speed. The proposed algorithm is able to find the Pareto solutions of large problems in a shorter time than other algorithms in the literature. The Apache Spark and Hadoop platforms have been used for the distribution of the algorithm. The suggested algorithm and implementations have been compared with the results of successful algorithms in the literature. The results prove the efficiency and high speed of the algorithm.

Keywords: distributed algorithms, Apache Spark, Hadoop, job shop scheduling, multi-objective optimization

Procedia PDF Downloads 333
122 Study of Gait Stability Evaluation Technique Based on Linear Inverted Pendulum Model

Authors: Kang Sungjae

Abstract:

This research proposes a gait stability evaluation technique based on the linear inverted pendulum model and the Zero Moment Point of the moving support foot. With this, an improvement in the gait analysis of the orthosis walk is validated. Applying a Lagrangian mechanics approximation to the solutions of the dynamics equations for the linear inverted pendulum not only simplifies the solution, but also provides a smooth Zero Moment Point for the double-support phase. The Zero Moment Point gait analysis techniques mentioned above validate reference trajectories for the center of mass of the gait orthosis, the timing of the steps, and the landing position references for the swing foot. The stability evaluation technique is tested with a 6-DOF powered gait orthosis. The results obtained are promising for implementations.
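The core relation behind such techniques is the LIPM Zero Moment Point equation, p = x - (z_c/g)·ẍ. The Python sketch below, with illustrative parameter values, evaluates the closed-form LIPM trajectory and confirms that during single support the ZMP stays at the pivot; it is a minimal illustration of the model, not the orthosis evaluation itself.

```python
import math

def lipm_state(x0, v0, t, z_c=0.9, g=9.81):
    """Closed-form CoM trajectory of the linear inverted pendulum with
    the support foot (pivot) at the origin: returns x, xdot, xddot."""
    Tc = math.sqrt(z_c / g)                  # pendulum time constant
    x = x0 * math.cosh(t / Tc) + Tc * v0 * math.sinh(t / Tc)
    v = (x0 / Tc) * math.sinh(t / Tc) + v0 * math.cosh(t / Tc)
    a = x / Tc ** 2                          # xddot = (g / z_c) * x
    return x, v, a

def zmp(x, a, z_c=0.9, g=9.81):
    """ZMP of the LIPM: p = x - (z_c / g) * xddot."""
    return x - (z_c / g) * a

# during single support the ZMP stays at the pivot (here: 0)
for t in (0.0, 0.1, 0.3):
    x, v, a = lipm_state(x0=-0.05, v0=0.4, t=t)
    print(f"{zmp(x, a):.6f}")
```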

Keywords: locomotion, center of mass, gait stability, linear inverted pendulum model

Procedia PDF Downloads 494
121 An Embedded High Speed Adder for Arithmetic Computations

Authors: Kala Bharathan, R. Seshasayanan

Abstract:

In this paper, a 1-bit Embedded Logic Full Adder (EFA) circuit at the transistor level is proposed, which reduces logic complexity and offers low power and high speed. The design is further extended to 64 bits. To evaluate the performance of the EFA, 16-, 32-, and 64-bit linear and square-root Carry Select Adder/Subtractor (CSLAS) structures are also proposed. Realistic testing of the proposed circuits is done on an 8 × 8 modified Booth multiplier, and a comparison in terms of power and delay is made. The EFA is implemented for different multiplier architectures for performance parameter comparison. The overall delay for CSLAS is reduced to 78% of that of the conventional one. The circuit implementations are done in TSMC 28 nm CMOS technology using the Cadence Virtuoso tool. The EFA achieves power savings of up to 14% when compared to the conventional adder. The present implementation was found to offer significant improvement in terms of power and speed in comparison to other full adder circuits.

Keywords: embedded logic, full adder, pdp, xor gate

Procedia PDF Downloads 421
120 Comparison of Parallel CUDA and OpenMP Implementations of Memetic Algorithms for Solving Optimization Problems

Authors: Jason Digalakis, John Cotronis

Abstract:

Memetic algorithms (MAs) are useful for solving optimization problems, but searching the space of an optimization problem with many dimensions is difficult, and using all the cores of a system is a challenge. In this study, a sequential implementation of the memetic algorithm is converted into a concurrent version, which is executed on the cores of both the CPU and the GPU. To this end, the OpenMP and CUDA libraries are applied to the parallel algorithm to enable concurrent execution on the CPU and GPU, respectively. The aim of this study is to compare the CPU and GPU implementations of the memetic algorithm. For this purpose, fourteen benchmark functions are selected as test problems. The obtained results indicate that our approach leads to speedups of up to five thousand times compared to one CPU thread while maintaining reasonable result quality. This clearly shows that GPUs have the potential to accelerate MAs and allow them to solve much more complex tasks.
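For reference, a memetic algorithm is a genetic algorithm whose offspring are refined by a local search. The sequential Python sketch below, on the sphere benchmark, shows the per-individual structure (fitness evaluation plus local search) that the paper parallelises with CUDA and OpenMP; all names and parameters are illustrative, and no parallelism is attempted here.

```python
import random

def memetic_minimise(f, dim=5, pop_size=16, generations=60, seed=5):
    """Memetic algorithm sketch: a GA whose offspring are refined by a
    simple hill-climbing local search. The paper parallelises exactly
    this per-individual work over CPU/GPU cores; here it runs
    sequentially for clarity."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.uniform(-5, 5) for _ in range(dim)]
    def local_search(x, step=0.2, iters=20):
        best, fb = x[:], f(x)
        for _ in range(iters):
            cand = [xi + rng.uniform(-step, step) for xi in best]
            fc = f(cand)
            if fc < fb:
                best, fb = cand, fc
        return best
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = [(ai + bi) / 2 + rng.gauss(0, 0.1)
                     for ai, bi in zip(a, b)]
            children.append(local_search(child))   # the memetic step
        pop = parents + children
    return min(pop, key=f)

sphere = lambda x: sum(xi * xi for xi in x)   # unimodal benchmark
best = memetic_minimise(sphere)
print(round(sphere(best), 4))
```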

Keywords: memetic algorithm, CUDA, GPU-based memetic algorithm, open multi processing, multimodal functions, unimodal functions, non-linear optimization problems

Procedia PDF Downloads 52
119 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays

Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín

Abstract:

Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Efficient hardware alternatives are now used increasingly in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability requirements of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile sensing systems except the force reconstruction process, a stage to which they have been applied less. This work presents a hardware implementation of a model-driven approach reported in the literature for the contact force reconstruction of flat and rigid tactile sensor arrays from normal stress data. From the analysis of a software implementation of that model, this implementation proposes the parallelization of tasks that facilitate the execution of matrix operations and a two-dimensional optimization function to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate algorithm parallelization techniques, using as a guide the rules of generalization, efficiency, and scalability in the tactile decoding process, and considering low latency, low power consumption, and real-time execution as the main design parameters.
The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to simulation by the Finite Element Modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on an MPSoC XCZU9EG-2FFVB1156 platform from Xilinx® that allows the reconstruction of force vectors following a scalable approach, from the information captured by tactile sensor arrays composed of up to 48×48 taxels that use various transduction technologies. The proposed implementation demonstrates a reduction in estimation time by a factor of 180 compared to software implementations. Despite the relatively high values of the estimation errors, the information provided by this implementation on the tangential and normal tractions and the triaxial reconstruction of forces makes it possible to adequately reconstruct the tactile properties of the touched object, which are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.

Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation

Procedia PDF Downloads 150
118 Identity-Based Encryption: A Comparison of Leading Classical and Post-Quantum Implementations in an Enterprise Setting

Authors: Emily Stamm, Neil Smyth, Elizabeth O'Sullivan

Abstract:

In Identity-Based Encryption (IBE), an identity, such as a username, email address, or domain name, acts as the public key. IBE consolidates the PKI by eliminating the repetitive process of requesting public keys for each message encryption. Two of the most popular schemes are Sakai-Kasahara (SAKKE), which is based on elliptic curve pairings, and the Ducas, Lyubashevsky, and Prest lattice scheme (DLP-Lattice), which is based on quantum-secure lattice cryptography. In order to embed the schemes in a standard enterprise setting, both schemes are implemented as shared system libraries and integrated into a REST service that functions at the enterprise level. The performance of both schemes as libraries and services is compared, and the practicalities of implementation and application are discussed. Our performance results indicate that although SAKKE has the smaller key and ciphertext sizes, DLP-Lattice is significantly faster overall and we recommend it for most enterprise use cases.

Keywords: identity-based encryption, post-quantum cryptography, lattice-based cryptography, IBE

Procedia PDF Downloads 88
117 Solving Linear Systems Involved in Convex Programming Problems

Authors: Yixun Shi

Abstract:

Many interior point methods for convex programming solve an (n+m)×(n+m) linear system in each iteration. Many implementations solve this system by considering an equivalent m×m system (4), as listed in the paper, so that the job is reduced to solving system (4). However, system (4) has to be solved exactly, since otherwise the error would be passed entirely onto the last m equations of the original system. Often the Cholesky factorization is computed to obtain the exact solution of (4). One Cholesky factorization then has to be done in every iteration, resulting in higher computational costs. In this paper, two iterative methods for solving linear systems using vector division are combined and embedded into interior point methods. Instead of computing one Cholesky factorization in each iteration, the method requires only one Cholesky factorization in the entire procedure, thus significantly reducing the amount of computation needed to solve the problem. Based on this, a hybrid algorithm for solving convex programming problems is proposed.
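The computational saving at stake can be illustrated with a small example: factor a symmetric positive definite matrix once and reuse the factor for the right-hand side of every subsequent "iteration". This Python/NumPy sketch (with hand-written triangular solves) only illustrates factor reuse; it is not the vector-division method proposed in the paper, and the matrix and right-hand sides are arbitrary.

```python
import numpy as np

def forward_sub(L, b):
    """Solve L y = b for lower-triangular L."""
    y = np.zeros_like(b, dtype=float)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve U x = y for upper-triangular U."""
    x = np.zeros_like(y, dtype=float)
    for i in reversed(range(len(y))):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)     # SPD system matrix
L = np.linalg.cholesky(A)       # factor ONCE ...

for _ in range(3):              # ... then reuse across "iterations"
    b = rng.standard_normal(6)
    x = back_sub(L.T, forward_sub(L, b))
    print(np.allclose(A @ x, b))
```

Each reuse costs only two O(m²) triangular solves instead of an O(m³) refactorization, which is the kind of per-iteration saving the paper targets.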

Keywords: convex programming, interior point method, linear systems, vector division

Procedia PDF Downloads 371
116 Multi-Actors’ Scenario for Measuring Metropolitan Governance and Spatial Planning: A Case Study of Bangalore, India

Authors: H. S. Kumara

Abstract:

The rapid process of urbanization and the growing number of metropolitan cities and regions call for better governance in India. This article attempts to argue that spatial planning really matters for measuring governance at the metropolitan scale. The study explores metropolitan governance and spatial planning and their interrelationship: it reviews the relevant issues and concepts and the evolution of spatial planning in India, and critically examines a multi-actor scenario for measuring metropolitan governance by means of spatial planning, reviewing various master plans and multi-actor viewpoints on the role of spatial planning in zoning regulations, master plan implementation, and effective service delivery. This paper argues and concludes that the spatial planning of Bangalore has a direct impact on measuring metropolitan governance.

Keywords: metropolitan governance, spatial planning, service delivery, multi-actors’, opinion survey, master plan

Procedia PDF Downloads 564
115 Optimization and Design of Current-Mode Multiplier Circuits with Applications in Analog Signal Processing for Gas Industrial Package Systems

Authors: Mohamad Baqer Heidari, Hefzollah Mohammadian

Abstract:

This brief presents two original implementations of improved-accuracy current-mode multiplier/divider circuits. Besides the advantage of their simplicity, these multiplier/divider structures offer very small linearity errors as a result of the proposed design techniques (0.75% and 0.9%, respectively, over an extended range of the input currents). The multiplier/divider circuits permit easy reconfiguration, and the presented structures represent the functional basis for implementing complex function synthesizer circuits. The proposed computational structures are designed for implementation in 0.18-µm CMOS technology with low-voltage operation (a supply voltage of 1.2 V). The circuits’ power consumptions are 60 and 75 µW, respectively, while their frequency bandwidths are 79.6 and 59.7 MHz, respectively.

Keywords: analog signal processing, current-mode operation, functional core, multiplier, reconfigurable circuits, industrial package systems

Procedia PDF Downloads 345
114 Application First and Second Digits Number in the Benford Law

Authors: Teguh Sugiarto

Abstract:

Background: This study aims to explore fraud in financial statements using Benford's law for the first and second digits, in a case study of PT AKR Corporindo Tbk. Research Methods: The authors apply first-digit and second-digit analyses under Benford's law. Having obtained the results of the first- and second-digit analyses, the authors examine the differences between the observed and expected distributions against a 5% threshold. Where a digit's difference lies above this 5% level, the financial report may be analysed further in a follow-up audit investigation, and the authors suspect irregularities in the financial statements. Findings: The research found differences between the observed first-digit frequencies and those expected under Benford's law in the PT AKR Corporindo Tbk financial reports for the fiscal years 2006-2010, both above and below the 5% difference level. Conclusions: It can be concluded that in the PT AKR Corporindo financial reports for 2006, 2007, 2008, 2009, and 2010 there are significant differences between the observed digit frequencies and Benford's law, as presented in the analysis table.
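The first-digit test can be reproduced in a few lines of Python. The sketch below computes the expected Benford frequencies, log10(1 + 1/d), and flags digits whose observed proportion deviates by more than 5 percentage points; the ledger amounts are hypothetical illustration data, not PT AKR Corporindo figures.

```python
import math
from collections import Counter

def benford_first_digit_freq():
    """Expected first-digit frequencies under Benford's law:
    P(d) = log10(1 + 1/d) for d = 1..9."""
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit_deviations(amounts):
    """Observed minus expected first-digit proportions; digits whose
    absolute difference exceeds 5 percentage points are candidates
    for follow-up audit investigation."""
    digits = [int(str(abs(a))[0]) for a in amounts if a]
    counts = Counter(digits)
    n = len(digits)
    expected = benford_first_digit_freq()
    return {d: counts.get(d, 0) / n - expected[d] for d in range(1, 10)}

# hypothetical ledger amounts (illustration only)
amounts = [123, 187, 1045, 2210, 310, 1999, 140, 2750, 118, 960,
           170, 130, 4100, 115, 201, 1340, 129, 510, 6020, 111]
dev = first_digit_deviations(amounts)
flagged = [d for d, diff in dev.items() if abs(diff) > 0.05]
print(flagged)
```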

Keywords: Benford law, first digits, second digits, Indonesian company

Procedia PDF Downloads 401
113 Non-Interactive XOR Quantum Oblivious Transfer: Optimal Protocols and Their Experimental Implementations

Authors: Lara Stroh, Nikola Horová, Robert Stárek, Ittoop V. Puthoor, Michal Mičuda, Miloslav Dušek, Erika Andersson

Abstract:

Oblivious transfer (OT) is an important cryptographic primitive. Any multi-party computation can be realised with OT as a building block. XOR oblivious transfer (XOT) is a variant where the sender Alice has two bits, and a receiver, Bob, obtains either the first bit, the second bit, or their XOR. Bob should not learn anything more than this, and Alice should not learn what Bob has learned. Perfect quantum OT with information-theoretic security is known to be impossible. We determine the smallest possible cheating probabilities for unrestricted dishonest parties in non-interactive quantum XOT protocols using symmetric pure states and present an optimal protocol which outperforms classical protocols. We also "reverse" this protocol so that Bob becomes the sender of a quantum state and Alice the receiver who measures it while still implementing oblivious transfer from Alice to Bob. Cheating probabilities for both parties stay the same as for the unreversed protocol. We optically implemented both the unreversed and the reversed protocols and cheating strategies, noting that the reversed protocol is easier to implement.

Keywords: oblivious transfer, quantum protocol, cryptography, XOR

Procedia PDF Downloads 79
112 Vehicle Type Classification with Geometric and Appearance Attributes

Authors: Ghada S. Moussa

Abstract:

With the increase in population along with economic prosperity, an enormous increase in the number and types of vehicles on the roads has occurred. This fact brings a growing need to classify vehicles efficiently and effectively into their corresponding categories, which plays a crucial role in many areas of infrastructure planning and traffic management. This paper presents two vehicle-type classification approaches: 1) geometric-based and 2) appearance-based. The two classification approaches are used for two tasks: multi-class and intra-class vehicle classification. To evaluate the proposed classification approaches’ performance and identify the most effective yet efficient one, the 10-fold cross-validation technique is used with a large dataset. The proposed approaches are distinguishable from previous research on vehicle classification in that: i) they consider both geometric and appearance attributes of vehicles, and ii) they perform remarkably well in both multi-class and intra-class vehicle classification. Experimental results exhibit promising potential for implementation of the proposed vehicle classification approaches in real-world applications.
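The 10-fold cross-validation protocol used for evaluation can be sketched as an index generator. The Python fragment below illustrates the protocol only; it does not reproduce the paper's classifiers or dataset, and the sample size is arbitrary.

```python
import random

def k_fold_indices(n, k=10, seed=42):
    """Yield (train, test) index lists for k-fold cross-validation:
    shuffle once, split into k near-equal folds, and hold out each
    fold in turn as the test set."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

# sanity check: every sample appears in exactly one test fold
n = 103
seen = []
for train, test in k_fold_indices(n):
    assert set(train).isdisjoint(test)
    seen.extend(test)
print(sorted(seen) == list(range(n)))
```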

Keywords: appearance attributes, geometric attributes, support vector machine, vehicle classification

Procedia PDF Downloads 312
111 Experimental Evaluation of Succinct Ternary Tree

Authors: Dmitriy Kuptsov

Abstract:

Tree data structures, such as binary or, more generally, k-ary trees, are essential in computer science. The applications of these data structures range from data search and retrieval to sorting and ranking algorithms. Naive implementations of these data structures can consume prohibitively large volumes of random access memory, limiting their applicability in certain solutions. In such cases, a more advanced representation of these data structures is essential. In this paper we present the design of a compact version of the ternary tree data structure and demonstrate the results of an experimental evaluation using the static dictionary problem. We compare these results with the results for binary and regular ternary trees. The conducted evaluation study shows that our design, in the best case, consumes up to 12 times less memory (for the dictionary used in our experimental evaluation) than a regular ternary tree and in certain configurations shows performance comparable to regular ternary trees. We have evaluated the performance of the algorithms on both 32- and 64-bit operating systems.
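For context, a ternary search tree stores one character per node with low/equal/high children. The Python sketch below shows the plain (non-succinct) structure whose memory footprint the paper's compact design reduces; it is illustrative and is not the evaluated implementation.

```python
class TSTNode:
    """One node of a ternary search tree: a character, three children
    (lower, equal, higher), and an end-of-word flag."""
    __slots__ = ("ch", "lo", "eq", "hi", "end")
    def __init__(self, ch):
        self.ch, self.lo, self.eq, self.hi, self.end = ch, None, None, None, False

def tst_insert(root, word):
    """Insert `word` into the ternary search tree; returns the root."""
    if not word:
        return root
    ch = word[0]
    if root is None:
        root = TSTNode(ch)
    if ch < root.ch:
        root.lo = tst_insert(root.lo, word)
    elif ch > root.ch:
        root.hi = tst_insert(root.hi, word)
    elif len(word) > 1:
        root.eq = tst_insert(root.eq, word[1:])
    else:
        root.end = True
    return root

def tst_contains(root, word):
    """Dictionary lookup: the static-dictionary workload of the paper."""
    if root is None or not word:
        return False
    ch = word[0]
    if ch < root.ch:
        return tst_contains(root.lo, word)
    if ch > root.ch:
        return tst_contains(root.hi, word)
    if len(word) == 1:
        return root.end
    return tst_contains(root.eq, word[1:])

root = None
for w in ["cat", "cap", "car", "dog", "do"]:
    root = tst_insert(root, w)
print(tst_contains(root, "cap"), tst_contains(root, "ca"))
```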

Keywords: algorithms, data structures, succinct ternary tree, performance evaluation

Procedia PDF Downloads 137
110 A Scalable Media Job Framework for an Open Source Search Engine

Authors: Pooja Mishra, Chris Pollett

Abstract:

This paper explores efficient ways to implement media-updating features such as news aggregation, video conversion, and bulk email handling. These jobs share the property that they are periodic in nature, and they all benefit from being handled in a distributed fashion. The data for these jobs also often comes from a social or collaborative source. We isolate the class of periodic, one-round map-reduce jobs as a useful setting in which to describe and handle media-updating tasks. As such tasks are simpler than general map-reduce jobs, programming them on a general map-reduce platform can easily become tedious. This paper presents the MediaUpdater module of the Yioop Open Source Search Engine Web Portal, designed to handle such jobs via an extension of a PHP class. We describe how to implement various media-updating tasks in our system, as well as experiments carried out using these implementations on an Amazon Web Services cluster.
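
Yioop's MediaUpdater is written in PHP; the following language-neutral sketch shows the shape of a periodic, one-round map-reduce job as described above, with a toy news-aggregation subclass. Class and field names are hypothetical, not Yioop's API:

```python
class MediaJob:
    """A periodic, one-round map-reduce job: subclasses override map/reduce.

    `period` would be consulted by a scheduler (not shown) to decide when
    the job is due to run again.
    """
    period = 3600  # seconds between runs

    def map(self, item):
        # Emit (key, value) pairs for one input item.
        raise NotImplementedError

    def reduce(self, key, values):
        # Combine all values emitted for one key.
        raise NotImplementedError

    def run(self, items):
        # The single map-reduce round: group map output by key, then reduce.
        groups = {}
        for item in items:
            for k, v in self.map(item):
                groups.setdefault(k, []).append(v)
        return {k: self.reduce(k, vs) for k, vs in groups.items()}

class NewsAggregator(MediaJob):
    period = 900  # poll feeds every 15 minutes

    def map(self, article):
        yield (article["source"], 1)

    def reduce(self, key, values):
        return sum(values)

job = NewsAggregator()
counts = job.run([{"source": "feedA"}, {"source": "feedB"}, {"source": "feedA"}])
```

Because each job is a single round, a subclass only supplies `map` and `reduce`; the framework owns scheduling and distribution, which is the tedium the paper's module removes.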

Keywords: distributed jobs framework, news aggregation, video conversion, email

Procedia PDF Downloads 266
109 A Proposal to Integrate Spatially Explicit Ecosystem Services with Urban Metabolic Modelling

Authors: Thomas Elliot, Javier Babi Almenar, Benedetto Rugani

Abstract:

The integration of urban metabolism (UM) with spatially explicit ecosystem service (ES) stocks has the potential to advance sustainable urban development. It corrects the lack of spatial specificity in current urban metabolism models. Furthermore, it incorporates into UM not only the physical properties of material and energy stocks and flows, but also the implications for the natural capital that provides and maintains human well-being. This paper presents the first stages of a modelling framework with which urban planners can spatially assess the trade-offs in ES flows resulting from urban interventions of different character and scale. The framework allows for a multi-region assessment that takes into account sustainability burdens arising from an urban planning event occurring elsewhere in the environment. The urban boundary is defined using the Functional Urban Area (FUA) method to account for trans-administrative ES flows. ES are mapped using CORINE land cover within the FUA. These stocks and flows are incorporated into a UM assessment method to demonstrate the transfer and flux of ES arising from different urban planning implementations.

Keywords: ecological economics, ecosystem services, spatial planning, urban metabolism

Procedia PDF Downloads 304
108 The Applicability of International Humanitarian Law to Non-State Actors

Authors: Yin Cheung Lam

Abstract:

In 1949, the ratification of the Geneva Conventions heralded the international community's adoption of a new universal and non-discriminatory approach to human rights in situations of conflict. However, with the proliferation of international terrorism after the 9/11 attacks on the United States (U.S.), the international community's uneven and contradictory implementation of international humanitarian law (IHL) has called its agenda of universal human rights into question. Specifically, derogation from IHL has never been so pronounced as in the U.S.-led 'War on Terror'. While an extensive literature has 'assessed the impact' of the implementation of the Geneva Conventions, limited attention has been paid to interrogating the ways in which the Geneva Conventions and their resulting implementation have functioned to discursively reproduce certain understandings of human rights between states and non-state actors. Through a discursive analysis of the Geneva Conventions and the conceptualization of human rights in relation to terrorism, this thesis problematises the way in which the U.S. has understood and reproduced understandings of human rights. Using the U.S. 'War on Terror' as an example, it seeks to extend previous analyses of the U.S. practice of IHL through a qualitative discursive analysis of the human rights content of the Geneva Conventions, as well as of the speeches and policy documents on the 'War on Terror'.

Keywords: discursive analysis, human rights, non-state actors, war on terror

Procedia PDF Downloads 578
107 The Use of X-Ray Computed Microtomography in Petroleum Geology: A Case Study of Unconventional Reservoir Rocks in Poland

Authors: Tomasz Wejrzanowski, Łukasz Kaczmarek, Michał Maksimczuk

Abstract:

High-resolution X-ray computed microtomography (µCT) is a non-destructive technique commonly used to determine the internal structure of reservoir rock samples. This study concerns µCT analysis of Silurian and Ordovician shales and mudstones from a borehole in the Baltic Basin in northern Poland. The spatial resolution of the µCT images obtained was 27 µm, which enabled the authors to create accurate 3-D visualizations and to calculate the ratio of pore and fracture volume to total sample volume. A total of 1024 µCT slices were used to build a 3-D volume of the sample's structural geometry. These µCT slices were processed to obtain a clearly visible image and the volume ratio. A copper X-ray source filter was used to reduce image artifacts. Careful technical settings of the µCT made it possible to obtain high-resolution 3-D images of samples with low X-ray transparency. The presented results confirm the utility of µCT in geoscience and show that µCT still holds promising applications for reservoir exploration and characterization.
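
The pore-and-fracture volume ratio described above reduces, once the slices are segmented, to counting voxels below a grayscale threshold. A minimal sketch on a hypothetical (tiny) binarizable volume, with made-up intensities and threshold:

```python
def porosity_ratio(volume, threshold):
    """Fraction of voxels darker than `threshold`, treated as pore/fracture space.

    `volume` is a 3-D grayscale stack: a list of slices, each a list of rows
    of voxel intensities (stand-in for the 1024 reconstructed uCT slices).
    """
    pore = total = 0
    for slice_ in volume:
        for row in slice_:
            for voxel in row:
                total += 1
                pore += voxel < threshold   # dark voxel -> void space
    return pore / total

# Illustrative 2x2x2 volume: low values are voids, high values are matrix.
volume = [
    [[10, 200], [220, 15]],
    [[240, 230], [12, 250]],
]
ratio = porosity_ratio(volume, threshold=50)
```

In practice the thresholding would follow artifact reduction (e.g., the copper-filtered acquisition mentioned above) and would operate on image arrays rather than nested lists; the arithmetic is the same.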

Keywords: fractures, material density, pores, structure

Procedia PDF Downloads 218
106 Building Information Modelling: A Review to Indian Scenario

Authors: P. Agnivesh, P. V. Ponambala Moorthi

Abstract:

The evolution of information modelling has led to the visualisation of a well-organized built environment. Building Information Modelling (BIM) is considered an evolution in off-site construction that essentially enhances and controls the present paradigms of on-site construction. Promptness, sustainability and security are considered important characteristics of building information modelling. Projects that use BIM are tied firmly together by technology but are distributed organizationally. This allows different team members in a project to coordinate and integrate their work and workflows, which in turn improves the efficiency of the work breakdown structure. Internationally, BIM has been accepted by the construction industry as a modern, computer-aided way of sharing information efficiently and avoiding on-site misperceptions. In developing countries like India, however, BIM is still at an early stage and requires many mandates and policies from the government for its widespread implementation. This paper reviews the current scenario of BIM worldwide and in India and makes suggestions for the improved implementation of building information modelling under Indian policy conditions.

Keywords: building information modelling, Indian polity, information modelling, information sharing, mandates and policies, sustainability

Procedia PDF Downloads 349
105 The Impact of Social Protection Intervention on Alleviating Social Vulnerability (Evidence from Ethiopian Rural Households)

Authors: Tewelde Gebresslase Haile, S. P. Singh

Abstract:

To bridge the existing knowledge gap on public intervention implementation, this study estimates the impact of a social protection intervention (SPI) on alleviating social vulnerability. Following multi-stage sampling, primary information was gathered through a self-administered questionnaire, focus group discussions, and interviews with target households in four systematically selected districts of Tigrai, Ethiopia. Factor analysis and propensity score matching are applied to construct a Social Vulnerability Index (SVI) and to measure the counterfactual impact of the selected intervention. As a multidimensional challenge, social vulnerability proves to be an important concept for guiding policy evaluation. Access to the basic services of the social affairs, agriculture, health and education sectors, and to the Food Security Program, is commonly used as an SPI. This study finds that households with access to the SPI scored a 9.65% lower SVI than they would have in the absence of the intervention. Finally, the study suggests the provision of integrated, proactive, productive, and evidence-based SPIs to alleviate social vulnerability.
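
The counterfactual comparison above rests on propensity score matching. As a minimal sketch of the matching step only, the code below pairs each treated household with the control whose (already estimated) propensity score is closest, then averages the SVI difference; all scores, SVI values, and the resulting effect size are invented for illustration and do not reproduce the paper's 9.65% figure:

```python
# Hypothetical households: (propensity score, SVI). Scores would come from a
# first-stage model such as a logistic regression on household covariates.
treated  = [(0.62, 0.41), (0.48, 0.38), (0.71, 0.45)]
controls = [(0.60, 0.47), (0.50, 0.44), (0.72, 0.50), (0.30, 0.52)]

def att_nearest_neighbour(treated, controls):
    """Average treatment effect on the treated via 1-nearest-neighbour
    matching on the propensity score."""
    diffs = []
    for score, outcome in treated:
        # Control with the closest propensity score is the counterfactual.
        _, matched_outcome = min(controls, key=lambda c: abs(c[0] - score))
        diffs.append(outcome - matched_outcome)
    return sum(diffs) / len(diffs)

att = att_nearest_neighbour(treated, controls)
```

A negative effect here means treated households carry a lower SVI than their matched controls, which is the direction of the paper's finding.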

Keywords: social protection, livelihood assets, social vulnerability, public policy, SVI

Procedia PDF Downloads 46
104 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks

Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi

Abstract:

Brain-computer interfaces are a growing research field that has produced many implementations for both research and practical purposes. Despite the popularity of implementations using non-invasive neuroimaging methods, radical improvement of the channel bandwidth, and thus of decoding accuracy, is only possible with invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, whose effective analysis requires machine learning methods able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that learn representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out in which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. Multichannel ECoG signals were then used to track the finger movement trajectory characterized by the accelerometer signal. This was carried out both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained using 1-second segments of ECoG data from the training dataset as input.
To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After hyperparameter optimization and training, the deep learning model allowed reasonably accurate causal decoding of finger movement with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach achieved only r = 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows physiologically plausible patterns of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study demonstrate that combining a minimally invasive neuroimaging technique such as ECoG with advanced machine learning approaches allows motion to be decoded with high accuracy. Such a setup provides a means of controlling devices with a large number of degrees of freedom, as well as for exploratory studies of the complex neural processes underlying movement execution.
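
The accuracy metric used above is the Pearson correlation coefficient between the decoded trajectory and the accelerometer trace. A self-contained sketch of that evaluation step, on invented short signals (the model itself is not reproduced here):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length signals, e.g. the
    network's decoded trajectory and the accelerometer readings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

decoded = [0.1, 0.5, 0.9, 0.4, 0.2]   # hypothetical model output
actual  = [0.0, 0.6, 1.0, 0.5, 0.1]   # hypothetical accelerometer trace
r = pearson_r(decoded, actual)
```

In the paper's terms, r = 0.8 for the causal deep model and r = 0.56 for the causal Wiener-like baseline would be computed exactly this way over the held-out test fragments.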

Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex

Procedia PDF Downloads 131
103 Upcycling of Inorganic Waste: Lessons Learned and Outlook for the Future

Authors: Miroslava Hujová, Patricia Rabello Monich, Jozef Kraxner, Dusan Galusek, Enrico Bernardo

Abstract:

Inorganic waste upcycling offers a way to avoid landfilling while saving raw materials. However, its practical implementation in Slovakia, and elsewhere in Europe, remains rather limited, even though smaller countries like Slovakia have the advantage of a closely knit inorganic materials industry. One part of the discussion should be an overview of the wastes that can potentially be upcycled, e.g., fly ash, red mud, glass cullet, and vitrified bottom ash. These wastes can be processed by a variety of strategies; the one of our choice, alkali activation, opens the possibility of forming novel materials at almost negligible energy expense. In our research, these materials are characterized by comprehensive means (X-ray fluorescence, diffraction methods, thermal analysis, scanning electron microscopy, mechanical tests and chemical stability), which time and again demonstrate properties competitive with traditional materials available on the market. It remains an open question for discussion why these materials do not receive more significant attention from industry, and there is pressing interest in resolving this situation.

Keywords: upcycling, inorganic wastes, glass ceramics, alkali-activation

Procedia PDF Downloads 110
102 Carbon Footprint Reduction Using Cleaner Production Strategies in a Otoshimi Producing Plant

Authors: Razuana Rahim, Abdul Aziz Abdul Raman

Abstract:

In this work, a study was conducted to evaluate the feasibility of using a Cleaner Production (CP) strategy to reduce carbon dioxide (CO2) emissions in a plant that produces Otoshimi. The CP strategy is meant to reduce CO2 emissions while taking economic aspects into consideration. For this purpose, a CP audit was conducted, the information obtained was analyzed, and the major contributors to CO2 emissions inside the boundary of the production plant were identified: electricity, water and fuel consumption, and the generation of solid waste and wastewater. Total CO2 emissions were 0.27 kg CO2 per kg of Otoshimi produced, of which 68% was contributed by electricity consumption. Subsequently, a total of three CP options were generated; their implementation is expected to reduce the CO2 emissions from electricity consumption to 0.16 kg CO2 per kg of Otoshimi produced, a reduction of about 14%. The study shows that a CP strategy can be implemented to reduce CO2, even without any investment, in a plant that produces Otoshimi.
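
The CP-audit accounting above amounts to multiplying each source's activity data by an emission factor and ranking the resulting shares. A minimal sketch with invented activity data and emission factors (the paper reports only the final figures of 0.27 kg CO2 per kg of product and a 68% electricity share; the numbers below are illustrative and do not reproduce them):

```python
# Hypothetical inventory: source -> (use per kg of product, kg CO2 per unit).
inventory = {
    "electricity": (0.45, 0.408),  # kWh/kg product, kg CO2/kWh
    "fuel":        (0.010, 2.70),  # L/kg product,   kg CO2/L
    "water":       (0.020, 0.34),  # m3/kg product,  kg CO2/m3
}

def footprint(inventory):
    """Per-source emissions, the total, and each source's share of the total."""
    emissions = {src: use * ef for src, (use, ef) in inventory.items()}
    total = sum(emissions.values())
    shares = {src: e / total for src, e in emissions.items()}
    return total, shares

total, shares = footprint(inventory)
major = max(shares, key=shares.get)   # the contributor CP options should target
```

Identifying the dominant share is what directs the CP options at electricity consumption first, as in the study.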

Keywords: carbon dioxide emission, cleaner production audit, cleaner production options, otoshimi production

Procedia PDF Downloads 393