Search results for: low complexity
641 Complexity of Multivalued Maps
Authors: David Sherwell, Vivien Visaya
Abstract:
We consider the topological entropy of maps that, in general, cannot be described by one-dimensional dynamics. In particular, we show that for a multivalued map F generated by single-valued maps, the topological entropy of any of the single-valued maps bounds the topological entropy of F from below.
Keywords: Multivalued maps, Topological entropy, Selectors.
640 The Model of the Genre of Literary Portrait in Modern Literary Criticism
Authors: B. K. Bazylova, Zh. D. Suleimenova
Abstract:
In modern literary criticism, the problem of genre remains a matter of debate. Genre is a phenomenon located at the intersection of the synchronic and diachronic processes in the development of literature, which accounts for the complexity of defining it. It marks the point of contact between individual literary works and the literary process.
Keywords: Literary criticism, literary portrait.
639 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings
Authors: G. Candel, D. Naccache
Abstract:
t-SNE is an embedding method that the data science community has widely adopted. It supports two main tasks: displaying results, by coloring items according to their class or a feature value; and forensics, by giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure-preservation property and its answer to the crowding problem, in which all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, in which the area of a cluster is proportional to its size in number of items, and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and it becomes infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of the data. While this approach is highly scalable, points could be mapped to exactly the same position, making them indistinguishable, and such a model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, using the newly obtained embedding as the next support. The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. The method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing one to observe the birth, evolution, and death of clusters. The proposed approach facilitates the identification of significant trends and changes, which empowers the monitoring of high-dimensional datasets’ dynamics.
Keywords: Concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning.
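The coherence idea can be approximated with off-the-shelf tools. Below is a minimal sketch, assuming scikit-learn's TSNE (not the authors' implementation): passing a previous embedding as the initialization of the next run tends to keep cluster positions stable across related data subsets, which is the property the paper builds on.

```python
# Hypothetical illustration, not the authors' algorithm: reuse a previous
# t-SNE layout as the initialization of the next embedding so that cluster
# positions remain comparable between the two runs.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X1 = rng.normal(size=(500, 50))                   # first data subset
X2 = X1 + rng.normal(scale=0.05, size=X1.shape)   # drifted version of the same items

emb1 = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X1)
# Start the second run from the first layout instead of a fresh random one.
emb2 = TSNE(n_components=2, init=emb1, random_state=0).fit_transform(X2)
```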
638 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases
Authors: Mohammad A. Bani-Khaled
Abstract:
In this work we use the Discrete Proper Orthogonal Decomposition transform to characterize the properties of coupled dynamics in thin-walled beams, by exploiting numerical databases obtained from finite element simulations. The outcomes will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams have widespread usage in modern engineering applications, in both large-scale structures (aeronautical structures) and nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to involve computational mechanics to simulate the dynamics numerically. When using numerical computational techniques, it is not necessary to over-simplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space, and these numerical databases contain information on the properties of the coupled dynamics. In order to extract the system's dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The time-domain Proper Orthogonal Decomposition transform is a powerful tool for processing such databases and will be used to study the coupled dynamics of thin-walled basic structures. These structures are ideal to form a basis for a systematic study of coupled dynamics in structures of complex geometry.
Keywords: Coupled dynamics, geometric complexity, Proper Orthogonal Decomposition (POD), thin walled beams.
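For readers unfamiliar with the transform, the snippet below is a minimal sketch of snapshot-based POD computed with the singular value decomposition; the snapshot matrix is synthetic and merely stands in for a finite element database.

```python
# A minimal POD sketch: columns of the snapshot matrix are states sampled
# in time; left singular vectors are the spatial POD modes, and the squared
# singular values rank the "energy" each mode captures.
import numpy as np

n_dof, n_snap = 200, 80
snapshots = np.random.default_rng(1).normal(size=(n_dof, n_snap))  # stand-in data

mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

energy = s**2 / np.sum(s**2)            # energy fraction per mode
modes = U[:, :5]                        # dominant spatial modes
coeffs = modes.T @ (snapshots - mean)   # their temporal coefficients
print("energy captured by 5 modes:", energy[:5].sum())
```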
637 Community Perceptions and Attitudes Regarding Wildlife Crime in South Africa
Authors: Louiza C. Duncker, Duarte Gonçalves
Abstract:
Wildlife crime is a complex problem with many interconnected facets, which are generally responded to in parts or fragments in efforts to “break down” the complexity into manageable components. However, fragmentation increases complexity, as coherence and cooperation become diluted. A whole-of-society approach has been developed towards finding a common goal and an integrated approach to preventing wildlife crime. As part of this development, research was conducted in rural communities adjacent to conservation areas in South Africa to define and comprehend the challenges they face and to understand their perceptions of wildlife crime. The results showed that the perceptions of community members varied: most were in favor of conservation and of protecting rhinos, but only if they derive adequate benefit from it. Regardless of gender, income level, education level, or access to services, conservation was perceived as both good and bad by the same people. Even though people in the communities are poor, a willingness to stop rhino poaching does exist amongst them, but their perception that parks do not care about people triggered an unwillingness to stop, prevent, or report poaching. Understanding the nuances, the history, and the interests and values of community members, as well as the drivers behind poaching mind-sets (intrinsic or driven by transnational organized crime), is imperative to create sustainable and resilient communities, on multiple levels, that make a substantial positive impact on people’s lives while also conserving wildlife for posterity.
Keywords: Conservation, community perceptions, wildlife crime, rhino poaching, interest and value creation, whole-of-society approach.
636 An Embedded System for Artificial Intelligence Applications
Authors: Ioannis P. Panagopoulos, Christos C. Pavlatos, George K. Papakonstantinou
Abstract:
Conventional approaches to the implementation of logic programming applications on embedded systems are solely of a software nature. As a consequence, a compiler is needed to transform the initial declarative logic program into its equivalent procedural one, to be programmed onto the microprocessor. This approach increases the complexity of the final implementation and reduces the overall system's performance. Conversely, hardware implementations that are only capable of supporting logic programs prevent their use in applications where logic programs need to be intertwined with traditional procedural ones. We exploit HW/SW codesign methods to present a microprocessor capable of supporting hybrid applications that use both programming approaches. We take advantage of the close relationship between attribute grammar (AG) evaluation and knowledge engineering methods to present a programmable hardware parser that performs logic derivations, and we combine it with an extension of a conventional RISC microprocessor that performs the unification process to report the success or failure of those derivations. The extended RISC microprocessor is still capable of executing conventional procedural programs, so hybrid applications can be implemented. The presented implementation is programmable, supports the execution of hybrid applications, increases the performance of logic derivations (experimental analysis yields an approximately 1000% increase in performance), and reduces the complexity of the final implemented code. The proposed hardware design is supported by a proposed extended C language called C-AG.
Keywords: Attribute Grammars, Logic Programming, RISC microprocessor.
635 The Impact of Regulatory Changes on the Development of Mobile Medical Apps
Abstract:
Mobile applications are being used to perform a wide variety of tasks in day-to-day life, ranging from checking email to controlling home heating. Application developers have recognized the potential to transform a smart device, i.e. a mobile phone or a tablet, into a medical device by means of a mobile medical application. When initially conceived, these mobile medical applications performed basic functions, e.g. a BMI calculator or access to reference material; however, increasing complexity now offers clinicians and patients a wide range of functionality. As this complexity and functionality increase, so too does the potential risk associated with using such an application. Examples include applications that provide the ability to inflate and deflate blood pressure cuffs, as well as applications that use patient-specific parameters to calculate dosage or create a dosage plan for radiation therapy. If an unapproved mobile medical application is marketed by a medical device organization, the organization faces significant penalties, such as an FDA warning letter to cease the prohibited activity, fines, and the possibility of a criminal conviction. Regulatory bodies have finalized guidance intended to help mobile application developers establish whether their applications are subject to regulatory scrutiny. However, these regulatory controls appear to contradict the approaches taken by mobile application developers, who generally work with short development cycles and very little documentation; as such, there is the potential for the regulations to stifle further improvements. The research presented in this paper details how, by adopting development techniques such as agile software development, mobile medical application developers can meet regulatory requirements whilst still fostering innovation.
Keywords: Medical, mobile, applications, software engineering, FDA, standards, regulations, agile.
634 The Role of People and Data in Complex Spatial-Related Long-Term Decisions: A Case Study of Capital Project Management Groups
Authors: Peter Boyes, Sarah Sharples, Paul Tennent, Gary Priestnall, Jeremy Morley
Abstract:
Significant long-term investment projects can involve complex decisions. These are often described as capital projects, and the factors that contribute to their complexity include budgets, motivating reasons for investment, stakeholder involvement, interdependent projects, and the delivery phases required. The complexity of these projects often requires management groups to be established involving stakeholder representatives; these teams are inherently multidisciplinary. This study uses two university campus capital projects as case studies for this type of management group. Because projects interact with the wider campus infrastructure and its users, decisions are made at varying spatial granularity throughout the project lifespan, and this spatial context brings complexity to the group decisions. Sensemaking is the process used to achieve group situational awareness of a complex situation, enabling the team to arrive at a consensus and make a decision. The purpose of this study is to understand the role of people and data in complex spatial-related long-term decision-making and sensemaking processes, and the paper aims to identify and present issues experienced in practical settings of these types of decisions. A series of exploratory semi-structured interviews with members of the two projects elicits an understanding of their operation. From two stages of thematic analysis, inductive and deductive, emergent themes are identified around the group structure, the data usage, and the decision making within these groups. When data were made available to the group, there were commonly issues with the perceived veracity and validity of the data presented; this impacted the group's ability to reach consensus and therefore to make decisions. Similarly, there were different responses to forecasted or modelled data, shaped by the experience and occupation of the individuals within the multidisciplinary management group. This paper provides an understanding of the further support required for team sensemaking and decision making in complex capital projects. It also discusses the barriers to effective decision making found in this setting and suggests opportunities to develop decision support systems for this type of strategic team decision-making process. Recommendations are made for further research into the sensemaking and decision-making processes of this complex spatial-related setting.
Keywords: Decision making, decisions under uncertainty, real decisions, sensemaking, spatial, team decision making.
633 Constraint Based Frequent Pattern Mining Technique for Solving GCS Problem
Authors: G. M. Karthik, Ramachandra V. Pujeri
Abstract:
The Generalized Center String (GCS) problem is a generalization of the Common Approximate Substring and Common Substring problems and is known to be NP-hard; the difficulty lies in the explosion of potential candidates. The longest center string is not known in advance for any particular biological gene process, and the sequences may not contain any motifs. GCS can be solved by frequent pattern-mining techniques and is known to be fixed-parameter tractable with respect to the input sequence length and symbol set size. Efficient methods known as Bpriori algorithms can solve GCS with reasonable time/space complexity; the Bpriori 2 and Bpriori 3-2 algorithms find center strings of any length together with the positions of all their instances in the input sequences. In this paper, we reduce the time/space complexity of the Bpriori algorithm with a Constraint Based Frequent Pattern mining (CBFP) technique, which integrates the idea of constraint-based mining with FP-tree mining. The CBFP mining technique solves the GCS problem not only for center strings of any length, but also for the positions of all their mutated copies in the input sequences. It constructs a TRIE-like structure, combined with an FP-tree, to represent the mutated copies of center strings of any length, along with constraints to restrain the growth of the consensus tree. The complexity analysis for the CBFP mining technique and the Bpriori algorithm is carried out for both the worst case and the average case, and the algorithm's correctness is verified against the Bpriori algorithm using artificial data.
Keywords: Constraint Based Mining, FP tree, Data mining, GCS problem, CBFP mining technique.
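As a point of reference, the sketch below shows only the exact-match core of such candidate counting, not the CBFP technique itself: every substring of every input sequence is recorded with the set of sequences that support it (a plain dictionary stands in for the trie), and the longest string supported by all sequences is reported. Handling of mutated copies is omitted.

```python
# Toy candidate counting for a center-string search (exact matches only;
# the mutated copies handled by CBFP are out of scope here).
from collections import defaultdict

def substring_support(sequences, max_len):
    support = defaultdict(set)   # substring -> ids of sequences containing it
    for sid, seq in enumerate(sequences):
        for i in range(len(seq)):
            for j in range(i + 1, min(i + max_len, len(seq)) + 1):
                support[seq[i:j]].add(sid)
    return support

seqs = ["ACGTACG", "TACGTT", "GGTACGA"]
support = substring_support(seqs, max_len=5)
common = [s for s, ids in support.items() if len(ids) == len(seqs)]
print(max(common, key=len))   # longest exact common substring: TACG
```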
632 Improved Rake Receiver Based on the Signal Sign Separation in Maximal Ratio Combining Technique for Ultra-Wideband Wireless Communication Systems
Authors: Rashid A. Fayadh, F. Malek, Hilal A. Fadhil, Norshafinash Saudin
Abstract:
When receiving at high data rates with many users in ultra-wideband (UWB) technology, multiple-user interference and inter-symbol interference are obstacles in multi-path reception, since rake receivers were designed to collect many resolvable paths, even more than a hundred. Rake receiver implementation structures of increasing complexity have been proposed to obtain better performance, i.e., a lower bit error rate (BER), in indoor or outdoor multi-path receivers, and several rake structures were proposed in the past to reduce the number of resolvable paths to be combined and estimated. To this aim, we suggest two improved rake receivers based on signal sign separation in the maximal ratio combiner (MRC), called the positive-negative MRC selective rake (P-N/MRC-S-rake) and positive-negative MRC partial rake (P-N/MRC-P-rake) receivers. These receivers were introduced to reduce complexity with a smaller number of fingers while improving performance with a low BER. Before the decision circuit, a comparator compares the positive quantity with the negative quantity to decide whether the transmitted bit is 1 or 0. The BER was obtained by MATLAB simulation in multi-path environments for impulse radio time-hopping binary phase shift keying (TH-BPSK) modulation, and the results were compared with those of conventional rake receivers.
Keywords: Selective and partial rake receivers, positive and negative signal separation, maximal ratio combiner, bit error rate performance.
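A hedged sketch of the sign-separation idea, under the simplifying assumption of real channel gains and BPSK signalling (an illustration, not the authors' receiver): each finger output is weighted as in MRC, the positive and negative weighted outputs are accumulated separately, and a comparator on the two sums makes the bit decision.

```python
# Illustrative P-N/MRC decision for one BPSK bit over 8 rake fingers.
import numpy as np

rng = np.random.default_rng(2)
bit = 1                                   # transmitted symbol: +1 or -1
h = rng.normal(size=8)                    # assumed real channel gains per finger
r = h * bit + 0.3 * rng.normal(size=8)    # noisy finger outputs

z = h * r                                 # MRC weighting (gain-conjugate product)
pos = z[z > 0].sum()                      # positive-quantity accumulator
neg = -z[z < 0].sum()                     # negative-quantity accumulator
print(1 if pos > neg else 0)              # comparator before the decision circuit
```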
631 RRNS-Convolutional Concatenated Code for OFDM-Based Wireless Communication with Direct Analog-to-Residue Converter
Authors: Shahana T. K., Babita R. Jose, K. Poulose Jacob, Sreela Sasi
Abstract:
The modern telecommunication industry demands higher-capacity networks with high data rates. Orthogonal frequency division multiplexing (OFDM) is a promising technique for high-data-rate wireless communications at reasonable complexity in wireless channels, and it has been adopted for many types of wireless systems, such as wireless local area networks (e.g. IEEE 802.11a) and digital audio/video broadcasting (DAB/DVB). The proposed research focuses on a concatenated coding scheme that improves the performance of OFDM-based wireless communications. It uses a Redundant Residue Number System (RRNS) code as the outer code and a convolutional code as the inner code. Here, a direct conversion of the analog signal to the residue domain is performed, using a sigma-delta based parallel analog-to-residue converter, to reduce the conversion complexity. The bit error rate (BER) performance of the proposed system under different channel conditions is investigated, including the effects of additive white Gaussian noise (AWGN), multipath delay spread, peak power clipping, and frame start synchronization error. The simulation results show that the proposed RRNS-Convolutional concatenated coding (RCCC) scheme provides a significant improvement in system performance by exploiting the inherent properties of RRNS.
Keywords: Analog-to-residue converter, Concatenated codes, OFDM, Redundant Residue Number System, Sigma-delta modulator, Wireless communication.
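For context, here is a small sketch of RRNS arithmetic itself (the moduli are illustrative; in the paper the residues come directly from the analog-to-residue converter): a value is encoded as its residues modulo a set of pairwise coprime moduli, the last of which are redundant, and decoded via the Chinese Remainder Theorem.

```python
# RRNS encode/decode sketch with illustrative moduli (Python 3.8+ for pow(a, -1, m)).
from math import prod

moduli = [7, 11, 13, 15, 16]       # last two act as redundant moduli
# information dynamic range is prod(moduli[:3]) = 1001

def to_residues(x, mods):
    return [x % m for m in mods]

def from_residues(res, mods):
    # classic Chinese Remainder Theorem reconstruction
    M = prod(mods)
    x = sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(res, mods))
    return x % M

res = to_residues(537, moduli)
print(res, from_residues(res[:3], moduli[:3]))   # residues, then 537 recovered
```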
630 Soft Real-Time Fuzzy Task Scheduling for Multiprocessor Systems
Authors: Mahdi Hamzeh, Sied Mehdi Fakhraie, Caro Lucas
Abstract:
All practical real-time scheduling algorithms for multiprocessor systems present a trade-off between computational complexity and performance. In real-time systems, tasks have to be performed both correctly and on time, and finding a minimal schedule in multiprocessor systems with real-time constraints is known to be NP-hard. Although optimal algorithms have been employed in uni-processor systems, they fail when applied to multiprocessor systems. Moreover, practical scheduling algorithms in real-time systems do not have deterministic response times, whereas deterministic timing behavior is an important parameter for system robustness analysis. The intrinsic uncertainty in dynamic real-time systems further increases the difficulty of the scheduling problem. To alleviate these difficulties, we propose a fuzzy scheduling approach to arrange real-time periodic and non-periodic tasks in multiprocessor systems. Static and dynamic optimal scheduling algorithms fail under non-critical overload; in contrast, our approach balances the task loads of the processors successfully while considering starvation prevention and fairness, so that higher-priority tasks have a higher running probability. A simulation was conducted to evaluate the performance of the proposed approach. Experimental results show that the proposed fuzzy scheduler creates feasible schedules for homogeneous and heterogeneous tasks and takes task priorities into account, which leads to higher system utilization and a lower deadline miss time. According to the results, it performs very close to the optimal schedule of uni-processor systems.
Keywords: Computational complexity, Deadline, Feasible scheduling, Fuzzy scheduling, Priority, Real-time multiprocessor systems, Robustness, System utilization.
629 A Novel Recursive Multiplierless Algorithm for 2-D DCT
Authors: V. K. Ananthashayana, Geetha K. S.
Abstract:
In this paper, a recursive algorithm for the computation of the 2-D DCT using Ramanujan numbers is proposed. With this algorithm, floating-point multiplication is completely eliminated, so the multiplierless algorithm can be implemented using shifts and additions only. The orthogonality of the recursive kernel is well maintained through matrix factorization to reduce the computational complexity. The inherent parallel structure yields simpler programming and hardware implementation, and the algorithm requires (3/2)N² log₂ N − N + 1 additions and N² log₂ N shifts, which is much less complex than other recent multiplierless algorithms.
Keywords: DCT, Multiplierless, Ramanujan Number, Recursive.
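The 2-D DCT being computed is separable, which is the structural property most fast schemes, including this one, exploit: a 1-D DCT over rows followed by one over columns. The floating-point reference below (using scipy; obviously not the multiplierless form) makes that structure explicit.

```python
# Reference row-column 2-D DCT; the paper's algorithm replaces the
# floating-point multiplications inside the 1-D transforms with shifts
# and additions derived from Ramanujan numbers.
import numpy as np
from scipy.fft import dct

x = np.random.default_rng(3).normal(size=(8, 8))
X = dct(dct(x, axis=0, norm="ortho"), axis=1, norm="ortho")  # separable 2-D DCT
```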
628 The Process of Crisis: Model of Its Development in the Organization
Authors: M. Mikušová
Abstract:
The main aim of this paper is to present a clear and comprehensive picture of the process of a crisis in the organization, which will help in better understanding its possible developments. To describe the sequence of individual steps and to indicate their causation and possible variants of development, a detailed flow diagram with verbal commentary is applied. For simplicity, the process of the crisis is observed in four basic phases, called: symptoms of the crisis, diagnosis, action, and prevention. The model highlights the complexity of the phenomenon of the crisis and the fact that its various phases are interwoven.
Keywords: Crisis, management, model, organization.
627 Exponentially Weighted Simultaneous Estimation of Several Quantiles
Authors: Valeriy Naumov, Olli Martikainen
Abstract:
In this paper we propose a new method for simultaneously generating multiple quantiles, corresponding to given probability levels, from data streams and massive data sets. This method provides a basis for the development of single-pass, low-storage quantile estimation algorithms, which differ in complexity, storage requirements, and accuracy. We demonstrate that such algorithms may perform well even for heavy-tailed data.
Keywords: Quantile estimation, data stream, heavy-tailed distribution, tail index.
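One member of this family of single-pass estimators can be sketched in a few lines (a generic stochastic-approximation variant, not necessarily the authors' exact update): a constant step size discounts old data exponentially while several quantile estimates are tracked at once.

```python
# Constant-step stochastic approximation: q[i] drifts until the fraction of
# samples below it matches probs[i], i.e. it tracks the probs[i]-quantile.
import numpy as np

def update(q, x, probs, step=0.05):
    for i, p in enumerate(probs):
        q[i] += step * (p - (x <= q[i]))   # up if sample above, down otherwise
    return q

rng = np.random.default_rng(4)
probs = [0.5, 0.9, 0.99]
q = np.zeros(len(probs))
for x in rng.standard_normal(100_000):     # simulated data stream
    update(q, x, probs)
print(q)   # roughly the N(0,1) quantiles 0.00, 1.28, 2.33
```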
626 Low Complexity, High Performance LDPC Codes Based on Defected Fullerene Graphs
Authors: Ashish Goswami, Rakesh Sharma
Abstract:
In this paper, LDPC codes based on defected fullerene graphs are generated, and the generated codes are found to be fast in encoding and better in terms of error performance on the AWGN channel.
Keywords: LDPC Codes, Fullerene Graphs, Defected Fullerene Graphs.
625 A Logic Approach to Database Dynamic Updating
Authors: Daniel Stamate
Abstract:
We introduce a logic-based framework for database updating under constraints. In our framework, the constraints are represented as an instantiated extended logic program. When performing an update, database consistency may be violated. We provide an approach to maintaining database consistency and study the conditions under which the maintenance process is deterministic. We show that the complexity of the computations and decision problems presented in our framework is in each case polynomial time.
Keywords: Databases, knowledge bases, constraints, updates, minimal change, consistency.
624 Influence of Shading on a BIPV System’s Performance in an Urban Context: Case Study of BIPV Systems of the Science Center of Complexity Building of the National Autonomous University of Mexico in Mexico City
Authors: Viridiana Edith Ardura Perea, José Luis Bermúdez Alcocer
Abstract:
The purpose of this paper is to establish the influence of shading on the performance of a Building Integrated Photovoltaic (BIPV) system in an urban context. The PV systems of the Science Center of Complexity (Centro de Ciencias de la Complejidad) building, located on the main campus of the National Autonomous University of Mexico (UNAM) in Mexico City, were taken as the case study. The PV systems are placed on the rooftop and on the south façade of the building. The south-façade PV system, operating as sunshades, consists of two strings: one at the ground floor and the other at the first floor. According to the building’s facility manager, the south-façade PV system generates 42% less electricity per kilowatt peak (kWp) installed than the one on the roof. The methods applied in this study were Solar Radiation Analysis (SRA) simulations, performed with the Insight 360 plug-in for Revit 2018®, and on-site measurements using specialized tools. The results of the SRA simulations showed that the shading cast by the first-floor PV system onto the ground-floor PV system decreases the latter's incident solar radiation by over 50%. The simulation outcome was compared with, and validated against, the data measured on site. In conclusion, the loss factor arising from the shading of the PVs is due to both the surroundings and the PV system’s own design: the deficient design of the south-façade BIPV system generates critical performance losses and decreases its profitability.
Keywords: Building integrated photovoltaics (BIPV) design, energy analysis software, shading losses, solar radiation analysis.
623 Fast Algorithm of Shot Cut Detection
Authors: Lenka Krulikovská, Jaroslav Polec, Tomáš Hirner
Abstract:
In this paper we present a novel method that reduces the computational complexity of abrupt cut detection. We propose a fast algorithm in which the similarity of frames within a defined step is evaluated instead of comparing successive frames. Based on the results of simulations on a large video collection, the proposed fast algorithm achieves an 80% reduction in the number of frame comparisons compared to currently used methods, without degradation of the shot cut detection accuracy.
Keywords: Abrupt cut, fast algorithm, shot cut detection, Pearson correlation coefficient.
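The step-based idea can be sketched directly (an illustration consistent with the abstract, with an assumed similarity threshold): frames a whole step apart are compared first, and only when their correlation drops is the step searched frame by frame.

```python
# Step-based abrupt cut detection using the Pearson correlation coefficient.
import numpy as np

def pearson(a, b):
    return np.corrcoef(a.ravel().astype(float), b.ravel().astype(float))[0, 1]

def detect_cuts(frames, step=10, threshold=0.6):
    cuts = []
    for start in range(0, len(frames) - step, step):
        if pearson(frames[start], frames[start + step]) < threshold:
            # dissimilar endpoints: locate the cut inside this step
            for i in range(start, start + step):
                if pearson(frames[i], frames[i + 1]) < threshold:
                    cuts.append(i + 1)
                    break
    return cuts

scene1 = [np.arange(16).reshape(4, 4) + 10] * 15     # synthetic scene A
scene2 = [200 - np.arange(16).reshape(4, 4)] * 15    # synthetic scene B
print(detect_cuts(scene1 + scene2))                  # reports the cut at frame 15
```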
622 Quantifying Uncertainties in an Archetype-Based Building Stock Energy Model by Use of Individual Building Models
Authors: Morten Brøgger, Kim Wittchen
Abstract:
Focus on reducing energy consumption in existing buildings at large scale, e.g. in cities or countries, has been increasing in recent years. In order to reduce energy consumption in existing buildings, political incentive schemes are put in place and large-scale investments are made by utility companies. Prioritising these investments requires a comprehensive overview of the energy consumption in the existing building stock, as well as of the potential energy savings. However, a building stock comprises thousands of buildings with different characteristics, making it difficult to model energy consumption accurately. Moreover, the complexity of the building stock makes it difficult to convey model results to policymakers and other stakeholders. In order to manage this complexity, building archetypes are often employed in building stock energy models (BSEMs). Building archetypes are formed by segmenting the building stock according to specific characteristics. Segmenting by building type and building age is common, among other reasons because this information is often easily available, and this segmentation makes it easy to convey results to non-experts. However, using a single archetypical building to represent all buildings in a segment of the building stock is associated with a loss of detail: thermal characteristics are aggregated, while other characteristics that could affect the energy efficiency of a building are disregarded. Thus, a simplified representation of the building stock could come at the expense of the accuracy of the model. The present study evaluates the accuracy of a conventional archetype-based BSEM that segments the building stock according to building type and age. The accuracy is evaluated in terms of the archetypes' ability to accurately emulate the average energy demands of the buildings they are meant to represent. This is done for the buildings' energy demands as a whole as well as for relevant sub-demands, both evaluated in relation to the type and the age of the building. This should provide researchers who use archetypes in BSEMs with an indication of the expected accuracy of the conventional archetype model, as well as of the accuracy lost in specific parts of the calculation due to the use of the archetype method.
Keywords: Building stock energy modelling, energy-savings, archetype.
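The conventional archetype model under evaluation amounts to replacing every building in a type/age segment with the segment average, as in the toy sketch below (column names and values are illustrative, not the authors' data model).

```python
# Archetype = average demand of a (type, age) segment; the per-building
# residual is the detail lost by the archetype simplification.
import pandas as pd

buildings = pd.DataFrame({
    "type":   ["house", "house", "flat", "flat", "house"],
    "age":    ["pre-1960", "post-1960", "pre-1960", "pre-1960", "pre-1960"],
    "demand": [180.0, 120.0, 95.0, 105.0, 210.0],   # kWh/m2 per year (made up)
})

buildings["archetype"] = buildings.groupby(["type", "age"])["demand"].transform("mean")
print(buildings.groupby(["type", "age"])["demand"].mean())
print("mean absolute error:", (buildings["demand"] - buildings["archetype"]).abs().mean())
```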
621 Dual Construction of Stern-based Signature Scheme
Authors: Pierre-Louis Cayrel, Sidi Mohamed El Yousfi Alaoui
Abstract:
In this paper, we propose a dual version of the first threshold ring signature scheme based on error-correcting codes, proposed by Aguilar et al. in [1]. Our scheme uses an improvement of the Véron zero-knowledge identification scheme, which provides smaller public and private key sizes and better computational complexity than the Stern scheme. This scheme is secure in the random oracle model.
Keywords: Stern algorithm, Véron algorithm, threshold ring signature, post-quantum cryptography.
620 Self-Tuning Method of Fuzzy System: An Application on Greenhouse Process
Authors: M. Massour El Aoud, M. Franceschi, M. Maher
Abstract:
The approach proposed here is oriented towards fuzzy systems for the analysis and synthesis of intelligent climate controllers. The simulation of the internal climate of the greenhouse is achieved with a linear model whose coefficients are obtained by identification. The use of fuzzy logic controllers for the regulation of climate variables represents a powerful way to minimize the energy cost. Strategies of reduction and optimization are adopted to facilitate the tuning and to reduce the complexity of the controller.
Keywords: Greenhouse, fuzzy logic, optimization, gradient descent.
619 Organization of the Purchasing Function for Innovation
Authors: Jasna Prester, Ivana Rašić Bakarić, Božidar Matijević
Abstract:
Innovations not only contribute to the competitiveness of the company but also have positive effects on revenues: on average, product innovations account for 14 percent of companies’ sales. Innovation management has changed substantially during the last decade because of a growing reliance on external partners. As a consequence, a new task arises for purchasing, as firms need to understand which suppliers actually have high potential to contribute to the innovativeness of the firm and which do not. Proper organization of the purchasing function is important, since the majority of manufacturing companies deal with substantial material costs, which pass through the purchasing function. In the past, the purchasing function was largely seen as a transaction-oriented, clerical function, but today purchasing is the interface with supply chain partners contributing to innovations, be they product or process innovations. Therefore, the purchasing function has to be organized differently to enable the firm's innovation potential. However, innovations are inherently risky. There are behavioral risks (that some partner will take advantage of the other party), technological risks, in terms of the complexity of the products, the manufacturing processes, and the incoming materials, and finally market risks, which ultimately determine the value of the innovation. These risks are investigated in this work; specifically, the technological risks, which concern the complexity of products and processes, are investigated more thoroughly. Buying components embodying such leading-edge technologies necessitates careful investigation of technical features and is therefore usually conducted by a team of experts. It is therefore hypothesized that the higher the technological risk, the higher the centralization of the purchasing function as an interface with other supply chain members. The main contribution of this research lies in the fact that the analysis was performed on a large data set of 1493 companies from 25 countries, collected in the GMRG 4 survey. Most analyses of the purchasing function are done by case-study analysis of innovative firms; this study therefore contributes empirical evaluations that can be generalized.
Keywords: Purchasing function organization, innovation, technological risk, GMRG 4 survey.
618 Analysis of Direct Current Motor in LabVIEW
Authors: E. Ramprasath, P. Manojkumar, P. Veena
Abstract:
DC motors were widely used in past centuries and are proudly known as the workhorse of industrial systems; they held this position until the invention of the AC induction motor, which brought a huge revolution to industry. Since then, the use of DC machines has decreased due to factors such as reliability, robustness, and complexity, and they lost their fame due to their losses. In this paper a new methodology is proposed for constructing a DC motor through simulation in LabVIEW, to get an idea of its real-time performance and of whether a change in a parameter might yield a larger improvement in losses and reliability.
Keywords: Direct Current motor, LabVIEW software, modelling and analysis, overall characteristics of Direct Current motor.
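As a rough companion to such an analysis, the standard DC motor equations can be simulated in a few lines (sketched here in Python rather than LabVIEW, with illustrative parameter values).

```python
# Euler integration of the armature (electrical) and shaft (mechanical)
# equations of a permanent-magnet DC motor; parameters are illustrative.
V, R, L = 24.0, 1.0, 0.05      # supply [V], resistance [ohm], inductance [H]
K, J, B = 0.05, 1e-3, 1e-4     # motor constant, inertia, viscous friction

i, w, dt = 0.0, 0.0, 1e-4      # current [A], speed [rad/s], step [s]
for _ in range(int(2.0 / dt)): # simulate 2 s, enough to settle
    di = (V - R * i - K * w) / L     # electrical: V = Ri + L di/dt + K w
    dw = (K * i - B * w) / J         # mechanical: J dw/dt = K i - B w
    i, w = i + dt * di, w + dt * dw

print(f"steady state: {i:.2f} A, {w:.0f} rad/s")
```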
617 Synchronization of a Perturbed Satellite Attitude Motion
Authors: Sadaoui Djaouida
Abstract:
In this paper, a predictive control method is proposed to control the synchronization of the attitude motion of two perturbed satellites. Based on delayed feedback control of continuous-time systems combined with the prediction-based method of discrete-time systems, this approach needs only a single controller to realize synchronization, which is of considerable significance in reducing the cost and complexity of controller implementation.
Keywords: Predictive control, Synchronization, Satellite attitude.
616 High-Speed Pipeline Implementation of Radix-2 DIF Algorithm
Authors: Christos Meletis, Paul Bougas, George Economakos, Paraskevas Kalivas, Kiamal Pekmestzi
Abstract:
In this paper, we propose a new architecture for the implementation of the N-point Fast Fourier Transform (FFT), based on the Radix-2 Decimation-in-Frequency algorithm. The architecture is based on a pipeline circuit that can process a stream of samples and produce two FFT transform samples every clock cycle. Compared to existing implementations, the proposed architecture achieves double the processing speed with the same circuit complexity.
Keywords: Digital signal processing, systolic circuits, FFT algorithm.
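For reference, the radix-2 DIF recursion the pipeline implements can be written compactly in software: each stage forms the butterflies first, then recurses on the two half-length sequences, and the even/odd interleaving restores natural output order as the recursion unwinds.

```python
# Recursive radix-2 decimation-in-frequency FFT (input length a power of two).
import numpy as np

def fft_dif(x):
    n = len(x)
    if n == 1:
        return x
    half = n // 2
    tw = np.exp(-2j * np.pi * np.arange(half) / n)   # twiddle factors
    a = x[:half] + x[half:]                          # butterfly sums -> even bins
    b = (x[:half] - x[half:]) * tw                   # differences    -> odd bins
    out = np.empty(n, dtype=complex)
    out[0::2] = fft_dif(a)
    out[1::2] = fft_dif(b)
    return out

x = np.random.default_rng(5).normal(size=8) + 0j
print(np.allclose(fft_dif(x), np.fft.fft(x)))        # True
```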
615 A Novel FFT-Based Frequency Offset Estimator for OFDM Systems
Authors: Mahdi Masoumi, Mehrdad Ardebilipoor, Seyed Aidin Bassam
Abstract:
This paper proposes a novel frequency offset (FO) estimator for orthogonal frequency division multiplexing (OFDM). Simplicity is the most significant feature of this algorithm, and it can be repeated to achieve acceptable accuracy. The fractional and integer parts of the FO are also estimated jointly, using the same algorithm. To do so, instead of using conventional algorithms that usually rely on a correlation function, we use the DFT of the received signal. Complexity is therefore reduced, and the synchronization procedure can be carried out by the same hardware that is used to demodulate the OFDM symbol. Finally, computer simulation shows that the accuracy of this method is better than that of other conventional methods.
Keywords: DFT, Estimator, Frequency Offset, IEEE802.11a, OFDM.
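As a hedged illustration of the general principle (not the paper's estimator), the integer part of the offset can be read off the DFT directly: an integer subcarrier offset cyclically shifts the received spectrum, so matching shifted copies of the received DFT against a known pilot spectrum recovers it without any time-domain correlation.

```python
# Integer CFO recovery from the DFT of one received pilot symbol.
import numpy as np

rng = np.random.default_rng(6)
N, true_offset = 64, 3
pilots = rng.choice([-1.0, 1.0], size=N)            # known BPSK pilot spectrum
tx = np.fft.ifft(pilots)                            # OFDM modulation (IDFT)
cfo = np.exp(2j * np.pi * true_offset * np.arange(N) / N)
rx = tx * cfo + 0.02 * rng.normal(size=N)           # offset by 3 bins, plus noise

Y = np.fft.fft(rx)                                  # received spectrum
shifts = range(-8, 9)
scores = [abs(np.vdot(np.roll(Y, -s), pilots)) for s in shifts]
print(shifts[int(np.argmax(scores))])               # -> 3
```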
614 A Knowledge Engineering Workshop: Application for Car Choice
Authors: Touahria Mohamed, Khababa Abdallah, Frécon Louis
Abstract:
This paper proposes a declarative language for knowledge representation (Ibn Rochd) and its exploitation environment (DeGSE). The DeGSE system was designed and developed to facilitate the writing of Ibn Rochd applications. The system was tested on several knowledge bases of ascending complexity, culminating in a system for recognizing a plant or a tree, and in advisors for purchasing a car, for pedagogical and academic guidance, or for bank savings and credit. Finally, the limits of the language and research perspectives are stated.
Keywords: Knowledge representation, declarative language, Ibn Rochd, DeGSE, facets, cognitive approach.
613 A New True RMS-to-DC Converter in CMOS Technology
Authors: H. Asiaban, E. Farshidi
Abstract:
This paper presents a new true RMS-to-DC converter circuit based on a square-root-domain squarer/divider. The circuit is designed by employing an up-down translinear loop and MOSFET transistors that operate in the strong-inversion saturation region. The converter offers the advantages of two-quadrant input current, low circuit complexity, low supply voltage (1.2 V), and immunity from the body effect. The circuit has been simulated in HSPICE, and the simulation results conform to the theoretical analysis and show the benefits of the proposed circuit.
Keywords: Current-mode, squarer/divider, low-pass filter, converter, translinear loop, RMS-to-DC.
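The principle of the squarer/divider loop can be mimicked in discrete time (a behavioral sketch only, not the CMOS circuit): the output y feeds back into x²/y, and low-pass filtering forces y to settle at the true RMS value of the input.

```python
# Behavioral model of the implicit RMS loop: at equilibrium the filtered
# value of x**2 / y equals y, hence y**2 = mean(x**2) and y = RMS(x).
import numpy as np

fs, f = 100_000, 50
t = np.arange(int(0.5 * fs)) / fs
x = 2.0 * np.sin(2 * np.pi * f * t)     # amplitude 2 -> RMS = sqrt(2)

y, alpha = 1.0, 1e-3                    # initial estimate, LPF coefficient
for sample in x:
    y += alpha * (sample**2 / y - y)    # first-order low-pass of x^2/y
print(y)                                # close to 1.414
```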
612 Visualization of Searching and Sorting Algorithms
Authors: Bremananth R., Radhika V., Thenmozhi S.
Abstract:
Sequences of the execution of algorithms, presented in an interactive manner using multimedia tools, are employed in this paper. This helps learners grasp the fundamentals of algorithms, such as searching and sorting methods, in a simple manner. Visualization gains more attention than theoretical study, and it is an easy way of learning. We propose methods for presenting the runtime sequence of each algorithm in an interactive way and aim to overcome the drawbacks of existing character-based systems. The system illustrates each and every step clearly using text and animation. Comparisons of time complexity have been carried out, and the results show that our approach provides a better understanding of the algorithms.
Keywords: Algorithms, Searching, Sorting, Visualization.
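The runtime-sequence idea reduces, in code, to instrumenting an algorithm so that it yields its state after every operation of interest; those states then drive the text or animation layer, as in this minimal sketch.

```python
# Instrumented bubble sort: yields a snapshot after every swap so a
# front-end can animate the algorithm step by step.
def bubble_sort_steps(a):
    a = list(a)
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                yield list(a), j               # state + position of the swap

for state, j in bubble_sort_steps([5, 2, 4, 1]):
    print(state, f"(swapped positions {j} and {j + 1})")
```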