Search results for: labeling complexity.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 826

646 Design of Low Power and High Speed Digital IIR Filter in 45nm with Optimized CSA for Digital Signal Processing Applications

Authors: G. Ramana Murthy, C. Senthilpari, P. Velrajkumar, Lim Tien Sze

Abstract:

In this paper, a design methodology for implementing a low-power and high-speed 2nd order recursive digital Infinite Impulse Response (IIR) filter is proposed. Since IIR filters require a large number of constant multiplications, the proposed method replaces the constant multiplications with addition/subtraction and shift operations. A new 6T adder cell is used as the Carry-Save Adder (CSA) to implement the addition/subtraction operations in the recursive section of the IIR filter, reducing the propagation delay. Furthermore, high-level algorithms for optimizing the number of CSA blocks are used to reduce the complexity of the IIR filter. The DSCH3 tool is used to generate the schematic of the proposed 6T CSA based shift-adds architecture, and the design is analyzed with the Microwind CAD tool to synthesize low-complexity and high-speed IIR filters. The proposed design outperforms the MUX-12T and MCIT-7T based CSA adder filter designs in terms of power, propagation delay, area, and throughput. The experimental results show that the proposed 6T based design method finds better IIR filter designs, in terms of power and delay, than those obtained with efficient general multipliers.

Keywords: CSA Full Adder, Delay unit, IIR filter, Low-Power, PDP, Parametric Analysis, Propagation Delay, Throughput, VLSI.
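As a minimal illustration of the shift-adds principle underlying the design, a constant multiplication can be decomposed into shift and add operations over the coefficient's binary expansion. The Python sketch below stands in for the hardware datapath; the integer (fixed-point) coefficient scaling is an assumption:

```python
def shift_add_multiply(x: int, coeff: int) -> int:
    """Multiply x by a fixed integer coefficient using only shifts and adds,
    the principle behind multiplierless IIR realizations. (Canonical signed
    digit recoding would reduce the adder count further.)"""
    result, bit = 0, 0
    while coeff:
        if coeff & 1:
            result += x << bit   # one shift plus one addition per set bit
        coeff >>= 1
        bit += 1
    return result

# Coefficients are scaled to integers (e.g. Q15 fixed point) so that every
# coefficient product in the recursive section becomes shifts and adds.
assert shift_add_multiply(12345, 23) == 12345 * 23
```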

645 A New Variant of RC4 Stream Cipher

Authors: Lae Lae Khine

Abstract:

RC4 was used as the encryption algorithm in the WEP (Wired Equivalent Privacy) protocol, which is standardized for 802.11 wireless networks. A number of attacks followed, indicating certain weaknesses in the design. In this paper, we propose a new variant of the RC4 stream cipher. The new version of the cipher not only appears to be more secure, but its keystream also has a large period, large complexity, and good statistical properties.

Keywords: Cryptography, New variant, RC4, Stream Cipher.
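For reference, the baseline RC4 consists of a key-scheduling pass (KSA) followed by keystream generation (PRGA). The abstract does not specify the variant's modifications, so only the standard cipher is sketched here:

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    """Standard RC4: KSA then PRGA. The proposed variant modifies this
    baseline; its exact changes are not given in the abstract."""
    # Key-Scheduling Algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-Random Generation Algorithm (PRGA)
    out, i, j = bytearray(), 0, 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# Encryption is a simple XOR of the plaintext with the keystream.
ciphertext = bytes(p ^ k for p, k in zip(b"attack at dawn",
                                         rc4_keystream(b"Key", 14)))
```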

644 A Novel VLSI Architecture for Image Compression Model Using Low power Discrete Cosine Transform

Authors: Vijaya Prakash A. M., K. S. Gurumurthy

Abstract:

In image processing, image compression can improve the performance of digital systems by reducing the cost and time of image storage and transmission without significant reduction in image quality. This paper describes a low-complexity hardware architecture of the Discrete Cosine Transform (DCT) for image compression [6]. In this DCT architecture, common computations are identified and shared to remove redundant computations in the DCT matrix operation. Vector processing is used to implement the DCT, and this reduction in the computational complexity of the 2D DCT reduces power consumption. The 2D DCT is performed on an 8x8 matrix using two 1-dimensional DCT blocks and a transposition memory [7]. The inverse discrete cosine transform (IDCT) is performed to obtain the image matrix and reconstruct the original image. The proposed image compression algorithm is implemented in MATLAB. The VLSI design of the architecture is implemented in Verilog HDL. The proposed hardware architecture for image compression employing the DCT was synthesized using RTL Compiler and mapped to 180nm standard cells. Simulation is done using ModelSim, and the simulation results from MATLAB and Verilog HDL are compared. Detailed power and area analysis was done using RTL Compiler from Cadence. Power consumption of the DCT core is reduced to 1.027 mW with minimum area [1].

Keywords: Discrete Cosine Transform (DCT), Inverse Discrete Cosine Transform (IDCT), Joint Photographic Expert Group (JPEG), Low Power Design, Very Large Scale Integration (VLSI).
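The row-column decomposition described above, two 1-D DCT passes separated by a transposition, can be expressed as a short functional reference model (a sketch of the computation, not the hardware datapath):

```python
import numpy as np
from scipy.fft import dct, idct

def dct2(block):
    """8x8 2-D DCT as two 1-D DCT passes with a transposition in between,
    mirroring the two 1-D blocks plus transposition memory."""
    return dct(dct(block.T, norm="ortho").T, norm="ortho")

def idct2(coeffs):
    """Inverse 2-D DCT, used to reconstruct the image block."""
    return idct(idct(coeffs.T, norm="ortho").T, norm="ortho")

block = np.arange(64, dtype=float).reshape(8, 8)
print(np.allclose(idct2(dct2(block)), block))  # True: lossless round trip
```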

643 Introducing a Platform for Encryption Algorithms

Authors: Ahmad Habibizad Navin, Yasaman Hashemi, Omid Mirmotahari

Abstract:

In this paper, we introduce a novel platform for encryption algorithms that modifies its keys and random number generators step by step during encryption. Owing to the complexity of the proposed algorithm, it is safer than comparable methods.

Keywords: Decryption, encryption, algorithm, security.

642 A Multipurpose Audio Watermarking Algorithm Based on Vector Quantization in DCT Domain

Authors: Jixin Liu, Zheming Lu

Abstract:

In this paper, a novel multipurpose audio watermarking algorithm is proposed, based on Vector Quantization (VQ) in the Discrete Cosine Transform (DCT) domain, using codeword labeling and an index-bit constrained method. The algorithm fulfills the requirements of both copyright protection and content integrity authentication for multimedia artworks at the same time. The robust watermark is embedded in the middle-frequency coefficients of the DCT during the labeled-codeword vector quantization procedure. The fragile watermark is embedded into the indices of the high-frequency DCT coefficients using the constrained index vector quantization method, for the purpose of integrity authentication of the original audio signals. Both the robust and the fragile watermarks can be extracted without the original audio signals, and the simulation results show that our algorithm is effective with regard to the transparency, robustness, and authentication requirements.

Keywords: Copyright Protection, Discrete Cosine Transform, Integrity Authentication, Multipurpose Audio Watermarking, Vector Quantization.
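The labeled-codeword idea behind the robust watermark can be sketched as follows: the VQ codebook is split into two labeled halves, and each watermark bit selects which half quantizes the current mid-frequency DCT vector, so the chosen index itself carries the bit. The codebook contents and sizes below are illustrative stand-ins, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy codebook over mid-frequency DCT coefficient vectors, with each
# codeword labeled 0 or 1 (values are stand-ins).
codebook = rng.standard_normal((16, 4))
labels = np.array([0, 1] * 8)

def embed_bit(vector, bit):
    """Labeled-codeword VQ: quantize with the sub-codebook whose label
    equals the watermark bit, so the index itself carries the bit."""
    candidates = np.flatnonzero(labels == bit)
    dists = np.linalg.norm(codebook[candidates] - vector, axis=1)
    return candidates[np.argmin(dists)]

def extract_bit(index):
    """Blind extraction: only the transmitted index is needed."""
    return int(labels[index])

v = rng.standard_normal(4)   # one mid-frequency DCT vector
idx = embed_bit(v, bit=1)
print(extract_bit(idx))      # -> 1, recovered without the original audio
```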

641 Complexity of Multivalued Maps

Authors: David Sherwell, Vivien Visaya

Abstract:

We consider the topological entropy of maps that, in general, cannot be described by one-dimensional dynamics. In particular, we show that for a multivalued map F generated by single-valued maps, the topological entropy of any of the single-valued maps bounds the topological entropy of F from below.

Keywords: Multivalued maps, Topological entropy, Selectors.
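In symbols, assuming the multivalued map F is generated by single-valued maps (selectors) f_1, ..., f_k, the stated lower bound reads:

```latex
h_{\mathrm{top}}(f_i) \;\le\; h_{\mathrm{top}}(F), \quad i = 1, \dots, k,
\qquad \text{hence} \qquad
\max_{1 \le i \le k} h_{\mathrm{top}}(f_i) \;\le\; h_{\mathrm{top}}(F).
```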

640 The Model of the Genre of Literary Portrait in Modern Literary Criticism

Authors: B. K. Bazylova, Zh. D. Suleimenova

Abstract:

In modern literary criticism, the problem of genre is a matter of ongoing discussion. Genre is a phenomenon located at the intersection of the synchronous and diachronic processes in the development of literature, and this accounts for the complexity of its solutions. It defines the point of contact between literary works and the literary process.

Keywords: Literary criticism, literary portrait.

639 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: G. Candel, D. Naccache

Abstract:

t-SNE is an embedding method widely used by the data science community. It supports two main tasks: displaying results by coloring items according to their class or feature value, and forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, in which not all neighbors in high-dimensional space can be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, in which a cluster's area is proportional to its size in number of items, and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from the high- to the low-dimensional space is computed but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and it becomes infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built from a subset of the data. While this approach is highly scalable, points could be mapped to the same exact position, making them indistinguishable, and this type of model is unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once with each newly obtained embedding, and the successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. The method has the same per-embedding complexity as t-SNE, and the memory requirement is only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing us to observe the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, supporting the monitoring of high-dimensional datasets' dynamics.

Keywords: Concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning.
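The complexity claim follows from replacing one joint embedding of n points by k successive embeddings of n/k points each:

```latex
\underbrace{O(n^{2})}_{\text{joint embedding}}
\;\longrightarrow\;
k \cdot O\!\left((n/k)^{2}\right) = O\!\left(n^{2}/k\right),
\qquad
\text{memory: } n^{2} \;\longrightarrow\; 2\,(n/k)^{2}.
```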

638 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases

Authors: Mohammad A. Bani-Khaled

Abstract:

In this work, we use the discrete Proper Orthogonal Decomposition transform to characterize the properties of coupled dynamics in thin-walled beams, exploiting numerical databases obtained from finite element simulations. The outcomes will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams are in widespread use in modern engineering applications, in both large-scale structures (aeronautical structures) and nano-structures (nano-tubes); therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to involve computational mechanics to simulate the dynamics numerically. With numerical computational techniques, it is not necessary to oversimplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space, and these numerical databases contain information on the properties of the coupled dynamics. Extracting the system's dynamic properties and the strength of coupling among the various fields of the motion requires processing techniques. The time Proper Orthogonal Decomposition transform is a powerful tool for processing such databases. It will be used to study the coupled dynamics of thin-walled basic structures, which are ideal for forming the basis of a systematic study of coupled dynamics in structures of complex geometry.

Keywords: Coupled dynamics, geometric complexity, Proper Orthogonal Decomposition (POD), thin-walled beams.
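A minimal sketch of the discrete POD step on such a numerical database, with a low-rank stand-in for the finite element output (all sizes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Snapshot matrix from a numerical database: each column is the field at one
# time step (e.g. stacked bending and torsional DOFs from an FE simulation).
# Here a low-rank stand-in replaces real FE output.
modes_true = rng.standard_normal((200, 3))
amplitudes = rng.standard_normal((3, 50))
snapshots = modes_true @ amplitudes + 0.05 * rng.standard_normal((200, 50))

# Subtract the temporal mean so the POD modes describe fluctuations about it.
fluct = snapshots - snapshots.mean(axis=1, keepdims=True)

# Discrete POD via the thin SVD: columns of U are spatial modes and s**2
# ranks their energy content; coupling strength shows up in how a single
# mode spans several physical field components.
U, s, _ = np.linalg.svd(fluct, full_matrices=False)
energy = s**2 / np.sum(s**2)
print("energy captured by the first 3 modes:", energy[:3].sum())
```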

637 Community Perceptions and Attitudes Regarding Wildlife Crime in South Africa

Authors: Louiza C. Duncker, Duarte Gonçalves

Abstract:

Wildlife crime is a complex problem with many interconnected facets, which are generally responded to in parts or fragments in an effort to "break down" the complexity into manageable components. However, fragmentation increases complexity, as coherence and cooperation become diluted. A whole-of-society approach has been developed to find a common goal and an integrated approach to preventing wildlife crime. As part of this development, research was conducted in rural communities adjacent to conservation areas in South Africa to define and comprehend the challenges they face, and to understand their perceptions of wildlife crime. The results showed that the perceptions of community members varied: most were in favor of conservation and of protecting rhinos, but only if they derived adequate benefit from it. Regardless of gender, income level, education level, or access to services, conservation was perceived as both good and bad by the same people. Even though people in the communities are poor, a willingness to stop rhino poaching does exist amongst them, but their perception that parks do not care about people triggered an unwillingness to stop, prevent, or report poaching. Understanding the nuances, the history, the interests and values of community members, and the drivers behind poaching mind-sets (intrinsic or driven by transnational organized crime) is imperative to creating sustainable and resilient communities on multiple levels that make a substantial positive impact on people's lives while also conserving wildlife for posterity.

Keywords: Conservation, community perceptions, wildlife crime, rhino poaching, interest and value creation, whole-of-society approach.

636 An Embedded System for Artificial Intelligence Applications

Authors: Ioannis P. Panagopoulos, Christos C. Pavlatos, George K. Papakonstantinou

Abstract:

Conventional approaches to implementing logic programming applications on embedded systems are solely of a software nature. As a consequence, a compiler is needed to transform the initial declarative logic program into its equivalent procedural one, to be programmed onto the microprocessor. This approach increases the complexity of the final implementation and reduces the overall system's performance. Conversely, hardware implementations that are only capable of supporting logic programs prevent their use in applications where logic programs need to be intertwined with traditional procedural ones. We exploit HW/SW codesign methods to present a microprocessor capable of supporting hybrid applications using both programming approaches. We take advantage of the close relationship between attribute grammar (AG) evaluation and knowledge engineering methods to present a programmable hardware parser that performs logic derivations, and combine it with an extension of a conventional RISC microprocessor that performs the unification process to report the success or failure of those derivations. The extended RISC microprocessor is still capable of executing conventional procedural programs, so hybrid applications can be implemented. The presented implementation is programmable, supports the execution of hybrid applications, increases the performance of logic derivations (experimental analysis yields an approximately 1000% increase in performance), and reduces the complexity of the final implemented code. The proposed hardware design is supported by a proposed extended C language called C-AG.

Keywords: Attribute Grammars, Logic Programming, RISC microprocessor.

635 The Impact of Regulatory Changes on the Development of Mobile Medical Apps

Authors: M. McHugh, D. Lillis

Abstract:

Mobile applications are used to perform a wide variety of tasks in day-to-day life, ranging from checking email to controlling home heating. Application developers have recognized the potential to transform a smart device, i.e. a mobile phone or a tablet, into a medical device by using a mobile medical application. When initially conceived, these mobile medical applications performed basic functions, e.g. BMI calculation or access to reference material; however, increasing complexity now offers clinicians and patients a wide range of functionality. As this complexity and functionality increases, so too does the potential risk associated with using such an application. Examples include applications that inflate and deflate blood pressure cuffs, as well as applications that use patient-specific parameters to calculate dosage or create a dosage plan for radiation therapy. If an unapproved mobile medical application is marketed by a medical device organization, the organization faces significant penalties, such as an FDA warning letter to cease the prohibited activity, fines, and the possibility of criminal conviction. Regulatory bodies have finalized guidance intended to help mobile application developers establish whether their applications are subject to regulatory scrutiny. However, regulatory controls appear to contradict the approaches taken by mobile application developers, who generally work with short development cycles and very little documentation; as such, there is the potential for these regulations to stifle further improvement. The research presented in this paper details how, by adopting development techniques such as agile software development, mobile medical application developers can meet regulatory requirements whilst still fostering innovation.

Keywords: Medical, mobile, applications, software engineering, FDA, standards, regulations, agile.

634 The Role of People and Data in Complex Spatial-Related Long-Term Decisions: A Case Study of Capital Project Management Groups

Authors: Peter Boyes, Sarah Sharples, Paul Tennent, Gary Priestnall, Jeremy Morley

Abstract:

Significant long-term investment projects can involve complex decisions. These are often described as capital projects, and the factors that contribute to their complexity include budgets, motivating reasons for investment, stakeholder involvement, interdependent projects, and the delivery phases required. The complexity of these projects often requires management groups to be established involving stakeholder representatives; these teams are inherently multidisciplinary. This study uses two university campus capital projects as case studies for this type of management group. Because the projects interact with wider campus infrastructure and users, decisions are made at varying spatial granularity throughout the project lifespan, and this spatial context adds complexity to the group decisions. Sensemaking is the process used to achieve group situational awareness of a complex situation, enabling the team to arrive at a consensus and make a decision. The purpose of this study is to understand the role of people and data in complex spatial-related long-term decision and sensemaking processes, and the paper aims to identify and present issues experienced in practical settings of these types of decision. A series of exploratory semi-structured interviews with members of the two projects elicits an understanding of their operation. From two stages of thematic analysis, inductive and deductive, emergent themes are identified around the group structure, the data usage, and the decision making within these groups. When data were made available to the group, there were commonly issues with the perceived veracity and validity of the data presented; this impacted the ability of the group to reach consensus and therefore to make a decision. Similarly, there were different responses to forecasted or modelled data, shaped by the experience and occupation of the individuals within the multidisciplinary management group. This paper provides an understanding of the further support required for team sensemaking and decision making in complex capital projects. It also discusses the barriers to effective decision making found in this setting and suggests opportunities to develop decision support systems for this type of strategic team decision-making process. Recommendations are made for further research into sensemaking and decision making in this complex spatial-related setting.

Keywords: Decision making, decisions under uncertainty, real decisions, sensemaking, spatial, team decision making.

633 Constraint Based Frequent Pattern Mining Technique for Solving GCS Problem

Authors: G. M. Karthik, Ramachandra V. Pujeri

Abstract:

The Generalized Center String (GCS) problem generalizes the Common Approximate Substring and Common Substring problems. GCS is known to be NP-hard; the difficulty lies in the explosion of potential candidates, since the longest center string must be found without knowing in advance which sequences contain the motifs in a particular biological gene process. GCS can be solved by frequent pattern-mining techniques and is known to be fixed-parameter tractable with respect to the input sequence length and symbol set size. Efficient methods known as Bpriori algorithms can solve GCS with reasonable time/space complexity; the Bpriori 2 and Bpriori 3-2 algorithms find center strings of any length, together with the positions of all their instances in the input sequences. In this paper, we reduce the time/space complexity of the Bpriori algorithm with a Constraint Based Frequent Pattern mining (CBFP) technique, which integrates the ideas of constraint-based mining and FP-tree mining. The CBFP mining technique not only solves the GCS problem for center strings of any length, but also reports the positions of all their mutated copies in the input sequences. It constructs a TRIE-like FP-tree to represent the mutated copies of center strings of any length, with constraints that restrain the growth of the consensus tree. The complexity of the CBFP mining technique and of the Bpriori algorithm is analyzed for the worst and average cases, and the correctness of the algorithm is demonstrated against the Bpriori algorithm on artificial data.

Keywords: Constraint Based Mining, FP tree, Data mining, GCS problem, CBFP mining technique.
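For orientation, a brute-force baseline for the underlying center-string search is sketched below; the exponential candidate explosion it suffers from is exactly what the CBFP technique is designed to prune. The inputs, alphabet, and parameters are illustrative, and this is not the CBFP algorithm itself:

```python
from itertools import product

def within_hamming(candidate, sequence, d):
    """True if some substring of `sequence` is within Hamming distance d
    of `candidate`."""
    k = len(candidate)
    for i in range(len(sequence) - k + 1):
        window = sequence[i:i + k]
        if sum(a != b for a, b in zip(candidate, window)) <= d:
            return True
    return False

def center_strings(sequences, k, d, alphabet="ACGT"):
    """Enumerate all length-k center strings within distance d of every
    input sequence. Exponential in k: the candidate explosion that
    constraint-based pruning is meant to avoid."""
    return [
        "".join(c) for c in product(alphabet, repeat=k)
        if all(within_hamming("".join(c), s, d) for s in sequences)
    ]

print(center_strings(["ACGTAC", "TTACGA", "GACGTT"], k=3, d=1))
```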

632 Improved Rake Receiver Based On the Signal Sign Separation in Maximal Ratio Combining Technique for Ultra-Wideband Wireless Communication Systems

Authors: Rashid A. Fayadh, F. Malek, Hilal A. Fadhil, Norshafinash Saudin

Abstract:

When receiving at high data rates in ultra-wideband (UWB) systems with many users, multiple-user interference and inter-symbol interference are obstacles to multi-path reception, since rake receivers are designed to collect many resolvable paths, even more than a hundred. Rake receiver implementation structures have been proposed that increase complexity to obtain better performance, i.e. lower bit error rate (BER), in indoor or outdoor multi-path receivers, and several rake structures have been proposed in the past to reduce the number of resolvable paths that must be combined and estimated. To this aim, we suggest two improved rake receivers based on signal sign separation in the maximal ratio combiner (MRC), called the positive-negative MRC selective rake (P-N/MRC-S-rake) and the positive-negative MRC partial rake (P-N/MRC-P-rake) receivers. These receivers were introduced to reduce complexity, with fewer fingers, and to improve performance, with low BER. Before the decision circuit, a comparator compares the positive quantity with the negative quantity to decide whether the transmitted bit is 1 or 0. The BER was obtained by MATLAB simulation in multi-path environments for impulse radio time-hopping binary phase shift keying (TH-BPSK) modulation, and the results were compared with those of conventional rake receivers.

Keywords: Selective and partial rake receivers, positive and negative signal separation, maximal ratio combiner, bit error rate performance.
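The sign-separation idea can be sketched for one BPSK symbol: weight each finger as in a conventional MRC, accumulate the positive and negative weighted outputs separately, and let a comparator on the two quantities decide the bit. The channel model and values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in multipath model: L resolvable paths with complex gains h,
# received samples r for one BPSK symbol b.
L = 8
b = 1                                   # transmitted bit: +1 or -1
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
noise = 0.3 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
r = h * b + noise

# Per-finger MRC weighting: conj(h) * r, real part for BPSK.
finger_outputs = np.real(np.conj(h) * r)

# Sign separation: accumulate positive and negative finger outputs
# independently, then compare the two quantities to decide the bit.
positive = finger_outputs[finger_outputs > 0].sum()
negative = -finger_outputs[finger_outputs < 0].sum()
bit_estimate = 1 if positive > negative else -1
print("decided bit:", bit_estimate)
```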

631 RRNS-Convolutional Concatenated Code for OFDM based Wireless Communication with Direct Analog-to-Residue Converter

Authors: Shahana T. K., Babita R. Jose, K. Poulose Jacob, Sreela Sasi

Abstract:

The modern telecommunication industry demands higher-capacity networks with high data rates. Orthogonal frequency division multiplexing (OFDM) is a promising technique for high-data-rate wireless communications at reasonable complexity in wireless channels. OFDM has been adopted for many types of wireless systems, such as wireless local area networks (e.g. IEEE 802.11a) and digital audio/video broadcasting (DAB/DVB). The proposed research focuses on a concatenated coding scheme that improves the performance of OFDM-based wireless communications. It uses a Redundant Residue Number System (RRNS) code as the outer code and a convolutional code as the inner code. Here, the analog signal is converted directly to the residue domain, to reduce the conversion complexity, using a sigma-delta based parallel analog-to-residue converter. The bit error rate (BER) performance of the proposed system under different channel conditions is investigated, including the effects of additive white Gaussian noise (AWGN), multipath delay spread, peak power clipping, and frame start synchronization error. The simulation results show that the proposed RRNS-Convolutional concatenated coding (RCCC) scheme provides a significant improvement in system performance by exploiting the inherent properties of RRNS.

Keywords: Analog-to-residue converter, Concatenated codes, OFDM, Redundant Residue Number System, Sigma-delta modulator, Wireless communication.
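At the heart of the scheme is the residue representation itself: a value is encoded as its remainders modulo a set of pairwise-coprime moduli, and an RRNS adds redundant moduli for error control. A minimal integer-domain sketch with illustrative moduli (the direct analog-to-residue converter produces such residues from the analog signal in hardware):

```python
from math import prod

def to_residues(x, moduli):
    """Encode an integer into the Residue Number System."""
    return [x % m for m in moduli]

def from_residues(residues, moduli):
    """Recover the integer via the Chinese Remainder Theorem
    (moduli must be pairwise coprime)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M

moduli = [7, 11, 13]                     # illustrative coprime moduli
print(to_residues(100, moduli))          # -> [2, 1, 9]
print(from_residues([2, 1, 9], moduli))  # -> 100
```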

630 Evaluation of the Internal Quality for Pineapple Based on the Spectroscopy Approach and Neural Network

Authors: Nonlapun Meenil, Pisitpong Intarapong, Thitima Wongsheree, Pranchalee Samanpiboon

Abstract:

In Thailand, once pineapples are harvested, they must be classified into two classes based on their sweetness: sweet and unsweet. This paper studies and develops the assessment of the internal quality of pineapples using a low-cost compact spectroscopy sensor, based on the spectroscopy approach and a Neural Network (NN). In the experiments, Batavia pineapples were used, generating 100 samples. The extracted juice of each sample was used to determine the Soluble Solid Content (SSC), labeling the samples into sweet and unsweet classes. In terms of experimental equipment, the sensor cover was specifically designed to hold the sensor and light source so as to read the reflectance at a depth of five mm into the pineapple flesh. Using the spectroscopy sensor, visible and near-infrared reflectance (Vis-NIR) data were collected, and the NN was used to classify the pineapple classes. Before the classification step, the preprocessing methods of class balancing, data shuffling, and standardization were applied. The 510 nm and 900 nm reflectance values of the middle parts of the pineapples were used as features for the NN. With a sequential model and the ReLU activation function, 100% accuracy on the training set and 76.67% accuracy on the test set were achieved. These results show that a low-cost compact spectroscopy sensor can classify the sweetness of the two classes of pineapples with favorable results.

Keywords: Spectroscopy, soluble solid content, pineapple, neural network.
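A sketch of a small sequential ReLU network on the two reflectance features, in the spirit of the classifier described above; the layer sizes, synthetic stand-in data, and training settings are assumptions:

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(2)

# Stand-in data: two reflectance features (510 nm, 900 nm) per fruit,
# labels 1 = sweet / 0 = unsweet from SSC measurements (synthetic values).
X = rng.random((100, 2)).astype("float32")
y = (X[:, 0] + 0.5 * X[:, 1] > 0.75).astype("float32")

# Standardize features, mirroring the paper's preprocessing step.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Small sequential model with ReLU hidden layers.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=50, batch_size=8, verbose=0)
print("train accuracy:", model.evaluate(X, y, verbose=0)[1])
```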

629 Pectoral Muscles Suppression in Digital Mammograms Using Hybridization of Soft Computing Methods

Authors: I. Laurence Aroquiaraj, K. Thangavel

Abstract:

Breast region segmentation is an essential prerequisite for the computerized analysis of mammograms. It aims to separate the breast tissue from the background of the mammogram, and it includes two independent segmentations. The first segments the background region, which usually contains annotations, labels, and frames, from the whole breast region, while the second removes the pectoral muscle portion (present in Medio-Lateral Oblique (MLO) views) from the rest of the breast tissue. In this paper, we propose a hybridization of Connected Component Labeling (CCL), fuzzy, and straight-line methods. The proposed methods work well for separating the pectoral region. After removal of the pectoral muscle from the mammogram, further processing is confined to the breast region alone. To demonstrate the validity of our segmentation algorithm, it is extensively tested on the 322 mammographic images of the Mammographic Image Analysis Society (MIAS) database. The segmentation results were evaluated using Mean Absolute Error (MAE), Hausdorff Distance (HD), Probabilistic Rand Index (PRI), Local Consistency Error (LCE), and Tanimoto Coefficient (TC). The hybridization of the fuzzy and straight-line methods gave more than 96% of the curve segmentations rated adequate or better. In addition, a comparison with similar approaches from the state of the art has been made, obtaining slightly improved results. Experimental results demonstrate the effectiveness of the proposed approach.

Keywords: X-ray Mammography, CCL, Fuzzy, Straight line.
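The CCL step of the hybrid method can be sketched with a standard labeling routine: label the foreground blobs of the binarized mammogram, then keep the largest component as the breast region so that annotations, labels, and frame marks fall away. The mask below is a random stand-in for a real binarized mammogram:

```python
import numpy as np
from scipy import ndimage

# Stand-in binary mask: 1 = bright pixels (breast, labels, frame marks).
rng = np.random.default_rng(3)
mask = rng.random((64, 64)) > 0.7

# Connected Component Labeling: assign an integer label to each
# 8-connected blob of foreground pixels.
structure = np.ones((3, 3), dtype=int)          # 8-connectivity
labels, n = ndimage.label(mask, structure=structure)

# Keep only the largest component; in a real mammogram this is the
# breast region, while small components are annotations and frames.
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
breast = labels == (1 + int(np.argmax(sizes)))
print("components found:", n, "| largest kept:", int(sizes.max()), "px")
```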

628 Soft Real-Time Fuzzy Task Scheduling for Multiprocessor Systems

Authors: Mahdi Hamzeh, Sied Mehdi Fakhraie, Caro Lucas

Abstract:

All practical real-time scheduling algorithms for multiprocessor systems present a trade-off between computational complexity and performance. In real-time systems, tasks have to be performed both correctly and on time, and finding a minimal schedule in multiprocessor systems with real-time constraints is known to be NP-hard. Although optimal algorithms have been employed in uni-processor systems, they fail when applied to multiprocessor systems. Practical scheduling algorithms in real-time systems do not have deterministic response times, yet deterministic timing behavior is an important parameter for system robustness analysis. The intrinsic uncertainty in dynamic real-time systems adds to the difficulty of the scheduling problem. To alleviate these difficulties, we propose a fuzzy scheduling approach to arrange real-time periodic and non-periodic tasks in multiprocessor systems. Static and dynamic optimal scheduling algorithms fail under non-critical overload; in contrast, our approach balances the task loads of the processors successfully while considering starvation prevention and fairness, so that higher-priority tasks have a higher probability of running. A simulation was conducted to evaluate the performance of the proposed approach. Experimental results show that the proposed fuzzy scheduler creates feasible schedules for homogeneous and heterogeneous tasks. It also considers task priorities, which leads to higher system utilization and lower deadline miss times. According to the results, it performs very close to the optimal schedule for uni-processor systems.

Keywords: Computational complexity, Deadline, Feasible scheduling, Fuzzy scheduling, Priority, Real-time multiprocessor systems, Robustness, System utilization.
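A toy sketch of how a fuzzy scheduler can rank tasks, mapping deadline slack and processor load to a priority through triangular memberships and a small rule base; the rules and membership shapes are illustrative, not the paper's:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_priority(slack, load):
    """Toy Mamdani-style rules: tight slack pushes priority up;
    a busy processor pushes it down. Illustrative rule base only."""
    tight = tri(slack, -0.1, 0.0, 0.5)   # deadline almost due
    loose = tri(slack, 0.3, 1.0, 1.6)
    low   = tri(load, -0.1, 0.0, 0.6)
    high  = tri(load, 0.4, 1.0, 1.6)

    # Rule strengths (min for AND), mapped to priority centroids.
    rules = [
        (tight, 0.9),             # tight deadline -> priority ~0.9
        (min(loose, low), 0.5),   # relaxed deadline, idle CPU -> 0.5
        (min(loose, high), 0.2),  # relaxed deadline, busy CPU -> 0.2
    ]
    num = sum(w * p for w, p in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(fuzzy_priority(slack=0.1, load=0.8))  # near-due task ranks high
```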

627 A Text Mining Technique Using Association Rules Extraction

Authors: Hany Mahgoub, Dietmar Rösner, Nabil Ismail, Fawzy Torkey

Abstract:

This paper describes a text mining technique for automatically extracting association rules from collections of textual documents. The technique, called Extracting Association Rules from Text (EART), depends on keyword features to discover association rules amongst the keywords labeling the documents. The EART system ignores the order in which words occur, focusing instead on the words and their statistical distributions in documents. The main contributions of the technique are that it integrates XML technology with an Information Retrieval scheme (TF-IDF) for keyword/feature selection, which automatically selects the most discriminative keywords for use in association rule generation, and that it uses a data mining technique for association rule discovery. It consists of three phases: a text preprocessing phase (transformation, filtration, stemming, and indexing of the documents), an Association Rule Mining (ARM) phase (applying our algorithm for Generating Association Rules based on a Weighting scheme, GARW), and a visualization phase (visualization of results). Experiments were applied to web-page news documents related to the outbreak of bird flu. The extracted association rules contain important features and describe the informative news included in the document collection. The performance of the EART system is compared with a system that uses the Apriori algorithm, in terms of execution time and evaluation of the extracted association rules.

Keywords: Text mining, data mining, association rule mining.
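A compact sketch of the two core steps, TF-IDF keyword selection followed by an order-insensitive association-rule pass over keyword co-occurrence; the corpus, top_k, and thresholds are illustrative, and this is a simplification rather than the GARW algorithm itself:

```python
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "bird flu outbreak spreads in poultry farms",
    "poultry farms culled after bird flu cases",
    "vaccine research targets flu virus strains",
]

# TF-IDF keyword selection: keep the top-scoring terms per document,
# mirroring EART's keyword/feature selection step.
vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(docs)
terms = vec.get_feature_names_out()
top_k = 4
keyword_sets = [
    {terms[j] for j in row.toarray().ravel().argsort()[-top_k:]}
    for row in tfidf
]

# Minimal association-rule pass over keyword co-occurrence
# (word order is ignored, as in the abstract).
min_support, min_conf = 2 / 3, 0.9
for a, b in combinations(sorted(set().union(*keyword_sets)), 2):
    n_a = sum(a in s for s in keyword_sets)
    n_ab = sum(a in s and b in s for s in keyword_sets)
    if n_ab / len(docs) >= min_support and n_a and n_ab / n_a >= min_conf:
        print(f"{a} -> {b} (support={n_ab}/{len(docs)})")
```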

626 A Novel Recursive Multiplierless Algorithm for 2-D DCT

Authors: V. K. Ananthashayana, Geetha K. S.

Abstract:

In this paper, a recursive algorithm for the computation of the 2-D DCT using Ramanujan numbers is proposed. With this algorithm, floating-point multiplication is completely eliminated, so the multiplierless algorithm can be implemented using shifts and additions only. The orthogonality of the recursive kernel is well maintained through matrix factorization to reduce the computational complexity. The inherent parallel structure yields simpler programming and hardware implementation, and the algorithm requires (3/2)N² log₂ N − N + 1 additions and (N²/2) log₂ N shifts, which is much less complex than other recent multiplierless algorithms.

Keywords: DCT, Multiplierless, Ramanujan Number, Recursive.

625 The Process of Crisis: Model of Its Development in the Organization

Authors: M. Mikušová

Abstract:

The main aim of this paper is to present a clear and comprehensive picture of the process of a crisis in an organization, which will help in better understanding its possible developments. To describe the sequence of individual steps and indicate their causation and possible variants of development, a detailed flow diagram with verbal commentary is applied. For simplicity, the process of the crisis is observed in four basic phases: symptoms of the crisis, diagnosis, action, and prevention. The model highlights the complexity of the phenomenon of the crisis and shows that the various phases of the crisis are interwoven.

Keywords: Crisis, management, model, organization.

624 Exponentially Weighted Simultaneous Estimation of Several Quantiles

Authors: Valeriy Naumov, Olli Martikainen

Abstract:

In this paper, we propose a new method for simultaneously generating multiple quantiles corresponding to given probability levels from data streams and massive data sets. This method provides a basis for the development of single-pass, low-storage quantile estimation algorithms, which differ in complexity, storage requirements, and accuracy. We demonstrate that such algorithms may perform well even for heavy-tailed data.

Keywords: Quantile estimation, data stream, heavy-tailed distribution, tail index.
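One classical single-pass scheme in this family tracks several quantiles at once by constant-step stochastic approximation, so each estimate is exponentially weighted toward recent data. This is a generic sketch, not the paper's exact estimator:

```python
import random

def ew_quantiles(stream, probs, step=0.05):
    """Single-pass, exponentially weighted quantile tracking.
    Each estimate moves up by step*p when the sample exceeds it,
    and down by step*(1-p) otherwise."""
    estimates = None
    for x in stream:
        if estimates is None:
            estimates = [x] * len(probs)   # initialize at first sample
            continue
        for i, p in enumerate(probs):
            if x > estimates[i]:
                estimates[i] += step * p
            else:
                estimates[i] -= step * (1 - p)
    return estimates

random.seed(4)
data = (random.paretovariate(2.5) for _ in range(50_000))  # heavy-tailed
print(ew_quantiles(data, probs=[0.5, 0.9, 0.99]))
```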

623 Low Complexity, High Performance LDPC Codes Based on Defected Fullerene Graphs

Authors: Ashish Goswami, Rakesh Sharma

Abstract:

In this paper, LDPC codes based on defected fullerene graphs are generated. The codes generated are found to be fast in encoding and better in terms of error performance on the AWGN channel.

Keywords: LDPC Codes, Fullerene Graphs, Defected Fullerene Graphs.

622 A Logic Approach to Database Dynamic Updating

Authors: Daniel Stamate

Abstract:

We introduce a logic-based framework for database updating under constraints. In our framework, the constraints are represented as an instantiated extended logic program. When an update is performed, database consistency may be violated. We provide an approach to maintaining database consistency and study the conditions under which the maintenance process is deterministic. We show that the complexity of the computations and decision problems presented in our framework is in each case polynomial time.

Keywords: Databases, knowledge bases, constraints, updates, minimal change, consistency.

621 Influence of Shading on a BIPV System’s Performance in an Urban Context: Case Study of BIPV Systems of the Science Center of Complexity Building of the National and Autonomous University of Mexico in Mexico City

Authors: Viridiana Edith Ardura Perea, José Luis Bermúdez Alcocer

Abstract:

The purpose of this paper is to establish the influence of shading on a Building Integrated Photovoltaic (BIPV) system's performance in an urban context. The PV systems of the Science Center of Complexity (Centro de Ciencias de la Complejidad) building, on the main campus of the National and Autonomous University of Mexico (UNAM) in Mexico City, were taken as a case study. The PV systems are placed on the rooftop and on the south façade of the building. The south-façade PV system, operating as sunshades, consists of two strings: one at the ground floor and the other at the first floor. According to the building's facility manager, the south-façade PV system generates 42% less electricity per kilowatt peak (kWp) installed than the one on the roof. The methods applied in this study were Solar Radiation Analysis (SRA) simulations performed with the Insight 360 plug-in for Revit 2018® and on-site measurement using specialized tools. The results of the SRA simulations showed that the shading cast by the first-floor PV system onto the ground-floor PV system decreases its incident solar radiation by over 50%. The simulation outcome was compared with, and validated against, the measured data obtained from the on-site measurement. In conclusion, the loss factor arising from the shading of the PVs is due to the surroundings and to the PV system's own design. The deficient design of the south-façade BIPV system causes critical performance losses and decreases its profitability.

Keywords: Building integrated photovoltaics (BIPV) design, energy analysis software, shading losses, solar radiation analysis.

620 Fast Algorithm of Shot Cut Detection

Authors: Lenka Krulikovská, Jaroslav Polec, Tomáš Hirner

Abstract:

In this paper, we present a novel method that reduces the computational complexity of abrupt cut detection. We propose a fast algorithm in which the similarity of frames a defined step apart is evaluated instead of comparing every pair of successive frames. Based on the results of simulation on a large video collection, the proposed fast algorithm achieves an 80% reduction in the number of frame comparisons, compared to currently used methods, without degrading shot cut detection accuracy.

Keywords: Abrupt cut, fast algorithm, shot cut detection, Pearson correlation coefficient.
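A compact sketch of the step-based strategy with Pearson correlation as the similarity measure: compare frames a step apart, and only scan inside a window when the correlation drops. The step, threshold, and toy frames are illustrative:

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two frames (flattened grayscale)."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def fast_cut_detect(frames, step=5, threshold=0.6):
    """Compare frames `step` apart; only when similarity drops below the
    threshold, scan inside the window to localize the abrupt cut."""
    cuts = []
    for i in range(0, len(frames) - step, step):
        if pearson(frames[i], frames[i + step]) < threshold:
            for j in range(i, i + step):        # refine within the window
                if pearson(frames[j], frames[j + 1]) < threshold:
                    cuts.append(j + 1)
                    break
    return cuts

# Toy sequence: two "shots" of noisy copies of two base images -> cut at 20.
rng = np.random.default_rng(5)
base1, base2 = rng.random((16, 16)) * 255, rng.random((16, 16)) * 255
frames = [base1 + rng.normal(0, 5, base1.shape) for _ in range(20)]
frames += [base2 + rng.normal(0, 5, base2.shape) for _ in range(20)]
print(fast_cut_detect(frames))  # -> [20]
```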

619 Quantifying Uncertainties in an Archetype-Based Building Stock Energy Model by Use of Individual Building Models

Authors: Morten Brøgger, Kim Wittchen

Abstract:

Focus on reducing energy consumption in existing buildings at large scale, e.g. in cities or countries, has been increasing in recent years. In order to reduce energy consumption in existing buildings, political incentive schemes are put in place and large-scale investments are made by utility companies. Prioritising these investments requires a comprehensive overview of the energy consumption in the existing building stock, as well as of the potential energy savings. However, a building stock comprises thousands of buildings with different characteristics, making it difficult to model energy consumption accurately. Moreover, the complexity of the building stock makes it difficult to convey model results to policymakers and other stakeholders. In order to manage this complexity, building archetypes are often employed in building stock energy models (BSEMs). Building archetypes are formed by segmenting the building stock according to specific characteristics. Segmenting by building type and building age is common, among other reasons because this information is often easily available, and this segmentation makes it easy to convey results to non-experts. However, using a single archetypical building to represent all buildings in a segment of the building stock is associated with a loss of detail: thermal characteristics are aggregated, while other characteristics that could affect the energy efficiency of a building are disregarded. Thus, a simplified representation of the building stock can come at the expense of model accuracy. The present study evaluates the accuracy of a conventional archetype-based BSEM that segments the building stock according to building type and age. The accuracy is evaluated in terms of the archetypes' ability to accurately emulate the average energy demands of the buildings they are meant to represent. This is done for the buildings' energy demands as a whole as well as for relevant sub-demands, both evaluated in relation to the type and the age of the building. This should provide researchers who use archetypes in BSEMs with an indication of the expected accuracy of the conventional archetype model, as well as of the accuracy lost in specific parts of the calculation due to the use of the archetype method.

Keywords: Building stock energy modelling, energy savings, archetype.

618 Dual Construction of Stern-based Signature Scheme

Authors: Pierre-Louis Cayrel, Sidi Mohamed El Yousfi Alaoui

Abstract:

In this paper, we propose a dual version of the first threshold ring signature scheme based on error-correcting codes, proposed by Aguilar et al. in [1]. Our scheme uses an improvement of the Véron zero-knowledge identification scheme, which provides smaller public and private key sizes and better computational complexity than the Stern scheme. This scheme is secure in the random oracle model.

Keywords: Stern algorithm, Véron algorithm, threshold ring signature, post-quantum cryptography.

617 Self – Tuning Method of Fuzzy System: An Application on Greenhouse Process

Authors: M. Massour El Aoud, M. Franceschi, M. Maher

Abstract:

The approach proposed here is oriented toward fuzzy systems for the analysis and synthesis of intelligent climate controllers. The simulation of the internal climate of the greenhouse is achieved by a linear model whose coefficients are obtained by identification. The use of fuzzy logic controllers for the regulation of climate variables represents a powerful way to minimize the energy cost. Strategies of reduction and optimization are adopted to facilitate the tuning and to reduce the complexity of the controller.

Keywords: Greenhouse, fuzzy logic, optimization, gradient descent.
