Search results for: block matching algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4873

4783 Constructing White-Box Implementations Based on Threshold Shares and Composite Fields

Authors: Tingting Lin, Manfred von Willich, Dafu Lou, Phil Eisen

Abstract:

A white-box implementation of a cryptographic algorithm is a software implementation intended to resist extraction of the secret key by an adversary. To date, most white-box techniques have been used to protect block cipher implementations. However, a large proportion of white-box implementations have been shown to be vulnerable to affine equivalence attacks and other algebraic attacks, as well as to differential computation analysis (DCA). In this paper, we identify a class of block ciphers for which we propose a method of constructing white-box implementations. Our method is based on threshold implementations and operations in composite fields. The resulting implementations consist of lookup tables and a few exclusive-OR operations. All intermediate values (inputs and outputs of the lookup tables) are masked. The threshold implementation makes the distribution of the masked values uniform and independent of the original inputs, and the operations in composite fields reduce the size of the lookup tables. The white-box implementations can provide resistance against algebraic attacks and DCA-like attacks.
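The core masking idea, that every table input and output is XOR-masked so unmasked values never appear in memory, can be illustrated with a toy encoded lookup table. This is a minimal sketch of table encoding in general, not the authors' threshold-share construction; the 4-bit S-box and the mask choices are hypothetical:

```python
# Illustrative sketch (not the authors' construction): encoding a lookup
# table so that its inputs and outputs only ever appear masked.
import secrets

SBOX = [(x * 7 + 3) % 16 for x in range(16)]  # toy 4-bit S-box (hypothetical)

def masked_table(table, m_in, m_out):
    """Return T' with T'(x ^ m_in) = T(x) ^ m_out for all x."""
    return [table[y ^ m_in] ^ m_out for y in range(len(table))]

m_in, m_out = secrets.randbelow(16), secrets.randbelow(16)
tprime = masked_table(SBOX, m_in, m_out)

x = 9                        # plaintext nibble
masked_x = x ^ m_in          # only masked values are ever visible
masked_y = tprime[masked_x]  # table lookup on masked data
assert masked_y ^ m_out == SBOX[x]  # unmasking recovers the true output
```

In a full white-box design the unmasking never happens in the clear; successive tables are composed so that one table's output mask is the next table's input mask.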

Keywords: white-box, block cipher, composite field, threshold implementation

Procedia PDF Downloads 127
4782 Analysis of Matching Pursuit Features of EEG Signal for Mental Tasks Classification

Authors: Zin Mar Lwin

Abstract:

Brain Computer Interface (BCI) systems have been developed for people who suffer from severe motor disabilities and find it challenging to communicate with their environment. BCI allows them to communicate in a non-muscular way. For communication between human and computer, BCI uses a type of signal called the electroencephalogram (EEG) signal, which is recorded from the human brain by means of electrodes. The EEG signal is an important information source for understanding brain processes in non-invasive BCI. To translate a human's thoughts, the acquired EEG signal needs to be classified accurately. This paper proposes a typical EEG signal classification system evaluated on a dataset from Purdue University. The Independent Component Analysis (ICA) method, via EEGLAB tools, is used to remove artifacts caused by eye blinks. For feature extraction, the time and frequency features of the non-stationary EEG signals are extracted by the Matching Pursuit (MP) algorithm. The classification of one of five mental tasks is performed by a multi-class Support Vector Machine (SVM). For the SVMs, comparisons have been carried out for both one-against-one and one-against-all methods.
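The greedy matching pursuit decomposition that the feature-extraction step relies on can be sketched over a toy dictionary. This is a generic MP sketch, not the paper's EEG pipeline; the sinusoid dictionary and the synthetic signal are illustrative stand-ins for time-frequency atoms:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy MP: repeatedly pick the dictionary atom most correlated
    with the residual and subtract its projection (atoms unit-norm)."""
    residual = signal.astype(float).copy()
    coeffs = []
    for _ in range(n_atoms):
        corr = dictionary @ residual          # atom/residual inner products
        k = int(np.argmax(np.abs(corr)))
        coeffs.append((k, corr[k]))
        residual -= corr[k] * dictionary[k]
    return coeffs, residual

# Toy dictionary of unit-norm sinusoids (a stand-in for a Gabor dictionary)
t = np.linspace(0, 1, 256, endpoint=False)
atoms = np.array([np.sin(2 * np.pi * f * t) for f in range(1, 33)])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

sig = 3.0 * atoms[4] + 0.5 * atoms[20]        # sparse combination of atoms
coeffs, res = matching_pursuit(sig, atoms, n_atoms=2)
picked = sorted(k for k, _ in coeffs)          # the two atoms are recovered
```

The selected atom indices and coefficients are the kind of time-frequency features that would then feed the SVM classifier.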

Keywords: BCI, EEG, ICA, SVM

Procedia PDF Downloads 250
4781 Very Large Scale Integration Architecture of Finite Impulse Response Filter Implementation Using Retiming Technique

Authors: S. Jalaja, A. M. Vijaya Prakash

Abstract:

Recursive combination of an algorithm based on Karatsuba multiplication is exploited to design a generalized transpose and parallel Finite Impulse Response (FIR) filter. Mid-range Karatsuba multiplication and a carry save adder based on Karatsuba multiplication reduce the time complexity of higher-order multiplication, implemented up to n bits. As a result, we design a modified N-tap transpose and parallel symmetric FIR filter structure using the Karatsuba algorithm. The mathematical formulation of the FFA filter is derived. The proposed architecture involves a significantly smaller area-delay product (ADP) than the existing block implementation. By adopting the retiming technique, the hardware cost is reduced further. The filter architecture is designed using a 90 nm technology library and implemented using Cadence EDA tools. The synthesized results show better performance for different word lengths and block sizes. The design achieves switching activity reduction and low power consumption, with and without retiming, for different combinations of the circuit. The proposed structure achieves more than half the power reduction compared to the earlier design structure. As a proof of concept, for block size 16 and filter length 64, the CKA method achieves 51% and 70% less power by applying retiming, and the CSA method achieves 57% and 77% less power by applying retiming, compared to the previously proposed design.
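The divide-and-conquer recurrence at the heart of Karatsuba multiplication, three half-size products in place of four, can be sketched on integers. The paper applies the idea at the bit/hardware level; this software version only illustrates the recurrence:

```python
def karatsuba(x, y):
    """Recursive Karatsuba multiplication: three half-size products
    instead of four, giving O(n^1.585) digit operations."""
    if x < 10 or y < 10:                      # base case: single digit
        return x * y
    n = max(len(str(x)), len(str(y)))
    p = 10 ** (n // 2)
    a, b = divmod(x, p)                       # x = a*p + b
    c, d = divmod(y, p)                       # y = c*p + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    mid = karatsuba(a + b, c + d) - ac - bd   # = a*d + b*c
    return ac * p * p + mid * p + bd

assert karatsuba(1234, 5678) == 1234 * 5678
```

The same three-products identity is what lets the hardware design trade one multiplier for a few adders, which is where the area-delay savings come from.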

Keywords: carry save adder Karatsuba multiplication, mid range Karatsuba multiplication, modified FFA and transposed filter, retiming

Procedia PDF Downloads 204
4780 Analgesic Efficacy of IPACK Block in Primary Total Knee Arthroplasty (90 Cases)

Authors: Fedili Benamar, Beloulou Mohamed Lamine, Ouahes Hassane, Ghattas Samir

Abstract:

Background and aims: Peripheral regional anesthesia has been integrated into most analgesia protocols for total knee arthroplasty, which is considered among the most painful surgeries, with a high potential for chronicization. The adductor canal block (ACB) has gained popularity. Similarly, the IPACK block has been described to provide analgesia of the posterior knee capsule. This study aimed to evaluate the analgesic efficacy of this block in patients undergoing primary total knee arthroplasty. Methods: 90 patients were randomized to receive either an IPACK block, an anterior sciatic block (BSA), or a sham block (30 patients in each group, plus multimodal analgesia and a catheter in the adductor canal, KCA): Group 1, KCA; Group 2, KCA + BSA; Group 3, KCA + IPACK. The analgesic blocks were performed preoperatively under ultrasound guidance, respecting the safety rules; the dose administered was 20 cc of ropivacaine 0.25%. Posterior knee pain was assessed 6 hours after surgery. Other endpoints included quality of recovery after surgery, pain scores, and opioid requirements (morphine PCA; Epi Info 7.2 analysis). Results: The groups were matched, with a predominance of women (4F/1M), a mean age of 68 ± 7 years, a mean BMI of 31.75 ± 4 kg/m², 70% of patients ASA 2 and 20% ASA 3, and a mean duration of intervention of 89 ± 19 minutes. Morphine consumption (PCA) was significantly higher in Group 1 (16 mg) and Group 2 (8 mg) than in Group 3 (4 mg). There was a correlation between the use of the IPACK block and postoperative pain. Conclusions: In a multimodal analgesic protocol, the addition of the IPACK block decreased pain scores and morphine consumption.

Keywords: regional anesthesia, analgesia, total knee arthroplasty, the adductor canal block (acb), the ipack block, pain

Procedia PDF Downloads 40
4779 Clustering-Based Computational Workload Minimization in Ontology Matching

Authors: Mansir Abubakar, Hazlina Hamdan, Norwati Mustapha, Teh Noranis Mohd Aris

Abstract:

In order to build a matching pattern for each class correspondence of an ontology, it is necessary to specify a set of attribute correspondences across two corresponding classes by clustering. Clustering reduces the size of the potential attribute correspondences considered in the matching activity, which significantly reduces the computational workload; otherwise, all attributes of a class would have to be compared with all attributes of the corresponding class. Most existing ontology matching approaches lack scalable attribute discovery methods, such as cluster-based attribute searching, which makes the ontology matching activity computationally expensive. It is therefore vital in ontology matching to design a scalable element or attribute correspondence discovery method that reduces the size of the potential element correspondences during mapping, thereby reducing the computational workload of the matching process as a whole. The objectives of this work are 1) to design a clustering method for discovering similar attribute correspondences and relationships between ontologies, and 2) to discover element correspondences by classifying the elements of each class based on their value features using the K-medoids clustering technique. Discovering attribute correspondences is essential for comparing instances when matching two ontologies. During the matching process, any two instances across two different data sets should be compared on their attribute values, so that they can be judged to be the same or not. Intuitively, any two instances that come from classes across which there is a class correspondence are likely to be identical to each other. Moreover, any two instances that hold more similar attribute values are more likely to be matched than ones with less similar attribute values. Most of the time, similar attribute values exist in two instances across which there is an attribute correspondence.
This work will present how to classify the attributes of each class with K-medoids clustering and then map the clustered groups by their statistical value features. We will also show how to map the attributes of a clustered group to the attributes of the mapped clustered group, generating a set of potential attribute correspondences to be applied in generating a matching pattern. The K-medoids clustering phase largely reduces the number of non-corresponding attribute pairs considered when comparing instances, as only attribute pairs whose coverage probability reaches 100% and attributes above the specified threshold are considered as potential attributes for a match. Using clustering reduces the size of the potential element correspondences considered during the mapping activity, which in turn reduces the computational workload significantly; otherwise, every element of a class in the source ontology would have to be compared with all elements of the corresponding classes in the target ontology. K-medoids can ably cluster the attributes of each class, so that a proportion of non-corresponding attribute pairs is not considered when constructing the matching pattern.
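A minimal K-medoids loop of the kind described above can be sketched as follows. This is a generic PAM-style alternation on synthetic points, not the paper's attribute-feature pipeline; the data and the Euclidean distance are illustrative assumptions:

```python
import numpy as np

def k_medoids(X, k, n_iter=20, seed=0):
    """Alternating K-medoids on a precomputed distance matrix: assign
    each point to its nearest medoid, then re-pick each cluster's medoid
    as the member minimizing total within-cluster distance."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            costs = D[np.ix_(members, members)].sum(axis=1)
            new_medoids[j] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break                              # converged
        medoids = new_medoids
    return medoids, labels

# Two well-separated synthetic clusters (stand-ins for attribute features)
pts = np.vstack([np.random.default_rng(1).normal(0, 0.1, (10, 2)),
                 np.random.default_rng(2).normal(5, 0.1, (10, 2))])
meds, labs = k_medoids(pts, k=2)
```

In the matching setting, each cluster would group attributes with similar value features so that only within-group attribute pairs need be compared.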

Keywords: attribute correspondence, clustering, computational workload, k-medoids clustering, ontology matching

Procedia PDF Downloads 222
4778 A Hybrid Multi-Objective Firefly-Sine Cosine Algorithm for Multi-Objective Optimization Problem

Authors: Gaohuizi Guo, Ning Zhang

Abstract:

The firefly algorithm (FA) and the sine cosine algorithm (SCA) are two popular and advanced metaheuristic algorithms. However, when applied to multi-objective optimization problems, these algorithms have some shortcomings, such as premature convergence and limited exploration capability, respectively. Combining the strengths of FA and SCA while avoiding their deficiencies may improve the accuracy and efficiency of the algorithm. This paper proposes a hybridization of the FA and SCA algorithms, named the multi-objective firefly-sine cosine algorithm (MFA-SCA), to develop a more efficient metaheuristic algorithm than FA and SCA.
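For reference, the single-objective sine cosine position update that SCA contributes to the hybrid can be sketched as below. This is a generic SCA sketch on a sphere test function, not the proposed MFA-SCA; the parameter choices are illustrative:

```python
import numpy as np

def sine_cosine_optimize(f, dim, n_agents=20, n_iter=200, lb=-5.0, ub=5.0, seed=0):
    """Minimal Sine Cosine Algorithm sketch: agents oscillate around the
    best-so-far solution via sine/cosine updates whose amplitude r1
    decreases linearly (exploration -> exploitation)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_agents, dim))
    best = X[np.argmin([f(x) for x in X])].copy()
    best_val = f(best)
    for t in range(n_iter):
        r1 = 2.0 * (1 - t / n_iter)
        for i in range(n_agents):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            r4 = rng.uniform(size=dim)
            step = np.where(r4 < 0.5,
                            r1 * np.sin(r2) * np.abs(r3 * best - X[i]),
                            r1 * np.cos(r2) * np.abs(r3 * best - X[i]))
            X[i] = np.clip(X[i] + step, lb, ub)
            val = f(X[i])
            if val < best_val:
                best, best_val = X[i].copy(), val
    return best, best_val

sphere = lambda x: float(np.sum(x ** 2))
best, val = sine_cosine_optimize(sphere, dim=3)
```

A multi-objective hybrid would replace the single best-so-far point with an archive of non-dominated solutions and interleave firefly attraction moves with this update.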

Keywords: firefly algorithm, hybrid algorithm, multi-objective optimization, sine cosine algorithm

Procedia PDF Downloads 135
4777 Practical Guide To Design Dynamic Block-Type Shallow Foundation Supporting Vibrating Machine

Authors: Dodi Ikhsanshaleh

Abstract:

When subjected to a dynamic load, a foundation oscillates in a way that depends on the soil behaviour, the geometry and inertia of the foundation, and the dynamic excitation. A practical guideline for analyzing block-type foundations excited by dynamic loads from vibrating machines is presented. The analysis uses the lumped mass parameter method to express dynamic soil properties such as stiffness and damping. Numerical examples are performed on the design of a block-type foundation supporting a gas turbine compressor, which is an important equipment package in a gas processing plant.
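The lumped-parameter idealization reduces the block foundation to a single-degree-of-freedom oscillator, so the natural frequency and steady-state amplitude follow from the usual formulas. The sketch below uses hypothetical machine and soil values, not data from the paper:

```python
import math

# Hypothetical machine-foundation data (illustrative values only)
m = 50_000.0      # mass of block + machine, kg
k = 4.0e8         # equivalent vertical soil spring stiffness, N/m
c = 1.5e6         # equivalent viscous damping, N*s/m
F0 = 20_000.0     # unbalanced force amplitude, N
f_op = 25.0       # machine operating frequency, Hz

wn = math.sqrt(k / m)            # natural circular frequency, rad/s
fn = wn / (2 * math.pi)          # natural frequency, Hz
w = 2 * math.pi * f_op           # operating circular frequency, rad/s
# steady-state displacement amplitude of the SDOF model
X = F0 / math.sqrt((k - m * w**2) ** 2 + (c * w) ** 2)

print(f"fn = {fn:.1f} Hz, amplitude = {X * 1e6:.1f} microns")
```

A design check would then compare fn against the operating frequency (to avoid resonance) and X against the permissible vibration amplitude for the machine.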

Keywords: block foundation, dynamic load, lumped mass parameter

Procedia PDF Downloads 461
4776 Approximating Fixed Points by a Two-Step Iterative Algorithm

Authors: Safeer Hussain Khan

Abstract:

In this paper, we introduce a two-step iterative algorithm to prove a strong convergence result for approximating common fixed points of three contractive-like operators. Our algorithm generalizes an existing algorithm. It also contains two famous iterative algorithms: the Mann iterative algorithm and the Ishikawa iterative algorithm. Thus our result generalizes the corresponding results proved for the above three iterative algorithms to a class of more general operators. At the end, we remark that nothing prevents us from extending our result to the case of an iterative algorithm with error terms.
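The two classical schemes contained in the algorithm can be sketched for a scalar contractive-like map (cos on [0, 1] is a standard illustration; the step sizes are arbitrary):

```python
import math

def mann(T, x0, alpha=0.5, n=200):
    """Mann iteration: x_{n+1} = (1 - alpha) x_n + alpha T(x_n)."""
    x = x0
    for _ in range(n):
        x = (1 - alpha) * x + alpha * T(x)
    return x

def ishikawa(T, x0, alpha=0.5, beta=0.5, n=200):
    """Ishikawa iteration: y_n = (1 - beta) x_n + beta T(x_n);
    x_{n+1} = (1 - alpha) x_n + alpha T(y_n)."""
    x = x0
    for _ in range(n):
        y = (1 - beta) * x + beta * T(x)
        x = (1 - alpha) * x + alpha * T(y)
    return x

T = math.cos                 # contractive-like on [0, 1]; fixed point ~0.739085
fp_mann = mann(T, 0.0)       # both schemes converge to the same fixed point
fp_ishi = ishikawa(T, 0.0)
```

A two-step scheme of the kind the paper studies recovers Mann when the inner step is switched off (beta = 0) and Ishikawa in the general case.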

Keywords: contractive-like operator, iterative algorithm, fixed point, strong convergence

Procedia PDF Downloads 516
4775 Bimodal Biometrics System Using Fusion of Iris and Fingerprint

Authors: Attallah Bilal, Hendel Fatiha

Abstract:

This paper proposes a bimodal biometrics system for identity verification using iris and fingerprint, with a matching-score-level architecture using the weighted sum of scores technique. The features are extracted from the preprocessed images of the iris and fingerprint. These features of a query image are compared with those of a database image to obtain matching scores. The individual scores generated after matching are passed to the fusion module. This module consists of three major steps, i.e., normalization, generation of similarity scores, and fusion of the weighted scores. The final score is then used to declare the person genuine or an impostor. The system is tested on the CASIA database and gives an overall accuracy of 91.04% with a FAR of 2.58% and an FRR of 8.34%.
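The normalization and weighted-sum fusion steps can be sketched as follows. The weights, score ranges, and threshold are illustrative assumptions, not the values used in the paper:

```python
def min_max_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1] given its observed range."""
    return (score - lo) / (hi - lo)

def fuse(iris_score, finger_score, w_iris=0.6, w_finger=0.4,
         iris_range=(0.0, 100.0), finger_range=(0.0, 50.0)):
    """Weighted-sum fusion of two min-max-normalized matching scores."""
    s1 = min_max_normalize(iris_score, *iris_range)
    s2 = min_max_normalize(finger_score, *finger_range)
    return w_iris * s1 + w_finger * s2

THRESHOLD = 0.5   # accept as genuine above this fused score (tunable)

fused = fuse(iris_score=80.0, finger_score=30.0)
decision = "genuine" if fused > THRESHOLD else "impostor"
```

In practice the weights and threshold are tuned on a validation set to trade off FAR against FRR.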

Keywords: iris, fingerprint, sum rule, fusion

Procedia PDF Downloads 338
4774 Comparative Study Between Two Different Techniques for Postoperative Analgesia in Cesarean Section Delivery

Authors: Nermeen Elbeltagy, Sara Hassan, Tamer Hosny, Mostafa Abdelaziz

Abstract:

Introduction: Adequate postoperative analgesia after caesarean section (CS) is crucial, as it impacts the distinct surgical recovery needs of the parturient. Over recent years, there has been increased interest in regional nerve block techniques, with promising results on efficacy. These techniques reduce the need for additional analgesia, thereby lowering the incidence of drug-related side effects. As postoperative pain after cesarean is mainly due to the abdominal incision, the transversus abdominis plane (TAP) block is a relatively new abdominal nerve block with excellent efficacy after different abdominal surgeries, including cesarean section. Objective: The main objective is to compare the ultrasound-guided TAP block provided by the anesthesiologist with the TAP block provided by the surgeon through the caesarean incision regarding the duration of postoperative analgesia, intensity of analgesia, timing of mobilization, and ease of the procedure. Method: Ninety pregnant females at term scheduled for delivery by elective cesarean section were randomly distributed into two groups. The first group (45) received spinal anesthesia and a postoperative ultrasound-guided TAP block using 20 ml of 0.25% bupivacaine on each side, provided by the anesthesiologist. The second group (45) received spinal anesthesia plus a TAP block using 20 ml of 0.25% bupivacaine on each side, provided by the surgeon through the cesarean incision. The Visual Analogue Scale (VAS) was used for the comparison between the two groups. Results: The VAS score after four hours was higher in the group that received the TAP block provided by the surgeon through the surgical incision than in the group that received the ultrasound-guided TAP block provided by the anesthesiologist (P=0.011). In contrast, there was no statistical difference in the patients' dose of analgesia four hours after the TAP block (P=0.228).
Conclusion: The TAP block provided through the surgical incision is safe and enhances early patient mobilization.

Keywords: TAP block, CS, VAS, analgesia

Procedia PDF Downloads 16
4773 Bundle Block Detection Using Spectral Coherence and Levenberg Marquardt Neural Network

Authors: K. Padmavathi, K. Sri Ramakrishna

Abstract:

This study describes a procedure for the detection of Left and Right Bundle Branch Block (LBBB and RBBB) ECG patterns using the spectral coherence (SC) technique and a Levenberg-Marquardt neural network (LMNN). The coherence function finds common frequencies between two signals and evaluates their similarity. The QT variations of bundle branch blocks are observed in lead V1 of the ECG. The spectral coherence technique uses the Welch method for calculating the PSD. For the detection of normal and bundle branch block beats, the SC output values are given as input features to the LMNN classifier. The overall accuracy of the LMNN classifier is 99.5 percent. The data were collected from the MIT-BIH Arrhythmia Database.
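The Welch-averaged magnitude-squared coherence that the classifier's features are built from can be sketched in a few lines. This is a generic implementation on synthetic signals, not the paper's ECG pipeline; the 11 Hz shared component and the noise level are illustrative:

```python
import numpy as np

def welch_coherence(x, y, fs, nperseg=256):
    """Magnitude-squared coherence via Welch averaging (Hann window, 50%
    overlap): Cxy = |Pxy|^2 / (Pxx * Pyy). Segment counts cancel in the
    ratio, so plain sums of per-segment spectra suffice."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    Pxx = Pyy = Pxy = 0
    for start in range(0, len(x) - nperseg + 1, step):
        X = np.fft.rfft(win * x[start:start + nperseg])
        Y = np.fft.rfft(win * y[start:start + nperseg])
        Pxx = Pxx + np.abs(X) ** 2
        Pyy = Pyy + np.abs(Y) ** 2
        Pxy = Pxy + X * np.conj(Y)
    freqs = np.fft.rfftfreq(nperseg, d=1 / fs)
    return freqs, np.abs(Pxy) ** 2 / (Pxx * Pyy)

fs = 360.0                                  # MIT-BIH sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 11 * t)         # component common to both signals
x = shared + 0.3 * rng.normal(size=t.size)
y = shared + 0.3 * rng.normal(size=t.size)
freqs, Cxy = welch_coherence(x, y, fs)
# coherence is near 1 at the shared 11 Hz component, low elsewhere
```

Averaging over several segments is essential: the coherence of a single segment is identically 1.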

Keywords: bundle block, SC, LMNN classifier, welch method, PSD, MIT-BIH, arrhythmia database

Procedia PDF Downloads 250
4772 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition, and security are among the possible fields of utilization. In all these fields, the amount of collected data is increasing quickly, and with this increase, computation speed becomes the critical factor. Data reduction is one of the solutions to this problem. Removing redundancy in rough sets can be achieved with the reduct. Many algorithms for generating the reduct have been developed, but most of them are only software implementations and therefore have many limitations. A microprocessor uses a fixed word length and consumes a lot of time both fetching and processing instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process data faster than software. A reduct is a subset of the condition attributes that provides the discernibility of the objects. For a given decision table, there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes, and every reduct contains all the attributes of the core. In this paper, a hardware implementation of a two-stage greedy algorithm to find one reduct is presented. The decision table is used as the input. The output of the algorithm is the superreduct, which is the reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table.
The algorithm described above has two disadvantages: i) it generates the superreduct instead of the reduct, and ii) the first stage may be unnecessary if the core is empty. For systems focused on fast computation of the reduct, however, the first disadvantage is not a key problem. The core calculation can be achieved with a combinational logic block and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called the 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit for controlling the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC. The execution times of the reduct calculation in hardware and software were compared; the results show an increase in the speed of data processing.
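The two stages, core from singleton discernibility entries, then greedy enrichment into a superreduct, can be sketched on a toy decision table (the table and attribute names are hypothetical):

```python
from itertools import combinations

# Toy decision table: rows are objects, last column is the decision
attrs = ["a", "b", "c"]
table = [
    (1, 0, 0, "yes"),
    (1, 1, 0, "no"),
    (0, 1, 1, "no"),
    (1, 0, 1, "yes"),
]

# Discernibility matrix: for each pair of objects with different decisions,
# the set of condition attributes on which they differ
entries = []
for r1, r2 in combinations(table, 2):
    if r1[-1] != r2[-1]:
        diff = {a for a, v1, v2 in zip(attrs, r1, r2) if v1 != v2}
        if diff:
            entries.append(diff)

# Core = union of singleton entries (the hardware 'singleton detector'):
# an attribute that alone discerns some pair is indispensable
core = set().union(*(e for e in entries if len(e) == 1))

# Greedy superreduct: enrich the core with the most frequent attributes
# until every discernibility entry is covered
reduct = set(core)
while any(not (e & reduct) for e in entries):
    counts = {a: sum(a in e for e in entries if not (e & reduct)) for a in attrs}
    reduct.add(max(counts, key=counts.get))
```

For this table the singleton entry {b} already covers every discernibility entry, so the superreduct equals the core; in general the greedy loop may add removable attributes, which is disadvantage i) above.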

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 189
4771 A Comparative Study of Morphine and Clonidine as an Adjunct to Ropivacaine in Paravertebral Block for Modified Radical Mastectomy

Authors: Mukesh K., Siddiqui A. K., Abbas H., Gupta R.

Abstract:

Background: General anesthesia is the standard for breast onco-surgery. The issue of postoperative pain and the occurrence of nausea and vomiting has prompted the quest for a superior methodology with fewer complications. Over the last couple of years, the paravertebral block (PVB) has acquired huge popularity, either in combination with GA or alone, for anesthetic management. In this study, we aim to evaluate the efficacy of morphine and clonidine as adjuncts to ropivacaine in a paravertebral block in breast cancer patients undergoing modified radical mastectomy. Methods: In this study, a total of 90 patients were divided into three groups (30 each) on the basis of computer-generated randomization. Group C (Control): paravertebral block with 0.25% ropivacaine (19 ml) and 1 ml saline; Group M: paravertebral block with 0.25% ropivacaine (19 ml) + 20 micrograms/kg body weight morphine; Group N: paravertebral block with 0.25% ropivacaine (19 ml) + 1.0 microgram/kg body weight clonidine. Postoperative pain intensity was recorded using the visual analog scale (VAS), and sedation was assessed by the Ramsay Sedation Score (RSS). Results: The VAS was similar at 0 hr, 2 hr, and 4 hr postoperatively among all the groups. There was a significant (p=0.003) difference in VAS from 6 hr to 20 hr postoperatively among the groups, with a significant (p<0.05) difference observed from 8 hr to 20 hr. The time to first requirement of analgesia was significantly (p=0.001) longer in Group N (7.70±1.74) than in Group C (4.43±1.43) and Group M (7.33±2.21). Conclusion: Morphine in the paravertebral block provides better postoperative analgesia. The consumption of rescue analgesia was significantly reduced in the morphine group compared to the clonidine group. The procedure also proved safe, as no complication was encountered with the paravertebral block in our study.

Keywords: ropivacaine, morphine, clonidine, paravertebral block

Procedia PDF Downloads 93
4770 Matching Law in Autoshaped Choice in Neural Networks

Authors: Giselle Maggie Fer Castañeda, Diego Iván González

Abstract:

The objective of this work was to study autoshaped choice behavior in the Donahoe, Burgos, and Palmer (DBP) neural network model and analyze it under the matching law. Autoshaped choice can be viewed as a form of economic behavior, defined as the preference between alternatives according to their relative outcomes. The DBP model is a connectionist proposal that unifies operant and Pavlovian conditioning. This model has been used for more than three decades as a neurobehavioral explanation of conditioning phenomena, as well as a generator of predictions suitable for experimental testing with non-human animals and humans. The study consisted of different simulations in which, in each one, a ratio of reinforcement was established for two alternatives, and the responses (i.e., activations) in each of them were measured. Choice studies with animals have demonstrated that the data generally conform closely to the generalized matching law equation, which states that the response ratio is proportional to the reinforcement ratio; therefore, similar results were expected with the neural networks of the DBP model, since these networks have simulated and predicted various conditioning phenomena. The results were analyzed using the generalized matching law equation, and it was observed that under some contingencies, the data from the networks conformed approximately to the equation. Implications and limitations are discussed.
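Fitting the generalized matching law is a linear regression in log-log coordinates. The sketch below recovers known sensitivity and bias parameters from synthetic ratios standing in for network activations (the parameter values are illustrative):

```python
import numpy as np

# Generalized matching law: log(B1/B2) = a * log(r1/r2) + log(b),
# where a is sensitivity and b is bias. Synthetic data with known
# parameters stands in for the networks' activation ratios.
a_true, b_true = 0.9, 1.2
reinf_ratios = np.array([0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])  # r1/r2
resp_ratios = b_true * reinf_ratios ** a_true                     # B1/B2

# Fit by linear regression in log-log coordinates
slope, intercept = np.polyfit(np.log(reinf_ratios), np.log(resp_ratios), 1)
a_est, b_est = slope, float(np.exp(intercept))
```

With simulation data, a slope near 1 indicates strict matching, undermatching appears as a < 1, and a non-unit b reveals a bias toward one alternative.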

Keywords: matching law, neural networks, computational models, behavioral sciences

Procedia PDF Downloads 43
4769 Impact of Machining Parameters on the Surface Roughness of Machined PU Block

Authors: Louis Denis Kevin Catherine, Raja Aziz Raja Ma’arof, Azrina Arshad, Sangeeth Suresh

Abstract:

Machining parameters are very important in determining the surface quality of any material. In the past decade, new engineering materials were developed for the manufacturing industry, creating a need to investigate the impact of these parameters on their surface roughness. The polyurethane (PU) block is widely used in the automotive industry to manufacture parts such as checking fixtures, which are used to verify the dimensional accuracy of automotive parts. In this paper, design of experiments (DOE) was used to investigate the effect of milling parameters on the PU block. Furthermore, an analysis of the machined surface's chemical composition was carried out using a scanning electron microscope (SEM). It was found that the surface roughness of the PU block is severely affected when the PU undergoes flood machining instead of dry machining. In addition, the step-over and the silicon content were found to be the most significant parameters influencing the surface quality of the PU block.

Keywords: polyurethane (PU), design of experiment (DOE), scanning electron microscope (SEM), surface roughness

Procedia PDF Downloads 490
4768 Tolerating Input Faults in Asynchronous Sequential Machines

Authors: Jung-Min Yang

Abstract:

A method of tolerating input faults for input/state asynchronous sequential machines is proposed. A corrective controller is placed in front of the considered asynchronous machine to realize model matching with a reference model. The value of the external input transmitted to the closed-loop system may be changed by a fault. We address the existence condition for a controller that can counteract the adverse effects of any input fault while maintaining the objective of model matching. A design procedure for constructing the controller is outlined. The proposed reachability condition for the controller design is validated in an illustrative example.

Keywords: asynchronous sequential machines, corrective control, fault tolerance, input faults, model matching

Procedia PDF Downloads 392
4767 An Algorithm to Compute the State Estimation of Bilinear Dynamical Systems

Authors: Abdullah Eqal Al Mazrooei

Abstract:

In this paper, we introduce a mathematical algorithm for estimating the states of bilinear systems. The algorithm uses a special linearization of the second-order term based on the best available information about the state of the system. This technique makes our algorithm a generalization of the well-known Kalman estimators. The system used here is of the bilinear class; the evolution of this model is linear-bilinear in the state of the system. Our algorithm can be used with both linear and bilinear systems. We also introduce a real application of the new algorithm to demonstrate its feasibility and efficiency.
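The linearize-then-filter idea can be sketched for a scalar bilinear model, where treating the bilinear term with the known input gives an effective transition gain for a standard Kalman recursion. This is a generic illustration, not the paper's algorithm; the model coefficients and noise levels are assumptions (and in this scalar case the linearization happens to be exact):

```python
import numpy as np

# Scalar bilinear model (illustrative): x_{k+1} = a x_k + b x_k u_k + w_k,
# y_k = x_k + v_k. The bilinear term gives an effective transition
# F_k = a + b u_k, so a standard Kalman recursion applies.
a, b = 0.9, 0.2
Q, R = 1e-4, 1e-2                     # process / measurement noise variances
rng = np.random.default_rng(0)

x, x_est, P = 1.0, 0.0, 1.0           # true state, estimate, covariance
for k in range(100):
    u = np.sin(0.1 * k)               # known input
    # simulate the true system
    x = a * x + b * x * u + rng.normal(0, np.sqrt(Q))
    y = x + rng.normal(0, np.sqrt(R))
    # predict with the (input-dependent) linearized transition
    F = a + b * u
    x_pred = F * x_est
    P_pred = F * P * F + Q
    # measurement update
    K = P_pred / (P_pred + R)
    x_est = x_pred + K * (y - x_pred)
    P = (1 - K) * P_pred
```

For richer bilinear terms the effective transition would also depend on the current state estimate, which is where the "best available information" linearization comes in.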

Keywords: estimation algorithm, bilinear systems, Kalman filter, second-order linearization

Procedia PDF Downloads 450
4766 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods

Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard

Abstract:

Non-invasive sampling is an alternative to collecting genetic samples directly. Non-invasive samples are collected without manipulating the animal (e.g., scats, feathers, and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping; these errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that can accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons among them. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially on endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two (Capwire and BayesN) for population estimation. The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not surprising given the similarity between those methods' likelihood, pairwise, and clustering algorithms. The matching of ETLM showed almost no similarity with the genotypes matched by the other methods.
The different clustering algorithm and error model of ETLM seem to lead to a more stringent selection, although ETLM's processing time and interface friendliness were the worst among the compared methods. The population estimators performed differently depending on the dataset; there was consensus between the estimators for only one dataset. BayesN showed both higher and lower estimates compared with Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity, i.e., different capture rates between individuals. In these examples, homogeneity of capture rates seems crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An expanded analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems more appropriate for general use, considering a balance of time, interface, and robustness. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.

Keywords: algorithms, genetics, matching, population

Procedia PDF Downloads 115
4765 The Behavior of Masonry Wall Constructed Using Biaxial Interlocking Concrete Block, Solid Concrete Block and Cement Sand Brick Subjected to the Compressive Load

Authors: Fauziah Aziz, Mohd.fadzil Arshad, Hazrina Mansor, Sedat Kömürcü

Abstract:

Masonry is an anisotropic and heterogeneous material due to the presence of different components within the assembly. Normally, the mortar plays a significant role in the compressive behavior of traditional masonry structures. The biaxial interlocking concrete block is a masonry unit built around an interlocking concept. This masonry unit can improve the quality of the construction process, reduce labor costs, reduce the need for highly skilled workmanship, and speed up construction. Interlocking concrete block units normally available on the market are designed to interlock along only one axis (either x or y), are shorter in length, and have low compressive strength. The biaxial interlocking concrete block introduced in this research is a dry-stack concept that differs from the normal interlocking blocks available on the market in its length and in the geometry of its groove and tongue. This material can be used as a non-load-bearing or load-bearing wall, depending on the application of the masonry, but little technical data on it has been produced before. This paper presents findings on the compressive resistance of biaxial interlocking concrete block masonry walls compared to other traditional masonry walls. Two biaxial interlocking concrete block masonry walls, M1 and M2, a solid concrete block wall, M3, and a cement sand brick wall, M4, were tested for compressive resistance. M1 is a masonry wall of hollow biaxial interlocking concrete blocks, M2 is the grouted masonry wall, M3 is a solid concrete block masonry wall, and M4 is a cement sand brick masonry wall. All the samples were tested under static compressive load. The results show that M2 has a higher compressive resistance than M1, M3, and M4, which indicates that the compressive strength of the concrete masonry units plays a significant role in the capacity of the masonry wall.

Keywords: interlocking concrete block, compressive resistance, concrete masonry unit, masonry

Procedia PDF Downloads 138
4764 Development of Variable Order Block Multistep Method for Solving Ordinary Differential Equations

Authors: Mohamed Suleiman, Zarina Bibi Ibrahim, Nor Ain Azeany, Khairil Iskandar Othman

Abstract:

In this paper, a class of variable order fully implicit multistep Block Backward Differentiation Formulas (VOBBDF) using uniform step size is developed for the numerical solution of stiff ordinary differential equations (ODEs). The code combines three multistep block methods of orders four, five, and six. The order selection is based on an approximation of the local errors against a specified tolerance. These methods are constructed to produce two approximate solutions simultaneously at each iteration in order to further increase efficiency. The proposed VOBBDF is validated through numerical results on some standard problems found in the literature, and comparisons are made with the single order Block Backward Differentiation Formula (BBDF). The numerical results show the advantage of using VOBBDF for solving ODEs.

Keywords: block backward differentiation formulas, uniform step size, ordinary differential equations

Procedia PDF Downloads 413
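The order-selection rule described in the abstract, choosing among block methods of orders four to six by comparing estimated local errors against a tolerance, can be sketched roughly as follows (an illustrative interpretation only, not the authors' code; the function name and error values are hypothetical):

```python
def select_order(local_errors, tol):
    """Pick the highest order whose estimated local error meets the
    tolerance; fall back to the lowest order if none does.

    local_errors: dict mapping order -> estimated local error.
    """
    admissible = [order for order, err in sorted(local_errors.items())
                  if err <= tol]
    return admissible[-1] if admissible else min(local_errors)

# e.g. order 5 meets the tolerance 1e-6, order 6 does not:
print(select_order({4: 1e-5, 5: 1e-7, 6: 1e-3}, tol=1e-6))
```

A production variable-order code would also weigh step-size efficiency per order, but the accept/reject logic against the tolerance is the core idea.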
4763 Mean Shift-Based Preprocessing Methodology for Improved 3D Buildings Reconstruction

Authors: Nikolaos Vassilas, Theocharis Tsenoglou, Djamchid Ghazanfarpour

Abstract:

In this work, we explore the capability of the mean shift algorithm as a powerful preprocessing tool for improving the quality of spatial data acquired by airborne scanners over densely built urban areas. On one hand, high-resolution image data corrupted by noise from lossy compression techniques are appropriately smoothed while preserving the optical edges; on the other hand, low-resolution LiDAR data in the form of a normalized Digital Surface Map (nDSM) are upsampled through the joint mean shift algorithm. Experiments on both the edge-preserving smoothing and the upsampling capabilities, using synthetic RGB-z data, show that the mean shift algorithm is superior to bilateral filtering as well as to other classical smoothing and upsampling algorithms. Applying the proposed methodology to the 3D reconstruction of buildings in a pilot region of Athens, Greece, results in a significant visual improvement of the 3D building block model.

Keywords: 3D buildings reconstruction, data fusion, data upsampling, mean shift

Procedia PDF Downloads 290
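The joint spatial-range behaviour that makes mean shift edge-preserving can be illustrated on a 1-D signal (a minimal sketch under simplifying assumptions: flat kernels, a toy signal, and none of the paper's RGB-z machinery):

```python
import numpy as np

def mean_shift_filter(values, spatial_bw=3, range_bw=10.0, n_iter=5):
    """Edge-preserving smoothing of a 1-D signal by mean shift in the
    joint (position, value) domain: each sample moves toward the mean of
    samples that are close both spatially and in value, so averaging
    never mixes samples from opposite sides of a large step."""
    x = np.asarray(values, dtype=float)
    pos = np.arange(len(x))
    v = x.copy()
    for _ in range(n_iter):
        new_v = np.empty_like(v)
        for i in range(len(v)):
            spatial = np.abs(pos - pos[i]) <= spatial_bw   # spatial window
            in_range = np.abs(x - v[i]) <= range_bw        # range window
            new_v[i] = x[spatial & in_range].mean()
        v = new_v
    return v

# A noisy step edge: the flat parts are smoothed, the step stays sharp.
signal = [0, 1, -1, 0, 1, 100, 101, 99, 100, 101]
print(np.round(mean_shift_filter(signal), 1))
```

Contrast this with a plain moving average, which would blur the samples on either side of the step together.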
4762 Investigation of Design Process of an Impedance Matching in the Specific Frequency for Radio Frequency Application

Authors: H. Nabaei, M. Joghataie

Abstract:

In this article, we study the design of impedance matching networks at a specific frequency, 900 MHz, using the commercial software packages CST Studio and ADS. First, we select two impedance values to be matched. Then, using the impedance matching utility in ADS, we simulate the network and derive the values of its elements. Next, we implement the matching network in CST Studio. The simulated results show close agreement between the two tools. We also examine the scattering and impedance parameters of the derived structure. Finally, the layout of the matching network is obtained with the schematic tool of CST Studio. In short, we present the design process of impedance matching networks at a specific frequency.

Keywords: impedance matching, lumped element, transmission line, maximum power transmission, 3D layout

Procedia PDF Downloads 471
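For purely resistive terminations, the lumped-element values of a single L-section match follow from the standard loaded-Q formula; a minimal sketch (the 50 and 200 ohm values are illustrative, not the impedances used in the paper):

```python
import math

def l_match(r_source, r_load, f_hz):
    """Lumped-element L-network matching two purely resistive impedances
    at a single frequency: series inductor on the low-R side, shunt
    capacitor across the high-R side (low-pass topology)."""
    r_low, r_high = sorted((r_source, r_load))
    q = math.sqrt(r_high / r_low - 1)   # loaded Q of the network
    w = 2 * math.pi * f_hz
    series_l = q * r_low / w            # series inductance in henries
    shunt_c = q / (r_high * w)          # shunt capacitance in farads
    return series_l, shunt_c

# e.g. matching 50 ohm to 200 ohm at 900 MHz
L, C = l_match(50, 200, 900e6)
print(f"L = {L * 1e9:.2f} nH, C = {C * 1e12:.2f} pF")
```

Tools such as the ADS matching utility automate exactly this computation (and handle complex impedances and alternative topologies as well).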
4761 Determination of the Optimum Size of Building Stone Blocks: Case Study of Delichai Travertine Mine

Authors: Hesam Sedaghat Nejad, Navid Hosseini, Arash Nikvar Hassani

Abstract:

Determining the optimum block size for high profitability is one of the significant steps in the design of building stone mines. The aim of this study was to determine the optimum dimensions of building stone blocks in the Delichai travertine mine of Damavand, Tehran province, by combining the parameters known to govern optimum block dimensions in building stone, such as the spacing of joints and gaps and the constraints of the extraction equipment, with modeling in Gemcom software. To this end, following simulation of the mine topography, the block model was prepared, and the existing joint set was added to the model so that the spacing of joints and discontinuities could act as a limiting factor. Since only one nearly horizontal joint set with a dip of 5 degrees was present, this factor was effective only in determining the optimum block height; to determine the longitudinal and transverse optimum dimensions of the extracted block, the capacity of the loader available in the mine was therefore taken as the secondary limiting factor. Based on these factors, the optimal block size in this mine was determined to be 3.4 × 4 × 7 meters.

Keywords: building stone, optimum block size, Delichai travertine mine, loader power

Procedia PDF Downloads 331
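The limiting-factor logic described above, joint spacing fixing the block height and loader capacity bounding the block footprint, can be caricatured in a few lines (all numbers and the aspect-ratio assumption are hypothetical illustrations; the paper's actual optimization is done in Gemcom):

```python
def max_block_dims(joint_spacing_m, loader_capacity_t, density_t_m3,
                   aspect=0.5):
    """Illustrative sketch, not the paper's procedure: the joint spacing
    fixes the block height, and the loader capacity caps the block mass,
    which in turn bounds the length x width footprint."""
    height = joint_spacing_m
    max_volume = loader_capacity_t / density_t_m3  # liftable volume, m^3
    footprint = max_volume / height                # length * width, m^2
    width = (footprint * aspect) ** 0.5            # assumed aspect ratio
    length = footprint / width
    return round(length, 2), round(width, 2), height

# hypothetical inputs: 4 m joint spacing, 40 t loader, 2.5 t/m^3 stone
print(max_block_dims(4.0, 40.0, 2.5))
```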
4760 Round Addition DFA on Lightweight Block Ciphers with On-The-Fly Key Schedule

Authors: Hideki Yoshikawa, Masahiro Kaminaga, Arimitsu Shikoda, Toshinori Suzuki

Abstract:

Round addition differential fault analysis (DFA) using operation bypassing is presented for lightweight block ciphers with an on-the-fly key schedule. For 64-bit KLEIN and 64-bit LED, it is shown that a single pair of correct and faulty ciphertexts suffices to derive the secret master key. For PRESENT, one correct ciphertext and two faulty ciphertexts are required to reconstruct the secret key.

Keywords: differential fault analysis (DFA), round addition, block cipher, on-the-fly key schedule

Procedia PDF Downloads 677
4759 Effect of Oral Clonidine Premedication on Subarachnoid Block Characteristics of 0.5% Hyperbaric Bupivacaine for Laparoscopic Gynecological Procedures – A Randomized Controlled Study

Authors: Buchh Aqsa, Inayat Umar

Abstract:

Background: Clonidine, an α2 agonist, possesses several properties that make it a valuable adjuvant to spinal anesthesia. This study aimed to evaluate the clinical effects of oral clonidine premedication for laparoscopic gynecological procedures under subarachnoid block. Patients and methods: Sixty-four adult female patients of ASA physical status I and II, aged 25 to 45 years and scheduled for laparoscopic gynecological procedures under subarachnoid block, were randomized into two equal, comparable groups of 32 patients each to receive either oral clonidine 100 µg (Group I) or placebo (Group II) 90 minutes before the procedure. Subarachnoid block was established with 3.5 ml of 0.5% hyperbaric bupivacaine in all patients. Onset and duration of sensory and motor block, maximum cephalad level, and the regression time to the S1 sensory level were assessed as primary endpoints. Sedation, hemodynamic variability, and respiratory depression or any other side effects were evaluated as secondary outcomes. Results: The demographic profiles were comparable. The intraoperative hemodynamic parameters showed significant differences between the groups. Oral clonidine accelerated the onset of sensory and motor blockade and extended the duration of sensory block (216.4 ± 23.3 min versus 165 ± 37.2 min, P < 0.05). The duration of motor block showed no significant difference. The sedation score was greater than 2 in the clonidine group compared with the control group. Conclusion: Oral clonidine premedication extended the duration of sensory analgesia with arousable sedation. It also prevented post-spinal shivering after the subarachnoid block.

Keywords: oral clonidine, subarachnoid block, sensory analgesia, laparoscopic gynaecological

Procedia PDF Downloads 54
4758 Block Implicit Adams Type Algorithms for Solution of First Order Differential Equation

Authors: Asabe Ahmad Tijani, Y. A. Yahaya

Abstract:

This paper considers the derivation of implicit Adams-Moulton type methods with k = 4 and 5. We adopt interpolation and collocation of a power series approximation to generate the continuous formula, which is evaluated at off-grid and some grid points within the step length to generate the proposed block schemes. The schemes are investigated and found to be consistent and zero-stable. Finally, the methods are tested in numerical experiments to ascertain their level of accuracy.

Keywords: Adams-Moulton type (AMT), off-grid, block method, consistent and zero stable

Procedia PDF Downloads 455
4757 Open-Loop Vector Control of Induction Motor with Space Vector Pulse Width Modulation Technique

Authors: Karchung, S. Ruangsinchaiwanich

Abstract:

This paper presents an open-loop vector control method for an induction motor using the space vector pulse width modulation (SVPWM) technique. Closed-loop speed control is normally preferred and is believed to be more accurate; however, it requires a position sensor to track the rotor position, which is undesirable in certain applications. This paper demonstrates the performance of a three-phase induction motor with the simplest control algorithm, without a position sensor or an estimation block to estimate the rotor position for sensorless control. The motor stator currents are measured and transformed to the synchronously rotating (d-q axis) frame by means of the Clarke and Park transformations. The actual control happens in this frame, where the measured currents are compared with the reference currents. The error signal is fed to a conventional PI controller, and the corrected d-q voltage is generated. The controller outputs are transformed back to three-phase voltages and fed to the SVPWM block, which generates the PWM signal for the voltage source inverter. The open-loop vector control model, together with the SVPWM algorithm, is modeled in MATLAB/Simulink and experimentally validated on a TMS320F28335 DSP board.

Keywords: electric drive, induction motor, open-loop vector control, space vector pulse width modulation technique

Procedia PDF Downloads 121
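The frame transformations at the heart of this scheme are compact enough to state directly. A sketch of the amplitude-invariant Clarke and Park transforms follows (the angle theta is assumed given here, whereas an open-loop scheme would generate it by integrating the commanded frequency):

```python
import math

def clarke(ia, ib, ic):
    """Amplitude-invariant Clarke transform: three phase currents to
    the stationary alpha-beta frame."""
    i_alpha = (2 * ia - ib - ic) / 3
    i_beta = (ib - ic) / math.sqrt(3)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta):
    """Park transform: stationary alpha-beta frame to the rotating
    d-q frame at angle theta."""
    i_d = i_alpha * math.cos(theta) + i_beta * math.sin(theta)
    i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
    return i_d, i_q

# Balanced sinusoidal phase currents become constant d-q values when
# the rotating frame tracks the same angle theta.
theta = 0.7
ia = math.cos(theta)
ib = math.cos(theta - 2 * math.pi / 3)
ic = math.cos(theta + 2 * math.pi / 3)
print(park(*clarke(ia, ib, ic), theta))
```

This is why the PI controllers in the d-q frame can regulate DC-like quantities instead of sinusoids.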
4756 Temporal Characteristics of Human Perception to Significant Variation of Block Structures

Authors: Kuo-Cheng Liu

Abstract:

Recent research has successfully analyzed image structures in the spatial domain and used them to derive the visual masking needed to accurately estimate the visibility thresholds of an image. If the structural properties of the video sequence in the temporal domain are also taken into account to estimate the temporal masking, improvement in the assessment of spatio-temporal visibility thresholds can reasonably be expected. In this paper, the temporal characteristics of human perception of changes in block structures along the time axis are analyzed. These characteristics are represented in terms of the significant variation in block structures for the analysis of the human visual system (HVS). Here, the block structure in each frame is computed by combining pattern masking and contrast masking. Contrast masking always overestimates the visibility thresholds of edge regions and underestimates those of texture regions, while pattern masking is weak on a uniform background and strong on a complex background with spatial patterns. Considering the significant variation of block structures between successive frames, we extend the block structures of images in the spatial domain to those of video sequences in the temporal domain to analyze the relation between the inter-frame variation of structures and temporal masking. A subjective viewing test with a fair rating process is designed to evaluate the consistency of the temporal characteristics with the HVS under a specified viewing condition.

Keywords: temporal characteristic, block structure, pattern masking, contrast masking

Procedia PDF Downloads 381
4755 Modification of Newton Method in Two Point Block Backward Differentiation Formulas

Authors: Khairil I. Othman, Nur N. Kamal, Zarina B. Ibrahim

Abstract:

In this paper, we present a modified Newton method as a new strategy for improving the efficiency of the Two Point Block Backward Differentiation Formulas (BBDF) when solving stiff systems of ordinary differential equations (ODEs). These methods are constructed to produce two approximate solutions simultaneously at each iteration. The detailed implementation of the predictor-corrector BBDF in PE(CE)2 mode with the modified Newton iteration is discussed. The proposed modification of BBDF is validated through numerical results on some standard problems found in the literature, and comparisons are made with the existing Block Backward Differentiation Formula. Numerical results show the advantage of using the new strategy in improving the accuracy of the solution when solving stiff ODEs.

Keywords: newton method, two point, block, accuracy

Procedia PDF Downloads 330
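The role of the Newton iteration inside an implicit formula can be illustrated with the simplest implicit method, backward Euler, on a scalar stiff problem (a minimal sketch; the paper's two-point BBDF solves a larger nonlinear system for two values at once, with a modified, less frequently updated Jacobian):

```python
def backward_euler_newton(f, dfdy, y0, t0, t1, n, newton_iters=3):
    """Solve y' = f(t, y) with backward Euler. At each step the implicit
    equation g(Y) = Y - y_n - h*f(t_{n+1}, Y) = 0 is solved by a few
    Newton iterations, using the previous value as predictor."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        t_next = t + h
        Y = y                         # initial guess (simple predictor)
        for _ in range(newton_iters):
            g = Y - y - h * f(t_next, Y)
            dg = 1 - h * dfdy(t_next, Y)   # scalar Jacobian of g
            Y -= g / dg
        t, y = t_next, Y
    return y

# Stiff test problem: y' = -50*(y - cos(t)), y(0) = 0
import math
y_end = backward_euler_newton(lambda t, y: -50 * (y - math.cos(t)),
                              lambda t, y: -50.0, 0.0, 0.0, 1.0, 50)
print(y_end)
```

With a stiffness constant of 50, an explicit method at this step size would blow up, while the Newton-based implicit step remains stable.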
4754 Automated Feature Detection and Matching Algorithms for Breast IR Sequence Images

Authors: Chia-Yen Lee, Hao-Jen Wang, Jhih-Hao Lai

Abstract:

In recent years, infrared (IR) imaging has been considered a potential tool for assessing the efficacy of chemotherapy and for early detection of breast cancer. Regions of tumor growth exhibit high metabolic rates and angiogenesis, which lead to locally elevated temperatures. Observing the differences between heat maps over the long term helps assess the growth of breast cancer cells and detect breast cancer earlier, and multi-session infrared image alignment is a necessary step toward this goal. Detecting and matching representative feature points are essential steps toward good image registration and quantitative analysis. However, infrared images lack clear boundaries, and the subject's posture differs between acquisitions. Adhesive markers cannot remain on the body surface for a very long period, and anatomic fiducial markers are hard to find on the body surface; in other words, it is difficult to detect and match features across an IR image sequence. In this study, automated feature detection and matching algorithms are developed for two types of automatic feature points: vascular branch points and modified Harris corners. Preliminary results show that the proposed method identifies representative feature points on IR breast images with 98% accuracy and matches them with 93% accuracy.

Keywords: Harris corner, infrared image, feature detection, registration, matching

Procedia PDF Downloads 283
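As an aside on the second feature type, the Harris corner response can be sketched in a few lines of NumPy (a toy version with simple finite differences and a flat 3×3 window instead of Gaussian weighting, and not the authors' modified detector):

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response for a grayscale image: large where both
    eigenvalues of the local gradient structure tensor are large."""
    img = np.asarray(img, dtype=float)
    iy, ix = np.gradient(img)                    # row and column gradients
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box(a):
        # sum of each pixel's 3x3 neighbourhood (zero-padded borders)
        p = np.pad(a, 1)
        return sum(p[r:r + a.shape[0], c:c + a.shape[1]]
                   for r in range(3) for c in range(3))

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# A white square on black: the response peaks near the square's corners.
img = np.zeros((10, 10))
img[3:7, 3:7] = 1.0
r = harris_response(img)
print(np.unravel_index(np.argmax(r), r.shape))
```

Edges score poorly here because one eigenvalue vanishes (det is small), which is exactly why corner-like points are more reliable anchors for matching than edge points.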