Search results for: 3D-Binary Matrix Reconstruction
729 Machine Learning Approach for Identifying Dementia from MRI Images
Authors: S. K. Aruna, S. Chitra
Abstract:
This research paper presents a framework for classifying Magnetic Resonance Imaging (MRI) images for dementia. Dementia, an age-related cognitive decline, is indicated by degeneration of cortical and sub-cortical structures. Characterizing these morphological changes helps in understanding disease development and contributes to early prediction and prevention. Modelling that captures the brain's structural variability and remains valid for disease classification and interpretation is very challenging. Features are extracted using Gabor filters at orientations of 0, 30, 60, and 90 degrees and the Gray Level Co-occurrence Matrix (GLCM). The features are then normalized and fused. Independent Component Analysis (ICA) is used for feature selection. A Support Vector Machine (SVM) classifier with different kernels is evaluated for its efficiency in classifying dementia. The framework is evaluated on MRI images from the OASIS dataset for identifying dementia. Results show that the proposed feature-fusion classifier achieves higher classification accuracy.
Keywords: Magnetic resonance imaging, dementia, Gabor filter, gray level co-occurrence matrix, support vector machine.
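The pipeline outlined above (Gabor responses at four orientations, GLCM statistics, normalization and fusion, ICA, SVM) can be sketched compactly. The following is a minimal illustration using scikit-image and scikit-learn; the Gabor frequency, GLCM distances, number of ICA components, and SVM kernel are assumptions for illustration, not values taken from the paper, and FastICA is used here only as a stand-in for the ICA-based feature selection step.

```python
# Minimal sketch of the Gabor + GLCM feature-fusion pipeline described above.
# Parameter values (Gabor frequency, GLCM distance, ICA components, kernel)
# are illustrative assumptions, not taken from the paper.
import numpy as np
from skimage.filters import gabor
from skimage.feature import graycomatrix, graycoprops
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import FastICA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

def gabor_features(img, frequency=0.2):
    # Mean and variance of the Gabor response at 0, 30, 60, 90 degrees.
    feats = []
    for theta_deg in (0, 30, 60, 90):
        real, _ = gabor(img, frequency=frequency, theta=np.deg2rad(theta_deg))
        feats += [real.mean(), real.var()]
    return feats

def glcm_features(img):
    # Haralick-style statistics from a gray-level co-occurrence matrix.
    glcm = graycomatrix((img * 255).astype(np.uint8), distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return [graycoprops(glcm, p).mean() for p in props]

def fused_features(images):
    # Normalize each feature set to [0, 1] and concatenate (feature fusion).
    gab = MinMaxScaler().fit_transform([gabor_features(im) for im in images])
    glc = MinMaxScaler().fit_transform([glcm_features(im) for im in images])
    return np.hstack([gab, glc])

# images: 2-D MRI slices scaled to [0, 1]; labels: demented / non-demented.
clf = make_pipeline(FastICA(n_components=8, random_state=0), SVC(kernel="rbf"))
# clf.fit(fused_features(train_images), train_labels)   # training data not shown
```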
728 Comparative Study on Recent Integer DCTs
Authors: Sakol Udomsiri, Masahiro Iwahashi
Abstract:
This paper presents a comparative study of recent integer DCTs and a new method to construct a low-sensitivity structure of the integer DCT for colored input signals. The method uses the sensitivity of the multiplier coefficients to finite word length as an indicator of how word-length truncation affects the quality of the output signal. This sensitivity is also evaluated theoretically as a function of the auto-correlation and covariance matrix of the input signal. The structure of the integer DCT algorithm is optimized by combining the less sensitive lifting-structure types of IRT, and the combination is evaluated through the sensitivity of the multiplier coefficients to finite word length, expressed as a function of the covariance matrix of the input signal. The effectiveness of the optimum combination of IRT in the integer DCT algorithm is confirmed by the quality improvement over the existing case. As a result, the optimum combination of IRT in each integer DCT algorithm evidently improves output signal quality while remaining compatible with the existing one.
Keywords: DCT, sensitivity, lossless, word length.
727 Laboratory Investigation of the Pavement Condition in Lebanon: Implementation of Reclaimed Asphalt Pavement in the Base Course and Asphalt Layer
Authors: Marinelle El-Khoury, Lina Bouhaya, Nivine Abbas, Hassan Sleiman
Abstract:
The road network in the north of Lebanon is a prime example of the lack of pavement design and execution in Lebanon. These roads show major distresses and hence should be tested and evaluated. The aim of this research is to investigate and determine the deficiencies in road surface design in Lebanon, and to propose an environmentally friendly asphalt mix design. This paper consists of several parts: (i) evaluating pavement performance and structural behavior, (ii) identifying the distresses using visual examination followed by laboratory tests, (iii) deciding the optimal solution where rehabilitation or reconstruction is required, and finally (iv) identifying a sustainable method that uses recycled material in the proposed mix. The asphalt formula contains Reclaimed Asphalt Pavement (RAP) in the base course layer and in the asphalt layer. Visual inspection of the roads in Tripoli shows that these roads face a high level of distress severity. Consequently, the pavement should be reconstructed rather than simply rehabilitated. Coring was done to determine the pavement layer thickness. The results were compared to the American Association of State Highway and Transportation Officials (AASHTO) design methodology and showed that the existing asphalt thickness is lower than the required asphalt thickness. Prior to the pavement reconstruction, the road materials were tested according to American Society for Testing and Materials (ASTM) specifications to identify whether the materials are suitable. Accordingly, the ASTM tests performed on the base course were sieve analysis, Atterberg limits, modified Proctor, Los Angeles abrasion, and California Bearing Ratio (CBR) tests. Results show a CBR value higher than 70%; hence, these aggregates could be used as a base course layer. The asphalt layer was also tested, and the results of the Marshall flow and stability tests meet the ASTM specifications. In the last section, an environmentally friendly mix is proposed. An optimal RAP percentage of 30%, which produced a well-graded base course and asphalt mix, was determined through a series of trials.
Keywords: Asphalt mix, reclaimed asphalt pavement, California bearing ratio, sustainability.
726 Promising Immobilization of Cadmium and Lead inside Ca-rich Glass-ceramics
Authors: A. Karnis, L. Gautron
Abstract:
Considering the toxicity of heavy metals and their accumulation in domestic wastes, immobilization of lead and cadmium is envisaged inside glass-ceramics. This work focuses on calcium-rich phases embedded in a glassy matrix. Glass-ceramics were synthesized from glasses doped with 12 wt% and 16 wt% of PbO or CdO. They were observed and analyzed by Electron MicroProbe Analysis (EMPA) and Analytical Scanning Electron Microscopy (ASEM). Structural characterization of the samples was performed by powder X-ray diffraction. Diopside crystals of CaMgSi2O6 composition are shown to incorporate significant amounts of cadmium (up to 9 wt% of CdO). Two new crystalline phases are observed with very high Cd or Pb contents: about 40 wt% CdO for the cadmium-rich phase and near 60 wt% PbO for the lead-rich phase. We present a complete chemical and structural characterization of these phases. They represent a promising route for the immobilization of toxic elements like Cd or Pb, since glass-ceramics are known to offer a "double barrier" protection (metal-rich crystals embedded in a glass matrix) against metal release into the environment.
Keywords: Cadmium, Calcium-rich phases, Diopside, Domestic wastes, Fly ashes, Glass-ceramics, Lead, Municipal Solid Waste Incineration.
725 Performance Comparison of Particle Swarm Optimization with Traditional Clustering Algorithms used in Self-Organizing Map
Authors: Anurag Sharma, Christian W. Omlin
Abstract:
Self-organizing map (SOM) is a well-known data reduction technique used in data mining. It can reveal structure in data sets through data visualization that is otherwise hard to detect from raw data alone. However, interpretation through visual inspection is prone to errors and can be very tedious. There are several techniques for the automatic detection of clusters of code vectors found by SOM, but they generally do not take into account the distribution of code vectors; this may lead to unsatisfactory clustering and poor definition of cluster boundaries, particularly where the density of data points is low. In this paper, we propose the use of an adaptive heuristic particle swarm optimization (PSO) algorithm for finding cluster boundaries directly from the code vectors obtained from SOM. The application of our method to several standard data sets demonstrates its feasibility. The PSO algorithm utilizes the so-called U-matrix of SOM to determine cluster boundaries; the results of this novel automatic method compare very favorably with boundary detection through traditional algorithms, namely k-means and hierarchical clustering, which are normally used to interpret the output of SOM.
Keywords: cluster boundaries, clustering, code vectors, data mining, particle swarm optimization, self-organizing maps, U-matrix.
724 Quick Sequential Search Algorithm Used to Decode High-Frequency Matrices
Authors: Mohammed M. Siddeq, Mohammed H. Rasheed, Omar M. Salih, Marcos A. Rodrigues
Abstract:
This research proposes a data encoding and decoding method based on the Matrix Minimization algorithm, which is applied to high-frequency coefficients for compression/encoding. The algorithm starts by converting every three coefficients into a single value based on three different keys. The decoding/decompression uses the Quick Sequential Search (QSS) Decoding Algorithm presented in this research, which relies on sequential search to recover the exact coefficients. In the next step, the decoded data are saved in an auxiliary array. The basic idea behind the auxiliary array is to save all possible decoded coefficients, so that another algorithm, such as a conventional sequential search, could retrieve the encoded/compressed data independently of the proposed algorithm. The experimental results show that the proposed decoding algorithm retrieves the original data faster than conventional sequential search algorithms.
Keywords: Matrix Minimization Algorithm, Decoding Sequential Search Algorithm, image compression, Discrete Cosine Transform, Discrete Wavelet Transform.
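The encoding/decoding idea described above can be illustrated with a toy example: three coefficients are folded into one value with three keys, and a QSS-style decoder searches sequentially for the triple that reproduces that value. The key values and coefficient range below are assumptions chosen so the mapping is invertible; the paper's actual key generation and the auxiliary-array speedup are not reproduced.

```python
# Illustrative sketch of the Matrix Minimization idea described above: each
# triple of high-frequency coefficients is folded into one value with three
# keys, and decoding searches sequentially for the triple that reproduces it.
# The key values and the coefficient range are assumptions for illustration.
import itertools

KEYS = (441, 21, 1)            # assumed integer keys giving a one-to-one mapping
COEFF_RANGE = range(-10, 11)   # assumed range of quantized high-frequency coefficients

def encode(triple):
    a, b, c = triple
    return KEYS[0] * a + KEYS[1] * b + KEYS[2] * c

def qss_decode(value):
    # Sequential search over candidate triples; the unique exact match is returned.
    for cand in itertools.product(COEFF_RANGE, repeat=3):
        if encode(cand) == value:
            return cand
    raise ValueError("no matching triple found")

original = (3, -7, 5)
restored = qss_decode(encode(original))
assert restored == original
```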
723 Blind Channel Estimation for Frequency Hopping System Using Subspace Based Method
Authors: M. M. Qasaymeh, M. A. Khodeir
Abstract:
Subspace channel estimation methods have been studied widely; the subspace of the covariance matrix is decomposed to separate the signal subspace from the noise subspace. The decomposition is normally done by using either the eigenvalue decomposition (EVD) or the singular value decomposition (SVD) of the auto-correlation matrix (ACM). However, the subspace decomposition process is computationally expensive. This paper considers the estimation of the multipath slow frequency hopping (FH) channel using a noise-subspace-based method. In particular, an efficient method is proposed to estimate the multipath time delays by applying the multiple signal classification (MUSIC) algorithm to the null space extracted by the rank-revealing LU (RRLU) factorization. The RRLU provides precise information about the numerical null space and the rank, which are important tools in linear algebra. The simulation results demonstrate the effectiveness of the proposed method, which reduces the computational complexity to approximately half that of RRQR-based methods while keeping the same performance.
Keywords: Time Delay Estimation, RRLU, RRQR, MUSIC, LS-ESPRIT, Frequency Hopping.
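As a rough illustration of the subspace idea in this abstract, the sketch below estimates multipath delays with a MUSIC pseudospectrum built from the noise subspace of the sample auto-correlation matrix. It uses the standard EVD to obtain that subspace (the baseline the paper improves on); the RRLU-based null-space extraction that is the paper's actual contribution is not implemented, and the signal model, delays, known number of paths, and noise level are assumptions.

```python
# MUSIC-style delay estimation from frequency-domain channel snapshots.
# EVD is used for the noise subspace (baseline); the paper replaces it by RRLU.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
K, snapshots = 64, 200                      # frequency bins, snapshots (assumed)
f = np.arange(K) / K                        # normalized frequency grid
true_delays = np.array([3.0, 7.5])          # multipath delays in samples (assumed)

def steering(tau):
    return np.exp(-2j * np.pi * f * tau)

# Simulated frequency-domain snapshots: random path gains plus noise.
A = np.stack([steering(t) for t in true_delays], axis=1)
gains = (rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots)))
X = A @ gains + 0.05 * (rng.standard_normal((K, snapshots))
                        + 1j * rng.standard_normal((K, snapshots)))

R = X @ X.conj().T / snapshots              # sample auto-correlation matrix (ACM)
eigvals, eigvecs = np.linalg.eigh(R)        # EVD, eigenvalues ascending
En = eigvecs[:, : K - len(true_delays)]     # noise subspace (smallest eigenvalues)

taus = np.linspace(0.0, 15.0, 1501)
pseudo = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in taus])
peaks, _ = find_peaks(pseudo)
est = taus[peaks[np.argsort(pseudo[peaks])[-2:]]]   # two highest peaks ~ true delays
print("estimated delays:", np.sort(est))
```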
722 Interference Reduction Technique in Multistage Multiuser Detector for DS-CDMA System
Authors: Lokesh Tharani, R. P. Yadav
Abstract:
This paper presents results related to an interference reduction technique in a multistage multiuser detector for an asynchronous DS-CDMA system. To meet the real-time requirements of asynchronous multiuser detection, a bit-streaming, cascade architecture is used. Asynchronous multiuser detection involves block-based computations and matrix inversions. The paper covers iterative suboptimal schemes that have been studied to decrease the computational complexity, eliminate the need for matrix inversions, decrease the execution time, reduce the memory requirements, and use a joint estimation and detection process that gives better performance than independent parameter estimation. The stages of the iteration are cascaded, and bits are processed in a streaming fashion. The simulation has been carried out for an asynchronous DS-CDMA system by varying one parameter, the number of users. The simulation results show that the system gives the optimum bit error rate (BER) at the third stage for 15 users.
Keywords: Multi-user detection (MUD), multiple access interference (MAI), near-far effect, decision feedback detector, successive interference cancellation (SIC) detector, and parallel interference cancellation (PIC) detector.
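A toy sketch of one interference-cancellation family mentioned above, parallel interference cancellation (PIC), is given below for a synchronous DS-CDMA model with 15 users; the paper's asynchronous, bit-streaming cascade architecture and joint estimation/detection are not modeled, and the spreading codes, noise level, and number of stages are assumptions.

```python
# Toy multistage parallel interference cancellation (PIC) for synchronous DS-CDMA.
import numpy as np

rng = np.random.default_rng(1)
n_users, spread_len, n_stages = 15, 31, 3           # assumed parameters
S = rng.choice([-1.0, 1.0], size=(spread_len, n_users)) / np.sqrt(spread_len)
b = rng.choice([-1.0, 1.0], size=n_users)           # transmitted bits
r = S @ b + 0.3 * rng.standard_normal(spread_len)   # received chip vector

b_hat = np.sign(S.T @ r)                             # stage 0: matched filter bank
for _ in range(n_stages):
    # For each user, subtract the reconstructed interference of all other
    # users from r, then re-detect with that user's matched filter.
    total = (S @ b_hat)[:, None]                     # sum of all reconstructed users
    interference = total - S * b_hat                 # column k: interference seen by user k
    b_hat = np.sign(np.sum(S * (r[:, None] - interference), axis=0))

print("bit errors:", int(np.sum(b_hat != b)))
```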
721 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array
Authors: Yanping Liao, Zenan Wu, Ruigang Zhao
Abstract:
Frequency diverse array (FDA) beamforming is a technology developed in recent years; its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is always required to have strong concentration, high resolution, and a low sidelobe level to form point-to-point interference in the concentrated set. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset, improving the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shaped beam with more concentrated energy, and its resolution and sidelobe-level performance are improved. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from finite-time snapshot data. When the number of snapshots is limited, the algorithm suffers from an underestimation problem: the estimation error of the covariance matrix causes beam distortion, so the output pattern cannot form a dot-shaped beam, and main-lobe deviation and high sidelobe levels also appear. To address these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then the eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the interference-subspace and noise-subspace eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing their divergence and improving the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm enables the multi-carrier FDA to form a dot-shaped beam with limited snapshots, reduces the sidelobe level, improves the robustness of beamforming, and achieves better performance.
Keywords: Multi-carrier frequency diverse array, adaptive beamforming, correction index, limited snapshot, robust.
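The correction step described in the abstract can be illustrated on a generic array: eigendecompose the snapshot-starved sample covariance, exponentially temper the small noise-subspace eigenvalues, rebuild the covariance, and form LCMV/MVDR weights. In the sketch below a plain uniform-linear-array steering vector stands in for the FDA angle-range steering vector, and the specific correction law (a fractional power around the noise-eigenvalue mean) is an assumption, not the paper's exact correction index.

```python
# Eigenvalue correction of a poorly estimated covariance, then LCMV/MVDR weights.
import numpy as np

rng = np.random.default_rng(2)
M, snapshots, n_sources = 10, 8, 2                   # few snapshots on purpose

def steer(theta_deg):
    # Half-wavelength ULA steering vector (stand-in for the FDA angle-range vector).
    return np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(theta_deg)))

A = np.stack([steer(t) for t in (-20.0, 30.0)], axis=1)    # interference directions
sig = rng.standard_normal((n_sources, snapshots)) + 1j * rng.standard_normal((n_sources, snapshots))
X = A @ sig + 0.1 * (rng.standard_normal((M, snapshots))
                     + 1j * rng.standard_normal((M, snapshots)))
R = X @ X.conj().T / snapshots                        # limited-snapshot covariance

lam, V = np.linalg.eigh(R)                            # ascending eigenvalues
lam = np.clip(lam, 1e-10 * lam.max(), None)           # guard against numerical negatives
noise = slice(0, M - n_sources)                       # small (noise-subspace) eigenvalues
corrected = lam.copy()
corrected[noise] = lam[noise].mean() * (lam[noise] / lam[noise].mean()) ** 0.1
R_corr = (V * corrected) @ V.conj().T                 # rebuilt, better-conditioned covariance

a0 = steer(0.0)                                       # look direction
w = np.linalg.solve(R_corr, a0)
w /= a0.conj() @ w                                    # MVDR/LCMV weights: unit gain at a0
print("output power:", float(np.real(w.conj() @ R @ w)))
```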
720 Detecting HCC Tumor in Three Phasic CT Liver Images with Optimization of Neural Network
Authors: Mahdieh Khalilinezhad, Silvana Dellepiane, Gianni Vernazza
Abstract:
The aim of this work is to build a model based on tissue characterization that is able to discriminate pathological from non-pathological regions in three-phasic CT images. Based on feature selection in the different phases, we design a neural network system with an optimal number of neurons in the hidden layer. Our approach consists of three steps: feature selection, feature reduction, and classification. For each region of interest (ROI), six distinct sets of texture features are extracted: first-order histogram parameters, absolute gradient, run-length matrix, co-occurrence matrix, autoregressive model, and wavelet, for a total of 270 texture features. When analyzing the different phases, we show that the injected liquid causes changes in the most relevant features of each region. Our results demonstrate that, for detecting HCC tumors, phase 3 is the best for most of the features that we apply to the classification algorithm. The detection accuracy between the pathological and healthy classes, according to our method, is obtained with the first-order histogram parameters: 85% in phase 1, 95% in phase 2, and 95% in phase 3.
Keywords: Feature selection, Multi-phasic liver images, Neural network, Texture analysis.
719 A Model of Market Segmentation for the Customers of Mellat Bank in Iran
Authors: Nader Gharibnavaz, Hossein Yazdi
Abstract:
If an organization like Mellat Bank wants to identify its customer market completely in order to reach its specified goals, it can segment the market to offer the right product package to the right segment. Our objective is to offer a segmentation model for the Iranian banking market from Mellat Bank's point of view. The methodology of this project combines "segmentation on the basis of four part-quality variables" and "segmentation on the basis of difference in means". The required data are gathered from e-systems and the researcher's personal observation. Finally, the research recommends that the organization first form a four-dimensional matrix with 756 segments using four variables, namely value-based, behavioral, activity style, and activity level; then, in a second step, calculate the mean profit for every cell of the matrix at two distinguished work levels (α1: normal condition and α2: high-pressure condition) and compare the segments by checking two conditions: (1) homogeneity of every segment with its sub-segments and (2) heterogeneity with other segments, so that the necessary segmentation process can be carried out. Lastly, the model should be tested and updated (further explained with an operational example and a feedback algorithm) because of the dynamic environment, technology, and banking system.
Keywords: market segmentation model, banking system, Mellat bank.
718 Signal Reconstruction Using Cepstrum of Higher Order Statistics
Authors: Adnan Al-Smadi, Mahmoud Smadi
Abstract:
This paper presents an algorithm for reconstructing the phase and magnitude responses of an impulse response when only the output data are available. The system is driven by a zero-mean, independent identically distributed (i.i.d.) non-Gaussian sequence that is not observed. The additive noise is assumed to be Gaussian. This is an important problem in many practical applications across science and engineering, such as biomedical, seismic, and speech signal processing. The method is based on evaluating the bicepstrum of the third-order statistics of the observed output data. Simulation results are presented that demonstrate the performance of this method.
Keywords: Cepstrum, bicepstrum, third-order statistics.
717 The Effects of Alkalization to the Mechanical Properties of the Ijuk Fiber Reinforced PLA Biocomposites
Authors: Mochamad Chalid, Imam Prabowo
Abstract:
Today, pollution due to non-degradable materials such as plastics has led to studies on the development of environmentally friendly materials. Because of the biodegradability obtained from natural sources, polylactic acid (PLA) and ijuk fiber are interesting candidates for modification into a composite. This material is also expected to reduce the impact of environmental pollution. Surface modification of ijuk fiber through alkalization with a 0.25 M NaOH solution for 30 minutes was aimed at enhancing its compatibility with PLA, in order to improve composite properties such as the mechanical properties. Alkalization of the ijuk fibers removes some surface components such as lignin, wax, and hemicellulose, so pores clearly appear on the surface and the density and diameter of the ijuk fibers decrease. The change in the ijuk fiber properties leads to improved mechanical properties of the ijuk-fiber-reinforced PLA composites through stronger mechanical interlocking with the PLA matrix. In addition, to enhance the distribution of the fibers in the PLA matrix, stirring during DCM solvent evaporation from the mixture of the ijuk fibers and the dissolved PLA can reduce the amount of trapped voids and fiber pull-out phenomena, which would otherwise decrease the mechanical properties of the composite.
Keywords: Polylactic acid, Arenga pinnata, alkalization, compatibility, adhesion, morphology, mechanical properties, volume fraction, distribution.
716 Computer Aided Diagnostic System for Detection and Classification of a Brain Tumor through MRI Using Level Set Based Segmentation Technique and ANN Classifier
Authors: Atanu K Samanta, Asim Ali Khan
Abstract:
Due to the acquisition of huge amounts of brain tumor magnetic resonance images (MRI) in clinics, it is very difficult for radiologists to manually interpret and segment these images within a reasonable span of time. Computer-aided diagnosis (CAD) systems can enhance the diagnostic capabilities of radiologists and reduce the time required for accurate diagnosis. An intelligent computer-aided technique for automatic detection of a brain tumor through MRI is presented in this paper. The technique uses the following computational methods: the level set method for segmentation of the brain tumor from other brain parts, extraction of features from the segmented tumor portion using the Gray Level Co-occurrence Matrix (GLCM), and an Artificial Neural Network (ANN) to classify brain tumor images according to their respective types. The entire work is carried out on 50 images covering five types of brain tumor. The overall classification accuracy using this method is found to be 98%, which is significantly good.
Keywords: Artificial neural network, ANN, brain tumor, computer-aided diagnostic, CAD system, gray-level co-occurrence matrix, GLCM, level set method, tumor segmentation.
715 An Investigation of a Three-Dimensional Constitutive Model of Gas Diffusion Layers in Polymer Electrolyte Membrane Fuel Cells
Authors: Yanqin Chen, Chao Jiang, Chongdu Cho
Abstract:
This research presents the three-dimensional mechanical characteristics of a commercial gas diffusion layer through experiment and simulation results. Although the mechanical performance of gas diffusion layers has attracted much attention, its reliability and accuracy are still a major challenge. Simulation analysis methods benefit the extensive commercial development of the gas diffusion layer and the overall stress analysis of proton electrolyte membrane fuel cells during the pre-production design period. Therefore, in this paper, a three-dimensional constitutive model of a commercial gas diffusion layer, including its material stiffness matrix parameters, is developed and coded as a user-defined material model in commercial finite element software for simulation. The model is then validated by comparing experimental results with simulation outcomes. Both the experimental data and the simulation results show good agreement with each other, with high accuracy.
Keywords: Gas diffusion layer, proton electrolyte membrane fuel cell, stiffness matrix, three-dimensional mechanical characteristics, user-defined material model.
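As one plausible form of the "material stiffness matrix parameters" mentioned above, the sketch below assembles a generic 3-D orthotropic stiffness matrix from engineering constants, the kind of matrix a user-defined material model would consume. The numerical values are placeholders, not measured gas diffusion layer properties from the paper.

```python
# Generic 3-D orthotropic stiffness matrix from engineering constants.
# All numbers are placeholder assumptions, not gas-diffusion-layer data.
import numpy as np

E1, E2, E3 = 5000.0, 5000.0, 50.0        # MPa: stiff in-plane, soft through-thickness
nu12, nu13, nu23 = 0.25, 0.05, 0.05
G12, G13, G23 = 2000.0, 30.0, 30.0       # MPa shear moduli

# Compliance matrix S in Voigt notation (11, 22, 33, 23, 13, 12).
S = np.zeros((6, 6))
S[0, 0], S[1, 1], S[2, 2] = 1 / E1, 1 / E2, 1 / E3
S[0, 1] = S[1, 0] = -nu12 / E1
S[0, 2] = S[2, 0] = -nu13 / E1
S[1, 2] = S[2, 1] = -nu23 / E2
S[3, 3], S[4, 4], S[5, 5] = 1 / G23, 1 / G13, 1 / G12

C = np.linalg.inv(S)                      # stiffness matrix a UMAT-style model would use
stress = C @ np.array([0.001, 0.0, -0.02, 0.0, 0.0, 0.0])   # Voigt strain -> stress
print(np.round(C, 1))
print("stress (MPa):", np.round(stress, 3))
```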
714 Mathematical Approach towards Fault Detection and Isolation of Linear Dynamical Systems
Authors: V. Manikandan, N. Devarajan
Abstract:
The main objective of this work is to provide fault detection and isolation based on Markov parameters for residual generation and a neural network for fault classification. The diagnostic approach is accomplished in two steps. In step 1, the system is identified using a series of input/output variables through an identification algorithm. In step 2, the fault is diagnosed by comparing the Markov parameters of the faulty and non-faulty systems. An artificial neural network trained on predetermined faulty conditions serves to classify the unknown fault. In step 1, the identification is done by first formulating a Hankel matrix out of the input/output variables and then decomposing the matrix via the singular value decomposition technique. For online identification, a sliding-window approach is adopted wherein a window slides over a subset of 'n' input/output variables. The faults are introduced at arbitrary instants and the identification is carried out online. Fault residues are extracted by comparing the first five Markov parameters of the faulty and non-faulty systems. The proposed diagnostic approach is illustrated on benchmark problems with encouraging results.
Keywords: Artificial neural network, Fault Diagnosis, Identification, Markov parameters.
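The residual-generation idea in steps 1 and 2 can be sketched as follows: estimate Markov parameters (impulse-response coefficients) from input/output data by a least-squares FIR fit, check the system order from the rank of a Hankel matrix of those parameters, and compare the first five Markov parameters of a new data window against the nominal ones. The first-order test plant, window length, and fault (a gain drop) are assumptions; the paper's Hankel/SVD identification algorithm and the neural-network classifier are not reproduced.

```python
# Markov-parameter residuals from I/O data (least-squares FIR fit), with a
# Hankel-matrix rank check. Plant and fault below are illustrative assumptions.
import numpy as np
from scipy.linalg import hankel

def simulate(a, b, u):
    # Simple first-order test system y(t) = a*y(t-1) + b*u(t-1).
    y = np.zeros_like(u)
    for t in range(1, len(u)):
        y[t] = a * y[t - 1] + b * u[t - 1]
    return y

def markov_parameters(u, y, m=10):
    # Least-squares fit of y(t) = sum_k h_k u(t-k), k = 0..m-1.
    rows = [u[t - m + 1:t + 1][::-1] for t in range(m - 1, len(u))]
    return np.linalg.lstsq(np.array(rows), y[m - 1:], rcond=None)[0]

rng = np.random.default_rng(3)
u = rng.standard_normal(2000)
h_nom = markov_parameters(u, simulate(0.8, 1.0, u))       # nominal system
h_flt = markov_parameters(u, simulate(0.8, 0.6, u))       # fault: actuator gain drop

order = np.linalg.matrix_rank(hankel(h_nom[1:6], h_nom[5:]))
print("estimated system order:", order)
print("residual (first five Markov params):", np.round(h_nom[:5] - h_flt[:5], 3))
```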
713 PUMA 560 Optimal Trajectory Control using Genetic Algorithm, Simulated Annealing and Generalized Pattern Search Techniques
Authors: Sufian Ashraf Mazhari, Surendra Kumar
Abstract:
Robot manipulators are highly coupled nonlinear systems; therefore the real system and the mathematical model of the dynamics used for control system design are not the same. Hence, fine-tuning of the controller is always needed. For better tuning, a fast simulation speed is desired. Since Matlab incorporates LAPACK to increase the speed of matrix computation, the dynamics and the forward and inverse kinematics of the PUMA 560 are modeled in Matlab/Simulink in such a way that all operations are matrix based, which gives a very short simulation time. This paper compares PID parameter tuning using Genetic Algorithm, Simulated Annealing, Generalized Pattern Search (GPS), and hybrid search techniques. Controller performances for all these methods are compared in terms of joint-space ITSE and Cartesian-space ISE for tracking circular and butterfly trajectories. A disturbance signal is added to check the robustness of the controller. The GA-GPS hybrid search technique shows the best results for tuning the PID controller parameters in terms of ITSE and robustness.
Keywords: Controller Tuning, Genetic Algorithm, Pattern Search, Robotic Controller, Simulated Annealing.
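The tuning loop compared in this abstract can be illustrated on a much smaller example: simulate a closed-loop step response, score it with ITSE, and let an evolutionary optimizer choose the PID gains. In the sketch below a generic single-joint inertia/damping plant and SciPy's differential evolution stand in for the PUMA 560 dynamics and the GA/SA/GPS tuners; all plant parameters and gain bounds are assumptions.

```python
# ITSE-based PID tuning of a toy single-joint plant with an evolutionary optimizer.
import numpy as np
from scipy.optimize import differential_evolution

def itse_cost(gains, dt=0.001, t_end=1.0):
    kp, ki, kd = gains
    J, B = 0.05, 0.2                        # assumed joint inertia and damping
    q = qd = integ = 0.0
    prev_err = 1.0
    cost = 0.0
    for k in range(int(t_end / dt)):
        err = 1.0 - q                       # unit step reference
        integ += err * dt
        u = kp * err + ki * integ + kd * (err - prev_err) / dt
        prev_err = err
        qdd = (u - B * qd) / J              # plant dynamics, Euler integration
        qd += qdd * dt
        q += qd * dt
        cost += (k * dt) * err ** 2 * dt    # ITSE: integral of t * e(t)^2
    return cost

result = differential_evolution(itse_cost, bounds=[(0, 50), (0, 50), (0, 5)],
                                seed=0, maxiter=20, tol=1e-6)
print("tuned Kp, Ki, Kd:", np.round(result.x, 2), "ITSE:", round(result.fun, 5))
```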
712 The Influence of Fiber Volume Fraction on Thermal Conductivity of Pultruded Profile
Authors: V. Lukášová, P. Peukert, V. Votrubec
Abstract:
Thermal conductivity in the x, y, and z directions was measured on a pultruded profile manufactured by pulling glass fibers through a polyester matrix. The measurements showed considerable variability of thermal conductivity in different directions. This variability was expected due to variations in the fiber volume fraction. The cross-section of the pultruded profile was scanned, and an image analysis illustrated an uneven distribution of the fibers and the matrix in the cross-section. The distribution of these inequalities was processed into a Voronoi diagram in the observed area of the pultruded profile cross-section. In order to verify whether the variation of the fiber volume fraction in the pultruded profile can affect its thermal conductivity, numerical simulations were performed in ANSYS Fluent. The simulation was based on the geometry reconstructed from the image analysis, with the aim of quantifying thermal conductivity numerically. In particular, images with different volume fractions were chosen. The measured thermal conductivity was compared with the calculated thermal conductivity. The evaluated data showed a strong correlation between the volume fraction and the thermal conductivity of the pultruded profile. Based on the presented results, a modification of the production technology may be proposed.
Keywords: Numerical simulation, pultruded profile, volume fraction, thermal conductivity.
711 Response of Pavement under Temperature and Vehicle Coupled Loading
Authors: Yang Zhong, Mei-jie Xu
Abstract:
To study the dynamic mechanical response of asphalt pavement under temperature and vehicle loading, the pavement was regarded as a multilayered elastic half-space system, and a theoretical analysis was conducted with the dynamic modulus of the asphalt mixture as the parameter. First, based on the dynamic modulus test of the asphalt mixture, the functional relationship between the dynamic modulus of a representative asphalt mixture and temperature was obtained. In addition, the analytical solution for thermal stress in a single layer was derived from the thermal equations of equilibrium by using the Laplace and Hankel integral transformations. The analytical solution of the thermal-stress model for asphalt pavement was then derived through the transfer matrix of thermal stress in a multilayered elastic system. Finally, the variation of thermal stress in the pavement structure was analyzed. The results show that there is an obvious difference between the thermal stress based on the dynamic modulus and the solution based on the static modulus, so the dynamic change of this parameter in the asphalt mixture should be taken into consideration when a theoretical analysis is carried out.
Keywords: Asphalt pavement, dynamic modulus, integral transformation, transfer matrix, thermal stress.
710 Adaptive and Personalizing Learning Sequence Using Modified Roulette Wheel Selection Algorithm
Authors: Melvin A. Ballera
Abstract:
Prior literature in the field of adaptive and personalized learning sequences in e-learning has proposed and implemented various mechanisms to improve the learning process, such as individualization and personalization, but these are complex to implement due to expensive algorithmic programming and the need for extensive prior data. The main objective of personalizing a learning sequence is to maximize learning by dynamically selecting the closest teaching operation in order to achieve the learning competency of the learner. In this paper, a technique is proposed and tested to perform individualization and personalization using a modified reversed roulette wheel selection algorithm that runs in O(n). The technique is simpler to implement and is algorithmically less expensive compared to other evolutionary algorithms, since it collects a dynamic real-time performance matrix, such as examinations, reviews, and study, to form the single numerical RWSA fitness value. Results show that the implemented system is capable of recommending new learning sequences that lessen study time based on the student's prior knowledge and real performance matrix.
Keywords: E-learning, fitness value, personalized learning sequence, reversed roulette wheel selection algorithms.
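A sketch of roulette wheel selection, together with a "reversed" variant that favors low scores (one plausible reading of the modification described above, useful when the score encodes mastery and weak topics should be revisited), is given below; both run in O(n) per selection. The paper's exact fitness construction from examinations, reviews, and study data is not reproduced.

```python
# Roulette wheel selection and a reversed variant; both run in O(n) per selection.
import random

def roulette_select(items, weights):
    # Classical roulette wheel: probability proportional to weight.
    total = sum(weights)
    pick = random.uniform(0.0, total)
    acc = 0.0
    for item, w in zip(items, weights):
        acc += w
        if acc >= pick:
            return item
    return items[-1]

def reversed_roulette_select(items, scores):
    # Reversed wheel: probability proportional to (max - score), so weaker
    # topics (lower mastery) are chosen more often for the next learning step.
    m = max(scores)
    reversed_weights = [m - s + 1e-9 for s in scores]   # keep weights positive
    return roulette_select(items, reversed_weights)

topics = ["fractions", "algebra", "geometry", "statistics"]
mastery = [0.9, 0.4, 0.7, 0.2]          # assumed per-topic performance scores
random.seed(0)
print([reversed_roulette_select(topics, mastery) for _ in range(5)])
```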
709 Matrix Based Synthesis of EXOR dominated Combinational Logic for Low Power
Authors: Padmanabhan Balasubramanian, C. Hari Narayanan
Abstract:
This paper discusses a new, systematic approach to the synthesis of an NP-hard class of non-regenerative Boolean networks, described by FON[FOFF]={mi}[{Mi}], where for every mj[Mj]∈{mi}[{Mi}] there exists another mk[Mk]∈{mi}[{Mi}] such that their Hamming distance HD(mj, mk)=HD(Mj, Mk)=O(n), where 'n' represents the number of distinct primary inputs. The method automatically ensures exact minimization for certain important self-dual functions with 2^(n-1) points in their one-set. The elements meant for grouping are determined from a newly proposed weighted incidence matrix. Then the binary value corresponding to the candidate pair is correlated with the proposed binary value matrix to enable direct synthesis. We recommend algebraic factorization operations as a post-processing step to enable a reduction in literal count. The algorithm can be implemented in any high-level language and achieves the best cost optimization for the problem dealt with, irrespective of the number of inputs. For other cases, the method is iterated to subsequently reduce the problem to one of O(n-1), O(n-2), ... and then solved. In addition, it leads to optimal results for problems exhibiting a higher degree of adjacency, with a different interpretation of the heuristic, and the results are comparable with other methods. In terms of literal cost at the technology-independent stage, the circuits synthesized using our algorithm enabled net savings over AOI (AND-OR-Invert) logic, AND-EXOR logic (EXOR Sum-of-Products or ESOP forms), and AND-OR-EXOR logic by 45.57%, 41.78%, and 41.78%, respectively, for the various problems. Circuit-level simulations were performed for a wide variety of case studies at 3.3 V and 2.5 V supply to validate the performance of the proposed method and the quality of the resulting synthesized circuits at two different voltage corners. Power estimation was carried out for a 0.35-micron TSMC CMOS process technology. In comparison with AOI logic, the proposed method enabled mean power savings of 42.46%. With respect to AND-EXOR logic, the proposed method yielded power savings of 31.88%, while in comparison with AND-OR-EXOR level networks, average power savings of 33.23% were obtained.
Keywords: AOI logic, ESOP, AND-OR-EXOR, Incidence matrix, Hamming distance.
708 Impulse Response Shortening for Discrete Multitone Transceivers using Convex Optimization Approach
Authors: Ejaz Khan, Conor Heneghan
Abstract:
In this paper we propose a new criterion for solving the problem of channel shortening in multi-carrier systems. In a discrete multitone receiver, a time-domain equalizer (TEQ) reduces intersymbol interference (ISI) by shortening the effective duration of the channel impulse response. The minimum mean square error (MMSE) method for TEQ design does not give satisfactory results. In [1], a new criterion is introduced for partially equalizing severe ISI channels to reduce the cyclic prefix overhead of the discrete multitone transceiver (DMT), assuming a fixed transmission bandwidth. Due to a specific constraint in that method (a unit-norm constraint on the target impulse response (TIR)), the freedom to choose the optimum TIR vector is reduced. Better results can be obtained by avoiding the unit-norm constraint on the TIR. In this paper we change the cost function proposed in [1] to that of maximizing a determinant subject to a linear matrix inequality (LMI) and a quadratic constraint, and solve the resulting optimization problem. The usefulness of the proposed method is shown with the help of simulations.
Keywords: Equalizer, target impulse response, convex optimization, matrix inequality.
707 Fiction and Reality in Animation: Taking Final Flight of the Osiris as an Example
Authors: Syong-Yang Chung, Xin-An Chen
Abstract:
This study explores the less well-known animation "Final Flight of the Osiris", beginning with an initial exploration of the film's color, storyline, and the simulacrum meanings of the roles, and proceeding to a further exploration of the light-shadow contrast and the psychological images presented by the screen colors and the characters. The research is based on a literature review, and all data were compiled for the analysis of the visual vocabulary evolution of the characters. In terms of structure, a relational study of the animation and the historical background of its time comes first, including the Wachowskis' and Andy Jones' impact on the cinematographic and animated versions of "The Matrix". The research finds that "Final Flight of the Osiris" separates the realistic and virtual spaces by changing the color tones; the "self" of the audience gradually dissolves into the "virtual" of the simulacra world, and the "Animatrix" becomes a virtual field through which the audience comes to understand "existence" and "self".
Keywords: The Matrix, The Final Flight of Osiris, Wachowski sisters, simulacrum.
706 Effect of Rubber Treatment on Compressive Strength and Modulus of Elasticity of Self-Compacting Rubberized Concrete
Authors: I. Miličević, M. Hadzima Nyarko, R. Bušić, J. Simonović Radosavljević, M. Prokopijević, K. Vojisavljević
Abstract:
This paper investigates the effects of different treatment methods of rubber aggregates for self-compacting concrete (SCC) on compressive strength and modulus of elasticity. SCC mixtures with 10% replacement of fine aggregate with crumb rubber by total aggregate volume, prepared with different aggregate treatment methods, were investigated. The rubber aggregate was treated in three different ways: dry process, water soaking, and NaOH treatment plus water soaking. Properties of SCC in the fresh and hardened states were tested and evaluated. Scanning electron microscope (SEM) analysis of the three different SCC mixtures was performed and discussed. It was observed that applying the proposed NaOH plus water soaking method improved the fresh and hardened concrete properties. It resulted in a more uniform distribution of rubber particles in the cement matrix, a better bond between the rubber particles and the cement matrix, and a higher compressive strength of the SCC rubberized concrete.
Keywords: Compressive strength, modulus of elasticity, NaOH treatment, rubber aggregate, self-compacting rubberized concrete, scanning electron microscope analysis.
705 A New Stability Analysis and Stabilization of Discrete-Time Switched Linear Systems Using Vector Norms Approach
Authors: Marwen Kermani, Anis Sakly, Faouzi M'sahli
Abstract:
In this paper, we investigate a new stability analysis for discrete-time switched linear systems based on comparison systems, the overvaluing principle, the application of the Borne-Gentina criterion, and the Kotelyanski conditions. These stability conditions, issued from vector norms, correspond to a vector Lyapunov function. The switched system to be controlled is represented in companion form. A comparison system relative to a regular vector norm is used in order to obtain the simple arrow form of the state matrix, which allows a suitable use of the Borne-Gentina criterion for establishing sufficient conditions for global asymptotic stability. This proposed approach could be a constructive solution to the state feedback and static output feedback stabilization problems.
Keywords: Discrete-time switched linear systems, Global asymptotic stability, Vector norms, Borne-Gentina criterion, Arrow form state matrix, Arbitrary switching, State feedback controller, Static output feedback controller.
704 Robotic End-Effector Impedance Control without Expensive Torque/Force Sensor
Authors: Shiuh-Jer Huang, Yu-Chi Liu, Su-Hai Hsiang
Abstract:
A novel low-cost impedance control structure is proposed for monitoring the contact force between the end-effector and the environment without installing an expensive force/torque sensor. Theoretically, the end-effector contact force can be estimated from the superposition of the joint control torques. There is a nonlinear matrix mapping between the joint motor control inputs and the end-effector force/torque vector, and the new force control structure is implemented based on this estimated mapping matrix. First, the robot end-effector is manipulated to specified positions; then the force controller is actuated based on the Hall-sensor current feedback of each joint motor. The model-free fuzzy sliding mode control (FSMC) strategy is employed to design the position and force controllers, respectively. All the hardware circuits and software control programs are designed on an Altera Nios II embedded development kit to constitute an embedded system structure for a retrofitted Mitsubishi 5-DOF robot. Experimental results show that the PI and FSMC force control algorithms can achieve reasonable contact-force monitoring based on this hardware control structure.
Keywords: Robot, impedance control, fuzzy sliding mode control, contact force estimator.
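The sensorless force estimate described above can be illustrated with the standard static relation tau = J^T F: joint torques inferred from motor currents via torque constants are mapped to an end-effector force by the pseudoinverse of the transposed Jacobian. In the sketch below a planar two-link Jacobian stands in for the retrofitted 5-DOF arm, and the torque constants, currents, link lengths, and joint angles are illustrative assumptions.

```python
# Sensorless contact-force estimate from joint-motor currents: tau = J^T F,
# so F is recovered with the pseudoinverse of J^T. Numbers are assumptions.
import numpy as np

def jacobian_2link(q, l1=0.3, l2=0.25):
    # Geometric Jacobian of a planar 2-link arm (end-effector x, y velocity).
    q1, q2 = q
    return np.array([
        [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
        [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
    ])

kt = np.array([0.45, 0.30])                  # assumed motor torque constants (Nm/A)
currents = np.array([1.8, 0.9])              # Hall-sensor current feedback (A)
tau = kt * currents                          # estimated joint torques

J = jacobian_2link(q=np.array([0.4, 0.9]))
force = np.linalg.pinv(J.T) @ tau            # F such that tau ~= J^T F
print("estimated contact force [Fx, Fy] (N):", np.round(force, 2))
```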
703 On the Hierarchical Ergodicity Coefficient
Authors: Yilun Shang
Abstract:
In this paper, we deal with the fundamental concepts and properties of ergodicity coefficients in a hierarchical sense by making use of partitions. Moreover, we establish a hierarchical Hajnal inequality, improving some previous results.
Keywords: Stochastic matrix, ergodicity coefficient, partition.
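For reference, the classical (non-hierarchical) objects that the paper refines are the Dobrushin ergodicity coefficient of a row-stochastic matrix and Hajnal's submultiplicativity inequality; the hierarchical, partition-based versions studied in the paper are not reproduced here.

```latex
% Dobrushin ergodicity coefficient of a row-stochastic matrix P = (p_{ik}).
\[
  \tau_1(P) \;=\; \tfrac{1}{2}\,\max_{i,j}\sum_{k}\bigl|p_{ik}-p_{jk}\bigr|
            \;=\; 1-\min_{i,j}\sum_{k}\min\bigl(p_{ik},\,p_{jk}\bigr)
\]
% Hajnal's inequality: the coefficient is submultiplicative over matrix products,
% which is the property the hierarchical version improves upon.
\[
  \tau_1(P_1 P_2) \;\le\; \tau_1(P_1)\,\tau_1(P_2)
\]
```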
702 Mechanical Investigation Approach to Optimize the High-Velocity Oxygen Fuel Fe-Based Amorphous Coatings Reinforced by B4C Nanoparticles
Authors: Behrooz Movahedi
Abstract:
Fe-based amorphous feedstock powders were used as the matrix into which various ratios of hard B4C nanoparticles (0, 5, 10, 15, 20 vol.%) were introduced as reinforcing agents using planetary high-energy mechanical milling. The ball-milled nanocomposite feedstock powders were then sprayed by means of the high-velocity oxygen fuel (HVOF) technique. The characteristics of the powder particles and the prepared coating, in terms of their microstructures and nanohardness, were examined in detail using a nanoindentation tester. The results showed that the Fe-based amorphous phase formed over the course of high-energy ball milling. It is interesting to note that the nanocomposite coating is divided into two regions, namely a fully amorphous region and a homogeneous dispersion of B4C nanoparticles with a scale of 10–50 nm in a residual amorphous matrix. As the B4C content increases, the nanohardness of the composite coatings increases, but the fracture toughness begins to decrease at B4C contents higher than 20 vol.%. The optimal mechanical properties are obtained with 15 vol.% B4C due to the suitable content and uniform distribution of the nanoparticles. Consequently, the changes in the mechanical properties of the coatings are attributed to the change in the brittle-to-ductile transition brought about by adding B4C nanoparticles.
Keywords: Fe-based amorphous, B4C nanoparticles, nanocomposite coating, HVOF.
701 On Reversal and Transposition Medians
Authors: Martin Bader
Abstract:
In recent years, the genomes of more and more species have been sequenced, providing data for phylogenetic reconstruction based on genome rearrangement measures. A main task in all phylogenetic reconstruction algorithms is to solve the median of three problem. Although this problem is NP-hard even for the simplest distance measures, there are exact algorithms for the breakpoint median and the reversal median that are fast enough for practical use. In this paper, this approach is extended to the transposition median as well as to the weighted reversal and transposition median. Although no exact polynomial algorithm is known even for the pairwise distances, we show that it is in most cases possible to solve these problems exactly within reasonable time by using a branch and bound algorithm.
Keywords: Comparative genomics, genome rearrangements, median, reversals, transpositions.
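The simplest of the distance measures mentioned above, the breakpoint distance between two gene orders, can be sketched in a few lines; the median-of-three solvers and the branch-and-bound search for reversal/transposition medians are not reproduced. Genomes are modeled here as unsigned linear gene orders framed by 0 and n+1.

```python
# Breakpoint distance between two unsigned linear gene orders: adjacencies of
# one genome that do not appear (in either orientation) in the other.
def breakpoint_distance(a, b):
    n = len(a)
    framed_a = [0] + list(a) + [n + 1]
    framed_b = [0] + list(b) + [n + 1]
    adjacencies_b = {frozenset(p) for p in zip(framed_b, framed_b[1:])}
    return sum(1 for p in zip(framed_a, framed_a[1:])
               if frozenset(p) not in adjacencies_b)

g1 = (1, 2, 3, 4, 5)
g2 = (1, 4, 3, 2, 5)        # g1 with the segment 2..4 reversed
g3 = (3, 4, 5, 1, 2)        # g1 with two blocks transposed
print(breakpoint_distance(g1, g2), breakpoint_distance(g1, g3))   # 2 and 3 breakpoints
```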
700 Reconsidering the Palaeo-Environmental Reconstruction of the Wet Zone of Sri Lanka: A Zooarchaeological Perspective
Authors: Kalangi Rodrigo, Kelum Manamendra-Arachchi
Abstract:
Bones, teeth, and shells have been acknowledged over the last two centuries as evidence of chronology, palaeo-environment, and human activity. Faunal traces are valid evidence of past situations because they have properties that have not changed over long periods. Sri Lanka is an island with a diverse variety of prehistoric occupation across its ecological zones. Defining the palaeoecology of past societies is an archaeological approach developed in the 1960s, concerned mainly with the reconstruction, from available geological and biological evidence, of past biota, populations, communities, landscapes, environments, and ecosystems. This early and persistent human fossil, technical, and cultural florescence, together with a collection of well-preserved tropical-forest rock shelters with associated 'on-site' palaeoenvironmental records, makes Sri Lanka a central and unusual case study for determining the extent and strength of early human tropical-forest encounters. Excavations carried out in prehistoric caves in the low-country wet zone have shown that in the last 50,000 years, the temperature change in the lowland rainforests has not exceeded 5 °C. Based on Semnopithecus priam (gray langur) remains unearthed from wet-zone prehistoric caves, periods of momentous climate change have been argued for during the Last Glacial Maximum (LGM) and at the Terminal Pleistocene/Early Holocene boundary, with a recognizable preference for semi-open 'intermediate' rainforest or forest edges. Continuous occupation by the genera Acavus and Oligospira, along with the uninterrupted horizontal pervasiveness of Canarium sp. ('kekuna' nut), indicates that temperatures in the lowland rainforests have not changed by more than 5 °C over the last 50,000 years. Site catchment or territorial analysis is no longer defensible because of time-distance factors, and optimal foraging theory fails because prehistoric people were aware of decreasing cost-benefit ratios when locating sites and generally played out a settlement strategy that minimized the ratio of energy expended to energy produced.
Keywords: Palaeo-environment, palaeo-ecology, palaeo-climate, prehistory, zooarchaeology.