Search results for: Co-occurrence matrix; Similarity measure.
1836 On the Optimality Assessment of Nanoparticle Size Spectrometry and Its Association to the Entropy Concept
Authors: A. Shaygani, R. Saifi, M. S. Saidi, M. Sani
Abstract:
Particle size distribution, the most important characteristic of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nanoparticles under the influence of an electric field in an Electrical Mobility Spectrometer (EMS) reveal the size distribution of these particles. The accuracy of this measurement is influenced by the flow conditions, geometry, electric field and particle charging process, and therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed and the combined field-diffusion charging process of injected poly-disperse aerosol particles was numerically simulated as a prerequisite for the study of a multichannel EMS. The result, a cloud of particles with a non-uniform charge distribution, was introduced to the EMS. The flow pattern and electric field in the EMS were simulated using Computational Fluid Dynamics (CFD) to obtain particle trajectories in the device and thus to calculate the signal reported by each electrometer. Based on the output signals (resulting from the bombardment of particles and the transfer of their charges as currents), we proposed a modification to the size of the detecting rings (which are connected to electrometers) in order to evaluate particle size distributions more accurately. Based on the capability of the system to transfer information content about the size distribution of the injected particles, we proposed a benchmark for assessing the optimality of the design. This method applies the concept of von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Entropy, in Shannon's sense, is the "average amount of information contained in an event, sample or character extracted from a data stream". Evaluating the responses (signals) obtained with various configurations of detecting rings, the modified configuration gave the best predictions of the size distributions of the injected particles; it was also the configuration with the maximum entropy. A reasonable consistency was observed between the accuracy of the predictions and the entropy content of each configuration. In this method, entropy is extracted from the transfer matrix of the instrument for each configuration. Finally, various clouds of particles were introduced to the simulations and the predicted size distributions were compared to the exact size distributions.
Keywords: Aerosol Nano-Particle, CFD, Electrical Mobility Spectrometer, Von Neumann entropy.
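For illustration only, a minimal sketch (not the authors' implementation) of how Shannon and von Neumann entropy scores could be extracted from an instrument transfer matrix; the example matrix `T`, its normalization, and the density-matrix construction `T @ T.T / trace` are assumptions.

```python
import numpy as np

def shannon_entropy(p, eps=1e-12):
    """Shannon entropy (bits) of a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    p = p[p > eps]
    return -np.sum(p * np.log2(p))

def von_neumann_entropy(T):
    """Von Neumann entropy of a density-like matrix built from T (assumption)."""
    rho = T @ T.T                      # symmetric, positive semi-definite
    rho = rho / np.trace(rho)          # normalize to unit trace
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log2(w))

# hypothetical 4-electrometer x 6-size-bin transfer matrix
T = np.random.rand(4, 6)
print(shannon_entropy(T.ravel()), von_neumann_entropy(T))
```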
1835 Maintenance Function's Performance Evaluation Using Adapted Balanced Scorecard Model
Authors: A. Bakhtiar, B. Purwanggono, N. Metasari
Abstract:
PT XYZ is a bottled drinking water company. To preserve the production resources it owns so that they can be utilized well, the company has implemented a maintenance management system, which plays an important role in its profitability and is one of the factors influencing overall company performance. Yet, up to now the company has never measured the maintenance activities' contribution to its performance. Performance evaluation is done with an adapted Balanced Scorecard model fitted to the maintenance function context. This model includes six perspectives: innovation and growth, production, maintenance, environment, customer, and finance. Actual performance measurement is done through the Analytic Hierarchy Process and the Objective Matrix. From the research done, we can conclude that the company's maintenance function shows moderate performance. However, some indicators have high priority but low performance: customer complaint rate, work lateness rate, and Return on Investment.
Keywords: Maintenance, performance, balanced scorecard, objective matrix.
1834 Undecimated Wavelet Transform Based Contrast Enhancement
Authors: Numan Unaldi, Samil Temel, Süleyman Demirci
Abstract:
A novel undecimated wavelet transform based contrast enhancement algorithm is proposed for both grayscale and color images. Contrast enhancement is realized by tuning the magnitude of the approximation coefficients at each level with respect to the approximation coefficients of one higher level during the inverse transform phase, in a center/surround enhancement sense. The performance of the proposed algorithm is evaluated using a statistical visual contrast measure (VCM). Experimental results show that the proposed algorithm yields improvement in terms of the VCM.
Keywords: Image enhancement, local contrast enhancement, visual contrast measure.
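A rough sketch of the center/surround idea using PyWavelets' stationary (undecimated) wavelet transform; this is not the authors' algorithm: the gain rule, the `alpha` strength parameter, and the use of a local mean of the same-level approximation as the "surround" (instead of the next-higher-level approximation) are simplifying assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def swt_contrast_enhance(gray, wavelet="db2", level=2, alpha=0.3, eps=1e-6):
    """Center/surround-style tuning of undecimated (SWT) approximation coefficients.

    gray: 2-D float image whose sides are divisible by 2**level.
    """
    coeffs = pywt.swt2(gray, wavelet, level=level)
    tuned = []
    for cA, details in coeffs:
        surround = uniform_filter(cA, size=31)           # local mean as the "surround"
        gain = ((np.abs(cA) + eps) / (np.abs(surround) + eps)) ** alpha
        tuned.append((cA * gain, details))               # detail coefficients left untouched
    return np.clip(pywt.iswt2(tuned, wavelet), 0.0, 255.0)
```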
1833 Isolation and Identification Fibrinolytic Protease Endophytic Fungi from Hibiscus Leaves in Shah Alam
Authors: Mohd Sidek Ahmad, Zainon Mohd Noor, Zaidah Zainal Ariffin
Abstract:
Fibrin degradation is an important part of the prevention or treatment of intravascular thrombosis and cardiovascular diseases. Plasmin-like fibrinolytic enzymes have given new hope to patients with cardiovascular diseases, as an alternative to treating fibrin-aggregation-related diseases with traditional plasminogen activators, which have many side effects. Various studies have explored a wide range of sources for the production of fibrinolytic proteases, from bacteria, fungi and insects to fermented foods, but few have looked into endophytic fungi as a potential source. Sixteen (16) endophytic fungi were isolated from Hibiscus sp. leaves from six different locations in Shah Alam, Selangor. Only two endophytic fungi, FH3 and S13, showed positive fibrinolytic protease activities. FH3 produced a 5.78 cm clear zone and S13 a 4.48 cm clear zone on skim milk agar after 4 days of incubation at 27°C. Fibrinolytic activity was observed as clear zones of 3.87 cm and 1.82 cm diameter on fibrin plates for FH3 and S13, respectively. 18S rRNA sequencing was done to identify the isolated fungi with positive fibrinolytic protease activity. S13 had the highest similarity (100%) to Penicillium citrinum strain TG2, and FH3 had the highest similarity (99%) to Fusarium sp. FW2PhC1, Fusarium sp. 13002, Fusarium sp. 08006, Fusarium equiseti strain Salicorn 8 and Fungal sp. FCASAn-2. Variation of the media composition showed the effect of carbon and nitrogen on protein concentration: a 50% reduction of the media composition caused a drastic decrease in protease activity, from 1.081 to 0.056 for FH3 and from 2.946 to 0.198 for S13.
Keywords: Isolation, identification, fibrinolytic protease, endophytic fungi, Hibiscus leaves.
1832 Cumulative Learning based on Dynamic Clustering of Hierarchical Production Rules (HPRs)
Authors: Kamal K. Bharadwaj, Rekha Kandwal
Abstract:
An important structuring mechanism for knowledge bases is building clusters based on the content of their knowledge objects. The objects are clustered based on the principle of maximizing the intraclass similarity and minimizing the interclass similarity. Clustering can also facilitate taxonomy formation, that is, the organization of observations into a hierarchy of classes that group similar events together. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. An HPR, a standard production rule augmented with generality and specificity information, is of the following form: Decision If <condition> Generality <general information> Specificity <specific information>.
Keywords: Cumulative learning, clustering, data mining, hierarchical production rules.
1831 Simulation of Population Dynamics of Aedes aegypti using Climate Dependent Model
Authors: Nuraini Yusoff, Harun Budin, Salemah Ismail
Abstract:
A climate dependent model is proposed to simulate the population of the Aedes aegypti mosquito. In developing the model, the average temperature of Shah Alam, Malaysia, was used to determine the development rate of each stage of the mosquito life cycle. A rainfall-dependent function was proposed to simulate the hatching rate of the eggs under several assumptions. The proposed transition matrix was obtained and used to simulate the populations of eggs, larvae, pupae and adult mosquitoes. It was found that the peak of mosquito abundance comes during a relatively dry period following a heavy rainfall. In addition, the lag time between the peaks of mosquito abundance and dengue fever cases in Shah Alam was estimated.
Keywords: simulation, Aedes aegypti, Lefkovitch matrix, rainfall dependent model, Shah Alam.
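To illustrate the stage-structured projection behind a Lefkovitch matrix model, here is a minimal sketch with made-up, climate-independent rates; the paper's actual rates depend on temperature and rainfall.

```python
import numpy as np

# Hypothetical 4-stage Lefkovitch matrix for (egg, larva, pupa, adult):
# the diagonal is the probability of remaining in a stage, the sub-diagonal
# entries are stage-transition probabilities, and the top-right entry is adult
# fecundity. All numbers are illustrative only.
L = np.array([
    [0.20, 0.00, 0.00, 40.0],   # eggs: remain as eggs / laid by adults
    [0.60, 0.30, 0.00,  0.0],   # eggs -> larvae, larvae remaining
    [0.00, 0.50, 0.25,  0.0],   # larvae -> pupae, pupae remaining
    [0.00, 0.00, 0.60,  0.8],   # pupae -> adults, adult survival
])

n = np.array([1000.0, 0.0, 0.0, 10.0])   # initial population vector
history = [n]
for _ in range(52):                       # weekly projection over one year
    n = L @ n
    history.append(n)

history = np.array(history)
print("peak adult abundance at step", history[:, 3].argmax())
```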
1830 Speed Control of a Permanent Magnet Synchronous Machine (PMSM) Fed by an Inverter Voltage Fuzzy Control Approach
Authors: Jamel Khedri, Mohamed Chaabane, Mansour Souissi, Driss Mehdi
Abstract:
This paper deals with the synthesis of a fuzzy controller applied to a permanent magnet synchronous machine (PMSM) with guaranteed H∞ performance. To design this fuzzy controller, the nonlinear model of the PMSM is approximated by a Takagi-Sugeno fuzzy model (T-S fuzzy model), and then the so-called parallel distributed compensation (PDC) is employed. Next, we derive the H∞ norm property, which is cast in terms of linear matrix inequalities (LMIs) while minimizing the H∞ norm of the transfer function T_ev between the disturbance and the error. Experiments and simulations were conducted on a permanent magnet synchronous machine to illustrate the effect of the fuzzy modelling and the controller design via the PDC.
Keywords: Feedback controller, Takagi-Sugeno fuzzy model, Linear Matrix Inequality (LMI), PMSM, H∞ performance.
1829 CT Medical Images Denoising Based on New Wavelet Thresholding Compared with Curvelet and Contourlet
Authors: Amir Moslemi, Amir Movafeghi, Shahab Moradi
Abstract:
One of the most important challenges in medical imaging is noise. Image denoising refers to the improvement of a digital medical image that has been corrupted by Additive White Gaussian Noise (AWGN). A digital medical image or video can be affected by different types of noise: impulse noise, Poisson noise and AWGN. Computed tomography (CT) images are subject to low quality due to noise. The quality of CT images depends directly on the absorbed dose to patients: an increase in absorbed radiation, and consequently in the absorbed dose to patients (ADP), enhances CT image quality. Therefore, noise reduction techniques that enhance image quality without exposing patients to excess radiation are one of the challenging problems in CT image processing. In this work, noise reduction in CT images was performed using two directional two-dimensional (2D) transforms, Curvelet and Contourlet, and Discrete Wavelet Transform (DWT) thresholding methods, BayesShrink and AdaptShrink, which were compared with each other. We also propose a new threshold in the wavelet domain for not only noise reduction but also edge retention; the proposed method retains the significant modified coefficients, resulting in good visual quality. Evaluation was carried out using two criteria, namely the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
Keywords: Computed Tomography (CT), noise reduction, curvelet, contourlet, peak signal-to-noise ratio (PSNR), Structural Similarity (SSIM), Absorbed Dose to Patient (ADP).
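For context on the DWT baseline the paper compares against, a minimal sketch of standard BayesShrink soft thresholding with PyWavelets; it is not the authors' proposed new threshold, and the wavelet and decomposition level are assumptions.

```python
import numpy as np
import pywt

def bayes_shrink_denoise(img, wavelet="db8", level=3):
    """BayesShrink-style soft thresholding of a noisy grayscale image (sketch)."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # noise std estimated from the finest diagonal subband (robust median rule)
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]                                    # keep the approximation
    for (cH, cV, cD) in coeffs[1:]:
        bands = []
        for band in (cH, cV, cD):
            sigma_x = np.sqrt(max(band.var() - sigma_n**2, 1e-12))
            thr = sigma_n**2 / sigma_x                   # BayesShrink threshold
            bands.append(pywt.threshold(band, thr, mode="soft"))
        out.append(tuple(bands))
    return pywt.waverec2(out, wavelet)
```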
1828 Deterministic Method to Assess Kalman Filter Passive Ranging Solution Reliability
Authors: Ronald M. Yannone
Abstract:
For decades, the defense business has been plagued by the lack of a reliable, deterministic method to know when the Kalman filter solution for a passive ranging application is dependable enough for use by the fighter pilot. This has made it hard to accurately assess when the ranging solution can be used for situation awareness and weapons use. To date, ad hoc rules of thumb have been used to assess when the estimate of the Kalman filter standard deviation on range is reliable. A reliable algorithm has been developed at BAE Systems Electronics & Integrated Solutions that monitors the Kalman gain matrix elements; a patent is pending. The "settling" of the gain matrix elements relates directly to the time when the passive ranging solution is within 10 percent of the true value. The focus of the paper is on surface-based passive ranging, but the method is applicable to airborne targets as well.
Keywords: Electronic warfare, extended Kalman filter (EKF), fighter aircraft, passive ranging, track convergence.
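The BAE Systems algorithm itself is patent pending and not detailed in the abstract. Purely as an illustration of the "gain settling" idea, here is a hypothetical monitor that watches the relative change of the gain elements over a sliding window; the window length, tolerance and decision rule are assumptions, not the patented method.

```python
import numpy as np

def gain_settled(gain_history, window=20, tol=0.02):
    """Heuristic settling test on a sequence of flattened Kalman gain matrices.

    Declares "settled" when every gain element's relative spread over the last
    `window` updates stays below `tol` (illustrative thresholds only).
    """
    K = np.asarray(gain_history)
    if len(K) < window:
        return False
    recent = K[-window:]
    spread = recent.max(axis=0) - recent.min(axis=0)
    scale = np.abs(recent).max(axis=0) + 1e-12
    return bool(np.all(spread / scale < tol))

# usage idea: append kf_gain.ravel() after every measurement update, then only
# trust the reported range standard deviation once gain_settled(history) is True.
```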
1827 Measurement of Intellectual Capital in an Algerian Company
Authors: S. Brahmi, S. Aitouche, M. D. Mouss
Abstract:
Every modern company should measure the value of its intellectual capital and report it to complement the traditional annual balance sheet. The purpose of this work is to measure the intellectual capital in an Algerian company (or production system) using the Weightless Wealth Tool Kit (WWTK). The results of the measurement of intellectual capital are supplemented by traditional financial ratios. The measurement was applied to the National Company of Wells Services (ENSP) in Hassi Messaoud city, in the south of Algeria. We calculated the intellectual capital (intangible resources) of the ENSP to help the organization better capitalize on the potential of its workers and their know-how. The intangible value of the ENSP is evaluated at 16,936,173,345 DA in 2015.
Keywords: Financial valuation, intangible capital, intellectual capital, intellectual capital measurement.
1826 Robust Stability Criteria for Uncertain Genetic Regulatory Networks with Time-Varying Delays
Authors: Wenqin Wang, Shouming Zhong
Abstract:
This paper presents robust stability criteria for uncertain genetic regulatory networks with time-varying delays. One key point of the criteria is the decomposition of the matrix D̃ into D̃ = D̃1 + D̃2. This decomposition corresponds to a decomposition of the delayed terms into two groups: the stabilizing ones and the destabilizing ones. This technique enables one to take the stabilizing effect of part of the delayed terms into account. Meanwhile, by choosing an appropriate new Lyapunov functional, a new delay-dependent stability criterion is obtained and formulated in terms of linear matrix inequalities (LMIs). Finally, numerical examples are presented to illustrate the effectiveness of the theoretical results.
Keywords: Genetic regulatory network, Time-varying delay, Uncertain system, Lyapunov-Krasovskii functional
1825 Application of Genetic Algorithms for Evolution of Quantum Equivalents of Boolean Circuits
Authors: Swanti Satsangi, Ashish Gulati, Prem Kumar Kalra, C. Patvardhan
Abstract:
Due to the non-intuitive nature of quantum algorithms, it is difficult for a classically trained person to construct new ones efficiently. So rather than designing new algorithms manually, genetic algorithms (GA) have lately been applied for this purpose. GA is a technique to automatically solve a problem using principles of Darwinian evolution. It has been implemented here to explore the possibility of evolving an n-qubit circuit, when the circuit matrix has been provided, using a set of single-, two- and three-qubit gates. Using a variable-length population and a universal stochastic selection procedure, a number of possible solution circuits with different numbers of gates can be obtained for the same input matrix during different runs of the GA. The given algorithm has also been successfully implemented to obtain two- and three-qubit Boolean circuits using quantum gates. The results demonstrate the effectiveness of the GA procedure even when the search spaces are large.
Keywords: Ancillas, Boolean functions, Genetic algorithm, Oracles, Quantum circuits, Scratch bit
1824 Efficient Frontier - Comparing Different Volatility Estimators
Authors: Tea Poklepović, Zdravka Aljinović, Mario Matković
Abstract:
Modern Portfolio Theory (MPT), according to Markowitz, states that investors form mean-variance efficient portfolios which maximize their utility. Markowitz proposed the standard deviation as a simple measure of portfolio risk and the lower semi-variance as the only risk measure of interest to rational investors. This paper uses a third volatility estimator, based on intraday data, and compares three efficient frontiers on the Croatian Stock Market. The results show that the range-based volatility estimator outperforms both the mean-variance and the lower semi-variance models.
Keywords: Variance, lower semi-variance, range-based volatility.
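For reference, minimal sketches of the three risk measures being compared; the range-based estimator shown here is the Parkinson estimator, which is a common choice but only an assumption about the specific estimator used in the paper, and the price data are synthetic.

```python
import numpy as np

def std_volatility(returns):
    """Classical volatility: standard deviation of returns."""
    return np.std(returns, ddof=1)

def lower_semi_deviation(returns, target=0.0):
    """Downside risk: root mean squared shortfall below the target return."""
    downside = np.minimum(returns - target, 0.0)
    return np.sqrt(np.mean(downside**2))

def parkinson_volatility(high, low):
    """Range-based (Parkinson) estimator from intraday high/low prices."""
    log_range_sq = np.log(np.asarray(high) / np.asarray(low))**2
    return np.sqrt(log_range_sq.mean() / (4.0 * np.log(2.0)))

# hypothetical daily data for one asset
rng = np.random.default_rng(0)
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 250)))
returns = np.diff(np.log(close))
high, low = close[1:] * 1.01, close[1:] * 0.99
print(std_volatility(returns), lower_semi_deviation(returns), parkinson_volatility(high, low))
```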
1823 Cognition Technique for Developing a World Music
Authors: Haider Javed Uppal, Javed Yunas Uppal
Abstract:
In today's globalized world, it is necessary to develop a form of music that is able to evoke equal emotional responses among people from diverse cultural backgrounds. Indigenous cultures throughout history have developed their own music cognition, specifically in terms of the connections between music and mood. With the advancements in artificial intelligence technologies, it has become possible to analyze and categorize music features such as timbre, harmony, melody, and rhythm, and relate them to the resulting mood effects experienced by listeners. This paper presents a model that utilizes a screenshot translator to convert music from different origins into waveforms, which are then analyzed using machine learning and information retrieval techniques. By connecting these waveforms with Thayer's matrix of moods, a mood classifier has been developed using fuzzy logic algorithms to determine the emotional impact of different types of music on listeners from various cultures.
Keywords: Cognition, world music, artificial intelligence, Thayer’s matrix.
1822 Degradation of Irradiated UO2 Fuel Thermal Conductivity Calculated by FRAPCON Model Due to Porosity Evolution at High Burn-Up
Authors: B. Roostaii, H. Kazeminejad, S. Khakshournia
Abstract:
The evolution of volume porosity previously obtained by using the existing low-temperature, high burn-up gaseous swelling model with progressive recrystallization for UO2 fuel is utilized to study the degradation of irradiated UO2 thermal conductivity as calculated by the FRAPCON thermal conductivity model. A porosity correction factor is developed based on the assumption that the fuel morphology is a three-phase type, consisting of the as-fabricated pores and pores due to intergranular bubbles within the UO2 matrix and solid fission products. The predicted thermal conductivity demonstrates an additional degradation of 27% due to porosity formation at burn-up levels around 120 MWd/kgU, which would cause a corresponding increase in the fuel temperature. Results of the calculations are compared with available data.
Keywords: Irradiation-induced recrystallization, matrix swelling, porosity evolution, UO2 thermal conductivity.
1821 A New High Speed Neural Model for Fast Character Recognition Using Cross Correlation and Matrix Decomposition
Authors: Hazem M. El-Bakry
Abstract:
Neural processors have shown good results for detecting a certain character in a given input matrix. In this paper, a new idea to speed up the operation of neural processors for character detection is presented. Such processors are designed based on cross correlation in the frequency domain between the input matrix and the weights of the neural networks. This approach is developed to reduce the computation steps required by these faster neural networks for the searching process. The principle of the divide-and-conquer strategy is applied through image decomposition: each image is divided into small sub-images, and each one is then tested separately using a single faster neural processor. Furthermore, faster character detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time using the same number of faster neural networks. In contrast to using only faster neural processors, the speed-up ratio increases with the size of the input image when using faster neural processors and image decomposition. Moreover, the problem of local sub-image normalization in the frequency domain is solved. The effect of image normalization on the speed-up ratio of character detection is discussed. Simulation results show that local sub-image normalization through weight normalization is faster than sub-image normalization in the spatial domain. The overall speed-up ratio of the detection process is increased as the normalization of weights is done offline.
Keywords: Fast Character Detection, Neural Processors, Cross Correlation, Image Normalization, Parallel Processing.
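A minimal sketch of the two ingredients described here, frequency-domain cross-correlation and decomposition of the input image into sub-images tested independently; the block size, the unnormalized score threshold, and the omission of the paper's weight-normalization step are simplifications.

```python
import numpy as np

def cross_correlate_fft(image, template):
    """Cross-correlation computed in the frequency domain (FFT-based)."""
    F = np.fft.rfft2(image)
    H = np.fft.rfft2(template, s=image.shape)   # zero-pad template to image size
    return np.fft.irfft2(F * np.conj(H), s=image.shape)

def detect_in_subimages(image, template, block=64, threshold=50.0):
    """Divide-and-conquer detection: test fixed-size sub-images independently."""
    hits = []
    for r in range(0, image.shape[0] - block + 1, block):
        for c in range(0, image.shape[1] - block + 1, block):
            sub = image[r:r + block, c:c + block]
            score = cross_correlate_fft(sub, template).max()
            if score >= threshold:                      # raw, unnormalized score
                hits.append((r, c, float(score)))
    return hits
```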
1820 Investigating Activity Recognition Using 9-Axis Sensors and Filters in Wearable Devices
Authors: Jun Gil Ahn, Jong Kang Park, Jong Tae Kim
Abstract:
In this paper, we analyze the major components of activity recognition (AR) in wearable devices with 9-axis sensors and sensor fusion filters. 9-axis sensors commonly include a 3-axis accelerometer, a 3-axis gyroscope and a 3-axis magnetometer. We chose the Kalman filter and the Direction Cosine Matrix (DCM) filter as sensor fusion filters. We also construct sensor fusion data from each activity's sensor data and evaluate the AR classification accuracy using Naïve Bayes and SVM. According to the classification results, we observed that the DCM filter and a specific combination of the sensing axes are more effective for AR in wearable devices when classifying walking, running, ascending and descending.
Keywords: Accelerometer, activity recognition, directional cosine matrix filter, gyroscope, Kalman filter, magnetometer.
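A minimal sketch of the classification stage only: windowed features fed to Naïve Bayes and SVM on synthetic stand-in data. The window length, the feature set, the labels, and the omission of the Kalman/DCM fusion step are all assumptions for illustration.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def window_features(samples):
    """Simple per-window features from 9-axis data (shape: n_samples x 9)."""
    acc_mag = np.linalg.norm(samples[:, 0:3], axis=1)   # accelerometer magnitude
    return np.concatenate([samples.mean(axis=0), samples.std(axis=0),
                           [acc_mag.mean(), acc_mag.std()]])

# hypothetical windowed recordings: (128-sample window, activity label) pairs
rng = np.random.default_rng(1)
windows = [(rng.normal(size=(128, 9)) + label, label)
           for label in (0, 1, 2, 3) for _ in range(30)]
X = np.array([window_features(w) for w, _ in windows])
y = np.array([label for _, label in windows])

for clf in (GaussianNB(), SVC(kernel="rbf")):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())
```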
1819 A Note on the Convergence of the Generalized AOR Iterative Method for Linear Systems
Authors: Zhong-xi Gao, Hou-biao Li
Abstract:
Recently, some convergence results for the generalized AOR iterative (GAOR) method for solving linear systems with strictly diagonally dominant matrices were presented in [Darvishi, M.T., Hessari, P.: On convergence of the generalized AOR method for linear systems with diagonally dominant coefficient matrices. Appl. Math. Comput. 176, 128-133 (2006)] and [Tian, G.X., Huang, T.Z., Cui, S.Y.: Convergence of generalized AOR iterative method for linear systems with strictly diagonally dominant coefficient matrices. J. Comp. Appl. Math. 213, 240-247 (2008)]. In this paper, we give the convergence of the GAOR method for linear systems with a strictly doubly diagonally dominant matrix, which improves these corresponding results.
Keywords: Diagonally dominant matrix, GAOR method, Linear system, Convergence
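For readers unfamiliar with the matrix class, a small sketch of the usual definition of strict double diagonal dominance, |a_ii|·|a_jj| > r_i(A)·r_j(A) for every pair of rows i ≠ j, where r_i(A) is the off-diagonal absolute row sum; the example matrix is made up.

```python
import numpy as np

def is_strictly_doubly_diagonally_dominant(A):
    """Check |a_ii|*|a_jj| > r_i(A)*r_j(A) for all i != j."""
    A = np.asarray(A, dtype=float)
    d = np.abs(np.diag(A))
    r = np.abs(A).sum(axis=1) - d          # off-diagonal absolute row sums
    n = A.shape[0]
    return all(d[i] * d[j] > r[i] * r[j]
               for i in range(n) for j in range(n) if i != j)

# row 0 is not strictly diagonally dominant (1 < 2), yet the matrix is
# strictly doubly diagonally dominant
A = np.array([[1.0,  2.0,  0.0],
              [0.1, 10.0,  0.1],
              [0.0,  0.1, 10.0]])
print(is_strictly_doubly_diagonally_dominant(A))   # True
```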
1818 The Design of a Vehicle Traffic Flow Prediction Model for a Gauteng Freeway Based on an Ensemble of Multi-Layer Perceptron
Authors: Tebogo Emma Makaba, Barnabas Ndlovu Gatsheni
Abstract:
The cities of Johannesburg and Pretoria, both located in the Gauteng province, are separated by a distance of 58 km. The traffic queues on the Ben Schoeman freeway, which connects these two cities, can stretch for almost 1.5 km. Vehicle traffic congestion impacts negatively on business and on commuters' quality of life. The goal of this paper is to identify variables that influence the flow of traffic and to design a vehicle traffic prediction model which will predict the traffic flow pattern in advance. The model will enable motorists to make appropriate travel decisions ahead of time. The data used were collected by Mikro's Traffic Monitoring (MTM). A Multi-Layer Perceptron (MLP) was used on its own to construct the model, and the MLP was also combined with the Bagging ensemble method to train on the data. The cross-validation method was used for evaluating the models. The results obtained from the techniques were compared using predictive performance and prediction cost, where the cost was computed using a combination of the loss matrix and the confusion matrix. The prediction models designed show that the status of the traffic flow on the freeway can be predicted using the following parameters: travel time, average speed, traffic volume and day of month. The implication of this work is that commuters will be able to spend less time travelling on the route and more time with their families, while the logistics industry will save more than twice what it is currently spending.
Keywords: Bagging ensemble methods, confusion matrix, multi-layer perceptron, vehicle traffic flow.
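Not the authors' pipeline or data: a minimal scikit-learn sketch of bagging an MLP versus a single MLP on synthetic stand-in features (four features mimicking travel time, average speed, traffic volume and day of month; all hyperparameters are assumptions).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# synthetic stand-in for the MTM traffic data: 4 features, 3 traffic-status classes
X, y = make_classification(n_samples=600, n_features=4, n_informative=4,
                           n_redundant=0, n_classes=3, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
# note: the keyword is `base_estimator` in scikit-learn versions before 1.2
bagged_mlp = BaggingClassifier(estimator=mlp, n_estimators=10, random_state=0)

for name, model in [("single MLP", mlp), ("bagged MLP", bagged_mlp)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```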
1817 Enhancement of Low Contrast Satellite Images using Discrete Cosine Transform and Singular Value Decomposition
Authors: A. K. Bhandari, A. Kumar, P. K. Padhy
Abstract:
In this paper, a novel technique for contrast enhancement of low-contrast satellite images is proposed, based on singular value decomposition (SVD) and the discrete cosine transform (DCT). The singular value matrix represents the intensity information of the given image, and any change in the singular values changes the intensity of the input image. The proposed technique converts the image into the SVD-DCT domain and, after normalizing the singular value matrix, reconstructs the enhanced image using the inverse DCT. The visual and quantitative results suggest that the proposed SVD-DCT method is clearly more efficient and flexible than existing methods such as the Linear Contrast Stretching technique, the GHE technique, the DWT-SVD technique, the DWT technique, the Decorrelation Stretching technique, and Gamma Correction based techniques.
Keywords: Singular Value Decomposition (SVD), discrete cosine transform (DCT), image equalization, satellite image contrast enhancement.
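A minimal sketch of the SVD-DCT idea: the image's DCT is decomposed with SVD, the singular values are rescaled by a gain derived from a full-dynamic-range reference, and the inverse DCT reconstructs the enhanced image. The choice of reference (a linear stretch) and the gain rule are assumptions, not necessarily the paper's normalization.

```python
import numpy as np
from scipy.fft import dctn, idctn

def svd_dct_enhance(gray):
    """Sketch of SVD-DCT contrast enhancement for a grayscale image in [0, 255]."""
    # full-dynamic-range reference used to derive the singular-value gain (assumption)
    ref = (gray - gray.min()) / (np.ptp(gray) + 1e-12) * 255.0
    D_in = dctn(gray, norm="ortho")
    D_ref = dctn(ref, norm="ortho")
    U, s, Vt = np.linalg.svd(D_in, full_matrices=False)
    gain = np.linalg.svd(D_ref, compute_uv=False)[0] / s[0]   # ratio of largest singular values
    D_out = U @ np.diag(gain * s) @ Vt                        # rescaled singular value matrix
    return np.clip(idctn(D_out, norm="ortho"), 0, 255)
```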
1816 A Novel Microarray Biclustering Algorithm
Authors: Chieh-Yuan Tsai, Chuang-Cheng Chiu
Abstract:
Biclustering aims at identifying several biclusters that reveal potential local patterns in a microarray matrix. A bicluster is a sub-matrix of the microarray consisting of a subset of genes that are co-regulated in a subset of conditions. In this study, we extend the idea of subspace clustering to present a K-biclusters clustering (KBC) algorithm for the microarray biclustering problem. Besides minimizing the dissimilarities between genes and bicluster centers within all biclusters, the objective function of the KBC algorithm additionally takes into account how to minimize the residues within all biclusters based on the mean square residue model. In addition, the objective function maximizes the entropy of conditions to encourage more conditions to contribute to the identification of biclusters. The KBC algorithm adopts a K-means-type clustering process to efficiently optimize the partition into K biclusters. A set of experiments on a practical microarray dataset demonstrates the performance of the proposed KBC algorithm.
Keywords: Microarray, Biclustering, Subspace clustering, Mean square residue model.
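For context, the mean square residue used in the objective function is the Cheng-Church residue of a bicluster sub-matrix; a minimal sketch (the example block is synthetic):

```python
import numpy as np

def mean_squared_residue(submatrix):
    """Mean squared residue (Cheng-Church) of a bicluster sub-matrix."""
    row_means = submatrix.mean(axis=1, keepdims=True)
    col_means = submatrix.mean(axis=0, keepdims=True)
    overall = submatrix.mean()
    residue = submatrix - row_means - col_means + overall
    return float((residue**2).mean())

# a perfectly additive (coherent) bicluster has zero residue
block = np.add.outer([0.0, 1.0, 2.0], [10.0, 20.0, 30.0])
print(mean_squared_residue(block))                   # 0.0
print(mean_squared_residue(np.random.rand(3, 3)))    # > 0 for random noise
```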
1815 New Multisensor Data Fusion Method Based on Probabilistic Grids Representation
Authors: Zhichao Zhao, Yi Liu, Shunping Xiao
Abstract:
A new data fusion method called the joint probability density matrix (JPDM) is proposed, which can associate and fuse measurements from spatially distributed heterogeneous sensors to identify the real target in a surveillance region. Using the probabilistic grids representation, we numerically combine the uncertainty regions of all the measurements in a general framework. The NP-hard multisensor data fusion problem is thereby converted to a peak-picking problem on the grid map. Unlike most existing data fusion methods, the JPDM method does not need association processing and will not lead to combinatorial explosion. Its convergence to the CRLB with a diminishing grid size has been proved. Simulation results are presented to illustrate the effectiveness of the proposed technique.
Keywords: Cramer-Rao lower bound (CRLB), data fusion, probabilistic grids, joint probability density matrix, localization, sensor network.
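A toy sketch of the grid-based fusion idea: each sensor contributes a per-cell likelihood over the surveillance grid, the grids are multiplied, and the peak of the fused grid gives the target estimate. The range-only sensors, Gaussian noise model, grid resolution and geometry are all assumptions.

```python
import numpy as np

# surveillance region discretized into a 100 x 100 probabilistic grid
x = np.linspace(0.0, 100.0, 100)
X, Y = np.meshgrid(x, x)

def likelihood_grid(sensor_pos, measured_range, sigma):
    """Per-cell likelihood of the target location given a noisy range measurement."""
    dist = np.hypot(X - sensor_pos[0], Y - sensor_pos[1])
    return np.exp(-0.5 * ((dist - measured_range) / sigma)**2)

# hypothetical sensors measuring range to a target near (60, 40)
target = np.array([60.0, 40.0])
sensors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
joint = np.ones_like(X)
for pos in sensors:
    true_range = np.hypot(target[0] - pos[0], target[1] - pos[1])
    joint *= likelihood_grid(pos, true_range + np.random.normal(0, 1.0), sigma=2.0)

# peak picking on the fused grid
i, j = np.unravel_index(joint.argmax(), joint.shape)
print("estimated target position:", X[i, j], Y[i, j])
```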
1814 Similarity Measures and Weighted Fuzzy C-Mean Clustering Algorithm
Authors: Bainian Li, Kongsheng Zhang, Jian Xu
Abstract:
In this paper, we study the fuzzy c-means clustering algorithm combined with the principal components method. A demonstrative analysis indicates that the new clustering method performs better than some existing clustering algorithms. We also consider the validity of the clustering method.
Keywords: FCM algorithm, Principal Components Analysis, Cluster validity
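The abstract does not spell out how the two methods are combined, so purely as an illustration: a plain fuzzy c-means run on PCA-projected data. The number of clusters, the fuzzifier m, and the choice to cluster the projected scores are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: returns cluster centers and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U**m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d**(2 / (m - 1)) * np.sum(d**(-2 / (m - 1)), axis=1, keepdims=True))
    return centers, U

# hypothetical data: project onto principal components first, then cluster
X = np.random.default_rng(2).normal(size=(300, 6))
X_pca = PCA(n_components=2).fit_transform(X)
centers, U = fuzzy_c_means(X_pca, c=3)
labels = U.argmax(axis=1)
```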
1813 Compromise Ratio Method for Decision Making under Fuzzy Environment using Fuzzy Distance Measure
Authors: Debashree Guha, Debjani Chakraborty
Abstract:
The aim of this paper is to adopt a compromise ratio (CR) methodology for fuzzy multi-attribute single-expert decision making problems. In this paper, the rating of each alternative is described by linguistic terms, which can be expressed as triangular fuzzy numbers. The compromise ratio method for fuzzy multi-attribute single-expert decision making is considered here by taking a ranking index based on the concept that the chosen alternative should be as close as possible to the ideal solution and, simultaneously, as far away as possible from the negative-ideal solution. From a logical point of view, the distance between two triangular fuzzy numbers is also a fuzzy number, not a crisp value. Therefore a fuzzy distance measure, which is itself a fuzzy number, is used here to calculate the difference between two triangular fuzzy numbers. With the help of this fuzzy distance measure, it is shown that the compromise ratio is a fuzzy number, and this eases the decision maker's task of taking the decision. The computation principle and the procedure of the compromise ratio method are described in detail in this paper. A comparative analysis of the compromise ratio method previously proposed [1] and the newly adopted method is illustrated with two numerical examples.
Keywords: Compromise ratio method, Fuzzy multi-attribute single-expert decision making, Fuzzy number, Linguistic variable
1812 Prerequisites to Increase the Purchase Intent for a Socially Responsible Company – Development of a Scale
Authors: Tutku Eker Iscioglu
Abstract:
Increasing attention has been given in academia to the concept of corporate social responsibility. Also, the number of companies that undertake social responsibility initiatives has been growing day by day, since behaving in a socially responsible manner brings many benefits to companies. The literature describes various benefits of social responsibility and the situations under which these benefits can be realized. However, most of these studies focus on one aspect of the consequences of behaving in a socially responsible manner, and there is no study that unifies the conditions that a company should fulfill to make customers prefer its brand. This study aims to fill this gap. More specifically, the purpose of this study is to identify the conditions that a socially responsible company should fulfill in order to attract customers. To this end, a scale is developed and its reliability and validity are assessed through the Multitrait-Multimethod Matrix method.
Keywords: Consumers, Corporate Social Responsibility, Multitrait-Multimethod Matrix, Scale Development.
1811 Humor Roles of Females in a Product Color Matrix
Authors: Jin-Tsann Yeh, Chyong-Ling Lin
Abstract:
Healthcare providers sometimes use the power of humor as a treatment and therapy for buffering mental health or easing mental disorders, because humor can provide relief from distress and conflict. Humor is also very suitable for advertising because of similar benefits. This study carefully examines humor's widespread use in advertising and identifies relationships among humor mechanisms, female depictions, and product types. The purpose is to conceptualize how humor theories can be used not only to successfully define a product as fitting within one of the four color categories of the product color matrix, but also to identify compelling contemporary female depictions through humor in ads. The results can offer guidance for marketing managers and consumers, helping them understand how female role depictions can be effectively used with humor in ads. The four propositions developed herein are derived from the related literature, through the identification of marketing strategy formulations that achieve product memory enhancement by adopting humor mechanisms properly matched with female role depictions.
Keywords: Humor mechanisms, Female role depiction, Product types.
1810 Study of Tribological Behaviour of Al6061/Silicon Carbide/Graphite Hybrid Metal Matrix Composite Using Taguchi's Techniques
Authors: Mohamed Zakaulla, A. R. Anwar Khan
Abstract:
An Al6061 alloy base matrix reinforced with particles of silicon carbide (10 wt%) and graphite powder (1 wt%), known as a hybrid composite, has been fabricated by the liquid metallurgy route (stir casting technique) and optimized at different parameters such as applied load, sliding speed and sliding distance by the Taguchi method. A plan of experiments generated through the Taguchi technique was used to perform experiments based on an L27 orthogonal array. The developed ANOVA and regression equations are used to find the optimum coefficient of friction and wear under the influence of applied load, sliding speed and sliding distance. On the basis of the "smaller the better" criterion, the dry sliding wear resistance was analysed, and finally confirmation tests were carried out to verify the experimental results.
Keywords: Analysis of variance, dry sliding wear, Hybrid composite, orthogonal array, Taguchi technique.
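For reference, the "smaller the better" signal-to-noise ratio used in Taguchi analysis is computed as below; the wear values in the example are made up.

```python
import numpy as np

def sn_smaller_the_better(responses):
    """Taguchi signal-to-noise ratio for a 'smaller the better' response."""
    y = np.asarray(responses, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# hypothetical wear measurements (e.g. volume loss) for two L27 trial runs;
# a higher S/N ratio means less wear, i.e. a better parameter combination
print(sn_smaller_the_better([0.012, 0.014, 0.013]))
print(sn_smaller_the_better([0.025, 0.027, 0.026]))
```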
1809 Experimental Characterization of the Shear Behavior of Fiber Reinforced Concrete Beam Elements in Chips
Authors: Djamal Atlaoui, Youcef Bouafia
Abstract:
This work deals with the experimental study, by shear tests (shear fracture), of the mechanical behavior of concrete beam elements reinforced with chip fibers. These fibers come from the machining waste of steel parts. The shear tests are carried out on prismatic specimens of dimensions 10 x 20 x 120 cm³. The fibers are characterized by their mechanical strength and tearing resistance. The optimal composition of the concrete was determined by the workability test. Two fiber contents are selected for this study (W = 0.6% and W = 0.8%), and a control concrete BT (W = 0%) of the same composition as the matrix is developed to serve as a reference, with a sand-to-gravel ratio (S/G) of the concrete matrix equal to 1. The comparison of the different results obtained shows that the chip fibers confer significant ductility on the material after cracking of the concrete. Also, the fibers used limit diagonal shear cracks and improve strength and rigidity.
Keywords: Characterization, chips fibers, cracking mode, ductility, undulation, shear.
1808 A New Weighted LDA Method in Comparison to Some Versions of LDA
Authors: Delaram Jarchi, Reza Boostani
Abstract:
Linear Discriminant Analysis (LDA) is a linear solution for the classification of two classes. In this paper, we propose a variant LDA method for the multi-class problem which redefines the between-class and within-class scatter matrices by incorporating a weight function into each of them. The aim is to separate the classes as much as possible; when one class is already well separated from the other classes, that class should have little influence on the classification. It has been suggested to alleviate the influence of well-separated classes by adding a weight into the between-class scatter matrix and the within-class scatter matrix. To obtain a simple and effective weight function, ordinary LDA between every pair of classes is used to find the Fisher discrimination value, which is passed as an input into two weight functions that redefine the between-class and within-class scatter matrices. Experimental results showed that our new LDA method improved the classification rate on the glass, iris and wine datasets in comparison to different versions of LDA.
Keywords: Discriminant vectors, weighted LDA, uncorrelation, principal components, Fisher-face method, Bootstrap method.
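The paper's exact weight functions are not given in the abstract, so purely as an illustration of the general idea (down-weighting well-separated class pairs in the between-class scatter), here is a sketch under an assumed inverse-distance weighting.

```python
import numpy as np

def weighted_scatter_matrices(X, y, weight=lambda d2: 1.0 / (d2 + 1e-12)):
    """Pairwise-weighted between-class scatter and unweighted within-class scatter.

    `weight` maps a pairwise Fisher-like separation to a weight, so that well
    separated class pairs influence the projection less (assumed rule).
    """
    classes = np.unique(y)
    dim = X.shape[1]
    Sw = np.zeros((dim, dim))
    means, priors = {}, {}
    for c in classes:
        Xc = X[y == c]
        means[c], priors[c] = Xc.mean(axis=0), len(Xc) / len(X)
        Sw += np.cov(Xc, rowvar=False, bias=True) * len(Xc)
    Sb = np.zeros((dim, dim))
    for i, ci in enumerate(classes):
        for cj in classes[i + 1:]:
            diff = (means[ci] - means[cj])[:, None]
            d2 = (diff.T @ np.linalg.pinv(Sw) @ diff).item()   # pairwise separation
            Sb += priors[ci] * priors[cj] * weight(d2) * (diff @ diff.T)
    return Sw, Sb

# discriminant vectors: leading eigenvectors of pinv(Sw) @ Sb on toy data
X = np.random.default_rng(3).normal(size=(90, 4))
y = np.repeat([0, 1, 2], 30)
Sw, Sb = weighted_scatter_matrices(X, y)
eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
W = eigvecs[:, np.argsort(eigvals.real)[::-1][:2]].real
```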
1807 Analytical Solution for Free Vibration of Rectangular Kirchhoff Plate from Wave Approach
Authors: Mansour Nikkhah-Bahrami, Masih Loghmani, Mostafa Pooyanfar
Abstract:
In this paper, an analytical approach for the free vibration analysis of rectangular Kirchhoff plates simply supported on all four edges is presented. The method is based on the wave approach: from the wave standpoint, vibrations propagate, reflect and transmit in a structure. First, the propagation and reflection matrices for a plate with simply supported boundary conditions are derived. Then, these matrices are combined to provide a concise and systematic approach to the free vibration analysis of a simply supported rectangular Kirchhoff plate. Subsequently, the eigenvalue problem for the free vibration of plates is formulated and the equation for the plate natural frequencies is constructed. Finally, the effectiveness of the approach is shown by comparison of the results with the existing classical solution.
Keywords: Kirchhoff plate, propagation matrix, reflection matrix, vibration analysis.