Search results for: mixed method analysis
11606 Feasibility Investigation of Near Infrared Spectrometry for Particle Size Estimation of Nano Structures
Authors: A. Bagheri Garmarudi, M. Khanmohammadi, N. Khoddami, K. Shabani
Abstract:
Determination of nanoparticle size is important, since particle size exerts a significant effect on various properties of nanomaterials. Accordingly, proposing non-destructive, accurate and rapid techniques for this purpose is of high interest. There are some conventional techniques for investigating the morphology and grain size of nanoparticles, such as scanning electron microscopy (SEM), atomic force microscopy (AFM) and X-ray diffractometry (XRD). Vibrational spectroscopy is utilized to characterize different compounds and has been applied for evaluation of the average particle size based on the relationship between particle size and near infrared spectra [1,4], but it has never been applied in quantitative morphological analysis of nanomaterials. So far, the potential application of near-infrared (NIR) spectroscopy, with its ability to analyze powdered materials rapidly with minimal sample preparation, has been suggested for particle size determination of powdered pharmaceuticals. The relationship between particle size and diffuse reflectance (DR) spectra in the near infrared region has been applied to introduce a method for estimation of particle size. A back propagation artificial neural network (BP-ANN) as a nonlinear model was applied to estimate average particle size based on near infrared diffuse reflectance spectra. Thirty-five nano-TiO2 samples with different particle sizes were analyzed by DR-FTNIR spectrometry and the obtained data were processed by BP-ANN.
Keywords: near infrared, particle size, chemometrics, neural network, nano structure.
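A minimal sketch of the BP-ANN regression step described above, assuming synthetic diffuse-reflectance spectra and an arbitrary particle-size range (this is not the authors' DR-FTNIR data or network configuration):

```python
# Hypothetical sketch: BP-ANN regression of particle size from NIR spectra.
# The spectra and particle sizes below are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 35, 200                    # 35 samples, as in the study
particle_size = rng.uniform(20, 120, n_samples)       # nm (assumed range)
baseline = np.linspace(0.2, 0.8, n_wavelengths)
# Particle size modulates the simulated baseline offset of each DR spectrum.
spectra = (baseline + 0.002 * particle_size[:, None]
           + 0.01 * rng.standard_normal((n_samples, n_wavelengths)))

X_train, X_test, y_train, y_test = train_test_split(
    spectra, particle_size, test_size=0.3, random_state=0)

# Back-propagation network (multi-layer perceptron) as the nonlinear model.
bp_ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
bp_ann.fit(X_train, y_train)
print("R^2 on held-out spectra:", bp_ann.score(X_test, y_test))
```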
11605 An Auxiliary Technique for Coronary Heart Disease Prediction by Analyzing ECG Based on ResNet and Bi-LSTM
Authors: Yang Zhang, Jian He
Abstract:
Heart disease is one of the leading causes of death in the world, and coronary heart disease (CHD) is one of the major heart diseases. The electrocardiogram (ECG) is widely used in the detection of heart diseases, but the traditional manual method of CHD prediction from the ECG requires extensive professional knowledge from doctors. This paper presents a sliding window and the continuous wavelet transform (CWT) to transform ECG signals into images; ResNet and Bi-LSTM are then introduced to build the ECG feature extraction network (ECGNet). Finally, an auxiliary system for CHD prediction was developed based on a modified ResNet18 and Bi-LSTM, and the public CHD ECG dataset from MIMIC-III was used to train and test the system. The experimental results show that the accuracy of the method is 83%, and the F1-score is 83%. Compared with the available methods for CHD prediction based on ECG, such as kNN, decision tree, VGGNet, etc., this method not only improves the prediction accuracy but also avoids the degradation phenomenon of deep networks.
Keywords: Bi-LSTM, CHD, coronary heart disease, ECG, electrocardiogram, ResNet, sliding window.
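The preprocessing described above (sliding-window segmentation plus CWT to obtain image-like inputs) can be sketched roughly as follows; the ECG trace, window length, wavelet and scales are all assumptions, not the paper's settings:

```python
# Hypothetical sketch: sliding-window segmentation of an ECG trace and CWT
# scalogram generation, loosely mirroring the preprocessing described above.
import numpy as np
import pywt

fs = 250                                        # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)  # stand-in ECG

window, step = 5 * fs, 2 * fs                   # 5 s windows, 2 s stride (assumed)
scales = np.arange(1, 65)

scalograms = []
for start in range(0, ecg.size - window + 1, step):
    segment = ecg[start:start + window]
    coeffs, _ = pywt.cwt(segment, scales, "morl", sampling_period=1 / fs)
    scalograms.append(np.abs(coeffs))           # 2-D array usable as a network input image

print(len(scalograms), "scalograms of shape", scalograms[0].shape)
```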
11604 Experimental Chevreul’s Salt Production Methods on Copper Recovery
Authors: Turan Çalban, Oral Laçin, Abdüsselam Kurtbas
Abstract:
Experimental production methods of Chevreul’s salt, an intermediate-stage product in copper recovery, were investigated in this article. Chevreul’s salt, Cu2SO3·CuSO3·2H2O, a mixed-valence copper sulphite compound, has been obtained using different methods and reagents. Chevreul’s salt has an intense brick-red color. It is highly stable and expensive. The production of Chevreul’s salt plays a key role in hydrometallurgy. The thermodynamic tendency toward precipitation of Chevreul’s salt depends on pH and temperature. Moreover, gaseous SO2 is a versatile reagent for precipitating copper sulphites; selective precipitation with SO2 can be achieved by appropriate adjustment of pH and temperature. Chevreul’s salt does not form in acidic solutions that contain a considerable amount of sulfurous acid. The pH must be maintained between 2 and 4.5, because the solubility of Chevreul’s salt increases as pH decreases. The region in which Chevreul’s salt is stable can also be seen from the potential-pH diagram.
Keywords: Chevreul’s salt, copper recovery, copper sulphite, stage product.
11603 Optimal Distributed Generator Sizing and Placement by Analytical Method and PSO Algorithm Considering Optimal Reactive Power Dispatch
Authors: Kyaw Myo Lin, Pyone Lai Swe, Khine Zin Oo
Abstract:
In this paper, an approach combining an analytical method for distributed generator (DG) sizing with a meta-heuristic search for the optimal location of the DG is presented. The optimal size of the DG at each bus is estimated by the loss sensitivity factor method, while the optimal sites are determined by Particle Swarm Optimization (PSO) based optimal reactive power dispatch for minimizing active power loss. To confirm the proposed approach, it has been tested on the IEEE 30-bus test system. Adjustments of operating constraints and improvements in the voltage profile have also been observed. The obtained results show that the allocation of DGs results in a significant loss reduction with good voltage profiles, and the combined approach is competent in keeping the system voltages within acceptable limits.
Keywords: Analytical approach, distributed generations, optimal size, optimal location, optimal reactive power dispatch, particle swarm optimization algorithm.
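A minimal sketch of the PSO search for DG siting; the fitness function below is only a placeholder for the active-power-loss objective that the paper obtains from the optimal reactive power dispatch, and the swarm settings are assumed:

```python
# Hypothetical sketch: each particle encodes a candidate DG bus (continuous,
# later rounded); the fitness stands in for "run OPF with DG at that bus and
# return the active power loss".
import numpy as np

rng = np.random.default_rng(1)

def loss_placeholder(x):
    # Placeholder objective; a real study would evaluate the OPF loss here.
    return (x - 17.3) ** 2 + 5.0

n_particles, n_iter = 20, 50
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration coefficients
pos = rng.uniform(1, 30, n_particles)          # candidate buses of a 30-bus system
vel = np.zeros(n_particles)
pbest = pos.copy()
pbest_val = np.array([loss_placeholder(p) for p in pos])
gbest = pbest[pbest_val.argmin()]

for _ in range(n_iter):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 1, 30)
    val = np.array([loss_placeholder(p) for p in pos])
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()]

print("best candidate bus:", round(float(gbest)))
```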
11602 Accuracy of Small Field of View CBCT in Determining Endodontic Working Length
Authors: N. L. S. Ahmad, Y. L. Thong, P. Nambiar
Abstract:
An in vitro study was carried out to evaluate the feasibility of small field of view (FOV) cone beam computed tomography (CBCT) in determining endodontic working length. The objectives were to determine the accuracy of CBCT in measuring the estimated preoperative working lengths (EPWL), endodontic working lengths (EWL) and file lengths. Access cavities were prepared in 27 molars. For each root canal, the baseline electronic working length was determined using an EAL (Raypex 5). The teeth were then divided into overextended, non-modified and underextended groups and the lengths were adjusted accordingly. Imaging and measurements were made using the respective software of the RVG (Kodak RVG 6100) and CBCT units (Kodak 9000 3D). Root apices were then shaved and the apical constrictions viewed under magnification to measure the control working lengths. The paired t-test showed a statistically significant difference between the CBCT EPWL and the control length, but the difference was too small to be clinically significant. From the Bland-Altman analysis, the CBCT method had the widest 95% limits of agreement, reflecting its greater potential for error. In measuring file lengths, RVG had wider 95% limits of agreement than CBCT. Conclusions: (1) The clinically insignificant underestimation of the preoperative working length using small FOV CBCT shows that it is acceptable for estimating the preoperative working length. (2) Small FOV CBCT may be used in working length determination, but it is not as accurate as the currently practiced method of using the EAL. (3) It is more accurate than RVG in measuring file lengths.
Keywords: Accuracy, CBCT, endodontic, measurement.
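The paired t-test and Bland-Altman limits of agreement used above can be computed as in the sketch below; the measurement values are invented placeholders, not the study's data:

```python
# Hypothetical sketch of the statistical comparison: paired t-test and
# Bland-Altman 95% limits of agreement (measurements in millimetres, invented).
import numpy as np
from scipy import stats

cbct = np.array([20.1, 19.8, 21.4, 18.9, 20.6, 19.5])      # CBCT EPWL (example values)
control = np.array([20.3, 19.9, 21.6, 19.2, 20.8, 19.6])   # control working lengths

t_stat, p_value = stats.ttest_rel(cbct, control)

diff = cbct - control
bias = diff.mean()
loa_low = bias - 1.96 * diff.std(ddof=1)
loa_high = bias + 1.96 * diff.std(ddof=1)

print(f"paired t-test: t={t_stat:.2f}, p={p_value:.3f}")
print(f"bias={bias:.2f} mm, 95% limits of agreement=({loa_low:.2f}, {loa_high:.2f}) mm")
```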
11601 Research Trend Analysis – A Sample in the Field of Information Systems
Authors: Hei-Chia Wang, Wei-Pin Chiu
Abstract:
As research performance in academia is treated as one of the indices of national competency, many countries devote considerable attention and resources to increasing their research performance. Understanding research trends is the basic step toward improving research performance. The goal of this research is to design an analysis system that evaluates research trends from data for different countries. In this paper, information systems research in Taiwan and other countries, including Asian countries and prominent countries represented by the Group of Eight (G8), is used as an example. Our research found that the trends vary across countries. It suggests that Taiwan's scholars could pay more attention to interdisciplinary applications and try to increase their collaboration with other countries, in order to increase Taiwan's competency in the area of information science.
Keywords: Bibliometric analysis, research trend, scientometric analysis.
11600 The Analysis of Nanoptenna for Extreme Fast Communication (XFC) over Short Distance
Authors: Shruti Taksali
Abstract:
This paper focuses on the analysis of a nanoptenna for extreme fast communication. The nanoptenna is essentially a nano antenna designed for communication at optical frequencies. Since this range of frequencies includes the visible spectrum of light, there is a high possibility of data transfer at high rates and extreme fast communication (XFC). The shape chosen for the analysis is a bow-tie structure, owing to its electric field enhancement characteristics.
Keywords: Nanoptenna, communication, optical range, XFC.
11599 Stop Consonants in Chinese and Slovak: Contrastive Analysis by Using Praat
Authors: Maria Istvanova
Abstract:
The acquisition of correct pronunciation in Chinese is closely linked to the initial phase of study. Based on a contrastive analysis, we determine the differences in the pronunciation of stop consonants in Chinese and Slovak, taking into consideration the place and manner of articulation, to gain a better understanding of the students' main difficulties in acquiring correct pronunciation of Chinese stop consonants. We employ the software Praat for the analysis of the recorded samples, with an emphasis on the pronunciation of students with a varying command of Chinese. The comparison of voice onset time (VOT) length for the individual consonants in the students' pronunciation and the pronunciation of a native speaker exposes the differences between the correct pronunciation and the students' deviant pronunciation.
Keywords: Chinese, contrastive analysis, Praat, pronunciation, Slovak.
11598 Urbanization and Income Inequality in Thailand
Authors: Acumsiri Tantiakrnpanit
Abstract:
This paper aims to examine the relationship between urbanization and income inequality in Thailand during the period 2002–2020, using a panel of data for 76 provinces collected from Thailand’s National Statistical Office (Labor Force Survey: LFS), as well as geospatial data from the U.S. Air Force Defense Meteorological Satellite Program (DMSP) and the Visible Infrared Imaging Radiometer Suite Day/Night band (VIIRS-DNB) satellite for 19 selected years. This paper employs two different definitions to identify urban areas: 1) Urban areas defined by Thailand's National Statistical Office (LFS), and 2) Urban areas estimated using nighttime light data from the DMSP and VIIRS-DNB satellite. The second method includes two sub-categories: 2.1) Determining urban areas by calculating nighttime light density with a population density of 300 people per square kilometer, and 2.2) Calculating urban areas based on nighttime light density corresponding to a population density of 1,500 people per square kilometer. The empirical analysis based on Ordinary Least Squares (OLS), fixed effects, and random effects models reveals a consistent U-shaped relationship between income inequality and urbanization. The findings from the econometric analysis demonstrate that urbanization or population density has a significant and negative impact on income inequality. Moreover, the square of urbanization shows a statistically significant positive impact on income inequality. Additionally, there is a negative association between logarithmically transformed income and income inequality. This paper also proposes the inclusion of satellite imagery, geospatial data, and spatial econometric techniques in future studies to conduct quantitative analysis of spatial relationships.
Keywords: Income inequality, nighttime light, population density, Thailand, urbanization.
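The quadratic (U-shaped) specification described above can be illustrated with a minimal regression sketch; the panel below is simulated, and entering province fixed effects as dummies only approximates the paper's full set of OLS, fixed effects, and random effects estimators:

```python
# Hypothetical sketch: income inequality regressed on urbanization, its square,
# and log income, with province fixed effects. The panel data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
provinces, years = 76, 19
df = pd.DataFrame({
    "province": np.repeat(np.arange(provinces), years),
    "urban": rng.uniform(0, 1, provinces * years),
    "log_income": rng.normal(10, 0.5, provinces * years),
})
# Simulated Gini with a U-shaped dependence on urbanization.
df["gini"] = (0.5 - 0.3 * df["urban"] + 0.3 * df["urban"] ** 2
              - 0.01 * df["log_income"] + rng.normal(0, 0.02, len(df)))

# Province dummies give a simple within-style (fixed effects) specification.
model = smf.ols("gini ~ urban + I(urban**2) + log_income + C(province)", data=df).fit()
print(model.params.filter(regex="urban|log_income"))
```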
11597 Basicity of Jordanian Natural Clays Studied by Pyrrole-TPD and Catalytic Conversion of Methylbutynol
Authors: M. Z. Alsawalha
Abstract:
The main objective of this study is to investigate the basic properties of different natural clays by two methods. The first method is the gas-phase conversion of methylbutynol (MBOH). The second method is the application of pyrrole temperature-programmed desorption (pyrrole-TPD). Based on the product distribution from the first method, the acidic, basic and coordinatively unsaturated sites were differentiated. It was shown that neither the conversion nor the selectivity for basic products changed with reaction time. Nevertheless, a deviation from the stoichiometric ratio R of formed acetylene to acetone was observed (R = 0.8–0.97). The conversion normalized to the surface area was used to establish the activity sequence: white kaolinite > red kaolinite > bentonite > zeolite > diatomite. In addition, the results were compared with synthetic amorphous alumosilicates and typical basic materials such as MgO and ZnO. The basic properties were also characterized using pyrrole-TPD, and the pyrrole-TPD results showed the same basicity sequence as the MBOH gas-phase reaction.
Keywords: Alumosilicates, basic surface properties, natural clays, normalized conversions with acetylene and acetone, pyrrole-TPD adsorption.
11596 Rotor Side Speed Control Methods Using MATLAB/Simulink for Wound Induction Motor
Authors: Rajesh Kumar, Roopali Dogra, Puneet Aggarwal
Abstract:
With recent advancements in electric machines and drives, the wound rotor induction motor is extensively used. The merit of the wound rotor induction motor is that its speed/torque characteristics can be controlled by inserting external rotor resistance. Wound rotor induction motors suit applications that require (a) low inrush current, (b) high starting torque, (c) low starting current, (d) handling of high-inertia loads, and (e) gradual build-up of torque. Examples include conveyors, cranes, pumps, elevators, and compressors. This paper presents speed control of the wound rotor induction motor using MATLAB/Simulink for the rotor resistance and slip power recovery methods, and the characteristics of these speed control methods are analyzed.
Keywords: Wound rotor induction motor, MATLAB/Simulink, rotor resistance method, slip power recovery method.
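A minimal sketch of the idea behind the rotor resistance method: the steady-state torque-slip curve is recomputed for several values of external rotor resistance. The machine parameters are illustrative only, not those of the simulated motor:

```python
# Hypothetical sketch: torque-slip curves of a wound rotor induction motor for
# several external rotor resistances (all electrical parameters are assumed).
import numpy as np

V, f, poles = 400.0, 50.0, 4                 # phase voltage, frequency, pole count (assumed)
R1, X1, R2, X2 = 0.6, 1.2, 0.4, 1.2          # stator/rotor parameters in ohms (assumed)
w_sync = 2 * np.pi * f * 2 / poles           # synchronous mechanical speed, rad/s

slip = np.linspace(0.001, 1.0, 500)

def torque(s, r_ext):
    # Standard steady-state torque expression with total rotor resistance R2 + r_ext.
    r2_total = R2 + r_ext
    return (3 * V**2 * r2_total / s) / (w_sync * ((R1 + r2_total / s) ** 2 + (X1 + X2) ** 2))

for r_ext in (0.0, 0.5, 1.0, 2.0):           # external resistance steps
    T = torque(slip, r_ext)
    print(f"R_ext={r_ext:0.1f} ohm: peak torque {T.max():6.1f} N*m at slip {slip[T.argmax()]:.2f}")
```

The printout illustrates the textbook behaviour exploited by this control method: the peak (breakdown) torque stays essentially constant while the slip at which it occurs increases with the added rotor resistance.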
11595 Increasing the Speed of Convergence of an Artificial Neural Network Based ARMA Coefficients Determination Technique
Authors: Abiodun M. Aibinu, Momoh J. E. Salami, Amir A. Shafie, Athaur Rahman Najeeb
Abstract:
In this paper, novel techniques for increasing the accuracy and speed of convergence of a feedforward backpropagation artificial neural network (FFBPNN) with the polynomial activation function reported in the literature are presented. These techniques were subsequently used to determine the coefficients of autoregressive moving average (ARMA) and autoregressive (AR) systems. Introducing sequential and batch methods of weight initialization, a batch method of weight and coefficient update, and adaptive momentum and learning rate techniques gives more accurate results and a significant reduction in convergence time compared to the traditional backpropagation algorithm, thereby making the FFBPNN an appropriate technique for online ARMA coefficient determination.
Keywords: Adaptive learning rate, adaptive momentum, autoregressive, modeling, neural network.
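A minimal sketch of the underlying idea, estimating AR coefficients with a backpropagation-style update that includes a momentum term; the paper's polynomial-activation FFBPNN, batch initialization, and adaptive rules are not reproduced here, and the AR(2) process below is synthetic:

```python
# Hypothetical sketch: gradient descent with momentum recovering the
# coefficients of a synthetic AR(2) process (a stand-in for the FFBPNN scheme).
import numpy as np

rng = np.random.default_rng(3)
true_a = np.array([0.6, -0.3])               # AR(2) coefficients to recover
y = np.zeros(500)
for n in range(2, y.size):
    y[n] = true_a @ y[n-2:n][::-1] + rng.standard_normal()

X = np.column_stack([y[1:-1], y[:-2]])       # regressors y[n-1], y[n-2]
target = y[2:]

w = np.zeros(2)                              # "network weights" = AR coefficients
velocity = np.zeros(2)
lr, momentum = 0.05, 0.9
for epoch in range(300):
    grad = -2 * X.T @ (target - X @ w) / len(target)   # mean-squared-error gradient
    velocity = momentum * velocity - lr * grad          # momentum accelerates convergence
    w = w + velocity

print("estimated AR coefficients:", np.round(w, 3))
```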
11594 The Study on Migration Strategy of Legacy System
Authors: Chao Qi, Fuyang Peng, Bo Deng, Xiaoyan Su
Abstract:
In the upgrade process of enterprise information systems, whether new systems will succeed and whether their development will be efficient depends on how the existing legacy systems are handled and utilized. We propose an evaluation system that comprehensively describes the capacity of legacy information systems in five aspects, and a practical legacy system evaluation method is then described. Based on the evaluation results, we put forward four kinds of migration strategy: elimination, maintenance, modification, and encapsulation. The methods and strategies play important roles in practice.
Keywords: Legacy systems, evaluation method, migration strategy.
11593 More Realistic Model for Simulating Min Protein Dynamics: Lattice Boltzmann Method Incorporating the Role of Nucleoids
Authors: J. Yojina, W. Ngamsaad, N. Nuttavut, D. Triampo, Y. Lenbury, W. Triampo, P. Kanthang, S. Sriyab
Abstract:
The dynamics of Min proteins plays a central role in accurate cell division. Although the nucleoid presumably plays an important role in prokaryotic cell division, there is a lack of models that account for its participation. In this work, we apply the lattice Boltzmann method to investigate protein oscillation based on a mesoscopic model that takes the nucleoid's role into account. We found that our numerical results are in reasonably good agreement with previous experimental results. Compared with other computational models that do not include nucleoids, the highlight of our finding is that the local densities of MinD and MinE on the cytoplasmic membrane increase, especially along the cell width, when the size of the obstacle increases, leading to a more distinct cap-like structure at the poles. This feature indicates a realistic pattern and reflects the combination of Min protein dynamics and the nucleoid's role.
Keywords: lattice Boltzmann method, cell division, Min protein oscillation, nucleoid
11592 HPM Solution of Momentum Equation for Darcy-Brinkman Model in a Parallel Plates Channel Subjected to Lorentz Force
Authors: Asghar Shirazpour, Seyed Moein Rassoulinejad Mousavi, Hamid Reza Seyf
Abstract:
In this paper, an analytical solution is presented for fully developed flow in a parallel plates channel under the action of the Lorentz force, by use of the Homotopy Perturbation Method (HPM). The analytical results are compared with the exact solution, and excellent agreement is observed between them for both Couette and Poiseuille flows. Moreover, the effects of the key parameters on the dimensionless velocity profile have been studied.
Keywords: Lorentz Force, Porous Media, Homotopy Perturbation method
11591 Jamun Juice Extraction Using Commercial Enzymes and Optimization of the Treatment with the Help of Physicochemical, Nutritional and Sensory Properties
Authors: Payel Ghosh, Rama Chandra Pradhan, Sabyasachi Mishra
Abstract:
Jamun (Syzygium cuminii L.) is one of the important indigenous minor fruits with high medicinal value. Jamun cultivation is unorganized, and a huge quantity of this fruit is lost every year. The perishable nature of the fruit makes its postharvest management even more difficult. Due to the strong pectin-protein bonds of the cell wall structure and the hard seeds, juice extraction is difficult. Enzymatic treatment has been used commercially to improve juice quality and yield. The objective of the study was to optimize the best treatment method for juice extraction. Enzymes (pectinase and tannase) from different strains were used, and for each enzyme the best result was obtained using response surface methodology. Optimization was done on the basis of physicochemical properties, nutritional properties, sensory quality and cost estimation. According to the quality aspects, cost analysis and sensory evaluation, the optimal enzymatic treatment was obtained with pectinase from an Aspergillus aculeatus strain. The optimum condition for the treatment was 44 °C for 80 minutes at a concentration of 0.05% (w/w). Under these conditions, a yield of 75%, turbidity of 32.21 NTU, clarity of 74.39 %T, polyphenol content of 115.31 mg GAE/g and protein content of 102.43 mg/g were obtained, with a significant difference in overall acceptability.
Keywords: Jamun, enzymatic treatment, physicochemical property, sensory analysis, optimization.
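A minimal sketch of the response surface methodology step: fitting a second-order model of yield versus temperature, time and enzyme dose and locating its optimum. The design points and yields are fabricated for illustration, not the experimental data:

```python
# Hypothetical sketch: second-order response surface fit and optimum search
# for juice yield as a function of temperature, time and enzyme dose.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
temp = rng.uniform(30, 60, 30)       # degC
time = rng.uniform(40, 120, 30)      # min
dose = rng.uniform(0.01, 0.10, 30)   # % w/w
# Fabricated yield with a maximum near 44 degC, 80 min, 0.05 %.
yield_ = (75 - 0.05 * (temp - 44) ** 2 - 0.002 * (time - 80) ** 2
          - 4000 * (dose - 0.05) ** 2 + rng.normal(0, 0.5, 30))

def design(t, m, d):
    # Full quadratic model: intercept, linear, interaction and squared terms.
    return np.column_stack([np.ones_like(t), t, m, d, t*m, t*d, m*d, t**2, m**2, d**2])

beta, *_ = np.linalg.lstsq(design(temp, time, dose), yield_, rcond=None)

def neg_yield(x):
    row = design(np.array([x[0]]), np.array([x[1]]), np.array([x[2]]))[0]
    return -(row @ beta)

opt = minimize(neg_yield, x0=[45, 80, 0.05], bounds=[(30, 60), (40, 120), (0.01, 0.10)])
print("predicted optimum (temp, time, dose):", np.round(opt.x, 3))
```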
11590 Multicriteria Synthesis of a Polycentric Knee Prosthesis for Transfemoral Amputees
Authors: Oleksandr Poliakov, Olena Chepenyuk, Yevgen Pashkov, Mykhaylo Kalinin, Vadym Kramar
Abstract:
In one of the prosthesis designs for lower-limb transfemoral amputations, artificial knee joints with polycentric mechanisms are used. Such prostheses are characterized by high stability during the stance phase of movement. The existing variety of polycentric mechanisms indicates the possibility of finding an optimal prosthesis design that satisfies several quality criteria. In this paper, we present a multicriteria method for the synthesis of an artificial polycentric knee mechanism based on a uniform systematic study of the design parameter space and on the analysis of Pareto-optimal solutions.
Keywords: Optimal criteria, polycentric knee, prosthesis, synthesis, transfemoral amputee.
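A minimal sketch of the Pareto-analysis step, extracting the non-dominated designs from a uniformly sampled design space; the two criteria are placeholders, assumed to be minimized:

```python
# Hypothetical sketch: uniform sampling of candidate mechanism parameters and
# extraction of the Pareto-optimal (non-dominated) subset for two criteria.
import numpy as np

rng = np.random.default_rng(5)
designs = rng.uniform(0, 1, (200, 4))                # 200 candidate parameter sets
criteria = np.column_stack([
    (designs[:, 0] - 0.3) ** 2 + designs[:, 1],      # placeholder criterion 1
    (designs[:, 2] - 0.7) ** 2 + designs[:, 3],      # placeholder criterion 2
])

def pareto_mask(costs):
    """True for points not dominated by any other point (both criteria minimized)."""
    mask = np.ones(len(costs), dtype=bool)
    for i, c in enumerate(costs):
        dominated_by = np.all(costs <= c, axis=1) & np.any(costs < c, axis=1)
        mask[i] = not dominated_by.any()
    return mask

front = designs[pareto_mask(criteria)]
print(f"{len(front)} Pareto-optimal designs out of {len(designs)}")
```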
11589 Impact of Four Reading and Library Factors on the Grade Average of Ugandan Secondary School Students: A Quantitative Study
Authors: Valeda Dent
Abstract:
This study explores reading and library factors related to secondary school student academic outcomes in rural areas in Uganda. This mixed methods study utilized quantitative data collected as part of a more extensive project to explore six student factors in relation to students’ school, library, and home environments. The Kitengesa Community Library in Uganda (www.kitengesalibrary.org) served as the site for this study. The factors explored for this study include reading frequency, library use frequency, library access, overall grade average (OGA), and presence and type of reading materials in the home. Results indicated that both reading frequency and certain types of reading materials read for recreational purposes are correlated with higher OGA. Reading frequency was positively correlated with student OGA for all students.
Keywords: Rural village libraries, secondary school students, reading, academic achievement.
11588 Rock Textures Classification Based on Textural and Spectral Features
Authors: Tossaporn Kachanubal, Somkait Udomhunsakul
Abstract:
In this paper, we propose a method to classify each type of natural rock texture. Our goal is to classify 26 classes of rock textures. First, we extract five features for each class by using principal component analysis combined with applied spatial frequency measurement. Next, the most effective number of neural network nodes was determined, and the most effective network was used in the classification process. The results from this system yield a quite high recognition rate, showing that a high recognition rate can be achieved in separating the 26 stone classes.
Keywords: Texture classification, SFM, neural network, rock texture classification.
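A minimal sketch of the pipeline described above, reducing texture feature vectors to five principal components and classifying them with a small neural network; the feature matrix is random stand-in data, not real rock textures:

```python
# Hypothetical sketch: PCA to five features followed by a neural network
# classifier over 26 texture classes, using synthetic feature vectors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
n_classes, per_class, n_features = 26, 20, 64
X = np.vstack([rng.normal(loc=c, scale=2.0, size=(per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = make_pipeline(PCA(n_components=5),                  # five features per sample
                    MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0))
clf.fit(X_tr, y_tr)
print("recognition rate:", clf.score(X_te, y_te))
```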
11587 Silver Nanoparticles-Enhanced Luminescence Spectra of Silicon Nanocrystals
Authors: Khamael M. Abualnaja, Lidija Šiller, Benjamin R. Horrocks
Abstract:
Metal-enhanced luminescence of silicon nanocrystals (SiNCs) was determined using two different particle sizes of silver nanoparticles (AgNPs). The SiNCs were characterized by scanning electron microscopy (SEM), high resolution transmission electron microscopy (HRTEM), Fourier transform infrared spectroscopy (FTIR) and X-ray photoelectron spectroscopy (XPS). It was found that the SiNCs are crystalline, with an average diameter of 65 nm and an FCC lattice. The AgNPs were synthesized by photochemical reduction of AgNO3 with sodium dodecyl sulphate (SDS). The enhanced luminescence of the SiNCs by the AgNPs was evaluated by confocal Raman microspectroscopy. Enhancements of up to 9 and 3 times were observed for SiNCs mixed with AgNPs having average particle sizes of 100 nm and 30 nm, respectively. AgNP-enhanced luminescence of SiNCs occurs as a result of the coupling between the excitation laser light and the plasmon bands of the AgNPs; this intense field at the AgNP surface couples strongly to the SiNCs.
Keywords: Luminescence, Silicon Nanocrystals, Silver Nanoparticles, Surface Enhanced Raman Spectroscopy (SERS).
11586 ANP-based Intra and Inter-industry Analysis for Measuring Spillover Effect of ICT Industries
Authors: Yongyoon Suh, Yongtae Park
Abstract:
The interaction among information and communication technology (ICT) industries has recently become a ubiquitous phenomenon through fixed-mobile integration. To monitor the impact of this interaction, previous research has mainly focused on measuring the spillover effect among ICT industries using various methods. Among others, inter-industry analysis is one of the useful methods for examining the spillover effect between industries. However, the more complex ICT industries become, the more important the impact within an industry is. Inter-industry analysis is limited in mirroring intra-relationships within an industry. Thus, this study applies the analytic network process (ANP) to measure the spillover effect, capturing all of the intra- and inter-relationships. Using ANP-based intra- and inter-industry analysis, the spillover effect is effectively measured, mirroring the complex structure of ICT industries. A main ICT industry and its linkages are also explored to show the current structure of ICT industries. The proposed approach is expected to allow policy makers to understand the interactions of ICT industries and their impact.
Keywords: ANP, intra and inter-industry analysis, spillover effect
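A minimal sketch of the ANP computation implied above: a column-stochastic supermatrix of intra- and inter-industry influences is raised to powers until it converges to the limit supermatrix, whose columns give overall spillover priorities. The three industries and influence values are invented:

```python
# Hypothetical sketch: limit supermatrix of a small ANP model of ICT industries.
import numpy as np

# Rows/columns: e.g. fixed telecom, mobile telecom, internet services (assumed).
influence = np.array([
    [0.2, 0.4, 0.3],
    [0.5, 0.3, 0.3],
    [0.3, 0.3, 0.4],
])
weighted = influence / influence.sum(axis=0, keepdims=True)   # column-stochastic supermatrix

limit = weighted.copy()
for _ in range(100):
    nxt = limit @ weighted
    if np.allclose(nxt, limit, atol=1e-12):   # powers have converged to the limit supermatrix
        break
    limit = nxt

print("limiting spillover priorities (any column):", np.round(limit[:, 0], 3))
```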
11585 Integrating Low and High Level Object Recognition Steps by Probabilistic Networks
Authors: András Barta, István Vajk
Abstract:
In pattern recognition applications, low-level segmentation and high-level object recognition are generally considered two separate steps. This paper presents a method that bridges the gap between low-level and high-level object recognition. It is based on a Bayesian network representation and a network propagation algorithm. At the low level, it uses a hierarchical structure of quadratic spline wavelet image bases. The method is demonstrated on a simple circuit diagram component identification problem.
Keywords: Object recognition, Bayesian network, Wavelets, Document processing.
11584 Non-Overlapping Hierarchical Index Structure for Similarity Search
Authors: Mounira Taileb, Sid Lamrous, Sami Touati
Abstract:
In order to accelerate similarity search in high-dimensional databases, we propose a new hierarchical indexing method. It is composed of offline and online phases, and our contribution concerns both. In the offline phase, after gathering the data into clusters and constructing a hierarchical index, the main originality of our contribution is a method for constructing bounding forms of the clusters that avoids overlapping. For the online phase, we have developed an adapted search algorithm that considerably improves the performance of similarity search. Our method, named NOHIS (Non-Overlapping Hierarchical Index Structure), uses Principal Direction Divisive Partitioning (PDDP) as the clustering algorithm. The principle of PDDP is to divide the data recursively into two sub-clusters; the division is done using the hyperplane orthogonal to the principal direction derived from the covariance matrix and passing through the centroid of the cluster to be divided. The data of each of the two resulting sub-clusters are enclosed in a minimum bounding rectangle (MBR). The two MBRs are oriented along the principal direction; consequently, non-overlapping between the two forms is assured. Experiments use databases containing image descriptors. Results show that the proposed method outperforms sequential scan and the SR-tree in processing k-nearest neighbor queries.
Keywords: K-nearest neighbour search, multi-dimensional indexing, multimedia databases, similarity search.
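A minimal sketch of a single PDDP split as described above: project the descriptors onto the principal direction, cut at the centroid, and enclose each half in a bounding box aligned with that direction; the data are random stand-ins for image descriptors:

```python
# Hypothetical sketch: one PDDP split with principal-direction-aligned bounding boxes.
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(size=(500, 16))                     # 500 descriptors, 16-D

centroid = data.mean(axis=0)
centered = data - centroid
_, _, vt = np.linalg.svd(centered, full_matrices=False)
principal_direction = vt[0]                           # leading right singular vector

projection = centered @ principal_direction
left, right = data[projection <= 0], data[projection > 0]

# Bounding boxes of the two sub-clusters in the rotated (principal) frame.
rotated = centered @ vt.T
mbr_left = (rotated[projection <= 0].min(axis=0), rotated[projection <= 0].max(axis=0))
mbr_right = (rotated[projection > 0].min(axis=0), rotated[projection > 0].max(axis=0))

print(len(left), "descriptors in one sub-cluster,", len(right), "in the other")
print("MBRs do not overlap along the principal axis:",
      mbr_left[1][0] <= mbr_right[0][0])
```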
11583 Adaptive Score Normalization: A Novel Approach for Multimodal Biometric Systems
Authors: Anouar Ben Khalifa, Sami Gazzah, Najoua Essoukri BenAmara
Abstract:
Multimodal biometric systems integrate the data presented by multiple biometric sources, hence offering better performance than systems based on a single biometric modality. Although the coupling of biometric systems can be done at different levels, fusion at the score level is the most common, since it has proven more effective than the other fusion levels. However, the scores from different modalities are generally heterogeneous, so a normalization step is needed to transform them into a common domain before combining them. In this paper, we study the performance of several normalization techniques with various fusion methods in the context of merging three unimodal systems based on the face, the palmprint and the fingerprint. We also propose a new adaptive normalization method that takes into account the distributions of client scores and impostor scores. Experiments conducted on a database of 100 people show that the performance of a multimodal system depends on the choice of the normalization method and the fusion technique. The proposed normalization method gave the best results.
Keywords: Multibiometrics, Fusion, Score level, Score normalization, Adaptive normalization.
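The common baselines mentioned above (min-max and z-score normalization followed by sum-rule fusion) can be sketched as follows; the scores are invented, and the paper's adaptive method, which uses the client and impostor score distributions, is not reproduced:

```python
# Hypothetical sketch: normalizing heterogeneous matcher scores and fusing them
# with a simple sum rule.
import numpy as np

rng = np.random.default_rng(8)
face = rng.uniform(0, 100, 50)       # three matchers on three different scales
palm = rng.uniform(0, 1, 50)
finger = rng.normal(500, 50, 50)

def min_max(s):
    return (s - s.min()) / (s.max() - s.min())

def z_score(s):
    return (s - s.mean()) / s.std()

fused_minmax = min_max(face) + min_max(palm) + min_max(finger)   # sum-rule fusion
fused_z = z_score(face) + z_score(palm) + z_score(finger)

print("fused (min-max) range:", fused_minmax.min().round(2), "-", fused_minmax.max().round(2))
print("fused (z-score) mean:", fused_z.mean().round(2))
```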
11582 Electrochemical Performance of Carbon Nanotube Based Supercapacitor
Authors: Jafar Khan Kasi, Ajab Khan Kasi, Muzamil Bokhari
Abstract:
Carbon nanotubes are among the most attractive materials for potential applications of nanotechnology due to their excellent mechanical, thermal, electrical and optical properties. In this paper, we report a supercapacitor made of nickel foil electrodes coated with a multiwall carbon nanotube (MWCNT) thin film using the electrophoretic deposition (EPD) method. The chemical vapor deposition method was used for the growth of the MWCNTs, with ethanol as the hydrocarbon source. Highly graphitic multiwall carbon nanotubes grown at 750 °C were confirmed by Raman spectroscopy. The electrochemical performance of the supercapacitor was observed by cyclic voltammetry. The supercapacitor electrodes fabricated from MWCNTs exhibit a considerably small equivalent series resistance (ESR) and a high specific power density. Electrophoretic deposition is an easy method for fabricating MWCNT electrodes for high-performance supercapacitors.
Keywords: Carbon nanotube, chemical vapor deposition, catalyst, charge, cyclic voltammetry.
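A minimal sketch of how specific capacitance is commonly extracted from a cyclic voltammogram; the CV data, scan rate and electrode mass below are synthetic placeholders, not the measured MWCNT response:

```python
# Hypothetical sketch: specific capacitance from a CV loop,
# C_sp = (integral of |I| dV over the full cycle) / (2 * m * scan_rate * dV).
import numpy as np
from scipy.integrate import trapezoid

scan_rate = 0.05     # V/s (assumed)
mass = 2.0e-3        # g of active MWCNT film (assumed)
dV = 1.0             # potential window in V (assumed)

# Idealized rectangular voltammogram: +1 mA forward sweep, -1 mA reverse sweep.
forward_V = np.linspace(0.0, dV, 200)
reverse_V = np.linspace(dV, 0.0, 200)
forward_I = np.full(200, 1.0e-3)
reverse_I = np.full(200, -1.0e-3)

integral = (abs(trapezoid(np.abs(forward_I), forward_V))
            + abs(trapezoid(np.abs(reverse_I), reverse_V)))

C_specific = integral / (2 * mass * scan_rate * dV)   # F/g
print(f"specific capacitance ~ {C_specific:.1f} F/g")
```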
11581 A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition
Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan
Abstract:
This paper presents an approach for the easy creation and classification of institutional risk profiles supporting endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the setup of the most important risk factors. Subsequently, the risk profiles employ a risk factor classifier and associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation of risk factors for a required dimension of analysis. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of the digital preservation process. The presented approach is designed to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and the values of file format risk profiles. To facilitate decision making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
Keywords: Linked open data, information integration, digital libraries, data mining.
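A minimal sketch of the naive Bayes classification step described above; the risk factors, endangerment groups and training vectors are all invented for illustration:

```python
# Hypothetical sketch: classifying institutional risk factor vectors into
# endangerment groups with a naive Bayes model.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(9)
# Three assumed risk factors per profile (e.g. format obsolescence, software
# support, community adoption), each scored 0-10 by survey experts.
low = rng.uniform(0, 4, (30, 3))
medium = rng.uniform(3, 7, (30, 3))
high = rng.uniform(6, 10, (30, 3))
X = np.vstack([low, medium, high])
y = np.array(["low"] * 30 + ["medium"] * 30 + ["high"] * 30)

model = GaussianNB().fit(X, y)

new_profile = np.array([[7.5, 8.0, 6.5]])        # aggregated institutional risk vector
print("recommended endangerment group:", model.predict(new_profile)[0])
```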
11580 Congestion Management in a Deregulated Power System with Micro Grid
Authors: Guguloth Ramesh, T. K. Sunil Kumar
Abstract:
This paper presents congestion management in deregulated power systems. In a deregulated environment, every buyer wants to buy power from the cheapest generator available, irrespective of the relative geographical locations of buyer and seller. As a consequence, the transmission corridors evacuating the power of cheaper generators would get overloaded if all such transactions were approved. Congestion management is a mechanism to prioritize the transactions and commit to a schedule that does not overload the network. Congestion in the transmission lines is determined by an Optimal Power Flow (OPF) solution, which is carried out by the primal linear programming method. Congestion in the transmission lines is alleviated by connecting Distributed Generation (DG) of a micro grid at the load bus. A method to determine the optimal location of the DG unit, based on a Transmission Line Relief (TLR) sensitivity approach, has been suggested. The effectiveness of the proposed method has been demonstrated on modified IEEE 14-bus and 30-bus test systems.
Keywords: Congestion management, Distributed Generation (DG), Transmission Line Relief (TLR) sensitivity index, OPF.
11579 Development of a Tilt-Rotor Aircraft Model Using System Identification Technique
Authors: Antonio Vitale, Nicola Genito, Giovanni Cuciniello, Ferdinando Montemari
Abstract:
The introduction of tilt-rotor aircraft into the existing civilian air transportation system will provide beneficial effects due to the tilt-rotor's capability to combine the characteristics of a helicopter and a fixed-wing aircraft in one vehicle. The availability of reliable tilt-rotor simulation models supports the development of such a vehicle. Indeed, simulation models are required to design automatic control systems that increase safety, reduce the pilot's workload and stress, and ensure the optimal aircraft configuration with respect to flight envelope limits, especially during the most critical flight phases such as conversion from helicopter to aircraft mode and vice versa. This article presents a process to build a simplified tilt-rotor simulation model derived from the analysis of flight data. The model aims to reproduce the complex dynamics of the tilt-rotor during the in-flight conversion phase. It uses a set of scheduled linear transfer functions to relate the autopilot reference inputs to the most relevant rigid-body state variables. The model also computes information about the rotor flapping dynamics, which is useful to evaluate the aircraft control margin in terms of rotor collective and cyclic commands. The rotor flapping model is derived through a mixed theoretical-empirical approach, which includes physical analytical equations (applicable to the helicopter configuration) and parametric corrective functions. The latter are introduced to best fit the actual rotor behavior and balance the differences existing between helicopter and tilt-rotor during flight. Time-domain system identification from flight data is exploited to optimize the model structure and to estimate the model parameters. The presented model-building process was applied to simulated flight data of the ERICA tilt-rotor, generated by using a high-fidelity simulation model implemented in the FlightLab environment. The validation of the obtained model was very satisfying, confirming the validity of the proposed approach.
Keywords: Flapping Dynamics, Flight Dynamics, System Identification, Tilt-Rotor Modeling and Simulation.
11578 Global Behavior in (Q-xy)2 Potential
Authors: K. Jaroensutasinee
Abstract:
The general global behavior of particles in a non-linear (Q - xy)2 potential cannot be revealed by the Poincaré surface of section (PSS) method, because most trajectories take a practically infinite time to integrate numerically before they return to the surface. In this study, as an alternative to the PSS, a multiple-scale perturbation is applied to analyze the global adiabatic, non-adiabatic and chaotic behavior of particles in this potential. It was found that the results can be summarized in the form of a Fermi-like map. Additionally, this method gives the variation of the global stochasticity criterion with Q.
Keywords: Multiple scale perturbation, Poincaré surface of section, Fermi map
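The abstract does not give the specific Fermi-like map that the results reduce to; as a generic illustration only, the sketch below iterates a simplified Fermi-Ulam-type map with assumed parameters:

```python
# Hypothetical sketch: iterating a generic simplified Fermi-Ulam-type map,
# the kind of discrete map such perturbation results are said to reduce to.
# This is not the specific map derived in the paper; eps and M are assumed.
import numpy as np

eps, M = 0.01, 10.0          # kick amplitude and gap parameter (assumed)
u, phi = 0.2, 1.0            # initial dimensionless velocity and phase

trajectory = []
for _ in range(10000):
    u = abs(u + eps * np.sin(phi))               # velocity kick
    phi = (phi + 2 * np.pi * M / u) % (2 * np.pi)  # phase advance between kicks
    trajectory.append(u)

print("mean velocity over the orbit:", np.mean(trajectory))
```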
11577 Integrating Fast Karnaugh Map and Modular Neural Networks for Simplification and Realization of Complex Boolean Functions
Authors: Hazem M. El-Bakry
Abstract:
In this paper, a new fast simplification method is presented. The method realizes a Karnaugh map with a large number of variables. In order to accelerate the operation of the proposed method, a new approach for fast detection of groups of ones is presented. This approach is implemented in the frequency domain: the search operation relies on performing cross correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required by the presented method is less than that needed by conventional cross correlation. Simulation results using MATLAB confirm the theoretical computations. Furthermore, a powerful solution for the realization of complex functions is given. The simplified functions are implemented by using a new design of neural networks. Neural networks are used because they are fault tolerant and, as a result, can recognize signals even with noise or distortion; this is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum number of components. This is done by using modular neural networks (MNNs) that divide the input space into several homogeneous regions. This approach is applied to implement the XOR function, 16 logic functions at the one-bit level, and a 2-bit digital multiplier. Compared to previous non-modular designs, a clear reduction in the order of computations and hardware requirements is achieved.
Keywords: Boolean functions, simplification, Karnaugh map, implementation of logic functions, modular neural networks.
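A minimal sketch of the frequency-domain search idea: detecting where a small group of ones fits inside a Karnaugh-map-like grid by FFT-based circular cross correlation (the wrap-around matches the K-map's edge adjacency). The 4x4 map and 1x2 template are illustrative only, not the paper's exact procedure:

```python
# Hypothetical sketch: FFT-based circular cross correlation to locate positions
# where a 1x2 group of ones fits inside a 4x4 Karnaugh-map-like grid.
import numpy as np

kmap = np.array([[1, 1, 0, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [1, 1, 0, 0]])
template = np.ones((1, 2))                      # group of two adjacent ones

# Circular cross correlation via the FFT; wrap-around mirrors K-map adjacency
# across opposite edges.
corr = np.real(np.fft.ifft2(np.fft.fft2(kmap) * np.conj(np.fft.fft2(template, s=kmap.shape))))
hits = np.argwhere(np.isclose(corr, template.sum()))   # full matches equal the group size

print("top-left positions where a 1x2 group of ones fits:")
print(hits)
```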