Search results for: Mason-Caffin method
6585 Physical and Electrical Characterization of ZnO Thin Films Prepared by Sol-Gel Method
Authors: Mohammad Reza Tabatabaei, Ali Vaseghi Ardekani
Abstract:
In this paper, zinc oxide (ZnO) thin films are deposited on glass substrates by the sol-gel method. ZnO thin films with well-defined orientation were obtained by spin coating a solution of zinc acetate dihydrate, monoethanolamine (MEA), de-ionized water and isopropyl alcohol. The films were pre-heated at 275°C for 10 min and then annealed at 350°C, 450°C and 550°C for 80 min. The effects of annealing temperature and film thickness on the structure and surface morphology of the thin films were investigated by Atomic Force Microscopy (AFM). It was found that the annealing temperature has a significant effect on structural parameters of the films such as the roughness exponent, fractal dimension and interface width. The thin films were also characterized by X-ray Diffractometry (XRD). XRD analysis revealed that the annealed ZnO thin films consist of single-phase ZnO with the wurtzite structure and show c-axis grain orientation. Increasing the annealing temperature increased the crystallite size and the c-axis orientation of the film above 450°C. In this study, ZnO thin films of different thicknesses were also prepared by the sol-gel method on glass substrates at room temperature. The film thicknesses are 100, 150 and 250 nm. Using fractal analysis, the morphological characteristics of the film surfaces in the amorphous state were investigated as a function of thickness. The results show that with increasing thickness, the surface roughness (RMS) and lateral correlation length (ξ) decrease. The roughness exponent (α) and growth exponent (β) were determined to be 0.74±0.02 and 0.11±0.02, respectively.
Keywords: ZnO, Thin film, Fractal analysis, Morphology, AFM, annealing temperature, different thickness, XRD.
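The fractal parameters mentioned above can be extracted directly from AFM height maps. The following is a minimal sketch (not the authors' code) that computes the interface width (RMS roughness) and estimates the roughness exponent α from the slope of the height-difference correlation function on a log-log scale; the synthetic height array is a placeholder standing in for measured AFM data.

```python
import numpy as np

# Synthetic stand-in for an AFM height map (nm); replace with measured data.
rng = np.random.default_rng(0)
h = np.cumsum(rng.normal(size=(256, 256)), axis=1)  # correlated along rows

# Interface width (RMS roughness): w = sqrt(<(h - <h>)^2>)
w = np.sqrt(np.mean((h - h.mean()) ** 2))

# Height-difference correlation along the fast-scan direction:
# G(r) = <[h(x + r) - h(x)]^2>, expected to scale as r^(2*alpha) for r << xi.
lags = np.arange(1, 40)
G = np.array([np.mean((h[:, lag:] - h[:, :-lag]) ** 2) for lag in lags])

# Roughness exponent from the log-log slope of G(r) at small r.
slope, _ = np.polyfit(np.log(lags[:10]), np.log(G[:10]), 1)
alpha = slope / 2.0

print(f"RMS roughness w = {w:.3f}, roughness exponent alpha = {alpha:.2f}")
```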
6584 A Generic Approach to Reuse Unified Modeling Language Components Following an Agile Process
Authors: Rim Bouhaouel, Naoufel Kraïem, Zuhoor Al Khanjari
Abstract:
Unified Modeling Language (UML) is one of the most widespread modeling languages, standardized by the Object Management Group (OMG). The model-driven engineering (MDE) community therefore attempts to enable the reuse of UML diagrams rather than constructing them from scratch. A UML model emerges according to a specific software development process. Existing model-generation methods focus on different transformation techniques without considering the development process. Our work aims to construct a UML component from fragments of UML diagrams based on an agile method. We define a UML fragment as a portion of a UML diagram which expresses a business target. To guide the generation of fragments of UML models using an agile process, we need a flexible approach that adapts to agile changes and covers all of its activities. We use a software product line (SPL) to derive a fragment of the agile process method. This paper explains our approach, named RECUP, to generate UML fragments following an agile process, and overviews its different aspects. We present the approach and define its different phases and artifacts.
Keywords: UML, component, fragment, agile, SPL.
6583 A New Distribution Network Reconfiguration Approach using a Tree Model
Authors: E. Dolatdar, S. Soleymani, B. Mozafari
Abstract:
Power loss reduction is one of the main targets in the power industry, so in this paper the problem of finding the optimal configuration of a radial distribution system for loss reduction is considered. Optimal reconfiguration involves the selection of the best set of branches to be opened, one from each loop, to reduce resistive line losses and relieve overloads on feeders by shifting load to adjacent feeders. However, since there are many candidate switching combinations in the system, feeder reconfiguration is a complicated problem. In this paper a new approach is proposed based on a simple optimum loss calculation obtained by determining optimal trees of the given network. From graph theory, a distribution network can be represented by a graph consisting of a set of nodes and branches. In fact, this problem can be viewed as determining an optimal tree of the graph that simultaneously ensures the radial structure of each candidate topology. In this method a refined genetic algorithm is also set up, and some improvements are made to the chromosome coding. An implementation of the algorithm presented in [7] is applied by modifying the load flow program, and that method is compared with the proposed method. In [7] an algorithm is proposed in which the choice of the switches to be opened is based on simple heuristic rules; it reduces the number of load flow runs, reduces the switching combinations to a smaller number and gives the optimum solution. To demonstrate the validity of these methods, computer simulations with PSAT and MATLAB are carried out on the 33-bus test system. The results show that the performance of the proposed method is better than the method of [7] and other methods.
Keywords: Distribution System, Reconfiguration, Loss Reduction, Graph Theory, Optimization, Genetic Algorithm
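As an illustration of the "one open branch per loop" search space described above, the sketch below (a toy under my own assumptions, not the authors' refined genetic algorithm) uses networkx to enumerate candidate configurations of a small meshed feeder and keep only those that remain radial, i.e., connected spanning trees.

```python
import itertools
import networkx as nx

# Toy meshed distribution graph: nodes are buses, edges are branches/switches.
G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 1),   # loop 1
                  (3, 5), (5, 6), (6, 4)])          # loop 2

loops = nx.cycle_basis(G)                            # independent loops of the network
loop_edges = [list(zip(c, c[1:] + c[:1])) for c in loops]

radial_configs = []
for opened in itertools.product(*loop_edges):        # open one branch per loop
    H = G.copy()
    for u, v in opened:
        if H.has_edge(u, v):
            H.remove_edge(u, v)
    if nx.is_connected(H) and nx.is_tree(H):         # radiality check
        radial_configs.append(opened)

total = len(list(itertools.product(*loop_edges)))
print(f"{len(radial_configs)} radial configurations out of {total} candidates")
```

In the full method, each radial candidate would additionally be evaluated by a load flow to compute its resistive losses; the GA searches this space instead of enumerating it exhaustively.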
6582 Heteromolecular Structure Formation in Aqueous Solutions of Ethanol, Tetrahydrofuran and Dimethylformamide
Authors: Sh. Gofurov, O. Ismailova, U. Makhmanov, A. Kokhkharov
Abstract:
The refractometric method has been used to determine the optical properties and concentration features of aqueous solutions of ethanol, tetrahydrofuran and dimethylformamide at room temperature. Changes in the dielectric permittivity of aqueous solutions of ethanol, tetrahydrofuran and dimethylformamide over a wide concentration range (0–1.0 mole fraction) have been studied using the molecular dynamics method. The concentration dependences of the experimental excess refractive indices and excess dielectric permittivities were compared. It has been shown that stable heteromolecular complexes in the binary solutions are formed in the concentration range of 0.3–0.4 mole fraction. The real and imaginary parts of the dielectric permittivity were obtained from the dipole-dipole autocorrelation functions of the molecules. At concentrations of 0.3–0.4 mole fraction, heteromolecular structures with hydrogen bonds are formed. This is confirmed by the extrema of the excess dielectric permittivity and excess refractive index of the aqueous solutions.
Keywords: Refractometric method, dielectric constant, molecular dynamics, aqueous solution.
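A minimal sketch of the autocorrelation step mentioned above, under the assumption that the MD run yields a time series of the total dipole moment vector: the normalized dipole autocorrelation function is computed with numpy, and linear-response theory relates its Fourier transform to the frequency-dependent dielectric response (all prefactors are omitted here, and the trajectory is a synthetic placeholder).

```python
import numpy as np

# Stand-in total dipole moment M(t) from an MD trajectory, shape (n_steps, 3).
rng = np.random.default_rng(1)
M = np.cumsum(rng.normal(size=(4096, 3)), axis=0) * 0.01
M -= M.mean(axis=0)

# Normalized dipole-dipole autocorrelation function Phi(t) = <M(0).M(t)> / <M^2>.
n = len(M)
acf = np.zeros(n // 2)
for lag in range(n // 2):
    acf[lag] = np.mean(np.sum(M[: n - lag] * M[lag:], axis=1))
acf /= acf[0]

# One-sided Fourier transform of Phi(t); by linear response it is proportional
# to the frequency-dependent dielectric response (prefactors omitted).
dt = 1.0                                   # MD time step in reduced units
spectrum = np.fft.rfft(acf) * dt
freqs = np.fft.rfftfreq(len(acf), d=dt)
print(acf[:5].round(3), np.abs(spectrum[:3]).round(3), freqs[:3])
```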
6581 Design and Analysis of a Piezoelectric-Based AC Current Measuring Sensor
Authors: Easa Ali Abbasi, Akbar Allahverdizadeh, Reza Jahangiri, Behnam Dadashzadeh
Abstract:
Electrical current measurement is a suitable method for determining the performance of electrical devices. There are two kinds of measuring methods: contact and noncontact. The contact method has disadvantages, such as requiring a direct connection to the wire, which may damage the system. Thus, in this paper, a bimorph piezoelectric cantilever beam with a permanent magnet on its free end is used to measure electrical current in a noncontact way. In the mathematical modeling, the governing equation of the cantilever beam is solved based on the Galerkin method, and the equation relating the applied force to the beam's output voltage is presented. The magnetic force resulting from the current-carrying wire is considered as the external excitation force of the system. The results are compared with other references in order to demonstrate the accuracy of the mathematical model. Finally, the effects of geometric parameters on the output voltage and natural frequency are presented.
Keywords: Cantilever beam, electrical current measurement, forced excitation, piezoelectric.
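To make the single-mode (Galerkin) reduction concrete, here is a hedged numerical sketch: a one-mode cantilever model driven by the magnetic force of a 50 Hz wire current, with the output voltage taken as proportional to the modal displacement. All parameter values (modal mass, damping, stiffness, coupling coefficient, wire distance) are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# All numbers below are illustrative assumptions, not parameters from the paper.
MU0 = 4e-7 * np.pi          # vacuum permeability (H/m)
m, c, k = 1e-3, 2e-3, 40.0  # modal mass (kg), damping (N*s/m), stiffness (N/m)
theta = 100.0               # assumed deflection-to-voltage coupling (V/m)
moment_factor = 0.05        # lumps magnet strength and geometry into one factor
d = 0.01                    # wire-to-magnet distance (m)

def magnetic_force(t, i_amp=10.0, f_line=50.0):
    """Assumed force model: proportional to B = mu0*I(t)/(2*pi*d) of the wire."""
    current = i_amp * np.sin(2 * np.pi * f_line * t)
    return moment_factor * MU0 * current / (2 * np.pi * d)

def rhs(t, y):
    q, qdot = y                      # single Galerkin modal coordinate and velocity
    return [qdot, (magnetic_force(t) - c * qdot - k * q) / m]

sol = solve_ivp(rhs, (0.0, 0.5), [0.0, 0.0], max_step=1e-4)
v_out = theta * sol.y[0]             # output voltage proportional to tip deflection
print(f"natural frequency ~ {np.sqrt(k / m) / (2 * np.pi):.1f} Hz, "
      f"peak output ~ {np.abs(v_out).max():.3e} V")
```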
6580 Synthesis of ZnO Nanostructures via Gel-casting Method
Authors: A. A. Rohani, A. Salehi, M. Tabrizi, S. A. Manafi, A. Fardafshari
Abstract:
In this study, ZnO nanorods and ZnO ultrafine particles were synthesized by the gel-casting method. The synthesized ZnO powder has a hexagonal zincite structure. The ZnO aggregates with rod-like morphology are typically 1.4 μm in length and 120 nm in diameter, and consist of many small nanocrystals with diameters of 10 nm. Longer wires connected by many hexahedral ZnO nanocrystals were obtained after calcination at temperatures above 600°C. The crystalline structures and morphologies of the powder were characterized by X-ray Diffraction (XRD) and Scanning Electron Microscopy (SEM). The results show that preparation conditions such as H2O concentration, calcination time and calcination temperature strongly influence the properties of the nano ZnO powders: an increase in calcination temperature increases the grain size, and increasing the calcination time at high temperature also makes the grains bigger. The presence of excess water prevents the nano grains from developing a rod-like morphology. We obtained the smallest grain size of ZnO powder by controlling the process conditions. Finally, under suitable conditions a novel nanostructure, namely bi-rod-like ZnO nanorods, was found, which is different from known ZnO nanostructures.
Keywords: Morphology, nanoparticles, ZnO, gel-casting method.
6579 Creation of GaxCo1-xZnSe0.4 (x = 0.1, 0.3, 0.5) Nanoparticles Using Pulse Laser Ablation Method
Authors: Yong Pan, Li Wang, Xue Qiong Su, Dong Wen Gao
Abstract:
To date, nanomaterials have received extensive attention because of their wide range of applications. Various nanomaterials such as nanoparticles, nanowires, nanorings, nanostars and other nanostructures have begun to be systematically studied. The preparation of these materials by chemical methods is not only costly, but also involves long cycles and high toxicity. At the same time, the preparation of nanoparticles of multi-doped composites has been limited due to the special structure of the materials. In order to prepare multi-doped composites with the same structure as the macro-materials and to simplify the preparation method, GaxCo1-xZnSe0.4 (x = 0.1, 0.3, 0.5) nanoparticles are prepared by the Pulse Laser Ablation (PLA) method. The particle composition and structure are systematically investigated by X-ray diffraction (XRD) and Raman spectroscopy, which confirm the success of the preparation and the same concentration between the nanoparticles (NPs) and the target. The morphology of the NPs, characterized by Transmission Electron Microscopy (TEM), indicates that the prepared particles are circular in shape. Fluorescence properties are reflected by the PL spectra, which demonstrate the best performance at the Ga0.3Co0.3ZnSe0.4 concentration. Therefore, all the results suggest that PLA is a promising method to prepare multi-doped NPs, since it can modulate the performance of the NPs.
Keywords: PLA, physics, nanoparticles, multi-doped.
6578 Microarrays Denoising via Smoothing of Coefficients in Wavelet Domain
Authors: Mario Mastriani, Alberto E. Giraldez
Abstract:
We describe a novel method for removing noise of unknown variance from microarrays in the wavelet domain. The method is based on smoothing the coefficients of the highest subbands. Specifically, we decompose the noisy microarray into wavelet subbands, apply smoothing within each highest subband, and reconstruct the microarray from the modified wavelet coefficients. This process is applied a single time, and exclusively to the first level of decomposition, i.e., in most cases a multiresolution analysis is not necessary. Denoising results compare favorably to most methods currently in use.
Keywords: Directional smoothing, denoising, edge preservation, microarrays, thresholding, wavelets
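A minimal sketch of the single-level scheme described above, assuming PyWavelets and SciPy are available: the noisy microarray image is decomposed one level, the three highest (detail) subbands are smoothed with a small averaging window, and the image is reconstructed. The wavelet and window size are illustrative choices, not the authors' settings.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def denoise_microarray(image, wavelet="db2", window=3):
    """Single-level wavelet decomposition with smoothing of the detail subbands."""
    approx, (horiz, vert, diag) = pywt.dwt2(image, wavelet)
    smoothed = tuple(uniform_filter(band, size=window) for band in (horiz, vert, diag))
    return pywt.idwt2((approx, smoothed), wavelet)

# Synthetic stand-in for a noisy microarray image.
rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[32:96:8, 32:96:8] = 1.0                  # toy spot grid
noisy = clean + 0.2 * rng.normal(size=clean.shape)
denoised = denoise_microarray(noisy)
print("residual noise std:", np.std(denoised - clean).round(3))
```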
6577 Enhancement of Shape Description and Representation by Slope
Authors: Ali Salem Bin Samma, Rosalina Abdul Salam
Abstract:
Representation and description of object shapes by the slopes of their contours or borders is proposed. The idea is to capture the essence of the features that make it easier for a shape to be stored, transmitted, compared and recognized. These features must be independent of translation, rotation and scaling of the shape. An approach is proposed to obtain high performance and efficiency and to merge the boundaries into a sequence of straight line segments with the fewest possible segments. Evaluation of the performance of the proposed method is based on its comparison with an established method of object shape description.
Keywords: Shape description, shape representation, slope.
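A small sketch of the idea of slope-based contour features, under assumptions of my own (uniform arc-length resampling of a closed polygonal contour and turning angles between consecutive segments as the descriptor); turning angles are unchanged by translation and scaling and are only cyclically shifted by a change of starting point, which matches the invariance goals stated above. This is an illustration, not the authors' algorithm.

```python
import numpy as np

def turning_angle_descriptor(contour, n_samples=64):
    """Slope-based descriptor: turning angles between successive segments of a
    closed contour resampled to n_samples points (illustrative sketch)."""
    contour = np.asarray(contour, dtype=float)
    closed = np.vstack([contour, contour[:1]])
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, arc[-1], n_samples, endpoint=False)
    pts = np.column_stack([np.interp(t, arc, closed[:, 0]),
                           np.interp(t, arc, closed[:, 1])])
    # Segment slopes as direction angles, then wrapped turning angles between them.
    d = np.diff(np.vstack([pts, pts[:1]]), axis=0)
    angles = np.arctan2(d[:, 1], d[:, 0])
    turning = np.angle(np.exp(1j * np.diff(np.concatenate([angles, angles[:1]]))))
    return turning   # translation- and scale-invariant; rotation shifts it cyclically

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(turning_angle_descriptor(square, n_samples=8).round(2))
```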
6576 Network Application Identification Based on Communication Characteristics of Application Messages
Authors: Yuji Waizumi, Yuya Tsukabe, Hiroshi Tsunoda, Yoshiaki Nemoto
Abstract:
Person-to-person information sharing is easily realized by P2P networks, in which servers are not essential. Information leakage caused by malicious access to P2P networks has become a new social issue. To prevent information leakage, it is necessary to detect and block the traffic of P2P software. Since some P2P software can spoof port numbers, it is difficult to detect its traffic using port numbers. It is even more difficult to devise effective countermeasures because the protocols of such software are not public. In this paper, a method for discriminating network applications based on the communication characteristics of application messages, without using port numbers, is proposed. The proposed method is based on the assumption that there are rules about the time intervals at which messages are transmitted in the application layer and about the number of packets needed to send one message. By extracting these rules from network traffic, the proposed method can discriminate applications without port numbers.
Keywords: Network application identification, message transition pattern.
6575 Detection of Near Failure Winding due to Deformation in 33/11kV Power Transformer by using Low Voltage Impulse (LVI) Test Method and Validated through Untanking
Authors: R. Samsudin, Yogendra, Hairil Satar, Y. Zaidey
Abstract:
A power transformer consists of components that are under consistent thermal and electrical stress. The major component that degrades under these stresses is the paper insulation of the power transformer. At site, lightning impulses and cable faults may cause winding deformation. In addition, the winding may deform due to impact during transportation. A deformed winding exerts more stress on its insulating paper and thus degrades it. Insulation degradation shortens the life-span of the transformer. Currently there are two methods for detecting winding deformation: Sweep Frequency Response Analysis (SFRA) and the Low Voltage Impulse (LVI) test. The latter injects current pulses into the winding and captures the admittance plot. In this paper, a transformer that experienced overheating and arcing was identified, and both SFRA and LVI were performed. The transformer was then brought to the factory for untanking. The untanking results revealed that LVI is more accurate than the SFRA method for this case study.
Keywords: Winding deformation, arcing, dissolved gas analysis, sweep frequency response analysis, low voltage impulse method.
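To illustrate the comparison step of the LVI technique, here is a hedged numpy sketch, assuming the excitation and response of the winding have been recorded as sampled waveforms: an admittance-like transfer function is taken as the ratio of their spectra, and a healthy fingerprint is compared with a later measurement by the maximum relative deviation. The waveforms and the decision threshold are placeholders, not data from the case study.

```python
import numpy as np

def transfer_function(excitation, response, fs):
    """Frequency response (response spectrum / excitation spectrum) of a recorded pulse."""
    E = np.fft.rfft(excitation)
    R = np.fft.rfft(response)
    freqs = np.fft.rfftfreq(len(excitation), d=1.0 / fs)
    return freqs, R / (E + 1e-12)

def deformation_index(ref_tf, new_tf):
    """Maximum relative deviation between the fingerprint and a new measurement."""
    return np.max(np.abs(new_tf - ref_tf) / (np.abs(ref_tf) + 1e-12))

# Placeholder waveforms standing in for recorded LVI pulses (fs in samples/s).
fs, n = 1e6, 2048
t = np.arange(n) / fs
pulse = np.exp(-t * 2e4)                                        # injected impulse
healthy = np.exp(-t * 2e4) * np.cos(2 * np.pi * 5e3 * t)
deformed = np.exp(-t * 2.2e4) * np.cos(2 * np.pi * 5.6e3 * t)   # shifted resonance

_, tf_ref = transfer_function(pulse, healthy, fs)
_, tf_new = transfer_function(pulse, deformed, fs)
print("deformation index:", round(float(deformation_index(tf_ref, tf_new)), 2))
```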
6574 Novel Method for Elliptic Curve Multi-Scalar Multiplication
Authors: Raveen R. Goundar, Ken-ichi Shiota, Masahiko Toyonaga
Abstract:
The major building block of most elliptic curve cryptosystems is the computation of multi-scalar multiplication. This paper proposes a novel algorithm for simultaneous multi-scalar multiplication by employing addition chains. Previously known methods utilize the double-and-add algorithm with binary representations. In order to accomplish our purpose, an efficient empirical method for finding addition chains for multi-exponents has been proposed.
Keywords: Elliptic curve cryptosystems, multi-scalar multiplication, addition chains, Fibonacci sequence.
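For contrast with the addition-chain approach proposed above, the sketch below implements the baseline the abstract mentions: simultaneous (Shamir-type) double-and-add over binary representations, written against an abstract group operation. The demo instantiates it with addition modulo a prime as a stand-in for elliptic curve point addition; it is not the authors' addition-chain algorithm.

```python
def simultaneous_double_and_add(k1, k2, P, Q, add, identity):
    """Compute k1*P + k2*Q with one shared double-and-add pass (binary baseline)."""
    PQ = add(P, Q)                       # precompute P + Q once
    R = identity
    for i in reversed(range(max(k1.bit_length(), k2.bit_length()))):
        R = add(R, R)                    # doubling step
        b1, b2 = (k1 >> i) & 1, (k2 >> i) & 1
        if b1 and b2:
            R = add(R, PQ)
        elif b1:
            R = add(R, P)
        elif b2:
            R = add(R, Q)
    return R

# Stand-in group: integers modulo a prime under addition (not a real curve group).
p = 2**127 - 1
add_mod = lambda a, b: (a + b) % p
k1, k2, P, Q = 123456789, 987654321, 17, 91
result = simultaneous_double_and_add(k1, k2, P, Q, add_mod, 0)
assert result == (k1 * P + k2 * Q) % p
print(result)
```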
6573 Iterative Image Reconstruction for Sparse-View Computed Tomography via Total Variation Regularization and Dictionary Learning
Authors: XianYu Zhao, JinXu Guo
Abstract:
Recently, low-dose computed tomography (CT) has become highly desirable due to increasing attention to the potential risks of excessive radiation. For low-dose CT imaging, ensuring image quality while reducing the radiation dose is a major challenge. To facilitate low-dose CT imaging, we propose an improved statistical iterative reconstruction scheme based on the Penalized Weighted Least Squares (PWLS) criterion combined with total variation (TV) minimization and sparse dictionary learning (DL) to improve reconstruction performance. We call this method "PWLS-TV-DL". In order to evaluate the PWLS-TV-DL method, we performed experiments on digital and physical phantoms. The experimental results show that our method is superior to other methods in image quality and computational efficiency, which confirms its potential for low-dose CT imaging.
Keywords: Low dose computed tomography, penalized weighted least squares, total variation, dictionary learning.
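A heavily simplified sketch of the PWLS-plus-TV part of the reconstruction (the dictionary-learning term is omitted), assuming a generic linear projector A, diagonal noise weights and scikit-image's TV denoiser as the regularization step in a proximal-gradient loop; the matrix, weights, step size and TV strength are illustrative placeholders, not the paper's settings.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
n = 32                                   # toy image is n x n
x_true = np.zeros((n, n))
x_true[8:24, 8:24] = 1.0

A = rng.normal(size=(600, n * n)) / n    # stand-in sparse-view projector
w = rng.uniform(0.5, 1.5, size=600)      # PWLS noise weights (diagonal of W)
y = A @ x_true.ravel() + 0.05 * rng.normal(size=600)

x = np.zeros(n * n)
step, tv_weight = 0.05, 0.02             # illustrative step size / TV strength
for _ in range(100):
    grad = A.T @ (w * (A @ x - y))       # gradient of (Ax - y)^T W (Ax - y) / 2
    x = x - step * grad
    # TV regularization applied as a denoising (proximal-style) step.
    x = denoise_tv_chambolle(x.reshape(n, n), weight=tv_weight).ravel()

print("relative error:", round(np.linalg.norm(x - x_true.ravel())
                               / np.linalg.norm(x_true), 3))
```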
6572 Measurement of Systemic Power Efficiency of Microwave Heating Application
Authors: Yi He, Nutdechatorn Puangngernmak, Suramate Chalermwisutkul
Abstract:
The microwave heating process has been developed over about sixty years, and measurement systems have progressed alongside it. Because of the high-frequency microwave irradiation, researchers have had to use many costly technical instruments to measure the parameters needed to evaluate the performance of a microwave heating system. Therefore, this paper presents an easier and feasible efficiency measurement method. It can help inspect the efficiency of a microwave heating system with good accuracy, and it can also serve as a reference for optimizing microwave heating systems for various load materials.
Keywords: measurement, microwave heating system, systemic power efficiency
6571 Analysis Fraction Flow of Water versus Cumulative Oil Recoveries Using Buckley Leverett Method
Authors: Reza Cheraghi Kootiani, Ariffin Bin Samsuri
Abstract:
To derive the fractional flow equation, oil displacement is assumed to take place under the so-called diffusive flow condition. The constraint is that the fluid saturations at any point in the linear displacement path are uniformly distributed with respect to thickness; this allows the displacement to be described mathematically in one dimension. The simultaneous flow of oil and water can be modeled using thickness-averaged relative permeabilities along the centerline of the reservoir. The condition for fluid potential equilibrium is simply that of hydrostatic equilibrium, for which the saturation distribution can be determined as a function of capillary pressure and, therefore, height; that is, the fluids are distributed in accordance with capillary-gravity equilibrium. This paper focuses on the fractional flow of water versus cumulative oil recovery using the Buckley-Leverett method. Several field cases have been developed to aid in the analysis. The producing watercut (at surface conditions) is compared with the cumulative oil recovery at breakthrough for the flowing fluid.
Keywords: Fractional flow, fluid saturations, permeability, cumulative oil recoveries, Buckley-Leverett method.
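A compact numerical sketch of the fractional-flow analysis described above, under assumed Corey-type relative permeability curves and fluid viscosities (placeholders, not field data): it builds f_w(S_w), locates the shock-front saturation by the Welge tangent from the connate water saturation, and reports the average saturation and oil recovery (in pore volumes) at breakthrough.

```python
import numpy as np

# Assumed rock and fluid data (placeholders).
Swc, Sor = 0.2, 0.2                        # connate water and residual oil saturations
krw0, kro0, nw, no = 0.4, 0.9, 2.0, 2.0    # Corey end points and exponents
mu_w, mu_o = 1.0, 5.0                      # viscosities (cp)

Sw = np.linspace(Swc + 1e-6, 1.0 - Sor, 2000)
Swe = (Sw - Swc) / (1.0 - Swc - Sor)
krw = krw0 * Swe ** nw
kro = kro0 * (1.0 - Swe) ** no
fw = 1.0 / (1.0 + (kro * mu_w) / (krw * mu_o))   # fractional flow of water

# Welge tangent from (Swc, 0): the front saturation maximizes fw / (Sw - Swc).
slope = fw / (Sw - Swc)
i_front = int(np.argmax(slope))
Sw_front, fw_front = Sw[i_front], fw[i_front]
Sw_avg = Swc + 1.0 / slope[i_front]               # average Sw behind the front
recovery_bt = Sw_avg - Swc                        # oil recovered at breakthrough (PV)

print(f"front saturation {Sw_front:.3f}, water cut at front {fw_front:.3f}, "
      f"recovery at breakthrough {recovery_bt:.3f} PV")
```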
6570 A New Objective Weight on Interval Type-2 Fuzzy Sets
Authors: Nurnadiah Z., Lazim A.
Abstract:
The design of weights is one of the important parts of fuzzy decision making, as it has a deep effect on the evaluation results. Entropy is one of the weight measures based on objective evaluation. Non-probabilistic entropy measures for fuzzy sets and interval type-2 fuzzy sets (IT2FS) have been developed and applied to weight measurement. Since entropy for IT2FS in decision making has yet to be explored, this paper proposes a new objective weighting method using the entropy weight method for multiple attribute decision making (MADM). The paper utilizes the nature of the IT2FS concept in the evaluation process to assess attribute weights based on the credibility of the data. An example is presented to demonstrate the feasibility of the new method in decision making. The entropy measure of interval type-2 fuzzy sets yields flexible judgments and could be applied in decision-making environments.
Keywords: Objective weight, entropy weight, multiple attribute decision making, type-2 fuzzy sets, interval type-2 fuzzy sets.
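For reference, the crisp entropy weight method that the paper generalizes to IT2FS can be sketched in a few lines; this is the classic Shannon-entropy version on an ordinary decision matrix, shown as an illustration of the underlying idea rather than the interval type-2 formulation proposed above.

```python
import numpy as np

def entropy_weights(X):
    """Classic entropy weight method for an m-alternatives x n-criteria matrix X."""
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    P = X / X.sum(axis=0)                          # column-wise normalization
    k = 1.0 / np.log(m)
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -k * plogp.sum(axis=0)                     # entropy of each criterion
    d = 1.0 - e                                    # degree of diversification
    return d / d.sum()                             # objective weights

# Toy decision matrix: 4 alternatives, 3 criteria (placeholder scores).
X = [[7, 9, 9],
     [8, 7, 8],
     [9, 6, 8],
     [6, 7, 8]]
print(entropy_weights(X).round(3))
```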
6569 Optimal Transmission Network Usage and Loss Allocation Using Matrices Methodology and Cooperative Game Theory
Authors: Baseem Khan, Ganga Agnihotri
Abstract:
Restructuring of the electricity supply industry has introduced many issues such as transmission pricing, transmission loss allocation and congestion management. Many methodologies and algorithms have been proposed for addressing these issues. In this paper a power-flow-tracing-based method is proposed which involves a matrices methodology for transmission usage and loss allocation for generators and demands. This method provides loss allocation in a direct way because all the computation has already been done for usage allocation. The proposed method is simple and easy to implement in a large power system. Further, it is computationally light because it requires matrix inversion only a single time. After usage and loss allocation, cooperative game theory is applied to the results to find efficient economic signals. The Nucleolus and Shapley value approaches are used for optimal allocation of the results. Results are shown for the IEEE 6-bus system and the IEEE 14-bus system.
Keywords: Modified Kirchhoff Matrix, Power flow tracing, Transmission Pricing, Transmission Loss Allocation.
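The cooperative-game step can be illustrated with a tiny Shapley value computation over a three-player usage (or cost) game; the characteristic function below is a made-up example, not data from the IEEE test systems.

```python
from itertools import permutations

def shapley_values(players, v):
    """Shapley value of each player for a characteristic function v: frozenset -> value."""
    values = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            marginal = v[coalition | {p}] - v[coalition]   # marginal contribution
            values[p] += marginal
            coalition = coalition | {p}
    return {p: val / len(perms) for p, val in values.items()}

# Made-up 3-player game (e.g., transmission usage attributable to coalitions).
v = {frozenset(): 0, frozenset({"G1"}): 10, frozenset({"G2"}): 20, frozenset({"G3"}): 30,
     frozenset({"G1", "G2"}): 40, frozenset({"G1", "G3"}): 50, frozenset({"G2", "G3"}): 60,
     frozenset({"G1", "G2", "G3"}): 90}
print(shapley_values(["G1", "G2", "G3"], v))
```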
6568 Automotive ECU Design with Functional Safety for Electro-Mechanical Actuator Systems
Authors: Kyung-Jung Lee, Young-Hun Ki, Hyun-Sik Ahn
Abstract:
In this paper, we propose a hardware and software design method for automotive Electronic Control Units (ECU) considering the functional safety. The proposed ECU is considered for the application to Electro-Mechanical Actuator systems and the validity of the design method is shown by the application to the Electro-Mechanical Brake (EMB) control system which is used as a brake actuator in Brake-By-Wire (BBW) systems. The importance of a functional safety-based design approach to EMB ECU design has been emphasized because of its safety-critical functions, which are executed with the aid of many electric actuators, sensors, and application software. Based on hazard analysis and risk assessment according to ISO26262, the EMB system should be ASIL-D-compliant, the highest ASIL level. To this end, an external signature watchdog and an Infineon 32-bit microcontroller TriCore are used to reduce risks considering common-cause hardware failure. Moreover, a software design method is introduced for implementing functional safety-oriented monitoring functions based on an asymmetric dual core architecture considering redundancy and diversity. The validity of the proposed ECU design approach is verified by using the EMB Hardware-In-the-Loop (HILS) system, which consists of the EMB assembly, actuator ECU, a host PC, and a few debugging devices. Furthermore, it is shown that the existing sensor fault tolerant control system can be used more effectively for mitigating the effects of hardware and software faults by applying the proposed ECU design method.
Keywords: BBW (Brake-By-wire), EMB (Electro-Mechanical Brake), Functional Safety, ISO26262.
6567 Project Base Learning for IT Personnel Resources Development using TVML
Authors: Tansuriyavong Suriyon, Endo Takanobu, Boonmee Choompol
Abstract:
Using animated videos as teaching materials is an effective learning method. However, we thought that an even more effective learning method is for learners to produce the teaching videos themselves. Learners who act as producers must learn and understand the material well in order to produce and present teaching videos to others. The purpose of this study is to propose a project-based learning (PBL) technique based on co-producing videos of IT (information technology) teaching materials. We used the T2V player to produce the videos, based on TVML, a TV program description language. With the proposed method, we assigned learners to produce animated videos for the National Examination for Information Processing Technicians (IPA examination) in Japan, in order to have them learn various knowledge and skills in the IT field. Experimental results showed that a learning effect occurred during the video production process, which is useful for IT personnel resource development.
Keywords: TVML, T2V player, animation made as learning materials, National Examination for Information Processing Technicians, IT education, problem-based learning.
6566 Study of Hydrophobicity Effect on 220kV Double Tension Insulator String Surface Using Finite Element Method
Authors: M. Nageswara Rao, V. S. N. K. Chaitanya, P. Vijaya Haritha
Abstract:
Insulators are among the most significant pieces of equipment in a power system. Their operation may affect power flow, line loss and reliability. The electrical parameters that influence the performance of an insulator are surface leakage current, corona and dry-band arcing. Electric field stresses on the insulator surface degrade the insulating properties and lead to puncture. Electric field stresses can be analyzed by numerical methods and experimental evaluation; from an economic point of view, evaluation by numerical methods is best. In outdoor insulation, a hydrophobic surface can help prevent water film formation on the insulation surface, which is decisive for diminishing leakage currents and partial discharge (PD) under heavily polluted environments and harsh weather conditions. Polymer materials like silicone rubber have outstanding hydrophobic properties among general insulation materials. In this paper, the electric field intensity of 220 kV porcelain and polymer double tension insulator strings at critical regions is analyzed and compared using the Finite Element Method. The hydrophobic conditions of the polymer insulator with equal and unequal water molecule conditions are verified using the finite element method.
Keywords: Porcelain insulator, polymer insulator, electric field analysis, EFA, finite element method, FEM, hydrophobicity, FEMM-2D.
6565 Sonochemically Prepared SnO2 Quantum Dots as a Selective and Low Temperature CO Sensor
Authors: S. Mosadegh Sedghi, Y. Mortazavi, A. Khodadadi, O. Alizadeh Sahraei, M. Vesali Naseh
Abstract:
In this study, a low-temperature sensor highly selective to CO in the presence of methane is fabricated using 4 nm SnO2 quantum dots (QDs) prepared by sonication-assisted precipitation. A SnCl4 aqueous solution was precipitated with ammonia under sonication, which continued for 2 h. Part of the sample was then dried and calcined at 400°C for 1.5 h and characterized by XRD and BET. The average particle size and specific surface area of the SnO2 QDs, as well as their sensing properties, were compared with SnO2 nanoparticles prepared by a conventional sol-gel method. The BET surface areas of the sonochemically as-prepared product and of the one calcined at 400°C for 1.5 h are 257 m2/g and 212 m2/g, respectively, while the specific surface area of the SnO2 nanoparticles prepared by the conventional sol-gel method is about 80 m2/g. XRD spectra revealed that a pure crystalline SnO2 phase is formed for both the as-prepared and calcined SnO2 QD samples. However, for the sample prepared by the sol-gel method and calcined at 400°C, SnO crystals are detected along with those of SnO2. The SnO2 quantum dots show exceedingly high sensitivity to CO at concentrations of 100, 300 and 1000 ppm over the whole temperature range (25-350°C). At 50°C a sensitivity of 27 was obtained for 1000 ppm CO, which increases to a maximum of 147 as the temperature rises to 225°C and then drops off, while the maximum sensitivity for the SnO2 sample prepared by the sol-gel method was obtained at 300°C with a value of 47.2. At the same time, no sensitivity to methane is observed over the whole temperature range for the SnO2 QDs. The response and recovery times of the sensor decrease sharply with temperature, while the high selectivity to CO does not deteriorate.
Keywords: Sonochemical, SnO2 QDs, SnO2 gas sensor
6564 Application of the Least Squares Method in the Adjustment of Chlorodifluoromethane (HCFC-142b) Regression Models
Authors: L. J. de Bessa Neto, V. S. Filho, J. V. Ferreira Nunes, G. C. Bergamo
Abstract:
There are many situations in which human activities have significant effects on the environment; damage to the ozone layer is one of them. The objective of this work is to use the Least Squares Method, considering linear, exponential, logarithmic, power and second-degree polynomial models, to analyze, through the coefficient of determination (R²), which model best fits the behavior of chlorodifluoromethane (HCFC-142b), in parts per trillion, between 1992 and 2018, as well as to estimate future concentrations 5 and 10 periods ahead, i.e., the concentration of this pollutant in the years 2023 and 2028 for each of the fits. A total of 809 observations of the HCFC-142b concentration at one of the monitoring stations for gases that contribute to the deterioration of the ozone layer were selected for the period studied and, using these data, the scatter plots for each adjustment model were produced in Excel. It was observed that the logarithmic fit was the model that best fit the data set, since, besides having a significant R², its fitted curve was compatible with the natural trend curve of the phenomenon.
Keywords: Chlorodifluoromethane (HCFC-142b), ozone (O3), least squares method, regression models.
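A brief sketch of the model-comparison workflow described above, implemented with numpy least squares instead of Excel; the time series below is a synthetic placeholder (a smooth rising curve with noise), not the actual HCFC-142b record, and the 2023/2028 "forecasts" are simply evaluations of each fitted model 5 and 10 periods past the last observation.

```python
import numpy as np

# Placeholder series standing in for yearly mean HCFC-142b (ppt), 1992-2018;
# not the real monitoring-station record.
years = np.arange(1992, 2019)
t = years - 1991                      # periods 1..27 (keeps logs and powers valid)
rng = np.random.default_rng(0)
y = 8.0 + 6.0 * np.log(t) + rng.normal(scale=0.3, size=t.size)

def r2(y_obs, y_hat):
    """Coefficient of determination on the original scale."""
    return 1.0 - np.sum((y_obs - y_hat) ** 2) / np.sum((y_obs - y_obs.mean()) ** 2)

models = {
    "linear": np.poly1d(np.polyfit(t, y, 1)),
    "quadratic": np.poly1d(np.polyfit(t, y, 2)),
}
b_e, a_e = np.polyfit(t, np.log(y), 1)            # ln y = a + b t
models["exponential"] = lambda x: np.exp(a_e) * np.exp(b_e * x)
b_l, a_l = np.polyfit(np.log(t), y, 1)            # y = a + b ln t
models["logarithmic"] = lambda x: a_l + b_l * np.log(x)
b_p, a_p = np.polyfit(np.log(t), np.log(y), 1)    # ln y = a + b ln t
models["power"] = lambda x: np.exp(a_p) * x ** b_p

for name, f in models.items():
    print(f"{name:12s} R2 = {r2(y, f(t)):.4f}   "
          f"2023 -> {f(t[-1] + 5):.2f} ppt   2028 -> {f(t[-1] + 10):.2f} ppt")
```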
6563 Dynamic Analysis of Nonlinear Models with Infinite Extension by Boundary Elements
Authors: Delfim Soares Jr., Webe J. Mansur
Abstract:
The Time-Domain Boundary Element Method (TD-BEM) is a well-known numerical technique that properly handles dynamic analyses involving media of infinite extension. However, when these analyses also involve nonlinear behavior, very complex numerical procedures arise in the TD-BEM, which may make its application prohibitive. In order to avoid this drawback and model nonlinear infinite media, the present work couples two BEM formulations, aiming to achieve the best of both worlds. In this context, the regions expected to behave nonlinearly are discretized by the Domain Boundary Element Method (D-BEM), which has a simpler mathematical formulation but is unable to deal with infinite domain analyses; the TD-BEM is employed as an effective non-reflecting boundary. An iterative procedure is considered for the coupling of the TD-BEM and D-BEM, based on a relaxed update of the variables at the common interfaces. Elastoplastic models are the focus, and different time steps are allowed for each BEM formulation in the coupled analysis.
Keywords: Boundary element method, dynamic elastoplastic analysis, iterative coupling, multiple time-steps.
6562 Circuit Breaker and Transformer Monitoring
Authors: M. Nafar, A. H. Gheisari, A. Alesaadi
Abstract:
Since large power transformers are the most expensive and strategically important components of any power generation and transmission system, their reliability is crucially important for the operation of the energy system. Circuit breakers are also very important elements of the power transmission line, so monitoring their events provides a knowledge base for determining the time to the next maintenance. This paper introduces a comparative method for the state estimation of transformers and circuit breakers using continuous monitoring of voltage and current, and details a new wavelet-based method for apparatus insulation monitoring. For transformer insulation monitoring, a method based on the wavelet transform and neutral point analysis is proposed. Using the EMTP tools, faults in the transformer winding were simulated with a detailed transformer winding model. The neutral point current of the winding was analyzed by the wavelet transform. It is shown that the neutral current of the transformer winding contains useful information about faults in the insulation of the transformer.
Keywords: Wavelet, power transformer, EMTP, circuit breaker, monitoring.
6561 Kinematic Parameter-Independent Modeling and Measuring of Three-Axis Machine Tools
Authors: Yung-Yuan Hsu
Abstract:
The primary objective of this paper was to construct a "kinematic parameter-independent modeling of three-axis machine tools for geometric error measurement" technique. Improving the accuracy of the geometric error is one of the core techniques for three-axis machine tools. This paper first applies the traditional HTM method to derive the geometric error model for three-axis machine tools. This geometric error model is related to the three-axis kinematic parameters, with the overall error expressed relative to the machine reference coordinate system. Given that the measurement of the linear axes in this model should be performed on the ideal motion axes, there were practical difficulties. Through a measurement method that consolidates the translational and rotational errors in the geometric error model, we simplified the three-axis geometric error model to a kinematic parameter-independent model. Finally, based on the new measurement method corresponding to this error model, we established a truly practical and more accurate error measuring technique for three-axis machine tools.
Keywords: Three-axis machine tool, geometric error, HTM, error measuring.
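To make the HTM-based error modeling concrete, here is a small hedged sketch that composes 4x4 homogeneous transformation matrices for the three linear axes, each carrying small assumed translational and rotational error terms, and evaluates the resulting volumetric error at the tool point; the error magnitudes are arbitrary illustrations, not measured values from the paper.

```python
import numpy as np

def translation(x=0.0, y=0.0, z=0.0):
    """Ideal HTM for a pure translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def small_error(dx, dy, dz, ex, ey, ez):
    """HTM of small translational (dx, dy, dz) and rotational (ex, ey, ez) errors,
    using the small-angle approximation R ~ I + skew(e)."""
    E = np.eye(4)
    E[:3, :3] += np.array([[0.0, -ez,  ey],
                           [ez,  0.0, -ex],
                           [-ey, ex,  0.0]])
    E[:3, 3] = [dx, dy, dz]
    return E

# Nominal axis moves (mm) and illustrative error parameters per axis.
X, Y, Z = 100.0, 50.0, 20.0
err_x = small_error(5e-3, 2e-3, 1e-3, 1e-5, 2e-5, 1e-5)
err_y = small_error(1e-3, 4e-3, 2e-3, 2e-5, 1e-5, 1e-5)
err_z = small_error(2e-3, 1e-3, 3e-3, 1e-5, 1e-5, 2e-5)

ideal  = translation(X) @ translation(0, Y) @ translation(0, 0, Z)
actual = translation(X) @ err_x @ translation(0, Y) @ err_y @ translation(0, 0, Z) @ err_z

tool = np.array([0.0, 0.0, 0.0, 1.0])            # tool point in the last frame
volumetric_error = (actual @ tool - ideal @ tool)[:3]
print("volumetric error (mm):", volumetric_error.round(5))
```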
6560 Accurate Positioning Method of Indoor Plastering Robot Based on Line Laser
Authors: Guanqiao Wang, Hongyang Yu
Abstract:
There is a lot of repetitive work in the traditional construction industry, and replacing manual work with robots in these repetitive tasks can significantly improve production efficiency. Therefore, robots appear more and more frequently in the construction industry. Navigation and positioning are very important tasks for construction robots, and the requirements for positioning accuracy are very high. Traditional indoor robots mainly use radio frequency or vision methods for positioning. Compared with ordinary robots, an indoor plastering robot needs to be positioned closer to the wall for plastering, so the requirements for positioning accuracy during construction are higher; the traditional navigation and positioning methods have a large error, which causes the robot to move without an exact position, so that the wall cannot be plastered or the plastering error is large. A positioning method is proposed that is assisted by line lasers and uses image-processing-based positioning to refine the traditional positioning. In actual work, filtering, edge detection, the Hough transform and other operations are performed on the images captured by the camera. Each time the position of the laser line is found, it is compared with the standard value, and the robot is moved or rotated accordingly to complete the positioning. The experimental results show that the actual positioning error is reduced to less than 0.5 mm by this accurate positioning method.
Keywords: Indoor plastering robot, navigation, precise positioning, line laser, image processing.
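The image-processing chain described above (filtering, edge detection, Hough transform, comparison with a reference position) can be sketched with OpenCV as follows; the camera image, the reference column and the millimetre-per-pixel scale are placeholders standing in for the robot's actual calibration.

```python
import cv2
import numpy as np

MM_PER_PIXEL = 0.1      # assumed calibration of the camera looking at the wall
REFERENCE_COLUMN = 320  # assumed target position of the laser line in the image

def laser_line_offset(image_gray):
    """Return the lateral offset (mm) of the detected laser line from the reference."""
    blurred = cv2.GaussianBlur(image_gray, (5, 5), 0)          # noise filtering
    edges = cv2.Canny(blurred, 50, 150)                        # edge detection
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)  # Hough transform
    if lines is None:
        return None
    # Average the x-position of the detected (near-vertical) laser line segments.
    xs = [(x1 + x2) / 2.0 for x1, y1, x2, y2 in lines[:, 0]]
    return (np.mean(xs) - REFERENCE_COLUMN) * MM_PER_PIXEL

# Synthetic frame: a bright vertical stripe standing in for the projected laser line.
frame = np.zeros((480, 640), dtype=np.uint8)
frame[:, 327:330] = 255
offset = laser_line_offset(frame)
print(f"move robot laterally by {offset:.2f} mm" if offset is not None else "no line found")
```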
6559 Effectiveness of Working Memory Training on Cognitive Flexibility
Authors: Leila Maleki, Ezatollah Ahmadi
Abstract:
The aim of this study was to investigate the effectiveness of working memory training exercises on cognitive flexibility. The study used an experimental method. From the statistical population, 40 students aged 14 were selected by convenience (available) sampling, randomly assigned to an experimental (training program) group and a control group, and administered the Wisconsin Card Sorting Test. Analysis of covariance indicated a significant difference in the post-test scores of the experimental group (p<0.005).
Keywords: Cognitive flexibility, working memory exercises, problem solving, reaction time.
6558 Digital Library Evaluation by SWARA-WASPAS Method
Authors: Mehmet Yörükoğlu, Serhat Aydın
Abstract:
Since the invention of the manuscript, mechanical methods for storing, transferring and using information have evolved into digital methods over time. In this process, libraries, which are centers of information, have also become digitized and accessible from anywhere in the world at any time, taking on a structure that has no physical boundaries. In this context, some criteria for the information obtained from digital libraries have become more important to users. This paper evaluates, from different perspectives, the user criteria that make a digital library more useful. The Step-Wise Weight Assessment Ratio Analysis-Weighted Aggregated Sum Product Assessment (SWARA-WASPAS) method is used for the evaluation of the digital library criteria because of its flexibility and easy calculation steps. Three different digital libraries are evaluated by information technology experts according to five conflicting main criteria: 'interface design', 'effects on users', 'services', 'user engagement' and 'context'. Finally, the alternatives are ranked in descending order.
Keywords: Digital library, multi criteria decision making, SWARA-WASPAS method.
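A compact sketch of the SWARA weighting and WASPAS ranking steps, using made-up expert inputs for three digital libraries and the five criteria listed above; the comparative-importance values, the decision matrix and λ = 0.5 are illustrative assumptions, not the experts' data.

```python
import numpy as np

def swara_weights(s):
    """SWARA: s[j] is the comparative importance of criterion j vs. the one above it
    (criteria already sorted by decreasing importance; s[0] is unused and set to 0)."""
    k = np.array([1.0] + [sj + 1.0 for sj in s[1:]])
    q = np.cumprod(1.0 / k)           # q_1 = 1, q_j = q_{j-1} / k_j
    return q / q.sum()

def waspas_scores(X, w, lam=0.5):
    """WASPAS for benefit criteria: blend of weighted sum and weighted product models."""
    R = X / X.max(axis=0)             # linear normalization
    wsm = (R * w).sum(axis=1)
    wpm = np.prod(R ** w, axis=1)
    return lam * wsm + (1.0 - lam) * wpm

# Criteria (sorted by assumed importance): interface design, effects on users,
# services, user engagement, context. Inputs and scores are placeholders.
s = [0.0, 0.20, 0.15, 0.25, 0.10]
w = swara_weights(s)
X = np.array([[8, 7, 9, 6, 7],        # digital library A
              [7, 8, 8, 7, 6],        # digital library B
              [9, 6, 7, 8, 8]])       # digital library C
scores = waspas_scores(X, w)
ranking = np.argsort(-scores)
print("weights:", w.round(3), "scores:", scores.round(3), "ranking:", ranking)
```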
6557 Decision Trees for Predicting Risk of Mortality using Routinely Collected Data
Authors: Tessy Badriyah, Jim S. Briggs, Dave R. Prytherch
Abstract:
It is well known that Logistic Regression is the gold standard method for predicting clinical outcomes, especially the risk of mortality. In this paper, the Decision Tree method is proposed to solve specific problems for which Logistic Regression is commonly used. The Biochemistry and Haematology Outcome Model (BHOM) dataset, obtained from Portsmouth NHS Hospital for 1 January to 31 December 2001, was divided into four subsets. One subset of training data was used to generate a model, and the model obtained was then applied to three testing datasets. The performance of each model from both methods was then compared using calibration (the χ² test) and discrimination (area under the ROC curve, or c-index). The experiments showed that both methods give reasonable results in terms of the c-index; however, in some cases the calibration value (χ²) was quite high. After conducting the experiments and investigating the advantages and disadvantages of each method, we conclude that Decision Trees can be seen as a worthy alternative to Logistic Regression in the area of data mining.
Keywords: Decision trees, logistic regression, clinical outcome, risk of mortality.
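A brief scikit-learn sketch of the comparison described above, using a synthetic binary-outcome dataset in place of the BHOM data (which is not available here): both models are fitted on a training split and their discrimination is compared by the area under the ROC curve (c-index); the calibration (χ²) step is omitted for brevity, and all hyperparameters are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for routinely collected predictors and a mortality outcome.
X, y = make_classification(n_samples=4000, n_features=8, n_informative=5,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0, stratify=y)

logreg = LogisticRegression(max_iter=1000).fit(X_train, y_train)
tree = DecisionTreeClassifier(max_depth=5, min_samples_leaf=50,
                              random_state=0).fit(X_train, y_train)

for name, model in [("logistic regression", logreg), ("decision tree", tree)]:
    p = model.predict_proba(X_test)[:, 1]
    print(f"{name:20s} c-index (AUC) = {roc_auc_score(y_test, p):.3f}")
```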
6556 Automatic Detection of Syllable Repetition in Read Speech for Objective Assessment of Stuttered Disfluencies
Authors: K. M. Ravikumar, Balakrishna Reddy, R. Rajagopal, H. C. Nagaraj
Abstract:
Automatic detection of syllable repetition is one of the important parameters in assessing stuttered speech objectively. The existing method, which uses an artificial neural network (ANN), requires high levels of agreement as a prerequisite before attempting to train and test ANNs to separate fluent and non-fluent speech. We propose an automatic detection method for syllable repetition in read speech for the objective assessment of stuttered disfluencies, which uses a novel approach with four stages: segmentation, feature extraction, score matching and decision logic. Feature extraction is implemented using the well-known Mel Frequency Cepstral Coefficients (MFCC). Score matching is done using Dynamic Time Warping (DTW) between the syllables. The decision logic is implemented by a perceptron based on the score given by the score matching. Although many methods are available for segmentation, in this paper it is done manually. The method was assessed against human judges on the read speech of 10 adults who stutter, and the result was 83%.
Keywords: Assessment, DTW, MFCC, objective, perceptron, stuttering.
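A hedged sketch of the score-matching stage: MFCC features are extracted for two syllable segments (librosa is assumed to be available; the file names are placeholders) and compared with a small hand-written dynamic time warping routine, whose normalized distance would then feed the perceptron-based decision logic.

```python
import numpy as np
import librosa

def mfcc_features(path, n_mfcc=13):
    """MFCC matrix (frames x coefficients) for a manually segmented syllable."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def dtw_distance(A, B):
    """Plain DTW between two feature sequences with Euclidean frame distance."""
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(A[i - 1] - B[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)           # length-normalized matching score

# Placeholder file names for two consecutive syllable segments of read speech.
syllable_a = mfcc_features("syllable_001.wav")
syllable_b = mfcc_features("syllable_002.wav")
score = dtw_distance(syllable_a, syllable_b)
print("normalized DTW score:", round(float(score), 3))
# A low score suggests repetition; the perceptron would threshold such scores.
```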