Search results for: Discontinuous Galerkin method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8131

6571 Problems of Boolean Reasoning Based Biclustering Parallelization

Authors: Marcin Michalak

Abstract:

Biclustering is a method of two-dimensional data analysis. In recent years it has become possible to express this task in terms of Boolean reasoning, for processing continuous, discrete, and binary data. The mathematical background of the approach, namely the proven ability to induce exact and inclusion-maximal biclusters fulfilling assumed criteria, is a strong advantage of the method. Unfortunately, the core of the method has rather high computational complexity. This paper presents the basics of the Boolean reasoning approach to biclustering and, in that context, raises the problems of parallelizing the computation.

Keywords: Boolean reasoning, biclustering, parallelization, prime implicant.

6570 Iron Recovery from Red Mud as Zero-Valent Iron Metal Powder Using Direct Electrochemical Reduction Method

Authors: Franky Michael Hamonangan Siagian, Affan Maulana, Himawan Tri Bayu Murti Petrus, Panut Mulyono, Widi Astuti

Abstract:

In this study, the feasibility of the direct electrowinning method for producing zero-valent iron from red mud was investigated. The red mud sample came from the Tayan mine, Indonesia, and contains a high proportion of hematite (Fe2O3). Before electrolysis, the samples were characterized by various analytical techniques (ICP-AES, SEM, XRD) to determine their chemical composition and mineralogy. Direct electrowinning of red mud suspended in NaOH was carried out at low temperatures ranging from 30 to 110 °C. Current density and temperature were varied to determine the optimum operating conditions of the direct electrowinning process. Cathode deposits and residues in the electrochemical cells were analyzed using XRD, XRF, and SEM to determine the chemical composition and current recovery. In low-temperature electrolysis of red mud, the current efficiency can reach 11.8% recovery at a current density of 796 A/m². The moderate performance of the process with red mud was attributed to the troublesome adsorption of red mud particles on the cathode, which makes the reduction far less efficient than with hematite.

Keywords: Alumina, electrochemical reduction, iron production, red mud.

6569 Metal Berthelot Tubes with Windows for Observing Cavitation under Static Negative Pressure

Authors: K. Hiro, Y. Imai, T. Sasayama

Abstract:

Cavitation under static negative pressure is not well understood. The Berthelot method of generating such negative pressure can be a means to study cavitation inception. In this study, metal Berthelot tubes with built-in observation windows are newly developed and checked as to whether high static negative pressure is generated. The negative pressure in a tube fitted with a pair consisting of a corundum plate and an aluminum gasket increased with temperature cycling. The trend was similar to that reported previously.

Keywords: Berthelot method, negative pressure, cavitation.

6568 A New Approach for Effect Evaluation of Sediment Management

Authors: Jazaul Ikhsan, Masaharu Fujita

Abstract:

Safety, the river environment, and sediment utilization are the target elements of sediment management. Because a change in one element caused by sediment management may affect the other two, and the priority among the three elements depends on the stakeholders, it is necessary to develop a method to evaluate the effect of sediment management on each element as well as an integrated method to evaluate the socio-economic effect. In this study, taking the Mount Merapi basin as the investigation field, such a method for an active volcanic basin was developed. An integrated evaluation method for sediment management was discussed from a socio-economic point of view covering safety, environment, and sediment utilization, and a case study of sediment management was evaluated by means of this method. To evaluate the effect of sediment management, several parameters on safety, utilization, and environment were introduced. From the utilization point of view, job opportunities, additional income of local people, and tax income to the local government were used to evaluate the effectiveness of sediment management. The risk degree of river infrastructure was used to describe the effect of sediment management on the safety aspect. To evaluate the effects of sediment management on the environment, the mean diameter of the grain size distribution of the riverbed surface was used. On a coordinate system representing these elements, the direction of change in basin condition caused by sediment management can be predicted, so that the most preferable sediment management can be selected. The results indicate that the sediment management cases tend to have negative impacts on sediment utilization, but positive impacts on safety and environmental conditions. The evaluation from a socio-economic point of view shows that the case study of sediment management reduces job opportunities and additional income for inhabitants as well as tax income for the government. Therefore, another policy for creating job opportunities for inhabitants is necessary to support these sediment management measures.

Keywords: Merapi, sediment, management, evaluation

6567 Graded Orientation of the Linear Polymers

Authors: Levan Nadareishvili, Roland Bakuradze, Barbara Kilosanidze, Nona Topuridze, Liana Sharashidze, Ineza Pavlenishvili

Abstract:

Some regularities of the formation of a new structural state of thermoplastic polymers, the gradually oriented (stretched) state (GOS), are discussed. The transition into the GOS is realized by graded oriented stretching: either by the action of an inhomogeneous mechanical field on isotropic linear polymers or by zone stretching implemented on a standard tensile-testing machine using a specially designed zone stretching device (ZSD). Both technical approaches (especially the zone stretching method) allow control of such quantitative parameters of gradually oriented polymers as the range of change in relative elongation/orientation degree, the length over which this change occurs, and its profile (linear, hyperbolic, parabolic, logarithmic, etc.). The possibility of obtaining functionally graded materials (FGMs) by the graded orientation method is briefly discussed. The uniaxial graded stretching method should be considered an effective technological solution for creating polymer materials with a predetermined gradient of physical properties.

Keywords: Controlled graded stretching, gradually oriented state, linear polymers, zone stretching device.

6566 Laban Movement Analysis Using Kinect

Authors: Ran Bernstein, Tal Shafir, Rachelle Tsachor, Karen Studd, Assaf Schuster

Abstract:

Laban Movement Analysis (LMA), developed in the dance community over the past seventy years, is an effective method for observing, describing, notating, and interpreting human movement to enhance communication and expression in everyday and professional life. Many applications that use motion capture data might be significantly leveraged if the Laban qualities could be recognized automatically. This paper presents an automated method for recognizing Laban qualities from motion capture skeletal recordings, demonstrated on the output of Microsoft's Kinect V2 sensor.
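
As a rough illustration of this kind of pipeline (not the authors' implementation), the sketch below extracts simple kinematic features from Kinect-style skeleton clips and trains an off-the-shelf classifier for a single Laban quality; the data, feature choices, and labels are hypothetical placeholders.

```python
# Minimal sketch (not the authors' pipeline): recognizing one Laban quality
# from Kinect-style skeleton recordings with an off-the-shelf classifier.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def movement_features(joints):
    """joints: (frames, 25, 3) array of Kinect V2 joint positions.
    Returns a simple per-clip feature vector (velocity/acceleration stats)."""
    vel = np.diff(joints, axis=0)          # frame-to-frame joint velocities
    acc = np.diff(vel, axis=0)             # accelerations
    return np.concatenate([
        np.abs(vel).mean(axis=(0, 1)),     # mean speed per coordinate
        np.abs(acc).mean(axis=(0, 1)),     # mean acceleration per coordinate
        joints.std(axis=(0, 1)),           # spatial spread of the movement
    ])

# Hypothetical dataset: 200 clips, 120 frames each, binary label per clip
# (whether a given Laban quality, e.g. "sudden", is present).
rng = np.random.default_rng(0)
clips = rng.normal(size=(200, 120, 25, 3))
labels = rng.integers(0, 2, size=200)

X = np.stack([movement_features(c) for c in clips])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, labels, cv=5).mean())
```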

Keywords: Laban Movement Analysis, Kinect, Machine Learning.

6565 Comparative Analysis of Two Modeling Approaches for Optimizing Plate Heat Exchangers

Authors: Fábio A. S. Mota, Mauro A. S. S. Ravagnani, E. P. Carvalho

Abstract:

In the present paper, the design of plate heat exchangers is formulated as an optimization problem considering two mathematical models. The number of plates is the objective function to be minimized, with some configuration parameters considered implicitly. Screening is the optimization method used to solve the problem: thermal and hydraulic constraints are verified, non-viable solutions are discarded, and the method searches for convergence to the optimum, if it exists. A case study is presented to test the applicability of the developed algorithm. The results are consistent with the literature.
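
A minimal sketch of such a screening loop is given below, assuming deliberately simplified placeholder thermal and hydraulic checks rather than the models used in the paper; all numbers and parameter names are illustrative.

```python
# Minimal sketch of screening over the number of plates, with deliberately
# simplified placeholder thermal/hydraulic models (not the paper's equations).
def screen_plate_count(duty_area_m2, area_per_plate_m2, dp_per_channel_pa,
                       dp_max_pa, n_min=3, n_max=500):
    """Return the smallest plate count satisfying both constraints, or None."""
    for n in range(n_min, n_max + 1):
        channels = (n - 1) // 2                      # channels per fluid side
        if channels < 1:
            continue
        area_ok = n * area_per_plate_m2 >= duty_area_m2        # thermal check
        # Parallel channels split the flow, lowering per-channel pressure drop.
        dp_ok = dp_per_channel_pa / channels <= dp_max_pa      # hydraulic check
        if area_ok and dp_ok:
            return n         # first feasible candidate is the minimum
    return None              # no viable configuration in the screened range

print(screen_plate_count(duty_area_m2=40.0, area_per_plate_m2=0.5,
                         dp_per_channel_pa=3.0e5, dp_max_pa=5.0e4))
```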

Keywords: Plate heat exchanger, optimization, modeling, simulation.

6564 Modified Naïve Bayes Based Prediction Modeling for Crop Yield Prediction

Authors: Kefaya Qaddoum

Abstract:

Most greenhouse growers desire a predictable amount of yield in order to accurately meet market requirements. The purpose of this paper is to model a simple but often satisfactory supervised classification method. The original naive Bayes classifier has a serious weakness: it retains redundant predictors. In this paper, a regularization technique is used to obtain a computationally efficient classifier based on naive Bayes. The suggested construction uses an L1 penalty to remove redundant predictors, and a modification of the LARS algorithm is devised to solve the resulting problem, making the method applicable to a wide range of data. In the experimental section, a study is conducted to examine the effect of redundant and irrelevant predictors and to test the method on a WSG data set of tomato yields, where there are many more predictors than observations and the goal is to predict weekly yield. Finally, the modified approach is compared with several naive Bayes variants and other classification algorithms (SVM and kNN) and is shown to perform fairly well.
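
The sketch below illustrates the general idea of L1-based pruning of redundant predictors ahead of a naive Bayes classifier, using standard scikit-learn components (LassoLars and GaussianNB) on synthetic data; it is not the paper's modified LARS construction.

```python
# Rough sketch of the general idea (L1-based pruning of redundant predictors
# before a naive Bayes classifier), using standard scikit-learn pieces rather
# than the paper's modified LARS; the data here are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoLars
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# Many more predictors than samples, most of them irrelevant or redundant.
X, y = make_classification(n_samples=60, n_features=300, n_informative=10,
                           n_redundant=20, random_state=0)

pipe = make_pipeline(
    SelectFromModel(LassoLars(alpha=0.01)),  # LARS-based L1 path drops predictors
    GaussianNB(),                            # naive Bayes on surviving features
)
print(cross_val_score(pipe, X, y, cv=5).mean())
```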

Keywords: Tomato yields prediction, naive Bayes, redundancy

6563 Estimating the Renewal Function with Heavy-Tailed Claims

Authors: Rassoul Abdelaziz

Abstract:

We develop a new estimator of the renewal function for heavy-tailed claim amounts. Our approach is based on the peaks-over-threshold (POT) method for estimating the tail of the distribution with a generalized Pareto distribution. The asymptotic normality of an appropriately centered and normalized estimator is established, and its performance is illustrated in a simulation study.
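
For orientation, the sketch below shows the peaks-over-threshold building block (fitting a generalized Pareto distribution to threshold exceedances and estimating a tail probability) on synthetic heavy-tailed data; it is not the paper's full renewal-function estimator, and the threshold choice is illustrative.

```python
# Illustrative sketch of the peaks-over-threshold (POT) building block only;
# not the paper's renewal-function estimator.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
claims = rng.pareto(1.5, size=5000) + 1.0   # heavy-tailed sample (Pareto, x_m = 1)

u = np.quantile(claims, 0.95)               # threshold: empirical 95% quantile
excesses = claims[claims > u] - u           # peaks over the threshold

# Fit a generalized Pareto distribution to the excesses (location fixed at 0).
shape, loc, scale = stats.genpareto.fit(excesses, floc=0)
print(f"threshold={u:.3f}, GPD shape={shape:.3f}, scale={scale:.3f}")

# Tail probability estimate P(X > x) for x above the threshold.
x = 2 * u
p_tail = (excesses.size / claims.size) * stats.genpareto.sf(x - u, shape, 0, scale)
print(f"P(X > {x:.2f}) is approximately {p_tail:.4f}")
```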

Keywords: Renewal function, peaks-over-threshold, POT method, extreme values, generalized Pareto distribution, heavy-tailed distribution.

6562 Delaunay Triangulations Efficiency for Conduction-Convection Problems

Authors: Bashar Albaalbaki, Roger E. Khayat

Abstract:

This work is a comparative study of the effect of Delaunay triangulation algorithms on the discretization error for conduction-convection conservation problems. A structured triangulation and many unstructured Delaunay triangulations using three popular algorithms for node placement strategies are used. The numerical method employed is the vertex-centered finite volume method. It is found that when the computational domain can be meshed using a structured triangulation, the discretization error is lower for structured triangulations than for unstructured ones only for low Peclet number values, i.e., when conduction is dominant. However, as the Peclet number increases and convection becomes more significant, the unstructured triangulations reduce the discretization error. Also, no statistical correlation between triangulation angle extremums and the discretization error is found using 200 samples of randomly generated Delaunay and non-Delaunay triangulations. Thus, the angle extremums cannot be an indicator of the discretization error on their own and need to be combined with other triangulation quality measures, which is the subject of further studies.
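
The snippet below shows how the triangulation quality measure mentioned above (minimum and maximum triangle angles) can be computed for a Delaunay mesh of random points with SciPy; it is a generic illustration, not the meshes or solver used in the study.

```python
# Small sketch: build a Delaunay triangulation of random points and compute
# the minimum/maximum triangle angles, a common mesh-quality measure.
import numpy as np
from scipy.spatial import Delaunay

def triangle_angles(pts, simplices):
    """Return all interior angles (radians) of the triangles in the mesh."""
    a, b, c = (pts[simplices[:, i]] for i in range(3))
    def angle(p, q, r):                      # angle at vertex p of triangle pqr
        u, v = q - p, r - p
        cosang = (u * v).sum(axis=1) / (np.linalg.norm(u, axis=1) *
                                        np.linalg.norm(v, axis=1))
        return np.arccos(np.clip(cosang, -1.0, 1.0))
    return np.concatenate([angle(a, b, c), angle(b, c, a), angle(c, a, b)])

rng = np.random.default_rng(0)
points = rng.random((500, 2))                # random nodes in the unit square
tri = Delaunay(points)
ang = np.degrees(triangle_angles(points, tri.simplices))
print(f"min angle = {ang.min():.1f} deg, max angle = {ang.max():.1f} deg")
```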

Keywords: Conduction-convection problems, Delaunay triangulation, discretization error, finite volume method.

6561 Dispersed Error Control based on Error Filter Design for Improving Halftone Image Quality

Authors: Sang-Chul Kim, Sung-Il Chien

Abstract:

The error diffusion method generates worm artifacts and weakens the edges of the halftone image when a continuous grayscale image is reproduced as a binary image. First, to enhance the edges, we propose an edge-enhancing filter that considers the quantization error information and the gradient of the neighboring pixels. Furthermore, to remove the worm artifacts that often appear in a halftone image, we adaptively add random noise to the weights of the error filter.
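
As a generic illustration of the second idea (perturbing the error-filter weights with random noise to break up worm artifacts), the sketch below applies standard Floyd-Steinberg error diffusion with randomly jittered weights; it does not include the authors' edge-enhancing filter, and the noise level is an arbitrary assumption.

```python
# Illustrative sketch: Floyd-Steinberg error diffusion with a small random
# perturbation of the error-filter weights (not the authors' filter design).
import numpy as np

def error_diffuse(img, noise=0.1, seed=0):
    """img: 2-D float array in [0, 1]. Returns a binary halftone array."""
    rng = np.random.default_rng(seed)
    f = img.astype(float).copy()
    h, w = f.shape
    out = np.zeros_like(f)
    base = np.array([7, 3, 5, 1]) / 16.0          # Floyd-Steinberg weights
    for y in range(h):
        for x in range(w):
            old = f[y, x]
            new = 1.0 if old >= 0.5 else 0.0      # binary quantization
            out[y, x] = new
            err = old - new
            wts = base + noise * rng.uniform(-1, 1, 4) * base
            wts /= wts.sum()                      # keep total diffused error = err
            if x + 1 < w:
                f[y, x + 1] += err * wts[0]
            if y + 1 < h:
                if x - 1 >= 0:
                    f[y + 1, x - 1] += err * wts[1]
                f[y + 1, x] += err * wts[2]
                if x + 1 < w:
                    f[y + 1, x + 1] += err * wts[3]
    return out

gradient = np.tile(np.linspace(0, 1, 128), (64, 1))   # simple test ramp
print(error_diffuse(gradient).mean())                  # roughly the mean gray level
```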

Keywords: Artifact suppression, Edge enhancement, Error diffusion method, Halftone image

6560 Improving Teacher Professionalism through Certification Program: An Indonesia Case Study

Authors: Triyanto

Abstract:

The Government of Indonesia held a certification program to enhance the professionalism of teachers using portfolio assessment. This research discusses the effectiveness of the certification program in enhancing teacher professionalism in Indonesia. The portfolio assessment method has drawbacks, and the certified teachers do not show significant performance improvement. Therefore, the government has replaced the portfolio assessment method with education and training for teachers.

Keywords: Professionalism, Teacher, Certification, Indonesia

6559 Single Image Defogging Method Using Variational Approach for Edge-Preserving Regularization

Authors: Wan-Hyun Cho, In-Seop Na, Seong-Chae Seo, Sang-Kyoon Kim, Soon-Young Park

Abstract:

In this paper, we propose a variational approach to the single-image defogging problem. To infer the atmospheric veil, we define a new functional for the atmospheric veil that satisfies an edge-preserving regularization property. Using the fundamental lemma of the calculus of variations, we derive the Euler-Lagrange equation for the atmospheric veil that characterizes the extremum of the given functional. This equation can be solved by a gradient descent method with an artificial time parameter. The estimated atmospheric veil is then used to restore the image. Finally, we improve the contrast of the restored image by various histogram equalization methods. The experimental results show that the proposed method achieves rather good defogging results.
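
For reference, a generic edge-preserving functional of this type and its gradient-descent (artificial time) formulation can be written as below; the specific functional and penalty used in the paper are not reproduced here, so v, w, phi, and lambda are placeholder symbols.

```latex
% Generic edge-preserving data/regularization functional and its descent flow;
% a placeholder form, not the paper's specific functional.
E(v) = \int_\Omega \left[ \tfrac{1}{2}\bigl(v(x)-w(x)\bigr)^2
       + \lambda\,\phi\!\bigl(|\nabla v(x)|\bigr) \right] dx,
\qquad
\frac{\partial v}{\partial t} = -\frac{\delta E}{\delta v}
 = -(v-w) + \lambda\,\operatorname{div}\!\left(
      \frac{\phi'(|\nabla v|)}{|\nabla v|}\,\nabla v\right)
```

Here w denotes an initial veil estimate and phi an edge-preserving penalty; the steady state of the descent satisfies the Euler-Lagrange condition that the variational derivative of E vanishes.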

Keywords: Image defogging, Image restoration, Atmospheric veil, Transmission, Variational approach, Euler-Lagrange equation, Image enhancement.

6558 Effect of Non Uniformity Factors and Assignment Factors on Errors in Charge Simulation Method with Point Charge Model

Authors: Gururaj S Punekar, N K Kishore Senior, H S Y Shastry

Abstract:

The Charge Simulation Method (CSM) is one of the most widely used numerical field computation techniques in High Voltage (HV) engineering. High-voltage fields of varying non-uniformity are encountered in practice. Because CSM programs are case specific, simulation accuracy depends heavily on the user's (programmer's) experience. This work is an effort to understand CSM errors and to evolve guidelines for setting up accurate CSM models by relating non-uniformity to assignment factors. The results are for the six-point-charge model of a sphere-plane gap geometry. Using a genetic algorithm (GA) as a tool, optimum assignment factors at different non-uniformity factors have been evaluated and analyzed for this model. It is shown that symmetrically placed six-point-charge models can be good enough to set up CSM programs with potential errors of less than 0.1% when the field non-uniformity factor is greater than 2.64 (field utilization factor less than 52.76%).

Keywords: Assignment factor, Charge Simulation Method, High Voltage, Numerical field computation, Non uniformity factor, Simulation errors.

6557 Signal Reconstruction Using Cepstrum of Higher Order Statistics

Authors: Adnan Al-Smadi, Mahmoud Smadi

Abstract:

This paper presents an algorithm for reconstructing the phase and magnitude responses of the impulse response when only the output data are available. The system is driven by a zero-mean, independent, identically distributed (i.i.d.) non-Gaussian sequence that is not observed. The additive noise is assumed to be Gaussian. This is an important problem in many practical applications in various science and engineering areas, such as biomedical, seismic, and speech signal processing. The method is based on evaluating the bicepstrum of the third-order statistics of the observed output data. Simulation results are presented that demonstrate the performance of this method.

Keywords: Cepstrum, bicepstrum, third order statistics

6556 Effect of TEOS Electrospun Nanofiber Modified Resin on Interlaminar Shear Strength of Glass Fiber/Epoxy Composite

Authors: Dattaji K. Shinde, Ajit D. Kelkar

Abstract:

The interlaminar shear strength (ILSS) of a fiber reinforced polymer composite is an important property for most structural applications. Matrix modification is an effective method to improve the interlaminar shear strength of a composite. In this paper, the EPON 862/W epoxy system was modified using tetraethyl orthosilicate (TEOS) electrospun nanofibers (ENFs), which were produced by the electrospinning method. Unmodified and nanofiber-modified resins were used to fabricate glass fiber reinforced polymer (GFRP) composites using the H-VARTM method. The ILSS of the GFRP composites was investigated. The study shows that the introduction of TEOS ENFs into the epoxy resin enhanced the ILSS of the GFRP by 15% at a 0.6 wt.% fraction of TEOS ENFs.

Keywords: Electrospun nanofibers, H-VARTM, Interlaminar shear strength (ILSS), Matrix modification.

6555 Rapid Determination of Biochemical Oxygen Demand

Authors: Mayur Milan Kale, Indu Mehrotra

Abstract:

Biochemical Oxygen Demand (BOD) is a measure of the oxygen used in the bacteria-mediated oxidation of organic substances in water and wastewater. Theoretically, an infinite time is required for complete biochemical oxidation of organic matter, but the measurement is made over a 5-day test period at 20 °C or a 3-day test period at 27 °C, with or without dilution. Researchers have worked to further reduce the time of measurement. The objective of this paper is to review advancements made in BOD measurement, primarily to minimize the time and overcome the measurement difficulties. The literature on four such techniques is surveyed, namely the BOD-BART™ method, biosensors, the ferricyanide-mediated approach, and the luminous bacteria immobilized chip method. The basic principle, method of determination, data validation, and advantages and disadvantages of each method are covered. In the BOD-BART™ method, the time lag for the system to change from an oxidative to a reductive state is calculated. Biosensors combine a biological sensing element with a transducer that produces a signal proportional to the analyte concentration. Each microbial species has its metabolic deficiencies; co-immobilization of bacteria in a sol-gel biosensor increases the range of substrates. In the ferricyanide-mediated approach, ferricyanide is used as the electron acceptor instead of oxygen. In the luminous bacterial cells immobilized chip method, the bacterial bioluminescence caused by lux genes is observed, and the physiological response (reduction or emission) is measured and correlated to BOD. There is scope to probe further into the rapid estimation of BOD.

Keywords: BOD, Four methods, Rapid estimation

6554 Usage-based Traffic Control for P2P Content Delivery

Authors: Megumi Shibuya, Tomohiko Ogishi

Abstract:

Recently, content delivery services have grown rapidly over the Internet. For ASPs (Application Service Providers) providing content delivery services, a P2P architecture is beneficial for reducing outgoing traffic from content servers. On the other hand, ISPs are suffering from the increase in P2P traffic. P2P traffic is unnecessarily redundant because the same content, or the same fractions of content, is transferred through an inter-ISP link several times. Subscriber ISPs have to pay a transit fee to upstream ISPs based on the volume of inter-ISP traffic. In order to solve such problems, several works have been carried out with the aim of reducing P2P traffic. However, these existing works cannot control the traffic volume of a particular link. To meet this operational requirement of ISPs, we propose a method to control the traffic volume of a link to within a preconfigured upper bound. We evaluated the proposed method by conducting a simulation on a 1,000-user scale and confirmed that the traffic volume could be kept below the upper bound under all evaluated conditions. Moreover, our method controlled the traffic volume to 98.95% of the target link usage.

Keywords: P2P, traffic control, traffic localization, ALTO.

6553 Comparative Study of Seismic Isolation as Retrofit Method for Historical Constructions

Authors: Carlos H. Cuadra

Abstract:

Seismic isolation can be used as a retrofit method for historical buildings with the advantage that only minimal intervention on the superstructure is required. However, the selection of isolation devices depends on the weight and stiffness of the upper structure. In this study, two buildings are analyzed to evaluate the applicability of this retrofitting methodology. Both buildings are located in Akita Prefecture in the northern part of Japan. One building is a wooden structure that corresponds to the old council meeting hall of Noshiro city. The second building is a brick masonry structure that was used as the house of a foreign mining engineer and is located in Ani town. Ambient vibration measurements were performed on both buildings to estimate their dynamic characteristics. Then, a target period of vibration of 3 seconds is selected for the isolated systems to estimate the required stiffness of the isolation devices. For the wooden structure, which is a light construction, it was found that natural rubber isolators in combination with friction bearings are suitable for seismic isolation. In the case of the masonry building, elastomeric isolators can be used for its seismic isolation. Lumped mass systems are used for the seismic response analysis, and it is verified in both cases that seismic isolation can be used as a retrofitting method for historical constructions. However, in the case of the light building, most of the weight corresponds to the reinforced concrete slab that is required to install the isolation devices.
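
The back-of-the-envelope sketch below shows the stiffness sizing implied by the 3-second target period for a rigid-mass model; the building masses used here are hypothetical and are not taken from the paper.

```python
# Back-of-the-envelope sketch of the isolation-layer stiffness implied by a
# target period T = 3 s; masses below are assumed values, not the paper's data.
import math

def required_stiffness(mass_kg, target_period_s):
    """Total horizontal stiffness for a rigid-mass (lumped) model:
    T = 2*pi*sqrt(m/k)  =>  k = m * (2*pi/T)**2."""
    return mass_kg * (2.0 * math.pi / target_period_s) ** 2

for name, m in [("light wooden hall (assumed 50 t)", 50e3),
                ("brick masonry house (assumed 300 t)", 300e3)]:
    k = required_stiffness(m, 3.0)
    print(f"{name}: k is about {k / 1e6:.2f} MN/m")
```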

Keywords: Historical building, finite element method, masonry structure, seismic isolation, wooden structure.

6552 Real-Time Recognition of the Terrain Configuration to Improve Driving Stability for Unmanned Robots

Authors: Bongsoo Jeon, Jayoung Kim, Jihong Lee

Abstract:

Methods for measuring or estimating the ground shape with a laser range finder or a vision sensor (exteroceptive sensors) have a critical weakness: they need a previously built database to classify the acquired data into the surface conditions relevant for driving. Also, ground information from exteroceptive sensors does not reflect the deflection of the ground surface caused by the movement of UGVs (unmanned ground vehicles). Therefore, this paper proposes a method for recognizing the exact and precise ground shape using an Inertial Measurement Unit (IMU) as a proprioceptive sensor. First, the method recognizes the attitude of the robot in real time using the IMU and compensates the attitude data for angle errors through an analysis of the vehicle dynamics. The method is verified by outdoor driving experiments with a real mobile robot.

Keywords: Inertial Measurement Unit, Laser Range Finder, Real-time recognition of the ground shape.

6551 Probabilistic Approach as a Method Used in the Solution of Engineering Design for Biomechanics and Mining

Authors: Karel Frydrýšek

Abstract:

This paper focuses on the probabilistic numerical solution of problems in biomechanics and mining. Applications of the Simulation-Based Reliability Assessment (SBRA) method are presented in the design of external fixators applied in traumatology and orthopaedics (these fixators can be applied for the treatment of open and unstable fractures, etc.) and in the solution of a hard rock (ore) disintegration process (i.e., the bit moves into the ore and subsequently disintegrates it; the results are compared with experiments, and a new design of excavation tool is proposed).
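
As a generic illustration of the Monte Carlo style of reliability check underlying SBRA-type assessments, the sketch below estimates a probability of failure from random load and resistance; the distributions and values are illustrative assumptions, not data from the paper.

```python
# Minimal Monte Carlo sketch of a reliability check: estimate the probability
# that the load effect exceeds the resistance. All distributions are assumed.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Hypothetical random inputs: applied load [N] and component resistance [N].
load = rng.normal(loc=8_000.0, scale=1_200.0, size=n)
resistance = rng.lognormal(mean=np.log(15_000.0), sigma=0.10, size=n)

reserve = resistance - load                 # safety margin for each trial
p_failure = np.mean(reserve < 0.0)          # fraction of trials that fail
print(f"estimated probability of failure: {p_failure:.2e}")
```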

Keywords: probabilistic approach, engineering design, traumatology, rock mechanics

6550 Research of the Behavior of Solar Module Frame Installed by Solar Clamping System by Finite Element Method

Authors: Li-Chung Su, Chia-Yu Chen, Tzu-Yuan Lai, Sheng-Jye Hwang

Abstract:

The mechanical design of the thin-film framed solar module and its mounting system is important to enhance module reliability and to increase the range of applications. The stress induced by different mounting positions plays a major role in controlling the stability of the whole mechanical structure. Finite element analysis shows that, under pressure applied to the back of the module, the stress at Lc (the center point of the long frame) increases and the stresses at the center, the corner, and Sc (the center point of the short frame) decrease as the mounting position moves away from the center of the module. In addition, not only the stress in the glass but also the stress in the frame decreases. Accordingly, it is safer to mount in a position away from the center of the module. The emphasis in designing the frame system of the module is on the upper support of the short frame. The strength of the overall structure and the design of the corner are also important due to the complexity of the stress in the long frame.

Keywords: Finite element method, Framed module, Mounting position

6549 A New Stabilizing GPC for Nonminimum Phase LTI Systems Using Time Varying Weighting

Authors: Mahdi Yaghobi, Mohammad Haeri

Abstract:

In this paper, we show that stability cannot be achieved with current stabilizing MPC methods for some unstable processes. Hence, we present a new method for stabilizing these processes. The main idea is to use a new time-varying weighted cost function in traditional GPC. This stabilizes the closed-loop system without adding soft or hard constraints to the optimization problem. By studying different examples, it is shown that the proposed method achieves closed-loop stability for unstable nonminimum phase processes.
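
For context, the standard GPC cost with weighting sequences has the form below; the paper's particular time-varying choice of the weights delta(j) and lambda(j) is not reproduced here.

```latex
% Standard GPC cost with weighting sequences (generic form only).
J = \sum_{j=N_1}^{N_2} \delta(j)\,\bigl[\hat{y}(t+j \mid t) - w(t+j)\bigr]^2
  + \sum_{j=1}^{N_u} \lambda(j)\,\bigl[\Delta u(t+j-1)\bigr]^2
```

Here N_1 and N_2 are the prediction horizons, N_u the control horizon, w the reference trajectory, and Delta u the control increment.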

Keywords: GPC, Stability, Varying Weighting Coefficients.

6548 Physical and Electrical Characterization of ZnO Thin Films Prepared by Sol-Gel Method

Authors: Mohammad Reza Tabatabaei, Ali Vaseghi Ardekani

Abstract:

In this paper, zinc oxide (ZnO) thin films are deposited on glass substrates by the sol-gel method. ZnO thin films with a well-defined orientation were obtained by spin coating a solution of zinc acetate dihydrate, monoethanolamine (MEA), de-ionized water, and isopropanol. These films were pre-heated at 275 °C for 10 min and then annealed at 350 °C, 450 °C, and 550 °C for 80 min. The effects of annealing temperature and film thickness on the structure and surface morphology of the thin films were examined by Atomic Force Microscopy (AFM). Annealing temperature was found to have a significant effect on structural parameters of the films such as the roughness exponent, fractal dimension, and interface width. The thin films were also characterized by X-ray diffraction (XRD). XRD analysis revealed that the annealed ZnO thin films consist of single-phase ZnO with a wurtzite structure and show c-axis grain orientation. Increasing the annealing temperature increased the crystallite size and the c-axis orientation of the films above 450 °C. In addition, ZnO thin films of different thicknesses (100, 150, and 250 nm) were prepared by the sol-gel method on glass substrates at room temperature. Using fractal analysis, the morphological characteristics of the film surfaces in the amorphous state were investigated. The results show that with increasing thickness, the surface roughness (RMS) and the lateral correlation length (ξ) decrease. The roughness exponent (α) and growth exponent (β) were determined to be 0.74±0.02 and 0.11±0.02, respectively.

Keywords: ZnO, Thin film, Fractal analysis, Morphology, AFM, annealing temperature, different thickness, XRD.

6547 A Generic Approach to Reuse Unified Modeling Language Components Following an Agile Process

Authors: Rim Bouhaouel, Naoufel Kraïem, Zuhoor Al Khanjari

Abstract:

The Unified Modeling Language (UML) is one of the most widespread modeling languages, standardized by the Object Management Group (OMG). Therefore, the model-driven engineering (MDE) community attempts to provide reuse of UML diagrams rather than constructing them from scratch. A UML model is produced according to a specific software development process. Existing model generation methods have focused on different transformation techniques without considering the development process. Our work aims to construct a UML component from fragments of UML diagrams based on an agile method. We define a UML fragment as a portion of a UML diagram that expresses a business target. To guide the generation of fragments of UML models using an agile process, we need a flexible approach that adapts to agile changes and covers all agile activities. We use a software product line (SPL) to derive a fragment of the agile method process. This paper explains our approach, named RECUP, for generating UML fragments following an agile process, gives an overview of its different aspects, and defines the different phases and artifacts.

Keywords: UML, component, fragment, agile, SPL.

6546 A New Distribution Network Reconfiguration Approach using a Tree Model

Authors: E. Dolatdar, S. Soleymani, B. Mozafari

Abstract:

Power loss reduction is one of the main targets in the power industry, so in this paper the problem of finding the optimal configuration of a radial distribution system for loss reduction is considered. Optimal reconfiguration involves selecting the best set of branches to be opened, one from each loop, to reduce resistive line losses and relieve overloads on feeders by shifting load to adjacent feeders. However, since there are many candidate switching combinations in the system, feeder reconfiguration is a complicated problem. In this paper, a new approach is proposed based on a simple optimum loss calculation that determines optimal trees of the given network. From graph theory, a distribution network can be represented as a graph consisting of a set of nodes and branches. In fact, the problem can be viewed as determining an optimal tree of the graph, which simultaneously ensures the radial structure of each candidate topology. In this method, a refined genetic algorithm is also set up, and some improvements are made to the chromosome coding. An implementation of the algorithm presented in [7], with a modification of the load flow program, is also applied and compared with the proposed method. In [7], an algorithm is proposed in which the choice of the switches to be opened is based on simple heuristic rules; this algorithm reduces the number of load flow runs, narrows the switching combinations to a smaller number, and gives the optimum solution. To demonstrate the validity of these methods, computer simulations with the PSAT and MATLAB programs are carried out on a 33-bus test system. The results show that the performance of the proposed method is better than that of the method in [7] and of other methods.
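
The toy snippet below illustrates the graph-theoretic view described above: each radial configuration is a spanning tree of the network graph. Here the minimum-total-resistance spanning tree is picked with NetworkX as a crude stand-in for the loss-minimizing tree; the paper's genetic algorithm and load-flow evaluation are not shown, and the 6-node network is hypothetical.

```python
# Toy illustration of the spanning-tree view of radial reconfiguration; the
# minimum-resistance tree is only a crude proxy for the loss-minimizing tree.
import networkx as nx

G = nx.Graph()
# Hypothetical 6-node network: (from, to, branch resistance in ohms).
branches = [(1, 2, 0.10), (2, 3, 0.15), (3, 4, 0.12), (4, 5, 0.20),
            (5, 6, 0.18), (6, 1, 0.25), (2, 5, 0.30), (3, 6, 0.22)]
G.add_weighted_edges_from(branches, weight="r")

tree = nx.minimum_spanning_tree(G, weight="r")   # one radial candidate

def norm(e):                 # undirected edge as an order-independent tuple
    return tuple(sorted(e))

open_branches = {norm(e) for e in G.edges()} - {norm(e) for e in tree.edges()}
print("branches kept closed:", sorted(norm(e) for e in tree.edges()))
print("branches to open:", sorted(open_branches))
```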

Keywords: Distribution System, Reconfiguration, Loss Reduction , Graph Theory , Optimization , Genetic Algorithm

6545 Heteromolecular Structure Formation in Aqueous Solutions of Ethanol, Tetrahydrofuran and Dimethylformamide

Authors: Sh. Gofurov, O. Ismailova, U. Makhmanov, A. Kokhkharov

Abstract:

The refractometric method has been used to determine the concentration dependence of the optical properties of aqueous solutions of ethanol, tetrahydrofuran, and dimethylformamide at room temperature. Changes in the dielectric permittivity of aqueous solutions of ethanol, tetrahydrofuran, and dimethylformamide over a wide range of concentrations (0 to 1.0 mole fraction) have been studied using the molecular dynamics method. The concentration dependences of the experimental excess refractive indices and excess dielectric permittivity were compared. It has been shown that stable heteromolecular complexes in binary solutions are formed in the concentration range of 0.3 to 0.4 mole fraction. The real and imaginary parts of the dielectric permittivity were obtained from the dipole-dipole autocorrelation functions of the molecules. At concentrations of about 0.3 to 0.4 mole fraction, heteromolecular structures with hydrogen bonds are formed. This is confirmed by the extremum values of the excess dielectric permittivity and excess refractive index of the aqueous solutions.

Keywords: Refractometric method, dielectric constant, molecular dynamics, aqueous solution.

6544 Synthesis of ZnO Nanostructures via Gel-casting Method

Authors: A. A. Rohani, A. Salehi, M. Tabrizi, S. A. Manafi, A. Fardafshari

Abstract:

In this study, ZnO nanorods and ZnO ultrafine particles were synthesized by the gel-casting method. The synthesized ZnO powder has a hexagonal zincite structure. The ZnO aggregates with rod-like morphology are typically 1.4 μm in length and 120 nm in diameter and consist of many small nanocrystals with diameters of about 10 nm. Longer wires connected by many hexahedral ZnO nanocrystals were obtained after calcination at temperatures over 600 °C. The crystalline structure and morphology of the powders were characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The results show that preparation conditions such as the H2O concentration, calcination time, and calcination temperature strongly influence the properties of the nano ZnO powders: an increase in calcination temperature results in an increase in grain size, and longer calcination at high temperature also makes the grains bigger. The presence of excess water prevents the nano grains from developing a rod-like morphology. The smallest ZnO grain size was obtained by controlling the process conditions. Finally, under suitable conditions, a novel nanostructure, namely bi-rod-like ZnO nanorods, was found, which differs from known ZnO nanostructures.

Keywords: Morphology, nanoparticles, ZnO, gel-casting method.

6543 Creation of GaxCo1-xZnSe0.4 (x = 0.1, 0.3, 0.5) Nanoparticles Using Pulse Laser Ablation Method

Authors: Yong Pan, Li Wang, Xue Qiong Su, Dong Wen Gao

Abstract:

Nanomaterials have received extensive attention over the years because of their wide range of applications. Various nanomaterials such as nanoparticles, nanowires, nanorings, nanostars, and other nanostructures have begun to be studied systematically. The preparation of these materials by chemical methods is not only costly but also involves long cycles and high toxicity. At the same time, the preparation of nanoparticles of multi-doped composites has been limited by the special structure of these materials. In order to prepare multi-doped composites with the same structure as the macro-materials and to simplify the preparation method, GaxCo1-xZnSe0.4 (x = 0.1, 0.3, 0.5) nanoparticles are prepared by the Pulse Laser Ablation (PLA) method. The particle composition and structure are systematically investigated by X-ray diffraction (XRD) and Raman spectroscopy, which show the success of the preparation and the same composition in the nanoparticles (NPs) as in the target. The morphology of the NPs, characterized by Transmission Electron Microscopy (TEM), indicates that circular-shaped particles are obtained. The fluorescence properties are reflected in the PL spectra, which demonstrate the best performance for the Ga0.3Co0.3ZnSe0.4 concentration. Therefore, all the results suggest that PLA is promising for preparing multi-doped NPs, since it can modulate the performance of the NPs.

Keywords: PLA, physics, nanoparticles, multi-doped.

6542 Microarrays Denoising via Smoothing of Coefficients in Wavelet Domain

Authors: Mario Mastriani, Alberto E. Giraldez

Abstract:

We describe a novel method for removing noise of unknown variance from microarrays in the wavelet domain. The method is based on smoothing the coefficients of the highest subbands. Specifically, we decompose the noisy microarray into wavelet subbands, apply smoothing within each highest subband, and reconstruct the microarray from the modified wavelet coefficients. This process is applied a single time, and exclusively to the first level of decomposition; i.e., in most cases a multiresolution analysis is not necessary. Denoising results compare favorably to most methods currently in use.
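
A minimal sketch of this kind of scheme is shown below, assuming PyWavelets and SciPy are available: a single-level 2-D decomposition, a simple 3x3 mean filter as the smoothing applied to the detail (highest) subbands, and reconstruction. The wavelet, the smoothing filter, and the synthetic test image are illustrative choices, not the paper's.

```python
# Minimal sketch: one-level 2-D wavelet decomposition, smoothing of the
# highest (detail) subbands, and reconstruction; choices here are illustrative.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def denoise_microarray(image):
    """image: 2-D array of noisy microarray intensities."""
    cA, (cH, cV, cD) = pywt.wavedec2(image, "db2", level=1)   # single level only
    # Smooth each detail subband; the approximation subband is left untouched.
    cH, cV, cD = [uniform_filter(c, size=3) for c in (cH, cV, cD)]
    return pywt.waverec2([cA, (cH, cV, cD)], "db2")

# Synthetic example: a smooth spot plus additive noise of unknown variance.
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
clean = np.exp(-8 * (x**2 + y**2))
noisy = clean + rng.normal(scale=0.15, size=clean.shape)
denoised = denoise_microarray(noisy)
print(f"RMSE noisy={np.sqrt(np.mean((noisy - clean)**2)):.3f}, "
      f"denoised={np.sqrt(np.mean((denoised[:128, :128] - clean)**2)):.3f}")
```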

Keywords: Directional smoothing, denoising, edge preservation, microarrays, thresholding, wavelets
