Search results for: GPU parallel programming
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2028

1758 Massively Parallel Sequencing Improved Resolution for Paternity Testing

Authors: Xueying Zhao, Ke Ma, Hui Li, Yu Cao, Fan Yang, Qingwen Xu, Wenbin Liu

Abstract:

Massively parallel sequencing (MPS) technologies allow high-throughput sequencing analyses at a relatively affordable price and have gradually been applied to forensic casework. MPS identifies short tandem repeat (STR) loci based on sequence, so repeat-motif variation within STRs can be detected, which may help infer the origin of a mutation in some cases. Here, we report on one case with a three-step mismatch (D18S51) in a family trio, based on both capillary electrophoresis (CE) and MPS typing. The alleles of the alleged father (AF) are [AGAA]₁₇AGAG[AGAA]₃ and [AGAA]₁₅. The mother’s alleles are [AGAA]₁₉ and [AGAA]₉AGGA[AGAA]₃. The questioned child’s (QC) alleles are [AGAA]₁₉ and [AGAA]₁₂. Given that the sequence variants in the repeat regions of the AF and the mother are not observed in the QC’s alleles, the QC’s allele [AGAA]₁₂ was likely inherited from the AF’s allele [AGAA]₁₅ through the loss of three [AGAA] repeats. In addition, two alleles of D18S51 observed in this study, [AGAA]₁₇AGAG[AGAA]₃ and [AGAA]₉AGGA[AGAA]₃, have not been reported before. All results were verified using Sanger-type sequencing. In summary, MPS typing can offer valuable information for forensic genetics research and play a promising role in paternity testing.

Keywords: family trios analysis, forensic casework, ion torrent personal genome machine (PGM), massively parallel sequencing (MPS)

Procedia PDF Downloads 275
1757 Solving 94-Bit ECDLP with 70 Computers in Parallel

Authors: Shunsuke Miyoshi, Yasuyuki Nogami, Takuya Kusaka, Nariyoshi Yamai

Abstract:

The elliptic curve discrete logarithm problem (ECDLP) is one of the problems on which the security of pairing-based cryptography is based. This paper considers Pollard's rho method to evaluate the security of ECDLP on the Barreto-Naehrig (BN) curve, an efficient pairing-friendly curve. Several well-known techniques are applied to make the rho method efficient, in particular the group structure of the BN curve, the distinguished point method, and the Montgomery trick, and their optimization is shown. According to the experimental results, obtained on a large-scale parallel system backed by MySQL, a 94-bit ECDLP was solved in about 28 hours by parallelizing 71 computers.
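
For readers unfamiliar with the parallelized rho method mentioned above, the sketch below illustrates the random-walk and distinguished-point ideas on a toy multiplicative group modulo a small prime rather than on a BN elliptic curve; all parameters are illustrative and none of this is the authors' code.

```python
# Toy sketch of (parallel-style) Pollard rho with distinguished points,
# solving g^x = h mod p in a small multiplicative group.
import random

p = 10007                 # small prime; a toy group (real attacks use ~94-bit+ groups)
n = p - 1                 # order of the multiplicative group Z_p^*
g = 5                     # assumed generator
x_secret = 1234           # the discrete log we pretend not to know
h = pow(g, x_secret, p)   # public value: h = g^x mod p

def step(y, a, b):
    """One pseudo-random walk step; the partition is chosen by y mod 3."""
    s = y % 3
    if s == 0:
        return (y * g) % p, (a + 1) % n, b
    if s == 1:
        return (y * h) % p, a, (b + 1) % n
    return (y * y) % p, (2 * a) % n, (2 * b) % n

def is_distinguished(y):
    """Distinguished points: low 4 bits are zero (rare, cheap to test)."""
    return y % 16 == 0

def walk_to_distinguished():
    """Start at a random g^a * h^b and walk until a distinguished point is hit."""
    while True:
        a, b = random.randrange(n), random.randrange(n)
        y = (pow(g, a, p) * pow(h, b, p)) % p
        for _ in range(100_000):          # cap guards against cycling forever
            if is_distinguished(y):
                return y, a, b
            y, a, b = step(y, a, b)

def solve():
    seen = {}   # distinguished point -> (a, b); in the real attack, a shared database
    while True:
        y, a, b = walk_to_distinguished()
        if y in seen:
            a0, b0 = seen[y]
            try:
                # g^a0 * h^b0 == g^a * h^b  =>  x = (a0 - a) / (b - b0) mod n
                return (a0 - a) * pow(b - b0, -1, n) % n
            except ValueError:            # (b - b0) not invertible mod n; keep searching
                pass
        seen[y] = (a, b)

print(solve() == x_secret)    # True with overwhelming probability
```

In the parallel setting each computer runs independent walks and reports only its distinguished points to a central database, which is the role MySQL plays in the experiment described above.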

Keywords: Pollard's rho method, BN curve, Montgomery multiplication

Procedia PDF Downloads 239
1756 Supplier Selection by Considering Cost and Reliability

Authors: K. -H. Yang

Abstract:

The supplier selection problem is one of the important issues in supply chain management. Two categories of methodologies, qualitative and quantitative approaches, can be applied to supplier selection problems. However, due to the complexity of the problem and the lack of reliable quantitative data, qualitative approaches are used more often than quantitative ones. This study considers operational cost and the supplier's reliability and solves the problem using a quantitative approach. A mixed integer programming model is the primary analytic tool. Analyses of different scenarios with varying cost and reliability structures show the effectiveness of this approach to the supplier selection problem.
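
The sketch below shows the general shape of a cost-versus-reliability supplier-selection MIP, using the PuLP modelling library and hypothetical supplier data; it illustrates the model class, not the authors' formulation.

```python
# Minimal supplier-selection MIP sketch: minimize cost, require a minimum
# average reliability among the selected suppliers (all data hypothetical).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

suppliers = ["S1", "S2", "S3"]
cost = {"S1": 100, "S2": 80, "S3": 120}             # hypothetical operational costs
reliability = {"S1": 0.95, "S2": 0.90, "S3": 0.99}  # hypothetical reliability factors
n_select = 2                                        # how many suppliers to select
min_avg_reliability = 0.94

x = {s: LpVariable(f"select_{s}", cat=LpBinary) for s in suppliers}

model = LpProblem("supplier_selection", LpMinimize)
model += lpSum(cost[s] * x[s] for s in suppliers)                 # objective: total cost
model += lpSum(x[s] for s in suppliers) == n_select               # pick exactly n_select suppliers
model += lpSum(reliability[s] * x[s] for s in suppliers) >= min_avg_reliability * n_select

model.solve()
print([s for s in suppliers if value(x[s]) > 0.5], "cost:", value(model.objective))
```

Scenario analyses of the kind mentioned in the abstract amount to re-solving such a model with different cost and reliability inputs.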

Keywords: mixed integer programming, quantitative approach, supplier’s reliability, supplier selection

Procedia PDF Downloads 345
1755 An Ultra-Low Output Impedance Power Amplifier for Tx Array in 7-Tesla Magnetic Resonance Imaging

Authors: Ashraf Abuelhaija, Klaus Solbach

Abstract:

In ultra-high-field MRI scanners (3T and higher), parallel RF transmission techniques using multiple RF chains with multiple transmit elements are a promising approach to overcoming the high-field MRI challenges of inhomogeneity in the RF magnetic field and SAR. However, mutual coupling between the transmit array elements disturbs the desired independent control of the RF waveform for each element. This contribution demonstrates an 18 dB improvement in decoupling (isolation) performance due to the very low output impedance of our 1 kW power amplifier.

Keywords: EM coupling, inter-element isolation, magnetic resonance imaging (MRI), parallel transmit

Procedia PDF Downloads 466
1754 Cloud Computing in Data Mining: A Technical Survey

Authors: Ghaemi Reza, Abdollahi Hamid, Dashti Elham

Abstract:

Cloud computing poses a diversity of challenges for data mining operations arising from the dynamic structure of data distribution, as opposed to the typical database scenarios of conventional architectures. Due to the immense number of users seeking data on a daily basis, there are serious security concerns for cloud providers as well as for the data providers who place their data in the cloud computing environment. Big data analytics uses compute-intensive data mining algorithms (hidden Markov models, MapReduce parallel programming, the Mahout project, the Hadoop distributed file system, k-means and k-medoids, Apriori) that require efficient high-performance processors to produce timely results and to solve for or optimize the model parameters. The operational challenges are to establish successful transactions with the existing virtual machine environment and to keep the databases under control. Several factors have led to the shift from conventional, centralized mining to distributed data mining. The approach is offered as SaaS and uses multi-agent systems to implement the different tasks of the system. Open problems remain in data mining based on cloud computing, including the design and selection of data mining algorithms.

Keywords: cloud computing, data mining, computing models, cloud services

Procedia PDF Downloads 447
1753 Improving Student Programming Skills in Introductory Computer and Data Science Courses Using Generative AI

Authors: Genady Grabarnik, Serge Yaskolko

Abstract:

Generative Artificial Intelligence (AI) has significantly expanded its applicability with the incorporation of Large Language Models (LLMs) and has become a technology that promises to automate areas that were previously very difficult to automate. The paper describes the introduction of generative AI into introductory computer and data science courses and analyzes the effect of this introduction. Generative AI is incorporated into the educational process in two ways. For instructors, we create prompt templates for generating tasks and for grading students' work, including feedback on submitted assignments. For students, we introduce basic prompt engineering, which is then used to generate test cases from problem descriptions, to generate code snippets for single-block programming tasks, and to partition average-complexity programming tasks into such blocks. The classes are run using Large Language Models, and feedback from instructors and students as well as course outcomes are collected. The analysis shows a statistically significant positive effect and a preference for the approach among both groups of stakeholders.

Keywords: introductory computer and data science education, generative AI, large language models, application of LLMs to computer and data science education

Procedia PDF Downloads 31
1752 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)-Graphics Processing Unit (GPU) Heterogeneous Computing

Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou

Abstract:

The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, embedded systems-on-a-chip (SoCs) have come to contain coarse-granularity multi-core CPUs (central processing units) and mobile GPUs (graphics processing units) that can be used as general-purpose accelerators. The motivation is that algorithms with various parallel characteristics can be efficiently mapped to a heterogeneous architecture that couples these three processors. The CPU and GPU offload part of the computationally intensive tasks from the FPGA to reduce resource consumption and lower the overall cost of the system. However, in common present-day scenarios, applications utilize only one type of accelerator, because development approaches that support the collaboration of heterogeneous processors face challenges. A systematic approach is therefore needed that offers write-once-run-anywhere portability and high execution performance for modules mapped to the various architectures, and that facilitates design space exploration. In this paper, a servant-execution-flow model is proposed to abstract the cooperation of the heterogeneous processors; it supports task partitioning, communication and synchronization. At its first run, the intermediate language, represented by a data flow diagram, can generate executable code for the target processor or be converted into high-level programming languages. Instantiation parameters efficiently control the relationship between modules and computational units, including the two-level hierarchical mapping of processing units and the adjustment of data-level parallelism. An embedded system for a three-dimensional waveform oscilloscope is selected as a case study. The performance of algorithms such as contrast stretching is analyzed for implementations on various combinations of these processors. The experimental results show that the heterogeneous computing system achieves performance similar to the pure-FPGA implementation and comparable energy efficiency while using less than 35% of the resources.

Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation

Procedia PDF Downloads 79
1751 Stochastic Energy and Reserve Scheduling with Wind Generation and Generic Energy Storage Systems

Authors: Amirhossein Khazali, Mohsen Kalantar

Abstract:

Energy storage units can play an important role in providing economic and secure operation of future energy systems. In this paper, a stochastic energy and reserve market clearing scheme is presented that considers energy storage units. The approach is proposed to deal with stochastic, non-dispatchable renewable sources with a high level of penetration in the energy system. A two-stage stochastic programming scheme is formulated in which, in the first stage, the energy market is cleared according to the forecasted wind generation and demand, and, in the second stage, the real-time market is solved for the assumed scenarios.
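
For reference, a generic two-stage stochastic program of the type described above can be written as follows; the notation is standard and does not reproduce the paper's exact market-clearing constraints.

```latex
% Generic two-stage stochastic program (a sketch): x is the first-stage
% (day-ahead) energy and reserve schedule, y_s the real-time recourse decision
% in scenario s, and \pi_s the scenario probability.
\min_{x,\,y_s}\; c^{\top}x \;+\; \sum_{s \in S} \pi_s\, q_s^{\top} y_s
\quad\text{s.t.}\quad
A x \le b, \qquad
T_s x + W y_s \le h_s, \qquad
y_s \ge 0 \quad \forall s \in S .
```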

Keywords: energy and reserve market, energy storage device, stochastic programming, wind generation

Procedia PDF Downloads 545
1750 Improvement of Parallel Compressor Model in Dealing with Outlet Unequal Pressure Distribution

Authors: Kewei Xu, Jens Friedrich, Kevin Dwinger, Wei Fan, Xijin Zhang

Abstract:

The Parallel Compressor Model (PCM) is a simplified approach to predicting compressor performance with inlet distortions. The PCM calculation assumes that the sub-compressors' outlet static pressure is uniform, which simplifies the calculation procedure. However, if the compressor's outlet duct is not long and straight, this assumption frequently induces errors of 10% to 15%. This paper provides a revised PCM calculation method that corrects this error. The revised method employs the energy, momentum, and continuity equations to acquire the needed parameters and replaces the equal-static-pressure assumption. Based on the revised method, PCM is applied to two compression systems with different blade types. Their performance under non-uniform inlet conditions is predicted with the revised calculation method, and the predictions are used to evaluate the method's effectiveness. When validated against experimental data, the calculated results agree well, with errors ranging from 0.1% to 3%. This shows that the revised PCM calculation method has clear advantages in predicting the performance of a distorted compressor with a limited exhaust duct.

Keywords: parallel compressor model (PCM), revised calculation method, inlet distortion, outlet unequal pressure distribution

Procedia PDF Downloads 301
1749 Logic Programming and Artificial Neural Networks in Pharmacological Screening of Schinus Essential Oils

Authors: José Neves, M. Rosário Martins, Fátima Candeias, Diana Ferreira, Sílvia Arantes, Júlio Cruz-Morais, Guida Gomes, Joaquim Macedo, António Abelha, Henrique Vicente

Abstract:

Some plants of the genus Schinus have been used in folk medicine as topical antiseptics, digestives, purgatives, diuretics, analgesics or antidepressants, and also for respiratory and urinary infections. The chemical composition of the essential oils of S. molle and S. terebinthifolius has been evaluated and shows high variability according to the part of the plant studied and to the geographic and climatic region. The pharmacological properties, namely antimicrobial, anti-tumoural and anti-inflammatory activities, are conditioned by the chemical composition of the essential oils. Given the difficulty of inferring the pharmacological properties of Schinus essential oils without extensive experimental work, this work focuses on the development of a decision support system, in terms of its knowledge representation and reasoning procedures, under a formal framework based on Logic Programming, complemented with a computational approach centred on Artificial Neural Networks and the respective Degree-of-Confidence that one has in such an occurrence.

Keywords: artificial neural networks, essential oils, knowledge representation and reasoning, logic programming, Schinus molle L., Schinus terebinthifolius Raddi

Procedia PDF Downloads 512
1748 Capture Zone of a Well Field in an Aquifer Bounded by Two Parallel Streams

Authors: S. Nagheli, N. Samani, D. A. Barry

Abstract:

In this paper, the velocity potential and stream function of the capture zone of a well field in an aquifer bounded by two parallel streams, with or without a uniform regional flow in any direction, are presented. The well field may include any number of extraction or injection wells, or a combination of both, with arbitrary pumping rates. To delineate the capture envelope, the potential and streamline equations are derived by the conformal mapping method, which relaxes the constraints of other methods. The equations can be applied as useful tools to design in-situ groundwater remediation systems, to evaluate surface-subsurface water interaction, and to manage water resources.
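
As background, solutions of this kind are built by superposing complex potentials of elementary flows; a minimal sketch under one common sign convention is given below (the paper's full image-well system for the two parallel streams is not reproduced).

```latex
% Sketch: complex potential \Omega of a single well of strength Q at z_w
% superposed on a uniform regional flow of magnitude q_0 at angle \alpha;
% the velocity potential \Phi and stream function \Psi are its real and
% imaginary parts. Sign conventions vary between texts.
\Omega(z) \;=\; -\,q_0\, e^{-i\alpha} z \;+\; \frac{Q}{2\pi}\,\ln\!\left(z - z_w\right),
\qquad \Phi = \operatorname{Re}\,\Omega, \quad \Psi = \operatorname{Im}\,\Omega .
```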

Keywords: complex potential, conformal mapping, image well theory, Laplace’s equation, superposition principle

Procedia PDF Downloads 399
1747 Developing Computational Thinking in Early Childhood Education

Authors: Kalliopi Kanaki, Michael Kalogiannakis

Abstract:

Nowadays, in the digital era, the early acquisition of basic programming skills and knowledge is encouraged, as it facilitates students’ exposure to computational thinking and empowers their creativity, problem-solving skills, and cognitive development. More and more researchers and educators investigate the introduction of computational thinking in K-12, since it is expected to be a fundamental skill for everyone by the middle of the 21st century, just like reading, writing and arithmetic are at the moment. In this paper, doctoral research in progress is presented, which investigates the infusion of computational thinking into the science curriculum in early childhood education. The whole attempt aims to develop young children’s computational thinking by introducing them to the fundamental concepts of object-oriented programming in an enjoyable, yet educational framework. The backbone of the research is the digital environment PhysGramming (an abbreviation of Physical Science Programming), which provides children with the opportunity to create their own digital games, turning them from passive consumers into active creators of technology. PhysGramming deploys an innovative hybrid schema of visual and text-based programming techniques, with emphasis on object-orientation. Through PhysGramming, young students are familiarized with basic object-oriented programming concepts, such as classes, objects, and attributes, while, at the same time, getting a view of object-oriented programming syntax. Nevertheless, the most noteworthy feature of PhysGramming is that children create their own digital games within the context of physical science courses, in a way that provides familiarization with the basic principles of object-oriented programming and computational thinking, even though no specific reference is made to these principles. Attuned to the ethical guidelines of educational research, interventions were conducted in two classes of second grade. The interventions were designed with respect to the thematic units of the curriculum of physical science courses, as a part of the learning activities of the class. PhysGramming was integrated into the classroom after short introductory sessions. During the interventions, children aged 6-7 years worked in pairs on computers and created their own digital games (group games, matching games, and puzzles). The authors participated in these interventions as observers in order to achieve a realistic evaluation of the proposed educational framework concerning its applicability in the classroom and its educational and pedagogical perspectives. To better examine whether the objectives of the research are met, the investigation focused on six criteria: the educational value of PhysGramming, its engaging and enjoyable characteristics, its child-friendliness, its appropriateness for the proposed purpose, its ability to monitor the user’s progress, and its individualizing features. In this paper, the functionality of PhysGramming and the philosophy of its integration into the classroom are both described in detail. Information about the implemented interventions and the results obtained is also provided. Finally, several limitations of the research conducted that deserve attention are noted.
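
The toy example below (not PhysGramming code) illustrates the class/object/attribute vocabulary that the abstract says young students encounter while building matching games.

```python
# Toy illustration of classes, objects and attributes.
class GameCard:
    """A class: the blueprint for every card in a matching game."""

    def __init__(self, name, image_file):
        # Attributes: data stored inside each object.
        self.name = name
        self.image_file = image_file
        self.face_up = False

    def flip(self):
        """Behaviour shared by all objects of the class."""
        self.face_up = not self.face_up


# Objects: concrete instances created from the class.
sun = GameCard("sun", "sun.png")
moon = GameCard("moon", "moon.png")
sun.flip()
print(sun.name, sun.face_up, moon.face_up)   # sun True False
```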

Keywords: computational thinking, early childhood education, object-oriented programming, physical science courses

Procedia PDF Downloads 98
1746 Simulating Drilling Using a CAD System

Authors: Panagiotis Kyratsis, Konstantinos Kakoulis

Abstract:

Nowadays, the rapid development of CAD systems’ programming environments has resulted in the creation of multiple downstream applications, which are increasingly available. CAD-based manufacturing simulation is gradually following the same trend. Drilling is the most popular hole-making process used in a variety of industries. A specially built piece of software that deals with the drilling kinematics is presented. The cutting forces are calculated based on the tool geometry, the cutting conditions and the tool/workpiece materials. The results are verified by experimental work. Finally, the response surface methodology (RSM) is applied, and mathematical models of the total thrust force and of the thrust force developed by the main cutting edges are proposed.
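
For reference, RSM studies of this kind typically fit a second-order polynomial response surface of the following generic form; the paper's specific thrust-force models are not reproduced here.

```latex
% Generic second-order response surface model: x_i are the process parameters
% (e.g. cutting conditions), y the measured response (e.g. thrust force),
% \beta the fitted coefficients and \varepsilon the error term.
y \;=\; \beta_0 \;+\; \sum_{i=1}^{k} \beta_i x_i
\;+\; \sum_{i=1}^{k} \beta_{ii} x_i^{2}
\;+\; \sum_{i<j} \beta_{ij} x_i x_j \;+\; \varepsilon .
```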

Keywords: CAD, application programming interface, response surface methodology, drilling, RSM

Procedia PDF Downloads 440
1745 Researching Apache Hama: A Pure BSP Computing Framework

Authors: Kamran Siddique, Yangwoo Kim, Zahid Akhtar

Abstract:

In recent years, technological advancements have led to a deluge of data from distinct domains, and the development of solutions based on parallel and distributed computing still has a long way to go. That is why research and development on massive computing frameworks is continuously growing. At this stage, highlighting a potential research area along with key insights can be an asset for researchers in the field. This paper therefore explores one of the emerging distributed computing frameworks, Apache Hama, a Top Level Project under the Apache Software Foundation based on the Bulk Synchronous Parallel (BSP) model. We present an unbiased and critical examination of Apache Hama and conclude with research directions intended to assist interested researchers.
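
The sketch below illustrates the BSP superstep pattern (local computation, message exchange, barrier synchronisation) that Hama implements; it uses plain Python threads as a stand-in and does not use the Hama API.

```python
# Minimal BSP-style sketch: each peer computes locally, sends messages that
# become visible only after the barrier, then everyone starts the next superstep.
import threading

N_PEERS, N_SUPERSTEPS = 4, 3
current = [[] for _ in range(N_PEERS)]   # messages deliverable this superstep
pending = [[] for _ in range(N_PEERS)]   # messages sent during this superstep
lock = threading.Lock()

def deliver():
    """Runs once per barrier: pending messages become visible next superstep."""
    global current, pending
    current, pending = pending, [[] for _ in range(N_PEERS)]

barrier = threading.Barrier(N_PEERS, action=deliver)

def peer(rank, value):
    for _ in range(N_SUPERSTEPS):
        value += sum(current[rank])                      # 1) local computation
        with lock:
            pending[(rank + 1) % N_PEERS].append(value)  # 2) send to neighbour
        barrier.wait()                                   # 3) superstep barrier
    print(f"peer {rank} finished with value {value}")

threads = [threading.Thread(target=peer, args=(r, r + 1)) for r in range(N_PEERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```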

Keywords: apache hama, bulk synchronous parallel, BSP, distributed computing

Procedia PDF Downloads 218
1744 A Redesigned Pedagogy in Introductory Programming Reduces Failure and Withdrawal Rates by Half

Authors: Said Fares, Mary Fares

Abstract:

It is well documented that introductory computer programming courses are difficult and that failure rates are high. The aim of this project was to reduce the high failure and withdrawal rates in learning to program. This paper presents a number of changes in module organization and in the instructional delivery system for teaching CS1. Daily out-of-class help sessions and tutoring services were provided, and interactive lectures and laboratories, online resources, and timely feedback were introduced. Five years of data on 563 students in 21 sections were collected and analyzed. The primary results show that the failure and withdrawal rates were cut by more than half. Student surveys indicate a positive evaluation of the modified instructional approach, overall satisfaction with the course and, consequently, higher success and retention rates.

Keywords: failure rate, interactive learning, student engagement, CS1

Procedia PDF Downloads 281
1743 Automated Java Testing: JUnit versus AspectJ

Authors: Manish Jain, Dinesh Gopalani

Abstract:

The growing dependency of mankind on software technology increases the need for thorough testing of software applications and for automated testing techniques that support testing activities. We outline our strategy for performing various types of automated testing of Java applications using AspectJ, which has become the de facto standard for Aspect-Oriented Programming (AOP). Likewise, JUnit, a unit testing framework, is the most popular Java testing tool. In this paper, we evaluate our proposed AOP approach to automated testing against JUnit on various parameters. First we describe the similarities between the two approaches, and then we compare the two testing techniques in detail on factors such as lines of testing code, learning curve, and testing of private members. We establish that our AOP testing approach using AspectJ has several advantages and is thus more effective than JUnit.

Keywords: aspect oriented programming, AspectJ, aspects, JUnit, software testing

Procedia PDF Downloads 296
1742 Towards Developing a Self-Explanatory Scheduling System Based on a Hybrid Approach

Authors: Jian Zheng, Yoshiyasu Takahashi, Yuichi Kobayashi, Tatsuhiro Sato

Abstract:

In this study, we present a conceptual framework for developing a scheduling system that can generate self-explanatory, easy-to-understand schedules. To this end, a user interface is conceived to help planners record the factors that are considered crucial in scheduling, as well as the internal and external sources relating to such factors. A hybrid approach combining machine learning and constraint programming is developed to generate schedules and the corresponding factors, and to display them on the user interface. The effects of the proposed system on scheduling are discussed, and it is expected that scheduling efficiency and system understandability will be improved compared with previous scheduling systems.
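
The sketch below shows the kind of constraint-programming model such a system might generate, using Google OR-tools CP-SAT as an assumed solver with hypothetical jobs and a hypothetical precedence factor; it is not the authors' system.

```python
# Minimal constraint-programming sketch: three jobs on one machine, no overlap,
# one precedence "factor", minimise the makespan.
from ortools.sat.python import cp_model

durations = {"jobA": 3, "jobB": 2, "jobC": 4}   # hypothetical processing times
horizon = sum(durations.values())

model = cp_model.CpModel()
starts, ends, intervals = {}, {}, {}
for name, d in durations.items():
    starts[name] = model.NewIntVar(0, horizon, f"start_{name}")
    ends[name] = model.NewIntVar(0, horizon, f"end_{name}")
    intervals[name] = model.NewIntervalVar(starts[name], d, ends[name], f"iv_{name}")

model.AddNoOverlap(list(intervals.values()))    # the machine runs one job at a time
# Example factor considered in scheduling: jobA must finish before jobC starts.
model.Add(starts["jobC"] >= ends["jobA"])

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, list(ends.values()))
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for name in durations:
        print(name, "starts at", solver.Value(starts[name]))
```

In the hybrid approach described above, a machine learning component would supply or rank such factors, which the constraint model then enforces and reports back through the user interface.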

Keywords: constraint programming, factors considered in scheduling, machine learning, scheduling system

Procedia PDF Downloads 294
1741 Deterministic and Stochastic Modeling of a Micro-Grid Management for Optimal Power Self-Consumption

Authors: D. Calogine, O. Chau, S. Dotti, O. Ramiarinjanahary, P. Rasoavonjy, F. Tovondahiniriko

Abstract:

Mafate is a natural circus in the north-western part of Reunion Island, with no electrical grid or road network. A micro-grid concept is being experimented with in this area, composed of photovoltaic production combined with electrochemical batteries, in order to meet the local population's electricity demands through self-consumption. This work develops a discrete model as well as a stochastic model in order to reach an optimal equilibrium between production and consumption for a cluster of houses. Managing the energy flows leads to a large linearized programming system, where the time interval of interest is 24 hours. The experimental data are the solar production, the stored energy, and the parameters of the different electrical devices and batteries. The unknown variables to evaluate are the consumption of the various electrical services, the energy drawn from and stored in the batteries, and the inhabitants' planning wishes. The objective is to fit the solar production to the electrical consumption of the inhabitants, with an optimal use of the energy in the batteries, while satisfying the users' planning requirements as widely as possible. In the discrete model, the parameters and solutions of the linear programming system are deterministic scalars, whereas in the stochastic approach the data parameters and the linear programming solutions become random variables, whose distributions can be imposed or estimated from samples of real observations or from samples of optimal discrete equilibrium solutions.

Keywords: photovoltaic production, power consumption, battery storage resources, random variables, stochastic modeling, estimations of probability distributions, mixed integer linear programming, smart micro-grid, self-consumption of electricity

Procedia PDF Downloads 85
1740 Performance Evaluation of Using Genetic Programming Based Surrogate Models for Approximating Simulation of Complex Geochemical Transport Processes

Authors: Hamed K. Esfahani, Bithin Datta

Abstract:

Transport of reactive chemical contaminant species in groundwater aquifers is a complex and highly non-linear physical and geochemical process, especially in real-life scenarios. Simulating this transport process involves solving complex nonlinear equations and generally requires huge computational time for a given aquifer study area. Developing optimal remediation strategies for aquifers may require repeated solution of such complex numerical simulation models. To overcome this computational limitation and improve the feasibility of a large number of repeated simulations, genetic programming based trained surrogate models are developed to approximately simulate such complex transport processes. The transport of acid mine drainage, a hazardous pollutant, is first simulated using the numerical simulation model HYDROGEOCHEM 5.0 for a contaminated aquifer at a historic mine site. The simulation results for the illustrative contaminated aquifer site are then approximated by training and testing a genetic programming (GP) based surrogate model. Performance evaluation of the ensemble GP models as surrogates for reactive species transport in groundwater demonstrates the feasibility of their use and the associated computational advantages. The results show the efficiency and feasibility of using ensemble GP surrogate models as approximate simulators of complex hydrogeologic and geochemical processes in a contaminated groundwater aquifer, incorporating the uncertainties of the historic mine site.
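
The sketch below shows the general surrogate-training workflow on hypothetical simulator input/output pairs, using the gplearn library as a stand-in GP implementation (the paper does not specify its GP tooling), and a cheap analytic function standing in for HYDROGEOCHEM runs.

```python
# Minimal GP-surrogate sketch: fit a symbolic model to (input, output) pairs
# sampled from an expensive simulator, then use it as a cheap approximator.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 3))              # e.g. source rate, conductivity, time
y = 2.0 * X[:, 0] * X[:, 1] + np.sin(3.0 * X[:, 2])   # pretend simulator output (concentration)

surrogate = SymbolicRegressor(population_size=500, generations=20,
                              function_set=("add", "sub", "mul", "sin"),
                              random_state=0)
surrogate.fit(X[:150], y[:150])                       # train on simulator runs
print("test R^2:", surrogate.score(X[150:], y[150:])) # cheap approximate simulator
print(surrogate._program)                             # evolved symbolic expression
```

An ensemble, as described above, would train several such models on resampled training sets and aggregate their predictions to reflect uncertainty.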

Keywords: geochemical transport simulation, acid mine drainage, surrogate models, ensemble genetic programming, contaminated aquifers, mine sites

Procedia PDF Downloads 241
1739 Applying the Extreme-Based Teaching Model in Post-Secondary Online Classroom Setting: A Field Experiment

Authors: Leon Pan

Abstract:

The first programming course within post-secondary education has long been recognized as a challenging endeavor for both educators and students alike. Historically, these courses have exhibited high failure rates and a notable number of dropouts. Instructors often lament students' lack of effort in their coursework, and students often express frustration that the teaching methods employed are not effective. Drawing inspiration from the successful principles of Extreme Programming, this study introduces an approach, the extreme-based teaching model, aimed at enhancing the teaching of introductory programming courses. To empirically determine the effectiveness of the model, a comparison was made between a section taught using the extreme-based model and another utilizing traditional teaching methods. Notably, the extreme-based class required students to work collaboratively on projects while also demanding continuous assessment and performance enhancement within groups. This paper details the application of the extreme-based model within the post-secondary online classroom context and presents results that emphasize its effectiveness in advancing the teaching and learning experience. The extreme-based model led to a significant increase of 13.46 points in the weighted total average and a 10% reduction in the failure rate.

Keywords: extreme-based teaching model, innovative pedagogical methods, project-based learning, team-based learning

Procedia PDF Downloads 31
1738 Coupling Concept of Two Parallel Research Codes for Two and Three Dimensional Fluid Structure Interaction Analysis

Authors: Luciano Garelli, Marco Schauer, Jorge D’Elia, Mario A. Storti, Sabine C. Langer

Abstract:

This paper discusses a coupling strategy for two different software packages to provide fluid-structure interaction (FSI) analysis. The basic idea is to combine the advantages of the two codes to create a powerful FSI solver for two- and three-dimensional analysis. The fluid part is computed by a program called PETSc-FEM, developed at the Centro de Investigación de Métodos Computacionales (CIMEC). The structural part of the coupled process is computed by the research code elementary Parallel Solver (elPaSo) of the Technische Universität Braunschweig, Institut für Konstruktionstechnik (IK).

Keywords: computational fluid dynamics (CFD), fluid structure interaction (FSI), finite element method (FEM), software

Procedia PDF Downloads 524
1737 Optimal Scheduling for Energy Storage System Considering Reliability Constraints

Authors: Wook-Won Kim, Je-Seok Shin, Jin-O Kim

Abstract:

This paper proposes a method for the optimal scheduling of a battery energy storage system subject to a reliability constraint on the energy storage system. The optimal scheduling problem is solved by dynamic programming with a proposed transition matrix. The proposed scheduling method guarantees the minimum fuel cost within the specified reliability constraint. To evaluate the proposed method, a time-dependent capacity outage probability table (COPT) is used, calculated by convolution of the probability mass functions of the individual generators. This study presents the resulting optimal schedule of the energy storage system.
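
The sketch below illustrates how a capacity outage probability table can be built by convolving per-generator outage probability mass functions, with hypothetical unit sizes and forced outage rates; it illustrates the COPT construction only, not the authors' scheduling method.

```python
# Build a COPT by convolving each unit's two-state outage PMF (all data hypothetical).
import numpy as np

STEP = 10                                        # MW resolution of the table
units = [(100, 0.05), (100, 0.05), (50, 0.02)]   # (capacity in MW, forced outage rate)

copt = np.array([1.0])                           # P(outage = 0 MW) = 1 before adding units
for capacity, q in units:
    pmf = np.zeros(capacity // STEP + 1)
    pmf[0] = 1.0 - q                             # unit available: 0 MW on outage
    pmf[-1] = q                                  # unit forced out: full capacity on outage
    copt = np.convolve(copt, pmf)                # add the unit to the system

outage_mw = np.arange(copt.size) * STEP
cumulative = copt[::-1].cumsum()[::-1]           # P(outage >= X MW)
for mw, prob in zip(outage_mw, cumulative):
    print(f"P(outage >= {mw:3d} MW) = {prob:.6f}")
```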

Keywords: energy storage system (ESS), optimal scheduling, dynamic programming, reliability constraints

Procedia PDF Downloads 375
1736 Improved Pattern Matching Applied to Surface Mounting Devices Components Localization on Automated Optical Inspection

Authors: Pedro M. A. Vitoriano, Tito. G. Amaral

Abstract:

Automated Optical Inspection (AOI) systems are commonly used in Printed Circuit Board (PCB) manufacturing. The use of this technology has proven highly efficient for process improvement and quality achievement. The correct extraction of the component for subsequent analysis is a critical step of the AOI process. Nowadays the pattern matching algorithm is commonly used, although it requires extensive calculation and is time-consuming. This paper presents an improved algorithm for the component localization process, with the capability of implementation in a parallel execution system.
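
For context, the conventional baseline that the paper improves on, normalised cross-correlation template matching, can be sketched with OpenCV as below; the file names are hypothetical and this is not the proposed improved algorithm.

```python
# Baseline SMD component localisation by normalised cross-correlation.
import cv2

board = cv2.imread("pcb_image.png", cv2.IMREAD_GRAYSCALE)        # full PCB image
template = cv2.imread("smd_template.png", cv2.IMREAD_GRAYSCALE)  # reference component image

scores = cv2.matchTemplate(board, template, cv2.TM_CCOEFF_NORMED)
_, max_score, _, top_left = cv2.minMaxLoc(scores)                # best-scoring position

h, w = template.shape
bottom_right = (top_left[0] + w, top_left[1] + h)
print("best match at", top_left, "score", max_score)
cv2.rectangle(board, top_left, bottom_right, 255, 2)             # mark the located component
cv2.imwrite("located.png", board)
```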

Keywords: AOI, automated optical inspection, SMD, surface mounting devices, pattern matching, parallel execution

Procedia PDF Downloads 277
1735 Control of Chaotic Behaviour in Parallel-Connected DC-DC Buck-Boost Converters

Authors: Ammar Nimer Natsheh

Abstract:

Chaos control is used to design a controller that is able to eliminate the chaotic behaviour of nonlinear dynamic systems that experience such phenomena. This paper describes the control of the bifurcation behaviour of a parallel-connected DC-DC buck-boost converter used to provide an interface between energy storage batteries and photovoltaic (PV) arrays as renewable energy sources. The paper presents a delayed feedback control scheme for a module converter that comprises two identical buck-boost circuits and operates in the continuous-current conduction mode (CCM). MATLAB/SIMULINK simulation results show the effectiveness and robustness of the scheme.
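
For reference, delayed feedback control is generally of the following (Pyragas-type) form; this is the generic control law, not the paper's specific converter implementation.

```latex
% Generic delayed feedback control law: the correction is built from the
% difference between the delayed and the current state, with gain K and delay
% \tau chosen near the period of the targeted orbit, so that u(t) vanishes
% once the periodic orbit is stabilised.
u(t) \;=\; K\left[\,x(t-\tau) \;-\; x(t)\,\right].
```

In a converter of this type, such a term would typically perturb the switching duty cycle; the specific implementation used in the paper is not reproduced here.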

Keywords: chaos, bifurcation, DC-DC Buck-Boost Converter, Delayed Feedback Control

Procedia PDF Downloads 399
1734 Decomposition-Based Pricing Technique for Solving Large-Scale Mixed IP

Authors: M. Babul Hasan

Abstract:

Management science (MS) problems, the operations of big groups of companies and industries, and government policies (GP) involve a huge number of decision ingredients and complicated restrictions. Not every factor in MS, product in industry, or decision in GP is bankable in practice. Formulating these models leads to large-scale mixed integer programming (MIP) problems. In this paper, we develop a decomposition-based pricing procedure to filter the unnecessary decision ingredients from the MIP, so that the huge number of variables is reduced and the complicated restrictions become elementary. A real-life numerical example is presented to demonstrate the method. The computational techniques for these methods are implemented using the mathematical programming language AMPL.
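
For reference, pricing and decomposition schemes of this kind build on the standard Lagrangian-relaxation bound sketched below; the notation is generic and not the paper's exact master/sub-problem split.

```latex
% Standard Lagrangian relaxation for a minimisation MIP with complicating
% constraints Ax >= b relaxed using multipliers \lambda >= 0: the relaxed value
% bounds the MIP optimum from below for every choice of \lambda.
z_{LR}(\lambda) \;=\; \min_{x \in X}\; c^{\top}x + \lambda^{\top}\!\left(b - A x\right),
\qquad z_{LR}(\lambda) \;\le\; z_{MIP} \quad \text{for all } \lambda \ge 0 .
```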

Keywords: Lagrangian relaxation, decomposition, sub-problem, master-problem, pricing, mixed IP, AMPL

Procedia PDF Downloads 472
1733 Acceleration of Lagrangian and Eulerian Flow Solvers via Graphics Processing Units

Authors: Pooya Niksiar, Ali Ashrafizadeh, Mehrzad Shams, Amir Hossein Madani

Abstract:

There are many computationally demanding applications in science and engineering that need efficient algorithms implemented on high-performance computers. Recently, Graphics Processing Units (GPUs) have drawn much attention compared with traditional CPU-based hardware and have opened up new improvement venues in scientific computing. One particular application area is Computational Fluid Dynamics (CFD), in which mature CPU-based codes need to be converted to GPU-based algorithms to take advantage of this new technology. In this paper, numerical solutions of two classes of discrete fluid flow models on both CPU and GPU are discussed and compared. Test problems include an Eulerian model of a two-dimensional incompressible laminar flow case and a Lagrangian model of a two-phase flow field. The CUDA programming standard is used to employ an NVIDIA GPU with 480 cores, and a serial C++ code is run on a single core of an Intel quad-core CPU. Up to two orders of magnitude speed-up is observed on the GPU for a certain range of grid resolutions or particle numbers. As expected, the Lagrangian formulation is better suited to parallel computation on the GPU, although the Eulerian formulation also shows a significant speed-up.
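
The sketch below illustrates the basic GPU-offload pattern discussed above from Python using Numba's CUDA support (a stand-in for the paper's C++/CUDA codes); it assumes a CUDA-capable GPU and the numba package are available.

```python
# Minimal GPU-offload sketch: an element-wise a*x + y kernel launched over a grid
# of threads, with explicit host-to-device and device-to-host transfers.
import numpy as np
from numba import cuda

@cuda.jit
def axpy(a, x, y, out):
    i = cuda.grid(1)                  # global thread index
    if i < x.size:                    # guard against out-of-range threads
        out[i] = a * x[i] + y[i]

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
d_x, d_y = cuda.to_device(x), cuda.to_device(y)
d_out = cuda.device_array_like(x)

threads = 256
blocks = (n + threads - 1) // threads
axpy[blocks, threads](np.float32(2.0), d_x, d_y, d_out)   # kernel launch
result = d_out.copy_to_host()
print(np.allclose(result, 2.0 * x + y))
```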

Keywords: CFD, Eulerian formulation, graphics processing units, Lagrangian formulation

Procedia PDF Downloads 377
1732 A Study of Using Multiple Subproblems in Dantzig-Wolfe Decomposition of Linear Programming

Authors: William Chung

Abstract:

This paper studies the use of multiple subproblems in Dantzig-Wolfe decomposition of linear programming (DW-LP). Traditionally, the decomposed LP consists of one LP master problem and one LP subproblem. The master problem and the subproblem are solved alternately, exchanging the dual prices of the master problem and the proposals of the subproblem, until the LP is solved. It is well known that convergence is slow, with a long tail of near-optimal solutions (asymptotic convergence). Hence, the performance of DW-LP depends heavily on the number of decomposition steps: if the number of steps can be greatly reduced, the performance of DW-LP can be improved significantly. One way to reduce the number of decomposition steps is to increase the number of proposals passed from the subproblem to the master problem. To do so, we propose to add a quadratic approximation function to the LP subproblem in order to develop a set of approximate-LP subproblems (multiple subproblems). Consequently, in each decomposition step, multiple subproblems are solved to provide multiple proposals to the master problem, and the number of decomposition steps can be greatly reduced. Note that each approximate-LP subproblem is a nonlinear program, and solving the single LP subproblem is faster than solving the nonlinear multiple subproblems. Hence, using multiple subproblems in DW-LP involves a tradeoff between the number of approximate-LP subproblems formed and the number of decomposition steps. In this paper, we derive the corresponding algorithms and provide some simple computational results. Some properties of the resulting algorithms are also given.
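
For reference, the classical Dantzig-Wolfe pricing step that generates the proposals can be sketched as follows; this is the standard single-subproblem form, not the quadratic-approximation variant proposed in the paper.

```latex
% Generic DW pricing step: given master duals \pi (coupling constraints) and
% \mu_k (convexity row of block k), block k's subproblem searches its own
% polyhedron X_k for a proposal v with negative reduced cost.
\bar{c}_k \;=\; \min_{v \in X_k}\; \left(c_k^{\top} - \pi^{\top} A_k\right) v \;-\; \mu_k .
```

A new column (proposal) is added to the restricted master whenever the reduced cost is negative; the idea described above is to obtain several such proposals per step from a family of approximate subproblems.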

Keywords: approximate subproblem, Dantzig-Wolfe decomposition, large-scale models, multiple subproblems

Procedia PDF Downloads 132
1731 Inventory Management System of Seasonal Raw Materials of Feeds at San Jose Batangas through Integer Linear Programming and VBA

Authors: Glenda Marie D. Balitaan

Abstract:

The branch of business management that deals with inventory planning and control is known as inventory management. It comprises keeping track of supply levels and forecasting demand, as well as scheduling when and how to order. Keeping excess inventory results in a loss of money, takes up physical space, and raises the risk of damage, spoilage, and loss. On the other hand, too little inventory frequently disrupts operations and raises the possibility of low customer satisfaction, both of which can be detrimental to a company's reputation. The United Victorious Feed Mill Corporation's present inventory management practices were assessed in terms of inventory level, warehouse allocation, ordering frequency, shelf life, and production requirements. To help the company achieve its optimal level of inventory, a mathematical model was created using integer linear programming. Because the raw materials are seasonal, the objective function was to minimize the cost of purchasing US soya and yellow corn, while warehouse space, annual production requirements, and shelf life were all considered. To ensure that the user only needs one application to record all relevant information, such as production output and deliveries, the researcher built a Visual Basic system. Additionally, the system allows management to change the model's parameters.

Keywords: inventory management, integer linear programming, inventory management system, feed mill

Procedia PDF Downloads 56
1730 The Comparative Study of Binary Artifact Repository Managers

Authors: Evgeny Chugunnyy, Alena Gerasimova, Kirill Chernyavskiy, Alexander Krasnov

Abstract:

One of the primary components of continuous deployment (CD) is a binary artifact repository: the place where artifacts are stored with metadata in a structured way. A binary artifact repository manager (BARM) is software that implements this repository logic and exposes a public application programming interface (API) for managing these artifacts. Almost every programming language ecosystem has its own kind of artifact repository. While creating Artipie, a BARM constructor and server, we analyzed and implemented many different artifact repositories. In this paper we present criteria for comparing artifact repositories and analyze the most popular repositories using these metrics. We also describe some of the notable features of different repositories. This paper is aimed at helping people who are creating, maintaining, or optimizing software repositories and CI tools.

Keywords: artifact, repository, continuous deployment, build automation, artifacts management

Procedia PDF Downloads 112
1729 Analysis of Slip Flow Heat Transfer between Asymmetrically Heated Parallel Plates

Authors: Hari Mohan Kushwaha, Santosh Kumar Sahu

Abstract:

In the present study, an analysis of heat transfer is carried out in the slip flow regime for fluid flowing between two parallel plates with asymmetric heat fluxes applied at the plate surfaces. The flow is assumed to be hydrodynamically and thermally fully developed. Second-order velocity slip and viscous dissipation effects are considered in the analysis. Closed-form expressions are obtained for the Nusselt number as a function of the Knudsen number and the modified Brinkman number. The limiting case of the present prediction for Kn = 0, Kn² = 0, and Br_q1 = 0 is considered and found to agree well with other analytical results.
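
For reference, second-order velocity slip is generally imposed through a wall boundary condition of the following generic form; coefficient values and sign conventions vary between models, and the paper's specific choice is not reproduced here.

```latex
% Generic second-order velocity-slip boundary condition: u_s is the slip
% velocity at the wall, u_w the wall velocity, \lambda the molecular mean free
% path, n the wall-normal coordinate, and C_1, C_2 model-dependent coefficients.
u_s - u_w \;=\; C_1\,\lambda \left.\frac{\partial u}{\partial n}\right|_{w}
\;-\; C_2\,\lambda^{2} \left.\frac{\partial^{2} u}{\partial n^{2}}\right|_{w}.
```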

Keywords: Knudsen number, modified Brinkman number, slip flow, velocity slip

Procedia PDF Downloads 350