Search results for: Parallel processes
1325 Determining Fire Resistance of Wooden Construction Elements through Experimental Studies and Artificial Neural Network
Authors: Sakir Tasdemir, Mustafa Altin, Gamze Fahriye Pehlivan, Ismail Saritas, Sadiye Didem Boztepe Erkis, Selma Tasdemir
Abstract:
Artificial intelligence applications are commonly used in many fields of industry, in parallel with developments in computer technology. In this study, a fire room was prepared for testing the resistance of wooden construction elements, and experiments on polished (fire-retardant-treated) materials were carried out with this setup. Using the experimental data, an artificial neural network (ANN) was modelled to estimate the final cross-sections of the wooden samples remaining after the fire. In the modelling, experimental data obtained from the fire room were used. In the developed system, the initial weight of the samples (ws, g), preliminary cross-section (pcs, mm²), fire time (ft, min), and fire temperature (t, °C) were taken as input parameters, and the final cross-section (fcs, mm²) as the output parameter. Statistical analyses comparing the ANN results with the experimental data showed that the two groups are coherent, with no significant difference between them. As a result, an ANN can be safely used to determine the cross-sections of wooden materials after a fire, avoiding many of the disadvantages of repeated experimental testing.
Keywords: Artificial neural network, final cross-section, fire retardant polishes, fire safety, wood resistance.
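As a rough illustration of the model described in this abstract (four inputs, one output), the sketch below uses scikit-learn; the data arrays, network size, and training settings are placeholders of our own, not the authors' setup.

```python
# Minimal sketch of a 4-input / 1-output ANN regressor, assuming scikit-learn.
# X and y are random placeholders standing in for the fire-room measurements.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((100, 4))   # columns: ws (g), pcs (mm^2), ft (min), t (deg C)
y = rng.random(100)        # final cross-section fcs (mm^2), placeholder

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```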
1324 Reliability Indices Evaluation of SEIG Rotor Core Magnetization with Minimum Capacitive Excitation for WECs
Authors: Lokesh Varshney, R. K. Saket
Abstract:
This paper presents a reliability-indices evaluation of rotor core magnetization for an induction motor operated as a self-excited induction generator (SEIG), using a probability distribution approach and Monte Carlo simulation. During the experimental study, parallel capacitors of the calculated minimum capacitance were connected across the terminals of the induction motor operated as a SEIG with unregulated shaft speed. A three-phase, 4-pole, 50 Hz, 5.5 hp, 12.3 A, 230 V induction motor coupled with a DC shunt motor was tested in the electrical machines laboratory with variable reactive loads. Based on this experimental study, a reliable induction machine can be chosen to operate as a SEIG for unregulated renewable energy applications in remote areas or where the grid is not available. The failure density function, cumulative failure distribution function, survivor function, hazard model, probability of success, and probability of failure for the reliability evaluation of the three-phase induction motor operating as a SEIG are presented graphically in this paper.
Keywords: Residual magnetism, magnetization curve, induction motor, self excited induction generator, probability distribution, Monte Carlo simulation.
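The reliability indices named above can be estimated by Monte Carlo sampling along the following lines; the Weibull failure-time model and its parameters here are assumptions for illustration, not the authors' fitted distribution.

```python
# Hedged Monte Carlo sketch of the reliability indices listed in the abstract.
# The Weibull failure-time model below is an assumed stand-in distribution.
import numpy as np

rng = np.random.default_rng(0)
t_fail = rng.weibull(2.0, size=100_000) * 10.0   # assumed failure times (years)
t = np.linspace(0.1, 20, 200)

# Empirical cumulative failure distribution F(t) and survivor function R(t)
F = np.array([(t_fail <= ti).mean() for ti in t])
R = 1.0 - F                                       # probability of success
f = np.gradient(F, t)                             # failure density function
h = f / np.clip(R, 1e-12, None)                   # hazard model h(t) = f/R

print(f"P(survive 5 years) ~ {R[np.searchsorted(t, 5.0)]:.3f}")
```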
1323 Internal Accounting Controls
Authors: Alireza Azimi Sani, Shahram Chaharmahalie
Abstract:
Internal accounting controls are an essential business function for a growth-oriented organization, and include the elements of risk assessment, information communication, and even employees' roles and responsibilities. Internal controls of accounting systems are designed to protect a company from fraud, abuse, and inaccurate data recording, and help organizations keep track of essential financial activities. They provide a streamlined solution for organizing all accounting procedures and ensuring that the accounting cycle is completed consistently and successfully. Implementing a formal accounting procedures manual for the organization allows the financial department to facilitate several processes and maintain rigorous standards. Internal controls also allow organizations to keep detailed records, manage and organize important financial transactions, and set a high standard for the organization's financial management structure and protocols. A well-implemented controls system reduces the risk of accounting errors and abuse, and allows a company's financial managers to regulate and streamline all functions of the accounting department. Internal accounting controls can be set up for every area to track deposits, monitor check handling, keep track of creditor accounts, and even assess budgets and financial statements on an ongoing basis. Setting up an effective accounting system to monitor accounting reports, analyze records, and protect sensitive financial information also helps a company set clear goals and make accurate projections. Creating efficient accounting processes allows an organization to set specific policies and protocols for accounting procedures and to reach its financial objectives on a regular basis. Internal accounting controls can help track areas such as cash-receipt recording, payroll management, appropriate recording of grants and gifts, cash disbursements by authorized personnel, and the recording of assets. These systems can also take into account any government regulations and requirements for financial reporting.
Keywords: Internal controls, risk assessment, financial management.
1322 Machine Learning for Music Aesthetic Annotation Using MIDI Format: A Harmony-Based Classification Approach
Authors: Lin Yang, Zhian Mi, Jiacheng Xiao, Rong Li
Abstract:
Riding the tide of deep learning, the field of music information retrieval (MIR) has developed in parallel, and a wide variety of feature-learning models have been applied to music classification and tagging tasks. Among these techniques, deep convolutional neural networks (CNNs) have been widely used and outperform traditional approaches, especially in music genre classification and prediction. In music recommendation, however, there is a large semantic gap between the audio genres and the various aspects of a song that influence user preference. In our study, aiming to bridge this gap, we construct an automatic music aesthetic annotation model based on the MIDI format, for better comparison and measurement of the similarity between music pieces by way of harmonic analysis. We use a qualification matrix converted from MIDI files as input to train two different classifiers, a support vector machine (SVM) and a decision tree (DT). Experimental results on a tag prediction task show that both learning algorithms are capable of extracting high-level properties from music information in an end-to-end manner. The proposed model helps learn audience taste, so the resulting recommendations are likely to appeal to niche consumers.
Keywords: Harmonic analysis, machine learning, music classification and tagging, MIDI.
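A minimal sketch of the two classifiers named in the abstract, assuming scikit-learn; the feature matrix X is a random placeholder standing in for the qualification matrix derived from MIDI files.

```python
# Hedged sketch of the SVM and decision-tree classifiers from the abstract.
# X and y are random placeholders, not real harmonic features or tags.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 24))                 # placeholder harmonic features
y = rng.integers(0, 2, size=200)          # placeholder aesthetic tags

for clf in (SVC(kernel="rbf"), DecisionTreeClassifier(max_depth=5)):
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(clf).__name__, "accuracy: %.2f +/- %.2f"
          % (scores.mean(), scores.std()))
```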
1321 A Hybrid Feature Selection and Deep Learning Algorithm for Cancer Disease Classification
Authors: Niousha Bagheri Khulenjani, Mohammad Saniee Abadeh
Abstract:
Learning from very large datasets is a significant challenge for most current data mining and machine learning algorithms. MicroRNA (miRNA) data form one of the important large genomic, non-coding datasets representing genome sequences. In this paper, a hybrid method for the classification of miRNA data is proposed. Due to the variety of cancers and the high number of genes, analyzing miRNA datasets has been a challenging problem for researchers. The number of features is high relative to the number of samples, and the data suffer from class imbalance. A feature selection method is used to select the features best able to distinguish classes and to eliminate obscuring features. Afterward, a convolutional neural network (CNN) classifier is used for classifying cancer types, employing a genetic algorithm to find optimized CNN hyper-parameters. To make CNN classification faster, a graphics processing unit (GPU) is recommended for performing the computations in parallel. The proposed method is tested on a real-world dataset with 8,129 patients, 29 different types of tumors, and 1,046 miRNA biomarkers, taken from The Cancer Genome Atlas (TCGA) database.
Keywords: Cancer classification, feature selection, deep learning, genetic algorithm.
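A hedged sketch of the genetic-algorithm hyper-parameter search described above; evaluate() is a stand-in for training the CNN and returning validation accuracy, and the gene ranges are illustrative, not the authors' search space.

```python
# Hedged sketch of a genetic algorithm searching CNN hyper-parameters.
import random

random.seed(0)
SPACE = {"filters": [16, 32, 64], "kernel": [3, 5, 7], "lr": [1e-2, 1e-3, 1e-4]}

def evaluate(genome):
    # Placeholder fitness; in the real method this would train the CNN on
    # the selected miRNA features and return validation accuracy.
    return -abs(genome["filters"] - 32) - genome["kernel"]

def mutate(genome):
    child = dict(genome)
    gene = random.choice(list(SPACE))
    child[gene] = random.choice(SPACE[gene])
    return child

population = [{k: random.choice(v) for k, v in SPACE.items()} for _ in range(8)]
for _ in range(10):                                   # generations
    population.sort(key=evaluate, reverse=True)       # rank by fitness
    parents = population[:4]                          # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(4)]
best = max(population, key=evaluate)
print("best hyper-parameters:", best)
```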
1320 Analytical and Numerical Approaches in Coagulation of Particles
Authors: Bilal Barakeh
Abstract:
In this paper, we first discuss the effect of an unbounded particle interaction operator on particle growth and study how this informs the choice of appropriate time steps for the numerical simulation. We also provide rigorous mathematical proofs showing that large particles become dominant with increasing time while small particles contribute negligibly. Second, we assess the efficiency of the algorithm by performing numerical simulation tests and by comparing the simulated solutions with known analytic solutions of the Smoluchowski equation.
Keywords: Stochastic processes, coagulation of particles, numerical scheme.
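For reference, the discrete Smoluchowski coagulation equation in its standard textbook form, where K(i, j) is the coagulation kernel (the particle interaction operator); the paper's variant may differ in detail:

```latex
% Discrete Smoluchowski coagulation equation (standard form).
% n_k(t): concentration of size-k particles; K(i,j): coagulation kernel.
\frac{\mathrm{d}n_k}{\mathrm{d}t}
  = \frac{1}{2}\sum_{i+j=k} K(i,j)\, n_i n_j
  \;-\; n_k \sum_{j\ge 1} K(k,j)\, n_j
```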
1319 The Techno-Economic and Environmental Assessments of Grid-Connected Photovoltaic Systems in Bhubaneswar, India
Authors: A. K. Pradhan, M. K. Mohanty, S. K. Kar
Abstract:
Power system utilities have started to consider green power technologies in order to promote an eco-friendly environment. Green power technologies utilize renewable energy sources to reduce GHG emissions. The state of Odisha (India) is very rich in renewable energy potential, especially solar energy (about 300 sunny days per year), and is well suited for the installation of grid-connected photovoltaic systems. This paper focuses on the utilization of photovoltaic systems in an institute building in Bhubaneswar city, Odisha. Data such as solar insolation (kWh/m²/day) and sunshine duration were collected from meteorological stations for Bhubaneswar. The required electrical power and cost are calculated for a daily load of 1.0 kW. The HOMER (Hybrid Optimization Model for Electric Renewables) software is used to estimate the system size and analyze its performance. The simulation results show that the cost of energy (COE) is $0.194/kWh, the operating cost is $63/yr, and the net present cost (NPC) is $3,917. The energy produced from the PV array is 1,756 kWh/yr and the energy purchased from the grid is 410 kWh/yr. The AC primary load consumption is 1,314 kWh/yr and the grid sales are 746 kWh/yr. One battery is connected in parallel with the 12 V DC bus, with a usable nominal capacity of 2.4 kWh and 9.6 h of autonomy.
Keywords: Economic assessment, HOMER, Optimization, Photovoltaic (PV), Renewable energy.
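The relation between the quoted NPC and COE can be sketched as below; the discount rate and project lifetime are assumptions of ours (the abstract does not state them), so the computed COE only approximates the quoted $0.194/kWh.

```python
# Hedged arithmetic sketch relating HOMER's NPC and COE outputs.
# The discount rate and lifetime are assumed values, not the study's inputs.
nominal_rate, lifetime_years = 0.06, 25          # assumed project economics

def crf(i, n):
    """Capital recovery factor converting NPC to an annualized cost."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

npc = 3917.0                                     # net present cost ($)
energy_served = 1314.0 + 746.0                   # AC load + grid sales (kWh/yr)
annualized_cost = npc * crf(nominal_rate, lifetime_years)
print(f"annualized cost ~ ${annualized_cost:.0f}/yr")
# Compare with the quoted $0.194/kWh; agreement depends on the assumed
# rate and lifetime.
print(f"COE ~ ${annualized_cost / energy_served:.3f}/kWh")
```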
1318 A Practical Approach for Testing the Process Quality
Authors: Mou-Yuan Liao, Chien-Wei Wu, Chien-Hua Lin
Abstract:
The process capability index Cpk is the most widely used index in managerial decision-making, since it provides bounds on the process yield for normally distributed processes. However, existing methods for assessing process performance that are built on statistical inference may be unreliable, because uncertainties exist in most real-world applications. Thus, this study adopts fuzzy inference for testing Cpk. A concise score is obtained for assessing a supplier's process, instead of a strict binary accept/reject evaluation.
Keywords: Process capability analysis, quality control.
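For reference, a minimal sketch of the crisp Cpk index that the study wraps in fuzzy inference; the specification limits and sample data are illustrative assumptions.

```python
# Crisp process capability index Cpk; data and limits are illustrative.
import numpy as np

def cpk(samples, lsl, usl):
    """Cpk = min((USL - mean)/(3*sigma), (mean - LSL)/(3*sigma))."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

rng = np.random.default_rng(1)
x = rng.normal(loc=10.02, scale=0.05, size=50)     # simulated measurements
print(f"Cpk = {cpk(x, lsl=9.85, usl=10.15):.2f}")  # >= 1.33 is a common target
```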
1317 A Paradigm Shift towards Personalized and Scalable Product Development and Lifecycle Management Systems in the Aerospace Industry
Authors: David E. Culler, Noah D. Anderson
Abstract:
Integrated systems for product design, manufacturing, and lifecycle management are difficult to implement and customize. Commercial software vendors, including CAD/CAM and third-party PDM/PLM developers, create user interfaces and functionality that allow their products to be applied across many industries. The result is that systems become overloaded with functionality, difficult to navigate, and use terminology that is unfamiliar to engineers and production personnel. For example, manufacturers of automotive, aeronautical, electronics, and household products use similar but distinct methods and processes. Furthermore, each company tends to have its own preferred tools and programs for controlling work and information flow and for connecting design, planning, and manufacturing processes to business applications. This paper presents a methodology and a case study that address these issues, and suggests that in the future more companies will develop personalized applications that fit the natural way their business operates. A functioning system has been implemented at a highly competitive U.S. aerospace tooling and component supplier that works with many prominent aircraft manufacturers around the world, including The Boeing Company, Airbus, Embraer, and Bombardier Aerospace. During the last three years, the program has produced significant benefits, such as the automatic creation and management of component and assembly designs (parametric models and drawings), the extensive use of lightweight 3D data, and changes to the way projects are executed from beginning to end. CATIA (CAD/CAE/CAM) and a variety of programs developed in C#, VB.Net, HTML, and SQL make up the current system. The web-based platform facilitates collaborative work across multiple sites around the world and improves communications with customers and suppliers. This work demonstrates that the creative use of Application Programming Interface (API) utilities, libraries, and methods is a key to automating many time-consuming tasks and linking applications together.
Keywords: CAD/CAM, CAPP, PDM, PLM, Scalable Systems.
1316 Absolute Cross Sections of Multi-Photon Ionization of Xenon by the Comparison with Process of its Electron-Impact Ionization
Authors: A. A. Mityureva, A. A. Pastor, P. Yu. Serdobintsev, N. A. Timofeev
Abstract:
The comparison of electron- and photon-impact processes as a method for determining photo-ionization cross sections is described and discussed, and is shown to have many attractive features.
Keywords: Transition probability, cross section, photo-ionization, electron-ionization, multi-photon process.
1315 Development of EN338 (2009) Strength Classes for Some Common Nigerian Timber Species Using Three Point Bending Test
Authors: Abubakar Idris, Nabade Abdullahi Muhammad
Abstract:
This work presents the development of EN 338 strength classes for the Nigerian timber species Strombosia pustulata, Pterygota macrocarpa, Nauclea diderrichii, and Entandrophragma cylindricum. The specimens for the experimental measurements were obtained from the timber shed at the famous Panteka market in Kaduna, in the northern part of Nigeria. Laboratory experiments were conducted to determine the physical and mechanical properties of the selected timber species in accordance with EN 13183-1 and ASTM D193. The mechanical properties were determined using the three-point bending test. The generated properties were used to obtain the characteristic values of the material properties in accordance with EN 384, and the selected timber species were then classified according to EN 338: Strombosia pustulata, Pterygota macrocarpa, Nauclea diderrichii, and Entandrophragma cylindricum were assigned to strength classes D40, C14, D40, and D24, respectively. Other properties, such as tensile and compressive strengths parallel and perpendicular to the grain, shear strength, and shear modulus, were obtained in accordance with EN 338.
Keywords: Mechanical properties, Nigerian timber, strength classes, three-point bending test.
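The three-point bending strength (modulus of rupture) follows the standard formula sigma = 3FL/(2bh^2); the sketch below applies it to invented specimen dimensions and failure load, not the paper's measurements.

```python
# Modulus of rupture from a three-point bending test (standard formula);
# the failure load and specimen dimensions are illustrative assumptions.
def modulus_of_rupture(force_n, span_mm, width_mm, depth_mm):
    """Peak bending stress (MPa) for a centre-loaded simply supported beam."""
    return 3.0 * force_n * span_mm / (2.0 * width_mm * depth_mm ** 2)

f_max = 1600.0                 # failure load (N), illustrative
L, b, h = 300.0, 20.0, 20.0    # span, width, depth (mm), illustrative
print(f"MOR = {modulus_of_rupture(f_max, L, b, h):.1f} MPa")
```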
1314 Radio and Television Supreme Council as a Regulatory Board
Authors: Sevil Yildiz
Abstract:
Broadcasting has changed rapidly in parallel with changes in the wider world, and has been influenced and reshaped by the emergence of new communication technologies. These developments have had many economic and social consequences, the most important of which concern the power of governments to control the means of communication and the control mechanisms devised for newly emerging issues. For this purpose, autonomous and independent regulatory bodies have been established by the state. One of these regulatory bodies is the Radio and Television Supreme Council, established in 1994 by Code No. 3984. Today's Radio and Television Supreme Council, which is responsible for regulating radio and television broadcasts across Turkey, holds an important and effective position as an autonomous and independent regulatory body. On the one hand, the Council acts as a notable organizer of the sensitive area of radio and television broadcasting; on the other, as one of the central organs of media policy, it sets principles for the functioning of broadcasting control within a democratic, liberal framework that keeps the concept of the public interest in mind. In this study, the role of the Radio and Television Supreme Council in controlling communication and its control mechanisms is examined in accordance with Code No. 3984, together with the changes to its duties introduced by Code No. 6112 of 2011.
Keywords: Regulatory Boards, Radio and Television Supreme Council.
1313 Fast Wavelet Image Denoising Based on Local Variance and Edge Analysis
Authors: Gaoyong Luo
Abstract:
The wavelet transform approach has been widely used for image denoising due to its multi-resolution nature, its ability to produce high levels of noise reduction, and the low level of distortion it introduces. However, in removing noise, high-frequency components belonging to edges are also removed, which blurs signal features. This paper proposes a new method of image noise reduction based on local variance and edge analysis. The analysis is performed by dividing an image into 32 x 32 pixel blocks and transforming the data into the wavelet domain. A fast lifting-wavelet spatial-frequency decomposition and reconstruction is developed, with the advantages of being computationally efficient and minimizing boundary effects. Adaptive thresholding based on local variance estimation and edge strength measurement can effectively reduce image noise while preserving the features of the original image corresponding to object boundaries. Experimental results demonstrate that the method performs well for images contaminated by natural and artificial noise, and can be adapted to different classes of images and types of noise. The proposed algorithm offers a potential solution, with parallel computation, for real-time or embedded system applications.
Keywords: Edge strength, fast lifting wavelet, image denoising, local variance.
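A hedged sketch of block-wise wavelet soft-thresholding in the spirit of this abstract, assuming the PyWavelets library; the db2 wavelet and the universal threshold are common defaults of ours, not the paper's lifting scheme or local-variance/edge-adaptive rule.

```python
# Hedged sketch of wavelet soft-thresholding on a 32x32 block using PyWavelets.
import numpy as np
import pywt

rng = np.random.default_rng(0)
image = rng.random((32, 32))                    # placeholder 32x32 block
noisy = image + rng.normal(scale=0.1, size=image.shape)

coeffs = pywt.wavedec2(noisy, "db2", level=2)   # two-level 2D decomposition
sigma = 0.1                                     # assumed known noise level
thr = sigma * np.sqrt(2 * np.log(noisy.size))   # universal threshold
denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(c, thr, mode="soft") for c in level)
    for level in coeffs[1:]
]
denoised = pywt.waverec2(denoised_coeffs, "db2")
print("residual RMS:", float(np.sqrt(np.mean((denoised - image) ** 2))))
```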
1312 Comparison of Welding Fumes Exposure during Standing and Sitting Welder's Position
Authors: Azian Hariri, M. Z. M. Yusof, A. M. Leman
Abstract:
An experimental study was conducted to assess welders' personal exposure to welding fumes during an aluminum metal inert gas (MIG) process. The welding was carried out by a welding machine attached to a Computer Numerical Control (CNC) workbench. A dummy welder was used to replicate a welder during welding work and was fitted with sampling pumps and filter cassettes for welding fume sampling. Direct-reading instruments measuring air velocity, humidity, temperature, and particulate matter with a diameter of 10 µm or less (PM10) were located behind the dummy welder, parallel to the neck collar level, to ensure that the measured fume exposure was not influenced by other factors. Welding fume exposure during the standing and sitting positions, with and without local exhaust ventilation (LEV), was investigated. Welding fume samples were then digested and analyzed by inductively coupled plasma mass spectrometry (ICP-MS) according to the ASTM D7439-08 method. The results show that welding fume exposure was lower in the sitting position than in the standing position. LEV helped reduce aluminum and lead exposure to acceptable levels in the standing position; in the sitting position, however, the reduction in exposure was smaller. It can be concluded that the welder's position, together with correct positioning of the LEV, should be considered for effective exposure reduction.
Keywords: ICP-MS, MIG process, personal sampling, welding fumes exposure.
1311 Verification and Proposal of Information Processing Model Using EEG-Based Brain Activity Monitoring
Authors: Toshitaka Higashino, Naoki Wakamiya
Abstract:
Human beings perform a task by perceiving information from outside, recognizing it, and responding to it. There have been various attempts to analyze and understand the internal processes behind the reaction to a given stimulus by conducting psychological experiments and analyses from multiple perspectives. Among these, we focused on the Model Human Processor (MHP). However, because it was built on psychological experiments, its relation to brain activity has so far been unclear. To verify the validity of the MHP and to propose our own model from the viewpoint of neuroscience, EEG (electroencephalography) measurements were performed during the experiments in this study. More specifically, experiments were first conducted in which Latin alphabet characters were used as visual stimuli. In addition to response time, ERPs (event-related potentials) such as the N100 and P300 were measured by EEG. By comparing the cycle times predicted by the MHP with the latencies of the ERPs, it was found that the N100, related to the perception of stimuli, appeared at the end of the perceptual processor. Furthermore, an additional experiment revealed that the P300, related to decision making, appeared during the response decision process, not at its end. Second, these findings were confirmed by experiments using Japanese Hiragana characters, i.e., Japan's own phonetic symbols. Finally, Japanese Kanji characters were used as more complicated visual stimuli. A Kanji character usually has several readings and several meanings; despite this difference, a reading-related task and a meaning-related task exhibited similar results, meaning that they involve similar information processing in the brain. Based on these results, our model is proposed, reflecting response time and ERP latency. It consists of three processors: the perception processor, from stimulus input to the appearance of the N100; the cognitive processor, from the N100 to the P300; and the decision-action processor, from the P300 to the response. Using our model, application systems that reflect brain activity can be established.
Keywords: Brain activity, EEG, information processing model, model human processor.
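The proposed three-processor decomposition reads cycle times directly off the response time and ERP latencies, as sketched below; the latency values are typical textbook figures, not the authors' measurements.

```python
# Cycle times of the three proposed processors, derived from ERP latencies
# and response time; the numbers below are illustrative textbook values.
n100_latency_ms = 100     # end of the perception processor (N100)
p300_latency_ms = 300     # end of the cognitive processor (P300)
response_time_ms = 450    # illustrative total response time

perception = n100_latency_ms
cognitive = p300_latency_ms - n100_latency_ms
decision_action = response_time_ms - p300_latency_ms
print(f"perception={perception} ms, cognitive={cognitive} ms, "
      f"decision-action={decision_action} ms")
```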
1310 A Distributed Cryptographically Generated Address Computing Algorithm for Secure Neighbor Discovery Protocol in IPv6
Authors: M. Moslehpour, S. Khorsandi
Abstract:
Due to the shortage of IPv4 addresses, the transition to IPv6 has gained significant momentum in recent years. Like the Address Resolution Protocol (ARP) in IPv4, the Neighbor Discovery Protocol (NDP) provides functions such as address resolution in IPv6. Despite its functionality, NDP is vulnerable to several attacks. To mitigate these attacks, Internet Protocol Security (IPsec) was introduced, but it was not efficient due to its limitations. Therefore, the SEcure Neighbor Discovery (SEND) protocol was proposed to automatically protect the auto-configuration process, securing neighbor discovery and address resolution. To defend against threats to NDP's integrity and identity, SEND uses Cryptographically Generated Addresses (CGA) and asymmetric cryptography. Alongside its advantages, SEND has considerable drawbacks, such as the computational cost of the CGA algorithm and the sequential nature of CGA generation. In this paper, we parallelize this process across network resources in order to improve it. In addition, we compare the CGA generation time between self-computing and distributed computing, focusing on the impact of malicious nodes on CGA generation time in the network. According to the results, even when malicious nodes participate in the generation process, the CGA generation time is less than when it is computed on a single node. With a trust management system, detecting and isolating malicious nodes becomes easier.
Keywords: NDP, IPsec, SEND, CGA, Modifier, Malicious node, Self-Computing, Distributed-Computing.
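The expensive step that the paper parallelizes is the CGA modifier search of RFC 3972: hashing candidate modifiers until the digest has 16 x Sec leading zero bits. The sketch below is a simplified stand-in (the real Hash2 input format and the SEND machinery are omitted). The search is embarrassingly parallel: disjoint modifier ranges can be handed to different nodes, which is the essence of the distributed computation proposed.

```python
# Simplified sketch of the CGA modifier search (cf. RFC 3972's Hash2 loop);
# the exact input encoding is abbreviated here for illustration.
import hashlib
import os

def find_modifier(public_key: bytes, sec: int) -> bytes:
    zero_bits = 16 * sec
    while True:
        modifier = os.urandom(16)
        digest = hashlib.sha1(modifier + b"\x00" * 9 + public_key).digest()
        value = int.from_bytes(digest, "big")
        if value >> (160 - zero_bits) == 0:   # leading zero_bits are all zero
            return modifier

mod = find_modifier(b"example-public-key", sec=1)
print("found modifier:", mod.hex())
```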
1309 Properties of Fly Ash Brick Prepared in Local Environment of Bangladesh
Authors: Robiul Islam, Monjurul Hasan, Rezaul Karim, M. F. M. Zain
Abstract:
Coal fly ash, an industrial by-product of coal-fired thermal power plants, is considered a hazardous material, and its improper disposal has become an environmental issue. On the other hand, manufacturing conventional clay bricks consumes large amounts of clay and leads to substantial depletion of topsoil. This paper explores the possibility of using fly ash as a partial replacement of clay for brick manufacturing, considering the local technology practiced in Bangladesh. The effect of fly ash at different replacement ratios (0%, 20%, 30%, 40%, and 50% of clay by volume) on the properties of bricks was studied. Bricks were made in the field alongside ordinary bricks and marked with specific numbers for each replacement percentage to identify them at the time of testing. No physical distortion was observed in the fly ash bricks after firing in the kiln. Laboratory tests show that the compressive strength of the bricks decreases as fly ash content increases, with a maximum compressive strength of 19.6 MPa at 20% fly ash. In addition, the water absorption of the fly ash bricks increases with fly ash content. The abrasion value and specific gravity of coarse aggregate prepared from the fly ash bricks were also studied, and the results suggest that 20% fly ash can be considered the optimum content for producing good-quality bricks with the currently practiced technology.
Keywords: Bangladesh brick, fly ash, clay brick, physical properties, compressive strength.
1308 Mathematical Modeling of Elastically Creeping State of Arbitrarily Orientated Cavities in the Transversally Isotropic Massif
Authors: N. Azhikhanov, T. Turimbetov, Zh. Masanov, N. Zhunisov
Abstract:
A representative mechanical and mathematical model of the elastic-creep deformation of a transversally isotropic massif with a doubly periodic system of tilted slots is established, and a finite element calculation scheme is offered. The states of two diagonal cavities of arbitrary profile and deep inception are examined, and the nature of the stress and displacement field distributions is determined in the computations.
Keywords: Mathematical model, tunnel, transversally isotropic, finite elements.
1307 The Journey from Lean Manufacturing to Industry 4.0: The Rail Manufacturing Process in Mexico
Authors: Diana Flores Galindo, Richard Gil Herrera
Abstract:
Nowadays, Lean Manufacturing and Industry 4.0 are very important in every country. One of the main benefits is continued market presence. It has been identified that there is a need to change existing educational programs, as well as to update the knowledge and skills of existing employees. It should be borne in mind that behind each technological improvement there is a human being; human talent cannot be neglected. The main objectives of this article are to review the link between Lean Manufacturing and the incorporation of Industry 4.0 and the steps to follow to implement it; to analyze the current situation; and to study the implications and benefits of this new trend, with a particular focus on Mexico. Lean Manufacturing and Industry 4.0 implementation waves must always take care of the most important capital: intellectual capital. The methodology used in this article comprised the following steps: reviewing the reality of the fourth industrial revolution, reviewing employees' skills on the journey to become world-class, and analyzing the situation in Mexico. Lean Manufacturing and Industry 4.0 were studied not as exclusive concepts, but as complementary ones. The methodological framework is focused on motivating companies' collaborators to guarantee common results, innovate, and remain in the market in the face of new requirements from company stakeholders. The key findings were that both trends emphasize the need to improve communication across the entire company and to incorporate new technologies into everyday work, from the shop floor to administrative staff, to help improve processes. Taking care of people, activities, and processes will bring a company success. In the specific case of Mexico, companies in all sectors need to be aware of and implement technological improvements according to their specific needs. Low-cost labor represents one of the most typical barriers. In conclusion, companies must build a roadmap according to their strategy and needs to achieve their short-, medium- and long-term goals.
Keywords: Lean management, lean manufacturing, industry 4.0, motivation, SWOT analysis, Hoshin Kanri.
1306 Using the Monte Carlo Simulation to Predict the Assembly Yield
Authors: C. Chahin, M. C. Hsu, Y. H. Lin, C. Y. Huang
Abstract:
Electronic products that achieve high levels of integrated communications, computing, entertainment, and multimedia features in small, stylish, and robust new form factors are winning in the marketplace. Given the high costs an industry may incur, and since high yield is directly proportional to high profits, IC (Integrated Circuit) manufacturers struggle to maximize yield; but today's customers demand miniaturization, low costs, high performance, and excellent reliability, making yield maximization a never-ending search for an enhanced assembly process. With factors such as minimal tolerances and tighter parameter variations, a systematic approach is needed to predict the assembly process. In order to evaluate the quality of upcoming circuits, yield models are used which not only predict manufacturing costs but also provide vital information that eases the process of correction when yields fall below expectations. For an IC manufacturer to obtain higher assembly yields, all factors, such as boards, placement, components, the materials from which the components are made, and processes, must be taken into consideration. Effective placement yield depends heavily on machine accuracy and on the vision system, which must recognize the features on the board and component in order to place the device accurately on the pads and bumps of the PCB. There are currently two methods for accurate positioning: using the edge of the package, and using solder ball locations, also called footprints. The only assumption that a yield model makes is that all boards and devices are completely functional. This paper focuses on the Monte Carlo method, a class of computational algorithms that rely on repeated random sampling to compute their results. This method is utilized to simulate the placement and assembly processes within a production line.
Keywords: Monte Carlo simulation, placement yield, PCB characterization, electronics assembly.
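A hedged Monte Carlo sketch of placement yield in the spirit of this abstract: sample random placement offsets and count how many land within tolerance. The accuracy and tolerance figures are illustrative assumptions, not data from the paper.

```python
# Monte Carlo estimate of placement yield from assumed machine accuracy.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 1_000_000
sigma_xy_mm = 0.030            # assumed placement accuracy (1-sigma, per axis)
tolerance_mm = 0.075           # assumed allowable offset from pad centre

dx = rng.normal(0.0, sigma_xy_mm, n_trials)
dy = rng.normal(0.0, sigma_xy_mm, n_trials)
ok = np.hypot(dx, dy) <= tolerance_mm   # placement lands inside tolerance
print(f"estimated placement yield: {ok.mean():.4%}")
```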
1305 The Highest Art Tasks of the World and Humans Transforming
Authors: K. Khalykov, G. Begalinova
Abstract:
In this article, the creative arts in the modern era are investigated from the aspect of the artistic interrelationship between the creator, whose work is shaped by the character of his personality, and the viewer. A study of identity formation, and of the definition of its uniqueness, unity, and similarity as a global issue of the twenty-first century, has been conducted by analyzing the definitions that characterize human nature in the arts. Spiritual universality and human existence have been considered in the art system through the human as creator, as hero, and as the character who is the recipient, alongside analyses conducted in the context of worldwide cultural and historical processes.
Keywords: author, being, creative function of art, recipient and cultural contexts.
1304 Control Strategy for Two-Mode Hybrid Electric Vehicle by Using Fuzzy Controller
Authors: Jia-Shiun Chen, Hsiu-Ying Hwang
Abstract:
Hybrid electric vehicles can reduce pollution and improve fuel economy. Power-split hybrid electric vehicles (HEVs) provide two power paths between the internal combustion engine (ICE) and the energy storage system (ESS) through the gears of an electrically variable transmission (EVT). The EVT allows the ICE to operate independently of vehicle speed at all times, so the ICE can be kept in the efficient region of its characteristic brake specific fuel consumption (BSFC) map. The two-mode powertrain can operate in input-split or compound-split EVT modes and in four different fixed-gear configurations. The power-split architecture is advantageous because it combines conventional series and parallel power paths. This research focuses on the input-split and compound-split modes of the two-mode power-split powertrain. Fuzzy logic control (FLC) for the ICE and PI control for the electric machines (EMs) are derived for the urban driving cycle simulation. These control algorithms reduce vehicle fuel consumption and improve ICE efficiency while maintaining the state of charge (SOC) of the energy storage system within an efficient range.
Keywords: Hybrid electric vehicle, fuel economy, two-mode hybrid, fuzzy control.
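A hedged sketch of a tiny Mamdani-style fuzzy rule base for engine power demand, in the spirit of the FLC described above; the membership functions, rules, and variable ranges are illustrative assumptions, not the paper's controller design.

```python
# Two-rule Mamdani-style fuzzy inference with centroid defuzzification.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

soc, power_demand = 0.45, 30.0     # state of charge (0-1), demand (kW)

# Rule 1: if SOC is low  and demand is high -> engine power high
# Rule 2: if SOC is high and demand is low  -> engine power low
w1 = min(tri(soc, 0.0, 0.3, 0.6), tri(power_demand, 20, 50, 80))
w2 = min(tri(soc, 0.4, 0.7, 1.0), tri(power_demand, 0, 10, 30))

u = np.linspace(0, 60, 601)        # engine power universe (kW)
agg = np.maximum(np.minimum(w1, tri(u, 30, 45, 60)),
                 np.minimum(w2, tri(u, 0, 10, 25)))
engine_power = float((u * agg).sum() / agg.sum())   # centroid defuzzification
print(f"commanded engine power ~ {engine_power:.1f} kW")
```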
1303 Aging Evaluation of Ammonium Perchlorate/Hydroxyl Terminated Polybutadiene-Based Solid Rocket Engine by Reactive Molecular Dynamics Simulation and Thermal Analysis
Authors: R. F. B. Gonçalves, E. N. Iwama, J. A. F. F. Rocco, K. Iha
Abstract:
Propellants based on Hydroxyl Terminated Polybutadiene/Ammonium Perchlorate (HTPB/AP) are the most common in the rocket engines used by the Brazilian Armed Forces. This work investigated the possibility of extending their useful life (currently 10 years) by performing chemical-kinetic analyses of the energetic material via differential scanning calorimetry (DSC) and by computationally simulating the aging process using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) software. DSC thermal analysis was performed in triplicate at three heating rates (5, 10, and 15 °C/min) on propellant from a rocket motor with an 11-year shelf life; activation energies were obtained from the Arrhenius equation using the Ozawa and Kissinger kinetic methods, allowing comparison with data from the manufacturing period (standard motor). In addition, the kinetic parameters of the internal combustion-chamber pressure of eight rocket engines with 11 years of shelf life were also acquired, for comparison with the engine start-up data.
Keywords: Shelf-life, thermal analysis, Ozawa method, Kissinger method, LAMMPS software, thrust.
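The Kissinger method named above fits ln(beta/Tp^2) against 1/Tp, whose slope is -Ea/R; the sketch below uses invented DSC peak temperatures for illustration, not the paper's data.

```python
# Kissinger plot: ln(beta/Tp^2) vs 1/Tp, slope = -Ea/R.
# The peak temperatures are invented illustration values.
import numpy as np

R = 8.314                                  # gas constant (J/mol/K)
beta = np.array([5.0, 10.0, 15.0])         # heating rates (K/min)
Tp = np.array([573.0, 581.0, 586.0])       # assumed DSC peak temperatures (K)

slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
Ea = -slope * R                            # activation energy (J/mol)
print(f"Ea ~ {Ea / 1000:.0f} kJ/mol")
```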
1302 The Impact of Local Decision-Making in Regional Development Schemes on the Achievement of Efficiency in EU Funds
Authors: Kuyucu Helvacioglu Asli Deniz, Tektas Arzu
Abstract:
European Union candidate status provides strong motivation for decision-making in candidate countries in shaping regional development policy, where a transfer of power from the center to the periphery is envisioned. The Europeanization process anticipates that candidate countries will configure their regional institutional templates to the requirements of European Union policies, and it introduces new incentive instruments of the enlargement framework to be employed in regional development schemes. It has been observed that the contribution of local actors to decision-making in the design of allocation architectures enhances the efficiency of the funds and increases the positive effects of projects funded under regional development objectives. This study explores the performance of three regional development grant schemes in Turkey, established and allocated under the pre-accession process, with special emphasis on the roles of national and local actors in decision-making for regional development. Efficiency analyses have been conducted using the Data Envelopment Analysis (DEA) methodology, which has proved to be a superior method for comparative efficiency and benchmarking measurements. The findings of this study, in parallel with similar international studies, show that the participation of local actors in funding decisions contributes to both the quality and the efficiency of the projects funded under the EU schemes.
Keywords: Efficiency, European Union Funds, Regional Development, Turkey.
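A hedged sketch of the input-oriented CCR DEA model commonly used for such efficiency measurements, assuming SciPy; the input/output data for the decision-making units are invented placeholders, not the study's grant-scheme dataset.

```python
# Input-oriented CCR DEA model solved per decision-making unit (DMU) as a
# linear program; the X/Y data below are invented placeholders.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 2.0], [6.0, 3.0], [5.0, 5.0]])   # inputs, one row per DMU
Y = np.array([[60.0], [70.0], [50.0]])               # outputs, one row per DMU
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

for o in range(n):
    # variables z = [theta, lambda_1 .. lambda_n]; minimize theta
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])      # sum(l*x) <= theta*x_o
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])       # sum(l*y) >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    print(f"DMU {o}: efficiency = {res.x[0]:.3f}")
```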
1301 Flexible Wormhole-Switched Network-on-chip with Two-Level Priority Data Delivery Service
Authors: Faizal A. Samman, Thomas Hollstein, Manfred Glesner
Abstract:
A synchronous network-on-chip using wormhole packet switching and supporting guaranteed-completion best-effort service with low-priority (LP) and high-priority (HP) wormhole packet delivery is presented in this paper. Both the proposed LP and HP message services deliver good quality of service in terms of lossless packet completion and in-order message data delivery. However, the LP message service does not guarantee a minimal completion bound. HP packets will use 100% of the bandwidth of their reserved links if they are injected from the source node at the maximum rate. Hence, the HP service is suitable for small messages (less than a hundred bytes); otherwise, other HP and LP messages that also require those links will experience relatively high latency, depending on the size of the HP message. LP packets are routed using a minimal adaptive routing algorithm, while HP packets are routed using a non-minimal adaptive routing algorithm. Therefore, an additional 3-bit field identifying the packet type is introduced in the packet headers to classify and determine the type of service committed to the packet. Our NoC prototypes have also been synthesized using a 180-nm CMOS standard-cell technology to evaluate the cost of implementing the combination of both services.
Keywords: Network-on-Chip, Parallel Pipeline Router Architecture, Wormhole Switching, Two-Level Priority Service.
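The 3-bit packet-type field mentioned above can be illustrated with a simple header pack/unpack; the field layout (positions and widths) here is an invented illustration, not the paper's actual header format.

```python
# Packing/unpacking a 3-bit packet-type field into a flit header.
# The bit layout is an invented example, not the paper's header format.
TYPE_BITS, TYPE_SHIFT = 0b111, 13        # 3-bit type field at bits 13..15

def encode_header(dest: int, packet_type: int) -> int:
    assert 0 <= packet_type <= 7 and 0 <= dest < (1 << 13)
    return (packet_type << TYPE_SHIFT) | dest

def decode_header(header: int) -> tuple[int, int]:
    return header & ((1 << TYPE_SHIFT) - 1), (header >> TYPE_SHIFT) & TYPE_BITS

hdr = encode_header(dest=0x0A5, packet_type=0b101)   # e.g. an HP service class
print(decode_header(hdr))                            # -> (165, 5)
```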
1300 Organization of the Purchasing Function for Innovation
Authors: Jasna Prester, Ivana Rašić Bakarić, Božidar Matijević
Abstract:
Innovations not only contribute to the competitiveness of a company but also have positive effects on revenues. On average, product innovations account for 14 percent of companies' sales. Innovation management has changed substantially during the last decade because of growing reliance on external partners. As a consequence, a new task for purchasing arises, as firms need to understand which suppliers actually have high potential to contribute to the firm's innovativeness and which do not. Proper organization of the purchasing function is important, since the majority of manufacturing companies deal with substantial material costs, which pass through the purchasing function. In the past, the purchasing function was largely seen as a transaction-oriented, clerical function, but today purchasing is the interface with supply chain partners contributing to innovations, be they product or process innovations. Therefore, the purchasing function has to be organized differently to enable the firm's innovation potential. However, innovations are inherently risky. There are behavioral risks (that one partner will take advantage of the other), technological risks arising from the complexity of products, manufacturing processes, and incoming materials, and finally market risks, which ultimately judge the value of the innovation. These risks are investigated in this work. Specifically, technological risks, which concern the complexity of products and processes, are investigated more thoroughly. Buying components embodying such high-end technologies necessitates careful investigation of technical features and is therefore usually conducted by a team of experts. It is therefore hypothesized that the higher the technological risk, the higher the centralization of the purchasing function as an interface with other supply chain members. The main contribution of this research is that the analysis was performed on a large dataset of 1,493 companies from 25 countries, collected in the GMRG 4 survey. Most analyses of the purchasing function are case studies of innovative firms; this study therefore contributes empirical evaluations that can be generalized.
Keywords: Purchasing function organization, innovation, technological risk, GMRG 4 survey.
1299 The Relation between the Organizational Trust Level and Organizational Justice Perceptions of Staff in Konya Municipality: A Theoretical and Empirical Study
Authors: Handan Ertaş
Abstract:
The aim of the study is to determine the relationship between the organizational trust level and organizational justice perceptions of municipality officials. A correlational method within a descriptive survey model was used, and the Organizational Justice Perception Scale, the Organizational Trust Inventory, and the Interpersonal Trust Scale were administered to 353 participants working in Konya Metropolitan Municipality and the central district municipalities. Frequency statistics, independent-samples t-tests for binary groups, one-way ANOVA for multiple groups, and Pearson correlation analysis were used in the data analysis. The results show that participants have a high level of organizational trust, with "Interpersonal Trust" ranking first, and that there is a significant difference in favor of male officials in terms of Trust in the Organization Itself and Interpersonal Trust. It was also found that officials in district municipalities have higher perception levels in all dimensions, that there is a significant difference in the Trust in the Organization sub-dimension, and that work status is an important factor in organizational trust perception. Moreover, the study shows that organizational justice practices are important in raising officials' trust in the organization, administrators, and colleagues, and that there is a parallel relation between organizational justice components and organizational trust dimensions.
Keywords: Konya, Organizational Justice, Organizational Trust.
1298 Real-Time Recognition of Dynamic Hand Postures on a Neuromorphic System
Authors: Qian Liu, Steve Furber
Abstract:
To explore how the brain may recognise objects in its general, accurate, and energy-efficient manner, this paper proposes the use of a neuromorphic hardware system formed from a Dynamic Vision Sensor (DVS) silicon retina in concert with the SpiNNaker real-time Spiking Neural Network (SNN) simulator. As a first step in the exploration of this platform, a recognition system for dynamic hand postures is developed, enabling the study of the methods used in the visual pathways of the brain. Inspired by the behaviour of the primary visual cortex, Convolutional Neural Networks (CNNs) are modelled using both linear perceptrons and spiking Leaky Integrate-and-Fire (LIF) neurons. In this study's largest configuration using these approaches, a network of 74,210 neurons and 15,216,512 synapses is created and operated in real time using 290 SpiNNaker processor cores in parallel, achieving 93.0% accuracy. A smaller network using only one tenth of the resources is also created, again operating in real time, and it is able to recognise the postures with an accuracy of around 86.4%, only 6.6% lower than the much larger system. The recognition rate of the smaller network developed on this neuromorphic system is sufficient for a successful hand posture recognition system, and demonstrates a much improved cost-to-performance trade-off in its approach.
Keywords: Spiking neural network (SNN), convolutional neural network (CNN), posture recognition, neuromorphic system.
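A hedged sketch of the leaky integrate-and-fire (LIF) neuron model named in the abstract; the membrane parameters and input current are textbook illustration values, not the SpiNNaker network's configuration.

```python
# Forward-Euler simulation of a single LIF neuron with constant input.
dt, tau_m = 0.1, 10.0           # time step and membrane time constant (ms)
v_rest, v_reset, v_thresh = -65.0, -65.0, -50.0   # potentials (mV)
r_m, i_in = 10.0, 2.0           # membrane resistance (MOhm), input (nA)

v, spikes = v_rest, []
for step in range(int(100 / dt)):                   # simulate 100 ms
    v += dt / tau_m * (-(v - v_rest) + r_m * i_in)  # leaky integration
    if v >= v_thresh:                               # threshold crossing
        spikes.append(step * dt)
        v = v_reset                                 # reset after spike
print(f"{len(spikes)} spikes, first at {spikes[0]:.1f} ms"
      if spikes else "no spikes")
```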
1297 A PIM (Processor-In-Memory) for Computer Graphics: Data Partitioning and Placement Schemes
Authors: Jae Chul Cha, Sandeep K. Gupta
Abstract:
The demand for higher-performance graphics continues to grow because of the incessant desire for realism, and rapid advances in fabrication technology have enabled us to build several processor cores on a single die. Hence, it is important to develop single-chip parallel architectures for such data-intensive applications. In this paper, we propose an efficient PIM architecture tailored for computer graphics, which requires a large number of memory accesses. We then address the two important tasks necessary for maximally exploiting the parallelism provided by the architecture, namely the partitioning and placement of graphics data, which affect load balance and communication costs respectively. Under the constraint of uniform partitioning, we develop approaches for optimal partitioning and placement that significantly reduce the search space. We also present heuristics for identifying near-optimal placements, since the search space for placement remains impractically large despite our optimization. We then demonstrate the effectiveness of our partitioning and placement approaches via analysis of example scenes; simulation results show considerable search space reductions, and our placement heuristics perform close to optimal: the average ratio of communication overhead between our heuristics and the optimum was 1.05. Our uniform partitioning showed an average load-balance ratio of 1.47 for geometry processing and 1.44 for rasterization, which is reasonable.
Keywords: Data Partitioning and Placement, Graphics, PIM, Search Space Reduction.
1296 An FPGA Implementation of Intelligent Visual Based Fall Detection
Authors: Peng Shen Ong, Yoong Choon Chang, Chee Pun Ooi, Ettikan K. Karuppiah, Shahirina Mohd Tahir
Abstract:
Falling has been one of the major concerns and threats to the independence of the elderly in their daily lives. With the significant worldwide growth of the aging population, it is essential to have a fall detection solution that operates at high accuracy in real time and supports large-scale implementation using multiple cameras. The Field Programmable Gate Array (FPGA) is a highly promising hardware accelerator for many emerging embedded vision-based systems. Thus, the main objective of this paper is to present an FPGA-based solution for visual fall detection that meets stringent real-time requirements with high accuracy. A hardware architecture for visual fall detection that exploits pixel locality to reduce memory accesses is proposed. By exploiting the parallel and pipelined architecture of the FPGA, our hardware implementation achieves a performance of 60 fps for a series of video analytics functions at VGA resolution (640x480). The results of this work show that FPGAs have great potential for enabling large-scale vision systems in the future healthcare industry, owing to their flexibility and scalability.
Keywords: Fall detection, FPGA, hardware implementation.