Search results for: Measurement Process
5939 Determining the Width and Depths of Cut in Milling on the Basis of a Multi-Dexel Model
Authors: Jens Friedrich, Matthias A. Gebele, Armin Lechler, Alexander Verl
Abstract:
Chatter vibrations and process instabilities are the most important factors limiting the productivity of the milling process. Chatter can lead to damage of the tool, the part or the machine tool. Therefore, the estimation and prediction of process stability are very important. The process stability depends on the spindle speed, the depth of cut and the width of cut. In milling, the process conditions are defined in the NC-program. While the spindle speed is directly coded in the NC-program, the depth and width of cut are unknown. This paper presents a new simulation-based approach for the prediction of the depth and width of cut of a milling process. The prediction is based on a material removal simulation with an analytically represented tool shape and a multi-dexel approach for the workpiece. The new calculation method allows the direct estimation of the depth and width of cut, which are the influencing parameters of the process stability, instead of the removed volume as existing approaches do. This knowledge can be used to predict the stability of new, unknown parts. Moreover, with an additional vibration sensor, the stability lobe diagram of a milling process can be estimated and improved based on the estimated depth and width of cut.
Keywords: Dexel, process stability, material removal, milling.
5938 New Product Development Process on High-Tech Innovation Life Cycle
Authors: Gonçalo G. Aleixo, Alexandra B. Tenera
Abstract:
This work provides a new perspective on the theme of innovation. It reveals that radical and incremental innovations are complementary during the innovation life cycle and are accomplished through distinct ways of developing new products. Each new product development process is constructed according to the nature of each innovation and the state of the product development. This paper proposes including in the new product development process the organizational functional areas that influence new product development.
Keywords: Cross-functional, Incremental Innovation, New Product development Process, Radical Innovation
5937 Maintenance Function's Performance Evaluation Using Adapted Balanced Scorecard Model
Authors: A. Bakhtiar, B. Purwanggono, N. Metasari
Abstract:
PT XYZ is a bottled drinking water company. To preserve the production resources owned by the company so that they can be utilized well, it has implemented a maintenance management system, which has an important role in the company's profitability and is one of the factors influencing the company's overall performance. Yet, up to now the company has never measured the maintenance activities' contribution to company performance. Performance evaluation is done according to an adapted Balanced Scorecard model fitted to the maintenance function context. This model includes six perspectives: innovation and growth, production, maintenance, environment, customer, and finance. Actual performance measurement is done through the Analytic Hierarchy Process and the Objective Matrix. From the research done, we can conclude that the company's maintenance function falls into the moderate performance category. However, there are some indicators which have high priority but low performance, namely: customers' complaint rate, work lateness rate, and Return on Investment.
Keywords: Maintenance, performance, balanced scorecard, objective matrix.
5936 UD Covariance Factorization for Unscented Kalman Filter using Sequential Measurements Update
Authors: H. Ghanbarpour Asl, S. H. Pourtakdoust
Abstract:
The Extended Kalman Filter (EKF) is probably the most widely used estimation algorithm for nonlinear systems. However, not only does it have difficulties arising from linearization, but it also often becomes numerically unstable because of computer round-off errors that occur in the process of its implementation. To overcome the linearization limitations, the unscented transformation (UT) was developed as a method to propagate mean and covariance information through nonlinear transformations. A Kalman filter that uses the UT for calculation of the first two statistical moments is called the Unscented Kalman Filter (UKF). The square-root form of the UKF (SR-UKF) was developed by Rudolph van der Merwe and Eric Wan to achieve numerical stability and guarantee positive semi-definiteness of the Kalman filter covariances. This paper develops another implementation of the SR-UKF for a sequential measurement update equation, and also derives a new UD covariance factorization filter for the implementation of the UKF. This filter is equivalent to the UKF but is computationally more efficient.
Keywords: Unscented Kalman filter, square-root unscented Kalman filter, UD covariance factorization, target tracking.
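As a small illustration of the sequential measurement update idea referred to above, the sketch below processes a measurement vector one scalar component at a time in a plain linear Kalman update. It assumes a diagonal measurement noise covariance and uses illustrative matrices; it does not reproduce the paper's UD-factorized UKF itself.

```python
import numpy as np

def sequential_update(x, P, y, H, R):
    """Process a measurement vector one scalar component at a time.

    Sequential processing avoids inverting the full innovation
    covariance matrix; each scalar update needs only a division.
    Assumes R is diagonal (uncorrelated measurement noise).
    """
    for i in range(len(y)):
        h = H[i, :]                      # row of the measurement matrix
        s = h @ P @ h + R[i, i]          # scalar innovation variance
        k = (P @ h) / s                  # Kalman gain for this scalar
        x = x + k * (y[i] - h @ x)       # state correction
        P = P - np.outer(k, h @ P)       # covariance correction
    return x, P

# Illustrative 2-state, 2-measurement example
x = np.array([0.0, 1.0])
P = np.eye(2)
H = np.array([[1.0, 0.0], [0.0, 1.0]])
R = np.diag([0.1, 0.2])
y = np.array([0.3, 0.9])
x_new, P_new = sequential_update(x, P, y, H, R)
```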
5935 Phenomenological and Theoretical Analysis of Relativistic Temperature Transformation and Relativistic Entropy
Authors: Marko Popovic
Abstract:
There are three possible effects of the Special Theory of Relativity (STR) on a thermodynamic system. Planck and Einstein looked upon this process as isobaric; on the other hand, Ott saw it as an adiabatic process. However, plenty of logical reasons show that the process is isothermal. Our phenomenological consideration demonstrates that the temperature is invariant under the Lorentz transformation. In that case the process is isothermal, so volume and pressure are Lorentz covariant. If the process is isothermal, Boyle's law is Lorentz invariant. Also the equilibrium constant, Gibbs energy, activation energy, enthalpy, entropy and extent of the reaction become Lorentz invariant.
Keywords: STR, relativistic temperature transformation, Boyle's law, equilibrium constant, Gibbs energy.
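For context, the three competing transformation laws discussed in the relativistic-thermodynamics literature can be summarized as follows (a recap of standard results, not equations reproduced from the paper):

```latex
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
T' = \frac{T}{\gamma}\ \text{(Planck--Einstein)}, \qquad
T' = \gamma T\ \text{(Ott)}, \qquad
T' = T\ \text{(temperature invariance, as argued here)}.
```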
5934 The Use of Process-Oriented Methods of Calculation to Determine the Costs of Logistics Processes
Authors: Tomas Cechura, Michal Simon
Abstract:
The aim of this paper is to create a proposal for determining the costs of logistics processes by using process-oriented calculation methods. The traditional approach is that logistics costs are part of manufacturing overhead, which is usually calculated as a percentage surcharge. Therefore, in the traditional approach it is not obvious where and in which activities costs were incurred, so it is impossible to trace logistics costs to products. Our approach tries to fix, or at least improve, this issue. Another benefit of applying the process approach is the identification of logistics processes which are otherwise hidden in manufacturing overhead. The first part of this paper describes the development of process-oriented methods over time. The next part shows the possibility of applying the process-oriented method called Prozesskostenrechnung to logistics processes. The conclusion summarizes the advantages and disadvantages of using this method in logistics.
Keywords: Cost, logistics, calculation, process-oriented method.
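A minimal sketch of the contrast between a percentage surcharge and a process-oriented (activity-based) allocation of logistics costs is given below; the products, activities, cost pools and driver volumes are entirely hypothetical.

```python
# Hypothetical comparison: flat surcharge vs. process-oriented allocation.
logistics_pool = {"receiving": 40_000.0, "picking": 50_000.0, "shipping": 30_000.0}  # EUR
direct_cost = {"product_A": 400_000.0, "product_B": 200_000.0}                       # EUR

# Traditional approach: logistics cost as a uniform overhead surcharge.
surcharge = sum(logistics_pool.values()) / sum(direct_cost.values())
traditional = {p: c * surcharge for p, c in direct_cost.items()}

# Process-oriented approach: trace costs through activity cost drivers.
driver_volume = {
    "product_A": {"receiving": 300, "picking": 10_000, "shipping": 200},
    "product_B": {"receiving": 700, "picking": 30_000, "shipping": 600},
}
total_drivers = {a: sum(v[a] for v in driver_volume.values()) for a in logistics_pool}
activity_rate = {a: logistics_pool[a] / total_drivers[a] for a in logistics_pool}  # EUR per driver unit
process_oriented = {
    p: sum(activity_rate[a] * q for a, q in drivers.items())
    for p, drivers in driver_volume.items()
}

print(traditional)       # {'product_A': 80000.0, 'product_B': 40000.0}
print(process_oriented)  # {'product_A': 32000.0, 'product_B': 88000.0}
```

The same cost pool is distributed very differently once it is traced to the activities that actually caused it, which is the point the abstract makes about the traceability of logistics costs to products.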
5933 Influence of Improved Roughage Quality and Period of Meal Termination on Digesta Load in the Digestive Organs of Goats
Authors: Rasheed A. Adebayo, Mehluli M. Moyo, Ignatius V. Nsahlai
Abstract:
Ruminants are known to relish roughage for productivity, but the effect of its quality on digesta load in the rumen, omasum, abomasum and other distal organs of the digestive tract is not yet known. Reticulorumen fill is a strong indicator for long-term control of intake in ruminants. As such, the measurement and prediction of digesta load in these compartments may be crucial to productivity in the ruminant industry. The current study aimed at determining the effect of (a) diet quality on digesta load in the digestive organs of goats, and (b) period of meal termination on the reticulorumen fill and digesta load in other distal compartments of the digestive tract of goats. Goats were fed with urea-treated hay (UTH), urea-sprayed hay (USH) and non-treated hay (NTH). At the end of an eight-week feeding trial period, upon termination of a meal in the morning, afternoon or evening, all goats were slaughtered in random groups of three per day to measure reticulorumen fill and digesta loads in other distal compartments of the digestive tract. Both diet quality and period affected (P < 0.05) the measure of reticulorumen fill. However, reticulorumen fill in the evening was larger (P < 0.05) than in the afternoon, while the afternoon was similar (P > 0.05) to the morning. Also, diet quality affected (P < 0.05) the wet omasal digesta load, wet abomasum, dry abomasum and dry caecum digesta loads, but did not affect (P > 0.05) either wet or dry digesta loads in the other compartments of the digestive tract. Period of measurement did not affect (P > 0.05) the wet omasal digesta load or the wet and dry digesta loads in other compartments of the digestive tract, except the wet abomasum digesta load (P < 0.05) and the dry caecum digesta load (P < 0.05). Both wet and dry reticulorumen fill were correlated (P < 0.05) with the omasal digesta load (r = 0.623 and r = 0.723, respectively). In conclusion, the reticulorumen fill of goats decreased with improving roughage quality, and the period of meal termination and of measurement of the fill is a key factor in the quantity of digesta load.
Keywords: Digesta, goats, meal termination, reticulorumen fill.
5932 Prediction of Solidification Behavior of Al Alloy in a Cube Mold Cavity
Authors: N. P. Yadav, Deepti Verma
Abstract:
This paper focuses on the mathematical modeling of the solidification of an Al alloy in a cube mold cavity to study the solidification behavior of the casting process. The parametric investigation of the solidification process inside the cavity was performed by using a computational solidification/melting model coupled with a Volume of Fluid (VOF) model. The implicit filling algorithm is used in this study to understand the overall process from the filling stage to solidification in a model metal casting process. The model is validated against past studies under the same conditions. The solidification process is analyzed by including the effect of pouring velocity as well as natural convection from the wall and the geometry of the cavity. These studies show the possibility of various defects occurring during the solidification process.
Keywords: Buoyancy driven flow, natural convection driven flow, residual flow, secondary flow, volume of fluid.
5931 Project Objective Structure Model: An Integrated, Systematic and Balanced Approach in Order to Achieve Project Objectives
Authors: Mohammad Reza Oftadeh
Abstract:
The purpose of the article is to describe the project objective structure (POS) concept, which was developed from research activities and experience with project management, the Balanced Scorecard (BSC) and the European Foundation for Quality Management Excellence Model (EFQM Excellence Model). Furthermore, this paper tries to define a balanced, systematic, and integrated measurement approach to meet project objectives and project strategic goals based on a process-oriented model. In this paper, the POS is suggested as a means to measure project performance over the project life cycle. Using the POS model, the project manager can ensure that the project objectives stated in the project charter are achieved. This concept can help project managers to implement integrated and balanced monitoring and control of project work.
Keywords: Project objectives, project performance management, PMBOK, key performance indicators, integration management.
5930 Fault Detection and Identification of COSMED K4b2 Based On PCA and Neural Network
Authors: Jing Zhou, Steven Su, Aihuang Guo
Abstract:
COSMED K4b2 is a portable electrical device designed to test pulmonary functions. It is ideal for many applications that need measurement of the cardio-respiratory response either in the field or in the lab, and it is capable of delivering real-time data to a sink node or a PC base station while storing the data in memory at the same time. However, the actual sensor outputs and data received may contain some errors, such as impulsive noise, which can be related to the sensors, low batteries, the environment or disturbances in the data acquisition process. These abnormal outputs might cause misinterpretations of the exercise or living activities of the persons being monitored. In this paper we propose an effective and feasible method to detect and identify such errors by principal component analysis (PCA) and a back propagation (BP) neural network.
Keywords: BP Neural Network, Exercising Testing, Fault Detection and Identification, Principal Component Analysis.
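The following is a minimal sketch of the PCA part of such a fault detection scheme, flagging abnormal sensor vectors by their reconstruction error (the SPE/Q statistic). The synthetic data, number of retained components and control limit are illustrative, and the BP-network step that identifies the fault type is omitted.

```python
import numpy as np

# Minimal PCA-based fault detection via reconstruction error (SPE/Q statistic).
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 3))                  # 3 underlying physiological factors (stand-in)
loadings = rng.normal(size=(3, 6))
X_train = latent @ loadings + 0.1 * rng.normal(size=(500, 6))   # 6 correlated "sensor" channels

mean, std = X_train.mean(0), X_train.std(0)
Z = (X_train - mean) / std
eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
P = eigvec[:, -3:]                                  # retained principal subspace

def spe(x):
    z = (x - mean) / std
    r = z - P @ (P.T @ z)                           # residual off the principal subspace
    return float(r @ r)

limit = np.percentile([spe(x) for x in X_train], 99)   # illustrative control limit
faulty = X_train[0].copy()
faulty[2] += 8.0                                    # inject an impulsive sensor error
print(spe(faulty) > limit)                          # expected: True (flagged as abnormal)
```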
5929 Operational Risk – Scenario Analysis
Authors: Milan Rippel, Petr Teply
Abstract:
This paper focuses on operational risk measurement techniques and on economic capital estimation methods. A data sample of operational losses provided by an anonymous Central European bank is analyzed using several approaches. The Loss Distribution Approach and the scenario analysis method are considered. Custom plausible loss events defined in a particular scenario are merged with the original data sample, and their impact on capital estimates and on the financial institution is evaluated. Two main questions are assessed: what is the most appropriate statistical method to measure and model the operational loss data distribution, and what is the impact of hypothetical plausible events on the financial institution? The g&h distribution was evaluated to be the most suitable one for operational risk modeling. The method based on the combination of historical loss event modeling and scenario analysis provides reasonable capital estimates and allows for the measurement of the impact of extreme events on banking operations.
Keywords: Operational risk, scenario analysis, economic capital, loss distribution approach, extreme value theory, stress testing.
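A minimal Monte Carlo sketch of the Loss Distribution Approach is shown below. It uses a lognormal severity as a simple stand-in for the g&h distribution selected in the paper, and all frequency and severity parameters are hypothetical.

```python
import numpy as np

# Monte Carlo Loss Distribution Approach: annual loss = Poisson number of
# events x severity per event. Lognormal severity is a stand-in for the
# g&h distribution used in the paper; parameters are hypothetical.
rng = np.random.default_rng(1)
years = 100_000
lam = 25.0                                   # expected loss events per year
mu, sigma = 10.0, 2.0                        # lognormal severity parameters

annual_loss = np.empty(years)
for i in range(years):
    n = rng.poisson(lam)
    annual_loss[i] = rng.lognormal(mu, sigma, size=n).sum()

expected_loss = annual_loss.mean()
var_999 = np.quantile(annual_loss, 0.999)    # 99.9% Value-at-Risk of the aggregate loss
economic_capital = var_999 - expected_loss   # unexpected-loss capital estimate
print(expected_loss, var_999, economic_capital)
```

Scenario events would be merged into the severity sample before simulation, which is how their impact on the capital estimate can be quantified.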
5928 Stress Intensity Factor for Dynamic Cracking of Composite Material by X-FEM Method
Authors: S. Lecheb, A. Nour, A. Chellil, H. Mechakra, N. Hamad, H. Kebir
Abstract:
This work develops a numerical implementation of the eXtended Finite Element Method (X-FEM) to study the fracture process of cracked composite plates and to calculate the stress intensity factor (SIF). In the static part, we give the distribution of stress, the displacement field and the strain of the composite plate in two cases, uncracked and edge-cracked; in the dynamic part, we give the first six mode shapes. Secondly, we calculate the stress intensity factor SIF for different orientation angles θ of a central crack with length 2a = 0.4 mm under plane strain conditions; KI and KII are obtained for mode I and mode II, respectively, using the X-FEM method. Finally, from the inclined-crack results involving mixed modes, we compare the cases and identify the most dangerous inclination and the best crack angle, at which K is minimal.
Keywords: Stress Intensity Factor (SIF), Crack orientation, Glass/Epoxy, natural Frequencies, X-FEM.
5927 Estimation of Missing or Incomplete Data in Road Performance Measurement Systems
Authors: Kristjan Kuhi, Kati K. Kaare, Ott Koppel
Abstract:
Modern management in most fields is performance-based; both the planning and the implementation of maintenance and operational activities are driven by appropriately defined performance indicators. Continuous real-time data collection for management is becoming feasible due to technological advancements. Outdated and insufficient input data may result in incorrect decisions. When using deterministic models the uncertainty of the object state is not visible, thus applying deterministic models is more likely to give a false diagnosis. Constructing structured probabilistic models of the performance indicators, taking into consideration the surrounding indicator environment, makes it possible to estimate the trustworthiness of the indicator values. It also helps to fill gaps in the data and so improve the quality of the performance analysis and management decisions. In this paper the authors discuss the application of probabilistic graphical models in road performance measurement and propose a high-level conceptual model that enables analyzing and predicting future pavement deterioration more precisely based on road utilization.
Keywords: Probabilistic graphical models, performance indicators, road performance management, data collection
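As a toy illustration of how a probabilistic model can fill a data gap, the sketch below uses a single condition-to-reading dependency with hypothetical probabilities; the paper's actual model for pavement deterioration is far richer than this.

```python
import numpy as np

# Toy two-node probabilistic model: pavement condition -> roughness reading.
# When the roughness reading is missing, the posterior falls back to the
# prior; when it is present, Bayes' rule sharpens the estimate.
# All probabilities are hypothetical.
states = ["good", "fair", "poor"]
prior = np.array([0.6, 0.3, 0.1])                  # P(condition)
p_high_given_state = np.array([0.05, 0.40, 0.90])  # P(high roughness | condition)

def posterior(reading_high):
    if reading_high is None:                       # missing measurement
        return prior
    likelihood = p_high_given_state if reading_high else 1 - p_high_given_state
    unnorm = likelihood * prior
    return unnorm / unnorm.sum()

print(dict(zip(states, posterior(None))))          # gap filled with the prior belief
print(dict(zip(states, posterior(True))))          # belief updated after a high reading
```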
5926 Designing Pictogram for Food Portion Size
Authors: Y.C. Liu, S.J. Lu, Y.C. Weng, H. Su
Abstract:
The objective of this paper is to investigate a new approach, based on the idea of pictograms, for representing food portion size. This approach adopts the model of the United States Pharmacopeia Drug Information (USP-DI). The representation of each food portion size is composed of three parts: frame, the connotation of dietary portion sizes, and layout. To investigate users' comprehension of this approach, two experiments were conducted, involving 122 Taiwanese people, 60 male and 62 female, with ages between 16 and 64 (divided into age groups of 16-30, 31-45 and 46-64). In Experiment 1, the mean correct rate for the understanding level of food items is 48.54% (S.D. = 95.08) and the mean response time 2.89 s (S.D. = 2.14). The difference in correct rates between age groups is significant (P* = 0.00 < 0.05). In Experiment 2, the correct rate of selecting the right life-size measurement aid is 65.02% (S.D. = 21.31). The results showed the potential of the approach for certain food portion sizes. Issues raised for discussion include comprehension of numerous food varieties in an open environment, the selection of photograph or drawing, and the reasons for the different correct rates for the measurement aid. This research could also be useful for those interested in systematic and pictorial representation of dietary portion size information.
Keywords: Comprehension, Food Portion Size, Model of Dietary Information, Pictogram Design, USP-DI.
5925 Temperature Dependence of Relative Permittivity: A Measurement Technique Using Split Ring Resonators
Authors: Sreedevi P. Chakyar, Jolly Andrews, V. P. Joseph
Abstract:
A compact method for measuring the relative permittivity of a dielectric material at different temperatures, using a single circular Split Ring Resonator (SRR) metamaterial unit working as a test probe, is presented in this paper. The dielectric constant of a material depends on its temperature, and the LC resonance of the SRR depends on its dielectric environment. Hence, the temperature of the dielectric material in contact with the resonator influences its resonant frequency. A single SRR placed between transmitting and receiving probes connected to a Vector Network Analyser (VNA) is used as the test probe. The dependence of the resonant frequency of the SRR on temperature between 30 °C and 60 °C is analysed. Relative permittivities ε of test samples at different temperatures are extracted from a calibration graph drawn between the relative permittivity of samples of known dielectric constant and their corresponding resonant frequencies. This method is found to be an easy and efficient technique for analysing the temperature-dependent permittivity of different materials.
Keywords: Metamaterials, negative permeability, permittivity measurement techniques, split ring resonators, temperature dependent dielectric constant.
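The calibration-graph step can be sketched as below, assuming a locally linear relation between resonant frequency and relative permittivity over the calibration range; the reference permittivities and resonant frequencies are illustrative, not measured values from the paper.

```python
import numpy as np

# Calibration-graph idea: fit the relation between known relative
# permittivities and measured SRR resonant frequencies, then invert it
# for an unknown sample. All numbers below are illustrative; a real
# calibration would use VNA data for the reference samples.
eps_known = np.array([1.0, 2.1, 2.55, 3.8, 4.4])        # reference materials
f_res_ghz = np.array([3.95, 3.62, 3.50, 3.21, 3.09])     # corresponding resonances

# Linear fit f = a*eps + b over the narrow calibration range
a, b = np.polyfit(eps_known, f_res_ghz, 1)

def permittivity_from_frequency(f_ghz):
    """Invert the calibration line to extract the sample permittivity."""
    return (f_ghz - b) / a

print(permittivity_from_frequency(3.40))   # sample whose resonance was measured at 3.40 GHz
```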
5924 Vessel Inscribed Trigonometry to Measure the Vessel Progressive Orientations in the Digital Fundus Image
Authors: Pil Un Kim, Yunjung Lee, Gihyoun Lee, Jin Ho Cho, Myoung Nam Kim
Abstract:
In this paper, vessel inscribed trigonometry (VITM) for the vessel progression orientation (VPO) is proposed for the two-dimensional fundus image. The VPO is a major factor in optic disc (OD) detection, which is a basic process in retina analysis. To measure the VPO, vessel skeletons (VS) are used. First, the vessels are classified into three classes, vessel end, vessel branch and vessel stem, and the chain code maps of the VS are generated. Next, the two farthest neighborhoods of each point on the VS are searched for using the proposed angle restriction. Lastly, the gradient of the straight line between the two farthest neighborhoods is estimated to measure the VPO. VITM is validated by comparing it with manual results and 2D Gaussian templates. The results of experiments applying VITM to detect the OD in fundus images confirm that the VPO obtained by the proposed measurement is accurate enough to detect the OD.
Keywords: Angle measurement, Optic disc, Retina vessel, Vessel progression orientation.
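The final step, estimating the gradient of the line between the two farthest neighborhood points, amounts to a simple angle computation such as the sketch below; the pixel coordinates are illustrative.

```python
import math

# Once the two farthest neighborhood points on the vessel skeleton are
# known, the progression orientation is the angle of the line joining them.
def vessel_orientation(p1, p2):
    """Angle in degrees of the line through two skeleton points, in [0, 180)."""
    dy = p2[0] - p1[0]                  # (row, column) pixel coordinates
    dx = p2[1] - p1[1]
    angle = math.degrees(math.atan2(dy, dx))
    return angle % 180.0                # orientation is direction-independent

print(vessel_orientation((120, 40), (112, 61)))   # roughly 159 degrees for these points
```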
5923 Using Linear Quadratic Gaussian Optimal Control for Lateral Motion of Aircraft
Authors: A. Maddi, A. Guessoum, D. Berkani
Abstract:
The purpose of this paper is to provide a practical example of the Linear Quadratic Gaussian (LQG) controller. This method includes a description and some discussion of the discrete Kalman state estimator. One aspect of this optimality is that the estimator incorporates all information that can be provided to it. It processes all available measurements, regardless of their precision, to estimate the current value of the variables of interest, making use of knowledge of the system and measurement device dynamics, the statistical description of the system noises, measurement errors, and uncertainty in the dynamics models. Since the time of its introduction, the Kalman filter has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation. For example, to determine the velocity of an aircraft or its sideslip angle, one could use a Doppler radar, the velocity indications of an inertial navigation system, or the relative wind information in the air data system. Rather than ignore any of these outputs, a Kalman filter could be built to combine all of this data and knowledge of the various systems' dynamics to generate an overall best estimate of velocity and sideslip angle.
Keywords: Aircraft motion, Kalman filter, LQG control, lateral stability, state estimator.
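A minimal numeric illustration of the "combine all measurements" principle is inverse-variance fusion of two velocity readings, sketched below with hypothetical values; the full LQG design discussed in the paper is of course much richer than this static fusion.

```python
# Fuse a Doppler-radar velocity and an INS velocity by weighting each with
# the inverse of its error variance. Values are hypothetical.
v_radar, var_radar = 151.2, 4.0      # m/s, (m/s)^2
v_ins, var_ins = 148.7, 1.0

w_radar = (1.0 / var_radar) / (1.0 / var_radar + 1.0 / var_ins)
w_ins = 1.0 - w_radar
v_fused = w_radar * v_radar + w_ins * v_ins          # best combined estimate
var_fused = 1.0 / (1.0 / var_radar + 1.0 / var_ins)  # smaller than either input variance
print(v_fused, var_fused)            # about 149.2 m/s with variance 0.8
```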
5922 Basic Study of Mammographic Image Magnification System with Eye-Detector and Simple EEG Scanner
Authors: A. Umemuro, M. Sato, M. Narita, S. Hori, S. Sakurai, T. Nakayama, A. Nakazawa, T. Ogura
Abstract:
Mammography requires the detection of very small calcifications, and physicians search for microcalcifications by magnifying the images as they read them. The mouse is necessary to zoom in on the images, but this can be tiring and distracting when many images are read in a single day. Therefore, an image magnification system combining an eye-detector and a simple electroencephalograph (EEG) scanner was devised, and its operability was evaluated. Two experiments were conducted in this study: the measurement of eye-detection error using an eye-detector and the measurement of the time required for image magnification using a simple EEG scanner. Eye-detector validation showed that the mean distance of eye-detection error ranged from 0.64 cm to 2.17 cm, with an overall mean of 1.24 ± 0.81 cm for the observers. The results showed that the eye detection error was small enough for the magnified area of the mammographic image. The average time required for point magnification in the verification of the simple EEG scanner ranged from 5.85 to 16.73 seconds, and individual differences were observed. The reason for this may be that the size of the simple EEG scanner used was not adjustable, so it did not fit well for some subjects. The use of a simple EEG scanner with size adjustment would solve this problem. Therefore, the image magnification system using the eye-detector and the simple EEG scanner is useful.
Keywords: EEG scanner, eye-detector, mammography, observers.
5921 Rapid Frequency Response Measurement of Power Conversion Products with Coherence-Based Confidence Analysis
Authors: Tomi Roinila, Aki Taskinen, Matti Vilkko
Abstract:
Switched-mode converters now play a significant role in modern society. Their operation is often crucial in various electrical applications affecting everyday life. Therefore, the quality of the converters needs to be reliably verified. Recent studies have shown that the converters can be fully characterized by a set of frequency responses which can be efficiently used to validate the proper operation of the converters. Consequently, several methods have been proposed to measure the frequency responses fast and accurately. Most often correlation-based techniques have been applied. The presented measurement methods are highly sensitive to external errors and system nonlinearities. This fact has often been forgotten and the necessary uncertainty analysis of the measured responses has been neglected. This paper presents a simple approach to analyze the noise and nonlinearities in the frequency-response measurements of switched-mode converters. Coherence analysis is applied to form a confidence interval characterizing the noise and nonlinearities involved in the measurements. The presented method is verified by practical measurements from a high-frequency switched-mode converter.
Keywords: Switched-mode converters, frequency analysis, coherence analysis.
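A sketch of such a coherence-based confidence check is given below using synthetic signals; the excitation, plant model and band of interest are stand-ins, and the random-error expression is the standard Bendat-Piersol result rather than a formula taken from the paper.

```python
import numpy as np
from scipy import signal

# Coherence-based confidence check for a measured frequency response.
fs, n = 10_000, 2**16
rng = np.random.default_rng(2)
x = rng.normal(size=n)                                  # broadband excitation (stand-in)
b, a = signal.butter(2, 0.1)                            # stand-in "plant"
y = signal.lfilter(b, a, x) + 0.2 * rng.normal(size=n)  # output corrupted by noise

nperseg = 1024
f, Pxx = signal.welch(x, fs=fs, nperseg=nperseg)
_, Pxy = signal.csd(x, y, fs=fs, nperseg=nperseg)
_, Cxy = signal.coherence(x, y, fs=fs, nperseg=nperseg)

H1 = Pxy / Pxx                                          # frequency-response estimate
n_avg = n // nperseg                                    # averaged segments (ignoring overlap)
# Normalized random error of |H1|: low coherence means low confidence.
rel_err = np.sqrt(1 - Cxy) / (np.sqrt(Cxy) * np.sqrt(2 * n_avg))
print(rel_err[f < 1000].max())      # worst-case relative error in the band of interest
```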
5920 Optimization of Surface Roughness in Additive Manufacturing Processes via Taguchi Methodology
Authors: Anjian Chen, Joseph C. Chen
Abstract:
This paper studies a case where the targeted surface roughness of a fused deposition modeling (FDM) additive manufacturing process is improved. The process is designed to reduce or eliminate defects and improve the process capability indices Cp and Cpk for the FDM additive manufacturing process. The baseline Cp is 0.274 and Cpk is 0.654. This research utilizes the Taguchi methodology to eliminate defects and improve the process. The Taguchi method is used to optimize the additive manufacturing process and the printing parameters that affect the targeted surface roughness of FDM additive manufacturing. The Taguchi L9 orthogonal array is used to organize the effectiveness of the parameters (four controllable parameters and one non-controllable parameter) on the FDM additive manufacturing process. The four controllable parameters are nozzle temperature [°C], layer thickness [mm], nozzle speed [mm/s], and extruder speed [%]. The non-controllable parameter is the environmental temperature [°C]. After the optimization of the parameters, a confirmation print was made to prove that the results can reduce the number of defects and improve the process capability index Cp from 0.274 to 1.605 and the Cpk from 0.654 to 1.233 for the FDM additive manufacturing process. The final results confirmed that the Taguchi methodology is sufficient to improve the surface roughness of the FDM additive manufacturing process.
Keywords: Additive manufacturing, fused deposition modeling, surface roughness, Six-Sigma, Taguchi method, 3D printing.
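For reference, the capability indices quoted above follow the standard definitions and can be computed from measured roughness data as in the sketch below; the roughness values and specification limits are illustrative only.

```python
import numpy as np

# Standard process-capability indices:
#   Cp  = (USL - LSL) / (6 * sigma)
#   Cpk = min(USL - mean, mean - LSL) / (3 * sigma)
ra = np.array([7.9, 8.3, 8.1, 7.6, 8.4, 8.0, 7.8, 8.2, 8.1, 7.9])  # Ra in micrometers (illustrative)
usl, lsl = 9.0, 7.0                          # hypothetical specification limits

mean, sigma = ra.mean(), ra.std(ddof=1)
cp = (usl - lsl) / (6 * sigma)
cpk = min(usl - mean, mean - lsl) / (3 * sigma)
print(round(cp, 3), round(cpk, 3))
```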
5919 Line Balancing in the Hard Disk Drive Process Using Simulation Techniques
Authors: Teerapun Saeheaw, Nivit Charoenchai, Wichai Chattinnawat
Abstract:
Simulation models are an easy way to build up models that represent real-life scenarios, to identify bottlenecks and to enhance system performance. Using a valid simulation model may give several advantages in creating a better manufacturing design in order to improve system performance. This paper presents the results of implementing a simulation model to design the hard disk drive manufacturing process by applying line balancing to improve both the productivity and the quality of the hard disk drive process. The line balance efficiency showed an 86% decrease in work in process, output was increased by an average of 80%, the average time in the system was decreased by 86% and the waiting time was decreased by 90%.
Keywords: Line balancing, Arena, hard disk drive process, simulation, work in process (WIP).
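For reference, the line balance efficiency referred to above is conventionally defined by the standard textbook formula (not a formula restated from the abstract):

```latex
\text{Line balance efficiency} \;=\; \frac{\sum_{i=1}^{k} t_i}{n \cdot CT} \times 100\%,
```

where the t_i are the task times, n is the number of workstations and CT is the cycle time.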
5918 Integration Process of Industrial Design and Engineering Design
Authors: Kazuhide Sugiyama, Hiroshi Osada
Abstract:
Lately, a management strategy that puts Industrial Design (ID) at its core has been recognized as increasingly important, as technology and price alone cannot differentiate a product. The need to shorten product development time also shortens the ID development period, and this necessitates ID process management. This research analyzes the state of the integration process of ID and Engineering Design (ED) for office equipment, which requires the collaboration of ID and ED, in order to clarify the issues affecting development efficiency and to propose solutions.
Keywords: Industrial Design (ID), Engineering Design (ED), Integration process, Office equipment
5917 Bi-Criteria Latency Optimization of Intra- and Inter-Autonomous System Traffic Engineering
Authors: K. Vidya, V.Rhymend Uthariaraj
Abstract:
Traffic Engineering (TE) is the process of controlling how traffic flows through a network in order to facilitate efficient and reliable network operations while simultaneously optimizing network resource utilization and traffic performance. TE improves the management of data traffic within a network and provides better utilization of network resources. Many research works consider intra-AS and inter-AS Traffic Engineering separately, but in reality one influences the other. Hence the effective network performance of both inter- and intra-Autonomous System (AS) traffic is not optimized properly. To achieve a better joint optimization of both intra- and inter-AS TE, we propose a joint optimization technique that considers intra-AS features during inter-AS TE and vice versa. This work considers an important criterion, namely latency, both within an AS and between ASes, and proposes a bi-criteria latency optimization model. Hence the overall network performance can be improved in terms of latency by applying this joint optimization technique.
Keywords: Inter-domain routing, measurement, optimization, performance, traffic engineering.
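One common way to state such a bi-criteria latency objective is as a weighted sum of the two criteria; the formulation below is a generic sketch, not necessarily the exact model proposed in the paper:

```latex
\min_{x}\; \alpha\, L_{\text{intra}}(x) \;+\; (1-\alpha)\, L_{\text{inter}}(x), \qquad 0 \le \alpha \le 1,
```

where L_intra and L_inter denote the aggregate intra-AS and inter-AS latencies of a routing decision x, and α trades one criterion off against the other.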
5916 Using Perspective Schemata to Model the ETL Process
Authors: Valeria M. Pequeno, Joao Carlos G. M. Pires
Abstract:
Data Warehouses (DWs) are repositories which contain the unified history of an enterprise for decision support. The data must be Extracted from information sources, Transformed and integrated to be Loaded (ETL) into the DW, using ETL tools. These tools focus on data movement, where the models are only used as a means to this aim. From a conceptual viewpoint, the authors want to innovate the ETL process in two ways: 1) to make compatibility between models explicit in a declarative fashion, using correspondence assertions, and 2) to identify the instances of different sources that represent the same entity in the real world. This paper presents an overview of the proposed framework to model the ETL process, which is based on the use of a reference model and perspective schemata. This approach provides the designer with a better understanding of the semantics associated with the ETL process.
Keywords: conceptual data model, correspondence assertions, data warehouse, data integration, ETL process, object relational database.
5915 Nanocrystalline Na0.1V2O5.nH2O Xerogel Thin Film for Gas Sensing
Authors: M. S. Al-Assiri, M. M. El-Desoky, Ahmed A. Ibrahim, M. Abaker, A. A. Bahgat
Abstract:
A nanocrystalline thin film of Na0.1V2O5.nH2O xerogel obtained by sol-gel synthesis was used as a gas sensor. The gas sensing properties towards different gases, such as hydrogen, petroleum gas and humidity, were investigated. Applying XRD and TEM, the size of the nanocrystals is found to be 7.5 nm. SEM shows a highly porous structure with submicrometer-sized voids present throughout the sample. FTIR measurement shows different chemical groups identifying the obtained series of gels. The sample was an n-type semiconductor according to the thermoelectric power and electrical conductivity. It can be seen that the sensor response curves from 130 °C to 150 °C show a rapid increase in sensitivity for all types of gas injection, with low response values during the heating period and rapid, high response values during the cooling period. This result may suggest that this material is able to act as a gas sensor during the heating and cooling process.
Keywords: Sol gel, Thermoelectric power, XRD, TEM, Gas sensing.
5914 The Effectiveness of Ultrasound Treatment on the Germination Stimulation of Barley Seed and its Alpha-Amylase Activity
Authors: M. Yaldagard, S.A. Mortazavi, F. Tabatabaie
Abstract:
In the present study, the effects of ultrasound, as an emerging technology, on germination stimulation and on the alpha-amylase activity of dry barley seeds before the steeping stage of the malting process were investigated. All experiments were carried out at 20 kHz on the ultrasonic generator at 3 different ultrasonic intensities (20, 60 and 100% of the total power of the device) and treatment times (5, 10 and 15 min) at constant temperature (30 °C). To determine the effects of these parameters on the enzyme, the Fuwa assay, based on the decreased staining value of blue starch-iodine complexes, was employed to measure the activity. The results of these assays were analyzed with Qualitek4 software using the Taguchi statistical method to evaluate the factors' effects on enzyme activity. It has been found that when malting barley is irradiated with ultrasonic power, a stimulating effect on the enzyme activity occurs.
Keywords: Ultrasound, alpha-amylase activity, stimulation, Taguchi statistical method.
5913 Inverse Dynamic Active Ground Motion Acceleration Inputs Estimation of the Retaining Structure
Authors: Ming-Hui Lee, Iau-Teh Wang
Abstract:
An innovative fuzzy estimator is used to estimate the ground motion acceleration of a retaining structure in this study. The Kalman filter without the input term and the fuzzy weighting recursive least squares estimator are the two main portions of this method. The innovation vector produced by the Kalman filter is applied to the fuzzy weighting recursive least squares estimator to estimate the acceleration input over time. The excellent performance of this estimator is demonstrated by comparing it against different weighting functions, distinct levels of the measurement noise covariance and of the initial process noise covariance. The applicability and precision of the method proposed in this study are verified by comparing the actual values with those obtained by numerical simulation.
Keywords: Earthquake, fuzzy estimator, Kalman filter, recursive least squares estimator.
5912 Assessing Community Participation in Decision-Making Process under Co-Management: A Case Study on Hail Haor, Bangladesh
Authors: R. Ferdous
Abstract:
Power, responsibility sharing, and democratic decision-making are central to the ethos of co-management. It is assumed that involving the local community in the decision-making process can create a sense of ownership and responsibility in that community and motivate it towards collective action. But this paper demonstrates that the process of involving the local community is not simple and straightforward, as it is influenced by structural aspects, power relations among the actors, and socially embedded institutions. These factors shape who will participate, how they will participate and how the local community maneuvers its agency in the decision-making process. To grasp the complexities that materialize in the process of participation and to understand its inclusionary and exclusionary nature, this paper examines the subjective understanding of different stakeholders concerning participation and observes the enabling or constraining factors that affect the community's ability to exercise its agency.
Keywords: Participation, social embeddedness, power, structure.
5911 Quality Based Approach for Efficient Biologics Manufacturing
Authors: Takashi Kaminagayoshi, Shigeyuki Haruyama
Abstract:
To improve the manufacturing efficiency of biologics, such as antibody drugs, a quality engineering framework was designed. Within this framework, critical steps and parameters in the manufacturing process were studied. Identification of these critical steps and critical parameters allows a deeper understanding of manufacturing capabilities, and suggests process control standards, based on actual manufacturing capabilities, to the process development department as part of a PDCA (plan-do-check-act) cycle. This cycle can be applied to each manufacturing process so that it can be standardized, reducing the time needed to establish each new process.
Keywords: Antibody drugs, biologics, manufacturing efficiency, PDCA cycle, quality engineering.
5910 Measurement and Evaluation of Outdoor Lighting Environment at Night in Residential Community in China: A Case Study of Hangzhou
Authors: Jiantao Weng, Yujie Zhao
Abstract:
With the improvement of living quality and the demand for nighttime activities in China, the current situation of the outdoor lighting environment at night needs to be assessed. The lighting environment at night plays an important role in guaranteeing safety at night. Two typical residential communities in Hangzhou were selected. A comprehensive test method for the outdoor lighting environment at night was established, covering the roads, fitness areas, landscape, playground and entrances. Field measurements and questionnaires were conducted in these two residential communities. The characteristics of residents' habits and the subjective evaluation of different aspects of the outdoor lighting environment at night were collected via questionnaire. A safety evaluation system for the outdoor lighting environment at night in residential communities was established. The results show that there is a big difference in illumination between different areas. The lighting uniformities of the roads do not meet the requirements of the Chinese lighting standard. Residents pay more attention to the lighting environment of the fitness areas and roads than to other areas. This study can provide guidance for the design and management of the outdoor lighting environment at night.
Keywords: Residential community, lighting environment, night, field measurement.
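The road lighting uniformity mentioned above is conventionally expressed as the ratio of minimum to average illuminance over the measurement grid (a standard definition, not a value from this study):

```latex
U_0 \;=\; \frac{E_{\min}}{E_{\text{avg}}},
```

where the illuminances E are measured in lux at the grid points on the road surface.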