Search results for: life time

6222 Dynamic Traffic Simulation for Traffic Congestion Problem Using an Enhanced Algorithm

Authors: Wong Poh Lee, Mohd. Azam Osman, Abdullah Zawawi Talib, Ahmad Izani Md. Ismail

Abstract:

Traffic congestion has become a major problem in many countries. One of the main causes of traffic congestion is road merges: vehicles tend to move more slowly when they reach the merging point. In this paper, an enhanced algorithm for traffic simulation based on the fluid-dynamic algorithm and kinematic wave theory is proposed. The enhanced algorithm is used to study traffic congestion at a road merge. This paper also describes the development of a dynamic traffic simulation tool which is used for scenario planning and for forecasting the traffic congestion level at a given time based on defined parameter values. The tool incorporates the enhanced algorithm as well as the two original algorithms. Output from the three above-mentioned algorithms is measured in terms of traffic queue length, travel time and the total number of vehicles passing through the merging point. This paper also suggests an efficient way of reducing traffic congestion at a road merge by analyzing the traffic queue length and travel time.
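
For readers unfamiliar with the kinematic-wave ingredient mentioned above, the following is a minimal sketch of a Godunov/cell-transmission-style density update with a triangular fundamental diagram. It is not the authors' enhanced algorithm, and all parameter values (free-flow speed, wave speed, jam density, cell size) are assumptions made for illustration.

```python
import numpy as np

def ctm_step(rho, dt=1.0, dx=50.0, v_free=25.0, w=6.0, rho_jam=0.15):
    """One cell-transmission (Godunov) update of cell densities rho (veh/m)."""
    q_max = v_free * w * rho_jam / (v_free + w)      # capacity of the triangular diagram
    demand = np.minimum(v_free * rho, q_max)         # what each cell can send downstream
    supply = np.minimum(w * (rho_jam - rho), q_max)  # what each cell can receive
    flux = np.minimum(demand[:-1], supply[1:])       # flow across interior interfaces
    rho_new = rho.copy()
    rho_new[1:-1] += dt / dx * (flux[:-1] - flux[1:])
    return rho_new

# Toy run: a queue growing upstream of a congested cell (e.g., just downstream of a merge)
rho = np.full(20, 0.02)
rho[12:] = 0.12
for _ in range(200):
    rho = ctm_step(rho)
print(rho.round(3))
```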

Keywords: Dynamic, fluid-dynamic, kinematic wave theory, simulation, traffic congestion.

6221 Coupling Time-Domain Analysis for Dynamic Positioning during S-Lay Installation

Authors: Sun Li-ping, Zhu Jian-xun, Liu Sheng-nan

Abstract:

In order to study the performance of the dynamic positioning system during S-lay operations, the dynamic positioning system is simulated including the hull-stinger-pipe coupling effect. The stinger rollers are simulated using generalized elastic contact theory, the stinger is composed of Morison members, and the forces on the pipe are calculated by the lumped-mass method. A time-domain analysis of the fully coupled barge model is carried out, combining a PID controller, a Kalman filter and thrust allocation using the Sequential Quadratic Programming method. The effect of the hull's wave-frequency motion on the pipe-stinger coupling force and on the dynamic positioning system is analyzed, as is the way S-lay operations affect dynamic positioning accuracy. The simulation results are validated by checking the pipe stress against the API criterion. The effects of heave and yaw motion on the hull-stinger-pipe coupling force and on the dynamic positioning system cannot be ignored. It is important to reduce the barge's pitch motion and to lay pipe in head seas in order to improve the safety of the S-lay installation and of dynamic positioning.

Keywords: S-lay operation, dynamic positioning, coupling motion, time domain, allocation of thrust.

6220 Estimation of Asphalt Pavement Surfaces Using Image Analysis Technique

Authors: Mohammad A. Khasawneh

Abstract:

Asphalt concrete pavements gradually lose their skid resistance, causing safety problems especially under wet conditions and at high driving speeds. In order to reproduce the actual field polishing and wearing process of asphalt pavement surfaces in a laboratory setting, several laboratory-scale accelerated polishing devices have been developed by different agencies. To mimic the actual process, friction and texture measuring devices are needed to quantify surface deterioration at different polishing intervals that reflect different stages of the pavement life. The test could still be considered lengthy and, to some extent, labor-intensive. Therefore, there is a need for another method that can assist in investigating bituminous pavement surface characteristics in a practical and time-efficient test procedure.

The purpose of this paper is to utilize a well-developed image analysis technique to characterize asphalt pavement surfaces without the need to use conventional friction and texture measuring devices in an attempt to shorten and simplify the polishing procedure in the lab.

Promising findings showed the possibility of using image analysis in lieu of the labor-sensitive-variable-in-nature friction and texture measurements. It was found that the exposed aggregate surface area of asphalt specimens made from limestone and gravel aggregates produced solid evidence of the validity of this method in describing asphalt pavement surfaces. Image analysis results correlated well with the British Pendulum Numbers (BPN), Polish Values (PV) and Mean Texture Depth (MTD) values.

Keywords: Friction, Image Analysis, Polishing, Statistical Analysis, Texture.

6219 Effect of Fermentation Time on Xanthan Gum Production from Sugar Beet Molasses

Authors: Marzieh Moosavi-Nasab, Safoora Pashangeh, Maryam Rafsanjani

Abstract:

Xanthan gum is a microbial polysaccharide of great commercial significance. The purpose of this study was to select the optimum fermentation time for xanthan gum production by Xanthomonas campestris (NRRL-B-1459) using 10% sugar beet molasses as a carbon source. The pre-heating of sugar beet molasses and the supplementation of the medium were investigated in order to improve xanthan gum production. Maximum xanthan gum production in fermentation media (9.02 g/l) was observed after 4 days shaking incubation at 25°C and 240 rpm agitation speed. A solution of 10% sucrose was used as a control medium. Results indicated that the optimum period for xanthan gum production in this condition was 4 days.

Keywords: Biomass, Molasses, Xanthan gum, Xanthomonas campestris.

6218 Designing and Implementing a Novel Scheduler for Multiprocessor System using Genetic Algorithm

Authors: Iman Zangeneh, Mostafa Moradi, Mazyar Baranpouyan

Abstract:

The use of multiple processors for computing and information processing is increasing rapidly, and the operating speed of such systems, compared with single-processor systems, has a very significant impact on overall performance. An important difference between multiprocessor and single-CPU systems lies in the scheduling policy, whose aim is to reduce the completion time of all processes. Well-known algorithms such as SPT, LPT, LSPT and RLPT exist for this scheduling problem, but none of them leads to an optimal answer. In this paper, we propose a scheduler based on genetic algorithms, an innovative way to finish the whole set of processes faster, and we compare the results with the algorithms mentioned above.
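
A minimal sketch of the kind of genetic-algorithm scheduler described above (not the authors' implementation): each chromosome assigns every task to a processor, fitness is the makespan to be minimized, and simple elitist selection, one-point crossover and random mutation are assumed.

```python
import random

def makespan(assign, times, n_proc):
    """Completion time of the busiest processor under a task -> processor assignment."""
    loads = [0.0] * n_proc
    for task, proc in enumerate(assign):
        loads[proc] += times[task]
    return max(loads)

def ga_schedule(times, n_proc, pop_size=40, gens=200, p_mut=0.1):
    n = len(times)
    pop = [[random.randrange(n_proc) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda a: makespan(a, times, n_proc))
        survivors = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, n)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < p_mut:             # mutation: reassign one task
                child[random.randrange(n)] = random.randrange(n_proc)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda a: makespan(a, times, n_proc))
    return best, makespan(best, times, n_proc)

if __name__ == "__main__":
    task_times = [random.uniform(1, 10) for _ in range(30)]
    assignment, span = ga_schedule(task_times, n_proc=4)
    print("best makespan:", round(span, 2))
```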

Keywords: Multiprocessor system, genetic algorithms, process execution time.

6217 Data Mining Approach for Commercial Data Classification and Migration in Hybrid Storage Systems

Authors: Mais Haj Qasem, Maen M. Al Assaf, Ali Rodan

Abstract:

Parallel hybrid storage systems consist of a hierarchy of different storage devices that vary in terms of data reading speed. As we ascend in the hierarchy, data reading becomes faster. Thus, migrating the application's important data that will be accessed in the near future to the uppermost level will reduce the application's I/O waiting time and, hence, its execution elapsed time. In this research, we implement a trace-driven, two-level parallel hybrid storage system prototype that consists of HDDs and SSDs. The prototype uses data mining techniques to classify the application's data in order to determine its near-future data accesses, in parallel with serving its on-demand requests. The important data (i.e., the data that the application will access in the near future) are continuously migrated to the uppermost level of the hierarchy. Our simulation results show that our data migration approach integrated with data mining techniques reduces the application's execution elapsed time by at least 22% for a variety of traces.
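
As a rough illustration of the classification step (the keywords mention a support vector machine and a recurrent neural network; this sketch uses only an SVM), the code below flags blocks as "hot" or "cold" from hypothetical access-history features. The feature set, labeling rule and threshold are invented for the example and are not taken from the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Toy features per block: [accesses in last window, time since last access, mean inter-access gap]
rng = np.random.default_rng(0)
X = rng.random((2000, 3))
# Hypothetical label: 1 if the block is accessed again "soon" in the trace, else 0
y = (X[:, 0] > 0.6).astype(int)  # stand-in rule; real labels would come from the trace

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)

# Blocks predicted "hot" would be queued for migration from HDD to SSD
hot = clf.predict(X_te)
print("fraction flagged for migration:", hot.mean())
```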

Keywords: Data mining, hybrid storage system, recurrent neural network, support vector machine.

6216 Dynamic Analysis of Porous Media Using Finite Element Method

Authors: M. Pasbani Khiavi, A. R. M. Gharabaghi, K. Abedi

Abstract:

The mechanical behavior of porous media is governed by the interaction between the solid skeleton and the fluid existing inside the pores. The interaction occurs through the interface of grains and fluid. The traditional analysis methods for porous media, based on effective stress and Darcy's law, are unable to account for these interactions. For an accurate analysis, the porous medium is represented as a fluid-filled porous solid on the basis of the Biot theory of wave propagation in poroelastic media. In the Biot formulation, the equations of motion of the soil mixture are coupled with the global mass balance equations to describe the realistic behavior of porous media. Because of irregular geometry, the domain is generally treated as an assemblage of finite elements. In this investigation, the numerical formulation of the field equations governing the dynamic response of fluid-saturated porous media is analyzed and employed for the study of transient wave motion. A finite element model is developed and implemented in a computer code called DYNAPM for dynamic analysis of porous media. The weighted residual method with 8-node elements is used to develop the finite element model, and the analysis is carried out in the time domain considering dynamic excitation and gravity loading. The Newmark time integration scheme, an unconditionally stable implicit method, is used to solve the time-discretized equations. Finally, some numerical examples are presented to show the accuracy and capability of the developed model for a wide variety of behaviors of porous media.
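
For reference, the standard Newmark update used in such time-domain analyses can be written as follows (textbook form, not reproduced from the paper); with γ = 1/2 and β = 1/4 it is the unconditionally stable average-acceleration scheme.

```latex
\begin{aligned}
\mathbf{u}_{n+1} &= \mathbf{u}_n + \Delta t\,\dot{\mathbf{u}}_n
  + \Delta t^{2}\left[\left(\tfrac{1}{2}-\beta\right)\ddot{\mathbf{u}}_n
  + \beta\,\ddot{\mathbf{u}}_{n+1}\right],\\[2pt]
\dot{\mathbf{u}}_{n+1} &= \dot{\mathbf{u}}_n
  + \Delta t\left[(1-\gamma)\,\ddot{\mathbf{u}}_n + \gamma\,\ddot{\mathbf{u}}_{n+1}\right].
\end{aligned}
```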

Keywords: Dynamic analysis, Interaction, Porous media, time domain

6215 A Constitutive Model for Time-Dependent Behavior of Clay

Authors: T. N. Mac, B. Shahbodaghkhan, N. Khalili

Abstract:

A new elastic-viscoplastic (EVP) constitutive model is proposed for the analysis of time-dependent behavior of clay. The proposed model is based on the bounding surface plasticity and the concept of viscoplastic consistency framework to establish continuous transition from plasticity to rate dependent viscoplasticity. Unlike the overstress based models, this model will meet the consistency condition in formulating the constitutive equation for EVP model. The procedure of deriving the constitutive relationship is also presented. Simulation results and comparisons with experimental data are then presented to demonstrate the performance of the model.

Keywords: Bounding surface, consistency theory, constitutive model, viscosity.

6214 Modified Energy and Link Failure Recovery Routing Algorithm for Wireless Sensor Network

Authors: M. Jayekumar, V. Nagarajan

Abstract:

Wireless sensor networks find roles in environmental monitoring, industrial applications, surveillance applications, health monitoring and other supervisory applications. Sensing devices form the basic operational units of the network; they are battery powered, with a limited lifetime. A sensor node spends its limited energy on transmission, reception, routing and sensing. Frequent energy expenditure on the above-mentioned processes degrades the network lifetime. To enhance energy efficiency and network lifetime, we propose a modified energy optimization and post-failure node recovery method, the Energy-Link Failure Recovery Routing (E-LFRR) algorithm. In our E-LFRR algorithm, two phases, namely the Monitored Transmission phase and the Replaced Transmission phase, are devised to combat worst-case link failure conditions. In the Monitored Transmission phase, the Actuator Node monitors and identifies suitable nodes for shortest-path transmission. The Replaced Transmission phase removes the energy-draining node from the active link at an early stage and replaces it with a new node that has sufficient energy. Simulation results illustrate that this combined methodology reduces overhead, energy consumption and delay, and maintains a considerable number of alive nodes, thereby enhancing the network performance.

Keywords: Actuator node, energy efficient routing, energy hole, link failure recovery, link utilization, wireless sensor network.

6213 Authentication Protocol for Wireless Sensor Networks

Authors: Sunil Gupta, Harsh Kumar Verma, AL Sangal

Abstract:

Wireless sensor networks can be used to measure and monitor many challenging problems and typically involve monitoring, tracking and control in areas such as battlefield monitoring, object tracking, habitat monitoring and home sentry systems. However, wireless sensor networks pose unique security challenges, including forgery of sensor data, eavesdropping, denial-of-service attacks, and the physical compromise of sensor nodes. Nodes in a sensor network may vanish due to power exhaustion or malicious attacks. To extend the lifespan of the sensor network, new node deployment is needed. In military scenarios, an intruder may directly deploy malicious nodes or manipulate existing nodes to introduce malicious new nodes through many kinds of attacks. To prevent malicious nodes from joining the sensor network, security is required in the design of sensor network protocols. In this paper, we propose a security framework to provide a complete security solution against the known attacks in wireless sensor networks. Our framework accomplishes node authentication for new nodes with recognition of malicious nodes. When deployed as a framework, a high degree of security is achievable compared with conventional sensor network security solutions. The proposed framework can protect against most of the notorious attacks in sensor networks and attains better computation and communication performance. It differs from conventional authentication methods based on node identity alone: it includes both the identity of nodes and a node security time stamp in the authentication procedure. Hence the security protocol not only verifies the identity of each node but also distinguishes between new nodes and old nodes.

Keywords: Authentication, Key management, Wireless sensor network, Elliptic curve cryptography (ECC).

6212 Online Partial Discharge Source Localization and Characterization Using Non-Conventional Method

Authors: Ammar Anwar Khan, Nissar R. Wani, Nazar Malik, Abdulrehman Al-Arainy, and Saad Alghuwainem

Abstract:

Power cables are vulnerable to failure due to aging or to defects that develop with the passage of time under continuous operation and loading stresses. Partial discharge (PD) detection and characterization provide information on the location, nature, form and extent of the degradation. As a result, PD monitoring has become an important part of condition-based maintenance (CBM) programs among power utilities. Online PD localization of defect sources in a power cable system is possible using the time-of-flight method. Information on the time difference between the main and reflected pulses, together with the cable length, can help in locating the partial discharge source along the cable. However, if the length of the cable is not known, or the defect source is located at the extreme ends or in the middle of the cable, then double-ended measurement is required to indicate the location of the PD source. The use of multiple sensors can also help in discriminating between cable PD and local/external PD. This paper presents the experience and results from online partial discharge measurements conducted in the laboratory and the challenges in partial discharge source localization.

Keywords: Power cables, partial discharge localization, HFCT, condition based monitoring.

6211 Improving Decision Support for Organ Transplant

Authors: I. McCulloh, A. Placona, D. Stewart, D. Gause, K. Kiernan, M. Stuart, C. Zinner, L. Cartwright

Abstract:

We find in our data that an alarming number of viable deceased donor kidneys are discarded every year in the US, while waitlisted candidates are dying every day. We observe that as many as 85% of transplanted organs are refused at least once for a patient who scored higher on the match list. There are hundreds of clinical variables involved in making a clinical transplant decision, and there is rarely an ideal match. Decision makers exhibit an optimism bias whereby they may refuse an organ offer assuming a better match is imminent. We propose a semi-parametric Cox proportional hazards model, augmented by an accelerated failure time model based on patient-specific suitable organ supply and demand, to estimate a time-to-next-offer. Performance is assessed with Cox-Snell residuals and decision curve analysis, demonstrating improved decision support for up to a 5-year outlook. Providing clinical decision-makers with quantitative evidence of likely patient outcomes (e.g., time to next offer and the mortality associated with waiting) may improve decisions and reduce optimism bias, thus reducing discarded organs and matching more patients on the waitlist.
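
A minimal sketch of fitting a Cox proportional hazards model for a time-to-next-offer outcome is shown below, using the lifelines library. The data frame, column names and covariates are entirely hypothetical, and the paper's accelerated-failure-time augmentation and decision curve analysis are not reproduced.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # pip install lifelines

# Hypothetical waitlist data: days until the next suitable organ offer, whether an
# offer actually occurred (event=1) or the record was censored (event=0), plus covariates.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "days_to_offer": rng.exponential(90, n).round() + 1,
    "offer_received": rng.integers(0, 2, n),
    "blood_type_O": rng.integers(0, 2, n),
    "cpra": rng.random(n) * 100,   # calculated panel reactive antibody (illustrative)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_offer", event_col="offer_received")
cph.print_summary()

# Median predicted time-to-next-offer for a few candidates
covariates = df.drop(columns=["days_to_offer", "offer_received"]).head()
print(cph.predict_median(covariates))
```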

Keywords: Decision science, KDPI, optimism bias, organ transplant.

6210 Block-Based 2D to 3D Image Conversion Method

Authors: S. Sowmyayani, V. Murugan

Abstract:

With the advent of three-dimensional (3D) technology, there has been a great deal of research on converting 2D images to 3D images. The main difference between 2D and 3D is the visual illusion of depth in 3D images, and many depth estimation techniques have been proposed in recent years. The objective of this paper is to convert 2D images to 3D images with less computation time. For this, the input image is divided into blocks from which the depth information is obtained. With the depth information, a depth map is generated, and the 3D image is then obtained by warping the original image with the depth map. The proposed method is tested on the Make3D and NYU-V2 datasets, and the experimental results are compared with other recent methods. The proposed method proves to work with less computation time and good accuracy.
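
The sketch below is a heavily simplified stand-in for the pipeline described above: it assigns a crude per-block depth value and then synthesizes a second view by shifting pixels horizontally in proportion to depth. The depth heuristic, block size and maximum disparity are invented for illustration and are not the paper's method.

```python
import numpy as np

def block_depth_map(gray, block=16):
    """Crude per-block depth proxy: block mean intensity, scaled to [0, 1].
    Only a stand-in for the paper's block-wise depth estimation."""
    h, w = gray.shape
    depth = np.zeros((h, w), dtype=float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            depth[y:y+block, x:x+block] = gray[y:y+block, x:x+block].mean() / 255.0
    return depth

def warp_view(img, depth, max_disp=8):
    """Synthesize a second view by shifting each pixel horizontally by a disparity
    proportional to its depth (basic depth-image-based rendering)."""
    h, w = depth.shape
    out = np.zeros_like(img)
    xs = np.arange(w)
    for y in range(h):
        shift = (depth[y] * max_disp).astype(int)
        new_x = np.clip(xs - shift, 0, w - 1)
        out[y, new_x] = img[y, xs]
    return out

gray = (np.random.rand(128, 128) * 255).astype(np.uint8)
depth = block_depth_map(gray)
right_view = warp_view(gray, depth)
print(right_view.shape)
```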

Keywords: Depth map, 3D image warping, image rendering, bilateral filter, minimum spanning tree.

6209 New Approaches on Stability Analysis for Neural Networks with Time-Varying Delay

Authors: Qingqing Wang, Shouming Zhong

Abstract:

Utilizing the Lyapunov functional method and combining linear matrix inequality (LMI) techniques with the integral inequality approach (IIA) to analyze the global asymptotic stability of delayed neural networks (DNNs), a new sufficient criterion ensuring the global stability of DNNs is obtained. The criteria are formulated in terms of a set of linear matrix inequalities, which can be checked efficiently using standard numerical packages. Numerical examples are considered in order to show that the stability condition in this paper gives much less conservative results than those in the literature.
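
For context, a delayed neural network of the kind analyzed in such work is commonly written in the following form (a standard model from the literature, not necessarily the exact system treated in this paper), where τ(t) is the time-varying delay and f(·) the activation function:

```latex
\dot{x}(t) = -C\,x(t) + A\,f\!\big(x(t)\big) + B\,f\!\big(x(t-\tau(t))\big) + u,
\qquad 0 \le \tau(t) \le \bar{\tau},\quad \dot{\tau}(t) \le \mu .
```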

Keywords: Neural networks, Globally asymptotic stability, LMI approach, IIA approach, Time-varying delay.

6208 A Study of the Trade-off Energy Consumption-Performance-Schedulability for DVFS Multicore Systems

Authors: Jalil Boudjadar

Abstract:

Dynamic Voltage and Frequency Scaling (DVFS) multicore platforms are promising execution platforms that enable high computational performance, less energy consumption and flexibility in scheduling the system processes. However, the resulting interleaving and memory interference together with per-core frequency tuning make real-time guarantees hard to be delivered. Besides, energy consumption represents a strong constraint for the deployment of such systems on energy-limited settings. Identifying the system configurations that would achieve a high performance and consume less energy while guaranteeing the system schedulability is a complex task in the design of modern embedded systems. This work studies the trade-off between energy consumption, cores utilization and memory bottleneck and their impact on the schedulability of DVFS multicore time-critical systems with a hierarchy of shared memories. We build a model-based framework using Parametrized Timed Automata of UPPAAL to analyze the mutual impact of performance, energy consumption and schedulability of DVFS multicore systems, and demonstrate the trade-off on an actual case study.

Keywords: Time-critical systems, multicore systems, schedulability analysis, performance, memory interference, energy consumption.

6207 Visual-Graphical Methods for Exploring Longitudinal Data

Authors: H. W. Ker

Abstract:

Longitudinal data typically exhibit changes over time, nonlinear growth patterns, between-subjects variability, and within-subject errors showing heteroscedasticity and dependence. Their exploration is more complicated than that of cross-sectional data. The purpose of this paper is to organize and integrate various visual-graphical techniques for exploring longitudinal data. By applying the proposed methods, investigators can answer research questions that include characterizing or describing the growth patterns at both the group and individual levels, identifying the time points where important changes occur as well as unusual subjects, selecting suitable statistical models, and suggesting possible within-error variance structures.

Keywords: Data exploration, exploratory analysis, HLMs/LMEs, longitudinal data, visual-graphical methods.

6206 A Model for Optimal Design of Mixed Renewable Warranty Policy for Non-Repairable Weibull Life Products under Conflict between Customer and Manufacturer Interests

Authors: Saleem Z. Ramadan

Abstract:

A model is presented to find the optimal design of a mixed renewable warranty policy for non-repairable Weibull-life products. The optimal design considers the conflict of interests between the customer and the manufacturer: the customer's interests are a longer full-rebate coverage period and a longer total warranty coverage period, while the manufacturer's interests are a lower warranty cost and lower risk. The design factors are the full-rebate and total warranty coverage periods. Results showed that the mixed policy is better than the full-rebate policy in terms of risk and total warranty coverage period in all three bathtub regions. In addition, the linear policy is better than the mixed policy in the infant mortality and constant failure regions, while the mixed policy is better than the linear policy in the ageing region of the model. Furthermore, the results showed that using a burn-in period for infant mortality products reduces warranty cost and risk.

Keywords: Reliability, Mixed warranty policy, Optimization, Weibull Distribution.

6205 Using Gaussian Process in Wind Power Forecasting

Authors: Hacene Benkhoula, Mohamed Badreddine Benabdella, Hamid Bouzeboudja, Abderrahmane Asraoui

Abstract:

Wind is a random variable that is difficult to master; for this reason, we developed mathematical and statistical methods to model and forecast wind power. Gaussian Processes (GP) are one of the most widely used families of stochastic processes for modeling dependent data observed over time, over space, or over both time and space. A GP serves as an underlying process for the unknown function used to solve the problem. The purpose of this paper is to present how to forecast wind power using GPs. The Gaussian process method for forecasting is presented, and to validate the presented approach, a simulation in the MATLAB environment is given.
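
A minimal sketch of GP regression applied to a synthetic wind-power series is shown below. The paper works in MATLAB; this Python analogue with scikit-learn, a toy daily pattern and an RBF-plus-noise kernel is only meant to illustrate the idea, and none of the data or kernel settings come from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy wind-power series with a daily-ish pattern plus noise (synthetic data)
rng = np.random.default_rng(42)
t = np.arange(0, 200, dtype=float)
power = 50 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)

# Fit a GP on the first 150 points and forecast the remaining 50
X_train, y_train = t[:150, None], power[:150]
X_test = t[150:, None]

kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=5.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

mean, std = gp.predict(X_test, return_std=True)
print("first forecast step: %.1f +/- %.1f" % (mean[0], 2 * std[0]))
```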

Keywords: Forecasting, Gaussian process, modeling, wind power.

6204 A New Scheduling Algorithm Based on Traffic Classification Using Imprecise Computation

Authors: Farzad Abtahi, Sahar Khanmohamadi, Bahram Sadeghi Bigham

Abstract:

Wireless channels are characterized by serious bursty and location-dependent errors. Many packet scheduling algorithms have been proposed for wireless networks to guarantee fairness and delay bounds. However, most existing schemes do not consider the differences in traffic nature among packet flows, which causes the delay-weight coupling problem. In particular, serious queuing delays may be incurred for real-time flows. In this paper, a scheduling algorithm is proposed that takes the traffic type of each flow into consideration when scheduling packets; it also provides scheduling flexibility by trading off video quality to meet the playback deadline.

Keywords: Data communication, Real-time, Scheduling, Video transport.

6203 Relative Radiometric Correction of Cloudy Multitemporal Satellite Imagery

Authors: Seema Biday, Udhav Bhosle

Abstract:

Repeated observation of a given area over time yields potential for many forms of change detection analysis. These repeated observations are confounded in terms of radiometric consistency due to changes in sensor calibration over time, differences in illumination, observation angles and variation in atmospheric effects. This paper demonstrates the applicability of an empirical relative radiometric normalization method to a set of multitemporal cloudy images acquired by the Resourcesat1 LISS III sensor. The objective of this study is to detect and remove cloud cover and to normalize the images radiometrically. Cloud detection is achieved using the Average Brightness Threshold (ABT) algorithm. The detected cloud is removed and replaced with data from other images of the same area. After cloud removal, the proposed normalization method is applied to reduce the radiometric influence caused by non-surface factors. This process identifies landscape elements whose reflectance values are nearly constant over time, i.e., a subset of non-changing pixels is identified using a frequency-based correlation technique. The quality of the radiometric normalization is statistically assessed by the R2 value and the mean square error (MSE) between each pair of analogous bands.
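
A minimal sketch of the gain/offset style of relative radiometric normalization described above is given below, assuming the set of non-changing (pseudo-invariant) pixels has already been identified; the synthetic bands, the simulated drift and the 30% invariant-pixel mask are invented for illustration.

```python
import numpy as np

def relative_normalize(subject_band, reference_band, invariant_mask):
    """Linearly map a subject-date band to a reference-date band using only
    pixels flagged as (pseudo-)invariant between the two acquisitions."""
    x = subject_band[invariant_mask].astype(float)
    y = reference_band[invariant_mask].astype(float)
    gain, offset = np.polyfit(x, y, 1)          # least-squares gain/offset
    return gain * subject_band + offset, gain, offset

# Toy bands: a reference image and a subject image with simulated radiometric drift
rng = np.random.default_rng(0)
ref = rng.integers(50, 200, size=(100, 100)).astype(float)
subj = 0.8 * ref + 15 + rng.normal(0, 2, ref.shape)

# Pretend 30% of pixels were identified as non-changing (e.g., via correlation)
mask = rng.random(ref.shape) < 0.3
normalized, g, o = relative_normalize(subj, ref, mask)
mse = np.mean((normalized - ref) ** 2)
print(f"gain={g:.3f} offset={o:.2f} MSE={mse:.2f}")
```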

Keywords: Correlation, Frequency domain, Multitemporal, Relative Radiometric Correction

6202 An Approach for Reducing the Computational Complexity of LAMSTAR Intrusion Detection System using Principal Component Analysis

Authors: V. Venkatachalam, S. Selvan

Abstract:

The security of computer networks plays a strategic role in modern computer systems. Intrusion Detection Systems (IDS) act as the 'second line of defense' placed inside a protected network, looking for known or potential threats in network traffic and/or audit data recorded by hosts. We developed an Intrusion Detection System using the LAMSTAR neural network to learn patterns of normal and intrusive activities and to classify observed system activities, and we compared the performance of the LAMSTAR IDS with other classification techniques using 5 classes of the KDDCup99 data. The LAMSTAR IDS gives better performance at the cost of high computational complexity, training time and testing time when compared to other classification techniques (Binary Tree classifier, RBF classifier, Gaussian Mixture classifier). We further reduced the computational complexity of the LAMSTAR IDS by reducing the dimension of the data using principal component analysis, which in turn reduces the training and testing time with almost the same performance.
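
The following is a minimal sketch of the PCA dimension-reduction step described above. The data are synthetic stand-ins for KDDCup99-style records (41 features and 5 classes are assumed here), and an ordinary MLP classifier stands in for the LAMSTAR network, for which no off-the-shelf implementation is assumed.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for preprocessed intrusion-detection records
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 41))
y = rng.integers(0, 5, size=5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)

# Keep enough principal components to explain ~95% of the variance
pca = PCA(n_components=0.95).fit(scaler.transform(X_tr))
Z_tr = pca.transform(scaler.transform(X_tr))
Z_te = pca.transform(scaler.transform(X_te))
print("features: %d -> %d" % (X.shape[1], Z_tr.shape[1]))

# Train a stand-in classifier on the reduced features and evaluate it
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=200).fit(Z_tr, y_tr)
print("test accuracy:", clf.score(Z_te, y_te))
```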

Keywords: Binary Tree Classifier, Gaussian Mixture, Intrusion Detection System, LAMSTAR, Radial Basis Function.

6201 Solid State Drive End to End Reliability Prediction, Characterization and Control

Authors: Mohd Azman Abdul Latif, Erwan Basiron

Abstract:

A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified using standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team drawn from all the key stakeholders embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustainable in mass production. The paper will discuss a comprehensive development framework, covering the SSD end to end from design to assembly, in-line inspection and in-line testing, that is able to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD will go through an intense reliability margin investigation with a focus on assembly process attributes, process equipment control, in-process metrology and the forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build up the reliability prediction modeling. Next, for the design validation process, the reliability prediction, specifically a solder joint simulator, will be established. The SSDs will be stratified into Non-Operating and Operating tests with a focus on solder joint reliability and connectivity/component latent failures, with prevention through design intervention and containment through the Temperature Cycle Test (TCT). Some of the SSDs will be subjected to physical solder joint analysis, namely Dye and Pry (DP), and to cross-section analysis. The results will be fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven to work, the monitor phase will be implemented, whereby the Design for Assembly (DFA) rules will be updated. At this stage, the design changes and the process and equipment parameters are in control. Predictable product reliability early in product development will enable on-time qualification sample delivery to the customer, will optimize product development validation and the effective use of development resources, and will avoid forced late investment to bandage end-of-life product failures. Understanding the critical-to-reliability parameters earlier will allow a focus on increasing the product margin, which will increase customer confidence in product reliability.

Keywords: e2e reliability prediction, SSD, TCT, Solder Joint Reliability, NUDD, connectivity issues, qualifications, characterization and control.

6200 Investigating the Demand for Short-shelf Life Food Products for SME Wholesalers

Authors: Yamini Raju, Parminder S. Kang, Adam Moroz, Ross Clement, Ashley Hopwell, Alistair Duffy

Abstract:

Accurate forecasting of fresh produce demand is one of the challenges faced by Small and Medium Enterprise (SME) wholesalers. This paper is an attempt to understand the causes of the high level of variability, such as weather, holidays etc., in the demand faced by SME wholesalers; understanding the significance of such unidentified factors may improve forecasting accuracy. The paper presents the current literature on the factors used to predict demand and the existing forecasting techniques for short shelf-life products. It then investigates a variety of possible internal and external factors, some of which are not used by other researchers in the demand prediction process. The results presented in this paper are further analysed using a number of techniques to minimize noise in the data. For the analysis, past sales data (January 2009 to May 2014) from a UK-based SME wholesaler are used, and the results presented are limited to the product 'Milk' for cafés in Derby. A correlation analysis is performed to check the dependence of the actual demand on each variability factor, and a PCA analysis is then carried out to understand the significance of the factors identified using correlation. The PCA results suggest that cloud cover, weather summary and temperature are the most significant factors that can be used in forecasting the demand. The correlation of the above three factors increases at the monthly level and becomes more stable compared with the weekly and daily demand.

Keywords: Demand Forecasting, Deteriorating Products, Food Wholesalers, Principal Component Analysis and Variability Factors.

6199 Discrete Vector Control for Induction Motor Drives with the Rotor Time Constant Update

Authors: A. Larabi, M. S. Boucherit

Abstract:

In this paper, we investigate the vector control of an induction machine, taking into account the discretization problems of the control, with the aim of showing how to include the current control, with rotor time constant update, in a discrete model. The simulation results obtained are very satisfactory. This was possible thanks to a good choice of the values of the parameters of the regulators used, which demonstrates the soundness of the method used for choosing the parameters of the discrete regulators. The simulation results are presented at the end of this paper.

Keywords: Induction motor, discrete vector control, PI regulator, Park transformation, PWM.

6198 An Approximation Method for Exact Boundary Controllability of Euler-Bernoulli System

Authors: Abdelaziz Khernane, Naceur Khelil, Leila Djerou

Abstract:

The aim of this work is to study the numerical implementation of the Hilbert Uniqueness Method for the exact boundary controllability of the Euler-Bernoulli beam equation. This study may be difficult, depending on the problem under consideration (geometry, control and dimension) and the numerical method used. Knowledge of the asymptotic behaviour of the control governing the system at time T may be useful for its calculation; this idea is developed in this study. As a first step, we characterize the solution by a minimization principle, and we then propose a method for its resolution in order to approximate the control steering the considered system to rest at time T.

Keywords: Boundary control, exact controllability, finite difference methods, functional optimization.

6197 Inventory Control for a Joint Replenishment Problem with Stochastic Demand

Authors: Bassem Roushdy, Nahed Sobhy, Abdelrhim Abdelhamid, Ahmed Mahmoud

Abstract:

Most papers model the Joint Replenishment Problem (JRP) as a (kT, S) policy, where kT is a multiple of a common review period T and S is a predefined order-up-to level. In general, the (T, S) policy is characterized by a long out-of-control period, which requires a large amount of safety stock compared with the (R, Q) policy. In this paper, a probabilistic model is built in which one item, call it item (i), with the shortest time between order intervals (T), is modeled under an (R, Q) policy and its inventory is continuously reviewed, while the rest of the items (j) are periodically reviewed at a definite time corresponding to item (i).
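
To make the (R, Q) continuous-review policy applied to item (i) concrete, here is a minimal single-item simulation sketch; the demand distribution, lead time and parameter values are invented for illustration, and the joint, multi-item aspect of the paper's model is not reproduced.

```python
import random

def simulate_rq(days=365, R=20, Q=60, lead_time=3, mean_demand=8, seed=0):
    """Continuous-review (R, Q) policy: whenever the inventory position drops to R
    or below, order a fixed quantity Q that arrives after a fixed lead time."""
    random.seed(seed)
    on_hand, on_order = Q, []          # on_order holds (arrival_day, qty) pairs
    stockout_days = 0
    for day in range(days):
        # Receive any orders due today
        arrived = sum(q for d, q in on_order if d == day)
        on_order = [(d, q) for d, q in on_order if d != day]
        on_hand += arrived
        # Random daily demand (rounded exponential is enough for a sketch)
        demand = round(random.expovariate(1 / mean_demand))
        if demand > on_hand:
            stockout_days += 1
            on_hand = 0
        else:
            on_hand -= demand
        # Reorder check on inventory position = on hand + on order
        position = on_hand + sum(q for _, q in on_order)
        if position <= R:
            on_order.append((day + lead_time, Q))
    return stockout_days

print("stockout days:", simulate_rq())
```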

Keywords: Inventory management, Joint replenishment, policy evaluation, stochastic process

6196 Development of an Adhesive from Prosopis africana Seed Endosperm (Okpeyi)

Authors: Nwangwu Florence Chinyere, Ene Rosemary Ndidiamaka

Abstract:

This research work is an experimental study of the development of an adhesive from Prosopis africana endosperm. The Prosopis seeds for this work were obtained from Enugu State in the South East of Nigeria. The seeds were prepared by separating the endosperm from the seed coat and cotyledon. Three methods were used to separate them: an acidic method, a roasting method and a boiling method. 20 g of seed were treated with sulphuric acid at different concentrations (25, 40, 55, 70, and 85% w/w) at 100°C for a constant time (30 minutes) under continuous stirring with a magnetic stirrer. Also, 20 g of seed were treated with sulphuric acid at a concentration of 40% w/w at 100°C for different times (10, 15, 20, 25, 30 minutes) under continuous stirring with a magnetic stirrer. Finally, 20 g of seed were treated with sulphuric acid at a concentration of 40% w/w at different temperatures (20°C, 40°C, 60°C, 80°C, and 100°C) for a constant time (30 minutes) under continuous stirring with a magnetic stirrer. The whole extracted endosperm constituted the adhesive. The physical properties of the adhesive were determined (appearance, odour, taste, solubility, pH, size, and binding strength). The percentage adhesive yield makes the commercialization of the seed in Nigeria possible and profitable. The very high viscosity attained at low concentrations makes Prosopis adhesive an excellent thickener in the food industry.

Keywords: Endosperm, adhesive, ethanol, Prosopis africana seed.

6195 Affecting Factors of the Mechanical Properties to Phenolic/Fiber Composite

Authors: Thirapat Kitinirunkul, Nattawat Winya, Komson Prapunkarn

Abstract:

The influences of the amount of phenolic, the curing temperature and the curing time on the mechanical properties of a phenolic/fiber composite were investigated using a two-level factorial design, which was used to determine the effects of those factors on the mechanical properties. The purpose of this study was to investigate the effects of the amount of phenolic, the curing temperature and the curing time on the composite in order to determine the best condition for the mechanical properties according to MIL-I-24768, namely a tensile strength greater than 103 MPa.

Keywords: Phenolic Resin, Composite, Fiber Composite, Affecting Factors.

6194 Effect of Pack Aluminising Conditions on βNiAl Coatings

Authors: A. D. Chandio, P. Xiao

Abstract:

In this study, nickel aluminide coatings were deposited onto CMSX-4 single crystal superalloy and pure Ni substrates using an in-situ chemical vapour deposition (CVD) technique. The microstructural evolution and coating thickness (CT) were studied upon variation of the processing conditions, i.e., time and temperature. The results demonstrated (under identical conditions) that the coating formed on pure Ni contains no substrate entrapments and has a lower CT in comparison to the one deposited on the CMSX-4 counterpart. In addition, the interdiffusion zone (IDZ) of the Ni substrate is γ’-Ni3Al, whereas in the CMSX-4 alloy it is the βNiAl phase. The higher CT on the CMSX-4 superalloy is attributed to the presence of the γ-Ni/γ’-Ni3Al structure, which contains ~15 at.% Al before deposition (already present in the superalloy). The two main deposition parameters (time and temperature) were also studied in addition to the standard comparison of substrate effects. The coating formation time was found to have a profound effect on CT, whilst temperature was found to change the coating activities. In addition, the CT showed a linear trend from 800 to 1000 °C, after which a reduction was observed. This was attributed to the change in coating activities.

Keywords: βNiAl, in-situ CVD, CT, CMSX-4, Ni, microstructure.

6193 Incorporation of Long-Term Redundancy in ECG Time Domain Compression Methods through Curve Simplification and Block-Sorting

Authors: Bachir Boucheham, Youcef Ferdi, Mohamed Chaouki Batouche

Abstract:

We suggest a novel method to incorporate long-term redundancy (LTR) in signal time-domain compression methods. The proposition is based on block-sorting and curve simplification and is illustrated on the ECG signal as a post-processor for the FAN method. Test applications of the new, so-obtained FAN+ method using the MIT-BIH database show substantial improvement of the compression ratio-distortion behavior, yielding a higher-quality reconstructed signal.

Keywords: ECG compression, Long-term redundancy, Block-sorting, Curve Simplification.
