Search results for: network performance.
5411 Monitoring and Fault-Recovery Capacity with Waveguide Grating-based Optical Switch over WDM/OCDMA-PON
Authors: Yao-Tang Chang, Chuen-Ching Wang, Shu-Han Hu
Abstract:
In order to provide flexibility as well as survivability over a passive optical network (PON), a new automatic random fault-recovery mechanism based on an array-waveguide-grating-based (AWG-based) optical switch (OSW) is presented. First, a combined wavelength-division-multiplexing and optical code-division multiple-access (WDM/OCDMA) scheme is configured to meet the requirements of the various geographical locations between the optical network units (ONUs) and the optical line terminal (OLT). The AWG-based optical switch is designed as a central star-mesh topology to avoid or reduce duplicated redundant elements such as fibers and transceivers. Hence, with a simple monitoring and routing-switch algorithm, random fault-recovery capacity is achieved over the bi-directional (up/downstream) WDM/OCDMA scheme. When a distribution fiber (DF) fails or the bit-error rate (BER) exceeds the 10⁻⁹ requirement, the primary/slave AWG-based OSWs are adjusted and controlled dynamically to restore the affected ONU groups via the other working DFs immediately.
Keywords: Random fault-recovery mechanism, array-waveguide-grating-based optical switch (AWG-based OSW), wavelength-division multiplexing and optical code-division multiple-access (WDM/OCDMA).
5410 Data Recording for Remote Monitoring of Autonomous Vehicles
Authors: Rong-Terng Juang
Abstract:
Autonomous vehicles offer the possibility of significant benefits to social welfare. However, fully automated cars are unlikely to appear in the near future. To speed the adoption of self-driving technologies, many governments worldwide are passing laws requiring data recorders for the testing of autonomous vehicles. Currently, a self-driving vehicle (e.g., a shuttle bus) has to be monitored from a remote control center. When an autonomous vehicle encounters an unexpected driving environment, such as road construction or an obstruction, it should request assistance from a remote operator. Nevertheless, large amounts of data, including images and radar and lidar data, have to be transmitted from the vehicle to the remote center. Therefore, this paper proposes a data compression method for in-vehicle networks to support remote monitoring of autonomous vehicles. Firstly, the time-series data are rearranged into a multi-dimensional signal space; on arrival, new controller area network (CAN) data are mapped onto a time-data two-dimensional space associated with the specific CAN identity. Secondly, the data are sampled based on differential sampling. Finally, the whole set of data is encoded using existing algorithms such as Huffman, arithmetic, and codebook encoding methods. To evaluate system performance, the proposed method was deployed on an in-house built autonomous vehicle. The testing results show that the amount of data can be reduced to as little as 1/7 of the raw data.
Keywords: Autonomous vehicle, data recording, remote monitoring, controller area network.
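To make the recording pipeline concrete, the sketch below delta-encodes per-identity CAN values and then entropy-codes the deltas with a Huffman table, mirroring the differential sampling and encoding steps described in the abstract. It is a minimal illustration: the frame layout, the toy signal values and the choice of Huffman coding (one of the three encoders mentioned) are assumptions, not the recorder format used in the paper.

```python
import heapq
from collections import Counter, defaultdict

def differential_sample(frames):
    """Keep a (time, delta) pair only when the value of a CAN identity changes."""
    per_id, last = defaultdict(list), {}
    for t, can_id, value in frames:                  # processed in arrival order
        if can_id not in last:
            per_id[can_id].append((t, value))        # first sample stored as-is
        elif value != last[can_id]:
            per_id[can_id].append((t, value - last[can_id]))
        last[can_id] = value                         # unchanged samples are dropped
    return per_id

def huffman_table(symbols):
    """Build a prefix-code table (symbol -> bit string) from observed symbols."""
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                               # degenerate case: one symbol
        return {s: "0" for s in heap[0][2]}
    nxt = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        codes = {s: "0" + c for s, c in lo[2].items()}
        codes.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], nxt, codes])
        nxt += 1
    return heap[0][2]

# Toy trace: (time, CAN identity, decoded signal value)
frames = [(t, 0x101, 50 + (t // 10)) for t in range(100)] + \
         [(t, 0x1F3, 7) for t in range(100)]
deltas = [d for samples in differential_sample(frames).values() for _, d in samples]
table = huffman_table(deltas)
print("compressed payload bits:", sum(len(table[d]) for d in deltas))
```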
5409 A PSO-based SSSC Controller for Improvement of Transient Stability Performance
Authors: Sidhartha Panda, N. P. Padhy
Abstract:
The application of a Static Synchronous Series Compensator (SSSC) controller to improve the transient stability performance of a power system is thoroughly investigated in this paper. The design problem of the SSSC controller is formulated as an optimization problem, and the Particle Swarm Optimization (PSO) technique is employed to search for optimal controller parameters. By minimizing a time-domain objective function that involves the deviation of the oscillatory rotor angle of the generator, the transient stability performance of the system is improved. The proposed controller is tested on a weakly connected power system subjected to different severe disturbances. Non-linear simulation results are presented to show the effectiveness of the proposed controller and its ability to provide efficient damping of low-frequency oscillations. It is also observed that the proposed SSSC controller greatly improves the voltage profile of the system under severe disturbances.
Keywords: Particle swarm optimization, transient stability, power system oscillations, SSSC.
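The optimization step can be pictured with a generic global-best PSO loop, as sketched below; the stand-in quadratic objective replaces the non-linear power-system simulation that, in the paper, would supply the time-domain rotor-angle deviation, and the swarm parameters are illustrative assumptions.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO: returns the best parameter vector found and its cost."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))   # positions = candidate controller gains
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                        # keep gains inside their bounds
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

def objective(params):
    """Stand-in cost: in the paper this would integrate the rotor-angle deviation
    obtained from a non-linear simulation of the disturbed system."""
    return float(np.sum((params - np.array([0.8, 0.2, 1.5])) ** 2))

best, cost = pso_minimize(objective, bounds=[(0, 2), (0, 1), (0, 5)])
print("best gains:", best, "cost:", cost)
```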
5408 Digital Predistorter with Pipelined Architecture Using CORDIC Processors
Authors: Kyunghoon Kim, Sungjoon Shim, Jun Tae Kim, Jong Tae Kim
Abstract:
In a wireless communication system, a predistorter (PD) is often employed to alleviate nonlinear distortions caused by operating a power amplifier near saturation, thereby improving the system performance and reducing the interference to adjacent channels. This paper presents a new adaptive polynomial digital predistorter (DPD). The proposed DPD uses Coordinate Rotation Digital Computer (CORDIC) processors and performs the predistortion with a pipelined architecture. It is simpler and faster than a conventional adaptive polynomial DPD. The performance of the proposed DPD is demonstrated by MATLAB simulation.
5407 Network-Constrained AC Unit Commitment under Uncertainty Using a Benders Decomposition Approach
Authors: B. Janani, S. Thiruvenkadam
Abstract:
In this work, the impact of adopting a stochastic approach to day-ahead unit commitment is evaluated. Comparisons between stochastic and deterministic unit commitment solutions are provided. The unit commitment model consists in minimizing the total operation costs while considering the units' technical constraints, such as ramping rates and minimum up and down times. Load shedding and wind power spilling are acceptable, but at inflated operational costs. The evaluation process consists in calculating the optimal unit commitment and verifying the fulfillment of the considered constraints. For the calculation of the optimal unit commitment, an algorithm based on Benders decomposition, namely dual dynamic programming, was developed. Two approaches were considered for the construction of stochastic solutions. Data related to wind power outputs from two different operational days are considered in the analysis. Stochastic and deterministic solutions are compared against the actual measured wind power output on the operational day. Through a technique capable of finding representative wind power scenarios and their probabilities, the expected final operational cost can be analyzed in more detail.
Keywords: Benders’ decomposition, network constrained AC unit commitment, stochastic programming, wind power uncertainty.
5406 Towards Growing Self-Organizing Neural Networks with Fixed Dimensionality
Authors: Guojian Cheng, Tianshi Liu, Jiaxin Han, Zheng Wang
Abstract:
Competitive learning is an adaptive process in which the neurons in a neural network gradually become sensitive to different input pattern clusters. The basic idea behind Kohonen's Self-Organizing Feature Maps (SOFM) is competitive learning. SOFM can generate mappings from high-dimensional signal spaces to lower-dimensional topological structures. The main features of this kind of mapping are topology preservation, feature mapping, and probability distribution approximation of input patterns. To overcome some limitations of SOFM, e.g., a fixed number of neural units and a topology of fixed dimensionality, Growing Self-Organizing Neural Networks (GSONN) can be used. A GSONN can change its topological structure during learning: it grows by learning and shrinks by forgetting. To speed up training and convergence, a new variant of GSONN, twin growing cell structures (TGCS), is presented here. This paper first gives an introduction to competitive learning, SOFM and its variants. Then, we discuss some GSONN with fixed dimensionality, which include growing cell structures, its variants, and the authors' model: TGCS. The paper ends with a comparison of test results and conclusions.
Keywords: Artificial neural networks, competitive learning, growing cell structures, self-organizing feature maps.
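As a baseline for the growing variants discussed above, the plain competitive-learning update of a fixed-size Kohonen SOFM is sketched below; the grid size and the learning-rate and neighbourhood decay schedules are illustrative assumptions.

```python
import numpy as np

def train_sofm(data, grid=(10, 10), iters=5000, lr0=0.5, sigma0=3.0, seed=0):
    """Fixed-topology Kohonen SOFM trained by competitive learning."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # competition: the best-matching unit wins
        bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=2)), (h, w))
        # cooperation: neighbours of the winner are pulled towards x, less so over time
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
        neighbourhood = np.exp(-dist2 / (2.0 * sigma ** 2))[..., None]
        weights += lr * neighbourhood * (x - weights)
    return weights

# Map 3-D input patterns (e.g. RGB colours) onto a 10x10 grid.
data = np.random.default_rng(1).random((1000, 3))
codebook = train_sofm(data)
print("codebook shape:", codebook.shape)
```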
5405 Performance Trade-Off of File System between Overwriting and Dynamic Relocation on a Solid State Drive
Authors: Choulseung Hyun, Hunki Kwon, Jaeho Kim, Eujoon Byun, Jongmoo Choi, Donghee Lee, Sam H. Noh
Abstract:
Most file systems overwrite modified file data and metadata in their original locations, while the Log-structured File System (LFS) dynamically relocates them to other locations. We design and implement the Evergreen file system, which can select between overwriting and relocation for each block of a file or metadata. The Evergreen file system can therefore achieve superior write performance by sequentializing write requests (similar to LFS-style relocation) when space utilization is low, and by overwriting when utilization is high. Another challenging issue is identifying the performance benefits of LFS-style relocation over overwriting on the newly introduced SSD (Solid State Drive), which has only Flash-memory chips and control circuits, without mechanical parts. Our experimental results measured on an SSD show that relocation outperforms overwriting when space utilization is below 80%, and vice versa.
Keywords: Evergreen file system, overwrite, relocation, solid state drive.
5404 Thermoelectric Generators as Alternative Source for Electric Power
Authors: L. C. Ding, Bradley. G. Orr, K. Rahaoui, S. Truza, A. Date, A. Akbarzadeh
Abstract:
Thermoelectrics has been a blooming field of research for the last decade, owing to the large amount of heat sources available to be harvested and to thermoelectric devices being eco-friendly and static in operation. This paper reports the performance of thermoelectric generators (TEGs) based on the bulk material bismuth telluride, Bi2Te3. The performance of the TEGs is then evaluated with the TEGs attached to a plastic (polyethylene) sheet, in contrast to the common method of attaching the TEGs to a metal surface.
Keywords: Electric power, heat transfer, renewable energy, thermoelectric generator.
5403 Simulation on the Performance of Carbon Dioxide and HFC-125 Heat Pumps for Medium- and High-Temperature Heating
Authors: Young-Jin Baik, Minsung Kim
Abstract:
In order to compare the performance of carbon dioxide and HFC-125 heat pumps for medium- and high-temperature heating, both heat pump cycles were optimized using a simulation method. To fairly compare the performance of cycles using different working fluids, each cycle was optimized from the viewpoint of heating COP with respect to two design parameters. The first is the gas cooler exit temperature, and the other is the ratio of the overall heat conductance of the gas cooler to the combined overall heat conductance of the gas cooler and the evaporator. The inlet and outlet temperatures of the secondary fluid of the gas cooler were fixed at 40/90°C and 40/150°C. The results show that the HFC-125 heat pump has a 6% higher heating COP than the carbon dioxide heat pump when the heat sink exit temperature is fixed at 90°C, while the latter outperforms the former when the heat sink exit temperature is fixed at 150°C, under the simulation conditions considered in the present study.
Keywords: Carbon dioxide, HFC-125, transcritical, heat pump.
5402 A Critical Study of Neural Networks Applied to Ion-Exchange Process
Authors: John Kabuba, Antoine Mulaba-Bafubiandi, Kim Battle
Abstract:
This paper presents a critical study of the application of neural networks to the ion-exchange process. Ion exchange is a complex non-linear process involving many factors that influence the ion uptake mechanisms from the pregnant solution. The following step is the elution. Published data present empirical isotherm equations with definite shortcomings, resulting in unreliable predictions. Although the neural network simulation technique has a number of disadvantages, including its "black box" nature and a limited ability to explicitly identify possible causal relationships, it has the advantage of implicitly handling complex nonlinear relationships between dependent and independent variables. In the present paper, a neural network model based on the Levenberg-Marquardt back-propagation algorithm was developed using a three-layer approach, with a tangent sigmoid transfer function (tansig) at the hidden layer with 11 neurons and a linear transfer function (purelin) at the output layer. The above-mentioned approach has been used to test the effectiveness in simulating ion-exchange processes. The modeling results showed excellent agreement between the experimental data and the predicted values of copper ions removed from aqueous solutions.
Keywords: Copper, ion-exchange process, neural networks, simulation.
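A compact version of the described architecture (one hidden layer of 11 tanh units and a linear output) can be set up with scikit-learn, as sketched below. The Levenberg-Marquardt optimizer is not available there, so L-BFGS stands in as a quasi-Newton substitute, and the synthetic inputs are placeholders for the real process variables.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical inputs: pH, initial concentration, contact time, resin dose;
# target is the fraction of copper ions removed. Replace with real measurements.
rng = np.random.default_rng(0)
X = rng.uniform([2.0, 10.0, 5.0, 0.5], [6.0, 200.0, 120.0, 5.0], size=(200, 4))
y = 1.0 / (1.0 + np.exp(-(0.5 * X[:, 0] - 0.01 * X[:, 1] + 0.02 * X[:, 2] + 0.3 * X[:, 3] - 2)))

model = make_pipeline(
    StandardScaler(),
    # 11 tanh hidden units mirror the paper's tansig layer; the identity output of
    # MLPRegressor plays the role of purelin. scikit-learn has no Levenberg-Marquardt
    # solver, so L-BFGS is used here as a quasi-Newton stand-in.
    MLPRegressor(hidden_layer_sizes=(11,), activation="tanh", solver="lbfgs",
                 max_iter=5000, random_state=0),
)
model.fit(X, y)
print("R^2 on training data:", model.score(X, y))
```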
5401 High Speed Video Transmission for Telemedicine using ATM Technology
Authors: J. P. Dubois, H. M. Chiu
Abstract:
In this paper, we study statistical multiplexing of VBR video in ATM networks. ATM promises to provide high-speed, real-time, multi-point to central video transmission for telemedicine applications in rural hospitals and in emergency medical services. Video coders are known to produce variable bit rate (VBR) signals, and the effects of aggregating these VBR signals need to be determined in order to design a telemedicine network infrastructure capable of carrying them. We first model the VBR video signal and simulate it using a generic continuous-data autoregressive (AR) scheme. We carry out the queueing analysis with the Fluid Approximation Model (FAM) and the Markov Modulated Poisson Process (MMPP). The study has shown a trade-off: multiplexing VBR signals reduces burstiness and improves resource utilization; however, the buffer size needs to be increased, with an associated economic cost. We also show that the MMPP model and the Fluid Approximation Model fit best, respectively, the cell region and the burst region. Therefore, a hybrid of MMPP and FAM completely characterizes the overall performance of the ATM statistical multiplexer. The ramifications of this technology are clear: speed, reliability (lower loss rate and jitter), and increased capacity in video transmission for telemedicine. With migration to full IP-based networks still a long way from achieving both high speed and high quality of service, the proposed ATM architecture will remain of significant use for telemedicine.
Keywords: ATM, multiplexing, queueing, telemedicine, VBR.
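The effect of aggregating VBR sources onto a fixed-rate link can be explored with a toy fluid-style buffer simulation, sketched below; the AR(1) parameters, number of sources and buffer size are illustrative assumptions rather than the coder statistics or the MMPP/FAM analysis of the paper.

```python
import numpy as np

def ar1_vbr_source(n_frames, mean=2.0, rho=0.9, sigma=0.5, seed=0):
    """One VBR video source modelled as a first-order autoregressive bit-rate process (Mbit/s)."""
    rng = np.random.default_rng(seed)
    rate = np.empty(n_frames)
    rate[0] = mean
    for t in range(1, n_frames):
        rate[t] = mean + rho * (rate[t - 1] - mean) + rng.normal(0.0, sigma)
    return np.clip(rate, 0.0, None)

def multiplex(sources, link_capacity, buffer_size):
    """Feed the aggregate of several VBR sources into a finite buffer drained at link_capacity."""
    aggregate = np.sum(sources, axis=0)
    backlog, lost = 0.0, 0.0
    for arriving in aggregate:
        backlog += arriving
        backlog = max(backlog - link_capacity, 0.0)   # service during this frame slot
        if backlog > buffer_size:                     # overflow is counted as loss
            lost += backlog - buffer_size
            backlog = buffer_size
    return lost / aggregate.sum()                     # loss ratio

n_src = 10
sources = [ar1_vbr_source(10_000, seed=s) for s in range(n_src)]
print("loss ratio:", multiplex(sources, link_capacity=n_src * 2.2, buffer_size=50.0))
```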
5400 Comparison of Number of Waves Surfed and Duration Using Global Positioning System and Inertial Sensors
Authors: J. Madureira, R. Lagido, I. Sousa
Abstract:
Surfing is an increasingly popular sport, and its performance evaluation is often qualitative. This work aims at using a smartphone to collect and analyze GPS and inertial sensor data in order to obtain quantitative metrics of surfing performance. Two approaches are compared for the detection of wave rides, computing the number of waves ridden in a surfing session, the starting time of each wave, and its duration. The first approach is based on computing the velocity from the Global Positioning System (GPS) signal and finding the velocity thresholds that allow identifying the start and end of each wave ride. The second approach adds information from the Inertial Measurement Unit (IMU) of the smartphone to the velocity thresholds obtained from the GPS unit to determine the start and end of each wave ride. The two methods were evaluated using GPS and IMU data from two surfing sessions and validated with similar metrics extracted from video data collected from the beach. The second method, combining GPS and IMU data, was found to be more accurate in determining the number of waves, start time, and duration. This paper shows that it is feasible to use smartphones for the quantification of performance metrics during surfing. In particular, the waves ridden and their duration can be accurately determined using the smartphone GPS and IMU.
Keywords: Inertial Measurement Unit (IMU), Global Positioning System (GPS), smartphone, surfing performance.
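The GPS-only approach, thresholding board speed to find the start and end of each ride, reduces to a few lines, as in the sketch below; the threshold values, the hysteresis and the minimum ride duration are assumptions for illustration rather than the calibrated values from the study.

```python
import numpy as np

def detect_wave_rides(speed, t, v_start=2.5, v_stop=1.5, min_duration=3.0):
    """Detect wave rides from GPS speed (m/s) with hysteresis thresholds."""
    rides, start = [], None
    for i, v in enumerate(speed):
        if start is None and v >= v_start:      # ride begins when speed exceeds the start threshold
            start = t[i]
        elif start is not None and v <= v_stop: # ride ends when speed drops below the stop threshold
            if t[i] - start >= min_duration:
                rides.append((start, t[i] - start))   # (start time, duration)
            start = None
    return rides

t = np.arange(0, 60, 0.2)                       # 5 Hz GPS samples over one minute
speed = 0.5 + 4.0 * ((t > 10) & (t < 18)) + 3.5 * ((t > 35) & (t < 44))
for start, dur in detect_wave_rides(speed, t):
    print(f"wave at t={start:.1f}s lasting {dur:.1f}s")
```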
5399 The Performance of Alternating Top-Bottom Strategy for Successive Over Relaxation Scheme on Two Dimensional Boundary Value Problem
Authors: M. K. Hasan, Y. H. Ng, J. Sulaiman
Abstract:
This paper presents the implementation of a new ordering strategy for the Successive Over-Relaxation scheme on two-dimensional boundary value problems. The strategy involves two sweep directions applied alternately, from the top and from the bottom of the solution domain. The method is shown to significantly reduce the number of iterations needed to converge. Four numerical experiments were carried out to examine the performance of the new strategy.
Keywords: Two dimensional boundary value problems, Successive Overrelaxation scheme, Alternating Top-Bottom strategy, fast convergence
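A minimal sketch of the alternating strategy on the model Poisson problem -∇²u = f with Dirichlet boundaries is given below: even-numbered sweeps apply the SOR update from the top row downwards and odd-numbered sweeps from the bottom row upwards. The grid size, relaxation factor and stopping tolerance are illustrative choices, not those of the paper's experiments.

```python
import numpy as np

def sor_alternating(u, f, h, omega=1.8, tol=1e-8, max_sweeps=10_000):
    """SOR for -Laplace(u) = f on a square grid, alternating the sweep direction each iteration.
    Boundary values of u are kept fixed (Dirichlet conditions)."""
    for sweep in range(max_sweeps):
        rows = range(1, u.shape[0] - 1) if sweep % 2 == 0 else range(u.shape[0] - 2, 0, -1)
        diff = 0.0
        for i in rows:                           # top-to-bottom, then bottom-to-top
            for j in range(1, u.shape[1] - 1):
                new = (1 - omega) * u[i, j] + omega * 0.25 * (
                    u[i - 1, j] + u[i + 1, j] + u[i, j - 1] + u[i, j + 1] + h * h * f[i, j])
                diff = max(diff, abs(new - u[i, j]))
                u[i, j] = new                    # Gauss-Seidel style in-place update
        if diff < tol:
            return u, sweep + 1
    return u, max_sweeps

n = 33
h = 1.0 / (n - 1)
u = np.zeros((n, n))                             # homogeneous Dirichlet boundary
f = np.ones((n, n))                              # constant source term
u, sweeps = sor_alternating(u, f, h)
print("converged in", sweeps, "sweeps")
```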
5398 Design of High Gain, High Bandwidth Op-Amp for Reduction of Mismatch Currents in Charge Pump PLL in 180 nm CMOS Technology
Authors: R. H. Talwekar, S. S. Limaye
Abstract:
Designing a charge pump with a high-gain op-amp is a challenging task for obtaining a faithful response. The design of a high-performance phase-locked loop requires the design of a high-performance charge pump. We have designed an operational amplifier to reduce the error caused by high-speed glitches in a transistor and by mismatch currents. A separate op-amp has been designed in 180 nm CMOS technology with the Cadence Virtuoso tool. This paper describes the design of a high-performance charge pump for a GHz CMOS PLL targeting orthogonal frequency division multiplexing (OFDM) applications. A high-speed, low-power-consumption op-amp with more than 500 MHz bandwidth has been designed to increase the speed of the charge pump in the phase-locked loop.
Keywords: Charge pump (CP), orthogonal frequency division multiplexing (OFDM), phase locked loop (PLL), phase frequency detector (PFD), voltage controlled oscillator (VCO).
5397 Performance Assessment of Computational Grid on Weather Indices from HOAPS Data
Authors: Madhuri Bhavsar, Anupam K Singh, Shrikant Pradhan
Abstract:
Long-term rainfall analysis and prediction is a challenging task, especially in the modern world where the impact of global warming is creating complications in environmental issues. These data-intensive factors require high-performance computational modeling for accurate prediction. This research paper describes a prototype that is designed and developed in a grid environment using a number of coupled software infrastructural building blocks. This grid-enabled system provides the demanding computational power, efficiency, resources, user-friendly interface, secured job submission, and high throughput. The results obtained using sequential execution and grid-enabled execution show that computational performance is enhanced by 36% to 75% for a decade of climate parameters. The large variation in performance can be attributed to the varying degree of computational resources available for job execution. Grid computing enables the dynamic runtime selection, sharing, and aggregation of distributed and autonomous resources, which plays an important role not only in business, but also in scientific implications and social surroundings. This research paper attempts to explore the grid-enabled computing capabilities on weather indices from HOAPS data for climate impact modeling and change detection.
Keywords: Climate model, computational grid, grid application, heterogeneous grid.
5396 Optimal Convolutive Filters for Real-Time Detection and Arrival Time Estimation of Transient Signals
Authors: Michal Natora, Felix Franke, Klaus Obermayer
Abstract:
Linear convolutive filters are fast to compute and to apply, and are thus often used for real-time processing of continuous data streams. In the case of transient signals, a filter has not only to detect the presence of a specific waveform, but to estimate its arrival time as well. In this study, a measure is presented which indicates the performance of detectors in achieving both of these tasks simultaneously. Furthermore, a new sub-class of linear filters within the class of filters which minimize the quadratic response is proposed. The proposed filters are more flexible than the existing ones, like the adaptive matched filter or the minimum power distortionless response beamformer, and prove to be superior with respect to that measure in certain settings. Simulations of a real-time scenario confirm the advantage of these filters as well as the usefulness of the performance measure.
Keywords: Adaptive matched filter, minimum variance distortionless response, beamforming, Capon beamformer, linear filters, performance measure.
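For reference, the classical (non-adaptive) matched filter against which the proposed filters are compared amounts to a correlation followed by peak picking, as sketched below; the template shape, noise level and detection threshold are arbitrary assumptions.

```python
import numpy as np

def matched_filter_detect(x, template, threshold):
    """Detect a known transient waveform in noise and estimate its arrival index."""
    # correlation = convolution with the time-reversed template (classic matched filter)
    y = np.convolve(x, template[::-1], mode="valid")
    y /= np.linalg.norm(template) ** 2           # normalise so a perfect match gives ~1
    peaks = np.flatnonzero(y > threshold)
    return peaks, y

rng = np.random.default_rng(1)
template = np.hanning(32) * np.sin(np.linspace(0, 4 * np.pi, 32))
x = rng.normal(0, 0.3, 2000)
x[700:732] += template                           # one transient inserted at sample 700
peaks, y = matched_filter_detect(x, template, threshold=0.5)
if peaks.size:
    print("estimated arrival near sample", peaks[np.argmax(y[peaks])])
```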
5395 A Web Oriented Spread Spectrum Watermarking Procedure for MPEG-2 Videos
Authors: Franco Frattolillo
Abstract:
In the last decade, digital watermarking procedures have become increasingly applied to implement the copyright protection of multimedia digital content distributed on the Internet. To this end, it is worth noting that many watermarking procedures for images and videos proposed in the literature are based on spread spectrum techniques. However, some scepticism about the robustness and security of such watermarking procedures has arisen because of some documented attacks which claim to render the inserted watermarks undetectable. On the other hand, web content providers wish to exploit watermarking procedures characterized by flexible and efficient implementations which can be easily integrated into their existing web services frameworks or platforms. This paper presents how a simple spread spectrum watermarking procedure for MPEG-2 videos can be modified to be exploited in web contexts. To this end, the proposed procedure has been made secure and robust against some well-known and dangerous attacks. Furthermore, its basic scheme has been optimized by making the insertion procedure adaptive with respect to the terminals used to open the videos and the network transactions carried out to deliver them to buyers. Finally, two different implementations of the procedure have been developed: the former is a high-performance parallel implementation, whereas the latter is a portable Java and XML based implementation. Thus, the paper demonstrates that a simple spread spectrum watermarking procedure, with limited and appropriate modifications to the embedding scheme, can still represent a valid alternative to many other well-known and more recent watermarking procedures proposed in the literature.
Keywords: Copyright protection, digital watermarking, intellectual property protection.
5394 Effects of Hidden Unit Sizes and Autoregressive Features in Mental Task Classification
Authors: Ramaswamy Palaniappan, Nai-Jen Huan
Abstract:
Classification of electroencephalogram (EEG) signals extracted during mental tasks is a technique that is actively pursued for Brain Computer Interface (BCI) designs. In this paper, we compared the classification performance of univariate autoregressive (AR) and multivariate autoregressive (MAR) models for representing EEG signals that were extracted during different mental tasks. A Multilayer Perceptron (MLP) neural network (NN) trained by the backpropagation (BP) algorithm was used to classify these features into the different categories representing the mental tasks. Classification performance was also compared across different mental task combinations and two sets of hidden units (HU): 2 to 10 HU in steps of 2, and 20 to 100 HU in steps of 20. Five different mental tasks from 4 subjects were used in the experimental study, and combinations of 2 different mental tasks were studied for each subject. Three different feature extraction methods of 6th order were used to extract features from these EEG signals: AR coefficients computed with Burg's algorithm (ARBG), AR coefficients computed with a stepwise least squares algorithm (ARLS), and MAR coefficients computed with a stepwise least squares algorithm. The best results were obtained with 20 to 100 HU using ARBG. It is concluded that i) it is important to choose suitable mental tasks for different individuals for a successful BCI design, ii) larger numbers of HU are more suitable, and iii) ARBG is the most suitable feature extraction method.
Keywords: Autoregressive, brain-computer interface, electroencephalogram, neural network.
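The ARBG feature extraction step, 6th-order AR coefficients estimated with Burg's method per EEG channel, can be written directly in NumPy using the standard shrinking-array form of the recursion, as sketched below; the synthetic signal merely stands in for a real EEG segment, and the resulting coefficient vectors would be concatenated across channels before classification with the MLP.

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: AR coefficients a[1..order] of the model x[n] = -sum(a[k] x[n-k]) + e[n]."""
    ef = np.asarray(x, dtype=float).copy()       # forward prediction errors
    eb = ef.copy()                               # backward prediction errors
    a = np.ones(1)
    for _ in range(order):
        efp, ebp = ef[1:], eb[:-1]
        k = -2.0 * np.dot(ebp, efp) / (np.dot(efp, efp) + np.dot(ebp, ebp))  # reflection coefficient
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]                      # Levinson-style update of the AR polynomial
        ef, eb = efp + k * ebp, ebp + k * efp    # update (and shorten) the error sequences
    return a[1:]                                 # drop the leading 1

# Synthetic stand-in for one EEG channel segment (250 Hz, 1 s)
rng = np.random.default_rng(0)
t = np.arange(250) / 250.0
segment = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(250)
features = burg_ar(segment, order=6)             # 6 features per channel
print(features)
```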
5393 ECG-Based Heartbeat Classification Using Convolutional Neural Networks
Authors: Jacqueline R. T. Alipo-on, Francesca I. F. Escobar, Myles J. T. Tan, Hezerul Abdul Karim, Nouar AlDahoul
Abstract:
Electrocardiogram (ECG) signal analysis and processing are crucial in the diagnosis of cardiovascular diseases, which are considered one of the leading causes of mortality worldwide. However, the traditional rule-based analysis of large volumes of ECG data is time-consuming, labor-intensive, and prone to human error. With the advancement of the programming paradigm, algorithms such as machine learning have been increasingly used to perform analysis of ECG signals. In this paper, various deep learning algorithms were adapted to classify five types of heartbeats. The dataset used in this work is the synthetic MIT-Beth Israel Hospital (MIT-BIH) Arrhythmia dataset produced from generative adversarial networks (GANs). Various deep learning models such as the ResNet-50 convolutional neural network (CNN), a 1-D CNN, and long short-term memory (LSTM) were evaluated and compared. ResNet-50 was found to outperform the other models in terms of recall and F1 score, with five-fold average scores of 98.88% and 98.87%, respectively. The 1-D CNN, on the other hand, was found to have the highest average precision of 98.93%.
Keywords: Heartbeat classification, convolutional neural network, electrocardiogram signals, ECG signals, generative adversarial networks, long short-term memory, LSTM, ResNet-50.
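A small 1-D CNN of the general kind compared in the paper can be assembled with Keras as in the sketch below; the layer sizes, the 187-sample beat length and the random placeholder data are assumptions standing in for the GAN-generated MIT-BIH beats, and the network is not one of the exact architectures evaluated (ResNet-50, 1-D CNN, LSTM).

```python
import numpy as np
from tensorflow.keras import layers, models

N_CLASSES = 5      # five heartbeat types, as in the paper
BEAT_LEN = 187     # assumption: each beat resampled to 187 samples (a common preprocessing choice)

def build_1d_cnn():
    """Two convolution/pooling stages followed by a small dense classifier head."""
    return models.Sequential([
        layers.Input(shape=(BEAT_LEN, 1)),
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])

model = build_1d_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Random placeholder data stands in for the GAN-augmented MIT-BIH beats used in the paper.
x = np.random.randn(256, BEAT_LEN, 1).astype("float32")
y = np.random.randint(0, N_CLASSES, size=256)
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```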
5392 A Parametric Study of an Inverse Elastostatics Problem (IESP) Using Simulated Annealing, Hooke & Jeeves and Sequential Quadratic Programming in Conjunction with Finite Element and Boundary Element Methods
Authors: Ioannis N. Koukoulis, Clio G. Vossou, Christopher G. Provatidis
Abstract:
The aim of the current work is to present a comparison among three popular optimization methods for the inverse elastostatics problem (IESP) of flaw detection within a solid. In more detail, the performance of a simulated annealing, a Hooke & Jeeves, and a sequential quadratic programming algorithm was studied in the test case of one circular flaw in a plate, solved by both the boundary element method (BEM) and the finite element method (FEM). The proposed optimization methods use a cost function that utilizes the displacements of the static response. The methods were ranked according to the required number of iterations to converge and to their ability to locate the global optimum. Hence, a clear impression regarding the performance of the aforementioned algorithms in flaw identification problems was obtained. Furthermore, the coupling of BEM or FEM with these optimization methods was investigated in order to track differences in their performance.
Keywords: Elastostatic, inverse problem, optimization.
5391 Simulation of Agri-Food Supply Chains
Authors: Sherine Beshara, Khaled S. El-Kilany, Noha M. Galal
Abstract:
Supply chain management has become more challenging with the emerging trends of globalization and sustainability. Lately, research related to perishable product supply chains, in particular agricultural food products, has emerged. This is attributed to the additional complexity of managing this type of supply chain under recently increased concerns about public health, food quality, food safety, demand and price variability, and the limited lifetime of these products. Inventory management for agri-food supply chains is of vital importance due to product perishability and customers' demand for quality. This paper concentrates on developing a simulation model of a real-life case study of a two-echelon production-distribution system for agri-food products. The objective is to improve a set of performance measures by developing a simulation model that helps in evaluating and analysing the performance of these supply chains. Simulation results showed that the model can help in improving overall system performance.
Keywords: Agri-food supply chains, inventory model, modelling and simulation, supply chain.
5390 Key Competences in Economics and Business Field: The Employers’ Side of the Story
Authors: Bruno Škrinjarić
Abstract:
Rapid technological developments and the increase in organizations’ interdependence on an international scale are changing the traditional workplace paradigm. A key feature of the knowledge-based economy is that employers are looking for individuals who possess both specific academic skills and knowledge and the capability to be proactive and respond to problems creatively and autonomously. The focus of this paper is workers with an Economics and Business background, and its goals are threefold: (1) to explore a wide range of competences and identify which are the most important to employers; (2) to investigate the existence and magnitude of the gap between the required and possessed levels of a certain competency; and (3) to inquire how this gap is connected with the performance of a company. A study was conducted on a representative sample of Croatian enterprises during the spring of 2016. Results show that generic, rather than specific, competences are more important to employers, and that the gap between the relative importance of a certain competence and its current representation in the existing workforce is greater for generic competences than for specific ones. Finally, the results do not support the hypothesis that this gap is correlated with firms’ performance.
Keywords: Competency gap, competency matching, key competences, firm performance.
5389 Fuzzy Wavelet Packet based Feature Extraction Method for Multifunction Myoelectric Control
Authors: Rami N. Khushaba, Adel Al-Jumaily
Abstract:
The myoelectric signal (MES) is one of the biosignals utilized to help humans control equipment. Recent approaches to MES classification for controlling prosthetic devices with pattern recognition techniques have revealed two problems: first, the classification performance of the system starts degrading when the number of motion classes to be classified increases; second, in order to solve the first problem, additional complicated methods have been utilized, which increase the computational cost of a multifunction myoelectric control system. In an effort to solve these problems and to achieve a feasible design for real-time implementation with high overall accuracy, this paper presents a new method for feature extraction in MES recognition systems. The method works by extracting features using the Wavelet Packet Transform (WPT) applied to the MES from multiple channels, and then employs the Fuzzy c-means (FCM) algorithm to generate a measure that judges the suitability of features for classification. Finally, Principal Component Analysis (PCA) is utilized to reduce the size of the data before computing the classification accuracy with a multilayer perceptron neural network. The proposed system produces powerful classification results (99% accuracy) by using only a small portion of the original feature set.
Keywords: Biomedical signal processing, data mining and information extraction, machine learning, rehabilitation.
5388 The Application of Dynamic Network Process to Environment Planning Support Systems
Authors: Wann-Ming Wey
Abstract:
In recent years, in addition to facing external threats such as energy shortages and climate change, many cities have come to see traffic congestion and environmental pollution as pressing problems. Considering that private automobile-oriented urban development has produced many negative environmental and social impacts, transit-oriented development (TOD) has been considered a sustainable urban model. TOD encourages public transport combined with friendly walking and cycling environment designs; non-motorized modes also help improve human health, save energy, and reduce carbon emissions. Because environmental changes often affect planners’ decision-making, this research applies the dynamic network process (DNP), which incorporates the time-dependent concept, to promoting friendly walking and cycling environment designs as an advanced planning support system for environmental improvement.
This research aims to discuss what kinds of design strategies can improve a friendly walking and cycling environment under TOD. First of all, we collate and analyze environment design factors by reviewing the relevant literature, and divide fifteen such factors into the three aspects of “safety”, “convenience”, and “amenity”. Furthermore, we utilize a fuzzy Delphi technique (FDT) expert questionnaire to filter out the more important design criteria for the study case. Finally, we utilize a DNP expert questionnaire to obtain the weight changes at different time points for each design criterion. Based on the changing trends of each criterion weight, we are able to develop appropriate design strategies as a reference for planners to allocate resources in a dynamic environment. To illustrate the approach proposed in this research, Taipei City has been used as an empirical study, and the results are analyzed in depth to explain the application of our proposed approach.
Keywords: Environment Planning Support Systems, Walking and Cycling, Transit-oriented Development (TOD), Dynamic Network Process (DNP).
5387 Essential Oil Blend Containing Capsaicin, Carvacrol and Cinnamaldehyde in Broiler Production Performance and Intestinal Morphometrics
Authors: Marianne D. M. Rendon, Sonia P. Acda, Veneranda A. Magpantay, Norma N. Fajardo, Amado A. Angeles
Abstract:
The aim of this study is to evaluate the effect of supplementing broiler starter diets with different levels of an essential oil blend (EOB) containing capsaicin, carvacrol, and cinnamaldehyde on the performance of broilers. A total of 300 day-old straight-run Cobb broiler chicks were randomly assigned to three treatments after 7-day group brooding, following a completely randomized design (CRD). Birds assigned to treatment 1 were given the starter basal diet, while those in treatments 2 and 3 were given the starter basal diet with 400 mg/kg antibiotic growth promoter (AGP) and 150 mg/kg EOB, respectively, until the 28th day. The basal finisher feed was given to all treatments until harvest. Following 37 days of feeding, body weight gain, feed consumption, feed efficiency, dressing percentage, livability, and jejunal villi height were determined. Results showed no significant differences (P>0.05) in growth performance. However, villi height and crypt depth were significantly lower for birds fed the EOB.
Keywords: Broiler, capsaicin, carvacrol, cinnamaldehyde, essential oil.
5386 Achieving High Availability by Implementing Beowulf Cluster
Authors: A.F.A. Abidin, N.S.M. Usop
Abstract:
A computer cluster is a group of tightly coupled computers that work together closely so that in many respects they can be viewed as a single computer. The components of a cluster are commonly, but not always, connected to each other through fast local area networks. Clusters are usually deployed to improve performance and/or availability over that provided by a single computer, while typically being much more cost-effective than single computers of comparable speed or availability. This paper proposes a way to implement a Beowulf cluster in order to achieve high performance as well as high availability.
Keywords: Beowulf cluster, grid computing, GridMPI, MPICH.
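On a Beowulf cluster built with MPICH or GridMPI, work is typically distributed through MPI; the mpi4py sketch below splits a Monte Carlo estimate of pi across the nodes and reduces the partial counts on the master node. The example task and sample count are arbitrary illustrations, not part of the paper.

```python
# Run with, e.g.: mpirun -np 4 python pi_cluster.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each node draws its own random points and counts how many fall inside the unit circle.
n_per_node = 1_000_000
rng = np.random.default_rng(rank)
pts = rng.random((n_per_node, 2))
inside = int(np.sum(pts[:, 0] ** 2 + pts[:, 1] ** 2 <= 1.0))

# The partial counts are combined on the master node (rank 0).
total = comm.reduce(inside, op=MPI.SUM, root=0)
if rank == 0:
    print("pi ~", 4.0 * total / (n_per_node * size))
```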
5385 FEA-Based Calculation of Performances of IPM Machines with Five Topologies for Hybrid-Electric Vehicle Traction
Authors: Aimeng Wang, Dejun Ma, Hui Wang
Abstract:
The paper presents a detailed calculation of the characteristics of five permanent magnet machines of different topologies for high-performance traction, including hybrid-electric vehicles, using the finite element analysis (FEA) method. These machines include V-shape single-layer interior PM, W-shape single-layer interior PM, segmented interior PM, and surface PM on the rotor, all with distributed windings on the stator. The performance characteristics, which include the back-EMF voltage and its harmonics, magnet mass, iron loss, and ripple torque, are compared and analyzed. A 7.5 kW IPM prototype was tested and used to verify the finite-element analysis results. The aim of the paper is to give some guidance and reference for machine designers who are interested in IPM machine selection for high-performance traction applications.
Keywords: Interior permanent magnet machine, finite-element analysis (FEA), five topologies, electric vehicle.
5384 Analysis of Joint Source Channel LDPC Coding for Correlated Sources Transmission over Noisy Channels
Authors: Marwa Ben Abdessalem, Amin Zribi, Ammar Bouallègue
Abstract:
In this paper, a joint source-channel (JSC) coding scheme based on LDPC codes is investigated. We consider two concatenated LDPC codes, one to compress a correlated source and the second to protect it against channel degradations. The original information can be reconstructed at the receiver by a joint decoder, where the source decoder and the channel decoder run in parallel by transferring extrinsic information. We investigate the performance of the JSC LDPC code in terms of Bit-Error Rate (BER) in the case of transmission over an Additive White Gaussian Noise (AWGN) channel, and for different source and channel rate parameters. We emphasize how the JSC LDPC scheme presents a performance trade-off depending on the channel state and on the source correlation. We show that the JSC LDPC code is an efficient solution for relatively low Signal-to-Noise Ratio (SNR) channels, especially with highly correlated sources. Finally, a source-channel rate optimization has to be applied to guarantee the best JSC LDPC system performance for a given channel.
Keywords: AWGN channel, belief propagation, joint source channel coding, LDPC codes.
5383 A Distributed Cryptographically Generated Address Computing Algorithm for Secure Neighbor Discovery Protocol in IPv6
Authors: M. Moslehpour, S. Khorsandi
Abstract:
Due to the shortage of IPv4 addresses, the transition to IPv6 has gained significant momentum in recent years. Like the Address Resolution Protocol (ARP) in IPv4, the Neighbor Discovery Protocol (NDP) provides functions such as address resolution in IPv6. Despite its functionality, NDP is vulnerable to several attacks. To mitigate these attacks, Internet Protocol Security (IPsec) was introduced, but it was not efficient due to its limitations. Therefore, the SEND protocol was proposed for automatic protection of the auto-configuration process; it secures the neighbor discovery and address resolution processes. To defend against threats to NDP’s integrity and identity, Cryptographically Generated Addresses (CGA) and asymmetric cryptography are used by SEND. Besides the advantages of SEND, its disadvantages, such as the cost of the CGA computation and the sequential nature of the CGA generation algorithm, are considerable. In this paper, we parallelize this process across network resources in order to improve it. In addition, we compare the CGA generation time between self-computing and distributed-computing processes. We focus on the impact of malicious nodes on the CGA generation time in the network. According to the results, although malicious nodes participate in the generation process, the CGA generation time is still less than when it is computed by a single node. With a Trust Management System, detecting and isolating malicious nodes is easier.
Keywords: NDP, IPsec, SEND, CGA, Modifier, Malicious node, Self-Computing, Distributed-Computing.
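The costly part of CGA generation is the Hash2 search for a modifier whose SHA-1 digest starts with 16*Sec zero bits (RFC 3972); the sketch below distributes that search over local worker processes, each scanning a disjoint residue class of modifier values, which mirrors the idea of spreading the computation across several resources. Using multiprocessing instead of remote network nodes, and a random byte string in place of a DER-encoded public key, are simplifying assumptions.

```python
import hashlib
import os
from multiprocessing import Pool

SEC = 1   # Sec parameter: Hash2 must start with 16*Sec zero bits

def has_leading_zero_bits(digest, bits):
    full, rem = divmod(bits, 8)
    if any(digest[:full]):
        return False
    return rem == 0 or (digest[full] >> (8 - rem)) == 0

def search_modifier(args):
    """Scan one residue class of modifier values until the Hash2 condition holds."""
    start, step, public_key = args
    modifier = start
    while True:
        # Hash2 input per RFC 3972: modifier | 9 zero octets | encoded public key
        digest = hashlib.sha1(modifier.to_bytes(16, "big") + b"\x00" * 9 + public_key).digest()
        if has_leading_zero_bits(digest, 16 * SEC):
            return modifier
        modifier += step

if __name__ == "__main__":
    public_key = os.urandom(140)          # stand-in for a DER-encoded RSA public key
    workers = 4
    with Pool(workers) as pool:
        tasks = [(i, workers, public_key) for i in range(workers)]
        modifier = next(iter(pool.imap_unordered(search_modifier, tasks)))  # first hit wins
    print("modifier found:", hex(modifier))
```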
5382 Influence of a High-Resolution Land Cover Classification on Air Quality Modelling
Authors: C. Silveira, A. Ascenso, J. Ferreira, A. I. Miranda, P. Tuccella, G. Curci
Abstract:
Poor air quality is one of the main environmental causes of premature deaths worldwide, mainly in cities, where the majority of the population lives. It is a consequence of successive land cover (LC) and use changes, as a result of the intensification of human activities. Knowing these landscape modifications in a comprehensive spatiotemporal dimension is, therefore, essential for understanding variations in air pollutant concentrations. In this sense, the use of air quality models is very useful to simulate the physical and chemical processes that affect the dispersion and reaction of chemical species in the atmosphere. However, the modelling performance should always be evaluated, since the resolution of the input datasets largely dictates the reliability of the air quality outcomes. Among these data, the updated LC is an important parameter to be considered in atmospheric models, since it takes into account the Earth’s surface changes due to natural and anthropic actions, and regulates the exchanges of fluxes (emissions, heat, moisture, etc.) between the soil and the air. This work aims to evaluate the performance of the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) when different LC classifications are used as input. The influence of two LC classifications was tested: i) the 24-class USGS (United States Geological Survey) LC database included by default in the model, and ii) the CLC (Corine Land Cover) and specific high-resolution LC data for Portugal, reclassified according to the new USGS nomenclature (33 classes). Two distinct WRF-Chem simulations were carried out to assess the influence of the LC on air quality over Europe and Portugal, as a case study, for the year 2015, using the nesting technique over three simulation domains (25 km2, 5 km2 and 1 km2 horizontal resolution). Based on the 33-class LC approach, particular emphasis was attributed to Portugal, given the detail and higher LC spatial resolution (100 m x 100 m) compared to the CLC data (5000 m x 5000 m). As regards air quality, only the LC impacts on tropospheric ozone concentrations were evaluated, because ozone pollution episodes typically occur in Portugal, in particular during spring/summer, and there are few research works relating this pollutant to LC changes. The WRF-Chem results were validated by season and station typology using background measurements from the Portuguese air quality monitoring network. As expected, a better model performance was achieved at rural stations: moderate correlation (0.4-0.7), BIAS (10-21 µg.m-3) and RMSE (20-30 µg.m-3), and where higher average ozone concentrations were estimated. Comparing both simulations, small differences grounded on the Leaf Area Index and air temperature values were found, although the high-resolution LC approach shows a slight enhancement in the model evaluation. This highlights the role of the LC in the exchange of atmospheric fluxes, and stresses the need to consider a high-resolution LC characterization combined with other detailed model inputs, such as the emission inventory, to improve air quality assessment.
Keywords: Land cover, tropospheric ozone, WRF-Chem, air quality assessment.
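The station-wise evaluation quoted above (BIAS, RMSE and correlation of modelled versus observed ozone) reduces to a few array operations, as sketched below; the synthetic hourly series is only a placeholder for paired WRF-Chem output and monitoring-network measurements.

```python
import numpy as np

def evaluate(model, obs):
    """Standard air-quality model evaluation statistics used in the text."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = np.mean(model - obs)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    r = np.corrcoef(model, obs)[0, 1]
    return bias, rmse, r

# Hypothetical hourly ozone series (µg/m3) at one rural background station
rng = np.random.default_rng(0)
obs = 60 + 25 * np.sin(np.linspace(0, 20 * np.pi, 720)) + rng.normal(0, 8, 720)
model = obs + 15 + rng.normal(0, 18, 720)        # a model run with a positive bias
bias, rmse, r = evaluate(model, obs)
print(f"BIAS={bias:.1f}  RMSE={rmse:.1f}  r={r:.2f}")
```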