Search results for: Business system
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9200

1490 Poincaré Plot for Heart Rate Variability

Authors: Mazhar B. Tayel, Eslam I. AlSaba

Abstract:

The heart is the most important organ in the body of living organisms. It affects, and is affected by, every factor in the body, and is therefore a good indicator of the body's overall condition. The heart signal is non-stationary; thus, it is of utmost importance to study its variability. Heart Rate Variability (HRV) has attracted considerable attention in psychology and medicine and has become an important dependent measure in psychophysiology and behavioral medicine. The most widely used standards of measurement, physiological interpretation, and clinical use of HRV have been described in many research papers; however, they remain complex issues fraught with pitfalls. This paper presents one of the nonlinear techniques for analyzing HRV. It discusses what the Poincaré plot is, how it works, and its merits, especially for HRV. It also discusses the limitation of the Poincaré plot arising from the standard deviation descriptors SD1 and SD2, and how to overcome this limitation by using the complex correlation measure (CCM). The CCM is more sensitive to changes in the temporal structure of the Poincaré plot than SD1 and SD2.

Keywords: Heart rate variability, chaotic system, Poincaré, variance, standard deviation, complex correlation measure.
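
As a minimal illustration of the standard Poincaré descriptors mentioned above, the sketch below computes SD1 and SD2 from a list of RR intervals; the synthetic RR series is an assumption, and the complex correlation measure (CCM) itself is defined in the paper and not reproduced here.

```python
import numpy as np

def poincare_sd1_sd2(rr):
    """SD1/SD2 of the Poincare plot built from successive RR intervals (ms)."""
    rr = np.asarray(rr, dtype=float)
    x, y = rr[:-1], rr[1:]              # plot RR(n) against RR(n+1)
    sd1 = np.std((y - x) / np.sqrt(2))  # dispersion perpendicular to the identity line
    sd2 = np.std((x + y) / np.sqrt(2))  # dispersion along the identity line
    return sd1, sd2

# Synthetic RR series around 800 ms, used only to exercise the function
rng = np.random.default_rng(0)
rr = 800 + np.cumsum(rng.normal(0, 5, 300))
print(poincare_sd1_sd2(rr))
```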

1489 Numerical Study of Iterative Methods for the Solution of the Dirichlet-Neumann Map for Linear Elliptic PDEs on Regular Polygon Domains

Authors: A. G. Sifalakis, E. P. Papadopoulou, Y. G. Saridakis

Abstract:

A generalized Dirichlet to Neumann map is one of the main aspects characterizing a recently introduced method for analyzing linear elliptic PDEs, through which it became possible to couple known and unknown components of the solution on the boundary of the domain without solving in its interior. For its numerical solution, a well-conditioned, quadratically convergent sine-Collocation method was developed, which yielded a linear system of equations in which the diagonal blocks of the associated coefficient matrix are point diagonal. This structural property, among others, motivated the employment of iterative methods for its solution. In this work we present a conclusive numerical study of the behavior of classical (Jacobi and Gauss-Seidel) and Krylov subspace (GMRES and Bi-CGSTAB) iterative methods when applied to the solution of the Dirichlet to Neumann map associated with Laplace's equation on regular polygons with the same boundary conditions on all edges.

Keywords: Elliptic PDEs, Dirichlet to Neumann Map, Global Relation, Collocation, Iterative Methods, Jacobi, Gauss-Seidel, GMRES, Bi-CGSTAB.
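
As a generic illustration of the iterative solvers compared above (not the actual collocation matrices of the paper), the sketch below contrasts a hand-coded Jacobi iteration with SciPy's GMRES and Bi-CGSTAB on an assumed diagonally dominant test system.

```python
import numpy as np
from scipy.sparse.linalg import gmres, bicgstab

# Assumed diagonally dominant test system (a stand-in for the collocation matrix)
n = 50
A = (np.diag(4.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))
b = np.ones(n)

def jacobi(A, b, tol=1e-10, maxiter=500):
    """Classical Jacobi iteration x <- D^{-1}(b - R x)."""
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b)
    for k in range(maxiter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, maxiter

x_j, iters = jacobi(A, b)
x_g, _ = gmres(A, b)
x_b, _ = bicgstab(A, b)
print("Jacobi iterations:", iters)
for name, x in [("Jacobi", x_j), ("GMRES", x_g), ("Bi-CGSTAB", x_b)]:
    print(name, "residual:", np.linalg.norm(b - A @ x))
```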

1488 Experimental Results about the Dynamics of the Generalized Belief Propagation Used on LDPC Codes

Authors: Jean-Christophe Sibel, Sylvain Reynal, David Declercq

Abstract:

In the context of channel coding, Generalized Belief Propagation (GBP) is an iterative algorithm used to recover the transmitted bits sent through a noisy channel. To ensure a reliable transmission, we apply a mapping to the bits, called a code. This code induces artificial correlations between the bits to be sent, and it can be modeled by a graph whose nodes are the bits and whose edges are the correlations. This graph, called the Tanner graph, is used by most decoding algorithms, such as Belief Propagation or Gallager-B. The GBP is based on a non-unique transformation of the Tanner graph into a so-called region graph. A clear advantage of the GBP over the other algorithms is the freedom in the construction of this graph. In this article, we describe a particular construction for specific graph topologies that yields good GBP performance. Moreover, we investigate the behavior of the GBP considered as a dynamical system in order to understand how it evolves in time and with the noise power of the channel. To this end we make use of classical measures and we introduce a new measure, called the hyperspheres method, that makes it possible to estimate the size of the attractors.

Keywords: iterative decoder, LDPC, region-graph, chaos.
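
The GBP and its region-graph construction are specific to the paper; as a much simpler illustration of iterative decoding on the same kind of Tanner graph, the sketch below implements a hard-decision bit-flipping decoder driven by a parity-check matrix. The small matrix H is a (7,4) Hamming parity-check matrix used as a toy example, not an LDPC code from the paper.

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    """Hard-decision bit-flipping decoding on the Tanner graph defined by H.

    H : (m, n) parity-check matrix over GF(2); y : received hard bits.
    """
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2                      # which checks are unsatisfied
        if not syndrome.any():
            return x, True                        # valid codeword reached
        unsat = H.T @ syndrome                    # unsatisfied checks touching each bit
        flip = unsat == unsat.max()
        x[flip] ^= 1                              # flip the most "blamed" bits
    return x, False

# (7,4) Hamming parity-check matrix, used only as an illustration
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.zeros(7, dtype=int)
received = codeword.copy()
received[2] ^= 1                                  # single bit error
print(bit_flip_decode(H, received))
```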

1487 Performance of Soft Handover Algorithm in Varied Propagation Environments

Authors: N. P. Singh, Brahmjit Singh

Abstract:

CDMA cellular networks support soft handover, which guarantees the continuity of wireless services and enhanced communication quality. Cellular networks support multimedia services under varied propagation environmental conditions. In this paper, we show the effect of the characteristic parameters of the cellular environment on soft handover performance. We consider the path loss exponent, the standard deviation of shadow fading, and the correlation coefficient of shadow fading as the characteristic parameters of the radio propagation environment. A very useful statistical measure for characterizing the performance of a mobile radio system is the probability of outage. It is shown through numerical results that the above parameters have a decisive effect on the probability of outage and hence on the overall performance of the soft handover algorithm.

Keywords: CDMA, Correlation coefficient, Path loss exponent, Probability of outage, Soft handover.
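
A hedged numerical sketch of how the cited propagation parameters enter an outage calculation: under log-distance path loss with log-normal shadowing, the outage probability is Q((P_rx_mean - P_min)/sigma). The link-budget numbers below are illustrative assumptions, not values from the paper.

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def outage_probability(d_m, p_tx_dbm=30.0, pl0_db=40.0, d0_m=1.0,
                       path_loss_exp=3.5, sigma_db=8.0, p_min_dbm=-110.0):
    """Outage probability under log-distance path loss plus log-normal shadowing."""
    p_rx_mean = p_tx_dbm - (pl0_db + 10.0 * path_loss_exp * math.log10(d_m / d0_m))
    return q_function((p_rx_mean - p_min_dbm) / sigma_db)

# Effect of the path-loss exponent and shadowing spread at d = 500 m
for n in (2.5, 3.5, 4.5):
    print("n =", n, "P_out =", round(outage_probability(500.0, path_loss_exp=n), 4))
for sigma in (4.0, 8.0, 12.0):
    print("sigma =", sigma, "P_out =", round(outage_probability(500.0, sigma_db=sigma), 4))
```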

1486 An Evaluation Method of Accelerated Storage Life Test for Typical Mechanical and Electronic Products

Authors: Jinyong Yao, Hongzhi Li, Chao Du, Jiao Li

Abstract:

The reliability of long-term storage products is related to the availability of the whole system, so the evaluation of storage life is of great necessity. These products are usually highly reliable, and little failure information can be collected. In this paper, an analytical method based on data from an accelerated storage life test is proposed to evaluate the reliability index of long-term storage products. Firstly, singularities are eliminated by data normalization and residual analysis. Secondly, with the preprocessed data, a degradation path model is built to obtain the pseudo life values. Then, under a life distribution hypothesis, the distribution parameters at the high stress levels are estimated and failure mechanism consistency is verified. Finally, the life distribution under the normal stress level is extrapolated via the acceleration model, and the actual average life can be evaluated. An application example with a camera stabilization device is provided to illustrate the proposed methodology.

Keywords: Accelerated storage life test, failure mechanism consistency, life distribution, reliability.
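
A compact sketch of the pseudo-life and extrapolation steps described above, under simplifying assumptions (linear degradation paths, lognormal pseudo-lives, Arrhenius-type acceleration in temperature); the data are synthetic and the paper's own model details are not reproduced.

```python
import numpy as np

THRESHOLD = 10.0                                  # degradation level defining failure

def pseudo_life(times, degradation):
    """Fit a linear degradation path and return the time it crosses THRESHOLD."""
    slope, intercept = np.polyfit(times, degradation, 1)
    return (THRESHOLD - intercept) / slope

# Synthetic accelerated-test data at three elevated storage temperatures (K)
rng = np.random.default_rng(1)
stress_temps = np.array([353.0, 373.0, 393.0])
t = np.linspace(0.0, 1000.0, 11)
mean_log_life = []
for T in stress_temps:
    rate = 0.004 * np.exp(-4000.0 * (1.0 / T - 1.0 / 393.0))   # degrades faster when hotter
    lives = [pseudo_life(t, rate * t + rng.normal(0, 0.1, t.size)) for _ in range(8)]
    mean_log_life.append(np.mean(np.log(lives)))               # lognormal-life assumption

# Arrhenius-type acceleration model ln(life) = a + b / T, extrapolated to 298 K storage
b, a = np.polyfit(1.0 / stress_temps, mean_log_life, 1)
print("estimated mean storage life at 298 K:", round(float(np.exp(a + b / 298.0)), 1))
```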

1485 Impact of Hard Limited Clipping Crest Factor Reduction Technique on Bit Error Rate in OFDM Based Systems

Authors: Theodore Grosch, Felipe Koji Godinho Hoshino

Abstract:

In wireless communications, 3GPP LTE is one of the solutions to meet the growing demand for higher transmission data rates. One issue inherent to this technology is the PAPR (Peak-to-Average Power Ratio) of OFDM (Orthogonal Frequency Division Multiplexing) modulation. This high PAPR affects the efficiency of power amplifiers. One approach to mitigate this effect is the Crest Factor Reduction (CFR) technique. In this work, we simulate the impact of the Hard Limited Clipping Crest Factor Reduction technique on BER (Bit Error Rate) in OFDM-based systems. In general, the results showed that CFR has more effect on higher-order digital modulation schemes, as expected. More importantly, we show the worst-case degradation due to CFR on QPSK, 16QAM, and 64QAM signals in a linear system. For example, hard clipping of 9 dB results in a 2 dB increase in the signal-to-noise energy required at a 1% BER for 64-QAM modulation.

Keywords: Bit error rate, crest factor reduction, OFDM, physical layer simulation.
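
A minimal sketch of the hard-limited clipping step and the PAPR it reduces, assuming random 64-QAM symbols on 1024 subcarriers; the BER degradation figures quoted above come from the paper's own simulation and are not reproduced here.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def hard_clip(x, clip_db):
    """Hard-limit the envelope at clip_db above the RMS level, preserving the phase."""
    a_max = np.sqrt(np.mean(np.abs(x) ** 2)) * 10.0 ** (clip_db / 20.0)
    mag = np.abs(x)
    return x * np.where(mag > a_max, a_max / np.maximum(mag, 1e-12), 1.0)

rng = np.random.default_rng(0)
n_sc = 1024
levels = np.array([-7, -5, -3, -1, 1, 3, 5, 7], dtype=float)   # 64-QAM component levels
sym = rng.choice(levels, n_sc) + 1j * rng.choice(levels, n_sc)
ofdm = np.fft.ifft(sym)                                        # one OFDM symbol

print("PAPR before clipping: %.2f dB" % papr_db(ofdm))
print("PAPR after 6 dB clipping: %.2f dB" % papr_db(hard_clip(ofdm, 6.0)))
```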

1484 Q-Learning with Eligibility Traces to Solve Non-Convex Economic Dispatch Problems

Authors: Mohammed I. Abouheaf, Sofie Haesaert, Wei-Jen Lee, Frank L. Lewis

Abstract:

Economic Dispatch is one of the most important power system management tools. It is used to allocate an amount of power generation to the generating units to meet the load demand. The Economic Dispatch problem is a large-scale nonlinear constrained optimization problem. In general, heuristic optimization techniques are used to solve the non-convex Economic Dispatch problem. In this paper, ideas from Reinforcement Learning are proposed to solve the non-convex Economic Dispatch problem. Q-Learning is a reinforcement learning technique in which each generating unit learns the optimal schedule of the generated power that minimizes the generation cost function. Eligibility traces are used to speed up the Q-Learning process. Q-Learning with eligibility traces is used to solve Economic Dispatch problems with valve-point loading effects, multiple fuel options, and power transmission losses.

Keywords: Economic Dispatch, Non-Convex Cost Functions, Valve Point Loading Effect, Q-Learning, Eligibility Traces.
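
A generic tabular sketch of Q-learning with accumulating eligibility traces (Watkins's Q(lambda)); the tiny chain environment is an assumption used only to show the update rule and is not the economic dispatch formulation of the paper.

```python
import numpy as np

# Tiny deterministic chain MDP: states 0..4, actions 0 (left) / 1 (right),
# reward +1 only when the terminal state 4 is reached.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(s, a):
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == GOAL), s2 == GOAL

def q_lambda(episodes=300, alpha=0.1, gamma=0.95, lam=0.8, eps=0.3, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        E = np.zeros_like(Q)                      # eligibility traces
        s = 0
        for _ in range(200):                      # cap episode length
            greedy_a = int(Q[s].argmax())
            a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else greedy_a
            s2, r, done = step(s, a)
            delta = r + gamma * Q[s2].max() * (not done) - Q[s, a]
            E[s, a] += 1.0                        # accumulating trace
            Q += alpha * delta * E
            if a == greedy_a:                     # Watkins: keep traces only on greedy moves
                E *= gamma * lam
            else:
                E[:] = 0.0
            s = s2
            if done:
                break
    return Q

print(np.round(q_lambda(), 2))
```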

1483 A Cost Function for Joint Blind Equalization and Phase Recovery

Authors: Reza Berangi, Morteza Babaee, Majid Soleimanipour

Abstract:

In this paper a new cost function for blind equalization is proposed. The proposed cost function, referred to as the modified maximum normalized cumulant criterion (MMNC), is an extension of the previously proposed maximum normalized cumulant criterion (MNC). While the MNC requires a separate phase recovery system after blind equalization, the MMNC performs joint blind equalization and phase recovery. To achieve this, the proposed algorithm maximizes a cost function that considers both the amplitude and the phase of the equalizer output. The simulation results show that the proposed algorithm achieves better channel equalization than the MNC algorithm and can simultaneously correct the phase error, which the MNC algorithm cannot. The simulation results also show that the MMNC algorithm has lower complexity than the MNC algorithm. Moreover, the MMNC algorithm outperforms the MNC algorithm, particularly when the symbol block size is small.

Keywords: Blind equalization, maximum normalized cumulant criterion (MNC), intersymbol interference (ISI), modified MNC criterion (MMNC), phase recovery.
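
The MNC and MMNC cost functions are defined in the paper and are not reproduced here; as a hedged illustration of the same family of adaptive blind equalizers, the sketch below runs the standard constant modulus algorithm (CMA), a different and explicitly named criterion, on QPSK through an assumed FIR channel.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
symbols = ((rng.integers(0, 2, n) * 2 - 1) + 1j * (rng.integers(0, 2, n) * 2 - 1)) / np.sqrt(2)

channel = np.array([1.0, 0.35 + 0.2j, -0.15j])          # assumed multipath channel
rx = np.convolve(symbols, channel)[:n]
rx += 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

taps, mu, R2 = 11, 1e-3, 1.0                             # R2 = constant modulus of unit-power QPSK
w = np.zeros(taps, dtype=complex)
w[taps // 2] = 1.0                                       # centre-spike initialisation

for k in range(taps, n):
    x = rx[k - taps:k][::-1]                             # regressor, most recent sample first
    y = np.dot(w.conj(), x)                              # equalizer output
    e = y * (np.abs(y) ** 2 - R2)                        # CMA error term
    w -= mu * np.conj(e) * x                             # stochastic gradient step

y_out = np.array([np.dot(w.conj(), rx[k - taps:k][::-1]) for k in range(n - 500, n)])
print("residual constant-modulus error:", np.mean((np.abs(y_out) ** 2 - R2) ** 2))
```

Note that CMA is phase-blind, which is exactly the kind of limitation the joint amplitude-and-phase criterion described above is meant to remove.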

1482 Ultrasonic Intensification of the Chemical Degradation of Methyl Violet: An Experimental Study

Authors: N. P. Dhanalakshmi, R. Nagarajan

Abstract:

The sonochemical decolorization and degradation of the azo dye Methyl violet using Fenton's reagent in the presence of a high-frequency acoustic field has been investigated. Dyeing and textile effluents are the major sources of azo dyes and are among the most troublesome industrial wastewaters, causing imbalance in the ecosystem. The effect of various operating conditions (initial concentration of dye, liquid-phase temperature, ultrasonic power and frequency, and process time) on sonochemical degradation was investigated. Conversion was found to increase with increases in initial concentration, temperature, power level, and frequency. Both horn-type and tank-type sonicators were used, at various power levels (250 W, 400 W and 500 W) and frequencies ranging from 20 kHz to 1000 kHz. A 'process intensification' parameter, PI, was defined to quantify the enhancement of the degradation reaction by ultrasound when compared to the control (i.e., without ultrasound). The present work clearly demonstrates that a high-frequency ultrasonic bath can be used to achieve higher process throughput and energy efficiency at a larger scale of operation.

Keywords: Fenton oxidation, process intensification, sonochemical degradation of MV, ultrasonic frequency.

1481 Comprehensive Hierarchy Evaluation of Power Quality Based on an Incentive Mechanism

Authors: Tao Shun, Xiao Xiangning, N. HadjSaid

Abstract:

In a liberalized electricity market, it is not surprising that different customers require different power quality (PQ) levels at different prices. Power quality, which relates to several power disturbances, is described by many parameters, so how to define a comprehensive hierarchy evaluation system of power quality (PQCHES) has become an issue of concern. In this paper, based on four electromagnetic compatibility (EMC) levels, the numerical range of each power disturbance is divided into five grades (Grade I to Grade V), and the "barrel principle" of power quality is used for the assessment of overall PQ performance with only one grade indicator. A case study based on actual monitored PQ data shows that the site PQ grade indicates the electromagnetic environment level and also expresses the characteristics of the loads served by the site. The shortest-plank principle of the PQ barrel is an incentive mechanism which, combined with a rewards/penalty mechanism (RPM) for energy consumed "on quality demand", can stimulate utilities to improve the overall PQ level and also encourage end-users to be "smarter" under the infrastructure of the future SmartGrid.

Keywords: Power quality, electromagnetic compatibility, SmartGrid, comprehensive evaluation, barrel principle, electricity market.
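
A tiny sketch of the "shortest plank" (barrel) principle described above: the overall site grade is set by its worst single-disturbance grade. The grade ordering (I best, V worst) and the example disturbance grades are assumptions.

```python
# Grades I (best) to V (worst); the site grade equals its worst single-disturbance grade.
GRADE_ORDER = ["I", "II", "III", "IV", "V"]

def site_pq_grade(disturbance_grades):
    """Barrel principle: the shortest plank (worst disturbance) sets the overall level."""
    return max(disturbance_grades, key=GRADE_ORDER.index)

example = {"voltage sag": "II", "harmonics": "IV", "flicker": "I", "unbalance": "II"}
print(site_pq_grade(example.values()))   # prints "IV"
```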

1480 Influence of Moringa Leaves Extract on the Response of Hb Molecule to Dose Rates’ Changes: II. Relaxation Time and Its Thermodynamic Driven State Functions

Authors: Mohamed M. M. Elnasharty, Azhar M. Elwan

Abstract:

Irradiation deposits energy through ionisation, changing the biosystem's net dipole and allowing the use of dielectric parameters, and of the thermodynamic state functions related to these parameters, as biophysical detectors of electrical inhomogeneity within the biosystem. This part is concerned with the effect of Moringa leaves extract, a natural supplement, on the response of the biosystem to two different dose rates of irradiation. Taking the Hb molecule as a representative of the biosystem, and to be minimally invasive, dielectric measurements were used to extract the relaxation time of a certain process found in the Hb spectrum within the indicated frequency window, and the interrelated thermodynamic state functions were calculated from the deduced relaxation time. The results showed that the relaxation time decreased for both dose rates, indicating a strong influence of Moringa on the response of the biosystem and consequently of the Hb molecule. This influence was reflected in the relaxation time and in the other parameters as well.

Keywords: Activation energy, DC conductivity, dielectric relaxation, enthalpy change, moringa leaves extract, relaxation time.

1479 A Superior Delay Estimation Model for VLSI Interconnect in Current Mode Signaling

Authors: Sunil Jadav, Rajeevan Chandel, Munish Vashishath

Abstract:

Today’s VLSI networks demand high speed, and in this work a compact mathematical model for current mode signalling in VLSI interconnects is presented. The RLC interconnect line is modelled using the characteristic impedance of the transmission line and the inductive effect. The on-chip inductance effect, which is dominant at lower technology nodes, is emulated as an equivalent resistance. A first-order transfer function is derived using the finite difference equation, the Laplace transform, and the boundary conditions at the source and load terminations. It has been observed that the dominant pole determines the system response and delay in the proposed model. The proposed current mode model shows superior performance compared to voltage mode signalling. Analysis shows that current mode signalling in VLSI interconnects provides 2.8 times better delay performance than voltage mode. Secondly, the damping factor of a lumped RLC circuit is shown to be a useful figure of merit.

Keywords: Current Mode, Voltage Mode, VLSI Interconnect.
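
A small sketch of two figures of merit mentioned above for a lumped RLC interconnect: the characteristic impedance Z0 = sqrt(L/C) and the damping factor zeta = (R/2)*sqrt(C/L). The per-millimetre values are illustrative assumptions, not the paper's technology parameters.

```python
import math

def interconnect_metrics(r_per_mm, l_per_mm, c_per_mm, length_mm):
    """Characteristic impedance and damping factor of a lumped RLC interconnect."""
    R = r_per_mm * length_mm
    L = l_per_mm * length_mm
    C = c_per_mm * length_mm
    z0 = math.sqrt(L / C)                 # lossless characteristic impedance
    zeta = (R / 2.0) * math.sqrt(C / L)   # damping factor of the lumped RLC section
    return z0, zeta

# Illustrative per-mm values (assumed): 20 ohm/mm, 1.5 nH/mm, 0.2 pF/mm, 2 mm line
z0, zeta = interconnect_metrics(20.0, 1.5e-9, 0.2e-12, 2.0)
print("Z0 = %.1f ohm, damping factor = %.2f" % (z0, zeta))
```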

1478 Evaluation of the End Effect Impact on the Torsion Test for Determining the Shear Modulus of a Timber Beam through a Photogrammetry Approach

Authors: Niaz Gharavi, Hexin Zhang, Yanjun Xie

Abstract:

The timber beam end effect in the torsion test is evaluated using a binocular stereo vision system. BS EN 408:2010+A1:2012 recommends excluding a distance of two to three times the cross-sectional thickness (b) from the ends to avoid the end effect; however, this study indicates that this distance is not far enough to remove the effect in slender cross-sections. The shear modulus of six timber beams with different aspect ratios is determined at various angles and cross-sections. The results of this experiment show that the end-affected span of each specimen varies depending on its aspect ratio. It is concluded that by increasing the aspect ratio this span will increase. However, by increasing the distance from the ends to values greater than 6b, the shear modulus trend becomes constant and the end effect becomes negligible. Moreover, it is concluded that the end-affected span is better described as depth-dependent rather than thickness-dependent.

Keywords: End effect, structural-size torsion test, shear properties, timber engineering, binocular stereo vision.

1477 Kinematic Parameter-Independent Modeling and Measuring of Three-Axis Machine Tools

Authors: Yung-Yuan Hsu

Abstract:

The primary objective of this paper was to construct a "kinematic parameter-independent modeling of three-axis machine tools for geometric error measurement" technique. Improving the accuracy of the geometric error of three-axis machine tools is one of the core techniques for machine tools. This paper first applied the traditional HTM method to deduce the geometric error model for three-axis machine tools. This geometric error model was related to the three-axis kinematic parameters, with the overall error expressed relative to the machine reference coordinate system. Given that the measurement of the linear axis in this model should be performed on the ideal motion axis, there were practical difficulties. Through a measurement method that consolidates translational errors and rotational errors in the geometric error model, we simplified the three-axis geometric error model to a kinematic parameter-independent model. Finally, based on the new measurement method corresponding to this error model, we established a truly practical and more accurate error measuring technique for three-axis machine tools.

Keywords: Three-axis machine tool, Geometric error, HTM, Error measuring
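
A hedged sketch of the HTM building block used in this kind of geometric error modelling: a 4x4 homogeneous transformation combining small rotational errors (small-angle approximation) with translational errors, chained with the nominal axis motions. The numerical error values and the axis order are assumptions.

```python
import numpy as np

def error_htm(dx, dy, dz, ea, eb, ec):
    """Homogeneous transformation for small translational (dx, dy, dz) and
    rotational (ea, eb, ec about X, Y, Z, in rad) errors, small-angle approximation."""
    return np.array([[1.0, -ec,  eb, dx],
                     [ ec, 1.0, -ea, dy],
                     [-eb,  ea, 1.0, dz],
                     [0.0, 0.0, 0.0, 1.0]])

def nominal_translation(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Chain nominal motions and error HTMs for the three linear axes (values assumed)
T = (nominal_translation(100.0, 0.0, 0.0) @ error_htm(2e-3, 1e-3, 0.0, 0.0, 5e-6, 3e-6)
     @ nominal_translation(0.0, 50.0, 0.0) @ error_htm(0.0, 1e-3, 2e-3, 4e-6, 0.0, 0.0)
     @ nominal_translation(0.0, 0.0, 20.0) @ error_htm(1e-3, 0.0, 1e-3, 0.0, 2e-6, 0.0))

tool_tip = T @ np.array([0.0, 0.0, 0.0, 1.0])
print("actual tool position:", tool_tip[:3], "vs nominal [100, 50, 20]")
```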

1476 Genetic Folding: Analyzing the Mercer's Kernels Effect in Support Vector Machine using Genetic Folding

Authors: Mohd A. Mezher, Maysam F. Abbod

Abstract:

Genetic Folding (GF), a new class of evolutionary algorithm (EA), is introduced for the first time. It is based on chromosomes composed of floating genes structurally organized in a parent form and separated by dots. The genotype/phenotype system of GF generates a kernel expression, which is the objective function of a superior classifier. In this work, the question of satisfying the mapping rules in evolving populations is addressed by analyzing populations undergoing either Mercer's rules or non-Mercer's rules. The results presented here show that populations undergoing Mercer's rules practically improve model selection for the Support Vector Machine (SVM). The experiment is trained on a multi-classification problem and tested on the nonlinear Ionosphere dataset. The aim of this paper is to answer the question of evolving Mercer's rules in SVM, using genetic folding with kernels that either satisfy Mercer's rules or do not, applied to complicated domains and problems.

Keywords: Genetic Folding, GF, Evolutionary Algorithms, Support Vector Machine, Genetic Algorithm, Genetic Programming, Multi-Classification, Mercer's Rules
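
A small check related to Mercer's rules: a valid Mercer kernel must produce a positive semidefinite Gram matrix on any sample, which can be tested numerically through the eigenvalues. The candidate kernels below are standard textbook examples, not kernels evolved by GF.

```python
import numpy as np

def gram_matrix(kernel, X):
    n = len(X)
    return np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

def looks_mercer(kernel, X, tol=1e-9):
    """Necessary check: the Gram matrix on a sample must be positive semidefinite."""
    eigvals = np.linalg.eigvalsh(gram_matrix(kernel, X))
    return bool(eigvals.min() >= -tol)

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 4))

rbf = lambda a, b: np.exp(-np.sum((a - b) ** 2))          # valid Mercer kernel
sig = lambda a, b: np.tanh(2.0 * np.dot(a, b) - 1.0)      # sigmoid kernel, not PSD in general

print("RBF PSD on sample:", looks_mercer(rbf, X))
print("sigmoid PSD on sample:", looks_mercer(sig, X))
```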

1475 Supporting Densification through the Planning and Implementation of Road Infrastructure in the South African Context

Authors: K. Govender, M. Sinclair

Abstract:

This paper demonstrates a proof of concept whereby shorter trips and land use densification can be promoted through an alternative approach to the planning and implementation of road infrastructure in the South African context. It briefly discusses how the development of the Compact City concept relies on a combination of promoting shorter trips and densification through a change in focus in road infrastructure provision. The methodology developed in this paper uses a traffic model to test the impact of synthesized deterrence functions on congestion locations in the road network through the assignment of traffic on the study network. The results from this study demonstrate that, with a fixed road infrastructure budget, intelligent planning of road infrastructure can indeed promote reduced urban sprawl, increased residential density and mixed-use areas supported by an efficient public transport system, and reduced dependence on the freeway network. The study has resonance for all cities where urban sprawl is seemingly unstoppable.

Keywords: Compact cities, densification, road infrastructure planning, transportation modeling.
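
A hedged sketch of the kind of deterrence function used in trip distribution: a singly constrained gravity model with f(c) = exp(-beta*c), where a larger beta penalises long trips and shifts demand towards shorter ones. The zone data and beta values are illustrative assumptions, not the study network.

```python
import numpy as np

def gravity_distribution(origins, destinations, cost, beta):
    """Singly (production) constrained gravity model with exponential deterrence."""
    f = np.exp(-beta * cost)                      # deterrence function f(c) = e^{-beta c}
    weights = destinations * f                    # attraction weighted by deterrence
    probs = weights / weights.sum(axis=1, keepdims=True)
    return origins[:, None] * probs               # trip matrix T_ij

origins = np.array([1000.0, 800.0])               # trips produced per origin zone
destinations = np.array([500.0, 700.0, 600.0])    # attractiveness of each destination zone
cost = np.array([[5.0, 15.0, 30.0],               # travel cost (e.g. minutes) from i to j
                 [20.0, 5.0, 10.0]])

for beta in (0.05, 0.2):                          # stronger deterrence favours shorter trips
    T = gravity_distribution(origins, destinations, cost, beta)
    print("beta =", beta, "mean trip cost =", round((T * cost).sum() / T.sum(), 1))
```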

1474 Study on the Effect of Volume Fraction of Dual Phase Steel to Corrosion Behaviour and Hardness

Authors: R. Nadlene, H. Esah, S. Norliana, M.A. Mohd Irwan

Abstract:

The objective of this project is to study the corrosion behaviour and hardness of dual phase steel based on the presence of martensite. This study was conducted on six samples of dual phase steel with different percentages of martensite. A total of 9 specimens were prepared by an intercritical annealing process to study the effect of temperature on the formation of martensite. The low carbon steel specimens were heated for 25 minutes at specified temperatures ranging from 725 °C to 825 °C, followed by rapid cooling in water. The corrosion rate was measured using the Tafel extrapolation method, while a potentiostat was used to control and measure the current produced. This measurement was performed through a system named CMS105. The results show that a specimen with a higher percentage of martensite is likely to corrode faster. A hardness test was conducted on each specimen to compare its hardness with that of low carbon steel. The results obtained indicate that the specimen hardness is proportional to the amount of martensite in the dual phase steel.

Keywords: dual phase steel, corrosion behaviour, hardness, intercritical annealing, martensite

1473 Attacks Classification in Adaptive Intrusion Detection using Decision Tree

Authors: Dewan Md. Farid, Nouria Harbi, Emna Bahri, Mohammad Zahidur Rahman, Chowdhury Mofizur Rahman

Abstract:

Recently, information security has become a key issue in information technology, as computers and networks are exposed to an increasing number of security threats. A variety of intrusion detection systems (IDS) have been employed over the last decades for protecting computers and networks from malicious network-based or host-based attacks, ranging from traditional statistical methods to new data mining approaches. However, today's commercially available intrusion detection systems are signature-based and are not capable of detecting unknown attacks. In this paper, we present a new learning algorithm for anomaly-based network intrusion detection using a decision tree algorithm that distinguishes attacks from normal behaviors and identifies different types of intrusions. Experimental results on the KDD99 benchmark network intrusion detection dataset demonstrate that the proposed learning algorithm achieved a 98% detection rate (DR) in comparison with other existing methods.

Keywords: Detection rate, decision tree, intrusion detection system, network security.
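
A minimal sketch of the decision-tree classification step on intrusion-detection-style features using scikit-learn; the synthetic feature matrix below merely stands in for KDD99-like records, which are not bundled here, and the paper's own learning algorithm is not reproduced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Synthetic stand-in for KDD99-style records: [duration, src_bytes, dst_bytes, count]
rng = np.random.default_rng(0)
normal = rng.normal([10, 500, 400, 5], [5, 200, 150, 2], size=(500, 4))
attack = rng.normal([1, 50, 0, 100], [1, 30, 1, 30], size=(500, 4))
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 500)               # 0 = normal, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["normal", "attack"]))
```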

1472 Smart Surveillance using PDA

Authors: Basem Mustafa Abd. Amer, Syed Abdul Rahman Al-Attas

Abstract:

The aim of this research is to develop a fast and reliable surveillance system based on a personal digital assistant (PDA) device. The first objective is to extend to the PDA the capability to detect moving objects, which is already available on personal computers. The second is to compare the performance of the Background Subtraction (BS) and Temporal Frame Differencing (TFD) techniques on the PDA platform to determine which is more suitable. In order to reduce noise and to prepare frames for the moving object detection part, each frame is first converted to a gray-scale representation and then smoothed using a Gaussian low pass filter. Two moving object detection schemes, i.e., BS and TFD, have been analyzed. The background frame is updated using an Infinite Impulse Response (IIR) filter so that it adapts to varying illumination conditions and geometry settings. In order to reduce the effect of noise pixels resulting from frame differencing, morphological filters (erosion and dilation) are applied. In this research, it has been found that the TFD technique is more suitable for motion detection than BS in terms of speed. On average, TFD is approximately 170 ms faster than the BS technique.

Keywords: Surveillance, PDA, Motion Detection, Image Processing, Background Subtraction.
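
A hedged sketch of the pipeline described above (grayscale conversion, Gaussian smoothing, temporal frame differencing, an IIR-updated background for the BS variant, and morphological cleanup) using OpenCV on a desktop; the camera index and thresholds are assumptions, and nothing PDA-specific is included.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                        # assumed camera index
alpha = 0.05                                     # IIR background update weight
kernel = np.ones((3, 3), np.uint8)
background, prev = None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)     # suppress noise before differencing

    if prev is not None:
        diff = cv2.absdiff(gray, prev)           # temporal frame differencing (TFD)
        _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        motion = cv2.dilate(cv2.erode(motion, kernel), kernel)   # morphological cleanup
        cv2.imshow("TFD motion", motion)

    if background is None:
        background = gray.astype(np.float32)
    bs = cv2.absdiff(gray, cv2.convertScaleAbs(background))     # background subtraction (BS)
    _, bs_mask = cv2.threshold(bs, 25, 255, cv2.THRESH_BINARY)
    cv2.imshow("BS motion", bs_mask)
    cv2.accumulateWeighted(gray, background, alpha)             # IIR background update

    prev = gray
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```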

1471 A Modern Review of the Non-Invasive Continuous Blood Glucose Measuring Devices and Techniques for Remote Patient Monitoring System

Authors: Muhibul Haque Bhuyan

Abstract:

Diabetes, a disease that arises from elevated glucose levels due to insulin shortage or insulin resistance in the human body, has become a common disease in the world. No medicine can cure it completely. However, by taking medicine, maintaining a proper diet, and exercising regularly, a diabetes patient can keep his or her glucose level within the specified limits and in this way lead a normal life like a healthy person. But to control glucose levels, a patient needs to monitor them regularly. Various techniques have been used over the last four decades. This modern review article aims to provide a comparative study of various blood glucose monitoring techniques in a concise and organized manner. The review mainly emphasizes the working principles, cost, technology, sensors, measurement types, measurement accuracy, advantages, and disadvantages of the various techniques and then compares them with each other. Besides, the use of algorithms and simulators for the growth of this technology is also presented. Finally, current research trends in this measurement technology are also discussed.

Keywords: blood glucose measurement, sensors, measurement devices, invasive and non-invasive techniques

1470 The Analysis of Hazard and Sensitivity of Potential Resource of Emergency Water Supply

Authors: A. Bumbová, M. Čáslavský, F. Božek, J. Dvořák

Abstract:

The paper deals with the analysis of hazards and the sensitivity of a potential resource for the emergency water supply of the population in a selected region of the Czech Republic. The procedure for identifying and analyzing hazards and sensitivity is carried out on the basis of a unique methodology for classifying the drinking water resources earmarked for the emergency supply of the population. The hazard identification is based on a general register of hazards for the individual parts of the hydrological structure and the elements of the technological equipment. It is followed by a semi-quantitative point indexation of the activation of each identified hazard, i.e., fires of anthropogenic origin, floods, and increased radioactive background accompanied by radon leakage. Point indexation of sensitivity has been carried out at the same time. The analysis is the basis for a risk assessment of the potential resource for the emergency supply of the population and the subsequent classification of such a resource within the system of crisis planning.

Keywords: Hazard identification, sensitivity, semi-quantitative assessment, emergency water supply, crisis situation, ground water.

1469 A Bi-Objective Model for Location-Allocation Problem within Queuing Framework

Authors: Amirhossein Chambari, Seyed Habib Rahmaty, Vahid Hajipour, Aida Karimi

Abstract:

This paper proposes a bi-objective model for the facility location problem under a congestion system. The model is motivated by applications such as locating servers for bank automated teller machines (ATMs), communication networks, and so on. This model is specifically suited to situations in which fixed service facilities are congested by stochastic demand within a queueing framework. We formulate the model from two perspectives simultaneously: (i) the customers and (ii) the service provider. The objectives of the model are to minimize (i) the total expected travelling and waiting time and (ii) the average facility idle time. The model is a mixed-integer nonlinear programming problem which belongs to the class of NP-hard problems. In addition, to solve the model, two metaheuristic algorithms, the non-dominated sorting genetic algorithm (NSGA-II) and the non-dominated ranking genetic algorithm (NRGA), are proposed. Finally, to evaluate the performance of the two algorithms, some numerical examples are produced and analyzed with several metrics to determine which algorithm works better.

Keywords: Queuing, Location, Bi-objective, NSGA-II, NRGA
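
A hedged illustration of the two competing objectives using the simplest M/M/1 queue at each open facility (the paper's own queueing assumptions may differ): expected time in system for customers versus facility idle probability; travel time is omitted in this sketch.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Expected time in system W and idle probability P0 of a stable M/M/1 queue."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        raise ValueError("unstable queue (rho >= 1)")
    W = 1.0 / (service_rate - arrival_rate)     # waiting plus service time per customer
    P0 = 1.0 - rho                              # fraction of time the facility is idle
    return W, P0

# Two ways of allocating the same demand to two facilities (rates per hour, assumed)
mu = 12.0
for name, lam in [("balanced", [6.0, 6.0]), ("skewed", [10.0, 2.0])]:
    metrics = [mm1_metrics(l, mu) for l in lam]
    total_time = sum(l * W for l, (W, _) in zip(lam, metrics))      # customer objective
    avg_idle = sum(P0 for _, P0 in metrics) / len(metrics)          # provider objective
    print(name, "total expected customer-hours:", round(total_time, 2),
          "average idle fraction:", round(avg_idle, 2))
```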

1468 Transformative Leadership and Learning Management Systems Implementation: Leadership Practices in Instructional Design for Online Learning

Authors: Felix Brito

Abstract:

With the growth of online learning, several higher education institutions have attempted to incorporate technology into their curriculum. Successful technology implementation projects rely on technology infrastructure and on education professionals' acceptance of innovation. This research study is aimed at illustrating the relevance of the human component in technology implementation projects in higher education by describing the Learning Management System implementation project executed by instructional designers working for a higher education institution in the southeast region of the United States. An analysis of the Transformative Leadership Theory, the Technology Acceptance Model, and the Diffusion of Innovation Process provides the support for a solid understanding of this issue and offers recommendations for future technology implementation projects in higher education institutions.

Keywords: Learning management systems, transformative leadership theory, technology acceptance model, diffusion of innovation process, leadership, instructional design, online learning.

1467 A Performance Comparison of Golay and Reed-Muller Coded OFDM Signal for Peak-to-Average Power Ratio Reduction

Authors: Sanjay Singh, M Sathish Kumar, H. S Mruthyunjaya

Abstract:

Multicarrier transmission systems such as Orthogonal Frequency Division Multiplexing (OFDM) are a promising technique for high bit rate transmission in wireless communication systems. OFDM is a spectrally efficient modulation technique that can achieve high-speed data transmission over multipath fading channels without the need for powerful equalization techniques. A major drawback of OFDM is the high Peak-to-Average Power Ratio (PAPR) of the transmit signal, which can significantly impact the performance of the power amplifier. In this paper we compare the PAPR reduction performance of Golay and Reed-Muller coded OFDM signals. Our simulations show that the PAPR reduction performance of Golay coded OFDM is better than that of Reed-Muller coded OFDM. Moreover, the code configurations of the Golay and Reed-Muller codes giving the optimum PAPR reduction performance have been identified.

Keywords: OFDM, PAPR, Perfect Codes, Golay Codes, Reed-Muller Codes
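
A hedged sketch of the PAPR property exploited by Golay-coded OFDM: binary Golay complementary sequences built with the standard pairwise recursion give OFDM symbols whose PAPR is at most about 3 dB. The sequence length and oversampling factor are assumptions, and the Reed-Muller construction of the paper is not reproduced.

```python
import numpy as np

def golay_pair(m):
    """Binary (+1/-1) Golay complementary pair of length 2**m via the standard recursion."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def papr_db(freq_symbols, oversample=4):
    """PAPR of the OFDM symbol whose subcarrier values are freq_symbols."""
    n = len(freq_symbols)
    padded = np.concatenate([freq_symbols, np.zeros((oversample - 1) * n)])
    x = np.fft.ifft(padded)                  # oversampled time-domain OFDM symbol
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

a, _ = golay_pair(6)                         # length-64 Golay sequence
rng = np.random.default_rng(0)
random_bpsk = rng.choice([-1.0, 1.0], a.size)

print("Golay-coded symbol PAPR: %.2f dB (theory: at most about 3.01 dB)" % papr_db(a))
print("random BPSK symbol PAPR: %.2f dB" % papr_db(random_bpsk))
```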

1466 Adjustment and Compensation Techniques for the Rotary Axes of Five-axis CNC Machine Tools

Authors: Tung-Hui Hsu, Wen-Yuh Jywe

Abstract:

Five-axis computer numerical control (CNC) machine tools (three linear and two rotary axes) are ideally suited to the fabrication of complex workpieces, such as dies, turbo blades, and cams. The locations of the axis average line and the centerline of the rotary axes strongly influence the performance of these machines; however, techniques to compensate for eccentric error in the rotary axes remain weak. This paper proposes optical (Non-Bar) techniques capable of calibrating five-axis CNC machine tools and compensating for eccentric error in the rotary axes. This approach employs the measurement path in ISO/CD 10791-6 to determine the eccentric error in the two rotary axes, for which compensatory measures can be implemented. Experimental results demonstrate that the proposed techniques can improve the performance of various five-axis CNC machine tools by more than 90%. Finally, the result of a cutting test using a B-type five-axis CNC machine tool confirmed the usefulness of the proposed compensation technique.

Keywords: Calibration, compensation, rotary axis, five-axis computer numerical control (CNC) machine tools, eccentric error, optical calibration system, ISO/CD 10791-6

1465 Perception of TQM Implementation and Perceived Cost of Poor Quality: A Case Study of Local Automotive Company’s Supplier

Authors: Fakhruddin Esa, Yusri Yusof

Abstract:

Confirming Total Quality Management (TQM) implementation is vital in quality management. This paper focuses on employees' perceptions of TQM implementation in a local automotive company supplier. The objectives of this study are, first, to determine the perception of TQM implementation among the staff; second, to ascertain the correlation between the variables; and last, to identify the relative influence of the 10 TQM variables on the cost of poor quality (COPQ). The TQM implementation is perceived to be moderate. All correlations are found to be significant, with five variables having moderate to high positive correlations. Out of the 10 variables, quality system improvement, reward and recognition, and customer focus influence the perceived COPQ. This study extends the discussion on the contribution of these three variables to TQM in general and to human resource development in the organization. Significant recommendations for lowering internal failure costs, such as troubleshooting and scrap, are also discussed. Certain components of further research that would add value to this study have also been suggested and could perhaps be implemented through policy-level initiatives.

Keywords: Cost of poor quality, correlation, total quality management, variables.

1464 Bridging Quantitative and Qualitative of Glaucoma Detection

Authors: Noor Elaiza Abdul Khalid, Noorhayati Mohamed Noor, Zamalia Mahmud, Saadiah Yahya, and Norharyati Md Ariff

Abstract:

Glaucoma diagnosis involves extracting three features of the fundus image: the optic cup, the optic disc and the vasculature. Present manual diagnosis is expensive, tedious and time-consuming. A number of studies have been conducted to automate this process. However, the variability between the diagnostic capability of an automated system and that of an ophthalmologist has yet to be established. This paper discusses the efficiency and variability between ophthalmologists' opinions and a digital technique, thresholding. The efficiency and variability measures are based on image quality grading: poor, satisfactory or good. The images are separated into four channels: gray, red, green and blue. A scientific investigation was conducted with three ophthalmologists who graded the images based on image quality. The images were thresholded using multithresholding and graded in the same way as by the ophthalmologists. The grades from the ophthalmologists and from thresholding were then compared. The results show that there is a small variability between the ophthalmologists' results and the digital thresholding results.

Keywords: Digital Fundus Image, Glaucoma Detection, Multithresholding, Segmentation.
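
A hedged sketch of per-channel multithresholding of a fundus-style image with scikit-image; the input file name, the number of classes, and the channel handling are assumptions, not the study's exact protocol.

```python
import numpy as np
from skimage import io, color
from skimage.filters import threshold_multiotsu

image = io.imread("fundus.png")                        # assumed input image path
image = image[..., :3]                                 # drop an alpha channel if present
channels = {
    "gray": (color.rgb2gray(image) * 255).astype(np.uint8),
    "red": image[..., 0],
    "green": image[..., 1],
    "blue": image[..., 2],
}

for name, chan in channels.items():
    thresholds = threshold_multiotsu(chan, classes=3)  # two thresholds, three regions
    regions = np.digitize(chan, bins=thresholds)       # label each pixel 0, 1 or 2
    print(name, "thresholds:", thresholds, "region sizes:", np.bincount(regions.ravel()))
```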

1463 An Examination of the Factors Affecting the Adoption of Cloud Enterprise Resource Planning Systems in Egyptian Companies

Authors: Mayar A. Omar, Ismail Gomaa, Heba Badawy, Hosam Moubarak

Abstract:

Enterprise resource planning (ERP) is an integrated system that helps companies manage their resources. There are two types of ERP systems: traditional ERP systems and cloud ERP systems. Cloud ERP systems were introduced after the development of cloud computing technology. This research aims to identify the factors that affect the adoption of cloud ERP in Egyptian companies. Moreover, the aim of our study is to provide guidance to Egyptian companies in the cloud ERP adoption decision and to contribute to increasing the number of cloud ERP studies conducted in the Middle East and in developing countries. The many factors influencing the adoption of cloud ERP in Egyptian organizations are discussed and explained in the research. Those factors are examined by combining the Diffusion of Innovation (DOI) theory and the technology-organization-environment (TOE) framework. Data were collected through a survey that was developed using constructs from existing studies of cloud computing and cloud ERP technologies and was then modified to fit our research. The analysis of the data was based on Structural Equation Modeling (SEM) using the SmartPLS software, which was used for the empirical analysis of the research model.

Keywords: cloud computing, cloud ERP systems, DOI, Egypt, SEM, TOE

1462 A Simplified Adaptive Decision Feedback Equalization Technique for π/4-DQPSK Signals

Authors: V. Prapulla, A. Mitra, R. Bhattacharjee, S. Nandi

Abstract:

We present a simplified equalization technique for a π/4 differential quadrature phase shift keying (π/4-DQPSK) modulated signal in a multipath fading environment. The proposed equalizer is realized as a fractionally spaced adaptive decision feedback equalizer (FS-ADFE), employing the exponential step-size least mean square (LMS) algorithm as the adaptation technique. The main advantage of the scheme stems from the use of the exponential step-size LMS algorithm in the equalizer, which achieves convergence behavior similar to that of a recursive least squares (RLS) algorithm with significantly reduced computational complexity. To investigate the finite-precision performance of the proposed equalizer along with the π/4-DQPSK modem, the entire system is evaluated on a 16-bit fixed-point digital signal processor (DSP) environment. The proposed scheme is found to be attractive even for those cases where equalization has to be performed within a restricted number of training samples.

Keywords: Adaptive decision feedback equalizer, Fractionally spaced equalizer, π/4 DQPSK signal, Digital signal processor.
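
A simplified, hedged sketch of an adaptive decision feedback equalizer trained with LMS on plain QPSK over an assumed channel; the exponentially decaying step-size schedule is only an illustrative guess at the idea named above, and the fractional spacing, π/4-DQPSK mapping, and fixed-point aspects of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
bits = rng.integers(0, 2, (n, 2))
tx = ((bits[:, 0] * 2 - 1) + 1j * (bits[:, 1] * 2 - 1)) / np.sqrt(2)   # QPSK symbols

h = np.array([0.9, 0.4 + 0.3j, -0.2j])                 # assumed multipath channel
rx = np.convolve(tx, h)[:n] + 0.02 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

ff_len, fb_len = 9, 4
wf = np.zeros(ff_len, dtype=complex)
wf[0] = 1.0                                            # feedforward filter, spike init
wb = np.zeros(fb_len, dtype=complex)                   # feedback filter
decisions = np.zeros(n, dtype=complex)
mu0, decay = 0.05, 0.999                               # exponentially decaying step size (assumed form)

errors = []
for k in range(ff_len, n):
    xf = rx[k - ff_len + 1:k + 1][::-1]                # feedforward regressor
    xb = decisions[k - fb_len:k][::-1]                 # past decisions (feedback regressor)
    y = np.dot(wf.conj(), xf) - np.dot(wb.conj(), xb)
    d = (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)   # QPSK slicer
    decisions[k] = d
    e = tx[k] - y                                      # training error (known symbols assumed)
    mu = mu0 * decay ** k                              # exponential step-size schedule
    wf += mu * np.conj(e) * xf                         # LMS updates
    wb -= mu * np.conj(e) * xb
    errors.append(abs(e) ** 2)

print("MSE first 500:", np.mean(errors[:500]), "MSE last 500:", np.mean(errors[-500:]))
```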

1461 Evaluating the Standards of Hospital Pharmacies in Therapeutic Centers Affiliated with Kermanshah University of Medical Sciences, Iran

Authors: Tahvilian R., Siahi Shadbad MR., Hamishehkar H., Aghababa Gharehbagh V.

Abstract:

Nowadays, pharmaceutical care departments located in hospitals are among the important pillars of the healthcare system. The aim of this study was to evaluate the quality of the hospital drugstores affiliated with Kermanshah University of Medical Sciences. In this cross-sectional study a validated questionnaire was used. The questionnaire was filled in by one of the researchers in all seventeen hospital drugstores located in the teaching and non-teaching hospitals affiliated with Kermanshah University of Medical Sciences. The results show that, in the observed hospitals, full compliance with the standards was found for 24% of pharmacy environments, 25% of pharmacy store and storage conditions, 49% of storage procedures, 25% of ordering of drugs and supplies, 73% of receiving supplies (proper procedures are followed for receiving supplies), 35% of receiving supplies (prompt action taken if deterioration of received drugs is suspected), 23.35% of drug delivery to patients, and finally 0% of the use of stock cards for proper inventory control.

Keywords: Hospital pharmacy standards, Kermanshah, pharmacy management
