Search results for: Proof of Work algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7326

3546 Gabriel-constrained Parametric Surface Triangulation

Authors: Oscar E. Ruiz, Carlos Cadavid, Juan G. Lalinde, Ricardo Serrano, Guillermo Peris-Fajarnes

Abstract:

The Boundary Representation of a 3D manifold contains FACES (connected subsets of a parametric surface S : R² → R³). In many science and engineering applications it is cumbersome and algebraically difficult to deal with the polynomial set and constraints (LOOPs) representing the FACE. For this reason, a Piecewise Linear (PL) approximation of the FACE is needed, which is usually represented in terms of triangles (i.e. 2-simplices). Solving the problem of FACE triangulation requires producing quality triangles which are: (i) independent of the arguments of S, (ii) sensitive to the local curvatures, (iii) compliant with the boundaries of the FACE, and (iv) topologically compatible with the triangles of the neighboring FACEs. In the existing literature there are no guarantees for point (iii). This article contributes to the topic of triangulations conforming to the boundaries of the FACE by applying the concept of the parameter-independent Gabriel complex, which improves the correctness of the triangulation regarding aspects (iii) and (iv). In addition, the article applies the geometric concept of a tangent ball to a surface at a point to address points (i) and (ii). Additional research is needed in algorithms that (i) take advantage of the concepts presented in the heuristic algorithm proposed and (ii) can be proved correct.
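
As an illustration of the Gabriel condition that drives a conforming triangulation, the following minimal sketch (not the authors' implementation) tests whether an edge is Gabriel, i.e. whether the open ball having the edge as its diameter contains no other surface sample:

```python
import numpy as np

def is_gabriel_edge(p, q, samples, eps=1e-12):
    """Gabriel condition for edge (p, q): the open ball with pq as its
    diameter must contain no other sample point."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    center = (p + q) / 2.0
    radius2 = np.sum((p - q) ** 2) / 4.0
    for s in samples:
        s = np.asarray(s, float)
        if np.allclose(s, p) or np.allclose(s, q):
            continue
        if np.sum((s - center) ** 2) < radius2 - eps:
            return False
    return True

# Toy samples on a surface: the long edge fails because the third point
# lies inside its diametral ball, the short edge passes.
pts = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 0.1, 0.0)]
print(is_gabriel_edge(pts[0], pts[1], pts))   # False
print(is_gabriel_edge(pts[0], pts[2], pts))   # True
```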

Keywords: surface triangulation, conforming triangulation, surface sampling, Gabriel complex.

3545 On the Mechanism of Broadening of the Optical Spectrum of a Solvated Electron in Ammonia

Authors: V.K. Mukhomorov

Abstract:

The solvated electron is self-trapped (polaron) owing to strong interaction with the quantum polarization field. If the electron and quantum field are strongly coupled, then a collective localized state of the field and quasi-particle is formed. In such a formation the electron motion is rather intricate. On the one hand, the electron oscillates within a rather deep polarization potential well and undergoes optical transitions; on the other, it moves together with the center of inertia of the system and participates in the thermal random walk. The problem is to separate these motions correctly, rigorously taking into account the conservation laws. This can be conveniently done using the Bogolyubov-Tyablikov method of canonical transformation to collective coordinates. This transformation removes the translational degeneracy and allows one to develop a successive approximation algorithm for the energy and wave function while simultaneously fulfilling the law of conservation of the total momentum of the system. The resulting equations determine the electron transitions and depend explicitly on the translational velocity of the quasi-particle as a whole. The frequency of the optical transition is calculated for the solvated electron in ammonia, and an estimate is made for the thermally induced spectral bandwidth.

Keywords: Canonical transformations, solvated electron, width of the optical spectrum.

3544 A Neuron Model of Facial Recognition and Detection of an Authorized Entity Using a Machine Learning System

Authors: J. K. Adedeji, M. O. Oyekanmi

Abstract:

This paper critically examines the use of machine learning procedures in curbing unauthorized access to valuable areas of an organization. The use of passwords, PIN codes and user identification has in recent times been only partially successful in curbing identity-related crimes, hence the need for a system which incorporates biometric characteristics such as DNA and pattern recognition of variations in facial expressions. The facial model used is the OpenCV library, which is based on certain physiological features; the Raspberry Pi 3 module is used to compile the OpenCV library, which extracts and stores the detected faces in the datasets directory through the use of a camera. The model is trained with a 50-epoch run on the database and recognized by the Local Binary Pattern Histogram (LBPH) recognizer contained in OpenCV. The training algorithm used by the neural network is backpropagation, coded in Python with 200 epoch runs to identify specific resemblance in the exclusive OR (XOR) output neurons. The research confirmed that physiological parameters are more effective measures for curbing identity-related crimes.
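
A minimal sketch of the detection-plus-LBPH pipeline described above, using OpenCV's stock Haar cascade and the contrib-module LBPH recognizer; the synthetic patches below merely stand in for the faces stored in the datasets directory:

```python
import cv2
import numpy as np

# Face detector: Haar cascade shipped with OpenCV
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame_bgr):
    """Return cropped, resized grayscale face patches found in a frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(gray[y:y+h, x:x+w], (100, 100)) for (x, y, w, h) in boxes]

# LBPH recognizer lives in the contrib module (pip install opencv-contrib-python)
recognizer = cv2.face.LBPHFaceRecognizer_create()

# Synthetic patches stand in for the stored dataset images here
rng = np.random.default_rng(0)
faces = [rng.integers(0, 256, (100, 100), dtype=np.uint8) for _ in range(4)]
labels = np.array([0, 0, 1, 1], dtype=np.int32)
recognizer.train(faces, labels)

label, confidence = recognizer.predict(faces[2])   # lower confidence = closer match
print(label, confidence)
```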

Keywords: Biometric characteristics, facial recognition, neural network, OpenCV.

3543 Investigation of SSR Characteristics of SSSC with GA-Based Voltage Controller

Authors: R. Thirumalaivasan, M. Janaki, Nagesh Prabhu

Abstract:

In this paper, an investigation of the subsynchronous resonance (SSR) characteristics of a hybrid series compensated system and the design of a voltage controller for a three-level 24-pulse Voltage Source Converter based Static Synchronous Series Compensator (SSSC) are presented. Hybrid compensation consists of a series fixed capacitor and the SSSC, which is an active series FACTS controller. The design of the voltage controller for the SSSC is based on damping torque analysis, and a Genetic Algorithm (GA) is adopted for tuning the controller parameters. The SSR characteristics of the SSSC with the constant reactive voltage control mode have been investigated. The results show that constant reactive voltage control of the SSSC has the effect of reducing the electrical resonance frequency, which detunes the SSR. The analysis of SSR with the SSSC is carried out using a frequency domain method, eigenvalue analysis and transient simulation. While the eigenvalue and damping torque analyses are based on the D-Q model of the SSSC, the transient simulation considers both the D-Q and a detailed three-phase nonlinear system model using switching functions.

Keywords: FACTS, SSR, SSSC, damping torque, GA.

3542 PAPR Reduction Method for OFDM Signal by Using Dummy Sub-carriers

Authors: Pisit Boonsrimuang, Arjin Numsomran, Tawil Paungma, Hideo Kobayashi

Abstract:

One of the disadvantages of using OFDM is the large peak-to-average power ratio (PAPR) of its time domain signal. A large-PAPR signal causes fatal degradation of the bit error rate (BER) performance due to inter-modulation noise in the nonlinear channel. This paper proposes an improved DSI (Dummy Sequence Insertion) method, which can achieve better PAPR and BER performances. The feature of the proposed method is to optimize the phase of each dummy sub-carrier so as to reduce the PAPR by changing all predetermined phase coefficients in the time domain signal, which is calculated for data sub-carriers and dummy sub-carriers separately. To achieve better PAPR performance, this paper also proposes employing a time-frequency domain swapping algorithm for fine adjustment of the phase coefficients of the dummy sub-carriers, which requires less processing complexity and achieves better PAPR and BER performances than the conventional DSI method. This paper presents various computer simulation results to verify the effectiveness of the proposed method in comparison with conventional methods in the nonlinear channel.
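
To make the PAPR mechanism concrete, the sketch below measures the PAPR of a QPSK OFDM symbol and tunes the phases of a few dummy sub-carriers. A plain random search stands in for the paper's time-frequency domain swapping optimizer, and the 56/8 carrier split is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, N_DUMMY = 64, 8                 # 56 data + 8 dummy sub-carriers (assumed split)

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

data = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], N - N_DUMMY)   # QPSK payload

baseline = papr_db(np.fft.ifft(np.concatenate([data, np.ones(N_DUMMY)])))

best = np.inf
for _ in range(200):               # random phase search stands in for the
    phases = np.exp(2j * np.pi * rng.random(N_DUMMY))        # paper's optimizer
    x = np.fft.ifft(np.concatenate([data, phases]))
    best = min(best, papr_db(x))

print(f"baseline PAPR {baseline:.2f} dB -> with tuned dummies {best:.2f} dB")
```

The dummy carriers carry no payload, so the receiver simply discards them; the only cost is the lost spectral efficiency of the reserved sub-carriers.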

Keywords: OFDM, PAPR, dummy sub-carriers, non-linear channel.

3541 Blind Channel Estimation for Frequency Hopping System Using Subspace Based Method

Authors: M. M. Qasaymeh, M. A. Khodeir

Abstract:

Subspace channel estimation methods have been studied widely, where the subspace of the covariance matrix is decomposed to separate the signal subspace from the noise subspace. The decomposition is normally done using either the eigenvalue decomposition (EVD) or the singular value decomposition (SVD) of the auto-correlation matrix (ACM). However, the subspace decomposition process is computationally expensive. This paper considers the estimation of the multipath slow frequency hopping (FH) channel using a noise-subspace based method. In particular, an efficient method is proposed to estimate the multipath time delays by applying the multiple signal classification (MUSIC) algorithm based on the null space extracted by the rank revealing LU (RRLU) factorization. The RRLU provides precise information about the numerical null space and the rank, which makes it an important tool in linear algebra. The simulation results demonstrate the effectiveness of the proposed method, which approximately halves the computational complexity compared with RRQR-based methods while keeping the same performance.
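
A compact sketch of MUSIC-based delay estimation under a frequency-domain model (hop frequencies, delays and noise level below are illustrative assumptions). For simplicity the noise subspace is taken from an EVD of the ACM; the paper's contribution is to obtain it more cheaply from the RRLU factorization:

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(1)
K = 32                                    # hop frequencies
f = np.arange(K) * 1e6                    # 1 MHz spacing (illustrative)
tau_true = np.array([0.2e-6, 0.7e-6])     # two multipath delays (s)
steer = lambda tau: np.exp(-2j * np.pi * np.outer(f, tau))   # K x P steering

A = steer(tau_true)                       # simulate 200 snapshots
S = rng.standard_normal((2, 200)) + 1j * rng.standard_normal((2, 200))
noise = 0.1 * (rng.standard_normal((K, 200)) + 1j * rng.standard_normal((K, 200)))
X = A @ S + noise

R = X @ X.conj().T / X.shape[1]           # auto-correlation matrix (ACM)
w, V = np.linalg.eigh(R)                  # EVD here for simplicity; the paper
En = V[:, :K - 2]                         # extracts this null space via RRLU

taus = np.linspace(0, 1e-6, 1000, endpoint=False)   # within 1/df ambiguity range
spectrum = 1.0 / np.linalg.norm(En.conj().T @ steer(taus), axis=0) ** 2
peaks, _ = find_peaks(spectrum)
print("estimated delays (s):", np.sort(taus[peaks[np.argsort(spectrum[peaks])[-2:]]]))
```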

Keywords: Time Delay Estimation, RRLU, RRQR, MUSIC, LS-ESPRIT, Frequency Hopping.

3540 Production Throughput Modeling under Five Uncertain Variables Using Bayesian Inference

Authors: Amir Azizi, Amir Yazid B. Ali, Loh Wei Ping

Abstract:

Throughput is an important measure of the performance of a production system. Analyzing and modeling production throughput is complex in today's dynamic production systems due to their uncertainties. The main reason is that uncertainties materialize when the production line faces changes in setup time, machinery breakdown, manufacturing lead time, and scrap. Besides, demand fluctuates from time to time for each product type. These uncertainties affect production performance. This paper proposes Bayesian inference for throughput modeling under five production uncertainties. The Bayesian model utilized prior distributions related to previous information about the uncertainties, while likelihood distributions are associated with the observed data. The Gibbs sampling algorithm, as a robust Markov chain Monte Carlo procedure, was employed for sampling the unknown parameters and estimating the posterior mean of the uncertainties. The Bayesian model was validated with respect to convergence and efficiency of its outputs. The results showed that the proposed Bayesian model was capable of predicting the production throughput with an accuracy of 98.3%.
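
As a toy illustration of the Gibbs machinery (not the paper's five-variable model), the sketch below samples the posterior of a normal throughput model with conjugate priors, alternating the two full-conditional draws; data and hyperparameters are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(120.0, 8.0, 50)          # observed daily throughput (synthetic)

# Assumed priors: mu ~ N(m0, 1/t0), tau ~ Gamma(a0, b0)
m0, t0, a0, b0 = 100.0, 1e-4, 0.01, 0.01
n, ybar = len(y), y.mean()

mu, tau = ybar, 1.0
samples = []
for it in range(5000):
    # mu | tau, y : normal full conditional (conjugate update)
    prec = t0 + n * tau
    mean = (t0 * m0 + n * tau * ybar) / prec
    mu = rng.normal(mean, 1.0 / np.sqrt(prec))
    # tau | mu, y : gamma full conditional (conjugate update)
    tau = rng.gamma(a0 + n / 2.0, 1.0 / (b0 + 0.5 * np.sum((y - mu) ** 2)))
    if it >= 1000:                      # discard burn-in
        samples.append(mu)

print(f"posterior mean throughput: {np.mean(samples):.2f}")
```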

Keywords: Bayesian inference, Uncertainty modeling, Markov chain Monte Carlo, Gibbs sampling, Production throughput.

3539 Implementation of Edge Detection Based on Autofluorescence Endoscopic Images on a Field Programmable Gate Array

Authors: Hao Cheng, Zhiwu Wang, Guozheng Yan, Pingping Jiang, Shijia Qin, Shuai Kuang

Abstract:

Autofluorescence Imaging (AFI) is a technology developed in recent years for detecting early carcinogenesis of the gastrointestinal tract. Compared with traditional white light endoscopy (WLE), this technology greatly improves the detection accuracy of early carcinogenesis, because the colors of normal tissues differ from those of cancerous tissues; thus, edge detection can distinguish them in grayscale images. In this paper, the traditional Sobel edge detection method is optimized for the gastrointestinal environment, including adaptive thresholding and morphological processing. All of the processing is implemented on our self-designed system based on the image sensor OV6930 and a Field Programmable Gate Array (FPGA). The system can capture the gastrointestinal image taken by the lens in real time and detect edges. The final experiments verified the feasibility of our system and the effectiveness and accuracy of the edge detection algorithm.
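
A software prototype of the described pipeline (the paper implements it in FPGA logic): Sobel gradients, an image-adaptive threshold (Otsu's method stands in for the paper's adaptive scheme), and morphological clean-up. The random frame stands in for an OV6930 capture:

```python
import cv2
import numpy as np

def detect_edges(gray):
    """Sobel gradients -> adaptive (Otsu) threshold -> morphological clean-up."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.convertScaleAbs(np.sqrt(gx ** 2 + gy ** 2))   # back to 8-bit
    # Otsu picks the threshold from the image itself, adapting to lighting
    _, edges = cv2.threshold(mag, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Opening removes isolated speckle; closing bridges small gaps
    k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    edges = cv2.morphologyEx(edges, cv2.MORPH_OPEN, k)
    return cv2.morphologyEx(edges, cv2.MORPH_CLOSE, k)

frame = np.random.randint(0, 256, (240, 320), np.uint8)  # stand-in capture
print(detect_edges(frame).shape)
```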

Keywords: AFI, edge detection, adaptive threshold, morphological processing, OV6930, FPGA.

3538 Scaling up Detection Rates and Reducing False Positives in Intrusion Detection using NBTree

Authors: Dewan Md. Farid, Nguyen Huu Hoa, Jerome Darmont, Nouria Harbi, Mohammad Zahidur Rahman

Abstract:

In this paper, we present a new learning algorithm for anomaly-based network intrusion detection using an improved self-adaptive naïve Bayesian tree (NBTree), which induces a hybrid of a decision tree and a naïve Bayesian classifier. The proposed approach balances the detection rates for different attack types and keeps false positives at an acceptable level in intrusion detection. In complex and dynamic large intrusion detection datasets, the detection accuracy of the naïve Bayesian classifier does not scale up as well as that of the decision tree. It has been successfully demonstrated in other problem domains that the naïve Bayesian tree improves classification rates on large datasets. In a naïve Bayesian tree, nodes contain and split attributes as in regular decision trees, but the leaves contain naïve Bayesian classifiers. The experimental results on the KDD99 benchmark network intrusion detection dataset demonstrate that this new approach scales up the detection rates for different attack types and reduces false positives in network intrusion detection.
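
The hybrid structure is easy to sketch: a shallow decision tree routes samples to leaves, and each leaf holds its own naïve Bayesian classifier. The following is an illustrative approximation with scikit-learn, not the paper's self-adaptive NBTree induction:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_classes=2, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
leaf_ids = tree.apply(X)                       # which leaf each sample lands in
leaf_models = {}
for leaf in np.unique(leaf_ids):
    mask = leaf_ids == leaf
    if len(np.unique(y[mask])) > 1:            # NB needs at least two classes
        leaf_models[leaf] = GaussianNB().fit(X[mask], y[mask])

def predict(Xq):
    """Route each query through the tree, then classify with the leaf's NB;
    pure leaves fall back to the tree's own majority prediction."""
    leaves = tree.apply(Xq)
    out = np.empty(len(Xq), dtype=int)
    for i, (x, leaf) in enumerate(zip(Xq, leaves)):
        model = leaf_models.get(leaf)
        out[i] = model.predict(x[None])[0] if model else tree.predict(x[None])[0]
    return out

print("train accuracy:", (predict(X) == y).mean())
```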

Keywords: Detection rates, false positives, network intrusion detection, naïve Bayesian tree.

3537 Feature Extraction from Aerial Photos

Authors: Mesut Gündüz, Ferruh Yildiz, Ayşe Onat

Abstract:

In Geographic Information Systems, sources of needed geographic data include digitizing analog maps and evaluating aerial and satellite photos. In this study, a method is discussed which can be used to extract vector features from aerial photos and create vectorized drawing files, together with software developed for this purpose. Converting from raster to vector is known as vectorization, and it is the most important step when creating vectorized drawing files. In the developed algorithm, preprocessing is first performed on the aerial photo: converting to grayscale if necessary, reducing noise, applying some filters, determining the edges of objects, etc. After these steps, every pixel constituting the photo is followed from upper left to lower right by examining its neighborhood relationships, and one-pixel-wide lines or polylines are obtained. The traced lines have to be marked to prevent confusion while continuing vectorization: if not marked they can be perceived as new lines, but if simply erased the result can be discontinuities in the vector drawing. Therefore, the image is converted from 2-bit to 8-bit and the traced pixels are expressed with a different value. In conclusion, the aerial photo can be converted to a vector form which includes lines and polylines and can be opened in any CAD application.
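
The pixel-following step and the 2-bit-to-8-bit marking trick can be sketched as below; this is a simplified greedy walk assuming pre-thinned, one-pixel-wide lines, not the full algorithm:

```python
import numpy as np

def trace_polylines(img):
    """Follow 1-pixel-wide foreground lines into polylines. Visited pixels
    are re-marked with a distinct value instead of being erased, mirroring
    the 2-bit -> 8-bit trick that avoids both re-detection and gaps."""
    FG, VISITED = 1, 2
    work = img.astype(np.uint8).copy()           # widen to an 8-bit working copy
    nbrs = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]
    polylines = []
    for r0 in range(work.shape[0]):              # scan upper-left to lower-right
        for c0 in range(work.shape[1]):
            if work[r0, c0] != FG:
                continue
            line, (r, c) = [(r0, c0)], (r0, c0)
            work[r0, c0] = VISITED
            while True:                          # greedy 8-neighbour walk
                step = next(((r + dr, c + dc) for dr, dc in nbrs
                             if 0 <= r + dr < work.shape[0]
                             and 0 <= c + dc < work.shape[1]
                             and work[r + dr, c + dc] == FG), None)
                if step is None:
                    break
                work[step] = VISITED
                line.append(step)
                r, c = step
            polylines.append(line)
    return polylines

img = np.zeros((5, 5), np.uint8)
img[2, 1:4] = 1                                  # a short horizontal line
print(trace_polylines(img))                      # [[(2, 1), (2, 2), (2, 3)]]
```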

Keywords: Vectorization, Aerial Photos, Vectorized Drawing File.

3536 An Efficient Stud Krill Herd Framework for Solving Non-Convex Economic Dispatch Problem

Authors: Bachir Bentouati, Lakhdar Chaib, Saliha Chettih, Gai-Ge Wang

Abstract:

The economic dispatch (ED) problem is a basic problem of the power framework; its main goal is to find the most favorable generation dispatch for each unit, reduce the whole power generation cost, and meet all system limitations. A recently developed heuristic algorithm called Stud Krill Herd (SKH) has been employed in this paper to treat non-convex ED problems. The proposed KH has been modified using the stud selection and crossover (SSC) operator to enhance the solution quality and avoid local optima. We demonstrate the effectiveness of SKH on two case studies composed of 13-unit and 40-unit test systems to verify its performance and applicability in solving ED problems. In the above systems, SKH can successfully obtain the best fuel cost and distribute the load requirements among the online generators. The results showed that the use of the proposed SKH method could reduce the total cost of generation and optimize the fulfillment of the load requirements.

Keywords: Stud Krill Herd, economic dispatch, crossover, stud selection, valve-point effect.

3535 Objective Assessment of Psoriasis Lesion Thickness for PASI Scoring using 3D Digital Imaging

Authors: M.H. Ahmad Fadzil, Hurriyatul Fitriyah, Esa Prakasa, Hermawan Nugroho, S.H. Hussein, Azura Mohd. Affandi

Abstract:

Psoriasis is a chronic inflammatory skin condition which affects 2-3% of the population around the world. The Psoriasis Area and Severity Index (PASI) is the gold standard for assessing psoriasis severity as well as treatment efficacy. Although a gold standard, PASI is rarely used because it is tedious and complex. In practice, the PASI score is determined subjectively by dermatologists; therefore inter- and intra-rater variations of assessment can occur even among expert dermatologists. This research develops an algorithm to assess psoriasis lesions for PASI scoring objectively. The focus of this research is thickness assessment, one of the four PASI parameters besides area, erythema and scaliness. Psoriasis lesion thickness is measured by averaging the total elevation from the lesion base to the lesion surface. Thickness values of 122 3D images taken from 39 patients are grouped into the 4 PASI thickness scores using K-means clustering. Validation of the lesion base construction is performed using twelve body curvature models and shows good results, with a coefficient of determination (R²) equal to 1.
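
The grouping step can be reproduced with any K-means implementation; the sketch below clusters synthetic elevation values into four groups and maps them to PASI thickness scores by sorting the centroids (the data are illustrative, not the study's):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
thickness_mm = rng.gamma(2.0, 0.25, 122).reshape(-1, 1)  # synthetic lesion elevations

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(thickness_mm)

# Map clusters to PASI thickness scores 1..4 by sorting the centroids:
# the thinnest cluster gets score 1, the thickest gets score 4.
order = np.argsort(km.cluster_centers_.ravel())
score_of_cluster = {cluster: score + 1 for score, cluster in enumerate(order)}
scores = np.array([score_of_cluster[c] for c in km.labels_])
print("lesions per score:", np.bincount(scores)[1:])
```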

Keywords: 3D digital imaging, base construction, PASI, psoriasis lesion thickness.

3534 Automated Heart Sound Classification from Unsegmented Phonocardiogram Signals Using Time Frequency Features

Authors: Nadia Masood Khan, Muhammad Salman Khan, Gul Muhammad Khan

Abstract:

Cardiologists perform cardiac auscultation to detect abnormalities in heart sounds. Since accurate auscultation is a crucial first step in screening patients with heart diseases, there is a need to develop computer-aided detection/diagnosis (CAD) systems to assist cardiologists in interpreting heart sounds and provide second opinions. In this paper, different algorithms are implemented for automated heart sound classification using unsegmented phonocardiogram (PCG) signals. Support vector machine (SVM), artificial neural network (ANN) and cartesian genetic programming evolved artificial neural network (CGPANN), without the application of any segmentation algorithm, are explored in this study. The signals are first pre-processed to remove any unwanted frequencies. Both time and frequency domain features are then extracted for training the different models. The different algorithms are tested in multiple scenarios and their strengths and weaknesses are discussed. Results indicate that SVM outperforms the rest with an accuracy of 73.64%.
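
A minimal sketch of the unsegmented feature-plus-SVM route: a handful of time- and frequency-domain descriptors per record feeding an RBF SVM. The feature set, sampling rate and synthetic records below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 2000                                   # assumed PCG sampling rate (Hz)

def features(x):
    """A few simple time- and frequency-domain descriptors of one record."""
    f, pxx = welch(x, fs=FS, nperseg=256)
    return [x.std(),                        # overall energy
            np.abs(np.diff(x)).mean(),      # roughness
            ((x[:-1] * x[1:]) < 0).mean(),  # zero-crossing rate
            np.sum(f * pxx) / np.sum(pxx),  # spectral centroid
            f[np.argmax(pxx)]]              # dominant frequency

# Synthetic stand-ins for normal (0) / abnormal (1) unsegmented recordings
rng = np.random.default_rng(4)
X, y = [], []
for label in [0, 1] * 100:
    x = rng.standard_normal(4000) * (1.0 + 0.5 * label)
    X.append(features(x))
    y.append(label)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```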

Keywords: Pattern recognition, machine learning, computer aided diagnosis, heart sound classification, feature extraction.

3533 Alignment of Emission Gamma Ray Sources with NaI(Tl) Scintillation Detectors by Two Laser Beams prior to Operation Using the Alternating Minimization Technique

Authors: Abbas Ali Mahmood Karwi

Abstract:

Accurate timing alignment and stability are important to maximize the true counts and minimize the random counts in positron emission tomography. The signal outputs from the detectors must therefore be centered on the two isotopes before operation and fed into four pulse-processing units, each of which can accept up to eight inputs. The dual source computed tomography setup consists of two units on the left for the 15 detector signals of the Cs-137 isotope and two units on the right for the 15 detector signals of the Co-60 isotope. The gamma spectrum consists of either single or multiple photo peaks. This allows the use of energy discrimination electronic hardware associated with the data acquisition system to acquire photon count data at a specific energy, even if detectors with poor energy resolution are used. This also helps to avoid counting Compton scatter counts, especially if a single discrete gamma photo peak is emitted by the source, as in the case of Cs-137. In this study, the polyenergetic version of the alternating minimization algorithm is applied to the dual energy gamma computed tomography problem.

Keywords: Alignment, Spectrum, Laser, Detectors, Image

3532 Experimental Simulation Set-Up for Validating Out-Of-The-Loop Mitigation when Monitoring High Levels of Automation in Air Traffic Control

Authors: Oliver Ohneiser, Francesca De Crescenzio, Gianluca Di Flumeri, Jan Kraemer, Bruno Berberian, Sara Bagassi, Nicolina Sciaraffa, Pietro Aricò, Gianluca Borghini, Fabio Babiloni

Abstract:

An increasing degree of automation in air traffic will also change the role of the air traffic controller (ATCO). ATCOs will fulfill significantly more monitoring tasks compared to today. However, this rather passive role may lead to Out-Of-The-Loop (OOTL) effects comprising vigilance decrement and reduced situation awareness. The project MINIMA (Mitigating Negative Impacts of Monitoring high levels of Automation) has conceived a system to control and mitigate such OOTL phenomena. In order to demonstrate the MINIMA concept, an experimental simulation set-up has been designed. This set-up consists of two parts: 1) a Task Environment (TE) comprising a Terminal Maneuvering Area (TMA) simulator and 2) a Vigilance and Attention Controller (VAC) based on neurophysiological data recording devices such as electroencephalography (EEG) and eye tracking. The current vigilance level and the attention focus of the controller are measured during the ATCO's active work in front of the human machine interface (HMI). The derived vigilance level and attention focus trigger adaptive automation functionalities in the TE to avoid OOTL effects. This paper describes the full-scale experimental set-up and the component development work towards it. Hence, it encompasses a pre-test whose results influenced the development of the VAC as well as the functionalities of the final TE and the VAC's two sub-components.

Keywords: Automation, human factors, air traffic controller, MINIMA, OOTL, Out-Of-The-Loop, EEG, electroencephalography, HMI, human machine interface.

3531 On Developing an Automatic Speech Recognition System for Standard Arabic Language

Authors: R. Walha, F. Drira, H. El-Abed, A. M. Alimi

Abstract:

Automatic Speech Recognition (ASR) applied to the Arabic language is a challenging task. This is mainly related to the language's specificities, which confront researchers with multiple difficulties such as insufficient linguistic resources and the very limited number of available transcribed Arabic speech corpora. In this paper, we are interested in the development of an HMM-based ASR system for the Standard Arabic (SA) language. Our fundamental research goal is to select the most appropriate acoustic parameters describing each audio frame, acoustic models and speech recognition unit. To achieve this purpose, we analyze the effect of varying the frame windowing (size and period), the number of acoustic parameters resulting from feature extraction methods traditionally used in ASR, the speech recognition unit, the number of Gaussians per HMM state, and the number of embedded re-estimations of the Baum-Welch algorithm. To evaluate the proposed ASR system, a multi-speaker SA connected-digits corpus is collected, transcribed and used throughout all experiments. A further evaluation is conducted on a speaker-independent continuous SA speech corpus. The phoneme recognition rate is 94.02%, which is relatively high compared with another ASR system evaluated on the same corpus.
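
The recognition scheme can be sketched with hmmlearn: one Gaussian HMM per recognition unit (word or phoneme), with the highest-likelihood model winning. Synthetic 13-dimensional frames below stand in for real MFCC-style acoustic parameters; this is a minimal sketch, not the paper's system:

```python
import numpy as np
from hmmlearn import hmm          # pip install hmmlearn

rng = np.random.default_rng(6)

def train_model(sequences):
    """Fit one Gaussian HMM on a list of (frames x features) sequences."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    m = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=20)
    return m.fit(X, lengths)

# Two "digits" with different feature statistics, 20 utterances each
seqs0 = [rng.standard_normal((40, 13)) for _ in range(20)]
seqs1 = [rng.standard_normal((40, 13)) + 2.0 for _ in range(20)]
models = [train_model(seqs0), train_model(seqs1)]

test = rng.standard_normal((40, 13)) + 2.0       # should be digit "1"
scores = [m.score(test) for m in models]         # log-likelihood per model
print("recognised digit:", int(np.argmax(scores)))
```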

Keywords: ASR, HMM, acoustical analysis, acoustic modeling, Standard Arabic language

3530 The Latency-Amplitude Binomial of Waves Resulting from the Application of Evoked Potentials for the Diagnosis of Dyscalculia

Authors: Maria Isabel Garcia-Planas, Maria Victoria Garcia-Camba

Abstract:

Recent advances in cognitive neuroscience have allowed a step forward in understanding the processes involved in learning, from the point of view of acquiring new information or modifying existing mental content. The evoked potentials technique reveals how basic brain processes interact to achieve adequate and flexible behaviours. The objective of this work, using evoked potentials, is to study whether it is possible to distinguish if a patient suffers from a specific type of learning disorder, in order to decide on the possible therapies to follow. The methodology used in this work is to analyze the dynamics of different brain areas during a cognitive activity and to find the relationships among the analyzed areas, so as to better understand the functioning of neural networks. The latest advances in neuroscience have also revealed the existence of distinct brain activity in the learning process that can be highlighted through the use of non-invasive, innocuous, low-cost and easy-access techniques such as, among others, evoked potentials, which can help in the early detection of possible neurodevelopmental difficulties for their subsequent assessment and therapy. From the study of the amplitudes and latencies of the evoked potentials, it is possible to detect brain alterations in the learning process, specifically in dyscalculia, in order to apply specific corrective measures through personalized psycho-pedagogical plans that allow an optimal integral development of the affected people.

Keywords: dyscalculia, neurodevelopment, evoked potentials, learning disabilities, neural networks

3529 A Simplified Approach for Load Flow Analysis of Radial Distribution Network

Authors: K. Vinoth Kumar, M.P. Selvan

Abstract:

This paper presents a simple approach for load flow analysis of a radial distribution network. The proposed approach utilizes a forward and backward sweep algorithm based on Kirchhoff's current law (KCL) and Kirchhoff's voltage law (KVL) for evaluating the node voltages iteratively. In this approach, the computation of a branch current depends only on the current injected at the neighbouring node and the current in the adjacent branch. The approach starts from the end nodes of the sub-lateral, lateral and main lines and moves towards the root node during branch current computation. The node voltage evaluation begins from the root node and moves towards the nodes located at the far ends of the main, lateral and sub-lateral lines. The proposed approach has been tested using four radial distribution systems of different size and configuration and found to be computationally efficient.
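
A minimal sketch of the forward-backward sweep on a small radial feeder with constant-power loads; the topology, impedances and per-unit values are assumed for illustration:

```python
import numpy as np

# A 4-node radial feeder: node 0 is the root (substation).
parent = [None, 0, 1, 1]                              # radial topology
z = [0, 0.02 + 0.04j, 0.03 + 0.05j, 0.025 + 0.045j]   # branch impedance to parent (pu)
s_load = [0, 0.8 + 0.3j, 0.5 + 0.2j, 0.6 + 0.25j]     # constant-power loads (pu)

v = np.ones(4, dtype=complex)                         # flat start
for _ in range(50):
    i_inj = np.conj(np.array(s_load) / v)             # nodal load currents, I = (S/V)*
    i_branch = i_inj.copy()
    for n in range(3, 0, -1):                         # backward sweep (KCL): leaf -> root
        i_branch[parent[n]] += i_branch[n]
    v_new = v.copy()
    for n in range(1, 4):                             # forward sweep (KVL): root -> leaf
        v_new[n] = v_new[parent[n]] - z[n] * i_branch[n]
    delta = np.max(np.abs(v_new - v))
    v = v_new
    if delta < 1e-8:
        break

print(np.round(np.abs(v), 4))                         # converged node voltage magnitudes
```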

Keywords: constant current load, constant impedance load, constant power load, forward–backward sweep, load flow analysis, radial distribution system.

3528 Optimum Time Coordination of Overcurrent Relays using Two Phase Simplex Method

Authors: Prashant P. Bedekar, Sudhir R. Bhide, Vijay S. Kale

Abstract:

Overcurrent (OC) relays are the major protection devices in a distribution system. The operating times of the OC relays are to be coordinated properly to avoid mal-operation of the backup relays. OC relay time coordination in ring-fed distribution networks is a highly constrained optimization problem which can be stated as a linear programming problem (LPP). The purpose is to find an optimum relay setting that minimizes the operating times of the relays while keeping the relays properly coordinated to avoid mal-operation. This paper presents a two-phase simplex method for optimum time coordination of OC relays. The method is based on the simplex algorithm, which is used to find the optimum solution of an LPP. The method introduces artificial variables to obtain an initial basic feasible solution (IBFS). The artificial variables are removed by the iterative process of the first phase, which minimizes the auxiliary objective function. The second phase minimizes the original objective function and gives the optimum time coordination of the OC relays.
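
The LPP structure can be illustrated as follows: once fault currents are fixed, an inverse-time relay's operating time is linear in its time dial setting (TDS), so minimizing total operating time subject to a coordination time interval (CTI) is a linear program. scipy's linprog stands in here for the paper's two-phase simplex; the coefficients and limits are assumed for illustration:

```python
from scipy.optimize import linprog

# Two relays: R1 (primary) backed up by R2, with operating times
# t1 = a1 * TDS1 and t2 = a2 * TDS2 at the common fault point.
a1, a2 = 2.0, 3.0                     # seconds per unit TDS (assumed)
CTI = 0.3                             # required coordination margin (s)

# Variables x = [TDS1, TDS2]; minimise total operating time a1*TDS1 + a2*TDS2
c = [a1, a2]
# Coordination: a2*TDS2 - a1*TDS1 >= CTI  ->  a1*TDS1 - a2*TDS2 <= -CTI
A_ub = [[a1, -a2]]
b_ub = [-CTI]
bounds = [(0.05, 1.1), (0.05, 1.1)]   # typical TDS limits

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("TDS settings:", res.x, " total operating time:", res.fun)
```

At the optimum the primary relay sits at its lower TDS bound and the backup is pushed just far enough above it to honour the CTI, which is exactly the behaviour the coordination constraints are meant to enforce.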

Keywords: Constrained optimization, LPP, Overcurrent relay coordination, Two-phase simplex method.

3527 Analysis of GI/M(n)/1/N Queue with Single Working Vacation and Vacation Interruption

Authors: P. Vijaya Laxmi, V. Goswami, V. Suchitra

Abstract:

This paper presents a finite buffer renewal input single working vacation and vacation interruption queue with state-dependent services and state-dependent vacations, which has a wide range of applications in several areas, including manufacturing and wireless communication systems. Service times during the busy period and the vacation period, as well as the vacation times, are exponentially distributed and state dependent. As a result of the finite waiting space, state-dependent services and state-dependent vacation policies, the analysis of these queueing models needs special attention. We provide a recursive method using the supplementary variable technique to compute the stationary queue length distributions at pre-arrival and arbitrary epochs. An efficient computational algorithm for the model is presented which is fast, accurate and easy to implement. Various performance measures are discussed. Finally, some special cases and numerical results are depicted in the form of tables and graphs.

Keywords: State Dependent Service, Vacation Interruption, Supplementary Variable, Single Working Vacation, Blocking Probability.

3526 Fuzzy Multiple Criteria Decision Making for Unmanned Combat Aircraft Selection Using Proximity Measure Method

Authors: C. Ardil

Abstract:

Intuitionistic fuzzy sets (IFS), Pythagorean fuzzy sets (PyFS), picture fuzzy sets (PFS), q-rung orthopair fuzzy sets (q-ROF), spherical fuzzy sets, T-spherical fuzzy sets, and neutrosophic sets (NS) are reviewed as multidimensional extensions of fuzzy sets in order to describe the opinions of decision-making experts under uncertainty more explicitly and informatively. To handle operations with standard fuzzy sets (SFS), the necessary operators, namely the weighted arithmetic mean (WAM), the weighted geometric mean (WGM), and the Minkowski distance function, are defined. The algorithm of the proposed proximity measure method (PMM) is provided as a multiple criteria group decision making (MCDM) method for use in a standard fuzzy set environment. To demonstrate the feasibility of the proposed method, the problem of selecting the best drone for an Air Force procurement request is used. The PMM based on multidimensional standard fuzzy sets is introduced to demonstrate its use on a problem involving unmanned combat aircraft selection.
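
A minimal sketch of the ranking ingredients named above: WAM and WGM aggregation plus a weighted Minkowski distance to a per-criterion ideal point. The scores and weights are illustrative, and the full PMM over multidimensional fuzzy memberships is not reproduced here:

```python
import numpy as np

# Decision matrix: rows = candidate aircraft, columns = criteria scores in [0, 1]
# (illustrative numbers, benefit criteria only)
X = np.array([[0.7, 0.8, 0.6],
              [0.9, 0.6, 0.7],
              [0.6, 0.7, 0.9]])
w = np.array([0.5, 0.3, 0.2])          # criteria weights, summing to 1

wam = X @ w                             # weighted arithmetic mean per alternative
wgm = np.prod(X ** w, axis=1)           # weighted geometric mean per alternative

def minkowski(a, b, p=2.0, weights=None):
    """Weighted Minkowski distance of order p between score vectors."""
    weights = np.ones_like(a) if weights is None else weights
    return np.sum(weights * np.abs(a - b) ** p) ** (1.0 / p)

ideal = X.max(axis=0)                   # per-criterion ideal point
proximity = np.array([minkowski(row, ideal, p=2, weights=w) for row in X])
ranking = np.argsort(proximity)         # closest to the ideal ranks first
print("WAM:", wam.round(3), "WGM:", wgm.round(3), "ranking:", ranking)
```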

Keywords: standard fuzzy sets (SFS), unmanned combat aircraft selection, multiple criteria decision making (MCDM), proximity measure method (PMM).

3525 Low Complexity Peak-to-Average Power Ratio Reduction in Orthogonal Frequency Division Multiplexing System by Simultaneously Applying Partial Transmit Sequence and Clipping Algorithms

Authors: V. Sudha, D. Sriram Kumar

Abstract:

Orthogonal Frequency Division Multiplexing (OFDM) has been used in many advanced wireless communication systems due to its high spectral efficiency and robustness to frequency selective fading channels. However, the major concern with OFDM systems is the high peak-to-average power ratio (PAPR) of the transmitted signal. Some of the popular techniques used for PAPR reduction in OFDM systems are conventional partial transmit sequences (CPTS) and clipping. In this paper, a parallel combination/hybrid scheme for PAPR reduction using the clipping and CPTS algorithms is proposed. The proposed method intelligently applies both algorithms in order to reduce both the PAPR and the computational complexity. The proposed scheme slightly degrades the bit error rate (BER) performance due to the clipping operation, and this degradation can be reduced by selecting an appropriate value of the clipping ratio (CR). The simulation results show that the proposed algorithm achieves significant PAPR reduction with much reduced computational complexity.
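
A toy version of the hybrid idea: an exhaustive phase search over a small number of adjacent PTS sub-blocks followed by amplitude clipping at a chosen clipping ratio. The block count and CR below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
N, V = 64, 4                                 # sub-carriers, PTS sub-blocks

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip(x, cr=1.4):
    """Amplitude clipping at CR times the RMS level, preserving phase."""
    limit = cr * np.sqrt(np.mean(np.abs(x) ** 2))
    mag = np.abs(x)
    scale = np.minimum(1.0, limit / np.maximum(mag, 1e-12))
    return x * scale

X = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], N)    # one QPSK OFDM symbol
blocks = X.reshape(V, N // V)                    # adjacent sub-block partition

best, best_x = np.inf, None
for phases in np.ndindex(*(4,) * V):             # exhaustive {1, j, -1, -j}^V
    rotated = (1j ** np.array(phases))[:, None] * blocks
    x = np.fft.ifft(rotated.ravel())             # the IFFT is linear, so rotating
    p = papr_db(x)                               # blocks pre-IFFT is equivalent to
    if p < best:                                 # combining per-block IFFT outputs
        best, best_x = p, x

print(f"after PTS: {best:.2f} dB, after PTS + clipping: {papr_db(clip(best_x)):.2f} dB")
```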

Keywords: CCDF, OFDM, PAPR, PTS.

3524 A Local Invariant Generalized Hough Transform Method for Integrated Circuit Visual Positioning

Authors: Fei Long Wei, Hua Yang, Hai Tao Zhang, Zhou Ping Yin

Abstract:

In this study, a local invariant generalized Hough transform (LI-GHT) method is proposed for integrated circuit (IC) visual positioning. The original generalized Hough transform (GHT) is robust to external noise; however, it is not suitable for visual positioning of IC chips because the four-dimensionality (4D) of its parameter space leads to substantial storage requirements and high computational complexity. The proposed LI-GHT method can reduce the dimensionality of the parameter space to 2D thanks to the rotational invariance of local invariant geometric features, and it can estimate the position and rotation angle of IC chips accurately and in real time under the influence of noise and blur. The experimental results show that the proposed LI-GHT can estimate the position and rotation angle of IC chips with high accuracy and at fast speed. The proposed LI-GHT algorithm was implemented in the IC visual positioning system of radio frequency identification (RFID) packaging equipment.

Keywords: Integrated circuit visual positioning, generalized Hough transform, local invariant generalized Hough transform, IC packaging equipment.

3523 Addressing Global Trauma: Somatic Interventions in PTSD Treatment and Clinician Burnout Prevention

Authors: Nina Kaufmans

Abstract:

Traditional treatments for post-traumatic stress disorder (PTSD) that rely primarily on oral narratives are partially insufficient to prevent PTSD symptoms from recurring. As a result of the global COVID-19 pandemic, war conflicts, and economic crises, a rising proportion of users of mental health services express somatically based distress in addition to their existing mental health symptoms. Furthermore, the rapid increase in demand for mental health services has resulted in substantial burnout among mental health professionals, which may further impact the quality of services provided and the sustainability of professional work-life balance. This article examines the implications of current developments and challenges in mental health services demand and the subsequent responses, as well as the effects of those responses on mental health professionals. The article reviews the neurobiological mechanisms underlying traumatic experiences, then discusses the premises for "bottom-up," or somatically oriented, psychotherapy approaches, and concludes with suggestions for clinical skills and interventions to be used by practitioners who work with clients diagnosed with PTSD. In addition, we examine how somatically based psychotherapy interventions performed in sessions might reduce clinician burnout and improve clinician well-being, and how incorporating somatically based therapies into counseling can boost the efficacy of mental health recovery and maintain remission while providing mental health practitioners with opportunities for self-care.

Keywords: Somatic psychotherapy interventions, trauma counseling, preventing and treating burnout, adults with PTSD, bottom-up skills, the effectiveness of trauma treatment.

3522 Proximate Composition and Textural Properties of Cooked Sausages Formulated from Mechanically Deboned Chicken Meat with Addition of Chicken Offal

Authors: Marija R. Jokanović, Vladimir M. Tomović, Mihajlo T. Jović, Branislav V. Šojić, Snežana B. Škaljac, Tatjana A. Tasić, Predrag M. Ikonić

Abstract:

Proximate composition (moisture, protein, total fat, and total ash) and textural characteristics (hardness, adhesiveness, springiness, cohesiveness, chewiness, firmness and work of shear) of cooked sausages formulated from mechanically deboned chicken meat (MDCM) with the addition of chicken offal (heart, gizzard or liver) were investigated. Chicken offal replaced an equal weight (15 kg) of MDCM in the standard sausage formulation. Regarding proximate composition, the sausage with heart addition was significantly (P<0.05) lower in moisture content (70.45%) than the sausage with liver addition (71.35%), and significantly (P<0.05) the highest in total ash content (2.83%). The sausage with gizzard addition was significantly higher in protein content (9.77%) than the sausage with liver addition (9.42%). Total fat content did not differ significantly (P>0.05) among the three sausages. The effect of offal addition was more notable in the Warner-Bratzler shear test results than in the texture profile analysis test. Firmness and work of shear were significantly different (P<0.05) among all three sausages. The sausage with liver addition was significantly (P<0.05) lower in hardness (1672 g) and chewiness (1020 g) and numerically the lowest in springiness (0.90) and adhesiveness (–70 g*s) compared with the other two sausages. The sausage with heart addition was significantly (P<0.05) higher in cohesiveness (0.74) compared with the other two sausages.

Keywords: Cooked sausage, mechanically deboned chicken meat, offal, proximate composition, texture

3521 Dynamic Model Conception of Improving Services Quality in Railway Transport

Authors: Eva Nedeliakova, Jaroslav Masek, Juraj Camaj

Abstract:

This article describes the results of research focused on the quality of railway freight transport services. Improvement of these services is of crucial importance to customers considering the future use of railway transport. Processes fulfilling customer demands and output quality assessment were defined as part of the research. This contribution introduces the quality planning map and the algorithm of the applied methodology. It characterizes a model which takes into account the character of transportation by linking the perception of service quality in ordinary and extraordinary operation. Despite the fact that rail freight transport has a solid position in the transport market, many carriers worldwide have been experiencing stagnation for a couple of years. Therefore, the specific results of the research have significant importance and belong to the numerous initiatives aimed at developing and supporting railway transport, not only by creating a single railway area or reducing noise but also by promoting railway services. This contribution also focuses on the application of dynamic quality models, which represent an innovative method of evaluating service quality. Through this conception, the time factor and the expected and perceived quality at each moment of the transportation process can be taken into account.

Keywords: Quality, railway, transport, service.

3520 Soft-Sensor for Estimation of Gasoline Octane Number in Platforming Processes with Adaptive Neuro-Fuzzy Inference Systems (ANFIS)

Authors: Hamed Vezvaei, Sepideh Ordibeheshti, Mehdi Ardjmand

Abstract:

The gasoline octane number is the standard measure of the anti-knock properties of a motor fuel in platforming processes, which are among the important unit operations in oil refineries; it can be determined by online measurement or with CFR (Cooperative Fuel Research) engines. Online measurement of the octane number can be done using direct octane number analyzers, but these are very expensive, so a feasible alternative analyzer is needed, such as an ANFIS estimator. ANFIS is a system in which a neural network is incorporated into a fuzzy system, using data automatically through the learning algorithms of NNs. ANFIS constructs an input-output mapping based both on human knowledge and on generated input-output data pairs. In this research, 31 industrial data sets are used (21 for training and the rest for generalization). The results show that, according to this simulation, the hybrid training algorithm in ANFIS gives good agreement between industrial data and simulated results.

Keywords: Adaptive Neuro-Fuzzy Inference Systems, Gasoline Octane Number, Soft-sensor, Catalytic Naphtha Reforming.

3519 The Portuguese Framework of the Professional Internship without Public Funds

Authors: Ana Lambelho

Abstract:

In an economic crisis such as the one that shook (and still shakes) Europe, one does not question the importance of measures that encourage the hiring and integration of young people into the labour market. In this context, enterprises tend to reduce the cost of labour and to seek flexible contracting instruments. Professional internships allow innovation and creativity at low cost because, as they are not labour contracts, the enterprises do not have to respect the minimum standards related to wages, working time duration and so on. In Portugal, we observe the widespread existence of training contracts in which the trainee worked several hours without salary or was paid below what is legally prescribed for the function and the work period. For this reason, the tripartite agreement for a new system of regulation of labour relations, employment policies and social protection, concluded between the Government and the social partners in June 2008, foresaw a prohibition of unpaid professional internships and the legal regulation of internships mandatory for access to an activity. The first Act on private internship contracts, i.e., internships without public funding, was embodied in Decree-Law No. 66/2011, of 1 June. This work is dedicated to the study of the legal regime of the internship contract in Portugal, analysing the problems brought by the new set of rules and especially those which remain unresolved. In fact, we can conclude that the number of situations covered by the Act is much lower than expected, because of the exclusion of internships mandatory for access to a profession when the activity is developed autonomously. Since the majority of activities can be developed either autonomously or in subordination, it is quite easy to fall outside the Act's requirements and, thus, outside the protection it confers on the intern. In order to complete this study, we considered not only the mentioned legal Act but also the scarce doctrine and case law on the theme.

Keywords: Intern, internship contract, labour law, Portugal.

3518 Perceptual JPEG Compliant Coding by Using DCT-Based Visibility Thresholds of Color Images

Authors: Kuo-Cheng Liu

Abstract:

Effective estimation of the just noticeable distortion (JND) of images helps increase the efficiency of a compression algorithm in which both the statistical redundancy and the perceptual redundancy should be accurately removed. In this paper, we design a DCT-based model for estimating the JND profiles of color images. Based on a mathematical model measuring the base detection threshold for each DCT coefficient in each color component of color images, the luminance masking adjustment, the contrast masking adjustment, and the cross masking adjustment are utilized for the luminance component, and a variance-based masking adjustment based on the coefficient variation in the block is proposed for the chrominance components. In order to verify the proposed model, the JND estimator is incorporated into the conventional JPEG coder to improve the compression performance. A subjective and fair viewing test is designed to evaluate the visual quality of the coded image under the specified viewing condition. The simulation results show that the JPEG coder integrated with the proposed DCT-based JND model gives better coding bit rates at visually lossless quality for a variety of color images.

Keywords: Just-noticeable distortion (JND), discrete cosine transform (DCT), JPEG.

3517 Using Game Engines in Lightning Shielding: The Application of the Rolling Spheres Method on Virtual As-Built Power Substations

Authors: Yuri A. Gruber, Matheus Rosendo, Ulisses G. A. Casemiro, Klaus de Geus, Rafael T. Bee

Abstract:

Lightning strikes can cause severe negative impacts on the electrical sector, causing direct damage to equipment as well as shutdowns, especially when they occur in power substations. In order to mitigate this problem, meticulous planning of the power substation protection system is of vital importance. A critical part of this is the distribution of shielding wires through the substation, which creates an imaginary 3D protection mesh similar to a circus tarpaulin. Equipment enclosed in the volume defined by that 3D mesh is considered protected against lightning strikes. The use of traditional methods of longitudinal cutting analysis based on 2D CAD tools makes the process laborious, and the results obtained may not guarantee satisfactory protection of the electrical equipment. This work describes the application of a game engine to the problem of lightning protection of power substations, providing visualization of the 3D protection mesh, the number of protected components and the highlighting of equipment which remains unprotected. In addition, aspects regarding the implementation and the advantages of approaching the problem using Unreal® Engine 4 are described. In order to validate the results, a comparison with traditional 2D methods is applied to the same case study to which the proposed technique has been applied. Finally, a comparative study involving different levels of protection using the technique developed in this work is presented, showing that modern game engines can be a powerful accessory for simulations in several areas of engineering.
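
For a single mast or shield wire, the rolling sphere geometry reduces to a closed-form protective radius. A minimal sketch, assuming the usual sqrt(h * (2R - h)) reach formula and the 45 m sphere radius commonly associated with lightning protection level III (the paper's 3D mesh generalizes this to many wires):

```python
import math

def protective_radius(h_mast, h_equip, sphere_r=45.0):
    """Horizontal protective radius offered by a vertical mast (or shield
    wire) of height h_mast to equipment at height h_equip, for a rolling
    sphere of radius sphere_r. Valid for heights not exceeding the sphere
    radius; all dimensions in metres."""
    if not (0 <= h_equip <= h_mast <= sphere_r):
        raise ValueError("expects 0 <= h_equip <= h_mast <= sphere radius")
    reach = lambda h: math.sqrt(h * (2.0 * sphere_r - h))   # horizontal reach at height h
    return reach(h_mast) - reach(h_equip)

# Example: a 20 m shield mast protecting a 5 m high transformer bushing
print(f"protected up to {protective_radius(20.0, 5.0):.1f} m from the mast")
```

Equipment whose horizontal distance from the mast is below this radius sits inside the imaginary mesh; the game-engine approach performs the equivalent check volumetrically for the whole substation.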

Keywords: Game engine, rolling spheres method, substation protection, UE4, Unreal® Engine 4.
