Search results for: On Pseudo-Random and Orthogonal Binary Spreading Sequences
98 Optimal Manufacturing Scheduling for Dependent Details Processing
Authors: Ivan C. Mustakerov, Daniela I. Borissova
Abstract:
The increasing competitiveness in the manufacturing industry is forcing manufacturers to seek effective processing schedules. The paper presents an optimization-based manufacturing scheduling approach for dependent details processing with given processing sequences and times on multiple machines. By defining the decision variables as the start and end moments of details processing, it is possible to use straightforward variable restrictions to satisfy different technological requirements and to formulate easy-to-understand and easy-to-solve optimization tasks for multiple numbers of details and machines. A case study example is solved for seven base moldings for CNC metalworking machines processed on five different machines with a given processing order among details and machines and known processing time durations. Solving the resulting linear optimization task yields the optimal manufacturing schedule minimizing the overall processing time. The manufacturing schedule defines the moments of moldings delivery, thus minimizing storage costs, and provides mounting due-time satisfaction. The proposed optimization approach is based on a real manufacturing plant problem. Different processing schedule variants for different technological restrictions were defined and implemented in the practice of the Bulgarian company RAIS Ltd. The proposed approach could be generalized to other job shop scheduling problems for different applications.
Keywords: Optimal manufacturing scheduling, linear programming, metalworking machines production, dependent details processing.
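The start/end-moment formulation lends itself to a compact linear model. Below is a minimal sketch of that idea in Python with PuLP, under assumptions of my own (two details, two machines, illustrative processing times, and a fixed per-machine order); the paper's actual model covers seven moldings on five machines and derives the orders from its own technological restrictions.

```python
# A minimal sketch of the start/end-moment formulation, with hypothetical
# processing times and a fixed processing order; solved with PuLP.
import pulp

ops = {          # (detail, machine): processing time in minutes -- illustrative
    ("D1", "M1"): 30, ("D1", "M2"): 20,
    ("D2", "M1"): 25, ("D2", "M2"): 35,
}
prob = pulp.LpProblem("makespan_min", pulp.LpMinimize)
start = {k: pulp.LpVariable(f"s_{k[0]}_{k[1]}", lowBound=0) for k in ops}
end = {k: pulp.LpVariable(f"e_{k[0]}_{k[1]}", lowBound=0) for k in ops}
makespan = pulp.LpVariable("makespan", lowBound=0)
prob += makespan                            # objective: overall processing time
for k, t in ops.items():
    prob += end[k] == start[k] + t          # duration links start and end moments
    prob += makespan >= end[k]              # makespan bounds every end moment
for d in ("D1", "D2"):                      # technological sequence: M1 before M2
    prob += start[(d, "M2")] >= end[(d, "M1")]
for m in ("M1", "M2"):                      # machine capacity: D1 before D2 here;
    prob += start[("D2", m)] >= end[("D1", m)]  # the full model chooses the order
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(makespan))                 # 90.0 for these illustrative times
```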
97 Design and Analysis of Electric Power Production Unit for Low Enthalpy Geothermal Reservoir Applications
Authors: Ildar Akhmadullin, Mayank Tyagi
Abstract:
The subject of this paper is the design analysis of a single-well power production unit for low enthalpy geothermal resources. The complexity of the project is defined by a low-temperature heat source, which usually makes such projects economically disadvantageous under the conventional binary power plant approach. A proposed new compact design is numerically analyzed. This paper describes a thermodynamic analysis, the choice of working fluid, and the downhole heat exchanger (DHE) and turbine calculation results. The unit is able to produce 321 kW of electric power from a low enthalpy underground heat source utilizing n-Pentane as a working fluid. A geo-pressured reservoir located in Vermilion Parish, Louisiana, USA is selected as a prototype for the field application. With a brine temperature of 126 °C, the optimal length of the DHE is determined as 304.8 m (1000 ft). All units (pipes, turbine, and pumps) are chosen from commercially available parts to bring this project closer to industry requirements. Numerical calculations are based on petroleum industry standards. The project is sponsored by the U.S. Department of Energy.
Keywords: Downhole Heat Exchangers, Geothermal Power Generation, Organic Rankine Cycle, Refrigerants, Working Fluids.
96 Methyltrioctylammonium Chloride as a Separation Solvent for Binary Mixtures: Evaluation Based on Experimental Activity Coefficients
Authors: B. Kabane, G. G. Redhi
Abstract:
An ammonium-based ionic liquid (methyltrioctylammonium chloride) [N8 8 8 1] [Cl] was investigated as a potential extraction solvent for volatile organic solvents (in this regard, solutes), which include alkenes, alkanes, ketones, alkynes, aromatic hydrocarbons, tetrahydrofuran (THF), alcohols, thiophene, water and acetonitrile, based on the experimental activity coefficients at infinite dilution. Measurements were conducted by gas-liquid chromatography at four different temperatures (313.15 to 343.15) K. The experimental activity coefficient data obtained across the examined temperatures were used to calculate the physicochemical properties at infinite dilution, such as the partial molar excess enthalpy, Gibbs free energy and entropy term. Capacity and selectivity data for selected petrochemical extraction problems (heptane/thiophene, heptane/benzene, cyclohexane/cyclohexene, hexane/toluene, hexane/hexene) were computed from the activity coefficient data and compared to literature values for other ionic liquids. Evaluation of activity coefficients at infinite dilution expands the knowledge and provides a good understanding of the interactions between the ionic liquid and the investigated compounds.
Keywords: Separation, activity coefficients, ionic liquid, methyltrioctylammonium chloride, capacity.
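The reported temperature range is what makes the excess properties recoverable: by the Gibbs-Helmholtz relation, the slope of ln γ∞ versus 1/T gives the partial molar excess enthalpy at infinite dilution. A minimal sketch with illustrative values, not the paper's measurements:

```python
# Extracting the partial molar excess enthalpy at infinite dilution from the
# temperature dependence of ln(gamma_inf); the gamma values are hypothetical.
import numpy as np

R = 8.314                                        # J/(mol K)
T = np.array([313.15, 323.15, 333.15, 343.15])   # K, as in the study
ln_gamma = np.array([2.10, 2.02, 1.95, 1.89])    # illustrative ln(gamma_inf)

slope, intercept = np.polyfit(1.0 / T, ln_gamma, 1)  # ln g = a + b/T
H_E = R * slope                                  # Gibbs-Helmholtz: dln(g)/d(1/T) = H_E/R
print(f"H_E at infinite dilution ~ {H_E / 1000:.2f} kJ/mol")
```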
95 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model
Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin
Abstract:
Early detection of anomalies in data centers is important to reduce downtimes and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center’s history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.
Keywords: Anomaly detection, autoencoder, data centers, deep learning.
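A minimal sketch of one per-sensor LSTM autoencoder of the kind described, in Keras; the window length, layer width, epoch count and random "normal" training data are assumptions for illustration. The residual it produces is what feeds the downstream feature extraction and random forest:

```python
# One per-sensor LSTM autoencoder reconstructing windows of normal samples.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

steps, feats = 64, 1                     # one sensor per autoencoder, as in the paper
model = keras.Sequential([
    layers.LSTM(32, input_shape=(steps, feats)),   # encoder compresses the window
    layers.RepeatVector(steps),                    # repeat the code for the decoder
    layers.LSTM(32, return_sequences=True),
    layers.TimeDistributed(layers.Dense(feats)),
])
model.compile(optimizer="adam", loss="mse")

x_normal = np.random.rand(1000, steps, feats).astype("float32")  # placeholder data
model.fit(x_normal, x_normal, epochs=5, batch_size=32, verbose=0)

# the residual |x - x_hat| feeds feature extraction + a random forest downstream
residual = np.abs(x_normal - model.predict(x_normal, verbose=0))
```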
94 Malware Beaconing Detection by Mining Large-scale DNS Logs for Targeted Attack Identification
Authors: Andrii Shalaginov, Katrin Franke, Xiongwei Huang
Abstract:
One of the leading problems in cyber security today is the emergence of targeted attacks conducted by adversaries with access to sophisticated tools. These attacks usually steal the system privileges of senior-level employees in order to gain unauthorized access to confidential knowledge and valuable intellectual property. The malware used for the initial compromise of the systems is sophisticated and may target zero-day vulnerabilities. In this work we utilize a common behaviour of malware called "beaconing", which implies that infected hosts communicate with Command and Control (C2) servers at regular intervals with relatively small time variations. By analysing such beacon activity through passive network monitoring, it is possible to detect potential malware infections. So, we focus on time gaps as indicators of possible C2 activity in targeted enterprise networks. We represent DNS log files as a graph whose vertices are destination domains and whose edges carry timestamps. Then, by using four periodicity detection algorithms for each pair of internal-external communications, we check the timestamp sequences to identify beacon activities. Finally, based on the graph structure, we infer the existence of other infected hosts and malicious domains enrolled in the attack activities.
Keywords: Malware detection, network security, targeted attack.
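The time-gap idea reduces to a simple statistic per host-domain pair: if the inter-query intervals cluster tightly around a fixed period, the pair is a beaconing candidate. A minimal sketch (the threshold and log data are illustrative; the paper itself applies four periodicity detection algorithms on a DNS-log graph):

```python
# Near-constant inter-request intervals to a domain suggest beaconing.
import numpy as np

def beacon_score(timestamps):
    """Coefficient of variation of inter-arrival gaps; small => periodic."""
    gaps = np.diff(np.sort(np.asarray(timestamps, dtype=float)))
    if len(gaps) < 3 or gaps.mean() == 0:
        return np.inf
    return gaps.std() / gaps.mean()

# hypothetical DNS log: (host, domain) -> query timestamps in seconds
dns = {("10.0.0.5", "evil.example"): [0, 60.2, 119.8, 180.1, 240.0],
       ("10.0.0.7", "cdn.example"):  [3, 15, 16, 300, 301.5]}
for pair, ts in dns.items():
    if beacon_score(ts) < 0.1:       # low jitter around a fixed period
        print("possible beacon:", pair)
```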
93 RRNS-Convolutional Concatenated Code for OFDM based Wireless Communication with Direct Analog-to-Residue Converter
Authors: Shahana T. K., Babita R. Jose, K. Poulose Jacob, Sreela Sasi
Abstract:
The modern telecommunication industry demands higher capacity networks with high data rates. Orthogonal frequency division multiplexing (OFDM) is a promising technique for high data rate wireless communications at reasonable complexity in wireless channels. OFDM has been adopted for many types of wireless systems, such as wireless local area networks (e.g., IEEE 802.11a) and digital audio/video broadcasting (DAB/DVB). The proposed research focuses on a concatenated coding scheme that improves the performance of OFDM-based wireless communications. It uses a Redundant Residue Number System (RRNS) code as the outer code and a convolutional code as the inner code. Here, a direct conversion of the analog signal to the residue domain is done to reduce the conversion complexity, using a sigma-delta based parallel analog-to-residue converter. The bit error rate (BER) performance of the proposed system under different channel conditions is investigated. These include the effects of additive white Gaussian noise (AWGN), multipath delay spread, peak power clipping and frame start synchronization error. The simulation results show that the proposed RRNS-Convolutional concatenated coding (RCCC) scheme provides a significant improvement in system performance by exploiting the inherent properties of RRNS.
Keywords: Analog-to-residue converter, Concatenated codes, OFDM, Redundant Residue Number System, Sigma-delta modulator, Wireless communication
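For readers unfamiliar with residue number systems, the core encode/decode step looks as follows; the moduli set here is an assumption for illustration, and the paper's converter maps analog samples to residues directly rather than via binary integers:

```python
# Encode an integer by its residues modulo pairwise-coprime moduli and recover
# it via the Chinese Remainder Theorem.
from math import prod

moduli = (7, 11, 13, 15)                    # pairwise coprime, dynamic range 15015

def to_residues(x):
    return tuple(x % m for m in moduli)

def from_residues(res):
    M = prod(moduli)
    x = 0
    for r, m in zip(res, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)        # modular inverse of Mi mod m (Python 3.8+)
    return x % M

assert from_residues(to_residues(1234)) == 1234
```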
92 EEG-Based Fractal Analysis of Different Motor Imagery Tasks using Critical Exponent Method
Authors: Montri Phothisonothai, Masahiro Nakagawa
Abstract:
The objective of this paper is to characterize the spontaneous electroencephalogram (EEG) signals of four different motor imagery tasks and thereby to show a possible solution for the present binary communication between the brain and a machine, or a Brain-Computer Interface (BCI). The processing technique used in this paper is fractal analysis evaluated by the Critical Exponent Method (CEM). The EEG signal was recorded in 5 healthy subjects, sampling 15 measuring channels at 1024 Hz. Each channel was preprocessed by Laplacian spatial filtering so as to reduce the spatial blur and therefore increase the spatial resolution. The EEG of each channel was segmented and its fractal dimension (FD) calculated. The FD was evaluated in the time interval corresponding to the motor imagery and averaged over all the subjects (for each channel). In order to characterize the FD distribution, linear regression curves of FD over the electrode positions were applied. The FD differences between the proposed mental tasks are quantified and evaluated for each experimental subject. The results of the proposed method show a distinct fractal dimension in the EEG signals of motor imagery tasks, which can be usefully exploited in multiple-state BCI applications.
Keywords: electroencephalogram (EEG), motor imagery tasks, mental tasks, biomedical signals processing, human-machine interface, fractal analysis, critical exponent method (CEM).
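The CEM itself is specialised, but the overall pipeline (estimate an FD per segmented channel, then regress over electrode positions) can be illustrated with any FD estimator. Below is a sketch using the widely known Higuchi algorithm as a stand-in; this is not the authors' CEM:

```python
# Higuchi fractal dimension of a 1-D signal (stand-in for the paper's CEM).
import numpy as np

def higuchi_fd(x, kmax=8):
    x = np.asarray(x, dtype=float)
    N = len(x)
    L = []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):                          # one curve per offset m
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            length = np.abs(np.diff(x[idx])).sum() * (N - 1) / (len(idx) - 1) / k
            Lk.append(length / k)
        L.append(np.mean(Lk))
    # FD is the slope of log L(k) against log (1/k)
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(L), 1)
    return slope

eeg = np.random.randn(1024)                 # placeholder one-second EEG segment
print(higuchi_fd(eeg))                      # ~2 for white noise
```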
91 Performance Analysis of a Combined Ordered Successive and Interference Cancellation Using Zero-Forcing Detection over Rayleigh Fading Channels in MIMO Systems
Authors: Jamal R. Elbergali
Abstract:
Multiple Input Multiple Output (MIMO) systems are wireless systems with multiple antenna elements at both ends of the link. Wireless communication systems demand high data rates and spectral efficiency with increased reliability. MIMO systems have become popular techniques to achieve these goals because increased data rate is possible through the spatial multiplexing scheme and through diversity. Spatial Multiplexing (SM) is used to achieve higher throughput than diversity. In this paper, we propose a Zero-Forcing (ZF) detection using a combination of Ordered Successive Interference Cancellation (OSIC) and Zero-Forcing with Interference Cancellation (ZF-IC). The proposed method uses OSIC based on Signal-to-Noise Ratio (SNR) ordering to obtain the estimate of the last symbol, which is then used as an input to the ZF-IC. We analyze the Bit Error Rate (BER) performance of the proposed MIMO system over the Rayleigh fading channel, using the Binary Phase Shift Keying (BPSK) modulation scheme. The results show better performance than the previous methods.
Keywords: SNR, BER, BPSK, MIMO, Modulation, Zero Forcing (ZF), OSIC, ZF-IC, Spatial Multiplexing (SM).
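A minimal sketch of ZF detection with SNR-ordered successive cancellation for BPSK over a flat Rayleigh channel; the 4x4 system size and noise level are illustrative, and the ordering picks the stream with the smallest ZF noise gain:

```python
# SNR-ordered successive interference cancellation with zero-forcing nulling.
import numpy as np

rng = np.random.default_rng(0)
H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
x = np.sign(rng.normal(size=4))             # BPSK symbols +/-1
y = H @ x + 0.05 * (rng.normal(size=4) + 1j * rng.normal(size=4))

x_hat = np.zeros(4)
cols = list(range(4))
Hr, yr = H.copy(), y.copy()
while cols:
    W = np.linalg.pinv(Hr)                  # ZF nulling matrix
    k = np.argmin((np.abs(W) ** 2).sum(axis=1))  # best post-ZF SNR (least noise gain)
    s = np.sign((W[k] @ yr).real)           # slice the BPSK decision
    x_hat[cols[k]] = s
    yr = yr - Hr[:, k] * s                  # cancel the detected stream
    Hr = np.delete(Hr, k, axis=1)           # deflate the channel matrix
    cols.pop(k)
print(np.array_equal(x_hat, x))             # True at this noise level
```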
90 Corporate Credit Rating Using Multiclass Classification Models with Order Information
Authors: Hyunchul Ahn, Kyoung-Jae Kim
Abstract:
Corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has been one of the attractive research topics in the literature. In recent years, multiclass classification models such as the artificial neural network (ANN) or the multiclass support vector machine (MSVM) have become very appealing machine learning approaches due to their good performance. However, most of them focus only on classifying samples into nominal categories, so the unique characteristic of credit ratings - ordinality - has seldom been considered in these approaches. This study proposes new types of ANN and MSVM classifiers, named OMANN and OMSVM respectively. OMANN and OMSVM are designed to extend binary ANN or SVM classifiers by applying an ordinal pairwise partitioning (OPP) strategy. These models can handle ordinal multiple classes efficiently and effectively. To validate the usefulness of these two models, we applied them to a real-world bond rating case. We compared the results of our models to those of conventional approaches. The experimental results showed that our proposed models improve classification accuracy in comparison to typical multiclass classification techniques with reduced computational resources.
Keywords: Artificial neural network, Corporate credit rating, Support vector machines, Ordinal pairwise partitioning
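A minimal sketch of the ordinal-pairwise-partitioning idea: K ordered rating grades are decomposed into K-1 binary "is the rating above grade k?" problems whose probabilities are recombined (a Frank-Hall-style combination; logistic regression stands in for the paper's ANN/SVM base learners, and the data are placeholders):

```python
# Ordinal decomposition: one binary model per threshold k, estimating P(y > k).
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_opp(X, y, K):
    return [LogisticRegression(max_iter=1000).fit(X, (y > k).astype(int))
            for k in range(K - 1)]

def predict_opp(models, X):
    # recover P(y = k) from the cumulative P(y > k) estimates
    p_gt = np.column_stack([m.predict_proba(X)[:, 1] for m in models])
    cum = np.hstack([np.ones((len(X), 1)), p_gt, np.zeros((len(X), 1))])
    probs = cum[:, :-1] - cum[:, 1:]
    return probs.argmax(axis=1)

X = np.random.rand(200, 5)                  # placeholder financial ratios
y = np.random.randint(0, 4, size=200)       # ratings 0..3 (ordinal)
models = fit_opp(X, y, K=4)
print(predict_opp(models, X[:5]))
```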
89 Particle Filter Supported with the Neural Network for Aircraft Tracking Based on Kernel and Active Contour
Authors: Mohammad Izadkhah, Mojtaba Hoseini, Alireza Khalili Tehrani
Abstract:
In this paper we present a new method for tracking flying targets in color video sequences based on contour and kernel. The aim of this work is to overcome the problem of losing the target under changing light, large displacement, changing speed, and occlusion. The proposed method consists of three steps: estimating the target location by a particle filter, segmenting the target region using a neural network, and finding the exact contours by the greedy snake algorithm. In the proposed method we use both region and contour information to create the target candidate model, and this model is dynamically updated during tracking. To avoid the accumulation of errors when updating, the target region is given to a perceptron neural network to separate the target from the background. Its output is then used for the exact calculation of the size and center of the target, and also as the initial contour for the greedy snake algorithm to find the target's exact edge. The proposed algorithm has been tested on a database which contains many challenges such as the high speed and agility of aircraft, background clutter, occlusions, camera movement, and so on. The experimental results show that the use of the neural network increases the accuracy of tracking and segmentation.
Keywords: Video tracking, particle filter, greedy snake, neural network.
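A minimal sketch of the first step alone, a bootstrap particle filter estimating a 2D target location from noisy position measurements; the noise levels and observations are placeholders, and the paper's full tracker adds the neural-network segmentation and greedy-snake refinement:

```python
# Bootstrap particle filter: predict, weight by measurement likelihood, resample.
import numpy as np

rng = np.random.default_rng(1)

def pf_step(particles, z, motion_std=3.0, meas_std=5.0):
    particles = particles + rng.normal(0, motion_std, particles.shape)   # predict
    w = np.exp(-0.5 * ((particles - z) ** 2).sum(axis=1) / meas_std**2)  # weight
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)           # resample
    return particles[idx]

particles = rng.uniform(0, 100, size=(500, 2))      # initial spread over the frame
for z in ([50, 50], [52, 49], [55, 47]):            # placeholder measurements
    particles = pf_step(particles, np.array(z, float))
print(particles.mean(axis=0))                       # estimated target location
```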
88 A Bayesian Kernel for the Prediction of Protein-Protein Interactions
Authors: Hany Alashwal, Safaai Deris, Razib M. Othman
Abstract:
Understanding protein functions is a major goal in the post-genomic era. Proteins usually work in the context of other proteins and rarely function alone. Therefore, it is highly relevant to study the interaction partners of a protein in order to understand its function. Machine learning techniques have been widely applied to predict protein-protein interactions. Kernel functions play an important role in a successful machine learning technique. Choosing the appropriate kernel function can lead to better accuracy in a binary classifier such as the support vector machine. In this paper, we describe a Bayesian kernel for the support vector machine to predict protein-protein interactions. The use of the Bayesian kernel can improve the classifier performance by incorporating the probability characteristics of the available experimental protein-protein interaction data, which were compiled from different sources. In addition, the probabilistic output from the Bayesian kernel can assist biologists in conducting more research on the highly predicted interactions. The results show that the accuracy of the classifier is improved using the Bayesian kernel compared to the standard SVM kernels. These results imply that protein-protein interactions can be predicted using the Bayesian kernel with better accuracy than with the standard SVM kernels.
Keywords: Bioinformatics, Protein-protein interactions, Bayesian Kernel, Support Vector Machines.
87 3D Rendering of American Sign Language Finger-Spelling: A Comparative Study of Two Animation Techniques
Authors: Nicoletta Adamo-Villani
Abstract:
In this paper we report a study aimed at determining the most effective animation technique for representing ASL (American Sign Language) finger-spelling. Specifically, in the study we compare two commonly used 3D computer animation methods (keyframe animation and motion capture) in order to ascertain which technique produces the most 'accurate', 'readable', and 'close to actual signing' (i.e. realistic) rendering of ASL finger-spelling. To accomplish this goal we developed 20 animated clips of finger-spelled words and designed an experiment consisting of a web survey with rating questions. 71 subjects ages 19-45 participated in the study. Results showed that recognition of the words was correlated with the method used to animate the signs. In particular, the keyframe technique produced the most accurate representation of the signs (i.e., participants were more likely to identify the words correctly in keyframed sequences rather than in motion-captured ones). Further, findings showed that the animation method had an effect on the reported scores for readability and closeness to actual signing; the estimated marginal mean readability and closeness was greater for keyframed signs than for motion-captured signs. To our knowledge, this is the first study aimed at measuring and comparing the accuracy, readability and realism of ASL animations produced with different techniques.
Keywords: 3D Animation, American Sign Language, Deaf Education, Motion Capture.
86 Generalized Morphological 3D Shape Decomposition Grayscale Interframe Interpolation Method
Authors: Dragos Nicolae VIZIREANU
Abstract:
One of the main image representations in Mathematical Morphology is the 3D Shape Decomposition Representation, useful for Image Compression and Representation, and Pattern Recognition. The 3D Morphological Shape Decomposition representation can be generalized a number of times, to extend the scope of its algebraic characteristics as much as possible. With these generalizations, the Morphological Shape Decomposition's role as an efficient image decomposition tool is extended to grayscale images. This work follows the above line and further develops it. A new evolutionary branch is added to the 3D Morphological Shape Decomposition's development by the introduction of a 3D Multi Structuring Element Morphological Shape Decomposition, which permits 3D Morphological Shape Decomposition of 3D binary images (grayscale images) into "multiparameter" families of elements. Initially, 3D Morphological Shape Decomposition representations were based only on "1 parameter" families of elements for image decomposition. This paper addresses grayscale interframe interpolation by means of mathematical morphology. The new interframe interpolation method is based on the generalized morphological 3D Shape Decomposition. This article presents the theoretical background of the morphological interframe interpolation, deduces the new representation and shows some application examples. Computer simulations illustrate the results.
Keywords: 3D shape decomposition representation, mathematical morphology, gray scale interframe interpolation
85 Motion Prediction and Motion Vector Cost Reduction during Fast Block Motion Estimation in MCTF
Authors: Karunakar A K, Manohara Pai M M
Abstract:
In the 3D-wavelet video coding framework, temporal filtering is done along the trajectory of motion using Motion Compensated Temporal Filtering (MCTF). Hence, a computationally efficient motion estimation technique is essential for MCTF. In this paper, a predictive technique is proposed to reduce the computational complexity of the MCTF framework by exploiting the high correlation among the frames in a Group Of Pictures (GOP). The proposed technique applies the coarse and fine searches of any fast block-based motion estimation only to the first pair of frames in a GOP. The generated motion vectors are supplied to the subsequent frames, even to subsequent temporal levels, and only a fine search is carried out around those predicted motion vectors. Hence, the coarse search is skipped for all motion estimation in a GOP except for the first pair of frames. The technique has been tested for different fast block-based motion estimation algorithms over different standard test sequences using MC-EZBC, a state-of-the-art scalable video coder. The simulation results reveal a substantial reduction (i.e. 20.75% to 38.24%) in the number of search points during motion estimation, without compromising the quality of the reconstructed video compared to non-predictive techniques. Since the motion vectors of all the frame pairs in a GOP except the first will have values within ±1 of the motion vectors of the previous pair of frames, the number of bits required for motion vectors is also reduced by 50%.
Keywords: Motion Compensated Temporal Filtering, predictive motion estimation, lifted wavelet transform, motion vector
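A minimal sketch of the prediction step: reuse the previous pair's motion vector and run only a ±1 fine search around it (SAD block matching; the synthetic frames and block size are placeholders):

```python
# Refine a predicted motion vector within a +/-1 window by SAD block matching.
import numpy as np

def fine_search(ref, cur, block_xy, bs, mv_pred):
    x, y = block_xy
    blk = cur[y:y + bs, x:x + bs].astype(int)
    best, best_mv = None, mv_pred
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            u, v = x + mv_pred[0] + dx, y + mv_pred[1] + dy
            if u < 0 or v < 0 or u + bs > ref.shape[1] or v + bs > ref.shape[0]:
                continue                    # candidate falls outside the frame
            sad = np.abs(ref[v:v + bs, u:u + bs].astype(int) - blk).sum()
            if best is None or sad < best:
                best, best_mv = sad, (mv_pred[0] + dx, mv_pred[1] + dy)
    return best_mv

ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # placeholder frames
cur = np.roll(ref, (2, 1), axis=(0, 1))                     # true shift: dx=-1, dy=-2
print(fine_search(ref, cur, (16, 16), 8, mv_pred=(-1, -2))) # (-1, -2)
```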
84 Experimental Study of Unconfined and Confined Isothermal Swirling Jets
Authors: Rohit Sharma, Fabio Cozzi
Abstract:
A 3C-2D PIV technique was applied to investigate the swirling flow generated by an axial-plus-tangential type swirl generator. This work focuses on the near-exit region of an isothermal swirling jet to characterize the effect of swirl on the flow field and to identify the large coherent structures, both in unconfined and confined conditions, for a geometrical swirl number Sg = 4.6. The effects of the Reynolds number on the flow structure were also studied. The experimental results show significant effects of the confinement on the mean velocity fields and their fluctuations. The size of the recirculation zone was significantly enlarged upon confinement compared to the free swirling jet. Increasing the Reynolds number further enhanced the recirculation zone. The frequency characteristics were measured with a capacitive microphone, which indicates the presence of a periodic oscillation related to the existence of a precessing vortex core (PVC). Proper orthogonal decomposition of the jet velocity field was carried out, enabling the identification of coherent structures. The time coefficients of the first two most energetic POD modes were used to reconstruct the phase-averaged velocity field of the oscillatory motion in the swirling flow. The instantaneous minima of negative swirl strength values calculated from the instantaneous velocity field revealed the presence of two helical structures located in the inner and outer shear layers; these structures fade out at an axial location of approximately z/D = 1.5 for the unconfined case and z/D = 1.2 for the confined case. By phase averaging the instantaneous swirling strength maps, the 3D helical vortex structure was reconstructed.
Keywords: Acoustic probes, 3C-2D particle image velocimetry, PIV, precessing vortex core, PVC, recirculation zone.
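Snapshot POD of the kind used here comes down to an SVD of the mean-subtracted snapshot matrix; the time coefficients of the two leading modes then define the phase of the PVC cycle for phase averaging. A minimal sketch on synthetic data standing in for the PIV snapshots:

```python
# Snapshot POD by SVD: spatial modes, time coefficients, and a phase angle.
import numpy as np

snaps = np.random.rand(500, 2048)           # placeholder: 500 snapshots x 2048 velocity values
fluct = snaps - snaps.mean(axis=0)          # subtract the mean flow
U, S, Vt = np.linalg.svd(fluct, full_matrices=False)
modes = Vt                                  # spatial POD modes (rows)
coeffs = U * S                              # time coefficients a_k(t)
energy = S**2 / (S**2).sum()
print("first two modes capture", energy[:2].sum())

# phase of the oscillatory motion from the two leading coefficients
phase = np.arctan2(coeffs[:, 1], coeffs[:, 0])
```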
83 Hydrochemical Assessment and Quality Classification of Water in Torogh and Kardeh Dam Reservoirs, North-East Iran
Authors: Mojtaba Heydarizad
Abstract:
Khorasan Razavi is the second most important province in north-east Iran and faces a water shortage crisis due to recent droughts and huge water consumption. The Kardeh and Torogh dam reservoirs in this province provide a notable part of the potable water needs of the Mashhad metropolitan area (more than 4.5 million inhabitants). Hydrochemical analyses of samples from these dam reservoirs demonstrate that the MgHCO3 water type in Kardeh, and the CaHCO3 and to a lesser extent MgHCO3 water types in the Torogh dam reservoir, are dominant. The Gibbs binary diagram demonstrates that rock weathering is the main factor controlling water quality in the dam reservoirs. Plotting the dam reservoir samples on Mg2+/Na+ and HCO3-/Na+ vs. Ca2+/Na+ diagrams demonstrates that evaporite and carbonate mineral dissolution are the dominant rock-weathering ion sources in these dam reservoirs. Cluster Analysis (CA) also demonstrates the strong role of rock weathering (mainly carbonate and evaporite mineral dissolution) in the water quality of these dam reservoirs. Assessing the water quality by the U.S. National Sanitation Foundation index (NSF-WQI), the Oregon Water Quality Index (OWQI) and the Canadian DWQI index shows moderate and good quality.
Keywords: Hydrochemistry, water quality classification, water quality indexes, Torogh and Kardeh Dam Reservoirs.
82 Development of Integrated GIS Interface for Characteristics of Regional Daily Flow
Authors: Ju Young Lee, Jung-Seok Yang, Jaeyoung Choi
Abstract:
This paper primarily intends to develop a GIS interface for estimating sequences of stream-flows at ungauged stations based on known flows at gauged stations. The integrated GIS interface is composed of three major steps. The first step uses statistical analysis of precipitation characteristics to build a multiple linear regression equation for the long-term mean daily flow at ungauged stations; the independent variables in the regression equation are mean daily flow and drainage area. Traditionally, mean flow data are generated using the Thiessen polygon method. However, the method for obtaining mean flow data can be selected by the user, such as Kriging, IDW (Inverse Distance Weighted) and Spline methods, as well as other traditional methods. In the second step, the flow duration curve (FDC) at an ungauged station is computed from the FDCs at gauged stations; finally, the mean annual daily flow is computed by a spatial interpolation algorithm. The third step is to obtain the watershed/topographic characteristics, which are the most important factors governing stream-flows. In summary, the simulated daily flow time series are compared with observed time series. The results using the integrated GIS interface are closely similar and fit each other well. Also, the relationship between the topographic/watershed characteristics and the stream flow time series is highly correlated.
Keywords: Integrated GIS interface, spatial interpolation algorithm, FDC.
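Of the selectable interpolators, IDW is the simplest to state: a distance-weighted average of the gauged values. A minimal sketch with hypothetical station coordinates and mean flows:

```python
# Inverse Distance Weighted interpolation of mean daily flow at an ungauged point.
import numpy as np

def idw(xy_known, values, xy_query, p=2.0):
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d == 0):                      # query coincides with a station
        return values[np.argmin(d)]
    w = 1.0 / d**p
    return (w * values).sum() / w.sum()

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # hypothetical coords (km)
mean_flow = np.array([12.3, 8.7, 15.1])     # m^3/s at gauged stations (illustrative)
print(idw(stations, mean_flow, np.array([4.0, 3.0])))
```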
81 Multi-Stage Multi-Period Production Planning in Wire and Cable Industry
Authors: Mahnaz Hosseinzadeh, Shaghayegh Rezaee Amiri
Abstract:
This paper presents a methodology for the serial production planning problem in the wire and cable manufacturing process that addresses the problem of input-output imbalance across consecutive stations, with the aim of minimizing machine halts in each stage. To this end, a linear Goal Programming (GP) model is developed, in which four main categories of constraints are considered: the number of runs per machine, machine sequences, acceptable machine inventories at the end of each period, and the necessity of fulfilling customers’ orders. The model is formulated based upon real data obtained from the IKO TAK Company, an important supplier of wire and cable for the oil and gas and automotive industries in Iran. By solving the model in the GAMS software, the optimal number of runs, end-of-period inventories, and the minimum possible idle time for each machine are calculated. The application of the numerical results in the target company has shown the efficiency of the proposed model and solution in decreasing the lead time of end-product delivery to customers by 20%. Accordingly, the developed model could easily be applied in wire and cable companies for optimal production planning to reduce machine halts in the manufacturing stages.
Keywords: Serial manufacturing process, production planning, wire and cable industry, goal programming approach.
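The goal-programming ingredient can be seen in miniature: a hard production target is softened with deviation variables d+ and d- whose sum is minimised. A sketch in Python with PuLP, using made-up numbers rather than the IKO TAK data:

```python
# One production goal softened by deviation variables, as in goal programming.
import pulp

prob = pulp.LpProblem("gp_sketch", pulp.LpMinimize)
runs = pulp.LpVariable("runs", lowBound=0, cat="Integer")   # runs of one machine
d_plus = pulp.LpVariable("d_plus", lowBound=0)              # overshoot
d_minus = pulp.LpVariable("d_minus", lowBound=0)            # shortfall

GOAL = 120          # ordered output per period (hypothetical units)
YIELD = 7           # output produced per run (hypothetical)
prob += d_plus + d_minus                    # minimise the deviation from the goal
prob += YIELD * runs + d_minus - d_plus == GOAL
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(int(runs.value()), d_minus.value(), d_plus.value())   # 17 runs, shortfall 1
```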
80 Numerical Analysis of Laminar Reflux Condensation from Gas-Vapour Mixtures in Vertical Parallel Plate Channels
Authors: Foad Hassaninejadafarahani, Scott Ormiston
Abstract:
Reflux condensation occurs in vertical channels and tubes when there is an upward core flow of vapour (or gas-vapour mixture) and a downward flow of the liquid film. The understanding of this condensation configuration is crucial in the design of reflux condensers and distillation columns, and in loss-of-coolant safety analyses in nuclear power plant steam generators. The unique feature of this flow is the upward flow of the vapour-gas mixture (or pure vapour) that retards the liquid flow via shear at the liquid-mixture interface. The present model solves the full, elliptic governing equations in both the film and the gas-vapour core flow. The computational mesh is non-orthogonal and adapts dynamically to the phase interface, thus producing a sharp and accurate interface. Shear forces and heat and mass transfer at the interface are accounted for fundamentally. This modeling is a big step ahead of current capabilities because it removes the limitations of previous reflux condensation models, which inherently cannot account for the detailed local balances of shear, mass, and heat transfer at the interface. Discretisation has been done based on the finite volume method and a co-located variable storage scheme. An in-house computer code was developed to implement the numerical solution scheme. Detailed results are presented for laminar reflux condensation from steam-air mixtures flowing in vertical parallel plate channels. The results include velocity and gas mass fraction profiles, as well as axial variations of film thickness.
Keywords: Reflux Condensation, Heat Transfer, Channel, Laminar Flow
79 3D Human Reconstruction over Cloud Based Image Data via AI and Machine Learning
Authors: Kaushik Sathupadi, Sandesh Achar
Abstract:
Human action recognition (HAR) modeling is a critical task in machine learning. These systems require better techniques for recognizing body parts and selecting optimal features based on vision sensors to identify complex action patterns efficiently. Still, there is a considerable gap and there are challenges between images and videos, such as brightness, motion variation, and random clutter. This paper proposes a robust approach for classifying human actions over cloud-based image data. First, we apply pre-processing and detection techniques for human and outer-shape detection. Next, we extract valuable information in terms of cues. We extract two distinct features: fuzzy local binary patterns and sequence representation. Then, we apply a greedy randomized adaptive search procedure for data optimization and dimension reduction, and for classification we use a random forest. We tested our model on two benchmark datasets, AAMAZ and the KTH Multi-view Football dataset. Our HAR framework significantly outperforms the other state-of-the-art approaches and achieves better recognition rates of 91% and 89.6% on the AAMAZ and KTH Multi-view Football datasets, respectively.
Keywords: Computer vision, human motion analysis, random forest, machine learning.
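A minimal sketch of the texture half of such a pipeline: per-frame local binary pattern histograms feeding a random forest. The frames and labels are placeholders, and the paper's actual features are fuzzy LBP plus sequence representation with GRASP-based reduction:

```python
# LBP histograms per frame classified with a random forest.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def lbp_hist(img, P=8, R=1):
    img8 = (img * 255).astype(np.uint8)     # LBP expects integer gray levels
    lbp = local_binary_pattern(img8, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

frames = np.random.rand(120, 64, 64)        # placeholder grayscale frames
labels = np.random.randint(0, 4, size=120)  # placeholder action classes
X = np.array([lbp_hist(f) for f in frames])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict(X[:3]))
```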
78 Drop Impact Study on Flexible Superhydrophobic Surface Containing Micro-Nano Hierarchical Structures
Authors: Abinash Tripathy, Girish Muralidharan, Amitava Pramanik, Prosenjit Sen
Abstract:
Superhydrophobic surfaces are abundant in nature. Several surfaces such as the wings of the butterfly, the legs of the water strider, the feet of the gecko and the lotus leaf show extreme water-repellent behaviour. Self-cleaning, stain-free fabrics, spill-resistant protective wear, drag reduction in micro-fluidic devices, etc. are a few applications of superhydrophobic surfaces. In order to design robust superhydrophobic surfaces, it is important to understand the interaction of water with superhydrophobic surface textures. In this work, we report a simple coating method for creating a large-scale flexible superhydrophobic paper surface. The surface consists of multiple layers of silanized zirconia microparticles decorated with zirconia nanoparticles. A water contact angle as high as 159±1° and a contact angle hysteresis of less than 8° were observed. Drop impact studies on the superhydrophobic paper surface were carried out by impinging water droplets and capturing their dynamics through high-speed imaging. During the drop impact, the Weber number was varied from 20 to 80 by altering the impact velocity of the drop, and parameters such as the contact time and the normalized spread diameter were obtained. In contrast to earlier literature reports, we observed the contact time to be dependent on the impact velocity on the superhydrophobic surface. The total contact time was split into two components, spread time and recoil time. The recoil time was found to be dependent on the impact velocity, while the spread time on the surface did not show much variation with the impact velocity. Further, the normalized spreading parameter was found to increase with increasing impact velocity.
Keywords: Contact angle, contact angle hysteresis, contact time, superhydrophobic.
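The Weber-number sweep maps directly to impact velocities via We = ρv²D/σ. A small worked example for a roughly 2 mm water drop (fluid properties assumed, not taken from the paper):

```python
# Impact velocities needed for We = rho * v^2 * D / sigma between 20 and 80.
import numpy as np

rho, sigma, D = 998.0, 0.072, 2e-3          # water density, surface tension, drop diameter (SI)
for We in (20, 40, 60, 80):
    v = np.sqrt(We * sigma / (rho * D))
    print(f"We={We}: v = {v:.2f} m/s")      # ~0.85 to ~1.70 m/s
```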
77 The Effect of Stone Column (Nailing and Geogrid) on Stability of Expansive Clay
Authors: Komeil Valipourian, Mohsen Ramezan Shirazi, Orod Zarrin Kafsh
Abstract:
With the growing use of ground for construction and the lack of appropriate sites, engineers attempt to seek out new methods to remedy the weakness of soils. From an economic point of view, various ways have been used to improve weak grounds. Because of the rapid development of infrastructural facilities, expanding construction operations is an obligation; furthermore, on various sites with very poor soil conditions, engineers have faced obvious problems. One of the most essential ways of improving weak soils is the stone column. The method was first introduced in France in 1830 to improve native soil. Stone columns have an expanding range of usage on different rough foundation sites all over the world, to increase the bearing capacity, reduce the total and differential settlements, enhance the rate of consolidation, stabilize embankment slopes and increase liquefaction resistance. A recent procedure, installing vertical nails along the circumference of stone columns in order to improve the performance of the considered columns, is offered. Moreover, the benefits of vertical circumferential nails increase with nail diameter, number and embedment depth. Based on the results of this study, the load carrying capacity develops with increasing length and strength of the reinforcement in vertical encasement stone columns (CESC). In this study, the main purpose is to compare two stone column methods (nails installed around the stone columns, and geogrid on clay) for enhancing the bearing capacity and decreasing total and differential settlements.
Keywords: Bearing Capacity, Clay, Geogrid, Nailing, Settlements, Stone Column.
76 Quality Service Standard of Food and Beverage Service Staff in Hotel
Authors: Thanasit Suksutdhi
Abstract:
This survey research aims to study the standard of service quality of food and beverage service staff in the hotel business by studying the service standards of three sample hotels: Siam Kempinski Hotel Bangkok, Four Seasons Resort Chiang Mai, and Banyan Tree Phuket. In order to identify the international service standard of food and beverage service, triangulated research, i.e. quantitative, qualitative, and survey methods, was employed. In this research, questionnaires and in-depth interviews were used to obtain information on the sequences and methods of service. There were three parts of the modified questionnaires to measure service quality and guests’ satisfaction, covering service facilities, attentiveness, responsibility, reliability, and circumspection. This study used simple random sampling to derive subjects, and the return rate of the questionnaires was 70%, or 280 responses. Data were analyzed with SPSS to find arithmetic means, SD, percentages, and comparisons by t-test and one-way ANOVA. The results revealed that the service quality of the three hotels was at the international level, which creates high satisfaction among international customers. Recommendations from the research were to maintain the areas of good service quality and to improve some dimensions of service quality, such as reliability. Training in service standards, product knowledge, and new technology for employees should be provided. Furthermore, in order to develop the service quality of the industry, training collaboration between hotel organizations and educational institutions in food and beverage service should be considered.
Keywords: Service standard, food and beverage department, sequence of service, service method.
75 A Hybrid Expert System for Generating Stock Trading Signals
Authors: Hosein Hamisheh Bahar, Mohammad Hossein Fazel Zarandi, Akbar Esfahanipour
Abstract:
In this paper, a hybrid expert system is developed using fuzzy genetic network programming with reinforcement learning (GNP-RL). In this system, the frame-based structure uses the trading rules extracted by GNP. These rules are extracted using technical indices of the stock prices in the training time period. In developing this system, we applied fuzzy node transition and decision making in both the processing and judgment nodes of GNP-RL. Consequently, these methods not only increased the accuracy of node transition and decision making in GNP's nodes, but also extended GNP's binary signals to ternary trading signals. In other words, in our proposed fuzzy GNP-RL model, a No Trade signal is added to the conventional Buy and Sell signals. Finally, the obtained rules are used in a frame-based system implemented in the Kappa-PC software. This trading system has been used to generate trading signals for ten companies listed on the Tehran Stock Exchange (TSE). The simulation results in the testing time period show that the developed system performs more favorably in comparison with the Buy and Hold strategy.
Keywords: Fuzzy genetic network programming, hybrid expert system, technical trading signal, Tehran stock exchange.
74 Software Architecture and Support for Patient Tracking Systems in Critical Scenarios
Authors: Gianluca Cornetta, Abdellah Touhafi, David J. Santos, Jose Manuel Vazquez
Abstract:
In this work, a new platform for mobile-health systems is presented. The system's target application is providing decision support to rescue corps or military medical personnel in combat areas. The software architecture relies on a distributed client-server system that manages a hierarchy of wireless ad-hoc networks in which several different types of client operate. Each client is characterized by different hardware and software requirements. The lower hierarchy levels rely on a network of completely custom devices that store clinical information and patient status; these are designed to form an ad-hoc network operating in the 2.4 GHz ISM band and complying with the IEEE 802.15.4 standard (ZigBee). Medical personnel may interact with such devices, called MICs (Medical Information Carriers), by means of a PDA (Personal Digital Assistant) or an MDA (Medical Digital Assistant), and transmit the information stored in their local databases, as well as issue service requests to the upper hierarchy levels, using the IEEE 802.11 a/b/g standard (WiFi). The server acts as a repository that stores both medical evacuation forms and associated events (e.g., a teleconsulting request). All the actors participating in the diagnostic or evacuation process may access such a repository asynchronously and update its content or generate new events. The designed system aims to optimise and improve information spreading and flow among all the system components, with the aim of improving both diagnostic quality and the evacuation process.
Keywords: IEEE 802.15.4 (ZigBee), IEEE 802.11 a/b/g (WiFi), distributed client-server systems, embedded databases, issue trackers, ad-hoc networks.
73 Bit Model Based Key Management Scheme for Secure Group Communication
Authors: R. Varalakshmi
Abstract:
For the last decade, researchers have focused their interest on multicast group key management frameworks. The central research challenge is secure and efficient group key distribution. The present paper is based on a bit-model based secure multicast group key distribution scheme using the most popular absolute encoder output type code, named the Gray code. The focus is twofold. The first fold deals with the reduction of computation complexity, which is achieved in our scheme by performing fewer multiplication operations during the key updating process. To optimize the number of multiplication operations, an O(1) time algorithm to multiply two N-bit binary numbers, which could be used in an N x N bit-model of a reconfigurable mesh, is used in this proposed work. The second fold aims at reducing the amount of information stored in the Group Center and group members while performing the update operation on the key content. Comparative analysis illustrating the performance of various key distribution schemes is shown in this paper, and it has been observed that the proposed algorithm reduces the computation and storage complexity significantly. Our proposed algorithm is suitable for high-performance computing environments.
Keywords: Multicast Group key distribution, Bit model, Integer Multiplications, reconfigurable mesh, optimal algorithm, Gray Code, Computation Complexity, Storage Complexity.
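The Gray-code property the scheme builds on is that consecutive codewords differ in exactly one bit, so a position change costs a single-bit update. A minimal sketch of the standard conversions:

```python
# Standard binary <-> Gray code conversions.
def to_gray(n: int) -> int:
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    n = 0
    while g:                # fold the prefix XORs back down
        n ^= g
        g >>= 1
    return n

codes = [format(to_gray(i), "03b") for i in range(8)]
print(codes)    # ['000', '001', '011', '010', '110', '111', '101', '100']
assert all(from_gray(to_gray(i)) == i for i in range(256))
```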
72 Standard Deviation of Mean and Variance of Rows and Columns of Images for CBIR
Authors: H. B. Kekre, Kavita Patil
Abstract:
This paper describes a novel and effective approach to content-based image retrieval (CBIR) that represents each image in the database by a vector of feature values called "standard deviation of mean vectors of color distribution of rows and columns of images for CBIR". In many areas of commerce, government, academia, and hospitals, large collections of digital images are being created. This paper describes an approach that uses image contents as the feature vector for the retrieval of similar images. There are several classes of features that are used to specify queries: colour, texture, shape, and spatial layout. Colour features are often easily obtained directly from the pixel intensities. In this paper, feature extraction is done for the texture descriptors 'variance' and 'variance of variances'. First, the standard deviation of the row means and of the column means is calculated for the R, G, and B planes. These six values are obtained for one image and act as a feature vector. Second, we calculate the variances of the rows and columns of the R, G and B planes of an image; then the six standard deviations of these variance sequences are calculated to form a feature vector of dimension six. We applied our approach to a database of 300 BMP images. We determined the capability of automatic indexing by analyzing image content, using color and texture as features, and by applying the Euclidean distance as a similarity measure.
Keywords: Standard deviation, Image retrieval, color distribution, Variance, Variance of Variance, Euclidean distance.
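Both six-dimensional descriptors, and the Euclidean match between them, are straightforward to express; a minimal sketch for one RGB image stored as an H x W x 3 array (random arrays stand in for database images):

```python
# The two 6-D feature vectors described above, plus the Euclidean match.
import numpy as np

def feature_std_of_means(img):
    # std of the row-mean vector and the column-mean vector, per R, G, B plane
    return np.array([np.std(img[:, :, c].mean(axis=ax))
                     for c in range(3) for ax in (1, 0)])

def feature_std_of_variances(img):
    # std of the row-variance vector and the column-variance vector, per plane
    return np.array([np.std(img[:, :, c].var(axis=ax))
                     for c in range(3) for ax in (1, 0)])

def distance(q, t):
    return np.linalg.norm(q - t)            # Euclidean similarity measure

a = np.random.rand(128, 128, 3)             # placeholder images
b = np.random.rand(128, 128, 3)
print(distance(feature_std_of_means(a), feature_std_of_means(b)))
```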
71 Using Genetic Algorithms to Outline Crop Rotations and a Cropping-System Model
Authors: Nicolae Bold, Daniel Nijloveanu
Abstract:
The cropping-system idea is a method used by farmers. It is an environmentally friendly method that protects natural resources (soil, water, air, nutritive substances) and increases production at the same time, taking into account certain crop particularities. Combining this powerful method with the concepts of genetic algorithms makes it possible to generate sequences of crops that form a rotation. The usage of this type of algorithm has been efficient in solving optimization problems, and their polynomial complexity allows them to be used for solving more difficult and varied problems. In our case, the optimization consists of finding the most profitable rotation of crops. One of the expected results is to optimize the usage of resources, in order to minimize the costs and maximize the profit. In order to achieve these goals, a genetic algorithm was designed. This algorithm ensures the finding of several optimized cropping-system possibilities which have the highest profit and thus minimize the costs. The algorithm uses genetic-based methods (mutation, crossover) and structures (genes, chromosomes). A cropping-system possibility is considered a chromosome, and a crop within the rotation is a gene within a chromosome. Results on the efficiency of this method are presented in a dedicated section. The implementation of this method would benefit the activity of farmers by giving them hints and helping them use resources efficiently.
Keywords: Genetic algorithm, chromosomes, genes, cropping, agriculture.
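A minimal sketch of the encoding described above: a rotation (chromosome) is a sequence of crops (genes), with a hypothetical profit table and a penalty for repeating a crop in consecutive positions standing in for the paper's real fitness function:

```python
# Genetic algorithm over crop rotations: selection, crossover, mutation.
import random

CROPS = ["wheat", "maize", "soy", "clover"]
PROFIT = {"wheat": 5, "maize": 7, "soy": 6, "clover": 2}   # illustrative units/ha

def fitness(rotation):
    score = sum(PROFIT[c] for c in rotation)
    score -= 4 * sum(a == b for a, b in zip(rotation, rotation[1:]))  # agronomic penalty
    return score

def crossover(a, b):
    cut = random.randrange(1, len(a))       # one-point crossover
    return a[:cut] + b[cut:]

def mutate(r, rate=0.1):
    return [random.choice(CROPS) if random.random() < rate else c for c in r]

pop = [[random.choice(CROPS) for _ in range(6)] for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                        # keep the best rotations
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(30)]
best = max(pop, key=fitness)
print(best, fitness(best))
```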
70 Comparison between Separable and Irreducible Goppa Code in McEliece Cryptosystem
Authors: Thuraya M. Qaradaghi, Newroz N. Abdulrazaq
Abstract:
The McEliece cryptosystem is an asymmetric type of cryptography based on error-correcting codes. The classical McEliece used an irreducible binary Goppa code, which is considered unbreakable until now, especially with the parameters [1024, 524, 101]; however, it suffers from a large public key matrix, which makes it difficult to use practically. In this work, Irreducible and Separable Goppa codes are introduced. The Irreducible and Separable Goppa codes used have flexible parameters and dynamic error vectors. A comparison between Separable and Irreducible Goppa codes in the McEliece cryptosystem has been done. For the encryption stage, to get better results for the comparison, two types of testing were chosen: in the first, the random message is constant while the parameters of the Goppa code are changed; in the second, the parameters of the Goppa code are constant (m = 8 and t = 10) while the random message is changed. The results show that the time needed to calculate the parity check matrix is higher for the separable than for the irreducible McEliece cryptosystem, which is expected, since the separable type calculates an extra parity check matrix for g2(z) in the decryption process; on the other hand, the time needed to execute the error locator in the decryption stage is better for the separable type than for the irreducible type. The proposed implementation has been done in Visual Studio C#.
Keywords: McEliece cryptosystem, Goppa code, separable, irreducible.
69 Artificial Intelligence-Based Chest X-Ray Test of COVID-19 Patients
Authors: Dhurgham Al-Karawi, Nisreen Polus, Shakir Al-Zaidi, Sabah Jassim
Abstract:
The management of COVID-19 patients based on chest imaging is emerging as an essential tool for evaluating the spread of the pandemic that has gripped the global community. It has already been used to monitor the situation of COVID-19 patients who have issues with their respiratory status. There has been an increase in the use of chest imaging for the medical triage of patients showing moderate-to-severe clinical COVID-19 features, due to the fast spread of the pandemic to all continents and communities. This article demonstrates the development of machine learning techniques for testing COVID-19 patients using Chest X-Ray (CXR) images in nearly real-time, to distinguish COVID-19 infection with a significantly high level of accuracy. The testing has covered a combination of different datasets of CXR images of positive COVID-19 patients, patients with viral and bacterial infections, as well as people with a clear chest. The proposed AI scheme successfully distinguishes CXR scans of COVID-19 infected patients from CXR scans of viral and bacterial pneumonia as well as normal cases, with an average accuracy of 94.43%, sensitivity of 95%, and specificity of 93.86%. Predicted decisions are supported by visual evidence to help clinicians speed up the initial assessment of new suspected cases, especially in resource-constrained environments.
Keywords: COVID-19, chest x-ray scan, artificial intelligence, texture analysis, local binary pattern transform, Gabor filter.
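A minimal sketch of the texture-descriptor stage named in the keywords: a uniform LBP histogram plus energy statistics from a small Gabor filter bank, concatenated into one feature vector per scan (a random array stands in for a CXR; the paper's full scheme adds the classifier behind the reported accuracy):

```python
# LBP + Gabor texture features for one grayscale scan.
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.filters import gabor

def texture_features(img):
    img8 = (img * 255).astype(np.uint8)     # LBP expects integer gray levels
    lbp = local_binary_pattern(img8, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    gab = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        real, imag = gabor(img, frequency=0.2, theta=theta)
        mag = np.hypot(real, imag)
        gab += [mag.mean(), mag.std()]      # energy statistics per orientation
    return np.concatenate([hist, gab])

cxr = np.random.rand(128, 128)              # placeholder grayscale scan
print(texture_features(cxr).shape)          # (18,)
```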