Search results for: pure and hybrid automatic repeat request (ARQ).
284 Regional Stability Analysis of Rotor-Ball Bearing and Rotor-Roller Bearing Systems Considering Switching Phenomena
Authors: Jafar Abbaszadeh Chekan, Kaveh Merat, Hassan Zohoor
Abstract:
In this study, the regional stability of a rotor system supported on rolling bearings with radial clearance is studied. The rotor is assumed to be rigid. Due to the radial clearance of the bearings and the dynamic configuration of the system, each rolling element of the bearings may either be in contact with both races (under compression) or lose its contact. As a result, this change in the dynamics makes the system a switching system, which is a type of hybrid system. In this investigation, by adopting the Multiple Lyapunov Function theorem and using the Hamiltonian function as a candidate Lyapunov function, the stability of the system is studied. The purpose of this study is to inspect the regional stability of rotor-roller bearing and rotor-ball bearing systems.
Keywords: Stability analysis, Rotor-rolling bearing systems, Switching systems, Multiple Lyapunov Function Method
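For readers unfamiliar with the switching-stability argument invoked above, the standard Multiple Lyapunov Function condition is sketched below in its textbook (Branicky-type) form, with the Hamiltonian H taken as each candidate function as the abstract suggests; this is a generic statement, not the paper's own derivation.

```latex
% Standard Multiple Lyapunov Function condition (assumed textbook form):
% one candidate V_i per contact mode i; here each V_i is taken to be the Hamiltonian H.
\begin{align*}
  &V_i(x) = H(x), \qquad
   \dot{V}_i\big(x(t)\big) \le 0 \ \text{ whenever mode } i \text{ is active},\\
  &V_i\big(x(t_l)\big) \le V_i\big(x(t_k)\big)
   \ \text{ for consecutive instants } t_k < t_l \text{ at which mode } i \text{ is switched in.}
\end{align*}
```

Under these two conditions the switched system remains stable in the sense of Lyapunov within the region where they hold.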
283 Accurate Visualization of Graphs of Functions of Two Real Variables
Authors: Zeitoun D. G., Thierry Dana-Picard
Abstract:
The study of a real function of two real variables can be supported by visualization using a Computer Algebra System (CAS). One type of constraint of such systems is due to the algorithms implemented, which yield continuous approximations of the given function by interpolation. This often masks discontinuities of the function and can produce strange plots that are not compatible with the mathematics. In recent years, point-based geometry has gained increasing attention as an alternative surface representation, both for efficient rendering and for flexible geometry processing of complex surfaces. In this paper we present different artifacts created by mesh surfaces near discontinuities and propose a point-based method that controls and reduces these artifacts. A least squares penalty method for the automatic generation of a mesh that controls the behavior of the chosen function is presented. The special feature of this method is its ability to improve the accuracy of the surface visualization near a set of interior points where the function may be discontinuous. The present method is formulated as a minimax problem, and the non-uniform mesh is generated using an iterative algorithm. Results show that for large, poorly conditioned matrices, the new algorithm gives more accurate results than the classical preconditioned conjugate gradient algorithm.
Keywords: Function singularities, mesh generation, point allocation, visualization, collocation least squares method, Augmented Lagrangian method, Uzawa's Algorithm, Preconditioned Conjugate Gradient
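Since the abstract benchmarks against the classical preconditioned conjugate gradient algorithm, a minimal Jacobi-preconditioned CG sketch is included below for reference; it is a generic illustration, not the authors' implementation, and the matrix and right-hand side are placeholders.

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=1000):
    """Jacobi-preconditioned conjugate gradient for a symmetric positive-definite A."""
    x = np.zeros_like(b)
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner: inverse of the diagonal
    r = b - A @ x                     # initial residual
    z = M_inv * r                     # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Placeholder ill-conditioned SPD system for demonstration only.
n = 200
A = np.diag(np.linspace(1.0, 1e4, n)) + 1e-3 * np.ones((n, n))
b = np.ones(n)
x = jacobi_pcg(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```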
282 Case Study Analysis of 2017 European Railway Traffic Management Incident: The Application of System for Investigation of Railway Interfaces Methodology
Authors: Sanjeev Kumar Appicharla
Abstract:
This paper presents the results of the modelling and analysis of the European Railway Traffic Management (ERTMS) safety-critical incident on the Cambrian Railway in the UK, using Report RAIB 17/2019 as the primary input, in order to raise awareness of biases in the systems engineering process. The RAIB, the UK's independent accident investigator, published Report RAIB 17/2019 giving the details of its investigation of the focal event in the form of the immediate cause, causal factors, underlying factors and recommendations to prevent a repeat of the safety-critical incident on the Cambrian Line. The Systems for Investigation of Railway Interfaces (SIRI) methodology is used to model and analyse the safety-critical incident. The SIRI methodology uses the Swiss Cheese Model to model the incident and identify latent failure conditions (potentially less-than-adequate conditions) by means of the Management Oversight and Risk Tree technique. The benefits of the SIRI methodology are threefold. First, it incorporates the "heuristics and biases" approach in the Management Oversight and Risk Tree technique to identify systematic errors: civil engineering and programme management railway professionals are aware of the role "optimism bias" plays in programme cost overruns and of the bow-tie (fault and event tree) model-based safety risk modelling technique, but the role of systematic errors due to heuristics and biases is not yet appreciated; this overcomes the problem of omitting human and organisational factors from accident analysis. Second, the scope of the investigation includes all levels of the socio-technical system, including government, regulatory and railway safety bodies, duty holders, signalling firms, transport planners and front-line staff, so that lessons are learned at the decision-making and implementation levels as well. Third, the author's past accident case studies are supplemented with evidence drawn from practitioners' and academic researchers' publications. This serves to discuss the role of systems thinking in improving the decision-making and risk-management processes and practices in the IEC 15288 systems engineering standard and in industrial contexts such as the GB railways and Artificial Intelligence (AI) contexts.
Keywords: Accident analysis, AI algorithm internal audit, bounded rationality, Byzantine failures, heuristics and biases approach.
281 Feature Point Detection by Combining Advantages of Intensity-based Approach and Edge-based Approach
Authors: Sungho Kim, Chaehoon Park, Yukyung Choi, Soon Kwon, In So Kweon
Abstract:
In this paper, a novel corner detection method is presented to stably extract geometrically important corners. Intensity-based corner detectors such as the Harris detector can detect corners in noisy environments but suffer from inaccurate corner localization and miss corners at obtuse angles. Edge-based corner detectors such as Curvature Scale Space can detect structural corners but show unstable corner detection due to incomplete edge detection in noisy environments. The proposed image-based direct curvature estimation can overcome both the inaccurate structural corner detection of the Harris detector (intensity-based) and the unstable corner detection of Curvature Scale Space caused by incomplete edge detection. Various experimental results validate the robustness of the proposed method.
Keywords: Feature, intensity, contour, hybrid.
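For readers who want to reproduce the intensity-based baseline discussed above, a minimal Harris corner sketch using OpenCV is given below; it illustrates only the baseline detector, not the authors' direct curvature estimation, and the image path is a placeholder.

```python
import cv2
import numpy as np

# Placeholder input image; replace with your own test image.
gray = cv2.imread("test_image.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Harris response: blockSize = neighbourhood size, ksize = Sobel aperture, k = Harris constant.
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Keep responses above a fraction of the strongest response as corner candidates.
threshold = 0.01 * response.max()
corners = np.argwhere(response > threshold)   # (row, col) coordinates
print(f"{len(corners)} corner candidates detected")
```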
280 An Approach for Vocal Register Recognition Based on Spectral Analysis of Singing
Authors: Aleksandra Zysk, Pawel Badura
Abstract:
Recognizing and controlling vocal registers during singing is a difficult task for beginner vocalists. It requires, among other things, identifying which part of the natural resonators is being used when a sound propagates through the body. Thus, an application has been designed allowing for sound recording, automatic vocal register recognition (VRR), and a graphical user interface providing real-time visualization of the signal and recognition results. Six spectral features are determined for each time frame and passed to a support vector machine classifier, yielding a binary decision on the head or chest register assignment of the segment. The classification training and testing data were recorded by ten professional female singers (sopranos, aged 19-29) performing sounds in both the chest and head registers. The classification accuracy exceeded 93% in each of several validation schemes. Apart from a hard two-class decision, the support vector classifier also returns the distance between a particular feature vector and the discrimination hyperplane in the feature space; this information reflects the level of certainty of the vocal register classification in a fuzzy way. Thus, the designed recognition and training application is able to assess and visualize the continuous trend in singing in a user-friendly graphical mode, providing an easy way to control the vocal emission.
Keywords: Classification, singing, spectral analysis, vocal emission, vocal register.
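A minimal scikit-learn sketch of the classification step described above is given below; the feature matrix is random placeholder data standing in for the six per-frame spectral features, and the decision_function call illustrates the signed distance to the separating hyperplane mentioned in the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))                  # placeholder: 6 spectral features per frame
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # placeholder labels: 0 = chest, 1 = head

clf = SVC(kernel="rbf", C=1.0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

clf.fit(X, y)
# Signed distance of each frame to the separating hyperplane: magnitude ~ certainty.
margins = clf.decision_function(X[:5])
print("margins:", margins)
```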
279 Automatic Detection of Breast Tumors in Sonoelastographic Images Using DWT
Authors: A. Sindhuja, V. Sadasivam
Abstract:
Breast cancer is the most common malignancy in women and the second leading cause of death for women all over the world; the earlier the cancer is detected, the better the treatment. Diagnosis and treatment rely on the segmentation of sonoelastographic images, and texture features have not previously been considered for sonoelastographic segmentation. Sonoelastographic images of 15 patients containing both benign and malignant tumors are considered for experimentation. The images are enhanced to remove noise, improve contrast and emphasize the tumor boundary. Each image is then decomposed into sub-bands using single-level Daubechies wavelets with one to six coefficients. Grey Level Co-occurrence Matrix (GLCM) and Local Binary Pattern (LBP) features are extracted from each sub-band and then selected by ranking using the Sequential Floating Forward Selection (SFFS) technique. The resultant images undergo K-Means clustering followed by a few post-processing steps to remove false spots, and the tumor boundary is detected from the segmented image. It is proposed that the Local Binary Pattern (LBP) features from the vertical coefficients of the Daubechies wavelet with two coefficients are best suited for the segmentation of sonoelastographic breast images among the wavelet members using one to six coefficients for decomposition. The results are also quantified with the help of an expert radiologist. The proposed work can be used in the further diagnostic process to decide whether the segmented tumor is benign or malignant.
Keywords: Breast Cancer, Segmentation, Sonoelastography, Tumor Detection.
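A much-simplified sketch of the processing chain described above (single-level Daubechies decomposition, LBP texture features, K-Means clustering) is shown below using PyWavelets, scikit-image and scikit-learn; the input image is a random placeholder, the 'db2' wavelet is assumed to correspond to the "two coefficients" member, and the SFFS feature selection and post-processing steps are omitted.

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern
from sklearn.cluster import KMeans

# Placeholder sonoelastographic image; replace with a real, noise-filtered image.
rng = np.random.default_rng(0)
image = rng.random((128, 128))

# Single-level 2-D DWT with a Daubechies wavelet ('db2' assumed here).
cA, (cH, cV, cD) = pywt.dwt2(image, "db2")

# LBP texture map of the vertical detail sub-band, as suggested by the abstract.
lbp = local_binary_pattern(cV, P=8, R=1, method="uniform")

# Cluster sub-band pixels into two groups (tumor vs. background) by texture.
features = lbp.reshape(-1, 1)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
segmentation = labels.reshape(cV.shape)
print(segmentation.shape, np.bincount(labels))
```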
278 Current Status of Industry 4.0 in Material Handling Automation and In-house Logistics
Authors: Orestis Κ. Efthymiou, Stavros T. Ponis
Abstract:
In the last decade, a new industrial revolution seems to be emerging, supported, once again, by the rapid advancements of information technology in the area of machine-to-machine (M2M) communication, which permits large numbers of intelligent devices, e.g. sensors, to communicate with each other and take decisions without any, or with minimal, indirect human intervention. The advent of these technologies has triggered the emergence of a new category of hybrid (cyber-physical) manufacturing systems, combining advanced manufacturing techniques with innovative M2M applications based on the Internet of Things (IoT), under the umbrella term Industry 4.0. Even though the topic of Industry 4.0 has attracted much attention during the last few years, attempts to provide a systematic literature review of the subject are scarce. In this paper, we present the authors' initial study of the field, with a special focus on the use and applications of Industry 4.0 principles in material handling automation and in-house logistics. Research shows that despite the vivid discussion and attractiveness of the subject, there are still many challenges and issues that have to be addressed before Industry 4.0 becomes standardized and widely applicable.
Keywords: Industry 4.0, internet of things, manufacturing systems, material handling, logistics.
277 On the EM Algorithm and Bootstrap Approach Combination for Improving Satellite Image Fusion
Authors: Tijani Delleji, Mourad Zribi, Ahmed Ben Hamida
Abstract:
This paper discusses the combination of the EM algorithm and the bootstrap approach applied to improve the satellite image fusion process. This novel satellite image fusion method, based on the estimation-theoretic EM algorithm and reinforced by the bootstrap approach, was successfully implemented and tested. The sensor images are first split by a Bayesian segmentation method to determine a joint region map for the fused image. Then, the EM algorithm is used in conjunction with the bootstrap approach to develop the bootstrap EM fusion algorithm, producing the fused target image. We propose to estimate the statistical parameters from the iterative equations of the EM algorithm using a reference set of representative bootstrap samples of images, whose sizes are determined from a new criterion called the 'hybrid criterion'. The results of our work show that using the bootstrap EM (BEM) approach in image fusion improves the performance of the estimated parameters, which leads to an improvement in fused image quality, and reduces the computing time of the fusion process.
Keywords: Satellite image fusion, Bayesian segmentation, Bootstrap approach, EM algorithm.
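To make the bootstrap-plus-EM idea concrete, a toy sketch is given below in which EM (via scikit-learn's GaussianMixture) is fitted to bootstrap resamples of pixel intensities and the estimates are averaged; this is a generic illustration on placeholder data, not the authors' BEM fusion algorithm.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Placeholder pixel intensities: a two-component mixture standing in for two image regions.
intensities = np.concatenate([rng.normal(0.3, 0.05, 4000),
                              rng.normal(0.7, 0.08, 4000)]).reshape(-1, 1)

n_boot, boot_means = 20, []
for _ in range(n_boot):
    # Bootstrap resample of the pixel population.
    sample = intensities[rng.integers(0, len(intensities), size=len(intensities))]
    # EM estimation of the two-component Gaussian mixture on the resample.
    gm = GaussianMixture(n_components=2, random_state=0).fit(sample)
    boot_means.append(np.sort(gm.means_.ravel()))  # sort to keep component order consistent

# Bootstrap-averaged EM estimates of the region means.
print("averaged component means:", np.mean(boot_means, axis=0))
```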
276 A Comparative Study of Malware Detection Techniques Using Machine Learning Methods
Authors: Cristina Vatamanu, Doina Cosovan, Dragoş Gavriluţ, Henri Luchian
Abstract:
In the past few years, the amount of malicious software has increased exponentially and, therefore, machine learning algorithms have become instrumental in separating clean from malware files through (semi-)automated classification. When working with very large datasets, the major challenge is to reach both a very high malware detection rate and a very low false positive rate; another challenge is to minimize the time needed for the machine learning algorithm to do so. This paper presents a comparative study of different machine learning techniques such as linear classifiers, ensembles, decision trees and various hybrids thereof. The training dataset consists of approximately 2 million clean files and 200,000 infected files, which is a realistic quantitative mixture. The paper investigates the above-mentioned methods with respect to both their performance (detection rate and false positive rate) and their practicability.
Keywords: Detection Rate, False Positives, Perceptron, One Side Class, Ensembles, Decision Tree, Hybrid methods, Feature Selection.
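A compact scikit-learn sketch of the kind of comparison described above is given below, reporting detection rate and false positive rate for a perceptron, a decision tree and an ensemble on synthetic placeholder data; it is not the authors' benchmark or dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Perceptron
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

# Synthetic placeholder data: label 1 = malware, 0 = clean (imbalanced, as in practice).
X, y = make_classification(n_samples=20000, n_features=30, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "perceptron": Perceptron(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=10),
    "ensemble (random forest)": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    y_hat = model.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
    detection_rate = tp / (tp + fn)        # fraction of malware correctly flagged
    false_positive_rate = fp / (fp + tn)   # fraction of clean files wrongly flagged
    print(f"{name}: DR={detection_rate:.3f}  FPR={false_positive_rate:.3f}")
```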
275 Extraction of Data from Web Pages: A Vision Based Approach
Authors: P. S. Hiremath, Siddu P. Algur
Abstract:
With the explosive growth of information sources available on the World Wide Web, it has become increasingly difficult to identify the relevant pieces of information, since web pages are often cluttered with irrelevant content such as advertisements, navigation panels and copyright notices surrounding the main content of the page. Hence, tools for mining data regions, data records and data items need to be developed in order to provide value-added services. Currently available automatic techniques to mine data regions from web pages are still unsatisfactory because of their poor performance and tag-dependence. In this paper, a novel method to extract data items from web pages automatically is proposed. It comprises two steps: (1) identification and extraction of data regions based on visual-clue information, and (2) identification of data records and extraction of data items from a data region. For step 1, a novel and more effective method is proposed that finds the data regions formed by all types of tags using visual clues. For step 2, a more effective method, namely Extraction of Data Items from web Pages (EDIP), is adopted to mine data items; the EDIP technique is a list-based approach in which the list is a linear data structure. The proposed technique is able to mine non-contiguous data records and can correctly identify data regions, irrespective of the type of tag in which they are bound. Our experimental results show that the proposed technique performs better than existing techniques.
Keywords: Web data records, web data regions, web mining.
274 An Agent Oriented Approach to Operational Profile Management
Authors: Sunitha Ramanujam, Hany El Yamany, Miriam A. M. Capretz
Abstract:
Software reliability, defined as the probability of a software system or application functioning without failure or error over a defined period of time, has been an important area of research for over three decades, and several research efforts aimed at developing models to improve reliability are currently underway. One of the most popular approaches to software reliability adopted by some of these efforts involves the use of operational profiles to predict how software applications will be used; operational profiles are a quantification of the usage patterns of a software application. The research presented in this paper investigates an innovative multi-agent framework for the automatic creation and management of operational profiles for generic distributed systems after their release onto the market. The architecture of the proposed Operational Profile MAS (Multi-Agent System) is presented along with detailed descriptions of the various models arrived at following the analysis and design phases of the proposed system. The operational profile in this paper is extended to comprise seven different profiles. Further, the criticality of operations is defined using a newly composed metric in order to organize the testing process and to decrease the time and cost involved in it. A prototype implementation of the proposed MAS is included as proof of concept, and the framework is considered a step towards making distributed systems intelligent and self-managing.
Keywords: Software reliability, Software testing, Metrics, Distributed systems, Multi-agent systems
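Since an operational profile is simply a quantification of usage patterns, a minimal sketch of deriving one from an operation log is shown below; the log entries and operation names are illustrative placeholders, not part of the proposed MAS.

```python
from collections import Counter

# Placeholder usage log: each entry is one invoked operation of the application.
usage_log = ["login", "search", "search", "checkout", "search", "login", "browse",
             "search", "browse", "checkout"]

counts = Counter(usage_log)
total = sum(counts.values())

# Operational profile: relative occurrence probability of each operation.
profile = {op: n / total for op, n in counts.items()}
for op, p in sorted(profile.items(), key=lambda kv: -kv[1]):
    print(f"{op:10s} {p:.2f}")
```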
273 Anti-microbial Activity of Aristolochic Acid from Root of Aristolochia bracteata Retz
Authors: S. Angalaparameswari, T.S. Mohamed Saleem, M. Alagusundaram, S. Ramkanth, V.S. Thiruvengadarajan, K. Gnanaprakash, C. Madhusudhana Chetty, G. Pratheesh
Abstract:
The present research was designed to investigate the anti-microbial activity of aristolochic acid from the root of Aristolochia bracteata. Aristolochic acid I was isolated from the methanolic and ethyl acetate extracts of Aristolochia bracteata and confirmed through IR, NMR and MS, and its percentage purity was determined by UV and HPLC methods. The antibacterial activity of the extracts of Aristolochia bracteata and of the isolated compound was determined by the disc diffusion method. The results revealed that the aristolochic acid isolated from the methanolic extract was purer than the compound from the ethyl acetate extract. The various extracts (500 μg/disc) of Aristolochia bracteata showed moderate antibacterial activity, with average zones of inhibition of 7-18 mm by the disc diffusion method. Among the extracts, the ethyl acetate and methanol extracts showed good anti-microbial activity, and the growth of E. coli (18 mm) was strongly inhibited. In the microbial assay, the compound (aristolochic acid I) isolated from the ethyl acetate and methanol extracts showed good antimicrobial activity, and the zone of inhibition of both at the higher concentration of 50 μg/ml was similar to that of the standard aristolochic acid. It may be concluded that the isolated aristolochic acid I has good anti-bacterial activity.
Keywords: Aristolochic acid I, Anti-microbial activity, Aristolochia bracteata, Bacillus subtilis, E. coli
272 Magneto-Thermo-Mechanical Analysis of Electromagnetic Devices Using the Finite Element Method
Authors: Michael G. Pantelyat
Abstract:
The fundamentals of pure and applied research in the area of magneto-thermo-mechanical numerical analysis and design of innovative electromagnetic devices (modern induction heaters, novel thermoelastic actuators, rotating electrical machines, induction cookers, electrophysical devices) are elaborated. Mathematical models of the magneto-thermo-mechanical processes in electromagnetic devices, taking into account the main interactions of the interrelated phenomena, are developed, and a graphical representation of the coupled (multiphysics) phenomena under consideration is proposed. Numerical techniques for the solution of nonlinear problems are also developed. On this basis, effective numerical algorithms for the solution of actual problems of practical interest are proposed, validated and implemented in the applied 2D and 3D computer codes developed, and many applied problems regarding modern electrical engineering devices are numerically solved. The influence of various interrelated physical phenomena (temperature dependence of material properties, thermal radiation, conditions of convective heat transfer, contact phenomena, etc.) on the accuracy of the electromagnetic, thermal and structural analyses is investigated. Important practical recommendations on the choice of rational structures, materials and operation modes of the electromagnetic devices under consideration are proposed and implemented in industry.
Keywords: Electromagnetic devices, multiphysics, numerical analysis, simulation and design.
271 Effect of Compost Application on Uptake and Allocation of Heavy Metals and Plant Nutrients and Quality of Oriental Tobacco Krumovgrad 90
Authors: Violina R. Angelova, Venelina T. Popova, Radka V. Ivanova, Givko T. Ivanov, Krasimir I. Ivanov
Abstract:
A comparative study of the impact of compost on the uptake and allocation of nutrients and heavy metals and on the quality of Oriental tobacco Krumovgrad 90 has been carried out. The experiment was performed on an agricultural field contaminated by the lead-zinc smelter near the town of Kardzali, Bulgaria, after the closure of lead production. The compost treatments had significant effects on the uptake and allocation of plant nutrients and heavy metals. The incorporation of compost leads to a decrease in the amount of heavy metals present in the tobacco leaves, by 36%, 12% and 6% for Cd, Pb and Zn, respectively. Application of the compost increases the content of potassium, calcium and magnesium in the tobacco leaves and may therefore favorably affect the burning properties of the tobacco. However, the incorporation of compost in the soil has a negative impact on the quality and typicality of the Oriental tobacco variety Krumovgrad 90: it leads to an increase in the size of the tobacco leaves, which become darker in colour, less fleshy and changed in form, becoming (much) broader in the second, third and fourth stalk positions, accompanied by a decrease in the quality of the tobacco. The incorporation of compost also results in an increase in mineral substances (pure ash), total nicotine and nitrogen, and a reduction in reducing sugars, which causes the quality of the tobacco leaves to deteriorate (particularly in the third and fourth harvests).
Keywords: Chemical composition, compost, oriental tobacco, quality.
270 Multichannel Scheme under Max-Min Fairness Environment for Cognitive Radio Networks
Authors: Hans R. Márquez, Cesar Hernández, Ingrid Páez
Abstract:
This paper develops a multiple channel assignment model which allows spectrum opportunities in cognitive radio networks to be exploited in the most efficient way. The developed scheme allows several assignments of available, frequency-adjacent channels, which require a larger bandwidth, under a fairness environment. The hybrid assignment model is made up of two algorithms: one that ranks and selects the available frequency channels, and another in charge of establishing max-min fairness so as not to restrict the spectrum opportunities of the other secondary users who also wish to transmit. Measurements were made of average bandwidth and average delay, and fairness was computed for several channel assignments. The results were evaluated with experimental spectrum occupancy data captured from the GSM frequency band. The developed model shows evidence of improved use of spectrum opportunities and a wider average transmission bandwidth for each secondary user, while maintaining fairness criteria in channel assignment.
Keywords: Bandwidth, fairness, multichannel, secondary users.
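To illustrate the max-min fairness criterion used by the second algorithm, a generic progressive-filling sketch is given below; the user demands and total bandwidth are placeholder numbers, and this is the textbook allocation rule rather than the authors' channel-assignment scheme.

```python
def max_min_fair(demands, capacity):
    """Progressive filling: no user gets more than it asks for, and the smallest
    allocation is made as large as possible (classic max-min fairness)."""
    allocation = {user: 0.0 for user in demands}
    remaining = dict(demands)
    while remaining and capacity > 1e-12:
        share = capacity / len(remaining)          # equal share of what is left
        for user in list(remaining):
            grant = min(share, remaining[user])
            allocation[user] += grant
            remaining[user] -= grant
            capacity -= grant
            if remaining[user] <= 1e-12:
                del remaining[user]                # satisfied users drop out
    return allocation

# Placeholder example: bandwidth demands (MHz) of four secondary users, 10 MHz available.
print(max_min_fair({"SU1": 2.0, "SU2": 2.0, "SU3": 4.0, "SU4": 8.0}, capacity=10.0))
```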
269 An Energy-Latency-Efficient MAC Protocol for Wireless Sensor Networks
Authors: Tahar Ezzedine, Mohamed Miladi, Ridha Bouallegue
Abstract:
Because nodes are usually battery-powered, energy is a very scarce resource in wireless sensor networks. For this reason, the design of medium access control has to take energy efficiency as one of its main concerns. Accordingly, several approaches can be followed to improve the energy performance of MAC schemes in wireless sensor networks: some researchers try to limit idle listening, while others focus on mitigating overhearing (i.e., a node hearing a packet destined for another node) or on reducing the number of control packets used. In this paper, we propose a new hybrid MAC protocol termed ELE-MAC (Energy-Latency-Efficient MAC). The major design goals of ELE-MAC are energy and latency efficiency, and it adopts fewer control packets than SMAC in order to preserve energy. We carried out ns-2 simulations to evaluate the performance of the proposed protocol, and the simulation results prove the energy efficiency of ELE-MAC. Additionally, our solution achieves statistically the same or better latency than adaptive SMAC.
Keywords: Control packet, energy efficiency, medium access control, wireless sensor networks.
268 Finite Element Prediction of Multi-Size Particulate Flow through Two-Dimensional Pump Casing
Authors: K. V. Pagalthivarthi, R. J. Visintainer
Abstract:
Two-dimensional Eulerian (volume-averaged) continuity and momentum equations governing multi-size slurry flow through pump casings are solved by applying a penalty finite element formulation. The computational strategy, validated for multi-phase flow through rectangular channels, is adapted to the present study. The flow fields of the carrier, the mixture and each solids species, and the concentration field of each species, are determined sequentially in an iterative manner. The eddy viscosity field computed using the Spalart-Allmaras model for the pure carrier phase is modified for the presence of particles. The streamline-upwind Petrov-Galerkin formulation is used for all momentum equations (carrier, mixture and each solids species) and for the concentration field of each species. After ensuring mesh-independence of the solutions, results of the multi-size particulate flow simulation are presented to bring out the effect of bulk flow rate, average inlet concentration, and inlet particle size distribution. Mono-size computations using (1) the concentration-weighted mean diameter of the slurry and (2) the D50 size of the slurry are also presented for comparison with the multi-size results.
Keywords: Eulerian-Eulerian model, Multi-size particulate flow, Penalty finite elements, Pump casing, Spalart-Allmaras.
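As a small illustration of the two mono-size reductions mentioned above, the sketch below computes the concentration-weighted mean diameter and the D50 of an assumed discrete particle size distribution; the size classes and concentrations are placeholder values, not data from the paper.

```python
import numpy as np

# Placeholder discrete size distribution: class diameters (mm) and solids
# concentrations (volume fraction) of each size class in the slurry.
diameters = np.array([0.075, 0.150, 0.300, 0.600, 1.180])
concentrations = np.array([0.04, 0.08, 0.10, 0.05, 0.03])

weights = concentrations / concentrations.sum()

# (1) Concentration-weighted mean diameter of the slurry.
d_mean = np.sum(weights * diameters)

# (2) D50: diameter at which the cumulative (finer-than) weight fraction reaches 50%.
order = np.argsort(diameters)
cumulative = np.cumsum(weights[order])
d50 = np.interp(0.5, cumulative, diameters[order])

print(f"concentration-weighted mean diameter = {d_mean:.3f} mm, D50 = {d50:.3f} mm")
```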
267 Off-Policy Q-learning Technique for Intrusion Response in Network Security
Authors: Zheni S. Stefanova, Kandethody M. Ramachandran
Abstract:
With our increasing dependency on computer devices, we face the necessity of adequate, efficient and effective mechanisms for protecting our networks. There are two main problems that Intrusion Detection Systems (IDS) attempt to solve: 1) to detect an attack by analyzing the incoming traffic and inspecting the network (intrusion detection), and 2) to produce a prompt response when the attack occurs (intrusion prevention). It is critical to create an intrusion detection model that detects a breach in the system in time, and it is also challenging to make it provide an automatic response, with an acceptable delay, at every stage of the monitoring process. We cannot afford to adopt security measures that demand high computational power, and we cannot accept a mechanism that reacts with a delay. In this paper, we propose an intrusion response mechanism based on artificial intelligence and, more precisely, on reinforcement learning techniques (RLT). The RLT helps us to create a decision agent that controls the process of interacting with the undetermined environment. The goal is to find an optimal policy, which represents the intrusion response; we therefore solve the reinforcement learning problem using a Q-learning approach. Our agent produces an optimal immediate response while evaluating the network traffic. This Q-learning approach establishes the balance between exploration and exploitation and provides a unique, self-learning and strategic artificial intelligence response mechanism for IDS.
Keywords: Intrusion prevention, network security, optimal policy, Q-learning.
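For reference, a bare-bones tabular Q-learning update with epsilon-greedy exploration is sketched below; the states, actions and reward function are toy placeholders, not the intrusion-response environment of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3          # placeholder: traffic states x response actions
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    """Toy environment: placeholder reward and random next state."""
    reward = 1.0 if action == state % n_actions else -0.1
    return rng.integers(n_states), reward

state = rng.integers(n_states)
for _ in range(10000):
    # Epsilon-greedy: explore occasionally, otherwise exploit the current estimate.
    action = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Off-policy Q-learning update (bootstraps on the greedy next action).
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("greedy policy per state:", np.argmax(Q, axis=1))
```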
266 Numerical Investigation of the Performance of a Vorsyl Separator Using a Euler-Lagrange Approach
Authors: Guozhen Li, Philip Hall, Nick Miles, Tao Wu, Jie Dong
Abstract:
This paper presents an Euler-Lagrange model of the water-particle multiphase flow in a Vorsyl separator, in which particles of different densities are separated. A series of particles with densities ranging from 760 kg/m3 to 1380 kg/m3 were fed into the Vorsyl separator with water through the tangential inlet. The simulation showed that the feed material acquires a centrifugal force which allows most of the particles with a density less than that of water to move to the center of the separator, enter the vortex finder and leave the separator through the bottom outlet, while the particles heavier than water move to the wall, reach the throat area and leave the separator through the side outlet. The particles are thus separated, and the particles collected at the bottom outlet are pure and clean. The influence of particle density on separation efficiency was investigated, demonstrating a positive correlation of the separation efficiency with increasing density difference between the medium liquid and the particles. In addition, the influence of the split ratio was studied, which showed that the separation efficiency of the Vorsyl separator can be improved by increasing the split ratio. The simulation also suggested that the Vorsyl separator may not function when the feed velocity is smaller than a certain critical value, and that an increasing feed velocity gives rise to an increased pressure drop but does not necessarily increase the separation efficiency.
Keywords: Vorsyl separator, separation efficiency, CFD, split ratio.
265 Performing Diagnosis in Building with Partially Valid Heterogeneous Tests
Authors: Houda Najeh, Mahendra Pratap Singh, Stéphane Ploix, Antoine Caucheteux, Karim Chabir, Mohamed Naceur Abdelkrim
Abstract:
Building systems are highly vulnerable to different kinds of faults and human misbehavior, and energy efficiency and user comfort are directly affected by abnormalities in building operation. The available fault diagnosis tools and methodologies rely chiefly on rules or on purely model-based approaches, and it is assumed that a model- or rule-based test can be applied to any situation without taking the actual testing context into account. Contextual tests with validity domains could greatly reduce the design effort for detection tests. The main objective of this paper is to consider test validity when validating the test model, taking into account non-modeled events such as occupancy, weather conditions, door and window openings, and the expert's knowledge of the state of the system. The concept of heterogeneous tests is combined with test validity to generate fault diagnoses: a combination of rule-based, range-based and model-based tests, known as heterogeneous tests, is proposed to reduce the modeling complexity. The calculation of logical diagnoses, drawing on artificial intelligence, provides a global explanation consistent with the test results. An application example, an office setting at the Grenoble Institute of Technology, shows the efficiency of the proposed technique.
Keywords: Heterogeneous tests, validity, building system, sensor grids, sensor fault, diagnosis, fault detection and isolation.
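A minimal sketch of the "test with a validity domain" idea is given below: each test carries both a detection check and a validity predicate over the current context, and a test result is only trusted when its context is valid. The thresholds, context fields and sensor values are illustrative placeholders, not the diagnosis engine of the paper.

```python
from dataclasses import dataclass
from typing import Callable, Dict

Context = Dict[str, float]   # e.g. occupancy, outdoor temperature, window opening

@dataclass
class HeterogeneousTest:
    name: str
    is_valid: Callable[[Context], bool]         # validity domain of the test
    is_inconsistent: Callable[[Context], bool]  # rule / range / model-based check

tests = [
    # Range-based test: indoor temperature check assumed valid only when the room is
    # unoccupied and windows are closed (placeholder validity domain).
    HeterogeneousTest(
        "indoor_temp_range",
        is_valid=lambda c: c["occupancy"] == 0 and c["window_open"] == 0,
        is_inconsistent=lambda c: not (15.0 <= c["indoor_temp"] <= 30.0),
    ),
    # Rule-based test: heating power should be near zero when outdoor air is warm.
    HeterogeneousTest(
        "heating_rule",
        is_valid=lambda c: c["outdoor_temp"] > 20.0,
        is_inconsistent=lambda c: c["heating_power"] > 0.5,
    ),
]

context = {"occupancy": 0, "window_open": 0, "indoor_temp": 34.0,
           "outdoor_temp": 24.0, "heating_power": 0.1}

# Only tests whose validity domain covers the current context contribute symptoms.
symptoms = [t.name for t in tests if t.is_valid(context) and t.is_inconsistent(context)]
print("triggered tests:", symptoms)
```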
264 Forecasting Electricity Spot Price with Generalized Long Memory Modeling: Wavelet and Neural Network
Authors: Souhir Ben Amor, Heni Boubaker, Lotfi Belkacem
Abstract:
The aim of this paper is to forecast electricity spot prices. First, we focus on modeling the conditional mean of the series, for which we adopt a generalized fractional k-factor Gegenbauer process (k-factor GARMA). Secondly, the residuals from the k-factor GARMA model are used as a proxy for the conditional variance, and these residuals are predicted using two different approaches. In the first approach, a local linear wavelet neural network (LLWNN) model is developed to predict the conditional variance using back-propagation learning algorithms. In the second approach, the Gegenbauer generalized autoregressive conditional heteroscedasticity (G-GARCH) process is adopted, and the parameters of the k-factor GARMA-G-GARCH model are estimated using the wavelet methodology based on the discrete wavelet packet transform (DWPT) approach. The empirical results show that the k-factor GARMA-G-GARCH model outperforms the hybrid k-factor GARMA-LLWNN model and is more appropriate for forecasting.
Keywords: k-factor GARMA, LLWNN, G-GARCH, electricity price, forecasting.
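The two-stage structure described above (a conditional-mean model whose residuals feed a conditional-variance model) can be illustrated with a deliberately simplified sketch: an ARMA mean model plus a rolling variance proxy on its residuals, using statsmodels on synthetic placeholder prices. It is not the k-factor GARMA-G-GARCH or LLWNN machinery of the paper.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Placeholder "spot price" series: a noisy daily-cycle signal standing in for real data.
t = np.arange(500)
prices = 50 + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, size=t.size)

# Stage 1: conditional mean via a simple ARMA(2,1) model (stand-in for k-factor GARMA).
mean_model = ARIMA(prices, order=(2, 0, 1)).fit()
residuals = pd.Series(mean_model.resid)

# Stage 2: crude conditional-variance proxy from the residuals (stand-in for G-GARCH/LLWNN).
cond_variance = residuals.rolling(window=24).var()

forecast_mean = mean_model.forecast(steps=24)          # next-day mean forecast
print(forecast_mean[:5], cond_variance.dropna().iloc[-1])
```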
263 Modeling the Symptom-Disease Relationship by Using Rough Set Theory and Formal Concept Analysis
Authors: Mert Bal, Hayri Sever, Oya Kalıpsız
Abstract:
Medical Decision Support Systems (MDSSs) are sophisticated, intelligent systems that can provide inference in the presence of missing information and uncertainty. In such systems, the uncertainty is modeled using various soft computing methods such as Bayesian networks, rough sets, artificial neural networks, fuzzy logic, inductive logic programming and genetic algorithms, as well as hybrid methods formed from combinations of these. In this study, symptom-disease relationships are represented by a framework modeled with formal concept analysis and rough set theory, with diseases as objects and symptoms as attributes. After the concept lattice is formed, Bayes' theorem can be used to determine the relationships between attributes and objects. A discernibility relation, which forms the basis of rough sets, can be applied to the attribute data sets in order to reduce the attributes and decrease the complexity of the computation.
Keywords: Formal Concept Analysis, Rough Set Theory, Granular Computing, Medical Decision Support System.
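A small sketch of the Bayes step mentioned above is given below: starting from an assumed (placeholder) symptom-disease incidence of the kind a concept lattice would be built from, it computes P(disease | observed symptoms) under a naive conditional-independence assumption; the names and numbers are purely illustrative.

```python
# Placeholder incidence data: per-disease prior and per-symptom conditional probabilities.
priors = {"flu": 0.3, "cold": 0.6, "measles": 0.1}
p_symptom_given_disease = {
    "flu":     {"fever": 0.9,  "cough": 0.8, "rash": 0.05},
    "cold":    {"fever": 0.2,  "cough": 0.7, "rash": 0.02},
    "measles": {"fever": 0.95, "cough": 0.5, "rash": 0.9},
}

def posterior(observed_symptoms):
    """Bayes' theorem with a naive independence assumption over symptoms."""
    unnormalized = {}
    for disease, prior in priors.items():
        likelihood = 1.0
        for s in observed_symptoms:
            likelihood *= p_symptom_given_disease[disease][s]
        unnormalized[disease] = prior * likelihood
    total = sum(unnormalized.values())
    return {d: v / total for d, v in unnormalized.items()}

print(posterior({"fever", "rash"}))
```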
262 Calculation of Density for Refrigerant Mixtures in Sub Critical Regions for Use in the Buildings
Authors: Mohammad Reza Mobinipouya, Zahra Barzegar
Abstract:
Accurate and comprehensive thermodynamic properties of pure refrigerants and refrigerant mixtures are in demand by both producers and users of these materials. Information about thermodynamic properties is important initially to qualify potential candidates as working fluids in refrigeration machinery. From a practical point of view, refrigerants and refrigerant mixtures are widely used as working fluids in many industrial applications, such as refrigerators, heat pumps and power plants. The present work is devoted to evaluating seven cubic equations of state (EOS) in predicting the gas- and liquid-phase volumetric properties of nine ozone-safe refrigerants in both the super- and sub-critical regions. The evaluations in the sub-critical region show that the TWU and PR EOS are capable of predicting the PVT properties of R32 within 2%, and of R22, R134a, R152a and R143a within 1%, as well as those of R123, R124 and R125. The deviations of the TWU and PR EOS from literature data are 0.5% for R22, R32, R152a, R143a and R125; 1% for R123, R134a and R141b; and 2% for R124. Moreover, the SRK EOS predicts the PVT properties of R22, R125 and R123 within the aforementioned errors. The remaining EOSs predict the volumetric properties of this class of fluids with higher errors, which are at most 8%. In general, the results favor the TWU and PR EOS over the remaining EOSs for predicting the densities of all the mentioned refrigerants in both the super- and sub-critical regions. Typically, this refrigerant is known to offer advantages such as an ozone depletion potential of zero, a global warming potential of 140, and no toxicity.
Keywords: Refrigerant, cooling systems, Sub-Critical Regions, volumetric properties, efficiency.
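By way of illustration of how a cubic EOS yields density in the sub-critical region, a minimal Peng-Robinson (PR) sketch for a pure refrigerant is given below; the critical constants and acentric factor are approximate values assumed for R134a, the state point is a placeholder, and the output should be treated as indicative only, not as a result of the paper.

```python
import numpy as np

R = 8.314462618  # J/(mol K)

def pr_density(T, P, Tc, Pc, omega, molar_mass, phase="liquid"):
    """Peng-Robinson density of a pure fluid at (T, P), in the standard textbook form."""
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    A = a * alpha * P / (R * T)**2
    B = b * P / (R * T)
    # Cubic in the compressibility factor Z.
    coeffs = [1.0, -(1.0 - B), A - 3.0*B**2 - 2.0*B, -(A*B - B**2 - B**3)]
    roots = np.roots(coeffs)
    real = np.sort(roots[np.abs(roots.imag) < 1e-10].real)
    real = real[real > B]                       # physically meaningful roots only
    Z = real[0] if phase == "liquid" else real[-1]
    return P * molar_mass / (Z * R * T)         # kg/m^3

# Approximate R134a constants (assumed): Tc ~ 374.2 K, Pc ~ 4.06 MPa, omega ~ 0.327.
rho = pr_density(T=300.0, P=1.0e6, Tc=374.2, Pc=4.06e6, omega=0.327,
                 molar_mass=0.10203, phase="liquid")
print(f"PR liquid density of R134a at 300 K, 1 MPa ~ {rho:.0f} kg/m^3")
```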
261 Non-Coplanar Nuclei in Heavy-Ion Reactions
Authors: Sahila Chopra, Hemdeep, Arshdeep Kaur, Raj K. Gupta
Abstract:
In recent times, we have noticed an interesting and important role of the non-coplanar degree of freedom (Φ ≠ 0°) in heavy-ion reactions. Using the dynamical cluster-decay model (DCM) with the Φ degree of freedom included, we have studied three compound systems, 246Bk∗, 164Yb∗ and 105Ag∗. Here, within the DCM with the pocket formula for the nuclear proximity potential, we look for the effects of including compact, non-coplanar configurations (Φc ≠ 0°) on the non-compound-nucleus (nCN) contribution to the total fusion cross section σfus. For 246Bk∗, formed in the 11B+235U and 14N+232Th reaction channels, the DCM with coplanar nuclei (Φc = 0°) shows an nCN contribution for the 11B+235U channel but none for the 14N+232Th channel, whereas including Φ gives both reaction channels as pure compound nucleus decays. In the case of 164Yb∗, formed in 64Ni+100Mo, the small nCN effects for Φ = 0° are reduced to almost zero for Φ ≠ 0°. Interestingly, however, 105Ag∗ for Φ = 0° shows a small nCN contribution, which gets strongly enhanced for Φ ≠ 0°, such that the characteristic property of PCN presents a change of behaviour, like that of a strongly fissioning superheavy element to a weakly fissioning nucleus; note that 105Ag∗ is a weakly fissioning nucleus and Psurv behaves like one for a weakly fissioning nucleus for both Φ = 0° and Φ ≠ 0°. Apparently, Φ presents itself as a good degree of freedom in the DCM.
Keywords: Dynamical cluster-decay model, fusion cross sections, non-compound nucleus effects, non-coplanarity.
260 Standard Deviation of Mean and Variance of Rows and Columns of Images for CBIR
Authors: H. B. Kekre, Kavita Patil
Abstract:
This paper describes a novel and effective approach to content-based image retrieval (CBIR) that represents each image in the database by a vector of feature values called the "standard deviation of mean vectors of the colour distribution of rows and columns of images for CBIR". In many areas of commerce, government, academia and hospitals, large collections of digital images are being created, and this paper describes an approach that uses content as the feature vector for the retrieval of similar images. Several classes of features are used to specify queries: colour, texture, shape and spatial layout; colour features are often easily obtained directly from the pixel intensities. In this paper, feature extraction is performed for the texture descriptors 'variance' and 'variance of variances'. First, the standard deviation of the row and column means is calculated for the R, G and B planes; these six values, obtained for one image, act as a feature vector. Secondly, we calculate the variance of each row and column of the R, G and B planes of an image, and the six standard deviations of these variance sequences are then calculated to form a feature vector of dimension six. We applied our approach to a database of 300 BMP images and determined the capability of automatic indexing by analyzing image content, using colour and texture as features and applying the Euclidean distance as the similarity measure.
Keywords: Standard deviation, Image retrieval, color distribution, Variance, Variance of Variance, Euclidean distance.
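A direct numpy sketch of the two six-dimensional feature vectors described above is given below, together with the Euclidean distance used as the similarity measure; the images are random placeholders.

```python
import numpy as np

def cbir_features(image):
    """Two 6-D descriptors per RGB image: (1) std of row/column means per channel,
    (2) std of row/column variances per channel, as described in the abstract."""
    f_mean, f_var = [], []
    for c in range(3):                            # R, G, B planes
        plane = image[:, :, c].astype(float)
        f_mean.append(plane.mean(axis=1).std())   # std of row means
        f_mean.append(plane.mean(axis=0).std())   # std of column means
        f_var.append(plane.var(axis=1).std())     # std of row variances
        f_var.append(plane.var(axis=0).std())     # std of column variances
    return np.array(f_mean), np.array(f_var)

rng = np.random.default_rng(0)
query, candidate = rng.integers(0, 256, (2, 64, 64, 3))

q_mean, q_var = cbir_features(query)
c_mean, c_var = cbir_features(candidate)

# Euclidean distance between feature vectors as the similarity measure.
print("mean-based distance:", np.linalg.norm(q_mean - c_mean))
print("variance-based distance:", np.linalg.norm(q_var - c_var))
```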
259 Sonochemically Prepared SnO2 Quantum Dots as a Selective and Low Temperature CO Sensor
Authors: S. Mosadegh Sedghi, Y. Mortazavi, A. Khodadadi, O. Alizadeh Sahraei, M. Vesali Naseh
Abstract:
In this study, a low-temperature sensor highly selective to CO in the presence of methane is fabricated using 4 nm SnO2 quantum dots (QDs) prepared by sonication-assisted precipitation. A SnCl4 aqueous solution was precipitated by ammonia under sonication, which continued for 2 h. Part of the sample was then dried and calcined at 400°C for 1.5 h and characterized by XRD and BET. The average particle size and specific surface area of the SnO2 QDs, as well as their sensing properties, were compared with those of SnO2 nanoparticles prepared by the conventional sol-gel method. The BET surface areas of the sonochemically as-prepared product and of the sample calcined at 400°C for 1.5 h are 257 m2/g and 212 m2/g, respectively, while the specific surface area of the SnO2 nanoparticles prepared by the conventional sol-gel method is about 80 m2/g. XRD spectra reveal that a pure crystalline SnO2 phase is formed for both the as-prepared and the calcined SnO2 QD samples; however, for the sample prepared by the sol-gel method and calcined at 400°C, SnO crystals are detected along with those of SnO2. The SnO2 quantum dots show exceedingly high sensitivity to CO at concentrations of 100, 300 and 1000 ppm over the whole temperature range (25-350°C). At 50°C a sensitivity of 27 was obtained for 1000 ppm CO, which increases to a maximum of 147 as the temperature rises to 225°C and then drops off, while the maximum sensitivity of the SnO2 sample prepared by the sol-gel method, 47.2, was obtained at 300°C. At the same time, no sensitivity to methane is observed over the whole temperature range for the SnO2 QDs. The response and recovery times of the sensor decrease sharply with temperature, while the high selectivity to CO does not deteriorate.
Keywords: Sonochemical, SnO2 QDs, SnO2 gas sensor
258 Analytical Mathematical Expression for the Channel Capacity of a Power and Rate Simultaneous Adaptive Cellular DS/FFH-CDMA System in a Rayleigh Fading Channel
Authors: P.Varzakas
Abstract:
In this paper, an accurate theoretical analysis of the achievable average channel capacity (in the Shannon sense) per user of a hybrid cellular direct-sequence/fast frequency hopping code-division multiple-access (DS/FFH-CDMA) system operating in a Rayleigh fading environment is presented. The analysis covers the downlink operation and leads to the derivation of an exact mathematical expression relating the normalized average channel capacity available to each of the system's users, under simultaneous optimal power and rate adaptation, to the system's parameters: the number of hops per bit, the processing gain applied, the number of users per cell and the received signal-to-noise power ratio over the signal bandwidth. Finally, numerical results are presented to illustrate the proposed mathematical analysis.
Keywords: Shannon capacity, adaptive systems, code-division multiple access, fading channels.
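As a numerical companion to the Shannon-capacity analysis above, the sketch below estimates the average channel capacity per unit bandwidth of a Rayleigh fading link by Monte Carlo simulation. The average SNR values are placeholders, constant transmit power is assumed (the simultaneous power-and-rate adaptation of the paper is not implemented), and the hybrid DS/FFH-CDMA details (hops per bit, processing gain, multi-user interference) are not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)

def rayleigh_ergodic_capacity(avg_snr_db, n_samples=1_000_000):
    """Monte Carlo estimate of E[log2(1 + SNR*|h|^2)] for Rayleigh fading,
    i.e. the average channel capacity per unit bandwidth (bit/s/Hz)."""
    avg_snr = 10.0 ** (avg_snr_db / 10.0)
    # |h|^2 is exponentially distributed with unit mean for Rayleigh fading.
    channel_gain = rng.exponential(scale=1.0, size=n_samples)
    return np.mean(np.log2(1.0 + avg_snr * channel_gain))

for snr_db in (0, 10, 20):          # placeholder average SNR values
    print(f"avg SNR = {snr_db:2d} dB -> C/B ~ {rayleigh_ergodic_capacity(snr_db):.2f} bit/s/Hz")
```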
257 Persian/Arabic Document Segmentation Based On Pyramidal Image Structure
Authors: Seyyed Yasser Hashemi, Khalil Monfaredi
Abstract:
The automatic transformation of paper documents into electronic documents requires document segmentation as a first stage. However, parameter restrictions such as variations in character font sizes, differing text line spacing, and non-uniform document layout structures have made it difficult to design a general-purpose document layout analysis algorithm for many years; thus, in most previously reported methods it is inevitable to include these parameters. This problem becomes particularly acute in Persian/Arabic documents. Since the Persian/Arabic scripts differ considerably from English scripts, most of the methods proposed for English scripts do not give good results for Persian scripts. In this paper, we present a novel parameter-free method for segmenting Persian/Arabic document images which also works well for English scripts. The method segments the document image into maximal homogeneous regions and identifies them as text and non-text based on a pyramidal image structure. In other words, the proposed method is capable of document segmentation without considering character font sizes, text line spacing or document layout structure. The algorithm was examined on 150 Arabic/Persian and English documents, and the document segmentation process was completed successfully for 96 percent of the documents.
Keywords: Persian/Arabic document, document segmentation, Pyramidal Image Structure, skew detection and correction.
256 Aerodynamic Stall Control of a Generic Airfoil using Synthetic Jet Actuator
Authors: Basharat Ali Haider, Naveed Durrani, Nadeem Aizud, Salimuddin Zahir
Abstract:
The aerodynamic stall control of a baseline 13-percent-thick NASA GA(W)-2 airfoil using a synthetic jet actuator (SJA) is presented in this paper. Unsteady Reynolds-averaged Navier-Stokes equations are solved on a hybrid grid using commercial software to simulate the effects of a synthetic jet actuator located at 13% of the chord from the leading edge, at a Reynolds number Re = 2.1×10^6 and incidence angles from 16 to 22 degrees. As a baseline validation, the experimental data for the pressure distribution at Re = 3×10^6 and for the aerodynamic coefficients at Re = 2.1×10^6 (angle of attack varied from -16 to 22 degrees) without the SJA are compared with the computational fluid dynamics (CFD) simulations, and good agreement is obtained for the aerodynamic coefficients and pressure distribution. A working SJA has been integrated with the baseline airfoil, and the initial focus is on aerodynamic stall control at angles of attack from 16 to 22 degrees. The results show a noticeable improvement in aerodynamic performance, with an increase in lift and a decrease in drag in these post-stall regimes.
Keywords: Active flow control, Aerodynamic stall, Airfoil performance, Synthetic jet actuator.
255 Investigation of the Surface Features of the Jupiter's Galilean Moons
Authors: Revaz Chigladze
Abstract:
The purpose of this research is to investigate the surfaces of Jupiter's Galilean moons (satellites), namely to identify which of them has the most uniform surface, what the difference is between the front (in the direction of motion) and rear sides of each moon's surface, and what the temporal variations of the moons are. Since 1981, the E. Kharadze Georgian National Astrophysical Observatory has been conducting polarimetric (P) and photometric (M) observations of Jupiter's Galilean moons with telescopes of different diameters (40 cm and 125 cm), as well as with the ASEP-78 polarimeter (Automatic Scanning Electron Polarimeter) and a latest-generation photometer with polarimeter and a modern Santa Barbara Instrument Group (SBIG) light receiver. As the analysis of the observational material shows, the parameters P and M depend on α, the phase angle of the moon (satellite); L, the orbital latitude of the moon (satellite); λ, the wavelength; and t, the period of observation, i.e., P = P(α, L, λ, t) and, similarly, M = M(α, L, λ, t). Based on the analysis of the obtained results, we find the following. The magnitude of the degree of polarization of Jupiter's Galilean moons near opposition differs significantly from zero. Europa appears to have the most uniform surface, and Callisto the least. Temporal variations are most characteristic of Io, which confirms the presence of volcanic activity on its surface. The observational material also shows that the intensity of light reflected from the front hemispheres of the first three moons, Io, Europa and Ganymede, is less than the intensity of light reflected from their rear hemispheres, while for Callisto the picture is the opposite. The paper provides an explanation of this fact.
Keywords: Galilean moons, polarization, degree of polarization, photometry, front and rear hemispheres.