Search results for: low data rate
8421 Development of Predictive Model for Surface Roughness in End Milling of Al-SiCp Metal Matrix Composites using Fuzzy Logic
Authors: M. Chandrasekaran, D. Devarasiddappa
Abstract:
Metal matrix composites have been increasingly used as materials for components in the automotive and aerospace industries because of their improved properties compared with non-reinforced alloys. During machining, the selection of appropriate machining parameters to produce a job of the desired surface roughness is of great concern for the economy of the manufacturing process. In this study, a surface roughness prediction model using fuzzy logic is developed for end milling of an Al-SiCp metal matrix composite component using a carbide end mill cutter. The surface roughness is modeled as a function of spindle speed (N), feed rate (f), depth of cut (d) and the SiCp percentage (S). The predicted values of surface roughness are compared with experimental results. The model predicts an average percentage error of 4.56% and a mean square error of 0.0729. It is observed that surface roughness is most influenced by feed rate, spindle speed and SiC percentage, while depth of cut has the least influence.
Keywords: End milling, fuzzy logic, metal matrix composites, surface roughness.
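As an illustration of the two error metrics reported above (average percentage error and mean square error), a minimal Python/NumPy sketch is given below; the predicted and measured roughness values are hypothetical placeholders, not data from the study.

    import numpy as np

    # Hypothetical predicted and experimentally measured surface roughness (Ra, micrometres)
    ra_predicted = np.array([2.10, 1.85, 3.40, 2.75])
    ra_measured  = np.array([2.00, 1.95, 3.30, 2.90])

    # Average (absolute) percentage error; the abstract reports 4.56% for the fuzzy model
    ape = np.mean(np.abs(ra_predicted - ra_measured) / ra_measured) * 100.0

    # Mean square error; the abstract reports 0.0729
    mse = np.mean((ra_predicted - ra_measured) ** 2)

    print(f"Average percentage error: {ape:.2f}%  Mean square error: {mse:.4f}")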
8420 Scintigraphic Image Coding of Region of Interest Based On SPIHT Algorithm Using Global Thresholding and Huffman Coding
Authors: A. Seddiki, M. Djebbouri, D. Guerchi
Abstract:
Medical imaging produces human body pictures in digital form. Since these imaging techniques produce prohibitive amounts of data, compression is necessary for storage and communication purposes. Many current compression schemes provide a very high compression rate but with considerable loss of quality. On the other hand, in some areas of medicine it may be sufficient to maintain high image quality only in the region of interest (ROI). This paper discusses a contribution to lossless compression in the region of interest of scintigraphic images based on the SPIHT algorithm and global transform thresholding using Huffman coding.
Keywords: Global Thresholding Transform, Huffman Coding, Region of Interest, SPIHT Coding, Scintigraphic images.
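Since the scheme combines SPIHT with Huffman coding, a minimal Huffman encoder sketch (plain Python, not the authors' implementation) may help illustrate the entropy-coding stage; the symbol stream below is a hypothetical stand-in for quantized coder output.

    import heapq
    from collections import Counter

    def huffman_code(symbols):
        # Build a Huffman code table from an iterable of symbols.
        freq = Counter(symbols)
        # Heap items: (frequency, tie-breaker, {symbol: code-so-far})
        heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        if len(heap) == 1:                      # degenerate case: a single distinct symbol
            return {s: "0" for s in heap[0][2]}
        counter = len(heap)
        while len(heap) > 1:
            f1, _, t1 = heapq.heappop(heap)     # two least frequent subtrees
            f2, _, t2 = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in t1.items()}
            merged.update({s: "1" + c for s, c in t2.items()})
            heapq.heappush(heap, (f1 + f2, counter, merged))
            counter += 1
        return heap[0][2]

    stream = "AAABBCDAAAABBCCD"                 # hypothetical symbol stream
    table = huffman_code(stream)
    encoded = "".join(table[s] for s in stream)
    print(table, len(encoded), "bits")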
8419 Nutrients Removal Control via an Intermittently Aerated Membrane Bioreactor
Authors: Junior B. N. Adohinzin, Ling Xu
Abstract:
Nitrogen is among the main nutrients encouraging the growth of organic matter and algae which cause eutrophication in water bodies. Therefore, its removal from wastewater has become a worldwide emerging concern. In this research, an innovative Membrane Bioreactor (MBR) system named “moving bed membrane bioreactor (MBMBR)” was developed and investigated under intermittently-aerated mode for simultaneous removal of organic carbon and nitrogen.
Results indicated that the variation of the intermittently aerated duration did not have an apparent impact on the COD and NH4+–N removal rates, yielding an effluent with average COD and NH4+–N removal efficiencies of more than 92% and 91%, respectively. However, for the intermittently aerated cycles of (continuous aeration/0 s mix), (aeration 90 s/mix 90 s) and (aeration 90 s/mix 180 s), the average TN removal efficiency was 67.6%, 69.5% and 87.8%, respectively, while the corresponding nitrite accumulation rate was 4.5%, 49.1% and 79.4%. These results indicate that the intermittently aerated mode is an efficient way of controlling nitrification so that it stops at nitritation, and that the length of the anoxic duration is a key factor in improving TN removal.
Keywords: Membrane bioreactor (MBR), Moving bed biofilm reactor (MBBR), Nutrients removal, Simultaneous nitrification and denitrification.
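The removal efficiencies and nitrite accumulation rates quoted above are not defined explicitly in the abstract; the sketch below uses the commonly assumed definitions (removal from influent/effluent concentrations; nitrite accumulation as NO2–N over NO2–N plus NO3–N) with hypothetical concentrations.

    def removal_efficiency(c_in, c_out):
        # Percentage removal from influent and effluent concentrations (mg/L)
        return (c_in - c_out) / c_in * 100.0

    def nitrite_accumulation_rate(no2_n, no3_n):
        # Commonly defined as NO2-N / (NO2-N + NO3-N), in percent
        return no2_n / (no2_n + no3_n) * 100.0

    # Hypothetical effluent values for one intermittently aerated cycle
    print(removal_efficiency(c_in=45.0, c_out=5.5))         # TN removal, %
    print(nitrite_accumulation_rate(no2_n=7.9, no3_n=2.1))  # nitrite accumulation, %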
8418 Multi-Scale Damage and Mechanical Behavior of Sheet Molding Compound Composites Subjected to Fatigue, Dynamic, and Post-Fatigue Dynamic Loadings
Authors: M. Shirinbayan, J. Fitoussi, N. Abbasnezhad, A. Lucas, A. Tcharkhtchi
Abstract:
Sheet Molding Compounds (SMCs) with special microstructures are very attractive for use in automobile structures, especially when these are accidentally subjected to collision-type loading, because of their high energy absorption capacity. Such materials include standard SMC, Advanced Sheet Molding Compounds (A-SMC) and Low-Density SMC (LD-SMC). In this study, testing methods have been performed to compare the mechanical responses and damage phenomena of SMC, LD-SMC, and A-SMC under quasi-static and high strain rate tensile tests. The paper also aims at investigating the effect of an initial pre-damage induced by fatigue on the dynamic tensile behavior of A-SMC. In the case of SMCs and A-SMCs, whatever the fiber orientation and applied strain rate, the first observed damage phenomenon corresponds to decohesion of the fiber-matrix interface, which is followed by coalescence and multiplication of these micro-cracks and their propagation. For LD-SMCs, the damage mechanisms depend on the presence of Hollow Glass Microspheres (HGM) and on fiber orientation.
Keywords: SMC, LD-SMC, A-SMC, HGM, damage.
8417 Electrical Performance of a Solid Oxide Fuel Cell Unit with Non-Uniform Inlet Flow and High Fuel Utilization
Authors: Ping Yuan, Mu-Sheng Chiang, Syu-Fang Liu, Shih-Bin Wang, Ming-Jun Kuo
Abstract:
This study investigates the electrical performance of a planar solid oxide fuel cell unit with a cross-flow configuration when the fuel utilization becomes higher and the fuel inlet flow is non-uniform. A software package developed in this study solves the two-dimensional, simultaneous partial differential equations of mass, energy, and electro-chemistry, without considering variation in the stack direction. The results show that the fuel utilization increases with a decrease in the molar flow rate, and the average current density decreases when the molar flow rate drops. In addition, the non-uniform Pattern A induces a more severe non-reaction area in the corner between the fuel exit and the air inlet. This non-reaction area deteriorates the average current density and degrades the electrical performance by about 7%.
Keywords: Performance, solid oxide fuel cell, non-uniform, fuel utilization.
8416 A Competitive Replica Placement Methodology for Ad Hoc Networks
Authors: Samee Ullah Khan, C. Ardil
Abstract:
In this paper, a mathematical model for data object replication in ad hoc networks is formulated. The derived model is general, flexible and adaptable, catering for various applications in ad hoc networks. We propose a game-theoretical technique in which players (mobile hosts) continuously compete in a non-cooperative environment to improve data accessibility by replicating data objects. The technique incorporates the access frequency from mobile hosts to each data object, the status of the network connectivity, and communication costs. The proposed technique is extensively evaluated against four well-known ad hoc network replica allocation methods. The experimental results reveal that the proposed approach outperforms the four techniques in both execution time and solution quality.
Keywords: Data replication, auctions, static allocation.
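To make the ingredients of the replica placement problem concrete (access frequency, communication cost, storage capacity), here is a simple greedy benefit-based allocation sketch in Python; it is only a baseline illustration, not the authors' game-theoretic auction mechanism, and all data are hypothetical.

    # Greedy replica allocation: each host replicates the objects whose
    # (access frequency x communication cost saved) is largest, subject to
    # its storage capacity.
    def greedy_allocate(access_freq, comm_cost, sizes, capacity):
        # access_freq[h][o]: how often host h reads object o
        # comm_cost[h][o]:   cost of fetching o remotely for host h
        # sizes[o]:          size of object o; capacity[h]: storage of host h
        placement = {h: set() for h in access_freq}
        for h in access_freq:
            ranked = sorted(access_freq[h],
                            key=lambda o: access_freq[h][o] * comm_cost[h][o],
                            reverse=True)
            used = 0
            for o in ranked:
                if used + sizes[o] <= capacity[h]:
                    placement[h].add(o)
                    used += sizes[o]
        return placement

    access_freq = {"h1": {"a": 9, "b": 2}, "h2": {"a": 1, "b": 7}}
    comm_cost   = {"h1": {"a": 3, "b": 4}, "h2": {"a": 5, "b": 2}}
    print(greedy_allocate(access_freq, comm_cost, {"a": 2, "b": 3}, {"h1": 3, "h2": 5}))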
8415 Multidimensional Data Mining by Means of Randomly Travelling Hyper-Ellipsoids
Authors: Pavel Y. Tabakov, Kevin Duffy
Abstract:
The present study presents a new approach to automatic data clustering and classification problems in large and complex databases and, at the same time, derives specific types of explicit rules describing each cluster. The method works well in both sparse and dense multidimensional data spaces. The members of the data space can be of the same nature or represent different classes. A number of N-dimensional ellipsoids are used for enclosing the data clouds. Due to the geometry of an ellipsoid and its free rotation in space, the detection of clusters becomes very efficient. The method is based on genetic algorithms, which are used to optimize the location, orientation and geometric characteristics of the hyper-ellipsoids. The proposed approach can serve as a basis for the development of general knowledge systems for discovering hidden knowledge and unexpected patterns and rules in various large databases.
Keywords: Classification, clustering, data mining, genetic algorithms.
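The cluster-enclosure test at the core of such an approach reduces to checking whether a point lies inside an N-dimensional ellipsoid defined by a centre, a rotation (orientation) and semi-axis lengths, the quantities a genetic algorithm would optimize. A NumPy sketch of that membership test, with hypothetical 2-D values:

    import numpy as np

    def inside_ellipsoid(x, centre, rotation, semi_axes):
        # Membership test for an N-dimensional ellipsoid given its centre,
        # a rotation matrix (orientation) and the lengths of its semi-axes.
        y = rotation.T @ (x - centre)          # move into the ellipsoid's own frame
        return np.sum((y / semi_axes) ** 2) <= 1.0

    theta = np.deg2rad(30.0)                   # hypothetical 2-D example
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    centre = np.array([1.0, 2.0])
    axes = np.array([3.0, 0.5])

    print(inside_ellipsoid(np.array([2.0, 2.4]), centre, rot, axes))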
8414 Unsteadiness Effects on Variable Thrust Nozzle Performance
Authors: A. M. Tahsini, S. T. Mousavi
Abstract:
The purpose of this paper is to elucidate the unsteady flow behavior of a moving plug in a convergent-divergent variable thrust nozzle. The compressible axisymmetric Navier-Stokes equations are used to study this physical phenomenon. Different velocities are set for the plug to investigate the effect of plug movement on flow unsteadiness. The variations of mass flow rate and thrust are compared under two conditions: first, the plug is placed at different positions and the flow is simulated until it reaches the steady state (quasi-steady simulation); second, the plug is moved with an assigned velocity and the flow simulation is coupled with the plug movement (unsteady simulation). If the plug speed is high enough and its movement time scale is of the same order as the flow time scale, the variations of mass flow rate and thrust level versus plug position show a significant discrepancy between the quasi-steady and unsteady conditions. This phenomenon should be considered in thruster design, especially from a response-time viewpoint.
Keywords: Nozzle, Numerical study, Unsteady, Variable thrust.
8413 Simultaneous Term Structure Estimation of Hazard and Loss Given Default with a Statistical Model using Credit Rating and Financial Information
Authors: Tomohiro Ando, Satoshi Yamashita
Abstract:
The objective of this study is to propose a statistical modeling method which enables simultaneous term structure estimation of the risk-free interest rate, hazard and loss given default, incorporating characteristics of the bond-issuing company such as credit rating and financial information. A reduced form model is used for this purpose. Statistical techniques such as spline estimation and the Bayesian information criterion are employed for parameter estimation and model selection. An empirical analysis is conducted using Japanese bond market data. The results of the empirical analysis confirm the usefulness of the proposed method.
Keywords: Empirical Bayes, hazard term structure, loss given default.
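As a reminder of how model selection by the Bayesian information criterion works in such a setting, a small Python sketch comparing hypothetical candidate models (e.g. spline fits with different knot counts); the log-likelihoods and parameter counts below are illustrative only.

    import numpy as np

    def bic(log_likelihood, n_params, n_obs):
        # Bayesian information criterion: smaller is better.
        return n_params * np.log(n_obs) - 2.0 * log_likelihood

    # Hypothetical candidate term-structure models: (log-likelihood, number of parameters)
    candidates = {"3 knots": (-1210.4, 5), "5 knots": (-1198.7, 7), "8 knots": (-1196.9, 10)}
    n_obs = 500
    scores = {name: bic(ll, k, n_obs) for name, (ll, k) in candidates.items()}
    print(min(scores, key=scores.get), scores)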
8412 Comparison of MFCC and Cepstral Coefficients as a Feature Set for PCG Biometric Systems
Authors: Justin Leo Cheang Loong, Khazaimatol S Subari, Muhammad Kamil Abdullah, Nurul Nadia Ahmad, Rosli Besar
Abstract:
Heart sound is an acoustic signal, and many techniques used nowadays for human recognition tasks borrow from speech recognition. One popular choice for feature extraction of acoustic signals is the Mel Frequency Cepstral Coefficients (MFCC), which map the signal onto a non-linear Mel scale that mimics human hearing. However, the Mel scale is almost linear in the frequency region of heart sounds and should thus produce results similar to the standard cepstral coefficients (CC). In this paper, MFCC is investigated to see if it produces superior results for a PCG-based human identification system compared to CC. Results show that the MFCC system is still superior to CC despite linear filter banks in the lower frequency range, giving up to 95% correct recognition rate for MFCC and 90% for CC. Further experiments show that the high recognition rate is due to the implementation of the filter banks and not to the Mel scaling.
Keywords: Biometric, phonocardiogram, cepstral coefficients, Mel frequency.
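For reference, the standard (non-mel) cepstral coefficients compared in the paper can be sketched as the inverse FFT of the log magnitude spectrum; MFCCs additionally pass the spectrum through a mel filter bank before the log and inverse-transform step. The frame below is a hypothetical PCG-like tone mixture, not data from the study.

    import numpy as np

    def real_cepstrum(frame, n_coeffs=13):
        # Cepstral coefficients: inverse FFT of the log magnitude spectrum.
        spectrum = np.abs(np.fft.rfft(frame)) + 1e-12   # avoid log(0)
        cepstrum = np.fft.irfft(np.log(spectrum))
        return cepstrum[:n_coeffs]

    fs = 4000                                           # heart sounds lie well below 2 kHz
    t = np.arange(0, 0.064, 1.0 / fs)                   # one 64 ms analysis frame
    frame = np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 180 * t)
    print(real_cepstrum(frame * np.hamming(len(frame))))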
8411 Predictions Using Data Mining and Case-based Reasoning: A Case Study for Retinopathy
Authors: Vimala Balakrishnan, Mohammad R. Shakouri, Hooman Hoodeh, Loo Huck-Soo
Abstract:
Diabetes is one of the most prevalent diseases worldwide and is associated with an increasing number of complications, retinopathy being one of the most common. This paper describes how data mining and case-based reasoning were integrated to predict retinopathy prevalence among diabetes patients in Malaysia. The knowledge base required was built from literature reviews and interviews with medical experts. Data from a total of 140 diabetes patients were used to train the prediction system. A voting mechanism selects the best prediction results from the two techniques used. It has been successfully shown that both data mining and case-based reasoning can be used for retinopathy prediction, with an improved accuracy of 85%.
Keywords: Case-based reasoning, data mining, prediction, retinopathy.
8410 Improving TNT Curing Process by Using Infrared Camera
Authors: O. Srihakulung, Y. Soongsumal
Abstract:
Among the chemicals used for ammunition production, TNT (trinitrotoluene) has played a significant role since World Wars I and II. Various types of military weapons use TNT in a casting process. However, in TNT casting for warheads it is difficult to control the cooling rate of the liquid TNT, because the casting process lacks equipment to monitor the temperature during the procedure. This study presents temperatures detected by an infrared camera to illustrate the cooling rate and the cooling zone during curing, and demonstrates how to optimize the TNT condition to reduce the risk of air gaps in the warhead, which can later lead to its destruction: premature initiation of explosive-filled projectiles in response to set-back forces during gun firing is caused by such casting defects. The study can therefore help improve the TNT casting process: the operators can control the curing of the TNT inside the case by raising the heating rod at the proper time. Consequently, this can save a tremendous amount of rework time when air gaps occur and increase strength by lowering the elastic modulus. It can thus be concluded that the use of infrared cameras in this process is another way to improve the casting procedure.
Keywords: Infrared camera, TNT casting, warhead, curing.
8409 Zero Truncated Strict Arcsine Model
Authors: Y. N. Phang, E. F. Loh
Abstract:
The zero truncated model is usually used in modeling count data without zeros; it is the opposite of the zero inflated model. Zero truncated Poisson and zero truncated negative binomial models have been discussed and used by researchers in analyzing the abundance of rare species and hospital stays. Zero truncated models also serve as the basis for developing hurdle models. In this study, we developed a new model, the zero truncated strict arcsine model, which can be used as an alternative model for count data without zeros and with extra variation. Two simulated data sets and one real-life data set are fitted to the developed model. The results show that the model provides a good fit to the data. The maximum likelihood estimation method is used to estimate the parameters.
Keywords: Hurdle models, maximum likelihood estimation method, positive count data.
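The strict arcsine probability mass function is not reproduced in the abstract, so to illustrate how zero truncation and maximum likelihood fit together, here is a sketch for the simpler zero-truncated Poisson case, where P(X=k | X>0) = P(X=k)/(1 - P(X=0)); the count data are hypothetical.

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import poisson

    def neg_loglik_zt_poisson(lam, data):
        # Zero-truncated Poisson log-likelihood: log P(k) - log(1 - P(0))
        logpmf = poisson.logpmf(data, lam) - np.log1p(-np.exp(-lam))
        return -np.sum(logpmf)

    # Hypothetical positive count data (no zeros by construction)
    data = np.array([1, 1, 2, 1, 3, 2, 4, 1, 2, 5, 1, 3])
    res = minimize_scalar(neg_loglik_zt_poisson, bounds=(1e-6, 50),
                          args=(data,), method="bounded")
    print("MLE of lambda:", res.x)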
8408 Mitigation of ISI for Next Generation Wireless Channels in Outdoor Vehicular Environments
Authors: Mohd. Israil, M. Salim Beg
Abstract:
In order to accommodate various multimedia services, next generation wireless networks are characterized by very high transmission bit rates. Thus, in such systems and networks, the received signal is not only limited by noise but, especially with increasing symbol rates, often more significantly by the intersymbol interference (ISI) caused by time-dispersive radio channels such as those used in this work. This paper studies the performance of detectors for high bit rate transmission on some worst-case models of frequency-selective fading channels for outdoor mobile radio environments. It considers a number of different wireless channels with different power profiles and different numbers of resolvable paths. All the radio channels generated in this paper are for outdoor vehicular environments with a Doppler spread of 100 Hz. A carrier frequency of 1800 MHz is used, and all the channels considered are relevant to next generation wireless systems. Schemes for mitigation of ISI with adaptive equalizers of different types have been investigated, and their performance is reported in terms of BER measured as a function of SNR.
Keywords: Mobile channels, Rayleigh fading, equalization, NMLD.
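The specific equalizer structures evaluated are not listed in the abstract; as a generic illustration of adaptive equalization over a dispersive channel, a training-directed LMS linear equalizer sketch in NumPy is given below (hypothetical channel taps and parameters, not the paper's channel models).

    import numpy as np

    rng = np.random.default_rng(0)
    symbols = rng.choice([-1.0, 1.0], size=2000)          # BPSK training symbols
    channel = np.array([0.9, 0.4, 0.2])                   # hypothetical dispersive channel
    received = np.convolve(symbols, channel)[:len(symbols)]
    received += 0.05 * rng.standard_normal(len(symbols))  # additive noise

    n_taps, mu = 11, 0.01                                  # equalizer length and LMS step size
    w = np.zeros(n_taps)
    delay = n_taps // 2
    errors = 0
    for n in range(n_taps, len(symbols)):
        x = received[n - n_taps:n][::-1]                   # most recent samples first
        y = w @ x                                          # equalizer output
        d = symbols[n - delay]                             # delayed training symbol
        e = d - y
        w += mu * e * x                                    # LMS tap update
        errors += int(np.sign(y) != d)

    print("symbol error rate during training:", errors / (len(symbols) - n_taps))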
8407 A New Method in Short-Term Heart Rate Variability — Five-Class Density Histogram
Authors: Liping Li, Ke Li, Changchun Liu, Chengyu Liu, Yuanyang Li
Abstract:
A five-class density histogram with an index named cumulative density was proposed to analyze short-term HRV. 150 subjects participated in the test, falling into three groups of equal size: the healthy young group (Young), the healthy old group (Old), and the group of patients with congestive heart failure (CHF). Multiple comparisons showed a significant difference in the cumulative density between the three groups, with values of 0.0238 for Young, 0.0406 for Old and 0.0732 for CHF (p<0.001). After 7 days and 14 days, 46 subjects from the Young and Old groups were retested twice following the same test protocol. The results showed good-to-excellent intraclass correlations (ICC=0.783, 95% confidence interval 0.676-0.864). Bland-Altman plots were used to re-examine the test-retest reliability. In conclusion, the proposed method could be a valid and reliable method for short-term HRV assessment.
Keywords: Autonomic nervous system, congestive heart failure, heart rate variability, histogram.
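The exact definition of the cumulative density index is not given in the abstract; the sketch below therefore only shows the five-class density histogram of a (hypothetical) RR-interval series, from which the paper's index would be computed according to its own definition.

    import numpy as np

    rng = np.random.default_rng(1)
    rr_intervals = 0.8 + 0.05 * rng.standard_normal(300)    # hypothetical short-term RR series, seconds

    # Five equal-width classes between the minimum and maximum RR interval
    density, edges = np.histogram(rr_intervals, bins=5, density=True)
    class_probability = density * np.diff(edges)             # probability mass per class

    print("class edges (s):", np.round(edges, 3))
    print("class probabilities:", np.round(class_probability, 3))
    # The 'cumulative density' index would be derived from these five class
    # densities following the paper's definition (not reproduced in the abstract).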
8406 Harmonic Pollution Caused by Non-Linear Load: Analysis and Identification
Authors: K. Khlifi, A. Haddouk, M. Hlaili, H. Mechergui
Abstract:
The present paper provides a detailed analysis of prior methods and approaches for non-linear load identification in residential buildings. The main goal of this analysis is to decipher the distorted signals and to estimate the influence of harmonics on power systems. We have performed an analytical study of the behavior of non-linear loads in the residential environment. Simulations have been performed in order to evaluate the distortion rate of the current and to follow its behavior. To complete this work, an instrumentation platform has been realized to carry out practical tests on single-phase non-linear loads, illustrating the current consumption of some domestic appliances supplied with a single-phase sinusoidal voltage. These non-linear loads have been processed and tracked in order to limit their influence on the power grid and to reduce Joule-effect losses. As a result, the study has made it possible to identify the circuits responsible for harmonic pollution.
Keywords: Distortion rate, harmonic analysis, harmonic pollution, non-linear load, power factor.
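A common way to quantify the distortion rate mentioned above is the total harmonic distortion (THD) of the current waveform; a NumPy sketch using the standard THD definition follows, with a hypothetical test waveform rather than measured appliance currents.

    import numpy as np

    def thd(signal, fs, f0, n_harmonics=20):
        # Total harmonic distortion from the FFT magnitudes at f0 and its harmonics.
        spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
        freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
        def mag(f):
            return spectrum[np.argmin(np.abs(freqs - f))]
        fundamental = mag(f0)
        harmonics = [mag(k * f0) for k in range(2, n_harmonics + 1)]
        return np.sqrt(np.sum(np.square(harmonics))) / fundamental

    fs, f0 = 10000, 50                                       # 50 Hz mains, hypothetical sampling rate
    t = np.arange(0, 0.2, 1.0 / fs)                          # ten mains cycles
    current = (np.sin(2*np.pi*f0*t) + 0.25*np.sin(2*np.pi*3*f0*t)
               + 0.1*np.sin(2*np.pi*5*f0*t))                 # fundamental plus 3rd and 5th harmonics
    print(f"THD = {100 * thd(current, fs, f0):.1f}%")        # expected near 26.9%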
8405 Li-Fi Technology: Data Transmission through Visible Light
Authors: Shahzad Hassan, Kamran Saeed
Abstract:
People are always in search of Wi-Fi hotspots because Internet access is a major demand nowadays. But like all other technologies, there is still room for improvement in Wi-Fi technology with regard to the speed and quality of connectivity. In order to address these aspects, Harald Haas, a professor at the University of Edinburgh, proposed what we know as Li-Fi (Light Fidelity). Li-Fi is a new technology in the field of wireless communication that provides connectivity within a network environment. It is a two-way mode of wireless communication using light. Basically, the data are transmitted through light emitting diodes, which can vary the intensity of light very fast, even faster than the blink of an eye. From the research and experiments conducted so far, it can be said that Li-Fi can increase the speed and reliability of data transfer. This paper pays particular attention to the assessment of the performance of this technology. In other words, it is a 5G technology which uses LEDs as the medium of data transfer. For coverage within buildings, Wi-Fi is good, but Li-Fi can be considered favorable in situations where large amounts of data are to be transferred in areas with electromagnetic interference. It brings qualities such as efficiency, security and large throughput to wireless communication. All in all, it can be said that Li-Fi is going to be a future phenomenon where the presence of light will mean access to the Internet as well as speedy data transfer.
Keywords: Communication, LED, Li-Fi, Wi-Fi.
8404 Optimal Water Conservation in a Mechanical Cooling Tower Operations
Authors: M. Boumaza, Y. Bakhabkhi
Abstract:
Water recycling represents an important challenge for many countries, particularly those where this natural resource is scarce. On the other hand, in many operations water is used as a cooling medium, and a high proportion of the water consumed in industry is used for cooling purposes. Generally, this water is rejected directly to the environment. This rejection causes serious environmental damage as well as an important waste of this precious resource. One way to solve these problems is to reuse and recycle this warm water through the use of a natural cooling medium, such as air, in a heat exchange unit known as a cooling tower. Poor performance, design or reliability of cooling towers results in a lower cooling water flow rate, an increase in the evaporation of water, and hence losses of water and energy. This paper presents an experimental investigation of the thermal and hydraulic performance of a mechanical cooling tower. It shows that the water evaporation rate, Mev, increases with an increase in the air and water flow rates as well as in the inlet water temperature, and that, for fixed air flow rates, the pressure drop (ΔPw/Z) increases with increasing water flow rate, L, due to the hydrodynamic behavior of the air/water flow.
Keywords: water, recycle, performance, cooling tower
8403 Business Rules for Data Warehouse
Authors: Rajeev Kaula
Abstract:
Business rules and data warehouses are concepts and technologies that impact a wide variety of organizational tasks. In general, each area has evolved independently, impacting application development and decision-making. Generating knowledge from a data warehouse is a complex process. This paper outlines an approach to ease the extraction of information and knowledge from a data warehouse star schema through an inference class of business rules. The paper uses the Oracle database to illustrate the working of the concepts. The star schema structure and the business rules are stored within a relational database. The approach is explained through a prototype in Oracle's PL/SQL Server Pages.
Keywords: Business rules, data warehouse, PL/SQL Server Pages, relational model, web application.
8402 A Double Differential Chaos Shift Keying Scheme for Ultra-Wideband Chaotic Communication Technology Applied in Low-Rate Wireless Personal Area Network
Authors: Ghobad Gorji, Hasan Golabi
Abstract:
The goal of this paper is to describe the design of an ultra-wideband (UWB) system that is optimized for low-rate wireless personal area network applications. To this aim, we propose a system based on direct chaotic communication (DCC) technology. In this system, a 2-GHz-wide chaotic signal is generated in the lower band of the UWB spectrum, i.e., 3.1–5.1 GHz. For this system, two simple modulation schemes, namely chaotic on-off keying (COOK) and differential chaos shift keying (DCSK), are evaluated first. We then propose a modulation scheme, namely double DCSK, to improve the performance of UWB DCC. Different characteristics of these systems are compared with Monte Carlo simulations based on the additive white Gaussian noise (AWGN) and IEEE 802.15.4a standard channel models.
Keywords: Ultra-wideband, UWB, Direct Chaotic Communication, DCC, IEEE 802.15.4a, COOK, DCSK.
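For readers unfamiliar with DCSK: each bit is sent as a reference chaotic segment followed by either the same segment (bit 1) or its negative (bit 0), and the receiver correlates the two halves. A minimal baseband sketch using a logistic-map chaos generator follows; it is an illustration with hypothetical parameters, not the authors' UWB implementation or the proposed double-DCSK scheme.

    import numpy as np

    def logistic_chaos(n, x0=0.37, r=3.99):
        # Chaotic chip generator based on the logistic map, centred to zero mean.
        x = np.empty(n)
        for i in range(n):
            x0 = r * x0 * (1.0 - x0)
            x[i] = x0
        return x - 0.5

    def dcsk_modulate(bits, chips_per_half=64):
        frames = []
        for b in bits:
            ref = logistic_chaos(chips_per_half, x0=np.random.uniform(0.1, 0.9))
            data = ref if b == 1 else -ref            # data half repeats or inverts the reference
            frames.append(np.concatenate([ref, data]))
        return np.concatenate(frames)

    def dcsk_demodulate(signal, chips_per_half=64):
        frames = signal.reshape(-1, 2 * chips_per_half)
        corr = np.sum(frames[:, :chips_per_half] * frames[:, chips_per_half:], axis=1)
        return (corr > 0).astype(int)

    bits = np.array([1, 0, 1, 1, 0])
    tx = dcsk_modulate(bits)
    rx = tx + 0.1 * np.random.standard_normal(tx.shape)   # AWGN channel
    print(dcsk_demodulate(rx), "vs", bits)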
8401 An Analysis of Real-Time Distributed System under Different Priority Policies
Authors: Y. Jayanta Singh, Suresh C. Mehrotra
Abstract:
A real-time distributed computing system uses heterogeneously networked computers to solve a single problem. Coordinating activities among the computers is therefore a complex task, and deadlines make it more complex. The performance depends on many factors such as traffic workloads, database system architecture, underlying processors, disk speeds, etc. A simulation study has been performed to analyze the performance under different transaction scheduling settings: different workloads, arrival rates, priority policies, altered slack factors and a preemptive policy. The performance metric of the experiments is the missed percentage, that is, the percentage of transactions that the system is unable to complete. The throughput of the system depends on the arrival rate of transactions. The performance can be enhanced by altering the slack factor value: adjusting the slack value of a transaction can help prevent some transactions from being killed or aborted. Under the preemptive policy, many extra executions of new transactions can be carried out.
Keywords: Real distributed systems, slack factors, transaction scheduling, priority policies.
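The slack factor typically relates a transaction's deadline to its arrival time and expected execution time, deadline = arrival + slack x execution_time. A toy single-server, first-come-first-served sketch showing how the missed percentage is counted and how it falls as the slack factor grows (all values hypothetical, not the paper's simulation model):

    # Toy FCFS simulation: deadline = arrival + slack_factor * exec_time.
    def simulate(transactions, slack_factor):
        # transactions: list of (arrival_time, exec_time), sorted by arrival
        clock, missed = 0.0, 0
        for arrival, exec_time in transactions:
            clock = max(clock, arrival)
            deadline = arrival + slack_factor * exec_time
            if clock + exec_time <= deadline:
                clock += exec_time            # transaction completes in time
            else:
                missed += 1                   # abort: it can no longer meet its deadline
        return 100.0 * missed / len(transactions)

    txns = [(0.0, 2.0), (0.5, 1.0), (1.0, 3.0), (4.0, 1.0), (4.2, 2.0)]
    for slack in (1.5, 2.0, 4.0):
        print(f"slack {slack}: missed percent = {simulate(txns, slack):.0f}%")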
8400 Aerobic Treatment of Oily Wastewater: Effect of Aeration and Sludge Concentration to Pollutant Reduction and PHB Accumulation
Authors: Budhi Primasari, Shaliza Ibrahim, M Suffian M Annuar, Lim Xung Ian Remmie
Abstract:
This study aims to investigate the feasibility of an aerobic biological process to treat oily wastewater from the palm oil food industry. The effects of aeration and sludge concentration are studied. Raw sludge and raw wastewater were mixed and acclimatized for five days in a stirred tank reactor. The aeration rate (no aeration, low rate of 1.5 L/min, and high rate of 2 L/min) and the sludge concentration (3675, 7350, and 11025 mg/L of VSS) were varied. The process responses were pH, COD, oil and grease, VSS, and PHB content. It was found that the treatment can remove 85.1 to 97.1% of COD and 12.9 to 54.8% of oil and grease. The PHB yield was found to be within 0.15% to 2.4% as a PHB/VSS ratio and 0.01% to 0.12% as PHB/COD removed. Higher aeration results in higher COD and oil and grease removal, while the experiment without aeration gives a better PHB yield. The higher sludge concentration (11025 mg/L VSS) gives higher removal of oil and grease, while the moderate sludge concentration (7350 mg/L VSS) gives a better result in COD removal. A higher PHB yield is obtained at the low sludge concentration (3675 mg/L).
Keywords: Oily wastewater, COD, PHB, oil and grease.
8399 Thiopental-Fentanyl versus Midazolam-Fentanyl for Emergency Department Procedural Sedation and Analgesia in Patients with Shoulder Dislocation and Distal Radial Fracture-Dislocation: A Randomized Double-Blind Controlled Trial
Authors: D. Farsi, Gh. Dokhtvasi, S. Abbasi, S. Shafiee Ardestani, E. Payani
Abstract:
Background and aim: It has not been well studied whether fentanyl-thiopental (FT) is effective and safe for procedural sedation and analgesia (PSA) in orthopedic procedures in the Emergency Department (ED). The aim of this trial was to evaluate the effectiveness of intravenous FT versus fentanyl-midazolam (FM) in patients who suffered from shoulder dislocation or distal radial fracture-dislocation. Methods: In this randomized double-blinded study, seventy-six eligible patients entered the study and randomly received intravenous FT or FM. The success rate, onset of action, recovery time, pain score, physicians' satisfaction and adverse events were assessed and recorded by the treating emergency physicians. The statistical analysis was intention-to-treat. Results: The success rate after administering the loading dose was significantly higher in the FT group than in the FM group (71.7% vs. 48.9%, p=0.04); the ultimate failure rate after 3 doses was higher in the FT group than in the FM group (3 to 1), but this did not reach a significant level (p=0.61). Despite nearly equal onset of action times in the two study groups (p=0.464), the recovery period in patients receiving FT was markedly shorter than in the FM group (p<0.001). The occurrence of adverse effects was low in both groups (p=0.31). Conclusion: PSA using FT is effective and appears to be safe for orthopedic procedures in the ED. Therefore, given the prompt onset of action and short recovery period of thiopental, this combination can be considered for performing PSA in orthopedic procedures in the ED.
Keywords: Procedural Sedation and Analgesia, Thiopental, Fentanyl, Midazolam, Orthopedic Procedure, Emergency Department, Pain.
8398 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model
Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin
Abstract:
Early detection of anomalies in data centers is important to reduce downtime and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors, selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center's history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.
Keywords: Anomaly detection, autoencoder, data centers, deep learning.
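The classification stage described above (difference signal between input and reconstruction, feature extraction, random forest) can be sketched as follows. The per-sensor autoencoder reconstructions are assumed to be already available, and the chosen features (mean, standard deviation and maximum of the absolute reconstruction error per window) are only one plausible choice, not necessarily the paper's; all data are synthetic.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def error_features(x, x_hat):
        # Simple features of the reconstruction error for one window of one sensor.
        e = np.abs(x - x_hat)
        return [e.mean(), e.std(), e.max()]

    rng = np.random.default_rng(0)
    # Hypothetical windows: inputs and their autoencoder reconstructions, plus labels
    windows = rng.standard_normal((200, 3, 60))            # 200 windows, 3 sensors, 60 samples
    recon   = windows + 0.05 * rng.standard_normal(windows.shape)
    recon[150:] += 0.5                                      # last 50 windows behave anomalously
    labels  = np.r_[np.zeros(150, dtype=int), np.ones(50, dtype=int)]

    X = np.array([np.concatenate([error_features(w[s], r[s]) for s in range(3)])
                  for w, r in zip(windows, recon)])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                              stratify=labels, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))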
8397 A Transform Domain Function Controlled VSSLMS Algorithm for Sparse System Identification
Authors: Cemil Turan, Mohammad Shukri Salman
Abstract:
The convergence rate of the least-mean-square (LMS) algorithm deteriorates if the input signal to the filter is correlated. In a system identification problem, this convergence rate can be improved if the signal is white and/or if the system is sparse. We recently proposed a sparse transform domain LMS-type algorithm that uses a variable step size for sparse system identification. The proposed algorithm provided high performance even when the input signal is highly correlated. In this work, we investigate the performance of the proposed TD-LMS algorithm for a large number of filter taps, which is also a critical issue for the standard LMS algorithm. Additionally, the optimum value of the most important parameter is calculated for all experiments. Moreover, a convergence analysis of the proposed algorithm is provided. The performance of the proposed algorithm has been compared to that of different algorithms in sparse system identification settings with different sparsity levels and different numbers of filter taps. Simulations have shown that the proposed algorithm achieves prominent performance compared to the other algorithms.
Keywords: Adaptive filtering, sparse system identification, VSSLMS algorithm, TD-LMS algorithm.
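The exact step-size control function of the proposed TD-LMS is not reproduced in the abstract; as a generic illustration of a variable step-size LMS in a sparse system identification setting, here is a sketch in which the step size shrinks as the smoothed error power falls (one common heuristic, with hypothetical constants and a hypothetical sparse system).

    import numpy as np

    rng = np.random.default_rng(0)
    N = 64
    h = np.zeros(N); h[[3, 17, 40]] = [1.0, -0.5, 0.3]       # sparse unknown system
    x = rng.standard_normal(5000)                            # white input signal
    d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))

    w = np.zeros(N)
    mu_max, mu_min, alpha = 0.02, 0.001, 0.97
    p = 0.0
    for n in range(N, len(x)):
        u = x[n - N:n][::-1]                                 # most recent samples first
        e = d[n - 1] - w @ u
        p = alpha * p + (1 - alpha) * e * e                  # smoothed error power
        mu = np.clip(mu_max * p / (p + 0.1), mu_min, mu_max) # step size shrinks as the error falls
        w += mu * e * u

    print("misalignment (dB):", 10 * np.log10(np.sum((w - h) ** 2) / np.sum(h ** 2)))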
8396 Effect of Sand Particle Transportation in Oil and Gas Pipeline Erosion
Authors: Christopher Deekia Nwimae, Nigel Simms, Liyun Lao
Abstract:
Erosion in pipe bends caused by particles is a major concern in oil and gas fields and can cause breakdown of production equipment. This work investigates the effect of sand particle transport in an elbow using a computational fluid dynamics (CFD) approach. A two-way coupled Euler-Lagrange discrete phase model is employed to calculate the air/solid particle flow in the elbow. The generic erosion model in ANSYS Fluent and three particle rebound models are used to predict the erosion rate on 90° elbows. The model results are compared with experimental data from the open literature, validating the CFD-based predictions. The simulations show that, as the sand particles impinge on the wall of the elbow at high velocity, the erosion on the elbow wall intensifies where the velocity increases, and the maximum erosion location occurs at 48°.
Keywords: Erosion, prediction, elbow, computational fluid dynamics, CFD.
8395 AnQL: A Query Language for Annotation Documents
Authors: Neerja Bhatnagar, Ben A. Juliano, Renee S. Renner
Abstract:
This paper presents data annotation models at five levels of granularity (database, relation, column, tuple, and cell) of relational data to address the unsuitability of most relational databases for expressing annotations. These models do not require any structural or schematic changes to the underlying database. They are also flexible, extensible, customizable, database-neutral, and platform-independent. The paper also presents an SQL-like query language, named Annotation Query Language (AnQL), to query annotation documents. AnQL is simple to understand and exploits the already existing wide knowledge and skill set of SQL.
Keywords: Annotation query language, data annotations, data annotation models, semantic data annotations.
8394 EZW Coding System with Artificial Neural Networks
Authors: Saudagar Abdul Khader Jilani, Syed Abdul Sattar
Abstract:
Image compression plays a vital role in today's communication. Limited allocated bandwidth leads to slower communication, so to improve the rate of transmission within the limited bandwidth, the image data must be compressed before transmission. Basically, there are two types of compression: 1) lossy compression and 2) lossless compression. Although lossy compression gives more compression than lossless compression, the accuracy of the retrieved image is lower with lossy compression. The JPEG and JPEG2000 image compression systems use Huffman coding for image compression. The JPEG2000 coding system uses the wavelet transform, which decomposes the image into different levels, where the coefficients in each sub-band are uncorrelated with the coefficients of other sub-bands. Embedded Zerotree Wavelet (EZW) coding exploits the multi-resolution properties of the wavelet transform to give a computationally simple algorithm with better performance compared to existing wavelet transforms. For further improvement of compression applications, other coding methods have recently been suggested; an ANN-based approach is one such method. Artificial neural networks have been applied to many problems in image processing and have demonstrated their superiority over classical methods when dealing with noisy or incomplete data in image compression applications. A performance analysis on different images is presented for an EZW coding system combined with the error backpropagation algorithm. The implementation and analysis show approximately 30% more accuracy in the retrieved image compared to the existing EZW coding system.
Keywords: Accuracy, compression, EZW, JPEG2000, performance.
8393 Machine Learning-Enabled Classification of Climbing Using Small Data
Authors: Nicholas Milburn, Yu Liang, Dalei Wu
Abstract:
Athlete performance scoring within the climbing domain presents interesting challenges, as the sport does not have an objective way to assign skill. Assessing skill levels within any sport is valuable, as it can be used to mark progress while training and can help an athlete choose appropriate climbs to attempt. Machine learning-based methods are popular for complex problems like this. The available dataset was composed of dynamic force data recorded during climbing; however, this dataset came with challenges such as data scarcity and imbalance, and it was temporally heterogeneous. The investigated solutions to these challenges include data augmentation, temporal normalization, conversion of the time series to the spectral domain, and cross-validation strategies. The investigated solutions to the classification problem include the lightweight machine learning classifiers KNN and SVM as well as deep learning with a CNN. The best performing model had 80% accuracy. In conclusion, there seems to be enough information within climbing force data to accurately categorize climbers by skill.
Keywords: Classification, climbing, data imbalance, data scarcity, machine learning, time sequence.
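Two of the ingredients listed above, conversion of a force time series to the spectral domain and cross-validated evaluation of a lightweight classifier, can be sketched compactly with scikit-learn; the data below are synthetic stand-ins, not the climbing dataset, and the feature choice is illustrative only.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    def spectral_features(force, n_bins=16):
        # Magnitude spectrum of a (temporally normalized) force recording, truncated to n_bins.
        return np.abs(np.fft.rfft(force - force.mean()))[:n_bins]

    # Synthetic stand-in for the force data: two 'skill levels' with different dynamics
    n_per_class, length = 30, 256
    novice = rng.standard_normal((n_per_class, length)) + np.sin(np.linspace(0, 20, length))
    expert = rng.standard_normal((n_per_class, length)) + np.sin(np.linspace(0, 60, length))
    X = np.array([spectral_features(f) for f in np.vstack([novice, expert])])
    y = np.r_[np.zeros(n_per_class, dtype=int), np.ones(n_per_class, dtype=int)]

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, X, y, cv=5)              # stratified 5-fold by default
    print("cross-validated accuracy:", scores.mean())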
8392 Performance Analysis of an Adaptive Threshold Hybrid Double-Dwell System with Antenna Diversity for Acquisition in DS-CDMA Systems
Authors: H. Krouma, M. Barkat, K. Kemih, M. Benslama, Y. Yacine
Abstract:
In this paper, we consider the analysis of the acquisition process for a hybrid double-dwell system with antenna diversity for DS-CDMA (direct sequence-code division multiple access) using an adaptive threshold. Acquisition systems with a fixed threshold value are unable to adapt to fast-varying mobile communication environments and may result in a high false alarm rate and/or a low detection probability. Therefore, we propose an adaptively varying threshold scheme through the use of a cell-averaging constant false alarm rate (CA-CFAR) algorithm, which is well known in the field of radar detection. We derive exact expressions for the probabilities of detection and false alarm in Rayleigh fading channels. The mean acquisition time of the system under consideration is also derived. The performance of the system is analyzed and compared to that of a hybrid single-dwell system.
Keywords: Adaptive threshold, hybrid double-dwell system, CA-CFAR algorithm, DS-CDMA.
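The cell-averaging CFAR rule sets the detection threshold for each cell from the mean of the surrounding reference cells (excluding guard cells) scaled by a factor chosen for the desired false-alarm probability. A one-dimensional NumPy sketch with hypothetical parameters follows; it illustrates the generic radar-style CA-CFAR test, not the acquisition-specific derivation of the paper.

    import numpy as np

    def ca_cfar(power, n_ref=16, n_guard=2, pfa=1e-3):
        # 1-D cell-averaging CFAR: threshold(i) = alpha * mean of reference cells around i.
        n = len(power)
        alpha = n_ref * (pfa ** (-1.0 / n_ref) - 1.0)   # scale factor for exponential noise
        detections = np.zeros(n, dtype=bool)
        half = n_ref // 2
        for i in range(half + n_guard, n - half - n_guard):
            left = power[i - n_guard - half:i - n_guard]
            right = power[i + n_guard + 1:i + n_guard + 1 + half]
            noise = np.mean(np.concatenate([left, right]))
            detections[i] = power[i] > alpha * noise
        return detections

    rng = np.random.default_rng(0)
    power = rng.exponential(1.0, 500)                    # square-law detected noise
    power[250] += 30.0                                   # a strong correlation peak
    print("detected cells:", np.flatnonzero(ca_cfar(power)))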