Search results for: wasteless method of ores processing
20320 Data Clustering in Wireless Sensor Network Implemented on Self-Organization Feature Map (SOFM) Neural Network
Authors: Krishan Kumar, Mohit Mittal, Pramod Kumar
Abstract:
Wireless sensor networks are among the most promising communication networks for monitoring remote environmental areas. In such a network, all sensor nodes communicate with each other via radio signals. The sensor nodes are capable of sensing, data storage, and processing. Each node collects information from its neighboring nodes, and data collection and processing are performed by data aggregation techniques. For data aggregation in the sensor network, a clustering technique is implemented using a self-organizing feature map (SOFM) neural network. Some of the sensor nodes are selected as cluster head nodes. Information from non-cluster-head nodes is aggregated at the cluster head nodes and then transferred to the base station (or sink nodes). The aim of this paper is to manage the huge amount of data with the help of the SOM neural network: clustered data are selected for transfer to the base station instead of the whole of the information aggregated at the cluster head nodes. This reduces battery consumption for managing the large data volume, and the network lifetime is enhanced to a great extent.
Keywords: artificial neural network, data clustering, self-organization feature map, wireless sensor network
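As an illustration of the clustering step, the following is a minimal SOFM sketch in Python; the node coordinates, map size, and training schedule are illustrative assumptions, not parameters from the paper.

```python
# Minimal self-organizing feature map (SOFM) sketch: cluster sensor-node
# positions so each map unit acts as a candidate cluster-head location.
# Node coordinates and map size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 100, size=(200, 2))   # (x, y) positions of sensor nodes
weights = rng.uniform(0, 100, size=(5, 2))   # 5 map units -> 5 cluster heads

def train_sofm(nodes, weights, epochs=100, lr0=0.5, sigma0=2.0):
    n_units = len(weights)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # decaying neighborhood width
        for x in rng.permutation(nodes):
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            d = np.abs(np.arange(n_units) - bmu)   # 1-D lattice distance
            h = np.exp(-d**2 / (2 * sigma**2))     # neighborhood function
            weights += lr * h[:, None] * (x - weights)
    return weights

weights = train_sofm(nodes, weights)
clusters = np.argmin(np.linalg.norm(nodes[:, None] - weights, axis=2), axis=1)
print("nodes per cluster:", np.bincount(clusters))
```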
Procedia PDF Downloads 516
20319 A Family of Second Derivative Methods for Numerical Integration of Stiff Initial Value Problems in Ordinary Differential Equations
Authors: Luke Ukpebor, C. E. Abhulimen
Abstract:
Stiff initial value problems in ordinary differential equations are problems whose typical solutions decay rapidly and exponentially, and their numerical investigation is very tedious. Conventional numerical integration solvers cannot cope effectively with stiff problems because they lack adequate stability characteristics. In this article, we develop a new family of four-step second derivative exponentially fitted methods of order six for the numerical integration of stiff initial value problems for general first order differential equations. In deriving our method, we employ the idea of breaking the general multi-derivative multistep method down into predictor and corrector schemes that possess free parameters allowing automatic fitting to exponential functions. The stability analysis of the method is discussed, and the method is demonstrated on numerical examples. The results show that the method is A-stable and competes favorably with existing methods in terms of efficiency and accuracy.
Keywords: A-stable, exponentially fitted, four step, predictor-corrector, second derivative, stiff initial value problems
Procedia PDF Downloads 256
20318 A New Method to Reduce 5G Application Layer Payload Size
Authors: Gui Yang Wu, Bo Wang, Xin Wang
Abstract:
Nowadays, the 5G service-based interface architecture uses text-based payloads such as JSON to transfer business data between network functions, which has obvious advantages for internet services but causes unnecessarily large traffic. In this paper, a new 5G application payload size reduction method is presented. It provides a mechanism for network functions to negotiate the new capability when network communication starts up, and defines how 5G application data are reduced according to the information negotiated with the peer network function. Without losing the advantages of the 5G text-based payload, this method demonstrates an excellent result on application payload size reduction and does not increase computing resource usage. Implementing this method does not impact any standards or specifications, nor does it change any encoding or decoding functionality. In a real 5G network, this method will contribute to network efficiency and eventually save considerable computing resources.
Keywords: 5G, JSON, payload size, service-based interface
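The abstract does not disclose the exact reduction mechanism; the sketch below shows one plausible, heavily simplified realization: lossless substitution of short aliases for verbose JSON keys, using a dictionary agreed during capability negotiation. All names and keys are hypothetical.

```python
# Hypothetical sketch of negotiated JSON key shortening between two network
# functions (NFs). The alias dictionary is exchanged once at start-up; the
# encoding itself is lossless and reversible. Names are illustrative only.
import json

ALIASES = {"subscriberIdentity": "s", "sessionManagementData": "m",
           "qosFlowDescription": "q"}  # agreed during capability negotiation
REVERSE = {v: k for k, v in ALIASES.items()}

def encode(payload: dict) -> str:
    return json.dumps({ALIASES.get(k, k): v for k, v in payload.items()},
                      separators=(",", ":"))  # also drop whitespace

def decode(text: str) -> dict:
    return {REVERSE.get(k, k): v for k, v in json.loads(text).items()}

msg = {"subscriberIdentity": "imsi-208930000000001",
       "qosFlowDescription": {"5qi": 9}}
wire = encode(msg)
assert decode(wire) == msg
print(len(json.dumps(msg)), "->", len(wire), "bytes")
```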
Procedia PDF Downloads 180
20317 Determination of Starting Design Parameters for Reactive-Dividing Wall Distillation Column Simulation Using a Modified Shortcut Design Method
Authors: Anthony P. Anies, Jose C. Muñoz
Abstract:
A new shortcut method for the design of reactive-dividing wall columns (RDWC) is proposed in this work. The RDWC is decomposed into its thermodynamically equivalent configuration, namely the Petlyuk column, which consists of a reactive prefractionator and an unreactive main fractionator. The modified FUGK (Fenske-Underwood-Gilliland-Kirkbride) shortcut distillation method, which incorporates the effect of reaction on the Underwood equations and the Gilliland correlation, is used to design the reactive prefractionator, while the conventional FUGK shortcut method is used to design the unreactive main fractionator. The shortcut method is applied to the synthesis of dimethyl ether (DME) through the liquid-phase dehydration of methanol, and the results are used as the starting design inputs for rigorous simulation in Aspen Plus V8.8. Mole purities of 99% DME in the distillate stream, 99% methanol in the side draw stream, and 99% water in the bottoms stream were obtained in the simulation, making the proposed shortcut method applicable for the preliminary design of RDWC.
Keywords: Aspen Plus, dimethyl ether, Petlyuk column, reactive-dividing wall column, shortcut method, FUGK
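For context, these are the standard (non-reactive) FUGK relations underlying the shortcut, in the usual notation (LK = light key, HK = heavy key); the paper's reaction-modified Underwood and Gilliland forms are not reproduced here.

```latex
% Fenske: minimum number of stages from the key-component splits
\[
N_{\min} = \frac{\ln\!\left[\left(\frac{x_{D,\mathrm{LK}}}{x_{D,\mathrm{HK}}}\right)
  \left(\frac{x_{B,\mathrm{HK}}}{x_{B,\mathrm{LK}}}\right)\right]}
  {\ln \alpha_{\mathrm{LK/HK}}}
\]
% Underwood: minimum reflux via the common root theta
\[
\sum_i \frac{\alpha_i z_{i,F}}{\alpha_i-\theta} = 1-q,
\qquad
R_{\min}+1 = \sum_i \frac{\alpha_i x_{i,D}}{\alpha_i-\theta}
\]
% Kirkbride: feed-stage location (rectifying vs. stripping stages)
\[
\frac{N_R}{N_S} = \left[\frac{B}{D}
  \left(\frac{z_{\mathrm{HK},F}}{z_{\mathrm{LK},F}}\right)
  \left(\frac{x_{B,\mathrm{LK}}}{x_{D,\mathrm{HK}}}\right)^{2}\right]^{0.206}
\]
```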
Procedia PDF Downloads 191
20316 Parameter Estimation for the Mixture of Generalized Gamma Model
Authors: Wikanda Phaphan
Abstract:
The mixture generalized gamma distribution is a combination of two distributions, the generalized gamma distribution and the length-biased generalized gamma distribution, presented by Suksaengrakcharoen and Bodhisuwan in 2014. Its probability density function (pdf) is fairly complex, which creates problems in parameter estimation: the estimators cannot be calculated in closed form, so numerical estimation must be used. In this study, we present a new method of parameter estimation using the expectation-maximization (EM) algorithm, the conjugate gradient method, and the quasi-Newton method. Data were generated by the acceptance-rejection method and used for estimating α, β, λ, and p, where λ is the scale parameter, p is the weight parameter, and α and β are the shape parameters. The Monte Carlo technique was used to assess the estimators' performance, with sample sizes of 10, 30, and 100 and 20 replications in each case. We evaluated the effectiveness of the estimators by considering the mean squared errors and the bias. The findings revealed that the EM algorithm produced estimates close to the actual values, while the maximum likelihood estimators obtained via the conjugate gradient and quasi-Newton methods were less precise than those obtained via the EM algorithm.
Keywords: conjugate gradient method, quasi-Newton method, EM-algorithm, generalized gamma distribution, length biased generalized gamma distribution, maximum likelihood method
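A generic sketch of the EM pattern on a two-component mixture is given below; Gaussian components stand in for the generalized gamma densities, whose M-step would instead require the numerical (conjugate gradient or quasi-Newton) maximization described in the abstract.

```python
# Generic EM skeleton for a two-component mixture with weight p.
# Gaussian components stand in for the (length-biased) generalized gamma
# densities of the paper; the structure of the E- and M-steps is the same.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1.5, 200)])

p, mu1, s1, mu2, s2 = 0.5, -1.0, 1.0, 5.0, 1.0   # initial guesses
for _ in range(200):
    # E-step: posterior responsibility of component 1 for each point
    f1, f2 = p * norm.pdf(x, mu1, s1), (1 - p) * norm.pdf(x, mu2, s2)
    r = f1 / (f1 + f2)
    # M-step: closed form here; numerical maximization in the general case
    p = r.mean()
    mu1, mu2 = np.average(x, weights=r), np.average(x, weights=1 - r)
    s1 = np.sqrt(np.average((x - mu1)**2, weights=r))
    s2 = np.sqrt(np.average((x - mu2)**2, weights=1 - r))

print(round(p, 3), round(mu1, 2), round(mu2, 2))
```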
Procedia PDF Downloads 218
20315 An Improved Prediction Model of Ozone Concentration Time Series Based on Chaotic Approach
Authors: Nor Zila Abd Hamid, Mohd Salmi M. Noorani
Abstract:
This study focuses on the development of prediction models for the ozone concentration time series, built using a chaotic approach. Firstly, the chaotic nature of the time series is detected by means of the phase space plot and the Cao method. Then, the prediction model is built, and the local linear approximation method is used for forecasting. A traditional autoregressive linear prediction model is also built for comparison, and an improvement to the local linear approximation method is proposed. The prediction models are applied to the hourly ozone time series observed at a benchmark station in Malaysia. Comparison of all models through the mean absolute error, the root mean squared error, and the correlation coefficient shows that the model with the improved prediction method is the best. Thus, the chaotic approach is a good approach for developing a prediction model for the ozone concentration time series.
Keywords: chaotic approach, phase space, Cao method, local linear approximation method
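A minimal sketch of one-step local linear prediction on a delay-embedded series follows; the embedding dimension and delay would in practice come from the Cao method, and the values here are illustrative.

```python
# Sketch of local linear prediction on a delay-embedded scalar series.
# Embedding dimension m and delay tau would come from the Cao method in
# practice; the values here are illustrative.
import numpy as np

def embed(x, m, tau):
    """Delay-embed x into rows (x_t, x_{t+tau}, ..., x_{t+(m-1)tau})."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

def local_linear_predict(x, m=3, tau=1, k=10):
    X = embed(x, m, tau)
    query, hist = X[-1], X[:-1]
    targets = x[(m - 1) * tau + 1:]                  # one-step futures
    idx = np.argsort(np.linalg.norm(hist - query, axis=1))[:k]  # k nearest
    A = np.hstack([hist[idx], np.ones((k, 1))])      # affine local model
    coef, *_ = np.linalg.lstsq(A, targets[idx], rcond=None)
    return np.append(query, 1.0) @ coef

x = np.sin(0.3 * np.arange(500)) + 0.05 * np.random.default_rng(2).normal(size=500)
print("one-step forecast:", local_linear_predict(x))
```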
Procedia PDF Downloads 329
20314 Machine Learning Approach for Mutation Testing
Authors: Michael Stewart
Abstract:
Mutation testing is a type of software testing proposed in the 1970s in which program statements are deliberately changed to introduce simple errors, so that test cases can be validated by checking whether they detect those errors. Test cases are executed against the mutant code to determine whether one fails, detecting the error and giving confidence that the program is correct. One major issue with this type of testing is that it becomes computationally intensive to generate and test all possible mutations for complex programs. This paper uses reinforcement learning and parallel processing within the context of mutation testing for the selection of mutation operators and test cases, reducing the computational cost of testing and improving test suite effectiveness. Experiments were conducted using sample programs to determine how well the reinforcement learning-based algorithm performed with one live mutation, multiple live mutations, and no live mutations. The experiments, measured by mutation score, were used to update the algorithm and improve prediction accuracy. The performance was then evaluated on multiple-processor computers. With reinforcement learning, the number of mutation operators utilized was reduced by 50-100%.
Keywords: automated-testing, machine learning, mutation testing, parallel processing, reinforcement learning, software engineering, software testing
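A hedged sketch of the operator-selection idea, reduced to a single-state (bandit-style) Q-learning loop; the operator set and kill probabilities are simulated stand-ins for running real test suites.

```python
# Epsilon-greedy Q-learning over mutation operators, where the reward is 1
# when a generated mutant is killed by the test suite. The kill-probability
# table below merely simulates test execution for the sketch.
import random

OPERATORS = ["AOR", "ROR", "LCR", "SDL"]      # arithmetic, relational, ...
KILL_PROB = {"AOR": 0.8, "ROR": 0.6, "LCR": 0.4, "SDL": 0.2}  # simulated
Q = {op: 0.0 for op in OPERATORS}
alpha, epsilon = 0.1, 0.2

for episode in range(2000):
    op = (random.choice(OPERATORS) if random.random() < epsilon
          else max(Q, key=Q.get))              # explore vs. exploit
    reward = 1.0 if random.random() < KILL_PROB[op] else 0.0
    Q[op] += alpha * (reward - Q[op])          # single-state Q update

print({op: round(q, 2) for op, q in Q.items()})
```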
Procedia PDF Downloads 197
20313 Metareasoning Image Optimization Q-Learning
Authors: Mahasa Zahirnia
Abstract:
The purpose of this paper is to explore new and effective ways of optimizing satellite images using artificial intelligence, implementing reinforcement learning to enhance the quality of the data captured within the image. In our implementation of Bellman's reinforcement learning equations, the associated state diagrams, and multi-stage image processing, we were able to enhance image quality and to detect and delineate objects. Reinforcement learning is a differentiator in the area of artificial intelligence, and Q-learning relies on trial and error to achieve its goals. The reward system embedded in Q-learning allows the agent to self-evaluate its performance and decide on the best possible course of action based on the current and future environment. Results show that, within a simulated environment built on commercially available images, the detection rate was 40-90%. Reinforcement learning through the Q-learning algorithm is not just a desirable but a necessary design criterion for image optimization and enhancement. The proposed approach is a cost-effective way of resolving uncertainty in the data, because reinforcement learning finds near-optimal policies to manage the process using a smaller sample of images.
Keywords: Q-learning, image optimization, reinforcement learning, Markov decision process
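The tabular Q-learning update referred to in the abstract has the standard form, with learning rate α, discount factor γ, and reward r:

```latex
\[
Q(s_t, a_t) \leftarrow Q(s_t, a_t)
  + \alpha \left[\, r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]
\]
```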
Procedia PDF Downloads 213
20312 Automatic Near-Infrared Image Colorization Using Synthetic Images
Authors: Yoganathan Karthik, Guhanathan Poravi
Abstract:
Colorizing near-infrared (NIR) images poses unique challenges due to the absence of color information and the nuances in light absorption. In this paper, we present an approach to NIR image colorization utilizing a synthetic dataset generated from visible light images. Our method addresses two major challenges encountered in NIR image colorization: accurately colorizing objects with color variations and avoiding over/under saturation in dimly lit scenes. To tackle these challenges, we propose a Generative Adversarial Network (GAN)-based framework that learns to map NIR images to their corresponding colorized versions. The synthetic dataset ensures diverse color representations, enabling the model to effectively handle objects with varying hues and shades. Furthermore, the GAN architecture facilitates the generation of realistic colorizations while preserving the integrity of dimly lit scenes, thus mitigating issues related to over/under saturation. Experimental results on benchmark NIR image datasets demonstrate the efficacy of our approach in producing high-quality colorizations with improved color accuracy and naturalness. Quantitative evaluations and comparative studies validate the superiority of our method over existing techniques, showcasing its robustness and generalization capability across diverse NIR image scenarios. Our research not only contributes to advancing NIR image colorization but also underscores the importance of synthetic datasets and GANs in addressing domain-specific challenges in image processing tasks. The proposed framework holds promise for various applications in remote sensing, medical imaging, and surveillance where accurate color representation of NIR imagery is crucial for analysis and interpretation.
Keywords: computer vision, near-infrared images, automatic image colorization, generative adversarial networks, synthetic data
Procedia PDF Downloads 42
20311 Deep Learning Based 6D Pose Estimation for Bin-Picking Using 3D Point Clouds
Authors: Hesheng Wang, Haoyu Wang, Chungang Zhuang
Abstract:
Estimating the 6D pose of objects is a core step for robot bin-picking tasks. The problem is that various objects are usually randomly stacked with heavy occlusion in real applications. In this work, we propose a method to regress 6D poses by predicting three points for each object in the 3D point cloud through deep learning. To resolve the ambiguity of symmetric poses, we propose a labeling method to help the network converge better. Based on the predicted pose, an iterative method is employed for pose optimization. In real-world experiments, our method outperforms the classical approach in both precision and recall.
Keywords: pose estimation, deep learning, point cloud, bin-picking, 3D computer vision
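The paper predicts three points per object; one standard way to recover a 6D pose from three point correspondences is the Kabsch alignment, sketched below on synthetic data. The keypoints and transform are illustrative, and this is not necessarily the paper's exact recovery step.

```python
# Recover a rigid transform (R, t) mapping model-frame keypoints onto
# predicted scene keypoints via the Kabsch algorithm (SVD of the
# cross-covariance). The three keypoints here are arbitrary examples.
import numpy as np

def pose_from_points(model_pts, pred_pts):
    """Least-squares rigid transform with q_i ~ R @ p_i + t."""
    cm, cp = model_pts.mean(axis=0), pred_pts.mean(axis=0)
    H = (model_pts - cm).T @ (pred_pts - cp)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cp - R @ cm

model = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0]], dtype=float)
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
pred = model @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = pose_from_points(model, pred)
print(np.allclose(R, R_true), np.round(t, 3))
```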
Procedia PDF Downloads 159
20310 Spectrophotometric Determination of Phenylephrine Hydrochloride by Coupling with Diazotized 2,4-Dinitroaniline
Authors: Sulaiman Gafar Muhamad
Abstract:
A rapid spectrophotometric method for the micro-determination of phenylephrine-HCl (PHE) has been developed. The proposed method involves the coupling of phenylephrine-HCl with diazotized 2,4-dinitroaniline in alkaline medium, measured at λmax = 455 nm. Under the present optimum conditions, Beer's law was obeyed in the range of 1.0-20 μg/ml of PHE with a molar absorptivity of 1.915 × 10⁴ L·mol⁻¹·cm⁻¹, a relative error of 0.015, and a relative standard deviation of 0.024%. The method has been applied successfully to estimate phenylephrine-HCl in pharmaceutical preparations (nose drops and syrup).
Keywords: diazo-coupling, 2,4-dinitroaniline, phenylephrine-HCl, spectrophotometry
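A worked Beer's law illustration using the reported molar absorptivity; the molar mass of phenylephrine-HCl (about 203.67 g/mol) and the 1 cm path length are supplied here for illustration and are not taken from the abstract.

```latex
% Beer's law A = eps * b * c at a mid-range concentration of 10 ug/ml:
\[
c = \frac{10 \times 10^{-3}\ \text{g/l}}{203.67\ \text{g/mol}}
  \approx 4.91 \times 10^{-5}\ \text{mol/l}, \qquad
A = (1.915 \times 10^{4})(1\ \text{cm})(4.91 \times 10^{-5}) \approx 0.94
\]
```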
Procedia PDF Downloads 255
20309 Rational Probabilistic Method for Calculating Thermal Cracking Risk of Mass Concrete Structures
Authors: Naoyuki Sugihashi, Toshiharu Kishi
Abstract:
The probability of occurrence of thermal cracks in mass concrete in Japan is evaluated using a cracking probability diagram that represents the relationship between the thermal cracking index and the probability of occurrence of cracks in the actual structure. In this paper, we propose a method to calculate the cracking probability directly, following probabilistic theory, by modeling the variances of tensile stress and tensile strength. In this method, the relationships among the variances of tensile stress and tensile strength, the thermal cracking index, and the cracking probability are formulated and presented. In addition, the standard deviations of tensile stress and tensile strength were identified, and a method for calculating the cracking probability in a generally controlled construction environment was demonstrated.
Keywords: thermal crack control, mass concrete, thermal cracking probability, durability of concrete, calculating method of cracking probability
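One standard formulation of such a calculation, stated here as an assumption rather than the paper's exact model: with independent normal tensile stress S ~ N(μ_S, s_S²) and tensile strength R ~ N(μ_R, s_R²), and thermal cracking index I = μ_R/μ_S,

```latex
\[
P_{\mathrm{cr}} = P(R - S < 0)
  = \Phi\!\left(-\,\frac{\mu_R - \mu_S}{\sqrt{s_R^{2} + s_S^{2}}}\right)
  = \Phi\!\left(-\,\frac{\mu_S\,(I - 1)}{\sqrt{s_R^{2} + s_S^{2}}}\right)
\]
```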
Procedia PDF Downloads 345
20308 A New Family of Integration Methods for Nonlinear Dynamic Analysis
Authors: Shuenn-Yih Chang, Chiu-Li Huang, Ngoc-Cuong Tran
Abstract:
A new family of structure-dependent integration methods, whose coefficients of the difference equation for the displacement increment are functions of the initial structural properties and the time step size, is proposed in this work. This family of methods combines controllable numerical dissipation, an explicit formulation, and unconditional stability. In general, its numerical dissipation can be continuously controlled by a parameter, and zero damping is achievable. In addition, it can apply high-frequency damping to suppress or even remove the spurious oscillations of high-frequency modes, whereas low-frequency modes are integrated very accurately owing to the nearly zero damping applied to them. It is shown herein that the proposed family of methods can have exactly the same numerical properties as the HHT-α method for linear elastic systems, while still preserving the most important property of a structure-dependent integration method, namely an explicit formulation for each time step. Consequently, it can save substantial computational effort in solving inertial problems compared to the HHT-α method. In fact, numerical experiments reveal that the CPU time consumed by the proposed family of methods is only about 1.6% of that consumed by the HHT-α method for a 125-DOF system, dropping to 0.16% for a 1000-DOF system. The saving of computational effort is thus very significant.
Keywords: structure-dependent integration method, nonlinear dynamic analysis, unconditional stability, numerical dissipation, accuracy
Procedia PDF Downloads 637
20307 On Periodic Integer-Valued Moving Average Models
Authors: Aries Nawel, Bentarzi Mohamed
Abstract:
This paper deals with the study of some probabilistic and statistical properties of a periodic integer-valued moving average model (PINMA_S(q)). Closed forms of the mean, the second moment, and the periodic autocovariance function are obtained, and the time reversibility of the model is discussed in detail. Moreover, estimates of the underlying parameters are obtained by the Yule-Walker method, the conditional least squares (CLS) method, and the weighted conditional least squares (WCLS) method. A simulation study is carried out to evaluate the performance of the estimation methods, and an application to a real data set is provided.
Keywords: periodic integer-valued moving average, periodically correlated process, time reversibility, count data
Procedia PDF Downloads 200
20306 A Physical Treatment Method as a Prevention Method for Barium Sulfate Scaling
Authors: M. A. Salman, G. Al-Nuwaibit, M. Safar, M. Rughaibi, A. Al-Mesri
Abstract:
Barium sulfate (BaSO₄) is a hard scale that usually precipitates on equipment surfaces in many industrial systems, such as oil and gas production, desalination, and cooling and boiler operation. It is extremely resistant to both chemical and mechanical cleaning, which makes BaSO₄ a problematic and expensive scale. Although barium ions are present in most natural waters at very low concentrations, as low as 0.008 mg/l, they can cause scaling problems in the presence of a high concentration of sulfate ions or when incompatible waters are mixed, as with oilfield produced water. The scaling potential of BaSO₄ was calculated and compared for seawater at the intakes of seven desalination plants in Kuwait, brine water, and Kuwaiti oilfield produced water, and the location most favorable with regard to barium sulfate scaling was identified. Finally, a physical treatment method (magnetic treatment) and a chemical treatment method were used to control BaSO₄ scaling in saturated solutions at different operating temperatures, flow velocities, feed pHs, and magnetic field strengths. The results of the two methods are discussed, and the more economical one with reasonable performance, the physical treatment method, is recommended.
Keywords: magnetic field strength, flow velocity, retention time, barium sulfate
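Scaling potential of this kind is commonly screened with the saturation index; the definition below is standard and given for context, with a commonly quoted solubility product for BaSO₄ near 1.1 × 10⁻¹⁰ at 25°C (an assumed literature value, not one from the abstract).

```latex
\[
SI = \log_{10}\frac{a_{\mathrm{Ba}^{2+}}\, a_{\mathrm{SO}_4^{2-}}}{K_{sp}},
\qquad SI > 0 \;\Rightarrow\; \text{supersaturated, scaling likely}
\]
```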
Procedia PDF Downloads 266
20305 Simulation Study of the Microwave Heating of the Hematite and Coal Mixture
Authors: Prasenjit Singha, Sunil Yadav, Soumya Ranjan Mohantry, Ajay Kumar Shukla
Abstract:
The temperature distribution in hematite ore mixed with 7.5% coal was predicted by solving a 1-D heat conduction equation using an implicit finite difference approach. A square slab of 20 cm × 20 cm was considered, with the coal assumed to be uniformly mixed with the hematite ore, and the equations were solved using MATLAB 2018a. Convective and radiative boundary conditions on the 1-D slab are also considered. The temperature distribution inside the hematite slab was obtained by considering the microwave heating time, thermal conductivity, heat capacity, carbon percentage, sample dimensions, and many other factors, such as the penetration depth, permittivity, and permeability of the coal and hematite ore mixture. The resulting temperature profile can be used as a guiding tool for optimizing the microwave-assisted carbothermal reduction process. The analysis of the hematite slab was extended to other dimensions as well, viz., 1 cm × 1 cm, 5 cm × 5 cm, 10 cm × 10 cm, and 20 cm × 20 cm. The model predictions are in good agreement with experimental results.
Keywords: hematite ore, coal, microwave processing, heat transfer, implicit method, temperature distribution
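A minimal sketch of an implicit (backward Euler) finite-difference scheme of the kind described, with a volumetric microwave source term. Material properties, source strength, and the simple Dirichlet boundaries are illustrative stand-ins for the paper's convective and radiative conditions.

```python
# Implicit 1-D heat conduction with a volumetric microwave source:
# rho*cp*dT/dt = k*d2T/dx2 + q. Backward Euler solve at each step.
import numpy as np

L, nx, dt, steps = 0.20, 101, 1.0, 600        # 20 cm slab, 1 s steps, 10 min
k, rho, cp, q = 1.5, 3500.0, 800.0, 5.0e5     # W/m.K, kg/m3, J/kg.K, W/m3
dx = L / (nx - 1)
r = k * dt / (rho * cp * dx**2)

# Assemble (I - r*D2) for interior nodes; boundary nodes held at 300 K
A = np.eye(nx)
for i in range(1, nx - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1 + 2 * r, -r

T = np.full(nx, 300.0)
for _ in range(steps):
    b = T + dt * q / (rho * cp)               # add source contribution
    b[0] = b[-1] = 300.0                      # Dirichlet boundary values
    T = np.linalg.solve(A, b)

print("peak temperature [K]:", round(T.max(), 1))
```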
Procedia PDF Downloads 165
20304 Early Detection of Lymphedema in Post-Surgery Oncology Patients
Authors: Sneha Noble, Rahul Krishnan, Uma G., D. K. Vijaykumar
Abstract:
Breast-cancer-related lymphedema is a major problem that affects many women. Lymphedema is the swelling, generally of the arms or legs, caused by the removal of or damage to lymph nodes as a part of cancer treatment. Treating it at the earliest possible stage is the best way to manage the condition and prevent it from leading to pain, recurrent infection, reduced mobility, and impaired function. This project therefore focuses on multi-modal approaches to identify the risk of lymphedema in post-surgical oncology patients and prevent it at the earliest stage. The Kinect IR sensor is utilized to capture images of the body, and after image processing, the region of interest is obtained. Voxelization then provides volume measurements in the pre-operative and post-operative periods, and a mathematical model enables comparison of the values. Clinicopathological data of patients will be investigated to assess the factors responsible for the development of lymphedema and its risks.
Keywords: Kinect IR sensor, lymphedema, voxelization, lymph nodes
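Once the region of interest is voxelized, the volume estimate reduces to counting occupied voxels; a minimal sketch, with the occupancy grid and voxel pitch as placeholders rather than values from the project:

```python
# Limb volume from a voxelized depth-sensor segmentation. The occupancy
# grid and voxel pitch below are illustrative placeholders.
import numpy as np

voxel_mm = 5.0                                   # voxel edge length in mm
grid = np.zeros((40, 40, 120), dtype=bool)       # segmented region of interest
grid[10:30, 10:30, :] = True                     # stand-in for the limb mask

volume_ml = grid.sum() * voxel_mm**3 / 1000.0    # mm^3 -> mL
print(f"estimated volume: {volume_ml:.0f} mL")
```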
Procedia PDF Downloads 137
20303 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach
Authors: Kristina Pflug, Markus Busch
Abstract:
Being able to predict polymer properties and processing behavior from the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, its melt flow behavior, is determined as a function of the previously determined polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscometry, and a multi-angle light scattering detector is applied; it serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be excellent, especially considering that the applied multi-scale modelling approach does not involve fitting parameters to the data. This validates the suggested approach and proves its universality at the same time. In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale, and sensitivity analysis for systematically varied process conditions is easily feasible. The developed multi-scale modelling approach thus makes it possible to predict and design LDPE processing behavior simply from process conditions such as feed streams and inlet temperatures and pressures.
Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology
Procedia PDF Downloads 123
20302 Getting to Know the Types of Asphalt, Its Manufacturing and Processing Methods and Its Application in Road Construction
Authors: Hamid Fallah
Abstract:
Asphalt is generally a mixture of stone materials with continuous grading and a binder, which is usually bitumen. Asphalt is made in different forms according to its use; the most familiar type is hot asphalt, or hot asphalt concrete. Stone materials usually make up more than 90% of the asphalt mixture and therefore have a significant impact on the quality of the resulting asphalt. According to the method of application and mixing, asphalt is divided into three categories: hot asphalt, protective asphalt, and cold asphalt. Cold mix asphalt is a mixture of stone materials and cutback bitumen or bitumen emulsion whose raw materials are mixed at ambient temperature. In some types of cold asphalt, the bitumen may be heated as necessary, but the other materials are mixed with the bitumen without heating. Protective asphalts are used to make the roadbed impermeable, to increase its abrasion and skid resistance, and to temporarily improve existing asphalt and concrete surfaces. This type of paving is very economical compared to hot asphalt owing to the speed and ease of application and the limited need for asphalt machinery and equipment. The present article, prepared as a descriptive library study, introduces asphalt, its types and characteristics, and its applications.
Keywords: asphalt, type of asphalt, asphalt concrete, sulfur concrete, bitumen in asphalt, sulfur, stone materials
Procedia PDF Downloads 65
20301 The Diagnostic Utility and Sensitivity of the Xpert® MTB/RIF Assay in Diagnosing Mycobacterium tuberculosis in Bone Marrow Aspirate Specimens
Authors: Nadhiya N. Subramony, Jenifer Vaughan, Lesley E. Scott
Abstract:
In South Africa, the World Health Organisation estimated 454,000 new cases of Mycobacterium tuberculosis (M.tb) infection (MTB) in 2015. Disseminated tuberculosis arises from haematogenous spread and seeding of the bacilli in extrapulmonary sites. The gold standard for the detection of MTB in bone marrow is TB culture, which has an average turnaround time of 6 weeks. Histological examination of trephine biopsies to diagnose MTB is also delayed, owing mainly to the 5-7 day processing period prior to microscopic examination. Adding to the diagnostic delay is the non-specific nature of granulomatous inflammation, which is the hallmark of MTB involvement of the bone marrow. A Ziehl-Neelsen stain (which highlights acid-fast bacilli) is therefore mandatory to confirm the diagnosis but can take up to 3 days for processing and evaluation. Owing to this delay in diagnosis, many patients are lost to follow-up or remain untreated while results are awaited, encouraging the spread of undiagnosed TB. The Xpert® MTB/RIF (Cepheid, Sunnyvale, CA) is the molecular test used in the South African national TB program as the initial diagnostic test for pulmonary TB. This study investigates the optimisation and performance of the Xpert® MTB/RIF on bone marrow aspirate (BMA) specimens, a first since the introduction of the assay in the diagnosis of extrapulmonary TB. BMA received for immunophenotypic analysis, as part of the investigation of disseminated MTB or of the evaluation of cytopenias in immunocompromised patients, were used. Processing of BMA on the Xpert® MTB/RIF was optimised to ensure that bone marrow in EDTA and heparin did not inhibit the PCR reaction. Inactivated M.tb was spiked into the clinical bone marrow specimens and into distilled water (as a control), and a volume of 500 μl and an incubation time of 15 minutes with sample reagent were investigated as the processing protocol. A total of 135 BMA specimens had sufficient residual volume for Xpert® MTB/RIF testing; however, 22 specimens (16.3%) were not included in the final statistical analysis because an adequate trephine biopsy and/or TB culture was not available. Xpert® MTB/RIF testing was not affected by BMA material in the presence of heparin or EDTA, but the overall detection of MTB in BMA was low compared to histology and culture. The sensitivity of the Xpert® MTB/RIF compared to both histology and culture was 8.7% (95% confidence interval (CI): 1.07-28.04%), and its sensitivity compared to histology only was 11.1% (95% CI: 1.38-34.7%). The specificity of the Xpert® MTB/RIF was 98.9% (95% CI: 93.9-99.7%). Although the Xpert® MTB/RIF generates a result faster than histology and TB culture and is less expensive than culture and drug susceptibility testing, its low sensitivity precludes its use for the diagnosis of MTB in bone marrow aspirate specimens and warrants alternative or additional testing to optimise the assay.
Keywords: bone marrow aspirate, extrapulmonary TB, low sensitivity, Xpert® MTB/RIF
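For reference, the definitions behind the reported figures are given below; the counts shown are hypothetical back-calculations consistent with the percentages, not data from the paper.

```latex
\[
\text{sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{specificity} = \frac{TN}{TN + FP}
\]
% Hypothetical counts consistent with the reported values:
\[
\frac{2}{23} \approx 8.7\%, \qquad
\frac{2}{18} \approx 11.1\%, \qquad
\frac{89}{90} \approx 98.9\%
\]
```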
Procedia PDF Downloads 168
20300 Estimation and Validation of Free Lime Analysis of Clinker by Quantitative Phase Analysis Using X-Ray Diffraction
Authors: Suresh Palla, Kalpna Sharma, Gaurav Bhatnagar, S. K. Chaturvedi, B. N. Mohapatra
Abstract:
Determining the free lime content is especially important for judging the reactivity of the raw materials and the clinker quality. The free lime limit is not the same for all cements; it depends on several factors, especially the temperature reached during burning and the grain size distribution of the cement after grinding. Estimation of free lime by the conventional method is influenced by the presence of portlandite and can misrepresent the actual free lime content of the clinker during quality checks. To verify whether the product quality lies within the limits of the standard specifications, a reliable, precise, and highly reproducible way to quantify the relative phase abundances in Portland cement clinker and Portland cements is to use X-ray diffraction (XRD) in combination with the Rietveld method. In the present study, a methodology using XRD is proposed to validate the free lime results obtained by the conventional method. The XRD and TG/DTA results confirm the presence of portlandite in the clinker, supporting the assessment of the free lime results obtained by the conventional method.
Keywords: free lime, quantitative phase analysis, conventional method, X-ray diffraction
Procedia PDF Downloads 135
20299 Using Derivative Free Method to Improve the Error Estimation of Numerical Quadrature
Authors: Chin-Yun Chen
Abstract:
Numerical integration is an essential tool for deriving different physical quantities in engineering and science. The effectiveness of a numerical integrator depends on different factors, the crucial one being the error estimation. This work presents an error estimator that incorporates a derivative-free method to improve the performance of verified numerical quadrature.
Keywords: numerical quadrature, error estimation, derivative free method, interval computation
Procedia PDF Downloads 462
20298 Virtual 3D Environments for Image-Based Navigation Algorithms
Authors: V. B. Bastos, M. P. Lima, P. R. G. Kurka
Abstract:
This paper addresses the creation of virtual 3D environments for the study and development of image-based mobile robot navigation algorithms and techniques, which need to operate robustly and efficiently. These algorithms can be tested physically, by conducting experiments on a prototype, or by numerical simulation. Current simulation platforms for robotic applications do not have flexible and up-to-date models for image rendering and are unable to reproduce complex light effects and materials. Thus, it is necessary to create a test platform that integrates sophisticated simulations of real navigation environments with data and image processing. This work proposes the development of a high-level platform for building 3D model environments and testing image-based navigation algorithms for mobile robots. Techniques for applying texture and lighting effects were used so that rendered images accurately represent their real-world counterparts. The application will integrate image processing scripts, trajectory control, dynamic modeling, and simulation techniques for physics representation and picture rendering with the open-source 3D creation suite Blender.
Keywords: simulation, visual navigation, mobile robot, data visualization
Procedia PDF Downloads 254
20297 Investigations of Bergy Bits and Ship Interactions in Extreme Waves Using Smoothed Particle Hydrodynamics
Authors: Mohammed Islam, Jungyong Wang, Dong Cheol Seo
Abstract:
The Smoothed Particle Hydrodynamics (SPH) method is a novel, meshless, Lagrangian numerical method that has shown promise in accurately predicting the hydrodynamics of water-structure interactions in violent flow conditions. The main goal of this study is to build confidence in the versatility of an SPH-based tool, so that it can complement physical model testing capabilities and support research needs in the performance evaluation of ships and offshore platforms exposed to extreme and harsh environments. In the current endeavor, an open-source SPH-based tool was used and validated for modeling and predicting the hydrodynamic interactions of a 6-DOF ship and bergy bits. The study involved modeling a modern generic drillship and simplified bergy bits in floating and towing scenarios and in regular and irregular wave conditions. The predictions were validated using model-scale measurements of a moored ship towed at multiple oblique angles approaching a floating bergy bit in waves. Overall, this study provides a thorough comparison between the model-scale measurements and the predictions of the SPH tool in terms of performance and accuracy. The SPH-predicted ship motions and forces were mostly within ±5% of the measurements, and the velocity and pressure distributions and wave characteristics over the free surface depict realistic interactions of the wave, the ship, and the bergy bit. This work also identifies and presents several challenges in preparing the input file, particularly in defining the mass properties of complex geometry, the computational requirements, and the post-processing of the outcomes.
Keywords: SPH, ship and bergy bit, hydrodynamic interactions, model validation, physical model testing
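The core SPH approximation, in its standard form, interpolates a field A at particle i as a kernel-weighted sum over neighboring particles j within smoothing length h:

```latex
\[
A(\mathbf{r}_i) \approx \sum_j m_j \,\frac{A_j}{\rho_j}\,
    W\!\left(|\mathbf{r}_i - \mathbf{r}_j|,\, h\right), \qquad
\rho_i = \sum_j m_j\, W\!\left(|\mathbf{r}_i - \mathbf{r}_j|,\, h\right)
\]
```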
Procedia PDF Downloads 130
20296 The Grammatical Dictionary Compiler: A System for Kartvelian Languages
Authors: Liana Lortkipanidze, Nino Amirezashvili, Nino Javashvili
Abstract:
The purpose of a grammatical dictionary is to provide information on the morphological and syntactic characteristics of the basic word in a dictionary entry. Electronic grammatical dictionaries are used as tools for automated morphological analysis in text processing. The Georgian Grammatical Dictionary should contain grammatical information for each word: part of speech, type of declension/conjugation, grammatical forms of the word (paradigm), and alternative variants of the basic word/lemma. In this paper, we present a system for compiling the Georgian Grammatical Dictionary automatically and propose dictionary-based methods for extending grammatical lexicons. The input lexicon contains only a small number of words with identical grammatical features. The extension is based on similarity measures between the features of words; more precisely, we add to the extended lexicon words that are similar to those already in the grammatical dictionary. Our dictionaries are corpus-based, and for the compilation we introduce a method for the lemmatization of unknown words, i.e., words of which neither the full form nor the lemma is in the grammatical dictionary.
Keywords: acquisition of lexicon, Georgian grammatical dictionary, lemmatization rules, morphological processor
Procedia PDF Downloads 142
20295 Improvement of Parallel Compressor Model in Dealing Outlet Unequal Pressure Distribution
Authors: Kewei Xu, Jens Friedrich, Kevin Dwinger, Wei Fan, Xijin Zhang
Abstract:
The Parallel Compressor Model (PCM) is a simplified approach to predicting compressor performance with inlet distortions. The PCM calculation assumes that the sub-compressors' outlet static pressure is uniform, which simplifies the calculation procedure. However, if the compressor's outlet duct is not long and straight, this assumption frequently induces errors of 10% to 15%. This paper provides a revised PCM calculation method that corrects this error. The revised method employs the energy, momentum, and continuity equations to acquire the needed parameters, replacing the equal-static-pressure assumption. The revised PCM is applied to two compression systems with different blade types, and its predictions of their performance under non-uniform inlet conditions are used to evaluate the method's efficiency. Validation against experimental data shows that, although small deviations occur, the calculated results agree well with the measurements, with errors ranging from 0.1% to 3%. This demonstrates that the revised PCM calculation method possesses great advantages in predicting the performance of a distorted compressor with a short exhaust duct.
Keywords: parallel compressor model (PCM), revised calculation method, inlet distortion, outlet unequal pressure distribution
Procedia PDF Downloads 329
20294 Experimental Investigation of Performance Anode Side of PEM Fuel Cell with Spin Method Coated with YSZ+SDC
Authors: Gürol Önal, Kevser Dinçer, Salih Yayla
Abstract:
In this study, the performance of a proton exchange membrane (PEM) fuel cell was experimentally investigated. The anode side of the PEM fuel cell was coated with YSZ+SDC by the spin method: a solution of 0.1 g yttria-stabilized zirconia (YSZ), 0.1 g samarium-doped ceria (SDC), and 10 mL methanol was prepared, drawn into a micropipette, and applied to the anode side by spin coating. In the experimental study, the current, voltage, and power performance before and after coating were recorded and compared. It was found that the efficiency of the PEM fuel cell increases after coating with YSZ+SDC.
Keywords: fuel cell, polymer electrolyte membrane (PEM), membrane, spin method
Procedia PDF Downloads 561
20293 A Relationship Extraction Method from Literary Fiction Considering Korean Linguistic Features
Authors: Hee-Jeong Ahn, Kee-Won Kim, Seung-Hoon Kim
Abstract:
Knowledge of the relationships between characters can help readers understand the overall story or plot of a literary fiction. In this paper, we present a method for extracting specific relationships between characters from Korean literary fiction. Generally, methods for extracting relationships between characters in text are statistical or computational methods based on the sentence distance between characters, without considering Korean linguistic features. Furthermore, such methods have difficulty extracting directed relationships, such as one-sided love, because they consider only the weight of a relationship and not its direction. Therefore, in order to identify specific relationships between characters, we propose a statistical method that considers linguistic features, such as syntactic patterns and speech verbs in Korean. The result of our method is represented by a weighted directed graph of the relationships between the characters. Furthermore, we expect that the proposed method could be applied to relationship analysis between characters in other content, such as movies or TV dramas.
Keywords: data mining, Korean linguistic feature, literary fiction, relationship extraction
Procedia PDF Downloads 379
20292 Analysis of CO₂ Capture Products from Carbon Capture and Utilization Plant
Authors: Bongjae Lee, Beom Goo Hwang, Hye Mi Park
Abstract:
CO₂ capture products manufactured in a Carbon Capture and Utilization (CCU) plant that collects CO₂ directly from power plants require accurate measurement of the amount of CO₂ captured. For this purpose, two weight-loss tests were carried out, and one sample was analyzed using a carbon dioxide quantification device. First, ignition loss analysis was performed by measuring the weight of the sample after an initial treatment at 550°C and then confirming the loss on ignition at 950°C. Second, in the thermogravimetric analysis, the measurement was divided into two ranges, 40-500°C and 500-800°C, to confirm the weight loss. The results of the ignition loss analysis and the thermogravimetric analysis were found to be very similar. However, the final temperature of the ignition loss analysis was 950°C, 150°C higher than the 800°C of the thermogravimetric method, so the weight loss measured by the ignition loss method was 3-4% higher. In addition, the tendency of the CO₂ content to increase with longer reaction times was confirmed by both methods. Third, the results of the wet titration method using the carbon dioxide quantification device were found to be significantly lower than those of the weight-loss methods. Based on the results obtained through these three analysis methods, we will establish a method to analyze the amount of CO₂ accurately. Acknowledgements: This work was supported by the Korea Institute of Energy Technology Evaluation and Planning (No. 20152010201850).
Keywords: carbon capture and utilization, CCU, CO2, CO2 capture products, analysis method
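For reference, the loss-on-ignition definition implied by the two hold temperatures, with the TGA window treated analogously; attributing the 500-800°C mass loss to CO₂ release is an assumption stated for illustration, not a detail from the abstract.

```latex
\[
\mathrm{LOI}\,(\%) = \frac{m_{550} - m_{950}}{m_{550}} \times 100, \qquad
\mathrm{CO_2}\,(\%) \approx \frac{\Delta m_{500\text{-}800}}{m_0} \times 100
\ \ \text{(TGA window)}
\]
```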
Procedia PDF Downloads 215
20291 The Proposal of Modification of California Pipe Method for Inclined Pipe
Authors: Wojciech Dąbrowski, Joanna Bąk, Laurent Solliec
Abstract:
Technical and technological progress and the constant development of methods and devices applied to sanitary engineering are indispensable nowadays. Sanitary engineering involves flow measurements for water and wastewater, and precise measurement is pivotal for further actions such as monitoring. There are many methods and techniques of flow measurement in sanitary engineering: weirs and flumes are well-known and commonly used, but alternative methods also exist, some very simple and others relying on advanced technology. An old method combined with new technology can be more useful than before. This paper describes a substitute method of flow gauging, the California pipe method, and proposes a modification of this method for inclined pipes. Examining the possibility of improving and developing old methods is the direction of this investigation.
Keywords: California pipe, sewerage, flow rate measurement, water, wastewater, improve, modification, hydraulic monitoring, stream
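For context, the classical California pipe (Vanleer) discharge formula in its commonly cited US-customary form, where a is the drop from the pipe crown to the water surface at the outlet and d is the internal diameter (Q in ft³/s, a and d in ft); the paper's proposed modification for inclined pipes is not reproduced here.

```latex
\[
Q = 8.69 \left(1 - \frac{a}{d}\right)^{1.88} d^{\,2.48}
\]
```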
Procedia PDF Downloads 438