Search results for: subspace rotation algorithm
1973 Kalman Filter for Bilinear Systems with Application
Authors: Abdullah E. Al-Mazrooei
Abstract:
In this paper, we present a new kind of bilinear system in state-space form. The evolution of this system depends on the product of the state vector with itself. The well-known Lotka-Volterra and Lorenz models are special cases of this new model. We also present a generalization of the Kalman filter which is suitable for the new bilinear model. An application to real measurements is introduced to illustrate the efficiency of the proposed algorithm.
Keywords: bilinear systems, state space model, Kalman filter, application, models
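The abstract does not reproduce the model equations, but a minimal sketch of one plausible bilinear form, x⁺ = A x + B (x ⊗ x) + w, together with an extended-Kalman-style predict/update step, might look as follows. The quadratic term, the matrices, and the linear measurement z = H x + v are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def f(x, A, B):
    """Hypothetical bilinear state evolution: x+ = A x + B (x kron x)."""
    return A @ x + B @ np.kron(x, x)

def jac_f(x, A, B):
    """Jacobian of f: A + B d(x kron x)/dx, where d(x kron x)/dx = I kron x + x kron I."""
    n = x.size
    xc = x.reshape(n, 1)
    return A + B @ (np.kron(np.eye(n), xc) + np.kron(xc, np.eye(n)))

def ekf_step(x, P, z, A, B, H, Q, R):
    """One extended-Kalman predict/update step for the assumed bilinear model."""
    F = jac_f(x, A, B)                      # linearize around the current estimate
    x_pred = f(x, A, B)
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new
```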
Procedia PDF Downloads 441
1972 The Influence of Phosphate Fertilizers on Radiological Situation of Cultivated Lands: ²¹⁰Po, ²²⁶Ra, ²³²Th, ⁴⁰K and ¹³⁷Cs Concentrations in Soil
Authors: Grzegorz Szaciłowski, Marta Konop, Małgorzata Dymecka, Jakub Ośko
Abstract:
In 1996, the European Council Directive 96/29/EURATOM identified phosphate fertilizers as having a potentially negative influence on the environment from the radiation protection point of view. Fertilizers, along with irrigation and crop rotation, were the milestones that allowed agricultural productivity to increase. First based on natural materials such as compost, manure, and fish processing waste, and since the 19th century created synthetically, fertilizers caused a boom in crop yield and helped to propel global food production, especially after World War II. In this work, the concentrations of ²¹⁰Po, ²²⁶Ra, ²³²Th, ⁴⁰K, and ¹³⁷Cs in selected fertilizers and soil samples were determined. The results were used to calculate the annual addition of natural radionuclides and the increment of external radiation exposure caused by the use of the studied fertilizers. Soils intended for different types of crops were sampled in early spring, before any vegetation had appeared. The analysed fertilizers were those with which the soil had previously been fertilized. For gamma-emitting radionuclides, a high-purity germanium detector GX3520 from Canberra was used. The polonium concentration was determined by radiochemical separation followed by measurement by means of alpha spectrometry. The spectrometer used in this study was equipped with a 450 mm² PIPS detector from Canberra. The obtained results showed significant differences in radionuclide composition between phosphate and nitrogenous fertilizers (e.g., the radium equivalent activity for a phosphate fertilizer was 207.7 Bq/kg, compared to <5.6 Bq/kg for a nitrogenous fertilizer). The calculated increase in external radiation exposure due to the use of phosphate fertilizer ranged between 3.4 and 5.4 nGy/h, which represents up to 10% of the Polish average outdoor exposure due to terrestrial gamma radiation (45 nGy/h).
Keywords: ²¹⁰Po, alpha spectrometry, exposure, gamma spectrometry, phosphate fertilizer, soil
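The radium equivalent activity and outdoor dose rate quoted above are conventionally computed from the measured specific activities. A small sketch using the standard UNSCEAR-style coefficients, which are assumed here (the paper may use slightly different ones), with made-up input activities:

```python
def radium_equivalent(a_ra, a_th, a_k):
    """Radium equivalent activity (Bq/kg) from Ra-226, Th-232, K-40 activities."""
    return a_ra + 1.43 * a_th + 0.077 * a_k

def absorbed_dose_rate(a_ra, a_th, a_k):
    """Outdoor absorbed gamma dose rate in air (nGy/h), UNSCEAR-style factors."""
    return 0.462 * a_ra + 0.604 * a_th + 0.0417 * a_k

# e.g. a fertilizer measured at (Ra, Th, K) = (120, 40, 400) Bq/kg (made-up
# values) gives Raeq = 120 + 1.43*40 + 0.077*400 = 208 Bq/kg
print(radium_equivalent(120, 40, 400), absorbed_dose_rate(120, 40, 400))
```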
Procedia PDF Downloads 300
1971 Simulation of Nano Drilling Fluid in an Extended Reach Well
Authors: Lina Jassim, Robiah Yunus, Amran Salleh
Abstract:
Since nano particles have been assessed as thermal stabilizers, rheology enhancers, and ecologically safer additives, nano drilling fluid can be utilized to overcome the complexity of hole cleaning in the highly deviated intervals of extended reach wells. The eccentric annular flow is a flow with special considerations; it forms a vital part of drilling fluid flow analysis in extended reach wells. In this work, eccentric, dual-phase flow (different types of rock cuttings of different sizes blended with nano fluid) through a horizontal (extended reach) well is simulated with the help of the CFD package Fluent. In horizontal wells, flow occurs under an adverse pressure gradient, which makes the particles inside susceptible to reversed flow; the flow therefore has to be analyzed in a three-dimensional manner. Moreover, the non-Newtonian behavior of the nano fluid makes the problem challenging in both numerical and physical aspects. The primary objective of the work is to establish a relationship between different flow characteristics and the speed of inner wall rotation. The nano fluid flow characteristics include the swirl of the flow and its effect on wellbore cleaning ability, the wall shear stress and its effect on the fluid's capacity to suspend and carry rock cuttings, the axial velocity and its effect on the transportation of rock cuttings to the wellbore surface, and finally the pressure drop and its effect on the management of drilling pressure. The importance of the eccentricity of the inner cylinder is analyzed as part of this. Practical horizontal well flows contain a considerable amount of particles (rock cuttings) with moderate axial velocity, so verifying the nano drilling fluid's ability to carry and transfer cuttings particles in the highly deviated eccentric annular flow is also of utmost importance.
Keywords: non-Newtonian, dual phase, eccentric annular, CFD
Procedia PDF Downloads 434
1970 Automatic Identification of Pectoral Muscle
Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina
Abstract:
Mammography is a worldwide image modality used to diagnose breast cancer, even in asymptomatic women. Due to its wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to sixfold increase in their risk of developing breast cancer. Therefore, studies have been made to accurately quantify mammographic breast density. In clinical routine, radiologists perform image evaluations through BIRADS (Breast Imaging Reporting and Data System) assessment. However, this method has inter- and intraindividual variability. An automatic, objective method to measure breast density could relieve the radiologist's workload by providing a preliminary opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to fibroglandular tissue, which makes it hard to automatically quantify mammographic breast density. A pre-processing step is therefore needed to segment the pectoral muscle, which may otherwise be erroneously quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique digital mammograms from São Paulo Medical School. This study was developed with ethical approval from the authors' institutions and national review panels under protocol number 3720-2010. An algorithm was developed, on the Matlab® platform, for the pre-processing of images. The algorithm uses image processing tools to automatically segment and extract the pectoral muscle from mammograms. Firstly, a thresholding technique was applied to remove non-biological information from the image. Then the Hough transform is applied to find the boundary of the pectoral muscle, followed by the active contour method, whose seed is placed on the pectoral muscle boundary found by the Hough transform. An experienced radiologist also manually performed the pectoral muscle segmentation. Both methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison between the manual and the developed automatic method presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of the proposed segmentation method. The Bland-Altman statistics compared both methods in relation to the area (mm²) of the segmented pectoral muscle. The statistics showed data within the 95% confidence interval, supporting the accuracy of the segmentation compared to the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and free from intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. The segmentation of the pectoral muscle is very important for further quantification of the fibroglandular tissue volume present in the breast.
Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle
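The three-step pipeline described (thresholding, Hough transform, active contour seeded on the Hough line) can be sketched with scikit-image; the filter sizes, snake parameters, and the Jaccard comparison below are illustrative assumptions, not the paper's Matlab implementation:

```python
import numpy as np
from skimage.feature import canny
from skimage.filters import gaussian, threshold_otsu
from skimage.segmentation import active_contour
from skimage.transform import hough_line, hough_line_peaks

def segment_pectoral(mammo):
    """Sketch of the pipeline on a 2-D MLO mammogram array."""
    # 1) thresholding removes non-biological background (air, labels)
    breast_mask = mammo > threshold_otsu(mammo)
    # 2) Hough transform finds the dominant straight line bounding the
    #    pectoral muscle (line model: col*cos(t) + row*sin(t) = d)
    h, thetas, dists = hough_line(canny(gaussian(mammo, 2)))
    _, angles, ds = hough_line_peaks(h, thetas, dists, num_peaks=1)
    # 3) seed a snake along that line and let it relax onto the boundary
    #    (near-horizontal Hough lines are ignored in this simple sketch)
    rows = np.linspace(0, mammo.shape[0] - 1, 200)
    cols = (ds[0] - rows * np.sin(angles[0])) / np.cos(angles[0])
    snake = active_contour(gaussian(mammo, 3), np.stack([rows, cols], axis=1),
                           alpha=0.015, beta=10, gamma=0.001)
    return breast_mask, snake

def jaccard(auto_mask, manual_mask):
    """Jaccard similarity used to compare automatic vs. manual masks."""
    inter = np.logical_and(auto_mask, manual_mask).sum()
    return inter / np.logical_or(auto_mask, manual_mask).sum()
```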
Procedia PDF Downloads 350
1969 Two-Level Separation of High Air Conditioner Consumers and Demand Response Potential Estimation Based on Set Point Change
Authors: Mehdi Naserian, Mohammad Jooshaki, Mahmud Fotuhi-Firuzabad, Mohammad Hossein Mohammadi Sanjani, Ashknaz Oraee
Abstract:
In recent years, the development of communication infrastructure and smart meters has facilitated the utilization of demand-side resources, which can enhance the stability and economic efficiency of power systems. Direct load control programs can play an important role in the utilization of demand-side resources in the residential sector. However, the investment required for installing control equipment can be a limiting factor in the development of such demand response programs. Thus, selection of consumers with higher potential is crucial to the success of a direct load control program. Heating, ventilation, and air conditioning (HVAC) systems, which feature relatively high flexibility due to the heat capacity of buildings, make up a major part of household consumption. Considering that the consumption of HVAC systems depends highly on the ambient temperature, and bearing in mind the high investment required for control systems enabling direct load control demand response programs, this paper presents a solution to uncover consumers with high air conditioner demand among a large number of consumers and to measure the demand response potential of such consumers. This can pave the way for estimating the investment needed to implement direct load control programs for residential HVAC systems and for estimating the demand response potential in a distribution system. In doing so, we first cluster consumers into several groups based on the correlation coefficients between hourly consumption data and hourly temperature data, using the K-means algorithm. Then, by applying a recently proposed algorithm to the hourly consumption and temperature data, consumers with high air conditioner consumption are identified. Finally, the demand response potential of such consumers is estimated based on the equivalent desired temperature setpoint changes.
Keywords: communication infrastructure, smart meters, power systems, HVAC system, residential HVAC systems
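A sketch of the first step, clustering consumers by the correlation of their hourly load with temperature; the synthetic data and the cluster count are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
temperature = 20 + 10 * rng.random(24 * 30)          # a month of hourly temps
consumption = rng.random((500, temperature.size))    # 500 consumers' hourly load

# Pearson correlation of each consumer's load profile with temperature:
# AC-heavy households track temperature closely
corr = np.array([np.corrcoef(c, temperature)[0, 1] for c in consumption])

labels = KMeans(n_clusters=4, n_init=10, random_state=0) \
    .fit_predict(corr.reshape(-1, 1))
# the cluster with the highest mean correlation holds the likely high-AC
# consumers, whose DR potential is then estimated from setpoint changes
high_ac = max(range(4), key=lambda k: corr[labels == k].mean())
```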
Procedia PDF Downloads 67
1968 Failure Analysis of the Gasoline Engines Injection System
Authors: Jozef Jurcik, Miroslav Gutten, Milan Sebok, Daniel Korenciak, Jerzy Roj
Abstract:
The paper presents research results on an electronic fuel injection system which can be used for the diagnostics of automotive systems. The construction and operation of a typical fuel injection system are described and its electronic part is analyzed. A method is also proposed for the detection of injector malfunction, based on the analysis of differential current or voltage characteristics. In order to detect the fault state, a self-learning process with an appropriate self-learning algorithm is needed.
Keywords: electronic fuel injector, diagnostics, measurement, testing device
Procedia PDF Downloads 552
1967 DQN for Navigation in Gazebo Simulator
Authors: Xabier Olaz Moratinos
Abstract:
Drone navigation is critical, particularly during the initial phases, such as the initial ascent, where pilots may fail due to strong external interferences that could potentially lead to a crash. In this ongoing work, a drone has been successfully trained to perform an ascent of up to 6 meters under external disturbances pushing it at up to 24 mph, with the DQN algorithm managing the external forces affecting the system. It has been demonstrated that the system can control its height, position, and stability about all three axes (roll, pitch, and yaw) throughout the process. The learning process is carried out in the Gazebo simulator, which emulates the interferences, while ROS is used to communicate with the agent.
Keywords: machine learning, DQN, gazebo, navigation
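The abstract names the method without detail; below is a minimal, generic DQN value-update in PyTorch. The state encoding, network width, and action count are assumptions; in the actual work the agent exchanges states and actions with Gazebo through ROS.

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Maps the drone state (height, velocities, attitude, ...) to Q-values."""
    def __init__(self, state_dim=9, n_actions=6):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
    def forward(self, s):
        return self.net(s)

def dqn_loss(q, q_target, batch, gamma=0.99):
    """Standard TD(0) target on a replay batch (s, a, r, s2, done)."""
    s, a, r, s2, done = batch
    q_sa = q(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a) taken
    with torch.no_grad():                               # frozen target network
        target = r + gamma * (1.0 - done) * q_target(s2).max(1).values
    return nn.functional.mse_loss(q_sa, target)
```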
Procedia PDF Downloads 113
1966 Optimization of the Numerical Fracture Mechanics
Authors: H. Hentati, R. Abdelmoula, Li Jia, A. Maalej
Abstract:
In this work, we present numerical simulations of quasi-static crack propagation based on the variational approach. We perform numerical simulations of a piece of brittle material without an initial crack. An alternate minimization algorithm is used. Based on these numerical results, we determine the influence of numerical parameters on the location of the crack. We show the importance of trying to optimize the time of numerical computation and present a first attempt to develop a simple numerical method to optimize this time.
Keywords: fracture mechanics, optimization, variational approach, mechanics
Procedia PDF Downloads 606
1965 Towards Learning Query Expansion
Authors: Ahlem Bouziri, Chiraz Latiri, Eric Gaussier
Abstract:
The steady growth in the size of textual document collections is a key progress-driver for modern information retrieval techniques, whose effectiveness and efficiency are constantly challenged. Given a user query, the number of retrieved documents can be overwhelmingly large, hampering their efficient exploitation by the user. In addition, retaining only relevant documents in a query answer is of paramount importance for effectively meeting the user's needs. In this situation, the query expansion technique offers an interesting solution for obtaining a complete answer while preserving the quality of retained documents. This mainly relies on an accurate choice of the terms added to an initial query. Interestingly enough, query expansion takes advantage of large text volumes by extracting statistical information about index term co-occurrences and using it to make user queries better fit the real information needs. In this respect, a promising track consists in the application of data mining methods to extract dependencies between terms, namely a generic basis of association rules between terms. The key feature of our approach is a better trade-off between the size of the mining result and the conveyed knowledge. Thus, faced with the huge number of derived association rules, and in order to select the optimal combination of query terms from the generic basis, we propose to model the problem as a classification problem and solve it using a supervised learning algorithm such as SVM or k-means. For this purpose, we first generate a training set using a genetic-algorithm-based approach that explores the association rule space in order to find an optimal set of expansion terms, improving the MAP of the search results. The experiments were performed on the SDA 95 collection, a data collection for information retrieval. It was found that the results were better in terms of both MAP and NDCG. The main observation is that hybridizing text mining techniques and query expansion in an intelligent way allows us to incorporate the good features of all of them. As this is a preliminary attempt in this direction, there is large scope for enhancing the proposed method.
Keywords: supervised learning, classification, query expansion, association rules
Procedia PDF Downloads 325
1964 Automatic Vowel and Consonant's Target Formant Frequency Detection
Authors: Othmane Bouferroum, Malika Boudraa
Abstract:
In this study, a dual exponential model for CV formant transitions is derived from the locus theory of speech perception. Then, an algorithm for automatic detection of vowel and consonant target formant frequencies is developed and tested on real speech. The results show that vowels and consonants are detected through transitions rather than their small stable portions. Vowel reduction is also clearly observed in our data. These results are confirmed by observations made in perceptual experiments in the literature.
Keywords: acoustic invariance, coarticulation, formant transition, locus equation
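The abstract does not give the model's closed form; one plausible dual-exponential parameterization of a formant track, fitted with SciPy, is sketched below. The functional form, parameter values, and synthetic F2 data are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def dual_exp(t, f_target, a1, tau1, a2, tau2):
    """Hypothesized dual-exponential formant transition toward the vowel target."""
    return f_target + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# synthetic F2 track (Hz) for one CV token over an 80 ms transition
t = np.linspace(0.0, 0.08, 40)
rng = np.random.default_rng(0)
f2 = dual_exp(t, 1800, 400, 0.010, -150, 0.030) + rng.normal(0, 15, t.size)

p0 = [1800, 300, 0.01, -100, 0.03]          # rough initial guess
popt, _ = curve_fit(dual_exp, t, f2, p0=p0, maxfev=10000)
# popt[0] is the recovered vowel target; dual_exp(0, *popt) is the onset value
```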
Procedia PDF Downloads 271
1963 Application of Multivariate Statistics and Hydro-Chemical Approach for Groundwater Quality Assessment: A Study on Birbhum District, West Bengal, India
Authors: N. C. Ghosh, Niladri Das, Prolay Mondal, Ranajit Ghosh
Abstract:
Groundwater quality deterioration due to human activities has become a prime concern of modern life. The major aims of the study are to assess the spatial variation of groundwater quality and to identify the sources of groundwater chemicals and their impact on human health in the study area. Multivariate statistical techniques (cluster analysis and principal component analysis) and hydrochemical facies have been applied to groundwater quality data on 14 parameters from 107 sites distributed randomly throughout the Birbhum district. Five factors have been extracted using varimax rotation with Kaiser normalization. The first factor explains 27.61% of the total variance, with high positive loadings concentrated in TH, Ca, Mg, Cl and F (fluoride). In the studied region, due to the presence of the basaltic Rajmahal trap, fluoride contamination is high, and it has an adverse impact on human health, such as fluorosis. The second factor explains 24.41% of the total variance and includes Na, HCO₃, EC, and SO₄. The last, fifth factor explains 8.85% of the total variance and includes pH, which governs the acidic or alkaline character of the groundwater. Hierarchical cluster analysis (HCA) grouped the 107 sampling stations into two clusters, one with high pollution and another with less pollution. Moreover, hydrochemical facies plots, viz. the Wilcox diagram, Doneen's chart, and the USSL diagram, reveal the quality of the groundwater, such as its suitability for irrigation or drinking, including the permeability index of the groundwater. The Gibbs diagram indicates that the major portion of the groundwater of this region is of rock-dominated origin, as the western part of the region, characterized by the Jharkhand plateau fringe, comprises basalt, gneiss, and granite rocks.
Keywords: correlation, factor analysis, hydrological facies, hydrochemistry
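A sketch of the factor-extraction and clustering steps using scikit-learn. The parameter names and data are synthetic placeholders, and scikit-learn's varimax rotation does not reproduce SPSS-style Kaiser normalization exactly:

```python
import numpy as np
import pandas as pd
from sklearn.cluster import AgglomerativeClustering
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# stand-in for the 107 samples x 14 hydrochemical parameters
params = ["pH", "EC", "TDS", "TH", "Ca", "Mg", "Na", "K",
          "HCO3", "Cl", "SO4", "NO3", "F", "Fe"]      # assumed parameter set
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.random((107, 14)), columns=params)

X = StandardScaler().fit_transform(df)                # z-score each parameter
fa = FactorAnalysis(n_components=5, rotation="varimax").fit(X)
loadings = pd.DataFrame(fa.components_.T, index=params,
                        columns=[f"F{i+1}" for i in range(5)])
# in the study, F1 shows high positive loadings on TH, Ca, Mg, Cl and F

labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)  # two pollution groups
```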
Procedia PDF Downloads 213
1962 Polymer Mixing in the Cavity Transfer Mixer
Authors: Giovanna Grosso, Martien A. Hulsen, Arash Sarhangi Fard, Andrew Overend, Patrick D. Anderson
Abstract:
In many industrial applications, and in particular in the polymer industry, the quality of mixing between different materials is fundamental to guarantee the desired properties of finished products. However, properly modelling and understanding polymer mixing often presents noticeable difficulties because of the variety and complexity of the physical phenomena involved. This is the case for the Cavity Transfer Mixer (CTM), for which a clear understanding of the mixing mechanisms is still missing, as are clear guidelines for system optimization. This device, invented and patented by Gale at Rapra Technology Limited, is an add-on to be mounted downstream of existing extruders in order to improve distributive mixing. It consists of two concentric cylinders, the rotor and stator, both provided with staggered rows of hemispherical cavities. The inner cylinder (rotor) rotates, while the outer (stator) remains still. At the same time, the pressure load imposed upstream pushes the fluid through the CTM. Mixing processes are driven by the flow field generated by the complex interaction between the moving geometry, the imposed pressure load and the rheology of the fluid. In this context, the present work proposes a complete and accurate three-dimensional model of the CTM and results of a broad range of simulations assessing the impact on mixing of several geometrical and operating parameters. Among them we find: the number of cavities per row, the number of rows, the size of the mixer, the rheology of the fluid, and the ratio between the rotation speed and the fluid throughput. The model is composed of a flow part and a mixing part: a finite element solver computes the transient velocity field, which is used in the mapping method implementation in order to simulate the evolution of the concentration field. The results of the simulations are summarized in guidelines for the optimization of the device.
Keywords: mixing, non-Newtonian fluids, polymers, rheology
Procedia PDF Downloads 379
1961 Assessment of Mortgage Applications Using Fuzzy Logic
Authors: Swathi Sampath, V. Kalaichelvi
Abstract:
The assessment of the risk posed by a borrower to a lender is one of the common problems that financial institutions have to deal with. Consumers vying for a mortgage are generally compared to each other by the use of a number called the Credit Score, which is generated by applying a mathematical algorithm to information in the applicant's credit report. The higher the credit score, the lower the risk posed by the candidate, and the more favourably the lender views the application. The objective of the present work is to use fuzzy logic and linguistic rules to create a model that generates credit scores.
Keywords: credit scoring, fuzzy logic, mortgage, risk assessment
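A minimal Mamdani-style sketch of fuzzy credit scoring with two illustrative inputs; the membership breakpoints, rule base, and score scale are assumptions, not the paper's model:

```python
import numpy as np

def down(x, a, b):
    """Membership: 1 below a, falling linearly to 0 at b."""
    return float(np.clip((b - x) / (b - a), 0.0, 1.0))

def up(x, a, b):
    """Membership: 0 below a, rising linearly to 1 at b."""
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def credit_score(income_k, debt_ratio):
    """Fuzzy rules combined by weighted-average defuzzification."""
    low_inc, high_inc = down(income_k, 30, 80), up(income_k, 30, 80)
    low_dr, high_dr = down(debt_ratio, 0.2, 0.5), up(debt_ratio, 0.2, 0.5)
    rules = [(min(high_inc, low_dr), 800),    # low risk  -> high score
             (min(high_inc, high_dr), 650),
             (min(low_inc, low_dr), 620),
             (min(low_inc, high_dr), 450)]    # high risk -> low score
    w = sum(f for f, _ in rules)
    return sum(f * s for f, s in rules) / w if w else None

print(credit_score(95, 0.15))   # -> 800.0, a low-risk applicant
```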
Procedia PDF Downloads 405
1960 Effect of Modulation Factors on Tomotherapy Plans and Their Quality Analysis
Authors: Asawari Alok Pawaskar
Abstract:
This study was aimed at investigating discrepancies observed in quality assurance (QA) performed with an IBA Matrix detector array for helical tomotherapy plans. A selection of tomotherapy plans that initially failed the Matrix QA process was chosen for this investigation. These plans failed the fluence analysis as assessed using gamma criteria (3%, 3 mm). Each of these plans was modified (keeping the planning constraints the same), and the beamlets were rebatched and reoptimized. By increasing and decreasing the modulation factor, the fluence in a circumferential plane, as measured with a diode array, was assessed. A subset of these plans was investigated using varied pitch values. The factors examined for each plan were point doses, fluences, leaf opening times, planned leaf sinograms, and uniformity indices. In order to ensure that the treatment constraints remained the same, the dose-volume histograms (DVHs) of all the modulated plans were compared to the original plan. It was observed that a large increase in the modulation factor did not significantly improve DVH uniformity but reduced the gamma analysis pass rate. It also increased the treatment delivery time by slowing down the gantry rotation speed, which in turn increases the ratio of maximum to mean non-zero leaf open time. Increasing and decreasing the pitch value did not substantially change the treatment time, but the delivery accuracy was adversely affected. This may be due to many other factors, such as the complexity of the treatment plan and the site. Patient sites included in this study were head and neck, breast, and abdomen. The impact of leaf timing inaccuracies on plans was greater at higher modulation factors. Point-dose measurements were seen to be less susceptible to changes in pitch and modulation factors. An initial modulation factor chosen such that the TPS-generated 'actual' modulation factor fell within the range of 1.4 to 2.5 resulted in an improved deliverable plan.
Keywords: dose volume histogram, modulation factor, IBA matrix, tomotherapy
Procedia PDF Downloads 177
1959 Limit-Cycles Method for the Navigation and Avoidance of Any Form of Obstacles for Mobile Robots in Cluttered Environment
Authors: F. Boufera, F. Debbat
Abstract:
This paper deals with an approach based on the limit-cycles method for the problem of obstacle avoidance of mobile robots in unknown environments, for any form of obstacle. The purpose of this approach is the improvement of the limit-cycles method in order to obtain safe and flexible navigation. The proposed algorithm has been successfully tested in different configurations in simulation.
Keywords: mobile robot, navigation, avoidance of obstacles, limit-cycles method
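The abstract does not reproduce the equations, but the limit-cycles method is commonly written as a planar system whose trajectories converge onto a circle around the obstacle; a sketch under that standard assumption:

```python
import numpy as np
from scipy.integrate import odeint

def limit_cycle(state, t, r=1.0, clockwise=True):
    """Obstacle-centred dynamics converging onto a circle of radius r."""
    x1, x2 = state
    s = r**2 - x1**2 - x2**2        # >0 inside the circle, <0 outside
    if clockwise:
        return [x2 + x1 * s, -x1 + x2 * s]
    return [-x2 + x1 * s, x1 + x2 * s]

t = np.linspace(0.0, 10.0, 500)
path = odeint(limit_cycle, [2.0, 0.5], t)   # spirals onto the unit circle
# in a full navigator this field is switched on only inside the obstacle's
# influence zone; elsewhere an attractive field drives the robot to its goal
```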
Procedia PDF Downloads 429
1958 Increasing System Adequacy Using Integration of Pumped Storage: Renewable Energy to Reduce Thermal Power Generations Towards RE100 Target, Thailand
Authors: Mathuravech Thanaphon, Thephasit Nat
Abstract:
The Electricity Generating Authority of Thailand (EGAT) is focusing on expanding its pumped storage hydropower (PSH) capacity to increase the reliability of the system during peak demand and allow for greater integration of renewables. To achieve this, Thailand will have to double its current renewable electricity production. To address the challenges of balancing supply and demand in the grid with increasing levels of RE penetration, as well as rising peak demand, EGAT has been studying for several years the potential for additional PSH capacity to enable an increased share of RE and replace existing fossil-fuel-fired generation, in addition to the role that pumped-storage hydropower would play in fulfilling multiple grid functions and renewable integration. The proposed sites for new PSH would help increase the reliability of power generation in Thailand. However, most of the electricity generation will come from RE, chiefly wind and photovoltaic, and significant additional energy storage capacity will be needed. In this paper, the impact of integrating the PSH system on the adequacy of a renewable-rich power generating system, in order to reduce the thermal generating units, is investigated. The variations of the system adequacy indices are analyzed for different PSH-renewables capacities and storage levels. The Power Development Plan 2018 rev.1 (PDP2018 rev.1), modified by integrating six new PSH systems and the RE development planned from 2030 onward, is the central case study. The system adequacy indices for power generation are obtained using Multi-Objective Genetic Algorithm (MOGA) optimization. MOGA is a probabilistic, stochastic heuristic able to find global minima, with the advantage that the fitness function does not require a gradient; in this sense, the method is more flexible for solving reliability optimization problems for a composite power system. The optimization, with an hourly time step, covers a planning horizon of years, much larger than the weekly horizon usually used in scheduling studies. The objective function is optimized to maximize RE energy generation, minimize energy imbalances, and minimize thermal power generation, using MATLAB. PDP2018 rev.1 was simulated based on its planned capacity stepping into 2030 and 2050. Four main scenario analyses are therefore conducted, defined by the target renewables share: 1) Business-As-Usual (BAU), 2) National Targets (30% RE in 2030), 3) Carbon Neutrality Targets (50% RE in 2050), and 4) 100% RE, or full decarbonization. According to the results, the generating system adequacy is significantly affected by both PSH-RE and thermal units. When a PSH is integrated, it can provide hourly capacity to the power system as well as better allocate renewable energy generation to reduce thermal generation and improve system reliability. These results show that a significant level of reliability improvement can be obtained by PSH, especially in renewable-rich power systems.
Keywords: pumped storage hydropower, renewable energy integration, system adequacy, power development planning, RE100, multi-objective genetic algorithm
Procedia PDF Downloads 57
1957 Parametric Study of a Washing Machine to Develop an Energy Efficient Program Regarding the Enhanced Washing Efficiency Index and Micro Organism Removal Performance
Authors: Pelin Yilmaz, Gizemnur Yildiz Uysal, Emine Birci, Berk Özcan, Burak Koca, Ehsan Tuzcuoğlu, Fatih Kasap
Abstract:
The development of energy efficient programs (EEPs) is one of the most significant trends in the wet appliance industry of recent years. Thanks to an EEP, the energy consumption of a washing machine, one of the most energy-consuming home appliances, can shrink considerably, while its washing performance and textile hygiene remain almost unchanged. The goal of the present study is to achieve an optimum EEP algorithm providing excellent textile hygiene as well as cleaning performance in a domestic washing machine. In this regard, a steam-pretreated cold wash approach, combined with an innovative algorithm solution, was implemented in a relatively short washing cycle. For the parametric study, steam exposure time, washing load, total water consumption, main-washing time, and spinning rpm were investigated as the significant parameters affecting textile hygiene and cleaning performance, within a Design of Experiments study using the Minitab 2021 statistical program. For the textile hygiene studies, loads containing cotton carriers contaminated with Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa bacteria were washed. The microbial removal performance of the designed programs was then expressed as the log reduction, calculated as the difference between the microbial counts per ml of the liquids in which the cotton carriers were immersed before and after washing. For the cleaning performance studies, tests were carried out with various types of detergents and the EMPA Standard Stain Strip. According to the results, the optimum EEP program provided an excellent hygiene performance of more than a 2-log reduction of microorganisms and an excellent Washing Efficiency Index (Iw) of 1.035, which is greater than the value specified by EU ecodesign regulation 2019/2023.
Keywords: washing machine, energy efficient programs, hygiene, washing efficiency index, microorganism, Escherichia coli, Staphylococcus aureus, Pseudomonas aeruginosa, laundry
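The hygiene metric quoted (a log reduction above 2) is computed directly from the before/after counts; a one-function sketch with made-up counts:

```python
import math

def log_reduction(cfu_before, cfu_after):
    """Log10 reduction between counts (e.g. CFU/ml) before and after washing."""
    return math.log10(cfu_before) - math.log10(cfu_after)

print(log_reduction(1e6, 10**3.9))   # ~2.1, above the 2-log hygiene target
```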
Procedia PDF Downloads 135
1956 Tool for Fast Detection of Java Code Snippets
Authors: Tomáš Bublík, Miroslav Virius
Abstract:
This paper presents general results on the Java source code snippet detection problem. We propose a tool which uses graph and subgraph isomorphism detection. A number of solutions for these tasks have been proposed in the literature; however, although they are fast, they compare only constant, static trees. Our solution allows an input sample to be entered dynamically, with the Scripthon language, while preserving an acceptable speed. We used several optimizations to achieve a very low number of comparisons during the matching algorithm.
Keywords: AST, Java, tree matching, Scripthon, source code recognition
Procedia PDF Downloads 425
1955 Enhancement of Critical Current Density of Liquid Infiltration Processed Y-Ba-Cu-O Bulk Superconductors Used for Flywheel Energy Storage System
Authors: Asif Mahmood, Yousef Alzeghayer
Abstract:
The size effects of a precursor Y2BaCuO5 (Y211) powder on the microstructure and critical current density (Jc) of liquid infiltration growth (LIG)-processed YBa2Cu3O7-y (Y123) bulk superconductors were investigated in terms of milling time (t). YBCO bulk samples having high Jc values were selected for the flywheel energy storage system. Y211 powders were attrition-milled for 0-10 h in 2 h increments at a fixed rotation speed of 400 RPM. Y211 pre-forms were made by pelletizing the milled Y211 powders, followed by sintering, after which an LIG process with top seeding was applied to the Y211/Ba3Cu5O8 (Y035) pre-forms. Spherical pores were observed in all LIG-processed Y123 samples, and the pore density gradually decreased as t increased from 0 h to 8 h. In addition to the reduced pore density, the Y211 particle size in the final Y123 products also decreased with increasing t. As t increased further to 10 h, unexpected Y211 coarsening and the evolution of large pores were observed. The magnetic susceptibility-temperature curves showed that the onset superconducting transition temperature (Tc, onset) of all samples was the same (91.5 K), but the transition width became greater as t increased. The Jc of the Y123 bulk superconductors fabricated in this study was observed to correlate well with the milling time t of the Y211 precursor powder. The maximum Jc of 1.0×10⁵ A cm⁻² (at 77 K, 0 T) was achieved at t = 8 h, which is attributed to the reduction in pore density and Y211 particle size. The prolonged milling time of t = 10 h decreased the Jc of the LIG-processed Y123 superconductor owing to the evolution of large pores and exaggerated Y211 growth. The bulk samples having high Jc (those prepared using the 8 h milled powders) were used in the flywheel energy storage system.
Keywords: critical current, bulk superconductor, liquid infiltration, bioinformatics
Procedia PDF Downloads 212
1954 A New Criterion Using Pose and Shape of Objects for Collision Risk Estimation
Authors: DoHyeung Kim, DaeHee Seo, ByungDoo Kim, ByungGil Lee
Abstract:
With much recent research being implemented in the aviation and maritime domains, strong doubts have been raised concerning the reliability of collision risk estimation. It has been shown that using only the position and velocity of objects can lead to imprecise results. In this paper, therefore, a new approach to the estimation of collision risk using the pose and shape of objects is proposed. Simulation results are presented validating the accuracy of the new criterion when adapted into a fuzzy-logic-based collision risk algorithm.
Keywords: collision risk, pose, shape, fuzzy logic
Procedia PDF Downloads 529
1953 Improved Computational Efficiency of Machine Learning Algorithm Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK
Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick
Abstract:
The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in late January 2020 in the UK, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is undoubtedly considerably high. The purpose of this research is to develop a predictive machine learning model that could forecast COVID-19 cases within the UK. This study concentrates on the statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID cases registered, new cases encountered on a daily basis, total deaths registered, and patient deaths per day due to Coronavirus was collected from the World Health Organisation (WHO). Data preprocessing was carried out to identify any missing values, outliers, or anomalies in the dataset. The data were split in an 8:2 ratio for training and testing purposes to forecast future new COVID cases. Support Vector Machines (SVM), Random Forests, and linear regression algorithms were chosen to study the model performance in the prediction of new COVID-19 cases. From evaluation metrics such as the r-squared value and mean squared error, the statistical performance of the models in predicting new COVID cases was evaluated. Random Forest outperformed the other two machine learning algorithms, with a training accuracy of 99.47% and a testing accuracy of 98.26% when n=30. The mean squared error obtained for Random Forest was 4.05e11, which is lower than that of the other predictive models used in this study. From the experimental analysis, the Random Forest algorithm can perform more effectively and efficiently in predicting new COVID cases, which could help the health sector take relevant control measures against the spread of the virus.
Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest
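A sketch of the modelling step with scikit-learn, assuming n=30 refers to the number of trees and using synthetic stand-in data in place of the WHO series (the column names are assumptions):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# synthetic stand-in for the WHO daily series (column names assumed)
rng = np.random.default_rng(1)
n = 425                                   # roughly the study's daily records
df = pd.DataFrame({
    "total_cases": np.cumsum(rng.integers(0, 60000, n)),
    "total_deaths": np.cumsum(rng.integers(0, 1500, n)),
    "new_deaths": rng.integers(0, 1500, n),
    "new_cases": rng.integers(0, 60000, n),
})

X = df[["total_cases", "total_deaths", "new_deaths"]]
y = df["new_cases"]
# 8:2 split, unshuffled to respect the time ordering
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

rf = RandomForestRegressor(n_estimators=30, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print("R2:", r2_score(y_te, pred), "MSE:", mean_squared_error(y_te, pred))
```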
Procedia PDF Downloads 121
1952 A Survey on Important Factors of the Ethereum Network Performance
Authors: Ali Mohammad Mobaser Azad, Alireza Akhlaghinia
Abstract:
Blockchain is changing our world and launching a new generation of decentralized networks. Meanwhile, Blockchain-based networks like Ethereum have been created, and they facilitate these processes using tools like smart contracts. Ethereum has fundamental structures, each of which affects the activity of the nodes. Our purpose in this paper is to review similar research and examine various components in order to characterize the performance of the Ethereum network. To do this, we used the data published by the Ethereum Foundation at different points in time to examine the changes that determine the state of network performance. This will help other researchers better understand Ethereum in different situations.
Keywords: blockchain, Ethereum, smart contract, decentralization, consensus algorithm
Procedia PDF Downloads 226
1951 Time Domain Dielectric Relaxation Microwave Spectroscopy
Authors: A. C. Kumbharkhane
Abstract:
Time domain dielectric relaxation microwave spectroscopy (TDRMS) is a technique for observing the time-dependent response of a sample after the application of a time-dependent electromagnetic field. TDRMS probes the interaction of a macroscopic sample with a time-dependent electric field. The resulting complex permittivity spectrum characterizes the amplitude (voltage) and time scale of the charge-density fluctuations within the sample. These fluctuations may arise from the reorientation of the permanent dipole moments of individual molecules or from the rotation of dipolar moieties in flexible molecules, like polymers. The time scale of these fluctuations depends on the sample and its relaxation mechanism. Relaxation times range from a few picoseconds in low-viscosity liquids to hours in glasses; the TDR technique therefore covers an extensive range of dynamical processes, with corresponding frequencies ranging from 10⁻⁴ Hz to 10¹² Hz. This inherent ability to monitor the cooperative motion of a molecular ensemble distinguishes dielectric relaxation from methods like NMR or Raman spectroscopy, which yield information on the motions of individual molecules. Recently, we have developed and established a TDR technique in our laboratory that provides information on the dielectric permittivity in the frequency range 10 MHz to 30 GHz. The TDR method involves the generation of a step pulse with a rise time of 20 picoseconds in a coaxial line system and monitoring the change in pulse shape after reflection from the sample placed at the end of the coaxial line. There is great interest in studying the dielectric relaxation behaviour of liquid systems to understand the role of the hydrogen bond, since intermolecular interaction through hydrogen bonds in molecular liquids results in peculiar dynamical properties. The dynamics of hydrogen-bonded liquids have been studied, and a theoretical model to explain the experimental results will be discussed.
Keywords: microwave, time domain reflectometry (TDR), dielectric measurement, relaxation time
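The complex permittivity spectrum obtained from such measurements is often summarized by a single-Debye relaxation; a sketch over the instrument's 10 MHz to 30 GHz window, using water-like parameters purely for illustration:

```python
import numpy as np

def debye(freq_hz, eps_s, eps_inf, tau):
    """Single-Debye complex permittivity: eps* = eps_inf + (eps_s - eps_inf) / (1 + j*w*tau)."""
    w = 2 * np.pi * freq_hz
    return eps_inf + (eps_s - eps_inf) / (1 + 1j * w * tau)

f = np.logspace(7, 10.5, 200)            # 10 MHz to ~30 GHz, the TDR window
eps = debye(f, 78.4, 5.2, 8.3e-12)       # water-like parameters at 25 C
# the dielectric loss eps''(f) = -eps.imag peaks at f = 1/(2*pi*tau) ~ 19 GHz
```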
Procedia PDF Downloads 336
1950 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving
Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian
Abstract:
In recent years, advancements in deep learning have enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has a drawback: the state space of possible actions for a car is so huge that there cannot be a dataset for every possible road scenario. To overcome this problem, the concept of reinforcement learning (RL) is investigated in this research. Since the problem of autonomous driving can be modeled in a simulation, it lends itself naturally to the domain of reinforcement learning. The advantage of this approach is that we can model different and complex road scenarios in a simulation without having to deploy in the real world. The autonomous agent can learn to drive by finding the optimal policy, and this learned model can then be easily deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we have used TORCS (The Open Racing Car Simulator), which provides a strong foundation to test our models. The inputs to the algorithms are the sensor data provided by the simulator, such as velocity, distance from the side pavement, etc. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results: when using PPO, the reward is greater, and the acceleration, steering angle and braking are more stable compared to the other algorithms, which means that the agent learns to drive in a better and more efficient way. Additionally, we have produced a dataset taken from the training of the agent with the DDPG and PPO algorithms; it contains all the steps of the agent during one full training in the form (all input values, acceleration, steering angle, brake, loss, reward). This study can serve as a base for further complex road scenarios, and it can be extended in the field of computer vision, using images to find the best policy.
Keywords: autonomous driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization), reinforcement learning
Procedia PDF Downloads 148
1949 Classic Training of a Neural Observer for Estimation Purposes
Authors: R. Loukil, M. Chtourou, T. Damak
Abstract:
This paper investigates the training of a multilayer neural network using the classic approach. Then, for estimation purposes, we suggest the use of a specific neural observer and study its training algorithm, back-propagation, both in the case where the state is available and in the case of an unmeasurable state. A MATLAB simulation example is studied to highlight the usefulness of this kind of observer.
Keywords: training, estimation purposes, neural observer, back-propagation, unmeasurable state
Procedia PDF Downloads 574
1948 An Object-Based Image Resizing Approach
Authors: Chin-Chen Chang, I-Ta Lee, Tsung-Ta Ke, Wen-Kai Tai
Abstract:
Common methods for resizing images include scaling and cropping. However, these two approaches have quality problems for reduced images. In this paper, we propose an image resizing algorithm that separates the main objects from the background. First, we extract two feature maps, namely an enhanced visual saliency map and an improved gradient map, from an input image. After that, we integrate these two feature maps into an importance map. Finally, we generate the target image using the importance map. The proposed approach obtains the desired results for a wide range of images.
Keywords: energy map, visual saliency, gradient map, seam carving
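A sketch of the map-fusion step; the min-max normalization and the equal weighting are assumptions, as the abstract does not give the paper's fusion rule:

```python
import numpy as np

def normalize(m):
    """Scale a feature map to [0, 1]."""
    m = m.astype(float)
    return (m - m.min()) / (np.ptp(m) + 1e-8)

def importance_map(saliency, gradient, w=0.5):
    """Fuse a visual-saliency map and a gradient map into one importance map."""
    return w * normalize(saliency) + (1.0 - w) * normalize(gradient)

# seams (or regions) with the lowest importance are removed first, so the
# main objects survive the reduction while the background is shrunk
```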
Procedia PDF Downloads 476
1947 Adaptive CFAR Analysis for Non-Gaussian Distribution
Authors: Bouchemha Amel, Chachoui Takieddine, H. Maalem
Abstract:
Automatic detection of targets in a modern radar system is based primarily on the concept of the adaptive CFAR detector. For effective detection, we must minimize the influence of disturbances due to clutter. The detection algorithm adapts the CFAR detection threshold, which is proportional to the average power of the clutter, maintaining a constant probability of false alarm. In this article, we analyze the performance of two variants of adaptive algorithms, CA-CFAR and OS-CFAR, and we compare the thresholds of these detectors in the marine (non-Gaussian) environment with a Weibull clutter distribution.
Keywords: CFAR, threshold, clutter, distribution, Weibull, detection
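A minimal 1-D cell-averaging CFAR sketch. The scaling factor alpha below assumes exponentially distributed noise, whereas the paper's marine (Weibull) clutter would require a different threshold calibration; OS-CFAR differs only in replacing the mean of the training cells with their k-th order statistic.

```python
import numpy as np

def ca_cfar(power, n_train=16, n_guard=2, pfa=1e-4):
    """1-D cell-averaging CFAR over a power profile (exponential-noise alpha)."""
    half = n_train // 2
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)   # threshold multiplier
    hits = np.zeros(power.size, dtype=bool)
    for i in range(half + n_guard, power.size - half - n_guard):
        lead = power[i - n_guard - half : i - n_guard]
        lag = power[i + n_guard + 1 : i + n_guard + 1 + half]
        noise = np.concatenate([lead, lag]).mean()       # clutter power estimate
        hits[i] = power[i] > alpha * noise
    return hits

rng = np.random.default_rng(0)
profile = rng.exponential(1.0, 1000)
profile[500] += 30.0                                     # synthetic target
print(np.flatnonzero(ca_cfar(profile)))                  # should include 500
```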
Procedia PDF Downloads 589
1946 Analysis of the Inverse Kinematics for 5 DOF Robot Arm Using D-H Parameters
Authors: Apurva Patil, Maithilee Kulkarni, Ashay Aswale
Abstract:
This paper proposes an algorithm to develop the kinematic model of a 5 DOF robot arm. The formulation of the problem is based on finding the D-H parameters of the arm. A brute-force iterative method is employed to solve the system of nonlinear equations. The focus of the paper is to obtain accurate solutions by reducing the root mean square error. The result obtained is used to grip objects. The trajectories followed by the end effector for the required workspace coordinates are plotted. The methodology used here can be applied to solve the problem for any other kinematic chain of up to six DOF.
Keywords: 5 DOF robot arm, D-H parameters, inverse kinematics, iterative method, trajectories
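A sketch of D-H forward kinematics plus a brute-force random search minimizing the RMS position error; the link parameters, target, and search settings are illustrative assumptions, not the arm studied in the paper.

```python
import numpy as np

def dh(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def fk(q, table):
    """End-effector position for joint angles q; table rows are (d, a, alpha)."""
    T = np.eye(4)
    for qi, (d, a, alpha) in zip(q, table):
        T = T @ dh(qi, d, a, alpha)
    return T[:3, 3]

def brute_force_ik(target, table, iters=20000, step=0.05, seed=0):
    """Random-walk search minimizing the RMS position error."""
    rng = np.random.default_rng(seed)
    q = np.zeros(len(table))
    best = np.sqrt(np.mean((fk(q, table) - target) ** 2))
    for _ in range(iters):
        cand = q + rng.uniform(-step, step, q.size)
        err = np.sqrt(np.mean((fk(cand, table) - target) ** 2))
        if err < best:
            q, best = cand, err
    return q, best

table = [(0.10, 0.00, np.pi / 2),   # assumed 5 DOF link parameters (d, a, alpha)
         (0.00, 0.25, 0.0),
         (0.00, 0.20, 0.0),
         (0.00, 0.00, np.pi / 2),
         (0.08, 0.00, 0.0)]
q, err = brute_force_ik(np.array([0.25, 0.10, 0.15]), table)
```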
Procedia PDF Downloads 202
1945 Principal Component Analysis of Body Weight and Morphometric Traits of New Zealand Rabbits Raised under Semi-Arid Condition in Nigeria
Authors: Emmanuel Abayomi Rotimi
Abstract:
Context: Rabbit production plays an important role in increasing the animal protein supply in Nigeria, providing a cheap, affordable, and healthy source of meat. The growth of animals involves an increase in body weight, which can change the conformation of various parts of the body. Live weight and linear measurements are indicators of growth rate in rabbits and other farm animals. Aims: This study aimed to define the body dimensions of New Zealand rabbits and to investigate the morphometric trait variables that contribute to body conformation by the use of principal component analysis (PCA). Methods: Data were obtained from 80 New Zealand rabbits (40 bucks and 40 does) raised at the Livestock Teaching and Research Farm, Federal University Dutsinma. Data were taken on body weight (BWT), body length (BL), ear length (EL), tail length (TL), heart girth (HG) and abdominal circumference (AC). The data collected were subjected to multivariate analysis using the SPSS 20.0 statistical package. Key results: The descriptive statistics showed that the mean BWT, BL, EL, TL, HG, and AC were 0.91 kg, 27.34 cm, 10.24 cm, 8.35 cm, 19.55 cm and 21.30 cm, respectively. Sex showed a significant (P<0.05) effect on all the variables examined, with higher values recorded for does. The phenotypic correlation coefficients (r) between the morphometric traits were all positive and ranged from r = 0.406 (between EL and BL) to r = 0.909 (between AC and HG). HG was the trait most correlated with BWT (r = 0.786). Principal component analysis with variance-maximizing orthogonal rotation was used to extract the components. Two principal components (PCs) from the factor analysis of morphometric traits explained about 80.42% of the total variance: PC1 accounted for 64.46% and PC2 for 15.97%. Three variables, representing body conformation, loaded highest on PC1. PC1 had the highest contribution (64.46%) to the total variance and is regarded as the body conformation component. Conclusions: This component could be used as a selection criterion for improving the body weight of rabbits.
Keywords: conformation, multicollinearity, multivariate, rabbits, principal component analysis
Procedia PDF Downloads 130
1944 Nonlinear Observer Canonical Form for Genetic Regulation Process
Authors: Bououden Soraya
Abstract:
This paper studies the existence of a change of coordinates which permits transforming a class of nonlinear dynamical systems into the so-called nonlinear observer canonical form (NOCF). Moreover, an algorithm to construct such a change of coordinates is given. Based on this form, we can design an observer with linear error dynamics, which enables us to estimate the state of a nonlinear dynamical system. A concrete example (a biological model) is provided to illustrate the feasibility of the proposed results.
Keywords: nonlinear observer canonical form, observer design, gene regulation, gene expression
Procedia PDF Downloads 432