Search results for: Taylor’s Series Method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21048

19548 A Review of Research on Pre-training Technology for Natural Language Processing

Authors: Moquan Gong

Abstract:

In recent years, with the rapid development of deep learning, pre-training technology for natural language processing has made great progress. The field long relied on word-vector methods such as Word2Vec to encode text; these can be regarded as static pre-training techniques. However, such context-free text representations bring very limited improvement to downstream natural language processing tasks and cannot resolve word polysemy. ELMo introduced a context-sensitive text representation that handles polysemy effectively, and pre-trained language models such as GPT and BERT were proposed soon after. Among them, BERT significantly improved performance on many typical downstream tasks, greatly promoting technological development in the field, which has since entered the era of dynamic pre-training technology. A large number of pre-trained language models based on BERT and XLNet have continued to emerge, and pre-training has become an indispensable mainstream technology in natural language processing. This article first gives an overview of pre-training technology and its development history; it then introduces in detail the classic pre-training techniques in the field, including early static techniques and classic dynamic techniques; next, it briefly surveys a series of derivative pre-training techniques, including improved models based on BERT and XLNet; on this basis, it analyzes the problems facing current pre-training research; finally, it looks ahead to future development trends of pre-training technology.

Keywords: natural language processing, pre-training, language model, word vectors

Procedia PDF Downloads 57
19547 Effect of Adjacent Footings on Elastic Settlement of Shallow Foundations

Authors: Mustafa Aytekin

Abstract:

In this study, the impact of adjacent footings on the estimated elastic settlement of shallow foundations is considered. Elastic settlement is estimated by Schmertmann's method, a very popular approach for shallow foundations. In order to account for the effect of neighboring footings on the elastic settlement of the main footing in different configurations, a MATLAB script was developed. Elastic settlements of the various configurations were estimated by the script, and several conclusions were reached.
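The core of Schmertmann's method is the strain-influence-factor sum S = C1·C2·q·Σ(Iz/Es)·Δz. A minimal Python sketch of that sum follows (the abstract's tool is a MATLAB script; the layer values below are purely illustrative assumptions, not data from the study):

```python
import math

def schmertmann_settlement(q_net, sigma_v0, t_years, layers):
    """Schmertmann settlement (m); layers is a list of (Iz, Es_kPa, dz_m)."""
    C1 = 1.0 - 0.5 * (sigma_v0 / q_net)          # embedment correction
    C2 = 1.0 + 0.2 * math.log10(t_years / 0.1)   # creep correction
    return C1 * C2 * q_net * sum(Iz / Es * dz for Iz, Es, dz in layers)

# illustrative soil profile: (strain influence factor, modulus kPa, thickness m)
layers = [(0.3, 10_000.0, 0.5), (0.5, 12_000.0, 1.0), (0.2, 15_000.0, 1.0)]
s = schmertmann_settlement(q_net=150.0, sigma_v0=20.0, t_years=5.0, layers=layers)
print(f"estimated settlement: {s * 1000:.1f} mm")
```

A neighboring footing would modify the strain influence values Iz at depth, which is what the study's MATLAB script accounts for.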

Keywords: elastic (immediate) settlement, Schmertmann method, adjacent footings, shallow foundations

Procedia PDF Downloads 467
19546 Multiple Negative-Differential Resistance Regions Based on AlN/GaN Resonant Tunneling Structures by the Vertical Growth of Molecular Beam Epitaxy

Authors: Yao Jiajia, Wu Guanlin, Liu Fang, Xue Junshuai, Zhang Jincheng, Hao Yue

Abstract:

Resonant tunneling diodes (RTDs) based on GaN have been extensively studied. However, no multiple logic states achieved by epitaxially grown RTDs in GaN materials have been reported. In this paper, multiple negative-differential-resistance (NDR) regions obtained by combining two discrete double-barrier RTDs in series are demonstrated for the first time. Plasma-assisted molecular beam epitaxy (PA-MBE) was used to grow structures consisting of two vertical RTDs on a GaN-on-sapphire template. Each resonant tunneling structure was composed of an AlN double barrier and a single GaN well with undoped 4-nm GaN spacer layers on each side. The AlN barriers were 1.5 nm thick, and the GaN well was 2 nm thick. The resonant tunneling structures were separated from each other by 30-nm-thick n+ GaN layers, and the bottom and top contact layers grown adjacent to the spacer layers consist of 200-nm-thick n+ GaN. Devices with two tunneling structures exhibited uniform peak and valley currents, with two NDR regions equally spaced in bias voltage. The current-voltage (I-V) characteristics of resonant tunneling structures with diameters of 1 and 2 μm were analyzed in this study; these structures exhibit three stable operating points, which are investigated in detail. This research demonstrates that vertically growing multiple resonant tunneling structures by molecular beam epitaxy (MBE) is a promising method for achieving multiple negative-differential-resistance regions and stable logic states. These findings have significant implications for the development of digital circuits capable of multi-value logic with a small number of devices.

Keywords: GaN, AlN, RTDs, MBE, logic state

Procedia PDF Downloads 92
19545 Forecasting Nokoué Lake Water Levels Using Long Short-Term Memory Network

Authors: Namwinwelbere Dabire, Eugene C. Ezin, Adandedji M. Firmin

Abstract:

The prediction of hydrological flows (rainfall-depth or rainfall-discharge) is becoming increasingly important in the management of hydrological risks such as floods. In this study, the Long Short-Term Memory (LSTM) network, a state-of-the-art algorithm dedicated to time series, is applied to predict the daily water level of Nokoué Lake in Benin. This paper aims to provide an effective and reliable method capable of reproducing the future daily water level of Nokoué Lake, which is influenced by a combination of two phenomena: rainfall and river flow (runoff from the Ouémé River, the Sô River, the Porto-Novo lagoon, and the Atlantic Ocean). Performance analysis based on the forecasting horizon indicates that LSTM can predict the water level of Nokoué Lake up to a horizon of t+10 days. Performance metrics such as Root Mean Square Error (RMSE), coefficient of determination (R²), Nash-Sutcliffe Efficiency (NSE), and Mean Absolute Error (MAE) agree on a forecast horizon of up to t+3 days, and their values remain stable for horizons of t+1, t+2, and t+3 days. The values of R² and NSE are greater than 0.97 during the training and testing phases in the Nokoué Lake basin. Based on these evaluation indices, a forecast horizon of t+3 days is chosen for predicting future daily water levels.
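To make the LSTM machinery concrete, the sketch below implements a single LSTM cell step in NumPy using the standard gate equations. The three input features and all weights are illustrative assumptions, not the paper's trained model:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; W and U stack the i, f, o, g gate weights."""
    H = h.size
    z = W @ x + U @ h + b
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(z[:H]), sig(z[H:2*H]), sig(z[2*H:3*H])   # input/forget/output gates
    g = np.tanh(z[3*H:])                                   # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 3, 5  # hypothetical features: rainfall, upstream discharge, lagged level
W = 0.1 * rng.normal(size=(4 * H, D))
U = 0.1 * rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h = c = np.zeros(H)
for x_t in rng.normal(size=(10, D)):   # ten daily observations
    h, c = lstm_step(x_t, h, c, W, U, b)
print(h.shape)
```

In practice, a trained model would map the final hidden state h to the predicted water level at horizon t+k through a learned output layer.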

Keywords: forecasting, long short-term memory cell, recurrent artificial neural network, Nokoué lake

Procedia PDF Downloads 64
19544 Electricity Consumption and Economic Growth: The Case of Mexico

Authors: Mario Gómez, José Carlos Rodríguez

Abstract:

The causal relationship between energy consumption and economic growth has been an important issue in the economic literature. This paper studies the causal relationship between electricity consumption and economic growth in Mexico for the period 1971-2011. To do so, unit root tests and a causality test are applied. The results show that the series are stationary in levels and that there is causality running from economic growth to energy consumption. Energy conservation policies would therefore have little or no impact on economic growth in Mexico.
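The causality test in studies of this kind is typically a Granger-type F-test. A minimal NumPy sketch under that assumption, on synthetic data rather than the Mexican series:

```python
import numpy as np

def granger_f(x, y, p=2):
    """F-statistic: do p lags of x improve an AR(p) model of y? (sketch)"""
    n = len(y)
    Y = y[p:]
    X_r = np.column_stack([np.ones(n - p)] + [y[p-k:n-k] for k in range(1, p+1)])
    X_u = np.column_stack([X_r] + [x[p-k:n-k] for k in range(1, p+1)])
    rss = lambda A: np.sum((Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(X_r), rss(X_u)
    return ((rss_r - rss_u) / p) / (rss_u / (n - p - X_u.shape[1]))

# synthetic example: y is driven by lagged x, so x "Granger-causes" y
rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.5 * y[t-1] + 0.8 * x[t-1] + 0.1 * rng.normal()
print(granger_f(x, y))   # large F => reject non-causality
```

A large F relative to the F(p, n-2p-1) critical value rejects the null of no Granger causality; the paper's finding corresponds to rejecting in the growth-to-consumption direction only.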

Keywords: causality, economic growth, energy consumption, Mexico

Procedia PDF Downloads 858
19543 Function Approximation with Radial Basis Function Neural Networks via FIR Filter

Authors: Kyu Chul Lee, Sung Hyun Yoo, Choon Ki Ahn, Myo Taeg Lim

Abstract:

Recent experimental evidence has shown that, because of fast convergence and good accuracy, training neural networks via the extended Kalman filter (EKF) is widely applied. However, under uncertainty in the system dynamics or modeling error, the performance of the method is unreliable. To overcome this problem, a new finite impulse response (FIR) filter based learning algorithm is proposed in this paper to train radial basis function neural networks (RBFN) for nonlinear function approximation. Compared to the EKF training method, the proposed FIR filter training method is more robust to such environmental conditions. Furthermore, the number of centers is also considered, since it affects the approximation performance.
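As a point of reference for what an RBFN approximates, the sketch below fits a fixed-center Gaussian RBF network to a noisy sine by ordinary least squares. It does not implement the paper's FIR-filter or EKF training; the number of centers and the width are illustrative assumptions:

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian RBF design matrix: one column per center."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x) + 0.05 * rng.normal(size=x.size)     # noisy target function
centers = np.linspace(0, 2 * np.pi, 12)            # number of centers matters, as noted
Phi = rbf_design(x, centers, width=0.6)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # linear output weights
err = np.max(np.abs(Phi @ w - np.sin(x)))
print(f"max approximation error: {err:.3f}")
```

The FIR-filter (or EKF) training in the paper would replace the one-shot least-squares solve with a recursive update of the weights as samples arrive.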

Keywords: extended Kalman filter, classification problem, radial basis function networks (RBFN), finite impulse response (FIR) filter

Procedia PDF Downloads 456
19542 An Efficient Fundamental Matrix Estimation for Moving Object Detection

Authors: Yeongyu Choi, Ju H. Park, S. M. Lee, Ho-Youl Jung

Abstract:

In this paper, an improved method for estimating the fundamental matrix is proposed and applied effectively to monocular-camera-based moving object detection. The method consists of corner point detection, motion estimation of moving objects, and fundamental matrix calculation. Corner points are obtained using the Harris corner detector, and the motions of moving objects are estimated with the pyramidal Lucas-Kanade optical flow algorithm. Through epipolar geometry analysis using RANSAC, the fundamental matrix is then calculated. In this method, we improve the performance of moving object detection by using two threshold values that determine whether a point is an inlier or an outlier, and through simulations we compare the performance obtained when varying these two thresholds.
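The fundamental matrix step can be illustrated with the classic normalized eight-point algorithm on noise-free synthetic correspondences. This is a sketch only: the paper's pipeline additionally uses Harris corners, pyramidal Lucas-Kanade flow, and RANSAC with the two thresholds discussed above, and the camera and points below are invented for the example:

```python
import numpy as np

def eight_point(x1, x2):
    """Normalized eight-point estimate of F from homogeneous Nx3 matches."""
    def normalize(x):                       # rough isotropic normalization
        m = x[:, :2].mean(axis=0)
        s = np.sqrt(2) / x[:, :2].std()
        T = np.array([[s, 0, -s * m[0]], [0, s, -s * m[1]], [0, 0, 1]])
        return (T @ x.T).T, T
    x1n, T1 = normalize(x1)
    x2n, T2 = normalize(x2)
    A = np.column_stack([x2n[:, 0]*x1n[:, 0], x2n[:, 0]*x1n[:, 1], x2n[:, 0],
                         x2n[:, 1]*x1n[:, 0], x2n[:, 1]*x1n[:, 1], x2n[:, 1],
                         x1n[:, 0], x1n[:, 1], np.ones(len(x1n))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)    # null vector of A
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt      # enforce rank 2
    return T2.T @ F @ T1

# synthetic two-view geometry (illustrative cameras and 3D points)
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(-1, 1, 20), rng.uniform(-1, 1, 20),
                     rng.uniform(4, 8, 20), np.ones(20)])
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
th = 0.1
R = np.array([[np.cos(th), 0, np.sin(th)], [0, 1, 0], [-np.sin(th), 0, np.cos(th)]])
P1 = K @ np.column_stack([np.eye(3), np.zeros(3)])
P2 = K @ np.column_stack([R, np.array([0.5, 0.0, 0.0])])
x1 = (P1 @ X.T).T; x1 /= x1[:, 2:]
x2 = (P2 @ X.T).T; x2 /= x2[:, 2:]
F = eight_point(x1, x2)
F /= np.linalg.norm(F)
res = np.abs(np.einsum('ij,jk,ik->i', x2, F, x1))  # epipolar residuals x2' F x1
print(res.max())
```

In a RANSAC loop, residuals like `res` would be compared against the inlier/outlier thresholds the paper tunes.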

Keywords: corner detection, optical flow, epipolar geometry, RANSAC

Procedia PDF Downloads 409
19541 Incremental Learning of Independent Topic Analysis

Authors: Takahiro Nishigaki, Katsumi Nitta, Takashi Onoda

Abstract:

In this paper, we present a method for applying Independent Topic Analysis (ITA) to a growing collection of documents; the number of documents has been increasing steadily since the spread of the Internet. ITA extracts independent topics from document data using Independent Component Analysis (ICA), a technique from signal processing. However, it is difficult to apply ITA to a growing collection, because ITA must process all of the document data at once, so its temporal and spatial costs are very high. Therefore, we present Incremental ITA, which updates the independent topics whenever new documents are added, starting from the topics extracted from the previous data. We show the results of applying Incremental ITA to benchmark datasets.
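The ICA core of ITA can be sketched as a minimal symmetric FastICA (tanh nonlinearity) separating two synthetic mixed signals. This is a generic illustration of ICA, not the paper's incremental update, and the mixing matrix and sources are invented:

```python
import numpy as np

def fastica(X, n_iter=200):
    """Symmetric FastICA with a tanh nonlinearity on whitened data (sketch)."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Xw = (E / np.sqrt(d)) @ E.T @ X                  # whitening
    W = np.random.default_rng(0).normal(size=(X.shape[0],) * 2)
    for _ in range(n_iter):
        G = np.tanh(W @ Xw)
        W = (G @ Xw.T) / Xw.shape[1] - np.diag((1 - G**2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W)
        W = U @ Vt                                   # symmetric decorrelation
    return W @ Xw

t = np.linspace(0, 8 * np.pi, 2000)
S = np.vstack([np.sin(t), np.random.default_rng(1).uniform(-1, 1, t.size)])
X = np.array([[1.0, 0.5], [0.5, 1.0]]) @ S           # mixed observations
Y = fastica(X)
corr = np.abs(np.corrcoef(np.vstack([S, Y]))[:2, 2:])
print(corr.max(axis=1))                              # close to 1 for each source
```

In ITA, the "observations" would be term-frequency vectors rather than signals, and the incremental variant would warm-start W from the previous batch instead of re-running from scratch.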

Keywords: text mining, topic extraction, independent, incremental, independent component analysis

Procedia PDF Downloads 309
19540 Na Promoted Ni/γ-Al2O3 Catalysts Prepared by Solution Combustion Method for Syngas Methanation

Authors: Yan Zeng, Hongfang Ma, Haitao Zhang, Weiyong Ying

Abstract:

Ni-based catalysts promoted with different amounts of Na, from 2 to 6 wt%, were prepared by the solution combustion method, and their catalytic activity was investigated in the syngas methanation reaction. Carbon oxides conversion and methane selectivity are greatly influenced by the sodium loading. Adding 2 wt% Na remarkably improves catalytic activity and long-term stability, attributed to the smaller mean NiO particle size, better distribution, and milder metal-support interaction. However, excess addition of Na results in distinct deactivation due to the blockage of active sites.

Keywords: nickel catalysts, syngas methanation, sodium, solution combustion method

Procedia PDF Downloads 407
19539 Comparison of DPC and FOC Vector Control Strategies on Reducing Harmonics Caused by Nonlinear Load in the DFIG Wind Turbine

Authors: Hamid Havasi, Mohamad Reza Gholami Dehbalaei, Hamed Khorami, Shahram Karimi, Hamdi Abdi

Abstract:

A doubly-fed induction generator (DFIG) equipped with a power converter is an efficient tool for converting the mechanical energy of a variable-speed system to a fixed-frequency electrical grid. Since electrical energy sources face power quality problems such as harmonics caused by nonlinear loads, in this paper the compensation performance of the DPC and FOC methods in reducing the harmonics of a DFIG wind turbine connected to a nonlinear load has been simulated in a MATLAB Simulink model, and the effect of each method on nonlinear load harmonic elimination has been compared. The results of the two control methods show the advantage of the FOC method over the DPC method for harmonic compensation: the fifth and seventh harmonic components of the network, as well as the THD, are greatly reduced.
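THD itself is straightforward to compute from a spectrum. A sketch on a synthetic current containing the fifth and seventh harmonics mentioned above (the harmonic amplitudes are illustrative, not simulation results from the paper):

```python
import numpy as np

fs, f0, n = 10_000, 50, 2000               # 0.2 s window = integer cycles, no leakage
t = np.arange(n) / fs
current = (np.sin(2 * np.pi * f0 * t)
           + 0.20 * np.sin(2 * np.pi * 5 * f0 * t)   # 5th harmonic
           + 0.14 * np.sin(2 * np.pi * 7 * f0 * t))  # 7th harmonic
spec = np.abs(np.fft.rfft(current)) / (n / 2)        # single-sided amplitudes
k0 = f0 * n // fs                                    # bin of the fundamental
thd = np.sqrt(np.sum(spec[2 * k0::k0] ** 2)) / spec[k0]
print(f"THD = {100 * thd:.1f} %")                    # ~24.4 % for these amplitudes
```

A harmonic-compensating control scheme such as FOC aims to drive the 5th/7th amplitudes, and hence this ratio, toward zero.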

Keywords: DFIG machine, energy conversion, nonlinear load, THD, DPC, FOC

Procedia PDF Downloads 589
19538 Computer Simulations of Stress Corrosion Studies of Quartz Particulate Reinforced ZA-27 Metal Matrix Composites

Authors: K. Vinutha

Abstract:

The stress corrosion resistance of ZA-27/TiO2 metal matrix composites (MMCs) in high-temperature acidic media has been evaluated using an autoclave. The liquid melt metallurgy technique with the vortex method was used to fabricate the MMCs. TiO2 particulates of 50-80 µm in size were added to the matrix, and ZA-27 composites containing 2, 4, and 6 weight percent of TiO2 were prepared. Stress corrosion tests were conducted by the weight loss method for different exposure times, normalities, and temperatures of the acidic medium. The corrosion rates of the composites were lower than that of the matrix ZA-27 alloy under all conditions.

Keywords: autoclave, MMC’s, stress corrosion, vortex method

Procedia PDF Downloads 476
19537 Temperature Distribution Inside Hybrid Photovoltaic-Thermoelectric Generator Systems and Their Dependency on Exposure Angles

Authors: Slawomir Wnuk

Abstract:

Due to the widespread implementation of renewable energy development programs, solar energy use is increasing constantly across the world. According to REN21, in 2020 the installed capacity of on-grid and off-grid solar photovoltaic systems together reached 760 GWdc, an increase of 139 GWdc over the previous year. However, the photovoltaic solar cells used for primary conversion of solar energy into electrical energy exhibit significant drawbacks. The fundamental downside is unstable and low conversion efficiency, negatively affected by a range of factors. To neutralise or minimise the impact of the factors causing energy losses, researchers have proposed varied ideas. One promising technological solution is the PV-MTEG multilayer hybrid system, which combines the advantages of photovoltaic cells and thermoelectric generators. A series of experiments was performed in the Glasgow Caledonian University laboratory to investigate such a system in operation. The Sol3A series solar simulator was employed as a stable solar irradiation source, and multichannel voltage and temperature data loggers were utilised for measurements. A two-layer hybrid system simulation model was built and tested for its energy conversion capability under a variety of exposure angles to the solar irradiation, with concurrent examination of the temperature distribution inside the proposed PV-MTEG structure. The same series of laboratory tests was carried out for a range of loads, with the generated voltage and the temperature measured and recorded for each combination of exposure angle and load. It was found that increasing the exposure angle of the PV-MTEG structure to the irradiation source decreases the temperature gradient ΔT between the system layers and reduces overall system heating; the reduced temperature gradient in turn negatively influences the voltage generation process.
The experiments showed that for exposure angles in the range from 0° to 45°, the generated voltage versus exposure angle dependence is closely linear. It was also found that the voltage generated by MTEG structures working at the determined optimal load drops by approximately 0.82% per 1° increase of the exposure angle. The drop becomes somewhat steeper as the load is increased beyond the optimal value, although the difference is not significant. Despite the linear character of the voltage-angle dependence, the temperature reduction between the structure layers and at the tested points on its surface was not linear. In conclusion, the PV-MTEG exposure angle appears to be an important parameter affecting the efficiency of energy generation by the thermoelectric generators incorporated inside such hybrid structures. The research revealed great potential of the proposed hybrid system. The experiments indicated interesting behaviour of the tested structures, and the results should provide a valuable contribution to the development and technological design of large energy conversion systems utilising similar structural solutions.
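The reported drop of roughly 0.82% per degree over the 0°-45° range suggests a simple linear model of the generated voltage; a sketch under that assumption (v0 is a hypothetical zero-angle voltage, not a measured value):

```python
# Linear voltage-angle model from the reported ~0.82 %/deg drop (valid 0-45 deg).
def mteg_voltage(v0, angle_deg, drop_per_deg=0.0082):
    """Generated voltage at a given exposure angle, relative to v0 at 0 deg."""
    return v0 * (1.0 - drop_per_deg * angle_deg)

print(f"{mteg_voltage(1.0, 45):.3f}")  # at 45 deg: 1 - 0.0082*45 = 0.631
```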

Keywords: photovoltaic solar systems, hybrid systems, thermo-electrical generators, renewable energy

Procedia PDF Downloads 89
19536 Pricing European Continuous-Installment Options under Regime-Switching Models

Authors: Saghar Heidari

Abstract:

In this paper, we study the valuation problem of European continuous-installment options under Markov-modulated models with a partial differential equation approach. Due to the opportunity of continuing or stopping the payment of installments, the valuation problem under regime-switching models can be formulated as coupled partial differential equations (CPDE) with free boundary features. To value the installment options, we express the truncated CPDE as a linear complementarity problem (LCP); a finite element method is then proposed to solve the resulting variational inequality. Under some appropriate assumptions, we establish the stability of the method and present numerical results examining the rate of convergence and accuracy of the proposed method for the pricing problem under the regime-switching model.
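Once discretized, an LCP of this kind (find z ≥ 0 with Mz + q ≥ 0 and z'(Mz + q) = 0) is commonly solved by projected SOR. The sketch below shows that generic approach on an illustrative tridiagonal system, not the paper's regime-switching finite-element discretization:

```python
import numpy as np

def psor_lcp(M, q, omega=1.2, tol=1e-10, max_iter=5000):
    """Projected SOR for the LCP: z >= 0, Mz + q >= 0, z'(Mz + q) = 0."""
    z = np.zeros(len(q))
    for _ in range(max_iter):
        z_old = z.copy()
        for i in range(len(q)):
            r = q[i] + M[i] @ z
            z[i] = max(0.0, z[i] - omega * r / M[i, i])  # SOR step + projection
        if np.max(np.abs(z - z_old)) < tol:
            break
    return z

# tridiagonal M, as arises from an implicit finite-difference/element time step
n = 50
M = (np.diag(np.full(n, 2.2))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
q = np.linspace(-1, 1, n)
z = psor_lcp(M, q)
w = M @ z + q
print(z.min(), w.min(), abs(z @ w))   # feasibility and complementarity residuals
```

Projected SOR converges for symmetric positive-definite M and 0 < omega < 2; in option pricing, z would be the excess of the option value over the stopping payoff at each grid node.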

Keywords: continuous-installment option, European option, regime-switching model, finite element method

Procedia PDF Downloads 137
19535 Divergence Regularization Method for Solving Ill-Posed Cauchy Problem for the Helmholtz Equation

Authors: Benedict Barnes, Anthony Y. Aidoo

Abstract:

A Divergence Regularization Method (DRM) is used to regularize the ill-posed Helmholtz equation with inhomogeneous boundary deflection in a Hilbert space H. The DRM incorporates a positive integer scalar that homogenizes the inhomogeneous boundary deflection in the Cauchy problem of the Helmholtz equation. This ensures the existence, as well as the uniqueness, of the solution for the equation. The DRM restores all three conditions of well-posedness in the sense of Hadamard.

Keywords: divergence regularization method, Helmholtz equation, ill-posed inhomogeneous Cauchy boundary conditions

Procedia PDF Downloads 189
19534 Pattern of Stress Distribution in Different Ligature-Wire-Bracket Systems: An FE and Experimental Analysis

Authors: Afef Dridi, Salah Mezlini

Abstract:

Since experimental devices cannot calculate the stress and deformation of complex structures, the Finite Element Method (FEM) has been widely used in several fields of research, one of which is orthodontics. The advantage of using such a method is that it is an accurate and non-invasive way to obtain sufficient data about the physiological reactions that can happen in soft tissues. Most research done in this field has focused on the stresses and deformations induced by orthodontic apparatus in soft (alveolar) tissues; only a few studies have considered the distribution of stress and strain in the orthodontic brackets themselves, and although these studies tried to be as close as possible to real conditions, their models did not reproduce clinical cases. For this reason, the model generated by our research is the closest one to reality. In this study, a numerical model was developed to explore the stress and strain distribution under realistic conditions, and a comparison between different material properties was also carried out.

Keywords: visco-hyperelasticity, FEM, orthodontic treatment, inverse method

Procedia PDF Downloads 259
19533 A Robust Spatial Feature Extraction Method for Facial Expression Recognition

Authors: H. G. C. P. Dinesh, G. Tharshini, M. P. B. Ekanayake, G. M. R. I. Godaliyadda

Abstract:

This paper presents a new spatial feature extraction method based on principal component analysis (PCA) and Fisher discriminant analysis (FDA) for facial expression recognition. It not only extracts reliable features for classification, but also reduces the feature space dimension of the pattern samples. In this method, each gray-scale image is first considered in its entirety as the measurement matrix; the principal components (PCs) of its row vectors and the variance of these row vectors along the PCs are then estimated. This ensures the preservation of the spatial information of the facial image. Afterwards, by incorporating the spectral information of the eigen-filters derived from the PCs, a feature vector is constructed for a given image. Finally, FDA is used to define a set of basis vectors in a reduced-dimension subspace such that optimal clustering is achieved: FDA defines an inter-class scatter matrix and an intra-class scatter matrix to enhance the compactness of each cluster while maximizing the distance between cluster marginal points. To match a test image with the training set, a cosine similarity based Bayesian classification is used. The proposed method was tested on the Cohn-Kanade and JAFFE databases. It was observed that the proposed method, which incorporates spatial information to construct an optimal feature space, outperforms the standard PCA- and FDA-based methods.
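A stripped-down sketch of the projection-and-matching idea: PCA via SVD followed by cosine-similarity matching against class means on synthetic data. The paper's eigen-filters, FDA step, and Bayesian classifier are omitted, and the two Gaussian "expression classes" below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 100
mu0, mu1 = np.zeros(d), np.zeros(d)
mu1[:5] = 3.0                                # separate the two classes
X = np.vstack([rng.normal(mu0, 1, (n, d)), rng.normal(mu1, 1, (n, d))])
y = np.array([0] * n + [1] * n)

Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:5].T                                 # top-5 principal directions
Z = Xc @ P                                   # projected features

means = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
pred = np.array([np.argmax([cos(z, m) for m in means]) for z in Z])
acc = (pred == y).mean()
print(f"accuracy: {acc:.2f}")
```

FDA would further rotate the projected space to maximize between-class versus within-class scatter before the cosine matching step.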

Keywords: facial expression recognition, principal component analysis (PCA), Fisher discriminant analysis (FDA), eigen-filter, cosine similarity, Bayesian classifier, f-measure

Procedia PDF Downloads 425
19532 A New Method for Winner Determination in Economic Resource Allocation in Cloud Computing Systems

Authors: Ebrahim Behrouzian Nejad, Rezvan Alipoor Sabzevari

Abstract:

Cloud computing systems are large-scale distributed systems that focus on large-scale resource sharing, cooperation of several organizations, and their use in new applications. One of the main challenges in this realm is resource allocation, for which many different approaches exist. Among them, economic methods are common, and the auction-based method has greater prominence compared with the fixed-price method. The double combinatorial auction is one suitable way of allocating resources in cloud computing; it includes two phases: winner determination and resource allocation. In this paper, a new method is presented to determine the winners in double combinatorial auction-based resource allocation using the Imperialist Competitive Algorithm (ICA). The experimental results show that the number of winning users with the proposed method is higher than with the genetic algorithm, while, on the other hand, the number of winning providers is higher with the genetic algorithm.

Keywords: cloud computing, resource allocation, double auction, winner determination

Procedia PDF Downloads 359
19531 Simulation and Controller Tuning in a Photo-Bioreactor Applying the Taguchi Method

Authors: Hosein Ghahremani, MohammadReza Khoshchehre, Pejman Hakemi

Abstract:

This study involves numerical simulation of a vertical plate-type photo-bioreactor to investigate the performance of the microalga Spirulina, together with control and optimization of the digital controller parameters by the Taguchi method, carried out with MATLAB and Qualitek-4 software. Because parameters such as temperature, dissolved carbon dioxide, biomass, and other physical parameters such as light intensity, as well as physiological conditions such as photosynthetic efficiency and light inhibition, are involved in the biological processes, control faces many challenges. Photo-bioreactors not only facilitate the efficient commercial production of microalgae as feed for aquaculture and as food supplements, but also serve as a possible platform for the production of active molecules such as antibiotics or innovative anti-tumor agents, for carbon dioxide removal, and for the removal of heavy metals from wastewater. A digital controller is designed to control the light of the bioreactor while the microalgae growth rate and the carbon dioxide concentration inside the bioreactor are investigated. The optimal values of the controller parameters obtained from the S/N and ANOVA analysis in Qualitek-4 were compared with those from the reaction-curve, Cohen-Coon, and Ziegler-Nichols methods. Based on the sum of squared errors obtained for each of these control methods, the Taguchi method was selected as the best method for controlling the light intensity of the photo-bioreactor; compared to the other methods, it offers higher stability and a shorter response time.
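For comparison, the classic Ziegler-Nichols closed-loop rules referenced above map the ultimate gain Ku and oscillation period Tu to PID gains. The Ku and Tu values below are illustrative, not measurements from the bioreactor:

```python
def zn_pid(Ku, Tu):
    """Classic Ziegler-Nichols closed-loop PID rules: Kp=0.6Ku, Ti=Tu/2, Td=Tu/8."""
    Kp = 0.6 * Ku
    Ti = 0.5 * Tu
    Td = 0.125 * Tu
    return Kp, Kp / Ti, Kp * Td   # (Kp, Ki, Kd) in parallel PID form

Kp, Ki, Kd = zn_pid(Ku=8.0, Tu=2.0)
print(Kp, Ki, Kd)
```

The Taguchi approach in the paper instead searches controller settings over an orthogonal array of experiments and picks the combination with the best S/N ratio.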

Keywords: photo-bioreactor, control and optimization, Light intensity, Taguchi method

Procedia PDF Downloads 393
19530 Determination of the Oxidative Potential of Organic Materials: Method Development

Authors: Jui Afrin, Akhtarul Islam

Abstract:

In this paper, solutions of glucose, yeast, and a glucose-yeast mixture are used as sample solutions for determining the chemical oxygen demand (COD). The general COD determination method covers a wide range of oxidative potentials; in this work, we determine a definite oxidative potential for different concentrations of known COD value and compare the experimental values with the theoretical ones in order to evaluate the method's drawbacks. In this study, sample solutions with nominal oxidative potentials of 400 mg/L, 500 mg/L, 600 mg/L, 700 mg/L, and 800 mg/L were prepared, and the oxidative potential was determined according to our developed method. Curves were drawn by plotting the experimental COD values against the sample concentrations in mg/L. These curves show that the curve for the glucose solution is not linear; it deviates from linearity at lower concentrations, and the reason for this deviation is unknown. If this drawback can be removed, the method could be used effectively to determine the oxidative potential of industrial wastewater (such as leather industry wastewater, municipal wastewater, food industry wastewater, textile wastewater, and pharmaceutical wastewater); therefore, more experiments and study are required.

Keywords: BOD (biological oxygen demand), COD (chemical oxygen demand), oxidative potential, titration, waste water, method development

Procedia PDF Downloads 229
19529 Numerical Modeling of Wave Run-Up in Shallow Water Flows Using Moving Wet/Dry Interfaces

Authors: Alia Alghosoun, Michael Herty, Mohammed Seaid

Abstract:

We present a new class of numerical techniques to solve shallow water flows over dry areas, including run-up. Many recent investigations of wave run-up in coastal areas are based on the well-known shallow water equations, and numerical simulations have also been performed to understand the effects of several factors on tsunami wave impact and run-up in the presence of coastal areas. In all these simulations, the shallow water equations are solved in the entire domain, including dry areas, and special treatments are used for the numerical solution of singularities at these dry regions. In the present study, we propose a new method that avoids these difficulties by reformulating the shallow water equations into a new system to be solved only in the wetted domain. The system is obtained by a change of coordinates leading to a set of equations in a moving domain, for which the wet/dry interface is reconstructed using the wave speed. To solve the new system, we present a finite volume method of Lax-Friedrichs type along with a modified method of characteristics. The method is well-balanced and accurately resolves dam-break problems over dry areas.
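A minimal wet-bed Lax-Friedrichs sketch for the 1D shallow water equations on a dam-break setup follows. It illustrates only the finite-volume flavor of the scheme; the paper's moving-domain reformulation, wet/dry-interface reconstruction, and method of characteristics are not reproduced:

```python
import numpy as np

g = 9.81
N = 200
x = np.linspace(-1.0, 1.0, N)
h = np.where(x < 0, 2.0, 1.0)           # dam-break initial depth (wet everywhere)
q = np.zeros(N)                          # discharge h*u
dx = x[1] - x[0]
dt = 0.001                               # CFL-safe: max wave speed ~ sqrt(2g) ≈ 4.4 m/s

def flux(h, q):
    """Shallow water flux (mass, momentum)."""
    return np.array([q, q**2 / h + 0.5 * g * h**2])

U = np.array([h, q])
for _ in range(200):                     # advance to t = 0.2 s
    F = flux(U[0], U[1])
    # Lax-Friedrichs update on interior cells
    U[:, 1:-1] = 0.5 * (U[:, 2:] + U[:, :-2]) - dt / (2 * dx) * (F[:, 2:] - F[:, :-2])
print(U[0].min(), U[0].max())
```

On a dry bed (h → 0), the momentum flux division blows up, which is exactly the singularity the paper's wetted-domain reformulation is designed to avoid.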

Keywords: dam-break problems, finite volume method, run-up waves, shallow water flows, wet/dry interfaces

Procedia PDF Downloads 145
19528 Screening of Strategic Management Criterions in Hospitals Using Delphi-Fuzzy Method

Authors: Helia Moayedi, Mahdi Moaidi

Abstract:

Nowadays, the management and planning of hospitals face many problems. Failure to recognize the main criteria for strategic management to ensure long-term hospital performance can lead to many health problems. To achieve this goal, a qualitative-quantitative method titled Delphi-Fuzzy has been applied; this strategy makes it possible for experts to screen the most important criteria in strategic management. To conduct this operation, a statistical population consisting of 20 experts in Ahwaz hospitals was questioned. The final model confirms the key criteria after three stages of Delphi. This model makes it possible to focus on the basic criteria and can determine the organization's main orientation.

Keywords: Delphi-fuzzy method, hospital management, long-term planning, qualitative-quantitative method, screening of strategic criteria, strategic planning

Procedia PDF Downloads 131
19527 Effectiveness of Buteyko Method in Asthma Control and Quality of Life of School-Age Children

Authors: Romella C. Lina, Matthew Daniel V. Leysa, Zarah D. F. Libozada, Maria Francesca I. Lirio, Angelo A. Liwag, Gabriel D. Ramos, Margaret M. Natividad

Abstract:

This study aimed to determine the effectiveness of Buteyko Method in asthma control and quality of life of school-age children wherein a pretest-posttest design was utilized to measure the changes after the administration of Buteyko Method. Fourteen (14) subjects with bronchial asthma, aged 7-11 participated in the study. They were equally divided into two groups: the control group received no intervention while the experimental group was asked to attend sessions of Buteyko Method lecture and demonstration. The experimental group was visited for three (3) consecutive weeks to monitor their progress and compliance. Both groups were asked to answer ACQ pre- and post-intervention and PAQLQ before the start of the intervention phase and every week during the follow-up visits. In comparing the asthma control pre-test and post-test mean scores of the control group, no significant difference was noted (p=0.177) while the experimental group showed a significant difference after the administration of Buteyko Method (p=0.002). Moreover, the quality of life pre-test and post-test mean scores of the control group showed no significant difference in any week within one month of follow-up (p=0.736, 0.604, 0.689) while the experimental group showed a significant difference on the third week (p = 0.035) and fourth week (p=0.002) but no significant difference on the second week (p=0.111). Therefore, the use of Buteyko Method within 3-4 weeks as an adjunct to conventional management of asthma helps in improving asthma control and quality of life of school-age children.

Keywords: Buteyko Method, asthma, school-age children, asthma control, quality of life

Procedia PDF Downloads 424
19526 Analysis of Three-Dimensional Cracks in an Isotropic Medium by the Semi-Analytical Method

Authors: Abdoulnabi Tavangari, Nasim Salehzadeh

Abstract:

We consider a cylindrical medium under uniform loading with a penny-shaped crack located at the center of the cylinder. In crack growth analysis, the Stress Intensity Factor (SIF) is a fundamental prerequisite. In the present study, according to the Ritz method and by considering a cylindrical coordinate system as the main coordinate system together with a local polar coordinate system, the mode-I SIF of a three-dimensional penny-shaped crack is obtained. In this method, the unknown coefficients are obtained by minimizing the potential energy, which includes the strain energy and the work of external forces. Using Hooke's law, the stress fields are obtained, and then, using the Irwin equations, the SIF is obtained near the edge of the crack. This problem has been solved for an infinite medium in the Tada handbook, and the result of the present research is compared with that solution.
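For the reference configuration, a penny-shaped crack of radius a in an infinite medium under remote tension σ has the classical mode-I SIF K_I = 2σ√(a/π), as tabulated in the Tada handbook; a one-line check with illustrative numbers:

```python
import math

def sif_penny(sigma, a):
    """Mode-I SIF of a penny-shaped crack (radius a) under remote tension sigma."""
    return 2.0 * sigma * math.sqrt(a / math.pi)

K = sif_penny(sigma=100e6, a=0.005)       # 100 MPa remote stress, 5 mm crack radius
print(f"K_I = {K / 1e6:.2f} MPa*sqrt(m)")
```

The Ritz-based solution for the finite cylinder would approach this value as the cylinder radius grows large relative to a.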

Keywords: three-dimensional cracks, penny-shaped crack, stress intensity factor, fracture mechanics, Ritz method

Procedia PDF Downloads 396
19525 Determining Which Material Properties Resist the Tool Wear When Machining Pre-Sintered Zirconia

Authors: David Robert Irvine

Abstract:

In the dental restoration sector, there has been a shift to using zirconia. With the ever-increasing need to reduce lead times and deliver restorations faster, the zirconia is machined in its pre-sintered state instead of grinding the very hard sintered state. As with all machining there is tool wear, and while investigating the tooling used to machine pre-sintered zirconia it became apparent that the wear rate is driven more by material build-up and abrasion than by plastic deformation, as in conventional metal machining. It also came to light that tool material currently cannot be selected on the basis of wear resistance, as no data exist. Previous works have analysed the effect of each wear mechanism separately, using similar if not identical materials. In this work, the wear testing method was modified from ISO 8688:1989 to use pre-sintered zirconia and the cutting conditions used in dentistry to machine it. This understanding was developed through a series of tests based on machining operations, to give the best representation of the multiple wear factors that can occur in the machining of pre-sintered zirconia, such as three-body abrasion, material build-up, surface welding, plastic deformation, tool vibration and thermal cracking. The testing found that carbide grades with low trans-granular rupture toughness failed due to abrasion, while those with high trans-granular rupture toughness failed due to edge chipping from build-up or thermal effects. The results gained can assist the development of these tools and the restorative dental process. This work was completed with the aim of assisting the selection of tool material for future tools, along with providing a deeper understanding of the properties that confer resistance to abrasive wear and material build-up.

Keywords: abrasive wear, cemented carbide, pre-sintered zirconia, tool wear

Procedia PDF Downloads 159
19524 Well Inventory Data Entry: Utilization of Developed Technologies to Progress the Integrated Asset Plan

Authors: Danah Al-Selahi, Sulaiman Al-Ghunaim, Bashayer Sadiq, Fatma Al-Otaibi, Ali Ameen

Abstract:

In light of recent changes affecting the Oil & Gas Industry, optimization measures have become imperative for all companies globally, including Kuwait Oil Company (KOC). To keep abreast of the dynamic market, a detailed Integrated Asset Plan (IAP) was developed to drive optimization across the organization, facilitated through the in-house developed software “Well Inventory Data Entry” (WIDE). This comprehensive and integrated approach enabled the centralization of all planned asset components for better well planning, the enhancement of performance, and continuous improvement through performance tracking and mid-term forecasting. Traditionally, this was hard to achieve, as various legacy methods were used in the past. This paper briefly describes the methods successfully adopted to meet the company's objective. IAPs were initially designed using computerized spreadsheets. However, as the data captured became more complex and the number of stakeholders requiring and updating this information grew, the need to automate the conventional spreadsheets became apparent. WIDE, already in use in other areas of the company (namely, the Workover Optimization project), was utilized to meet the dynamic requirements of the IAP cycle. With the growth of extensive features to enhance the planning process, the tool evolved into a centralized data hub for all asset groups and technical support functions to analyze and draw inferences from, leading WIDE to become the reference two-year operational plan for the entire company. To achieve WIDE's goal of operational efficiency, asset groups continuously add their parameters in a series of predefined workflows, creating a structured process that allows risk factors to be flagged and helps mitigate them. The tool assigns responsibilities to all stakeholders in a way that enables continuous updates of daily performance measures and operational use.
The reliable availability of WIDE, combined with its user-friendliness and easy accessibility, created a platform of cross-functionality among all asset groups and technical support groups for updating the contents of their respective planning parameters. The home-grown entity was implemented across the entire company and tailored to feed into the internal processes of several stakeholders. Furthermore, the implementation of change management and root cause analysis techniques captured the dysfunctionality of previous plans, which in turn resulted in the improvement of the planning mechanisms already existing within the IAP. The detailed elucidation of the two-year plan flagged any upcoming risks and shortfalls foreseen in the plan. All results were translated into a series of developments that propelled the tool's capabilities beyond planning and into operations (such as asset production forecasts, setting KPIs, and estimating operational needs). This process exemplifies the ability and reach of applying advanced development techniques to seamlessly integrate the planning parameters of various asset and technical support groups. These techniques enhance the integration of planning data workflows and ultimately lay the foundation for greater accuracy and reliability. As such, benchmarks establishing a set of standard goals are created to ensure constant improvement in the efficiency of the entire planning and operational structure.

Keywords: automation, integration, value, communication

Procedia PDF Downloads 146
19523 Half-Circle Fuzzy Number Threshold Determination via Swarm Intelligence Method

Authors: P. W. Tsai, J. W. Chen, C. W. Chen, C. Y. Chen

Abstract:

In recent years, many researchers have been involved in the field of fuzzy theory. However, there are still many issues to be resolved, especially in topics related to controller design, such as robotics, artificial intelligence, and nonlinear systems. Besides fuzzy theory, swarm intelligence algorithms are also a popular field of research. In this paper, a concept is proposed that utilizes one of the swarm intelligence methods, Bacterial-GA Foraging, to find the stabilized common P matrix for the fuzzy controller system. An example is given in the paper as well.
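The role of a common P matrix can be illustrated with a small feasibility check: a candidate P stabilizes every fuzzy subsystem if P is positive definite and AᵢᵀP + PAᵢ is negative definite for each subsystem matrix Aᵢ. A minimal 2×2 sketch, assuming a continuous-time formulation and hypothetical subsystem matrices (not the paper's system):

```python
def is_pos_def_2x2(M):
    """Sylvester's criterion for a symmetric 2x2 matrix."""
    return M[0][0] > 0 and M[0][0] * M[1][1] - M[0][1] * M[1][0] > 0

def lyapunov_term(A, P):
    """Compute A^T P + P A for 2x2 matrices stored as nested lists."""
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
    AtP, PA = matmul(At, P), matmul(P, A)
    return [[AtP[i][j] + PA[i][j] for j in range(2)] for i in range(2)]

# Hypothetical stable subsystem matrices (assumptions, not from the paper).
A1 = [[-2.0, 1.0], [0.0, -3.0]]
A2 = [[-1.5, 0.5], [0.2, -2.0]]
P = [[1.0, 0.0], [0.0, 1.0]]  # candidate common P

# P is common if it is positive definite and each A_i^T P + P A_i
# is negative definite (i.e., its negation is positive definite).
common = is_pos_def_2x2(P) and all(
    is_pos_def_2x2([[-m for m in row] for row in lyapunov_term(A, P)])
    for A in (A1, A2))
print(common)
```

The swarm-intelligence contribution of the paper lies in searching for such a P automatically; the check above is only the verification step that any candidate must pass.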

Keywords: half-circle fuzzy numbers, predictions, swarm intelligence, Lyapunov method

Procedia PDF Downloads 685
19522 Streamwise Vorticity in the Wake of a Sliding Bubble

Authors: R. O’Reilly Meehan, D. B. Murray

Abstract:

In many practical situations, bubbles are dispersed in a liquid phase. Understanding these complex bubbly flows is therefore a key issue for applications such as shell and tube heat exchangers, mineral flotation and oxidation in water treatment. Although a large body of work exists for bubbles rising in an unbounded medium, bubbles rising in constricted geometries have received less attention. The particular case of a bubble sliding underneath an inclined surface is common to two-phase flow systems. The current study intends to expand this knowledge by performing experiments to quantify the streamwise flow structures associated with a single sliding air bubble under an inclined surface in quiescent water. This is achieved by means of two-dimensional, two-component particle image velocimetry (PIV), performed with a continuous wave laser and high-speed camera. PIV vorticity fields obtained in a plane perpendicular to the sliding surface show that there is significant bulk fluid motion away from the surface. The associated momentum of the bubble means that this wake motion persists for a significant time before viscous dissipation. The magnitude and direction of the flow structures in the streamwise measurement plane are found to depend on the point on its path through which the bubble enters the plane. This entry point, represented by a phase angle, affects the nature and strength of the vortical structures. This study reconstructs the vorticity field in the wake of the bubble, converting the field at different instants in time into slices of a large-scale wake structure. This is, in essence, Taylor's "frozen turbulence" hypothesis. Applying this to the vorticity fields provides a pseudo three-dimensional representation from 2-D data, allowing for a more intuitive understanding of the bubble wake.
This study provides insights into the complex dynamics of a situation common to many engineering applications, particularly shell and tube heat exchangers in the nucleate boiling regime.
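In essence, Taylor's hypothesis maps each measurement instant to a streamwise station using an assumed convection velocity, so that successive 2-D vorticity slices can be stacked into a pseudo three-dimensional wake. A minimal sketch with hypothetical frame times and an assumed bubble convection velocity (the actual values are measured in the experiment):

```python
def frozen_turbulence_positions(times, u_c):
    """Map sample times to streamwise positions x = u_c * t,
    per Taylor's frozen-turbulence hypothesis."""
    return [u_c * t for t in times]

times = [0.00, 0.01, 0.02, 0.03]  # s, hypothetical PIV frame times
u_c = 0.25                        # m/s, assumed bubble convection velocity
x = frozen_turbulence_positions(times, u_c)
# Each 2-D vorticity slice recorded at times[k] is placed at station x[k],
# turning the time series into a spatial reconstruction of the wake.
print(x)
```

The hypothesis holds only while the wake structures convect without significant evolution, which is why the abstract notes that the momentum of the bubble keeps the motion coherent long enough for the reconstruction to be meaningful.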

Keywords: bubbly flow, particle image velocimetry, two-phase flow, wake structures

Procedia PDF Downloads 377
19521 RFID and Intelligence: A Smart Authentication Method for Blind People

Authors: V. Vishu, R. Manimegalai

Abstract:

This work combines intelligence and radio frequency identification (RFID) to provide an enhanced authentication method for visually challenged people. The main goal is to improve authentication by combining the Advanced Encryption Standard (AES) algorithm with intelligence. Here, the encryption key is generated as a combination of intelligent information from sensors and tag values. The main challenges are security, privacy, and cost. In addition, the method was designed to evaluate the amount of interaction between the sensors and its influence on visually challenged people's mental and physical states. The proposal is to apply these ideas to independent living and to assist visually challenged people in leading a good life.
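The abstract does not specify how the sensor readings and tag values are combined into a key. One plausible sketch, assuming a SHA-256 hash as the key-derivation step truncated to an AES-128 key length (the function and inputs below are illustrative, not the paper's scheme):

```python
import hashlib

def derive_key(tag_value: bytes, sensor_reading: bytes) -> bytes:
    """Derive a 128-bit key by hashing the RFID tag value together with
    sensor data. SHA-256 as the KDF is an assumption; the paper does not
    name the combination method."""
    return hashlib.sha256(tag_value + sensor_reading).digest()[:16]

# Hypothetical tag value and sensor reading.
key = derive_key(b"TAG-0042", b"temp=36.6;steps=1200")
print(len(key))  # 16 bytes, the AES-128 key length
```

Because the key depends on live sensor input as well as the tag, an attacker cloning the tag alone would not reproduce the key, which is presumably the security benefit the combination is intended to provide.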

Keywords: AES, encryption, intelligence, smart key

Procedia PDF Downloads 241
19520 Why Is the Recurrence Rate of Residual or Recurrent Disease Following Endoscopic Mucosal Resection (EMR) of Oesophageal Dysplasias and T1 Tumours Higher in the Greater Midlands Cancer Network?

Authors: Harshadkumar Rajgor, Jeff Butterworth

Abstract:

Background: Barrett's oesophagus increases the risk of developing oesophageal adenocarcinoma. Over the last 40 years, there has been a 6-fold increase in the incidence of oesophageal adenocarcinoma in the western world, and incidence rates are increasing at a greater rate than those of cancers of the colon, breast, and lung. Endoscopic mucosal resection (EMR) is a relatively new technique being used by two centres in the Greater Midlands Cancer Network. EMR can be used for curative or staging purposes for high-grade dysplasias and T1 tumours of the oesophagus. EMR is also suitable for those who are deemed high risk for oesophagectomy. EMR has a recurrence rate of 21% according to the Wiesbaden data. Method: A retrospective study of prospectively collected data was carried out involving 24 patients who had EMR for curative or staging purposes. Complications of residual or recurrent disease following EMR that required further treatment were investigated. Results: In 54% of cases, residual or recurrent disease was suspected. 96% of patients were given clear and concise information regarding their diagnosis of high-grade dysplasia or T1 tumours. All 24 patients consulted the same specialist healthcare team. Conclusion: EMR is a safe and effective treatment for patients with high-grade dysplasia and T1N0 tumours. In 54% of cases, residual or recurrent disease was suspected. Initially, only single resections were undertaken; multiple resections are now being carried out to reduce the risk of recurrence. Complications from EMR remain low in this series and consisted of a single episode of post-procedural bleeding.

Keywords: endoscopic mucosal resection, oesophageal dysplasia, T1 tumours, cancer network

Procedia PDF Downloads 316
19519 The Realization of a System’s State Space Based on Markov Parameters by Using Flexible Neural Networks

Authors: Ali Isapour, Ramin Nateghi

Abstract:

Markov parameters are unique parameters of a system and remain unchanged under similarity transformations. The Markov parameters form a power series that is convergent only if the eigenvalues of the system matrix lie inside the unit circle; therefore, the Markov parameters of a stable discrete-time system are convergent. In this study, we aim to realize the system's state space from its Markov parameters by using Artificial Neural Networks (ANN); to this end, we use Flexible Neural Networks. Realization means determining the elements of the matrices A, B, C, and D.
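For a state-space model (A, B, C, D), the Markov parameters are h_0 = D and h_k = C A^{k-1} B for k >= 1; these are the impulse-response coefficients that the realization step works backwards from. A minimal SISO sketch with a hypothetical stable discrete-time system (eigenvalues 0.5 and 0.4, inside the unit circle):

```python
def markov_parameters(A, B, C, D, n):
    """First n Markov parameters of a SISO system (A, B, C, D):
    h_0 = D, h_k = C A^{k-1} B for k >= 1, using plain nested lists."""
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(len(v)))
                for i in range(len(M))]
    params = [D]
    x = B[:]  # holds A^{k-1} B, starting at k = 1
    for _ in range(n - 1):
        params.append(sum(c * xi for c, xi in zip(C, x)))
        x = matvec(A, x)
    return params

# Hypothetical stable discrete-time system (not from the paper).
A = [[0.5, 0.1], [0.0, 0.4]]
B = [1.0, 0.0]
C = [1.0, 1.0]
D = 0.0
print(markov_parameters(A, B, C, D, 4))  # [0.0, 1.0, 0.5, 0.25]
```

Because these parameters are invariant under similarity transformations, any realization the network produces that reproduces them is an equally valid state-space model of the same input-output behaviour.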

Keywords: Markov parameters, realization, activation function, flexible neural network

Procedia PDF Downloads 194