Search results for: Bit Error Rate (BER)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3727

1777 Design of a Fuzzy Feed-forward Controller for Monitor HAGC System of Cold Rolling Mill

Authors: S. Khosravi, A. Afshar, F. Barazandeh

Abstract:

In this study, we propose a novel monitor hydraulic automatic gauge control (HAGC) system based on a fuzzy feed-forward controller. It is used in the development of cold rolling mill automation systems to improve the quality of cold strip. According to the features of the entry steel strip, such as its average yield stress, strip width, and desired exit thickness, this controller compensates for the exit thickness error. Traditional methods of adjusting the roller position cannot tolerate the variance in the entry steel strip. The proposed method uses a mathematical model of the system together with expert knowledge to perform this adjustment while minimizing the effect of the stated problem. In order to improve the speed of the controller in rejecting disturbances introduced by entry strip thickness variations, expert knowledge is added as a feed-forward term to the HAGC system. Simulation results for the application of the proposed controller to a real cold mill show that the exit strip quality is highly improved.
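
As a rough illustration of how a fuzzy feed-forward term can map entry-strip features to a roll-gap correction, the sketch below uses hypothetical triangular membership functions, an invented rule base and invented scaling ranges; it is not the authors' controller.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def feedforward_gap_correction(yield_stress, strip_width, exit_thickness):
    """Return an illustrative roll-gap correction (mm) from entry-strip features."""
    # Normalise the inputs into [0, 1] (scaling ranges are assumptions).
    s = np.clip((yield_stress - 200.0) / 400.0, 0.0, 1.0)   # MPa
    w = np.clip((strip_width - 600.0) / 1000.0, 0.0, 1.0)   # mm
    t = np.clip(exit_thickness / 3.0, 0.0, 1.0)             # mm

    low  = lambda x: tri(x, -0.5, 0.0, 0.5)
    med  = lambda x: tri(x,  0.0, 0.5, 1.0)
    high = lambda x: tri(x,  0.5, 1.0, 1.5)

    # Illustrative rule base: harder/wider strip and thinner target -> larger correction.
    rules = [
        (min(high(s), low(t)),  +0.08),
        (min(med(s),  med(t)),  +0.03),
        (min(low(s),  high(t)), -0.02),
        (high(w),               +0.02),
    ]
    num = sum(mu * out for mu, out in rules)   # weighted-average defuzzification
    den = sum(mu for mu, _ in rules)
    return num / den if den > 0 else 0.0

print(feedforward_gap_correction(yield_stress=450.0, strip_width=1200.0,
                                 exit_thickness=0.8))
```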

Keywords: Fuzzy feed-forward controller, monitor HAGC system, dynamic mathematical model, entry strip thickness deviation compensation

1776 Development of Prediction Models of Day-Ahead Hourly Building Electricity Consumption and Peak Power Demand Using the Machine Learning Method

Authors: Dalin Si, Azizan Aziz, Bertrand Lasternas

Abstract:

To encourage building owners to purchase electricity on the wholesale market and reduce building peak demand, this study aims to develop models that predict day-ahead hourly electricity consumption and demand using an artificial neural network (ANN) and a support vector machine (SVM). All prediction models are built in Python with the Scikit-learn and PyBrain toolkits. The input data for both consumption and demand prediction are the time stamp, outdoor dry-bulb temperature, relative humidity, air handling unit (AHU) supply air temperature, and solar radiation. Solar radiation, which is unavailable a day ahead, is predicted first, and this estimate is then used as an input to predict consumption and demand. Models to predict consumption and demand are trained with both SVM and ANN, and depend on cooling or heating and weekdays or weekends. The results show that the ANN is the better option for both consumption and demand prediction. It achieves a coefficient of variation of the root mean square error (CVRMSE) of 15.50% to 20.03% for consumption prediction and 22.89% to 32.42% for demand prediction. To conclude, the presented models have the potential to help building owners purchase electricity on the wholesale market, but they are not robust when used in demand response control.
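
The following minimal Python sketch illustrates the general workflow described above, an ANN and an SVM regressor scored with CVRMSE, using Scikit-learn and synthetic stand-in data rather than the study's building data and PyBrain models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def cvrmse(y_true, y_pred):
    """Coefficient of variation of the RMSE, in percent."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / np.mean(y_true)

# X: time-stamp features, outdoor dry-bulb temperature, relative humidity,
#    AHU supply air temperature, (predicted) solar radiation; y: hourly kWh.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = 100 + 10 * X[:, 1] + 5 * X[:, 4] + rng.normal(scale=3, size=500)
X_train, X_test, y_train, y_test = X[:400], X[400:], y[:400], y[400:]

models = {
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                      random_state=0)),
    "SVM": make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1)),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "CVRMSE: %.2f%%" % cvrmse(y_test, model.predict(X_test)))
```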

Keywords: Building energy prediction, data mining, demand response, electricity market.

1775 Determination of an Efficient Differentiation Pathway of Stem Cells Employing Predictory Neural Network Model

Authors: Mughal Yar M, Israr Ul Haq, Bushra Noman

Abstract:

Stem cells have the ability to differentiate through mitotic cell division into a wide range of specialized cell types. Cellular differentiation is the process by which a less specialized cell develops into a more specialized one. This paper studies the fundamental problem of a computational schema for an artificial neural network based on chemical, physical and biological variables of state. Through this type of study, the system can be modelled for viable propagation of various economically important stem cell differentiations. This paper proposes various differentiation outcomes of an artificial neural network into a variety of potential specialized cells, implemented in MATLAB version 2009. A feed-forward back-propagation network was created with an input vector of five elements, a single hidden layer and one output unit in the output layer. The efficiency of the neural network was evaluated by comparing the results achieved in this study with the experimental input data and the chosen target data. The proposed solution was assessed by a comparative analysis of the mean square error at zero epochs. Different data variables were used to test the targeted results.

Keywords: Computational schema, meiosis, mitosis, neural network, stem cell, SOM.

1774 Probability-Based Damage Detection of Structures Using Kriging Surrogates and Enhanced Ideal Gas Molecular Movement Algorithm

Authors: M. R. Ghasemi, R. Ghiasi, H. Varaee

Abstract:

Surrogate models have received increasing attention for use in detecting damage of structures based on vibration modal parameters. However, uncertainties in the measured vibration data may lead to false or unreliable output from such models. In this study, an efficient approach based on Monte Carlo simulation is proposed to take into account the effect of uncertainties in developing a surrogate model. The probability of damage existence (PDE) is calculated based on the probability density functions of the undamaged and damaged states. The kriging technique allows one to genuinely quantify the surrogate error; therefore, it is chosen as the metamodeling technique. An enhanced version of the ideal gas molecular movement (EIGMM) algorithm is used as the main algorithm for model updating. The developed approach is applied to detect simulated damage in numerical models of a 72-bar space truss and a 120-bar dome truss. The simulation results show that the proposed method performs well in probability-based damage detection of structures with less computational effort compared to a direct finite element model.
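
A minimal sketch of the probability-of-damage-existence idea, assuming a one-parameter stiffness-reduction model and a kriging surrogate fitted with Scikit-learn; the damage indicator, noise level and Monte Carlo comparison rule below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

# Hypothetical "true" model: first natural frequency as a function of a
# stiffness-reduction damage parameter d in [0, 1] (0 = undamaged).
true_frequency = lambda d: 10.0 * np.sqrt(1.0 - 0.4 * d)

# Train a kriging (Gaussian process) surrogate from a few expensive FE runs.
d_train = np.linspace(0.0, 1.0, 8).reshape(-1, 1)
f_train = true_frequency(d_train).ravel()
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.3),
                              normalize_y=True).fit(d_train, f_train)

# Monte Carlo: propagate measurement noise through the surrogate for the
# undamaged state (d = 0) and a candidate damaged state (d = 0.3).
n_mc = 20000
f_undamaged = gp.predict(np.zeros((1, 1)))[0] + rng.normal(scale=0.05, size=n_mc)
f_damaged = gp.predict(np.full((1, 1), 0.3))[0] + rng.normal(scale=0.05, size=n_mc)

# Probability of damage existence: fraction of samples in which the damaged
# prediction falls below the undamaged one (a frequency drop indicates damage).
pde = np.mean(f_damaged < f_undamaged)
print("Probability of damage existence: %.3f" % pde)
```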

Keywords: Enhanced ideal gas molecular movement, Kriging, probability-based damage detection, probability of damage existence, surrogate modeling, uncertainty quantification.

1773 A New Intelligent, Dynamic and Real Time Management System of Sewerage

Authors: R. Tlili Yaakoubi, H. Nakouri, O. Blanpain, S. Lallahem

Abstract:

Current tools for real-time management of sewer systems are based on two software components: weather forecasting software and hydraulic simulation software. The former is an important source of imprecision and uncertainty, while the latter requires long decision time steps because of its computation time. As a consequence, the obtained results are generally different from those expected. The main idea of this project is to change the basic paradigm by approaching the problem from the automatic control side rather than from the hydrology side. The objective is to make it possible to run a large number of simulations in very short times (a few seconds), replacing weather forecasts by directly using real-time measured rainfall data. The aim is to reach a system where decision-making is based on reliable data and where the error is permanently corrected. A first model of control laws was built and tested with rainfalls of different return periods. The gains obtained in discharged volume vary from 19 to 100%. A new algorithm was then developed to optimize computation time and thus overcome the combinatorial problem encountered in our first approach. Finally, this new algorithm was tested with a 16-year rainfall series. The gains obtained are 40% in the total volume discharged to the natural environment and 65% in the number of discharge events.

Keywords: Automation, optimization, paradigm, RTC.

1772 Automatic Generation Control Design Based on Full State Vector Feedback for a Multi-Area Energy System Connected via Parallel AC/DC Lines

Authors: Gulshan Sharma

Abstract:

This article presents the design of optimal automatic generation control (AGC) based on full state feedback control for a multi-area interconnected power system. An extra-high-voltage AC transmission line in parallel with a high-voltage DC link is considered as the interconnection between the areas. The optimal AGC is designed and implemented in the wake of a 1% load perturbation in one of the areas, and the system dynamic response plots for various system states are obtained to investigate the dynamic performance. The pattern of closed-loop eigenvalues is also determined to analyze the system stability. The investigations reveal that the dynamic performance of the system under consideration improves appreciably when a high-voltage DC line is paralleled with an extra-high-voltage AC line as the interconnection between the areas. The investigation of closed-loop eigenvalues shows that system stability is ensured in all case studies carried out with the designed optimal AGC.
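
A compact sketch of full-state-feedback design via the continuous algebraic Riccati equation, using SciPy and an illustrative single-area governor-turbine model; the toy matrices and weights are assumptions, and the paper's multi-area AC/DC model has additional tie-line states.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Optimal full-state-feedback gain K for x_dot = A x + B u, with u = -K x."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Toy single-area example: states = [frequency deviation, mechanical power,
# governor valve position]; parameter values are illustrative only.
Tp, Kp, Tt, Tg, Rdroop = 20.0, 120.0, 0.3, 0.08, 2.4
A = np.array([[-1 / Tp,          Kp / Tp,  0.0],
              [ 0.0,            -1 / Tt,   1 / Tt],
              [-1 / (Rdroop * Tg), 0.0,   -1 / Tg]])
B = np.array([[0.0], [0.0], [1 / Tg]])
Q = np.diag([10.0, 1.0, 1.0])
R = np.array([[1.0]])

K = lqr_gain(A, B, Q, R)
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print("K =", K)
print("Closed-loop eigenvalues:", closed_loop_eigs)  # negative real parts => stable
```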

Keywords: Automatic generation control, area control error, DC link, optimal AGC regulator, closed-loop eigenvalues.

1771 Secure Low-Bandwidth Video Streaming through Reliable Multipath Propagation in MANETs

Authors: S. Mohideen Badhusha, K. Duraiswamy

Abstract:

Most existing video streaming protocols provide video services without considering security aspects in decentralized mobile ad hoc networks. The security policies adapted from existing non-streaming protocols do not comply with live video streaming protocols, resulting in considerable vulnerability, high bandwidth consumption and unreliability, which in turn cause severe security threats, low bandwidth and error-prone transmission in video streaming applications. Therefore, a synergized methodology is required to reduce vulnerability and bandwidth consumption and to enhance the reliability of video streaming applications in MANETs. To ensure security with reduced bandwidth consumption and improved reliability, a Secure Low-bandwidth Video Streaming through Reliable Multipath Propagation (SLVRMP) protocol architecture is proposed, incorporating two algorithms, namely the Secure Low-bandwidth Video Streaming Algorithm and the Reliable Secure Multipath Propagation Algorithm, using layered video coding in a non-overlapping zone routing network topology. The performance of the proposed system is compared with those of the existing secure multipath protocols Sec-MR and SPREAD using NS 2.34, and the simulation results show that the performance of the proposed system is considerably improved.

Keywords: Bandwidth consumption, layered video coding, multipath propagation, reliability, security threats, video streaming applications, vulnerability.

1770 Main Tendencies of Youth Unemployment and the Regulation Mechanisms for Decreasing Its Rate in Georgia

Authors: Nino Paresashvili, Nino Abesadze

Abstract:

The modern world faces huge challenges. Globalization has changed the socio-economic conditions of many countries. The current processes in the global environment have a different impact on countries with different cultures. However, alleviation of poverty and improvement of living conditions remain basic challenges for the majority of countries, because much of the population still lives below the official poverty threshold. It is very important to stimulate youth employment. In order to prepare young people for the labour market, it is essential to provide them with the appropriate professional skills and knowledge. It is necessary to plan efficient activities for decreasing the unemployment rate and to develop sound mechanisms for regulation of the labour market. Such planning requires thorough study and analysis of the existing reality, as well as development of corresponding mechanisms. Statistical analysis of unemployment is one of the main platforms for regulating the key mechanisms of the labour market. The corresponding statistical methods should be used in the study process, namely observation, gathering, grouping, and calculation of generalized indicators. Unemployment is one of the most severe socio-economic problems in Georgia. According to past as well as current statistics, unemployment rates have always been among the most problematic issues for policy makers to resolve. Analytical work on the above-mentioned problem will be the basis for the next sustainable steps towards solving it. The results of the study show that the choices of young people are often driven neither by their inclinations and interests nor by labour market demand. That is why the wrong professional orientation of young people in most cases leads to their unemployment. At the same time, it was shown that there are a number of professions with high demand in the labour market because of the deficit of appropriate specialists. To achieve healthy competitiveness in youth employment, it is necessary to formulate regional employment programs that take into account the regional infrastructure specifications.

Keywords: Unemployment, analysis, methods, tendencies, regulation mechanisms.

1769 Influence of Infrared Radiation on the Growth Rate of Microalgae Chlorella sorokiniana

Authors: Natalia Politaeva, Iuliia Smiatskaia, Iuliia Bazarnova, Iryna Atamaniuk, Kerstin Kuchta

Abstract:

Nowadays, the progressive decrease of primary natural resources and the ongoing upward trend in energy demand have resulted in the development of new-generation technological processes focused on step-wise production and residue utilization. Thus, a microalgae-based third-generation bioeconomy is considered one of the most promising approaches that allow production of value-added products and sophisticated utilization of residual biomass. In comparison to conventional biomass, microalgae can be cultivated under a wide range of conditions without compromising food and feed production, thus addressing issues associated with negative social and environmental impacts. However, one of the most challenging tasks is to withstand seasonal variations and to achieve optimal growing conditions in indoor closed systems that can cover further demand for material and energetic utilization of microalgae. For instance, outdoor cultivation in St. Petersburg (Russia) is only suitable within a rather narrow time frame (from mid-May to mid-September). At earlier and later periods, insufficient sunlight and heat for the growth of microalgae were detected. On the other hand, without additional physical effects, the biomass increment in summer is 3-5 times per week, depending on the solar radiation and the ambient temperature. In order to increase biomass production, scientists from all over the world have proposed various technical solutions for cultivators and have been studying the influence of various physical factors affecting biomass growth, namely magnetic field, radiation, and electric field. In this paper, the influence of infrared radiation (IR) and fluorescent light on the growth rate of the microalga Chlorella sorokiniana has been studied. The cultivation of Chlorella sorokiniana was carried out in 500 ml cylindrical glass vessels, which were constantly aerated. To accelerate the cultivation process, the mixture was stirred for 15 minutes at 500 rpm followed by 120 minutes of rest time. At the same time, the metabolic needs in nutrients were provided by the addition of micro- and macro-nutrients to the microalgae growing medium. Lighting was provided by fluorescent lamps with an intensity of 2500 ± 300 lx. The influence of IR was determined using IR lamps with a voltage of 220 V and a power of 250 W, in order to achieve an intensity of 13 600 ± 500 lx. The obtained results show that under the influence of fluorescent lamps, along with the combined effect of active aeration and variable mixing, the biomass increment on the 2nd day was three-fold, and on the 7th day it was eight-fold. The growth rate of microalgae under the influence of IR radiation was lower and reached 22.6×10⁶ cells·mL⁻¹. However, application of IR lamps for biomass growth allows maintaining the optimal temperature of the microalgae suspension at approximately 25-28 °C, which might be especially beneficial during the cold season in extreme climate zones.

Keywords: Biomass, fluorescent lamp, infrared radiation, microalgae.

1768 A Tuning Method for Microwave Filter via Complex Neural Network and Improved Space Mapping

Authors: Shengbiao Wu, Weihua Cao, Min Wu, Can Liu

Abstract:

This paper presents an intelligent tuning method for microwave filters based on a complex neural network and improved space mapping. The tuning process consists of two stages: the initial tuning and the fine tuning. At the beginning of the tuning, the return loss of the filter is transferred to the passband via the phase error. During the fine tuning, the phase shift caused by the transmission line and the higher-order modes is removed by curve fitting. Then, a Cauchy method based on the admittance parameter (Y-parameter) is used to extract the coupling matrix. The influence of the resonant cavity loss is eliminated during the parameter extraction process. By using processed data pairs (the amount of screw variation and the variation of the coupling matrix), a tuning model is established with the complex neural network. Using the improved space mapping algorithm, the mapping relationship between the actual model and the ideal model is established, and the amplitude and direction of the tuning are constantly updated. Finally, a tuning experiment on an eighth-order coaxial cavity filter shows that the proposed method performs well in terms of tuning time and tuning precision.

Keywords: Microwave filter, scattering parameter (s-parameter), coupling matrix, intelligent tuning.

1767 Arc Length of Rational Bezier Curves and Use for CAD Reparametrization

Authors: Maharavo Randrianarivony

Abstract:

The length ℓ of a given rational Bézier curve is efficiently estimated. Since a rational Bézier function is nonlinear, it is usually impossible to evaluate its length exactly. The length is approximated by using subdivision, and the accuracy of the approximation ℓn is investigated. In order to improve the efficiency, adaptivity is used with some length estimator. A rigorous theoretical analysis of the rate of convergence of ℓn to ℓ is given. The required number of subdivisions to attain a prescribed accuracy is also analyzed. An application to CAD parametrization is briefly described. Numerical results are reported to supplement the theory.
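
A minimal sketch of length estimation by adaptive subdivision for a rational Bézier curve, averaging the chord and control-polygon lengths (both converge to the segment length under de Casteljau subdivision); the tolerance-splitting rule is an illustrative choice, not the paper's estimator.

```python
import numpy as np

def de_casteljau_split(ctrl, w, t=0.5):
    """Split a rational Bezier curve (control points, weights) at parameter t."""
    # Work in homogeneous coordinates (w*P, w).
    pts = np.column_stack([ctrl * w[:, None], w])
    left, right = [pts[0]], [pts[-1]]
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
        left.append(pts[0])
        right.append(pts[-1])
    to_rb = lambda h: (h[:, :-1] / h[:, -1:], h[:, -1])
    return to_rb(np.array(left)), to_rb(np.array(right)[::-1])

def polygon_and_chord(ctrl):
    poly = np.sum(np.linalg.norm(np.diff(ctrl, axis=0), axis=1))
    chord = np.linalg.norm(ctrl[-1] - ctrl[0])
    return poly, chord

def arc_length(ctrl, w, tol=1e-6):
    """Adaptive estimate: average chord and control polygon, subdivide until close."""
    poly, chord = polygon_and_chord(ctrl)
    if poly - chord < tol:
        return 0.5 * (poly + chord)
    (cl, wl), (cr, wr) = de_casteljau_split(ctrl, w)
    return arc_length(cl, wl, tol / 2) + arc_length(cr, wr, tol / 2)

# Quarter circle as a rational quadratic Bezier: exact length = pi/2.
ctrl = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
w = np.array([1.0, np.sqrt(2) / 2, 1.0])
print(arc_length(ctrl, w), np.pi / 2)
```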

Keywords: Adaptivity, Length, Parametrization, Rational Bezier

1766 Ensemble Approach for Predicting Student's Academic Performance

Authors: L. A. Muhammad, M. S. Argungu

Abstract:

Educational data mining (EDM) has received substantial attention. Data mining techniques have been proposed, in one way or another, to uncover hidden knowledge in educational data. The results of such studies help academic institutions further enhance their learning processes and methods of passing knowledge to students. Consequently, the performance of students improves and the educational products are undoubtedly enhanced. This study adopts a student performance prediction model based on data mining techniques with Students' Essential Features (SEF). SEF are linked to the learner's interactivity with the e-learning management system. The performance of the predictive model is assessed with a set of classifiers, viz. Bayes Network, Logistic Regression, and Reduced Error Pruning (REP) Tree. Subsequently, the ensemble methods of Bagging, Boosting, and Random Forest (RF) are applied to improve the performance of these single classifiers. The study reveals a strong relationship between learners' behaviors and their academic attainment. The results show that the REP Tree and its ensembles record the highest accuracy of 83.33% using SEF. In terms of the Receiver Operating Characteristic (ROC) curve, the boosting method of the REP Tree records 0.903, which is the best. This result further demonstrates the dependability of the proposed model.
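
A sketch of the single classifiers and their ensembles in Scikit-learn on synthetic stand-in data; GaussianNB and DecisionTreeClassifier are used here only as rough stand-ins for the Bayes Network and REP Tree classifiers of the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for Students' Essential Features (e-learning interactivity logs).
X, y = make_classification(n_samples=600, n_features=8, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=5, random_state=0)  # stand-in for REP Tree
models = {
    "Bayes": GaussianNB(),
    "Logistic": LogisticRegression(max_iter=1000),
    "Tree": tree,
    "Bagging(Tree)": BaggingClassifier(tree, n_estimators=50, random_state=0),
    "Boosting(Tree)": AdaBoostClassifier(tree, n_estimators=50, random_state=0),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)[:, 1]
    print("%-15s acc=%.3f auc=%.3f" % (name, accuracy_score(y_te, clf.predict(X_te)),
                                       roc_auc_score(y_te, proba)))
```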

Keywords: Ensemble, bagging, Random Forest, boosting, data mining, classifiers, machine learning.

1765 Active Intra-ONU Scheduling with Cooperative Prediction Mechanism in EPONs

Authors: Chuan-Ching Sue, Shi-Zhou Chen, Ting-Yu Huang

Abstract:

Dynamic bandwidth allocation in EPONs can generally be separated into inter-ONU scheduling and intra-ONU scheduling. In our previous work, the active intra-ONU scheduling (AS) scheme utilizes multiple queue reports (QRs) in each report message to cooperate with the inter-ONU scheduling and makes the granted bandwidth fully utilized without leaving an unused slot remainder (USR). This scheme successfully solves the USR problem originating from the inseparability of Ethernet frames. However, without a proper threshold setting in AS, the number of QRs constrained by the IEEE 802.3ah standard is not sufficient, especially in unbalanced traffic environments. This limitation may be overcome by enlarging the threshold value, but a large threshold implies a large gap between adjacent QRs and thus a large difference between the best granted bandwidth and the actually granted bandwidth. In this paper, we integrate AS with a cooperative prediction mechanism and distribute multiple QRs to reduce the penalty brought by the prediction error. Furthermore, to improve the QoS and save the usage of queue reports, the highest-priority (EF) traffic that arrives during the waiting time is granted automatically by the OLT and is not considered in the requested bandwidth of the ONU. The simulation results show that the proposed scheme has better performance in terms of bandwidth utilization and average delay for different classes of packets.

Keywords: EPON, Inter-ONU and Intra-ONU scheduling, Prediction, Unused slot remainder

1764 Higher-Dimensional Quantum Cryptography

Authors: Bradley Christensen, Kevin T. McCusker, Daniel J. Gauthier, Daniel Kumor, Venkat Chandar, P. G. Kwiat

Abstract:

We report on a high-speed quantum cryptography system that utilizes simultaneous entanglement in polarization and in "time-bins". With multiple degrees of freedom contributing to the secret key, we can achieve over ten bits of random entropy per detected coincidence. In addition, we collect from multiple spots of the downconversion cone to further amplify the data rate, allowing us to achieve over 10 Mbits of secure key per second.

Keywords: Downconversion, Hyper-entanglement, Quantum Cryptography

1763 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis

Authors: N. R. N. Idris, S. Baharom

Abstract:

A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels. In this situation, both the IPD and AD should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the level of data on the overall meta-analysis estimates based on IPD-only, AD-only and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean square error (RMSE) and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not provide significant differences in terms of accuracy of the estimates. Additionally, combining the IPD and AD has a moderating effect on the bias of the treatment effect estimates, as the IPD tends to overestimate the treatment effects, while the AD has a tendency to produce underestimated effect estimates. These results may provide some guidance in deciding whether a significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
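
A small simulation sketch of the comparison, assuming a simple mean-difference effect and fixed-effect inverse-variance pooling; the study designs and the way the IPD studies are analysed here are illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(2)
true_effect, n_sims = 0.5, 2000
bias_ad, bias_md = [], []

for _ in range(n_sims):
    # Five AD studies: only a summary estimate and its standard error are known.
    se_ad = rng.uniform(0.1, 0.3, size=5)
    est_ad = rng.normal(true_effect, se_ad)

    # Three IPD studies: patient-level data, analysed here as a mean difference.
    ipd_est, ipd_se = [], []
    for _ in range(3):
        n = rng.integers(40, 120)
        treat = rng.normal(true_effect, 1.0, size=n)
        ctrl = rng.normal(0.0, 1.0, size=n)
        ipd_est.append(treat.mean() - ctrl.mean())
        ipd_se.append(np.sqrt(treat.var(ddof=1) / n + ctrl.var(ddof=1) / n))

    def pooled(est, se):
        w = 1.0 / np.asarray(se) ** 2           # inverse-variance weights
        return np.sum(w * np.asarray(est)) / np.sum(w)

    bias_ad.append(pooled(est_ad, se_ad) - true_effect)
    bias_md.append(pooled(np.r_[est_ad, ipd_est], np.r_[se_ad, ipd_se]) - true_effect)

print("AD only : PRB=%.2f%%  RMSE=%.4f" % (100 * np.mean(bias_ad) / true_effect,
                                           np.sqrt(np.mean(np.square(bias_ad)))))
print("AD + IPD: PRB=%.2f%%  RMSE=%.4f" % (100 * np.mean(bias_md) / true_effect,
                                           np.sqrt(np.mean(np.square(bias_md)))))
```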

Keywords: Aggregate data, combined-level data, Individual patient data, meta analysis.

1762 Load Discontinuity in Shock Response and Its Remedies

Authors: Shuenn-Yih Chang, Chiu-Li Huang

Abstract:

It has been shown that a load discontinuity at the end of an impulse will result in an extra impulse, and hence an extra amplitude distortion, if a step-by-step integration method is employed to yield the shock response. In order to overcome this difficulty, three remedies are proposed to reduce the extra amplitude distortion. The first remedy is to solve the momentum equation of motion instead of the force equation of motion in the step-by-step solution of the shock response, where an external momentum is used in the solution of the momentum equation of motion. Since the external momentum is a resultant of the time integration of the external force, the problem of load discontinuity automatically disappears. The second remedy is to perform a single small time step immediately upon termination of the applied impulse, while the other time steps can still be conducted using the step size determined from general considerations. This is because the extra impulse caused by a load discontinuity at the end of an impulse is almost linearly proportional to the step size. Finally, the third remedy is to use the average of the two different load values at the integration point of the load discontinuity instead of either one of them as the loading input. The basic motivation of this remedy originates from the concept of zero loading-input error at the integration point of the load discontinuity. The feasibility of the three remedies is analytically explained and numerically illustrated.

Keywords: Dynamic analysis, load discontinuity, shock response, step-by-step integration

1761 Project Selection by Using Fuzzy AHP and TOPSIS Technique

Authors: S. Mahmoodzadeh, J. Shahrabi, M. Pariazar, M. S. Zaeri

Abstract:

In this article, we propose a new method for the project selection problem using fuzzy AHP and the TOPSIS technique. After reviewing four common methods of comparing investment alternatives (net present value, rate of return, benefit-cost analysis and payback period), we use them as criteria in the AHP tree. In this methodology, the Analytic Hierarchy Process improved with fuzzy set theory is first used to calculate the weight of each criterion. Then, by implementing the TOPSIS algorithm, the assessment of projects is carried out. The obtained results are illustrated with a numerical example.
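
A minimal TOPSIS sketch in Python; the decision matrix, the criterion weights (which would come from the fuzzy AHP stage) and the benefit/cost flags are illustrative values only.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    decision_matrix: (alternatives x criteria); weights: criteria weights summing
    to 1 (e.g. from fuzzy AHP); benefit: True for criteria to maximise.
    """
    X = np.asarray(decision_matrix, dtype=float)
    # Vector normalisation, then weighting.
    V = weights * X / np.linalg.norm(X, axis=0)
    # Ideal and anti-ideal solutions per criterion.
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    closeness = d_minus / (d_plus + d_minus)
    return closeness, np.argsort(-closeness)

# Criteria: NPV, rate of return, benefit/cost ratio (maximise), payback period (minimise).
projects = [[120, 0.15, 1.4, 5.0],
            [ 90, 0.22, 1.2, 3.5],
            [150, 0.12, 1.6, 6.5]]
weights = np.array([0.35, 0.30, 0.20, 0.15])   # e.g. obtained from the fuzzy AHP stage
benefit = np.array([True, True, True, False])

scores, ranking = topsis(projects, weights, benefit)
print("Closeness:", np.round(scores, 3), "Ranking (best first):", ranking)
```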

Keywords: Fuzzy AHP, Project Selection, TOPSIS Technique.

1760 Design, Simulation, and Implementation of a Digital Pulse Oxygen Saturation Measurement System Using the Arduino Microcontroller

Authors: Muhibul Haque Bhuyan, Md. Refat Sarder

Abstract:

If people can monitor their oxygen saturation level intermittently, they can identify their condition early and seek a doctor's help. This paper reports the design, simulation, and implementation of a low-cost pulse oxygen saturation measurement device based on a reflective photoplethysmography (PPG) system using an integrated circuit sensor as the fundamental component of this health status checking device. The measured physiological parameter is the blood oxygen saturation level (SpO2) in the peripheral capillaries. This work has been implemented using an Arduino Uno R3 microcontroller along with this sensor integrated circuit (IC). The system is designed in the Proteus environment and then simulated to check its performance. After that, the hardware implementation is performed. We used a clip-type optical sensor to sense the arterial blood oxygen saturation signal from the fingertips of an individual and then transformed it into digital data in the microcontroller through its program instructions. The designed system was tested by measuring the SpO2 level of several people of different ages, from 12 to 57 years. In addition, the same people were tested using a standard device purchased from the market. The test results were found to be very satisfactory, as the average percentage error was very low, only 1.59%.

Keywords: Digital pulse oxygen saturation level, oximeter, measurement, design, simulation, implementation, proteus, Arduino Uno microcontroller.

1759 An Efficient Backward Semi-Lagrangian Scheme for Nonlinear Advection-Diffusion Equation

Authors: Soyoon Bak, Sunyoung Bu, Philsu Kim

Abstract:

In this paper, a backward semi-Lagrangian scheme combined with the second-order backward difference formula is designed to calculate the numerical solutions of nonlinear advection-diffusion equations. The primary aims of this paper are to remove any iteration process and to obtain an efficient algorithm with second-order accuracy in time. In order to achieve these objectives, we use the second-order central finite difference and B-spline approximations of degree 2 and 3 to approximate the diffusion term and the spatial discretization, respectively. For the temporal discretization, the second-order backward difference formula is applied. To calculate the numerical solution at the starting point of the characteristic curves, we use the error correction methodology recently developed by the authors. The proposed algorithm turns out to be completely iteration-free, which resolves the main weakness of the conventional backward semi-Lagrangian method. The adaptability of the proposed method is also indicated by numerical simulations for Burgers' equations. Throughout these numerical simulations, it is shown that the numerical results are in good agreement with the analytic solution and that the present scheme offers better accuracy in comparison with other existing numerical schemes.

Keywords: Semi-Lagrangian method, Iteration free method, Nonlinear advection-diffusion equation.

1758 Statistical Characteristics of Distribution of Radiation-Induced Defects under Random Generation

Authors: Pavlo Selyshchev

Abstract:

We consider fluctuations of the defect density taking into account the interaction between defects. A stochastic field of the displacement generation rate gives a random defect distribution. We determine the statistical characteristics (mean and dispersion) of the random field of point defect distribution as functions of the defect generation parameters, the temperature and the properties of the irradiated crystal.

Keywords: Irradiation, Primary Defects, Interaction, Fluctuations.

1757 Solar Radiation Time Series Prediction

Authors: Cameron Hamilton, Walter Potter, Gerrit Hoogenboom, Ronald McClendon, Will Hobbs

Abstract:

A model was constructed to predict the amount of solar radiation that will reach the surface of the earth in a given location an hour into the future. This project was supported by the Southern Company to determine at what specific times during a given day of the year solar panels could be relied upon to produce energy in sufficient quantities. Because of its ability as a universal function approximator, an artificial neural network was used to estimate the nonlinear pattern of solar radiation, using measurements of weather conditions collected at the Griffin, Georgia weather station as inputs. A number of network configurations and training strategies were evaluated, though a multilayer perceptron with a variety of hidden nodes trained with the resilient propagation algorithm consistently yielded the most accurate predictions. In addition, a modeled direct normal irradiance field and adjacent weather station data were used to bolster prediction accuracy. In later trials, the solar radiation field was preprocessed with a discrete wavelet transform with the aim of removing noise from the measurements. The current model provides predictions of solar radiation with a mean square error of 0.0042, though ongoing efforts are being made to further improve the model's accuracy.
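
A sketch of the wavelet-denoise-then-predict idea using PyWavelets and a Scikit-learn multilayer perceptron on a synthetic irradiance series; the lag structure, wavelet and thresholding rule are assumptions, and the study's resilient-propagation training and station inputs are not reproduced.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def wavelet_denoise(signal, wavelet="db4", level=3):
    """Soft-threshold the detail coefficients of a discrete wavelet transform."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Synthetic hourly irradiance: a daily cycle plus noise (stand-in for station data).
rng = np.random.default_rng(3)
hours = np.arange(24 * 60)
clean = np.maximum(0.0, np.sin(2 * np.pi * (hours % 24 - 6) / 24))
radiation = clean + rng.normal(scale=0.1, size=hours.size)
radiation_dn = wavelet_denoise(radiation)

# One-hour-ahead prediction from the previous 6 denoised hours.
lag = 6
X = np.column_stack([radiation_dn[i: len(radiation_dn) - lag + i] for i in range(lag)])
y = radiation_dn[lag:]
split = int(0.8 * len(y))
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
mse = np.mean((model.predict(X[split:]) - y[split:]) ** 2)
print("Test MSE: %.4f" % mse)
```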

Keywords: Artificial Neural Networks, Resilient Propagation, Solar Radiation, Time Series Forecasting.

1756 Similitude for Thermal Scale-up of a Multiphase Thermolysis Reactor in the Cu-Cl Cycle of a Hydrogen Production

Authors: Mohammed W. Abdulrahman

Abstract:

The thermochemical copper-chlorine (Cu-Cl) cycle is considered a sustainable and efficient technology for hydrogen production when linked with clean-energy systems such as nuclear reactors or solar thermal plants. In the Cu-Cl cycle, water is decomposed thermally into hydrogen and oxygen through a series of intermediate reactions. This paper investigates the thermal scale-up analysis of the three-phase oxygen production reactor in the Cu-Cl cycle, where the reaction is endothermic and the temperature is about 530 °C. The paper focuses on examining the size and number of oxygen reactors required to provide enough heat input for different rates of hydrogen production. The multiphase reactor used in this paper is a continuous stirred tank reactor (CSTR) heated by a half-pipe jacket. The thermal resistance of each section of the jacketed reactor system is studied to examine its effect on the heat balance of the reactor. It is found that the dominant contribution to the system thermal resistance comes from the reactor wall. In the analysis, the Cu-Cl cycle is assumed to be driven by a nuclear reactor, and two types of nuclear reactors are examined as the heat source for the oxygen reactor: the CANDU Super Critical Water Reactor (CANDU-SCWR) and the High Temperature Gas Reactor (HTGR). It is concluded that a heat transfer rate 3-4 times higher than that of the HTGR has to be provided for the CANDU-SCWR. The effect of the reactor aspect ratio is also examined, and it is found that increasing the aspect ratio decreases the number of reactors, while the rate of this decrease itself diminishes as the aspect ratio increases. Finally, a comparison between the heat balance results and existing mass balance results shows that the size of the oxygen reactor is dominated by the heat balance rather than the material balance.
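
A small sketch of the series thermal resistances of a jacketed cylindrical reactor wall (inside convection, wall conduction, jacket-side convection); all dimensions, conductivities and film coefficients below are illustrative assumptions, not the paper's design values.

```python
import numpy as np

def jacketed_reactor_resistances(r_in, wall_thickness, height, k_wall,
                                 h_inside, h_jacket):
    """Series thermal resistances (K/W) of a cylindrical jacketed CSTR wall."""
    r_out = r_in + wall_thickness
    A_in = 2 * np.pi * r_in * height
    A_out = 2 * np.pi * r_out * height
    R_conv_in = 1.0 / (h_inside * A_in)                            # slurry-side convection
    R_wall = np.log(r_out / r_in) / (2 * np.pi * k_wall * height)  # wall conduction
    R_conv_out = 1.0 / (h_jacket * A_out)                          # half-pipe jacket side
    return R_conv_in, R_wall, R_conv_out

# Illustrative numbers only (not the paper's design values).
R_in, R_w, R_out = jacketed_reactor_resistances(r_in=1.0, wall_thickness=0.02,
                                                height=2.0, k_wall=20.0,
                                                h_inside=1500.0, h_jacket=800.0)
R_total = R_in + R_w + R_out
Q_required = 5.0e5                      # W of reaction heat to be supplied
delta_T = Q_required * R_total          # temperature difference needed across the wall
print("Resistances (K/W):", R_in, R_w, R_out, " Required dT: %.1f K" % delta_T)
```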

Keywords: Clean energy, Cu-Cl cycle, heat transfer, sustainable energy.

1755 A Numerical Model for Simulation of Blood Flow in Vascular Networks

Authors: Houman Tamaddon, Mehrdad Behnia, Masud Behnia

Abstract:

An accurate study of blood flow requires an accurate vascular pattern and the geometrical properties of the organ of interest. Due to the complexity of vascular networks and poor accessibility in vivo, it is challenging to reconstruct the entire vasculature of any organ experimentally. The objective of this study is to introduce an innovative approach for the reconstruction of a full vascular tree from available morphometric data. Our method consists of implementing morphometric data on those parts of the vascular tree that are smaller than the resolution of medical imaging methods. This technique reconstructs the entire arterial tree down to the capillaries. Vessels greater than 2 mm are obtained from direct volume and surface analysis using contrast-enhanced computed tomography (CT). Vessels smaller than 2 mm are reconstructed from available morphometric and distensibility data and rearranged by applying Murray's law. Implementing morphometric data to reconstruct the branching pattern and applying Murray's law to every vessel bifurcation simultaneously lead to an accurate vascular tree reconstruction. The reconstruction algorithm generates the full arterial tree topography down to the first capillary bifurcation. The geometry of each order of the vascular tree is generated separately to minimize the construction and simulation time. The node-to-node connectivity, along with the diameter and length of every vessel segment, is established, and order numbers, according to the diameter-defined Strahler system, are assigned. During the simulation, we used the averaged flow rate for each order to predict the pressure drop, and once the pressure drop is predicted, the flow rate is corrected to match the computed pressure drop for each vessel. The final results for three cardiac cycles are presented and compared to clinical data.
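
A minimal sketch of applying Murray's law at a bifurcation and propagating it down a symmetric tree; the root radius, flow split and number of generations are illustrative assumptions.

```python
def murray_daughter_radii(parent_radius, flow_split=0.5):
    """Daughter radii of a bifurcation obeying Murray's law r0^3 = r1^3 + r2^3,
    with the cube of each daughter radius proportional to the flow it carries."""
    r1 = parent_radius * flow_split ** (1.0 / 3.0)
    r2 = parent_radius * (1.0 - flow_split) ** (1.0 / 3.0)
    return r1, r2

def build_symmetric_tree(root_radius, generations):
    """Radii per generation of a symmetric tree reconstructed with Murray's law."""
    radii = [root_radius]
    for _ in range(generations):
        radii.append(murray_daughter_radii(radii[-1])[0])
    return radii

radii = build_symmetric_tree(root_radius=1.0, generations=10)   # mm
for gen, r in enumerate(radii):
    print("generation %2d: radius = %.4f mm, vessels = %d" % (gen, r, 2 ** gen))
```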

Keywords: Blood flow, Morphometric data, Vascular tree, Strahler ordering system.

1754 Parametric Analysis and Optimal Design of Functionally Graded Plates Using Particle Swarm Optimization Algorithm and a Hybrid Meshless Method

Authors: Foad Nazari, Seyed Mahmood Hosseini, Mohammad Hossein Abolbashari, Mohammad Hassan Abolbashari

Abstract:

The present study is concerned with the optimal design of functionally graded plates using the particle swarm optimization (PSO) algorithm. In this study, the meshless local Petrov-Galerkin (MLPG) method is employed to obtain the natural frequencies of the functionally graded (FG) plate. The effects of two parameters, the thickness-to-height ratio and the volume fraction index, on the natural frequencies and total mass of the plate are studied using the MLPG results. Then the first natural frequency of the plate, for conditions where MLPG data are not available, is predicted by an artificial neural network (ANN) trained with the back-error propagation (BEP) technique. The ANN results show that the predicted data are in good agreement with the actual ones. To maximize the first natural frequency and minimize the mass of the FG plate simultaneously, the weighted sum optimization approach and the PSO algorithm are used. The proposed optimization process can provide the designers of FG plates with useful data.
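
A compact particle swarm optimization sketch minimising a weighted-sum objective over the two design variables named above; the surrogate frequency and mass functions, bounds and weights are invented stand-ins for the trained ANN predictions.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `objective` over the box `bounds` with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, np.min(pbest_f)

# Design variables: thickness-to-height ratio and volume fraction index.
# Hypothetical smooth surrogates standing in for the trained ANN predictions.
frequency = lambda d: 100.0 + 40.0 * d[0] - 15.0 * d[1] ** 0.5
mass = lambda d: 50.0 * d[0] + 5.0 * d[1]
weighted_sum = lambda d: -0.6 * frequency(d) / 140.0 + 0.4 * mass(d) / 55.0

bounds = np.array([[0.05, 0.2], [0.0, 5.0]])
best_x, best_f = pso(weighted_sum, bounds)
print("Optimal design:", best_x, "objective:", best_f)
```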

Keywords: Optimal design, natural frequency, FG plate, hybrid meshless method, MLPG method, ANN approach, particle swarm optimization.

1753 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtime and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center's history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.
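
A minimal Keras sketch of one per-sensor LSTM autoencoder and its reconstruction-error signal, trained on synthetic "normal" sequences; layer sizes, window length and data are assumptions, and the paper's feature extraction plus random-forest classification stage is only indicated in a comment.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

timesteps, n_features = 30, 1     # one autoencoder per sensor, as in the paper

def build_lstm_autoencoder():
    model = tf.keras.Sequential([
        layers.Input(shape=(timesteps, n_features)),
        layers.LSTM(32),                          # encoder
        layers.RepeatVector(timesteps),
        layers.LSTM(32, return_sequences=True),   # decoder
        layers.TimeDistributed(layers.Dense(n_features)),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Synthetic "normal" temperature sequences for training (stand-in for the 2019-2020 data).
rng = np.random.default_rng(4)
t = np.linspace(0, 2 * np.pi, timesteps)
normal = np.stack([np.sin(t + p) + 0.05 * rng.normal(size=timesteps)
                   for p in rng.uniform(0, 2 * np.pi, 500)])[..., None]

ae = build_lstm_autoencoder()
ae.fit(normal, normal, epochs=10, batch_size=32, verbose=0)

# Reconstruction-error signal: large errors flag candidate anomalies; in the paper
# these difference signals feed a random-forest classifier rather than a threshold.
test = np.concatenate([normal[:5], 3.0 * normal[:5]])     # last 5 are "abnormal"
errors = np.mean((ae.predict(test, verbose=0) - test) ** 2, axis=(1, 2))
print(np.round(errors, 4))
```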

Keywords: Anomaly detection, autoencoder, data centers, deep learning.

1752 Fusion of Finger Inner Knuckle Print and Hand Geometry Features to Enhance the Performance of Biometric Verification System

Authors: M. L. Anitha, K. A. Radhakrishna Rao

Abstract:

With the advent of modern computing technology, there is an increased demand for recognition systems capable of verifying the identity of individuals. Recognition systems are required by several civilian and commercial applications for providing access to secured resources. Traditional recognition systems based on physical identities are not sufficiently reliable to satisfy the security requirements, owing to advances in forgery and identity impersonation methods. Recognizing individuals based on their unique physiological characteristics, known as biometric traits, is a reliable technique, since these traits are not transferable and cannot be stolen or lost. Since the performance of a biometric recognition system depends on the particular trait that is utilized, the present work proposes a fusion approach which combines the inner knuckle print (IKP) trait of the middle, ring and index fingers with the geometrical features of the hand. The hand image captured from a digital camera is preprocessed to find the finger IKP as the region of interest (ROI) and the hand geometry features. Geometrical features are represented as the distances between different key points, and IKP features are extracted by applying the local binary pattern descriptor on the IKP ROI. Decision-level AND fusion was adopted, which has shown an improvement in the performance of the combined scheme. The proposed approach is tested on the database collected at our institute. The proposed approach is significant since both hand geometry and IKP features can be extracted from the palm region of the hand. The fusion of these features yields a false acceptance rate of 0.75% and a false rejection rate of 0.86% for the verification tests conducted, which is lower than the results obtained using individual traits. The results obtained confirm the usefulness of the proposed approach and the suitability of the selected features for developing a biometric recognition system based on features from the palmar region of the hand.

Keywords: Biometrics, hand geometry features, inner knuckle print, recognition.

1751 Gas Sensing Properties of SnO2 Thin Films Modified by Ag Nanoclusters Synthesized by SILD Method

Authors: G. Korotcenkov, B. K. Cho, L. B. Gulina, V. P. Tolstoy

Abstract:

The effect of SnO2 surface modification by Ag nanoclusters, synthesized by the SILD method, on the operating characteristics of thin film gas sensors was studied, and models for the promotional role of the Ag additives are discussed. It was found that the above-mentioned approach can be used to improve both the sensitivity and the response rate of SnO2-based gas sensors to CO and H2. At the same time, the presence of Ag clusters on the SnO2 surface depressed the sensor response to ozone.

Keywords: Ag nanoparticles, deposition, characterization, gas sensors, optimization.

1750 Introduce Applicability of Multi-Layer Perceptron to Predict the Behaviour of Semi-Interlocking Masonry Panel

Authors: O. Zarrin, M. Ramezanshirazi

Abstract:

The Semi-Interlocking Masonry (SIM) system has been developed by the Masonry Research Group at the University of Newcastle, Australia. The main purpose of this system is to enhance the seismic resistance of framed structures with masonry panels. In this system, SIM panels dissipate energy through the sliding friction between rows of SIM units during earthquake excitation. This paper aims to assess the applicability of an artificial neural network (ANN) to predict the displacement behaviour of the SIM panel under out-of-plane loading. The ANN is trained with force-displacement data of the SIM panel. The overall data used to train and test the network are 70 force-displacement increments from three tests. The input data comprise the height and length of the panel, the height, length and width of the brick, the friction and geometry angle of the brick, the compressive strength of the brick, and the lateral load applied to the panel. The aim of the designed network is to predict the displacement of the SIM panel with a Multi-Layer Perceptron (MLP). The mean square error (MSE) of the network was 0.00042 and the coefficient of determination (R2) was 0.91. The results revealed that the ANN predictions are in close agreement with the observed SIM panel behaviour.

Keywords: Semi interlocking masonry, artificial neural network, ANN, multi-layer perceptron, MLP, displacement, prediction.

1749 Modified Levenberg-Marquardt Method for Neural Networks Training

Authors: Amir Abolfazl Suratgar, Mohammad Bagher Tavakoli, Abbas Hoseinabadi

Abstract:

In this paper, a modification of the Levenberg-Marquardt algorithm for MLP neural network learning is proposed. The proposed algorithm has good convergence and reduces the amount of oscillation in the learning procedure. An example is given to show the usefulness of this method. Finally, a simulation verifies the results of the proposed method.
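
A basic (unmodified) Levenberg-Marquardt sketch with a simple adaptive damping schedule, applied to a small curve-fitting problem as a stand-in for MLP training; it illustrates the update the paper modifies, not the proposed modification itself.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, max_iter=100, tol=1e-10,
                        lam=1e-2, lam_up=10.0, lam_down=0.1):
    """Basic Levenberg-Marquardt with a simple adaptive damping schedule.
    Small lam behaves like Gauss-Newton, large lam like gradient descent."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        g = J.T @ r
        if np.linalg.norm(g) < tol:
            break
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -g)
        if np.sum(residual(x + step) ** 2) < np.sum(r ** 2):
            x, lam = x + step, max(lam * lam_down, 1e-12)   # accept, trust more
        else:
            lam *= lam_up                                    # reject, damp more
    return x

# Fit y = a * exp(b * t) to noisy data (a small stand-in for network training,
# where the residuals would be output errors and J the error Jacobian).
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 50)
y = 2.0 * np.exp(1.5 * t) + 0.05 * rng.normal(size=t.size)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
jacobian = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])

print(levenberg_marquardt(residual, jacobian, x0=[1.0, 1.0]))
```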

Keywords: Levenberg-Marquardt, modification, neural network, variable learning rate.

1748 High Order Accurate Runge Kutta Nodal Discontinuous Galerkin Method for Numerical Solution of Linear Convection Equation

Authors: Faheem Ahmed, Fareed Ahmed, Yongheng Guo, Yong Yang

Abstract:

This paper deals with a high-order accurate Runge-Kutta Discontinuous Galerkin (RKDG) method for the numerical solution of the wave equation, one of the simplest cases of a linear hyperbolic partial differential equation. A nodal DG method is used for the finite element space discretization in x by discontinuous approximations. This method combines two key ideas based on the finite volume and finite element methods: the physics of wave propagation is accounted for by means of Riemann problems, and accuracy is obtained by means of high-order polynomial approximations within the elements. A high-order accurate Low Storage Explicit Runge-Kutta (LSERK) method is used for the temporal discretization in t, which allows the method to be nonlinearly stable regardless of its accuracy. The resulting RKDG methods are stable and high-order accurate. The L1, L2 and L∞ error norm analysis shows that the scheme is highly accurate and effective. Hence, the method is well suited to achieve high-order accurate solutions for the scalar wave equation and other hyperbolic equations.

Keywords: Nodal Discontinuous Galerkin Method, RKDG, Scalar Wave Equation, LSERK
