Search results for: Symbol Error Rate (SER)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3790

1780 Ensemble Approach for Predicting Student's Academic Performance

Authors: L. A. Muhammad, M. S. Argungu

Abstract:

Educational data mining (EDM) has attracted substantial attention. Data mining techniques have been proposed to uncover hidden knowledge in educational data, and the resulting insights help academic institutions improve their learning processes and the way knowledge is delivered to students; as a consequence, student performance and educational outcomes are enhanced. This study adopts a student performance prediction model based on data mining techniques applied to Students' Essential Features (SEF), which capture the learner's interaction with the e-learning management system. The performance of the predictive model is assessed with a set of classifiers, viz. Bayes Network, Logistic Regression, and Reduced Error Pruning (REP) Tree. The ensemble methods of Bagging, Boosting, and Random Forest (RF) are then applied to improve the performance of these single classifiers. The study reveals a strong association between learners' behaviour and their academic attainment. The REP Tree and its ensembles record the highest accuracy of 83.33% using SEF, and in terms of the area under the Receiver Operating Characteristic (ROC) curve, the boosted REP Tree records the best value of 0.903. These results further demonstrate the dependability of the proposed model.

Keywords: Ensemble, bagging, Random Forest, boosting, data mining, classifiers, machine learning.
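
As a rough illustration of the classifier-plus-ensemble comparison described above, the sketch below uses scikit-learn analogues (a pruned decision tree standing in for the REP Tree, Gaussian naive Bayes for the Bayes Network) on a synthetic stand-in for the SEF feature matrix; it is not the authors' WEKA-based setup.

```python
# Sketch only: scikit-learn analogues of the classifiers named in the abstract
# (REP Tree ~ pruned decision tree, Bayes Network ~ naive Bayes), plus the
# bagging/boosting/random-forest ensembles. X, y are hypothetical placeholders
# for the Students' Essential Features and pass/fail labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier, RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)  # stand-in data

models = {
    "logistic":      LogisticRegression(max_iter=1000),
    "naive_bayes":   GaussianNB(),
    "tree":          DecisionTreeClassifier(max_depth=5),
    "bagged_tree":   BaggingClassifier(DecisionTreeClassifier(max_depth=5), n_estimators=50),
    "boosted_tree":  AdaBoostClassifier(DecisionTreeClassifier(max_depth=5), n_estimators=50),
    "random_forest": RandomForestClassifier(n_estimators=100),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:14s}  accuracy={acc:.3f}  ROC AUC={auc:.3f}")
```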

1779 Active Intra-ONU Scheduling with Cooperative Prediction Mechanism in EPONs

Authors: Chuan-Ching Sue, Shi-Zhou Chen, Ting-Yu Huang

Abstract:

Dynamic bandwidth allocation in EPONs can generally be separated into inter-ONU scheduling and intra-ONU scheduling. In our previous work, the active intra-ONU scheduling (AS) scheme uses multiple queue reports (QRs) in each report message to cooperate with the inter-ONU scheduling, so that the granted bandwidth is fully utilized without leaving an unused slot remainder (USR). This scheme successfully solves the USR problem that originates from the indivisibility of Ethernet frames. However, without a proper threshold setting in AS, the number of QRs permitted by the IEEE 802.3ah standard is insufficient, especially under unbalanced traffic. This limitation can be mitigated by enlarging the threshold value, but a large threshold implies a large gap between adjacent QRs and thus a large difference between the best granted bandwidth and the actually granted bandwidth. In this paper, we integrate AS with a cooperative prediction mechanism and distribute multiple QRs to reduce the penalty caused by prediction error. Furthermore, to improve QoS and economize on queue reports, the highest-priority (EF) traffic that arrives during the waiting time is granted automatically by the OLT and is not included in the bandwidth requested by the ONU. Simulation results show that the proposed scheme achieves better bandwidth utilization and lower average delay for the different packet classes.

Keywords: EPON, Inter-ONU and Intra-ONU scheduling, Prediction, Unused slot remainder

1778 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis

Authors: N. R. N. Idris, S. Baharom

Abstract:

A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels, and in this situation both should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the levels of data on the overall estimates based on IPD only, AD only, and the combination of IPD and AD (mixed data, MD) under different study scenarios. The percentage relative bias (PRB), root mean square error (RMSE), and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not produce a significant difference in the accuracy of the estimates. Additionally, combining IPD and AD has a moderating effect on the bias of the estimated treatment effects, as the IPD tends to overestimate the treatment effects while the AD tends to produce underestimated effect estimates. These results may provide some guidance in deciding whether a significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.

Keywords: Aggregate data, combined-level data, Individual patient data, meta analysis.
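
A minimal sketch of the pooling idea, assuming a fixed-effect inverse-variance combination in which IPD studies contribute effects estimated from (simulated) patient-level data and AD studies contribute reported summaries; all numbers are hypothetical and this is not the authors' simulation design.

```python
# Minimal sketch (not the authors' simulation): a fixed-effect, inverse-variance
# pooling in which IPD studies contribute an effect estimated from raw patient
# data and AD studies contribute their reported summary estimate. All numbers
# are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def ipd_study_effect(n, true_effect=0.5):
    """Estimate a mean treatment effect and its variance from raw (simulated) IPD."""
    treat = rng.normal(true_effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    diff = treat.mean() - control.mean()
    var = treat.var(ddof=1) / n + control.var(ddof=1) / n
    return diff, var

# IPD-level studies: effects computed from patient data
ipd = [ipd_study_effect(n) for n in (40, 60, 80)]
# AD-level studies: only (effect, variance) summaries are available
ad = [(0.47, 0.03), (0.55, 0.05)]

effects, variances = zip(*(ipd + ad))          # mixed data (MD): pool both levels
w = 1.0 / np.asarray(variances)                # inverse-variance weights
pooled = np.sum(w * np.asarray(effects)) / w.sum()
se = np.sqrt(1.0 / w.sum())
print(f"pooled effect = {pooled:.3f} +/- {1.96*se:.3f} (95% CI half-width)")
```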

1777 Load Discontinuity in Shock Response and Its Remedies

Authors: Shuenn-Yih Chang, Chiu-Li Huang

Abstract:

It has been shown that a load discontinuity at the end of an impulse will result in an extra impulse and hence an extra amplitude distortion if a step-by-step integration method is employed to compute the shock response. In order to overcome this difficulty, three remedies are proposed to reduce the extra amplitude distortion. The first remedy is to solve the momentum equation of motion instead of the force equation of motion in the step-by-step solution of the shock response, where an external momentum is used in the solution of the momentum equation of motion. Since the external momentum is the result of the time integration of the external force, the problem of load discontinuity automatically disappears. The second remedy is to perform a single small time step immediately upon termination of the applied impulse, while the other time steps can still be conducted with the step size determined from general considerations; this works because the extra impulse caused by a load discontinuity at the end of an impulse is almost linearly proportional to the step size. Finally, the third remedy is to use the average of the two different load values at the integration point of the load discontinuity as the loading input, instead of either one alone. The basic motivation of this remedy originates from the concept of introducing no loading-input error at the integration point of the load discontinuity. The feasibility of the three remedies is analytically explained and numerically illustrated.

Keywords: Dynamic analysis, load discontinuity, shock response, step-by-step integration
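
The sketch below illustrates the third remedy on an undamped single-degree-of-freedom oscillator under a rectangular impulse, integrated with the constant-average-acceleration Newmark method: at the step containing the load discontinuity, the integrator is fed the average of the two load values. Parameters are arbitrary and the formulation is a generic textbook one, not the authors'.

```python
# Illustrative sketch of the third remedy: at the step where a rectangular
# impulse ends, feed the integrator the average of the two load values at the
# discontinuity instead of either one. Undamped SDOF oscillator, constant-
# average-acceleration Newmark method; parameters are arbitrary.
import numpy as np

m, k = 1.0, (2 * np.pi) ** 2          # unit mass, 1 Hz natural frequency
p0, t_end_impulse = 1.0, 0.35          # rectangular impulse amplitude / duration
dt, n_steps = 0.05, 60
beta, gamma = 0.25, 0.5                # average-acceleration Newmark parameters

def load(t, average_at_jump):
    if abs(t - t_end_impulse) < 1e-12 and average_at_jump:
        return 0.5 * (p0 + 0.0)        # average of the two values at the jump
    return p0 if t <= t_end_impulse else 0.0

def newmark(average_at_jump):
    u, v = 0.0, 0.0
    a = (load(0.0, average_at_jump) - k * u) / m
    keff = k + m / (beta * dt**2)
    hist = [u]
    for i in range(1, n_steps + 1):
        p = load(i * dt, average_at_jump)
        peff = p + m * (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
        u_new = peff / keff
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
        hist.append(u)
    return np.array(hist)

u_raw = newmark(average_at_jump=False)
u_avg = newmark(average_at_jump=True)
print("peak response, end value at jump :", np.abs(u_raw).max())
print("peak response, averaged at jump  :", np.abs(u_avg).max())
```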

1776 Design, Simulation, and Implementation of a Digital Pulse Oxygen Saturation Measurement System Using the Arduino Microcontroller

Authors: Muhibul Haque Bhuyan, Md. Refat Sarder

Abstract:

If a person can monitor his/her oxygen saturation level intermittently, then he/she can identify his/her condition early and seek a doctor's help. This paper reports the design, simulation, and implementation of a low-cost pulse oxygen saturation measurement device based on a reflective photoplethysmography (PPG) system, using an integrated circuit sensor as the fundamental component of this health-status checking device. The physiological parameter measured is the blood oxygen saturation level (SpO2) in the peripheral capillaries. The work has been implemented using an Arduino Uno R3 microcontroller together with this sensor integrated circuit (IC). The system is designed and simulated in the Proteus environment to check its performance, after which the hardware implementation is carried out. A clipping-type optical sensor is used to sense the arterial blood oxygen saturation signal from the fingertip of an individual, and the signal is converted to digital data in the microcontroller through its program instructions. The designed system was tested by measuring the SpO2 level for several people of different ages, from 12 to 57 years. The same people were also tested with a standard device purchased from the market. The test results were very satisfactory, with a low average error of only 1.59%.

Keywords: Digital pulse oxygen saturation level, oximeter, measurement, design, simulation, implementation, proteus, Arduino Uno microcontroller.
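
For context, a common way to turn the red and infrared PPG waveforms into an SpO2 estimate is the "ratio of ratios" with an empirical linear calibration; the sketch below uses a generic textbook-style calibration (SpO2 ≈ 110 − 25R) and simulated waveforms, not the calibration or sensor data of the paper.

```python
# Sketch of the standard "ratio of ratios" SpO2 estimate from red and infrared
# PPG waveforms. The linear calibration SpO2 ~ 110 - 25*R is a generic
# textbook-style curve, not the calibration used in the paper; real devices
# calibrate empirically against a reference oximeter.
import numpy as np

def ratio_of_ratios(red, ir):
    """AC/DC ratio of the red channel divided by that of the infrared channel."""
    ac = lambda x: np.max(x) - np.min(x)        # pulsatile (AC) amplitude
    dc = lambda x: np.mean(x)                    # baseline (DC) level
    return (ac(red) / dc(red)) / (ac(ir) / dc(ir))

def spo2_estimate(red, ir):
    r = ratio_of_ratios(np.asarray(red, float), np.asarray(ir, float))
    return np.clip(110.0 - 25.0 * r, 0.0, 100.0)

# Hypothetical one-second window of samples (e.g. read from the sensor IC via Arduino serial)
t = np.linspace(0, 1, 100)
red = 2.00 + 0.020 * np.sin(2 * np.pi * 1.2 * t)   # simulated red PPG
ir  = 2.10 + 0.042 * np.sin(2 * np.pi * 1.2 * t)   # simulated infrared PPG
print(f"estimated SpO2 = {spo2_estimate(red, ir):.1f} %")
```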

1775 An Efficient Backward Semi-Lagrangian Scheme for Nonlinear Advection-Diffusion Equation

Authors: Soyoon Bak, Sunyoung Bu, Philsu Kim

Abstract:

In this paper, a backward semi-Lagrangian scheme combined with the second-order backward difference formula is designed to compute numerical solutions of nonlinear advection-diffusion equations. The primary aims are to remove any iteration process and to obtain an efficient algorithm with second-order convergence in time. To achieve these objectives, we use the second-order central finite difference to approximate the diffusion term and B-spline approximations of degree 2 and 3 for the spatial discretization, while the second-order backward difference formula is applied for the temporal discretization. To compute the numerical solution at the starting point of the characteristic curves, we use the error-correction methodology recently developed by the authors. The proposed algorithm turns out to be completely iteration-free, which resolves the main weakness of the conventional backward semi-Lagrangian method. The adaptability of the proposed method is demonstrated by numerical simulations for Burgers' equations. These simulations show that the numerical results are in good agreement with the analytic solution and that the present scheme offers better accuracy than other existing numerical schemes.

Keywords: Semi-Lagrangian method, Iteration free method, Nonlinear advection-diffusion equation.
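
A much-simplified sketch of the semi-Lagrangian idea only: one first-order step for linear advection-diffusion with constant speed, linear interpolation at the departure points, and explicit diffusion. The paper's scheme is second order (BDF2), iteration-free via error correction, and treats nonlinear (Burgers-type) advection; none of that is reproduced here.

```python
# Much-simplified illustration of the semi-Lagrangian idea (NOT the authors'
# BDF2 / error-correction scheme): one first-order step for the linear
# advection-diffusion equation u_t + a u_x = nu u_xx on a periodic domain.
# Each grid value is traced back along the characteristic to its departure
# point, interpolated there, then an explicit diffusion update is applied.
import numpy as np

L, n = 1.0, 200
a, nu = 1.0, 1e-3
dx = L / n
dt = 0.5 * dx**2 / nu                          # keep the explicit diffusion part stable
x = np.arange(n) * dx
u = np.exp(-200 * (x - 0.3) ** 2)              # initial Gaussian pulse

def semi_lagrangian_step(u, dt):
    x_dep = (x - a * dt) % L                                  # departure points
    u_star = np.interp(x_dep, x, u, period=L)                 # advection by interpolation
    lap = (np.roll(u_star, -1) - 2 * u_star + np.roll(u_star, 1)) / dx**2
    return u_star + nu * dt * lap                             # explicit diffusion

for _ in range(200):
    u = semi_lagrangian_step(u, dt)
print("mass (should be nearly conserved):", u.sum() * dx)
```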

1774 Main Tendencies of Youth Unemployment and the Regulation Mechanisms for Decreasing Its Rate in Georgia

Authors: Nino Paresashvili, Nino Abesadze

Abstract:

The modern world faces huge challenges. Globalization has changed the socio-economic conditions of many countries, and the current processes in the global environment have different impacts on countries with different cultures. However, the alleviation of poverty and the improvement of living conditions remain the basic challenge for the majority of countries, because much of the population still lives below the official poverty threshold. Stimulating youth employment is therefore very important. In order to prepare young people for the labour market, it is essential to provide them with the appropriate professional skills and knowledge. It is necessary to plan efficient measures for decreasing the unemployment rate and to develop sound mechanisms for regulating the labour market. Such planning requires a thorough study and analysis of the existing situation, as well as the development of corresponding mechanisms. Statistical analysis of unemployment is one of the main platforms for regulating the key mechanisms of the labour market, and the corresponding statistical methods, namely observation, data gathering, grouping, and the calculation of generalized indicators, should be used in the study. Unemployment is one of the most severe socio-economic problems in Georgia; according to both past and current statistics, unemployment rates have always been among the most difficult issues for policy makers to resolve. Analytical work on this problem will form the basis for the next sustainable steps towards solving it. The results of the study showed that young people's choices are often driven neither by their inclinations and interests nor by labour market demand, and this wrong professional orientation in most cases leads to their unemployment. At the same time, it was shown that a number of professions are in high demand in the labour market because of the shortage of appropriately trained specialists. To achieve healthy competitiveness in youth employment, it is necessary to formulate regional employment programmes that take into account the specifics of regional infrastructure.

Keywords: Unemployment, analysis, methods, tendencies, regulation mechanisms.

1773 Influence of Infrared Radiation on the Growth Rate of Microalgae Chlorella sorokiniana

Authors: Natalia Politaeva, Iuliia Smiatskaia, Iuliia Bazarnova, Iryna Atamaniuk, Kerstin Kuchta

Abstract:

Nowadays, the progressive depletion of primary natural resources and the ongoing upward trend in energy demand have resulted in the development of new-generation technological processes focused on step-wise production and the utilization of residues. Thus, a microalgae-based third-generation bioeconomy is considered one of the most promising approaches, allowing the production of value-added products and the sophisticated utilization of residual biomass. In comparison to conventional biomass, microalgae can be cultivated under a wide range of conditions without compromising food and feed production, thereby addressing issues associated with negative social and environmental impacts. However, one of the most challenging tasks is to cope with seasonal variations and to achieve optimal growing conditions in indoor closed systems that can cover the further demand for the material and energetic utilization of microalgae. For instance, outdoor cultivation in St. Petersburg (Russia) is only suitable within a rather narrow time frame (from mid-May to mid-September); at earlier and later periods, sunlight and heat are insufficient for the growth of microalgae. On the other hand, without additional physical effects, the biomass increment in summer is 3-5 times per week, depending on solar radiation and ambient temperature. In order to increase biomass production, scientists from all over the world have proposed various technical solutions for cultivators and have studied the influence of various physical factors affecting biomass growth, namely magnetic fields, radiation, electric fields, etc. In this paper, the influence of infrared radiation (IR) and fluorescent light on the growth rate of the microalga Chlorella sorokiniana has been studied. The cultivation of Chlorella sorokiniana was carried out in 500 mL cylindrical glass vessels that were constantly aerated. To accelerate the cultivation process, the mixture was stirred for 15 minutes at 500 rpm, followed by 120 minutes of rest. The metabolic needs for nutrients were met by adding micro- and macro-nutrients to the growing medium. Lighting was provided by fluorescent lamps with an intensity of 2500 ± 300 lx. The influence of IR was determined using IR lamps (220 V, 250 W) set to achieve an intensity of 13,600 ± 500 lx. The results show that under fluorescent lamps, combined with active aeration and intermittent mixing, the biomass increment was three-fold on the 2nd day and eight-fold on the 7th day. The growth rate of the microalgae under IR radiation was lower, reaching 22.6·10⁶ cells·mL⁻¹. However, using IR lamps for biomass growth allows the temperature of the microalgae suspension to be maintained at approximately 25-28 °C, which may be especially beneficial during the cold season in extreme climate zones.

Keywords: Biomass, fluorescent lamp, infrared radiation, microalgae.

1772 Arc Length of Rational Bezier Curves and Use for CAD Reparametrization

Authors: Maharavo Randrianarivony

Abstract:

The length of a given rational Bézier curve is efficiently estimated. Since a rational Bézier function is nonlinear, it is usually impossible to evaluate its length exactly. The length is approximated by using subdivision, and the accuracy of the approximation is investigated. In order to improve the efficiency, adaptivity is used together with a length estimator. A rigorous theoretical analysis of the rate of convergence of the approximate length to the exact length is given, and the number of subdivisions required to attain a prescribed accuracy is also analyzed. An application to CAD parametrization is briefly described. Numerical results are reported to supplement the theory.

Keywords: Adaptivity, Length, Parametrization, Rational Bezier
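
A sketch of the subdivision approach, assuming the common chord/control-polygon length estimator refined by de Casteljau subdivision in homogeneous coordinates; the stopping rule and the quarter-circle example are illustrative and not the estimator analysed in the paper.

```python
# Sketch of a chord/control-polygon length estimator for a rational Bezier
# curve, refined by de Casteljau subdivision in homogeneous coordinates. The
# stopping test and the example control points/weights are illustrative.
import numpy as np

def de_casteljau_split(hpts):
    """Split a Bezier curve given in homogeneous form at t = 1/2."""
    left, right, pts = [hpts[0]], [hpts[-1]], np.array(hpts, float)
    while len(pts) > 1:
        pts = 0.5 * (pts[:-1] + pts[1:])
        left.append(pts[0])
        right.append(pts[-1])
    return np.array(left), np.array(right[::-1])

def polygon_and_chord(hpts):
    p = hpts[:, :-1] / hpts[:, -1:]               # project to Euclidean points
    polygon = np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))
    chord = np.linalg.norm(p[-1] - p[0])
    return polygon, chord

def arc_length(hpts, tol=1e-8):
    polygon, chord = polygon_and_chord(hpts)
    if polygon - chord < tol:                      # flat enough: average is a good estimate
        return 0.5 * (polygon + chord)
    left_half, right_half = de_casteljau_split(hpts)
    return arc_length(left_half, tol / 2) + arc_length(right_half, tol / 2)

# Quarter circle as a rational quadratic Bezier: weights (1, 1/sqrt(2), 1)
ctrl = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
w = np.array([1.0, 1.0 / np.sqrt(2.0), 1.0])
hpts = np.column_stack([ctrl * w[:, None], w])     # homogeneous control points
print("estimated length:", arc_length(hpts), " exact:", np.pi / 2)
```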

1771 Higher-Dimensional Quantum Cryptography

Authors: Bradley Christensen, Kevin T. McCusker, Daniel J. Gauthier, Daniel Kumor, Venkat Chandar, P. G. Kwiat

Abstract:

We report on a high-speed quantum cryptography system that utilizes simultaneous entanglement in polarization and in "time-bins". With multiple degrees of freedom contributing to the secret key, we can achieve over ten bits of random entropy per detected coincidence. In addition, we collect from multiple spots of the downconversion cone to further amplify the data rate, allowing us to achieve over 10 Mbits of secure key per second.

Keywords: Downconversion, Hyper-entanglement, Quantum Cryptography

1770 Solar Radiation Time Series Prediction

Authors: Cameron Hamilton, Walter Potter, Gerrit Hoogenboom, Ronald McClendon, Will Hobbs

Abstract:

A model was constructed to predict the amount of solar radiation that will reach the surface of the earth at a given location an hour into the future. The project was supported by the Southern Company to determine at which specific times during a given day of the year solar panels could be relied upon to produce energy in sufficient quantities. Owing to their ability as universal function approximators, artificial neural networks were used to estimate the nonlinear pattern of solar radiation, with measurements of weather conditions collected at the Griffin, Georgia weather station as inputs. A number of network configurations and training strategies were evaluated, and a multilayer perceptron with a variety of hidden nodes trained with the resilient propagation algorithm consistently yielded the most accurate predictions. In addition, a modeled direct normal irradiance field and data from adjacent weather stations were used to bolster prediction accuracy. In later trials, the solar radiation series was preprocessed with a discrete wavelet transform with the aim of removing noise from the measurements. The current model provides predictions of solar radiation with a mean square error of 0.0042, and ongoing efforts are being made to further improve its accuracy.

Keywords: Artificial Neural Networks, Resilient Propagation, Solar Radiation, Time Series Forecasting.
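
A sketch combining the two ingredients named above, wavelet denoising of the radiation series and an MLP predicting the next hour from lagged values, on synthetic data; scikit-learn has no resilient-propagation trainer, so its default solver is used here as a stand-in.

```python
# Sketch: wavelet denoising of a solar-radiation series (PyWavelets) and an MLP
# predicting the next hour from lagged values. Data are synthetic; lags, network
# size and the solver are stand-ins, not the paper's configuration.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
hours = np.arange(24 * 60)
radiation = np.clip(np.sin(2 * np.pi * hours / 24), 0, None) + 0.05 * rng.normal(size=hours.size)

# Wavelet denoising: soft-threshold the detail coefficients and reconstruct
coeffs = pywt.wavedec(radiation, "db4", level=4)
coeffs[1:] = [pywt.threshold(c, value=0.1, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: radiation.size]

# Build lagged inputs: previous 6 hours -> next hour
lags = 6
X = np.column_stack([denoised[i : i - lags] for i in range(lags)])
y = denoised[lags:]

split = int(0.8 * len(y))
mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
mlp.fit(X[:split], y[:split])
mse = np.mean((mlp.predict(X[split:]) - y[split:]) ** 2)
print(f"hold-out MSE: {mse:.4f}")
```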

1769 Project Selection by Using Fuzzy AHP and TOPSIS Technique

Authors: S. Mahmoodzadeh, J. Shahrabi, M. Pariazar, M. S. Zaeri

Abstract:

In this article, we propose a new method for the project selection problem using fuzzy AHP and the TOPSIS technique. After reviewing four common methods of comparing investment alternatives (net present value, rate of return, benefit-cost analysis, and payback period), we use them as criteria in the AHP tree. In this methodology, the Analytical Hierarchy Process improved by fuzzy set theory is first used to calculate the weight of each criterion; the TOPSIS algorithm is then applied to assess the projects. The method is illustrated with a numerical example.

Keywords: Fuzzy AHP, Project Selection, TOPSIS Technique.
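
A minimal sketch of the crisp TOPSIS ranking step on a hypothetical decision matrix (projects × the four criteria named above); the criterion weights, which in the paper come from the fuzzy-AHP stage, are fixed by hand here.

```python
# Minimal crisp TOPSIS sketch. The weights would come from the fuzzy-AHP stage
# described in the abstract; here they are fixed by hand, and the decision
# matrix (projects x criteria: NPV, rate of return, benefit/cost, payback) is
# hypothetical. Payback period is treated as a cost criterion.
import numpy as np

decision = np.array([        # rows: projects A..D
    [120.0, 0.15, 1.8, 4.0],
    [ 95.0, 0.22, 1.5, 3.0],
    [150.0, 0.12, 2.1, 6.0],
    [110.0, 0.18, 1.7, 3.5],
])
weights = np.array([0.35, 0.25, 0.25, 0.15])     # stand-in for fuzzy-AHP output
benefit = np.array([True, True, True, False])    # payback period: smaller is better

norm = decision / np.linalg.norm(decision, axis=0)      # vector normalisation
v = norm * weights                                      # weighted normalised matrix

ideal      = np.where(benefit, v.max(axis=0), v.min(axis=0))
anti_ideal = np.where(benefit, v.min(axis=0), v.max(axis=0))

d_plus  = np.linalg.norm(v - ideal, axis=1)
d_minus = np.linalg.norm(v - anti_ideal, axis=1)
closeness = d_minus / (d_plus + d_minus)

for i in np.argsort(-closeness):
    print(f"project {chr(65 + i)}: closeness = {closeness[i]:.3f}")
```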

1768 Parametric Analysis and Optimal Design of Functionally Graded Plates Using Particle Swarm Optimization Algorithm and a Hybrid Meshless Method

Authors: Foad Nazari, Seyed Mahmood Hosseini, Mohammad Hossein Abolbashari, Mohammad Hassan Abolbashari

Abstract:

The present study is concerned with the optimal design of functionally graded (FG) plates using the particle swarm optimization (PSO) algorithm. The meshless local Petrov-Galerkin (MLPG) method is employed to obtain the natural frequencies of the FG plate. The effects of two parameters, the thickness-to-height ratio and the volume fraction index, on the natural frequencies and the total mass of the plate are studied using the MLPG results. The first natural frequency of the plate, for conditions where MLPG data are not available, is then predicted by an artificial neural network (ANN) trained with the back-error propagation (BEP) technique; the ANN predictions are in good agreement with the actual data. To simultaneously maximize the first natural frequency and minimize the mass of the FG plate, the weighted-sum optimization approach and the PSO algorithm are used. The proposed optimization process can provide designers of FG plates with useful data.

Keywords: Optimal design, natural frequency, FG plate, hybrid meshless method, MLPG method, ANN approach, particle swarm optimization.

1767 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtimes and the costs of periodic maintenance. However, there is little research on this topic and even fewer on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The auto-encoders contain Long-Short Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. Performances of the model are assessed a posteriori through F1-score by comparing detected anomalies with the data center’s history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.

Keywords: Anomaly detection, autoencoder, data centers, deep learning.
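
A sketch of one per-sensor LSTM autoencoder trained on normal windows only, with a crude reconstruction-error threshold standing in for the feature-extraction-plus-random-forest classification used in the paper; shapes, layer sizes, and data are invented.

```python
# Sketch of one per-sensor LSTM autoencoder as described in the abstract, trained
# on normal windows only. At inference the reconstruction-error signal would be
# passed on (in the paper, through feature extraction and a random forest); here
# a simple percentile threshold stands in for that classifier.
import numpy as np
import tensorflow as tf

window, n_features = 32, 1            # e.g. 32 time steps of one temperature sensor
normal = np.sin(np.linspace(0, 60, 4000)).astype("float32")
train = np.stack([normal[i : i + window] for i in range(0, 2000, 2)])[..., None]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, n_features)),
    tf.keras.layers.LSTM(16),                          # encoder: compress the window
    tf.keras.layers.RepeatVector(window),              # repeat latent code per time step
    tf.keras.layers.LSTM(16, return_sequences=True),   # decoder
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_features)),
])
model.compile(optimizer="adam", loss="mse")
model.fit(train, train, epochs=3, batch_size=64, verbose=0)

def reconstruction_error(window_batch):
    rec = model.predict(window_batch, verbose=0)
    return np.mean((window_batch - rec) ** 2, axis=(1, 2))

threshold = np.percentile(reconstruction_error(train), 99)   # crude stand-in classifier
test = train[:5].copy()
test[0, 10:20, 0] += 2.0                                      # inject an anomaly
print(reconstruction_error(test) > threshold)
```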

1766 Statistical Characteristics of Distribution of Radiation-Induced Defects under Random Generation

Authors: Pavlo Selyshchev

Abstract:

We consider fluctuations of defect density taking into account the interaction between defects. A stochastic field of displacement generation rate gives a random defect distribution. We determine the statistical characteristics (mean and dispersion) of the random field of point-defect distribution as functions of the defect generation parameters, the temperature, and the properties of the irradiated crystal.

Keywords: Irradiation, Primary Defects, Interaction, Fluctuations.

1765 Introduce Applicability of Multi-Layer Perceptron to Predict the Behaviour of Semi-Interlocking Masonry Panel

Authors: O. Zarrin, M. Ramezanshirazi

Abstract:

The Semi-Interlocking Masonry (SIM) system has been developed by the Masonry Research Group at the University of Newcastle, Australia. The main purpose of this system is to enhance the seismic resistance of framed structures with masonry panels; SIM panels dissipate energy through the sliding friction between rows of SIM units during earthquake excitation. This paper investigates the applicability of an artificial neural network (ANN) to predict the displacement behaviour of the SIM panel under out-of-plane loading. The ANN is trained with related force-displacement data of the SIM panel. The overall data used to train and test the network consist of 70 force-displacement increments from three tests and comprise nine input nodes: the height and length of the panel; the height, length, and width of the brick; the friction and geometry angles of the brick; the compressive strength of the brick; and the lateral load applied to the panel. The designed network predicts the displacement of the SIM panel with a Multi-Layer Perceptron (MLP). The mean square error (MSE) of the network was 0.00042, and the coefficient of determination (R²) was 0.91. The results reveal that the ANN predictions are in close agreement with the observed behaviour of the SIM panel.

Keywords: Semi interlocking masonry, artificial neural network, ANN, multi-layer perceptron, MLP, displacement, prediction.

1764 High Order Accurate Runge Kutta Nodal Discontinuous Galerkin Method for Numerical Solution of Linear Convection Equation

Authors: Faheem Ahmed, Fareed Ahmed, Yongheng Guo, Yong Yang

Abstract:

This paper deals with a high-order accurate Runge-Kutta Discontinuous Galerkin (RKDG) method for the numerical solution of the wave equation, one of the simplest cases of a linear hyperbolic partial differential equation. A nodal DG method is used for the finite element space discretization in x with discontinuous approximations. The method combines two key ideas drawn from the finite volume and finite element methods: the physics of wave propagation is accounted for by means of Riemann problems, and accuracy is obtained by means of high-order polynomial approximations within the elements. A high-order accurate Low Storage Explicit Runge-Kutta (LSERK) method is used for the temporal discretization in t, which allows the method to be nonlinearly stable regardless of its accuracy. The resulting RKDG methods are stable and high-order accurate. The L1, L2 and L∞ error norm analysis shows that the scheme is highly accurate and effective. Hence, the method is well suited to achieving high-order accurate solutions for the scalar wave equation and other hyperbolic equations.

Keywords: Nodal Discontinuous Galerkin Method, RKDG, Scalar Wave Equation, LSERK
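
For reference, the discrete error norms used to assess such schemes can be computed against the exact translated solution of the linear advection equation; in the sketch below the "numerical" solution is a placeholder rather than an RKDG result.

```python
# Sketch of the discrete L1, L2 and Linf error norms, evaluated against the exact
# solution of u_t + a u_x = 0 (the initial profile translated by a*t). The
# "numerical" solution below is a placeholder; in practice it would come from the
# RKDG solver.
import numpy as np

a, t, n = 1.0, 0.5, 400
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
u0 = lambda s: np.sin(2 * np.pi * s)

u_exact = u0((x - a * t) % 1.0)                     # exact translated profile
u_num = u_exact + 1e-4 * np.sin(6 * np.pi * x)      # stand-in for the RKDG result

err = u_num - u_exact
print("L1   norm:", np.sum(np.abs(err)) * dx)
print("L2   norm:", np.sqrt(np.sum(err**2) * dx))
print("Linf norm:", np.max(np.abs(err)))
```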

1763 Similitude for Thermal Scale-up of a Multiphase Thermolysis Reactor in the Cu-Cl Cycle of a Hydrogen Production

Authors: Mohammed W. Abdulrahman

Abstract:

The thermochemical copper-chlorine (Cu-Cl) cycle is considered a sustainable and efficient technology for hydrogen production when linked with clean-energy systems such as nuclear reactors or solar thermal plants. In the Cu-Cl cycle, water is decomposed thermally into hydrogen and oxygen through a series of intermediate reactions. This paper investigates the thermal scale-up of the three-phase oxygen production reactor in the Cu-Cl cycle, where the reaction is endothermic and the temperature is about 530 °C. The paper focuses on examining the size and number of oxygen reactors required to provide enough heat input for different rates of hydrogen production. The multiphase reactor considered is a continuous stirred tank reactor (CSTR) heated by a half-pipe jacket. The thermal resistance of each section of the jacketed reactor system is studied to examine its effect on the heat balance of the reactor, and it is found that the dominant contribution to the system thermal resistance comes from the reactor wall. In the analysis, the Cu-Cl cycle is assumed to be driven by a nuclear reactor, and two types of nuclear reactors are examined as the heat source for the oxygen reactor: the CANDU Supercritical Water Reactor (CANDU-SCWR) and the High Temperature Gas Reactor (HTGR). It is concluded that the CANDU-SCWR requires a heat transfer rate 3-4 times higher than the HTGR. The effect of the reactor aspect ratio is also examined; increasing the aspect ratio decreases the number of reactors required, although the rate of this decrease diminishes as the aspect ratio grows. Finally, a comparison between the heat balance results and existing mass balance results shows that the size of the oxygen reactor is governed by the heat balance rather than the material balance.

Keywords: Clean energy, Cu-Cl cycle, heat transfer, sustainable energy.
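
A sketch of the series thermal-resistance bookkeeping behind the scale-up argument (jacket-side convection, conduction through the reactor wall, process-side convection), with invented coefficients chosen only to show the calculation and how a required duty translates into a number of reactors.

```python
# Sketch of the series thermal-resistance heat balance for a jacketed reactor:
# jacket-side convection, conduction through the reactor wall, and process-side
# convection in series. All coefficients, areas and duties are invented numbers,
# chosen only to show the calculation, not the paper's values.
import numpy as np

h_jacket, h_process = 2000.0, 3000.0         # W/m^2.K, convective coefficients
k_wall, thickness = 16.0, 0.02               # W/m.K and m, reactor wall
area = 12.0                                  # m^2, heat-transfer area of one reactor

R_jacket  = 1.0 / (h_jacket * area)
R_wall    = thickness / (k_wall * area)      # dominant term in the paper's analysis
R_process = 1.0 / (h_process * area)
R_total = R_jacket + R_wall + R_process

delta_T = 50.0                               # K, driving temperature difference
q_per_reactor = delta_T / R_total            # W transferred by one reactor

q_required = 2.0e6                           # W, duty needed for a given H2 production rate
print(f"wall share of resistance: {R_wall / R_total:.0%}")
print(f"heat per reactor: {q_per_reactor/1e3:.0f} kW -> reactors needed: "
      f"{int(np.ceil(q_required / q_per_reactor))}")
```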

1762 A Numerical Model for Simulation of Blood Flow in Vascular Networks

Authors: Houman Tamaddon, Mehrdad Behnia, Masud Behnia

Abstract:

An accurate study of blood flow requires an accurate vascular pattern and the geometrical properties of the organ of interest. Due to the complexity of vascular networks and their poor accessibility in vivo, it is challenging to reconstruct the entire vasculature of any organ experimentally. The objective of this study is to introduce an innovative approach for reconstructing a full vascular tree from available morphometric data. Our method implements morphometric data on those parts of the vascular tree that are smaller than the resolution of medical imaging methods, reconstructing the entire arterial tree down to the capillaries. Vessels larger than 2 mm are obtained from direct volume and surface analysis using contrast-enhanced computed tomography (CT). Vessels smaller than 2 mm are reconstructed from available morphometric and distensibility data and arranged by applying Murray's laws. Implementing the morphometric data to reconstruct the branching pattern while applying Murray's laws at every vessel bifurcation leads to an accurate vascular tree reconstruction. The reconstruction algorithm generates the full arterial tree topography down to the first capillary bifurcation. The geometry of each order of the vascular tree is generated separately to minimize the construction and simulation time. The node-to-node connectivity, along with the diameter and length of every vessel segment, is established, and order numbers according to the diameter-defined Strahler system are assigned. During the simulation, the averaged flow rate for each order is used to predict the pressure drop, and once the pressure drop is predicted, the flow rate is corrected to match the computed pressure drop for each vessel. The final results for three cardiac cycles are presented and compared to clinical data.

Keywords: Blood flow, Morphometric data, Vascular tree, Strahler ordering system.
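
For context, Murray's law fixes the daughter radii at each generated bifurcation: the cube of the parent radius equals the sum of the cubes of the daughter radii. The asymmetry ratio in the sketch below is illustrative.

```python
# Sketch of Murray's law, applied at a bifurcation when the sub-resolution tree
# is generated: the cube of the parent radius equals the sum of the cubes of the
# daughter radii. The asymmetry ratio used below is illustrative.
def murray_daughters(r_parent, asymmetry=0.8):
    """Return daughter radii (r1, r2) with r2 = asymmetry * r1 satisfying Murray's law."""
    r1 = r_parent / (1.0 + asymmetry**3) ** (1.0 / 3.0)
    return r1, asymmetry * r1

r1, r2 = murray_daughters(1.0)      # parent radius 1 mm
print(f"daughters: {r1:.3f}, {r2:.3f}; check: {r1**3 + r2**3:.3f} (should be 1.000)")
```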

1760 A Comparison of Experimental Data with Monte Carlo Calculations for Optimisation of the Source-to-Detector Distance in Determining the Efficiency of a LaBr3:Ce (5%) Detector

Authors: H. Aldousari, T. Buchacher, N. M. Spyrou

Abstract:

Cerium-doped lanthanum bromide LaBr3:Ce(5%) crystals are considered to be among the most advanced scintillator materials used in PET scanning, combining a high light yield, a fast decay time, and excellent energy resolution. Apart from the correct choice of scintillator, it is also important to optimise the detector geometry, not least the source-to-detector distance, in order to obtain reliable measurements of efficiency. In this study a commercially available 25 mm x 25 mm BrilLanCeTM 380 LaBr3:Ce(5%) detector was characterised in terms of its efficiency at varying source-to-detector distances. Gamma-ray spectra of 22Na, 60Co, and 137Cs were acquired separately at distances of 5, 10, 15, and 20 cm. As a result of the change in the solid angle subtended by the detector, the geometric efficiency decreases with increasing distance. High efficiencies at short distances can cause pulse pile-up, which occurs when subsequent photons are detected before previously detected events have decayed. To reduce this systematic error, the source-to-detector distance should balance efficiency against pulse pile-up suppression, as otherwise pile-up corrections would be necessary at short distances. In addition to the experimental measurements, Monte Carlo simulations have been carried out for the same setup, allowing a comparison of the results. The advantages and disadvantages of each approach are highlighted.

Keywords: BrilLanCeTM380 LaBr3:Ce(5%), Coincidence summing, GATE simulation, Geometric efficiency
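
The distance dependence described above follows from the solid angle subtended by the detector face; the sketch below evaluates the standard on-axis point-source geometric efficiency for a 25 mm crystal face at the four distances used, ignoring intrinsic efficiency.

```python
# Sketch of the geometric (solid-angle) efficiency of a disc-faced detector for an
# on-axis point source: Omega = 2*pi*(1 - d/sqrt(d^2 + r^2)), efficiency = Omega/(4*pi).
# Intrinsic (interaction) efficiency is ignored here; only the 25 mm crystal face
# and the distances from the abstract are used.
import numpy as np

radius = 25.0 / 2.0                          # mm, face radius of the 25 mm crystal
for d in (50.0, 100.0, 150.0, 200.0):        # source-to-detector distances, mm
    omega = 2.0 * np.pi * (1.0 - d / np.sqrt(d**2 + radius**2))
    print(f"d = {d:5.0f} mm  geometric efficiency = {omega / (4.0 * np.pi):.4f}")
```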

1760 Robust Iterative PID Controller Based on Linear Matrix Inequality for a Sample Power System

Authors: Ahmed Bensenouci

Abstract:

This paper presents the design steps of a robust Linear Matrix Inequality (LMI) based iterative multivariable PID controller for a sample power system comprising a synchronous generator connected to a large network via a step-up transformer and a transmission line. The generator is equipped with two control loops, namely the speed/power (governor) loop and the voltage (exciter) loop. Both loops are lumped into one, where the errors in the terminal voltage and output active power represent the controller inputs, and the generator exciter voltage and governor valve position represent its outputs. A multivariable PID is considered here because of its wide use in industry, its simple structure, and its easy implementation; it is also preferred for higher-order plants that cannot be reduced to lower-order ones. To improve robustness to variations in the controlled variables, the H∞-norm of the system transfer function is used. To show the effectiveness of the controller, several tests are applied, namely step/tracking changes in the controlled variables and variations in the plant parameters. A comparative study between the proposed controller and a robust H∞ LMI-based output feedback controller is carried out in terms of robustness to disturbance rejection. The simulation results show the superiority of the iterative multivariable PID.

Keywords: Linear matrix inequality, power system, robust iterative PID, robust output feedback control

1759 Geopotential Models Evaluation in Algeria Using Stochastic Method, GPS/Leveling and Topographic Data

Authors: M. A. Meslem

Abstract:

For precise geoid determination with the remove-compute-restore technique, a reference field is used to subtract the long and medium wavelengths of the gravity field from the observation data. Therefore, a comparison of the candidate models should be made in order to select the optimal reference gravity field. In this context, two recent global geopotential models have been selected for a comparison study over Northern Algeria: the Earth Gravitational Model (EGM2008) and the Global Gravity Model (GECO), the latter obtained by combining the former with the anomalous potential derived from a GOCE satellite-only global model. Free-air gravity anomalies in the study area were used to compute residual data from both gravity field models, and a Digital Terrain Model (DTM) was used to subtract the residual terrain effect from the gravity observations. The residual data were used to generate local empirical covariance functions, which were fitted to their closed form in order to compare their statistical behaviour in both cases. Finally, height anomalies were computed from both geopotential models and compared to a set of GPS-levelled benchmarks using least-squares adjustment. The results described in detail in this paper point to a slight overall advantage of the GECO global model, based on the comparison of error degree variances and the ground-truth evaluation.

Keywords: Quasigeoid, gravity anomalies, covariance, GGM.

1758 Laser Data Based Automatic Generation of Lane-Level Road Map for Intelligent Vehicles

Authors: Zehai Yu, Hui Zhu, Linglong Lin, Huawei Liang, Biao Yu, Weixin Huang

Abstract:

With the development of intelligent vehicle systems, a high-precision road map is increasingly needed in many aspects. The automatic lane lines extraction and modeling are the most essential steps for the generation of a precise lane-level road map. In this paper, an automatic lane-level road map generation system is proposed. To extract the road markings on the ground, the multi-region Otsu thresholding method is applied, which calculates the intensity value of laser data that maximizes the variance between background and road markings. The extracted road marking points are then projected to the raster image and clustered using a two-stage clustering algorithm. Lane lines are subsequently recognized from these clusters by the shape features of their minimum bounding rectangle. To ensure the storage efficiency of the map, the lane lines are approximated to cubic polynomial curves using a Bayesian estimation approach. The proposed lane-level road map generation system has been tested on urban and expressway conditions in Hefei, China. The experimental results on the datasets show that our method can achieve excellent extraction and clustering effect, and the fitted lines can reach a high position accuracy with an error of less than 10 cm.

Keywords: Curve fitting, lane-level road map, line recognition, multi-thresholding, two-stage clustering.
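
A sketch of two steps from the pipeline: a plain global Otsu threshold on intensity values (the paper uses a multi-region variant) and a least-squares cubic fit to one clustered lane line (standing in for the Bayesian estimation of the paper); intensities and points are synthetic.

```python
# Sketch of two steps from the pipeline: a plain (global) Otsu threshold on laser
# intensity values -- the paper uses a multi-region variant -- and a least-squares
# cubic fit to one clustered lane line. Intensities and points are synthetic.
import numpy as np

def otsu_threshold(values, bins=256):
    """Threshold maximising the between-class variance of a 1-D intensity sample."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                                  # class probabilities
    w1 = 1.0 - w0
    mu0 = np.cumsum(p * centers) / np.maximum(w0, 1e-12)
    mu1 = (np.sum(p * centers) - np.cumsum(p * centers)) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between[:-1])]            # skip the degenerate last bin

rng = np.random.default_rng(0)
ground = rng.normal(30, 8, 5000)                       # background returns
marking = rng.normal(120, 10, 500)                     # retroreflective road markings
thr = otsu_threshold(np.concatenate([ground, marking]))
print(f"Otsu intensity threshold ~ {thr:.1f}")

# Cubic polynomial approximation of one lane-line cluster (x along the road)
x = np.linspace(0, 50, 200)
y = 0.002 * x**2 + 0.05 * x + rng.normal(0, 0.05, x.size)
coeffs = np.polyfit(x, y, deg=3)                       # least-squares stand-in for the
print("cubic coefficients:", np.round(coeffs, 5))      # Bayesian estimation in the paper
```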

1757 Fusion of Finger Inner Knuckle Print and Hand Geometry Features to Enhance the Performance of Biometric Verification System

Authors: M. L. Anitha, K. A. Radhakrishna Rao

Abstract:

With the advent of modern computing technology, there is an increased demand for recognition systems capable of verifying the identity of individuals. Recognition systems are required by several civilian and commercial applications to provide access to secured resources. Traditional recognition systems based on physical identities are not sufficiently reliable to satisfy security requirements, owing to advances in forgery and identity impersonation methods. Recognizing individuals based on their unique physiological characteristics, known as biometric traits, is a reliable technique, since these traits are not transferable and cannot be stolen or lost. Since the performance of a biometric recognition system depends on the particular trait utilized, the present work proposes a fusion approach that combines the inner knuckle print (IKP) of the middle, ring, and index fingers with the geometrical features of the hand. The hand image, captured with a digital camera, is preprocessed to locate the finger IKP regions of interest (ROIs) and the hand geometry features. The geometrical features are represented as distances between key points, and the IKP features are extracted by applying a local binary pattern descriptor to the IKP ROI. Decision-level AND fusion is adopted, which improves the performance of the combined scheme. The proposed approach is tested on a database collected at our institute and is of practical significance since both the hand geometry and the IKP features can be extracted from the palmar region of the hand. The fusion of these features yields a false acceptance rate of 0.75% and a false rejection rate of 0.86% in the verification tests conducted, lower than the rates obtained using the individual traits. The results confirm the usefulness of the proposed approach and the suitability of the selected features for a biometric recognition system based on features from the palmar region of the hand.

Keywords: Biometrics, hand geometry features, inner knuckle print, recognition.

1756 Gas Sensing Properties of SnO2 Thin Films Modified by Ag Nanoclusters Synthesized by SILD Method

Authors: G. Korotcenkov, B. K. Cho, L. B. Gulina, V. P. Tolstoy

Abstract:

The effect of SnO2 surface modification by Ag nanoclusters, synthesized by the SILD method, on the operating characteristics of thin-film gas sensors was studied, and models for the promotional role of the Ag additives are discussed. It was found that this approach can be used to improve both the sensitivity and the response rate of SnO2-based gas sensors to CO and H2. At the same time, the presence of Ag clusters on the SnO2 surface suppressed the sensor response to ozone.

Keywords: Ag nanoparticles, deposition, characterization, gas sensors, optimization.

1755 Modified Levenberg-Marquardt Method for Neural Networks Training

Authors: Amir Abolfazl Suratgar, Mohammad Bagher Tavakoli, Abbas Hoseinabadi

Abstract:

In this paper, a modification of the Levenberg-Marquardt algorithm for MLP neural network learning is proposed. The proposed algorithm has good convergence and reduces the amount of oscillation in the learning procedure. An example is given to show the usefulness of this method, and a simulation verifies its results.

Keywords: Levenberg-Marquardt, modification, neural network, variable learning rate.
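
For context, the sketch below shows the basic, unmodified Levenberg-Marquardt update dw = (JᵀJ + μI)⁻¹Jᵀe with the usual accept/reject damping adaptation on a tiny curve-fitting problem; the specific modification proposed in the paper is not reproduced.

```python
# Sketch of the basic Levenberg-Marquardt update dw = (J^T J + mu*I)^(-1) J^T e with
# the usual accept/reject damping adaptation, on a tiny curve-fitting problem. The
# specific modification proposed in the paper is not reproduced here.
import numpy as np

x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(-1.5 * x) + 0.01 * np.random.default_rng(0).normal(size=x.size)

def residuals(w):                      # model: y = w0 * exp(w1 * x)
    return y - w[0] * np.exp(w[1] * x)

def jacobian(w):                       # d(residual)/d(w), analytic
    e = np.exp(w[1] * x)
    return np.column_stack([-e, -w[0] * x * e])

w, mu = np.array([1.0, -1.0]), 1e-2
for _ in range(50):
    r, J = residuals(w), jacobian(w)
    dw = np.linalg.solve(J.T @ J + mu * np.eye(2), -J.T @ r)
    if np.sum(residuals(w + dw) ** 2) < np.sum(r**2):
        w, mu = w + dw, mu * 0.3       # step accepted: trust the Gauss-Newton direction more
    else:
        mu *= 10.0                     # step rejected: behave more like gradient descent
print("fitted parameters:", w)         # should approach (2.0, -1.5)
```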

1754 Holistic Face Recognition using Multivariate Approximation, Genetic Algorithms and AdaBoost Classifier: Preliminary Results

Authors: C. Villegas-Quezada, J. Climent

Abstract:

Several works on facial recognition have dealt with methods that identify isolated characteristics of the face or with templates that encompass several of its regions. In this paper, a new technique is introduced that approaches the problem holistically, dispensing with the need to identify geometrical characteristics or regions of the face. A face is characterized by randomly sampling selected attributes of the pixels of its image. From this information we construct a data set corresponding to the values of low frequencies, gradient, entropy, and several other pixel characteristics of the image, generating a set of "p" variables. The multivariate data set is approximated with polynomials that minimize the data fitness error in the minimax sense (L∞-norm). A Genetic Algorithm (GA) is used to circumvent the problem of dimensionality inherent in higher-degree polynomial approximations; the GA yields the degree and the coefficient values of the polynomials approximating the image of a face. The system is trained by finding, through a resampling process, a family of characteristic polynomials in several variables (pixel characteristics) for each face Fi in the database. A face F is recognized by finding its characteristic polynomials and using an AdaBoost classifier to compare F's polynomials with each Fi's polynomials; the winner is the polynomial family closest to F's, corresponding to the target face in the database.

Keywords: AdaBoost Classifier, Holistic Face Recognition, Minimax Multivariate Approximation, Genetic Algorithm.

1753 ISTER (Immune System - Tumor Efficiency Rate): An Important Key for Planning in Radiotherapic Facilities

Authors: O. Sotolongo-Grau, D. Rodriguez-Perez, J. A. Santos-Miranda, M. M. Desco, O. Sotolongo-Costa, J. C. Antoranz

Abstract:

The use of the oncologic index ISTER allows more effective planning of radiotherapy facilities in hospitals. Any change in the radiotherapy treatment due to unexpected interruptions can be accommodated by recalculating the doses for the new treatment duration while keeping the prognosis optimal. The results obtained from a simulation model covering millions of patients allow the definition of optimal success-probability algorithms.

Keywords: Mathematical model, radiation oncology, dynamical systems applications.

1752 Stewardship of Urban Greenery in an Era of Global Urbanisation

Authors: Rhoda M. Darkwah, Patrick B. Cobbinah

Abstract:

Urban greenery remains the bastion of the urban landscape and a key to sustainable development, owing to its integral connection to the general health and wellbeing of urban residents. However, in an era of rapid urbanisation, recent studies indicate that urban greenery, especially in ecologically sensitive areas, is becoming increasingly depleted in many African cities. Given the scale and rate of natural and anthropogenic change, effective management of urban greenery, with the ultimate goal of restoring depleted urban landscapes, is urgent. This review advocates an urban resilience model for managing urban greenery.

Keywords: Green spaces, resilience, urbanisation, urban greenery.

1751 Open-Loop Vector Control of Induction Motor with Space Vector Pulse Width Modulation Technique

Authors: Karchung, S. Ruangsinchaiwanich

Abstract:

This paper presents an open-loop vector control method for an induction motor with the space vector pulse width modulation (SVPWM) technique. Closed-loop speed control is normally preferred and is believed to be more accurate; however, it requires a position sensor to track the rotor position, which is undesirable in certain applications. This paper examines the performance of a three-phase induction motor with the simplest control algorithm, without a position sensor or an estimation block for sensorless control. The motor stator currents are measured and transformed to the synchronously rotating (d-q) frame using the Clarke and Park transformations. The actual control happens in this frame, where the measured currents are compared with the reference currents. The error signal is fed to a conventional PI controller, and the corrected d-q voltage is generated. The controller outputs are transformed back to three-phase voltages and fed to the SVPWM block, which generates the PWM signals for the voltage source inverter. The open-loop vector control model with the SVPWM algorithm is modeled in MATLAB/Simulink and validated experimentally on a TMS320F28335 DSP board.

Keywords: Electric drive, induction motor, open-loop vector control, space vector pulse width modulation technique.
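
A sketch of the Clarke and Park transforms mentioned above, which take the measured three-phase stator currents into the rotating d-q frame (amplitude-invariant form); the PI controllers and the SVPWM stage are not shown.

```python
# Sketch of the Clarke and Park transforms used to take the measured three-phase
# stator currents into the synchronously rotating d-q frame (amplitude-invariant
# form). The PI current controllers and the SVPWM stage are not shown.
import numpy as np

def clarke(ia, ib, ic):
    """abc -> stationary alpha-beta frame (amplitude invariant)."""
    i_alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    i_beta = (2.0 / 3.0) * (np.sqrt(3.0) / 2.0) * (ib - ic)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta):
    """alpha-beta -> rotating d-q frame at electrical angle theta."""
    i_d = i_alpha * np.cos(theta) + i_beta * np.sin(theta)
    i_q = -i_alpha * np.sin(theta) + i_beta * np.cos(theta)
    return i_d, i_q

theta = 0.7                                  # electrical angle (rad), from the open-loop ramp
ia, ib, ic = np.cos(theta), np.cos(theta - 2*np.pi/3), np.cos(theta + 2*np.pi/3)
print(park(*clarke(ia, ib, ic), theta))      # balanced set aligned with d-axis -> (1, 0)
```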
