Search results for: uncorrected refractive error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2061


1461 Characterization on Molecular Weight of Polyamic Acids Using GPC Coupled with Multiple Detectors

Authors: Mei Hong, Wei Liu, Xuemin Dai, Yanxiong Pan, Xiangling Ji

Abstract:

Polyamic acid (PAA) is the precursor of polyimide (PI) prepared by the two-step method. Its molecular weight and molecular weight distribution not only play an important role during preparation and processing but also influence the final performance of the PI. However, precise characterization of the molecular weight of PAA is still a challenge because of the very complicated interactions in the solution system, including electrostatic, hydrogen-bond, and dipole-dipole interactions. It is therefore necessary to establish a suitable strategy that completely suppresses these complex effects and yields reasonable molecular weight data. Herein, gel permeation chromatography (GPC) coupled with differential refractive index (RI) and multi-angle laser light scattering (MALLS) detectors was applied to measure the molecular weight of (6FDA-DMB) PAA using different mobile phases: LiBr/DMF, LiBr/H3PO4/THF/DMF, LiBr/HAc/THF/DMF, and LiBr/HAc/DMF. It was found that the combination of LiBr with HAc shields the above-mentioned complex interactions and is more conducive to the separation of PAA than the addition of LiBr alone in DMF. LiBr/HAc/DMF was employed for the first time as a mild mobile phase to effectively separate PAA and determine its molecular weight. After a series of conditional experiments, 0.02 M LiBr/0.2 M HAc/DMF was fixed as the optimized mobile phase to measure the relative and absolute molecular weights of the prepared (6FDA-DMB) PAA; the Mw obtained from GPC-MALLS and GPC-RI was 35,300 g/mol and 125,000 g/mol, respectively. Notably, this mobile phase is also applicable to other PAA samples with different structures, and the molecular weight results are reproducible.

Keywords: Polyamic acids, Polyelectrolyte effects, Gel permeation chromatography, Mobile phase, Molecular weight

Procedia PDF Downloads 55
1460 The Bayesian Premium Under Entropy Loss

Authors: Farouk Metiri, Halim Zeghdoudi, Mohamed Riad Remita

Abstract:

Credibility theory is an experience rating technique in actuarial science and one of the quantitative tools that allow insurers to adjust future premiums based on past experience. It is typically used in automobile insurance, workers' compensation premiums, and IBNR (incurred but not reported) claims, where credibility theory can be used to estimate the claim size. In this study, we focus on a popular tool in credibility theory, the Bayesian premium estimator, taking the Lindley distribution as the claim distribution. We derive this estimator under entropy loss, which is asymmetric, and under squared-error loss, which is symmetric, with both informative and non-informative priors. In a purely Bayesian setting, the prior distribution represents the insurer's belief about the insured's risk level, which is updated once the insured's data are collected at the end of the period. However, the explicit form of the Bayesian premium when the prior is not a member of the exponential family can be quite difficult to obtain, as it involves integrals that are not analytically solvable. The paper solves this problem by deriving the estimator using the Lindley approximation, a numerical method well suited to such problems: it approximates the ratio of the integrals as a whole and produces a single numerical result. A Monte Carlo simulation study is then performed to evaluate the estimator, and a mean squared error comparison is made between the Bayesian premium estimators under the two loss functions.
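
As a hedged illustration of the two Bayes rules compared here (not the paper's implementation): under squared-error loss the Bayes estimate of the Lindley parameter is the posterior mean E[θ|data], while under entropy loss it is 1/E[1/θ|data]. The sketch below evaluates both on a grid posterior; the prior shape and the claim data are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch: Bayes rules for the Lindley parameter theta under the two
# loss functions named above. Squared-error loss -> posterior mean; entropy
# loss -> 1 / E[1/theta | data]. Grid posterior; prior and data are assumed.

def lindley_pdf(x, theta):
    return theta**2 / (theta + 1) * (1 + x) * np.exp(-theta * x)

def bayes_rules(claims, grid):
    # assumed prior shape proportional to exp(-theta); not from the paper
    log_post = -grid + np.sum([np.log(lindley_pdf(x, grid)) for x in claims], axis=0)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()                                  # discrete grid posterior
    se_rule = (grid * post).sum()                       # squared-error estimate
    entropy_rule = 1.0 / (post / grid).sum()            # entropy-loss estimate
    return se_rule, entropy_rule

rng = np.random.default_rng(0)
claims = rng.exponential(0.5, size=25)                  # stand-in claim severities
grid = np.linspace(0.01, 15, 3000)
print(bayes_rules(claims, grid))
```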

Keywords: Bayesian estimator, credibility theory, entropy loss, Monte Carlo simulation

Procedia PDF Downloads 334
1459 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS

Authors: Eunsu Jang, Kang Park

Abstract:

In developing an armored ground combat vehicle (AGCV), analyzing the vulnerability (or survivability) of the AGCV against enemy attack is a very important step. In vulnerability analysis, penetration equations are usually used to obtain the penetration depth and to check whether a bullet can penetrate the armor of the AGCV, which can damage internal components or injure the crew. The penetration equations are derived from penetration experiments, which require a long time and great effort; moreover, they usually hold only for the specific target material and bullet type used in the experiments. Penetration simulation using ANSYS can therefore be another option for calculating penetration depth. However, modeling the targets and selecting the input parameters correctly is essential for obtaining an accurate penetration depth. This paper presents a sensitivity analysis of the ANSYS input parameters with respect to the accuracy of the calculated penetration depth. Two conflicting objectives must be balanced when adopting ANSYS for penetration analysis: maximizing the calculation accuracy and minimizing the calculation time. To maximize accuracy, a sensitivity analysis of the ANSYS input parameters was performed and the RMS error with respect to the experimental data was calculated. The input parameters, including mesh size, boundary conditions, material properties, and target diameter, were tested and selected to minimize the error between the simulation results and the experimental data reported in papers on penetration equations. To minimize the calculation time, the parameter values obtained from the accuracy analysis were adjusted for optimized overall performance. The analysis found the following: 1) As the mesh size decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase. 2) As the target diameter decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress of the target material decreases, the penetration depth increases. 4) The boundary condition with only the side surface of the target fixed gives a greater penetration depth than the one with both the side and rear surfaces fixed. Using these findings, the input parameters can be tuned to minimize the error between simulation and experiment, as sketched below, and penetration analysis can be done on a computer without actual experiments. Penetration experiment data are usually hard to obtain for security reasons, and published papers provide them only for a limited set of target materials. The next step of this research is to generalize the approach to anticipate penetration depth by interpolating between known penetration experiments. The result may not be accurate enough to replace penetration experiments, but such simulations can be used in the modelling and simulation stage early in the AGCV design process.
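
A minimal sketch of the tuning loop described above, under stated assumptions: the surrogate_depth() function is a toy stand-in for a scripted ANSYS run (the real workflow drives the solver), and the experimental depths are illustrative values, not data from the paper.

```python
import numpy as np

# Sweep candidate input parameters, compare predicted penetration depths
# against experimental depths, keep the set with the smallest RMS error.

exp_depth = np.array([12.1, 18.4, 25.3])            # mm, illustrative experiments

def surrogate_depth(mesh_mm, dia_mm):
    # Toy surrogate echoing the reported trends: finer mesh and larger
    # target diameter both increase the computed depth. Not an ANSYS call.
    base = np.array([11.0, 17.0, 24.0])
    return base * (1 + 0.15 * (0.9 - mesh_mm)) * (1 + 0.0002 * (dia_mm - 60))

candidates = [(m, d) for m in (0.5, 0.7, 0.9) for d in (60, 150, 250)]
rms = {p: np.sqrt(np.mean((surrogate_depth(*p) - exp_depth) ** 2)) for p in candidates}
best = min(rms, key=rms.get)
print("best (mesh mm, target dia mm):", best, "RMS error:", round(rms[best], 3))
```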

Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis

Procedia PDF Downloads 401
1458 Gradient Index Metalens for WLAN Applications

Authors: Akram Boubakri, Fethi Choubeni, Tan Hoa Vuong, Jacques David

Abstract:

The control of electromagnetic waves has been a key aim of much research over the past decade. In this regard, metamaterials have shown a strong ability to manipulate electromagnetic waves on a subwavelength scale thanks to unconventional properties not available in natural materials, such as a negative refractive index, super-imaging, and invisibility cloaking. Metalenses avoid some drawbacks of conventional lenses: focusing with conventional lenses suffers from limited resolution because they can only focus the propagating wave component. Metalenses, by contrast, can go beyond the diffraction limit and enhance the resolution not only by collecting the propagating waves but also by restoring the amplitude of the evanescent waves, which decay rapidly away from the source and carry the finest details of the image. Metasurfaces have many practical advantages over three-dimensional metamaterial structures, especially ease of fabrication and a smaller required volume, and have been widely used to improve antenna performance and to build flat metalenses. In this work, we show that a well-designed metasurface lens operating at 5.9 GHz efficiently enhances the radiation characteristics of a patch antenna and can be used for WLAN applications (IEEE 802.11a). The proposed metasurface lens is built from geometrically modified unit cells, which change the response of the lens at different positions and allow the wavefront of the incident beam to be controlled through the gradient refractive index.
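
A minimal sketch of the gradient-index principle behind such a lens: each unit cell at radius r must supply the extra optical path that flattens a spherical wavefront from a focus at distance f. The 5.9 GHz frequency matches the abstract; the focal length, effective lens thickness and cell positions below are assumed values, not the paper's design.

```python
import numpy as np

# Required effective index per unit cell for a flat gradient-index lens.

c = 3e8
lam = c / 5.9e9                       # operating wavelength at 5.9 GHz
f = 5 * lam                           # assumed focal length
t = 0.02                              # assumed effective lens thickness, m
r = np.linspace(0, 4 * lam, 9)        # assumed radial unit-cell positions

path_excess = np.sqrt(r**2 + f**2) - f                 # extra path at radius r
n_cell = 1.0 + (path_excess.max() - path_excess) / t   # index graded high -> low
print(np.round(n_cell, 3))
```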

Keywords: focusing, gradient index, metasurface, metalens, WLAN Applications

Procedia PDF Downloads 254
1457 Use of Polymeric Materials in the Architectural Preservation

Authors: F. Z. Benabid, F. Zouai, A. Douibi, D. Benachour

Abstract:

Fluorinated polymers and polyacrylics are widely used in the field of historical monuments. PVDF offers easy processing, good UV resistance and good chemical inertness. Although PMMA has good physical characteristics and a low price compared with PVDF, its deterioration under UV radiation limits its use as a protective agent for stone. The PVDF/PMMA blend, on the other hand, is a promising compromise for architectural restoration, since blending is the best method, in terms of quality and price, for making new polymeric materials with enhanced properties. Films of different compositions based on the two polymers in a suitable solvent (DMF) were prepared and subjected to artificial ageing and salt-fog exposure, spectroscopic analysis (FTIR and UV) and optical analysis (refractive index). Given the blend's great interest for the building field, a set of standard tests was carried out for the first time at the central laboratory of ENAP (Souk-Ahras) to evaluate its performance. The results obtained allowed the behavior of the different blend compositions to be observed under the various tests. The addition of PVDF to PMMA improves the latter's resistance to natural and artificial ageing and to salt fog, while PMMA enhances the optical properties of the blend. Finally, the 70/30 blend composition agrees with the results of previous works and is the most suitable proportion for an eventual application.

Keywords: blend, PVDF, PMMA, preservation, historic monuments

Procedia PDF Downloads 309
1456 Structural and Optical Properties of RF-Sputtered ZnS and Zn(S,O) Thin Films

Authors: Ould Mohamed Cheikh, Mounir Chaik, Hind El Aakib, Mohamed Aggour, Abdelkader Outzourhit

Abstract:

Zinc sulfide (ZnS) and oxygenated zinc sulfide Zn(O,S) thin films were deposited on glass substrates by reactive radio-frequency (RF) cathodic sputtering. The RF power and the oxygen percentage were varied in the ranges 100 W to 250 W and 5% to 20%, respectively. The structural, morphological and optical properties of the films were investigated. The optical properties (mainly the refractive index, the absorption coefficient and the optical band gap) were examined by optical transmission measurements in the ultraviolet-visible-near-infrared wavelength range. XRD analysis indicated that all sputtered ZnS films were single-phase with a preferential orientation along the (111) plane of zinc blende (ZB). The crystallite size ranged from 19.5 nm to 48.5 nm and varied with RF power, reaching a maximum at 200 W. The Zn(O,S) films, on the other hand, were amorphous. UV-visible measurements showed that the ZnS films had more than 80% transmittance in the visible wavelength region, while that of Zn(O,S) was 85%. Moreover, the band gap energy of the ZnS films increased slightly from 3.4 to 3.52 eV as the RF power was increased, whereas the optical band gap of Zn(O,S) decreased from 4.2 to 3.89 eV as the oxygen partial pressure in the sputtering atmosphere was increased at fixed RF power. Scanning electron microscopy revealed smooth surfaces for both types of films. X-ray reflectometry measurements on the ZnS films showed that the density of the films (3.9 g/cm³) is close to that of bulk ZnS.
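
A hedged sketch of the standard Tauc analysis by which direct optical band gaps such as these are extracted from transmission data: (αE)² is linear in photon energy E above the gap, and extrapolating the linear region to zero estimates Eg. The absorption edge below is synthetic, built around Eg = 3.5 eV; it is not the paper's data.

```python
import numpy as np

# Tauc plot for a direct-gap film: fit the linear region of (alpha*E)^2
# versus E and read the band gap off the energy-axis intercept.

E = np.linspace(3.0, 4.0, 400)                        # photon energy, eV
Eg_true, A = 3.5, 1e5                                 # synthetic edge parameters
alpha = A * np.sqrt(np.maximum(E - Eg_true, 0)) / E   # direct-gap edge model

y = (alpha * E) ** 2                                  # Tauc variable, = A^2 (E - Eg)
mask = y > 0.2 * y.max()                              # fit only the linear region
slope, intercept = np.polyfit(E[mask], y[mask], 1)
print("estimated Eg (eV):", round(-intercept / slope, 3))
```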

Keywords: Zn(O,S) thin film properties, Zn(O,S) by RF sputtering, ZnS for solar cells, thin films for renewable energy

Procedia PDF Downloads 282
1455 Microstructural and Optical Characterization of Heterostructures of ZnS/CdS and CdS/ZnS Synthesized by Chemical Bath Deposition Method

Authors: Temesgen Geremew

Abstract:

ZnS/glass and CdS/glass single layers and ZnS/CdS and CdS/ZnS heterojunction thin films were deposited by the chemical bath deposition method using zinc acetate and cadmium acetate as the metal ion sources and thioacetamide as the nonmetallic ion source in acidic medium. Na2EDTA was used as a complexing agent to control the free cation concentration. The single-layer and heterojunction thin films were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDX), and UV-VIS spectrometry. The XRD patterns of the CdS/glass thin film deposited on the soda-lime glass substrate crystallized in the cubic structure with a single peak along the (111) plane. The ZnS/CdS heterojunction and ZnS/glass single-layer thin films crystallized in the hexagonal ZnS structure, while the CdS/ZnS heterojunction thin film was nearly amorphous. The optical analysis confirmed single band gap values of 2.75 eV and 2.5 eV for the ZnS/CdS and CdS/ZnS heterojunction thin films, respectively. The CdS/glass and CdS/ZnS thin films have a larger imaginary dielectric component than real part. The optical conductivity of the single-layer and heterojunction films is on the order of 10¹⁵ s⁻¹. The optical study also confirmed refractive index values between 2 and 2.7 for the ZnS/glass, ZnS/CdS, and CdS/ZnS thin films for incident photon energies between 1.2 eV and 3.8 eV. The surface morphology studies revealed compacted spherical grains covering the substrate surfaces, with a few cracks on the ZnS/glass, ZnS/CdS, and CdS/glass films and voids on the CdS/ZnS films. The EDX results confirmed a nearly 1:1 metallic-to-nonmetallic ion ratio in the single-layered thin films and the dominance of Zn ions over Cd ions in both the ZnS/CdS and CdS/ZnS heterojunction thin films.

Keywords: ZnS/CdS heterojunction, CdS/ZnS heterojunction, chemical bath deposition, thin films, optical band gap

Procedia PDF Downloads 65
1454 Application of Grey Theory in the Forecast of Facility Maintenance Hours for Office Building Tenants and Public Areas

Authors: Yen Chia-Ju, Cheng Ding-Ruei

Abstract:

This study took an office building as a case subject and used grey theory to explore the responsive work-order repair requests for facilities and equipment in offices and public areas, with the purpose of providing future office building owners, executive managers, property management companies, and mechanical and electrical contractors with a reference for selecting and assessing forecast models. The main conclusions of this study are summarized as follows. 1. Grey relational analysis ranks the six repair categories by number of repairs as: power systems, building systems, water systems, air conditioning systems, fire systems, and manpower dispatch. In terms of facility maintenance importance, the order is: power systems, building systems, water systems, air conditioning systems, manpower dispatch, and fire systems. 2. The GM(1,N) and regression methods took maintenance hours as the dependent variable and repair number, leased area and tenant number as independent variables, and conducted single-month forecasts based on 12 data points from January to December 2011, as sketched below. From the verification results, the mean absolute error and average accuracy of GM(1,N) were 6.41% and 93.59%, and those of the regression model were 4.66% and 95.34%, indicating that both have highly accurate forecast capability.
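
A minimal sketch of the grey forecasting idea behind GM(1,N), shown here as the simpler univariate GM(1,1) on a monthly maintenance-hours series. The 12 values below are synthetic stand-ins for the Jan-Dec 2011 data, not figures from the study.

```python
import numpy as np

# GM(1,1): accumulate the series, fit the grey differential equation by
# least squares, then forecast by differencing the fitted accumulation.

x0 = np.array([102, 98, 110, 115, 108, 120, 125, 118, 122, 130, 127, 135.0])
x1 = np.cumsum(x0)                                   # accumulated generating series
z1 = 0.5 * (x1[1:] + x1[:-1])                        # mean-generated background values

B = np.column_stack([-z1, np.ones_like(z1)])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # develop and control coefficients

def x1_hat(k):                                       # fitted accumulated value at index k
    return (x0[0] - b / a) * np.exp(-a * k) + b / a

fitted = np.diff([x1_hat(k) for k in range(len(x0))])
mape = np.mean(np.abs((fitted - x0[1:]) / x0[1:])) * 100
next_month = x1_hat(len(x0)) - x1_hat(len(x0) - 1)   # one-step-ahead forecast
print(f"MAPE = {mape:.2f}%, forecast = {next_month:.1f} hours")
```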

Keywords: grey theory, forecast model, Taipei 101, office buildings, property management, facilities, equipment

Procedia PDF Downloads 444
1453 Microwave Dielectric Constant Measurements of Titanium Dioxide Using Five Mixture Equations

Authors: Jyh Sheen, Yong-Lin Wang

Abstract:

This research is dedicated to finding a different procedure for measuring the microwave dielectric properties of ceramic materials with high dielectric constants. For a composite of ceramic dispersed in a polymer matrix, the dielectric constants of composites with different concentrations can be obtained from various mixture equations; conversely, the mixture rules can be used to calculate the permittivity of the ceramic from measurements on the composite. To this end, the analysis method and theoretical accuracy of six basic mixture laws, derived from three basic particle shapes of ceramic fillers, have been reported for ceramic dielectric constants below 40 at microwave frequencies, and similar studies exist for other well-known mixture rules. They have shown that both a good physical curve match with the experimental results and a low potential theoretical error are important for calculation accuracy. Recently, a modified mixture equation for high-dielectric-constant ceramics at microwave frequencies was presented for strontium titanate (SrTiO3); it was selected from five well-known mixing rules and showed good accuracy for high-dielectric-constant measurements. However, the accuracy of this modified equation for other high-dielectric-constant materials is still unclear. Therefore, the five well-known mixing rules are examined again to understand their application to other high-dielectric-constant ceramics. TiO2, another high-dielectric-constant ceramic with a dielectric constant of 100, was chosen for this research, and its theoretical error equations are derived. In addition to the theoretical work, experimental measurements are required. Titanium dioxide is an interesting ceramic for microwave applications; in this research, its powder is adopted as the filler material and polyethylene powder as the matrix material. The dielectric constants of ceramic-polyethylene composites with various compositions were measured at 10 GHz. The theoretical curves of the five published mixture equations are shown together with the measured results to assess the curve-matching quality of each rule. Finally, based on the experimental observations and theoretical analysis, one of the five rules was selected and modified into a new powder mixture equation. This modified rule shows very good curve matching with the measurement data and a low theoretical error. The dielectric constant of the pure filler medium (titanium dioxide) can then be calculated from the measured dielectric constants of the composites by inverting the mixing equations, as sketched below, and the accuracy of the estimates from the various mixture rules is compared. The modified mixture rule also shows good measurement accuracy for the dielectric constant of titanium dioxide ceramic. This approach can be applied to microwave dielectric property measurements of other high-dielectric-constant ceramic materials in the future.
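
A hedged sketch of the inversion step: given composite permittivities measured at several filler volume fractions, back out the filler (ceramic) permittivity under a chosen mixing rule. Lichtenecker's logarithmic rule stands in for the paper's modified equation, and the "measured" data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Fit the filler permittivity so that the mixing rule matches the
# composite measurements in a least-squares sense.

eps_matrix = 2.3                                   # polyethylene, approximate

def lichtenecker(eps_filler, v):                   # composite permittivity model
    return np.exp(v * np.log(eps_filler) + (1 - v) * np.log(eps_matrix))

v = np.array([0.1, 0.2, 0.3])                      # filler volume fractions
eps_meas = lichtenecker(100.0, v) * np.array([1.01, 0.99, 1.005])  # synthetic data

def sse(eps_filler):                               # misfit to the measured data
    return np.sum((lichtenecker(eps_filler, v) - eps_meas) ** 2)

res = minimize_scalar(sse, bounds=(10, 300), method="bounded")
print("estimated filler permittivity:", round(res.x, 1))
```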

Keywords: microwave measurement, dielectric constant, mixture rules, composites

Procedia PDF Downloads 367
1452 Thermal Conductivity and Optical Absorption of GaAsPN/GaP for Tandem Solar Cells: Effect of Rapid Thermal Annealing

Authors: S. Ilahi, S. Almosni, F. Chouchene, M. Perrin, K. Zelazna, N. Yacoubi, R. Kudraweic, P. Rale, L. Lombez, J. F. Guillemoles, O. Durand, C. Cornet

Abstract:

Great efforts have been dedicated to obtaining high-quality GaAsPN, whose properties have played a great part in the development of solar cell devices based on Si substrates. GaAsPN, with a band gap of around 1.7 eV obtained through N incorporation, is of special interest for growth on Si substrates. Post-growth rapid thermal annealing (RTA) can be an effective way to improve the quality of the layer; the influence of growth conditions and post-growth annealing on the optical and thermal parameters is therefore considered here. We used photothermal deflection spectroscopy (PDS) to investigate the impact of rapid thermal annealing on the thermal and optical properties of GaAsPN. The principle of PDS is to illuminate the sample with a modulated monochromatic light beam; the absorbed energy is converted into heat through nonradiative recombination processes, and the generated thermal wave propagates into the sample and the surrounding media, creating a refractive-index gradient that deflects a laser probe beam skimming the sample surface. The incident light is assumed to be uniform, and only the sample absorbs the light. In conclusion, the results are promising, revealing an improvement in both the absorption coefficient and the thermal conductivity after annealing.

Keywords: GaAsPN absorber, photothermal deflection spectroscopy (PDS), photonics on silicon, thermal conductivity

Procedia PDF Downloads 354
1451 Attention States in the Sustained Attention to Response Task: Effects of Trial Duration, Mind-Wandering and Focus

Authors: Aisling Davies, Ciara Greene

Abstract:

Over the past decade, the phenomenon of mind-wandering in cognitive tasks has attracted widespread scientific attention. Research indicates that mind-wandering occurrences can be detected through behavioural responses in the Sustained Attention to Response Task (SART), and several studies have attributed a specific pattern of responding around an error in this task to an observable effect of a mind-wandering state. SART behavioural responses are also widely accepted as indices of sustained attention and of general attention lapses. However, evidence suggests that these same patterns of responding may be attributable to other factors associated with more focused states, and that it may be possible to distinguish the two states within the same task. To use behavioural responses in the SART to study mind-wandering, it is essential to establish both the SART parameters that increase the likelihood of errors due to mind-wandering and exactly what type of responses are indicative of mind-wandering, neither of which has yet been determined. The aims of this study were to compare different versions of the SART to establish which task induces the most mind-wandering episodes, and to determine whether mind-wandering-related errors can be distinguished from errors during periods of focus by behavioural responses in the SART. To achieve these objectives, 25 participants completed four modified versions of the SART that differed from the classic paradigm in several ways so as to capture more instances of mind-wandering. The trial duration was increased proportionately across the four versions of the task (Standard, Medium Slow, Slow, and Very Slow), and participants intermittently responded to thought probes assessing their level of focus and degree of mind-wandering throughout. Error rates, reaction times and variability in reaction times decreased in proportion to the decrease in trial duration rate, and the proportion of mind-wandering-related errors increased, until the Very Slow condition, where the extra decrease in duration no longer had an effect. Distinct reaction time patterns around an error, dependent on level of focus (high/low) and level of mind-wandering (high/low), were also observed, indicating four separate attention states occurring within the SART. This study establishes the optimal trial duration for inducing mind-wandering in the SART, provides evidence that different attention states can be observed within the SART, and highlights the importance of addressing other factors contributing to behavioural responses when studying mind-wandering during this task. A notable finding in relation to the standard SART was that while more errors were observed in this version of the task, most of these errors occurred during periods of focus, raising significant questions about our current understanding of mind-wandering and associated failures of attention.

Keywords: attention, mind-wandering, trial duration rate, Sustained Attention to Response Task (SART)

Procedia PDF Downloads 182
1450 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations

Authors: Karthikeyan Kalirajan, Ashok Joshi

Abstract:

An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV). The target location and the impact angle are given as constraints. The MaRV uses an explicit guidance law called vector guidance, which has two gains that are taken as decision variables. The problem is to find the optimal values of these gains that result in minimum miss distance and impact angle error. Using a simple 3-DOF non-rotating flat-earth model with the Lockheed Martin HP-MARV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study over a range of closed-loop gain values and generating the corresponding impact angle error and miss distance values. The results show well-defined lower and upper bounds on the gains that result in a near-optimal terminal guidance solution. The study finds that there exist common permissible regions (values of gains) where all constraints are met; moreover, the permissible region lies between flat regions, and hence the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent, the other being related to it through a simple straight-line expression. Furthermore, to reduce the computational burden of finding the optimal values of the two gains, a guidance law called diveline guidance, which uses a single gain, is discussed, and its derivation from the vector guidance law is presented in this paper.

Keywords: MaRV guidance, reentry trajectory, trajectory optimization, guidance gain selection

Procedia PDF Downloads 427
1449 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield from meteorological records. The prediction models used in this paper can be classified into model-driven and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on mechanistic crop modeling: they describe crop growth in interaction with the environment as dynamical systems. However, the calibration of such dynamical systems is difficult, because it amounts to a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yields in many situations). It is tested with CORNFLO, a crop model for maize growth. The data-driven approach to yield prediction, on the other hand, is free of the complex biophysical process but places strict requirements on the dataset. A second contribution of the paper is the comparison of the model-driven method with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso regression, principal components regression, and partial least squares regression) and machine learning methods (random forest, k-nearest neighbors, artificial neural networks, and SVM regression). The dataset consists of 720 records of corn yield at the county scale provided by the United States Department of Agriculture (USDA), together with the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate prediction capacity. The results show that, among the data-driven approaches, random forest is the most robust and generally achieves the best prediction error (MAEP 4.27%); it also outperforms our model-driven approach (MAEP 6.11%). However, the method of calibrating the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to identify the stresses suffered by the crop or the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine the two types of approaches.
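
A sketch of the data-driven track under stated assumptions: a random forest on climatic predictors with 5-fold cross-validation and the two metrics named above (RMSEP, and MAEP expressed as a percentage of the mean yield). X and y are synthetic stand-ins for the 720 USDA county records.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import KFold, cross_val_predict

# Cross-validated predictions, then the two accuracy metrics.
rng = np.random.default_rng(0)
X = rng.normal(size=(720, 10))                       # climatic features (synthetic)
y = 100 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=2, size=720)

model = RandomForestRegressor(n_estimators=300, random_state=0)
pred = cross_val_predict(model, X, y, cv=KFold(5, shuffle=True, random_state=0))

rmsep = np.sqrt(mean_squared_error(y, pred))
maep = 100 * mean_absolute_error(y, pred) / y.mean()
print(f"RMSEP = {rmsep:.2f}, MAEP = {maep:.2f}%")
```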

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 231
1448 Evaluating the Validity of CFD Model of Dispersion in a Complex Urban Geometry Using Two Sets of Experimental Measurements

Authors: Mohammad R. Kavian Nezhad, Carlos F. Lange, Brian A. Fleck

Abstract:

This research presents the validation of a computational fluid dynamics (CFD) model developed to simulate scalar dispersion emitted from rooftop sources around the buildings of the University of Alberta North Campus. The ANSYS CFX code was used to perform the numerical simulation of the wind regime and pollutant dispersion by solving the 3D steady Reynolds-averaged Navier-Stokes (RANS) equations on a building-scale high-resolution grid. The validation study was performed in two steps. First, the CFD model performance in 24 cases (eight wind directions and three wind speeds) was evaluated by comparing the predicted flow fields with data from a previous measurement campaign at the North Campus, using the standard deviation method (SDM). The numerical model showed maximum average percent errors of approximately 53% and 37% for wind incident from the north and northwest, respectively; good agreement with the measurements was observed for the other six directions, with an average error of less than 30%. In the second step, the reliability of the implemented turbulence model, numerical algorithm, modeling techniques, and grid generation scheme was further evaluated using the Mock Urban Setting Test (MUST) dispersion dataset. Several statistical measures, including the fractional bias (FB), the geometric mean bias (MG), and the normalized mean square error (NMSE), were used to assess the accuracy of the predicted dispersion field; their standard definitions are sketched below. Our CFD results are in very good agreement with the field measurements.
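
The three validation statistics named above have standard definitions in the dispersion-model evaluation literature; the short sketch below computes them, with Co the observed and Cp the predicted concentrations (synthetic values for illustration only).

```python
import numpy as np

# Standard dispersion-validation statistics on paired observations.
Co = np.array([1.2, 0.8, 2.5, 3.1, 0.6])   # observed (synthetic)
Cp = np.array([1.0, 0.9, 2.2, 3.5, 0.5])   # predicted (synthetic)

FB = 2 * (Co.mean() - Cp.mean()) / (Co.mean() + Cp.mean())   # fractional bias
MG = np.exp(np.log(Co).mean() - np.log(Cp).mean())           # geometric mean bias
NMSE = np.mean((Co - Cp) ** 2) / (Co.mean() * Cp.mean())     # normalized mean square error
print(f"FB = {FB:.3f}, MG = {MG:.3f}, NMSE = {NMSE:.3f}")
```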

Keywords: CFD, plume dispersion, complex urban geometry, validation study, wind flow

Procedia PDF Downloads 135
1447 Forecasting Nokoué Lake Water Levels Using Long Short-Term Memory Network

Authors: Namwinwelbere Dabire, Eugene C. Ezin, Adandedji M. Firmin

Abstract:

The prediction of hydrological flows (rainfall-depth or rainfall-discharge relations) is becoming increasingly important in the management of hydrological risks such as floods. In this study, the Long Short-Term Memory (LSTM) network, a state-of-the-art algorithm dedicated to time series, is applied to predict the daily water level of Nokoué Lake in Benin. This paper aims to provide an effective and reliable method capable of reproducing future daily water levels of Nokoué Lake, which is influenced by a combination of two phenomena: rainfall and river inflow (runoff from the Ouémé River, the Sô River, the Porto-Novo lagoon, and the Atlantic Ocean). Performance analysis based on the forecasting horizon indicates that the LSTM can predict the water level of Nokoué Lake up to a forecast horizon of t+10 days. Performance metrics such as the root mean square error (RMSE), the coefficient of determination (R²), the Nash-Sutcliffe efficiency (NSE), and the mean absolute error (MAE) agree on a forecast horizon of up to t+3 days, with their values remaining stable for horizons of t+1, t+2, and t+3 days. The values of R² and NSE are greater than 0.97 during the training and testing phases in the Nokoué Lake basin. Based on these evaluation indices, a forecast horizon of t+3 days is chosen for predicting future daily water levels.
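
A minimal sketch of this kind of setup, not the authors' implementation: an LSTM maps a window of past daily levels to the level t+3 days ahead. The window length, network sizes, training schedule, and the synthetic series are all assumptions for illustration.

```python
import numpy as np
import torch
import torch.nn as nn

# Build (window -> t+3) supervised pairs from a synthetic daily series.
series = np.sin(np.linspace(0, 40, 1200)) + 0.1 * np.random.default_rng(0).normal(size=1200)
window, horizon = 30, 3
X = np.stack([series[i:i + window] for i in range(len(series) - window - horizon)])
y = series[window + horizon:]
X_t = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)   # (N, window, 1)
y_t = torch.tensor(y, dtype=torch.float32)

class LevelLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(1, 32, batch_first=True)
        self.head = nn.Linear(32, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)            # last hidden state -> level

model = LevelLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                                        # full-batch training, toy scale
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X_t), y_t)
    loss.backward()
    opt.step()

rmse = torch.sqrt(nn.functional.mse_loss(model(X_t), y_t)).item()
print("in-sample RMSE:", round(rmse, 4))
```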

Keywords: forecasting, long short-term memory cell, recurrent artificial neural network, Nokoué lake

Procedia PDF Downloads 64
1446 Acceleration-Based Motion Model for Visual Simultaneous Localization and Mapping

Authors: Daohong Yang, Xiang Zhang, Lei Li, Wanting Zhou

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) is a technology that uses visual information from the environment for self-positioning and mapping; it is widely used in computer vision, robotics, and other fields. Many visual SLAM systems, such as ORB-SLAM3, employ a constant-velocity motion model that provides the initial pose of the current frame to improve the speed and accuracy of feature matching. However, in practice, the constant-velocity assumption is often violated, which may lead to a large deviation between the predicted initial pose and the true value, and hence to errors in the nonlinear optimization results. This paper therefore proposes an acceleration-based motion model, which can be applied to most SLAM systems. To better describe the acceleration of the camera pose, we decouple the pose transformation matrix and handle the rotation matrix and the translation vector separately, with the rotation represented by a rotation vector. We assume that, over a short period of time, the rates of change of the angular velocity and of the translation vector remain constant, and estimate the initial pose of the current frame under this assumption; a sketch of this prediction step is given below. In addition, the error of the constant-velocity model is analyzed theoretically. Finally, we applied the proposed approach to the ORB-SLAM3 system and evaluated two sequences from the TUM dataset. The results show that the proposed method gives a more accurate initial pose estimate, and the accuracy of the ORB-SLAM3 system improved by 6.61% and 6.46% on the two test sequences.
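
A hedged sketch of the decoupled constant-acceleration prediction described above, under our reading of the abstract: rotation is handled as a rotation vector and translation as a vector, each extrapolated from the last three poses. The toy trajectory and extrapolation details are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def predict_next(T_prev2, T_prev1, T_curr):
    """Each T is an (R_matrix, t_vector) pair; returns the predicted next pose."""
    # rotation-vector increments between consecutive frames
    w1 = (R.from_matrix(T_prev1[0]) * R.from_matrix(T_prev2[0]).inv()).as_rotvec()
    w2 = (R.from_matrix(T_curr[0]) * R.from_matrix(T_prev1[0]).inv()).as_rotvec()
    w_pred = w2 + (w2 - w1)                 # constant angular acceleration
    v1 = T_prev1[1] - T_prev2[1]
    v2 = T_curr[1] - T_prev1[1]
    v_pred = v2 + (v2 - v1)                 # constant translational acceleration
    R_pred = (R.from_rotvec(w_pred) * R.from_matrix(T_curr[0])).as_matrix()
    return R_pred, T_curr[1] + v_pred

# toy trajectory: accelerating yaw and x-translation
poses = [(R.from_euler("z", a, degrees=True).as_matrix(), np.array([x, 0, 0.0]))
         for a, x in [(0, 0), (1, 0.1), (3, 0.3)]]
print(predict_next(*poses))
```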

Keywords: error estimation, constant acceleration motion model, pose estimation, visual SLAM

Procedia PDF Downloads 94
1445 The Impact of Introspective Models on Software Engineering

Authors: Rajneekant Bachan, Dhanush Vijay

Abstract:

The visualization of operating systems has refined the Turing machine, and current trends suggest that the emulation of 32 bit architectures will soon emerge. After years of technical research into Web services, we demonstrate the synthesis of gigabit switches, which embodies the robust principles of theory. Loam, our new algorithm for forward-error correction, is the solution to all of these challenges.

Keywords: software engineering, architectures, introspective models, operating systems

Procedia PDF Downloads 538
1444 The Per Capita Income, Energy Production and Environmental Degradation: A Comprehensive Assessment of the Existence of the Environmental Kuznets Curve Hypothesis in Bangladesh

Authors: Ashique Mahmud, MD. Ataul Gani Osmani, Shoria Sharmin

Abstract:

In the first quarter of the twenty-first century, environmental contamination is among the most substantial global concerns and has been prioritized by both national and international communities. With this crucial fact in mind, this study applied different statistical and econometric methods to identify whether the gross national income of the country has a significant impact on electricity production from nonrenewable sources and on air pollutants such as carbon dioxide, nitrous oxide, and methane emissions. The primary objective of this research was to analyze whether the environmental Kuznets curve hypothesis holds for the examined variables. After analyzing the statistical properties of the variables, the study concludes that the environmental Kuznets curve hypothesis holds for gross national income and carbon dioxide emissions in Bangladesh in both the short run and the long run (the quadratic specification behind this kind of test is sketched below). This conclusion is based on the findings of ordinary least squares estimation, ARDL bounds tests, short-run causality analysis, the error correction model, and the other pre- and post-diagnostic tests employed in the structural model. Moreover, the study argues that the trajectory of gross national income and carbon dioxide emissions is in its initial stage of development and will increase up to an optimal peak; the compositional effect will then force emissions to decrease, and environmental quality will be restored in the long run.
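
A minimal sketch of the standard EKC test implied here, with synthetic data (not the study's series): regress log CO2 emissions on log income and its square; an inverted-U holds when the squared term is negative, with the turning point at exp(-b1 / (2 b2)).

```python
import numpy as np
import statsmodels.api as sm

# Quadratic EKC specification: ln(CO2) = b0 + b1*ln(GNI) + b2*ln(GNI)^2.
rng = np.random.default_rng(0)
ln_gni = np.linspace(6, 9, 50)                                    # synthetic income path
ln_co2 = -40 + 11 * ln_gni - 0.7 * ln_gni**2 + rng.normal(scale=0.1, size=50)

X = sm.add_constant(np.column_stack([ln_gni, ln_gni**2]))
fit = sm.OLS(ln_co2, X).fit()
b0, b1, b2 = fit.params
print("EKC supported:", b2 < 0, "turning-point income:", round(np.exp(-b1 / (2 * b2)), 1))
```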

Keywords: environmental Kuznets curve hypothesis, carbon dioxide emission in Bangladesh, gross national income in Bangladesh, autoregressive distributed lag model, granger causality, error correction model

Procedia PDF Downloads 150
1443 Lamb Waves Wireless Communication in Healthy Plates Using Coherent Demodulation

Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad

Abstract:

Guided ultrasonic waves are used in non-destructive testing (NDT) and structural health monitoring (SHM) for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in industrial applications such as nuclear, aerospace, and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since these are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use ultrasonic guided waves, such as Lamb waves, as the information carrier, given their ability to propagate over long distances; in addition, valuable information about the health of the structure can be extracted simultaneously. In this work, the reliable frequency bandwidth for communication is first extracted experimentally from dispersion curves. An experimental platform for wireless communication using Lamb waves is then described and built. Next, the coherent demodulation algorithm used in telecommunications is tested for the amplitude shift keying (ASK), on-off keying (OOK), and binary phase shift keying (BPSK) modulation techniques; a sketch of the demodulation step is given below. Signal processing parameters such as the decision threshold, the number of cycles per bit, and the bit rate are optimized, and the experimental results are compared on the basis of the average bit error rate. The results show a high sensitivity to threshold selection for the ASK and OOK techniques, resulting in a bit-rate decrease, while the BPSK technique shows the highest stability and data rate among all tested modulation techniques.
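
A minimal sketch of coherent BPSK demodulation and BER estimation of the kind described: correlate each received bit interval with the in-phase carrier and decide on the sign. The carrier frequency, samples per bit, and noise level are assumed values, not the experimental settings.

```python
import numpy as np

# Coherent BPSK over an additive-noise channel: modulate, add noise,
# correlate per bit with the reference carrier, count bit errors.
rng = np.random.default_rng(1)
fc, fs, spb = 200e3, 2e6, 100          # carrier Hz, sample rate, samples per bit
bits = rng.integers(0, 2, 1000)
t = np.arange(spb) / fs
carrier = np.cos(2 * np.pi * fc * t)   # 10 carrier cycles per bit here

tx = np.concatenate([(2 * b - 1) * carrier for b in bits])
rx = tx + rng.normal(scale=0.8, size=tx.size)          # noisy channel

corr = rx.reshape(-1, spb) @ carrier   # per-bit correlation (coherent detection)
decided = (corr > 0).astype(int)
print("BER:", np.mean(decided != bits))
```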

Keywords: lamb waves communication, wireless communication, coherent demodulation, bit error rate

Procedia PDF Downloads 260
1442 Crystallization in the TeO2 - Ta2O5 - Bi2O3 System: From Glass to Anti-Glass to Transparent Ceramic

Authors: Hasnaa Benchorfi

Abstract:

Tellurite glasses exhibit interesting properties, notably a low melting point (700-900 °C), a high refractive index (≈2), high transparency in the infrared region (up to 5-6 μm), interesting linear and non-linear optical properties, and high rare-earth-ion solubility. These properties make tellurite glasses attractive for various optical applications. Transparent ceramics offer advantages over glasses, such as improved mechanical, thermal and optical properties, but their elaboration usually requires complex sintering conditions. The full crystallization of a glass into a transparent ceramic is an alternative that circumvents the technical challenges of conventional ceramic processing. In this work, a crystallization study of a specific glass composition in the TeO2-Ta2O5-Bi2O3 system shows structural transitions upon heating, from the glass, to the stabilization of a previously unreported anti-glass phase, to a transparent ceramic. An anti-glass is a material with cationic long-range order and a disordered anion sublattice; its X-ray diffraction patterns therefore show sharp peaks, while its Raman bands are broad and similar to those of the parent glass. The structure and microstructure of the anti-glass and of the corresponding ceramic were characterized by powder X-ray diffraction, electron backscatter diffraction, transmission electron microscopy, and Raman spectroscopy. The optical properties of the Er3+-doped samples are also discussed.

Keywords: glass, congruent crystallization, anti-glass, glass-ceramic, optics

Procedia PDF Downloads 79
1441 A Pilot Study to Investigate the Use of Machine Translation Post-Editing Training for Foreign Language Learning

Authors: Hong Zhang

Abstract:

The main purpose of this study is to show that machine translation (MT) post-editing (PE) training can help Chinese students learn Spanish as a second language. Our hypothesis is that they can make better use of MT by learning PE skills specific to foreign language learning. We developed PE training materials based on data collected in a previous study; the materials cover the characteristic error types of MT output and the error types that our Chinese students of Spanish failed to detect in the previous year's experiment. This year we performed a pilot study to evaluate the effectiveness of the PE training materials and the extent to which PE training helps Chinese students studying Spanish. We used screen recording to capture the sessions and noted every action taken by the students. Participants were speakers of Chinese with intermediate knowledge of Spanish, divided into two groups: Group A performed PE training and Group B did not. We prepared a Chinese text for both groups; participants first translated it themselves (human translation), then translated it with Google Translate and were asked to post-edit the raw MT output. Comparing the results of the PE test, Group A identified and corrected the errors faster than Group B, and did especially well on omission, word order, part of speech, terminology, mistranslation, official names, and formal register. These results suggest that PE training can help Chinese students learn Spanish as a second language. In future work, we plan to focus on the students' difficulties during their Spanish studies and to extend the PE training materials for teaching Spanish to Chinese students with machine translation.

Keywords: machine translation, post-editing, post-editing training, Chinese, Spanish, foreign language learning

Procedia PDF Downloads 144
1440 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea

Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng

Abstract:

During the last decades, research interest in the planning, scheduling, and control of emergency response operations, especially the rescue and evacuation of people from the danger zone of marine accidents, has increased dramatically. Until the survivors (called 'targets') are found and saved, losses may accrue whose extent depends on the location of the targets and the duration of the search. The problem is to search for and detect/rescue the targets as efficiently as possible with the help of intelligent mobile robots, so as to maximize the number of people saved and/or minimize the search cost, under restrictions on the number of people to be saved within the allowable response time. We consider the special situation in which the autonomous mobile robots (AMRs), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board and are guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model of the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that, in unknown environments, the AMR's search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, in which a target is not discovered ('overlooked') by the AMR's sensors even though the AMR is in its close neighborhood, and (ii) a 'false-positive' detection error, also known as a 'false alarm', in which a clean place or area is wrongly classified by the AMR's sensors as a target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding locally optimal strategies. A specificity of this operations research problem, in comparison with the traditional Kadane-De Groot-Stone search models, is that the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before the next location is selected. We provide a fast approximation algorithm for finding the AMR route, adopting a greedy search strategy in which, at each step, the on-board computer computes a current search effectiveness value for each location in the zone and searches the location with the highest value; a sketch is given below. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and the corresponding algorithm.
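
A hedged sketch of such a greedy strategy: the "search effectiveness" is taken here as the posterior presence probability times the detection probability per unit cost (one plausible choice, not the paper's exact definition), and the Bayesian update after an unsuccessful look uses the false-negative rate. All numbers are illustrative.

```python
import numpy as np

# Greedy search with history-dependent posteriors over candidate cells.
rng = np.random.default_rng(2)
n = 12
p = rng.dirichlet(np.ones(n))        # prior target-location probabilities
p_detect = 0.8                       # 1 - false-negative rate (assumed)
cost = rng.uniform(1, 3, n)          # travel/inspection cost per cell (assumed)

for step in range(8):
    score = p * p_detect / cost                  # current search effectiveness
    j = int(np.argmax(score))
    print(f"step {step}: search cell {j} (score {score[j]:.3f})")
    # unsuccessful look at j: Bayes-update the location posterior
    p[j] *= (1 - p_detect)
    p /= p.sum()
```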

Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea

Procedia PDF Downloads 172
1439 Design of the Compliant Mechanism of a Biomechanical Assistive Device for the Knee

Authors: Kevin Giraldo, Juan A. Gallego, Uriel Zapata, Fanny L. Casado

Abstract:

Compliant mechanisms are designed to deform in a controlled manner in response to external forces, using the flexibility of their components to store elastic potential energy during deformation and gradually release it upon returning to the original form. This article explores the design of a knee orthosis intended to assist users during the stand-up motion. The orthosis uses a compliant mechanism to balance the user's weight, thereby minimizing the strain on the leg muscles during stand-up motion. The primary function of the compliant mechanism is to store and exchange potential energy, so that, when coupled with the gravitational potential of the user, the total potential energy variation is minimized. The design process for the semi-rigid knee orthosis involved material selection and the development of a numerical model of the compliant mechanism treated as a spring; the geometric properties were obtained from this model once the desired stiffness and safety factor values were attained. Subsequently, a 3D finite element analysis was conducted. The study demonstrates a strong correlation between the maximum stress in the mathematical model (250.22 MPa) and in the simulation (239.8 MPa), a 4.16% difference; the arithmetic behind the quoted percentages is checked below. The safety factors are likewise consistent: 1.02 for the mathematical approach and 1.1 for the simulation, a 7.84% margin. The spring stiffness, calculated as 90.82 Nm/rad analytically and 85.71 Nm/rad in the simulation, exhibits a 5.62% difference. These results suggest significant potential for the proposed device in assisting patients with knee orthopedic restrictions, contributing to ongoing efforts to advance the understanding and treatment of knee osteoarthritis.
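
A quick check of the discrepancy figures quoted above, using the values given in the abstract (relative difference with the analytical value as reference):

```python
# Relative differences between analytical and simulated values from the text.
pairs = {
    "max stress (MPa)":   (250.22, 239.8),
    "safety factor":      (1.02, 1.1),
    "stiffness (Nm/rad)": (90.82, 85.71),
}
for name, (analytical, simulated) in pairs.items():
    err = abs(analytical - simulated) / analytical * 100
    print(f"{name}: {err:.2f}% difference")
# prints 4.16%, 7.84%, 5.63% (the text rounds the last to 5.62%)
```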

Keywords: biomechanics, compliant mechanisms, gonarthrosis, orthoses

Procedia PDF Downloads 36
1438 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes

Authors: Angela U. Makolo

Abstract:

Protein-coding and non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that non-coding regions are important in disease progression and clinical diagnosis, yet existing bioinformatics tools have targeted protein-coding regions alone, which creates challenges for gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both classes of region; alignment-free techniques can overcome this limitation. This study was therefore designed to develop an efficient, alignment-free model for identifying both protein-coding and non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function, and a parameter vector was estimated for every sample in the 37,503 data points to reduce the generalization error and cost. Maximum likelihood estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function; a sketch of this fitting step is given below. Dynamic thresholding was used to classify the protein-coding and non-coding regions, and the receiver operating characteristic (ROC) curve was determined. The generalization performance of PNRI was measured in terms of F1 score, accuracy, sensitivity, and specificity, and its average generalization performance was determined using a benchmark of multi-species organisms. The generalization error for identifying protein-coding and non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over three iterations. The cost (the difference between the predicted and the actual outcome) likewise decreased from 1.446 to 0.842 and then to 0.718 over the first, second, and third iterations. The iterations terminated at the 390th epoch, with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an area under the ROC curve of 0.97, indicating good predictive ability, and identified protein-coding and non-coding regions with an F1 score of 0.970, an accuracy of 0.969, a sensitivity of 0.966, and a specificity of 0.973. On 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying protein-coding and non-coding regions in transcriptomes. The developed model efficiently identifies protein-coding and non-coding transcriptomic regions and can be used in genome annotation and in the analysis of transcriptomes.
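
A minimal sketch of the classifier core named above, not the PNRI implementation itself: logistic regression over six features, fit by gradient ascent on the mean log-likelihood, then thresholded. The features, labels, learning rate, and the fixed 0.5 threshold (standing in for the paper's dynamic thresholding) are all assumptions.

```python
import numpy as np

# Logistic regression by MLE: sigmoid link, gradient ascent on the
# log-likelihood, simple thresholded predictions.
rng = np.random.default_rng(3)
n, d = 2000, 6
X = rng.normal(size=(n, d))
true_w = np.array([0.04, 0.52, 0.72, 0.88, 1.16, 2.58])   # echoes the reported scale
y = (1 / (1 + np.exp(-X @ true_w)) > rng.uniform(size=n)).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w = np.zeros(d)
for epoch in range(390):                    # the text reports 390 epochs
    p = sigmoid(X @ w)
    grad = X.T @ (y - p) / n                # gradient of the mean log-likelihood
    w += 0.5 * grad                         # assumed learning rate

pred = sigmoid(X @ w) > 0.5                 # fixed threshold as a stand-in
print("accuracy:", np.mean(pred == y), "weights:", np.round(w, 2))
```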

Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation

Procedia PDF Downloads 68
1437 Finite Element Modeling of Mass Transfer Phenomenon and Optimization of Process Parameters for Drying of Paddy in a Hybrid Solar Dryer

Authors: Aprajeeta Jha, Punyadarshini P. Tripathy

Abstract:

Drying technologies for food processing operations share an inevitable linkage with energy, cost, and environmental sustainability. Solar drying of food grains has therefore become an imperative choice to combat the dual challenges of meeting the high energy demand for drying and addressing climate change. However, the performance and reliability of solar dryers depend heavily on the sunshine period and climatic conditions; they therefore offer limited control over drying conditions and have lower efficiencies. Solar drying technology supported by a photovoltaic (PV) power plant and a hybrid-type solar air collector can potentially overcome these disadvantages. For the development of such robust hybrid dryers, optimization of the process parameters is extremely critical to ensure the quality and shelf life of the paddy grains, and investigation of the moisture distribution within the grains is necessary to avoid over-drying or under-drying. Computational simulations based on finite element modeling can serve as a powerful tool, providing better insight into moisture migration during drying. The present work therefore aims to optimize the process parameters and to develop a three-dimensional (3D) finite element model (FEM) for predicting the moisture profile in paddy during solar drying. COMSOL Multiphysics was employed to develop the 3D finite element model, and the process parameters (power level, air velocity, and moisture content) were optimized using response surface methodology in the Design-Expert software; the response-surface step is sketched below. The 3D FEM for predicting moisture migration in a single kernel at every time step was developed and validated against experimental data: the mean absolute error (MAE), mean relative error (MRE), and standard error (SE) were 0.003, 0.0531, and 0.0007, respectively, indicating close agreement between model and experiment. The optimized process parameters for drying paddy were 700 W power, 2.75 m/s air velocity, and 13% (wb) moisture content, with an optimum temperature of 42 °C, a milling yield of 62%, and a drying time of 86 min, at a desirability of 0.905. These optimized conditions can be used to dry paddy in the PV-integrated solar dryer to attain maximum uniformity, quality, and yield. PV-integrated hybrid solar dryers can thus serve as a potential, cutting-edge drying alternative for sustainable energy and food security.
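
A hedged sketch of the response-surface step: fit a quadratic model of one response (e.g., drying time) in two coded factors and locate its optimum. The factor coding, the quadratic form, and the data are illustrative assumptions, not the study's design matrix.

```python
import numpy as np
from scipy.optimize import minimize

# Quadratic response surface in two coded factors, fit by least squares,
# then minimized within the coded design region [-1, 1]^2.
rng = np.random.default_rng(4)
P, V = rng.uniform(-1, 1, (2, 30))                 # coded power and air-velocity levels
ytime = 90 + 8 * P**2 + 6 * V**2 - 3 * P - 2 * V + rng.normal(scale=0.5, size=30)

A = np.column_stack([np.ones(30), P, V, P * V, P**2, V**2])
coef, *_ = np.linalg.lstsq(A, ytime, rcond=None)

def model(x):
    p, v = x
    return coef @ np.array([1, p, v, p * v, p * p, v * v])

opt = minimize(model, x0=[0, 0], bounds=[(-1, 1), (-1, 1)])
print("coded optimum:", np.round(opt.x, 3), "predicted response:", round(opt.fun, 2))
```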

Keywords: finite element modeling, moisture migration, paddy grain, process optimization, PV integrated hybrid solar dryer

Procedia PDF Downloads 150
1436 Polydimethylsiloxane Applications in Interferometric Optical Fiber Sensors

Authors: Zeenat Parveen, Ashiq Hussain

Abstract:

This review paper covers applications of PDMS (polydimethylsiloxane) materials for enhancing the performance of optical fiber sensors in acousto-ultrasonic and mechanical measurements, current sensing, and interferometric optical fiber sensors. We discuss the basic working principle of fiber optic sensing technology, the various types of optical fiber, and PDMS as a coating material for increasing performance. Optical fiber sensing methods detect dynamic strain signals, including general sound and acoustic signals, high-frequency (ultrasonic) signals, and other signals such as acoustic emission and impact-induced dynamic strain. Optical fiber sensors have industrial and civil engineering applications in mechanical measurements, which sometimes require different sensor configurations and parameters. Optical fiber current sensors are based on the Faraday effect, which gives better performance than conventional current transformers. Recent advances and cost reductions have stimulated interest in optical fiber sensing, and optical techniques are also implemented in material measurement. Fiber optic interferometers are used to sense various physical parameters, including temperature, pressure, and refractive index; there are four main types: Fabry-Pérot, Mach-Zehnder, Michelson, and Sagnac. This paper also describes future directions for fiber optic sensors.

Keywords: fiber optic sensing, PDMS materials, acoustic, ultrasound, current sensor, mechanical measurements

Procedia PDF Downloads 388
1435 Effect of Extraction Methods on the Fatty Acids and Physicochemical Properties of Serendipity Berry Seed Oil

Authors: Olufunmilola A. Abiodun, Adegbola O. Dauda, Ayobami Ojo, Samson A. Oyeyinka

Abstract:

Serendipity berry (Dioscoreophyllum cumminsii Diels) is a tropical dioecious rainforest vine native to tropical Africa. The vine grows during the rainy season and is used mainly as a sweetener. The sweetener in the berry, known as monellin, is sweeter than sucrose. The sweetener is extracted from the fruits and the seed is discarded; the discarded seeds contain bitter principles but have a high oil yield. Serendipity oil was extracted using three methods (n-hexane, expression, and expression/n-hexane), and the fatty acids and physicochemical properties of the oils obtained were determined. The oil obtained was clear and liquid and had an odour similar to that of a hydrocarbon. The oil yields were 38.59, 12.34, and 49.57% for the hexane, expression, and expression-hexane methods, respectively. The seed thus contained a high percentage of oil, especially when a combination of expression and hexane was used, while only a low percentage of oil was obtained by expression alone. The refractive indices were 1.443, 1.442, and 1.478 for the hexane, expression, and expression-hexane methods, respectively. The peroxide value obtained for expression-hexane was higher than those for hexane and expression. The viscosities of the oils were 125.8, 128.76, and 126.87 cm³/s for the hexane, expression, and expression-hexane methods, respectively, showing that the oil from the expression method was more viscous than the other oils. The major fatty acids in serendipity seed oil were, in decreasing order, oleic acid (62.81%), linoleic acid (22.65%), linolenic acid (6.11%), palmitic acid (5.67%), and stearic acid (2.21%). Oleic acid, a monounsaturated fatty acid, had the highest value. Total unsaturated fatty acids were 91.574, 92.256, and 90.426% for hexane, expression, and expression-hexane, respectively. The combination of expression and hexane thus produced a high yield of serendipity oil, which could be refined for food and non-food applications.
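
As a quick arithmetic check on the reported composition, the following sketch sums the unsaturated fatty acids of the hexane-extracted oil (62.81 + 22.65 + 6.11 = 91.57%), consistent with the reported total of 91.574%; the saturation labels are for illustration only.

```python
# Reported fatty acid composition (%) of the hexane-extracted oil.
composition = {
    "oleic (C18:1, monounsaturated)":     62.81,
    "linoleic (C18:2, polyunsaturated)":  22.65,
    "linolenic (C18:3, polyunsaturated)":  6.11,
    "palmitic (C16:0, saturated)":         5.67,
    "stearic (C18:0, saturated)":          2.21,
}

# "monounsaturated" and "polyunsaturated" both contain "unsaturated",
# while plain "saturated" does not, so the filter below is safe here.
unsaturated = sum(v for k, v in composition.items() if "unsaturated" in k)
print(f"total unsaturated: {unsaturated:.2f}%")  # 91.57%, ~ reported 91.574%
```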

Keywords: serendipity seed oil, expression method, fatty acid, hexane

Procedia PDF Downloads 273
1434 Theory of the Optimum Signal Approximation Clarifying the Importance in the Recognition of Parallel World and Application to Secure Signal Communication with Feedback

Authors: Takuro Kida, Yuichi Kida

Abstract:

In this paper, we mathematically present the basis of a new class of algorithms that treats a historical source of continued discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. Given a matrix operator filter bank in which the matrix operator analysis filter bank H and the matrix operator sampling filter bank S are specified, we first introduce a detailed algorithm for deriving the optimum matrix operator synthesis filter bank Z that simultaneously minimizes all worst-case measures of the matrix operator error signals E(ω) = F(ω) − Y(ω) between the matrix operator input signals F(ω) and the matrix operator output signals Y(ω) of the filter bank. Feedback is then introduced into this approximation theory, and it is shown that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge in signal prediction. Secondly, the mathematical concept of a category is applied to the above optimum signal approximation, and it is shown that the category-based approximation theory applies to a set-theoretic consideration of human recognition. Based on this discussion, it is shown naturally why narrow perception, which tends to create isolation, shows an apparent advantage in the short term, and why such narrow thinking often becomes intimate with discriminatory action in a human group. Throughout these considerations, it is argued that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception in which we share the set of invisible error signals, including the words and the consciousness of both worlds.
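
The abstract does not give the closed form of the optimum synthesis bank Z; as a rough numerical stand-in, the sketch below chooses Z(ω) per frequency as the Moore-Penrose pseudo-inverse of S(ω)H(ω), which minimizes the reconstruction error under an energy-bounded input assumption. This is an illustrative substitute, not the authors' derivation.

```python
import numpy as np

# Rough illustration only: for each frequency bin w, given analysis and
# sampling operators H(w) and S(w), take the synthesis operator Z(w) as
# the Moore-Penrose pseudo-inverse of S(w) @ H(w).  This minimizes the
# reconstruction error ||F - Y|| with Y = Z S H F for energy-bounded
# inputs; it stands in for, and does not reproduce, the authors' method.
rng = np.random.default_rng(0)
n_freqs, n, m = 8, 4, 6   # frequency bins, signal dim, measurement dim

for w in range(n_freqs):
    H = rng.standard_normal((m, n))   # analysis filter bank at this bin
    S = np.eye(m)                     # trivial sampling operator (assumed)
    Z = np.linalg.pinv(S @ H)         # synthesis bank minimizing the error
    F = rng.standard_normal(n)        # input signal vector F(w)
    Y = Z @ S @ H @ F                 # reconstructed output Y(w)
    print(f"bin {w}: reconstruction error = {np.linalg.norm(F - Y):.2e}")
```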

Keywords: matrix filterbank, optimum signal approximation, category theory, simultaneous minimization

Procedia PDF Downloads 143
1433 Application of Particle Swarm Optimization to Thermal Sensor Placement for Smart Grid

Authors: Hung-Shuo Wu, Huan-Chieh Chiu, Xiang-Yao Zheng, Yu-Cheng Yang, Chien-Hao Wang, Jen-Cheng Wang, Chwan-Lu Tseng, Joe-Air Jiang

Abstract:

Dynamic Thermal Rating (DTR) provides crucial information by estimating the ampacity of transmission lines to improve power dispatching efficiency. To perform DTR, it is necessary to install on-line thermal sensors to monitor conductor temperature and weather variables. A simple and intuitive strategy is to allocate a thermal sensor to every span of the transmission lines, but the cost of the sensors might be too high to bear. To deal with the cost issue, a thermal sensor placement problem must be solved. This research proposes and implements a hybrid algorithm that combines proper orthogonal decomposition (POD) with particle swarm optimization (PSO). The proposed hybrid algorithm solves a multi-objective optimization problem that includes both the minimum number of sensors and the minimum error on conductor temperature, so that the optimal sensor placement is determined simultaneously. Data for 345 kV transmission lines and hourly weather data, from the Taiwan Power Company and the Central Weather Bureau (CWB) respectively, are used by the proposed method. The simulated results indicate that the number of sensors can be reduced using the optimal placement method proposed in this study while achieving an acceptable error on conductor temperature. This study provides power companies with a reliable reference for efficiently monitoring and managing their power grids.
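
A compact sketch of the general idea, combining POD modes extracted from a snapshot matrix with a binary-style PSO that trades off sensor count against reconstruction error, is given below; the synthetic temperature field, penalty weight, and PSO hyperparameters are all assumptions for illustration, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic snapshot matrix: temperatures at n_spans locations over n_times
# hours.  A real application would use measured conductor temperatures.
n_spans, n_times = 30, 200
x = np.linspace(0, 1, n_spans)[:, None]
t = np.linspace(0, 1, n_times)[None, :]
T = (np.sin(2*np.pi*x) * np.cos(2*np.pi*t)
     + 0.5*np.cos(4*np.pi*x) * np.sin(6*np.pi*t)
     + 0.01*rng.standard_normal((n_spans, n_times)))

U, s, _ = np.linalg.svd(T, full_matrices=False)
Phi = U[:, :4]   # leading POD modes

def cost(mask):
    """Reconstruction error from sensed spans plus a sensor-count penalty."""
    idx = np.flatnonzero(mask)
    if len(idx) < Phi.shape[1]:
        return np.inf   # too few sensors to identify the modal coefficients
    coeffs, *_ = np.linalg.lstsq(Phi[idx], T[idx], rcond=None)
    err = np.linalg.norm(T - Phi @ coeffs) / np.linalg.norm(T)
    return err + 0.02 * len(idx)   # the penalty weight 0.02 is arbitrary

# Binary-style PSO: particle positions in [0,1] are thresholded at 0.5.
n_particles, n_iters = 20, 60
pos = rng.random((n_particles, n_spans))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p > 0.5) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7*vel + 1.5*r1*(pbest - pos) + 1.5*r2*(gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    costs = np.array([cost(p > 0.5) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

mask = gbest > 0.5
print(f"sensors: {int(mask.sum())}/{n_spans}, cost = {cost(mask):.4f}")
```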

Keywords: dynamic thermal rating, proper orthogonal decomposition, particle swarm optimization, sensor placement, smart grid

Procedia PDF Downloads 432
1432 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify the minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters and these sub-clusters contain different numbers of examples, also deteriorates the performance of a classifier. Many methods have previously been proposed for handling the imbalanced dataset problem; they can be classified into four categories: data preprocessing, algorithm-based methods, cost-based methods, and classifier ensembles. Data preprocessing techniques have shown great potential, as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class exhibits absolute rarity, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for binary classification problems. Removing both imbalances simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the sub-clusters, or sub-concepts, present in the dataset; the number of examples oversampled within each sub-cluster is determined based on the complexity of the sub-cluster. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used as the classifier, since it minimizes the total error, and removing between-class and within-class imbalance simultaneously helps it give equal weight to all the sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. The proposed method can thus serve as a good alternative for handling imbalanced data sets in problem domains such as credit scoring, customer churn prediction, and financial distress prediction.
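
A minimal sketch of the model-based-clustering step is given below: it fits a Gaussian mixture to the minority class, picks the number of sub-clusters by BIC, and oversamples each sub-cluster from its fitted Gaussian, allocating more synthetic points to smaller sub-clusters. The allocation rule and BIC selection are illustrative assumptions; the paper's own method additionally uses sub-cluster complexity and the Lowner-John ellipsoid, which are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Synthetic imbalanced binary data (class 1 is the minority).
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1],
                           n_informative=3, random_state=42)
X_min, X_maj = X[y == 1], X[y == 0]

# Model-based clustering: pick the number of minority sub-clusters by BIC.
gmms = [GaussianMixture(k, random_state=0).fit(X_min) for k in range(1, 5)]
gmm = min(gmms, key=lambda g: g.bic(X_min))
labels = gmm.predict(X_min)

deficit = len(X_maj) - len(X_min)   # synthetic points needed to balance
sizes = np.bincount(labels, minlength=gmm.n_components)
# Allocate inversely to sub-cluster size so small sub-concepts catch up.
alloc = 1 / np.maximum(sizes, 1)
alloc = np.round(deficit * alloc / alloc.sum()).astype(int)

# Draw synthetic examples from each sub-cluster's fitted Gaussian.
synthetic = [rng.multivariate_normal(gmm.means_[k], gmm.covariances_[k], n)
             for k, n in enumerate(alloc) if n > 0]
X_new = np.vstack([X_min] + synthetic)
print(f"minority class: {len(X_min)} -> {len(X_new)} examples "
      f"across {gmm.n_components} sub-clusters")
```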

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 418