Search results for: uncorrected refractive error
921 SVM-Based Modeling of Mass Transfer Potential of Multiple Plunging Jets
Authors: Surinder Deswal, Mahesh Pal
Abstract:
The paper investigates the potential of a support vector machine based regression approach to model the mass transfer capacity of multiple plunging jets, both vertical (θ = 90°) and inclined (θ = 60°). The data set used in this study consists of four input parameters and a total of eighty-eight cases. Tenfold cross-validation was used for testing. Correlation coefficient values of 0.971 and 0.981 (root mean square error values of 0.0025 and 0.0020) were achieved using polynomial and radial basis kernel based support vector regression, respectively. The results suggest an improved performance by the radial basis function in comparison to the polynomial kernel based support vector machine. The overall mass transfer coefficient estimated by both kernel functions is in good agreement with the actual experimental values (within a scatter of ±15%), thereby suggesting the utility of the support vector machine based regression approach.
Keywords: mass transfer, multiple plunging jets, support vector machines, ecological sciences
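As an illustration of the modelling approach described above, the following is a minimal sketch of kernel support vector regression with tenfold cross-validation; the feature array X (four input parameters) and target y (mass transfer coefficient) are hypothetical placeholders, not the authors' data, and the hyperparameters are assumed values.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict

# Hypothetical data: 88 cases, 4 input parameters (placeholders for the real measurements)
rng = np.random.default_rng(0)
X = rng.random((88, 4))
y = rng.random(88) * 0.05  # placeholder mass transfer coefficients

# Radial basis function kernel SVR; a polynomial kernel would use kernel="poly"
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.001))

# Tenfold cross-validated predictions
y_pred = cross_val_predict(model, X, y, cv=10)

rmse = np.sqrt(np.mean((y - y_pred) ** 2))
corr = np.corrcoef(y, y_pred)[0, 1]
print(f"RMSE = {rmse:.4f}, correlation coefficient = {corr:.3f}")
```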
Procedia PDF Downloads 464
920 Indigenous Patch Clamp Technique: Design of Highly Sensitive Amplifier Circuit for Measuring and Monitoring of Real Time Ultra Low Ionic Current through Cellular Gates
Authors: Moez ul Hassan, Bushra Noman, Sarmad Hameed, Shahab Mehmood, Asma Bashir
Abstract:
The importance of the Nobel Prize-winning “Patch Clamp Technique” is well documented. However, the patch clamp technique is very expensive, which hinders research in developing countries. In this paper, the detection, processing, and recording of ultra-low current from induced cells using a transimpedance amplifier is described. The sensitivity of the proposed amplifier is in the range of femtoamperes (fA). Capacitive feedback is used with an active load to obtain a 20 MΩ transimpedance gain. The challenging tasks in the design include achieving adequate performance in gain, noise immunity, and stability. The circuit designed by the authors was able to measure current in the range of 300 fA to 100 pA. The amplifier showed adequate performance with different input currents, and the results were found to be within the acceptable error range. Results were recorded using LabVIEW 8.5® for further research.
Keywords: drug discovery, ionic current, operational amplifier, patch clamp
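For orientation, the sketch below shows the ideal first-order relationship V_out = I_in × R_f implied by the 20 MΩ transimpedance gain quoted above, evaluated across the quoted measurement range; it ignores noise, bandwidth, and the capacitive-feedback dynamics of the actual circuit.

```python
# Ideal transimpedance relationship: V_out = I_in * R_f (neglecting noise and dynamics)
R_f = 20e6  # transimpedance gain, 20 MOhm

for i_in in (300e-15, 1e-12, 100e-12):  # 300 fA to 100 pA, the quoted measurement range
    v_out = i_in * R_f
    print(f"I_in = {i_in:.1e} A  ->  V_out = {v_out:.2e} V")
```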
Procedia PDF Downloads 519
919 Experimental Assessment of Micromechanical Models for Mechanical Properties of Recycled Short Fiber Composites
Authors: Mohammad S. Rouhi, Magdalena Juntikka
Abstract:
The processing of polymer fiber composites has a remarkable influence on their mechanical performance, and these properties are influenced even more when recycled reinforcement is used. Therefore, particular attention is placed on the evaluation of micromechanical models to estimate the mechanical properties and compare them against the experimental results of the manufactured composites. For the manufacturing process, an epoxy matrix and carbon fiber production cut-offs as reinforcing material are combined using a vacuum infusion process. In addition, continuous textile reinforcement in combination with the epoxy matrix is used as a reference material to evaluate the drop in mechanical performance of the recycled composite. The experimental results show less degradation of the composite stiffness than of the strength properties. Observations from the modeling show the same trend, as the error between the theoretical and experimental results is lower for the stiffness comparisons than for the strength calculations. Still, good mechanical performance for specific applications can be expected from these materials.
Keywords: composite recycling, carbon fibers, mechanical properties, micromechanics
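The abstract does not name the specific micromechanical models that were evaluated; as a simple illustration of the class of estimate involved, the sketch below applies the rule of mixtures for longitudinal stiffness and the inverse rule of mixtures for the transverse direction, with hypothetical fiber and matrix properties rather than the authors' material data.

```python
# Illustrative micromechanical estimates (rule of mixtures), not the authors' specific models.
E_f, E_m = 230.0, 3.0   # hypothetical carbon fiber and epoxy matrix moduli, GPa
V_f = 0.4               # hypothetical fiber volume fraction

E_longitudinal = V_f * E_f + (1 - V_f) * E_m         # Voigt (rule of mixtures)
E_transverse = 1.0 / (V_f / E_f + (1 - V_f) / E_m)   # Reuss (inverse rule of mixtures)

print(f"E_L ~ {E_longitudinal:.1f} GPa, E_T ~ {E_transverse:.1f} GPa")
```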
Procedia PDF Downloads 161
918 Subband Coding and Glottal Closure Instant (GCI) Using SEDREAMS Algorithm
Authors: Harisudha Kuresan, Dhanalakshmi Samiappan, T. Rama Rao
Abstract:
In modern telecommunication applications, locating Glottal Closure Instants (GCIs) is important, and they are evaluated directly from the speech waveform. Here, we study GCI detection using the Speech Event Detection using the Residual Excitation And a Mean-based Signal (SEDREAMS) algorithm. Speech coding uses parameter estimation based on audio signal processing techniques to model the speech signal, combined with generic data compression algorithms to represent the resulting model in a compact bit stream. This paper proposes a sub-band coder (SBC), which is a type of transform coding, and its performance for GCI detection using SEDREAMS is evaluated. In an SBC, the speech signal is divided into two or more frequency bands, and each sub-band signal is coded individually. After being processed, the sub-bands are recombined to form the output signal, whose bandwidth covers the whole frequency spectrum. The signal is decomposed into low- and high-frequency components, and decimation and interpolation in the frequency domain are performed. The proposed structure significantly reduces error, and precise locations of GCIs are found using the SEDREAMS algorithm.
Keywords: SEDREAMS, GCI, SBC, GOI
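As a minimal illustration of the two-band splitting and decimation step described above (not the authors' coder), the following sketch filters a signal into low- and high-frequency halves and downsamples each band by two; the sampling rate, filter type, and cutoff are assumptions.

```python
import numpy as np
from scipy import signal

fs = 16000                         # assumed sampling rate, Hz
t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 200 * t) + 0.3 * np.sin(2 * np.pi * 5000 * t)  # toy speech-like signal

# Half-band low-pass and high-pass filters (Butterworth, an assumed choice)
b_lo, a_lo = signal.butter(8, 0.5)                   # cutoff at half Nyquist
b_hi, a_hi = signal.butter(8, 0.5, btype="highpass")

low_band = signal.filtfilt(b_lo, a_lo, x)[::2]       # filter, then decimate by 2
high_band = signal.filtfilt(b_hi, a_hi, x)[::2]

print(low_band.shape, high_band.shape)               # each sub-band at half the original rate
```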
Procedia PDF Downloads 356
917 Application of Refractometric Methodology for Simultaneous Determination of Alcohol and Residual Sugar Concentrations during Alcoholic Fermentation Bioprocess of Date Juice
Authors: Boukhiar Aissa, Halladj Fatima, Iguergaziz Nadia, Lamrani Yasmina, Benamara Salem
Abstract:
Determining the alcohol content in an alcoholic fermentation bioprocess is of great importance; in fact, it is a key indicator for monitoring this bioprocess. Several methodologies (chemical, spectrophotometric, chromatographic) are used to determine this parameter. However, these techniques are time-consuming and require rigorous preparation, sometimes dangerous chemical reagents, and/or expensive equipment. In the present study, date juice is used as the substrate of alcoholic fermentation. The extracted juice undergoes an alcoholic fermentation by Saccharomyces cerevisiae. The study of the possible use of refractometry as the sole means for in situ control of alcoholic fermentation revealed a good correlation (R² = 0.98) between initial and final °Brix: °Brix_f = 0.377 × °Brix_i. In addition, the relationship between Δ°Brix and the alcohol content of the final product (A, %) has been determined: Δ°Brix/A = 1.1. The obtained results allowed us to establish iso-response charts (abacuses), which can be used for the determination of alcohol and residual sugar content with a mean relative error (MRE) of 5.35%.
Keywords: alcoholic fermentation, date juice, refractometry, residual sugar
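Taken at face value, the two empirical relations above give a direct estimate of alcohol content from refractometer readings alone; the sketch below applies them to a hypothetical initial reading of 20 °Brix (the numbers are illustrative, not the authors' data, and the relations hold only for this date-juice process).

```python
# Empirical relations reported in the abstract (specific to this date-juice fermentation):
#   final Brix:      Brix_f = 0.377 * Brix_i
#   alcohol content: A (%)  = delta_Brix / 1.1

brix_initial = 20.0                      # hypothetical refractometer reading before fermentation
brix_final = 0.377 * brix_initial        # expected reading at the end of fermentation
delta_brix = brix_initial - brix_final
alcohol_percent = delta_brix / 1.1

print(f"Brix_f ~ {brix_final:.1f} °Brix, dBrix ~ {delta_brix:.1f}, A ~ {alcohol_percent:.1f} %")
```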
Procedia PDF Downloads 341
916 Deflection Effect on Mirror for Space Applications
Authors: Maamar Fatouma
Abstract:
A mirror can experience varying levels of stress within its mounting tolerances, which can have a notable impact on the performance of optical systems. To ensure a proper optical figure and a mirror mounting position within design tolerances, it is crucial to have a robust support structure in place for the optical system. The optical figure tolerance determines the allowable deviation from the ideal form of the mirror, and the position tolerance determines the location and orientation of the optical axis of the system. A variety of factors influence the optical figure of the mirror, including self-weight (deflection), excitation from temperature changes, temperature gradients, and dimensional instability. This study employs an analytical approach and the finite element method to examine the effects of the stress resulting from mirror mounting on the wavefront passing through the mirror. The combined effect of tolerance and deflection on mirror performance is represented by an error budget. A numerical mirror mounting example is presented to illustrate these performance techniques for space applications.
Keywords: opto-mechanical, bonded optic, tolerance, self-weight distortion, Rayleigh criteria
Procedia PDF Downloads 89
915 R Software for Parameter Estimation of Spatio-Temporal Model
Authors: Budi Nurani Ruchjana, Atje Setiawan Abdullah, I. Gede Nyoman Mindra Jaya, Eddy Hermawan
Abstract:
In this paper, we propose an application package to estimate the parameters of a spatiotemporal model based on multivariate time series analysis using the open-source R software. We build packages mainly to estimate the parameters of the Generalized Space Time Autoregressive (GSTAR) model. GSTAR is a combination of time series and spatial models whose parameters vary by location. We use the Ordinary Least Squares (OLS) method and the Mean Absolute Percentage Error (MAPE) to fit the model to real spatiotemporal phenomena. As case studies, we use oil production data from the volcanic layer at Jatibarang, Indonesia, and climate data such as rainfall in Indonesia. R is very user-friendly; it makes calculation easier, and data processing is accurate and fast. A limitation is that the R script built for estimating the parameters of the spatiotemporal GSTAR model is still restricted to stationary time series models. Therefore, the R program under Windows can be developed further for both theoretical studies and applications.
Keywords: GSTAR Model, MAPE, OLS method, oil production, R software
Procedia PDF Downloads 242
914 Application of Association Rule Using Apriori Algorithm for Analysis of Industrial Accidents in 2013-2014 in Indonesia
Authors: Triano Nurhikmat
Abstract:
Along with the progress of science and technology, the industrialization of Indonesia has taken place very rapidly, with the establishment of diverse companies and workplaces. This industrial development relates to the activities of workers, and these work activities do not exclude the possibility of an accident affecting either the workers or a construction project. The causes of industrial accidents include electrical damage, faulty work procedures, and technical errors. The association rule method is one of the main techniques in data mining and is the form most commonly used to find patterns in data collections. This research aims to determine the associative relations between the incidences of industrial accidents. Using association rule analysis, the combination patterns obtained from the two-iteration itemsets (2-large itemsets) relating the industrial accident factors to West Jakarta show that industrial accidents caused by electrical damage have a support value of 0.2 and a confidence value of 1, while the reverse pattern has a support value of 0.2 and a confidence of 0.75.
Keywords: association rule, data mining, industrial accidents, rules
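The support and confidence figures quoted above follow the standard association rule definitions; the sketch below computes them for a small hypothetical set of accident records (the records, and therefore the printed values, are illustrative only and do not reproduce the study's data).

```python
# Hypothetical accident records, each a set of attributes (illustrative only)
records = [
    {"west_jakarta", "electrical_damage"},
    {"west_jakarta", "work_procedure"},
    {"east_jakarta", "electrical_damage"},
    {"west_jakarta", "electrical_damage"},
    {"east_jakarta", "technical_error"},
]

def support(itemset):
    """Fraction of records containing every item in the itemset."""
    return sum(itemset <= r for r in records) / len(records)

def confidence(antecedent, consequent):
    """support(antecedent union consequent) / support(antecedent)."""
    return support(antecedent | consequent) / support(antecedent)

rule = ({"electrical_damage"}, {"west_jakarta"})
print("support:", support(rule[0] | rule[1]))
print("confidence:", confidence(*rule))
```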
Procedia PDF Downloads 299
913 Power Supply Feedback Regulation Loop Design Using Cadence PSpice Tool: Determining Converter Stability by Simulation
Authors: Debabrata Das
Abstract:
This paper explains how to design a regulation loop for a power supply circuit. It also discusses the need for a regulation loop and the improvement a regulation loop brings to a circuit. A sample design is used to demonstrate how to use PSpice to design a feedback loop that controls the output voltage of a power supply and how to check whether the power supply is stable or oscillatory. The sample design is made using a specific Integrated Circuit (IC) available in the PSpice library. A designer can experiment with feedback loop design using the Cadence PSpice tool, which is easy to use, reliable, and convenient. To test a feedback loop, engineers generally use a trial-and-error method with hardware, which takes a lot of time and manpower; it is also expensive because components and the Printed Circuit Board (PCB) may be damaged. PSpice can be used by designers to test their loop designs without hardware circuits, saving time, cost, and manpower, and allowing a power supply circuit to be simulated accurately before real hardware is built.
Keywords: power electronics, feedback loop, regulation, stability, pole, zero, oscillation
Procedia PDF Downloads 346
912 Modification of Underwood's Equation to Calculate Minimum Reflux Ratio for Column with One Side Stream Upper Than Feed
Authors: S. Mousavian, A. Abedianpour, A. Khanmohammadi, S. Hematian, Gh. Eidi Veisi
Abstract:
Distillation is one of the most important and widely utilized separation methods in industrial practice. There are different ways to design a distillation column; one of these is the short-cut method, in which material balances and equilibrium relations are employed to calculate the number of trays in the column. Several methods are classified as short-cut methods, one of which is the Fenske-Underwood-Gilliland method. In this method, the minimum reflux ratio is calculated by the Underwood equation. Underwood proposed an equation that is useful for a simple distillation column with one feed and one top and one bottom product. In this study, the Underwood method is extended to predict the minimum reflux ratio for a column with one side stream above the feed. The results of this model are compared with the McCabe-Thiele method, and the comparison shows that the proposed method is able to calculate the minimum reflux ratio with a very small error.
Keywords: minimum reflux ratio, side stream, distillation, Underwood's method
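For reference, the classical (unmodified) Underwood procedure for a simple column is sketched below for a hypothetical ternary mixture; the side-stream modification proposed in the paper is not reproduced here, and the relative volatilities, compositions, and feed quality are assumed values.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical ternary system: relative volatilities, feed and distillate mole fractions
alpha = np.array([2.5, 1.5, 1.0])     # light, light key, heavy key (assumed)
z_F   = np.array([0.3, 0.4, 0.3])     # feed composition (assumed)
x_D   = np.array([0.42, 0.55, 0.03])  # distillate composition (assumed)
q = 1.0                               # saturated liquid feed

# First Underwood equation: sum(alpha*z/(alpha - theta)) = 1 - q,
# with the root theta lying between the key volatilities (here 1.0 < theta < 1.5).
f = lambda theta: np.sum(alpha * z_F / (alpha - theta)) - (1 - q)
theta = brentq(f, 1.0 + 1e-6, 1.5 - 1e-6)

# Second Underwood equation: Rmin + 1 = sum(alpha*x_D/(alpha - theta))
R_min = np.sum(alpha * x_D / (alpha - theta)) - 1.0
print(f"theta = {theta:.3f}, minimum reflux ratio R_min = {R_min:.2f}")
```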
Procedia PDF Downloads 406
911 Estimation of Synchronous Machine Synchronizing and Damping Torque Coefficients
Authors: Khaled M. EL-Naggar
Abstract:
The synchronizing and damping torque coefficients of a synchronous machine can give a clear picture of machine behavior during transients, and these coefficients are used as a measure of power system transient stability. In this paper, a crow search optimization algorithm is presented and implemented to study power system stability during transients. The algorithm makes use of the machine responses to perform the stability study in the time domain, and the problem is formulated as a dynamic estimation problem. An objective function that minimizes the squared error in the estimated coefficients is designed. The method is tested using a practical system with different study cases; results are reported and thoroughly discussed. The study illustrates that the proposed method can estimate the stability coefficients for critical stable cases where other methods may fail. The tests proved that the proposed tool is accurate and reliable for estimating the machine coefficients for the assessment of power system stability.
Keywords: optimization, estimation, synchronous, machine, crow search
Procedia PDF Downloads 140
910 Blind Super-Resolution Reconstruction Based on PSF Estimation
Authors: Osama A. Omer, Amal Hamed
Abstract:
Successful blind image super-resolution algorithms require exact estimation of the Point Spread Function (PSF). In the absence of any prior information about the imaging system and the true image, this estimation is normally done by trial-and-error experimentation until an acceptable restored image quality is obtained. Multi-frame blind super-resolution algorithms often suffer from slow convergence and sensitivity to complex noise. This paper presents a super-resolution image reconstruction algorithm based on estimation of the PSF that yields the optimum restored image quality. The PSF is estimated by the knife-edge method, implemented by measuring the spreading of the edges in the reproduced HR image itself during the reconstruction process. The proposed image reconstruction approach uses L1-norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. A series of experimental results shows that the proposed method can outperform previous work robustly and efficiently.
Keywords: blind, PSF, super-resolution, knife-edge, blurring, bilateral, L1 norm
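A minimal sketch of the idea behind knife-edge PSF estimation is given below: a profile taken across a sharp edge (the edge spread function) is differentiated to obtain the line spread function, whose width characterizes the blur. The synthetic edge and Gaussian blur used here are assumptions for illustration, not the authors' data or their full reconstruction method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Synthetic edge profile blurred by an (unknown, here Gaussian) PSF
x = np.arange(200)
edge = (x > 100).astype(float)               # ideal step edge
esf = gaussian_filter1d(edge, sigma=3.0)     # measured edge spread function (ESF)

# Line spread function = derivative of the ESF; its width characterizes the PSF
lsf = np.gradient(esf)
fwhm = np.count_nonzero(lsf >= 0.5 * lsf.max())  # crude full-width-at-half-maximum estimate
print("estimated LSF FWHM (pixels):", fwhm)
```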
Procedia PDF Downloads 365
909 The Application of Maintenance Strategy in Energy Power Plant: A Case Study
Authors: Steven Vusmuzi Mashego, Opeyeolu Timothy Laseinde
Abstract:
This paper presents a case study on the application of maintenance strategies to a turbo-generator at a coal power plant. Turbo-generators are among the primary and critical components in energy generation, so it is essential to apply correct maintenance strategies and to follow operational procedures accordingly. The maintenance strategies are implemented to ensure the high reliability of the equipment. The study was carried out at a coal power station which will transition to a cleaner energy source in the near future. The study is relevant because the lessons learned in this system will support the plans and operational models implemented when cleaner energy sources replace coal-powered turbines. This paper first outlines the different maintenance strategies executed on the turbo-generator modules. Secondly, the impacts of human factors on a coal power station are discussed, and the findings prompted recommendations for future actions.
Keywords: maintenance strategies, turbo generator, operational error, human factor, electricity generation
Procedia PDF Downloads 112
908 Identification of Shocks from Unconventional Monetary Policy Measures
Authors: Margarita Grushanina
Abstract:
After several prominent central banks, including the European Central Bank (ECB), the Federal Reserve System (Fed), the Bank of Japan, and the Bank of England, employed unconventional monetary policies in the aftermath of the 2008-2009 financial crisis, the problem of identifying the effects of such policies became of great interest. One of the main difficulties in identifying shocks from unconventional monetary policy measures in structural VAR analysis is that they are often anticipated, which leads to a non-fundamental MA representation of the VAR model. Moreover, unconventional monetary policy actions may indirectly transmit to markets information about the future stance of the interest rate, which raises a question about the plausibility of the assumption of orthogonality between shocks from unconventional and conventional policy measures. This paper offers a method of identification that takes the abovementioned issues into account. The author uses factor-augmented VARs to increase the information set, together with identification through heteroskedasticity of the error terms and rank restrictions on the errors' second-moment matrix to deal with the cross-correlation of the structural shocks.
Keywords: factor-augmented VARs, identification through heteroskedasticity, monetary policy, structural VARs
Procedia PDF Downloads 348
907 Oil Reservoir Asphalting Precipitation Estimating during CO2 Injection
Authors: I. Alhajri, G. Zahedi, R. Alazmi, A. Akbari
Abstract:
In this paper, an Artificial Neural Network (ANN) was developed to predict Asphaltene Precipitation (AP) during the injection of carbon dioxide into crude oil reservoirs. Experimental data from six different oil fields were collected. Seventy percent of the data was used to develop the ANN model, and different ANN architectures were examined. A network with the trainlm (Levenberg-Marquardt) training algorithm was found to be the best network to estimate the AP. To check the validity of the proposed model, it was used to predict the AP for the remaining thirty percent of the data, which had not been used in training. The Mean Square Error (MSE) of the prediction was 0.0018, which confirms the excellent prediction capability of the proposed model. In the second part of this study, the ANN model predictions were compared with modified Hirschberg model predictions, and the ANN was found to provide more accurate estimates than the modified Hirschberg model. Finally, the proposed model was employed to examine the effect of different operating parameters during gas injection on the AP. It was found that the AP is most sensitive to the reservoir temperature, and that increasing the carbon dioxide concentration in the liquid phase increases the AP.
Keywords: artificial neural network, asphaltene, CO2 injection, Hirschberg model, oil reservoirs
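A minimal sketch of the kind of workflow described above (70/30 split, small feed-forward network, MSE evaluation) is shown below; the input features and targets are random placeholders, and the optimizer differs from the original work, which used MATLAB's trainlm (Levenberg-Marquardt), an option scikit-learn does not provide.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Hypothetical inputs (e.g. pressure, temperature, CO2 concentration, oil composition)
rng = np.random.default_rng(1)
X = rng.random((200, 4))
y = rng.random(200)          # placeholder asphaltene precipitation values

# 70/30 split as described in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=1)

# Small feed-forward network; 'lbfgs' is used here since trainlm is not available
ann = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=5000, random_state=1)
ann.fit(X_tr, y_tr)

mse = mean_squared_error(y_te, ann.predict(X_te))
print(f"test MSE = {mse:.4f}")
```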
Procedia PDF Downloads 364
906 Simulation of Kinetic Friction in L-Bending of Sheet Metals
Authors: Maziar Ramezani, Thomas Neitzert, Timotius Pasang
Abstract:
This paper presents an experimental and numerical investigation of the springback behavior of sheet metals during the L-bending process, with emphasis on Stribeck-type friction modeling. The coefficient of friction in the Stribeck curve depends on sliding velocity and contact pressure. The springback behavior of mild steel and aluminum alloy 6022-T4 sheets was studied experimentally and using numerical simulations in ABAQUS with two types of friction model: Coulomb friction and Stribeck friction. The influence of forming speed on springback behavior was studied experimentally and numerically. The results showed that the Stribeck-type friction model gives better results in predicting springback in sheet metal forming. The FE prediction errors for mild steel and 6022-T4 AA are 23.8% and 25.5%, respectively, using the Coulomb friction model, and 11% and 13%, respectively, using the Stribeck friction model. These results show that the Stribeck model is suitable for the simulation of sheet metal forming, especially at higher forming speeds.
Keywords: friction, L-bending, springback, Stribeck curves
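As an illustration of what a Stribeck-type law looks like, the sketch below uses one common exponential form in which the friction coefficient decays from its static toward its kinetic value as sliding velocity grows; this is not necessarily the exact model implemented by the authors, the parameter values are illustrative, and the contact-pressure dependence mentioned in the abstract is not reproduced.

```python
import numpy as np

def stribeck_mu(v, mu_s=0.15, mu_k=0.08, v_s=0.01, p=2.0):
    """One common Stribeck-type form: exponential decay from static to kinetic
    friction with sliding speed v (m/s). Parameter values are illustrative."""
    return mu_k + (mu_s - mu_k) * np.exp(-((np.abs(v) / v_s) ** p))

for v in (0.001, 0.01, 0.1, 1.0):
    print(f"v = {v:>6} m/s  ->  mu = {stribeck_mu(v):.3f}")
```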
Procedia PDF Downloads 491
905 The Functions of “Question” and Its Role in Education Process: Quranic Approach
Authors: Sara Tusian, Zahra Salehi Motaahed, Narges Sajjadie, Nikoo Dialame
Abstract:
One of the methods frequently used in the Quran is the "question". In the Quran, in addition to the content, the methods are also important. Using an analysis-interpretation method, the present study has investigated Quranic questions and extracted their functions from an educational perspective. In doing so, it first investigated all the questions in the Quran and then, taking the three-stage classification of education into account, identified the functions of questions. The results of this study suggest that question functions in the Quran fall into three categories: the preparation stage (including preparing the audience, revising insights, and internal evolution); the main body (including granting insight, eliminating intellectual negligence, questioning innate and logical axioms, introducing the realm of thinking, creating emotional arousal, and challenging claims); and the third stage of modification and revision (including inviting the learner to move within the framework of tasks, using individual beliefs to reveal contradictions, error detection, and contributing to change). Each of these has a special role in the education process.
Keywords: education, question, Quranic questions, Quran
Procedia PDF Downloads 503
904 Engineering Topology of Photonic Systems for Sustainable Molecular Structure: Autopoiesis Systems
Authors: Moustafa Osman Mohammed
Abstract:
This paper introduces topological order in described social systems, starting from the original concept of autopoiesis developed by biologists and scientists, including the modification of general systems based on socialized medicine. Topological order is important in describing physical systems for exploiting optical systems and improving photonic devices. The states of topological order have interesting properties, such as topological degeneracy and fractional statistics, that reveal the entanglement origin of topological order. Topological ideas in photonics have led to exciting developments in solid-state materials that are insulating in the bulk but conduct electricity on their surface without dissipation or back-scattering, even in the presence of large impurities. A specific type of autopoiesis system is interrelated with the main categories among existing groups of ecological phenomena at the interaction of the social and medical sciences. The hypothesis, nevertheless, involves a nonlinear interaction with the natural environment, an 'interactional cycle' for exchanging photon energy with molecules without changes in topology. The engineering topology of a biosensor is based on the excitation boundary of surface electromagnetic waves in photonic band gap multilayer films. The device operation is similar to that of surface plasmon biosensors, in which a photonic band gap film replaces the metal film as the medium in which surface electromagnetic waves are excited. The use of a photonic band gap film offers a sharper surface wave resonance, leading to the potential for greatly enhanced sensitivity. The properties of the photonic band gap material are therefore engineered to operate a sensor at any wavelength and to support a surface wave resonance extending up to 470 nm, a wavelength not generally accessible with surface plasmon sensing. Lastly, the photonic band gap films have robust mechanical functions that offer new substrates for surface chemistry, making it possible to understand molecular design structures and to create sensing chip surfaces with different concentrations of DNA sequences in solution, in order to observe and track the surface mode resonance under the influence of processes taking place in the spectroscopic environment. These processes have led to the development of several advanced analytical technologies that are automated, real-time, reliable, reproducible, and cost-effective. This results in faster and more accurate monitoring and detection of biomolecules based on refractive index sensing, such as antibody-antigen reactions and DNA or protein binding. Ultimately, the molecular frictional properties are adjusted to each other in order to form the unique spatial structure and dynamics of biological molecules, providing a mutual contribution of the environment to the investigation of changes due to the pathogenic archival architecture of cell clusters.
Keywords: autopoiesis, photonics systems, quantum topology, molecular structure, biosensing
Procedia PDF Downloads 94
903 Fracture Pressure Predict Based on Well Logs of Depleted Reservoir in Southern Iraqi Oilfield
Authors: Raed H. Allawi
Abstract:
Formation pressure is the most critical parameter in hydrocarbon exploration and exploitation. Specifically, predicting abnormal pressures (high formation pressures) and subnormal pressure zones can provide valuable information to minimize uncertainty regarding anticipated drilling challenges and risks. This study aims to interpret and delineate the pore and fracture pressure of the Mishrif reservoir in a southern Iraq oilfield. The data required to implement this study included acoustic compressional wave, gamma-ray, and bulk density logs, as well as drilling events. Furthermore, supporting these models requires pore pressure measurements from the Modular Formation Dynamics Tester (MDT). Many measured pore pressure values were used to validate the model. Using sonic velocity approaches, the mean absolute percentage error (MAPE) was about 4%. The fracture pressure results were consistent with the measured data, the actual drilling reports, and the drilling events. The model's results will be a guide for successful drilling in future wells in the same oilfield.
Keywords: pore pressure, fracture pressure, overburden pressure, effective stress, drilling events
Procedia PDF Downloads 83
902 Representativity Based Wasserstein Active Regression
Authors: Benjamin Bobbia, Matthias Picard
Abstract:
In recent years, active learning methodologies based on the representativity of the data have seemed more promising for limiting overfitting. The presented query methodology for regression uses the Wasserstein distance to measure the representativity of the labelled dataset compared to the global distribution. In this work, crucial use is made of GroupSort neural networks, which bring a double advantage: the Wasserstein distance can be exactly expressed in terms of such networks, and one can provide explicit bounds for their size and depth together with rates of convergence. The heterogeneity of the dataset is also taken into account by weighting the Wasserstein distance with the approximation error at the previous step of active learning. Such an approach leads to a reduction of overfitting and high prediction performance after a few query steps. After detailing the methodology and algorithm, an empirical study is presented to investigate the range of the hyperparameters. The performance of this method is compared, in terms of the number of queries needed, with other classical and recent query methods on several UCI datasets.
Keywords: active learning, Lipschitz regularization, neural networks, optimal transport, regression
Procedia PDF Downloads 80
901 Comparative Study of Different Enhancement Techniques for Computed Tomography Images
Authors: C. G. Jinimole, A. Harsha
Abstract:
One of the key problems in the analysis of Computed Tomography (CT) images is their poor contrast. Image enhancement can be used to improve the visual clarity and quality of the images or to provide a better representation for further processing. Contrast enhancement is one of the accepted methods for image enhancement in various applications in the medical field, helping to visualize and extract details of brain infarctions, tumors, and cancers from the CT image. This paper presents a comparative study of five contrast enhancement techniques suitable for CT images: Power Law Transformation, Logarithmic Transformation, Histogram Equalization, Contrast Stretching, and Laplacian Transformation. All of these techniques are compared with each other to find out which provides better contrast in the CT image. For the comparison, the Peak Signal to Noise Ratio (PSNR) and the Mean Square Error (MSE) are used. Logarithmic Transformation provided the clearest, best-quality image compared to all other techniques studied and achieved the highest PSNR value. The comparison points to the better approach for future research, especially for mapping abnormalities in CT images resulting from brain injuries.
Keywords: computed tomography, enhancement techniques, increasing contrast, PSNR and MSE
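A minimal sketch of one of the techniques compared above (logarithmic transformation) together with the MSE/PSNR figures of merit is given below; the random array stands in for a CT slice, and the scaling constant is an assumed choice.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)   # placeholder for an 8-bit CT slice

# Logarithmic transformation: s = c * log(1 + r), scaled back to [0, 255]
c = 255.0 / np.log(1.0 + img.max())
enhanced = c * np.log1p(img)

# Figures of merit used in the comparison
mse = np.mean((img - enhanced) ** 2)
psnr = 10.0 * np.log10(255.0 ** 2 / mse)
print(f"MSE = {mse:.2f}, PSNR = {psnr:.2f} dB")
```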
Procedia PDF Downloads 314
900 Optimal Configuration for Polarimetric Surface Plasmon Resonance Sensors
Authors: Ibrahim Watad, Ibrahim Abdulhalim
Abstract:
Conventional spectroscopic surface plasmon resonance (SPR) sensors are widely used in fundamental research, environmental monitoring, and healthcare diagnostics. However, they still lack a low limit of detection (LOD), and there is still room for improvement. Conventional SPR sensors are based on the detection of a dip in the reflectivity spectrum, which is relatively wide. To improve the performance of these sensors, many techniques and methods have been proposed, either to reduce the width of the dip or to increase the sensitivity. In addition, profiting from the sharp jump in the phase spectrum under SPR, several works have suggested extracting the phase of the reflected wave. However, existing phase measurement setups are in general more complicated than conventional setups, require more stability, and are very sensitive to external vibrations and noise. In this study, a simple polarimetric technique for phase extraction under SPR is presented, followed by a theoretical error analysis and an experimental verification. The advantages of the proposed technique over existing techniques are elaborated, together with conclusions regarding the best polarimetric function and its corresponding optimal range of metal layer thicknesses to use in the conventional Kretschmann-Raether configuration.
Keywords: plasmonics, polarimetry, thin films, optical sensors
Procedia PDF Downloads 404
899 Cross Coupling Sliding Mode Synchronization Control of Dual-Driving Feed System
Authors: Hong Lu, Wei Fan, Yongquan Zhang, Junbo Zhang
Abstract:
A cross coupling sliding mode synchronization control strategy is proposed for a dual-driving feed system. This technique minimizes position error oscillation and achieves precise synchronization performance in high-speed, high-precision drive systems, especially high-speed and high-precision machines. Moreover, a cross coupling compensation matrix is provided to offset the mismatched disturbance, and a disturbance observer is established to eliminate the chattering phenomenon. Performance comparisons of the proposed dual-driving cross coupling sliding mode control (CCSMC), a normal cross coupling control (CCC) strategy with PID control, and an electronic virtual main shaft control (EVMSC) strategy with SMC are investigated by simulation on a dual-driving control system; the results show the effectiveness of the proposed control scheme.
Keywords: cross coupling matrix, dual motors, synchronization control, sliding mode control
Procedia PDF Downloads 365
898 Relative Navigation with Laser-Based Intermittent Measurement for Formation Flying Satellites
Authors: Jongwoo Lee, Dae-Eun Kang, Sang-Young Park
Abstract:
This study presents a precise relative navigation method for satellites flying in formation using laser-based intermittent measurement data. The measurement data for the relative navigation between two satellites consist of a relative distance measured by a laser instrument and relative attitude angles obtained from attitude determination. The relative navigation solutions are estimated by both the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF). The solutions estimated by the EKF may become inaccurate or even diverge as the measurement outage time grows longer, because the EKF relies on a linearization approach. However, this study shows that the UKF with appropriate scaling parameters provides stable and accurate relative navigation solutions despite long measurement outage times and large initial errors, compared to the relative navigation solutions of the EKF. Various navigation results have been analyzed by adjusting the scaling parameters of the UKF.
Keywords: satellite relative navigation, laser-based measurement, intermittent measurement, unscented Kalman filter
Procedia PDF Downloads 357
897 Profit-Based Artificial Neural Network (ANN) Trained by Migrating Birds Optimization: A Case Study in Credit Card Fraud Detection
Authors: Ashkan Zakaryazad, Ekrem Duman
Abstract:
A typical classification technique ranks the instances in a data set according to the likelihood of belonging to one (positive) class. A credit card (CC) fraud detection model ranks the transactions in terms of the probability of being fraudulent. This approach is often criticized because firms do not care about fraud probability but about the profitability or costliness of detecting a fraudulent transaction. The key contribution of this study is to focus on profit maximization in the model building step: the artificial neural network proposed here is trained on the basis of profit maximization instead of minimizing the prediction error. Moreover, some studies have shown that the backpropagation algorithm, like other gradient-based algorithms, usually gets trapped in local optima, and swarm-based algorithms are more successful in this respect. In this study, we train our profit-maximization ANN using Migrating Birds Optimization (MBO), which was recently introduced to the literature.
Keywords: neural network, profit-based neural network, sum of squared errors (SSE), MBO, gradient descent
Procedia PDF Downloads 475
896 Instability Index Method and Logistic Regression to Assess Landslide Susceptibility in County Route 89, Taiwan
Authors: Y. H. Wu, Ji-Yuan Lin, Yu-Ming Liou
Abstract:
This study aims to produce a landslide susceptibility map of County Route 89 in Ren-Ai Township, Nantou County, using the Instability Index Method and logistic regression. Seven susceptibility factors, including slope angle, aspect, elevation, distance to fold, distance to river, distance to road, and accumulated rainfall, were obtained by GIS based on the Typhoon Toraji landslide area identified by the Industrial Technology Research Institute in 2001. The landslide percentage of each factor is calculated to acquire the weights and grade the grid by means of the Instability Index Method. Landslide susceptibility is classified into four grades (high, medium high, medium low, and low) in order to determine the advantages and disadvantages of the two models. The precision of the models is verified by a classification error matrix and the SRC curve. The results suggest that the logistic regression model is preferable to the instability index for the assessment of landslide susceptibility and is suitable for landslide prediction and precaution in this area in the future.
Keywords: instability index method, logistic regression, landslide susceptibility, SRC curve
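A minimal sketch of the logistic regression half of the comparison is given below; the grid-cell features and labels are random placeholders, and the probability cut-offs used to bin cells into four susceptibility grades are assumptions rather than the thresholds used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical grid cells described by the seven factors named in the abstract
# (slope angle, aspect, elevation, distance to fold/river/road, accumulated rainfall)
rng = np.random.default_rng(0)
X = rng.random((500, 7))
y = rng.integers(0, 2, size=500)     # placeholder landslide / no-landslide labels

model = LogisticRegression(max_iter=1000).fit(X, y)

# Susceptibility = predicted landslide probability, binned into four grades
prob = model.predict_proba(X)[:, 1]
grades = np.digitize(prob, [0.25, 0.5, 0.75])   # 0 = low ... 3 = high (assumed cut-offs)
print(np.bincount(grades, minlength=4))
```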
Procedia PDF Downloads 292
895 Waveguiding in an InAs Quantum Dots Nanomaterial for Scintillation Applications
Authors: Katherine Dropiewski, Michael Yakimov, Vadim Tokranov, Allan Minns, Pavel Murat, Serge Oktyabrsky
Abstract:
InAs Quantum Dots (QDs) in a GaAs matrix are a well-documented luminescent material with high light yield, as well as thermal and ionizing radiation tolerance due to quantum confinement. These benefits can be leveraged for high-efficiency, room temperature scintillation detectors. The proposed scintillator is composed of InAs QDs acting as luminescence centers in a GaAs stopping medium, which also acts as a waveguide. This system has appealing potential properties, including high light yield (~240,000 photons/MeV) and fast capture of photoelectrons (2-5 ps), orders of magnitude better than currently used inorganic scintillators such as LYSO or BaF2. The high refractive index of the GaAs matrix (n = 3.4) ensures that light emitted by the QDs is waveguided and can be collected by an integrated photodiode (PD). Scintillation structures were grown using Molecular Beam Epitaxy (MBE) and consist of thick GaAs waveguiding layers with embedded sheets of modulation p-type doped InAs QDs. An AlAs sacrificial layer is grown between the waveguide and the GaAs substrate for epitaxial lift-off, to separate the scintillator film and transfer it to a low-index substrate for waveguiding measurements. One consideration when using a low-density material like GaAs (~5.32 g/cm³) as a stopping medium is the matrix thickness in the dimension of radiation collection. Therefore, the luminescence properties of very thick (4-20 micron) waveguides with up to 100 QD layers were studied. The optimization of the medium included QD shape, density, doping, and AlGaAs barriers at the waveguide surfaces to prevent non-radiative recombination. To characterize the efficiency of QD luminescence, photoluminescence (PL) was measured over the 77-450 K temperature range and fitted using a kinetic model. The PL intensity degrades by only 40% at RT, with an activation energy for electron escape from the QDs to the barrier of ~60 meV. Attenuation within the waveguide (WG) is a limiting factor for the lateral size of a scintillation detector, so PL spectroscopy in the waveguiding configuration was studied. Spectra were measured while the laser (630 nm) excitation point was scanned away from the collecting fiber coupled to the edge of the WG. The QD ground-state PL peak at 1.04 eV (1190 nm) was inhomogeneously broadened with a FWHM of 28 meV (33 nm) and showed a distinct red-shift due to self-absorption in the QDs. Attenuation stabilized after traveling over 1 mm through the WG, at about 3 cm⁻¹. Finally, a scintillator sample was used to test detection and evaluate timing characteristics using 5.5 MeV alpha particles. With a 2D waveguide and a small area of integrated PD, the collected charge averaged 8.4 × 10⁴ electrons, corresponding to a collection efficiency of about 7%. The scintillation response had 80 ps noise-limited time resolution and a QD decay time of 0.6 ns. The data confirm the unique properties of this scintillation detector, which can potentially be much faster than any currently used inorganic scintillator.
Keywords: GaAs, InAs, molecular beam epitaxy, quantum dots, III-V semiconductor
Procedia PDF Downloads 256
894 Propagation of Ultra-High Energy Cosmic Rays through Extragalactic Magnetic Fields: An Exploratory Study of the Distance Amplification from Rectilinear Propagation
Authors: Rubens P. Costa, Marcelo A. Leigui de Oliveira
Abstract:
The comprehension of features in the energy spectra, the chemical compositions, and the origins of Ultra-High Energy Cosmic Rays (UHECRs), mainly atomic nuclei with energies above ~1.0 EeV (exa-electron volts), is intrinsically linked to the problem of determining the magnitude of their deflections in cosmic magnetic fields on cosmological scales. In addition, as they propagate from the source to the observer, modifications are expected in their original energy spectra, anisotropy, and chemical compositions due to interactions with low-energy photons and matter. This means that any consistent interpretation of the nature and origin of UHECRs has to include detailed knowledge of their propagation in a three-dimensional environment, taking into account the magnetic deflections and energy losses. The parameter space for the magnetic fields in the universe is very large because the field strengths, and especially their orientations, have large uncertainties. In particular, the strength and morphology of the Extragalactic Magnetic Fields (EGMFs) remain largely unknown because of the intrinsic difficulty of observing them. Monte Carlo simulations of charged particles traveling through a simulated magnetized universe are the straightforward way to study the influence of extragalactic magnetic fields on UHECR propagation. However, this brings two major difficulties: an accurate numerical modeling of charged-particle diffusion in magnetic fields, and an accurate numerical modeling of the magnetized universe. Since magnetic fields do not cause energy losses, it is important to require that the particle tracking method conserves the particle's total energy and that energy changes result only from interactions with background photons. Hence, special attention should be paid to computational effects. Additionally, because of the number of particles necessary to obtain a relevant statistical sample, the particle tracking method must be computationally efficient. In this work, we present an analysis of the propagation of ultra-high energy charged particles in the intergalactic medium. The EGMFs are considered to be coherent within cells of 1 Mpc (megaparsec) diameter, wherein they have uniform intensities of 1 nG (nanogauss). Moreover, each cell has its field orientation chosen randomly, and a border region is defined such that, at distances beyond 95% of the cell radius from the cell center, smooth transitions are applied in order to avoid discontinuities. The smooth transitions are simulated by weighting the magnetic field orientation by the particle's distance to the two nearby cells. The energy losses have been treated in the continuous approximation, parameterizing the mean energy loss per unit path length by the energy loss length. We show, for a particle with a typical energy of interest, the performance of the integration method in terms of the relative error of the Larmor radius (without energy losses) and the relative error of the energy. Additionally, we plot the distance amplification from rectilinear propagation as a function of the traveled distance, of the particle's magnetic rigidity (without energy losses), and of the particle's energy (with energy losses), to study the influence of the particle species on these calculations. The results clearly show when it is necessary to use a full three-dimensional simulation.
Keywords: cosmic rays propagation, extragalactic magnetic fields, magnetic deflections, ultra-high energy
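As a quick order-of-magnitude check on the deflection scales involved, the sketch below computes the Larmor radius r_L = E / (Z e c B) of an ultra-relativistic nucleus in the 1 nG field assumed in the study; the particle energy and charge are illustrative values.

```python
# Larmor radius of an ultra-relativistic nucleus: r_L = E / (Z * e * c * B)
e = 1.602176634e-19      # elementary charge, C
c = 2.99792458e8         # speed of light, m/s
Mpc = 3.0857e22          # metres per megaparsec

E_eV = 1.0e18            # 1 EeV (illustrative)
Z = 1                    # proton (illustrative)
B = 1.0e-13              # 1 nG expressed in tesla

r_larmor = (E_eV * e) / (Z * e * c * B)     # energy converted from eV to joules
print(f"r_L ~ {r_larmor / Mpc:.2f} Mpc")    # about 1.1 Mpc for a 1 EeV proton in 1 nG
```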
Procedia PDF Downloads 127
893 Low Density Parity Check Codes
Authors: Kassoul Ilyes
Abstract:
The field of error-correcting codes has been revolutionized by the introduction of iteratively decoded codes. Among these, LDPC codes are now a preferred solution thanks to their remarkable performance and low complexity. The binary version of LDPC codes showed even better performance, although its decoding introduced greater complexity. This thesis studies the performance of binary LDPC codes using simplified weighted decisions. Information is transported between a transmitter and a receiver by digital transmission systems, either by propagating over a radio channel or by using a transmission medium such as a transmission line; the purpose of the transmission system is to carry the information from the transmitter to the receiver as reliably as possible. These codes initially did not generate enough interest within the coding theory community, and this neglect lasted until the introduction of Turbo codes and the iterative principle. It was then proposed to adopt Pearl's Belief Propagation (BP) algorithm for decoding these codes. Subsequently, Luby introduced irregular LDPC codes characterized by a parity check matrix. Finally, we study simplifications of binary LDPC codes and propose a method that makes the exact calculation of the APP simpler, which in turn simplifies the implementation of the system.
Keywords: LDPC, parity check matrix, 5G, BER, SNR
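For readers unfamiliar with the role of the parity check matrix mentioned above, the sketch below shows the basic validity test for a codeword: the syndrome H·cᵀ must be zero modulo 2. The small matrix and codeword are toy values, not a code from the thesis, and no iterative decoding is performed here.

```python
import numpy as np

# Toy parity check matrix H (rows = parity constraints) and a candidate codeword c.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
c = np.array([1, 0, 1, 1, 1, 0])

# A word is a valid codeword iff every parity check is satisfied: H @ c = 0 (mod 2).
syndrome = H @ c % 2
print("syndrome:", syndrome, "-> valid codeword" if not syndrome.any() else "-> errors detected")
```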
Procedia PDF Downloads 154
892 Optimization of Production Scheduling through the Lean and Simulation Integration in Automotive Company
Authors: Guilherme Gorgulho, Carlos Roberto Camello Lima
Abstract:
Due to the competitive market in which companies are currently engaged, constant changes require companies to react quickly to variability in demand and processes. The changes are caused by customers, by demand fluctuations or product variations, or by the need to serve customers within agreed delivery terms, taking into account the continuous search for quality and competitive product prices. These changes end up directly or indirectly influencing the activities of Planning and Production Control (PPC), which operates at the strategic, tactical, and operational levels of production systems. One area of concern for organizations is the short term (operational level), because at this planning stage any error or divergence will cause waste and affect the on-time delivery of products to customers. Thus, this study aims to optimize the efficiency of production scheduling using different sequencing strategies in an automotive company. To achieve the proposed objective, computer simulation was used in conjunction with lean manufacturing to build and validate the current model, followed by the creation of future scenarios.
Keywords: computational simulation, lean manufacturing, production scheduling, sequencing strategies
Procedia PDF Downloads 271