Search results for: optimization methods
4255 Optimized Calculation of Hourly Price Forward Curve (HPFC)
Authors: Ahmed Abdolkhalig
Abstract:
This paper examines several mathematical methods for modeling the hourly price forward curve (HPFC). The model is constructed using numerous regression methods, such as polynomial regression, radial basis function neural networks, and Fourier series. The goodness of fit of the models is examined by means of statistical and graphical tools. The criterion for choosing the model is minimization of the Root Mean Squared Error (RMSE); using the correlation analysis approach for the regression analysis, the optimal model is identified, which is robust against model misspecification. Learning and supervision techniques are employed to determine the optimal parameters corresponding to each measure of overall loss. Using all the numerical methods mentioned above, explicit expressions for the optimal model are derived and the optimal designs are implemented.
Keywords: Forward curve, Fourier series, regression, radial basis function neural networks.
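To make the RMSE-based selection criterion concrete, the following Python sketch fits polynomial regressions of increasing degree to a hypothetical hourly price series and keeps the degree with the lowest held-out RMSE; the data, degree range, and train/test split are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical hourly prices over one week (168 hours): daily cycle plus noise.
rng = np.random.default_rng(0)
hours = np.arange(168)
x = hours / 24.0  # rescale to days to keep the polynomial fits well conditioned
prices = 40 + 5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)

# Hold out every other hour so RMSE measures fit quality, not overfitting.
train, test = hours % 2 == 0, hours % 2 == 1

def rmse(y, y_hat):
    """Root Mean Squared Error between observations and fitted values."""
    return np.sqrt(np.mean((y - y_hat) ** 2))

# Fit polynomial regressions of increasing degree; keep the RMSE-minimizing one.
best_degree, best_rmse = None, np.inf
for degree in range(1, 12):
    coeffs = np.polyfit(x[train], prices[train], degree)
    err = rmse(prices[test], np.polyval(coeffs, x[test]))
    if err < best_rmse:
        best_degree, best_rmse = degree, err

print(f"Selected degree {best_degree} with held-out RMSE {best_rmse:.3f}")
```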
4254 Collaborative Planning and Forecasting
Authors: Neha Asthana, Vishal Krishna Prasad
Abstract:
Collaborative Planning and Forecasting is an innovative and systematic approach towards the productive integration and assimilation of data synthesized into information. Changing and variable market dynamics have persuaded global business chains to adopt Collaborative Planning and Forecasting as an imperative tool. It is therefore essential for supply chains to constantly improve, update their nature, and mould themselves to the changing global environment.
Keywords: Information transfer, Forecasting, Optimization.
4253 Groundwater Quality Improvement by Using Aeration and Filtration Methods
Authors: Nik N. Nik Daud, Nur H. Izehar, B. Yusuf, Thamer A. Mohamed, A. Ahsan
Abstract:
An experiment was conducted using two aeration methods (water-into-air and air-into-water) followed by filtration through manganese greensand. Properties of the groundwater, such as pH, dissolved oxygen, turbidity, and heavy-metal concentrations (iron and manganese), were assessed. The objectives of this study were (i) to determine the more effective aeration method and (ii) to assess the effectiveness of manganese greensand as a filter medium for removing iron and manganese from groundwater. Results showed that the final pH of all samples after treatment ranged from 7.40 to 8.40. Both aeration methods increased the dissolved oxygen content. Final turbidity of the groundwater samples was between 3 NTU and 29 NTU. Only three out of eight samples achieved an iron concentration of 0.3 mg/L or less, while all samples reached a manganese concentration of 0.1 mg/L or less. The air-into-water aeration method gave a higher percentage of iron and manganese removal than the water-into-air method.
Keywords: Aeration, filtration, groundwater, water quality.
4252 Validation and Selection between Machine Learning Technique and Traditional Methods to Reduce Bullwhip Effects: a Data Mining Approach
Authors: Hamid R. S. Mojaveri, Seyed S. Mousavi, Mojtaba Heydar, Ahmad Aminian
Abstract:
The aim of this paper is to present a three-step methodology for forecasting supply chain demand. In the first step, various data mining techniques are applied to prepare the data for entry into the forecasting models. In the second step, the modeling step, an artificial neural network and a support vector machine are presented, after defining the Mean Absolute Percentage Error (MAPE) index for measuring error. The structure of the artificial neural network is selected based on previous researchers' results, and in this article the accuracy of the network is increased by using sensitivity analysis. The best forecast from the classical forecasting methods (Moving Average, Exponential Smoothing, and Exponential Smoothing with Trend) is obtained from the prepared data and compared with the results of the support vector machine and the proposed artificial neural network. The results show that the artificial neural network forecasts more precisely than the other methods. Finally, the stability of the forecasting methods is analyzed using raw data, and the effectiveness of the clustering analysis is measured.
Keywords: Artificial Neural Networks (ANN), bullwhip effect, demand forecasting, Support Vector Machine (SVM).
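As a small, self-contained illustration of the classical baselines and the MAPE error index named in the abstract (the demand series and smoothing constant below are invented for illustration):

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100 * np.mean(np.abs((actual - forecast) / actual))

def moving_average(series, window=3):
    """Forecast each point as the mean of the preceding `window` observations."""
    s = np.asarray(series, float)
    return np.array([s[max(0, t - window):t].mean() for t in range(1, len(s))])

def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing; the forecast for t is the level at t-1."""
    s = np.asarray(series, float)
    level, forecasts = s[0], []
    for x in s[:-1]:
        level = alpha * x + (1 - alpha) * level
        forecasts.append(level)
    return np.array(forecasts)

demand = [102, 98, 110, 105, 120, 118, 125, 130, 128, 140]  # hypothetical demand
print("MA  MAPE:", round(mape(demand[1:], moving_average(demand)), 2))
print("SES MAPE:", round(mape(demand[1:], exponential_smoothing(demand)), 2))
```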
4251 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models
Authors: I. V. Pinto, M. R. Sooriyarachchi
Abstract:
Data arising in many fields frequently have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset; therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary-response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes, and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. The power of the test was adequate for most of the combinations in all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset, and the performance of the test was compared for each model.
Keywords: Goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, type-I error, penalized quasi-likelihood, power, quasi-likelihood.
4250 Developing a Viral Artifact to Improve Employees’ Security Behavior
Authors: Stefan Bauer, Josef Frysak
Abstract:
According to the scientific information management literature, the improper use of information technology (e.g., personal computers) by employees is one main cause of operational and information security loss events. Therefore, organizations implement information security awareness programs to increase employees’ awareness and thereby prevent loss events. However, in many cases these programs consist of conventional delivery methods such as posters, leaflets, or internal messages to make employees aware of information security policies. We assume that a viral information security awareness video might be a more effective medium than the conventional methods commonly used by organizations. The purpose of this research is to develop a viral video artifact to improve employee security behavior concerning information technology.
Keywords: Information Security Awareness, Delivery Methods, Viral Videos, Employee Security Behavior.
4249 Compressive Strength Evaluation of Underwater Concrete Structures Integrating the Combination of Rebound Hardness and Ultrasonic Pulse Velocity Methods with Artificial Neural Networks
Authors: Seunghee Park, Junkyeong Kim, Eun-Seok Shin, Sang-Hun Han
Abstract:
In this study, two kinds of nondestructive evaluation (NDE) techniques (rebound hardness and ultrasonic pulse velocity methods) are investigated for the effective maintenance of underwater concrete structures. A new methodology to estimate underwater concrete strengths more effectively, named “artificial neural network (ANN)-based concrete strength estimation with the combination of rebound hardness and ultrasonic pulse velocity methods,” is proposed and verified through a series of experimental works.
Keywords: Underwater Concrete, Rebound Hardness, Schmidt hammer, Ultrasonic Pulse Velocity, Ultrasonic Sensor, Artificial Neural Networks, ANN.
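A minimal sketch of the kind of combination the abstract describes, mapping rebound number and ultrasonic pulse velocity to compressive strength with scikit-learn's MLPRegressor; the training pairs below are invented placeholders, not the authors' measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical pairs: (rebound number, pulse velocity in km/s) -> strength (MPa).
X = np.array([[30, 3.8], [34, 4.0], [38, 4.2], [42, 4.4], [46, 4.6], [50, 4.8]])
y = np.array([22.0, 27.0, 32.0, 38.0, 44.0, 51.0])

# Scale the two NDE inputs, then fit a small feed-forward network.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X, y)

# Estimate strength for a new (rebound, velocity) measurement.
print(model.predict([[40, 4.3]]))
```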
4248 Simple Procedure for Probability Calculation of Tensile Crack Occurring in Rigid Pavement – Case Study
Authors: Aleš Florian, Lenka Ševelová, Jaroslav Žák
Abstract:
Formation of tensile cracks in the concrete slabs of rigid pavement can be (among others) the initiation point of other, more serious failures, which can ultimately lead to complete degradation of the concrete slab and thus the whole pavement. Two measures can be used for reliability assessment of this phenomenon: the probability of failure and/or the reliability index. Different methods can be used for their calculation; the simple ones are the moment methods and simulation techniques. Two methods, the FOSM method and the Simple Random Sampling method, are verified and compared. The influence of information about the probability distribution and the statistical parameters of input variables, as well as of the limit state function, on the calculated reliability index and failure probability is studied at three points on the lower surface of concrete slabs of the older type of rigid pavement formerly used in the Czech Republic.
Keywords: Failure, pavement, probability, reliability index, simulation, tensile crack.
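A minimal sketch of the Simple Random Sampling (Monte Carlo) approach for a generic limit state g = R - S with assumed normal distributions; the parameters are illustrative, not the paper's pavement data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 1_000_000

# Assumed resistance R and load effect S (both normal, placeholder parameters).
R = rng.normal(loc=4.5, scale=0.5, size=n)  # tensile strength, MPa
S = rng.normal(loc=3.0, scale=0.4, size=n)  # tensile stress, MPa

g = R - S              # limit state function: failure when g < 0
p_f = np.mean(g < 0)   # probability of failure
beta = -norm.ppf(p_f)  # corresponding reliability index

print(f"P_f = {p_f:.2e}, beta = {beta:.2f}")
```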
4247 Comparison of Detrending Methods in Spectral Analysis of Heart Rate Variability
Authors: Liping Li, Changchun Liu, Ke Li, Chengyu Liu
Abstract:
A non-stationary trend in an R-R interval series is considered a main factor that can highly influence the evaluation of spectral analysis, so it is suggested that trends be removed in order to obtain reliable results. In this study, three detrending methods, the smoothness priors approach, the wavelet method, and empirical mode decomposition, were compared on artificial R-R interval series with four types of simulated trends. The Lomb-Scargle periodogram was used for spectral analysis of the R-R interval series. Results indicated that the wavelet method showed a better overall performance than the other two methods and was more time-saving as well. It was therefore selected for spectral analysis of real R-R interval series from thirty-seven healthy subjects. Significant decreases (19.94±5.87% in the low frequency band and 18.97±5.78% in the ratio, p<0.001) were found. Thus, the wavelet method is recommended as an optimal choice for this purpose.
Keywords: Empirical mode decomposition, heart rate variability, signal detrending, smoothness priors, wavelet.
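A brief sketch of the spectral step described above, applying scipy.signal.lombscargle to an unevenly sampled synthetic R-R-like series; the series and frequency grid are assumptions for illustration.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)

# Unevenly spaced beat times (seconds) and a synthetic R-R series with a
# 0.1 Hz (low-frequency) oscillation plus noise.
t = np.sort(rng.uniform(0, 300, 300))
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * t) + rng.normal(0, 0.01, t.size)

# Frequencies covering the LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) bands;
# lombscargle expects angular frequencies and mean-subtracted data.
freqs_hz = np.linspace(0.01, 0.5, 500)
pgram = lombscargle(t, rr - rr.mean(), 2 * np.pi * freqs_hz)

print("Peak at %.3f Hz" % freqs_hz[np.argmax(pgram)])
```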
4246 A Method to Annotate Programs with High-Level Knowledge of Computation
Authors: Nobuhiko Hishinuma, Jun Igari, Rentaro Yoshioka
Abstract:
When programming in languages such as C or Java, it is difficult to reconstruct the programmer's ideas from the program code alone. This occurs mainly because much of the programmer's thinking behind the implementation is not recorded in the code. For example, physical aspects of computation such as spatial structures, activities, and the meaning of variables are not required as instructions to the computer and are often excluded. This makes future reconstruction of the original ideas difficult. AIDA, a multimedia programming language based on the cyberFilm model, can solve these problems by allowing the ideas behind programs to be described using advanced annotation methods as a natural extension of programming. In this paper, a development environment that implements the AIDA language is presented with a focus on the annotation methods. In particular, an actual scientific numerical computation code is created and the effects of the annotation methods are analyzed.
Keywords: cyberFilm, development environment, knowledge engineering, multimedia programming language.
4245 Online Prediction of Nonlinear Signal Processing Problems Based Kernel Adaptive Filtering
Authors: Hamza Nejib, Okba Taouali
Abstract:
This paper presents two of the best-known kernel adaptive filtering (KAF) approaches, kernel least mean squares (KLMS) and kernel recursive least squares (KRLS), for predicting new outputs in nonlinear signal processing. Both methods implement a nonlinear transfer function using kernel methods in a particular space called a reproducing kernel Hilbert space (RKHS), where the model is a linear combination of kernel functions applied to the observed data, transformed from the input space to a high-dimensional feature space; this idea is known as the kernel trick. KAF thus develops adaptive filters in the RKHS. We use two nonlinear signal processing problems, Mackey-Glass chaotic time series prediction and nonlinear channel equalization, to assess the performance of the presented approaches and, finally, to determine which of them is better adapted.
Keywords: KAF, KLMS, KRLS, kernel methods, online prediction, RKHS, signal processing.
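To make the "linear combination of kernel functions" concrete, here is a minimal from-scratch KLMS sketch with a Gaussian kernel; the step size, kernel width, and test signal are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=0.5):
    """Gaussian (RBF) kernel between input vectors a and b."""
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def klms(inputs, targets, eta=0.2, sigma=0.5):
    """Kernel LMS: the prediction is a growing sum of kernels centered at past
    inputs, each weighted by eta times the prediction error made at that step."""
    centers, alphas, predictions = [], [], []
    for x, d in zip(inputs, targets):
        y = sum(a * gaussian_kernel(c, x, sigma) for a, c in zip(alphas, centers))
        predictions.append(y)
        centers.append(x)
        alphas.append(eta * (d - y))  # LMS-style update in the RKHS
    return np.array(predictions)

# One-step-ahead prediction of a noisy sinusoid from a 3-sample embedding.
rng = np.random.default_rng(3)
s = np.sin(0.3 * np.arange(300)) + rng.normal(0, 0.05, 300)
X = np.array([s[i:i + 3] for i in range(len(s) - 3)])
d = s[3:]
pred = klms(X, d)
print("MSE over last 100 samples:", np.mean((d[-100:] - pred[-100:]) ** 2))
```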
4244 Application of Machine Learning Methods to Online Test Error Detection in Semiconductor Test
Authors: Matthias Kirmse, Uwe Petersohn, Elief Paffrath
Abstract:
As test costs in today's semiconductor industry can make up to 50 percent of the total production costs, efficient test error detection becomes more and more important. In this paper, we present a new machine learning approach to test error detection that should provide faster recognition of test system faults as well as improved test error recall. The key idea is to learn a classifier ensemble that detects typical test error patterns in wafer test results immediately after finishing these tests. Since test error detection has not yet been discussed in the machine learning community, we define the central problem-relevant terms and provide an analysis of important domain properties. Finally, we present comparative studies reflecting the failure detection performance of three individual classifiers and three ensemble methods based upon them. As base classifiers we chose a decision tree learner, a support vector machine, and a Bayesian network, while the compared ensemble methods were simple and weighted majority vote as well as stacking. For the evaluation, we used cross-validation and a specially designed practical simulation. By implementing our approach in a semiconductor test department for the observation of two products, we proved its practical applicability.
Keywords: Ensemble methods, fault detection, machine learning, semiconductor test.
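A compact sketch of the ensemble setup described above (decision tree, SVM, and a naive Bayes stand-in for the Bayesian network, combined by simple majority vote) using scikit-learn; the synthetic data replace the proprietary wafer test results.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for wafer test features labeled "test error" / "no error".
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("svm", SVC(random_state=0)),
        ("nb", GaussianNB()),  # simple Bayesian base classifier
    ],
    voting="hard",  # simple majority vote over the three base classifiers
)

# Cross-validation, as used in the paper's evaluation.
print(cross_val_score(ensemble, X, y, cv=5).mean())
```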
4243 Influence of Insulation System Methods on Dissipation Factor and Voltage Endurance
Authors: Farzad Yavari, Hamid Chegini, Saeed Lotfi
Abstract:
This paper reviews the comparison of Resin Rich (RR) and Vacuum Pressure Impregnation (VPI) insulation system qualities for the stator bars of rotating electrical machines. Voltage endurance and tangent delta are two diagnostic tests for determining the quality of insulation systems. The paper describes the trend of the dissipation factor during the voltage endurance test for different stator bar samples made with the RR and VPI insulation system methods. Some samples were made with the same strands and insulation thickness but with different main wall material, to prove the influence of the insulation system method on stator bar quality. Also, some of the samples were subjected to voltage at the temperature of their insulation class, and their dissipation factor changes were measured and studied.
Keywords: Vacuum pressure impregnation, resin rich, insulation, stator bar, dissipation factor, voltage endurance.
4242 Airfield Pavements Made of Reinforced Concrete: Dimensioning According to the Theory of Limit States and Eurocode
Abstract:
In the airfield construction industry, pavements made of reinforced concrete have previously been used very rarely; however, the necessity of using this type of pavement in emergency situations justifies a reference to this issue. The paper concerns the problem of dimensioning airfield pavements made of reinforced concrete and the evaluation of selected dimensioning methods for reinforced concrete slabs intended for airfield pavements. An analysis of slab dimensioning according to the classical method of limit states has been performed and compared to results obtained with methods complying with the Eurocode 2 guidelines. The basis of the analysis was a concrete slab of class C35/45 with reinforcement located in the tension zone. Steel bars of 16.0 mm were used as slab reinforcement. Based on a comparative analysis of the obtained results, conclusions were reached regarding the legitimacy of applying the discussed methods and their design advantages.
Keywords: Reinforced concrete, cement concrete, airport pavements.
4241 Methods of Forming Informational Culture Students
Authors: Altynbek Moshkalov
Abstract:
The basic features of students' information culture, together with its wide use, are oriented towards implementing new information technologies in the educational process, which determines the search for ways of relating interdisciplinary content, aims, and objectives of the study. In this regard, the article raises questions about students' information culture and presents information about the aims and objectives of the information culture process among students. The formation of a professional interest in relevant information provides an opportunity to inform professional activities about the effective use of interactive methods and innovative technologies in the learning process. The result of the experiment proves the effectiveness of the information culture process for students trained in a higher education system based on credit technology. The main purpose of this paper is a comprehensive review of students' information culture.
Keywords: Information culture, methods of information culture of students, educational system of the credit technology, distance learning, information of interest, information and communication technologies and tools.
4240 Application of CPN Tools for Simulation and Analysis of Bandwidth Allocation
Authors: Julija Asmuss, Gunars Lauks, Viktors Zagorskis
Abstract:
We consider the problem of bandwidth allocation in a substrate network as an optimization problem for the aggregate utility of multiple applications with diverse requirements and describe a simulation scheme for dynamically adaptive bandwidth allocation protocols. The proposed simulation model, based on Coloured Petri Nets (CPN), is realized using CPN Tools.
Keywords: Bandwidth allocation problem, Coloured Petri Nets, CPN Tools, simulation.
4239 New Security Approach of Confidential Resources in Hybrid Clouds
Authors: Haythem Yahyaoui, Samir Moalla, Mounir Bouden, Skander Ghorbel
Abstract:
Nowadays, cloud environments are becoming a necessity for companies, as this technology gives them the opportunity to access data anywhere and anytime. It also provides optimized and secured access to resources and gives more security to the data stored in the platform. However, some companies do not trust cloud providers; they think that providers can access and modify confidential data such as bank accounts. Many works have been done in this context, and they conclude that encryption methods applied by providers ensure confidentiality, but they overlook the fact that cloud providers can decrypt the confidential resources. The best solution here is to apply some operations on the data before sending them to the provider's cloud, with the objective of making them unreadable. The principal idea is to allow users to protect their data with their own methods. In this paper, we demonstrate our approach and prove that it is more efficient, in terms of execution time, than some existing methods. This work aims at enhancing the quality of service of providers and ensuring the trust of the customers.
Keywords: Confidentiality, cryptography, security issues, trust issues.
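A minimal sketch of the client-side principle (encrypt before upload so the provider only ever sees ciphertext), using the Fernet recipe from the Python cryptography package; this illustrates the general idea, not the authors' specific scheme.

```python
from cryptography.fernet import Fernet

# The key stays with the customer; the cloud provider never receives it.
key = Fernet.generate_key()
cipher = Fernet(key)

confidential = b"IBAN: DE00 0000 0000 0000 0000 00"
ciphertext = cipher.encrypt(confidential)  # what gets uploaded to the cloud

# ... later, after downloading the ciphertext back from the provider ...
assert cipher.decrypt(ciphertext) == confidential
```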
4238 Study on Crater Detection Using FLDA
Authors: Yoshiaki Takeda, Norifumi Aoyama, Takahiro Tanaami, Syouhei Honda, Kenta Tabata, Hiroyuki Kamata
Abstract:
In this paper, we validate crater detection in lunar surface images using Fisher Linear Discriminant Analysis (FLDA). This proposal assumes application to the SLIM (Smart Lander for Investigating Moon) project, which aims at a pin-point landing on the lunar surface. The point where the lander should land is judged by the positional relations of the craters obtained via camera, so real-time image processing becomes an important element. Moreover, a 400 kg-class lander is assumed in the SLIM project; therefore, high-performance computers for image processing cannot be carried on board. We have studied various crater detection methods, such as Haar-like features, LBP, and PCA, and we think these methods are appropriate to the project; however, they are insufficient for identifying unlearned images obtained during actual operation. In this paper, we examine crater detection using FLDA and compare it with the conventional methods.
Keywords: Crater Detection, Fisher Linear Discriminant Analysis, Haar-Like Feature, Image Processing.
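A minimal sketch of FLDA-based patch classification with scikit-learn's LinearDiscriminantAnalysis; the random "crater"/"background" patches are placeholders for real lunar imagery.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)

# Placeholder 16x16 grayscale patches, flattened: class 1 = crater, 0 = background.
craters = rng.normal(0.3, 0.1, (100, 256))     # darker, shadowed patches
background = rng.normal(0.6, 0.1, (100, 256))  # brighter terrain
X = np.vstack([craters, background])
y = np.array([1] * 100 + [0] * 100)

# Fisher Linear Discriminant Analysis: project onto the direction that best
# separates the two classes, then classify.
flda = LinearDiscriminantAnalysis()
flda.fit(X, y)
print("Training accuracy:", flda.score(X, y))
```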
4237 Evaluation of the Hepatitis C Virus and Classical and Modern Immunoassays Used Nowadays to Diagnose It in Tirana
Authors: Stela Papa, Klementina Puto, Migena Pllaha
Abstract:
HCV is a hepatotropic RNA virus, transmitted primarily via the blood route, which causes progressive disease such as chronic hepatitis, liver cirrhosis, or hepatocellular carcinoma. HCV is nowadays a global healthcare problem. A variety of immunoassays, including old and new technologies, are being applied to detect HCV in our country. These methods include immunochromatography assays (ICA), fluorescence immunoassay (FIA), enzyme-linked fluorescent assay (ELFA), and enzyme-linked immunosorbent assay (ELISA) to detect HCV antibodies in blood serum, the last of which is slowly being replaced by more sensitive methods such as the rapid automated analyzer chemiluminescence immunoassay (CLIA). The aim of this study is to estimate HCV infection in carriers and in acute and chronic patients, and to evaluate the use of new diagnostic methods. This study was carried out from September 2016 to May 2018. During this period, 2913 patients were analyzed for the presence of HCV using samples of their blood serum. The immunoassays performed were ICA, FIA, ELFA, ELISA, and CLIA. In conclusion, 82% of the patients in this study were found to be infected with HCV. Diagnostic methods in clinical laboratories are crucial in the early stages of infection, in the management of chronic hepatitis, and in the treatment of patients during their disease.
Keywords: CLIA, ELISA, hepatitis C virus, immunoassay.
4236 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains
Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe
Abstract:
The increasing digitalization of value chains can help companies to handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of smart value chains for the purpose of improvements in productivity, handling the increasing time and cost pressure and the need for individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, as the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant-flexible production and constantly changing market and environmental conditions. To lift the performance limits inbuilt in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies, and there is no practicable framework which instructs the transformation of current value chains into digitally pervasive value chains. Current research shows that a connection between lean production and digitalization exists; this link is based on factors such as people, technology and organization. In this paper, the introduced method for the determination of digitally pervasive value chains takes the factors people, technology and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step of ‘target definition’ describes the target situation and defines the depth of the analysis with regard to the inspection area and the level of detail. The second step of ‘analysis of the value chain’ verifies the lean-ability of processes and places a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. Furthermore, the ‘digital evaluation process’ ensures the usefulness of digital adaptations regarding their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. As a result, the validation and optimization of the proposed method in a German company from the electronics industry shows that the digital transformation of current value chains based on lean production raises their inbuilt performance limits.
Keywords: Digitalization, digital transformation, lean production, Industrie 4.0, value chain.
4235 Comparison of FAHP and TOPSIS for Evacuation Capability Assessment of High-rise Buildings
Authors: Peng Mei, Yan-Jun Qi, Yu Cui, Song Lu, He-Ping Zhang
Abstract:
Many computer-based methods have been developed to assess the evacuation capability (EC) of high-rise buildings. Because software tools are time-consuming and not suitable for on-scene applications, we adopted two methods, the fuzzy analytic hierarchy process (FAHP) and the technique for order preference by similarity to an ideal solution (TOPSIS), for EC assessment of a high-rise building in Jinan. The EC scores obtained with the two methods and the evacuation times acquired with Pathfinder 2009 for floors 47-60 of the building were compared with each other. The results show that FAHP performs better than TOPSIS for EC assessment of high-rise buildings, especially in dealing with the effect of occupant type and distance to exit on EC, tackling complex problems with a multi-level structure of criteria, and requiring less computation. However, both FAHP and TOPSIS failed to appropriately handle the situation where the exit width changes while occupants are few.
Keywords: Evacuation capability assessment, FAHP, high-rise buildings, TOPSIS.
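For reference, a minimal numpy implementation of the TOPSIS ranking steps (vector normalization, weighting, ideal and anti-ideal distances, closeness coefficient); the decision matrix and weights are invented, not the Jinan building data.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) on criteria (columns).
    benefit[j] is True if larger values of criterion j are better."""
    m = np.asarray(matrix, float)
    # 1. Vector-normalize each criterion column, then apply the weights.
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights, float)
    # 2. Ideal best and worst values per criterion.
    best = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    # 3. Euclidean distances to the ideal and anti-ideal solutions.
    d_best = np.linalg.norm(v - best, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    # 4. Closeness coefficient: 1 is ideal, 0 is anti-ideal.
    return d_worst / (d_best + d_worst)

# Hypothetical floors scored on (exit width [m], distance to exit [m], occupants).
floors = [[2.4, 30, 120], [1.8, 45, 90], [2.0, 25, 150]]
scores = topsis(floors, weights=[0.5, 0.3, 0.2], benefit=[True, False, False])
print(scores)  # higher = better evacuation capability
```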
4234 Modelling of Soil Erosion by Non Conventional Methods
Authors: Ganesh D. Kale, Sheela N. Vadsola
Abstract:
Soil erosion is a most serious problem faced at global and local levels, so the planning of soil conservation measures has become a prominent agenda for water basin managers. To plan soil conservation measures, information on soil erosion is essential. The Universal Soil Loss Equation (USLE), Revised Universal Soil Loss Equation (RUSLE1 or RUSLE), Modified Universal Soil Loss Equation (MUSLE), and RUSLE 1.06, RUSLE 1.06c, and RUSLE2 are the most widely used conventional erosion estimation methods. The essential drawback of the USLE and RUSLE1 equations is that they are based on average annual values of their parameters, so their applicability at small temporal scales is questionable. These equations also do not estimate runoff-generated soil erosion, so their applicability to such erosion is questionable. The data used in forming the USLE and RUSLE1 equations were plot data, so their applicability at greater spatial scales requires scale correction factors to be introduced. On the other hand, MUSLE is unsuitable for predicting the sediment yield of small and large events. Although the revised forms of USLE, such as RUSLE 1.06, RUSLE 1.06c, and RUSLE2, are land-use independent and have cleared almost all the drawbacks of the earlier versions like USLE and RUSLE1, they are based on regional data from specific areas, and their applicability to other areas with different climate, soil, and land use is questionable. These conventional equations apply to sheet and rill erosion and are unable to predict gully erosion or the spatial pattern of rills. Research has therefore focused on the development of non-conventional (other than conventional) methods of soil erosion estimation. When these non-conventional methods are combined with GIS and RS, they give the spatial distribution of soil erosion. In the present paper, a review of the literature on non-conventional methods of soil erosion estimation supported by GIS and RS is presented.
Keywords: Conventional methods, GIS, non-conventional methods, remote sensing, soil erosion modeling.
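For context, the USLE referred to above is the product A = R·K·LS·C·P: soil loss from rainfall erosivity, soil erodibility, slope length-steepness, cover-management, and support practice factors. A tiny worked sketch with assumed factor values:

```python
# USLE: average annual soil loss A (t/ha/yr) as a product of its factors.
R = 350.0  # rainfall-runoff erosivity (MJ·mm/(ha·h·yr)) -- assumed
K = 0.030  # soil erodibility (t·ha·h/(ha·MJ·mm)) -- assumed
LS = 1.2   # combined slope length and steepness factor -- assumed
C = 0.25   # cover-management factor -- assumed
P = 0.8    # support practice factor -- assumed

A = R * K * LS * C * P
print(f"Estimated annual soil loss: {A:.2f} t/ha/yr")  # 2.52 t/ha/yr
```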
4233 An Empirical Mode Decomposition Based Method for Action Potential Detection in Neural Raw Data
Authors: Sajjad Farashi, Mohammadjavad Abolhassani, Mostafa Taghavi Kani
Abstract:
Information in the nervous system is coded as firing patterns of electrical signals called action potentials or spikes, so an essential step in the analysis of neural mechanisms is the detection of action potentials embedded in the neural data. Several methods have been proposed in the literature for this purpose. In this paper a novel method based on empirical mode decomposition (EMD) has been developed. EMD is a decomposition method that extracts oscillations of different frequency ranges from a waveform. The method is adaptive, and no a-priori knowledge about the data or parameter adjustment is needed. The results for simulated data indicate that the proposed method is comparable with wavelet-based methods for spike detection. For neural signals with a signal-to-noise ratio near 3, the proposed method is capable of detecting more than 95% of action potentials accurately.
Keywords: EMD, neural data processing, spike detection, wavelet decomposition.
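A rough sketch of the idea (decompose with EMD, keep the fastest oscillations, threshold to find spikes), here using the third-party PyEMD package; the package choice, IMF selection, and threshold rule are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np
from PyEMD import EMD  # third-party package: pip install EMD-signal

rng = np.random.default_rng(5)

# Synthetic trace: noise plus a slow drift plus three spike-like bumps.
t = np.linspace(0, 1, 2000)
signal = 0.5 * np.sin(2 * np.pi * 2 * t) + rng.normal(0, 0.1, t.size)
for spike_time in (0.25, 0.5, 0.75):
    signal += 1.5 * np.exp(-((t - spike_time) ** 2) / (2 * 0.001 ** 2))

# Decompose into intrinsic mode functions; the first IMFs hold the fastest
# oscillations, where spike energy concentrates.
imfs = EMD().emd(signal)
fast = imfs[:2].sum(axis=0)

# Simple amplitude threshold on the fast component.
threshold = 4 * np.std(fast)
spike_indices = np.where(fast > threshold)[0]
print("Detected spike samples near times:", np.round(t[spike_indices], 3))
```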
4232 Central Finite Volume Methods Applied in Relativistic Magnetohydrodynamics: Applications in Disks and Jets
Authors: Raphael de Oliveira Garcia, Samuel Rocha de Oliveira
Abstract:
We have developed a new computer program in Fortran 90 to obtain numerical solutions of a system of relativistic magnetohydrodynamics partial differential equations with predetermined gravitation (GRMHD), capable of simulating the formation of relativistic jets from the accretion disk of matter up to their ejection. Initially we carried out a study of one-dimensional finite volume numerical methods, namely the Lax-Friedrichs, Lax-Wendroff, and Nessyahu-Tadmor methods and Godunov methods depending on Riemann problem solvers, applied to the Euler equations, in order to verify their main features and make comparisons among them. We then implemented the central finite volume method of Nessyahu-Tadmor, a numerical scheme whose formulation is free of Riemann problem solvers and of dimensional splitting, even in two or more spatial dimensions, at this point applied to the GRMHD equations. Finally, with the Nessyahu-Tadmor method it was possible to obtain stable numerical solutions - without spurious oscillations or excessive dissipation - of the magnetized accretion disk process in rotation with respect to a central Schwarzschild black hole (BH) immersed in a magnetosphere, with ejection of matter in the form of a jet over a distance of fourteen times the radius of the BH, a record in terms of astrophysical simulation of this kind. In our simulations we also managed to obtain jet substructures. A great advantage is that, with our code, we can simulate the GRMHD equations on a simple personal computer.
Keywords: Finite Volume Methods, Central Schemes, Fortran 90, Relativistic Astrophysics, Jet.
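To illustrate the simplest of the one-dimensional finite volume schemes compared above, here is a minimal Lax-Friedrichs solver for the inviscid Burgers equation; the test problem and grid are assumptions, and the authors' actual code is Fortran 90 solving the full GRMHD system.

```python
import numpy as np

def lax_friedrichs_burgers(u0, dx, t_end, cfl=0.9):
    """March u_t + (u^2/2)_x = 0 with the Lax-Friedrichs scheme:
    u_j^{n+1} = (u_{j-1} + u_{j+1})/2 - dt/(2 dx) * (f(u_{j+1}) - f(u_{j-1}))."""
    u = u0.copy()
    t = 0.0
    while t < t_end:
        # CFL-limited time step based on the current maximum wave speed.
        dt = min(cfl * dx / max(np.max(np.abs(u)), 1e-12), t_end - t)
        up, um = np.roll(u, -1), np.roll(u, 1)  # periodic neighbours u_{j+1}, u_{j-1}
        u = 0.5 * (um + up) - dt / (2 * dx) * (0.5 * up**2 - 0.5 * um**2)
        t += dt
    return u

# Sine initial data steepening into a shock on a periodic domain.
N = 400
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
u_final = lax_friedrichs_burgers(np.sin(x), x[1] - x[0], t_end=1.0)
print("min/max after t = 1:", u_final.min(), u_final.max())
```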
4231 Optimization of the Co-Precipitation of Industrial Waste Metals in a Continuous Reactor System
Authors: Thomas S. Abia II, Citlali Garcia-Saucedo
Abstract:
A continuous copper precipitation treatment (CCPT) system was conceived at Intel Chandler Site to serve as a first-of-kind (FOK) facility-scale waste copper (Cu), nickel (Ni), and manganese (Mn) co-precipitation facility. The process was designed to treat highly variable wastewater discharged from a substrate packaging research factory. The paper discusses metals co-precipitation induced by internal changes for manufacturing facilities that lack the capacity for hardware expansion due to real estate restrictions, aggressive schedules, or budgetary constraints. Herein, operating parameters such as pH and oxidation reduction potential (ORP) were examined to analyze the ability of the CCPT System to immobilize various waste metals. Additionally, influential factors such as influent concentrations and retention times were investigated to quantify the environmental variability against system performance. A total of 2,027 samples were analyzed and statistically evaluated to measure the performance of CCPT that was internally retrofitted for Mn abatement to meet environmental regulations. In order to enhance the consistency of the influent, a separate holding tank was cannibalized from another system to collect and slow-feed the segregated Mn wastewater from the factory into CCPT. As a result, the baseline influent Mn decreased from 17.2±18.7 mg/L at pre-pilot to 5.15±8.11 mg/L post-pilot (70.1% reduction). Likewise, the pre-trial and post-trial average influent Cu values to CCPT were 52.0±54.6 mg/L and 33.9±12.7 mg/L, respectively (34.8% reduction). However, the raw Ni content of 0.97±0.39 mg/L at pre-pilot increased to 1.06±0.17 mg/L at post-pilot. The average Mn output declined from 10.9±11.7 mg/L at pre-pilot to 0.44±1.33 mg/L at post-pilot (96.0% reduction) as a result of the pH and ORP operating setpoint changes. In similar fashion, the output Cu quality improved from 1.60±5.38 mg/L to 0.55±1.02 mg/L (65.6% reduction) while the Ni output sustained a 50% enhancement during the pilot study (0.22±0.19 mg/L reduced to 0.11±0.06 mg/L). pH and ORP were shown to be significantly instrumental to the precipitative versatility of the CCPT System.
Keywords: Copper, co-precipitation, industrial wastewater treatment, manganese, optimization, pilot study.
4230 The Gasoil Hydrofining Kinetics Constants Identification
Authors: C. Patrascioiu, V. Matei, N. Nicolae
Abstract:
The paper describes the experiments and the calculation of the kinetic parameters of gasoil hydrofining. Experimental results of gasoil hydrofining using a Mo catalyst promoted with Ni on an alumina support are presented. The authors have adapted a kinetic model of gasoil hydrofining. Using this proposed kinetic model and the experimental data, they have calculated the parameters of the model. The numerical calculus is based on minimizing the difference between the experimental sulfur concentrations and the kinetic model estimations.
Keywords: Hydrofining, kinetic, modeling, optimization.
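A minimal sketch of the identification step (least-squares minimization of the gap between measured and predicted sulfur), assuming a pseudo-first-order hydrodesulfurization model; the model form, feed sulfur, and data points are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

S_IN = 1.2  # wt% sulfur in the gasoil feed -- assumed value

def sulfur_model(lhsv, k):
    """Outlet sulfur for pseudo-first-order hydrodesulfurization in a
    plug-flow reactor: S_out = S_in * exp(-k / LHSV)."""
    return S_IN * np.exp(-k / lhsv)

# Hypothetical measurements: liquid hourly space velocity (1/h) vs. outlet sulfur (wt%).
lhsv = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
s_out = np.array([0.03, 0.20, 0.36, 0.49, 0.66])

# Identify k by minimizing the squared mismatch between experiment and model,
# mirroring the paper's least-squares identification of kinetic constants.
(k_fit,), _ = curve_fit(sulfur_model, lhsv, s_out, p0=[1.0])
print(f"fitted rate constant k = {k_fit:.2f} 1/h")
```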
4229 Simulation of Sloshing behavior using Moving Grid and Body Force Methods
Authors: Tadashi Watanabe
Abstract:
The flow field and the motion of the free surface in an oscillating container are simulated numerically to assess the numerical approach for studying two-phase flows under oscillating conditions. Two numerical methods are compared: one models the oscillating container directly using the moving grid of the ALE method, while the other simulates the effect of container motion using an oscillating body force acting on the fluid in a stationary container. The two-phase flow field in the container is simulated using the level set method in both cases. It is found that the results calculated by the body force method coincide with those of the moving grid method, and the sloshing behavior is predicted well by both methods. The theoretical background and limitations of the body force method are discussed, and the effects of oscillation amplitude and frequency are shown.
Keywords: Two-phase flow, simulation, oscillation, moving grid, body force.
4228 Development of Elementary Literacy in the Czech Republic
Authors: Iva Košek Bartošová
Abstract:
Great attention is being paid to the development of first reading, and thus early literacy skills, in the Czech Republic. Yet inconclusive results of PISA and PIRLS force us to think over the teacher's work, his/her roles in the education process, and the methods and forms used in lessons. It is also significantly important to monitor the family environment and the pupils themselves. The aim of this publishing output is to focus on methods of practicing reading technique and their results in the process of comprehension. The first part of the contribution presents the goals of the development of reading literacy and the methods used in reading practice in some EU countries, with a follow-up comparison of research implemented with the help of modern eye tracker technology in the year 2015 and research conducted at the Institute of Education and Psychological Counselling of the Czech Republic in the year 2011/12. These are the results of a diagnostic test of reading in first classes of primary schools taught by the genetic method and by the analytic-synthetic method. The results show that in the first stage of practice there are no statistically significant differences between the researched subjects taught by different methods of reading practice (based on several diagnostic texts focused on reading technique and its comprehension). Different results appear at the end of Grade One and during Grade Two of primary school.
Keywords: Elementary literacy, eye tracker device, diagnostic reading tests, reading teaching method.
4227 Research on Software Security Testing
Authors: Gu Tian-yang, Shi Yin-sheng, Fang You-yuan
Abstract:
Software security testing is an important means to ensure software security and trustworthiness. This paper first discusses the definition and classification of software security testing and broadly surveys methods and tools for software security testing. It then analyzes the advantages, disadvantages, and scope of application of the various methods and presents a taxonomy of security testing tools. Finally, the paper points out the future focus and development directions of software security testing technology.
Keywords: Security testing, security functional testing, security vulnerability testing, testing method, testing tool.
4226 A Proposed Optimized and Efficient Intrusion Detection System for Wireless Sensor Network
Authors: Abdulaziz Alsadhan, Naveed Khan
Abstract:
In recent years, intrusions on computer networks have been the major security threat, and hence it is important to impede them. Impeding such intrusions entirely relies on their detection, which is the primary concern of any security tool like an intrusion detection system (IDS). Therefore, it is imperative to detect network attacks accurately. Numerous intrusion detection techniques are available, but the main issue is their performance. The performance of an IDS can be improved by increasing the accurate detection rate and reducing the false positive rate. The existing intrusion detection techniques have the limitation of using the raw dataset for classification; the classifier may get confused due to redundancy, which results in incorrect classification. To minimize this problem, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Local Binary Pattern (LBP) can be applied to transform raw features into a principal feature space and select the features based on their sensitivity, where eigenvalues can be used to determine the sensitivity. To further refine the selected features, greedy search, backward elimination, and Particle Swarm Optimization (PSO) can be used to obtain a subset of features with optimal sensitivity and the highest discriminatory power. This optimal feature subset is used to perform classification. For classification purposes, the Support Vector Machine (SVM) and Multilayer Perceptron (MLP) are used due to their proven ability in classification. The Knowledge Discovery and Data Mining (KDD’99) cup dataset was considered as a benchmark for evaluating security detection mechanisms. The proposed approach can provide an optimal intrusion detection mechanism that outperforms the existing approaches and has the capability to minimize the number of features and maximize the detection rates.
Keywords: Particle Swarm Optimization (PSO), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Local Binary Pattern (LBP), Support Vector Machine (SVM), Multilayer Perceptron (MLP).
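A condensed sketch of the pipeline's backbone (scaling, PCA projection, SVM classification) in scikit-learn; the LBP, greedy search, and PSO stages are omitted, and the synthetic data stand in for KDD'99 records.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for KDD'99-style records: 41 features, attack vs. normal.
X, y = make_classification(n_samples=2000, n_features=41, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale, project onto the principal components carrying most variance, classify.
pipeline = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),  # keep the 10 most sensitive directions
    SVC(kernel="rbf"),
)
pipeline.fit(X_train, y_train)
print("Detection accuracy:", pipeline.score(X_test, y_test))
```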