Search results for: primal-dual interior point method
8763 Thermo-Mechanical Approach to Evaluate Softening Behavior of Polystyrene: Validation and Modeling
Authors: Salah Al-Enezi, Rashed Al-Zufairi, Naseer Ahmad
Abstract:
A thermo-mechanical technique was developed to determine the softening point temperature/glass transition temperature (Tg) of polystyrene exposed to high pressures. The design exploits the ability of carbon dioxide to lower the glass transition temperature of polymers by acting as a plasticizer. In this apparatus, the sorption of carbon dioxide to induce softening of polymers as a function of temperature/pressure is performed, and the extent of softening is measured in three-point flexural bending mode. The polymer strip was placed in the cell in contact with a linear variable differential transformer (LVDT). CO2 was pumped into the cell from a supply cylinder to reach high pressure. The results clearly showed full softening of the samples, accompanied by a large deformation of the polymer strip. The deflection curves are initially relatively flat and then increase dramatically as the temperature is elevated. It was found that increasing the CO2 pressure shifts the temperature curves to lower temperatures by about 45 K over the pressure range of 0-120 bar. The obtained experimental Tg values were validated against the values reported in the literature. Finally, it is concluded that the deflection model fits the generated experimental results consistently and describes in more detail how the central deflection of a thin polymer strip is affected by CO2 diffusion into the polymeric samples.
Keywords: Softening, high pressure, polystyrene, CO2 diffusion.
8762 A Method for Quality Inspection of Motors by Detecting Abnormal Sound
Authors: Tadatsugu Kitamoto
Abstract:
Currently, the quality of motors is inspected by human ears. In this paper, I propose two systems that use speech recognition methods to automate this inspection. The first system is based on a linear processing method that uses K-means and the nearest neighbor method, and the second is based on a non-linear processing method that uses neural networks. Motor sounds were used in both systems; the linear processing system correctly recognized 86.67% of the motor sounds and the non-linear processing system 97.78%.
Keywords: Acoustical diagnosis, Neural networks, K-means, Short-time Fourier transformation
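For readers unfamiliar with this kind of pipeline, the sketch below illustrates only the linear-processing variant: averaged short-time Fourier transform magnitudes as features and a nearest-neighbor classifier. The sampling rate, synthetic motor signals and classifier settings are assumptions for illustration, not the paper's actual data or configuration.

```python
# Minimal sketch (not the paper's exact pipeline): classify motor sounds by
# nearest neighbor on averaged short-time Fourier transform (STFT) magnitudes.
# The sampling rate, toy signals and labels below are illustrative assumptions.
import numpy as np
from scipy.signal import stft
from sklearn.neighbors import KNeighborsClassifier

FS = 44100  # assumed sampling rate in Hz

def stft_features(signal, fs=FS, nperseg=1024):
    """Average STFT magnitude over time -> one spectral feature vector."""
    _, _, Z = stft(signal, fs=fs, nperseg=nperseg)
    return np.abs(Z).mean(axis=1)

# Toy data: 20 "good" and 20 "faulty" motor recordings of 1 s each.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, FS)
good = [np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(FS)
        for _ in range(20)]
faulty = [np.sin(2 * np.pi * 120 * t)
          + 0.5 * np.sin(2 * np.pi * 2300 * t)        # abnormal high-pitched tone
          + 0.1 * rng.standard_normal(FS) for _ in range(20)]

X = np.array([stft_features(s) for s in good + faulty])
y = np.array([0] * 20 + [1] * 20)                     # 0 = normal, 1 = abnormal

clf = KNeighborsClassifier(n_neighbors=1).fit(X[::2], y[::2])   # train on half
print("accuracy:", clf.score(X[1::2], y[1::2]))                 # test on the rest
```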
8761 Analysis on Iranian Wind Catcher and Its Effect on Natural Ventilation as a Solution towards Sustainable Architecture (Case Study: Yazd)
Authors: Mahnaz Mahmoudi Zarandi (Qazvin Islamic Azad University)
Abstract:
Wind catchers have served as a cooling system, providing acceptable ventilation by means of the renewable energy of the wind. In the present study, the city of Yazd, in an arid climate, is selected as the case study. From the architectural point of view, the wind catchers were studied by means of field surveys. The research method is based on random selection of cases and on analytical methods. Wind catcher typology and knowledge of the relationships governing wind catcher architecture were measures taken for the first time. 53 wind catchers were analyzed. The typology of the wind catchers is based on physical analysis and on the patterns and common concepts incorporated in them. How the architecture of a wind catcher influences its operation is examined by analyzing the thermal behavior of archetypes of selected wind catchers. Computational fluid dynamics, the Fluent software and numerical analysis are used in this study as the most accurate analytical approach. The results obtained from these analyses show the formal specifications of wind catchers with optimum operation in Yazd. The knowledge obtained from the optimum model could be used for the design and construction of wind catchers with improved operation.
Keywords: Fluent Software, Iranian architecture, wind catcher
8760 Statistical Modeling of Local Area Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes
Authors: Jihad S. Daba, J. P. Dubois
Abstract:
Fading noise degrades the performance of cellular communication, most notably in femto- and pico-cells in 3G and 4G systems. When the wireless channel consists of a small number of scattering paths, the statistics of the fading noise are not analytically tractable and pose a serious challenge to developing closed canonical forms that can be analyzed and used in the design of efficient and optimal receivers. In this context, the noise is multiplicative and is referred to as stochastically local fading. In many analytical investigations of multiplicative noise, exponential or Gamma statistics are invoked. More recent advances by the author of this paper utilized Poisson-modulated weighted generalized Laguerre polynomials with controlling parameters and uncorrelated noise assumptions. In this paper, we investigate the statistics of a multidiversity, stochastically local area fading channel in which the channel consists of randomly distributed Rayleigh and Rician scattering centers with a coherent Nakagami-distributed line-of-sight component and an underlying doubly stochastic Poisson process driven by a lognormal intensity. These combined statistics form a unifying triply stochastic filtered marked Poisson point process model.
Keywords: Cellular communication, femto- and pico-cells, stochastically local area fading channel, triply stochastic filtered marked Poisson point process.
8759 A New Approach for Mobile Agent Security
Authors: R. Haghighat far, H. Yarahmadi
Abstract:
A mobile agent is software that performs actions autonomously and independently on behalf of a person or an organization. Mobile agents are used for information searching, information retrieval, filtering, intruder recognition in networks, and so on. One of the important issues of mobile agents is their security; different security issues must be considered for effective and secure use of mobile agents. One of those issues is the protection of the integrity of mobile agents. In this paper, after reviewing the existing methods, the advantages and disadvantages of each method are examined. Since each method has its own advantages and disadvantages, it seems that by combining these methods one can arrive at a better method for protecting the integrity of mobile agents. Therefore, such a method is proposed in this paper and then evaluated against the existing methods. Finally, the method is simulated, and the results indicate an improvement in the protection of the integrity of mobile agents.
Keywords: Integrity, Mobile Agent, Security.
8758 Playing Games with Genetic Algorithms: Application on Price-QoS Competition in Telecommunications Market
Authors: M’hamed Outanoute, Mohamed Baslam, Belaid Bouikhalene
Abstract:
Customers use the best compromise criterion between price and quality of service (QoS) to select or change their Service Provider (SP). The SPs share the same market and compete to attract more customers and gain more profit. Due to the divergence of the SPs' interests, we believe that this situation is a non-cooperative game of price and QoS. The game converges to an equilibrium position known as the Nash Equilibrium (NE). In this work, we formulate a game-theoretic framework for the dynamical behaviors of SPs. We use Genetic Algorithms (GAs) to find the price and QoS strategies that maximize the profit for each SP and illustrate the corresponding strategy at the NE. In order to quantify the performance of this NE point, we perform a detailed analysis of the price of anarchy induced by the NE solution. Finally, we provide an extensive numerical study to point out the importance of considering price and QoS as a joint decision parameter.
Keywords: Pricing, QoS, Market share game, Genetic algorithms, Nash equilibrium, Learning, Price of anarchy.
8757 The Framework of Termination Mechanism in Modern Emergency Management
Authors: Yannan Wu, An Chen, Yan Zhao
Abstract:
The termination mechanism is an indispensable part of the emergency management mechanism. Despite its importance in both theory and practice, it is almost a brand new research field. The concept of a termination mechanism is proposed first in this paper, and the design and implementation, which help guarantee the effectiveness and integrity of emergency management, are discussed second. Starting with an introduction of the problems caused by absent or incorrect termination, the essence of the termination mechanism is analyzed, a model based on Optimal Stopping Theory is constructed, and the termination index is given. The model can be applied to find the best termination time point. Termination decisions should be considered not only in the termination stage but throughout the whole emergency management process, which makes it a dynamic decision-making process. In addition, the main subjects and the procedure of termination are illustrated after the termination time point is given. Some future works are discussed last.
Keywords: Emergency management, Termination Mechanism, Optimal Termination Model, Decision Making, Optimal Stopping Theory.
8756 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses
Authors: Neil Bar, Andrew Heweston
Abstract:
Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a ‘reasonable’ PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, shear strength of geologic structure and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated ‘approximately’ or with allowances for some variability rather than ‘exactly’. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user’s discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF calculated using different methods can give markedly different results. Ultimately, sound engineering judgment and logic are often required to decipher the true meaning and significance (if any) of some PF results.
Keywords: Probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability.
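As an illustration of how a PF figure is typically produced by sampling, the sketch below draws strength parameters from assumed distributions, evaluates a simple dry infinite-slope factor of safety for each draw, and reports PF = P(FS < 1). The formula and all parameter values are illustrative assumptions, not taken from the case study.

```python
# Minimal sketch of the probability-of-failure (PF) idea: sample uncertain
# strength parameters, evaluate a factor of safety (FS) per sample, and report
# PF = P(FS < 1). A simple dry infinite-slope FS formula is used purely for
# illustration; all distributions and values are assumptions.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

beta = np.radians(35.0)                      # slope angle (deterministic here)
z = 10.0                                     # failure surface depth, m
gamma = 26.0                                 # unit weight, kN/m^3
c = rng.normal(25.0, 8.0, N)                 # cohesion, kPa (sampled)
phi = np.radians(rng.normal(38.0, 4.0, N))   # friction angle (sampled)

fs = (c + gamma * z * np.cos(beta) ** 2 * np.tan(phi)) / (
    gamma * z * np.sin(beta) * np.cos(beta))

print(f"mean FS = {fs.mean():.2f}")
print(f"PF = P(FS < 1) = {np.mean(fs < 1.0):.4f}")
```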
8755 A Formatting Method for Transforming XML Data into HTML
Authors: Zhe JIN, Motomichi TOYAMA
Abstract:
In this paper, we propose a fixed formatting method for PPX (Pretty Printer for XML). PPX is a query language for XML databases with extensive formatting capability that produces HTML as the result of a query. The fixed formatting method completely specifies the combination of variables and layout specification operators within the layout expression of the GENERATE clause of PPX. In the experiment, a quick comparison shows that PPX requires far less description than XSLT or XQuery programs performing the same tasks.
Keywords: PPX, XML, HTML, XSLT, XQuery, fixed formatting method.
8754 Experimental Evaluation of Mobility Anchor Point Selection Scheme in Hierarchical Mobile IPv6
Authors: Zulkeflee Kusin, Mohamad Shanudin Zakaria
Abstract:
Hierarchical Mobile IPv6 (HMIPv6) was designed to support IP micro-mobility management in the Next Generation Networks (NGN) framework. The main design principle behind this protocol is the use of a Mobility Anchor Point (MAP), located at any router level of the network, to support hierarchical mobility management. However, distance-based MAP selection in HMIPv6 causes MAP overload and increasingly frequent binding updates as the network grows. Therefore, to address this issue in designing a MAP selection scheme, we propose a dynamic load control mechanism integrated with a speed detection mechanism (DMS-DLC). The experimental results show that the proposed scheme gives a better distribution of MAP load and increases handover speed.
Keywords: Dynamic load control, HMIPv6, Mobility Anchor Point, MAP selection scheme
8753 Currency Exchange Rate Forecasts Using Quantile Regression
Authors: Yuzhi Cai
Abstract:
In this paper, we discuss a Bayesian approach to quantile autoregressive (QAR) time series model estimation and forecasting. Together with a forecast combination technique, we then predict USD to GBP currency exchange rates. Combined forecasts contain all the information captured by the fitted QAR models at different quantile levels and are therefore better than those obtained from individual models. Our results show that an unequally weighted combining method performs better than other forecasting methodologies. We found that a median AR model can perform well in point forecasting when the predictive density functions are symmetric. However, in practice, using the median AR model alone may involve the loss of information about the data captured by other QAR models. We recommend that combined forecasts be used whenever possible.
Keywords: Exchange rate, quantile regression, combining forecasts.
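The following sketch conveys the combining idea with a non-Bayesian stand-in: quantile autoregressions of order one fitted with statsmodels' QuantReg at several quantile levels, whose one-step-ahead forecasts are then averaged. The lag order, quantile grid, equal weights and toy series are assumptions, not the paper's specification.

```python
# Rough stand-in for the combined quantile-AR forecast idea (the paper itself is
# Bayesian): fit QAR(1) models at several quantile levels with statsmodels'
# QuantReg and average their one-step-ahead forecasts.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(1)
# Toy "exchange rate" series: a persistent AR(1) around 1.3 (assumption)
y = np.empty(500); y[0] = 1.3
for t in range(1, 500):
    y[t] = 0.05 * 1.3 + 0.95 * y[t - 1] + 0.01 * rng.standard_normal()

Y, X = y[1:], sm.add_constant(y[:-1])        # QAR(1): regress y_t on y_{t-1}
quantiles = [0.1, 0.25, 0.5, 0.75, 0.9]
x_last = np.array([1.0, y[-1]])              # regressors for the next step

forecasts = []
for q in quantiles:
    res = QuantReg(Y, X).fit(q=q)            # quantile regression at level q
    forecasts.append(float(res.params @ x_last))

weights = np.full(len(quantiles), 1 / len(quantiles))   # equal weights (assumption)
print("per-quantile forecasts:", np.round(forecasts, 4))
print("combined forecast     :", round(float(weights @ forecasts), 4))
```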
8752 Containment/Penetration Analysis for the Protection of Aircraft Engine External Configuration and Nuclear Power Plant Structures
Authors: Dong Wook Lee, Adrian Mistreanu
Abstract:
The authors have studied a method for analyzing containment and penetration using explicit nonlinear Finite Element Analysis. This method may be used at the concept design stage for the protection of external configurations or components of aircraft engines and nuclear power plant structures. This paper consists of the modeling method, the results obtained from the method, and the comparison of these results with those calculated by a simple analytical method. It shows that the containment capability obtained by the proposed method matches well with the analytically calculated containment capability.
Keywords: Computer Aided Engineering, CAE, containment analysis, Finite Element Analysis, FEA, impact analysis, penetration analysis.
8751 Simplex Method for Solving Linear Programming Problems with Fuzzy Numbers
Authors: S. H. Nasseri, E. Ardil, A. Yazdani, R. Zaefarian
Abstract:
Fuzzy set theory has been applied in many fields, such as operations research, control theory, and management sciences. In particular, an application of this theory to decision-making problems is linear programming with fuzzy numbers. In this study, we present a new method for solving fuzzy number linear programming problems by use of a linear ranking function. In fact, our method is similar to the simplex method used for solving linear programming problems in a crisp environment.
Keywords: Fuzzy number linear programming, ranking function, simplex method.
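A minimal sketch of the general approach (not the authors' exact algorithm) is given below: triangular fuzzy cost coefficients are defuzzified with a commonly used linear ranking function, R(a, b, c) = (a + 2b + c)/4, and the resulting crisp problem is passed to a standard LP solver. The ranking function and the toy problem data are assumptions.

```python
# Minimal sketch of the idea: defuzzify triangular fuzzy cost coefficients with a
# linear ranking function and solve the resulting crisp LP with a simplex-type
# solver. The ranking R(a,b,c) = (a + 2b + c)/4 and the toy data are assumptions.
import numpy as np
from scipy.optimize import linprog

def rank_triangular(tfn):
    """Linear ranking of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + 2 * b + c) / 4.0

# maximize  ~c1*x1 + ~c2*x2  with triangular fuzzy profits ~c1, ~c2
fuzzy_profit = [(3, 4, 5), (2, 3, 6)]
c = [-rank_triangular(t) for t in fuzzy_profit]   # linprog minimizes

# crisp constraints:  x1 + 2*x2 <= 14,  3*x1 - x2 <= 12,  x >= 0
A_ub = [[1, 2], [3, -1]]
b_ub = [14, 12]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print("optimal x :", np.round(res.x, 3))
print("objective :", round(-res.fun, 3))
```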
8750 A Grid Synchronization Method Based on Adaptive Notch Filter for SPV System with Modified MPPT
Authors: Priyanka Chaudhary, M. Rizwan
Abstract:
This paper presents a grid synchronization technique based on an adaptive notch filter for an SPV (Solar Photovoltaic) system along with MPPT (Maximum Power Point Tracking) techniques. An efficient grid synchronization technique offers proficient detection of various components of the grid signal, such as phase and frequency. It also acts as a barrier against harmonics and other disturbances in the grid signal. A reference phase signal synchronized with the grid voltage is provided by the grid synchronization technique to make the system comply with grid codes and power quality standards. Hence, the grid synchronization unit plays an important role in grid-connected SPV systems. As the output of the PV array fluctuates with meteorological parameters such as irradiance, temperature and wind, MPPT control is required to track the maximum power point of the PV array in order to maintain a constant DC voltage at the VSC (Voltage Source Converter) input. In this work, a variable step size P&O (Perturb and Observe) MPPT technique with a DC/DC boost converter has been used at the first stage of the system. This algorithm divides the dPpv/dVpv curve of the PV panel into three separate zones, i.e. zone 0, zone 1 and zone 2. A fine tracking step size is used in zone 0, while zones 1 and 2 require a large step size in order to obtain a high tracking speed. Further, an adaptive notch filter based control technique is proposed for the VSC in the PV generation system. The adaptive notch filter (ANF) approach is used to synchronize the interfaced PV system with the grid to maintain the amplitude, phase and frequency parameters as well as to improve power quality. This technique offers compensation of harmonic currents and reactive power with both linear and nonlinear loads. To maintain a constant DC link voltage, a PI controller is also implemented and presented in this paper. The complete system has been designed, developed and simulated using the SimPowerSystems and Simulink toolboxes of MATLAB. The performance analysis of the three-phase grid-connected solar photovoltaic system has been carried out on the basis of various parameters such as PV output power, PV voltage, PV current, DC link voltage, PCC (Point of Common Coupling) voltage, grid voltage, grid current, voltage source converter current and power supplied by the voltage source converter. The results obtained from the proposed system are found satisfactory.
Keywords: Solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique.
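The variable-step Perturb and Observe idea described above can be sketched as follows, with a fine step in zone 0 (small |dP/dV|, near the MPP) and larger steps in zones 1 and 2. The thresholds, step sizes and toy PV curve are illustrative assumptions rather than the tuned values of the paper.

```python
# Sketch of a variable step size Perturb & Observe (P&O) MPPT update following
# the zone idea in the abstract: fine step near the MPP (zone 0), larger steps
# farther away (zones 1 and 2). All numeric values are illustrative assumptions.
def pno_step(v, i, v_prev, p_prev, steps=(0.05, 0.5, 1.5), thresholds=(0.2, 2.0)):
    """Return the next PV voltage reference and the current power."""
    p = v * i
    dp, dv = p - p_prev, v - v_prev
    slope = dp / dv if abs(dv) > 1e-9 else 0.0

    # choose the zone from |dP/dV|
    if abs(slope) < thresholds[0]:
        step = steps[0]          # zone 0: fine step near the MPP
    elif abs(slope) < thresholds[1]:
        step = steps[1]          # zone 1
    else:
        step = steps[2]          # zone 2: far from the MPP, move fast

    # classic P&O direction logic: keep going if power increased, else reverse
    direction = 1.0 if (dp > 0) == (dv > 0) else -1.0
    return v + direction * step, p

# Toy PV current curve (assumption), maximum power somewhere below 38 V
def pv_current(v):
    return max(0.0, 8.0 * (1.0 - (v / 38.0) ** 6))

v, v_prev = 20.0, 19.5
p_prev = v_prev * pv_current(v_prev)
for _ in range(40):
    i = pv_current(v)
    v_next, p_prev = pno_step(v, i, v_prev, p_prev)
    v_prev, v = v, v_next
print(f"settled near V = {v:.1f} V, P = {v * pv_current(v):.1f} W")
```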
8749 Inverse Heat Transfer Analysis of a Melting Furnace Using Levenberg-Marquardt Method
Authors: Mohamed Hafid, Marcel Lacroix
Abstract:
This study presents a simple inverse heat transfer procedure for predicting the wall erosion and the time-varying thickness of the protective bank that covers the inside surface of the refractory brick wall of a melting furnace. The direct problem is solved by using the Finite-Volume model. The melting/solidification process is modeled using the enthalpy method. The inverse procedure rests on the Levenberg-Marquardt method combined with the Broyden method. The effect of the location of the temperature sensors and of the measurement noise on the inverse predictions is investigated. Recommendations are made concerning the location of the temperature sensor.
Keywords: Melting furnace, inverse heat transfer, enthalpy method, Levenberg-Marquardt method.
8748 A Large-Eddy Simulation of Vortex Cell flow with Incoming Turbulent Boundary Layer
Authors: Arpiruk Hokpunna, Michael Manhart
Abstract:
We present a Large-Eddy Simulation of a vortex cell with a circular shape. The results show that the flow field can be subdivided into four important zones: the shear layer above the cavity, the stagnation zone, the vortex core in the cavity and the boundary layer along the wall of the cavity. It is shown that the vortex core consists of solid body rotation without much turbulence activity. The vortex is mainly driven by high energy packets that are driven into the cavity from the stagnation point region and by entrainment of fluid from the cavity into the shear layer. The physics of the boundary layer along the cavity's wall seems to be far from that of a canonical boundary layer, which might be a crucial point for modelling this flow.
Keywords: Turbulent flow, Large eddy simulations, boundary layer and cavity flow, vortex cell flow.
8747 On Tarski’s Type Theorems for L-Fuzzy Isotone and L-Fuzzy Relatively Isotone Maps on L-Complete Propelattices
Authors: František Včelař, Zuzana Pátíková
Abstract:
Recently, a new type of very general relational structures, the so-called (L-)complete propelattices, was introduced. These significantly generalize complete lattices and completely lattice L-ordered sets, because they do not assume the technically very strong property of transitivity. For these structures, the main part of the original Tarski’s fixed point theorem also holds for (L-fuzzy) isotone maps, i.e., the part concerning the existence of fixed points and the structure of their set. In this paper, fundamental properties of (L-)complete propelattices are recalled and the so-called L-fuzzy relatively isotone maps are introduced. For these maps it is proved that they also have fixed points in L-complete propelattices, even though the set of these fixed points need not have the expected analogous structure of a complete propelattice.
Keywords: Fixed point, L-complete propelattice, L-fuzzy (relatively) isotone map, residuated lattice, transitivity.
8746 Opto-Mechanical Characterization of Aspheric Lenses from the Hybrid Method
Authors: Aliouane Toufik, Hamdi Amine, Bouzid Djamel
Abstract:
Aspheric optical components are an alternative to conventional lenses in the implementation of imaging systems for the visible range. Spherical lenses produce aberrations and are therefore not able to focus all the light into a single point. Aspherical lenses, in contrast, correct aberrations and provide better resolution even in compact systems incorporating a small number of lenses.
Metrology of these components is very difficult, especially when resolution requirements increase and the insufficiency or complexity of conventional tools requires the development of specific characterization approaches.
This work addresses that problem: its objectives are the study and comparison of different methods used to measure the surface radii of hybrid aspherical lenses.
Keywords: Aspherical surface, Manufacture of lenses, precision molding, radius of curvature, roughness.
8745 Static Single Point Positioning Using The Extended Kalman Filter
Authors: I. Sarras, G. Gerakios, A. Diamantis, A. I. Dounis, G. P. Syrcos
Abstract:
Global Positioning System (GPS) technology is widely used today in the areas of geodesy and topography as well as in aeronautics, mainly for military purposes. Due to the military usage of GPS, full access to and use of this technology is denied to the civilian user, who must then work with a less accurate version. In this paper we focus on the estimation of the receiver coordinates (X, Y, Z) and clock bias (δtr) of a fixed point based on pseudorange measurements of a single GPS receiver. Utilizing the instantaneous coordinates of just 4 satellites and their clock offsets, and taking into account the atmospheric delays, we are able to derive a set of pseudorange equations. The estimation of the four unknowns (X, Y, Z, δtr) is achieved by introducing an extended Kalman filter that processes, off-line, all the data collected from the receiver. Higher position accuracy is attained by appropriate tuning of the filter noise parameters and by including other forms of biases.
Keywords: Extended Kalman filter, GPS, Pseudorange
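A compact sketch of this estimation setup is shown below: the state is [X, Y, Z, b] with b the clock bias expressed in metres, the measurement model is ρi = ‖si − p‖ + b, and the filter linearizes it with unit line-of-sight vectors. Satellite positions, noise levels, the rough initial guess, the simple process model and the omission of atmospheric corrections are all simplifying assumptions.

```python
# Compact sketch of the estimation idea: an extended Kalman filter over the
# static state x = [X, Y, Z, b] (b = receiver clock bias in metres), with
# pseudorange measurements rho_i = ||s_i - p|| + b. Satellite positions, noise
# levels and the process model are assumptions; atmospheric terms are omitted.
import numpy as np

def h(x, sats):
    """Predicted pseudoranges for state x and satellite positions sats (4x3)."""
    return np.linalg.norm(sats - x[:3], axis=1) + x[3]

def H_jac(x, sats):
    """Jacobian of h: each row is [(p - s_i)/||p - s_i||, 1]."""
    diff = x[:3] - sats
    unit = diff / np.linalg.norm(diff, axis=1, keepdims=True)
    return np.hstack([unit, np.ones((len(sats), 1))])

# Assumed satellite positions (m) and true receiver state used to simulate data
sats = np.array([[15600e3, 7540e3, 20140e3],
                 [18760e3, 2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3, 610e3, 18390e3]])
x_true = np.array([3.9e6, 3.0e6, 4.0e6, 85.0])

rng = np.random.default_rng(7)
x = np.array([4.0e6, 2.8e6, 3.8e6, 0.0])      # rough initial guess (assumption)
P = np.diag([1e12, 1e12, 1e12, 1e8])          # large initial uncertainty
Q = np.diag([1e-2] * 3 + [1.0])               # small process noise (static point)
R = np.eye(4) * 5.0**2                        # 5 m pseudorange noise

for _ in range(200):                          # off-line pass over 200 epochs
    z = h(x_true, sats) + 5.0 * rng.standard_normal(4)   # simulated measurements
    P = P + Q                                 # predict (state assumed constant)
    Hk = H_jac(x, sats)
    S = Hk @ P @ Hk.T + R
    K = P @ Hk.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - h(x, sats))              # update with the innovation
    P = (np.eye(4) - K @ Hk) @ P

print("position error (m):", np.round(x[:3] - x_true[:3], 2),
      " clock bias (m):", round(float(x[3]), 2))
```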
8744 Increased Signal to Noise Ratio in P300 Potentials by the Method of Coherent Self-Averaging in BCI Systems
Authors: Ricardo Espinosa
Abstract:
Coherent self-averaging (CSA) is a new method proposed in this work. It is applied to simulated event-related potentials (ERP) to detect the P300 wave, which is useful in brain-computer interface (BCI) systems. The CSA method cleans the signal of white noise in the time domain through successive averaging of a single signal. The method is compared with the traditional method, coherent (synchronized) averaging (CA), showing optimal results in improving the signal-to-noise ratio (SNR). The CSA method is easy to implement, robust, and applicable to any physiological time series contaminated with white noise.
Keywords: Evoked potentials, wave P300, Coherent Self-averaging, brain - computer interface (BCI).
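Since the paper's exact CSA formulation is not reproduced here, the sketch below only contrasts conventional coherent averaging of stimulus-locked trials with a simplified self-averaging of successive segments of a single recording, and reports the SNR of each. The synthetic P300-like template and noise level are assumptions.

```python
# Illustration of the averaging idea. Conventional coherent averaging (CA)
# averages many stimulus-locked trials; the self-averaging variant sketched here
# averages successive segments of a single noisy recording. This is a simplified
# stand-in for the paper's CSA method; template and noise level are assumptions.
import numpy as np

fs, n = 250, 250                           # 1-second epochs at 250 Hz
t = np.arange(n) / fs
p300 = 5.0 * np.exp(-0.5 * ((t - 0.3) / 0.05) ** 2)   # clean P300-like wave (uV)

rng = np.random.default_rng(3)
def snr_db(clean, noisy):
    return 10 * np.log10(np.sum(clean**2) / np.sum((noisy - clean) ** 2))

# --- coherent averaging: 30 separate stimulus-locked trials -------------------
trials = p300 + 4.0 * rng.standard_normal((30, n))
ca = trials.mean(axis=0)

# --- self-averaging (sketch): one long recording, sliced into 30 segments -----
single = np.tile(p300, 30) + 4.0 * rng.standard_normal(30 * n)
csa = single.reshape(30, n).mean(axis=0)

print(f"single-trial SNR : {snr_db(p300, trials[0]):5.1f} dB")
print(f"CA  (30 trials)  : {snr_db(p300, ca):5.1f} dB")
print(f"CSA (30 segments): {snr_db(p300, csa):5.1f} dB")
```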
8743 Analysing of Indoor Radio Wave Propagation on Ad-hoc Network by Using TP-LINK Router
Authors: Khine Phyu, Aung Myint Aye
Abstract:
This paper presents the results of a measurement campaign carried out at a carrier frequency of 2.4 GHz with the help of a TP-LINK router in indoor line-of-sight (LOS) scenarios. Firstly, radio wave propagation is analyzed in several rooms with routers in a point-to-point ad hoc network. Then, floor attenuation is determined for three floors in the experimental region. The free space model and dual slope models are modified by considering the influence of corridor conditions on each floor. Using these models, indoor signal attenuation can be estimated when modeling indoor radio wave propagation. These results and modified models can also be used in planning the networks of future personal communications services.
Keywords: radio wave signal analyzing, LOS radio wave propagation, indoor radio wave propagation, free space model, two-ray model, indoor attenuation.
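The two baseline models mentioned above can be written down compactly; the sketch below implements free-space path loss and a dual-slope model with an added floor attenuation factor. The breakpoint distance, slope exponents and per-floor attenuation are assumptions, not the measured values of the campaign.

```python
# Sketch of the two indoor models the abstract modifies: free-space path loss and
# a dual-slope model with an added floor attenuation factor (FAF). The breakpoint
# distance, exponents and per-floor attenuation are illustrative assumptions.
import math

F_MHZ = 2400.0          # 2.4 GHz carrier

def free_space_loss(d_m, f_mhz=F_MHZ):
    """Free-space path loss in dB (distance in metres, frequency in MHz)."""
    return 20 * math.log10(d_m) + 20 * math.log10(f_mhz) - 27.55

def dual_slope_loss(d_m, d_break=10.0, n1=2.0, n2=3.5, floors=0, faf_db=15.0):
    """Dual-slope model: exponent n1 up to the breakpoint, n2 beyond, plus FAF."""
    pl0 = free_space_loss(1.0)                     # reference loss at 1 m
    if d_m <= d_break:
        pl = pl0 + 10 * n1 * math.log10(d_m)
    else:
        pl = (pl0 + 10 * n1 * math.log10(d_break)
              + 10 * n2 * math.log10(d_m / d_break))
    return pl + floors * faf_db                    # add per-floor attenuation

for d in (2, 10, 30):
    print(f"d={d:3d} m  FSPL={free_space_loss(d):5.1f} dB  "
          f"dual-slope (1 floor)={dual_slope_loss(d, floors=1):5.1f} dB")
```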
8742 A New Inversion-free Method for Hermitian Positive Definite Solution of Matrix Equation
Authors: Minghui Wang, Juntao Zhang
Abstract:
An inversion-free iterative algorithm is presented for solving a nonlinear matrix equation with a stepsize parameter t. The existence of the maximal solution is discussed in detail, and a method for finding it is proposed. Finally, two numerical examples are reported that show the efficiency of the method.
Keywords: Inversion-free method, Hermitian positive definite solution, Maximal solution, Convergence.
8741 Cascaded Neural Network for Internal Temperature Forecasting in Induction Motor
Authors: Hidir S. Nogay
Abstract:
In this study, two systems were created to predict the interior temperature of an induction motor. One of them consisted of a simple ANN model with two layers, ten input parameters and one output parameter. The other consisted of eight ANN models connected to each other in cascade. The cascaded ANN system has 17 inputs. The main reason for using a cascaded system in this study is to achieve more accurate estimation by increasing the number of inputs to the ANN system. The cascaded ANN system is compared with the simple conventional ANN model to demonstrate the mentioned advantages. The dataset was obtained from experimental applications. A small part of the dataset was used to obtain more understandable graphs. The number of data points is 329. 30% of the data was used for testing and validation. Test data and validation data were determined for each ANN model separately, and the reliability of each model was tested. As a result of this study, it was found that the cascaded ANN system produced more accurate estimates than the conventional ANN model.
Keywords: Cascaded neural network, internal temperature, three-phase induction motor, inverter.
8740 Wavelet and K-L Seperability Based Feature Extraction Method for Functional Data Classification
Authors: Jun Wan, Zehua Chen, Yingwu Chen, Zhidong Bai
Abstract:
This paper proposes a novel feature extraction method, based on the Discrete Wavelet Transform (DWT) and K-L Separability (KLS), for the classification of Functional Data (FD). This method combines the decorrelation and reduction properties of the DWT with the additive independence property of KLS, which is helpful for extracting classification features of FD. It is an advanced variant of the popular wavelet-based shrinkage method for functional data reduction and classification. A theoretical analysis is given in the paper to prove the consistent convergence property, and a simulation study is also carried out to compare the proposed method with the earlier shrinkage methods. The experimental results show that this method has advantages in improving classification efficiency, precision and robustness.
Keywords: classification, functional data, feature extraction, K-L separability, wavelet.
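As a rough illustration of the idea, the sketch below decomposes each functional observation with PyWavelets and ranks the coefficients by a simple Fisher-type separability score, used here as a stand-in for the paper's K-L separability criterion; the top-ranked coefficients become the classification features. The wavelet, decomposition level and number of retained coefficients are assumptions.

```python
# Sketch of DWT-plus-separability feature extraction for functional data. Each
# curve is decomposed with a discrete wavelet transform (PyWavelets); the
# coefficients are ranked by a Fisher-type class-separability score (stand-in
# for the paper's K-L separability). Wavelet, level and top-k are assumptions.
import numpy as np
import pywt

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 128)

# Toy two-class functional data: bumps at different locations plus noise
class0 = np.exp(-((t - 0.3) / 0.05) ** 2) + 0.3 * rng.standard_normal((40, 128))
class1 = np.exp(-((t - 0.6) / 0.05) ** 2) + 0.3 * rng.standard_normal((40, 128))
X = np.vstack([class0, class1])
y = np.array([0] * 40 + [1] * 40)

def dwt_coeffs(row, wavelet="db4", level=3):
    """Flatten all DWT coefficients of one curve into a single vector."""
    return np.concatenate(pywt.wavedec(row, wavelet, level=level))

C = np.array([dwt_coeffs(row) for row in X])

# Fisher-type separability score per coefficient: between-class over within-class
m0, m1 = C[y == 0].mean(axis=0), C[y == 1].mean(axis=0)
v0, v1 = C[y == 0].var(axis=0), C[y == 1].var(axis=0)
score = (m0 - m1) ** 2 / (v0 + v1 + 1e-12)

top_k = np.argsort(score)[::-1][:10]       # keep the 10 most separable coefficients
features = C[:, top_k]
print("feature matrix shape:", features.shape)   # (80, 10)
```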
8739 Modified Levenberg-Marquardt Method for Neural Networks Training
Authors: Amir Abolfazl Suratgar, Mohammad Bagher Tavakoli, Abbas Hoseinabadi
Abstract:
In this paper, a modification of the Levenberg-Marquardt algorithm for MLP neural network learning is proposed. The proposed algorithm has good convergence and reduces the amount of oscillation in the learning procedure. An example is given to show the usefulness of this method. Finally, a simulation verifies the results of the proposed method.
Keywords: Levenberg-Marquardt, modification, neural network, variable learning rate.
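The abstract does not spell out the modification itself, so the sketch below shows only the standard Levenberg-Marquardt update it starts from, Δw = −(JᵀJ + μI)⁻¹Jᵀe, with the damping μ adapted according to whether the sum-of-squares error decreased. The single-neuron model, data and adaptation factors are assumptions.

```python
# Sketch of the standard Levenberg-Marquardt (LM) update for network training:
# delta_w = -(J^T J + mu*I)^{-1} J^T e, with mu adapted on acceptance/rejection.
# The tiny single-neuron "network", data and factors are assumptions; the paper's
# specific modification is not reproduced here.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, (100, 2))
y_true = np.tanh(X @ np.array([1.5, -0.8]) + 0.3)      # target function

def forward(w, X):
    return np.tanh(X @ w[:2] + w[2])                   # one tanh neuron

def jacobian(w, X):
    """J[i, j] = d residual_i / d w_j for residuals e = y_hat - y."""
    z = X @ w[:2] + w[2]
    dz = 1.0 - np.tanh(z) ** 2                         # derivative of tanh
    return np.column_stack([X[:, 0] * dz, X[:, 1] * dz, dz])

w = rng.standard_normal(3) * 0.1
mu = 1e-2
for epoch in range(50):
    e = forward(w, X) - y_true
    J = jacobian(w, X)
    step = np.linalg.solve(J.T @ J + mu * np.eye(3), J.T @ e)
    w_new = w - step
    if np.sum((forward(w_new, X) - y_true) ** 2) < np.sum(e ** 2):
        w, mu = w_new, mu * 0.5                        # accept step, trust more
    else:
        mu *= 2.0                                      # reject step, damp more

print("learned weights:", np.round(w, 3),
      " final SSE:", round(float(np.sum((forward(w, X) - y_true) ** 2)), 5))
```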
8738 Derivative Spectrophotometry Applied to the Determination of Triprolidine Hydrochloride and Pseudoephedrine Hydrochloride in Tablets and Dissolution Testing
Authors: L. Sriphong, A. Chaidedgumjorn, K. Chaisuroj
Abstract:
A spectrophotometric method was developed for the simultaneous quantification of pseudoephedrine hydrochloride (PSE) and triprolidine hydrochloride (TRI) using the second derivative method (zero-crossing technique). The second derivative amplitudes of PSE and TRI were measured at 271 and 321 nm, respectively. The calibration curves were linear in the range of 200 to 1,000 µg/ml for PSE and 10 to 50 µg/ml for TRI. The method was validated for specificity, accuracy, precision, limit of detection and limit of quantitation. The proposed method was applied to the assay and dissolution testing of PSE and TRI in commercial tablets without any chemical separation. The results were compared with those obtained by the official USP31 method, and statistical tests showed that there is no significant difference between the methods at the 95% confidence level. The proposed method is simple, rapid and suitable for routine quality control application.
Keywords: Triprolidine, Pseudoephedrine, Derivative spectrophotometry, Dissolution testing.
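A schematic of the zero-crossing technique is sketched below: the second-derivative spectrum of the mixture is read at a wavelength where the interfering component's second derivative is close to zero, and a linear calibration converts that amplitude to concentration. The Gaussian band shapes, wavelengths and concentrations are synthetic assumptions and do not reproduce the reported 271/321 nm working points.

```python
# Sketch of the zero-crossing second-derivative idea: read the mixture's second
# derivative at a wavelength where the interfering component's second derivative
# is ~0, so the amplitude responds to one analyte only, then apply a linear
# calibration. The synthetic spectra and concentrations are assumptions.
import numpy as np

wl = np.arange(220.0, 360.0, 0.5)                     # wavelength grid, nm

def band(center, width, strength=1.0):
    return strength * np.exp(-0.5 * ((wl - center) / width) ** 2)

spec_PSE = band(257, 9)                               # unit spectra (assumed shapes)
spec_TRI = band(290, 14)

def second_derivative(spectrum):
    return np.gradient(np.gradient(spectrum, wl), wl)

# Pick a working wavelength where the *other* component's 2nd derivative ~ 0
d2_TRI = second_derivative(spec_TRI)
window = (wl > 260) & (wl < 285)
work_wl = wl[window][np.argmin(np.abs(d2_TRI[window]))]

# Calibration: mixtures with known PSE concentration, fixed TRI background
conc = np.array([200, 400, 600, 800, 1000], dtype=float)      # ug/mL (assumed)
amps = [second_derivative(c / 1000 * spec_PSE + 0.05 * spec_TRI)[wl == work_wl][0]
        for c in conc]
slope, intercept = np.polyfit(conc, amps, 1)

# Quantify an "unknown" mixture (true PSE level corresponds to 550 ug/mL)
unknown = 0.55 * spec_PSE + 0.05 * spec_TRI
amp_u = second_derivative(unknown)[wl == work_wl][0]
print(f"zero-crossing wavelength ~ {work_wl:.1f} nm, "
      f"estimated PSE ~ {(amp_u - intercept) / slope:.0f} ug/mL")
```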
8737 The Effects of Knowledge Management on Human Capital towards Organizational Innovation
Authors: Wan Norhayate Wan Daud, Fakhrul Anwar Zainol, Maslina Mansor
Abstract:
The study was conducted to produce case studies from the standpoint of Malaysian public universities on the East Coast of Malaysia. The aim of this study is to analyze the effects of knowledge management on human capital toward organizational innovation. The focus of this study is on the management members of the faculties of three Malaysian public universities in the East Coast states of Peninsular Malaysia. Respondents who agreed to participate further in the research were invited to a one-hour face-to-face semi-structured, in-depth interview. As a result, the sample for this study consisted of three deans of Faculties of Management. Lastly, this study recommends a framework for organizational innovation in Malaysian public universities.
Keywords: Human Capital, Knowledge Management, Organizational Innovation, Public University.
8736 Three Dimensional Finite Element Analysis of Functionally Graded Radiation Shielding Nanoengineered Sandwich Composites
Authors: Nasim Abuali Galehdari, Thomas J. Ryan, Ajit D. Kelkar
Abstract:
In recent years, nanotechnology has played an important role in the design of efficient radiation shielding polymeric composites. It is well known that high loading of nanomaterials with radiation absorption properties can enhance the radiation attenuation efficiency of shielding structures. However, due to difficulties in dispersing nanomaterials into polymer matrices, there has been a limit on the loading percentage of nanoparticles in the polymer matrix. Therefore, the objective of the present work is to provide a methodology to fabricate and then characterize functionally graded radiation shielding structures, which can provide efficient radiation absorption along with good structural integrity. Sandwich structures composed of Ultra High Molecular Weight Polyethylene (UHMWPE) fabric as face sheets and a functionally graded epoxy nanocomposite as core material were fabricated. A method to fabricate a functionally graded core panel with a controllable gradient dispersion of nanoparticles is discussed. In order to optimize the design of functionally graded sandwich composites and to analyze the stress distribution throughout the sandwich composite thickness, a finite element method was used. The sandwich panels were discretized using three-dimensional 8-noded brick elements. Classical laminate analysis in conjunction with simplified micromechanics equations was used to obtain the properties of the face sheets. The presented finite element model provides insight into the deformation and damage mechanics of functionally graded sandwich composites from the structural point of view.
Keywords: Nanotechnology, functionally graded material, radiation shielding, sandwich composites, finite element method.
8735 3D Modeling Approach for Cultural Heritage Structures: The Case of Virgin of Loreto Chapel in Cusco, Peru
Authors: Rony Reátegui, Cesar Chácara, Benjamin Castañeda, Rafael Aguilar
Abstract:
Nowadays, Heritage Building Information Modeling (HBIM) is considered an efficient tool to represent and manage information of Cultural Heritage (CH). The basis of this tool relies on a 3D model generally obtained from a Cloud-to-BIM procedure. There are different methods to create an HBIM model, ranging from manual modeling based on the point cloud to the automatic detection of shapes and the creation of objects. The selection of these methods depends on the desired Level of Development (LOD), Level of Information (LOI) and Grade of Generation (GOG), as well as on the availability of commercial software. This paper presents the 3D modeling of a stone masonry chapel using the Recap Pro, Revit and Dynamo interfaces, following a three-step methodology. The first step consists of the manual modeling of simple structural (e.g., regular walls, columns, floors, wall openings) and architectural (e.g., cornices, moldings and other minor details) elements using the point cloud as reference. Then, Dynamo is used for generative modeling of complex structural elements such as vaults, infills and domes. Finally, semantic information (e.g., materials, typology, state of conservation) and pathologies are added within the HBIM model as text parameters and generic models’ families, respectively. The application of this methodology allows the documentation of CH following a relatively simple process that ensures adequate LOD, LOI and GOG levels. In addition, the easy implementation of the method, as well as the fact that only one BIM software with its respective plugin is used for the scan-to-BIM modeling process, means that this methodology can be adopted by a larger number of users with intermediate knowledge and limited resources, since the BIM software used has a free student license.
Keywords: Cloud-to-BIM, cultural heritage, generative modeling, HBIM, parametric modeling, Revit.
8734 Forecasting Issues in Energy Markets within a Reg-ARIMA Framework
Authors: Ilaria Lucrezia Amerise
Abstract:
Electricity markets throughout the world have undergone substantial changes. Accurate, reliable, clear and comprehensible modeling and forecasting of different variables (loads and prices in the first instance) have achieved increasing importance. In this paper, we describe the current state of the art, focusing on reg-SARMA methods, which have proven to be flexible enough to accommodate electricity price/load behavior satisfactorily. More specifically, we will discuss: 1) the dichotomy between point and interval forecasts; 2) the difficult choice between stochastic predictors (e.g. climatic variation) and deterministic predictors (e.g. calendar variables); 3) the choice between modelling a single aggregate time series and creating separate, potentially different, models for sub-series. The noteworthy point that we would like to highlight is that prices and loads require different approaches that appear irreconcilable, even though they must be reconciled for the interests and activities of energy companies.
Keywords: Forecasting problem, interval forecasts, time series, electricity prices, reg-plus-SARMA methods.
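A minimal sketch of a reg-plus-SARMA specification is given below, using statsmodels' SARIMAX fitted to a synthetic hourly load-like series with calendar dummies as exogenous regressors. The series, the (2,0,1)×(1,0,1,24) orders and the chosen regressors are illustrative assumptions.

```python
# Minimal sketch of a reg-plus-SARMA model in the spirit of the abstract:
# SARIMAX fitted to an hourly load-like series with calendar (deterministic)
# regressors. The synthetic series, orders and dummy set are assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(8)
idx = pd.date_range("2023-01-01", periods=24 * 30, freq="h")    # 30 days, hourly

# Synthetic "load": daily cycle + weekend drop + AR(1) noise
daily = 10 * np.sin(2 * np.pi * idx.hour / 24)
weekend = -5.0 * (idx.dayofweek >= 5)
noise = np.zeros(len(idx))
for t in range(1, len(idx)):
    noise[t] = 0.7 * noise[t - 1] + rng.standard_normal()
y = pd.Series(100 + daily + weekend + noise, index=idx)

# Calendar regressors: weekend flag and hour-of-day harmonics
exog = pd.DataFrame({"weekend": (idx.dayofweek >= 5).astype(float),
                     "hour_sin": np.sin(2 * np.pi * idx.hour / 24),
                     "hour_cos": np.cos(2 * np.pi * idx.hour / 24)}, index=idx)

y_train, y_test = y[:-24], y[-24:]                   # hold out the last day
x_train, x_test = exog[:-24], exog[-24:]

model = SARIMAX(y_train, exog=x_train, order=(2, 0, 1),
                seasonal_order=(1, 0, 1, 24), trend="c")
res = model.fit(disp=False)

fc = res.forecast(steps=24, exog=x_test)             # point forecast, last day
mae = float(np.abs(fc.values - y_test.values).mean())
print("MAE over the held-out day:", round(mae, 2))
```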