Search results for: Emerging methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4321

3121 Using Non-Linear Programming Techniques in Determination of the Most Probable Slip Surface in 3D Slopes

Authors: M. M. Toufigh, A. R. Ahangarasr, A. Ouria

Abstract:

Among the many methods used to optimize engineering problems, mathematical (numerical) optimization techniques are particularly important because they are easy to apply and consistent with most engineering formulations. Many studies have addressed the stability analysis of three-dimensional (3D) slopes, the corresponding probable slip surfaces, and the determination of factors of safety, but in most of them the force equilibrium equations are considered in only two directions, as in simplified 2D methods. In other words, to reduce the mathematical effort and for simplicity, the force equilibrium equation in the third direction is omitted. Only a few previous studies consider this point, and most of them report only a factor of safety without attempting to locate the most probable slip surface. In this study, the shapes of the slip surfaces are modeled and the safety factors are calculated with the force equilibrium equations satisfied in all three directions and the moment equilibrium equation satisfied in the slip direction; nonlinear programming techniques are then used to determine the shape of the most probable slip surface. The model used in this study is a 3D model composed of three upper surfaces that can cover all defined probable slip surfaces. The meshing process produces prismatic elements with quadrilateral cross sections, and the safety factor is defined on the quadrilateral surface at the base of each element, which forms part of the whole slip surface. The most probable slip surface is found by nonlinear programming, in which the objective function to be optimized is the factor of safety, expressed as a function of the soil properties and the coordinates of the nodes on the probable slip surface. The main reason for using a nonlinear programming method in this research is its quick convergence to the desired solution. The final results show good agreement with the classical 2D methods used previously, as well as a reasonable convergence speed.
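
The optimization set-up described above, a factor of safety minimized over the coordinates of the slip-surface nodes, can be illustrated with a small sketch. The toy below uses an ordinary method-of-slices factor of safety in 2D and SciPy's SLSQP solver; the soil properties, slope geometry, 2D simplification and bounds are all assumptions for illustration and not the authors' 3D formulation.

```python
# Toy sketch: most critical slip surface by nonlinear programming (assumed 2D slice model).
import numpy as np
from scipy.optimize import minimize

c, phi, gamma = 10e3, np.radians(30), 18e3     # assumed cohesion [Pa], friction angle, unit weight [N/m^3]
x = np.linspace(0.0, 20.0, 21)                 # horizontal slice boundaries [m]
ground = 0.5 * x                               # ground surface of a 1:2 slope [m]

def factor_of_safety(z):
    """Ordinary-method-of-slices FoS for a slip surface given by interior nodal elevations z."""
    zs = np.concatenate(([ground[0]], z, [ground[-1]]))
    dx = np.diff(x)
    theta = np.arctan2(np.diff(zs), dx)                        # base inclination of each slice
    h = 0.5 * (ground[:-1] + ground[1:]) - 0.5 * (zs[:-1] + zs[1:])
    W = gamma * np.clip(h, 0.0, None) * dx                     # slice weights
    base_len = dx / np.cos(theta)
    resisting = np.sum(c * base_len + W * np.cos(theta) * np.tan(phi))
    driving = np.sum(W * np.sin(theta))
    if driving <= 0.0:
        return 10.0                                            # discard kinematically meaningless surfaces
    return resisting / driving

z0 = 0.5 * ground[1:-1] - 2.0                                  # initial surface below ground level
bounds = [(g - 15.0, g - 0.1) for g in ground[1:-1]]
res = minimize(factor_of_safety, z0, method="SLSQP", bounds=bounds)
print("critical factor of safety (toy model):", round(float(res.fun), 3))
```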

Keywords: Non-linear programming, numerical optimization, slope stability, 3D analysis.

3120 Effect of Open-Ended Laboratory on Learners' Performance in Environmental Engineering Course: Case Study of Civil Engineering at Universiti Malaysia Sabah

Authors: N. Bolong, J. Makinda, I. Saad

Abstract:

Laboratory activities have produced clear benefits in student learning. With the current drive toward new technology resources and evolving educational methods, laboratory learning and teaching are being renewed for both learners and educators. To enhance learning outcomes in laboratory work, particularly in engineering practice and testing, hands-on learning by instruction alone may not be sufficient. This paper describes and compares the techniques and implementation of traditional (expository) and open-ended (problem-based) laboratories for two consecutive cohorts studying an environmental laboratory course in the civil engineering program. The transition from the traditional to the problem-based approach was investigated in terms of course assessment, a student feedback survey, course outcome measurement, and student grades. Students demonstrated better performance in their grades and a 12% increase in the course outcome (CO) under the problem-based, open-ended laboratory style compared with the traditional method, although in their feedback students responded less favourably to it.

Keywords: Engineering education, open-ended laboratory, environmental engineering lab.

3119 Extraction and Characterisation of Protein Fraction from Date Palm Fruit Seeds

Authors: Ibrahim A. Akasha, Lydia Campbell, Stephen R. Euston

Abstract:

Date palm (Phoenix dactylifera L.) seeds are a waste stream that is considered a major problem for the food industry, yet they contain potentially useful protein (10-15% of the whole date's weight). Global production, industrialisation, and utilisation of dates are increasing steadily. Worldwide production of date palm fruit rose from 1.8 million tonnes in 1961 to 6.9 million tonnes in 2005; of this global production, almost 800,000 tonnes of date palm seeds are not currently used [1]. The current study was carried out to convert date palm seeds into a useful protein powder. Compositional analysis showed that the seeds contained protein and fat at 5.64% and 8.14%, respectively. Several laboratory-scale methods were used to extract proteins from the seeds and produce a high-protein powder, including simple acid or alkali extraction, with or without ultrafiltration, and phenol/trichloroacetic acid with acetone precipitation (Ph/TCA method). The highest protein content (68%) was obtained by the Ph/TCA method with a material yield of 44%, whereas alkali extraction alone gave the lowest protein content (8%) with a yield of 32%.

Keywords: Date palm seed, Phoenix dactylifera L., extraction of date palm seed protein.

3118 Scientific Production on Lean Supply Chains Published in Journals Indexed by SCOPUS and Web of Science Databases: A Bibliometric Study

Authors: T. Botelho de Sousa, F. Raphael Cabral Furtado, O. Eduardo da Silva Ferri, A. Batista, W. Augusto Varella, C. Eduardo Pinto, J. Mimar Santa Cruz Yabarrena, S. Gibran Ruwer, F. Müller Guerrini, L. Adalberto Philippsen Júnior

Abstract:

Lean Supply Chain Management (LSCM) is an emerging research field in Operations Management (OM). As a strategic model focused on reducing cost and waste while fulfilling customer needs, LSCM attracts great interest among researchers and practitioners. The purpose of this paper is to present an overview of the Lean Supply Chain literature, based on a bibliometric analysis of 57 papers published in journals indexed by the SCOPUS and/or Web of Science databases. The results indicate that the last three years (2015, 2016, and 2017) were the most productive for LSCM research, especially in the journals Supply Chain Management and International Journal of Lean Six Sigma. India, the USA, and the UK are the most productive countries; nevertheless, social network analysis detected cross-country collaborative studies, a research practice that appears to play an increasingly important role in LSCM research. Despite limitations such as the restricted set of indexed journals, the bibliometric analysis helps to illuminate ongoing efforts in LSCM research, including the most used technical procedures and the collaboration network, and it reveals important research gaps, especially for researchers in developing countries.

Keywords: Lean supply chains, bibliometric study, SCOPUS, Web of Science.

3117 Assessment of Negative Impacts Affecting Public Transportation Modes and Infrastructure in Burgersfort Town towards Building Urban Sustainability

Authors: Ntloana Hlabishi Peter

Abstract:

The availability of public transportation modes and quality infrastructure is a pressing issue that affects urban sustainability. Public transportation is indispensable for providing adequate mobility at an affordable price and promotes reliance on public transport. Burgersfort town's urban public transportation infrastructure is in a critical condition, which affects both the bus and taxi public transport modes and the existing infrastructure. The municipality is regarded as one of the mining towns in Limpopo Province, given its mining activities and the proposed establishment of a Special Economic Zone (SEZ). The aim of the study is to assess the efficacy of the current public transportation infrastructure and to propose recommendations that will unlock the possibility of a future, supportable public transportation system. Key Informant Interviews (KII) were used to acquire the views of commuters and the stakeholders involved; the KII incorporated three questions relating to the services rendered in public transportation. Literature on public transportation modes and infrastructure revealed the importance of public transportation infrastructure, and relevant legislation concerning public transport infrastructure was reviewed. The findings revealed poor conditions at the public transportation ranks and inadequate parking space for public transportation modes. All (100%) of the people interviewed were dissatisfied with the condition of the public transportation infrastructure, and 100% were dissatisfied with the services offered by the public transportation sector. The findings also revealed that the municipality is the main actor able to upgrade the existing public transportation conditions. The study recommends that an intermodal transportation facility be established to resolve the emerging challenges.

Keywords: Public transportation, modes, infrastructure, urban sustainability.

3116 Investigation of the Capability of RELAP5 to Solve Complex Fuel Geometry

Authors: D. Abdelrazek, M. NaguibAly, A. A. Badawi, Asmaa G. Abo Elnour, A. A. El-Kafas

Abstract:

This work is developed within IAEA Coordinated Research Program 1496, “Innovative methods in research reactor analysis: Benchmark against experimental data on neutronics and thermal-hydraulic computational methods and tools for operation and safety analysis of research reactors”.

The study investigates the capability of the RELAP5/Mod3.4 code to handle complex fuel geometry. Its results are compared to those of PARET, a code belonging to the MTR-PC package that is commonly used for the thermal-hydraulic analysis of research reactors.

The WWR-SM reactor at the Institute of Nuclear Physics (INP) in the Republic of Uzbekistan is simulated using both PARET and RELAP5 at steady state. Results from the two codes are compared.

The RELAP5 code succeeded in modelling the complex fuel geometry, whereas the PARET code required additional calculations to obtain the final result. Although the final results from PARET are more accurate, the small differences between the two sets of results make RELAP5 the recommended code for complex fuel assemblies.

Keywords: Complex fuel geometry, PARET, RELAP5, WWR-SM reactor.

3115 Automated Process Quality Monitoring with Prediction of Fault Condition Using Measurement Data

Authors: Hyun-Woo Cho

Abstract:

Detection of incipient abnormal events is important for improving the safety and reliability of machine operations and for reducing losses caused by failures. Improper set-up or misalignment of parts often leads to severe problems in many machines, so building models that predict faulty conditions is essential for deciding when to perform machine maintenance. This paper presents a multivariate calibration monitoring approach based on statistical analysis of machine measurement data. The calibration model is used to predict two faulty conditions from historical reference data. The approach uses genetic algorithm (GA) based variable selection, and the predictive performance of several prediction methods is evaluated on real data. The results show that a calibration model based on supervised probabilistic principal component analysis (SPPCA) yielded the best performance in this work. By adopting a proper variable selection scheme, the prediction performance of calibration models can be improved by excluding non-informative variables from the model building step.
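
As a hedged sketch of the GA-based variable selection mentioned above (on synthetic data, with a plain ridge-regression calibration model standing in for SPPCA, and deliberately simple GA operators), the selection mask with the best validation error survives:

```python
# Sketch: GA-based variable selection for a calibration model (synthetic data, assumed settings).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))                                    # 30 candidate measurement variables
y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=200)    # only the first 5 are informative

def fitness(mask):
    """Negative validation error of a ridge model built on the selected variables."""
    if mask.sum() == 0:
        return -np.inf
    Xtr, Xva, ytr, yva = X[:150, mask], X[150:, mask], y[:150], y[150:]
    w = np.linalg.solve(Xtr.T @ Xtr + 1e-3 * np.eye(mask.sum()), Xtr.T @ ytr)
    return -np.mean((Xva @ w - yva) ** 2)

pop = rng.integers(0, 2, size=(40, 30)).astype(bool)              # population of selection masks
for _ in range(50):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-20:]]                       # truncation selection
    cuts = rng.integers(1, 29, size=20)
    children = np.array([np.r_[parents[i, :c], parents[(i + 1) % 20, c:]]
                         for i, c in enumerate(cuts)])            # one-point crossover
    children ^= rng.random(children.shape) < 0.02                 # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected variables:", np.flatnonzero(best))
```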

Keywords: Prediction, operation monitoring, on-line data, nonlinear statistical methods, empirical model.

3114 Detection of Actuator Faults for an Attitude Control System using Neural Network

Authors: S. Montenegro, W. Hu

Abstract:

The objective of this paper is to develop a neural-network-based residual generator to detect actuator faults in the attitude control system (ACS) of a specific communication satellite. First, a dynamic multilayer perceptron with dynamic neurons is used; each such neuron combines a second-order linear Infinite Impulse Response (IIR) filter with a nonlinear activation function with adjustable parameters. Second, the network parameters are adjusted to minimize a performance index defined by the output estimation error, using input-output data collected from the specific ACS. The proposed dynamic neural network is then trained and applied to detect faults injected into the momentum wheel, which is the main actuator in the satellite's normal mode. The performance and capabilities of the proposed network are tested and compared with a conventional model-based observer residual, showing the differences between the two methods and indicating the benefit of the proposed algorithm for knowing the real status of the momentum wheel. Finally, the application of the methods in a satellite ground station is discussed.
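
A minimal sketch of the dynamic neuron described above, a second-order IIR filter followed by a nonlinear activation with an adjustable slope, is given below; the filter coefficients and the tanh activation are assumed for illustration, and the training of these parameters is not shown.

```python
# Sketch of one "dynamic neuron": 2nd-order IIR filter + adjustable-slope activation (assumed values).
import numpy as np

def dynamic_neuron(u, b, a, slope=1.0):
    """y_lin[k] = b0*u[k] + b1*u[k-1] + b2*u[k-2] - a1*y_lin[k-1] - a2*y_lin[k-2]; output = tanh(slope*y_lin)."""
    y_lin = np.zeros_like(u)
    for k in range(len(u)):
        acc = b[0] * u[k]
        if k >= 1:
            acc += b[1] * u[k - 1] - a[0] * y_lin[k - 1]
        if k >= 2:
            acc += b[2] * u[k - 2] - a[1] * y_lin[k - 2]
        y_lin[k] = acc
    return np.tanh(slope * y_lin)

u = np.sin(np.linspace(0.0, 10.0, 200))          # stand-in for a measured ACS signal
y = dynamic_neuron(u, b=[0.2, 0.1, 0.05], a=[-0.5, 0.1])
print(y[:5].round(4))
```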

Keywords: Satellite, Attitude Control, Momentum Wheel, Neural Network, Fault Detection.

3113 A Kernel Based Rejection Method for Supervised Classification

Authors: Abdenour Bounsiar, Edith Grall, Pierre Beauseroy

Abstract:

In this paper we are interested in classification problems with a performance constraint on the error probability. In such problems, if the constraint cannot be satisfied, a rejection option is introduced. For binary classification, a number of SVM-based methods with a rejection option have been proposed over the past few years, all of which use two thresholds on the SVM output. However, in previous work we have shown on synthetic data that using thresholds on the output of the optimal SVM may lead to poor results for classification tasks with a performance constraint. In this paper a new method for supervised classification with a rejection option is proposed. It consists of two classifiers jointly optimized to minimize the rejection probability subject to a given constraint on the error rate. The method uses a new kernel-based linear learning machine that we recently presented; this learning machine is characterized by its simplicity and high training speed, which makes the simultaneous optimization of the two classifiers computationally reasonable. The proposed classification method with a rejection option is compared to an SVM-based rejection method from the recent literature, and experiments show the superiority of the proposed method.
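
For context, the two-threshold rejection baseline that the paper argues against can be sketched in a few lines: scores from a trained SVM are compared with a lower and an upper threshold and everything in between is rejected. This is a hedged illustration on synthetic data with scikit-learn, not the authors' jointly optimized pair of classifiers; the threshold values are assumptions that would normally be tuned to the error constraint.

```python
# Sketch: reject option via two thresholds on an SVM score (the baseline, not the proposed method).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=5, random_state=0)
clf = SVC(kernel="rbf").fit(X[:400], y[:400])
score = clf.decision_function(X[400:])
y_test = y[400:]

t_low, t_high = -0.3, 0.3                      # assumed thresholds
accept = (score <= t_low) | (score >= t_high)  # reject everything between the two thresholds
pred = (score >= t_high).astype(int)

error_on_accepted = np.mean(pred[accept] != y_test[accept])
rejection_rate = 1.0 - accept.mean()
print(f"error on accepted: {error_on_accepted:.3f}, rejection rate: {rejection_rate:.3f}")
```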

Keywords: rejection, Chow's rule, error-reject tradeoff, Support Vector Machine.

3112 Creative Skills Supported by Multidisciplinary Learning: Case Innovation Course at the Seinäjoki University of Applied Sciences

Authors: Satu Lautamäki

Abstract:

This paper presents findings from a multidisciplinary bachelor-level course implemented at Seinäjoki University of Applied Sciences, Finland. The course aims to develop students' innovative thinking through projects given by companies, by using design thinking methods as a tool for creativity, and by integrating students into multidisciplinary teams working on those projects. The course is obligatory for all first-year bachelor students across four faculties (business and culture, food and agriculture, health care and social work, and technology). It involves around 800 students and 30 pedagogical coaches and is implemented as an intensive one-week course each year. The paper discusses the pedagogy, structure, and coordination of the course and reflects on methods for developing creative skills. Experts in today's global context often work in teams consisting of people with different areas of expertise and various professional backgrounds. That is why there is a strong need for new training methods in which a multidisciplinary approach is at the heart of learning. Creative learning takes place when different parties bring information to the discussion and learn from each other. When students in different fields seek professional growth for themselves and take responsibility for the professional growth of other learners, they form a mutual learning relationship. Multidisciplinary team members make decisions both individually and collectively, which helps them to understand and appreciate other disciplines. Our results show that creative, multidisciplinary project learning can develop a diversity of knowledge and competences, for instance students' cultural knowledge, teamwork and innovation competences, and time management and presentation skills, and can support a student's personal development as an expert. It is highly recommended that higher education curricula include studies in which students from different fields work together in multidisciplinary teams.

Keywords: Multidisciplinary learning, creative skills, innovative thinking, project-based learning.

3111 Using Data Mining in Automotive Safety

Authors: Carine Cridelich, Pablo Juesas Cano, Emmanuel Ramasso, Noureddine Zerhouni, Bernd Weiler

Abstract:

Safety is one of the most important considerations when buying a new car. While active safety aims at avoiding accidents, passive safety systems such as airbags and seat belts protect occupants in case of an accident. In addition to legal regulations, organizations like Euro NCAP provide consumers with an independent assessment of the safety performance of cars and drive the development of safety systems in the automobile industry. Those ratings are mainly based on injury assessment reference values derived from physical parameters measured in dummies during a car crash test. The components and sub-systems of a safety system are designed to achieve the required restraint performance, and sled tests and other types of tests are then carried out by car makers and their suppliers to confirm the protection level of the safety system. A Knowledge Discovery in Databases (KDD) process is proposed in order to minimize the number of tests. The KDD process is based on the data produced by sled tests performed according to Euro NCAP specifications. About 30 parameters of the passive safety systems from different data sources (crash data, dummy protocol) are first analysed together with expert opinion. A procedure to manage missing data is proposed and validated on real data sets. Finally, a procedure is developed to estimate a set of rough initial parameters of the passive system before testing, aiming at reducing the number of tests.

Keywords: KDD process, passive safety systems, sled test, dummy injury assessment reference values, frontal impact.

3110 Genetic Algorithm Parameters Optimization for Bi-Criteria Multiprocessor Task Scheduling Using Design of Experiments

Authors: Sunita Dhingra, Satinder Bal Gupta, Ranjit Biswas

Abstract:

Multiprocessor task scheduling is an NP-hard problem, and the Genetic Algorithm (GA) has proven to be an excellent technique for finding near-optimal solutions. Several GA-based methods have been proposed for this problem in the past, but all of them consider a single criterion. In the present work, a bi-criteria multiprocessor task scheduling problem is minimized, with the objective defined as a weighted sum of makespan and total completion time. The efficiency and effectiveness of a genetic algorithm depend on the setting of its parameters, such as the crossover and mutation operators, the crossover probability, and the selection function. The effects of the GA parameters on the minimization of the bi-criteria fitness function, and the subsequent setting of those parameters, are determined using the central composite design (CCD) approach of response surface methodology (RSM) from Design of Experiments. Experiments were performed with different levels of the GA parameters, and analysis of variance was carried out to identify the parameters significant for minimizing makespan and total completion time simultaneously.
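
For clarity, the bi-criteria fitness the GA minimizes, a weighted sum of makespan and total completion time, can be evaluated directly for a candidate assignment; the task times, equal weights, and the assumption of independent tasks in the sketch below are illustrative only.

```python
# Sketch: weighted-sum bi-criteria fitness for a multiprocessor schedule (assumed data and weights).
import numpy as np

proc_time = np.array([4, 2, 7, 3, 5, 6, 1, 8])   # processing times of 8 independent tasks
assign = np.array([0, 1, 2, 0, 1, 2, 0, 1])      # chromosome: processor index per task
w1, w2 = 0.5, 0.5                                # assumed weights

def fitness(assign, proc_time, n_proc=3):
    loads, completion = np.zeros(n_proc), []
    for t, p in zip(proc_time, assign):          # tasks run in chromosome order on each processor
        loads[p] += t
        completion.append(loads[p])
    makespan = loads.max()
    total_completion = float(np.sum(completion))
    return w1 * makespan + w2 * total_completion

print("bi-criteria fitness:", fitness(assign, proc_time))
```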

Keywords: Multiprocessor task scheduling, Design of experiments, Genetic Algorithm, Makespan, Total completion time.

3109 A Comparative Study on the Financial Characteristics for Development Methods of Urban Development Project - Focusing on Multi-level Replotting Method -

Authors: Jin hui Kim, Hyung kwan Cho, Ji won Moon, Hoon Chang

Abstract:

The purpose of this study is to compare and analyse the financial characteristics of development methods for urban development projects in established areas, focusing on multi-level replotting. The analysis showed that the type with the lowest expenditure was the 'combination of group-land and multi-level replotting' and the type with the highest profitability was the 'multi-level replotting' type; however, the multi-level replotting type still carries the risk of additional construction costs. In addition, we subdivided the standard amount for liquidation of replotting and analysed the income-expenditure flow. The analysis showed that both the multi-level replotting type and the combination of group-land and multi-level replotting improved the project's profitability and the property change ratio. However, when the standard fell below a certain amount, the amount of original property required for replotting increased exponentially and the profitability of the project declined.

Keywords: Urban development, multi-level replotting, financial characteristics, expropriation type, combination type, urban meteorology.

3108 Assessment of the Adaptive Pushover Analysis Using Displacement-based Loading in Predicting the Seismic Behaviour of Unsymmetric-Plan Buildings

Authors: M.O. Makhmalbaf, F. Mohajeri Nav, M. Zabihi Samani

Abstract:

The recent drive for the use of performance-based methodologies in the design and assessment of structures in seismic areas has significantly increased the demand for reliable nonlinear inelastic static pushover analysis tools. As a result, adaptive pushover methods have been developed during the last decade which, unlike their conventional counterparts, can account for the effect that higher modes of vibration and progressive stiffness degradation have on the distribution of seismic storey forces. Even among advanced pushover methods, little attention has been paid to unsymmetric structures. This study evaluates the seismic demands of three-dimensional unsymmetric-plan buildings determined by the Displacement-based Adaptive Pushover (DAP) analysis introduced by Antoniou and Pinho [2004]. The capability of the DAP procedure to capture the torsional effects caused by structural irregularities is investigated by comparing its estimates to the exact results obtained from Incremental Dynamic Analysis (IDA). The capability of the procedure to predict the seismic behaviour of the structure is also discussed.

Keywords: Nonlinear static procedures, Unsymmetric-Plan Buildings, Torsional effects, IDA.

3107 An Implementation of Multi-Media Applications in Teaching Structural Design to Architectural Students

Authors: Wafa Labib

Abstract:

Teaching methods based on lectures, workshops, and tutorials for the presentation and discussion of ideas have become outdated; they were developed outside the discipline of architecture, in colleges of engineering, and do not satisfy architecture students' needs, causing them many difficulties in integrating structure into their designs. In an attempt to improve the teaching of structures, this paper proposes a supportive teaching/learning tool using multimedia applications that seeks to better meet architecture students' needs and capabilities and to improve their understanding and application of basic and intermediate structural engineering and technology principles. Before introducing multimedia as a supportive teaching tool, a questionnaire was distributed to third-year students of a structural design course, forming a sample of 90 cases. The primary aim of the questionnaire was to identify the students' learning styles and to investigate whether the selected teaching method could make the teaching and learning process more efficient. The students' reactions to the method were measured using three key elements, indicating that it is an appropriate teaching method for the nature of both the students and the course.

Keywords: Teaching Method, Architecture, Learning style, Multi-Media.

3106 Optimal Capacitor Allocation for Loss Reduction in Distribution System Using Fuzzy and Plant Growth Simulation Algorithm

Authors: R. Srinivasa Rao

Abstract:

This paper presents a new and efficient approach for capacitor placement in radial distribution systems that determines the optimal locations and sizes of capacitors with the objective of improving the voltage profile and reducing power loss. The solution methodology has two parts: in part one, loss sensitivity factors are used to select candidate locations for capacitor placement; in part two, a new algorithm employing the Plant Growth Simulation Algorithm (PGSA) estimates the optimal capacitor sizes at the optimal buses determined in part one. The main advantage of the proposed method is that it does not require any external control parameters. A further advantage is that it handles the objective function and the constraints separately, avoiding the need to determine barrier factors. The proposed method is applied to 9- and 34-bus radial distribution systems, and the solutions obtained are compared with those of other methods. The proposed method outperforms the other methods in terms of solution quality.
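
Part one of the methodology ranks buses by a loss sensitivity factor, commonly taken as dPloss/dQ = 2*Q_eff*R/V^2 at the receiving bus of each branch. The sketch below computes this ranking for an assumed 4-bus chain feeder, not the 9- or 34-bus test systems of the paper; the branch and load values are arbitrary and serve only to rank candidate buses.

```python
# Sketch: loss sensitivity factors for shortlisting capacitor buses (assumed 4-bus chain feeder).
import numpy as np

branches = [(0, 1, 0.10), (1, 2, 0.15), (2, 3, 0.20)]   # (from_bus, to_bus, R [ohm])
q_load = np.array([0.0, 300.0, 200.0, 150.0])           # reactive load per bus [kvar]
v = np.array([1.00, 0.98, 0.96, 0.95])                  # bus voltages [p.u.]

def loss_sensitivity(branches, q_load, v):
    """dPloss/dQ = 2 * Q_eff * R / V^2 at the receiving bus of each branch."""
    lsf = {}
    for f, t, r in branches:
        q_eff = q_load[t:].sum()          # reactive power flowing through the branch (chain feeder)
        lsf[t] = 2.0 * q_eff * r / v[t] ** 2
    return lsf

for bus, s in sorted(loss_sensitivity(branches, q_load, v).items(), key=lambda kv: -kv[1]):
    print(f"bus {bus}: LSF = {s:.1f}")
```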

Keywords: Distribution systems, Capacitor allocation, Loss reduction, Fuzzy, PGSA.

3105 Technology Roadmapping in Defense Industry

Authors: Sevgi Özlem Bulu, Arif Furkan Mendi, Tolga Erol, İzzet Gökhan Özbilgin

Abstract:

The rapid progress of technology in today's competitive environment has accelerated companies' technology development activities. As a result, companies pay more attention to R&D and allocate a larger share of their resources to R&D projects. A systematic, comprehensive, target-oriented execution of R&D is crucial for achieving successful results, and consequently the Technology Roadmap (TRM) is gaining importance as a management tool. It offers critical prospects for medium- and long-term success, as it captures decisions about past business, future plans, and technological infrastructure. In the TRM literature, the projects to be placed on the roadmap are selected by many different methods, most often based on multi-criteria decision making. Managing the selected projects becomes important after the selection phase, and it is at this stage that TRMs are used. A TRM can be created in many different ways, so each institution can prepare its own roadmap according to its strategic plan; depending on the intended use, roadmaps may have different layers and sizes. HAVELSAN, Turkey's largest defense company in the software field, carries out the evaluation of R&D projects and the creation of its TRM with great care and diligence. First, proposed R&D projects are evaluated by HAVELSAN's Technology Management Board (TMB) in accordance with the company's resources, objectives, and targets. These projects are presented to the TMB periodically for evaluation against defined criteria by the board members. After the necessary steps have been passed, the approved projects are added to the time-based TRM, which is composed of four layers: market, product, project, and technology. The use of a four-layered roadmap provides a clearer understanding and visualization of company strategy and objectives. This study demonstrates the benefits of using TRM and four-layered technology roadmapping, and the possibilities they offer to institutions in the defense industry.

Keywords: Project selection, R&D in defense industry, R&D project selection, technology roadmapping.

3104 Photogrammetric Survey on the Natural Gas Pipeline Projects of Iran-Turkey-Europe (ITE)

Authors: Ferruh Yildiz

Abstract:

The ITE Project is an approximately 1800 km pipeline that crosses Turkey from east to west: it enters the country from Iran at Doğubayazıt in the east and exits to Greece from Ipsala province in the west. It is one of the few international projects on such a scale, carrying natural gas from Iran and the Caspian Sea region to the European continent. This investigation describes the methods used to verify the route of the pipeline and the technical properties of the results obtained. The cost of the project depends entirely on the pipeline route, which should be as short as possible, and on the characteristics of the land cover. Production standards for 1/2000-scale digital orthophotos and vector maps, derived from map production materials and methods such as high-resolution satellite images and digital aerial images captured with digital aerial cameras, are also given in this report. According to Turkish national map production standards, the TM (Transverse Mercator, 3-degree) projection is used for large-scale maps and UTM (Universal Transverse Mercator, 6-degree) for small-scale maps. Information is also given about the projection used in the ITE natural gas pipeline project.
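
To make the projection remark concrete, the short sketch below projects a WGS84 point near the eastern end of the route into UTM zone 38N using pyproj; the EPSG codes, the sample coordinate near Doğubayazıt and the zone choice are assumptions for illustration, and the 3-degree TM zones used for large-scale Turkish mapping would need different parameters.

```python
# Sketch: WGS84 -> UTM zone 38N with pyproj (assumed sample point in eastern Turkey).
from pyproj import Transformer

lon, lat = 44.08, 39.55                                   # approximate location near Doğubayazıt
to_utm38n = Transformer.from_crs("EPSG:4326", "EPSG:32638", always_xy=True)
easting, northing = to_utm38n.transform(lon, lat)
print(f"UTM 38N: E = {easting:.1f} m, N = {northing:.1f} m")
```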

Keywords: Digital Image Processing, Natural Gas Pipe Line, Photogrammetry.

3103 Study on Performance of Wigner Ville Distribution for Linear FM and Transient Signal Analysis

Authors: Azeemsha Thacham Poyil, Nasimudeen KM

Abstract:

This paper presents methods to assess the performance of the Wigner-Ville Distribution (WVD) for the time-frequency representation of non-stationary signals, in comparison with other representations such as the STFT and the spectrogram. The simultaneous time-frequency resolution of the WVD is one of the important properties that makes it preferable for the analysis and detection of linear FM and transient signals. Two algorithms are proposed to assess this resolution and to compare signal-detection performance. The first is based on measuring the area under the time-frequency plot and is used for linear FM signal analysis; the second is based on an instantaneous power calculation and is used for transient, non-stationary signals. The implementation of both methods is explained briefly with suitable diagrams. The accuracy of the measurements is validated, showing the better performance of the WVD representation compared with the STFT and the spectrogram.
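
Both measurement algorithms presuppose a discrete WVD. The sketch below is a basic textbook implementation (analytic signal, lag product, FFT over the lag) applied to a linear chirp, with the instantaneous frequency read off the ridge of the distribution; it is illustrative only and does not reproduce the paper's area or power measurements.

```python
# Sketch: discrete Wigner-Ville distribution of a linear FM (chirp) signal.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * (50.0 * t + 100.0 * t ** 2))   # instantaneous frequency sweeps 50 -> 250 Hz
z = hilbert(x)                                        # analytic signal limits aliasing and cross-terms

def wvd(z):
    n = len(z)
    W = np.zeros((n, n))
    for i in range(n):
        m = min(i, n - 1 - i, n // 2 - 1)
        tau = np.arange(-m, m + 1)
        kernel = np.zeros(n, dtype=complex)
        kernel[tau % n] = z[i + tau] * np.conj(z[i - tau])    # lag product at time index i
        W[:, i] = np.real(np.fft.fft(kernel))
    return W                                          # rows: frequency bins (k -> k*fs/(2n)), cols: time

W = wvd(z)
ridge_hz = W.argmax(axis=0) * fs / (2 * len(z))       # instantaneous-frequency estimate per column
print("IF estimate near t = 0.1 s and t = 0.9 s [Hz]:", ridge_hz[100], ridge_hz[900])
```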

Keywords: WVD: Wigner Ville Distribution, STFT: Short Time Fourier Transform, FT: Fourier Transform, TFR: Time-Frequency Representation, FM: Frequency Modulation, LFM Signal: Linear FM Signal, JTFA: Joint time frequency analysis.

3102 Data and Spatial Analysis for Economy and Education of 28 E.U. Member-States for 2014

Authors: Alexiou Dimitra, Fragkaki Maria

Abstract:

The objective of this paper is to study geographic, economic, and educational variables and their contribution to determining the position of each member state among the EU-28 countries, based on the values of seven variables provided by Eurostat. The data analysis methods of Multiple Factorial Correspondence Analysis (MFCA), Principal Component Analysis, and Factor Analysis are used. The cross-tabulation tables consist of the values of the seven variables for the 28 countries in 2014. The data are processed with the CHIC Analysis V 1.1 software package, and the results obtained using MFCA and Ascending Hierarchical Classification are given in numerical and graphical form. For comparison, the Factor procedure of the IBM SPSS 20 statistical package is applied to the same data. The numerical and graphical results, presented in tables and graphs, demonstrate the agreement between the two methods. The most important result is the analysis of the relations among the 28 countries and the position of each country in groups or clouds formed according to the values of the corresponding variables.
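
As a hedged illustration of the principal-component step only (not the CHIC or SPSS runs of the paper), a countries-by-indicators table can be projected onto its principal axes with a plain SVD; the numbers below are synthetic stand-ins for the seven Eurostat variables.

```python
# Sketch: PCA of a 28-countries x 7-indicators table via SVD (synthetic stand-in data).
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(28, 7))                          # 28 member states, 7 indicators
data = (data - data.mean(axis=0)) / data.std(axis=0)     # standardise columns

U, s, Vt = np.linalg.svd(data, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
scores = U * s                                           # country coordinates on the principal axes
print("variance explained by the first two axes:", explained[:2].round(3))
print("first country's coordinates on those axes:", scores[0, :2].round(2))
```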

Keywords: Multiple factorial correspondence analysis, principal component analysis, factor analysis, E.U.-28 countries, statistical package IBM SPSS 20, CHIC Analysis V 1.1 Software, Eurostat.eu statistics.

3101 Extraction of Symbolic Rules from Artificial Neural Networks

Authors: S. M. Kamruzzaman, Md. Monirul Islam

Abstract:

Although backpropagation ANNs generally predict better than decision trees for pattern classification problems, they are often regarded as black boxes; that is, their predictions cannot be explained in the way decision tree predictions can. In many applications it is desirable to extract knowledge from trained ANNs so that users gain a better understanding of how the networks solve the problems. A new rule extraction algorithm, called rule extraction from artificial neural networks (REANN), is proposed and implemented to extract symbolic rules from ANNs. A standard three-layer feedforward ANN is the basis of the algorithm, and a four-phase training algorithm is proposed for backpropagation learning. The explicitness of the extracted rules is supported by comparing them to the symbolic rules generated by other methods; the extracted rules are comparable with those of other methods in terms of the number of rules, the average number of conditions per rule, and predictive accuracy. Extensive experimental studies on several benchmark classification problems, such as breast cancer, iris, diabetes, and season classification, demonstrate the effectiveness of the proposed approach and its good generalization ability.

Keywords: Backpropagation, clustering algorithm, constructive algorithm, continuous activation function, pruning algorithm, rule extraction algorithm, symbolic rules.

3100 Mathematical Modeling of the AMCs Cross-Contamination Removal in the FOUPs: Finite Element Formulation and Application in FOUP’s Decontamination

Authors: N. Santatriniaina, J. Deseure, T.Q. Nguyen, H. Fontaine, C. Beitia, L. Rakotomanana

Abstract:

Nowadays, with increasing wafer sizes and decreasing critical dimensions in integrated circuit manufacturing, the microelectronics industry must pay maximum attention to contamination control. The move to 300 mm wafers is accompanied by the use of Front Opening Unified Pods (FOUPs) for wafers and their storage, and inside these pods airborne cross-contamination may occur between the wafers and the pod. A predictive approach using modeling and computational methods is a powerful way to understand and quantify AMC cross-contamination processes. This work investigates the numerical tools required to study AMC cross-contamination transfer between wafers and FOUPs. Numerical optimization and a finite element formulation for transient analysis were established. An analytical solution of the one-dimensional problem was developed, and the physical constants were calibrated by minimizing the least-squares distance between the model (the analytical 1D solution) and the experimental data. The transient behavior of the AMCs was then determined. The model framework preserves the classical forms of the diffusion and convection-diffusion equations and yields a consistent form of Fick's law. The adsorption process and the surface roughness effect were also expressed as boundary conditions, using a switch from a Dirichlet to a Neumann condition together with an interface condition. The methodology is applied, first, using optimization methods with the analytical solution to identify the physical constants and, second, using the finite element method including the adsorption kinetics and the Dirichlet-to-Neumann switch condition.
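
The transient Fick's-law building block can be illustrated with a one-dimensional explicit finite-difference sketch: a fixed (Dirichlet) concentration on the exposed face and a zero-flux (Neumann) condition on the sealed face. The diffusivity, geometry, time step and boundary values below are made-up illustrative numbers, and the adsorption kinetics and the Dirichlet-to-Neumann switch of the paper are not modelled.

```python
# Sketch: 1D transient diffusion (Fick's law), explicit finite differences, assumed parameters.
import numpy as np

D, length, nx, dt, nsteps = 1e-9, 1e-3, 101, 0.04, 2000   # diffusivity [m^2/s], slab thickness [m]
xgrid = np.linspace(0.0, length, nx)
dx = xgrid[1] - xgrid[0]
assert D * dt / dx ** 2 <= 0.5                            # explicit-scheme stability condition

c = np.zeros(nx)                                          # initial concentration in the slab
c_surface = 1.0                                           # Dirichlet value at the exposed face

for _ in range(nsteps):
    c[0] = c_surface                                      # exposed face (Dirichlet)
    c[-1] = c[-2]                                         # sealed face (zero-flux Neumann)
    c[1:-1] += D * dt / dx ** 2 * (c[2:] - 2 * c[1:-1] + c[:-2])

print("concentration profile (every 20th node):", c[::20].round(3))
```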

Keywords: AMCs, FOUP, cross-contamination, adsorption, diffusion, numerical analysis, wafers, Dirichlet to Neumann, finite elements methods, Fick’s law, optimization.

3099 Medical Image Fusion Based On Redundant Wavelet Transform and Morphological Processing

Authors: P. S. Gomathi, B. Kalaavathi

Abstract:

Image fusion is the process in which complementary information from multiple images is integrated to provide a composite image that contains more information than the original inputs. Medical image fusion combines useful information from multimodality medical images and provides additional information to the doctor for better diagnosis of diseases. This paper presents a wavelet-based medical image fusion algorithm applied to different multimodality medical images. To fuse the medical images, they are first decomposed using the Redundant Wavelet Transform (RWT). The high-frequency coefficients are convolved with a morphological operator followed by the maximum-selection (MS) rule, while the low-frequency coefficients are processed by the MS rule alone. The fused image is reconstructed by the inverse RWT. Quantitative measures including mean, standard deviation, average gradient, spatial frequency, and edge-based similarity measures are used to evaluate the fused images. The performance of the proposed method is compared with pixel averaging, PCA, and DWT fusion methods; compared with these conventional methods, the proposed framework provides better performance for the analysis of multimodality medical images.
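
A minimal sketch of the fusion rule is given below under two stated assumptions: a single-level standard DWT from PyWavelets stands in for the redundant wavelet transform of the paper, and a grey-scale dilation stands in for the morphological processing of the detail bands. The maximum-selection rule then keeps, coefficient by coefficient, whichever input has the larger emphasised magnitude.

```python
# Sketch: wavelet-domain fusion with a maximum-selection rule (DWT instead of RWT; random test images).
import numpy as np
import pywt
from scipy.ndimage import grey_dilation

rng = np.random.default_rng(0)
img_a = rng.random((128, 128))          # stand-ins for two registered multimodality images
img_b = rng.random((128, 128))

def fuse(a, b, wavelet="db2"):
    cA1, (cH1, cV1, cD1) = pywt.dwt2(a, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(b, wavelet)

    def ms(p, q):                       # maximum selection on morphologically emphasised details
        pm, qm = grey_dilation(np.abs(p), size=3), grey_dilation(np.abs(q), size=3)
        return np.where(pm >= qm, p, q)

    fused = (np.where(np.abs(cA1) >= np.abs(cA2), cA1, cA2),    # MS rule on the approximation band
             (ms(cH1, cH2), ms(cV1, cV2), ms(cD1, cD2)))
    return pywt.idwt2(fused, wavelet)

print("fused image shape:", fuse(img_a, img_b).shape)
```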

Keywords: Discrete Wavelet Transform (DWT), Image Fusion, Morphological Processing, Redundant Wavelet Transform (RWT).

3098 Geometric Contrast of a 3D Model Obtained by Means of Digital Photogrammetry with a Quasimetric Camera on UAV Classical Methods

Authors: Julio Manuel de Luis Ruiz, Javier Sedano Cibrián, Rubén Pérez Álvarez, Raúl Pereda García, Cristina Diego Soroa

Abstract:

Nowadays the use of drones has been extended to practically every human activity, and one of the main applications is in surveying. Software programs have been developed and commercialized that process the images captured by the drone's sensor almost automatically, but they only allow the results to be checked through control points. This work compares a 3D model obtained from a flight carried out with a drone and a non-metric camera (chosen for its low cost) with a second model obtained by the historically endorsed classical methods. The comparison is carried out over terrain with significant unevenness, in order to test the photogrammetric model under the conditions in which drone photogrammetry has the greatest difficulty achieving accuracy. Distances, heights, surfaces, and volumes are measured on the basis of the 3D models generated, and the results are compared. The differences are about 0.2% for distances and heights, 0.3% for surfaces, and 0.6% for volumes; although these differences are not important, they do not match the order of magnitude claimed by vendors.

Keywords: Accuracy, classical topographic, 3D model, photogrammetry, UAV.

3097 Optimal Capacitor Placement in a Radial Distribution System using Plant Growth Simulation Algorithm

Authors: R. Srinivasa Rao, S. V. L. Narasimham

Abstract:

This paper presents a new and efficient approach for capacitor placement in radial distribution systems that determines the optimal locations and sizes of capacitors with the objective of improving the voltage profile and reducing power loss. The solution methodology has two parts: in part one, loss sensitivity factors are used to select candidate locations for capacitor placement; in part two, a new algorithm employing the Plant Growth Simulation Algorithm (PGSA) estimates the optimal capacitor sizes at the optimal buses determined in part one. The main advantage of the proposed method is that it does not require any external control parameters. A further advantage is that it handles the objective function and the constraints separately, avoiding the need to determine barrier factors. The proposed method is applied to 9-, 34-, and 85-bus radial distribution systems, and the solutions obtained are compared with those of other methods. The proposed method outperforms the other methods in terms of solution quality.

Keywords: Distribution systems, Capacitor placement, loss reduction, Loss sensitivity factors, PGSA.

3096 Design and Analysis of Extra High Voltage Non-Ceramic Insulator by Finite Element Method

Authors: M. Nageswara Rao, V. S. N. K. Chaitanya, P. Pratyusha

Abstract:

High-voltage insulators have to withstand severe electrical stresses. Higher electrical stresses lead to erosion of the insulator surface; degradation of the insulating properties leads to flashover and, in extreme cases, may cause puncture. Numerical methods are best suited for analyzing these electrical stresses and deciding on the actions needed to diminish them; by minimizing the electrical stresses, the reliability of the power system is improved. In this paper the electric field intensity at the critical regions of a 400 kV silicone composite insulator is analyzed using the finite element method, with the insulator modelled in the FEMM-2D software package. Electric Field Analysis (EFA) results are examined for five cases: the insulator alone, the insulator with arcing horns on both sides, a high-voltage (HV) end grading ring, a grading ring-arcing horn arrangement, and grading rings on both sides. The EFA results indicate that grading rings on both sides are best for minimizing the electrical stresses and improving the life span of the insulator.

Keywords: Polymer insulator, electric field analysis, numerical methods, finite element method, FEMM-2D.

3095 Shaking Force Balancing of Mechanisms: An Overview

Authors: Vigen Arakelian

Abstract:

The balancing of mechanisms is a well-known problem in mechanical engineering because variable dynamic loads cause vibration as well as noise, wear, and fatigue of machines. A mechanical system with an unbalanced shaking force and shaking moment transmits substantial vibration to the frame. The objective of balancing is therefore to cancel or reduce the variable dynamic reactions transmitted to the frame, which amounts to balancing the shaking force and shaking moment. This can be done fully or partially, either by internal mass redistribution through the addition of counterweights or by modifying the mechanism's architecture through the addition of auxiliary structures. Balancing problems are of continuing interest to researchers: several laboratories around the world are very active in this area and new results are published regularly. Despite its long history, mechanism balancing theory continues to be developed, and new approaches and solutions are constantly being reported. Various surveys have been published that describe the particularities of balancing methods, and the author believes this is an appropriate moment to present a state of the art of shaking force balancing studies, complemented by new research results. This paper presents an overview of methods devoted to the shaking force balancing of mechanisms, as well as the historical aspects of the origins and evolution of the balancing theory of mechanisms.

Keywords: Inertia forces, shaking forces, balancing, dynamics.

3094 Geospatial Network Analysis Using Particle Swarm Optimization

Authors: Varun Singh, Mainak Bandyopadhyay, Maharana Pratap Singh

Abstract:

The shortest path (SP) problem concerns finding the path from a specified origin to a specified destination in a given network while minimizing the total cost associated with the path. The problem has widespread applications; important ones include vehicle routing in transportation systems, particularly in in-vehicle Route Guidance Systems (RGS), and the traffic assignment problem in transportation planning. Evolutionary methods such as Genetic Algorithms (GA), Ant Colony Optimization, and Particle Swarm Optimization (PSO) have been applied to such complex optimization problems to overcome the shortcomings of existing shortest path analysis methods, and various researchers have reported that PSO performs better than other evolutionary optimization algorithms in terms of success rate and solution quality. Geographic Information Systems (GIS), meanwhile, have emerged as key information systems for geospatial data analysis and visualization. This paper focuses on the application of PSO to the shortest path problem between multiple points of interest (POI), based on spatial data of Allahabad City and traffic speed data collected using GPS; geovisualization of the analysis results is carried out in GIS.
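
As a hedged sketch of how PSO can be wired to a shortest-path search, a common node-priority encoding is used below: each particle is a vector of node priorities, a greedy walk decodes the priorities into a path, and standard PSO velocity and position updates search over the priorities. The tiny hand-made graph, penalty value and PSO constants are assumptions; the paper's GIS data handling and multi-POI routing are out of scope.

```python
# Sketch: PSO with a node-priority encoding for a toy shortest-path problem (assumed graph and constants).
import numpy as np

rng = np.random.default_rng(1)
graph = {0: {1: 4, 2: 1}, 1: {3: 1}, 2: {1: 2, 3: 5}, 3: {4: 3, 5: 2}, 4: {5: 2}, 5: {}}
n_nodes, src, dst = 6, 0, 5

def decode(priority):
    """Greedy walk: from each node, step to the unvisited neighbour with the highest priority."""
    node, cost, visited = src, 0.0, {src}
    for _ in range(n_nodes):
        if node == dst:
            return cost
        cand = [v for v in graph[node] if v not in visited]
        if not cand:
            return 1e6                                   # dead end: penalise
        nxt = max(cand, key=lambda v: priority[v])
        cost += graph[node][nxt]
        visited.add(nxt)
        node = nxt
    return 1e6

n_particles, w, c1, c2 = 20, 0.7, 1.5, 1.5
x = rng.random((n_particles, n_nodes))
v = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([decode(p) for p in x])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    cost = np.array([decode(p) for p in x])
    better = cost < pbest_cost
    pbest[better], pbest_cost[better] = x[better], cost[better]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best path cost found:", pbest_cost.min())         # the optimum for this toy graph is 6 (0-2-1-3-5)
```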

Keywords: GIS, Outliers, PSO, Traffic Data.

3093 Spatial Query Localization Method in Limited Reference Point Environment

Authors: Victor Krebss

Abstract:

Object localization is one of the major challenges in creating intelligent transportation systems. Unfortunately, in densely built-up urban areas, localization based on GPS alone produces large errors or simply becomes impossible. New opportunities for localization arise from the rapidly emerging concept of wireless ad hoc networks. Such a network allows the potential distance between objects to be estimated by measuring the received signal level, and a distance graph to be constructed in which the nodes are the objects to be localized and the edges are estimates of the distances between pairs of nodes. Given the known coordinates of some individual nodes (anchors), it is possible to determine the location of all (or part) of the remaining nodes of the graph. Moreover, a road map available in digital format can provide localization routines with valuable additional information that narrows the node location search. However, despite the abundance of well-known localization algorithms and significant research effort, many issues are still only partially addressed. In this paper we propose a localization approach that maps the distance graph onto digital road map data: the problem is reduced to embedding the distance graph into the graph representing the area's geolocation data, which makes it possible to localize objects, in some cases even when only one reference point is available. We propose a simple embedding algorithm and a sample implementation as spatial queries over sensor network data stored in a spatial database, which allows effective use of spatial indexing, optimized spatial search routines, and geometry functions.

Keywords: Intelligent Transportation System, Sensor Network, Localization, Spatial Query, GIS, Graph Embedding.

3092 Comparison of Irradiance Decomposition and Energy Production Methods in a Solar Photovoltaic System

Authors: Tisciane Perpetuo e Oliveira, Dante Inga Narvaez, Marcelo Gradella Villalva

Abstract:

Installations of solar photovoltaic systems have increased considerably in the last decade, and monitoring of meteorological data (solar irradiance, air temperature, wind velocity, etc.) has proved important for predicting the solar energy production potential of a given geographical area. In this context, the present work compares two computational tools capable of estimating the energy generation of a photovoltaic system through correlation analyses of solar radiation data: the PVsyst software and an algorithm based on the PVlib package implemented in MATLAB. To achieve this objective, it was necessary to obtain solar radiation data (measured and from a solarimetric database), to analyze the decomposition of global solar irradiance into its direct normal and horizontal diffuse components, and to analyze the modeling of the photovoltaic system devices (solar modules and inverters) for the energy production calculations. Simulated results were compared with experimental data in order to evaluate the performance of the studied methods; errors in the estimation of energy production were less than 30% for the MATLAB algorithm and less than 20% for the PVsyst software.
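
One of the decomposition steps compared by the two tools is splitting measured global horizontal irradiance (GHI) into its direct-normal and diffuse-horizontal components. As an illustration only, written in Python rather than the MATLAB PVlib code used in the paper, the sketch below applies the Erbs diffuse-fraction correlation to a few assumed measurements; the fixed solar constant and the sample values are simplifying assumptions.

```python
# Sketch: Erbs decomposition of GHI into DNI and DHI (assumed irradiance and zenith values).
import numpy as np

def erbs(ghi, zenith_deg):
    """Erbs et al. diffuse-fraction correlation as a function of the clearness index kt."""
    cosz = np.cos(np.radians(zenith_deg))
    ext = 1367.0 * cosz                                       # extraterrestrial horizontal irradiance [W/m^2]
    kt = np.clip(ghi / np.maximum(ext, 1e-6), 0.0, 1.2)       # clearness index
    df = np.where(kt <= 0.22, 1.0 - 0.09 * kt,
         np.where(kt <= 0.80,
                  0.9511 - 0.1604 * kt + 4.388 * kt**2 - 16.638 * kt**3 + 12.336 * kt**4,
                  0.165))
    dhi = df * ghi                                            # diffuse horizontal irradiance
    dni = (ghi - dhi) / np.maximum(cosz, 1e-6)                # direct normal irradiance
    return dni, dhi

ghi = np.array([150.0, 420.0, 780.0])                         # assumed measurements [W/m^2]
zen = np.array([75.0, 55.0, 30.0])                            # solar zenith angles [deg]
dni, dhi = erbs(ghi, zen)
print("DNI:", dni.round(1), "DHI:", dhi.round(1))
```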

Keywords: Energy production, meteorological data, irradiance decomposition, solar photovoltaic system.
