Search results for: Deep approach metacognitive methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8444

8114 Geometric Operators in the Selection of Human Resources

Authors: José M. Merigó, Anna M. Gil-Lafuente

Abstract:

We study the possibility of using geometric operators in the selection of human resources. We develop three new methods that use the ordered weighted geometric (OWG) operator in different indexes used for the selection of human resources. The objective of these models is to relax the neutrality of the existing methods so that the decision maker is able to select human resources according to his or her particular attitude. In order to develop these models, we first give a short review of the OWG operator. Second, we briefly explain the general process for the selection of human resources. Then, we develop the three new indexes, which use the OWG operator in the Hamming distance, in the adequacy coefficient and in the index of maximum and minimum level. Finally, an illustrative example of the new approach is given.

Keywords: OWG operator, decision making, human resources, Hamming distance.
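
As a quick illustration of the aggregation step described above, the following sketch (Python with NumPy; the criteria scores and the weighting vector are purely hypothetical) computes an OWG aggregation of a candidate's scores, i.e. a weighted geometric mean of the values reordered from largest to smallest.

    import numpy as np

    def owg(values, weights):
        # Ordered weighted geometric (OWG) operator: reorder the arguments in
        # decreasing order and combine them as a weighted geometric mean.
        b = np.sort(np.asarray(values, dtype=float))[::-1]   # b_1 >= b_2 >= ...
        w = np.asarray(weights, dtype=float)                  # weights sum to 1
        return float(np.prod(b ** w))

    # Hypothetical example: one candidate's similarity scores on four criteria,
    # aggregated with a weighting vector chosen by the decision maker.
    print(owg([0.9, 0.6, 0.8, 0.7], [0.4, 0.3, 0.2, 0.1]))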

8113 An Investigation to Study the Moisture Dependency of Ground Enhancement Compound

Authors: Arunima Shukla, Vikas Almadi, Devesh Jaiswal, Sunil Saini, Bhusan S. Patil

Abstract:

Lightning protection consists of three main parts: the air termination system, the down conductor, and the earth termination system. The earth termination system is the most important part, as the earth is the sink and source of charges. Therefore, even when the charges are captured and delivered to the ground, the earth termination system will lead to problems if it does not provide an easy path for the charges. Soils have significantly different resistivities, ranging from 10 Ωm for wet organic soil to 10000 Ωm for bedrock. Different methods have been discussed and used conventionally, such as the deep-ground-well method and altering the length of the rod, but those methods are not considered economical. Therefore, it was a general practice to use charcoal along with salt to reduce the soil resistivity. Bentonite is a worldwide accepted material, which first led our interest towards its study. It was concluded that bentonite is a non-corrosive, environmentally friendly clay. However, bentonite is suitable only when moisture is present in the soil; in the absence of moisture, cracks appear on the surface and provide an open passage to the air, resulting in an increase in the resistivity. Furthermore, bentonite without moisture does not have sufficient bonding capacity, moisture retention, conductivity, and non-leachability. Therefore, bentonite was used along with another backfill material to overcome its dependency on moisture. Different experiments were performed to find the best ratio of bentonite and carbon backfill. It was concluded that the properties depend strongly on the quantities of bentonite and carbon-based backfill material.

Keywords: Backfill material, bentonite, conducting soil, grounding material, low resistivity.

8112 Single Image Defogging Method Using Variational Approach for Edge-Preserving Regularization

Authors: Wan-Hyun Cho, In-Seop Na, Seong-Chae Seo, Sang-Kyoon Kim, Soon-Young Park

Abstract:

In this paper, we propose a variational approach to solve the single-image defogging problem. In the inference process of the atmospheric veil, we define a new functional for the atmospheric veil that satisfies an edge-preserving regularization property. By using the fundamental lemma of the calculus of variations, we derive the Euler-Lagrange equation for the atmospheric veil, which characterizes the extrema of the given functional. This equation can be solved using a gradient descent method with a time parameter. We thereby obtain the estimated atmospheric veil and then conduct the image restoration using the inferred veil. Finally, we improve the contrast of the restored image with various histogram equalization methods. The experimental results show that the proposed method achieves rather good defogging results.

Keywords: Image defogging, Image restoration, Atmospheric veil, Transmission, Variational approach, Euler-Lagrange equation, Image enhancement.
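
A minimal sketch of the gradient-descent step described above (Python with NumPy; the energy, its edge-preserving term, the step size and the toy input are illustrative assumptions, not the authors' exact functional):

    import numpy as np

    def estimate_veil(w, lam=0.1, step=0.2, iters=200, eps=1e-3):
        # Gradient descent on an illustrative edge-preserving energy
        #   E(v) = sum (v - w)^2 + lam * sum sqrt(|grad v|^2 + eps),
        # where w is a per-pixel "whiteness" map standing in for the measured veil.
        v = w.copy()
        for _ in range(iters):
            gy, gx = np.gradient(v)                      # image gradient
            mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
            div = np.gradient(gx / mag, axis=1) + np.gradient(gy / mag, axis=0)
            grad_e = 2.0 * (v - w) - lam * div           # functional derivative of E
            v = np.clip(v - step * grad_e, 0.0, 1.0)     # descent step ("time" parameter)
        return v

    veil = estimate_veil(np.random.rand(64, 64))         # toy input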

8111 Effective Class of Discrete Programming Problems

Authors: Kaziyev G. Z., Nabiyeva G. S., Kalizhanova A.U.

Abstract:

We present herein a concise overview of discrete programming models and methods, together with an analysis of these models and methods. On the basis of discrete programming models, a new class of problems has been elaborated and proposed, namely block-symmetry models and methods for the statement and solution of applied tasks.

Keywords: Discrete programming, block-symmetry, analysis methods, information systems development.

8110 A Comparison of Deterministic and Probabilistic Methods for Determining the Required Amount of Spinning Reserve

Authors: A. Ehsani, A. Karimizadeh, H. Fallahi, A. Jalali

Abstract:

In an electric power system, spinning reserve requirements can be determined by using deterministic and/or probabilistic measures. Although deterministic methods are usual in many systems, the application of probabilistic methods becomes increasingly important in the new environment of the electric power utility industry because of the increased uncertainty associated with competition. In this paper, 1) a new probabilistic method is presented that considers the reliability of the transmission system in a simplified manner, and 2) deterministic and probabilistic methods are compared. The studied methods are applied to the Roy Billinton Test System (RBTS).

Keywords: Reliability, Spinning Reserve, Risk, Transmission, Unit Commitment.
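
For readers unfamiliar with the probabilistic view, the sketch below (Python; the unit capacities, forced outage rates and load levels are made-up numbers, not RBTS data) computes the unit-commitment risk, i.e. the probability that the committed capacity falls below the load, which is the quantity a probabilistic reserve criterion bounds:

    from itertools import product

    def unit_commitment_risk(units, load):
        # Probability that the available capacity of the committed units
        # falls below the load, enumerating all outage states.
        # units: list of (capacity_mw, forced_outage_rate) pairs.
        risk = 0.0
        for states in product([0, 1], repeat=len(units)):    # 0 = on outage, 1 = available
            prob, cap = 1.0, 0.0
            for (c, q), s in zip(units, states):
                prob *= q if s == 0 else (1.0 - q)
                cap += c * s
            if cap < load:
                risk += prob
        return risk

    units = [(40, 0.02), (40, 0.02), (30, 0.03), (20, 0.05)]  # hypothetical committed fleet
    for load in (80, 90, 100, 110):
        print(load, unit_commitment_risk(units, load))        # risk grows as reserve shrinks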

8109 Efficient Tools for Managing Uncertainties in Design and Operation of Engineering Structures

Authors: J. Menčík

Abstract:

Actual loads, material characteristics and other quantities often differ from the design values. This can cause worse function, a shorter life or failure of a civil engineering structure, a machine, a vehicle or another appliance. The paper shows the main causes of these uncertainties and deviations and presents a systematic approach and efficient tools for their elimination or for mitigation of their consequences. Emphasis is put on the design stage, which is most important for ensuring reliability. Principles of robust design and important tools are explained, including FMEA, sensitivity analysis and probabilistic simulation methods. The lifetime prediction of long-life objects can be improved by long-term monitoring of the load response and damage accumulation in operation. The condition evaluation of engineering structures, such as bridges, is often based on visual inspection and verbal description. Here, methods based on fuzzy logic can reduce the subjective influences.

Keywords: Design, fuzzy methods, Monte Carlo, reliability, robust design, sensitivity analysis, simulation, uncertainties.
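
As an illustration of the probabilistic simulation tools mentioned above, here is a minimal Monte Carlo sketch (Python with NumPy; the limit state, the distributions and all parameter values are hypothetical) that estimates a failure probability and adds a crude one-at-a-time sensitivity check:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    # Hypothetical limit state: resistance R versus load effect S (both in kN).
    R = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)
    S = rng.normal(loc=200.0, scale=30.0, size=n)
    pf = np.mean(R - S < 0.0)                 # Monte Carlo estimate of failure probability
    print(f"estimated P_f = {pf:.5f}")

    # Crude sensitivity check: repeat with the load fixed at its mean value.
    pf_fixed_load = np.mean(R - 200.0 < 0.0)
    print(f"P_f with load fixed at its mean = {pf_fixed_load:.6f}")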

8108 Continual Learning Using Data Generation for Hyperspectral Remote Sensing Scene Classification

Authors: Samiah Alammari, Nassim Ammour

Abstract:

When a massive number of tasks is provided successively to a deep learning process, good model performance requires preserving the data of the previous tasks in order to retrain the model for each upcoming classification; otherwise, the model performs poorly due to the catastrophic forgetting phenomenon. To overcome this shortcoming, we developed a successful continual learning deep model for the classification of remote sensing hyperspectral image regions. The proposed neural network architecture encapsulates two trainable subnetworks. The first module adapts its weights by minimizing the discrimination error between the land-cover classes during the new task learning, and the second module learns to replicate the data of the previous tasks by discovering the latent data structure of the new task dataset. We conduct experiments on the Indian Pines hyperspectral image (HSI) dataset. The results confirm the capability of the proposed method.

Keywords: Continual learning, data reconstruction, remote sensing, hyperspectral image segmentation.
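
The two-subnetwork idea can be sketched as generative replay; the toy PyTorch code below (with made-up layer sizes, a plain autoencoder as the replay module and random tensors standing in for the hyperspectral data, so it is only a schematic of the training loop, not the authors' architecture) trains a classifier jointly on new-task data and replayed "old" data while the generator learns the new data structure:

    import torch
    import torch.nn as nn

    class Classifier(nn.Module):
        def __init__(self, n_bands=200, n_classes=16):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(n_bands, 128), nn.ReLU(),
                                     nn.Linear(128, n_classes))
        def forward(self, x):
            return self.net(x)

    class Replayer(nn.Module):          # simple autoencoder as the "data generator"
        def __init__(self, n_bands=200, latent=16):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_bands, latent), nn.ReLU())
            self.dec = nn.Sequential(nn.Linear(latent, n_bands))
        def forward(self, x):
            return self.dec(self.enc(x))

    clf, gen = Classifier(), Replayer()
    opt = torch.optim.Adam(list(clf.parameters()) + list(gen.parameters()), lr=1e-3)
    ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

    x_new = torch.randn(64, 200)                 # toy spectra for the new task
    y_new = torch.randint(0, 16, (64,))
    x_replay = torch.randn(64, 200)              # would come from the previous generator
    y_replay = torch.randint(0, 16, (64,))

    for _ in range(100):
        opt.zero_grad()
        loss = (ce(clf(x_new), y_new)            # learn the new task
                + ce(clf(x_replay), y_replay)    # rehearse replayed "old" data
                + mse(gen(x_new), x_new))        # generator learns the new data structure
        loss.backward()
        opt.step()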

8107 An Agent-Based Approach to Immune Modelling: Priming Individual Response

Authors: Dimitri Perrin, Heather J. Ruskin, Martin Crane

Abstract:

This study focuses on examining why the range of experience with respect to HIV infection is so diverse, especially in regard to the latency period. An agent-based approach in modelling the infection is used to extract high-level behaviour which cannot be obtained analytically from the set of interaction rules at the cellular level. A prototype model encompasses local variation in baseline properties, contributing to the individual disease experience, and is included in a network which mimics the chain of lymph nodes. The model also accounts for stochastic events such as viral mutations. The size and complexity of the model require major computational effort and parallelisation methods are used.

Keywords: HIV, Immune modelling, Agent-based system, individual response.

8106 Selecting an Advanced Creep Model or a Sophisticated Time-Integration? A New Approach by Means of Sensitivity Analysis

Authors: Holger Keitel

Abstract:

The prediction of long-term deformations of concrete and reinforced concrete structures has been a field of extensive research, and several different creep models have been developed so far. Most of the models were developed for constant concrete stresses; thus, in the case of varying stresses a specific superposition principle or time-integration, respectively, is necessary. Nowadays, when modeling concrete creep, the engineering focus is rather on the application of sophisticated time-integration methods than on choosing the more appropriate creep model. For this reason, this paper presents a method to quantify the uncertainties of creep prediction originating from the selection of the creep model or from the time-integration method. By adapting variance-based global sensitivity analysis, a methodology is developed to quantify the influence of the creep model selection or the choice of time-integration method. Applying the developed method, general recommendations on how to model creep behavior under varying stresses are given.

Keywords: Concrete creep models, time-integration methods, sensitivity analysis, prediction uncertainty.
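
A toy version of the variance-based view (Python with NumPy; the compliance values in the table are invented for illustration, not results from the paper) treats the creep-model choice and the time-integration choice as two discrete inputs of a full factorial and computes their first-order sensitivity indices:

    import numpy as np

    # Hypothetical creep predictions phi[i, j] for creep model i combined with
    # time-integration scheme j (illustrative numbers only).
    phi = np.array([[2.10, 2.18, 2.15],
                    [2.45, 2.52, 2.49],
                    [2.30, 2.33, 2.31],
                    [2.60, 2.71, 2.66]])

    total_var = phi.var()
    # First-order indices: variance of the conditional means over total variance.
    s_model       = phi.mean(axis=1).var() / total_var   # share due to model choice
    s_integration = phi.mean(axis=0).var() / total_var   # share due to integration choice
    print(f"S_model = {s_model:.2f}, S_integration = {s_integration:.2f}")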

8105 A Mathematical Representation for Mechanical Model Assessment: Numerical Model Qualification Method

Authors: Keny Ordaz-Hernandez, Xavier Fischer, Fouad Bennis

Abstract:

This article illustrates a model selection management approach for virtual prototypes in interactive simulations. In those numerical simulations, the virtual prototype and its environment are modelled as a multiagent system, where every entity (prototype, human, etc.) is modelled as an agent. In particular, virtual prototyping agents that provide mathematical models of mechanical behaviour in the form of computational methods are considered. This work argues that the selection of an appropriate model in a changing environment, supported by the models' characteristics, can be managed by the a priori determination of specific exploitation and performance measures of virtual prototype models. As different models exist to represent a single phenomenon, it is not always possible to select the best one under all possible circumstances of the environment. Instead, the most appropriate one shall be selected according to the use case. The proposed approach consists in identifying relevant metrics or indicators for each group of models (e.g. entity models, global model), formulating their qualification, analysing the performance, and applying the qualification criteria. Then, a model can be selected based on the performance prediction obtained from its qualification. The authors hope that this approach will not only inform engineers and researchers about another approach for selecting virtual prototype models, but also assist virtual prototype engineers in systematic or automatic model selection.

Keywords: Virtual prototype models, domain, qualification criterion, model qualification, model assessment, environmental modelling.

8104 Mechanical Properties of D2 Tool Steel Cryogenically Treated Using Controllable Cooling

Authors: A. Rabin, G. Mazor, I. Ladizhenski, R. Z. Shneck

Abstract:

The hardness and hardenability of AISI D2 cold-work tool steel were investigated after conventional quenching (CQ), deep cryogenic quenching (DCQ) and rapid deep cryogenic quenching heat treatments, the latter enabled by a temporary porous coating based on magnesium sulfate. Each of the cooling processes was examined from the perspective of the overall process efficiency and the heat flux in the austenite-martensite transformation range, followed by characterization of the temporary porous layer made of magnesium sulfate using confocal laser scanning microscopy (CLSM) and by measurements of surface and core hardness and hardenability using the Vickers hardness technique. The results show that the cooling rate (CR) in the austenite-martensite transformation range has a strong influence on the hardness of the studied steel.

Keywords: AISI D2, controllable cooling, magnesium sulfate coating, rapid cryogenic heat treatment, temporary porous layer.

8103 An Implementation of Fuzzy Logic Technique for Prediction of the Power Transformer Faults

Authors: Omar M. Elmabrouk, Roaa Y. Taha, Najat M. Ebrahim, Sabbreen A. Mohammed

Abstract:

Power transformers are the most crucial parts of the electrical power system and the distribution and transmission grid. They are maintained using a predictive or condition-based maintenance approach. The diagnosis of power transformer condition is performed based on Dissolved Gas Analysis (DGA). There are five main methods utilized for analyzing these gases: the International Electrotechnical Commission (IEC) gas ratio, Key Gas, Rogers gas ratio, Doernenburg, and Duval Triangle methods. Moreover, due to the importance of the transformers, there is a need for an accurate technique to diagnose, and hence predict, the transformer condition. The main objective of such a technique is to avoid transformer faults and hence to maintain the electrical power system and the distribution and transmission grid. In this paper, DGA was applied to data collected from the transformer records available at the General Electricity Company of Libya (GECOL), located in Benghazi, Libya. The Fuzzy Logic (FL) technique was implemented as a diagnostic approach based on the IEC gas ratio method. The FL technique gave better results and proved to be an accurate prediction technique for power transformer faults. This technique should also be of interest to readers and researchers concerned with FL mathematics and power transformers.

Keywords: Fuzzy logic, dissolved gas-in-oil analysis, DGA, prediction, power transformer.
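
To make the fuzzy-logic idea concrete, here is a small self-contained Python sketch (the membership shapes, ratio thresholds and rules are invented placeholders, not the IEC code table or the fuzzy system built in the paper) that fuzzifies the three IEC gas ratios and picks the fault class with the strongest rule activation:

    def trapezoid(x, a, b, c, d):
        # Standard trapezoidal membership function.
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)

    def diagnose(r1, r2, r3):
        # Toy fuzzy rules over r1 = C2H2/C2H4, r2 = CH4/H2, r3 = C2H4/C2H6.
        low_r1  = trapezoid(r1, -1, -1, 0.05, 0.15)
        high_r2 = trapezoid(r2, 0.9, 1.1, 10, 11)
        high_r3 = trapezoid(r3, 2.5, 3.5, 50, 51)
        faults = {
            "thermal fault":     min(low_r1, high_r2, high_r3),
            "partial discharge": min(low_r1, trapezoid(r2, -1, -1, 0.05, 0.15)),
            "normal":            min(low_r1, trapezoid(r2, 0.1, 0.3, 0.9, 1.0),
                                     trapezoid(r3, -1, -1, 0.8, 1.2)),
        }
        return max(faults, key=faults.get), faults

    print(diagnose(r1=0.02, r2=1.5, r3=4.0))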

8102 Methodology for Developing an Intelligent Tutoring System Based on Marzano’s Taxonomy

Authors: Joaquin Navarro Perales, Ana Lidia Franzoni Velázquez, Francisco Cervantes Pérez

Abstract:

The Mexican educational system faces diverse challenges related to the quality and coverage of education. The development of Intelligent Tutoring Systems (ITS) may help to solve some of them by helping teachers to customize their classes according to the performance of the students in online courses. In this work, we propose the adaptation of a functional ITS based on Bloom’s taxonomy, called Sistema de Apoyo Generalizado para la Enseñanza Individualizada (SAGE), to measure students’ metacognition and their emotional response based on Marzano’s taxonomy. The students and the system will share control over the advance in the course, so that students can improve their metacognitive skills. The system will not allow students to access subjects they have not yet mastered. The interaction between the system and the student will be implemented through Natural Language Processing techniques, thus avoiding the use of sensors to evaluate the student’s response. The teacher will evaluate the students’ knowledge utilization, which corresponds to the last cognitive level in Marzano’s taxonomy.

Keywords: Intelligent tutoring systems, student modelling, metacognition, affective computing, natural language processing.

8101 Experimental Study of Hyperparameter Tuning a Deep Learning Convolutional Recurrent Network for Text Classification

Authors: Bharatendra Rai

Abstract:

Sequences of words in text data have long-term dependencies and are known to suffer from the vanishing gradient problem when deep learning models are developed. Although recurrent networks such as long short-term memory networks help overcome this problem, achieving high text classification performance remains challenging. Convolutional recurrent networks, which combine the advantages of long short-term memory networks and convolutional neural networks, can be useful for improving text classification performance. However, arriving at suitable hyperparameter values for convolutional recurrent networks is still a challenging task for which fitting a model requires significant computing resources. This paper illustrates the advantages of using convolutional recurrent networks for text classification with the help of statistically planned computer experiments for hyperparameter tuning.

Keywords: Convolutional recurrent networks, hyperparameter tuning, long short-term memory networks, Tukey honest significant differences
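
The kind of network whose hyperparameters are being tuned can be sketched as follows (Python with Keras; the vocabulary size, sequence length, filter counts and unit counts are placeholder values that a designed experiment would vary, and binary classification is assumed):

    from tensorflow.keras import layers, models

    def build_conv_recurrent(vocab_size=20_000, maxlen=200,
                             embed_dim=64, filters=64, kernel_size=5, lstm_units=32):
        # Embedding -> 1D convolution -> pooling -> LSTM -> sigmoid output.
        model = models.Sequential([
            layers.Input(shape=(maxlen,)),
            layers.Embedding(vocab_size, embed_dim),
            layers.Conv1D(filters, kernel_size, activation="relu"),
            layers.MaxPooling1D(pool_size=2),
            layers.LSTM(lstm_units),
            layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        return model

    model = build_conv_recurrent()
    model.summary()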

8100 Socio-Technical Systems: Transforming Theory into Practice

Authors: L. Ngowi, N. H. Mvungi

Abstract:

This paper critically examines the evolution of socio-technical systems theory, its practices, and its challenges in system design and development. It examines concepts put forward by researchers focusing on the application of the theory in software engineering. Various methods have been developed that use socio-technical concepts based on systems engineering, without remarkable success. The main constraints are the large amount of data and the inefficient techniques used when applying the concepts in systems engineering to develop time-bound systems within a limited/controlled budget. This paper critically examines each of the methods, highlights bottlenecks and suggests the way forward. Since socio-technical systems theory only explains what to do, but not how to do it, engineers are not using the concept to save time and costs and to reduce the risks associated with new frameworks. Hence, a new framework, which can be considered a practical approach, is proposed that borrows concepts from the soft systems method, agile systems development and object-oriented analysis and design to bridge the gap between theory and practice. The approach will enable the development of systems using socio-technical systems theory and attract system engineers/software developers to use the theory in building worthwhile information systems, avoiding fragilities and hostilities in the work environment.

Keywords: Socio-technical systems, human centered design, software engineering, cognitive engineering, soft systems, systems engineering.

8099 Comparison of Full Graph Methods of Switched Circuits Solution

Authors: Zdeňka Dostálová, David Matoušek, Bohumil Brtnik

Abstract:

As there are graph methods of circuit analysis in addition to algebraic methods, it is, in theory, clearly possible to carry out the analysis of a whole switched circuit with two-phase switching exclusively by a graph method as well. This article deals with two methods of full-graph solution of switched circuits: by transformation graphs and by two-graphs. It deals with both switched-capacitor and switched-current circuits. All methods are presented in equally detailed steps so that they can be compared.

Keywords: Switched capacitors of two phases, switched currents of two phases, transformation graph, two-graph, Mason's formula, voltage transfer, summary graph.

8098 A Survey in Techniques for Imbalanced Intrusion Detection System Datasets

Authors: Najmeh Abedzadeh, Matthew Jacobs

Abstract:

An intrusion detection system (IDS) is a software application that monitors malicious activities and generates alerts if any are detected. However, most network activities in IDS datasets are normal, and the relatively small number of attacks makes the available data imbalanced. Consequently, cyber-attacks can hide inside a large number of normal activities, and machine learning algorithms have difficulty learning and classifying the data correctly. In this paper, a comprehensive literature review is conducted on different types of algorithms both for implementing the IDS and for correcting the imbalanced IDS dataset. The most prominent approaches are machine learning (ML), deep learning (DL), the synthetic minority over-sampling technique (SMOTE), and reinforcement learning (RL). Most of the research uses the CSE-CIC-IDS2017, CSE-CIC-IDS2018, and NSL-KDD datasets for evaluating their algorithms.

Keywords: IDS, intrusion detection system, imbalanced datasets, sampling algorithms, big data.
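
Of the balancing techniques surveyed, SMOTE is the easiest to demonstrate; the sketch below (Python with scikit-learn and imbalanced-learn; the synthetic 95%/5% dataset merely stands in for an imbalanced IDS dataset) oversamples the minority "attack" class to parity:

    from collections import Counter
    from sklearn.datasets import make_classification
    from imblearn.over_sampling import SMOTE

    # Toy imbalanced "IDS-like" data: 95% normal traffic, 5% attacks (synthetic).
    X, y = make_classification(n_samples=5000, n_features=20,
                               weights=[0.95, 0.05], random_state=0)
    print("before:", Counter(y))

    X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
    print("after :", Counter(y_res))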

8097 Exploring Additional Intention Predictors within Dietary Behavior among Type 2 Diabetes

Authors: D. O. Omondi, M. K. Walingo, G. M. Mbagaya

Abstract:

Objective: This study explored the possibility of integrating Health Belief concepts as additional predictors of the intention to adopt a recommended diet-category within the Theory of Planned Behavior (TPB). Methods: The study adopted a Sequential Exploratory Mixed Methods approach. Qualitative data were generated on attitude, subjective norm, perceived behavioral control and perceptions of predetermined diet-categories, including perceived susceptibility, perceived benefits, perceived severity and cues to action. Synthesis of the qualitative data was done using a constant comparative approach during phase 1. A survey tool developed from the qualitative results was used to collect information on the same concepts from 237 eligible Type 2 diabetics. Data analysis included the use of Structural Equation Modeling in Analysis of Moment Structures to explore the possibility of including perceived susceptibility, perceived benefits, perceived severity and cues to action as additional intention predictors in a single nested model. Results: Two models, the nested model based on the traditional TPB {χ2 = 223.3, df = 77, p = .02, χ2/df = 2.9; TLI = .93; CFI = .91; RMSEA (90CI) = .090 (.039, .146)} and the newly proposed Planned Behavior Health Belief (PBHB) model {χ2 = 743.47, df = 301, p = .019; TLI = .90; CFI = .91; RMSEA (90CI) = .079 (.031, .14)}, passed the goodness-of-fit tests based on the common fit indicators used. Conclusion: The newly developed PBHB model ranked higher than the traditional TPB model with reference to the chi-square ratios (PBHB: χ2/df = 2.47, p = 0.19 against TPB: χ2/df = 2.9, p = 0.02). The integrated model can be used to motivate Type 2 diabetics towards healthy eating.

Keywords: Theory, intention, predictors, mixed methods design.

8096 Through Biometric Card in Romania: Person Identification by Face, Fingerprint and Voice Recognition

Authors: Hariton N. Costin, Iulian Ciocoiu, Tudor Barbu, Cristian Rotariu

Abstract:

In this paper three different approaches for person verification and identification, i.e. by means of fingerprints, face and voice recognition, are studied. Face recognition uses parts-based representation methods and a manifold learning approach; the assessment criterion is recognition accuracy. The techniques under investigation are: a) Local Non-negative Matrix Factorization (LNMF); b) Independent Component Analysis (ICA); c) NMF with sparse constraints (NMFsc); d) Locality Preserving Projections (Laplacianfaces). Fingerprint detection was approached by classical minutiae (small graphical patterns) matching through image segmentation, using a structural approach and a neural network as the decision block. As to voice/speaker recognition, mel cepstral and delta-delta mel cepstral analysis were used as the main methods in order to construct a supervised speaker-dependent voice recognition system. The final decision (e.g. "accept/reject" for a verification task) is taken by using a majority voting technique applied to the three biometrics. The preliminary results, obtained for medium-sized databases of fingerprints, faces and voice recordings, indicate the feasibility of our study and an overall recognition precision (about 92%) permitting the utilization of our system for a future complex biometric card.

Keywords: Biometry, image processing, pattern recognition, speech analysis.
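
The fusion rule at the end is simple enough to show directly; here is a minimal Python sketch of majority voting over the three biometric decisions (the modality names, identity labels and the two-out-of-three threshold are illustrative assumptions):

    from collections import Counter

    def majority_vote(decisions):
        # Fuse independent biometric decisions by simple majority voting.
        # decisions: dict of modality -> claimed identity (or "reject").
        label, votes = Counter(decisions.values()).most_common(1)[0]
        return label if votes >= 2 else "reject"   # require agreement of 2 of 3 modalities

    # Hypothetical per-modality decisions for one verification attempt.
    print(majority_vote({"face": "user_17", "fingerprint": "user_17", "voice": "reject"}))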

8095 Land Subsidence and Fissuring Due to Ground Water Withdrawal in Yazd-Ardakan Basin, Central Iran

Authors: Azat Eslamizadeh, Shahram Samanirad

Abstract:

The Yazd-Ardakan basin in Central Iran has two separate aquifers. The shallow unconfined aquifer supplies 40 qanats. The deep saturated confined aquifer is the main water storage. Due to over-withdrawal, the water table has been declining during the last 25 years. A recent study shows that the drawdown of the aquifer is about 16 meters and the land subsidence is 0.5-1.2 meters. Long, deep cracks are found just above the aquifer and swallow the irrigation water and floods. Although the predominant crack direction is NW-SE and can be compared to the main direction of the Yazd-Ardakan basin, there is no direct evidence for a relation between the land subsidence and the huge cracks. Large-scale water pumping has decreased the water pressure in the aquifer. The pressure decline disturbed the balance and increased the pressure on the overlying sediments, so porosity decreased and compaction started. The sediment compaction then developed, slowly producing land subsidence and some huge cracks.

Keywords: Land subsidence, Iran, Yazd, aquifer

8094 Application of a Modified BCR Approach to Investigate the Mobility and Availability of Trace Elements (As, Ba, Cd, Co, Cr, Cu, Mo, Ni, Pb, Zn, and Hg) from a Solid Residue Matrix Designed for Soil Amendment

Authors: Mikko Mäkelä, Risto Pöykiö, Gary Watkins, Hannu Nurmesniemi, Olli Dahl

Abstract:

Trace element speciation of an integrated soil amendment matrix was studied with a modified BCR sequential extraction procedure. The analysis included pseudo-total concentration determinations according to USEPA 3051A and relevant physicochemical properties by standardized methods. Based on the results, the soil amendment matrix possessed neutralization capacity comparable to commercial fertilizers. Additionally, the pseudo-total concentrations of all trace elements included in the Finnish regulation for agricultural fertilizers were lower than the respective statutory limit values. According to chemical speciation, the lability of trace elements increased in the following order: Hg < Cr < Co < Cu < As < Zn < Ni < Pb < Cd < V < Mo < Ba. The validity of the BCR approach as a tool for chemical speciation was confirmed by the additional acid digestion phase. Recovery of trace elements during the procedure assured the validity of the approach and indicated good quality of the analytical work.

Keywords: BCR, bioavailability, trace element, industrial residue, sequential extraction

8093 An Overview of Islanding Detection Methods in Photovoltaic Systems

Authors: Wei Yee Teoh, Chee Wei Tan

Abstract:

The issue of unintentional islanding in PV grid interconnection remains a challenge in grid-connected photovoltaic (PV) systems. This paper presents an overview of popular anti-islanding detection methods practically applied in grid-connected PV systems. Anti-islanding methods can generally be classified into four major groups: passive methods, active methods, hybrid methods and communication-based methods. Active methods have been the preferred detection technique over the years due to the very small non-detection zone (NDZ) in small-scale distributed generation. The passive method is comparatively simpler than the active method in terms of circuitry and operation; however, it suffers from a large NDZ that significantly reduces its performance. Communication-based methods inherit the advantages of active and passive methods with reduced drawbacks. The hybrid method, which evolved from the combination of active and passive methods, has been proven by many researchers to achieve accurate anti-islanding detection. For each of the studied anti-islanding methods, the operation is analysed and the advantages and disadvantages are compared and discussed. It is difficult to pinpoint a generic method for a specific application, because most of the methods discussed are governed by the nature of the application and system-dependent elements. This study concludes that the setup and operation cost is the vital factor in anti-islanding method selection in order to minimize the compromise between cost and system quality.

Keywords: Active method, hybrid method, islanding detection, passive method, photovoltaic (PV), utility method

8092 A Novel Design Approach for Mechatronic Systems Based On Multidisciplinary Design Optimization

Authors: Didier Casner, Jean Renaud, Remy Houssin, Dominique Knittel

Abstract:

In this paper, a novel approach for the multidisciplinary design optimization (MDO) of complex mechatronic systems is presented. This approach is part of a global project aiming to include the MDO aspect within an innovative design process. As a first step, the paper considers MDO as a redesign approach limited to parametric optimization. After defining and introducing the different keywords, the proposed method, which is based on the V-model commonly used in mechatronics, is described.

Keywords: mechatronics, Multidisciplinary Design Optimization (MDO), multiobjective optimization, engineering design.

8091 A Quadratic Approach for Generating Pythagorean Triples

Authors: P. K. Rahul Krishna, S. Sandeep Kumar, Jayanthi Sunder Raj

Abstract:

The article explores one of the important relations between numbers: the Pythagorean triples (triplets), which find their application in distance measurement, in the construction of roads, towers and buildings, and wherever the Pythagorean theorem applies. The Pythagorean triples are numbers that satisfy the condition: in a given set of three natural numbers, the sum of the squares of two of the natural numbers is equal to the square of the other natural number. There are numerous methods and equations to obtain the triplets, each with its own merits and demerits. Here, a quadratic approach for generating triples uses the hypotenuse-leg difference method. The advantage is that the variables are few and, finally, only three independent variables are present.

Keywords: Arithmetic progression, hypotenuse leg difference method, natural numbers, Pythagorean triplets, quadratic equation.
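
A small Python sketch of the hypotenuse-leg difference idea (the enumeration bounds are arbitrary; only the identity a^2 = c^2 - b^2 = d(2b + d) with d = c - b is used): fixing a leg a and the difference d gives b = (a^2 - d^2) / (2d) whenever the division is exact.

    def triples_by_leg_difference(max_a=30):
        # Generate Pythagorean triples from the hypotenuse-leg difference d = c - b:
        # a^2 = c^2 - b^2 = d(2b + d), hence b = (a^2 - d^2) / (2d).
        found = []
        for a in range(3, max_a + 1):
            for d in range(1, a):                       # need d < a so that b > 0
                num = a * a - d * d
                if num % (2 * d) == 0:
                    b = num // (2 * d)
                    found.append((a, b, b + d))
        return found

    for a, b, c in triples_by_leg_difference(15):
        assert a * a + b * b == c * c                    # e.g. (3, 4, 5), (5, 12, 13), ...
        print(a, b, c)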

8090 A Comparison of Inflow Generation Methods for Large-Eddy Simulation

Authors: Francois T. Pronk, Steven J. Hulshoff

Abstract:

A study of various turbulent inflow generation methods was performed to compare their relative effectiveness for LES computations of turbulent boundary layers. This study confirmed the quality of the turbulent information produced by the family of recycling and rescaling methods which take information from within the computational domain. Furthermore, more general inflow methods also proved applicable to such simulations, with a precursor-like inflow and a random inflow augmented with forcing planes showing promising results.

Keywords: Boundary layer, Flat plate, Inflow modeling, LES

8089 Implementation of Student-Centered Learning Approach in Building Surveying Course

Authors: Amal A. Abdel-Sattar

Abstract:

The curriculum of the architecture department at Prince Sultan University includes the ‘Building Surveying’ course, which is usually part of the civil engineering courses. As a fundamental requirement, the course demands a strong background in mathematics and physics, which are not usually preferred subjects for architecture students, and many of them do not give these courses the required and necessary attention during their preparatory year before commencing their architectural studies. This paper introduces the concept and the methodology of the student-centered learning approach in the course of building surveying for architects. One of the major outcomes is the improvement in the students’ involvement in the course and how this covers and strengthens their analytical weak points and improves their mathematical skills. The study was conducted over three semesters with a total of 99 students. The effectiveness of the student-centered learning approach is studied using a student survey at the end of each semester and teacher observations. The survey showed great acceptance of these methods by the students. The teachers also observed a great improvement in the students’ mathematical abilities and how much keener they became to attend the classes, which was clearly reflected in the low absence record.

Keywords: Architecture, building surveying, student-centered learning, teaching, and learning.

8088 Alternative Methods to Rank the Impact of Object Oriented Metrics in Fault Prediction Modeling using Neural Networks

Authors: Kamaldeep Kaur, Arvinder Kaur, Ruchika Malhotra

Abstract:

The aim of this paper is to rank the impact of object-oriented (OO) metrics in fault prediction modeling using Artificial Neural Networks (ANNs). Past studies on the empirical validation of object-oriented metrics as fault predictors using ANNs have focused on the predictive quality of neural networks versus standard statistical techniques. In this empirical study we turn our attention to the capability of ANNs in ranking the impact of these explanatory metrics on fault proneness. In the ANN data analysis approach, there is no clear method of ranking the impact of individual metrics. Five ANN-based techniques that rank object-oriented metrics in predicting the fault proneness of classes are studied: i) the overall connection weights method, ii) Garson's method, iii) the partial derivatives method, iv) the input perturbation method, and v) the classical stepwise method. We develop and evaluate different prediction models based on the ranking of the metrics by the individual techniques. The models based on the overall connection weights and partial derivatives methods have been found to be the most accurate.

Keywords: Artificial Neural Networks (ANNS), Backpropagation, Fault Prediction Modeling.
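
Two of the ranking techniques named above are easy to sketch from a trained single-hidden-layer network's weight matrices (Python with NumPy; the weights here are random stand-ins for trained values, and the single-output layout is an assumption):

    import numpy as np

    def garson_importance(w_in, w_out):
        # Garson's algorithm for a network with input->hidden weights w_in
        # (n_inputs x n_hidden) and hidden->output weights w_out (n_hidden,).
        c = np.abs(w_in) * np.abs(w_out)          # contribution of input i via hidden j
        r = c / c.sum(axis=0, keepdims=True)      # share of each input within hidden unit j
        imp = r.sum(axis=1)
        return imp / imp.sum()                    # normalized relative importance

    def connection_weights_importance(w_in, w_out):
        # Overall (Olden) connection-weights method: signed products summed over hidden units.
        return (w_in * w_out).sum(axis=1)

    rng = np.random.default_rng(0)
    w_in, w_out = rng.normal(size=(6, 4)), rng.normal(size=4)   # toy "trained" weights
    print(garson_importance(w_in, w_out))
    print(connection_weights_importance(w_in, w_out))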

8087 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

It is frequently observed that data arising in our environment have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of order 1 and order 2 (MQL1, MQL2) and penalized quasi-likelihood of order 1 and order 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset; therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. The power of the test was adequate for most of the combinations in all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset, and the performance of the test was compared for each model.

Keywords: Goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, type-I error, penalized quasi-likelihood, power, quasi-likelihood.

8086 Numerical Methods versus Bjerksund and Stensland Approximations for American Options Pricing

Authors: Marasovic Branka, Aljinovic Zdravka, Poklepovic Tea

Abstract:

Numerical methods like binomial and trinomial trees and finite difference methods can be used to price a wide range of option contracts for which there are no known analytical solutions. American options are the most famous options of that kind. Besides numerical methods, American options can be valued with approximation formulas, like the Bjerksund-Stensland formulas from 1993 and 2002. When the value of an American option is approximated by the Bjerksund-Stensland formulas, the computer time spent to carry out that calculation is very short. The computer time spent using numerical methods can vary from less than one second to several minutes or even hours. However, to be able to conduct a comparative analysis of numerical methods and the Bjerksund-Stensland formulas, we limit the computer calculation time of the numerical methods to less than one second. Therefore, we ask the question: which method will be most accurate at nearly the same computer calculation time?

Keywords: Bjerksund and Stensland approximations, Computational analysis, Finance, Options pricing, Numerical methods.
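
For concreteness, the numerical side of the comparison can be sketched with a Cox-Ross-Rubinstein binomial tree for an American put (Python; the contract parameters and step count are arbitrary example values, and the Bjerksund-Stensland closed-form approximation would target the same price):

    import math

    def american_put_binomial(s0, k, r, sigma, t, steps=200):
        # Cox-Ross-Rubinstein binomial tree with early-exercise check at every node.
        dt = t / steps
        u = math.exp(sigma * math.sqrt(dt))
        d = 1.0 / u
        p = (math.exp(r * dt) - d) / (u - d)        # risk-neutral up probability
        disc = math.exp(-r * dt)
        # option values at maturity
        values = [max(k - s0 * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]
        # backward induction
        for i in range(steps - 1, -1, -1):
            for j in range(i + 1):
                cont = disc * (p * values[j + 1] + (1.0 - p) * values[j])
                exercise = max(k - s0 * u**j * d**(i - j), 0.0)
                values[j] = max(cont, exercise)
        return values[0]

    print(american_put_binomial(s0=100, k=100, r=0.05, sigma=0.2, t=1.0))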

8085 Causal Relation Identification Using Convolutional Neural Networks and Knowledge Based Features

Authors: Tharini N. de Silva, Xiao Zhibo, Zhao Rui, Mao Kezhi

Abstract:

Causal relation identification is a crucial task in information extraction and knowledge discovery. In this work, we present two approaches to causal relation identification. The first is a classification model trained on a set of knowledge-based features. The second is a deep learning based approach training a model using convolutional neural networks to classify causal relations. We experiment with several different convolutional neural networks (CNN) models based on previous work on relation extraction as well as our own research. Our models are able to identify both explicit and implicit causal relations as well as the direction of the causal relation. The results of our experiments show a higher accuracy than previously achieved for causal relation identification tasks.

Keywords: Causal relation identification, convolutional neural networks, natural language processing, machine learning.
