Search results for: NP-complete problem.

502 A Modularized Design for Multi-Drivers Off-Road Vehicle Driving-Line and its Performance Assessment

Authors: Yi Jianjun, Sun Yingce, Hu Diqing, Li Chenggang

Abstract:

A modularized design approach can facilitate the modeling of complex systems and support behavior analysis and simulation in an iterative and thus complex engineering process by using encapsulated submodels of components and of their interfaces. It can therefore improve design efficiency and simplify the solution of complicated problems. A multi-driver off-road vehicle is a comparatively complicated system. The driving-line is a core part of a vehicle and contributes significantly to its performance. Multi-driver off-road vehicles have complex driving-lines, so their performance depends heavily on the driving-line. A typical off-road vehicle's driving-line system consists of the torque converter, transmission, transfer case and driving axles, which transfer the power generated by the engine and distribute it effectively to the driving wheels according to the road condition. Based on these main functions, this paper puts forward a modularized approach to the design and evaluation of a vehicle's driving-line, which can be used to estimate the performance of the driving-line effectively during the concept design stage. Through appropriate analysis and assessment methods, an optimal design can be reached. The method has been applied to practical vehicle design; it improves design efficiency and makes it convenient to assess and validate the performance of a vehicle, especially a multi-driver off-road vehicle.

Keywords: Heavy-loaded Off-road Vehicle, Power Driving-line, Modularized Design, Performance Assessment.

501 Bilingual Gaming Kit to Teach English Language through Collaborative Learning

Authors: Sarayu Agarwal

Abstract:

This paper aims to teach English (secondary language) by bridging the understanding between the regional language (primary language) and the English language (secondary language). Here the primary language is the one a person has learned from birth or within the critical period, while the secondary language is any other language one learns or speaks. The paper also focuses on evolving old teaching methods into a contemporary participatory model of learning and teaching. Pilot studies were conducted to gauge students' knowledge of the English language. Teachers and students were interviewed and their academic curriculum was assessed as part of the initial study. An extensive literature study and design thinking principles were used to devise a solution to the problem. The objective is met using a holistic learning kit/card game to teach children word recognition, word pronunciation, word spelling and word writing. The implication of the paper is a noticeable improvement in the understanding and grasp of the English language. With the increasing usage and applicability of English as a second language (ESL) worldwide, the paper is relevant due to its easy replicability to any other primary or secondary language. The future scope of this paper is transforming the idea of participatory learning into self-regulated learning methods. With the upcoming government learning centres in rural areas and the provision of smart devices such as tablets, the development of the card games into digital applications seems very feasible.

Keywords: English as a second language, vocabulary-building, learning through gamification.

500 Optimal Opportunistic Maintenance Policy for a Two-Unit System

Authors: Nooshin Salari, Viliam Makis

Abstract:

This paper presents a maintenance policy for a system consisting of two units. Unit 1 is gradually deteriorating and is subject to soft failure. Unit 2 has a general lifetime distribution and is subject to hard failure. The condition of unit 1 is monitored periodically, and the unit is considered failed when its deterioration level reaches or exceeds a critical level N. When unit 2 fails, the system is considered failed, and unit 2 is correctively replaced by the next inspection epoch. Unit 1 or unit 2 is preventively replaced when the deterioration level of unit 1 or the age of unit 2 exceeds the corresponding preventive maintenance (PM) level. At the time of corrective or preventive replacement of unit 2, there is an opportunity to replace unit 1 if its deterioration level has reached the opportunistic maintenance (OM) level. If unit 2 fails in an inspection interval, the system stops operating even though unit 1 has not failed. A mathematical model is derived to find the preventive and opportunistic replacement levels for unit 1 and the preventive replacement age for unit 2 that minimize the long-run expected average cost per unit time. The problem is formulated and solved in the semi-Markov decision process (SMDP) framework. A numerical example is provided to illustrate the performance of the proposed model, and a comparison with an optimal policy without an opportunistic maintenance level for unit 1 is carried out.

Keywords: Condition-based maintenance, opportunistic maintenance, preventive maintenance, two-unit system.

499 Bridging Stress Modeling of Composite Materials Reinforced by Fibers Using Discrete Element Method

Authors: Chong Wang, Kellem M. Soares, Luis E. Kosteski

Abstract:

The problem of toughening in brittle materials reinforced by fibers is complex, involving all of the mechanical properties of the fibers, the matrix and the fiber/matrix interface, as well as the geometry of the fibers. The development of new numerical methods appropriate to toughening simulation and analysis is therefore necessary. In this work, we have performed simulations and analysis of toughening in a brittle matrix reinforced by randomly distributed fibers by means of the discrete element method. First, we put forward a mechanical model of the toughening contributed by random fibers. Then, with a numerical program, we investigated the stress, damage and bridging force in the composite material when a crack appeared in the brittle matrix. From the results obtained, we conclude that: (i) fibers of high strength and low elastic modulus are beneficial to toughening; (ii) fibers with a relatively high elastic modulus compared to the matrix may result in substantial matrix damage due to the spalling effect; (iii) employment of high-strength synthetic fibers is a good option for toughening. We expect that the combination of the discrete element method (DEM) with the finite element method (FEM) can increase the versatility and efficiency of the software developed. The present work can guide the design of high-performance ceramic composites through the optimization of the parameters.

Keywords: Bridging stress, discrete element method, fiber reinforced composites, toughening.

498 Analysis Model for the Relationship of Users, Products, and Stores on Online Marketplace Based on Distributed Representation

Authors: Ke He, Wumaier Parezhati, Haruka Yamashita

Abstract:

Online marketplaces in the e-commerce industry, such as Rakuten and Alibaba, have recently become some of the most popular shopping sites in Asia. On these shopping websites, consumers can select and purchase products from a large number of stores. Additionally, consumers have to register their name, age, gender and other information in advance in order to access their account. Therefore, a method for analyzing consumer preferences from both the store side and the product side is required. This study uses the Doc2Vec method, which has been studied in the field of natural language processing. Doc2Vec has been widely used to extract semantic relationships between documents and words in document classification, and the concept is applicable to representing the relationship between users (as documents) and items (as words). The problem, however, is that one more factor, the shops, needs to be considered in Doc2Vec. More precisely, a method for analyzing the relationship between consumers, stores and products is required. The purpose of our study is to combine the Doc2Vec analysis of users and shops with that of users and items in the same feature space. This method enables the calculation of similar shops and items for each user. In this study, we analyze real data accumulated in an online marketplace and demonstrate the efficiency of the proposed method.

Keywords: Doc2Vec, marketing, online marketplace, recommendation system.
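As a rough illustration of the idea described above (not the authors' implementation), the gensim library's Doc2Vec can place users, shops and products in one embedding space by treating each user's purchase history as a tagged document; the purchase records below are hypothetical.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Hypothetical purchase histories: each user's "document" is the sequence of
# shop and product tokens they interacted with.
purchases = {
    "user_1": ["shop_A", "item_101", "shop_B", "item_203"],
    "user_2": ["shop_A", "item_101", "item_102"],
    "user_3": ["shop_B", "item_203", "item_204"],
}

corpus = [TaggedDocument(words=tokens, tags=[user])
          for user, tokens in purchases.items()]

model = Doc2Vec(corpus, vector_size=32, min_count=1, epochs=50)

# Users (document tags) and shops/items (words) now share one feature space,
# so similar shops and items can be ranked for each user.
print(model.dv.most_similar("user_1", topn=2))   # nearest users
print(model.wv.most_similar("shop_A", topn=2))   # nearest shops/items
```

In practice, the similarity between a user vector and an item or shop vector would drive the recommendations sketched in the abstract.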

497 Particle Filter Supported with the Neural Network for Aircraft Tracking Based on Kernel and Active Contour

Authors: Mohammad Izadkhah, Mojtaba Hoseini, Alireza Khalili Tehrani

Abstract:

In this paper, we present a new method for tracking flying targets in color video sequences based on contour and kernel information. The aim of this work is to overcome the problem of losing the target under changing illumination, large displacement, changing speed and occlusion. The proposed method consists of three steps: estimating the target location with a particle filter, segmenting the target region using a neural network, and finding the exact contour with a greedy snake algorithm. We use both region and contour information to create the target candidate model, and this model is dynamically updated during tracking. To avoid the accumulation of errors during updating, the target region is given to a perceptron neural network to separate the target from the background; its output is then used for the exact calculation of the size and center of the target and as the initial contour for the greedy snake algorithm, which finds the exact target edge. The proposed algorithm has been tested on a database that contains many challenges, such as the high speed and agility of aircraft, background clutter, occlusions and camera movement. The experimental results show that the use of the neural network increases the accuracy of tracking and segmentation.

Keywords: Video tracking, particle filter, greedy snake, neural network.
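For readers unfamiliar with the first step, the sketch below shows a generic bootstrap particle filter estimating a 2-D target centre; the Gaussian likelihood is a hypothetical stand-in for the kernel/colour similarity used in the paper, and the measurements are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def likelihood(particles, measurement, sigma=10.0):
    """Weight particles by closeness to the measured target centre."""
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

n = 500
particles = rng.uniform(0, 640, size=(n, 2))     # spread over a 640-pixel-wide frame
weights = np.full(n, 1.0 / n)

for measurement in [np.array([320.0, 240.0]), np.array([328.0, 236.0])]:
    particles += rng.normal(0.0, 5.0, size=particles.shape)   # simple motion model
    weights *= likelihood(particles, measurement)
    weights /= weights.sum()
    estimate = weights @ particles                             # weighted mean position
    idx = rng.choice(n, size=n, p=weights)                     # resampling step
    particles, weights = particles[idx], np.full(n, 1.0 / n)
    print("estimated centre:", estimate)
```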

496 Latent Semantic Inference for Agriculture FAQ Retrieval

Authors: Dawei Wang, Rujing Wang, Ying Li, Baozi Wei

Abstract:

An FAQ system helps users find answers to the questions that puzzle them, but research on Chinese FAQ systems is still at the theoretical stage. This paper presents an approach to semantic inference for FAQ mining. To enhance efficiency, a small pool of candidate question-answer pairs is retrieved from the system for the follow-up work, according to the agriculture-domain concepts extracted from the user input. Input queries or questions are converted into four parts: the question word segment (QWS), the verb segment (VS), the agricultural concept segment (CS) and the auxiliary segment (AS). A semantic matching method is presented to estimate the similarity between the semantic segments of the query and those of the questions in the candidate pool. A thesaurus constructed from HowNet, a Chinese knowledge base, is adopted for word similarity measurement in the matcher. The questions are classified into eleven intention categories using predefined question-stemming keywords. For FAQ mining, given a query, the question part and the answer part of an FAQ question-answer pair are each matched against the input query. Finally, the probabilities estimated from these two parts are integrated and used to choose the most likely answer for the input query. These approaches were evaluated on an agriculture FAQ system. Experimental results indicate that the proposed approach outperformed the FAQ-Finder system in agriculture FAQ retrieval.

Keywords: FAQ, Semantic Inference, Ontology.

495 The Study of the Intelligent Fuzzy Weighted Input Estimation Method Combined with the Experiment Verification for the Multilayer Materials

Authors: Ming-Hui Lee, Tsung-Chien Chen, Tsu-Ping Yu, Horng-Yuan Jang

Abstract:

The intelligent fuzzy weighted input estimation method (FWIEM) can be applied to the inverse heat conduction problem (IHCP) to estimate the unknown time-varying heat flux of multilayer materials, as presented in this paper. The feasibility of the method is verified by a temperature measurement experiment. The experimental module is built from a copper sample stacked with four aluminum samples of different thicknesses. The bottoms of the copper samples are heated by a standard heat source, and the temperatures on the tops of the aluminum samples are measured with thermocouples. The temperature measurements are then used as the inputs to the presented method to estimate the heat flux at the bottoms of the copper samples. The influence on the estimation of the sample thickness, the process noise covariance Q, the weighting factor γ, the sampling time interval Δt and the spatial discretization interval Δx is investigated through the experimental verification. The results show that the method is efficient and robust in estimating the unknown time-varying heat input of multilayer materials.

Keywords: Multilayer Materials, Input Estimation Method, IHCP, Heat Flux.

494 Maximum Common Substructure Extraction in RNA Secondary Structures Using Clique Detection Approach

Authors: Shih-Yi Chao

Abstract:

The similarity comparison of RNA secondary structures is important in studying the functions of RNAs. In recent years, most existing tools have represented secondary structures by tree-based presentations and calculated similarity by tree alignment distance. In contrast to previous approaches, we propose a new method based on a maximum clique detection algorithm to extract the maximum common structural elements of the compared RNA secondary structures. A new graph-based similarity measurement and a maximum common subgraph detection procedure for comparing RNA secondary structures are introduced. Given two RNA secondary structures, the proposed algorithm determines the score of the structural similarity by comparing vertex labels, labelled edges and the exact degree of each vertex, and then extracts the common structural elements of the compared secondary structures based on a maximum clique formulation of the problem. The graph-based model can also work with the NC-IUB code to perform pattern-based searching. Therefore, it can be used to identify functional RNA motifs in databases or to extract common substructures of complex RNA secondary structures. We have demonstrated the performance of the proposed algorithm through experimental results. It provides a new way of comparing RNA secondary structures and is helpful to those interested in structural bioinformatics.

Keywords: Clique detection, labeled vertices, RNA secondary structures, subgraph, similarity.
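To illustrate the general clique-based idea (a sketch, not the authors' algorithm): a maximum common subgraph of two labelled graphs corresponds to a maximum clique in their modular product graph, which can be found with networkx. The toy graphs below stand in for labelled RNA secondary-structure graphs.

```python
from itertools import product
import networkx as nx

def modular_product(g1, g2):
    """Vertices are label-compatible pairs; edges require matching adjacency."""
    p = nx.Graph()
    pairs = [(u, v) for u, v in product(g1, g2)
             if g1.nodes[u]["label"] == g2.nodes[v]["label"]]
    p.add_nodes_from(pairs)
    for (u1, v1), (u2, v2) in product(pairs, pairs):
        if u1 == u2 or v1 == v2:
            continue
        if g1.has_edge(u1, u2) == g2.has_edge(v1, v2):
            p.add_edge((u1, v1), (u2, v2))
    return p

g1 = nx.Graph([(1, 2), (2, 3), (3, 1)])
g2 = nx.Graph([("a", "b"), ("b", "c")])
nx.set_node_attributes(g1, "N", "label")   # single toy label for every vertex
nx.set_node_attributes(g2, "N", "label")

prod = modular_product(g1, g2)
best = max(nx.find_cliques(prod), key=len)  # maximum clique = largest common substructure
print(best)
```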

493 3D Liver Segmentation from CT Images Using a Level Set Method Based on a Shape and Intensity Distribution Prior

Authors: Nuseiba M. Altarawneh, Suhuai Luo, Brian Regan, Guijin Tang

Abstract:

Liver segmentation from medical images poses more challenges than analogous segmentations of other organs. This contribution introduces a liver segmentation method based on a series of computed tomography images. Overall, we present a novel method for segmenting the liver by coupling density matching with shape priors. Density matching signifies a tracking method that operates by maximizing the Bhattacharyya similarity measure between the photometric distribution of an estimated image region and a model photometric distribution. Density matching controls the direction of the evolution process and slows down the evolving contour in regions with weak edges. The shape prior improves the robustness of density matching and discourages the evolving contour from exceeding the liver's boundaries in regions with weak edges. The model is implemented using a modified distance regularized level set (DRLS) model. The experimental results show that the method achieves satisfactory results. By comparison with the original DRLS model, it is evident that the proposed model is more effective in addressing the over-segmentation problem. Finally, we gauge the performance of our model using metrics comprising accuracy, sensitivity and specificity.

Keywords: Bhattacharyya distance, distance regularized level set (DRLS) model, liver segmentation, level set method.
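A minimal sketch of the density-matching ingredient mentioned above: the Bhattacharyya coefficient between a model intensity histogram and the histogram of the currently segmented region (toy data, not the DRLS implementation).

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Similarity of two normalised histograms; 1.0 means identical distributions."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

rng = np.random.default_rng(4)
model_hist, _ = np.histogram(rng.normal(120, 10, 5000), bins=64, range=(0, 255))
region_hist, _ = np.histogram(rng.normal(125, 12, 5000), bins=64, range=(0, 255))

# A level-set evolution would adjust the contour to increase this similarity.
print(bhattacharyya_coefficient(model_hist.astype(float), region_hist.astype(float)))
```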

492 A Bibliometric Assessment on Sustainability and Clustering

Authors: Fernanda M. Assef, Maria Teresinha A. Steiner, David Gabriel F. de Barros

Abstract:

Review studies are useful for the analysis of research problems. Among the types of review documents, bibliometric studies are common. This type of study often helps the global visualization of a research problem and helps academics worldwide to better understand the context of a research area. In this document, a bibliometric view of clustering techniques and sustainability problems is presented. The authors aimed to identify which sustainability issues most often use clustering techniques and which sustainability issue is currently the most impactful in research. During the bibliometric analysis, we found 10 groups of research in clustering applications for sustainability issues: Energy; Environmental; Non-urban Planning; Sustainable Development; Sustainable Supply Chain; Transport; Urban Planning; Water; Waste Disposal; and Others. Moreover, by analyzing the citations of each group, it was discovered that the Environmental group can be classified as the most impactful research cluster in the area. After a content analysis of each paper classified in the Environmental group, it was found that the k-means technique is preferred for solving sustainability problems with clustering methods, since it appeared most often among the documents. The authors conclude that a bibliometric assessment can help indicate a research gap on waste disposal, which was the group with the fewest publications, and the most impactful research on environmental problems.

Keywords: Bibliometric assessment, clustering, sustainability, territorial partitioning.
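For illustration only (made-up abstracts, not the reviewed corpus), the k-means technique the review highlights can be applied to paper abstracts via TF-IDF features with scikit-learn.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "renewable energy consumption clustering of regions",
    "waste disposal site selection using k-means",
    "urban planning and sustainable transport zones",
    "water quality monitoring stations grouped by pollution profile",
]

X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)   # cluster assignment per abstract
```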

491 Stochastic Subspace Modelling of Turbulence

Authors: M. T. Sichani, B. J. Pedersen, S. R. K. Nielsen

Abstract:

Turbulence of the incoming wind field is of paramount importance to the dynamic response of civil engineering structures. Hence, reliable stochastic models of the turbulence should be available from which time series can be generated for dynamic response and structural safety analysis. In the paper, an empirical cross-spectral density function for the along-wind turbulence component over the wind field area is taken as the starting point. The spectrum is spatially discretized in terms of a Hermitian cross-spectral density matrix for the turbulence state vector, which turns out not to be positive definite. Since the succeeding state space and ARMA modelling of the turbulence rely on the positive definiteness of the cross-spectral density matrix, the problem of the non-positive definiteness of such matrices is first addressed and suitable treatments are proposed. From the adjusted positive definite cross-spectral density matrix, a frequency response matrix is constructed which determines the turbulence vector as a linear filtration of Gaussian white noise. Finally, an accurate state space modelling method is proposed which allows selection of an appropriate model order and estimation of a state space model for the vector turbulence process incorporating its phase spectrum in one stage, and its results are compared with a conventional ARMA modelling method.

Keywords: Turbulence, wind turbine, complex coherence, state space modelling, ARMA modelling.
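One common remedy for the non-positive-definiteness issue described above (a sketch under that assumption, not necessarily the treatment adopted in the paper) is to clip the negative eigenvalues of the Hermitian cross-spectral density matrix before the state-space/ARMA fitting.

```python
import numpy as np

def nearest_psd(S, eps=1e-10):
    """Return a Hermitian positive (semi-)definite approximation of S."""
    S = 0.5 * (S + S.conj().T)            # enforce Hermitian symmetry
    w, V = np.linalg.eigh(S)              # real eigenvalues for Hermitian matrices
    w = np.clip(w, eps, None)             # remove negative/zero eigenvalues
    return (V * w) @ V.conj().T

# Toy 3x3 "cross-spectral" matrix that has one negative eigenvalue.
S = np.array([[1.00, 0.90, 0.80],
              [0.90, 1.00, 0.95],
              [0.80, 0.95, 0.70]])
S_psd = nearest_psd(S)
print(np.linalg.eigvalsh(S_psd))          # all eigenvalues are now >= eps
```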

490 Fuzzy Wavelet Packet based Feature Extraction Method for Multifunction Myoelectric Control

Authors: Rami N. Khushaba, Adel Al-Jumaily

Abstract:

The myoelectric signal (MES) is one of the biosignals used to help humans control equipment. Recent approaches to MES classification for controlling prosthetic devices using pattern recognition techniques revealed two problems: first, the classification performance of the system starts to degrade as the number of motion classes to be classified increases; second, the additional, more complicated methods used to solve the first problem increase the computational cost of a multifunction myoelectric control system. In an effort to solve these problems and to achieve a feasible design for real-time implementation with high overall accuracy, this paper presents a new method for feature extraction in MES recognition systems. The method extracts features by applying the Wavelet Packet Transform (WPT) to the MES from multiple channels, and then employs the fuzzy c-means (FCM) algorithm to generate a measure that judges the suitability of features for classification. Finally, Principal Component Analysis (PCA) is used to reduce the size of the data before computing the classification accuracy with a multilayer perceptron neural network. The proposed system produces powerful classification results (99% accuracy) using only a small portion of the original feature set.

Keywords: Biomedical Signal Processing, Data Mining and Information Extraction, Machine Learning, Rehabilitation.
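A rough sketch of the signal-processing stages named above (WPT features, PCA, MLP classifier), with the fuzzy c-means feature-scoring step omitted and synthetic signals standing in for multi-channel MES recordings; it is illustrative only.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def wpt_energy_features(signal, wavelet="db4", level=3):
    """Energy of each terminal wavelet-packet node as a feature vector."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    return np.array([np.sum(node.data ** 2) for node in wp.get_level(level, order="natural")])

# Two synthetic "motion classes" (placeholders for real MES data).
class_a = [np.sin(0.10 * np.arange(256)) + 0.1 * rng.standard_normal(256) for _ in range(20)]
class_b = [rng.standard_normal(256) for _ in range(20)]

X = np.array([wpt_energy_features(s) for s in class_a + class_b])
y = [0] * 20 + [1] * 20

X_reduced = PCA(n_components=4).fit_transform(X)            # dimensionality reduction
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_reduced, y)
print("training accuracy:", clf.score(X_reduced, y))
```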

489 Qualitative Survey on Managing Building Maintenance Projects

Authors: Edmond W.M. Lam, Albert P.C. Chan, Daniel W.M. Chan

Abstract:

Buildings are valuable assets that provide people with shelter for work, leisure and rest. After years of attack by the weather, buildings deteriorate and need proper maintenance in order to fulfill the requirements and satisfaction of their users. Poorly managed buildings not only give a negative image of the city itself, but also pose potential hazards to the health and safety of the general public. As a result, the management of maintenance projects has played an important role in cities like Hong Kong, where the problem of urban decay has drawn much attention. However, most research has focused on managing new construction, and little research effort has been put into maintenance projects. Given their short duration and more diversified nature of work, repair and maintenance works are found to be more difficult to monitor and regulate than new works. Project participants may face problems in running maintenance projects which should be investigated so that proper strategies can be established. This paper aims to provide a thorough analysis of the problems of running maintenance projects. A review of the literature on the characteristics of building maintenance projects was first conducted, forming a solid basis for the empirical study. Results on the problems and difficulties of running maintenance projects from the viewpoint of industry practitioners will also be delivered, with a view to formulating effective strategies for managing maintenance projects successfully.

Keywords: characteristics, problems, building maintenance, Hong Kong

488 A Study on Improving the Flow Capacity of the Valves

Authors: A. G. Pradeep, Gorantla Giridhar Kumar, Vijay Turaga, Vinod Srinivasa

Abstract:

The major problem in flow control valves is a low flow capacity (Cv), which reduces the overall efficiency of the flow circuit. Designers continuously work to improve the Cv of a valve, but they need to validate their design ideas for improving it. The traditional method of prototyping and testing takes a lot of time; CFD provides quick and accurate validation, along with flow visualization that is not possible with the traditional testing method. We have developed a method to predict the Cv value using CFD analysis by iterating on various boundary conditions and solver settings and by carrying out grid convergence studies to establish a correlation between the CFD model and test data. The present study investigates three different ideas put forward by the designers for improving the flow capacity of the valves: reducing the cage thickness, changing the port position, and using a parabolic plug to guide the flow. Using CFD, we analyzed all design changes with the established methodology and evaluated their effect on the valve Cv. We further optimized the wetted surface of the valve by suggesting a design modification to the lower part of the valve to make the flow more streamlined. We found that changing the cage thickness and port position has little impact on the valve Cv, while the combination of the optimized wetted surface and the introduction of the parabolic plug improved the Cv of the valve significantly.

Keywords: Flow control valves, flow capacity, CFD simulations, design validation.
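For context, the flow capacity referred to above is the standard liquid flow coefficient, Cv = Q · sqrt(SG / ΔP), with Q in US gallons per minute, ΔP the pressure drop in psi and SG the specific gravity; the numbers in the sketch are illustrative only.

```python
from math import sqrt

def flow_coefficient(q_gpm: float, dp_psi: float, sg: float = 1.0) -> float:
    """Valve flow coefficient Cv for liquid service."""
    return q_gpm * sqrt(sg / dp_psi)

# 150 gpm of water across a 9 psi pressure drop gives Cv = 50.
print(flow_coefficient(q_gpm=150.0, dp_psi=9.0, sg=1.0))
```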

487 Determining Food Habits in Süleymanpasa Town of Tekirdag City, Turkey

Authors: Emine Yilmaz, Ismail Yilmaz, Harun Uran

Abstract:

Food-related problems have been among the leading problems of society, especially in recent years. The issue affects society as a whole, encompassing the supply of the foodstuffs necessary for individuals to perform their physiological and biological functions, their amount and composition, their effects on health, and their distribution among individuals. This study was conducted in order to determine the sensitivities and criteria, in their preference of foodstuffs, of people with different socio-economic backgrounds living in Süleymanpasa Town of Tekirdag City. The research data were collected through interviews with the 300 individuals within the scope of the study and surveys administered using convenience sampling. According to the results, quality ranks first among the factors that affect consumers when buying foodstuffs. Consumers stated that they try to be careful not to buy food sold outdoors; among the foods sold outdoors, breakfast foods were the most preferred. Meat and meat products were found to be the foodstuffs consumers are most selective about when buying. Because they are generally aware that the foodstuffs consumed in human nutrition may affect their health negatively, consumers expressed that they pay close attention to their diets, and this affects their purchase preferences.

Keywords: Consumption, food safety, consumer behavior, purchase preferences.

486 An Effective Decision-Making Strategy Based on Multi-Objective Optimization for Commercial Vehicles in Highway Scenarios

Authors: Weiming Hu, Xu Li, Xiaonan Li, Zhong Xu, Li Yuan, Xuan Dong

Abstract:

Maneuver decision-making plays a critical role in high-performance intelligent driving. This paper proposes a risk assessment-based decision-making network (RADMN) to address the problem of driving strategy for commercial vehicles. RADMN integrates two networks, aiming to identify the degree of collision and rollover risk and to provide decisions that ensure the effectiveness and reliability of the driving strategy. In the risk assessment module, the risk degrees of backward collision, forward collision and rollover are quantified for hazard recognition. In the decision module, a deep reinforcement learning algorithm based on multi-objective optimization (DRL-MOO) is designed, which comprehensively considers the risk degree and motion states of each traffic participant. To evaluate the performance of the proposed framework, Prescan/Simulink joint simulations were conducted in highway scenarios. Experimental results validate the effectiveness and reliability of the proposed RADMN. The output driving strategy can guarantee safety and provides key technical support for the realization of autonomous driving of commercial vehicles.

Keywords: Decision-making strategy, risk assessment, multi-objective optimization, commercial vehicle.

485 The Proof of Analogous Results for Martingales and Partial Differential Equations Options Price Valuation Formulas Using Stochastic Differential Equation Models in Finance

Authors: H. D. Ibrahim, H. C. Chinwenyi, A. H. Usman

Abstract:

Valuing derivatives (options, futures, swaps, forwards, etc.) is a difficult task in financial mathematics. Two ways this problem can be effectively resolved in finance are the martingale approach and the partial differential equation (PDE) approach, each of which yields its own option price valuation formula. This paper examines two different stochastic financial models: the Constant Elasticity of Variance (CEV) model and the Black-Karasinski term structure model. Assuming their respective option price valuation formulas, we prove that the martingale and PDE option price valuation formulas for the two Stochastic Differential Equation (SDE) models are analogous. This was accomplished by applying the Girsanov theorem to define an Equivalent Martingale Measure (EMM) and by using the Feynman-Kac theorem. The results show a systematic proof of the correspondence between the two option price valuation formulas, beginning with the martingale option price formula and arriving back at the Black-Scholes parabolic PDE, and vice versa.

Keywords: Option price valuation, Martingales, Partial Differential Equations, PDEs, Equivalent Martingale Measure, Girsanov Theorem, Feynman-Kac Theorem, European Put Option.
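As a numerical illustration of the martingale/PDE correspondence in the plain Black-Scholes setting (not the CEV or Black-Karasinski models examined in the paper), the discounted risk-neutral expectation of a European put payoff agrees with the closed-form solution of the pricing PDE.

```python
import numpy as np
from math import exp, log, sqrt
from scipy.stats import norm

S0, K, r, sigma, T = 100.0, 95.0, 0.05, 0.2, 1.0

# PDE route: closed-form Black-Scholes European put price.
d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
put_pde = K * exp(-r * T) * norm.cdf(-d2) - S0 * norm.cdf(-d1)

# Martingale route: discounted expectation under the equivalent martingale measure.
rng = np.random.default_rng(0)
Z = rng.standard_normal(1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * sqrt(T) * Z)
put_mc = exp(-r * T) * np.maximum(K - ST, 0.0).mean()

print(put_pde, put_mc)   # the two prices agree up to Monte Carlo error
```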

484 Investigating Iraqi EFL University Students' Productive Knowledge of Grammatical Collocations in English

Authors: Adnan Z. Mkhelif

Abstract:

Grammatical collocations (GCs) are word combinations containing a preposition or a grammatical structure, such as an infinitive (e.g. smile at, interested in, easy to learn, etc.). Such collocations tend to be difficult for Iraqi EFL university students (IUSs) to master. To help address this problem, it is important to identify the factors causing it. This study aims at investigating the effects of L2 proficiency, the frequency of GCs and their transparency on IUSs' productive knowledge of GCs. The study involves 112 undergraduate participants with different proficiency levels, learning English in formal contexts in Iraq. The data collection instruments include (but are not limited to) a productive knowledge test (designed by the researcher using the British National Corpus (BNC)), as well as the grammar part of the Oxford Placement Test (OPT). The findings show that all the above-mentioned factors have significant effects on IUSs' productive knowledge of GCs. In addition to establishing evidence of which factors of L2 learning might be relevant to learning GCs, it is hoped that the findings of the present study will contribute to more effective teaching methods that can better address and help overcome the problems IUSs encounter in learning GCs. The study is thus hoped to have significant theoretical and pedagogical implications for researchers, syllabus designers and teachers of English as a foreign/second language.

Keywords: Corpus linguistics, frequency, grammatical collocations, L2 vocabulary learning, productive knowledge, proficiency, transparency.

483 Street Begging and Its Psychosocial Social Effects in Ibadan Metropolis, Oyo State, Nigeria

Authors: Temitope M. Ojo, Titilayo A. Benson

Abstract:

This study investigated street begging and its psychosocial effects in Ibadan Metropolis, Oyo State, Nigeria. Four research questions were used in carrying out the study. The instrument used for data collection was a self-developed questionnaire administered face-to-face. The results revealed a high level of awareness of the causes of street begging among the respondents, who mentioned several factors contributing to it. However, respondents disagreed that lack of education is a factor contributing to street begging in Nigeria. The psychosocial effects of street begging identified by the respondents are the development of an inferiority complex, lack of social interaction, loss of self-respect and dignity, an increased mindset of poverty and loss of self-confidence. Solutions to street begging identified by the respondents include the provision of rehabilitation centers, the provision of food for students in Islamic schools and a monthly survival allowance. Specific policies and other legislative frameworks are needed in terms of age, sex, disability and family-related issues to effectively address the begging problem. It is therefore recommended that policy planners adopt multi-faceted, multi-targeted and multi-tiered approaches if they are to have any impact on the lives of street beggars in all four categories. In this regard, both preventative and responsive interventions are needed instead of rehabilitative solutions for each category of street beggars.

Keywords: Beggars, begging, psychosocial effect, respondents, street begging.

482 An Efficient Biometric Cryptosystem using Autocorrelators

Authors: R. Bremananth, A. Chitra

Abstract:

Cryptography provides a secure manner of information transmission over an insecure channel. It authenticates messages based on the key, not on the user, and it requires a lengthy key to encrypt and decrypt the messages being sent and received. Such keys can, however, be guessed or cracked, and maintaining and sharing lengthy, random keys in the enciphering and deciphering process is a critical problem in cryptography systems. A new approach is described for generating a crypto key from a person's iris pattern. In the biometric field, a template created by a biometric algorithm can only be authenticated by the same person. Among biometric templates, iris features can efficiently distinguish individuals and produce fewer false positives in larger populations. This type of iris code distribution provides low intra-class variability, which aids the cryptosystem in confidently decrypting messages with an exact match of the iris pattern. In the proposed approach, the iris features are extracted using multi-resolution wavelets, producing a 135-bit iris code for each subject that is used for encrypting and decrypting the messages. Autocorrelators are used to recall the original messages from the partially corrupted data produced by the decryption process. The approach is intended to resolve the repudiation and key management problems. Results were analyzed for both a conventional iris cryptography system (CIC) and a non-repudiation iris cryptography system (NRIC), showing that the new approach provides considerably high authentication accuracy in the enciphering and deciphering processes.

Keywords: Autocorrelators, biometrics cryptography, iris patterns, wavelets.

481 Detection and Classification of Faults on Parallel Transmission Lines Using Wavelet Transform and Neural Network

Authors: V. S. Kale, S. R. Bhide, P. P. Bedekar, G. V. K. Mohan

Abstract:

The protection of parallel transmission lines has been a challenging task due to mutual coupling between the adjacent circuits of the line. This paper presents a novel scheme for the detection and classification of faults on parallel transmission lines. The proposed approach uses a combination of wavelet transform and neural network to solve the problem. While the wavelet transform is a powerful mathematical tool that can be employed as a fast and very effective means of analyzing power system transient signals, an artificial neural network has the ability to classify non-linear relationships between measured signals by identifying the different patterns of the associated signals. The proposed algorithm consists of a time-frequency analysis of fault-generated transients using the wavelet transform, followed by pattern recognition using an artificial neural network to identify the type of fault. MATLAB/Simulink is used to generate fault signals and verify the correctness of the algorithm. The adaptive discrimination scheme is tested by simulating different types of fault and varying the fault resistance, fault location and fault inception time on a given power system model. The simulation results show that the proposed scheme for fault diagnosis is able to classify all the faults on the parallel transmission line rapidly and correctly.

Keywords: Artificial neural network, fault detection and classification, parallel transmission lines, wavelet transform.
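A minimal sketch of the two stages named above, wavelet decomposition of a transient followed by a neural-network classifier; the synthetic signals are placeholders for the MATLAB/Simulink fault recordings, so it only illustrates the workflow.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

def wavelet_features(signal, wavelet="db4", level=4):
    """Energies of the detail coefficients from a discrete wavelet decomposition."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs[1:]])   # skip the approximation band

# Placeholder transients: class 0 ~ healthy waveform, class 1 ~ fault burst.
healthy = [np.sin(0.05 * np.arange(512)) + 0.05 * rng.standard_normal(512) for _ in range(30)]
faulty = [np.sin(0.05 * np.arange(512)) + rng.standard_normal(512) for _ in range(30)]

X = np.array([wavelet_features(s) for s in healthy + faulty])
y = [0] * 30 + [1] * 30

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```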

480 Expectation-Confirmation Model of Information System Continuance: A Meta-Analysis

Authors: Hui-Min Lai, Chin-Pin Chen, Yung-Fu Chang

Abstract:

The expectation-confirmation model (ECM) is one of the most widely used models for evaluating information system continuance, and it has been extended to other study backgrounds or expanded with other theoretical perspectives. However, combining the ECM with other theories or investigating different study backgrounds may produce disparities and thus generate inaccurate conclusions. Habit is considered to be an important factor that influences the user's continuance behavior. This paper therefore critically examines seven pairs of relationships from the original ECM and the habit variable. A meta-analysis was used to trace the development of ECM research over the last 10 years, drawing on journal articles and conference papers published in 2005–2014. Forty-six journal articles and 19 conference papers were selected for analysis. The results confirm our prediction of a high effect size for the seven pairs of relationships (ranging from r = 0.386 to r = 0.588). Furthermore, meta-analytic structural equation modeling was performed to test all relationships simultaneously. The results show that habit had a significant positive effect on continuance intention at p <= 0.05 and that the six other pairs of relationships were significant at p < 0.10. Based on the findings, we refined our original research model, and an alternative model is proposed for understanding and predicting information system continuance. Some theoretical implications are also discussed.

Keywords: Expectation-confirmation theory, expectation-confirmation model, meta-analysis, meta-analytic structural equation modeling.
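As an illustration of how such correlation effect sizes are commonly pooled (a generic Fisher-z, inverse-variance sketch with made-up study correlations and sample sizes, not the paper's data):

```python
import numpy as np

r = np.array([0.45, 0.52, 0.38, 0.60])   # hypothetical per-study correlations
n = np.array([120, 210, 95, 300])        # hypothetical sample sizes

z = np.arctanh(r)                        # Fisher z transform of each correlation
w = n - 3                                # inverse of Var(z) = 1 / (n - 3)
z_pooled = np.sum(w * z) / np.sum(w)     # inverse-variance weighted mean
r_pooled = np.tanh(z_pooled)             # back-transform to a correlation

print(round(float(r_pooled), 3))
```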

479 Effect on the Performance of the Nano-Particulate Graphite Lubricant in the Turning of AISI 1040 Steel under Variable Machining Conditions

Authors: S. Srikiran, Dharmala Venkata Padmaja, P. N. L. Pavani, R. Pola Rao, K. Ramji

Abstract:

Technological advancements in the development of cutting tools and coolant/lubricant chemistry have enhanced the machining capabilities of hard materials under higher machining conditions. The generation of high temperatures at the cutting zone during machining is one of the most important and pertinent problems, which adversely affects the tool life and surface finish of the machined components. Generally, cutting fluids and solid lubricants are used to overcome the problem of heat generation, but they do not address it effectively. With technological advancements in the field of tribology, nano-level particulate solid lubricants are nowadays being used in machining operations, especially in turning and grinding. The present investigation analyses the effect of using nano-particulate graphite powder as a lubricant in the turning of AISI 1040 steel under variable machining conditions and studies its effect on the cutting forces, tool temperature and surface roughness of the machined component. Experiments revealed that the cutting forces and tool temperature increase, resulting in a decrease in surface quality, as the size of the nano-particulate graphite lubricant decreases.

Keywords: Solid lubricant, graphite, minimum quantity lubrication, nanoparticles.

478 Finite Element Study on Corono-Radicular Restored Premolars

Authors: Sandu L., Topală F., Porojan S.

Abstract:

Restoration of endodontically treated teeth is a common problem in dentistry, related to the fractures occurring in such teeth and to the concentration of forces. Little information has been available regarding the effect of variations in basic preparation guidelines on stress distribution. To date, there is still no agreement in the literature about which material or technique can optimally restore endodontically treated teeth. The aim of the present study was to evaluate the influence of core height and restoration materials on corono-radicular restored upper first premolars. The first step of the study was to create 3D models in order to analyze the teeth, the dowel and core restorations, and the overlying full ceramic crowns. The FEM model was obtained by importing the solid model into the ANSYS finite element analysis software. An occlusal load of 100 N was applied, and the stresses occurring in the restorations and tooth structures were calculated. Numerical simulations provide a biomechanical explanation for the stress distribution in prosthetically restored teeth. Within the limitations of the present study, it was found that core height has no important influence on the stress generated in corono-radicular restored premolars, and that the cervical regions of the teeth and restorations are subjected to the highest stress concentrations.

Keywords: 3D models, finite element analysis, dowel and core restoration, full ceramic crown, premolars, structural simulations.

477 Steady State Power Flow Calculations with STATCOM under Load Increase Scenario and Line Contingencies

Authors: A. S. Telang, P. P. Bedekar

Abstract:

Flexible AC transmission system controllers play an important role in controlling line power flow and in improving the voltage profiles of a power system network. They can be used to increase the reliability and efficiency of transmission and distribution systems. The modeling of these FACTS controllers in power flow calculations has become a challenging research problem. This paper presents a simple and systematic approach for steady state power flow calculations of a power system with a STATCOM (Static Synchronous Compensator), showing how the STATCOM can be systematically incorporated in conventional power flow calculations. The main contribution of this paper is to investigate this approach for two special conditions, i.e. a load increase pattern incorporating load changes (active, reactive, and both active and reactive) at all load buses simultaneously, and line contingencies under such load changes. Such an investigation proves to be relevant for determining a strategy for the optimal placement of the STATCOM to enhance voltage stability. The performance has been evaluated on many standard IEEE test systems, and the results for the standard IEEE 30-bus test system are presented here.

Keywords: Load flow analysis, Newton-Raphson (N-R) power flow, Flexible AC transmission system, FACTS, Static synchronous compensator, STATCOM, voltage profile.

476 Accurate Visualization of Graphs of Functions of Two Real Variables

Authors: D. G. Zeitoun, Thierry Dana-Picard

Abstract:

The study of a real function of two real variables can be supported by visualization using a Computer Algebra System (CAS). One type of constraint of such systems is due to the algorithms implemented, which yield continuous approximations of the given function by interpolation. This often masks discontinuities of the function and can produce strange plots that are not compatible with the mathematics. In recent years, point-based geometry has gained increasing attention as an alternative surface representation, both for efficient rendering and for flexible geometry processing of complex surfaces. In this paper, we present different artifacts created by mesh surfaces near discontinuities and propose a point-based method that controls and reduces these artifacts. A least squares penalty method for the automatic generation of a mesh that controls the behavior of the chosen function is presented. The special feature of this method is its ability to improve the accuracy of the surface visualization near a set of interior points where the function may be discontinuous. The method is formulated as a minimax problem, and the non-uniform mesh is generated using an iterative algorithm. Results show that, for large poorly conditioned matrices, the new algorithm gives more accurate results than the classical preconditioned conjugate gradient algorithm.

Keywords: Function singularities, mesh generation, point allocation, visualization, collocation least squares method, Augmented Lagrangian method, Uzawa's algorithm, preconditioned conjugate gradient method.

475 Evaluation of Coastal Erosion in the Jurisdiction of the Municipalities of Puerto Colombia and Tubará, Atlántico, Colombia in Google Earth Engine with Landsat and Sentinel 2 Images

Authors: Francisco Javier Reyes Salazar, Héctor Mauricio Ramírez

Abstract:

Coastal zones are home to mangrove swamps, coral reefs and seagrass ecosystems, which are among the most biodiverse and fragile on the planet. These areas support a great diversity of marine life; they are also extraordinarily important for humans in the provision of food, water, wood and other associated goods and services, and they contribute to climate regulation. The lack of an automated model that generates information on the dynamics of coastline change and coastal erosion is identified as the central problem. In this paper, coastlines from 1984 to 2020 were determined on the Google Earth Engine platform from Landsat and Sentinel-2 images. We then computed the Modified Normalized Difference Water Index (MNDWI) and used the Digital Shoreline Analysis System (DSAS) v5.0. Starting from the 2020 coastline, the 10-year prediction (year 2031) was determined, with erosion of 238.32 hectares and accretion of 181.96 hectares; the 20-year prediction (year 2041) shows erosion of 544.04 hectares and accretion of 133.94 hectares. The erosion and accretion of Playa Muelle in the municipality of Puerto Colombia were established; it will register the highest erosion. The land cover that presented the greatest change was that of artificialized territories.

Keywords: Coastline, coastal erosion, MNDWI, Google Earth Engine, Colombia.
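A hedged sketch of the MNDWI step in the Earth Engine Python API (not the authors' script): the band names assume the Landsat 8 Collection 2 Level-2 product, the rectangle only approximates the study area, and the shoreline extraction and DSAS analysis are not reproduced here.

```python
import ee
ee.Initialize()

# Approximate bounding box around Puerto Colombia / Tubara, Atlantico (illustrative only).
roi = ee.Geometry.Rectangle([-75.10, 10.80, -74.80, 11.10])

image = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
         .filterBounds(roi)
         .filterDate("2020-01-01", "2020-12-31")
         .sort("CLOUD_COVER")
         .first())

# MNDWI = (Green - SWIR1) / (Green + SWIR1); water pixels have positive values.
mndwi = image.normalizedDifference(["SR_B3", "SR_B6"]).rename("MNDWI")
water_mask = mndwi.gt(0)

# The water/land boundary of this mask is the coastline that would feed the
# DSAS transect analysis described in the abstract.
print(water_mask.getInfo()["bands"][0]["id"])
```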

474 A TIPSO-SVM Expert System for Efficient Classification of TSTO Surrogates

Authors: Ali Sarosh, Dong Yun-Feng, Muhammad Umer

Abstract:

Fully reusable spaceplanes do not yet exist. This implies that design qualification for an optimized, highly integrated forebody-inlet configuration of the booster-stage vehicle cannot be based on archival data from other spaceplanes. Therefore, this paper proposes a novel TIPSO-SVM expert system methodology, and a non-trivial problem related to the optimization and classification of the hypersonic forebody-inlet configuration, in conjunction with the mass model of the two-stage-to-orbit (TSTO) vehicle, is solved. The hybrid-heuristic machine learning methodology is based on a two-step improved particle swarm optimizer (TIPSO) algorithm and a two-step support vector machine (SVM) data classification method. The efficacy of the method is tested by first evolving an optimal configuration for the hypersonic compression system using the TIPSO algorithm and thereafter classifying the results using the two-step SVM method. In the first step, extensive but unclassified mass-model training data for multiple optimized configurations are segregated and pre-classified for learning by the SVM algorithm. In the second step, the TIPSO-optimized mass-model data are classified using the two-step SVM classification. The results show a remarkable improvement in the configuration and mass model, along with the sizing parameters.

Keywords: TIPSO-SVM expert system, TIPSO algorithm, two-step SVM method, aerothermodynamics, mass-modeling, TSTO vehicle.
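As a generic, simplified stand-in for the hybrid idea (a plain particle swarm optimizer tuning an SVM's C and gamma on synthetic data), not a reproduction of the two-step TIPSO or two-step SVM refinements:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

def fitness(params):
    C, gamma = 10.0 ** params            # search (C, gamma) in log10 space
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

n_particles, n_iter = 10, 10
pos = rng.uniform(-2, 2, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()]

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3.0, 3.0)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()]

print("best (C, gamma):", 10.0 ** gbest, "cv accuracy:", pbest_val.max())
```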

473 Sorting Primitives and Genome Rearrangementin Bioinformatics: A Unified Perspective

Authors: Swapnoneel Roy, Minhazur Rahman, Ashok Kumar Thakur

Abstract:

Bioinformatics and computational biology involve the use of techniques from applied mathematics, informatics, statistics, computer science, artificial intelligence, chemistry and biochemistry to solve biological problems, usually at the molecular level. Research in computational biology often overlaps with systems biology. Major research efforts in the field include sequence alignment, gene finding, genome assembly, protein structure alignment, protein structure prediction, prediction of gene expression and protein-protein interactions, and the modeling of evolution. Various global rearrangements of permutations, such as reversals and transpositions, have recently become of interest because of their applications in computational molecular biology. A reversal is an operation that reverses the order of a substring of a permutation. A transposition is an operation that swaps two adjacent substrings of a permutation. The problem of determining the smallest number of reversals required to transform a given permutation into the identity permutation is called sorting by reversals. Similar problems can be defined for transpositions and other global rearrangements. In this work, we study some genome rearrangement primitives. We show how a genome is modelled by a permutation, introduce some of the existing primitives together with the lower and upper bounds on them, and then provide a comparison of the introduced primitives.

Keywords: Sorting Primitives, Genome Rearrangements, Transpositions, Block Interchanges, Strip Exchanges.
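A small sketch of the reversal primitive discussed above: a simple selection-style procedure that sorts a permutation by reversals, moving one element into place per step. It only illustrates the operation; finding the minimum number of reversals for unsigned permutations is NP-hard, which is why bounds and approximation algorithms are studied.

```python
def sort_by_reversals(perm):
    """Sort a permutation of 1..n using reversals; returns (sorted list, reversal ops)."""
    perm, ops = list(perm), []
    for i in range(len(perm)):
        j = perm.index(i + 1)                         # where the value i+1 currently sits
        if j != i:
            perm[i:j + 1] = reversed(perm[i:j + 1])   # the reversal primitive
            ops.append((i, j))
    return perm, ops

# Example: [3, 1, 2, 4] is sorted with the reversals (0, 1) and (1, 2).
print(sort_by_reversals([3, 1, 2, 4]))
```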
