Search results for: deployment methods
3827 Fault Detection via Stability Analysis for the Hybrid Control Unit of HEVs
Authors: Kyogun Chang, Yoon Bok Lee
Abstract:
Fault detection determines fault existence and detection time. This paper discusses two-layered fault detection methods intended to enhance reliability and safety. The two layers consist of the fault detection methods of the component-level controllers and of the system-level controllers. Component-level controllers detect faults using limit checking, model-based detection, and data-driven detection, while system-level controllers perform detection by stability analysis, which can detect unknown changes. The system-level controllers compare the stability-based detection results with the fault signals from the lower-level controllers. This paper addresses fault detection methods via stability analysis and suggests fault detection criteria for nonlinear systems. The fault detection method is applied to the hybrid control unit of a military hybrid electric vehicle so that the hybrid control unit can detect faults of the traction motor.
Keywords: Two-layered fault detection, stability analysis, fault-tolerant control.
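As a minimal illustration of the component-level limit checking mentioned in this abstract (not the authors' implementation; the signal, band limits, and injected fault below are hypothetical), a Python sketch of a limit-checking fault detector:

import numpy as np

def limit_check(signal, lower, upper):
    """Return (fault detected, detection index) for a monitored signal.

    A fault is declared at the first sample leaving the allowed band;
    its index gives the detection time in samples.
    """
    signal = np.asarray(signal, dtype=float)
    violations = np.flatnonzero((signal < lower) | (signal > upper))
    detected = violations.size > 0
    detection_index = int(violations[0]) if detected else None
    return detected, detection_index

# Hypothetical traction-motor current trace with an injected offset fault.
t = np.linspace(0.0, 1.0, 1000)
current = 50.0 * np.sin(2.0 * np.pi * 5.0 * t)
current[700:] += 40.0                      # simulated fault
print(limit_check(current, lower=-60.0, upper=60.0))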
3826 A Comparison of Shunt Active Power Filter Control Methods under Non-Sinusoidal and Unbalanced Voltage Conditions
Authors: H. Abaali, M. T. Lamchich, M. Raoufi
Abstract:
A variety of reference current identification methods exist for the shunt active power filter (SAPF). Among them, the instantaneous active and reactive power method, the instantaneous active and reactive current method, and the synchronous detection method are evaluated and compared under ideal, non-sinusoidal and unbalanced voltage conditions. The SAPF performances for the investigated identification methods are tested with a non-linear load. The simulation results, obtained using the Matlab Power System Blockset Toolbox on a complete structure, are presented and discussed.
Keywords: Shunt active power filter, current perturbation, non-sinusoidal and unbalanced voltage conditions.
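As a rough sketch of one of the compared approaches, the instantaneous active and reactive power (p-q) identification, the following Python fragment computes the Clarke transform and the instantaneous powers from three-phase samples (sign and scaling conventions vary in the literature; this is not the authors' Matlab model, and the waveforms are illustrative). The compensation reference currents would then be obtained by removing the DC component of p and inverting the transform.

import numpy as np

def clarke(a, b, c):
    """Power-invariant Clarke transform of three-phase quantities."""
    alpha = np.sqrt(2.0 / 3.0) * (a - 0.5 * b - 0.5 * c)
    beta = np.sqrt(2.0 / 3.0) * (np.sqrt(3.0) / 2.0) * (b - c)
    return alpha, beta

def instantaneous_pq(va, vb, vc, ia, ib, ic):
    """Instantaneous real power p and imaginary power q (p-q theory)."""
    v_alpha, v_beta = clarke(va, vb, vc)
    i_alpha, i_beta = clarke(ia, ib, ic)
    p = v_alpha * i_alpha + v_beta * i_beta
    q = v_beta * i_alpha - v_alpha * i_beta
    return p, q

# Hypothetical balanced sinusoidal example at 50 Hz.
t = np.linspace(0.0, 0.04, 400)
w = 2.0 * np.pi * 50.0
va, vb, vc = (np.cos(w * t + ph) for ph in (0.0, -2 * np.pi / 3, 2 * np.pi / 3))
ia, ib, ic = (np.cos(w * t + ph - 0.5) for ph in (0.0, -2 * np.pi / 3, 2 * np.pi / 3))
p, q = instantaneous_pq(va, vb, vc, ia, ib, ic)
print(p.mean(), q.mean())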
3825 Exploring the Need to Study the Efficacy of VR Training Compared to Traditional Cybersecurity Training
Authors: Shaila Rana, Wasim Alhamdani
Abstract:
Effective cybersecurity training is of the utmost importance, given the plethora of attacks that continue to increase in complexity and ubiquity. VR cybersecurity training remains a starkly understudied discipline, and studies that evaluate its effectiveness against traditional methods are needed. An engaging and interactive platform can support knowledge retention of the training material, so an effective form of cybersecurity training is required to support a culture of cybersecurity awareness. Measurements of effectiveness have varied across existing studies, with surveys and observations being the two most utilized forms of evaluation. Further research is therefore needed to evaluate whether VR cybersecurity training is more effective than traditional training. This paper proposes a methodology to compare the two cybersecurity training methods and their effectiveness. The proposed framework includes developing both VR and traditional cybersecurity training and delivering them to at least 100 users. A quiz and a survey will be administered and statistically analyzed to determine whether there is a difference in knowledge retention and user satisfaction. The aim of this paper is to draw attention to the need to study VR cybersecurity training and its effectiveness compared to traditional training methods, and to contribute to the cybersecurity training field by providing an effective way to train users for security awareness. If VR training is deemed more effective, this could create a new direction for cybersecurity training practices.
Keywords: Virtual reality cybersecurity training, VR cybersecurity training, traditional cybersecurity training, evaluating efficacy.
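A minimal sketch of the kind of statistical comparison the proposed framework describes, assuming quiz scores are collected for the two groups (the scores below are simulated, not study data); Welch's t-test checks for a difference in mean knowledge retention:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical quiz scores (0-100) for two groups of 50 trainees each.
vr_scores = rng.normal(loc=78.0, scale=10.0, size=50)
traditional_scores = rng.normal(loc=72.0, scale=10.0, size=50)

# Welch's t-test: does mean knowledge retention differ between methods?
t_stat, p_value = stats.ttest_ind(vr_scores, traditional_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")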
3824 Analysis of Train Passenger Seat Using Ergonomic Function Deployment Method
Authors: Robertoes K. K. Wibowo, Siswoyo Soekarno, Irma Puspitasari
Abstract:
Indonesian people rely on trains for transportation, and many use the economy class because it is cheaper and keeps a more reliable schedule than other ground transportation. Nevertheless, the economy class passenger seat raises inconvenience issues for passengers, because the seat design on economy class trains was not adjusted to the anthropometry of Indonesian people. Research therefore needs to be conducted on the design of economy class train seats. The purpose of this research is to produce an ergonomic design for economy class passenger seats. The research method uses questionnaires and anthropometry measurements, and the data obtained are processed using the House of Quality of Ergonomic Function Deployment. From the analysis and data processing, important changes from the original design were obtained. The ergonomic chair design according to the analysis has a stainless steel frame, a seat height of 390 mm, a seat width of 400 mm per passenger, and a seat depth of 400 mm. The backrest has a height of 840 mm, a width of 430 mm and a length of 300 mm, and can recline at an angle of 105-115 degrees. The footrest is 42 mm wide and 400 mm long. The thickness of the seat cushion is 100 mm.
Keywords: Chair, ergonomics, function development, train passenger.
3823 Bioinformatics Profiling of Missense Mutations
Authors: I. Nassiri, B. Goliaei, M. Tavassoli
Abstract:
The ability to distinguish missense nucleotide substitutions that contribute to harmful effects from those that do not is a difficult problem, usually addressed through functional in vivo analyses. In this study, instead of current biochemical methods, the effects of missense mutations on protein structure and function were assayed by means of computational methods and information from databases. To this end, the effects of new missense mutations in exon 5 of the PTEN gene on protein structure and function were examined. The gene coding for PTEN has been identified and localized on chromosome region 10q23.3 as a tumor suppressor gene. Applying these methods showed that the c.319G>A and c.341T>G missense mutations, which were recognized in patients with breast cancer and Cowden disease, could be pathogenic. This approach could be used for the analysis of missense mutations in other genes.
Keywords: Bioinformatics, missense mutations, PTEN tumor suppressor gene.
3822 Tests for Gaussianity of a Stationary Time Series
Authors: Adnan Al-Smadi
Abstract:
One of the primary uses of higher order statistics in signal processing has been the detection and estimation of non-Gaussian signals in Gaussian noise of unknown covariance. This is motivated by the ability of higher order statistics to suppress additive Gaussian noise. In this paper, several methods to test for non-Gaussianity of a given process are presented. These methods include the histogram plot, the kurtosis test, and hypothesis testing using cumulants and the bispectrum of the available sequence. The hypothesis testing is performed by constructing a statistic to test whether the bispectrum of the given signal is non-zero. A zero bispectrum is not a proof of Gaussianity; hence, other tests such as the kurtosis test should be employed. Examples are given to demonstrate the performance of the presented methods.
Keywords: Non-Gaussian, bispectrum, kurtosis, hypothesis testing, histogram.
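A short sketch of the kurtosis test mentioned above, using SciPy on simulated data (Gaussian vs. Laplace noise; not the paper's examples): the excess kurtosis of a Gaussian is zero, and scipy.stats.kurtosistest gives a p-value for the null hypothesis of Gaussian kurtosis.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gaussian = rng.normal(size=5000)
non_gaussian = rng.laplace(size=5000)          # heavier tails, excess kurtosis > 0

for name, x in [("gaussian", gaussian), ("laplace", non_gaussian)]:
    excess_kurtosis = stats.kurtosis(x)        # 0 for a Gaussian process
    stat, p_value = stats.kurtosistest(x)      # H0: kurtosis of a normal distribution
    print(f"{name}: excess kurtosis = {excess_kurtosis:.3f}, p = {p_value:.3g}")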
3821 The Influence of the Commons Structure Modification on the Active Power Losses Allocation
Authors: O. Pop, C. Barbulescu, M. Nemes, St. Kilyeni
Abstract:
Tracing methods determine the contribution that power system sources make to supplying the loads. These methods can be used to assess transmission prices, but also to recover the fixed transmission cost. This paper presents the influence that modifying the commons structure has on the specific price of transfer and on active power losses. The authors propose a power losses allocation method based on Kirschen's method. The system operator must follow a few basic principles about allocation; the only necessary information is the power flows on the system branches and the modifications applied to the power system buses. To illustrate the method, the 25-bus test system elaborated within the Electrical Power Engineering Department in Timisoara, Romania, is used.
Keywords: Power systems, P-U bus, P-Q bus, loss allocation, traceability methods.
3820 An Efficient 3D Animation Data Reduction Using Frame Removal
Authors: Jinsuk Yang, Choongjae Joo, Kyoungsu Oh
Abstract:
Existing methods that store and reproduce the animation data of all frames, as in vertex animation, cannot be used in mobile device environments because they require large amounts of memory. 3D animation data reduction methods aimed at solving this problem have therefore been studied extensively, and we propose a new method as follows. First, we find and remove frames in which motion changes are small and store only the animation data of the remaining frames (those involving large motion changes). When the animation is played, the removed frame areas are reconstructed by interpolating the remaining frames. Our key contribution is to calculate the accelerations of the joints of individual frames, and the standard deviations of those accelerations, from the joint locations in the relevant 3D model in order to find and delete frames in which motion changes are small. Our method can reduce data sizes by approximately 50% or more while providing quality not much lower than that of the original animations. Therefore, the method is expected to be useful in mobile device environments or other environments in which memory is limited.
Keywords: Data Reduction, Interpolation, Vertex Animation, 3D Animation.
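A hedged sketch of the general frame-removal idea described in this abstract (not the authors' exact criterion): second differences of joint positions stand in for accelerations, a threshold derived from their standard deviation selects keyframes, and removed frames are rebuilt by linear interpolation. The motion data and threshold scale below are illustrative.

import numpy as np

def reduce_frames(joints, threshold_scale=0.5):
    """Keep frames whose joint accelerations indicate large motion changes.

    joints: array of shape (n_frames, n_joints, 3) with joint positions.
    Returns kept frame indices and a reconstruction of all frames obtained
    by linearly interpolating the kept ones.
    """
    accel = np.diff(joints, n=2, axis=0)                    # second differences
    accel_mag = np.linalg.norm(accel, axis=2).mean(axis=1)  # per-frame mean |a|
    threshold = threshold_scale * accel_mag.std()
    keep = np.concatenate(([0], 1 + np.flatnonzero(accel_mag > threshold),
                           [len(joints) - 1]))
    keep = np.unique(keep)

    # Reconstruct removed frames by per-coordinate linear interpolation.
    all_idx = np.arange(len(joints))
    flat = joints.reshape(len(joints), -1)
    recon = np.column_stack([np.interp(all_idx, keep, flat[keep, k])
                             for k in range(flat.shape[1])]).reshape(joints.shape)
    return keep, recon

rng = np.random.default_rng(1)
motion = np.cumsum(rng.normal(scale=0.01, size=(120, 15, 3)), axis=0)
kept, rebuilt = reduce_frames(motion)
print(len(kept), "of", len(motion), "frames kept;",
      "max error:", np.abs(rebuilt - motion).max())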
3819 Teaching Computer Programming to Diverse Students: A Comparative, Mixed-Methods, Classroom Research Study
Authors: Almudena Konrad, Tomás Galguera
Abstract:
Lack of motivation and interest is a serious obstacle to students' learning of computing skills. A knowledge base on effective pedagogy and curricula for teaching computer programming is needed. This paper presents results from research evaluating a six-year project designed to teach complex concepts in computer programming collaboratively, while supporting students in continuing to develop their computational thinking and related coding skills individually. Utilizing a quasi-experimental, mixed-methods design, the pedagogical approaches and methods were assessed in two contrasting groups of students with different socioeconomic status, gender, and age composition. Analyses of quantitative data from Likert-scale surveys and an evaluation rubric, combined with qualitative data from reflective writing exercises and semi-structured interviews, yielded convincing evidence of the project's success at both teaching and inspiring students.
Keywords: Computational thinking, computing education, computer programming curriculum, logic, teaching methods.
3818 Relative Suitability Evaluation of Two Methods of Particle-Size Analysis for Selected Soils of Sudan Savanna of Nigeria
Authors: B. A. Lawal, B. R. Singh, G. A. Babaji, P. A. Tsado
Abstract:
The two widely used methods based on the sedimentation principle (the Bouyoucos hydrometer and the international pipette) for particle-size analysis were comparatively evaluated on soils collected from various locations in the Sudan savanna of Nigeria, particularly from Sokoto and Zamfara States. The hydrometer method under-estimated the silt content and over-estimated the clay content. The hydrometer reading also proved difficult, as the hydrometer tended to submerge when floated for the clay reading in the suspension of very sandy soils (900 g kg-1 sand). Furthermore, the results from the two methods were validated by plotting the data on the USDA soil textural triangle to determine textural class names; 91.67% of the experimental soils retained the same textural class name irrespective of the method. Thus, the Bouyoucos hydrometer method may conveniently find a place in routine work in view of its simplicity, rapidity, and strong correlation with the pipette method.
Keywords: Hydrometer and pipette methods, particle-size analysis, sedimentation.
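A small illustrative check of the two comparisons the abstract reports, the correlation between methods and the share of samples keeping the same textural class, on hypothetical values (the data below are not from the study):

import numpy as np

# Hypothetical clay contents (g kg-1) from the two methods for 12 samples.
clay_pipette = np.array([120, 95, 150, 80, 200, 60, 110, 90, 130, 70, 160, 100])
clay_hydrometer = np.array([135, 100, 170, 85, 225, 70, 120, 95, 150, 75, 180, 110])

# Hypothetical textural class names assigned from the USDA triangle.
class_pipette = ["sandy loam"] * 11 + ["loam"]
class_hydrometer = ["sandy loam"] * 12

r = np.corrcoef(clay_pipette, clay_hydrometer)[0, 1]
agreement = np.mean([a == b for a, b in zip(class_pipette, class_hydrometer)])
print(f"Pearson r = {r:.3f}, class agreement = {agreement:.2%}")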
3817 A User Friendly Tool for Performance Evaluation of Different Reference Evapotranspiration Methods
Authors: Vijay Shankar
Abstract:
Evapotranspiration (ET) is a major component of the hydrologic cycle, and its accurate estimation is essential for hydrological studies. In the past, various estimation methods have been developed for different climatological data, and the accuracy of these methods varies with climatic conditions. Reference crop evapotranspiration (ET0) is a key variable in procedures established for estimating the evapotranspiration rates of agricultural crops; values of ET0 are used with crop coefficients for many aspects of irrigation and water resources planning and management. Numerous methods are used for estimating ET0. As per the internationally accepted procedures outlined in the United Nations Food and Agriculture Organization's Irrigation and Drainage Paper No. 56 (FAO-56), the Penman-Monteith equation is recommended for computing ET0 from ground-based climatological observations. In the present study, seven methods have been selected for performance evaluation. User-friendly software has been developed in the Visual Basic programming language, which makes it possible to create a graphical environment with little coding. For a given data availability, the developed software estimates reference evapotranspiration for any area and period for which data are available. The accuracy of the software has been checked against the examples given in FAO-56. The developed software is a user-friendly tool for estimating ET0 under different data availability and climatic conditions.
Keywords: Crop coefficient, Crop evapotranspiration, Field moisture, Irrigation Scheduling.
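For reference, the FAO-56 Penman-Monteith formulation cited in this abstract can be written compactly; the sketch below is plain Python rather than the Visual Basic tool described, and the input values are illustrative rather than taken from the FAO-56 worked examples:

import math

def et0_penman_monteith(t_mean, rn, g, u2, ea, altitude=0.0):
    """Daily FAO-56 Penman-Monteith reference evapotranspiration (mm/day).

    t_mean: mean air temperature (deg C), rn: net radiation (MJ m-2 day-1),
    g: soil heat flux (MJ m-2 day-1), u2: wind speed at 2 m (m/s),
    ea: actual vapour pressure (kPa), altitude in m.
    """
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))      # kPa
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                    # kPa/degC
    pressure = 101.3 * ((293.0 - 0.0065 * altitude) / 293.0) ** 5.26
    gamma = 0.000665 * pressure                                    # kPa/degC
    numerator = (0.408 * delta * (rn - g)
                 + gamma * 900.0 / (t_mean + 273.0) * u2 * (es - ea))
    return numerator / (delta + gamma * (1.0 + 0.34 * u2))

# Illustrative inputs only.
print(round(et0_penman_monteith(t_mean=21.5, rn=13.3, g=0.0, u2=2.1, ea=1.4), 2))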
3816 Effects of Different Drying Methods on the Properties of Viscose Single Jersey Fabrics
Authors: M. Kucukali Ozturk, Y. Beceren, B. Nergis
Abstract:
The study discussed in this paper was conducted to investigate the effects of different drying methods (line drying and tumble drying) on viscose single jersey fabrics knitted with ring yarn.
Keywords: Color change, dimensional properties, drying method, fabric tightness, physical properties.
3815 The Communication Library DIALOG for iFDAQ of the COMPASS Experiment
Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius
Abstract:
Modern experiments in high energy physics impose great demands on the reliability, the efficiency, and the data rate of Data Acquisition Systems (DAQ). This contribution focuses on the development and deployment of the new communication library DIALOG for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. The iFDAQ, utilizing a hardware event builder, is designed to be able to read out data at the maximum rate of the experiment. The DIALOG library is a communication system for both distributed and mixed environments; it provides a network-transparent inter-process communication layer. Using the high-performance and modern C++ framework Qt and its Qt Network API, the DIALOG library presents an alternative to the previously used DIM library. The DIALOG library was fully incorporated into all processes in the iFDAQ during the 2016 run. From the software point of view, it may be considered a significant improvement of the iFDAQ in comparison with the previous run. To extend the possibilities of debugging, online monitoring of the communication among processes via a DIALOG GUI is a desirable feature. In the paper, we present the DIALOG library from several perspectives and discuss it in detail. Moreover, an efficiency measurement and a comparison with the DIM library with respect to the iFDAQ requirements are provided.
Keywords: Data acquisition system, DIALOG library, DIM library, FPGA, Qt framework, TCP/IP.
3814 Interference Management in Long Term Evolution-Advanced System
Authors: Selma Sbit, Mohamed Bechir Dadi, Belgacem Chibani Rhaimi
Abstract:
Incorporating Home eNodeBs (HeNBs) in cellular networks, e.g. Long Term Evolution Advanced (LTE-A), is beneficial for extending coverage and enhancing capacity at low cost, especially within non-line-of-sight (NLOS) environments such as homes. A HeNB, or femtocell, is a small low-powered base station which provides radio coverage to mobile users in an indoor environment. This deployment results in a heterogeneous network in which the available spectrum is shared between two layers, so a problem of Inter-Cell Interference (ICI) appears; this issue is the main challenge in LTE-A. To deal with this challenge, various techniques based on frequency, time and power control have been proposed. This paper deals with the impact of carrier aggregation and higher-order MIMO (Multiple Input Multiple Output) schemes on LTE-Advanced performance. Simulation results show the advantages of these schemes for the system capacity (4×10⁹ b/s for a bandwidth B = 100 MHz when applying MIMO 8x8 at SINR = 30 dB), the maximum theoretical peak data rate (more than 4 Gbps for B = 100 MHz when MIMO 8x8 is used) and the spectral efficiency (15 b/s/Hz and 30 b/s/Hz when MIMO 4x4 and MIMO 8x8 are applied, respectively, at SINR = 30 dB).
Keywords: LTE-Advanced, carrier aggregation, MIMO, capacity, peak data rate, spectral efficiency.
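As a generic illustration of how spectral efficiency scales with antenna count and SINR, the sketch below evaluates the equal-power MIMO Shannon capacity C = log2 det(I + SNR/Nt · H·H^H) over random Rayleigh channels. This is not the authors' LTE-A simulator, so the printed figures will differ from those quoted in the abstract.

import numpy as np

def mimo_capacity(n_tx, n_rx, snr_db, trials=2000, rng=None):
    """Average Shannon capacity (b/s/Hz) of an i.i.d. Rayleigh MIMO channel
    with equal power allocation across transmit antennas."""
    rng = rng or np.random.default_rng(0)
    snr = 10.0 ** (snr_db / 10.0)
    total = 0.0
    for _ in range(trials):
        h = (rng.standard_normal((n_rx, n_tx))
             + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2.0)
        m = np.eye(n_rx) + (snr / n_tx) * h @ h.conj().T
        sign, logdet = np.linalg.slogdet(m)   # Hermitian PD: real, positive det
        total += logdet / np.log(2.0)
    return total / trials

for n in (4, 8):
    c = mimo_capacity(n, n, snr_db=30)
    print(f"{n}x{n} MIMO, 30 dB SINR: {c:.1f} b/s/Hz, "
          f"{c * 100e6 / 1e9:.1f} Gb/s over 100 MHz")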
3813 Numerical Study of Iterative Methods for the Solution of the Dirichlet-Neumann Map for Linear Elliptic PDEs on Regular Polygon Domains
Authors: A. G. Sifalakis, E. P. Papadopoulou, Y. G. Saridakis
Abstract:
A generalized Dirichlet to Neumann map is one of the main aspects characterizing a recently introduced method for analyzing linear elliptic PDEs, through which it became possible to couple known and unknown components of the solution on the boundary of the domain without solving in its interior. For its numerical solution, a well-conditioned, quadratically convergent sine-Collocation method was developed, which yielded a linear system of equations in which the diagonal blocks of the associated coefficient matrix are point diagonal. This structural property, among others, initiated interest in the employment of iterative methods for its solution. In this work we present a conclusive numerical study of the behavior of classical (Jacobi and Gauss-Seidel) and Krylov subspace (GMRES and Bi-CGSTAB) iterative methods when they are applied to the solution of the Dirichlet to Neumann map associated with Laplace's equation on regular polygons with the same boundary conditions on all edges.
Keywords: Elliptic PDEs, Dirichlet to Neumann map, global relation, collocation, iterative methods, Jacobi, Gauss-Seidel, GMRES, Bi-CGSTAB.
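A small sketch contrasting the classes of solvers studied, a classical Jacobi iteration next to SciPy's Krylov routines, on a generic diagonally dominant test system (not the sine-Collocation system of the paper):

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, bicgstab

def jacobi(a, b, iterations=200):
    """Classical Jacobi iteration x_{k+1} = D^{-1}(b - (A - D) x_k)."""
    d = a.diagonal()
    x = np.zeros_like(b)
    for _ in range(iterations):
        x = (b - (a @ x - d * x)) / d
    return x

n = 200
a = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x_jacobi = jacobi(a, b)
x_gmres, info_g = gmres(a, b)
x_bicg, info_b = bicgstab(a, b)

for name, x in [("Jacobi", x_jacobi), ("GMRES", x_gmres), ("Bi-CGSTAB", x_bicg)]:
    print(f"{name:10s} residual: {np.linalg.norm(a @ x - b):.2e}")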
3812 Data Mining Classification Methods Applied in Drug Design
Authors: Mária Stachová, Lukáš Sobíšek
Abstract:
Data mining incorporates a group of statistical methods used to analyze a set of information, or a data set. It operates with models and algorithms, which are powerful tools with great potential: they can help people to understand the patterns in a certain chunk of information, so data mining tools have a wide area of application. For example, in theoretical chemistry, data mining tools can be used to predict molecule properties or improve computer-assisted drug design. Classification analysis is one of the major data mining methodologies. The aim of this contribution is to create a classification model able to deal with a huge data set with high accuracy. For this purpose, logistic regression, Bayesian logistic regression and random forest models were built using the R software; a Bayesian logistic regression model in the Latent GOLD software was created as well. These classification methods belong to the supervised learning methods. It was necessary to reduce the dimension of the data matrix before constructing the models, and thus factor analysis (FA) was used. The models were applied to predict the biological activity of molecules that are potential new drug candidates.
Keywords: Data mining, classification, drug design, QSAR.
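A hedged sketch of the same workflow, factor-analysis dimension reduction followed by supervised classifiers, using scikit-learn stand-ins for the R and Latent GOLD models and a synthetic data set in place of the molecular descriptors:

from sklearn.datasets import make_classification
from sklearn.decomposition import FactorAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a molecular descriptor matrix (active vs. inactive).
X, y = make_classification(n_samples=1000, n_features=60, n_informative=15,
                           random_state=0)

models = {
    "FA + logistic regression": make_pipeline(FactorAnalysis(n_components=15),
                                              LogisticRegression(max_iter=1000)),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")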
3811 Development of a Multi-Factorial Instrument for Accident Analysis Based on Systemic Methods
Authors: C. V. Pietreanu, S. E. Zaharia, C. Dinu
Abstract:
The present research is built on three major pillars. It commences with some considerations on accident investigation methods, pointing out both the defining aspects of and the differences between linear and non-linear analysis: the traditional linear view describes accidents as a sequence of events, while the latest systemic models outline interdependencies between different factors and describe how processes evolve relative to a specific (normal) situation. Linear and non-linear accident analysis methods have specific limitations, so the second point of interest is to identify the drawbacks of systemic models, which becomes a starting point for developing new directions for identifying risks or data closer to the cause of incidents and accidents. Since communication is a critical issue in human-factor interaction, and breakdowns in communication procedures have proved to be at the root of many problems, the third pillar elaborates a new error-modeling instrument suitable for risk assessment and accident analysis.
Keywords: Accident analysis, multi-factorial error modeling, risk, systemic methods.
3810 Application of Scanning Electron Microscopy and X-Ray Evaluation of the Main Digestion Methods for Determination of Macroelements in Plant Tissue
Authors: Krasimir I. Ivanov, Penka S. Zapryanova, Stefan V. Krustev, Violina R. Angelova
Abstract:
Three commonly used digestion methods (dry ashing, acid digestion, and microwave digestion) in different variants were compared for the digestion of tobacco leaves. Three main macroelements (K, Ca and Mg) were analysed using an AAS spectrometer (Spectra AA 220, Varian, Australia). The accuracy and precision of the measurements were evaluated using the Polish reference material CTR-VTL-2 (Virginia tobacco leaves). To elucidate the problems with elemental recovery, X-ray and SEM-EDS analyses of all residues after digestion were performed. The X-ray investigation showed the formation of KClO4 when HClO4 was used as part of the acid mixture. The use of HF in the Ca and Mg determination led to the formation of CaF2 and MgF2. These results were confirmed by energy dispersive X-ray microanalysis. The SPSS program for Windows was used for statistical data processing.
Keywords: Digestion methods, determination of macroelements, plant tissue.
3809 The Management in Large Emergency Situations – A Best Practice Case Study Based on GIS for Management of Evacuation
Authors: Ion Baş, Claudiu Zoicaş, Angela Ioniţâ
Abstract:
In most cases, natural disasters lead to the necessity of evacuating people. The quality of evacuation management is dramatically improved by the use of information provided by decision support systems, which become indispensable in the case of large-scale evacuation operations. This paper presents a best practice case study. In November 2007, officers from the Emergency Situations Inspectorate "Crisana" of Bihor County, Romania, participated in a cross-border evacuation exercise in which 700 people were evacuated from the Netherlands to Belgium. One of the main objectives of the exercise was to test four different decision support systems. Afterwards, based on that experience, a software system called TEVAC (Trans Border Evacuation) was developed "in house" by the experts of this institution. This original software system was successfully tested in September 2008 during the international exercise EU-HUROMEX 2008, whose scenario involved the real evacuation of 200 persons from Hungary to Romania. Based on the lessons learned and the results, as of April 2009 the TEVAC software is used by all Emergency Situations Inspectorates all over Romania.
Keywords: Emergency evacuation, searching features, TEVAC (Trans Border Evacuation) software system, user interface design.
3808 Simulation of a Multi-Component Transport Model for the Chemical Reaction of a CVD-Process
Abstract:
In this paper we present discretization and decomposition methods for a multi-component transport model of a chemical vapor deposition (CVD) process. CVD processes are used to manufacture deposition layers or bulk materials. In our transport model we simulate the deposition of thin layers. The microscopic model is based on the heavy particles, which are derived by approximately solving a linearized multi-component Boltzmann equation. For the drift process of the particles, as well as for the effects of heat conduction, we propose diffusion-reaction equations. We concentrate on solving the diffusion-reaction equation with analytical and numerical methods. For the chemical processes, modelled with reaction equations, we propose decomposition methods that decouple the multi-component models into simpler systems of differential equations. In the numerical experiments we present the computational results of our proposed models.
Keywords: Chemical reactions, chemical vapor deposition, convection-diffusion-reaction equations, decomposition methods, multi-component transport.
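A toy analogue of the decomposition idea, not the paper's multi-component CVD model: a single-species 1D diffusion-reaction equation u_t = D u_xx - k u solved by Lie (sequential) operator splitting, with an explicit finite-difference diffusion step and an exact reaction step. Grid sizes and coefficients are illustrative.

import numpy as np

nx, nt = 101, 2000
length, total_time = 1.0, 0.1
diff, rate = 1.0e-2, 5.0

dx = length / (nx - 1)
dt = total_time / nt
assert diff * dt / dx**2 <= 0.5, "explicit diffusion step would be unstable"

x = np.linspace(0.0, length, nx)
u = np.exp(-200.0 * (x - 0.5) ** 2)          # initial concentration profile

for _ in range(nt):
    # diffusion sub-step (explicit central differences, fixed boundaries)
    u[1:-1] += diff * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    # reaction sub-step (exact solution of u_t = -k u over dt)
    u *= np.exp(-rate * dt)

print("remaining mass:", u.sum() * dx)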
3807 Improved Power Spectrum Estimation for RR-Interval Time Series
Authors: B. S. Saini, Dilbag Singh, Moin Uddin, Vinod Kumar
Abstract:
The RR-interval series is non-stationary and unevenly spaced in time. Estimating its power spectral density (PSD) using traditional techniques such as the FFT requires resampling at uniform intervals, and researchers have used different interpolation techniques as resampling methods. All these resampling methods introduce a low-pass filtering effect into the power spectrum. The Lomb transform is a means of obtaining PSD estimates directly from the irregularly sampled RR-interval series, thus avoiding resampling. In this work, the superiority of the Lomb transform method over the FFT-based approach, with linear and cubic-spline interpolation as resampling methods, has been established in terms of the reproduction of the exact frequency locations as well as the relative magnitudes of each spectral component.
Keywords: HRV, Lomb transform, resampling, RR-intervals.
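A minimal sketch of the Lomb approach using SciPy on a synthetic unevenly sampled series standing in for RR-interval data (not the authors' recordings); note that scipy.signal.lombscargle expects angular frequencies:

import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)
# Unevenly spaced sample times (s) carrying a 0.10 Hz (LF) and a 0.25 Hz (HF)
# oscillation plus noise, as a stand-in for an RR-interval tachogram.
t = np.sort(rng.uniform(0.0, 300.0, 400))
rr = (0.8 + 0.03 * np.sin(2 * np.pi * 0.10 * t)
          + 0.02 * np.sin(2 * np.pi * 0.25 * t)
          + 0.005 * rng.standard_normal(t.size))

freqs_hz = np.linspace(0.01, 0.5, 500)
pgram = lombscargle(t, rr - rr.mean(), 2 * np.pi * freqs_hz)
print("peak at %.3f Hz" % freqs_hz[np.argmax(pgram)])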
3806 Emerging Methods as a Tool for Obtaining Subconscious Feedback in E-commerce and Marketplace
Authors: J. Berčík, A. Mravcová, A. Rusková, P. Jurčišin, R. Virágh
Abstract:
The online world is changing every day, and with this comes the emergence and development of new business models. One of them is the sale of several types of products in one place. This type of selling, in the form of online marketplaces, has undergone positive development in recent years and represents an alternative to brick-and-mortar shopping centers. The main philosophy is to buy several products under one roof. Examples of popular e-commerce marketplaces are Amazon, eBay and Allegro, and their share of total e-commerce turnover is expected to double in the coming years. The paper highlights possibilities for testing web applications and online marketplaces using emerging methods such as a stationary eye camera (eye tracking) and facial analysis (FaceReading).
Keywords: Emerging methods, consumer neuroscience, e-commerce, marketplace, user experience, user interface.
3805 Feature Selection Methods for an Improved SVM Classifier
Authors: Daniel Morariu, Lucian N. Vintan, Volker Tresp
Abstract:
Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both the time and memory restrictions can be quite prohibitive. This justifies the application of feature selection methods to reduce the dimensionality of the document-representation vector. In this paper, three feature selection methods are evaluated: random selection, Information Gain (IG) and Support Vector Machine feature selection (called SVM_FS). We show that the best results were obtained with the SVM_FS method for a relatively small dimension of the feature vector. We also present a novel method to better correlate the SVM kernel parameters (polynomial or Gaussian kernel).
Keywords: Feature selection, learning with kernels, Support Vector Machine, classification.
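A hedged scikit-learn sketch contrasting the three selection strategies on a synthetic high-dimensional dataset (not the authors' text corpus): random selection, mutual information as an information-gain-style score, and selection by linear-SVM weights as a rough stand-in for SVM_FS, each followed by an SVM classifier.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel, SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC, SVC

X, y = make_classification(n_samples=800, n_features=500, n_informative=40,
                           random_state=0)
k = 50
rng = np.random.default_rng(0)

selectors = {
    "random": rng.choice(X.shape[1], size=k, replace=False),
    "information gain": SelectKBest(mutual_info_classif, k=k).fit(X, y).get_support(True),
    "SVM weights": SelectFromModel(LinearSVC(dual=False), max_features=k,
                                   threshold=-np.inf).fit(X, y).get_support(True),
}
for name, cols in selectors.items():
    acc = cross_val_score(SVC(kernel="rbf"), X[:, cols], y, cv=5).mean()
    print(f"{name:16s}: {acc:.3f}")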
3804 An Exploratory Study in Nursing Education: Factors Influencing Nursing Students’ Acceptance of Mobile Learning
Authors: R. Abdulrahman, A. Eardley, A. Soliman
Abstract:
The proliferation in the development of mobile learning (m-learning) has played a vital role in the rapidly growing electronic learning market. This relatively new technology can help to encourage the development of learning and to aid knowledge transfer in a number of areas, by familiarizing students with innovative information and communications technologies (ICT). M-learning plays a substantial role in the deployment of learning methods for nursing students by using the Internet and portable devices to access learning resources ‘anytime and anywhere’. However, acceptance of m-learning by students is critical to the successful use of m-learning systems, so there is a need to study the factors that influence students’ intention to use m-learning. This paper addresses this issue. It outlines the outcomes of a study that evaluates the unified theory of acceptance and use of technology (UTAUT) model as applied to user acceptance of m-learning activity in nurse education. The model integrates the significant components of eight prominent user acceptance models and therefore provides a standard measure with core determinants of user behavioural intention. The research model extends the UTAUT in the context of m-learning acceptance by modifying the original structure of UTAUT and adding individual innovativeness (II) and quality of service (QoS), together with the factors of previous experience (of using mobile devices in similar applications) and the nursing students’ readiness (to use the technology), as influences on their behavioural intention to use m-learning. The study uses convenience sampling, with student volunteers as participants, to collect numerical data. A quantitative method of data collection was selected, involving an online survey questionnaire containing 33 questions that measure the six constructs on a 5-point Likert scale. A total of 42 respondents participated, all from the Nursing Institute at the Armed Forces Hospital in Saudi Arabia. The gathered data were then tested using a research model that employs structural equation modelling (SEM), including confirmatory factor analysis (CFA). The results of the CFA show that the UTAUT model has the ability to predict student behavioural intention and to adapt m-learning activity to specific learning activities. It also demonstrates satisfactory, dependable and valid scales of the model constructs, suggesting that further analysis could confirm the model as a valuable instrument for evaluating user acceptance of m-learning activity.
Keywords: Mobile learning, nursing institute, unified theory of acceptance and use of technology model.
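As a small illustration of the kind of scale-reliability check commonly reported alongside CFA for Likert-scale constructs, a Cronbach's alpha sketch on simulated responses (not the study's 42-respondent data, and not a substitute for the full SEM analysis):

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) Likert response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(7)
latent = rng.normal(size=(42, 1))                 # one construct, 42 respondents
responses = np.clip(np.rint(3 + latent + rng.normal(scale=0.8, size=(42, 5))), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")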
3803 Computable Function Representations Using Effective Chebyshev Polynomial
Authors: Mohammed A. Abutheraa, David Lester
Abstract:
We show that Chebyshev Polynomials are a practical representation of computable functions on the computable reals. The paper presents error estimates for common operations and demonstrates that Chebyshev Polynomial methods would be more efficient than Taylor Series methods for evaluation of transcendental functions.
Keywords: Approximation Theory, Chebyshev Polynomial, Computable Functions, Computable Real Arithmetic, Integration, Numerical Analysis.
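A quick illustration of why Chebyshev expansions beat truncated Taylor series for function evaluation on an interval, using NumPy's polynomial classes (this is ordinary floating-point approximation, not the exact-real arithmetic of the paper):

import math
import numpy as np
from numpy.polynomial import Chebyshev, Polynomial

deg = 8
x = np.linspace(-1.0, 1.0, 2001)

# Degree-8 Chebyshev least-squares fit of exp on [-1, 1].
cheb = Chebyshev.fit(x, np.exp(x), deg)
# Degree-8 Taylor polynomial of exp about 0.
taylor = Polynomial([1.0 / math.factorial(n) for n in range(deg + 1)])

print("max Chebyshev error:", np.max(np.abs(cheb(x) - np.exp(x))))
print("max Taylor error:   ", np.max(np.abs(taylor(x) - np.exp(x))))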
3802 Adhesion Strength Evaluation Methods in Thermally Sprayed Coatings
Authors: M. Jalali Azizpour, H. Mohammadi Majd, Milad Jalali, H. Fasihi
Abstract:
The techniques for estimating the adhesive and cohesive strength of high velocity oxy-fuel (HVOF) thermal spray coatings are discussed and compared. The development trend and the latest investigations have been reviewed. We focus on the benefits and limitations of these methods for different processes and materials.
Keywords: Adhesion, bonding strength, cohesion, HVOF thermal spray.
3801 Comparison of Experimental Relationships to Determine Flow Discharge in Meandering Compound Channels Using M5 Decision Tree Model
Authors: Mehdi Kheradmand, Mehdi Azhdary Moghaddam, Abdolreza Zahiri, Khalil Ghorbani
Abstract:
This research compares the results of the major methods that determine flow discharge using experimental relationships with the results of the M5 decision tree model, for meandering compound sections in several laboratory channels. It was found that the M5 decision tree model achieved better statistical parameters than the said methods, suggesting that the M5 decision tree model greatly improves the accuracy of the calculated flow discharge in meandering compound channels.
Keywords: Stage-discharge relationship, M5 decision tree model, compound section, meandering compound channel.
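scikit-learn does not provide an M5 model tree, so the sketch below uses a CART regression tree as a rough stand-in on synthetic stage-discharge data (the inputs and the power-law relation are hypothetical, not the laboratory measurements):

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)
# Synthetic stage-discharge records: flow depth (m) and sinuosity as inputs.
depth = rng.uniform(0.5, 3.0, 300)
sinuosity = rng.uniform(1.0, 2.0, 300)
discharge = 4.0 * depth**1.7 / sinuosity + rng.normal(scale=0.5, size=300)

X = np.column_stack([depth, sinuosity])
tree = DecisionTreeRegressor(max_depth=5, random_state=0)
r2 = cross_val_score(tree, X, discharge, cv=5, scoring="r2").mean()
print(f"cross-validated R^2 of the regression tree: {r2:.3f}")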
3800 Convex Restrictions for Outage Constrained MU-MISO Downlink under Imperfect Channel State Information
Authors: A. Preetha Priyadharshini, S. B. M. Priya
Abstract:
In this paper, we consider the MU-MISO downlink scenario under imperfect channel state information (CSI). The main issue with imperfect CSI is keeping the rate outage probability of each user below a given threshold level. Such rate outage constraints present significant analytical challenges. Many probabilistic methods have been used to solve the transmit optimization problem under imperfect CSI. Here, two convex restriction methods, one based on a decomposition-based large deviation inequality and one based on a Bernstein-type inequality, are used to solve the optimization problem under imperfect CSI. These methods achieve improved output quality with lower complexity, and they provide a safe, tractable approximation of the original rate outage constraints. Based on these implementations, performance has been evaluated in terms of feasible rate and average transmission power. The simulation results show that both methods offer significantly improved outage quality and lower computational complexity.
Keywords: Imperfect channel state information, outage probability, multiuser multi-input single-output.
3799 Calibration of Syringe Pumps Using Interferometry and Optical Methods
Authors: E. Batista, R. Mendes, A. Furtado, M. C. Ferreira, I. Godinho, J. A. Sousa, M. Alvares, R. Martins
Abstract:
Syringe pumps are commonly used for drug delivery in hospitals and clinical environments. These instruments are critical in neonatology and oncology, where any variation in the flow rate and drug dosing quantity can lead to severe incidents and even the death of the patient. Therefore, it is very important to determine the accuracy and precision of these devices using suitable calibration methods. The Volume Laboratory of the Portuguese Institute for Quality (LVC/IPQ) uses two different methods to calibrate syringe pumps from 16 nL/min up to 20 mL/min. The interferometric method uses an interferometer to monitor the distance travelled by the pusher block of the syringe pump in order to determine the flow rate. Knowing the internal diameter of the syringe with very high precision, the travelled distance, and the time needed for that travel, it is possible to calculate the flow rate of the fluid inside the syringe and its uncertainty. As an alternative to the gravimetric and interferometric methods, a methodology based on the application of optical technology was also developed to measure flow rates; this method mainly relies on measuring the increase in the volume of a drop over time. The objective of this work is to compare the results of the calibration of two syringe pumps using the different methodologies described above. The obtained results were consistent for the three methods used. The uncertainty values were very similar for all three methods, being higher for the optical drop method due to setup limitations.
Keywords: Calibration, interferometry, syringe pump, optical method, uncertainty.
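The flow-rate relation implied by the interferometric method (flow = syringe cross-section × pusher displacement / time) in a tiny sketch; the diameter, displacement, and timing values below are illustrative, not calibration data from the paper:

import math

def flow_rate_ul_per_min(inner_diameter_mm, displacement_mm, elapsed_s):
    """Volumetric flow rate from pusher-block displacement of a syringe.

    Q = A * dx / dt with A = pi * d^2 / 4; since 1 mm^3 = 1 uL, the result
    is returned in microlitres per minute.
    """
    area_mm2 = math.pi * inner_diameter_mm**2 / 4.0
    return area_mm2 * displacement_mm / elapsed_s * 60.0

# Illustrative values: a 4.61 mm inner diameter and 0.217 mm of travel in 60 s.
print(f"{flow_rate_ul_per_min(4.61, 0.217, 60.0):.2f} uL/min")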
3798 Generation of Sets of Synthetic Classifiers for the Evaluation of Abstract-Level Combination Methods
Authors: N. Greco, S. Impedovo, R.Modugno, G. Pirlo
Abstract:
This paper presents a new technique for generating sets of synthetic classifiers for evaluating abstract-level combination methods. The sets differ in terms of both the recognition rates of the individual classifiers and their degree of similarity. For this purpose, each abstract-level classifier is considered as a random variable producing one class label as the output for an input pattern. From the initial set of classifiers, new slightly different sets are generated by applying specific operators defined for this purpose. Finally, the sets of synthetic classifiers are used to estimate the performance of combination methods for abstract-level classifiers. The experimental results demonstrate the effectiveness of the proposed approach.
Keywords: Abstract-level Classifier, Dempster-Shafer Rule, Multi-expert Systems, Similarity Index, System Evaluation
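A simplified sketch of the basic idea, abstract-level classifiers simulated as label-producing random variables with target recognition rates, combined here by majority vote (one of the combination methods such sets are typically used to evaluate). The generation operators controlling similarity are not reproduced; the rates and class counts are illustrative.

import numpy as np

def synthetic_classifier(truth, recognition_rate, n_classes, rng):
    """Simulate an abstract-level classifier: with probability equal to the
    recognition rate it outputs the true label, otherwise a wrong one."""
    wrong = (truth + rng.integers(1, n_classes, size=truth.size)) % n_classes
    correct = rng.random(truth.size) < recognition_rate
    return np.where(correct, truth, wrong)

rng = np.random.default_rng(11)
n_classes, n_patterns = 10, 5000
truth = rng.integers(0, n_classes, size=n_patterns)
outputs = np.stack([synthetic_classifier(truth, r, n_classes, rng)
                    for r in (0.80, 0.82, 0.85)])

# Abstract-level combination by majority vote over the three label outputs.
votes = np.apply_along_axis(np.bincount, 0, outputs, minlength=n_classes)
combined = votes.argmax(axis=0)
print("individual:", [(o == truth).mean() for o in outputs])
print("majority vote:", (combined == truth).mean())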