Search results for: non-destructive testing methods.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4864


4204 A Hybrid Feature Selection by Resampling, Chi squared and Consistency Evaluation Techniques

Authors: Amir-Massoud Bidgoli, Mehdi Naseri Parsa

Abstract:

In this paper, a combined feature selection method is proposed that takes advantage of sample domain filtering, resampling, and feature subset evaluation to reduce the dimensions of huge datasets and select reliable features. The method utilizes both the feature space and the sample domain to improve the feature selection process, and uses a combination of the Chi squared and Consistency attribute evaluation methods to seek reliable features. It consists of two phases. The first phase filters and resamples the sample domain; the second phase adopts a hybrid procedure to find the optimal feature space by applying the Chi squared and Consistency subset evaluation methods together with genetic search. Experiments on various sized datasets from the UCI Repository of Machine Learning databases show that the performance of five classifiers (Naïve Bayes, Logistic, Multilayer Perceptron, Best-First Decision Tree, and JRIP) improves simultaneously and that the classification error for these classifiers decreases considerably. The experiments also show that this method outperforms other feature selection methods.

Keywords: feature selection, resampling, reliable features, Consistency Subset Evaluation.
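
As a rough sketch of the two-phase idea, the following Python fragment resamples the sample domain and then ranks features with the chi-squared statistic. scikit-learn and the Iris data are stand-ins chosen here for illustration; the Consistency subset evaluation and genetic search stages of the paper are omitted.

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.utils import resample

# phase 1 (simplified): resample the sample domain
X, y = load_iris(return_X_y=True)
X_rs, y_rs = resample(X, y, n_samples=len(X), random_state=0)

# phase 2 (partial): rank features by the chi-squared statistic, keep the best k
selector = SelectKBest(chi2, k=2).fit(X_rs, y_rs)
print("chi-squared scores:", selector.scores_.round(1))
print("selected feature indices:", selector.get_support(indices=True))
```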

4203 Panoramic Sensor Based Blind Spot Accident Prevention System

Authors: Rajendra Prasad Mahapatra, K. Vimal Kumar

Abstract:

Many automotive accidents are due to blind spots and driver inattentiveness. The blind spot is the area invisible from the driver's viewpoint without head rotation. Several methods are available for assisting drivers. The simplest are rear mirrors and wide-angle lenses, but these have the disadvantage of requiring human attention, so their accuracy depends on the driver. Another, automated approach makes use of sensors such as sonar or radar to gather range information, which is then processed to detect an impending collision. The disadvantage of this system is low angular resolution and limited sensing volume. This paper presents a panoramic sensor based automotive vehicle monitoring system.

Keywords: Panoramic sensors, Blind spot, Convex lens, Computer Vision, Sonar.

4202 Shape Error Concealment for Shape Independent Transform Coding

Authors: Sandra Ondrušová, Jaroslav Polec

Abstract:

Arbitrarily shaped video objects are an important concept in modern video coding methods. The techniques presently used are based not on image elements but on video objects having an arbitrary shape. In this paper, spatial shape error concealment techniques for object-based images in error-prone environments are proposed. We consider a geometric shape representation consisting of the object boundary, which can be extracted from the α-plane. Three different approaches are used to replace a missing boundary segment: Bézier interpolation, Bézier approximation, and NURBS approximation. Experimental results on object shapes of varying concealment difficulty demonstrate the performance of the proposed methods, and comparisons among the proposed methods are also presented.

Keywords: error concealment, shape coding, object-based image, NURBS, Bézier curves.
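
For a concrete sense of the Bézier-based concealment, here is a minimal Python sketch that bridges a missing boundary segment with a cubic Bézier curve evaluated by the de Casteljau algorithm. The control points are hypothetical; the paper's extraction of control points from the α-plane is not reproduced.

```python
import numpy as np

def bezier(control_points, n=50):
    """Evaluate a Bezier curve via the de Casteljau algorithm."""
    pts = np.asarray(control_points, float)
    curve = []
    for t in np.linspace(0.0, 1.0, n):
        p = pts.copy()
        while len(p) > 1:                      # repeated linear interpolation
            p = (1 - t) * p[:-1] + t * p[1:]
        curve.append(p[0])
    return np.array(curve)

# hypothetical boundary points adjacent to a lost segment, used as control points
controls = [(0, 0), (2, 3), (5, 4), (7, 1)]
patch = bezier(controls)
print(patch[[0, 25, -1]])                      # start, middle, and end of the patch
```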

4201 Determination of Electromagnetic Properties of Human Tissues

Authors: Iliana Marinova, Valentin Mateev

Abstract:

In this paper, a computer system for measuring electromagnetic properties is designed. The system employs an Agilent 4294A precision impedance analyzer to measure the amplitude and phase of a signal applied over a tested biological tissue sample. Data measured by the developed computer system can be used for tissue characterization in a wide frequency range, from 40 Hz to 110 MHz. The computer system can interface with output devices, allowing a flexible testing process.

Keywords: Electromagnetic properties, human tissue, bioimpedance, measurement system.

4200 Resource Efficiency within Current Production

Authors: Sarah Majid Ansari, Serjosha Wulf, Matthias Görke

Abstract:

In times of global warming and the increasing shortage of resources, sustainable production is becoming more and more inevitable. Through efficient energy and resource consumption, companies can not only heighten their competitiveness but also contribute positively to environmental protection. In this regard, technical solutions are often preferred during production, although organizational and process-related approaches also offer great potential. This project focuses on reducing resource usage, with a special emphasis on the human factor. The aspiration is to develop a methodology that systematically implements and embeds suitable, individual measures and methods regarding resource efficiency throughout the entire production. The measures and methods established help employees handle resources and energy more sensitively. With this in mind, this paper also deals with the difficulties that can occur during the sensitization of employees and the implementation of these measures and methods, and gives recommendations on how to avoid such difficulties.

Keywords: Implementation, human factor, production plant, resource efficiency.

4199 Wasting Human and Computer Resources

Authors: Mária Csernoch, Piroska Biró

Abstract:

The legends about “user-friendly” and “easy-to-use” birotical tools (computer-related office tools) have been spreading and misleading end-users. This has led to an extremely high number of incorrect documents, causing serious financial losses in the creating, modifying, and retrieving processes. Our research proved that there are at least two sources of this underachievement: (1) the lack of a definition of correctly edited, formatted documents; consequently, end-users do not know whether their methods and results are correct, and their very ignorance prevents them from realizing their lack of knowledge; (2) the end-users’ problem solving methods. We have found that in non-traditional programming environments end-users apply, almost exclusively, surface-approach metacognitive methods to carry out their computer-related activities, which have proved less effective than deep-approach methods. Based on these findings we have developed deep-approach methods which are based on and adapted from traditional programming languages. In this study, we focus on the most popular type of birotical document, the text-based document. We have provided a definition of correctly edited text and, based on this definition, adapted the debugging method known from programming. According to the method, before any real text editing, a thorough debugging of already existing texts and a categorization of errors are carried out. In this way, in advance of real text editing, users learn the requirements of text-based documents and of correctly formatted text. The method has proved much more effective than the previously applied surface-approach methods. Its advantages are that real text handling requires much less human and computer resources than clicking aimlessly in the GUI (Graphical User Interface), and that data retrieval is much more effective than from error-prone documents.

Keywords: Deep approach metacognitive methods, error-prone birotical documents, financial losses, human and computer resources.

4198 Adopted Method of Information System Strategy for Knowledge Management System: A Literature Review

Authors: Elin Cahyaningsih, Dana Indra Sensuse, Wahyu Catur Wibowo, Sofiyanti Indriasari

Abstract:

The bureaucracy reform program drives the Indonesian government to change its management in order to enhance organizational performance. Information technology became one of the strategic areas that organizations tried to improve. A knowledge management system is an information system supporting knowledge management implementation in government, categorized under the people perspective because such a system depends highly on human interaction and participation. A strategic plan for developing a knowledge management system can be determined using information system strategy methods. This research was conducted to identify the types of information system strategy methods, the stages of activity in each method, and their strengths and weaknesses. Literature review methods were used to identify and classify the strategic methods, differentiate method types, and categorize common activities, strengths, and weaknesses. As a result, six strategic information system methods are determined and compared; Balanced Scorecard and Risk Analysis are found to be the most commonly used strategic methods and those with the greatest strength.

Keywords: Knowledge management system, balanced scorecard, five force, risk analysis, gap analysis, value chain analysis, SWOT analysis.

4197 New Analysis Methods on Strict Avalanche Criterion of S-Boxes

Authors: Phyu Phyu Mar, Khin Maung Latt

Abstract:

S-boxes (substitution boxes) are keystones of modern symmetric cryptosystems (block ciphers as well as stream ciphers). S-boxes bring nonlinearity to cryptosystems and strengthen their cryptographic security; they are used for confusion in data security. An S-box satisfies the strict avalanche criterion (SAC) if and only if, for any single input bit of the S-box, inverting it changes each output bit with probability one half. If a function (cryptographic transformation) is complete, then each output bit depends on all of the input bits. Thus, if it were possible to find the simplest Boolean expression for each output bit in terms of the input bits, each of these expressions would have to contain all of the input bits if the function is complete. Of the important properties of an S-box, the most interesting, SAC (Strict Avalanche Criterion), is presented here, and three analysis methods are proposed to analyze this property.

Keywords: S-boxes, cryptosystems, strict avalanche criterion, function, analysis methods.
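
The exhaustive SAC check follows directly from the definition: flip each input bit over all inputs and count how often each output bit inverts; SAC holds exactly when every such probability is 0.5. A minimal Python sketch, using a toy 3-bit S-box chosen only for illustration:

```python
def sac_matrix(sbox, n_in, n_out):
    """Probability that flipping input bit i inverts output bit j, over all inputs."""
    counts = [[0] * n_out for _ in range(n_in)]
    total = 1 << n_in
    for x in range(total):
        for i in range(n_in):
            diff = sbox[x] ^ sbox[x ^ (1 << i)]   # output change from flipping bit i
            for j in range(n_out):
                counts[i][j] += (diff >> j) & 1
    return [[c / total for c in row] for row in counts]

sbox = [0, 3, 5, 6, 7, 4, 2, 1]        # toy 3-bit S-box, for illustration only
for row in sac_matrix(sbox, 3, 3):
    print(["%.3f" % p for p in row])   # SAC requires every entry to equal 0.500
```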

4196 Multidimensional Performance Tracking

Authors: C. Ardil

Abstract:

In this study, a model, together with a software tool that implements it, has been developed to determine the performance ratings of employees in an organization operating in the information technology sector, using indicators obtained from employees' online study data. The Weighted Sum (WS) method and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), based on a multidimensional decision making approach, were used in the study. WS and TOPSIS are multidimensional decision making (MDDM) methods that allow all dimensions to be evaluated together under specific weights, enabling the problem of online performance tracking to be evaluated objectively. The application of the WS and TOPSIS mathematical methods, which can combine alternatives with a large number of dimensions and reach a simultaneous solution, has been implemented through online performance tracking software. In applying the WS and TOPSIS methods, objective dimension weights were calculated by the entropy information (EI) and standard deviation (SD) methods from the data obtained by the online performance tracking method, a decision matrix was formed using the performance scores for each employee, and a single performance score was calculated for each employee. Based on the calculated performance score, each employee was given a performance evaluation decision. The results of the Pareto set evidence and a comparative mathematical analysis validate that employees' performance preference rankings under the WS and TOPSIS methods are closely related. This suggests the compatibility, applicability, and validity of the proposed method for MDDM problems in which a large number of alternatives and dimension types are taken into account. With this study, an objective, realistic, feasible, and understandable mathematical method, together with a software tool that implements it, has been demonstrated. This is considered preferable because of the subjectivity, limitations, and high cost of the methods traditionally used for measurement and performance appraisal in the information technology sector.

Keywords: Weighted sum, entropy information, standard deviation, online performance tracking, performance evaluation, performance management, multidimensional decision making.
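
As an indication of how the pieces fit together, the sketch below derives entropy-based objective weights and TOPSIS closeness scores for a hypothetical decision matrix (four employees, three indicators). The indicator values, the benefit/cost labeling, and this particular entropy-weight variant are assumptions for illustration, not the study's data or exact formulas.

```python
import numpy as np

def entropy_weights(X):
    """Objective dimension weights from entropy information (an assumed variant)."""
    P = X / X.sum(axis=0)                        # normalize each criterion column
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(X))
    d = 1.0 - E                                  # degree of diversification
    return d / d.sum()

def topsis(X, w, benefit):
    """Closeness of each alternative (row) to the ideal solution."""
    V = (X / np.sqrt((X ** 2).sum(axis=0))) * w  # normalized, weighted matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)               # higher closeness = better rank

# hypothetical decision matrix: 4 employees x 3 performance indicators
X = np.array([[85., 7., 120.], [90., 5., 100.], [78., 9., 140.], [92., 6., 110.]])
scores = topsis(X, entropy_weights(X), benefit=np.array([True, False, True]))
print("ranking, best first:", np.argsort(scores)[::-1])
```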

4195 Control Analysis Using Tuning Methods for a Designed, Developed and Modeled Cross Flow Water Tube Heat Exchanger

Authors: Shaival H. Nagarsheth, Utpal Pandya, Hemant J. Nagarsheth

Abstract:

A cross flow water tube heat exchanger can be designed and made operational using methods of model building and system simulation. This paper presents the design and development of a model of a cross flow water tube heat-exchanger system, along with the simulation and validation of a control analysis of different tuning methods. A feedback and override control system is developed using inputs acquired with the help of a sensory system. A mathematical model is formulated for analysis of the system behaviour. The temperature is regulated at the desired set point automatically.

Keywords: Heat Exchanger, Feedback, Override, Temperature, PID.
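
For orientation only, a feedback loop of the kind such a control analysis tunes can be sketched in a few lines of Python: a discrete PID controller driving a crude first-order plant. The gains and plant constants below are illustrative assumptions, not the paper's model.

```python
# minimal discrete PID loop regulating outlet temperature to a set point
def pid_step(error, state, kp=0.5, ki=0.05, kd=0.1, dt=1.0):
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

set_point, temp = 60.0, 25.0                      # target and initial temperature (C)
state = (0.0, 0.0)
for _ in range(200):
    u, state = pid_step(set_point - temp, state)
    temp += 0.05 * u - 0.02 * (temp - 25.0)       # crude first-order plant response
print("temperature after 200 steps: %.1f C" % temp)
```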

4194 Development of Workplace Environmental Monitoring Systems Using Ubiquitous Sensor Network

Authors: Jung-Min Yun, Jong-Hyun Baek, Byoung Ky Kang, Peom Park

Abstract:

In this study, workplace environmental monitoring systems were established using USN (Ubiquitous Sensor Networks) and LabVIEW. Although existing direct sampling methods yield accurate values as of the time points of measurement, they make continuous management and supervision difficult, and their costs are high. Therefore, the efficiency and reliability of workplace management by supervisors are relatively low when those methods are used. In this study, systems were established so that information on workplace environmental factors such as temperature, humidity, and noise is measured and transmitted to a PC in real time, enabling supervisors to monitor workplaces through LabVIEW on the PC. When an accident occurs in a workplace, supervisors can respond immediately through the monitoring system, which enables integrated workplace management and the prevention of safety accidents. By introducing these monitoring systems, safety accidents due to harmful environmental factors in workplaces can be prevented, and the systems will also be helpful in finding the correlation between safety accidents and occupational diseases by comparing and linking the databases they establish with existing statistical data.

Keywords: Ubiquitous Sensor Network, LabVIEW, Environment Monitoring.

4193 Korea and Japan Economic Relations: An Analysis through the World Trade Organization

Authors: Caroline S. Dutra, Tatiana C. Squeff

Abstract:

It is well known that the history between South Korea and Japan influences their international relations and thus also their economic relations. In this sense, it is impossible to analyze the latter without understanding the development of the former, which is known for episodes of hostility, as during the Japanese colonization, but also for moments of cultural and trade interchange. Indeed, since 1965, with the establishment of diplomatic relations between both countries, their trade relations have improved, especially after both nations signed the General Agreement on Tariffs and Trade (GATT). Thereafter, with the establishment of the World Trade Organization (WTO) in 1995, another chapter of their diplomatic and economic relations was inaugurated. Hence, bearing in mind this history, this research examines their relations through an analysis of the WTO panels they have engaged in with each other, which are, in chronological order: “DS323: Japan – Import Quotas on Dried Laver and Seasoned Laver”, “DS336: Japan – Countervailing Duties on Dynamic Random Access Memories from Korea”, “DS495: Korea – Import Bans, and Testing and Certification Requirements for Radionuclides”, “DS553: Korea – Sunset Review of Anti-Dumping Duties on Stainless Steel Bars”, and “DS571: Korea – Measures Affecting Trade in Commercial Vessels”. The objective of this case analysis is to point out which areas are most conflictual between Japan and South Korea in their economic relations, so that it is possible to reflect on their future (economic) relations and other possible outcomes. To do so, bibliographic and documental research will be conducted, particularly involving the WTO and the nations under consideration. Regarding methods, it is important to highlight that this is applied research in the field of international economic relations and international law, following a hypothetico-deductive model.

Keywords: International economic relations, Japan, South Korea, World Trade Organization.

4192 Performance Evaluation of ROI Extraction Models from Stationary Images

Authors: K.V. Sridhar, Varun Gunnala, K.S.R Krishna Prasad

Abstract:

In this paper, three basic approaches, and different methods under each of them, for extracting regions of interest (ROI) from stationary images are explored. The results obtained for each of the proposed methods are shown, and it is demonstrated where each method outperforms the others. Two main problems in ROI extraction, the channel selection problem and the saliency reversal problem, are discussed, along with how best these two are addressed by the various methods. The basic approaches are: 1) a saliency based approach; 2) a wavelet based approach; 3) a clustering based approach. The saliency approach performs well on images containing objects of high saturation and brightness. The wavelet based approach performs well on natural scene images that contain regions of distinct textures. The mean shift clustering approach partitions the image into regions according to the density distribution of pixel intensities. The experimental results of the various methodologies show that each technique performs at different acceptable levels for various types of images.

Keywords: clustering, ROI, saliency, wavelets.
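
As a pointer to the clustering approach, the sketch below partitions a synthetic grayscale image by mean shift over pixel intensities and keeps the brightest cluster as the ROI. scikit-learn's MeanShift is used as a stand-in, and both the synthetic image and the brightest-cluster rule are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

# hypothetical grayscale image: a bright square (the ROI) on a dark background
img = np.zeros((40, 40))
img[10:25, 12:28] = 0.9
img += 0.05 * np.random.default_rng(0).random(img.shape)

# mean shift partitions pixels according to the density of their intensities
X = img.reshape(-1, 1)
bw = estimate_bandwidth(X, quantile=0.2, n_samples=500)
labels = MeanShift(bandwidth=bw, bin_seeding=True).fit_predict(X).reshape(img.shape)

# take the cluster with the highest mean intensity as the ROI mask
roi = max(set(labels.ravel()), key=lambda l: img[labels == l].mean())
print("ROI pixel count:", int((labels == roi).sum()))
```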

4191 Ultrasonic Echo Image Adaptive Watermarking Using the Just-Noticeable Difference Estimation

Authors: Amnach Khawne, Kazuhiko Hamamoto, Orachat Chitsobhuk

Abstract:

Many image watermarking methods exploiting the properties of the human visual system (HVS) have been proposed in the literature. The visual threshold component is usually related either to the spatial contrast sensitivity function (CSF) or to visual masking. In particular, for contrast masking, most methods do not consider the effect near the edge region, even though the HVS is sensitive to what happens in edge areas. This paper proposes ultrasound image watermarking using a visual threshold corresponding to the HVS, in which the coefficients in a DCT block are classified based on texture, edge, and plain areas. This classification is not only useful for imperceptibility when the watermark is inserted into an image but also achieves robust watermark detection. A comparison of the proposed method with other methods has been carried out, showing that the proposed method is robust to blockwise memoryless manipulations and also against noise addition.

Keywords: Medical image watermarking, Human Visual System, Image Adaptive Watermark
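
To illustrate the kind of DCT-block classification the method relies on, the Python sketch below labels 8x8 blocks as plain, edge, or texture from their DCT coefficient energies. The thresholds and the low-frequency-ratio rule are assumed heuristics, not the paper's classifier.

```python
import numpy as np
from scipy.fftpack import dct

def classify_block(block):
    """Label an 8x8 block as plain, edge, or texture from its 2-D DCT energies."""
    c = dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
    ac = c.copy()
    ac[0, 0] = 0.0                            # discard the DC coefficient
    energy = np.sum(ac ** 2)
    if energy < 50.0:                         # hardly any AC energy: flat region
        return "plain"
    low = np.sum(ac[:4, :4] ** 2)             # low-frequency AC band
    return "edge" if low / energy > 0.9 else "texture"

rng = np.random.default_rng(0)
flat = np.full((8, 8), 128.0)
edge = np.hstack([np.full((8, 4), 50.0), np.full((8, 4), 200.0)])
noisy = 128.0 + 40.0 * rng.standard_normal((8, 8))
for name, b in [("flat", flat), ("edge", edge), ("noisy", noisy)]:
    print(name, "->", classify_block(b))
```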

4190 Determination of in Vitro Susceptibility of the Typhoid Pathogens to Synergistic Action of Euphorbia Hirta, Euphorbia Heterophylla and Phyllanthus Niruri for Possible Development of Effective Anti-Typhoid Drugs

Authors: Abalaka, M. E., Daniyan, S. Y., Adeyemo, S. O.

Abstract:

Studies were carried out to determine the in vitro susceptibility of the typhoid pathogens to the combined action of Euphorbia hirta, Euphorbia heterophylla and Phyllanthus niruri. Clinical isolates of the typhoid bacilli were subjected to susceptibility testing using the agar diffusion technique, and the minimum inhibitory concentration (MIC) was determined with the tube dilution technique. These isolates, when challenged with doses of the extracts from the three medicinal plants, showed zones of inhibition as wide as 26±0.2 mm, 22±0.1 mm and 18±0.0 mm, respectively. The MIC determination revealed organisms inhibited at varying concentrations of the extracts: E. hirta (S. typhi 0.250 mg/ml, S. paratyphi A 0.125 mg/ml, S. paratyphi B 0.185 mg/ml and S. paratyphi C 0.225 mg/ml), E. heterophylla (S. typhi 0.280 mg/ml, S. paratyphi A 0.150 mg/ml, S. paratyphi B 0.200 mg/ml and S. paratyphi C 0.250 mg/ml) and P. niruri (S. typhi 0.150 mg/ml, S. paratyphi A 0.100 mg/ml, S. paratyphi B 0.115 mg/ml and S. paratyphi C 0.125 mg/ml). The results of the synergy between the three plants in the ratio of 1:1:1 showed very low MICs for the test pathogens as follows: S. typhi 0.025 mg/ml, S. paratyphi A 0.080 mg/ml, S. paratyphi B 0.015 mg/ml and S. paratyphi C 0.10 mg/ml, with diameter zones of inhibition (DZI) of 35±0.2 mm, 28±0.4 mm, 20±0.1 mm and 32±0.3 mm, respectively. The secondary metabolites were identified using simple methods and HPLC. Organic components such as anthraquinones, different alkaloids, tannins, 6-ethoxy-1,2,3,4-tetrahydro-2,2,4-trimethyl and steroids were identified. The prevalence of typhoid, a deadly infectious disease caused by salmonellae, is still very high in parts of Nigeria, and the synergistic action of these three plants is very strong. It is concluded that pharmaceutical companies should take advantage of these findings to develop new anti-typhoid drugs from these plants.

Keywords: Prevalence, Susceptibility, Synergistic, Typhoid pathogens.

4189 Functionality of Negotiation Agent on Value-based Design Decision

Authors: Arazi Idrus, Christiono Utomo

Abstract:

This paper presents the functionality of a negotiation agent for value-based design decisions. The functionality is based on the characteristics of the system and the goal specification. A Prometheus Design Tool model was used for developing the system. Group functionality is the attribute for the negotiation agents, which comprise a coordinator agent and a decision-maker agent. The results of testing the system on a building system selection in a value-based decision environment are also presented.

Keywords: Functionality, negotiation agent, value-based decision.

4188 Contextual SenSe Model: Word Sense Disambiguation Using Sense and Sense Value of Context Surrounding the Target

Authors: Vishal Raj, Noorhan Abbas

Abstract:

Ambiguity in NLP (Natural Language Processing) refers to the ability of a word, phrase, sentence, or text to have multiple meanings. This results in various kinds of ambiguity, such as lexical, syntactic, semantic, anaphoric, and referential. This study is focused mainly on solving the issue of lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on words for training and testing, but we have used lemma and Part of Speech (POS) tokens of words for training and testing: the lemma adds generality, and the POS adds properties of the word to the token. We have designed a method to create an affinity matrix that calculates the affinity between any pair of lemma_POS tokens (a token in which the lemma and POS of a word are joined by an underscore) in a given training set. Additionally, we have devised an algorithm to create sense clusters of tokens using the affinity matrix under a hierarchy of the POS of the lemma. Furthermore, three different mechanisms to predict the sense of the target word using the affinity/similarity value are devised. Each contextual token contributes to the sense of the target word with some value, and whichever sense gets the highest value becomes the sense of the target word. Contextual tokens thus play a key role in creating the sense clusters and predicting the sense of the target word; hence, the model is named the Contextual SenSe Model (CSM). CSM exhibits noteworthy simplicity and lucidity of explication, in contrast to contemporary deep learning models characterized by intricacy, time-intensive processes, and challenging explication. CSM is trained on the SemCor training data and evaluated on the SemEval test dataset. The results indicate that, despite the naivety of the method, it achieves promising results when compared to the Most Frequent Sense (MFS) model.

Keywords: Word Sense Disambiguation, WSD, Contextual SenSe Model, Most Frequent Sense, part of speech, POS, Natural Language Processing, NLP, OOV, out of vocabulary, ELMo, Embeddings from Language Model, BERT, Bidirectional Encoder Representations from Transformers, Word2Vec, lemma_POS, Algorithm.
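
As a loose illustration of the affinity idea, the sketch below builds co-occurrence counts over lemma_POS tokens and scores pairs by normalized co-occurrence. The mini-corpus and this particular affinity measure are assumptions; the paper's exact affinity matrix construction, sense clustering, and prediction mechanisms are not reproduced.

```python
from collections import Counter
from itertools import combinations

# hypothetical mini-corpus of sentences as lemma_POS tokens
corpus = [
    ["bank_NOUN", "river_NOUN", "flow_VERB"],
    ["bank_NOUN", "money_NOUN", "deposit_VERB"],
    ["river_NOUN", "flow_VERB", "water_NOUN"],
]

pair_counts, token_counts = Counter(), Counter()
for sent in corpus:
    token_counts.update(set(sent))
    for a, b in combinations(sorted(set(sent)), 2):
        pair_counts[(a, b)] += 1               # tokens sharing a context

def affinity(a, b):
    """Assumed affinity: co-occurrences normalized by the rarer token's count."""
    if a == b:
        return 1.0
    return pair_counts[tuple(sorted((a, b)))] / min(token_counts[a], token_counts[b])

print(affinity("bank_NOUN", "river_NOUN"))    # 0.5 on this mini-corpus
```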

4187 A Comparison between Russian and Western Approach for Deep Foundation Design

Authors: Saeed Delara, Kendra MacKay

Abstract:

Varying methodologies for pile design are considered for both the Russian and Western approaches. Although both approaches rely on toe and side frictional resistances, different calculation methods are proposed to estimate pile capacity. The Western approach relies on the compactness (internal friction angle) of the soil for cohesionless soils and on undrained shear strength for cohesive soils. The Russian approach relies on grain size for cohesionless soils and on the liquidity index for cohesive soils. Although the methods most recommended in the Western approaches for predicting pile settlement are relatively simple, the Russian approach provides a detailed method to estimate both single pile and pile group settlement. Details of calculating pile axial capacity and settlement using the Russian and Western approaches are discussed and compared against field test results.

Keywords: Pile capacity, pile settlement, Russian approach, western approach.
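
Both approaches share the same decomposition of ultimate axial capacity into toe resistance plus accumulated side friction, which can be written out directly; the pile geometry and unit resistances below are illustrative assumptions, not values from either design code.

```python
import math

diameter, length = 0.6, 12.0     # m; hypothetical driven pile
unit_toe = 2500.0                # kPa; unit toe (end-bearing) resistance
unit_side = 45.0                 # kPa; average unit shaft friction

toe_area = math.pi * diameter ** 2 / 4
shaft_area = math.pi * diameter * length

Q_ult = unit_toe * toe_area + unit_side * shaft_area   # toe + side friction
print("ultimate axial capacity: %.0f kN" % Q_ult)
```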

4186 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks

Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar

Abstract:

DNA barcodes provide a good source of the information needed to classify living species. The classification problem has to be supported by reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be compared simultaneously using Multiple Sequence Alignment, which is known to be NP-complete; moreover, all the methods in use are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable. In fact, our method makes it possible to avoid the complex problem of form and structure in different classes of organisms. The empirical data and their classification performances are compared with other methods. In this study, we present our system, which consists of three phases. The first phase, transformation, is composed of three sub-steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of the DNA barcodes, the Fourier transform, and power spectrum signal processing. The second phase is an approximation, empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). Finally, the third phase, the classification of DNA barcodes, is realized by applying a hierarchical classification algorithm.

Keywords: DNA Barcode, Electron-Ion Interaction Pseudopotential, Multi Library Wavelet Neural Networks.
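
The transformation phase can be sketched directly: map each nucleotide to its EIIP value and take the Fourier power spectrum of the resulting signal. A minimal Python version follows; the short sequence is a made-up example, and the standard published EIIP values are assumed.

```python
import numpy as np

EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}   # nucleotide EIIP values

def power_spectrum(seq):
    """Codify a DNA barcode with EIIP and return its Fourier power spectrum."""
    x = np.array([EIIP[b] for b in seq if b in EIIP])
    x = x - x.mean()                 # remove the DC component
    return np.abs(np.fft.rfft(x)) ** 2

print(power_spectrum("ATGCGTACGTTAGCCGTA")[:5])
```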

4185 Dialogue Meetings as an Arena for Collaboration and Reflection among Researchers and Practitioners

Authors: Kerstin Grunden, Ann Svensson, Berit Forsman, Christina Karlsson, Ayman Obeid

Abstract:

The research question of the article is whether the dialogue meetings method could be relevant for reflective learning among researchers and practitioners when welfare technology is to be implemented in municipalities. A testbed was planned to be implemented in a retirement home in a Swedish municipality, and the practitioners worked on a pre-study of that testbed. In the article, the dialogue between the researchers and the practitioners in the dialogue meetings is described and analyzed, and the potential of dialogue meetings as an arena for learning and reflection among researchers and practitioners is discussed. The research methodology approach is participatory action research with mixed methods (dialogue meetings, focus groups, participant observations). The main findings from the dialogue meetings were that the researchers learned more about the use of traditional research methods, and the practitioners learned more about how they could improve their use of the methods to facilitate change processes in their organization. These findings have the potential, both for the researchers and for the practitioners, to result in more relevant use of research methods in change processes in organizations. It is concluded that dialogue meetings could be relevant for reflective learning among researchers and practitioners when welfare technology is to be implemented in a health care organization.

Keywords: Dialogue meetings, implementation, reflection, test bed, welfare technology, participatory action research.

4184 Optimized Calculation of Hourly Price Forward Curve (HPFC)

Authors: Ahmed Abdolkhalig

Abstract:

This paper examines several mathematical methods for modeling the hourly price forward curve (HPFC); the model is constructed by numerous regression methods, such as polynomial regression, radial basis function neural networks, and a Fourier series. Examination of the models' goodness of fit is done by means of statistical and graphical tools. The criterion for choosing the model is minimization of the Root Mean Squared Error (RMSE); using the correlation analysis approach for the regression analysis, the optimal model is identified, one that is robust against model misspecification. A learning and supervision technique is employed to determine the optimal parameters corresponding to each measure of overall loss. Using all the numerical methods mentioned previously, explicit expressions for the optimal model are derived and the optimal designs are implemented.

Keywords: Forward curve, Fourier series, regression, radial basis function neural networks.
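
The selection loop described above, fit several regression candidates and keep the one minimizing RMSE, can be sketched for the polynomial case as follows. The synthetic daily price curve and the hold-out scheme are assumptions for illustration; the radial basis function and Fourier series candidates are omitted.

```python
import numpy as np

# hypothetical hourly prices over one day: a daily cycle plus noise
rng = np.random.default_rng(0)
hours = np.arange(24)
prices = 40 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, 24)

train = np.ones(24, bool)
train[::4] = False                 # hold out every 4th hour for validation

def rmse(y, y_hat):
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

# fit polynomial candidates of increasing degree; keep the one minimizing RMSE
best = None
for degree in range(1, 9):
    coeffs = np.polyfit(hours[train] / 24, prices[train], degree)
    err = rmse(prices[~train], np.polyval(coeffs, hours[~train] / 24))
    if best is None or err < best[1]:
        best = (degree, err)
print("chosen degree: %d, validation RMSE: %.3f" % best)
```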

4183 Groundwater Quality Improvement by Using Aeration and Filtration Methods

Authors: Nik N. Nik Daud, Nur H. Izehar, B. Yusuf, Thamer A. Mohamed, A. Ahsan

Abstract:

An experiment was conducted using two aeration methods (water-into-air and air-into-water) followed by filtration processes using manganese greensand material. Properties of the groundwater such as pH, dissolved oxygen, turbidity, and heavy metal concentrations (iron and manganese) were assessed. The objectives of this study are: i) to determine the more effective aeration method and ii) to assess the effectiveness of manganese greensand as a filter medium in removing iron and manganese from groundwater. Results showed that the final pH for all samples after treatment ranged from 7.40 to 8.40. Both aeration methods increased the dissolved oxygen content. Final turbidity for the groundwater samples was between 3 NTU and 29 NTU. Only three out of eight samples achieved an iron concentration of 0.3 mg/L or less, while all samples reached a manganese concentration of 0.1 mg/L or less. The air-into-water aeration method gave a higher percentage of iron and manganese removal than the water-into-air method.

Keywords: Aeration, filtration, groundwater, water quality.

4182 Validation and Selection between Machine Learning Technique and Traditional Methods to Reduce Bullwhip Effects: a Data Mining Approach

Authors: Hamid R. S. Mojaveri, Seyed S. Mousavi, Mojtaba Heydar, Ahmad Aminian

Abstract:

The aim of this paper is to present a three-step methodology to forecast supply chain demand. In the first step, various data mining techniques are applied in order to prepare data for entry into the forecasting models. In the second step, the modeling step, an artificial neural network and a support vector machine are presented, after defining the Mean Absolute Percentage Error (MAPE) index for measuring error. The structure of the artificial neural network is selected based on previous researchers' results, and in this article the accuracy of the network is increased by using sensitivity analysis. The best forecast from the classical forecasting methods (moving average, exponential smoothing, and exponential smoothing with trend) is obtained from the prepared data, and this forecast is compared with the results of the support vector machine and the proposed artificial neural network. The results show that the artificial neural network forecasts more precisely than the other methods. Finally, the stability of the forecasting methods is analyzed using the raw data, and the effectiveness of clustering analysis is also measured.

Keywords: Artificial Neural Networks (ANN), bullwhip effect, demand forecasting, Support Vector Machine (SVM).
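
For reference, the error index and the simplest classical baseline can be written in a few lines of Python; the demand history below is a made-up example.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, the accuracy index used for comparison."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def exponential_smoothing(series, alpha):
    """One-step-ahead simple exponential smoothing forecasts."""
    forecasts = [series[0]]
    for obs in series[:-1]:
        forecasts.append(alpha * obs + (1 - alpha) * forecasts[-1])
    return forecasts

demand = [120, 132, 128, 141, 150, 147, 158, 163]   # hypothetical demand history
f = exponential_smoothing(demand, alpha=0.4)
print("MAPE: %.2f%%" % mape(demand[1:], f[1:]))
```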

4181 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

It can frequently be observed that data arising in our environment have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset; therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes, and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. Power of the test was adequate for most of the combinations under all estimation methods except MQL1. Moreover, models were fitted to a real-life dataset using the four methods, and the performance of the test was compared for each model.

Keywords: Goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, type-I error, penalized quasi-likelihood, power, quasi-likelihood.

4180 Developing a Viral Artifact to Improve Employees’ Security Behavior

Authors: Stefan Bauer, Josef Frysak

Abstract:

According to the scientific information management literature, the improper use of information technology (e.g., personal computers) by employees is one main cause of operational and information security loss events. Therefore, organizations implement information security awareness programs to increase employees' awareness and thereby prevent loss events. However, in many cases these information security awareness programs consist of conventional delivery methods like posters, leaflets, or internal messages to make employees aware of information security policies. We assume that a viral information security awareness video might be a more effective medium than the conventional methods commonly used by organizations. The purpose of this research is to develop a viral video artifact to improve employee security behavior concerning information technology.

Keywords: Information Security Awareness, Delivery Methods, Viral Videos, Employee Security Behavior.

4179 Improved Computational Efficiency of Machine Learning Algorithms Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK

Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick

Abstract:

The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in the UK in late January 2020, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is undoubtedly considerably high. The purpose of this research is to develop a predictive machine learning (ML) model that can forecast COVID-19 cases within the UK. This study concentrates on statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID-19 cases registered, new cases encountered on a daily basis, total deaths registered, and deaths per day due to Coronavirus is collected from the World Health Organization (WHO). Data preprocessing is carried out to identify any missing values, outliers, or anomalies in the dataset. The data are split in an 8:2 ratio for training and testing purposes to forecast future new COVID-19 cases. Support Vector Machine (SVM), Random Forest (RF), and linear regression (LR) algorithms are chosen to study the model performance in the prediction of new COVID-19 cases. Using evaluation metrics such as the r-squared value and mean squared error, the statistical performance of the models in predicting new COVID-19 cases is evaluated. RF outperformed the other two ML algorithms, with a training accuracy of 99.47% and a testing accuracy of 98.26% when n = 30. The mean squared error obtained for RF is 4.05e11, which is lower than that of the other predictive models used in this study. From the experimental analysis, the RF algorithm performs more effectively and efficiently in predicting new COVID-19 cases, which could help the health sector take relevant control measures against the spread of the virus.

Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest.
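
A minimal version of the RF experiment can be sketched with scikit-learn as below. The synthetic case series and the lagged-feature construction are stand-ins for the WHO data, and reading n = 30 as the number of trees is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# synthetic stand-in for the daily case series; features are the previous 7 days
rng = np.random.default_rng(42)
cases = 1000 + 5 * np.arange(420) + rng.normal(0, 300, 420)
X = np.column_stack([cases[i:i - 7] for i in range(7)])
y = cases[7:]

# 8:2 split for training and testing, as described above
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=30, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R^2: %.3f  MSE: %.1f" % (r2_score(y_te, pred), mean_squared_error(y_te, pred)))
```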

4178 Simple Procedure for Probability Calculation of Tensile Crack Occurring in Rigid Pavement – Case Study

Authors: Aleš Florian, Lenka Ševelová, Jaroslav Žák

Abstract:

The formation of tensile cracks in the concrete slabs of rigid pavement can be (among other things) the initiation point of other, more serious failures, which can ultimately lead to complete degradation of the concrete slab and thus of the whole pavement. Two measures can be used for reliability assessment of this phenomenon: the probability of failure and/or the reliability index. Different methods can be used for their calculation; the simple ones are moment methods and simulation techniques. Two methods, the FOSM method and the Simple Random Sampling method, are verified and compared. The influence of information about the probability distributions and statistical parameters of the input variables, as well as of the limit state function, on the calculated reliability index and failure probability is studied at three points on the lower surface of the concrete slabs of the older type of rigid pavement formerly used in the Czech Republic.

Keywords: Failure, pavement, probability, reliability index, simulation, tensile crack.
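
For a linear limit state g = R - S with independent normal variables, both measures reduce to a few lines; the resistance and load moments below are hypothetical, not the slab data of the study.

```python
import numpy as np

# limit state g = R - S: the slab cracks when tensile stress S exceeds strength R
mu_R, sigma_R = 4.5, 0.45      # hypothetical tensile strength statistics (MPa)
mu_S, sigma_S = 3.0, 0.60      # hypothetical tensile stress statistics (MPa)

# FOSM: reliability index from the first two moments of g (R, S independent)
beta = (mu_R - mu_S) / np.sqrt(sigma_R ** 2 + sigma_S ** 2)
print("FOSM reliability index: %.3f" % beta)

# Simple Random Sampling: failure probability as the fraction of samples with g < 0
rng = np.random.default_rng(1)
n = 200_000
g = rng.normal(mu_R, sigma_R, n) - rng.normal(mu_S, sigma_S, n)
print("SRS failure probability: %.4f" % (g < 0).mean())
```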

4177 Comparison of Detrending Methods in Spectral Analysis of Heart Rate Variability

Authors: Liping Li, Changchun Liu, Ke Li, Chengyu Liu

Abstract:

The non-stationary trend in R-R interval series is considered a main factor that can highly influence the evaluation of spectral analysis, so it is suggested that trends be removed in order to obtain reliable results. In this study, three detrending methods, the smoothness priors approach, the wavelet method, and empirical mode decomposition, were compared on artificial R-R interval series with four types of simulated trends. The Lomb-Scargle periodogram was used for spectral analysis of the R-R interval series. Results indicated that the wavelet method showed a better overall performance than the other two methods and was more time-saving, too. It was therefore selected for spectral analysis of the real R-R interval series of thirty-seven healthy subjects, where significant decreases (19.94±5.87% in the low frequency band and 18.97±5.78% in the ratio, p<0.001) were found. Thus the wavelet method is recommended as an optimal choice for this use.

Keywords: empirical mode decomposition, heart rate variability, signal detrending, smoothness priors, wavelet
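
The spectral step is reproducible with SciPy's Lomb-Scargle periodogram, which handles the unevenly spaced R-R series directly. In the sketch below the beat series is random stand-in data; the LF and HF band edges are the conventional HRV definitions.

```python
import numpy as np
from scipy.signal import lombscargle

# hypothetical R-R interval series (s); beat times are inherently unevenly spaced
rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * rng.standard_normal(300)
t = np.cumsum(rr)

# Lomb-Scargle periodogram over the standard HRV range
freqs = np.linspace(0.01, 0.5, 500)                    # Hz
pgram = lombscargle(t, rr - rr.mean(), 2 * np.pi * freqs, normalize=True)

lf = pgram[(freqs >= 0.04) & (freqs < 0.15)].sum()     # low frequency band
hf = pgram[(freqs >= 0.15) & (freqs < 0.40)].sum()     # high frequency band
print("LF/HF ratio: %.3f" % (lf / hf))
```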

4176 A Method to Annotate Programs with High-Level Knowledge of Computation

Authors: Nobuhiko Hishinuma, Jun Igari, Rentaro Yoshioka

Abstract:

When programming in languages such as C, Java, etc., it is difficult to reconstruct the programmer's ideas from the program code alone. This occurs mainly because many of the programmer's ideas behind the implementation are not recorded in the code during implementation. For example, physical aspects of computation such as spatial structures, activities, and the meaning of variables are not required as instructions to the computer and are often excluded. This makes future reconstruction of the original ideas difficult. AIDA, a multimedia programming language based on the cyberFilm model, can solve these problems by allowing the ideas behind programs to be described using advanced annotation methods as a natural extension of programming. In this paper, a development environment that implements the AIDA language is presented, with a focus on the annotation methods. In particular, an actual scientific numerical computation code is created and the effects of the annotation methods are analyzed.

Keywords: cyberFilm, development environment, knowledge engineering, multimedia programming language

4175 Online Prediction of Nonlinear Signal Processing Problems Based Kernel Adaptive Filtering

Authors: Hamza Nejib, Okba Taouali

Abstract:

This paper presents two of the best-known kernel adaptive filtering (KAF) approaches, kernel least mean squares (KLMS) and kernel recursive least squares (KRLS), used to predict a new output of a nonlinear signal processing problem. Both of these methods implement a nonlinear transfer function using kernel methods in a particular space named a reproducing kernel Hilbert space (RKHS), where the model is a linear combination of kernel functions applied to transform the observed data from the input space to a high dimensional feature space of vectors, an idea known as the kernel trick. KAF thus develops adaptive filters in the RKHS. We use two nonlinear signal processing problems, Mackey-Glass chaotic time series prediction and nonlinear channel equalization, to assess the performance of the presented approaches and, finally, to determine which of them is the better adapted one.

Keywords: KLMS, KRLS, KAF, online prediction, signal processing, RKHS, kernel methods.
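
Of the two filters, KLMS is the simpler: each new sample becomes a kernel center whose coefficient is the step size times the prediction error. A minimal Python sketch with a Gaussian kernel; the toy series, step size, and kernel width are assumptions, and the growing-dictionary cost is ignored.

```python
import numpy as np

def gaussian_kernel(a, b, sigma):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

class KLMS:
    """Kernel Least Mean Squares: LMS performed in the RKHS (minimal sketch)."""
    def __init__(self, eta=0.3, sigma=0.7):
        self.eta, self.sigma = eta, sigma
        self.centers, self.alphas = [], []

    def predict(self, x):
        return sum(a * gaussian_kernel(c, x, self.sigma)
                   for c, a in zip(self.centers, self.alphas))

    def update(self, x, d):
        e = d - self.predict(x)            # prediction error
        self.centers.append(x)             # the input becomes a new kernel center
        self.alphas.append(self.eta * e)   # coefficient = step size * error
        return e

# one-step prediction of a toy nonlinear series from a window of 3 past samples
rng = np.random.default_rng(0)
s = np.sin(0.3 * np.arange(203)) + 0.05 * rng.standard_normal(203)
f = KLMS()
errs = [f.update(s[t - 3:t], s[t]) for t in range(3, 203)]
print("MSE over the last 50 steps: %.5f" % np.mean(np.square(errs[-50:])))
```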
