Search results for: Multiple data sources
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9046


7846 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important for reducing downtime and the cost of periodic maintenance. However, there is little research on this topic, and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers that combines sensor data (temperature, humidity, power) with deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained on the normal samples of the relevant sensors, selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center’s history. With an F1-score of 83.60% versus 24.16%, the proposed model outperforms the state-of-the-art reconstruction method, which uses a single autoencoder over multivariate sequences and detects anomalies by thresholding the reconstruction error.

Keywords: Anomaly detection, autoencoder, data centers, deep learning.
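A minimal sketch of the reconstruction approach described above, assuming PyTorch and scikit-learn: one LSTM autoencoder per sensor is trained on normal windows only, and the input/reconstruction difference signal is summarized into features for a random forest. The window length, layer sizes, and the three error features are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class LSTMAutoencoder(nn.Module):
    """One autoencoder per sensor; reconstructs a (batch, time, 1) window."""
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, n_features, batch_first=True)

    def forward(self, x):
        z, _ = self.encoder(x)      # latent sequence
        out, _ = self.decoder(z)    # reconstruction
        return out

def train_on_normal(model, windows, epochs=20, lr=1e-3):
    """Fit the autoencoder on normal samples only."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(windows), windows)
        loss.backward()
        opt.step()
    return model

def error_features(model, windows):
    """Summarize the input/reconstruction difference signal per window."""
    with torch.no_grad():
        err = (windows - model(windows)).abs().squeeze(-1).numpy()
    return np.stack([err.mean(1), err.std(1), err.max(1)], axis=1)

rng = np.random.default_rng(0)
normal = torch.tensor(rng.normal(size=(256, 64, 1)), dtype=torch.float32)
test = torch.tensor(rng.normal(size=(64, 64, 1)), dtype=torch.float32)
labels = rng.integers(0, 2, size=64)   # placeholder normal/anomaly labels

ae = train_on_normal(LSTMAutoencoder(), normal)
clf = RandomForestClassifier(n_estimators=100)
clf.fit(error_features(ae, test), labels)  # in practice, per-sensor features are concatenated
```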

7845 Use of Hair as an Indicator of Environmental Lead Pollution: Changes after Twenty Years of Phasing Out Leaded Gasoline

Authors: M. A. Abou Donia, A. A. K. Abou-Arab, Nevin E. Sharaf, A. K. Enab, Sherif R. Mohamed

Abstract:

Lead (Pb) poisoning is one of the most common and preventable environmental health problems. Sources of environmental lead pollution include lead alkyl additives in petrol and manufacturing processes. Pb in the atmosphere can be deposited in urban soils and may then be re-suspended to re-enter the atmosphere. This can increase human exposure to Pb and cause long-term health effects. Thus, monitoring Pb pollution is considered one of the major tasks in controlling pollution. Scalp hair can be utilized for the determination of lead (Pb) concentration. It provides a lasting record of metal intake over weeks or even months, and for most metals, their accumulation in hair reflects their accumulation in the whole body. This work investigated the concentration of lead in the scalp hair of male residents of Cairo (residential-traffic and residential-industrial areas) and of rural areas, twenty years after the phasing out of leaded gasoline. Results indicated that the mean concentration of lead in the hair of residential-traffic (9.7552 μg/g ±0.71) and residential-industrial (12.3288 μg/g ±1.13) residents was significantly higher than that of rural residents (4.7327 μg/g ±0.67). The mean concentration of lead in hair was highest among residents of industrial areas, not traffic areas as it had been before the phase-out of leaded gasoline. Twenty years of phasing out leaded gasoline in Cairo has greatly reduced lead pollution among residents of traffic areas, but residents of industrial areas still suffer from lead pollution, and more effort is needed to control its sources.

Keywords: Heavy metals, lead, hair, biological sample, urban pollution, rural pollution.

7844 AnQL: A Query Language for Annotation Documents

Authors: Neerja Bhatnagar, Ben A. Juliano, Renee S. Renner

Abstract:

This paper presents data annotation models at five levels of granularity (database, relation, column, tuple, and cell) of relational data to address the unsuitability of most relational databases for expressing annotations. These models require no structural or schematic changes to the underlying database. They are also flexible, extensible, customizable, database-neutral, and platform-independent. The paper further presents an SQL-like query language, named Annotation Query Language (AnQL), for querying annotation documents. AnQL is simple to understand and builds on the already widespread knowledge and skill set of SQL.

Keywords: Annotation query language, data annotations, data annotation models, semantic data annotations.

7843 Machine Learning-Enabled Classification of Climbing Using Small Data

Authors: Nicholas Milburn, Yu Liang, Dalei Wu

Abstract:

Athlete performance scoring within the climbing domain presents interesting challenges, as the sport lacks an objective way to assign skill levels. Assessing skill within any sport is valuable, as it can be used to mark progress during training and can help an athlete choose appropriate climbs to attempt. Machine learning-based methods are popular for complex problems like this. The available dataset was composed of dynamic force data recorded during climbing; however, it came with challenges such as data scarcity, class imbalance, and temporal heterogeneity. The solutions investigated for these challenges include data augmentation, temporal normalization, conversion of the time series to the spectral domain, and cross-validation strategies. The classifiers investigated included the lightweight k-nearest neighbors (KNN) and support vector machine (SVM) methods as well as a deep convolutional neural network (CNN). The best-performing model reached 80% accuracy. In conclusion, there appears to be enough information within climbing force data to accurately categorize climbers by skill.

Keywords: Classification, climbing, data imbalance, data scarcity, machine learning, time sequence.
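A compact sketch of the pipeline described above, assuming NumPy and scikit-learn: each variable-length force recording is resampled to a common length (temporal normalization), converted to the spectral domain via the FFT, and the lightweight classifiers are compared under stratified cross-validation. All sizes, parameters, and the toy data are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def normalize_length(series, n=256):
    """Resample a 1-D force recording to n samples by linear interpolation."""
    x_old = np.linspace(0.0, 1.0, len(series))
    x_new = np.linspace(0.0, 1.0, n)
    return np.interp(x_new, x_old, series)

def spectral_features(series, n=256):
    """Magnitude spectrum of the length-normalized recording."""
    return np.abs(np.fft.rfft(normalize_length(series, n)))

# Toy data: variable-length recordings with integer skill labels.
rng = np.random.default_rng(1)
recordings = [rng.normal(size=rng.integers(200, 400)) for _ in range(60)]
y = rng.integers(0, 3, size=60)
X = np.array([spectral_features(r) for r in recordings])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for clf in (KNeighborsClassifier(n_neighbors=3), SVC(kernel="rbf", C=1.0)):
    scores = cross_val_score(clf, X, y, cv=cv)
    print(type(clf).__name__, scores.mean())
```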

7842 Solving an Extended Resource Leveling Problem with Multiobjective Evolutionary Algorithms

Authors: Javier Roca, Etienne Pugnaghi, Gaëtan Libert

Abstract:

We introduce an extended resource leveling model that abstracts real-life projects by considering specific work ranges for each resource. Unlike traditional resource leveling problems, this model considers scarce resources and multiple objectives: the minimization of the project makespan and the leveling of each resource's usage over time. We formulate this model as a multiobjective optimization problem and propose a multiobjective genetic algorithm-based solver to optimize it. The solver consists of a two-stage process: a main stage, where we obtain non-dominated solutions for all the objectives, and a postprocessing stage, where we seek to specifically improve the resource leveling of these solutions. We propose an intelligent encoding for the solver that allows domain-specific knowledge to be included in the solving mechanism. The chosen encoding proves effective for solving leveling problems with scarce resources and multiple objectives. The outcomes of the proposed solver represent optimized trade-offs (alternatives) that can later be evaluated by a decision maker; this multi-solution approach is an advantage over the traditional single-solution approach. We compare the proposed solver with state-of-the-art resource leveling methods and report competitive results.

Keywords: Intelligent problem encoding, multiobjective decision making, evolutionary computing, RCPSP, resource leveling.
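A plain-Python sketch of the multiobjective core: each candidate assigns start times to tasks, is scored on makespan and on the flatness of the resource usage profile, and only non-dominated trade-offs are kept for the decision maker. The random encoding and population below are simplified stand-ins for the paper's intelligent encoding and genetic operators, not the authors' solver.

```python
import random

def evaluate(starts, durations, usage, horizon):
    """Score one candidate: (makespan, resource leveling) -- both minimized."""
    profile = [0] * horizon
    for s, d, u in zip(starts, durations, usage):
        for t in range(s, s + d):
            profile[t] += u                       # resource usage per period
    makespan = max(s + d for s, d in zip(starts, durations))
    mean = sum(profile) / horizon
    leveling = sum((p - mean) ** 2 for p in profile)  # lower = flatter usage
    return makespan, leveling

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(pop, scores):
    """Keep only non-dominated candidates (the optimized trade-offs)."""
    return [p for i, p in enumerate(pop)
            if not any(dominates(scores[j], scores[i])
                       for j in range(len(pop)) if j != i)]

random.seed(0)
durations, usage, horizon = [3, 2, 4, 1, 5], [2, 4, 1, 3, 2], 15
pop = [[random.randrange(horizon - d) for d in durations] for _ in range(40)]
scores = [evaluate(ind, durations, usage, horizon) for ind in pop]
print(len(pareto_front(pop, scores)), "non-dominated schedules")
```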

7841 FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule

Authors: Lu Si, Jie Yu, Shasha Li, Jun Ma, Lei Luo, Qingbo Wu, Yongqi Ma, Zhengji Liu

Abstract:

Instance selection (IS) techniques are used to reduce data size and thereby improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets built on the MapReduce framework. Besides preserving prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload at the aggregation node; second, and most importantly, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.

Keywords: Instance selection, data reduction, MapReduce, kNN.
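A simplified sketch of the condensed nearest neighbor idea behind the FCNN rule: seed the selected set with one centroid-nearest prototype per class, then repeatedly add training points that the current set misclassifies, until the set is 1-NN consistent. This is the sequential behavior the paper's MapReduce version is designed to reproduce; FCNN's per-Voronoi-cell representative selection is omitted for brevity.

```python
import numpy as np

def condense(X, y):
    S = []                                   # indices of selected prototypes
    for c in np.unique(y):                   # seed: point nearest each class centroid
        idx = np.where(y == c)[0]
        centroid = X[idx].mean(axis=0)
        S.append(idx[np.argmin(((X[idx] - centroid) ** 2).sum(axis=1))])
    changed = True
    while changed:                           # grow until 1-NN consistent
        changed = False
        for i in range(len(X)):
            d = ((X[S] - X[i]) ** 2).sum(axis=1)
            if y[S][np.argmin(d)] != y[i]:   # misclassified by current set
                S.append(i)
                changed = True
    return np.array(S)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
S = condense(X, y)
print(f"kept {len(S)} of {len(X)} instances")
```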

7840 Optimization of Real-Time Measured Data Transmission, Given the Amount of Data Transmitted

Authors: Michal Kopcek, Tomas Skulavik, Michal Kebisek, Gabriela Krizanova

Abstract:

The operation of nuclear power plants involves continuous monitoring of the environment in their area. This monitoring is performed using a complex data acquisition system, which collects status information about the system itself and the values of many important physical variables, e.g., temperature, humidity, and dose rate. This paper describes a proposal for optimizing the communication that takes place in a teledosimetric system between the central control server, which is responsible for processing and storing the data, and the decentralized measuring stations, which measure the physical variables. Analyses of the ongoing communication were performed, and the system architecture and communication were optimized accordingly.

Keywords: Communication protocol, transmission optimization, data acquisition.

7839 Sliding Mode Control of Pitch-Rate of an F-16 Aircraft

Authors: Ekprasit Promtun, Sridhar Seshagiri

Abstract:

This paper considers the control of the longitudinal flight dynamics of an F-16 aircraft. The primary design objective is model-following of the pitch rate q, which is the preferred system for aircraft approach and landing. Regulation of the aircraft velocity V (or the Mach-hold autopilot) is also considered, but as a secondary objective. The problem is challenging because the system is nonlinear, and also non-affine in the input. A sliding mode controller is designed for the pitch rate that exploits the modal decomposition of the linearized dynamics into its short-period and phugoid approximations. The inherent robustness of the SMC design provides a convenient way to design controllers without gain scheduling, with a steady-state response comparable to that of a conventional polynomial-based gain-scheduled approach with integral control, but with improved transient performance. Integral action is introduced into the sliding mode design using the recently developed technique of “conditional integrators”, and it is shown that robust regulation is achieved for asymptotically constant exogenous signals without degrading the transient response. Through extensive simulation on the nonlinear multiple-input multiple-output (MIMO) longitudinal model of the F-16 aircraft, it is shown that the conditional integrator design outperforms the one based on conventional linear control, without requiring any scheduling.

Keywords: Sliding-mode control, integral control, model following, F-16 longitudinal dynamics, pitch-rate control.
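For illustration only, a toy sliding-mode regulator on a generic second-order plant, not the F-16 model or the authors' conditional-integrator design: the sliding surface s = ė + λe is driven toward zero by a switching term, smoothed here with a boundary layer (saturation) as is common practice. All plant and gain values are assumptions.

```python
import numpy as np

def sat(s, phi):                     # boundary-layer smoothing of sign(s)
    return np.clip(s / phi, -1.0, 1.0)

lam, k, phi, dt = 2.0, 8.0, 0.05, 1e-3
a, b = 1.5, -0.4                     # unstable toy plant: x_ddot = a*x + b*x_dot + u + d
x, xd = 1.0, 0.0                     # initial state
ref = 0.0                            # regulate x to the reference

for step in range(int(5.0 / dt)):
    t = step * dt
    e, ed = x - ref, xd
    s = ed + lam * e                 # sliding surface
    d = 0.3 * np.sin(2.0 * t)        # matched disturbance
    # Equivalent control cancels the known dynamics; the switching term rejects d.
    u = -(a * x + b * xd) - lam * ed - k * sat(s, phi)
    xdd = a * x + b * xd + u + d
    x, xd = x + xd * dt, xd + xdd * dt

print(f"final error {x:.4f}")        # near zero despite the disturbance
```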

7838 A Mobile Multihop Relay Dynamic TDD Scheme for Cellular Networks

Authors: Jong-Moon Chung, Hyung-Weon Cho, Ki-Yong Jin, Min-Hee Cho

Abstract:

In this paper, we present an analytical framework for evaluating the uplink performance of multihop cellular networks based on dynamic time division duplex (TDD). New wireless broadband protocols, such as WiMAX, WiBro, and 3G-LTE, apply TDD, and mobile communication protocols under standardization (e.g., IEEE 802.16j) are investigating mobile multihop relay (MMR) as a future technology. In this paper a novel MMR TDD scheme is presented, in which the dynamic range of the frame is shared between traffic resources of asymmetric nature and multihop relaying. The mobile communication channel interference model comprises inner interference and co-channel interference (CCI). The performance analysis focuses on the uplink because, compared to time division multiple access (TDMA) schemes, the effects of dynamic resource allocation cause significant performance degradation only in the uplink due to CCI [1-3], whereas downlink performance is the same or better. The analysis is based on the signal-to-interference power ratio (SIR) outage probability of dynamic TDD (D-TDD) and TDMA systems, which are the most widespread mobile communication multi-user control techniques. This paper presents the uplink SIR outage probability with multihop results and shows that a dynamic TDD scheme applying MMR can provide a performance improvement over single-hop applications if executed properly.

Keywords: Co-channel interference, dynamic TDD, mobile multihop relay, cellular network, time division multiple access.

7837 Empirical Process Monitoring Via Chemometric Analysis of Partially Unbalanced Data

Authors: Hyun-Woo Cho

Abstract:

Real-time or in-line process monitoring frameworks are designed to give early warnings of a fault along with meaningful identification of its assignable causes. In the pattern recognition areas of artificial intelligence and machine learning, various promising approaches have been proposed, such as kernel-based nonlinear machine learning techniques. This work presents a kernel-based empirical monitoring scheme for batch-type production processes with the small-sample-size problem of partially unbalanced data. Measurement data from normal operations are easy to collect, whilst data on special events or faults are difficult to collect. In such situations, noise filtering techniques can help enhance process monitoring performance. Furthermore, preprocessing of the raw process data is used to remove unwanted variation in the data. The performance of the monitoring scheme was demonstrated using three-dimensional batch data. The results showed that monitoring performance improved significantly in terms of the fault detection success rate.

Keywords: Process monitoring, kernel methods, multivariate filtering, data-driven techniques, quality improvement.

7836 A Comparison of Image Data Representations for Local Stereo Matching

Authors: André Smith, Amr Abdel-Dayem

Abstract:

The stereo matching problem, though present for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements found in a set of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advancements in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While the cost is, at its core, based on comparing image pixel data, there is little consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how effectively each representation reduces the cost of the correct correspondence relative to other possible matches.

Keywords: Colour data, local stereo matching, stereo correspondence, disparity map.
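A small sketch of the experimental core: a local window-based matching cost (sum of absolute differences here) evaluated under different data representations, e.g. full RGB versus grayscale. The cost function, window size, and toy stereo pair are illustrative assumptions, not the paper's benchmark.

```python
import numpy as np

def sad_cost(left, right, x, y, d, w=2):
    """SAD between a window at (y, x) in the left image and (y, x-d) in the right."""
    lw = left[y - w:y + w + 1, x - w:x + w + 1]
    rw = right[y - w:y + w + 1, x - d - w:x - d + w + 1]
    return np.abs(lw.astype(float) - rw.astype(float)).sum()

def to_gray(rgb):
    return rgb @ np.array([0.299, 0.587, 0.114])   # standard luma weights

def best_disparity(left, right, x, y, max_d=8):
    costs = [sad_cost(left, right, x, y, d) for d in range(max_d)]
    return int(np.argmin(costs))                   # winner-take-all match

# Toy stereo pair: the right image is the left shifted by 3 pixels.
rng = np.random.default_rng(0)
left_rgb = rng.integers(0, 255, (40, 60, 3))
right_rgb = np.roll(left_rgb, -3, axis=1)          # true disparity = 3
for name, L, R in [("rgb", left_rgb, right_rgb),
                   ("gray", to_gray(left_rgb), to_gray(right_rgb))]:
    print(name, best_disparity(L, R, x=30, y=20))
```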

7835 Building a Service-Centric Business Model in SMEs in the Business-to-Business Context

Authors: Päivi J. Tossavainen, Leena Alakoski, Katri Ojasalo

Abstract:

Building a service-centric business model requires new knowledge and capabilities in companies. This paper highlights the challenges small and medium-sized enterprises (SMEs) face when developing their service-centric business models, and examines the premises for the knowledge transfer and capability development required. The objective of this paper is to increase knowledge about SMEs' transformation to service-centric business models. The paper reports an action-research-based case study and provides empirical evidence from three case companies. The empirical data were collected through multiple methods. The findings of the paper are: first, a model developed to analyze the current state of a company; second, a process for building service-centric business models; and third, a selection of suitable service development methods. The lack of a holistic understanding of service logic suggests that SMEs need practical and easy-to-use methods to improve their business.

Keywords: Service-centric business model, service development, action research, case study.

7834 Flexible, Adaptable and Scalable Business Rules Management System for Data Validation

Authors: Kashif Kamran, Farooque Azam

Abstract:

The policies governing the business of any organization are well reflected in its business rules. The business rules are implemented by data validation techniques, coded during the software development process. Any change in business policies results in changes to the code written for the data validation that enforces those policies. Implementing changes in business rules without changing the code is the objective of this paper. The proposed approach enables users to create rule sets at run time, once the software has been developed. The rule sets newly defined by end users are associated with the data variables for which validation is required. The proposed approach lets users define business rules using all the comparison operators and Boolean operators. Multithreading is used to validate the data entered by the end user against the applied business rules. The evaluation of the data is performed by a newly created thread using an enhanced form of the RPN (Reverse Polish Notation) algorithm.

Keywords: Business rules, data validation, multithreading, Reverse Polish Notation.
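A minimal sketch of rule evaluation via Reverse Polish Notation: a rule such as "age >= 18 AND balance > 0" is stored in postfix form and evaluated against data entered at run time, so newly defined rules need no code changes. The token set and rule format are assumptions; the paper's enhanced RPN algorithm and multithreaded dispatch are not reproduced.

```python
import operator

OPS = {
    ">": operator.gt, ">=": operator.ge, "<": operator.lt,
    "<=": operator.le, "==": operator.eq, "!=": operator.ne,
    "AND": lambda a, b: a and b, "OR": lambda a, b: a or b,
}

def eval_rpn(tokens, record):
    """Evaluate a postfix rule against a dict of field values."""
    stack = []
    for tok in tokens:
        if tok in OPS:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[tok](a, b))
        elif tok in record:
            stack.append(record[tok])        # variable reference
        else:
            stack.append(float(tok))         # numeric literal
    return stack.pop()

# Infix "age >= 18 AND balance > 0" in postfix form:
rule = ["age", "18", ">=", "balance", "0", ">", "AND"]
print(eval_rpn(rule, {"age": 21, "balance": 50.0}))   # True
print(eval_rpn(rule, {"age": 16, "balance": 50.0}))   # False
```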

7833 Tidal Data Analysis Using ANN

Authors: Ritu Vijay, Rekha Govil

Abstract:

The design of a complete expansion that allows for compact representation of certain relevant classes of signals is a central problem in signal processing applications. Achieving such a representation means knowing the signal features for the purposes of denoising, classification, interpolation, and forecasting. Multilayer neural networks are a relatively new class of techniques that are mathematically proven to approximate any continuous function arbitrarily well. Radial basis function (RBF) networks, which make use of Gaussian activation functions, have also been shown to be universal approximators. In this age of ever-increasing digitization in the storage, processing, analysis, and communication of information, there are numerous applications where one needs to construct a continuously defined function or numerical algorithm to approximate, represent, and reconstruct the given discrete data of a signal. Often one wishes to manipulate the data in a way that requires information not included explicitly in the data, which is done through interpolation and/or extrapolation. Tidal data are a perfect example of a time series, and many statistical techniques have been applied to tidal data analysis and representation; ANNs are a recent addition to these techniques. In the present paper we describe the time series representation capabilities of a special type of ANN, the radial basis function network, and present the results of tidal data representation using RBFs. Tidal data analysis and representation is one of the important requirements in marine science for forecasting.

Keywords: ANN, RBF, tidal data.
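A minimal sketch of representing a tidal series with a Gaussian RBF network: one center per observation, weights obtained by solving the interpolation system, and the fitted function evaluated on a finer grid. The synthetic two-constituent signal and the basis width are placeholders, not real observations or the authors' settings.

```python
import numpy as np

def rbf_fit(t, y, width):
    """Solve Phi w = y for Gaussian RBFs centered at the sample times."""
    phi = np.exp(-((t[:, None] - t[None, :]) / width) ** 2)
    return np.linalg.solve(phi + 1e-8 * np.eye(len(t)), y)  # small ridge for stability

def rbf_eval(t_new, centers, w, width):
    phi = np.exp(-((t_new[:, None] - centers[None, :]) / width) ** 2)
    return phi @ w

t = np.linspace(0, 48, 49)                       # hourly samples over 2 days
y = 1.2 * np.sin(2 * np.pi * t / 12.42) + 0.4 * np.sin(2 * np.pi * t / 24.0)
w = rbf_fit(t, y, width=2.0)

t_fine = np.linspace(0, 48, 481)                 # interpolate to a 6-minute grid
y_fine = rbf_eval(t_fine, t, w, width=2.0)
print(f"max in-sample error: {np.abs(rbf_eval(t, t, w, 2.0) - y).max():.2e}")
```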

7832 Object Negotiation Mechanism for an Intelligent Environment Using Event Agents

Authors: Chiung-Hui Chen

Abstract:

With advancements in science and technology, the concept of the Internet of Things (IoT) has gradually developed. The development of the intelligent environment adds intelligence to objects in the living space by using the IoT. In a smart environment where multiple users share the living space, different service requirements from different users lead to conflicting situations when the context-aware system makes decisions about providing services. The purpose of establishing a communication and negotiation mechanism among objects in the intelligent environment is therefore to resolve those service conflicts among users. This study proposes a decision-making methodology that uses “Event Agents” as its core. When the sensor system receives information, it evaluates a user’s current events and conditions; analyses object, location, time, and environmental information; calculates the priority of the object; and provides the user services based on the event. Moreover, when an event is not single but overlaps with another, conflicts arise. This study adopts a “Multiple Events Correlation Matrix” to calculate the degree values of incidents and the support values for each object. The matrix uses these values as the basis for making inferences about system service and for determining appropriate services when there is a conflict.

Keywords: Internet of things, intelligent object, event agents, negotiation mechanism, degree of similarity.

7831 Spatial Data Mining by Decision Trees

Authors: S. Oujdi, H. Belbachir

Abstract:

Existing data mining methods cannot be applied directly to spatial data because spatial specificities, such as spatial relationships, must be taken into account. This paper focuses on classification with decision trees, one of the principal data mining techniques. We propose an extension of the C4.5 algorithm for spatial data based on two different approaches: join materialization and querying the different tables on the fly. Similar work has been done on these two main approaches: the first, join materialization, favors processing time at the expense of memory space, whereas the second, querying the tables on the fly, saves memory space at the expense of processing time. The modified C4.5 algorithm requires three input tables: a target table, a neighbor table, and a spatial join index that contains the possible spatial relationships between the objects in the target table and those in the neighbor table. The proposed algorithms are applied to spatial data from the accidentology domain. A comparative study of our approach with other work on classification by spatial decision trees is also detailed.

Keywords: C4.5 algorithm, decision trees, S-CART, spatial data mining.
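A sketch of the split criterion at the heart of C4.5, the information gain ratio. In the spatial extension described above, the candidate attributes would additionally include spatial predicates drawn from the neighbor table via the spatial join index; here a plain categorical attribute (a hypothetical "near a main road" predicate) stands in for such a predicate.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(attribute, labels):
    """C4.5 split criterion: information gain / split information."""
    n = len(labels)
    gain, split_info = entropy(labels), 0.0
    for value in set(attribute):
        subset = [l for a, l in zip(attribute, labels) if a == value]
        gain -= len(subset) / n * entropy(subset)
        split_info -= len(subset) / n * math.log2(len(subset) / n)
    return gain / split_info if split_info else 0.0

# Hypothetical spatial predicate vs. accident severity:
near_road = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
severity = ["high", "high", "low", "low", "high", "low", "low", "low"]
print(f"gain ratio: {gain_ratio(near_road, severity):.3f}")
```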

7830 Affine Projection Algorithm with Variable Data-Reuse Factor

Authors: ChangWoo Lee, Young Kow Lee, Sung Jun Ban, SungHoo Choi, Sang Woo Kim

Abstract:

This paper suggests a new affine projection (AP) algorithm with a variable data-reuse factor, using the condition number of the input data matrix as the decision factor. To reduce the computational burden, we adopt a recently reported technique that estimates this condition number. Several simulations show that the new algorithm performs better than the conventional AP algorithm.

Keywords: Affine projection algorithm, variable data-reuse factor, condition number, convergence rate, misalignment.
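A sketch of the AP update with a data-reuse factor P varied per iteration from the conditioning of the input data matrix. The threshold rule and the exact condition number computation below are illustrative assumptions; the paper uses a low-cost condition number estimator, and all signal and step-size parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P_max, mu, delta = 8, 4, 0.5, 1e-4
w_true = rng.normal(size=N)                   # unknown system to identify
w = np.zeros(N)                               # adaptive filter
x = rng.normal(size=4000)                     # input signal

for k in range(P_max + N, len(x)):
    # Input data matrix: the P_max most recent length-N regressor vectors.
    X = np.column_stack([x[k - p - N + 1:k - p + 1] for p in range(P_max)])
    # Variable data reuse: shrink P when the data matrix is ill-conditioned.
    P = P_max if np.linalg.cond(X.T @ X) < 1e3 else 1
    X = X[:, :P]
    d = X.T @ w_true + 0.01 * rng.normal(size=P)   # noisy desired signal
    e = d - X.T @ w                                # a-priori error vector
    # Regularized AP update: w += mu * X (X^T X + delta*I)^{-1} e
    w = w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(P), e)

print(f"misalignment: {np.linalg.norm(w - w_true):.4f}")
```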

7829 The Development of the Multi-Agent Classification System (MACS) in Compliance with FIPA Specifications

Authors: Mohamed R. Mhereeg

Abstract:

The paper investigates the feasibility of constructing a software multi-agent based monitoring and classification system and utilizing it to provide automated and accurate classification of end users developing applications in the spreadsheet domain. The agents function autonomously to provide continuous and periodic monitoring of Excel spreadsheet workbooks. The result is the Multi-Agent Classification System (MACS), which complies with the specifications of the Foundation for Intelligent Physical Agents (FIPA). Different technologies were brought together to build MACS. The strength of the system is the integration of agent technology and the FIPA specifications with other technologies, namely Windows Communication Foundation (WCF) services, Service-Oriented Architecture (SOA), and Oracle Data Mining (ODM). Microsoft .NET Windows-service-based agents were utilized to develop the monitoring agents of MACS; the .NET WCF services, together with the SOA approach, allowed the distribution of and communication between agents over the WWW, in order to satisfy the monitoring and classification of multiple developers. ODM was used to automate the classification phase of MACS.

Keywords: Autonomous, classification, MACS, multi-agent, SOA, WCF.

7828 Attribute Analysis of Quick Response Code Payment Users Using Discriminant Non-negative Matrix Factorization

Authors: Hironori Karachi, Haruka Yamashita

Abstract:

Recently, quick response (QR) code payment systems have become popular. Many companies have introduced new QR code payment services, and these services compete with each other to increase their number of users. To increase user numbers, we should grasp how the demographic information, usage information, and value of users differ between services. In this study, we analyze real-world data provided by Nomura Research Institute, including the demographic data of users and usage information for two services, LINE Pay and PayPay. Non-negative Matrix Factorization (NMF) is widely used for analyzing such data and interpreting their features; however, the target data suffer from missing values. We use EM-algorithm NMF (EMNMF) to complete the unknown values and thus understand the features of the data presented in matrix form. Moreover, to compare the results of NMF analyses of two matrices, Discriminant NMF (DNMF) shows the differences in user features between the two matrices. In this study, we combine EMNMF and DNMF to analyze the target data. As the interpretation, we show the differences in user features between LINE Pay and PayPay.

Keywords: Data science, non-negative matrix factorization, missing data, quality of services.
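A minimal sketch of NMF with missing entries handled EM-style: the multiplicative updates are weighted by a binary observation mask so that only observed cells drive the factorization, which is the role EMNMF plays above. The discriminant (DNMF) comparison between the two services' matrices is not reproduced; sizes, rank, and data are toy assumptions.

```python
import numpy as np

def masked_nmf(V, mask, rank, iters=500, eps=1e-9):
    """Weighted multiplicative-update NMF: only cells with mask=True are fitted."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W, H = rng.random((n, rank)), rng.random((rank, m))
    V0 = np.where(mask, V, 0.0)
    for _ in range(iters):
        H *= (W.T @ V0) / (W.T @ (mask * (W @ H)) + eps)
        W *= (V0 @ H.T) / ((mask * (W @ H)) @ H.T + eps)
    return W, H

# Toy user-by-attribute matrix with ~30% missing entries.
rng = np.random.default_rng(1)
V = rng.random((20, 3)) @ rng.random((3, 12))      # low-rank ground truth
mask = rng.random(V.shape) > 0.3
W, H = masked_nmf(V, mask, rank=3)
err = np.abs(W @ H - V)[~mask].mean()              # error on the held-out cells
print(f"mean error on missing entries: {err:.3f}")
```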

7827 Damage Localization of Deterministic-Stochastic Systems

Authors: Yen-Po Wang, Ming-Chih Huang, Ming-Lian Chang

Abstract:

A scheme integrating deterministic-stochastic subspace system identification with the method of damage localization vectors is proposed in this study for damage detection in structures based on seismic response data. A series of shaking table tests using a five-storey steel frame was conducted at the National Center for Research on Earthquake Engineering (NCREE), Taiwan. Damage conditions were simulated by reducing the cross-sectional area of some of the columns at the bottom. Both single damage conditions and combinations of multiple damage conditions at various locations were considered. In the system identification analysis, both full and partial observation conditions were taken into account. It is shown that the damaged stories can be identified from the global responses of the structure to earthquakes, provided the structure is sufficiently observed. In addition to detecting damage with respect to the intact structure, identification of new or extended damage in the as-damaged (ill-conditioned) counterpart was also studied. The proposed scheme proves to be effective.

Keywords: Damage locating vectors, deterministic-stochastic subspace system, shaking table tests, system identification.

7826 Using Data Mining for Learning and Clustering FCM

Authors: Somayeh Alizadeh, Mehdi Ghazanfari, Mohammad Fathian

Abstract:

Fuzzy Cognitive Maps (FCMs) have been successfully applied in numerous domains to show the relations between essential components. Some FCMs contain many nodes related to each other, and more nodes mean more complex system behavior and analysis. In this paper, a novel learning method is used to construct FCMs from historical data, and a new method based on data mining and the DEMATEL method is defined to reduce the number of nodes. This method clusters the nodes of an FCM based on their cause-and-effect behaviors.

Keywords: Clustering, data mining, fuzzy cognitive map (FCM), learning.
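For context, a minimal sketch of how an FCM is represented and simulated: nodes hold activation values, a weight matrix holds the cause-effect relations, and activations are iterated through a squashing function. The weights below are placeholders for those learned from historical data; clustering nodes by cause-and-effect behavior would then group nodes with similar rows and columns of this matrix.

```python
import numpy as np

def fcm_step(a, W, f=lambda x: 1.0 / (1.0 + np.exp(-x))):
    """One FCM inference step: a_i(t+1) = f(sum_j w_ji * a_j(t))."""
    return f(W.T @ a)

W = np.array([[0.0, 0.6, 0.0],     # node 0 promotes node 1
              [0.0, 0.0, 0.8],     # node 1 promotes node 2
              [-0.5, 0.0, 0.0]])   # node 2 inhibits node 0
a = np.array([1.0, 0.0, 0.0])      # initial activations
for _ in range(20):
    a = fcm_step(a, W)
print(np.round(a, 3))              # converged activation pattern
```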

7825 A Neutral Set Approach for Applying TOPSIS in Maintenance Strategy Selection

Authors: C. Ardil

Abstract:

This paper introduces the concept of neutral sets (NSs) and explores various operations on NSs, along with their associated properties. The foundation of the Neutral Set framework lies in ontological neutrality and the principles of logic, including the Law of Non-Contradiction. By encompassing components for possibility, indeterminacy, and necessity, the NS framework provides a flexible representation of truth, uncertainty, and necessity, accommodating diverse ontological perspectives without presupposing specific existential commitments. The inclusion of Possibility acknowledges the spectrum of potential states or propositions, promoting neutrality by accommodating various viewpoints. Indeterminacy reflects the inherent uncertainty in understanding reality, refraining from making definitive ontological commitments in uncertain situations. Necessity captures propositions that must hold true under all circumstances, aligning with the principle of logical consistency and implicitly supporting the Law of Non-Contradiction. Subsequently, a neutral set-TOPSIS approach is applied to the maintenance strategy selection problem, demonstrating the practical applicability of the NS framework. The paper further explores uncertainty relations and presents the fundamental preliminaries of NS theory, emphasizing its role in fostering ontological neutrality and logical coherence in reasoning.

Keywords: Uncertainty sets, neutral sets, maintenance strategy selection, multiple criteria decision-making analysis, MCDM, uncertainty decision analysis, distance function, multiple attribute decision making, selection method, uncertainty, TOPSIS.
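A standard crisp TOPSIS sketch for maintenance strategy selection: normalize the decision matrix, weight it, find the ideal and anti-ideal solutions, and rank alternatives by relative closeness. The neutral-set extension described above (possibility/indeterminacy/necessity components) is not reproduced, and the scores and weights below are illustrative.

```python
import numpy as np

def topsis(X, weights, benefit):
    """X: alternatives x criteria; benefit[j] is True if criterion j is maximized."""
    R = X / np.linalg.norm(X, axis=0)             # vector normalization
    V = R * weights                               # weighted normalized matrix
    ideal = np.where(benefit, V.max(0), V.min(0))
    anti = np.where(benefit, V.min(0), V.max(0))
    d_pos = np.linalg.norm(V - ideal, axis=1)     # distance to the ideal
    d_neg = np.linalg.norm(V - anti, axis=1)      # distance to the anti-ideal
    return d_neg / (d_pos + d_neg)                # relative closeness

# Strategies (rows) scored on cost, reliability, safety (columns).
X = np.array([[40.0, 0.80, 7.0],     # corrective maintenance
              [60.0, 0.90, 8.0],     # preventive maintenance
              [75.0, 0.95, 9.0]])    # predictive maintenance
scores = topsis(X, weights=np.array([0.4, 0.35, 0.25]),
                benefit=np.array([False, True, True]))
print(scores.argsort()[::-1])        # ranking, best first
```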

7824 Modeling Low Voltage Power Line as a Data Communication Channel

Authors: Eklas Hossain, Sheroz Khan, Ahad Ali

Abstract:

Power line communications may be used as a data communication channel in public and indoor distribution networks so that no new cables need to be installed. An industrial low-voltage distribution network may be utilized for the data transfer required by on-line condition monitoring of electric motors. This paper presents a pilot distribution network for modeling a low-voltage power line as a data transfer channel. The signal attenuation of the communication channels in the pilot environment is presented, and the analysis is carried out by varying the corresponding signal attenuation parameters.

Keywords: Data communication, indoor distribution networks, low voltage, power line.

7823 Generating Concept Trees from Dynamic Self-organizing Map

Authors: Norashikin Ahmad, Damminda Alahakoon

Abstract:

The self-organizing map (SOM) provides both clustering and visualization capabilities in data mining. Dynamic self-organizing maps such as the Growing Self-Organizing Map (GSOM) have been developed to overcome the fixed structure of the SOM and enable better representation of the discovered patterns. However, when mining large datasets or historical data, the hierarchical structure of the data is also useful for viewing cluster formation at different levels of abstraction. In this paper, we present a technique to generate concept trees from the GSOM. The formation of trees from different GSOM spread factor values is also investigated, and the quality of the resulting trees analyzed. The results show that concept trees can be generated from the GSOM, thus eliminating the need to re-cluster the data from scratch to obtain a hierarchical view of the data under study.

Keywords: Dynamic self-organizing map, concept formation, clustering.

7822 Optical Fiber Data Throughput in a Quantum Communication System

Authors: Arash Kosari, Ali Araghi

Abstract:

A mathematical model for an optical-fiber communication channel is developed, resulting in an expression for the throughput and loss of the corresponding link. The data are assumed to be transmitted using separate photons with different polarizations. The derived model also shows the dependency of data throughput on the channel length and the depolarization factor. It is observed that absorption of photons affects the throughput more strongly than depolarization does. In addition, the probability of depolarization and the absorption of radiated photons are obtained.

Keywords: Absorption, data throughput, depolarization, optical fiber.

7821 Fuzzy Clustering of Categorical Attributes and its Use in Analyzing Cultural Data

Authors: George E. Tsekouras, Dimitris Papageorgiou, Sotiris Kotsiantis, Christos Kalloniatis, Panagiotis Pintelas

Abstract:

We develop a three-step fuzzy logic-based algorithm for clustering categorical attributes, and we apply it to analyze cultural data. In the first step the algorithm employs an entropy-based clustering scheme, which initializes the cluster centers. In the second step we apply the fuzzy c-modes algorithm to obtain a fuzzy partition of the data set, and the third step introduces a novel cluster validity index, which decides the final number of clusters.

Keywords: Categorical data, cultural data, fuzzy logic clustering, fuzzy c-modes, cluster validity index.
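A compact sketch of the second step above, the fuzzy c-modes algorithm for categorical data: dissimilarity is the number of mismatched attributes, cluster centers are modes (the most membership-weighted category per attribute), and memberships follow the usual fuzzy update. The entropy-based initialization and validity index of steps one and three are omitted, and the toy data are placeholders for coded cultural survey answers.

```python
import numpy as np

def mismatches(X, mode):
    return (X != mode).sum(axis=1).astype(float)      # simple matching distance

def fuzzy_c_modes(X, c=2, m=2.0, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    modes = X[rng.choice(len(X), c, replace=False)]   # random initial modes
    for _ in range(iters):
        D = np.array([mismatches(X, z) for z in modes]) + 1e-9  # c x n
        U = D ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=0)                            # fuzzy memberships per point
        for k in range(c):                            # update each mode attribute-wise
            for j in range(X.shape[1]):
                cats = np.unique(X[:, j])
                wsum = [(U[k] ** m)[X[:, j] == v].sum() for v in cats]
                modes[k, j] = cats[int(np.argmax(wsum))]
    return modes, U

X = np.array([["a", "x", "p"], ["a", "x", "q"], ["a", "y", "p"],
              ["b", "z", "r"], ["b", "z", "s"], ["b", "y", "r"]])
modes, U = fuzzy_c_modes(X, c=2)
print(modes)
print(np.round(U, 2))
```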

7820 Complementing Assessment Processes with Standardized Tests: A Work in Progress

Authors: Amparo Camacho

Abstract:

ABET-accredited programs must assess the development of student learning outcomes (SOs) in engineering programs. Different institutions implement different strategies for this assessment, usually designed “in house.” This paper presents a proposal for including standardized tests to complement the ABET assessment model in an engineering college made up of six distinct engineering programs. The engineering college formulated a model of quality assurance in education to be implemented throughout the six engineering programs to regularly assess and evaluate the achievement of SOs in each program offered. The model uses diverse techniques and sources of data to assess student performance and to implement actions of improvement based on the results of this assessment. The model is called the “Assessment Process Model” and it includes SOs A through K, as defined by ABET. SOs can be divided into two categories: “hard skills” and “professional skills” (soft skills). The first includes abilities such as applying knowledge of mathematics, science, and engineering, and designing and conducting experiments, as well as analyzing and interpreting data. The second category, “professional skills,” includes communicating effectively and understanding professional and ethical responsibility. Within the Assessment Process Model, various tools were used to assess SOs related to both “hard” and “soft” skills. The assessment tools designed included rubrics, surveys, questionnaires, and portfolios. In addition to these instruments, the engineering college decided to use tools that systematically gather consistent quantitative data. For this reason, an in-house exam was designed and implemented, based on the curriculum of each program. Even though this exam was administered during various academic periods, it is not currently considered standardized. In 2017, the engineering college included three standardized tests: one to assess mathematical and scientific reasoning and two more to assess reading and writing abilities. With these exams, the college hopes to obtain complementary information that can help better measure the development of both hard and soft skills of students in the different engineering programs. In the first semester of 2017, the three exams were given to sample groups of students from the six engineering programs, drawn from the first-, fifth-, and tenth-semester cohorts. At the time of submission of this paper, the engineering college has descriptive statistical data and is working with statisticians on a more in-depth and detailed analysis of the sample students’ achievement on the three exams. The overall objective of including standardized exams in the assessment model is to identify more precisely the least developed SOs in order to define and implement the educational strategies necessary for students to achieve them in each engineering program.

Keywords: Assessment, hard skills, soft skills, standardized tests.

7819 Effect of Inventory Management on Financial Performance: Evidence from Nigerian Conglomerate Companies

Authors: Adamu Danlami Ahmed

Abstract:

Inventory management is a key determinant of effective and efficient operations for any manager. This study examines the relationship between inventory management and financial performance. The population of the study comprises all conglomerate companies quoted on the Nigerian Stock Exchange as at 31st December 2010, and the scope covers the period from 2010 to 2014. Descriptive statistics, Pearson correlation, and multiple regression are used to analyze the data. It was found that inventory management is significantly related to the profitability of the company. This implies that efficient management of the inventory cycle will enhance the profitability of the company, whereas a lack of proper inventory management will hinder financial performance. Based on the results, it is recommended that conglomerate companies keep inventories to a minimum, maintain proper checks so that only needed inventories are held in store, and keep track of the movement of goods in order to avoid unnecessary delays of finished and work-in-progress (WIP) goods in the store and warehouse.

Keywords: Finished goods, work in progress, financial performance, inventory.

7818 Multiclass Support Vector Machines with Simultaneous Multi-Factors Optimization for Corporate Credit Ratings

Authors: Hyunchul Ahn, William X. S. Wong

Abstract:

Corporate credit rating prediction is one of the most important topics studied by researchers over the last decade. During that time, researchers have pushed the limits of the exactness of corporate credit rating prediction models by applying several data-driven tools, including statistical and artificial intelligence methods. Among them, the multiclass support vector machine (MSVM) has been widely applied due to its good predictability. However, heuristics, for example, the parameters of a kernel function and the appropriate feature and instance subsets, have become the main target of criticism of MSVM, as they dictate the MSVM architectural variables. This study presents a hybrid MSVM model that is intended to optimize all of these design factors: feature selection, instance selection, and the kernel parameters. Our model adopts a genetic algorithm (GA) to simultaneously optimize the multiple heterogeneous design factors of the MSVM.

Keywords: Corporate credit rating prediction, feature selection, genetic algorithms, instance selection, multiclass support vector machines.
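A toy sketch of the simultaneous-optimization idea, assuming scikit-learn: a GA chromosome jointly encodes a feature mask, an instance mask, and a kernel parameter, and fitness is the cross-validated accuracy of the resulting SVM (multiclass via scikit-learn's built-in scheme). The population size, operators, rates, and the Iris stand-in data are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
n_feat, n_inst = X.shape[1], X.shape[0]

def random_chromosome():
    return {"feat": rng.random(n_feat) < 0.7,      # feature selection bits
            "inst": rng.random(n_inst) < 0.7,      # instance selection bits
            "gamma": 10 ** rng.uniform(-3, 1)}     # kernel parameter

def fitness(ch):
    f, i = ch["feat"], ch["inst"]
    if f.sum() == 0 or len(np.unique(y[i])) < 3:   # degenerate chromosome
        return 0.0
    clf = SVC(kernel="rbf", gamma=ch["gamma"])
    return cross_val_score(clf, X[i][:, f], y[i], cv=3).mean()

def mutate(ch):
    child = {"feat": ch["feat"].copy(), "inst": ch["inst"].copy(),
             "gamma": ch["gamma"] * 10 ** rng.normal(0, 0.2)}
    child["feat"] ^= rng.random(n_feat) < 0.1      # flip some feature bits
    child["inst"] ^= rng.random(n_inst) < 0.05     # flip some instance bits
    return child

pop = [random_chromosome() for _ in range(20)]
for gen in range(15):                              # keep the best half, mutate it
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(p) for p in pop[:10]]
print(f"best CV accuracy: {fitness(pop[0]):.3f}")
```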

7817 Weighted Data Replication Strategy for Data Grid Considering Economic Approach

Authors: N. Mansouri, A. Asadi

Abstract:

A Data Grid is a geographically distributed environment that deals with data-intensive applications in scientific and enterprise computing. Data replication is a common method used to achieve efficient and fault-tolerant data access in Grids. In this paper, a dynamic data replication strategy, called Enhanced Latest Access Largest Weight (ELALW), is proposed. This strategy is an enhanced version of the Latest Access Largest Weight strategy. Replication should be used wisely, however, because the storage capacity of each Grid site is limited; thus, it is important to design an effective strategy for the replica replacement task. ELALW replaces replicas based on the number of future requests, the size of the replica, and the number of copies of the file. It also improves access latency by selecting the best replica when various sites hold replicas. The proposed replica selection chooses the best replica location from among the many replicas based on response time, which can be determined by considering the data transfer time, the storage access latency, the replica requests waiting in the storage queue, and the distance between nodes. Simulation results using OptorSim show that our replication strategy achieves better overall performance than other strategies in terms of job execution time, effective network usage, and storage resource usage.

Keywords: Data grid, data replication, simulation, replica selection, replica placement.
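An illustrative sketch of the two scoring rules described above: a replacement weight that favors files likely to be requested again while penalizing large and widely replicated files, and a response-time estimate for selecting among candidate replica sites. The exact ELALW weighting formulas are not given here; these particular combinations are assumptions.

```python
def replacement_weight(future_requests, size_mb, n_copies):
    """Higher weight = more valuable to keep; evict the lowest-weight replica."""
    return future_requests / (size_mb * n_copies)

def response_time(size_mb, bandwidth_mbps, storage_latency_s,
                  queued_requests, per_request_s, distance_cost_s):
    """Estimated time to serve a replica from one candidate site."""
    return (size_mb * 8 / bandwidth_mbps          # data transfer time
            + storage_latency_s                   # storage access latency
            + queued_requests * per_request_s     # wait in the storage queue
            + distance_cost_s)                    # network distance term

sites = {
    "site_a": response_time(500, 100, 0.02, 4, 0.5, 0.3),
    "site_b": response_time(500, 1000, 0.05, 10, 0.5, 1.0),
}
print(min(sites, key=sites.get))                  # best replica location
```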
