Search results for: fuzzy aggregation operator

330 IoT-Based Agriculture Monitoring Framework for Sustainable Rice Production

Authors: Armanul Hoque Shaon, Md Baizid Mahmud, Askander Nobi, Md. Raju Ahmed, Md. Jiabul Hoque

Abstract:

In the Internet of Things (IoT), devices are linked to the Internet through a wireless network, allowing them to collect and transmit data without the need for a human operator. Agriculture relies heavily on wireless sensor networks, a vital component of the IoT. Such a network monitors physical or environmental variables like temperature, sound, vibration, pressure, or motion without relying on a central location and collaboratively passes its data across the network to be analyzed. As the primary source of plant nutrients, soil is critical to the agricultural industry's continued growth, and we developed an IoT solution for monitoring it. In the proposed network, the sink node gathers groundwater-level and soil-moisture readings from the sensor nodes and transmits their mean to the gateway, which centralizes the data and forwards it to a web server for dissemination. The web server stores the soil moisture data and presents it to the web application's users. The soil characteristics collected by this networked method can be used to improve rice production, which matters because paddy land is running out as the population grows; the success of this project will depend on the appropriate use of the existing land base.
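
As a rough illustration of the data path described above, the sketch below shows a sink node averaging one round of soil-moisture readings and forwarding the mean to a gateway over HTTP. The endpoint URL and payload field are hypothetical; the abstract does not specify a transport protocol.

```python
# Minimal sketch of the sink-node aggregation step (hypothetical endpoint
# and payload field; the paper's actual transport is not specified).
from statistics import mean
import json
import urllib.request

GATEWAY_URL = "http://gateway.local/soil"  # hypothetical gateway endpoint

def forward_mean(readings_percent):
    """Average one polling round of soil-moisture readings and send it on."""
    body = json.dumps({"soil_moisture_mean": mean(readings_percent)}).encode()
    req = urllib.request.Request(GATEWAY_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # gateway relays the value to the web server

forward_mean([41.2, 39.8, 44.5, 40.1])  # readings from four sensor nodes
```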

Keywords: IoT based agriculture monitoring, intelligent irrigation, communicating network, rice production

Procedia PDF Downloads 154
329 Analysis of ECG Survey Data by Applying Clustering Algorithms

Authors: Irum Matloob, Shoab Ahmad Khan, Fahim Arif

Abstract:

The Indo-Pak region has been afflicted by heart disease for many decades. Many surveys show that the percentage of cardiac patients in Pakistan is increasing day by day, and special attention needs to be paid to this issue. A framework is proposed for performing a detailed analysis of ECG survey data collected to measure the prevalence of heart disease in Pakistan. The ECG survey data are first filtered using automated Minnesota codes, and only those ECGs fulfilling the standardized conditions of the Minnesota codes are used for further analysis. Feature selection is then performed by applying a proposed algorithm based on the discernibility matrix to select relevant features from the database. Clustering is performed to expose natural clusters in the ECG survey data by applying a spectral clustering algorithm together with the fuzzy c-means algorithm. The hidden patterns and interesting relationships exposed by this analysis are useful for further detailed analysis and for many other purposes.
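
As a sketch of the clustering step, the following is a minimal fuzzy c-means implementation (the standard membership and centroid updates); the spectral embedding and the survey-specific ECG features are not reproduced here.

```python
# Minimal fuzzy c-means sketch; synthetic 2-D data stands in for the
# ECG-derived feature vectors.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        centroids = W.T @ X / W.sum(axis=0)[:, None]  # fuzzy-weighted centroids
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)      # standard FCM membership update
    return centroids, U

X = np.vstack([np.random.randn(50, 2) + off for off in ([0, 0], [5, 5], [0, 5])])
centroids, U = fuzzy_c_means(X, c=3)
```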

Keywords: arrhythmias, centroids, ECG, clustering, discernibility matrix

Procedia PDF Downloads 351
328 The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application to Accident Data

Authors: Y. Sunecher, N. Mamode Khan, V. Jowaheer

Abstract:

This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average of order one (BINARMA(1,1)) process with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Based on these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) are derived iteratively from some initial stationary moments. As regards the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution properties. The forecasting equations of the BINARMA(1,1) model are also derived. A simulation study is conducted in which BINARMA(1,1) count data are generated in R using a multivariate Poisson routine for the innovation terms. The performance of the BINARMA(1,1) model is then assessed through this simulation experiment, and the mean estimates of the model parameters are all efficient, based on their standard errors. The proposed model is then used to analyse real-life accident data from the motorway in Mauritius, based on some covariates: policemen, daily patrol, speed cameras, traffic lights and roundabouts. The BINARMA(1,1) model is applied to the accident data, and the CML estimates clearly indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius. The forecasting equations also provide reliable one-step-ahead forecasts.
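
The binomial thinning operator used to specify the model can be illustrated in a few lines. The sketch below simulates a univariate INAR(1) series with Poisson innovations, a simplified stand-in for the bivariate BINARMA(1,1); parameter values are illustrative only.

```python
# Binomial thinning on a univariate INAR(1) series with Poisson innovations:
# x[t] = alpha ∘ x[t-1] + e[t].
import numpy as np

rng = np.random.default_rng(1)

def thin(alpha, x):
    """Binomial thinning: alpha ∘ x is a Binomial(x, alpha) draw."""
    return rng.binomial(x, alpha)

alpha, lam, T = 0.5, 2.0, 200
x = np.empty(T, dtype=int)
x[0] = rng.poisson(lam)
for t in range(1, T):
    x[t] = thin(alpha, x[t - 1]) + rng.poisson(lam)  # survivors + innovations
```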

Keywords: non-stationary, BINARMA(1, 1) model, Poisson innovations, conditional maximum likelihood, CML

Procedia PDF Downloads 129
327 Credit Risk Assessment Using Rule-Based Classifiers: A Comparative Study

Authors: Salima Smiti, Ines Gasmi, Makram Soui

Abstract:

Credit risk is the most important issue for financial institutions. Its assessment has become an important task used to predict defaulting customers and to classify customers as good or bad payers. Numerous techniques have been applied to credit risk assessment. However, many of them, such as neural networks and SVMs, are black-box models: they assign applicants to classes without any explanation. In this paper, we propose to assess credit risk using a rule-based classification method whose output is a set of rules that describe and explain the decision. To this end, we compare seven classification algorithms (JRip, Decision Table, OneR, ZeroR, Fuzzy Rule, PART and Genetic Programming (GP)), with the goal of finding the rules that best satisfy three criteria: accuracy, sensitivity, and specificity. The results confirm the efficiency of the GP algorithm on the German and Australian credit datasets compared to the other rule-based techniques.
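
A minimal sketch of the comparison protocol follows: fit classifiers and score them on accuracy, sensitivity, and specificity from the confusion matrix. scikit-learn models stand in for the Weka rule learners named above, and synthetic data stands in for the German and Australian datasets.

```python
# Sketch of the evaluation loop; a shallow decision tree plays the role of
# a rule learner and DummyClassifier plays the role of ZeroR.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.dummy import DummyClassifier
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=1000, random_state=0)  # placeholder data
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for name, clf in [("tree-rules", DecisionTreeClassifier(max_depth=4)),
                  ("ZeroR", DummyClassifier(strategy="most_frequent"))]:
    tn, fp, fn, tp = confusion_matrix(yte, clf.fit(Xtr, ytr).predict(Xte)).ravel()
    print(name,
          "acc=%.3f" % ((tp + tn) / len(yte)),
          "sens=%.3f" % (tp / (tp + fn)),
          "spec=%.3f" % (tn / (tn + fp)))
```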

Keywords: credit risk assessment, classification algorithms, data mining, rule extraction

Procedia PDF Downloads 181
326 The Hydrotrope-Mediated, Low-Temperature, Aqueous Dissolution of Maize Starch

Authors: Jeroen Vinkx, Jan A. Delcour, Bart Goderis

Abstract:

Complete aqueous dissolution of starch is notoriously difficult. A high-temperature autoclaving process is necessary, followed by cooling the solution below its boiling point. The cooled solution is inherently unstable over time: gelation and retrogradation processes, along with aggregation induced by undissolved starch remnants, result in starch precipitation. We recently observed the spontaneous gelatinization of native maize starch (MS) in aqueous sodium salicylate (NaSal) solutions at room temperature, and a hydrotropic mode of solubilization is hypothesized. Differential scanning calorimetry (DSC) and polarized optical microscopy (POM) of starch dispersions in NaSal solution were used to demonstrate the room-temperature gelatinization of MS at different concentrations of MS and NaSal. The DSC gelatinization peak shifts to lower temperatures, and the gelatinization enthalpy decreases, with increasing NaSal concentration. POM images confirm the same trend through the disappearance of the 'Maltese cross' interference pattern of the starch granules. The minimal NaSal concentration needed to induce complete room-temperature dissolution of MS was found to be around 15-20 wt%; the MS content of the dispersion has little influence on the amount of NaSal needed to dissolve it. The effect of the NaSal solution on the MS molecular weight was checked with HPSEC. It is speculated that, because of its amphiphilic character, NaSal enhances the solubility of MS in water by associating with the more hydrophobic MS moieties, much like urea, which has also been used to enhance starch dissolution in alkaline aqueous media. As such small molecules do not tend to form micelles in water, they are called hydrotropes rather than surfactants. A minimal hydrotrope concentration (MHC) is necessary for the hydrotropes to structure themselves in water, resulting in a higher solubility of MS; this is the case for the MS/NaSal/H₂O system. Further investigations into the putative hydrotropic dissolution mechanism are necessary.

Keywords: hydrotrope, dissolution, maize starch, sodium salicylate, gelatinization

Procedia PDF Downloads 188
325 Evaluating and Reducing the Impact of Aircraft Technical Delays and Cancellations on Operational Reliability: Case Study of an Airline Operator

Authors: Adel A. Ghobbar, Ahmad Bakkar

Abstract:

Although special care is given to maintenance, aircraft systems fail, and these failures cause delays and cancellations. The occurrence of delays and cancellations affects operators and manufacturers negatively. To reduce technical delays and cancellations, one should be able to determine the important systems causing them. The goal of this research is to find a method for identifying the systems responsible for the costliest delays and cancellations for airline operators. After identifying the relevant information, a predictive model was introduced to forecast failures and their impact. Data were obtained from the manufacturers' service reliability team database, and delay and cancellation evaluation methods were then identified. No cost estimation methods were used due to their complexity. The model takes into account the frequency of delays and cancellations and uses weighting factors, based on customer experience, to indicate the severity of their duration. The data analysis showed that delay and cancellation events are not seasonal and do not follow any specific trends. The use of weighting factors influences the shortlist over short (monthly) periods but not over the analyzed period of three years. The landing gear and the navigation system are among the top three factors causing delays and cancellations for all three aircraft types. The results confirm that cooperation between certain operators and manufacturers reduces the impact of delays and cancellations.
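
The frequency-plus-duration-weighting idea can be sketched as below; the duration bands and weight values are invented for illustration, not the paper's customer-experience-calibrated factors.

```python
# Sketch of the ranking logic: each aircraft system gets a score combining
# event frequency with duration-based severity weights (values illustrative).
DURATION_WEIGHTS = {"<15min": 1, "15-60min": 2, ">60min": 4, "cancellation": 8}

def system_score(events):
    """events: list of duration-band labels logged against one system."""
    return sum(DURATION_WEIGHTS[e] for e in events)

systems = {
    "landing gear": ["cancellation", ">60min", "15-60min"],
    "navigation":   [">60min", ">60min", "<15min"],
}
ranking = sorted(systems, key=lambda s: system_score(systems[s]), reverse=True)
print(ranking)  # most disruptive systems first
```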

Keywords: reliability, availability, delays & cancellations, aircraft maintenance

Procedia PDF Downloads 132
324 Magnetoresistance Transition from Negative to Positive in Functionalization of Carbon Nanotube and Composite with Polyaniline

Authors: Krishna Prasad Maity, Narendra Tanty, Ananya Patra, V. Prasad

Abstract:

Carbon nanotube (CNT) is well known for its very good electrical and thermal conductivity and high tensile strength. Because of this, it is widely used in many fields such as nanotechnology, electronics, and optics. In the last two decades, polyaniline (PANI) with CNT and functionalized CNT (fCNT) has been a promising material for applications in gas sensing, electromagnetic shielding, capacitor electrodes, etc. The study of the electrical conductivity of PANI/CNT and PANI/fCNT is therefore important for understanding charge transport and the interaction between PANI and CNT in the composite. A transition in magnetoresistance (MR) is observed with decreasing temperature, increasing magnetic field, and decreasing CNT percentage in the CNT/PANI composite. Functionalization of CNT prevents nanotube aggregation and improves interfacial interaction, dispersion, and stabilization in the polymer matrix. However, it shortens the tubes, breaks C-C sp² bonds, and enhances disorder by creating defects on the side walls. We have studied the electrical resistivity and MR of PANI with CNT and fCNT composites at different weight percentages, down to a temperature of 4.2 K and in magnetic fields up to 5 T. Resistivity at low temperature increases significantly in the composite with functionalized CNT compared to pure CNT. Interestingly, a transition from negative to positive magnetoresistance is observed when the filler is changed from pure CNT to functionalized CNT beyond a certain percentage (10 wt%), as an effect of the greater disorder in the fCNT/PANI composite. The transition of the MR is explained on the basis of the polaron-bipolaron model. The long-range Coulomb interaction between two polarons, screened by disorder in the fCNT/PANI composite, increases the effective on-site Coulomb repulsion energy for bipolaron formation, which changes the sign of the MR from negative to positive.

Keywords: coulomb interaction, magnetoresistance transition, polyaniline composite, polaron-bipolaron

Procedia PDF Downloads 172
323 Comparison of Two Anesthetic Methods during Interventional Neuroradiology Procedure: Propofol versus Sevoflurane Using Patient State Index

Authors: Ki Hwa Lee, Eunsu Kang, Jae Hong Park

Abstract:

Background: Interventional neuroradiology (INR) has been a rapidly growing and evolving neurosurgical field during the past few decades. Sevoflurane and propofol are both suitable anesthetics for INR procedures. Monitoring of the depth of anesthesia is used very widely. The SEDLine™ monitor, a 4-channel processed EEG monitor, uses a proprietary algorithm to analyze the raw EEG signal and displays Patient State Index (PSI) values. Only a few studies have examined the PSI in neuro-anesthesia. We aimed to investigate the differences in PSI values and hemodynamic variables between sevoflurane and propofol anesthesia during INR procedures. Methods: We retrospectively reviewed the medical records of patients scheduled to undergo embolization of a non-ruptured intracranial aneurysm by a single operator from May 2013 to December 2014. Sixty-five patients were categorized into two groups: sevoflurane (n = 33) and propofol (n = 32). PSI values, hemodynamic variables, and the use of hemodynamic drugs were analyzed. Results: Significant differences were seen between PSI values obtained during different perioperative stages in both groups (P < 0.0001). The PSI values of the propofol group were lower than those of the sevoflurane group during the INR procedure (P < 0.01). The propofol group had a longer time to extubation and a greater phenylephrine requirement than the sevoflurane group (p < 0.05). Antihypertensive drugs were administered more often during extubation in the sevoflurane group (p < 0.05). Conclusions: The PSI can detect the depth of anesthesia and changes in anesthetic concentration during INR procedures. Extubation was faster in the sevoflurane group, but recovery was smoother in the propofol group.

Keywords: interventional neuroradiology, patient state index, propofol, sevoflurane

Procedia PDF Downloads 180
322 General Architecture for Automation of Machine Learning Practices

Authors: U. Borasi, Amit Kr. Jain, Rakesh, Piyush Jain

Abstract:

Data collection, data preparation, model training, model evaluation, and deployment are all processes in a typical machine learning workflow. Training data needs to be gathered and organised; this often entails collecting a sizable dataset and cleaning it to remove or correct inaccurate or missing information. Preparing the acquired data for use in the machine learning model requires pre-processing it, which often entails actions like scaling or normalising the data, handling outliers, selecting appropriate features, reducing dimensionality, etc. This pre-processed data is then used to train a model with some machine learning algorithm. After the model has been trained, it needs to be assessed by computing metrics like accuracy, precision, and recall on a test dataset. Every time a new model is built, both data pre-processing and model training, two crucial processes in the machine learning (ML) workflow, must be carried out. Various machine learning algorithms can be employed with every single approach to data pre-processing, generating a large set of combinations to choose from: for every method of handling missing values (dropping records, replacing with the mean, etc.), for every scaling technique, and for every combination of features selected, a different algorithm can be used. As a result, in order to get the optimum outcome, these tasks are frequently repeated in different combinations. This paper suggests a simple architecture for organizing this large 'combination set of pre-processing steps and algorithms' into an automated workflow, which simplifies the task of carrying out all the possibilities.
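
A minimal sketch of the architecture's core loop: enumerate every combination from an operator pool of pre-processing steps and algorithms, fit each resulting pipeline, and keep the best. The component lists are illustrative stand-ins for the paper's operator pool.

```python
# Enumerate the "combination set" of pre-processing choices x algorithms.
from itertools import product
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

imputers = [SimpleImputer(strategy="mean"), SimpleImputer(strategy="median")]
scalers  = [StandardScaler(), MinMaxScaler()]
models   = [LogisticRegression(max_iter=500), DecisionTreeClassifier()]

def best_pipeline(X, y):
    scored = []
    for imp, sc, mdl in product(imputers, scalers, models):   # the combination set
        pipe = Pipeline([("impute", imp), ("scale", sc), ("model", mdl)])
        scored.append((cross_val_score(pipe, X, y, cv=3).mean(), pipe))
    return max(scored, key=lambda t: t[0])                    # the scheduler keeps the winner
```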

Keywords: machine learning, automation, AUTOML, architecture, operator pool, configuration, scheduler

Procedia PDF Downloads 57
321 Impact of Map Generalization on Spatial Analysis

Authors: Lin Li, P. G. R. N. I. Pussella

Abstract:

When representing spatial data and their attributes on different types of maps, scale plays a key role in the process of map generalization. The process consists of two main operators: selection and omission. Once data are selected, they undergo several geometric transformations such as elimination, simplification, smoothing, exaggeration, displacement, aggregation and size reduction. As a result of these operations at different levels of the data, the geometry of spatial features, such as length, sinuosity, orientation, perimeter and area, is altered. This is worst when preparing small-scale maps, since the cartographer does not have enough space to represent all the features on the map. In practice, when GIS users want to analyze a set of spatial data, they retrieve a data set and carry out the analysis without considering important characteristics such as the scale, the purpose of the map and the degree of generalization. Further, GIS users use and compare different maps with different degrees of generalization. Sometimes they go beyond the scale of the source map using the zoom-in facility, violating the basic cartographic rule that a larger-scale map should not be created from a smaller-scale one. The main objective of this study is to discuss the effect of map generalization on GIS analysis. Three digital maps at different scales (1:10,000, 1:50,000 and 1:250,000), prepared by the Survey Department of Sri Lanka, the national mapping agency of Sri Lanka, were used. Features common to all three maps were taken, and an overlay analysis was repeated with the data in different combinations. Road, river and land-use data sets were used for the study. A simple model, finding the best place for a wildlife park, was used to identify the effects. The results show remarkable effects of the different degrees of generalization: different locations with different geometries were obtained as outputs of the analysis. The study suggests that there should be sound methods to overcome this effect; as a solution, it is recommended to bring all the data sets to a common scale before carrying out the analysis.
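
The geometric distortion described above can be demonstrated in a few lines: Douglas-Peucker simplification (shapely's simplify) shortens a line and pulls its sinuosity toward 1. The coordinates are synthetic, not the Sri Lankan map data.

```python
# Effect of line simplification on length and sinuosity.
from shapely.geometry import LineString, Point

river = LineString([(0, 0), (1, 0.8), (2, -0.5), (3, 0.9), (4, 0)])
generalized = river.simplify(0.6)  # tolerance plays the role of target scale

def sinuosity(line):
    ends = Point(line.coords[0]).distance(Point(line.coords[-1]))
    return line.length / ends

print(river.length, generalized.length)          # length shrinks
print(sinuosity(river), sinuosity(generalized))  # sinuosity drops toward 1
```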

Keywords: generalization, GIS, scales, spatial analysis

Procedia PDF Downloads 328
320 The Appraisal of Construction Site Productivity Using Kendall's Concordance

Authors: Abdulkadir Abu Lawal

Abstract:

Owing to the dearth of reliable cardinal numerical data, linked phenomena in productivity indices, such as operational costs and company turnovers, could not be investigated; this would not give insight into the root of productivity problems at individual sites. Instead, ordinal ranking by professionals most directly involved with construction sites was used for Kendall's concordance analysis. Responses gathered from independent architects, builders/engineers, and quantity surveyors were analyzed. The responses ranked factors that affect site productivity, categorized as head-office factors, resource management effectiveness factors, motivational factors, and training/skill development factors. It was found that productivity is low and has to be improved in order to facilitate Nigeria's efforts to bridge its infrastructure deficit. The significance of this work is underlined by a Kendall's coefficient of concordance of 0.78, and remedial measures must be emphasized to stimulate better productivity. Further detailed study can be undertaken using fuzzy logic analysis on a wider Delphi survey.
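
For reference, Kendall's coefficient of concordance for m raters ranking n items is W = 12S / (m²(n³ − n)), where S is the sum of squared deviations of the rank totals from their mean (ties ignored). A worked sketch with an invented rank matrix:

```python
# Kendall's W from an illustrative rank matrix (no ties correction).
import numpy as np

ranks = np.array([  # rows: raters (architect, engineer, QS); cols: factors
    [1, 2, 3, 4],
    [2, 1, 3, 4],
    [1, 3, 2, 4],
])
m, n = ranks.shape
totals = ranks.sum(axis=0)
S = ((totals - totals.mean()) ** 2).sum()
W = 12 * S / (m ** 2 * (n ** 3 - n))
print(round(W, 3))  # 1.0 means perfect agreement among raters, 0 none
```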

Keywords: factors, Kendall's coefficient of concordance, magnitude of agreement, percentage magnitude of dichotomy, ranking variables

Procedia PDF Downloads 627
319 Comparative Analysis of Data Gathering Protocols with Multiple Mobile Elements for Wireless Sensor Network

Authors: Bhat Geetalaxmi Jairam, D. V. Ashoka

Abstract:

Wireless sensor networks (WSNs) are used in many applications to collect sensed data from different sources. Sensed data have to be delivered through the sensors' wireless interface using multi-hop communication towards the sink. Data collection in wireless sensor networks consumes energy, and energy consumption is the major constraint in WSNs; reducing energy consumption while increasing the amount of gathered data is a great challenge. In this paper, we have implemented two data gathering protocols with multiple mobile sinks/elements to collect data from sensor nodes. The first is Energy-Efficient Data Gathering with Tour Length-Constrained Mobile Elements in Wireless Sensor Networks (EEDG), in which the mobile sinks use a vehicle routing protocol to collect data. The second is An Intelligent Agent-based Routing structure for Mobile Sinks in WSNs (IAR), in which the mobile sinks use Prim's algorithm to collect data. We implemented the concepts common to both protocols, such as the deployment of mobile sinks, the generation of the visiting schedule, and the collection of data from cluster members. We compared the performance of both protocols using statistics on performance parameters such as delay, packet drop, packet delivery ratio, available energy, and control overhead. We conclude that EEDG is more efficient than the IAR protocol, but with a few limitations arising from unaddressed issues such as redundancy removal, idle listening, and the mobile sink's pause/wait state at a node. In future work, we plan to concentrate on these limitations to arrive at a new energy-efficient protocol that will help improve the lifetime of the WSN.
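
Prim's algorithm, which IAR uses to build the routing structure the mobile sinks follow, can be sketched as follows; the node coordinates are invented.

```python
# Lazy Prim's MST over cluster-head positions.
import heapq
import math

nodes = {0: (0, 0), 1: (2, 1), 2: (1, 3), 3: (4, 2)}  # cluster heads

def prim_mst(nodes, root=0):
    dist = lambda a, b: math.dist(nodes[a], nodes[b])
    visited, edges, heap = {root}, [], []
    for v in nodes:
        if v != root:
            heapq.heappush(heap, (dist(root, v), root, v))
    while heap and len(visited) < len(nodes):
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue                    # stale entry, cheaper edge already used
        visited.add(v)
        edges.append((u, v, w))
        for x in nodes:
            if x not in visited:
                heapq.heappush(heap, (dist(v, x), v, x))
    return edges  # tree edges along which the mobile sink collects data

print(prim_mst(nodes))
```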

Keywords: aggregation, consumption, data gathering, efficiency

Procedia PDF Downloads 497
318 Formalizing a Procedure for Generating Uncertain Resource Availability Assumptions Based on Real Time Logistic Data Capturing with Auto-ID Systems for Reactive Scheduling

Authors: Lars Laußat, Manfred Helmus, Kamil Szczesny, Markus König

Abstract:

As one result of the project "Reactive Construction Project Scheduling using Real Time Construction Logistic Data and Simulation", a procedure for using data about uncertain resource availability assumptions in reactive scheduling processes has been developed. Prediction data about resource availability are generated in a formalized way using real-time monitoring data, e.g., from auto-ID systems on the construction site and in the supply chains. The paper focuses on the formalization of the procedure for monitoring construction logistic processes, for the detection of disturbances, and for the generation of new, uncertain scheduling assumptions for the reactive resource-constrained simulation procedure that is described further in other papers.

Keywords: auto-ID, construction logistic, fuzzy, monitoring, RFID, scheduling

Procedia PDF Downloads 513
317 Risk Prioritization in Tunneling Construction Projects

Authors: David Nantes, George Gilbert

Abstract:

Many risks can crop up as a tunneling project develops, and it is crucial to be aware of them. Due to the unpredictable nature of tunneling projects and the interconnectedness of risk occurrences, risk assessment presents a significant challenge. The purpose of this study is to provide a hybrid FDEMATEL-ANP model to help prioritize risks during tunnel construction projects. The ambiguity in expert judgments and the relative severity of interdependencies across risk occurrences are both taken into consideration by this model, thanks to the Fuzzy Decision-Making Trial and Evaluation Laboratory (FDEMATEL). The Analytic Network Process (ANP) method is then used to rank priorities and assess project risks. The authors provide a case study of a subway tunneling construction project to back up the validity of their methodology. The results show that the proposed method successfully isolated key risk factors and elucidated their interplay in the case study, and it has the potential to become a helpful resource for evaluating risks associated with tunnel construction projects.
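
The DEMATEL core of the method can be sketched as below: defuzzify triangular fuzzy judgments by their centroid, normalize the direct-influence matrix D, and compute the total-relation matrix T = D(I − D)⁻¹. The expert judgments are invented, and the aggregation across multiple experts is omitted.

```python
# Fuzzy DEMATEL sketch for three risks; judgments are triangular fuzzy
# numbers (l, m, u) and purely illustrative.
import numpy as np

tfn = np.array([
    [(0, 0, 0), (1, 2, 3), (2, 3, 4)],
    [(2, 3, 4), (0, 0, 0), (0, 1, 2)],
    [(1, 2, 3), (1, 2, 3), (0, 0, 0)],
], dtype=float)                        # shape: (risk, risk, 3)

crisp = tfn.mean(axis=2)               # centroid defuzzification of each TFN
D = crisp / crisp.sum(axis=1).max()    # normalize by the largest row sum
T = D @ np.linalg.inv(np.eye(3) - D)   # total-relation matrix T = D (I - D)^-1
prominence = T.sum(axis=1) + T.sum(axis=0)   # D + R per risk
print(np.argsort(prominence)[::-1])    # risks in priority order
```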

Keywords: risk, prioritization, FDEMATEL, ANP, tunneling construction projects

Procedia PDF Downloads 92
316 Development of a Few-View Computed Tomographic Reconstruction Algorithm Using Multi-Directional Total Variation

Authors: Chia Jui Hsieh, Jyh Cheng Chen, Chih Wei Kuo, Ruei Teng Wang, Woei Chyn Chu

Abstract:

Compressed sensing (CS) based computed tomographic (CT) reconstruction algorithms utilize total variation (TV) to transform the CT image into a sparse domain and minimize the L1-norm of the sparse image for reconstruction. Different from traditional CS-based reconstruction, which only calculates the x-coordinate and y-coordinate TV to transform CT images into the sparse domain, we propose a multi-directional TV to transform the tomographic image into the sparse domain for low-dose reconstruction. Our method considers all possible directions of TV calculation around a pixel, so the sparse transform for CS-based reconstruction is more accurate. In 2D CT reconstruction, we use eight-directional TV to transform the CT image into the sparse domain; furthermore, we use 26-directional TV for 3D reconstruction. This multi-directional sparse transform makes the CS-based reconstruction algorithm more powerful in reducing noise and increasing image quality. To validate and evaluate the performance of this multi-directional sparse transform, we used both the Shepp-Logan phantom and a head phantom as the targets for reconstruction, with the corresponding simulated sparse projection data (angular sampling intervals of 5 deg and 6 deg, respectively). The multi-directional TV method reconstructs images with relatively fewer artifacts than the traditional CS-based reconstruction algorithm that only calculates x-coordinate and y-coordinate TV. We chose RMSE, PSNR, and UQI as the parameters for quantitative analysis, and whichever parameter is calculated, the proposed multi-directional TV method performs better.
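
A sketch of the eight-directional TV term: finite differences toward each of a pixel's eight neighbours, with diagonal steps scaled by their length (a common convention; the paper's exact weighting is not restated here).

```python
# Eight-directional total variation of a 2-D image.
import numpy as np

def tv8(img):
    shifts = [(0, 1), (1, 0), (1, 1), (1, -1),
              (0, -1), (-1, 0), (-1, -1), (-1, 1)]
    total = 0.0
    for dy, dx in shifts:              # each neighbour pair is counted in both directions
        diff = img - np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        step = np.hypot(dy, dx)        # 1 for axial, sqrt(2) for diagonal steps
        total += np.abs(diff).sum() / step
    return total

phantom = np.zeros((64, 64)); phantom[20:44, 20:44] = 1.0
print(tv8(phantom))  # small for piecewise-constant images, i.e., sparse in TV
```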

Keywords: compressed sensing (CS), low-dose CT reconstruction, total variation (TV), multi-directional gradient operator

Procedia PDF Downloads 256
315 Preparation of a Flurbiprofen Derivative for Enhanced Brain Penetration

Authors: Jungkyun Im

Abstract:

Nonsteroidal anti-inflammatory drugs (NSAIDs) are effective for relieving pain and reducing inflammation. They are nonselective inhibitors of the two isoforms of cyclooxygenase, COX-1 and COX-2, thereby inhibiting the production of hormone-like lipid compounds such as prostaglandins and thromboxanes, which cause inflammation, pain, fever, platelet aggregation, etc. In addition, many recent research articles report a neuroprotective effect of NSAIDs in neurodegenerative diseases such as Alzheimer's disease (AD) and Parkinson's disease (PD). However, the clinical use of NSAIDs in these diseases is limited by their low brain distribution. Therefore, to assist the in-depth investigation of the pharmaceutical mechanism of flurbiprofen in neuroprotection and to make flurbiprofen a more potent drug for preventing or alleviating neurodegenerative diseases, its delivery to the brain must be effective, and a sufficient amount of flurbiprofen must penetrate the blood-brain barrier (BBB) to gain access to the patient's brain. We have recently developed several types of guanidine-rich molecular carriers with high molecular weights and good water solubility that readily cross the BBB and display efficient distribution in the mouse brain. The G8 molecular carrier (bearing eight guanidine groups), based on D-sorbitol, was found to be very effective in delivering anticancer drugs to the mouse brain. In the present study, employing the same molecular carrier, we prepared a flurbiprofen conjugate and studied its BBB permeation in a mouse tissue distribution study. Flurbiprofen was attached to the molecular carrier together with a fluorescein probe and multiple terminal guanidiniums. The conjugate was found to internalize into live cells and readily cross the BBB to enter the mouse brain. Our novel synthetic flurbiprofen conjugate will hopefully deliver NSAIDs into the brain and is therefore applicable to the treatment or prevention of neurodegenerative diseases.

Keywords: flurbiprofen, drug delivery, molecular carrier, organic synthesis

Procedia PDF Downloads 231
314 Modeling and Analyzing Controversy in Large-Scale Cyber-Argumentation

Authors: Najla Althuniyan

Abstract:

Online discussions take place across different platforms. These discussions have the potential to extract crowd wisdom and capture collective intelligence from different perspectives. However, certain phenomena, such as controversy, often appear in online argumentation and make the discussion between participants heated. Heated discussions can be used to extract new knowledge, so detecting the presence of controversy is an essential step in determining whether collective intelligence can be extracted from online discussions. This paper uses existing measures for estimating controversy quantitatively in cyber-argumentation. First, it defines controversy in different fields; then it identifies the attributes of controversy in online discussions. The distribution of user opinions and the distance between opinions are used to calculate the controversial degree of a discussion. Finally, the results from each controversy measure are discussed and analyzed using an empirical study generated by a cyber-argumentation tool. This is an improvement over existing measurements because it does not require ground-truth data or specific settings and can be adapted to distribution-based or distance-based opinions.
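
One simple distance-based measure of the kind described: the mean pairwise distance between user opinions on a [-1, 1] agreement scale, which peaks when the crowd splits evenly between the extremes. The formula is illustrative, not the paper's exact measure.

```python
# Distance-based controversy score over a set of opinion values.
import numpy as np

def controversy(opinions):
    """Mean pairwise distance between opinions on a [-1, 1] scale.
    Peaks at 1.0 when the crowd splits evenly between the extremes."""
    x = np.asarray(opinions, dtype=float)
    return np.abs(x[:, None] - x[None, :]).mean()

print(controversy([-1, -1, 1, 1]))       # 1.0: maximally controversial
print(controversy([0.8, 0.9, 1.0, 1]))   # near 0: consensus
```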

Keywords: online argumentation, controversy, collective intelligence, agreement analysis, collaborative decision-making, fuzzy logic

Procedia PDF Downloads 116
313 A Flute Tracking System for Monitoring the Wear of Cutting Tools in Milling Operations

Authors: Hatim Laalej, Salvador Sumohano-Verdeja, Thomas McLeay

Abstract:

Monitoring of tool wear in milling operations is essential for achieving the desired dimensional accuracy and surface finish of a machined workpiece. Although there are numerous statistical models and artificial intelligence techniques available for monitoring the wear of cutting tools, these techniques cannot pinpoint which cutting edge of the tool, or which insert in the case of indexable tooling, is worn or broken. Currently, the task of monitoring the wear on the tool's cutting edges is carried out by the operator, who performs a manual inspection, causing undesirable machine tool stoppages and, consequently, costs from lost productivity. The present study concerns the development of a flute tracking system to segment signals related to each physical flute of a three-flute cutter used in an end milling operation. The purpose of the system is to monitor the cutting condition of individual flutes separately in order to determine their progressive wear rates and to predict imminent tool failure. The results of this study clearly show that signals associated with each flute can be effectively segmented using the proposed flute tracking system. Furthermore, they illustrate that by segmenting the sensor signal by flute, it is possible to investigate the wear of each physical cutting edge of the tool. These findings are significant in that they facilitate online condition monitoring of a cutting tool for each specific flute without the need for operators or engineers to perform manual inspections.
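
The segmentation idea can be sketched as below: given the spindle speed and flute count, the signal is sliced into windows of one tooth pass each and assigned round-robin to flutes. The signal and parameters are synthetic, and in practice a spindle-synchronous trigger is needed to align the first window with a known flute.

```python
# Slice a sensor signal into per-flute windows at constant spindle speed.
import numpy as np

fs = 10_000                 # sensor sampling rate, Hz (assumed)
rpm, n_flutes = 3_000, 3
samples_per_flute = int(fs * 60 / rpm / n_flutes)  # samples per tooth pass

signal = np.random.randn(fs)  # stand-in for one second of force/vibration data

flute_segments = [[] for _ in range(n_flutes)]
for i in range(0, len(signal) - samples_per_flute, samples_per_flute):
    flute = (i // samples_per_flute) % n_flutes  # which edge is cutting now
    flute_segments[flute].append(signal[i:i + samples_per_flute])

# e.g., track RMS per flute as a simple wear indicator
rms = [np.sqrt(np.mean(np.square(np.concatenate(s)))) for s in flute_segments]
```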

Keywords: machining, milling operation, tool condition monitoring, tool wear prediction

Procedia PDF Downloads 303
312 Design and Motion Control of a Two-Wheel Inverted Pendulum Robot

Authors: Shiuh-Jer Huang, Su-Shean Chen, Sheam-Chyun Lin

Abstract:

A two-wheel inverted pendulum robot (TWIPR) is designed with two hub DC motors for human riding and motion control evaluation. Accelerometer and gyroscope sensors are chosen to measure the tilt angle and angular velocity of the inverted pendulum robot. The mobile robot's position and velocity are estimated based on the DC motors' built-in Hall sensors. The control kernel of this electric mobile robot is designed around an embedded Arduino Nano board, and a handlebar works as the steering mechanism. The intelligent, model-free fuzzy sliding mode control (FSMC) was employed as the main control algorithm for this mobile robot, with adjustments for different control purposes. Intelligent controllers were designed for the balance control and moving speed control of this robot under different operating conditions, and the control performance was evaluated based on experimental results.
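
A sketch of the sliding-mode core underlying FSMC balance control: a sliding surface s = c·θ + θ̇ and a saturated switching law. The fuzzy layer that adapts the gains online is omitted, and the constants are illustrative.

```python
# Sliding-mode balance law sketch (fuzzy gain adaptation omitted).
C, K, ETA = 8.0, 25.0, 0.5   # surface slope, switching gain, boundary layer

def balance_torque(theta, theta_dot):
    """theta: tilt angle (rad), theta_dot: tilt rate (rad/s)."""
    s = C * theta + theta_dot              # sliding surface
    sat = max(-1.0, min(1.0, s / ETA))     # saturation replaces sign() to cut chattering
    return -K * sat                        # motor torque command

print(balance_torque(0.05, -0.1))
```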

Keywords: balance control, speed control, intelligent controller, two wheel inverted pendulum

Procedia PDF Downloads 224
311 Creating a Brand Value Assessment Model for Choosing a Cosmetic Brand in Tehran by Combining DEMATEL Techniques and Multi-Stage ANFIS

Authors: Hamed Saremi, Suzan Taghavy, Seyed Mohammad Hanif Sanjari, Mostafa Kahali

Abstract:

One of the challenges for manufacturing and service companies in providing a product or service is making the brand recognized by consumers in target markets. Companies run most of their processes at similar capacity, but the constant threat of damaging internal and external factors can prevent a brand's rise, and more companies are reaching the point of bankruptcy. This paper tries to identify and analyze effective indicators of brand equity and presents an intelligent model to prevent possible damage. In this study, indicators of brand equity were identified based on a literature study and, according to expert opinions, the set of indicators was structured with the DEMATEL technique. A multi-stage adaptive neuro-fuzzy inference system (ANFIS) was then used to design a multi-stage intelligent system for the assessment of brand equity.

Keywords: brand, cosmetic product, ANFIS, DEMATEL

Procedia PDF Downloads 417
310 A Large Ion Collider Experiment (ALICE) Diffractive Detector Control System for RUN-II at the Large Hadron Collider

Authors: J. C. Cabanillas-Noris, M. I. Martínez-Hernández, I. León-Monzón

Abstract:

The selection of diffractive events in the ALICE experiment during the first data-taking period (RUN-I) of the Large Hadron Collider (LHC) was limited by the range over which rapidity gaps occur. Better measurements could be achieved by expanding the range in which the production of particles can be detected. For this purpose, the ALICE Diffractive (AD0) detector has been installed and commissioned for the second phase (RUN-II). Any new detector should be able to take data synchronously with all other detectors and be operated through the ALICE central systems. One of the key elements that must be developed for the AD0 detector is the Detector Control System (DCS). The DCS must be designed to operate this detector safely and correctly. Furthermore, the DCS must provide optimum operating conditions for the acquisition and storage of physics data and ensure these are of the highest quality. The operation of AD0 implies the configuration of about 200 parameters, from electronics settings and power supply levels to the archiving of operating-condition data and the generation of safety alerts. It also includes the automation of procedures to get the AD0 detector ready for taking data under the appropriate conditions for the different run types in ALICE. The performance of the AD0 detector depends on a number of parameters, such as the nominal voltage for each photomultiplier tube (PMT), the threshold levels for accepting or rejecting incoming pulses, the definition of triggers, etc. All these parameters define the efficiency of AD0, and they have to be monitored and controlled through the AD0 DCS. Finally, the AD0 DCS provides the operator with multiple interfaces to execute these tasks, realized as operating panels and scripts running in the background. These features are implemented on a SCADA software platform as a distributed control system which integrates into the global control system of the ALICE experiment.

Keywords: AD0, ALICE, DCS, LHC

Procedia PDF Downloads 305
309 A Picture Is Worth a Billion Bits: Real-Time Image Reconstruction from Dense Binary Pixels

Authors: Tal Remez, Or Litany, Alex Bronstein

Abstract:

The pursuit of smaller pixel sizes at ever increasing resolution in digital image sensors is mainly driven by the stringent price and form-factor requirements of sensors and optics in the cellular phone market. Recently, Eric Fossum proposed a novel concept of an image sensor with dense sub-diffraction limit one-bit pixels (jots), which can be considered a digital emulation of silver halide photographic film. This idea has been recently embodied as the EPFL Gigavision camera. A major bottleneck in the design of such sensors is the image reconstruction process, producing a continuous high dynamic range image from oversampled binary measurements. The extreme quantization of the Poisson statistics is incompatible with the assumptions of most standard image processing and enhancement frameworks. The recently proposed maximum-likelihood (ML) approach addresses this difficulty, but suffers from image artifacts and has impractically high computational complexity. In this work, we study a variant of a sensor with binary threshold pixels and propose a reconstruction algorithm combining an ML data fitting term with a sparse synthesis prior. We also show an efficient hardware-friendly real-time approximation of this inverse operator. Promising results are shown on synthetic data as well as on HDR data emulated using multiple exposures of a regular CMOS sensor.
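
The maximum-likelihood data-fit term has a closed form in the simplest unit-threshold case: a jot reads 1 iff it collects at least one photon, so P(1) = 1 − e^(−λ), and the ML estimate from the fraction p of lit jots is λ̂ = −ln(1 − p). A worked sketch follows (the sparse synthesis prior proposed in the paper is not included, and the oversampling factor and flux are illustrative).

```python
# ML rate estimate from one-bit Poisson measurements with unit threshold.
import numpy as np

rng = np.random.default_rng(0)
lam_true = 0.7                                    # mean photons per jot
jots = rng.poisson(lam_true, size=100_000) >= 1   # binary measurements

p = jots.mean()
lam_ml = -np.log1p(-p)                            # inverts p = 1 - exp(-lam)
print(lam_true, round(lam_ml, 3))
```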

Keywords: binary pixels, maximum likelihood, neural networks, sparse coding

Procedia PDF Downloads 201
308 A Multi-Objective Decision Making Model for Biodiversity Conservation and Planning: Exploring the Concept of Interdependency

Authors: M. Mohan, J. P. Roise, G. P. Catts

Abstract:

Despite living in an era where conservation zones are de facto the central element in any sustainable wildlife management strategy, we still find ourselves grappling with several Pareto-optimal situations regarding resource allocation and area distribution for them. In this paper, a multi-objective decision making (MODM) model is presented to answer the question of whether or not mutual relationships can be established between these contradicting objectives. For our study, we considered a red-cockaded woodpecker (Picoides borealis) habitat conservation scenario in the coastal plain of North Carolina, USA. The red-cockaded woodpecker (RCW) is a non-migratory territorial bird that excavates cavities in living pine trees for roosting and nesting. RCW groups nest in an aggregation of cavity trees called a 'cluster', and in our model we use the number of clusters to be established as a measure of the size of the conservation zone required. The case study is formulated as a linear programming problem, and the objective function optimises the red-cockaded woodpecker clusters, carbon retention rate, biofuel, public safety and net present value (NPV) of the forest. We studied the variation of the individual objectives with respect to the amount of area available and plotted a two-dimensional dynamic graph after establishing the interrelations between the objectives. We further explore the concept of interdependency by integrating the MODM model with GIS and deriving a raster file representing carbon distribution from the existing forest dataset. Model results demonstrate the applicability of interdependency from both linear and spatial perspectives and suggest that this approach holds immense potential for enhancing environmental investment decision making in the future.
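
A weighted-sum toy version of such a linear program can be sketched with scipy; every coefficient below is invented, and the real model's constraint set is far richer.

```python
# Weighted-sum LP: allocate forest area across management regimes to
# maximize a blend of objectives under a total-area budget.
import numpy as np
from scipy.optimize import linprog

# per-hectare contribution of [RCW clusters, carbon, biofuel, NPV]
# for three hypothetical management regimes
A_obj = np.array([[0.02, 1.0, 0.0, 50.0],
                  [0.00, 0.6, 2.0, 120.0],
                  [0.01, 0.8, 0.5, 80.0]])
weights = np.array([100.0, 1.0, 1.0, 0.01])      # trade-off weights

c = -(A_obj @ weights)                           # negate: linprog minimizes
res = linprog(c, A_ub=[[1, 1, 1]], b_ub=[1000],  # 1000 ha available in total
              bounds=[(0, None)] * 3)
print(res.x)                                     # hectares assigned per regime
```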

Keywords: conservation, interdependency, multi-objective decision making, red-cockaded woodpecker

Procedia PDF Downloads 337
307 An Operators’ Real-sense-based Fire Simulation for Human Factors Validation in Nuclear Power Plants

Authors: Sa-Kil Kim, Jang-Soo Lee

Abstract:

On March 31, 1993, a severe fire accident took place in a nuclear power plant located in Narora in North India. The event involved a major fire in the turbine building of NAPS unit 1 and resulted in a total loss of power to the unit for 17 hours. In addition, there was a heavy ingress of smoke into the control room, mainly through the intake of the ventilation system, forcing the operators to vacate the control room. The Narora fire accident teaches us that operators can lose their composure and behave unpredictably during a fire. After the Fukushima accident, which resulted from a natural disaster, preparing for and controlling unanticipated external events is also required for the ultimate safety of nuclear power plants. Since last year, our research team has been developing a test and evaluation facility that can simulate external events, such as an earthquake or fire, based on the operators' real sensory experience. As one result of the project, we propose a unit real-sense-based facility that can simulate fire events in a control room, to be utilized as a test-bed for human factors validation. The test-bed has the shape and functions of the operator's workstation and simulates fire conditions such as smoke, heat, and auditory alarms in accordance with prepared fire scenarios. Furthermore, the test-bed can be used for operator training and experience.

Keywords: human behavior in fire, human factors validation, nuclear power plants, real-sense-based fire simulation

Procedia PDF Downloads 283
306 Robust Heart Rate Estimation from Multiple Cardiovascular and Non-Cardiovascular Physiological Signals Using Signal Quality Indices and Kalman Filter

Authors: Shalini Rankawat, Mansi Rankawat, Rahul Dubey, Mazad Zaveri

Abstract:

Physiological signals such as the electrocardiogram (ECG) and arterial blood pressure (ABP) in the intensive care unit (ICU) are often seriously corrupted by noise, artifacts, and missing data, which lead to errors in the estimation of heart rate (HR) and to false alarms from ICU monitors. Clinical support in the ICU requires the most reliable heart rate estimation possible. Cardiac activity, because of its relatively high electrical energy, may introduce artifacts into electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) recordings. This paper presents a robust heart rate estimation method based on detecting the R-peaks of ECG artifacts in EEG, EMG, and EOG signals, using an energy-based function and a novel signal quality index (SQI) assessment technique. The SQIs of the physiological signals (EEG, EMG, and EOG) were obtained by correlating the nonlinear energy operator (Teager energy) of these signals with either the ECG or the ABP signal. HR is estimated from the ECG, ABP, EEG, EMG, and EOG signals by separate Kalman filters based upon the individual SQIs. Data fusion of the HR estimates is then performed by weighting each estimate by the corresponding Kalman filter's SQI-modified innovations. The fused HR estimate is more accurate and robust than any of the individual estimates. The method was evaluated on the MIMIC II database from PhysioNet, collected from bedside monitors of ICU patients, and provides an accurate HR estimate even in the presence of noise and artifacts.
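
Two of the building blocks can be shown compactly: the Teager-Kaiser energy operator used to emphasize R-peaks, and SQI-weighted fusion of per-channel HR estimates (a simplification of the Kalman innovation weighting). All numbers below are synthetic.

```python
# Teager-Kaiser energy and SQI-weighted fusion of heart-rate estimates.
import numpy as np

def teager(x):
    """Teager-Kaiser energy: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def fuse(hr_estimates, sqis):
    """Weight each channel's HR estimate by its signal quality index."""
    w = np.asarray(sqis, dtype=float)
    return np.dot(hr_estimates, w) / w.sum()

hr = [78.0, 75.5, 90.0]   # e.g., ECG-, ABP-, EMG-derived estimates (bpm)
sqi = [0.9, 0.8, 0.2]     # the noisy channel gets little weight
print(fuse(hr, sqi))
```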

Keywords: ECG, ABP, EEG, EMG, EOG, ECG artifacts, Teager-Kaiser energy, heart rate, signal quality index, Kalman filter, data fusion

Procedia PDF Downloads 696
305 Improving the Training of Mineral Processing Operators Through Gamification, Modelling and Simulation

Authors: Pedro A. S. Bergamo, Emilia S. Streng, Jan Rosenkranz, Yousef Ghorbani

Abstract:

Within the often-hazardous mineral industry, simulation training has rapidly gained appreciation as an important method of increasing site safety and productivity through enhanced operator skill and knowledge. Performance calculations related to froth flotation, one of the most important concentration methods, are probably the hardest topic taught during the training of plant operators. Currently, most training programmes teach these skills by traditional methods like slide presentations and hand-written exercises, with a heavy focus on memorization. To improve on this, we developed "MinFloat", which teaches the operating formulas of the froth flotation process with the help of gamification. The simulation core, based on a first-principles flotation model, was implemented in Unity3D, and an instructor tutoring system was developed which presents didactic content and reviews the selected answers. The game was tested by 25 professionals with extensive experience in the mining industry, using a questionnaire formulated for training evaluations. According to their feedback, the game scored well in terms of quality, didactic efficacy and inspiring character. The testers' feedback on the main target audience and the outlook for the described solution are presented. This paper aims to provide technical background on the construction of educational games for the mining industry, besides showing how feedback from experts can be gathered more efficiently thanks to new technologies such as online forms.
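
One of the flotation performance calculations such a training covers is the standard two-product recovery formula; the grades below are illustrative.

```python
# Two-product recovery from feed (f), concentrate (c), and tailings (t) grades.
def recovery(f, c, t):
    """R = 100 * c * (f - t) / (f * (c - t)), all grades in % metal."""
    return 100.0 * c * (f - t) / (f * (c - t))

f, c, t = 2.0, 25.0, 0.2
print(round(recovery(f, c, t), 1))  # about 90.7 % of the metal is recovered
```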

Keywords: training evaluation, simulation-based training, modelling and simulation, froth flotation

Procedia PDF Downloads 113
304 Research on the Teaching Quality Evaluation of China’s Network Music Education APP

Authors: Guangzhuang Yu, Chun-Chu Liu

Abstract:

With the advent of the Internet era in recent years, social music education has gradually shifted from the original in-person mode to a mode combining in-person and online teaching. Whether for school music education, professional music education or social music education, teaching quality is the most important evaluation index. Regarding research on teaching quality evaluation, scholars at home and abroad have contributed many results based on multiple methods and evaluation subjects. However, to the best of our knowledge, a complete evaluation model for the virtual teaching interaction mode of the emerging network music education application (APP) has not been established. This research first identifies the basic dimensions of teaching quality required by the three parties and constructs a quality evaluation index system; then, after expounding the connotation of each index, it determines the weight of each index using the fuzzy analytic hierarchy process, providing ideas and methods for the scientific, objective and comprehensive evaluation of the teaching quality of network education APPs.
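
The weighting step can be sketched with Buckley's geometric-mean variant of fuzzy AHP, one common way to derive weights from triangular fuzzy pairwise comparisons; the comparison matrix below is invented, not the paper's survey data.

```python
# Fuzzy AHP weights via fuzzy geometric means (Buckley's method).
import numpy as np

# TFN pairwise comparisons (l, m, u) for 3 illustrative indices
M = np.array([
    [(1, 1, 1),       (2, 3, 4),     (4, 5, 6)],
    [(1/4, 1/3, 1/2), (1, 1, 1),     (1, 2, 3)],
    [(1/6, 1/5, 1/4), (1/3, 1/2, 1), (1, 1, 1)],
])

g = M.prod(axis=1) ** (1 / 3)   # fuzzy geometric mean of each row
s = g.sum(axis=0)               # fuzzy sum of the row means: (l, m, u)
w_fuzzy = g / s[::-1]           # fuzzy division: l/sum_u, m/sum_m, u/sum_l
w = w_fuzzy.mean(axis=1)        # centroid defuzzification
print(w / w.sum())              # crisp, normalized index weights
```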

Keywords: network music education APP, teaching quality evaluation, index and connotation

Procedia PDF Downloads 128
303 Diagnostic Accuracy of the Tuberculin Skin Test for Tuberculosis Diagnosis: Interest of Using ROC Curve and Fagan’s Nomogram

Authors: Nouira Mariem, Ben Rayana Hazem, Ennigrou Samir

Abstract:

Background and aim: During the past decade, the frequency of extrapulmonary forms of tuberculosis has increased, and these forms are under-diagnosed by conventional tests. The aim of this study was to evaluate the performance of the Tuberculin Skin Test (TST) for the diagnosis of tuberculosis using ROC curve and Fagan's nomogram methodology. Methods: This was a case-control, multicenter study in 11 anti-tuberculosis centers in Tunisia during the period from June to November 2014. Cases were adults aged between 18 and 55 years with confirmed tuberculosis; controls were free from tuberculosis. A data collection sheet was filled out, and a TST was performed for each participant. Diagnostic accuracy measures of the TST were estimated using the ROC curve and the area under the curve (AUC) to derive the sensitivity and specificity of a determined cut-off point; Fagan's nomogram was used to estimate its predictive values. Results: Overall, 1053 patients were enrolled: 339 cases (sex ratio (M/F) = 0.87) and 714 controls (sex ratio (M/F) = 0.99). The mean age was 38.3 ± 11.8 years for cases and 33.6 ± 11 years for controls. The mean diameter of the TST induration was significantly higher among cases than controls (13.7 mm vs. 6.2 mm; p = 10⁻⁶). The AUC was 0.789 [95% CI: 0.758-0.819; p = 0.01], corresponding to moderate discriminating power for this test. The most discriminative cut-off value of the TST, associated with the best sensitivity (73.7%) and specificity (76.6%) pair, was about 11 mm, with a Youden index of 0.503. Positive and negative predictive values were 3.11% and 99.52%, respectively. Conclusion: In view of these results, the TST can be used for tuberculosis diagnosis with good sensitivity and specificity. However, the measurement of skin induration and its interpretation are operator-dependent and remain difficult and subjective. Combining the TST with another test, such as the QuantiFERON test, would be a good alternative.
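
The cut-off and predictive-value arithmetic reported above can be reproduced as follows; the 1% pre-test probability is an assumption for illustration (the abstract does not restate the study's prevalence input), yet it yields a post-test probability close to the reported PPV.

```python
# Youden index and Fagan-nomogram arithmetic from the reported
# sensitivity/specificity; the pre-test probability is assumed.
sens, spec = 0.737, 0.766
J = sens + spec - 1                 # Youden index: 0.503, as reported
LR_pos = sens / (1 - spec)          # positive likelihood ratio

pretest = 0.01                      # assumed pre-test probability
odds_post = (pretest / (1 - pretest)) * LR_pos
ppv = odds_post / (1 + odds_post)   # post-test probability after a positive TST
print(round(J, 3), round(LR_pos, 2), round(ppv, 4))  # ppv close to 3.11 %
```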

Keywords: tuberculosis, tuberculin skin test, ROC curve, cut-off

Procedia PDF Downloads 67
302 Normalized Enterprise Architectures: Portugal's Public Procurement System Application

Authors: Tiago Sampaio, André Vasconcelos, Bruno Fragoso

Abstract:

The Normalized Systems Theory, which is designed to be applied to software architectures, provides a set of theorems, elements and rules with the purpose of enabling evolution in information systems and ensuring that they are ready for change. To make that possible, this work applies the Normalized Systems Theory to the domain of enterprise architectures, using ArchiMate. The application is achieved through the adaptation of the elements of the theory, making them artifacts of the modeling language. The theorems are applied through the identification of the viewpoints to be used in the architectures, as well as the transformation of the theory's encapsulation rules into architectural rules. This way, it is possible to create normalized enterprise architectures that fulfill the needs and requirements of the business. The solution was demonstrated using the Portuguese Public Procurement System. The Portuguese government aims to make this system as fair as possible, allowing every organization to have the same business opportunities: every economic operator should have access to all public tenders, which are published on any of the 6 existing platforms, independently of where they are registered. To make this possible, we applied our solution to the construction of two different architectures, both capable of fulfilling the requirements of the Portuguese government. One of these architectures, TO-BE A, has a message broker that performs the communication between the platforms; the other, TO-BE B, represents the scenario in which the platforms communicate with each other directly. Apart from these two architectures, we also represent the AS-IS architecture, which captures the current behavior of the public procurement system. Our evaluation is based on a comparison between the AS-IS and the TO-BE architectures regarding the fulfillment of the rules and theorems of the Normalized Systems Theory and some quality metrics.

Keywords: archimate, architecture, broker, enterprise, evolvable systems, interoperability, normalized architectures, normalized systems, normalized systems theory, platforms

Procedia PDF Downloads 357
301 Hysteresis Modeling in Iron-Dominated Magnets Based on a Deep Neural Network Approach

Authors: Maria Amodeo, Pasquale Arpaia, Marco Buzio, Vincenzo Di Capua, Francesco Donnarumma

Abstract:

Different deep neural network architectures have been compared and tested to predict magnetic hysteresis in the context of pulsed electromagnets for experimental physics applications. Modelling quasi-static or dynamic major and especially minor hysteresis loops is one of the most challenging topics for computational magnetism. Recent attempts at mathematical prediction in this context using Preisach models could not attain better than percent-level accuracy. Hence, this work explores neural network approaches and shows that the architecture that best fits the measured magnetic field behaviour, including the effects of hysteresis and eddy currents, is the nonlinear autoregressive exogenous neural network (NARX) model. This architecture aims to achieve a relative RMSE of the order of a few 100 ppm for complex magnetic field cycling, including arbitrary sequences of pseudo-random high field and low field cycles. The NARX-based architecture is compared with the state-of-the-art, showing better performance than the classical operator-based and differential models, and is tested on a reference quadrupole magnetic lens used for CERN particle beams, chosen as a case study. The training and test datasets are a representative example of real-world magnet operation; this makes the good result obtained very promising for future applications in this context.
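
A minimal NARX-style sketch: predict the field from lagged excitation currents (the exogenous input) and lagged field values, with an MLP as the nonlinear map. The data are synthetic; neither the CERN magnet measurements nor the paper's exact architecture is reproduced here.

```python
# NARX-style one-step-ahead regression on lagged inputs and outputs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
I = np.cumsum(rng.normal(size=2000))           # excitation current (a.u.)
B = np.tanh(0.01 * I) + 0.05 * rng.normal(size=I.size)  # toy nonlinear field response

lags = 4
rows = [np.r_[I[t - lags:t], B[t - lags:t]] for t in range(lags, len(I))]
X, y = np.array(rows), B[lags:]

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=800, random_state=0)
model.fit(X[:1500], y[:1500])
print(model.score(X[1500:], y[1500:]))         # one-step-ahead R^2 on held-out data
```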

Keywords: deep neural network, magnetic modelling, measurement and empirical software engineering, NARX

Procedia PDF Downloads 130