Search results for: data comparison
28291 Novel Hole-Bar Standard Design and Inter-Comparison for Geometric Errors Identification on Machine-Tool
Authors: F. Viprey, H. Nouira, S. Lavernhe, C. Tournier
Abstract:
Manufacturing of freeform parts may be achieved on 5-axis machine tools, currently considered a common means of production. In particular, the geometrical quality of freeform parts depends on the accuracy of the multi-axis structural loop, which is composed of several component assemblies maintaining the relative positioning between the tool and the workpiece. Therefore, to reach a high quality of the geometries of the freeform parts, the geometric errors of the 5-axis machine should be evaluated and compensated, which requires mastering the deviations between the tool and the workpiece (volumetric accuracy). In this study, a novel hole-bar design was developed and used for the characterization of the geometric errors of an RRTTT 5-axis machine tool. The hole-bar standard is made of Invar, a material selected because it is less sensitive to thermal drift. The proposed design allows one to extract three intrinsic parameters: one linear positioning error and two straightness errors. These parameters can be obtained by measuring the cylindricity of 12 holes (bores) and 11 cylinders located on a perpendicular plane. By mathematical analysis, twelve 3D point coordinates can be identified, corresponding to the intersection of each hole axis with the least-squares plane passing through the axes of two perpendicular neighbouring cylinders. The hole-bar was calibrated using a precision CMM at LNE, traceable to the SI metre definition. The reversal technique was applied in order to separate the error forms of the hole-bar from the motion errors of the mechanical guiding systems. An inter-comparison was additionally conducted between four NMIs (National Metrology Institutes) within the EMRP IND62: JRP-TIM project. Afterwards, the hole-bar was integrated in the RRTTT 5-axis machine tool to identify its volumetric errors. Measurements were carried out in real time, combining raw data acquired by the Renishaw RMP600 touch probe with the linear and rotary encoders. The geometric errors of the 5-axis machine were also evaluated by an accurate laser tracer interferometer system. The results were compared to those obtained with the hole-bar.
Keywords: volumetric errors, CMM, 3D hole-bar, inter-comparison
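The geometric core of the analysis above is a least-squares plane fit followed by a line-plane intersection. A minimal NumPy sketch, using invented coordinates rather than the calibrated hole-bar data:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The normal is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def line_plane_intersection(p0, d, c, n):
    """Intersection of the line p0 + t*d with the plane through c, normal n."""
    t = np.dot(c - p0, n) / np.dot(d, n)
    return p0 + t * d

# Invented points sampled near two cylinder axes, and one hole axis.
pts = np.array([[0.0, 0.0, 0.01], [1.0, 0.0, -0.02],
                [0.0, 1.0, 0.00], [1.0, 1.0, 0.01]])
c, n = fit_plane(pts)
hole_origin = np.array([0.5, 0.5, 10.0])
hole_dir = np.array([0.01, 0.0, -1.0])      # nearly vertical hole axis
print(line_plane_intersection(hole_origin, hole_dir, c, n))
```

Repeating this for the twelve hole axes yields the twelve 3D points mentioned in the abstract.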
Procedia PDF Downloads 384
28290 Comparison of Authentication Methods in Internet of Things Technology
Authors: Hafizah Che Hasan, Fateen Nazwa Yusof, Maslina Daud
Abstract:
Internet of Things (IoT) is a powerful industry system in which end-devices are interconnected and automated, allowing the devices to analyze data and execute actions based on the analysis. IoT leverages Radio-Frequency Identification (RFID) and Wireless Sensor Network (WSN) technologies, including mobile devices and sensors, which contribute to its evolution. However, because more devices are connected to each other over the Internet and data from various sources are exchanged between things, confidentiality of the data becomes a major concern. This paper focuses on authentication, one of the major challenges in IoT, in order to ensure that data integrity and confidentiality are preserved. A few solutions are reviewed based on papers from the last few years. One of the proposed solutions secures the communication between IoT devices and cloud servers with an Elliptic Curve Cryptography (ECC) based mutual authentication protocol, using Hyper Text Transfer Protocol (HTTP) cookies as a security parameter. The next proposed solution uses a keyed-hash scheme protocol to enable IoT devices to authenticate each other without the presence of a central control server. Another proposed solution uses a Physical Unclonable Function (PUF) based mutual authentication protocol; it emphasizes tamper-resistant and resource-efficient technology, equivalent to a 3-way handshake security protocol.
Keywords: Internet of Things (IoT), authentication, PUF ECC, keyed-hash scheme protocol
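The keyed-hash scheme reviewed above reduces to a challenge-response exchange over a pre-shared key. A minimal sketch in Python, assuming a pre-distributed 256-bit key and an illustrative message flow (the reviewed protocol's exact messages are not reproduced here):

```python
import hmac, hashlib, secrets

# Pre-shared key, provisioned out of band (an assumption for illustration).
KEY = secrets.token_bytes(32)

def respond(challenge: bytes, key: bytes) -> bytes:
    """Keyed-hash response: HMAC-SHA256 over the received challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Device A challenges device B ...
chal_a = secrets.token_bytes(16)
resp_b = respond(chal_a, KEY)
assert hmac.compare_digest(resp_b, respond(chal_a, KEY))  # A verifies B

# ... and B challenges A, so both sides prove knowledge of the key
# without any central control server.
chal_b = secrets.token_bytes(16)
resp_a = respond(chal_b, KEY)
assert hmac.compare_digest(resp_a, respond(chal_b, KEY))  # B verifies A
```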
Procedia PDF Downloads 264
28289 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, the data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction that penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive variant among the CV methods, as it fits as many models as the number of observations. Importance sampling (IS), truncated importance sampling (TIS) and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to the exact LOO-CV; they utilise the existing MCMC results, avoiding expensive recomputation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly. In contrast, the larger weights are replaced by their modified truncated weights in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect the goodness-of-fit in an absolute sense, the differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve the PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modelled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered to be approximations of the exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles were observed for the models conditional on equal posterior variances in lppds. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
Keywords: cross-validation, importance sampling, information criteria, predictive accuracy
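The importance-weighting step described above can be sketched directly from an MCMC log-likelihood matrix. In this minimal NumPy sketch the truncation cap follows the commonly used rule w ≤ √S · mean(w), which is an assumption standing in for the exact TIS/PSIS modifications, and the log-likelihood matrix is synthetic:

```python
import numpy as np

def loo_lppd(loglik, truncate=False):
    """IS-LOO (or TIS-LOO) overall lppd from an S x N matrix of
    pointwise log-likelihoods log p(y_i | theta_s) over S posterior draws."""
    S = loglik.shape[0]
    w = np.exp(-loglik)              # raw weights: 1 / p(y_i | theta_s)
    if truncate:
        cap = np.sqrt(S) * w.mean(axis=0)
        w = np.minimum(w, cap)       # replace the largest weights by the cap
    # Weighted average of predictive densities for each held-out observation.
    p_loo = (w * np.exp(loglik)).sum(axis=0) / w.sum(axis=0)
    return np.log(p_loo).sum()       # overall lppd_loo

rng = np.random.default_rng(0)
loglik = rng.normal(-1.0, 0.3, size=(4000, 50))  # hypothetical MCMC output
print(loo_lppd(loglik), loo_lppd(loglik, truncate=True))
```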
Procedia PDF Downloads 392
28288 Providing Security to Private Cloud Using Advanced Encryption Standard Algorithm
Authors: Annapureddy Srikant Reddy, Atthanti Mahendra, Samala Chinni Krishna, N. Neelima
Abstract:
In our present world, we are generating a lot of data, and we need specific devices to store all these data. Generally, we store data in pen drives, hard drives, etc. Sometimes we may lose the data due to the corruption of devices. To overcome all these issues, we implemented a cloud space for storing the data, which provides more security to the data. We can access the data using just the Internet, from anywhere in the world. We implemented all this in Java using the NetBeans IDE. Once a user uploads the data, he does not have any rights to change it. Uploaded files are stored in the cloud with the system time as the file name, and the directory is created with some random words. The cloud accepts the data only if the size of the file is less than 2 MB.
Keywords: cloud space, AES, FTP, NetBeans IDE
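The storage flow described above can be sketched as follows. The original system is implemented in Java; this Python version uses the `cryptography` package's Fernet recipe (an AES-128-based construction) as a stand-in for the unspecified AES mode, and the naming details mirror the description (system time as file name, random directory, 2 MB limit):

```python
import time, secrets
from cryptography.fernet import Fernet  # pip install cryptography

# Storage sketch: encrypt, name the file with the system time, place it in
# a randomly named directory, and reject files of 2 MB or more.
key = Fernet.generate_key()           # held by the cloud service
f = Fernet(key)

data = b"user file contents"
if len(data) < 2 * 1024 * 1024:       # size check before acceptance
    token = f.encrypt(data)
    file_name = str(int(time.time()))         # system time as file name
    directory = secrets.token_hex(4)          # random directory name
    print(f"store {directory}/{file_name} ({len(token)} bytes)")
    assert f.decrypt(token) == data   # retrieval round-trip
```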
Procedia PDF Downloads 206
28287 Variation Theory and Mixed Instructional Approaches: Advancing Conceptual Understanding in Geometry
Authors: Belete Abebaw, Mulugeta Atinafu, Awoke Shishigu
Abstract:
The study aimed to examine students' problem-solving skills through mixed instruction (variation-theory-based, GeoGebra-assisted problem-solving instructional approaches). A total of 125 students divided into 4 intact groups participated in the study. The study employed a quasi-experimental research design. Three intact groups were randomly assigned as treatment groups, while one group was taken as a comparison group. Each of the treatment groups took a specific instructional approach, while the comparison group proceeded as usual, without any changes to the instructional process, for all sessions. Both pre- and post-problem-solving tests were administered to all groups. To analyze the data and examine the differences (if any) in each group, ANCOVA and paired-samples t-tests were employed. There was a significant mean difference between students' pre-test and post-test scores in conceptual understanding in each treatment group. Furthermore, the mixed treatment had a large mean difference. It was recommended that teachers give attention to using variation-theory-based geometry problem-solving approaches for students' better understanding. Administrators should emphasize launching GeoGebra software through IT labs in schools, and government officials should appreciate the implementation of technology in schools.
Keywords: conceptual understanding, GeoGebra, learning geometry, problem-solving approaches, variation theory
Procedia PDF Downloads 25
28286 Comparison of Conjunctival Autograft versus Amniotic Membrane Transplantation for Pterygium Surgery
Authors: Luksanaporn Krungkraipetch
Abstract:
Currently, surgery is the only known effective treatment for pterygium. In certain groups, the probability of recurrence after bare sclera excision is very significant. Tissue grafting is substantially more time-consuming and challenging than keeping the sclera uncovered, but it reduces the chance of recurrence. Conjunctival autograft surgery is older than amniotic membrane graft surgery. The purpose of this study was to compare pterygium surgery with conjunctival autograft against amniotic membrane transplantation. A randomized controlled trial was used. Four cases were ruled out (two for failing to meet the inclusion criteria and the other two for refusing to participate). Group I (n = 40) received the intervention, whereas Group II (n = 40) served as the control. Both descriptive and inferential statistical approaches were used: the descriptive analysis covered basic pterygium surgery information as well as the risk of recurrent pterygium, and the chi-square test was used as the inferential statistic. A p-value below 0.05 was considered statistically significant. The findings were as follows. The majority of patients in Group I were female (70.0%), aged 41-60 years, had no underlying disease (95.0%), and had nasal pterygium (97.5%). The majority of Group II patients were female (60.0%), aged 41-60 years, had no underlying disease (97.5%), and had nasal pterygium (97.5%). Group I had no recurrence of pterygium after surgery, but Group II had a 7.5% recurrence rate. Typically, the recurrence time is twelve months. The majority of pterygium recurrences occur in females (83.3%), between the ages of 41 and 60 (66.7%), with no underlying disease. The recurrence period is typically six months (60%), with a nasal pterygium site (83.3%). Pterygium recurrence after surgery is associated with nasal location (p = .002). 16.7% of pterygium surgeries result in complications; one woman with nasal pterygium underwent autograft surgery six months later. The presence of granulation tissue at the surgical site is a mild complication. A comparison of pterygium surgery recurrence rates for conjunctival autograft and amniotic membrane transplantation revealed that conjunctival autograft had a higher recurrence rate than amniotic membrane transplantation (p = .013).
Keywords: pterygium, pterygium surgery, conjunctival autograft, amniotic membrane transplantation
Procedia PDF Downloads 70
28285 Iterative Method for Lung Tumor Localization in 4D CT
Authors: Sarah K. Hagi, Majdi Alnowaimi
Abstract:
In the last decade, there have been immense advancements in medical imaging modalities, which can now scan the whole volume of the lung in high-resolution images within a short time. With this performance, physicians can clearly identify the complicated anatomical and pathological structures of the lung. These advancements therefore open large opportunities to improve all available types of lung cancer treatment and will increase the survival rate. However, lung cancer is still one of the major causes of death, accounting for around 19% of all cancer patients. Several factors may affect the survival rate. One of the serious effects is the breathing process, which can affect the accuracy of diagnosis and of the lung tumor treatment plan. We have therefore developed a semi-automated algorithm to localize the 3D lung tumor positions across all respiratory data during respiratory motion. The algorithm can be divided into two stages. First, the lung tumor is segmented in the first phase of the 4D computed tomography (CT) data, using an active contours method. Then, the tumor's 3D position is localized across all subsequent phases using a 12-degree-of-freedom affine transformation. Two data sets were used in this study: a computer-simulated 4D CT data set based on the extended cardiac-torso (XCAT) phantom, and clinical 4D CT data sets. The result and error calculation are presented as the root mean square error (RMSE). The average error in the data sets is 0.94 ± 0.36 mm. Finally, an evaluation and quantitative comparison of the results with a state-of-the-art registration algorithm is introduced. The results obtained from the proposed localization algorithm show promise for localizing a lung tumor in 4D CT data.
Keywords: automated algorithm, computed tomography, lung tumor, tumor localization
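The 12-degree-of-freedom step corresponds to estimating a full 3x4 affine matrix (nine linear coefficients plus three translations) from corresponding points. A minimal NumPy sketch on synthetic points, not the XCAT or clinical data:

```python
import numpy as np

def fit_affine_3d(src, dst):
    """src, dst: (N, 3) corresponding points; returns the 3x4 matrix [A | t]."""
    ones = np.ones((src.shape[0], 1))
    X = np.hstack([src, ones])                     # (N, 4) design matrix
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (4, 3) least-squares fit
    return M.T                                     # (3, 4): 12 parameters

rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(30, 3))            # hypothetical surface points
true_M = np.hstack([np.eye(3) + 0.02 * rng.standard_normal((3, 3)),
                    np.array([[1.5], [-2.0], [4.0]])])
dst = src @ true_M[:, :3].T + true_M[:, 3]         # points in the next phase

M = fit_affine_3d(src, dst)
centroid = src.mean(axis=0)
print(M[:, :3] @ centroid + M[:, 3])               # tumor position, next phase
```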
Procedia PDF Downloads 602
28284 Business Intelligence for Profiling of Telecommunication Customer
Authors: Rokhmatul Insani, Hira Laksmiwati Soemitro
Abstract:
Business Intelligence is a methodology that systematically exploits data to produce information and knowledge; it can support the decision-making process. Some methods in business intelligence are data warehousing and data mining. A data warehouse can store historical data from transactional data; for data modelling in the data warehouse, we apply Kimball's dimensional modelling. Data mining is used to extract patterns and gain insight from the data. Data mining has many techniques, one of which is segmentation. For profiling of telecommunication customers, we use customer segmentation according to the customer's usage of services, customer invoices, and customer payments. Customers can be grouped according to their characteristics, and the profitable customers can be identified. We apply the K-Means clustering algorithm for segmentation. As input variables for that algorithm, we use the RFM (Recency, Frequency and Monetary) model. For all data mining processes, we use the IBM SPSS Modeler tool.
Keywords: business intelligence, customer segmentation, data warehouse, data mining
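The segmentation step maps onto a standard clustering workflow. A minimal scikit-learn sketch standing in for the IBM SPSS Modeler stream, on synthetic RFM values; the distributions and the choice of four clusters are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
rfm = np.column_stack([
    rng.integers(1, 365, 500),        # Recency: days since last transaction
    rng.poisson(20, 500),             # Frequency: transactions per period
    rng.gamma(2.0, 50.0, 500),        # Monetary: billed amount
])
X = StandardScaler().fit_transform(rfm)   # scale so no variable dominates
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
for k in range(4):                        # profile each segment by mean RFM
    print(k, rfm[labels == k].mean(axis=0).round(1))
```

Scaling before K-Means matters here because recency (days) and monetary value live on very different numeric ranges.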
Procedia PDF Downloads 483
28283 A Comparative Study of Particle Image Velocimetry (PIV) and Particle Tracking Velocimetry (PTV) for Airflow Measurement
Authors: Sijie Fu, Pascal-Henry Biwolé, Christian Mathis
Abstract:
Among modern airflow measurement methods, Particle Image Velocimetry (PIV) and Particle Tracking Velocimetry (PTV), as visual and non-intrusive measurement techniques, are playing an increasingly important role. This paper conducts a comparative experimental study of airflow measurement employing both techniques under the same conditions. Velocity vector fields, velocity contour fields, vorticity profiles and turbulence profiles are selected as the comparison indexes. The results show that the performance of both PIV and PTV techniques for airflow measurement is satisfactory, but some differences between the two techniques exist, which suggests that the choice of measurement technique should be based on a comprehensive consideration.
Keywords: airflow measurement, comparison, PIV, PTV
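The core of PIV is locating the cross-correlation peak between interrogation windows of two successive frames. A minimal sketch with synthetic frames and a known (3, 1)-pixel shift, which are assumptions for illustration:

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(2)
frame1 = rng.random((32, 32))                         # seeded particle image
frame2 = np.roll(frame1, shift=(3, 1), axis=(0, 1))   # displaced copy

# Zero-mean windows, then find the correlation peak.
a = frame1 - frame1.mean()
b = frame2 - frame2.mean()
corr = correlate2d(b, a, mode="full")
dy, dx = np.unravel_index(corr.argmax(), corr.shape)
dy -= a.shape[0] - 1                                  # recentre the peak
dx -= a.shape[1] - 1
print(dy, dx)  # recovers (3, 1); dividing by the frame interval gives velocity
```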
Procedia PDF Downloads 424
28282 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV
Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran
Abstract:
Ortho-rectification is the process of geometrically correcting an aerial image such that the scale is uniform. The ortho-image formed by the process is corrected for lens distortion, topographic relief, and camera tilt. It can be used to measure true distances, because it is an accurate representation of the Earth's surface. Ortho-rectification and geo-referencing are essential to pinpoint the exact location of targets in video imagery acquired by the UAV platform. This can only be achieved by comparing such video imagery with an existing digital map. However, it is only when the image is ortho-rectified into the same coordinate system as an existing map that such a comparison is possible. The video image sequences from the UAV platform must be geo-registered; that is, each video frame must carry the necessary camera information before the ortho-rectification process is performed. Each rectified image frame can then be mosaicked together to form a seamless image map covering the selected area, which can be compared with an existing map for geo-referencing. In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic procedures for the real-time ortho-rectification are: (1) decompilation of the video stream into individual frames; (2) finding the interior camera orientation parameters; (3) finding the relative exterior orientation parameters for each video frame with respect to the others; (4) finding the absolute exterior orientation parameters, using a self-calibration adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a 2-D planimetric map, which can be compared with a well-referenced existing digital map for the purpose of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria was used to test our method. Fifteen minutes of video and telemetry data were collected using the UAV, and the collected data were processed using the four-step ortho-rectification procedure. The results demonstrated that geometric measurements of the control field from the ortho-images are more reliable than those from the original perspective photographs when used to pinpoint the exact location of targets on the video imagery acquired by the UAV. The 2-D planimetric accuracy, when compared with the 6 control points measured by a GPS receiver, is between 3 and 5 meters.
Keywords: geo-referencing, ortho-rectification, video frame, self-calibration
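For a flat scene, rectifying a single frame reduces to a projective warp once pixel-to-ground correspondences are known from the orientation parameters. A minimal OpenCV sketch with invented correspondences; the paper's full model also corrects relief and lens distortion, which a plain homography does not:

```python
import numpy as np
import cv2  # pip install opencv-python

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in video frame

# Four pixel positions and their (invented) ground/map coordinates.
pixel_pts = np.float32([[80, 60], [590, 90], [620, 450], [40, 430]])
ground_pts = np.float32([[0, 0], [500, 0], [500, 400], [0, 400]])

H = cv2.getPerspectiveTransform(pixel_pts, ground_pts)
ortho = cv2.warpPerspective(frame, H, (500, 400))  # frame in map coordinates
print(H.round(3))
```

Mosaicking then amounts to accumulating such warped frames in the shared map grid.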
Procedia PDF Downloads 478
28281 Thermal Modelling and Experimental Comparison for a Moving Pantograph Strip
Authors: Nicolas Delcey, Philippe Baucour, Didier Chamagne, Geneviève Wimmer, Auditeau Gérard, Bausseron Thomas, Bouger Odile, Blanvillain Gérard
Abstract:
This paper proposes a thermal study of the catenary/pantograph interface for a train in motion. A complex 2.5D model of the pantograph strip has been defined and created by coupling a 1D and a 2D model. Experimental and simulation results are presented, and their comparison allows the 2.5D model to be validated. Some physical phenomena are described and presented with the help of the model, such as the thermal effect of the stagger motion, particular heats, and the effect of the material characteristics. Finally, it is possible to predict the critical thermal configuration during a train trip.
Keywords: electro-thermal studies, mathematical optimizations, multi-physical approach, numerical model, pantograph strip wear
Procedia PDF Downloads 327
28280 Comparison of Number of Waves Surfed and Duration Using Global Positioning System and Inertial Sensors
Authors: João Madureira, Ricardo Lagido, Inês Sousa, Fraunhofer Portugal
Abstract:
Surfing is an increasingly popular sport, and its performance evaluation is often qualitative. This work aims at using a smartphone to collect and analyze GPS and inertial sensor data in order to obtain quantitative metrics of surfing performance. Two approaches are compared for the detection of wave rides: computing the number of waves ridden in a surfing session, the starting time of each wave, and its duration. The first approach is based on computing the velocity from the Global Positioning System (GPS) signal and finding the velocity thresholds that allow identifying the start and end of each wave ride. The second approach adds information from the Inertial Measurement Unit (IMU) of the smartphone to the velocity thresholds obtained from the GPS unit to determine the start and end of each wave ride. The two methods were evaluated using GPS and IMU data from two surfing sessions and validated against similar metrics extracted from video data collected on the beach. The second method, combining GPS and IMU data, was found to be more accurate in determining the number of waves, their start times and durations. This paper shows that it is feasible to use smartphones for the quantification of performance metrics during surfing. In particular, the waves ridden and their durations can be accurately determined using the smartphone GPS and IMU.
Keywords: inertial measurement unit (IMU), global positioning system (GPS), smartphone, surfing performance
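The first approach can be sketched directly: threshold the GPS speed series to find ride starts and ends. The threshold values, the 1 Hz sampling rate, and the synthetic speed trace below are assumptions:

```python
import numpy as np

def detect_waves(speed, fs=1.0, start_thr=2.5, end_thr=1.0, min_dur=3.0):
    """Return (start_time, duration) pairs, in seconds, for each wave ride."""
    rides, start = [], None
    for i, v in enumerate(speed):
        if start is None and v >= start_thr:          # ride begins
            start = i
        elif start is not None and v < end_thr:       # ride ends
            dur = (i - start) / fs
            if dur >= min_dur:                        # reject short spikes
                rides.append((start / fs, dur))
            start = None
    return rides

rng = np.random.default_rng(7)
speed = rng.rayleigh(0.5, 600)        # paddling/drifting noise, m/s
speed[100:115] += 4.0                 # two synthetic wave rides
speed[300:320] += 5.0
rides = detect_waves(speed)
print(len(rides), "waves:", rides)    # count, start times and durations
```

The second approach of the paper would additionally gate these detections with IMU features.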
Procedia PDF Downloads 401
28279 Performance Comparison of Situation-Aware Models for Activating Robot Vacuum Cleaner in a Smart Home
Authors: Seongcheol Kwon, Jeongmin Kim, Kwang Ryel Ryu
Abstract:
We assume an IoT-based smart-home environment where the on-off status of each of the electrical appliances, including the room lights, can be recognized in real time by monitoring and analyzing the smart meter data. At any moment in such an environment, we can recognize what the household or the user is doing by referring to the status data of the appliances. In this paper, we focus on a smart-home service that activates a robot vacuum cleaner at the right time by recognizing the user situation, which requires a situation-aware model that can distinguish the situations that allow vacuum cleaning (Yes) from those that do not (No). As our candidate models we learn a few classifiers, such as naïve Bayes, decision tree, and logistic regression, that can map the appliance-status data into Yes and No situations. Our training and test data are obtained from simulations of user behaviors, in which a sequence of user situations such as cooking, eating, dish washing, and so on is generated, with the status of the relevant appliances changed in accordance with the situation changes. During the simulation, both the situation transitions and the resulting appliance status are determined stochastically. To compare the performances of the aforementioned classifiers, we obtain their learning curves for different types of users through simulations. The result of our empirical study reveals that naïve Bayes achieves a slightly better classification accuracy than the other compared classifiers.
Keywords: situation-awareness, smart home, IoT, machine learning, classifier
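The model comparison can be sketched with scikit-learn on simulated appliance-status data; the data-generating rule below is an invented stand-in for the behavior simulator described in the abstract:

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated snapshots: 10 binary on/off appliance readings per moment.
rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(2000, 10))
# Invented rule: vacuuming allowed when appliances 0 and 3 (say, stove
# and TV) are both off; 5% label noise mimics stochastic behavior.
y = ((X[:, 0] == 0) & (X[:, 3] == 0)).astype(int)
y = np.where(rng.random(2000) < 0.05, 1 - y, y)

for model in (BernoulliNB(), DecisionTreeClassifier(max_depth=5),
              LogisticRegression(max_iter=1000)):
    score = cross_val_score(model, X, y, cv=5).mean()
    print(type(model).__name__, round(score, 3))
```

Learning curves, as used in the study, can be obtained by repeating this scoring over growing training subsets.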
Procedia PDF Downloads 422
28278 Comparison of Gait Variability in Individuals with Trans-Tibial and Trans-Femoral Lower Limb Loss: A Pilot Study
Authors: Hilal Keklicek, Fatih Erbahceci, Elif Kirdi, Ali Yalcin, Semra Topuz, Ozlem Ulger, Gul Sener
Abstract:
Objectives and Goals: The stride-to-stride fluctuation in gait, known as gait variability, is a determinant of qualified locomotion. Gait variability is an important predictor of fall risk and is useful for monitoring the effects of therapeutic interventions and rehabilitation. The aim of the study was the comparison of gait variability in individuals with trans-tibial and trans-femoral lower limb loss. Methods: Ten individuals with traumatic unilateral trans-femoral limb loss (TF), 12 individuals with traumatic trans-tibial lower limb loss (TT) and 12 healthy individuals (HI) were the participants of the study. Gait characteristics, including mean step length, step-length variability, ambulation index and time on each foot, were evaluated on a treadmill. Participants walked at their preferred speed for six minutes. Data from the 4th to the 6th minute were selected for statistical analyses to eliminate the learning effect. Results: There were differences between the groups in intact-limb step-length variation, time on each foot, ambulation index and mean age (p < .05) according to the Kruskal-Wallis test. Pairwise analyses showed that there were differences between TT and TF in residual-limb variation (p = .041), time on the intact foot (p = .024), time on the prosthetic foot (p = .024) and ambulation index (p = .003), in favor of the TT group. There were differences between TT and HI in intact-limb variation (p = .002), time on the intact foot (p < .001), time on the prosthetic foot (p < .001) and ambulation index (p < .001), in favor of the HI group. There were differences between TF and HI in intact-limb variation (p = .001), time on the intact foot (p = .01) and ambulation index (p < .001), in favor of the HI group. There was a difference between the groups in mean age, as the HI group was younger (p < .05). There were similarities between the groups in step length (p > .05) and in the duration of prosthesis use among individuals with lower limb loss (p > .05). Conclusions: This pilot study provides basic data about gait stability in individuals with traumatic lower limb loss. The results showed that, to evaluate gait differences between different amputation levels, long-range gait analysis methods may be useful for obtaining more valuable information. On the other hand, the similarity in step length may result from effective prosthesis use or effective gait rehabilitation, since all participants with lower limb loss were already trained. The differences between TT and HI, and between TF and HI, may result from age-related features; therefore, an age-matched HI population is recommended for future studies. Increasing the number of participants and comparing age-matched groups are also recommended to generalize these results.
Keywords: lower limb loss, amputee, gait variability, gait analyses
Procedia PDF Downloads 280
28277 Comparison of the H-Index of Researchers of Google Scholar and Scopus
Authors: Adian Fatchur Rochim, Abdul Muis, Riri Fitri Sari
Abstract:
The H-index has been widely used as a performance indicator of researchers around the world, especially in Indonesia, where the government uses Scopus and Google Scholar as indexing references in providing recognition and appreciation. However, those two indexing services yield different H-index values. For that purpose, this paper evaluates the difference between the H-indices from those services. Researchers indexed by Webometrics are used as the reference data in this paper; currently, Webometrics only uses the H-index from Google Scholar. We observed the corresponding researchers' data from Scopus to obtain their H-index scores and compared the two. Subsequently, some researchers with huge differences in score were examined in more detail with respect to their papers' publishers. This paper shows that the H-index of researchers in Google Scholar is approximately 2.45 times their Scopus H-index. Most of the difference is due to the existence of uncertified publishers, which are considered in Google Scholar but not in Scopus.
Keywords: Google Scholar, H-index, Scopus, performance indicator
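The H-index itself is simple to compute from a citation list, which makes the discrepancy easy to see conceptually: the same publication list accumulates extra citations in Google Scholar from venues Scopus excludes. The citation counts below are illustrative, not real data:

```python
def h_index(citations):
    """Largest h such that the researcher has h papers with >= h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    while h < len(cites) and cites[h] >= h + 1:
        h += 1
    return h

# One hypothetical researcher, indexed by both services.
google_scholar = [45, 33, 20, 12, 9, 7, 5, 3, 2, 1]
scopus         = [20, 12,  8,  5, 3, 2, 1, 0, 0, 0]
print(h_index(google_scholar), h_index(scopus))  # prints 6 4
```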
Procedia PDF Downloads 275
28276 Imputation Technique for Feature Selection in Microarray Data Set
Authors: Younies Saeed Hassan Mahmoud, Mai Mabrouk, Elsayed Sallam
Abstract:
Analysing DNA microarray data sets is a great challenge facing bioinformaticians, due to the complications of using statistical and machine learning techniques. The challenge is doubled if the microarray data sets contain missing data, which happens regularly, because these techniques cannot deal with missing data. One of the most important data analysis processes on a microarray data set is feature selection. This process finds the most important genes that affect a certain disease. In this paper, we introduce a technique for imputing the missing data in microarray data sets while performing feature selection.
Keywords: DNA microarray, feature selection, missing data, bioinformatics
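A minimal sketch of such a pipeline: impute the missing expression values, then rank genes with a univariate test. KNN imputation and the F-test are common stand-ins used here for illustration; the paper's own imputation technique is not specified in the abstract:

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 200))            # 60 samples x 200 genes
y = rng.integers(0, 2, 60)                # disease / control labels
X[:, :5] += y[:, None] * 1.5              # make the first 5 genes informative
X[rng.random(X.shape) < 0.1] = np.nan     # 10% missing expression values

X_filled = KNNImputer(n_neighbors=5).fit_transform(X)   # imputation step
selector = SelectKBest(f_classif, k=10).fit(X_filled, y)  # feature selection
print(np.sort(selector.get_support(indices=True)))  # indices of selected genes
```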
Procedia PDF Downloads 574
28275 PDDA: Priority-Based, Dynamic Data Aggregation Approach for Sensor-Based Big Data Framework
Authors: Lutful Karim, Mohammed S. Al-kahtani
Abstract:
Sensors are being used in various applications such as agriculture, health monitoring, air and water pollution monitoring, and traffic monitoring and control, and hence play a vital role in the growth of big data. However, sensors collect redundant data. Thus, aggregating and filtering sensor data is significantly important in designing an efficient big data framework. Current research does not focus on aggregating and filtering data at multiple layers of a sensor-based big data framework. Thus, this paper introduces (i) a three-layer data aggregation framework for big data and (ii) a priority-based, dynamic data aggregation scheme (PDDA) for the lowest layer, at the sensors. Simulation results show that the PDDA outperforms existing tree- and cluster-based data aggregation schemes in terms of overall network energy consumption and end-to-end data transmission delay.
Keywords: big data, clustering, tree topology, data aggregation, sensor networks
Procedia PDF Downloads 346
28274 Transient and Persistent Efficiency Estimation for Electric Grid Utilities Based on Meta-Frontier: Comparative Analysis of China and Japan
Authors: Bai-Chen Xie, Biao Li
Abstract:
With the deepening of international exchanges and investment, the international comparison of power grid firms has become a focus of regulatory authorities. Ignoring the differences in the economic environment, resource endowment, technology, and other aspects of different countries or regions may lead to efficiency bias. Based on the meta-frontier model, this paper divides China and Japan into two groups, using data on both countries from 2006 to 2020. While preserving the differences between the two countries, it analyzes and compares the efficiency of their transmission and distribution industries. Combined with the four-component stochastic frontier model, efficiency is divided into transient and persistent efficiency. We found that there are obvious differences between the transmission and distribution sectors in China and Japan. On the one hand, the inefficiency of the two countries is mostly caused by long-term and structural problems, so the key to improving their efficiency is to focus more on solving such problems. On the other hand, the long-term and structural problems that cause the inefficiency of the two countries are not the same. Quality factors have different effects on the efficiency of the two countries, and this different effect is captured by the common frontier model but is offset in the overall model. Based on these findings, this paper proposes some targeted policy recommendations.
Keywords: transmission and distribution industries, transient efficiency, persistent efficiency, meta-frontier, international comparison
Procedia PDF Downloads 100
28273 Comparison of Multivariate Adaptive Regression Splines and Random Forest Regression in Predicting Forced Expiratory Volume in One Second
Authors: P. V. Pramila, V. Mahesh
Abstract:
Pulmonary function tests are important non-invasive diagnostic tests for assessing respiratory impairments, and they provide quantifiable measures of lung function. Spirometry is the most frequently used measure of lung function and plays an essential role in the diagnosis and management of pulmonary diseases. However, the test requires considerable patient effort and cooperation, markedly related to the age of the patients, resulting in incomplete data sets. This paper presents a nonlinear model built using multivariate adaptive regression splines (MARS) and a random forest regression model to predict the missing spirometric features. Random-forest-based feature selection is used to enhance both the generalization capability and the interpretability of the model. In the present study, flow-volume data were recorded for N = 198 subjects. The ranked order of the feature importance index calculated by the random forest model shows that the spirometric features FVC, FEF 25, PEF, FEF 25-75 and FEF 50, and the demographic parameter height, are the important descriptors. A comparison of the performance of both models proves that the prediction ability of MARS with the top two ranked features, namely FVC and FEF 25, is higher, yielding a model fit of R² = 0.96 and R² = 0.99 for normal and abnormal subjects, respectively. The root mean square error analysis of the RF model and the MARS model also shows that the latter is capable of predicting the missing values of FEV1 with notably lower error values of 0.0191 (normal subjects) and 0.0106 (abnormal subjects). It is concluded that combining feature selection with a prediction model provides a minimal subset of predominant features to train the model, yielding better prediction performance. This analysis can assist clinicians with an intelligent support system for medical diagnosis and the improvement of clinical care.
Keywords: FEV1, multivariate adaptive regression splines, pulmonary function test, random forest
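The feature-ranking step can be sketched with scikit-learn; the synthetic data below only mimics the correlated structure of spirometric variables and is not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(9)
n = 198
fvc = rng.normal(3.5, 0.8, n)                     # forced vital capacity
fef25 = 0.9 * fvc + rng.normal(0, 0.3, n)         # correlated flow feature
pef = 1.5 * fvc + rng.normal(0, 0.5, n)           # peak expiratory flow
height = rng.normal(165, 10, n)                   # demographic parameter
fev1 = 0.8 * fvc + 0.15 * fef25 + rng.normal(0, 0.1, n)   # target

X = np.column_stack([fvc, fef25, pef, height])
names = ["FVC", "FEF25", "PEF", "height"]
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, fev1)
for name, imp in sorted(zip(names, rf.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name}: {imp:.3f}")   # FVC should dominate, as in the study
```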
Procedia PDF Downloads 310
28272 Finite Element Method Analysis of a Modified Rotor 6/4 Switched Reluctance Motor and Comparison with Brushless Direct Current Motor in Pan-Tilt Applications
Authors: Umit Candan, Kadir Dogan, Ozkan Akin
Abstract:
In this study, the use of a modified rotor 6/4 Switched Reluctance Motor (SRM) and a Brushless Direct Current (BLDC) motor in pan-tilt systems is compared. Pan-tilt systems are critical mechanisms that enable the precise orientation of cameras and sensors, and their performance largely depends on the characteristics of the motors used. The aim of the study is to determine how the performance of the SRM can be improved through rotor modifications and how these improvements allow it to compete with BLDC motors. Using Finite Element Method (FEM) analyses, the design characteristics and magnetic performance of the 6/4 Switched Reluctance Motor are examined in detail. The modified SRM is found to offer increased torque capacity and efficiency while standing out for its simple construction and robustness. The FEM analysis results indicate that, considering its cost-effectiveness and the performance improvements achieved through the modifications, the SRM is a strong alternative for certain pan-tilt applications. This study aims to provide engineers and researchers with a performance comparison of the modified rotor 6/4 SRM and BLDC motors in pan-tilt systems, helping them make more informed and effective motor selections.
Keywords: reluctance machines, switched reluctance machines, pan-tilt application, comparison, FEM analysis
Procedia PDF Downloads 58
28271 A Comparison of Single Decision Tree, Decision Tree Forest and Group Method of Data Handling to Evaluate the Surface Roughness in Machining Process
Authors: S. Ghorbani, N. I. Polushin
Abstract:
The machinability of workpieces (AISI 1045 steel, AA2024 aluminum alloy, A48-class30 gray cast iron) in a turning operation was investigated using different types of cutting tool (conventional, a cutting tool with holes in the toolholder, and a cutting tool filled with composite material) under dry conditions on a turning machine at different levels of spindle speed (630-1000 rpm), feed rate (0.05-0.075 mm/rev), depth of cut (0.05-0.15 mm) and tool overhang (41-65 mm). Experimentation was performed as per Taguchi's orthogonal array. To evaluate the relative importance of the factors affecting surface roughness, a single decision tree (SDT), a decision tree forest (DTF) and the group method of data handling (GMDH) were applied.
Keywords: decision tree forest, GMDH, surface roughness, Taguchi method, turning process
Procedia PDF Downloads 441
28270 Image Features Comparison-Based Position Estimation Method Using a Camera Sensor
Authors: Jinseon Song, Yongwan Park
Abstract:
In this paper, we propose a method that can estimate a user's position based on a database built from a single camera. Previous positioning methods calculate distance from the arrival time of signals, as in GPS (Global Positioning System) or RF (Radio Frequency) systems. However, these methods have a weakness: a large error range caused by signal interference. Our method instead estimates position with a camera sensor. Still, a single camera makes it difficult to obtain relative position data, and a stereo camera has difficulty providing real-time position data because of the large amount of image data. First of all, in this research, we build an image database of the space where the positioning service is to be provided, using a single camera. Next, we judge similarity through image matching between the database images and the image transmitted by the user. Finally, we determine the user's position from the position of the most similar database image. To verify the proposed method, we experimented in real indoor and outdoor environments. The proposed method has a wide positioning range and can determine not only the user's position but also the user's direction.
Keywords: positioning, distance, camera, features, SURF (Speeded-Up Robust Features), database, estimation
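The matching step can be sketched with OpenCV. The keywords name SURF, which is patented and absent from many OpenCV builds, so ORB is used here as a freely available stand-in, on synthetic images with a known shift:

```python
import numpy as np
import cv2  # pip install opencv-python

# Synthetic stand-ins for a database view and a user frame (shifted copy).
db_img = np.zeros((240, 320), np.uint8)
cv2.rectangle(db_img, (60, 60), (160, 140), 255, -1)
cv2.circle(db_img, (230, 170), 40, 180, -1)
shift = np.float32([[1, 0, 15], [0, 1, -10]])
user_img = cv2.warpAffine(db_img, shift, (320, 240))

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(db_img, None)
kp2, des2 = orb.detectAndCompute(user_img, None)

# Brute-force Hamming matching with cross-check; the database image with
# the most good matches is taken as the user's position.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "matches; best distance", matches[0].distance)
```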
Procedia PDF Downloads 349
28269 Layersomes for Oral Delivery of Amphotericin B
Authors: A. C. Rana, Abhinav Singh Rana
Abstract:
Layer-by-layer coating with biocompatible polyelectrolytes converts liposomes into a stable version, i.e., 'layersomes'. This system was further used to deliver Amphotericin B through the oral route. Extensive optimization of different process variables resulted in the formation of layersomes with a particle size of 238.4 ± 5.1, a PDI of 0.24 ± 0.16, a zeta potential of 34.6 ± 1.3, and an entrapment efficiency of 71.3 ± 1.2. TEM analysis further confirmed the formation of spherical particles. Trehalose (10% w/w) resulted in the formation of a fluffy, easy-to-redisperse cake in the freeze-dried layersomes. The layersomes were found to be stable in simulated biological fluids and resulted in 3.59-fold higher bioavailability in comparison to free Amp-B. Furthermore, the developed formulation was found to be safe in comparison to Fungizone, as indicated by blood urea nitrogen (BUN) and creatinine levels.
Keywords: amphotericin B, layersomes, liposomes, toxicity
Procedia PDF Downloads 527
28268 Comparison Analysis of Multi-Channel Echo Cancellation Using Adaptive Filters
Authors: Sahar Mobeen, Anam Rafique, Irum Baig
Abstract:
Multichannel acoustic echo cancellation is a system identification application. In a real-time environment, the signal changes very rapidly, which requires adaptive algorithms with a high convergence rate and stability, such as Least Mean Square (LMS), Leaky Least Mean Square (LLMS), Normalized Least Mean Square (NLMS) and average (AFA). LMS and NLMS are widely used adaptive algorithms due to their low computational complexity, while AFA is used for its high convergence rate. This research is based on a comparison of the cancellation of acoustic echo (generated in a room) through LMS, LLMS, NLMS, AFA and the newly proposed average normalized leaky least mean square (ANLLMS) adaptive filters.
Keywords: LMS, LLMS, NLMS, AFA, ANLLMS
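A single-channel LMS echo canceller, the baseline among the filters compared above, can be sketched in a few lines of NumPy; the room echo path and step size below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(11)
n, taps, mu = 20000, 64, 0.01
x = rng.standard_normal(n)                        # far-end (loudspeaker) signal
h = rng.standard_normal(taps) * np.exp(-np.arange(taps) / 10)  # echo path
d = np.convolve(x, h)[:n] + 0.01 * rng.standard_normal(n)      # mic signal

w = np.zeros(taps)                                # adaptive filter weights
err = np.zeros(n)
for i in range(taps - 1, n):
    u = x[i - taps + 1:i + 1][::-1]   # x[i], x[i-1], ..., x[i-taps+1]
    y = w @ u                         # echo estimate
    err[i] = d[i] - y                 # echo-cancelled output
    w += mu * err[i] * u              # LMS weight update

print(f"residual echo power: {np.mean(err[-1000:] ** 2):.2e}")
```

Replacing the update with `w += (mu / (u @ u + 1e-8)) * err[i] * u` gives the NLMS variant; adding a leakage factor to `w` gives LLMS.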
Procedia PDF Downloads 566
28267 Evaluation of Deformation for Deep Excavations in the Greater Vancouver Area Through Case Studies
Authors: Boris Kolev, Matt Kokan, Mohammad Deriszadeh, Farshid Bateni
Abstract:
Due to the increasing demand for real estate and the need for efficient land utilization in Greater Vancouver, developers have been increasingly considering the construction of high-rise structures with multiple below-grade parking levels. The temporary excavations required to allow for the construction of underground levels have recently reached up to 40 meters in depth. One of the challenges with deep excavations is the prediction of wall displacements and ground settlements, due to their effect on the integrity of city utilities, infrastructure, and adjacent buildings. A large database of survey monitoring data has been collected for deep excavations in various soil conditions and shoring systems; the majority of the data collected is for tie-back anchor and shotcrete lagging systems. The data were categorized and analyzed, and the results were evaluated to find a relationship between the most dominant parameters controlling the displacement, such as the depth of excavation, the soil properties, and the tie-back anchor loading and arrangement. For a select number of deep excavations, finite element modeling was considered for the analyses. The lateral displacements from the simulation results were compared to the recorded survey monitoring data. The study concludes with a discussion and comparison of the available empirical and numerical modeling methodologies for evaluating lateral displacements in deep excavations.
Keywords: deep excavations, lateral displacements, numerical modeling, shoring walls, tieback anchors
Procedia PDF Downloads 181
28266 Comparison of Slope Data between Google Earth and the Digital Terrain Model for Registration in CAR
Authors: André Felipe Gimenez, Flávia Alessandra Ribeiro da Silva, Roberto Saverio Souza Costa
Abstract:
Currently, rural producers have been facing problems regarding environmental regularization, which is precisely why the CAR (Rural Environmental Registry) was created. CAR is an electronic registry for rural properties whose purpose is to assimilate information about legal reserve areas, permanent preservation areas, areas of limited use, stable areas, and forests and remnants of native vegetation, for all rural properties in Brazil. The objective of this work was to evaluate and compare altimetry and slope data from Google Earth with a digital terrain model (MDT) generated by aerial photogrammetry, in three plots on a steep slope, for the purpose of declaration in the CAR. This work is justified because rural landowners in such areas have doubts about the reliability of the free software Google Earth for diagnosing inclinations greater than 25 degrees, the limit set by federal law 12651/2012; in addition, the literature lacks this type of study for the purpose of CAR declaration. The results showed that, when comparing the drone altimetry data with the Google Earth image data in areas of high slope (above 40% slope), Google Earth underestimated the real values of terrain slope. Thus, it is concluded that Google Earth is not reliable for diagnosing areas with an inclination greater than 25 degrees (46% declivity) for the purpose of declaration in the CAR, making it essential to carry out a local topographic survey.
Keywords: MDT, drone, RPA, SiCar, photogrammetry
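Slope extraction from a gridded terrain model is a finite-difference computation. A minimal NumPy sketch on a synthetic surface, with a 1 m cell size assumed; note that the 25-degree legal limit corresponds to tan(25°) ≈ 0.466, i.e. the 46% declivity quoted above:

```python
import numpy as np

cell = 1.0                                   # grid resolution in metres
x, y = np.meshgrid(np.arange(100), np.arange(100))
z = 0.5 * x + 0.01 * (y - 50) ** 2           # hypothetical steep terrain

dz_dy, dz_dx = np.gradient(z, cell)          # elevation derivatives
grad = np.hypot(dz_dx, dz_dy)
slope_pct = 100 * grad                       # slope in percent
slope_deg = np.degrees(np.arctan(grad))      # slope in degrees

print(f"cells above 25 degrees: {(slope_deg > 25).mean():.1%}")
print(f"max slope: {slope_pct.max():.0f}% ({slope_deg.max():.1f} degrees)")
```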
Procedia PDF Downloads 131
28265 A Comparative Study of Environment Risk Assessment Guidelines of Developing and Developed Countries Including Bangladesh
Authors: Syeda Fahria Hoque Mimmi, Aparna Islam
Abstract:
Genetically engineered (GE) plants are a need of the time, given the increased demand for food. A complete set of regulations needs to be followed from the development of a GE plant to its release into the environment. The whole regulation system is categorized into separate stages to maintain proper biosafety. Environmental risk assessment (ERA) is one of the crucial stages in the whole process. ERA identifies potential risks and their impacts through science-based evaluation, carried out on a case-by-case basis. All the countries that deal with GE plants follow specific guidelines to conduct a successful ERA. In this study, the ERA guidelines of 4 developing and 4 developed countries, including Bangladesh, were compared. The ERA guidelines of India, Canada, Australia, the European Union, Argentina, Brazil, and the US were considered as models for the comparison with Bangladesh. Initially, ten parameters were identified to compare the required data and information across all the guidelines. Surprisingly, an adequate number of the data and information requirements (e.g., whether the intended modification/new traits of interest have been achieved, the growth habit of the GE plant, the consequences of any potential gene flow upon cultivation of the GE plant for sexually compatible plant species, potential adverse effects on human health, etc.) matched across all the countries. However, a few differences in data requirements (e.g., agronomic conventions of non-transformed plants, whether applicants should clearly describe the experimental procedures followed, etc.) were also observed in the study. Moreover, it was found that only a few countries provide instructions on the quality of the data used for ERA. If these similarities are recognized in a more structured manner, the approval pathway of GE plants can be shared.
Keywords: GE plants, ERA, harmonization, ERA guidelines, information and data requirements
Procedia PDF Downloads 187
28264 Comparative Study on Inhibiting Factors of Cost and Time Control in Nigerian Construction Practice
Authors: S. Abdulkadir, I. Y. Moh’d, S. U. Kunya, U. Nuruddeen
Abstract:
The basis of any contract formation between the client and the contractor is the budgeted cost and the estimated duration of the project. These variables are of paramount importance to a project's sponsor and in assessing the success or viability of construction projects. Despite the availability of various techniques of cost and time control, many projects fail to achieve their initially estimated cost and time. This paper evaluates the inhibiting factors of cost and time control in Nigerian construction practice and compares the results with United Kingdom practice as identified by one researcher. The population of the study comprises construction professionals within Bauchi and Gombe states, Nigeria; judgmental sampling was employed in determining the number of respondents. Descriptive statistics were used to analyze the data in SPSS. Design change, project fraud and corruption, and financing and payment for completed work were found to be common among the top five inhibiting factors of cost and time control in the study area. Furthermore, the results showed broad agreement, with slight contrasts, with United Kingdom practice. The study recommends adapting the mitigation measures developed in the UK and then assessing their effectiveness, as well as developing mitigation measures for the other top factors not covered by those developed in United Kingdom practice. It also recommends a wider comparative assessment of the modified inhibiting factors of cost and time control revealed by the study, covering almost all parts of Nigeria.
Keywords: comparison, cost, inhibiting factor, United Kingdom, time
Procedia PDF Downloads 440
28263 Comparison of Wind Fragility for Window System in the Simplified 10 and 15-Story Building Considering Exposure Category
Authors: Viriyavudh Sim, WooYoung Jung
Abstract:
Window systems in high-rise buildings are occasionally subjected to excessive wind intensity, particularly during typhoons. The failure of a window system does not affect the overall safety of the structural performance; however, it can endanger the safety of the residents. In this paper, the fragility curves for the window systems of two residential buildings were compared. The probability of failure for each individual window was determined with the Monte Carlo simulation method; then, a lognormal cumulative distribution function was used to represent the fragility. The results showed that windows located on the edge of the leeward wall were more susceptible to wind load, and that the probability of failure for each window panel increased at higher floors.
Keywords: wind fragility, window system, high rise building, wind disaster
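The fragility procedure can be sketched as follows: Monte Carlo failure counts over a range of wind speeds, then a lognormal CDF fitted to the empirical failure probabilities. The resistance statistics and load model are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
speeds = np.arange(20, 81, 5)                     # gust wind speeds (m/s)
n_sim = 10000
resistance = rng.lognormal(np.log(2200.0), 0.2, n_sim)   # panel capacity (Pa)

pf = []
for v in speeds:
    # Wind pressure 0.5 * rho * Cp * V^2 with a variable pressure coefficient.
    cp = rng.normal(1.2, 0.15, n_sim)
    load = 0.5 * 1.225 * cp * v ** 2
    pf.append((load > resistance).mean())          # probability of failure

# Fit the lognormal fragility Phi((ln v - ln median) / beta) by least squares
# on the probit scale, using only non-degenerate probabilities.
pf = np.array(pf)
mask = (pf > 0) & (pf < 1)
probit = stats.norm.ppf(pf[mask])
slope, intercept = np.polyfit(np.log(speeds[mask]), probit, 1)
median, beta = np.exp(-intercept / slope), 1 / slope
print(f"median capacity: {median:.1f} m/s, beta: {beta:.2f}")
```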
Procedia PDF Downloads 314
28262 A Simple User Administration View of Computing Clusters
Authors: Valeria M. Bastos, Myrian A. Costa, Matheus Ambrozio, Nelson F. F. Ebecken
Abstract:
In this paper, a very simple and effective user administration view of computing cluster systems is implemented, in order to provide, in a user-friendly way, the configuration and monitoring of distributed application executions. The user view, the administrator view, and an internal control module create an illusionary management environment for better system usability. The architecture, properties, and performance, as well as a comparison with other software for cluster management, are briefly discussed.
Keywords: big data, computing clusters, administration view, user view
Procedia PDF Downloads 330