Search results for: measuring accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5023

4963 Comparison of Different k-NN Models for Speed Prediction in an Urban Traffic Network

Authors: Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu

Abstract:

We consider a database that records average traffic speeds measured at five-minute intervals for all the links in the traffic network of a metropolitan city. While models learned from these data to predict future traffic speed would benefit applications such as car navigation systems, building a predictive model for every link becomes a nontrivial job if the number of links in the network is huge. An advantage of adopting k-nearest neighbor (k-NN) as the predictive model is that it does not require any explicit model building. On the other hand, k-NN takes a long time to make a prediction because it needs to search for the k nearest neighbors in the database at prediction time. In this paper, we investigate how much we can speed up k-NN in making traffic speed predictions by reducing the amount of data to be searched, without a significant sacrifice of prediction accuracy. The rationale is that it suffices to look at only the recent data, because traffic patterns not only repeat daily or weekly but also change over time. In our experiments, we build several different k-NN models employing different sets of features, namely the current and past traffic speeds of the target link and of the neighboring links up- and down-stream. The performances of these models are compared by measuring the average prediction accuracy and the average time taken to make a prediction using various amounts of data.
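To make the data-reduction idea concrete, here is a minimal sketch (ours, not the authors' code; the array names and the `recent` window parameter are hypothetical) of a brute-force k-NN speed predictor whose search can be limited to the most recent records:

```python
import numpy as np

def knn_predict_speed(query, features, speeds, k=5, recent=None):
    """Predict speed as the average over the k nearest neighbors.

    features : (n, d) array of past feature vectors (current/past speeds of
               the target link and its up/down-stream neighbors).
    speeds   : (n,) array of observed future speeds for each record.
    recent   : if given, search only the most recent `recent` records,
               trading a little accuracy for a much faster search.
    """
    if recent is not None:                    # restrict the search window
        features, speeds = features[-recent:], speeds[-recent:]
    dists = np.linalg.norm(features - query, axis=1)   # brute-force search
    nearest = np.argsort(dists)[:k]
    return speeds[nearest].mean()
```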

Keywords: big data, k-NN, machine learning, traffic speed prediction

Procedia PDF Downloads 329
4962 Lipschitz Classifiers Ensembles: Usage for Classification of Target Events in C-OTDR Monitoring Systems

Authors: Andrey V. Timofeev

Abstract:

This paper introduces an original method for guaranteed estimation of the accuracy of an ensemble of Lipschitz classifiers. The solution is obtained as a finite closed set of alternative hypotheses that contains the object of classification with a probability not less than a specified value. Thus, the classification is represented by a set of hypothetical classes; the smaller the cardinality of this discrete set of hypothetical classes, the higher the classification accuracy. Experiments have shown that increasing the cardinality of the classifier ensemble reduces the cardinality of the set of hypothetical classes. The problem of guaranteed estimation of the accuracy of an ensemble of Lipschitz classifiers is relevant to the multichannel classification of target events in C-OTDR monitoring systems. Results of the practical application of the suggested approach to accuracy control in C-OTDR monitoring systems are presented.

Keywords: Lipschitz classifiers, confidence set, C-OTDR monitoring, classifiers accuracy, classifiers ensemble

Procedia PDF Downloads 461
4961 Features for Measuring Credibility on Facebook Information

Authors: Kanda Runapongsa Saikaew, Chaluemwut Noyunsan

Abstract:

Nowadays, social media information, such as news, links, images, or videos, is shared extensively. However, information disseminated through social media often lacks quality: little fact checking, more bias, and many rumors. Many researchers have investigated credibility on Twitter, but there is no research report about information credibility on Facebook. This paper proposes features for measuring the credibility of Facebook information. We developed a system for assessing credibility on Facebook. First, we built an FB credibility evaluator for measuring the credibility of each post by manual human labelling, and collected the resulting training data to create a model using a Support Vector Machine (SVM). Second, we developed a Chrome extension of FB credibility for Facebook users to evaluate the credibility of each post. Based on the usage analysis of our FB credibility Chrome extension, about 81% of users' responses agree with the credibility suggested automatically by the proposed system.
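As an illustration of the pipeline described above, here is a minimal sketch with synthetic stand-in data; the feature set named in the comment is an assumption, not the paper's exact feature list:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical per-post features, e.g. [likes, shares, comments, has_link, text_length]
X = rng.random((200, 5))
y = rng.integers(0, 2, 200)  # 1 = credible, 0 = not credible (manual labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())  # agreement estimate on held-out posts
```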

Keywords: Facebook, social media, credibility measurement, internet

Procedia PDF Downloads 330
4960 The Impact of Direct and Indirect Pressure Measuring Systems on the Pressure Mapping for the Medical Compression Garments

Authors: Arash M. Shahidi, Tilak Dias, Gayani K. Nandasiri

Abstract:

While graduated compression is the foundation of the treatment and management of many medical complications such as leg ulcers, varicose veins, and lymphedema, interface pressure has been monitored using different sensors that operate on diverse principles. The variation among pressure readings collected with different interface pressure measurement systems causes difficulties in making decisions regarding compression therapy. It is crucial to acknowledge the differences between direct and indirect pressure measurement systems when considering the commercially available systems: AMI, PicoPress, and OPM are direct measurement systems, while HATRA (BSI), HOSY (RAL-GZ), and FlexiForce are indirect measurement systems. Piezo-resistive sensors (FlexiForce) measure the change in resistance corresponding to the force applied on the sensing area. Direct pressure measuring systems are capable of measuring interface pressure in the three-dimensional state, while indirect pressure measuring systems stretch the fabric in the two-dimensional direction and extrapolate pressure from the surface tension measured on the device, neglecting a vital factor: the radius of curvature. In this study, a leg mannequin of known dimensions was fitted with a knitted class 3 compression stocking, and the data collected from the different available systems (AMI, PicoPress, FlexiForce, and HATRA) were evaluated and compared. The results showed a discrepancy between HATRA, AMI, PicoPress, and FlexiForce relative to the pressure standard used to generate the class 3 compression stocking. As predicted, higher pressure values were recorded with the direct interface measuring systems than with HATRA, due to the effect of the radius of curvature.
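The role of the radius of curvature follows from Laplace's law, the standard relation in compression therapy (quoted here as background; it is not derived in the abstract): for a fabric carrying tension $T$ per unit width wrapped around a limb of local radius of curvature $r$, the interface pressure is

$$P = \frac{T}{r},$$

so an indirect system that converts fabric tension measured in a flat, two-dimensional stretch into pressure without knowing $r$ cannot distinguish between measurement points of different curvature.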

Keywords: AMI, FlexiForce, graduated compression, HATRA, interface pressure, PicoPress

Procedia PDF Downloads 317
4959 The Relationship between Life Event Stress, Depressive Thoughts, and Working Memory Capacity

Authors: Eid Abo Hamza, Ahmed Helal

Abstract:

Purpose: The objective is to measure working memory capacity, i.e., the maximum number of elements that can be retrieved and processed, by measuring the basic functions of working memory (inhibition/transfer/update), and to investigate its relationship to life stress and depressive thoughts. Methods: The study sample consisted of 50 students from Egypt. A cognitive task was designed to measure working memory capacity based on the determinants found in previous research, which showed that cognitive tasks are the best measures of the functions and capacity of working memory. Results: The results indicated statistically significant differences by level of life stress events (high/low) on the task measuring working memory capacity. The results also showed no statistically significant differences between males and females, or between academic majors, on the task measuring working memory capacity. Furthermore, there was no statistically significant effect of the interaction of level of life stress (high/low) and gender (male/female) on the task measuring working memory capacity. Finally, there were significant differences by level of depressive thoughts (high/low) on the task measuring working memory. Conclusions: The current research concludes that neither the interaction of stressful life events, gender, and academic major, nor the interaction of depressive thoughts, gender, and academic major, influences working memory capacity.

Keywords: working memory, depression, stress, life event

Procedia PDF Downloads 123
4958 Development of a New Device for Bending Fatigue Testing

Authors: B. Mokhtarnia, M. Layeghi

Abstract:

This work presents an original bending fatigue-testing setup for the fatigue characterization of composite materials. A three-point quasi-static setup is introduced that is capable of applying stress-controlled loads with different loading waveforms, frequencies, and stress ratios. The setup is equipped with computerized measuring instruments to evaluate fatigue damage mechanisms. A detailed description of its parts and working features is given, and a dynamic analysis is performed to verify the functional accuracy of the device. Feasibility was validated by conducting experimental fatigue tests.

Keywords: bending fatigue, quasi-static testing setup, experimental fatigue testing, composites

Procedia PDF Downloads 85
4957 Low-Cost, Portable Optical Sensor with Regression Algorithm Models for Accurate Monitoring of Nitrites in Environments

Authors: David X. Dong, Qingming Zhang, Meng Lu

Abstract:

Nitrites enter waterways as runoff from croplands and are discharged from many industrial sites. Excessive nitrite inputs to water bodies lead to eutrophication. On-site rapid detection of nitrite is of increasing interest for managing fertilizer application and monitoring water source quality. Existing methods for detecting nitrites use spectrophotometry, ion chromatography, electrochemical sensors, ion-selective electrodes, chemiluminescence, and colorimetric methods. However, these methods either suffer from high cost or provide low measurement accuracy due to their poor selectivity to nitrites. Therefore, it is desirable to develop an accurate and economical method to monitor nitrites in the environment. We report a low-cost optical sensor, in conjunction with a machine learning (ML) approach, to enable high-accuracy detection of nitrites in water sources. The sensor works on the principle of measuring the molecular absorption of nitrites at three narrowband wavelengths (295 nm, 310 nm, and 357 nm) in the ultraviolet (UV) region. These wavelengths are chosen because they have relatively high sensitivity to nitrites, and low-cost light-emitting diodes (LEDs) and photodetectors are available at these wavelengths. A regression model is built, trained, and utilized to minimize the cross-sensitivities of these wavelengths to the same analyte, thus achieving precise and reliable measurements in the presence of various interfering ions. The measured absorbance data are input to the trained model, which predicts the nitrite concentration of the sample. The sensor is built with i) a miniature quartz cuvette as the test cell that contains the liquid sample under test, ii) three low-cost UV LEDs placed on one side of the cell as light sources, each LED providing narrowband light, and iii) a photodetector with a built-in amplifier and an analog-to-digital converter placed on the other side of the test cell to measure the power of the transmitted light. This simple optical design allows the absorbance of the sample to be measured at the three wavelengths. To train the regression model, absorbances of nitrite ions, and of their combinations with various interfering ions, are first obtained at the three UV wavelengths using a conventional spectrophotometer. The spectrophotometric data are then input to different regression algorithm models, which are trained and evaluated for high-accuracy nitrite concentration prediction. Our experimental results show that the proposed approach enables instantaneous nitrite detection within several seconds. The sensor hardware costs about one hundred dollars, which is much cheaper than a commercial spectrophotometer. The ML algorithm helps to reduce the average relative error to below 3.5% over a concentration range from 0.1 ppm to 100 ppm of nitrites. The sensor has been validated by measuring nitrites at three sites in Ames, Iowa, USA. This work demonstrates an economical and effective approach to the rapid, reagent-free determination of nitrites with high accuracy. The integration of the low-cost optical sensor and ML data processing can find a wide range of applications in environmental monitoring and management.
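A minimal sketch of the regression step follows (illustrative synthetic numbers, not the paper's data; a plain linear model stands in for whichever regression algorithm performed best, which is reasonable since Beer-Lambert absorbance is linear in concentration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: absorbance at 295, 310 and 357 nm measured with a
# bench-top spectrophotometer for samples of known nitrite content.
A_train = np.array([[0.12, 0.10, 0.05],
                    [0.45, 0.38, 0.21],
                    [0.90, 0.77, 0.43],
                    [1.32, 1.15, 0.66]])
c_train = np.array([1.0, 5.0, 10.0, 15.0])       # nitrite concentration, ppm

model = LinearRegression().fit(A_train, c_train)  # Beer-Lambert: linear in c

# Absorbances reported by the three-LED sensor for an unknown sample:
A_sensor = np.array([[0.60, 0.51, 0.28]])
print(model.predict(A_sensor))                    # predicted concentration, ppm
```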

Keywords: optical sensor, regression model, nitrites, water quality

Procedia PDF Downloads 45
4956 Measuring Environmental Efficiency of Energy in OPEC Countries

Authors: Bahram Fathi, Seyedhossein Sajadifar, Naser Khiabani

Abstract:

Data envelopment analysis (DEA) has recently gained popularity in energy efficiency analysis. A common feature of the previously proposed DEA models for measuring energy efficiency performance is that they treat energy consumption as an input within a production framework without considering undesirable outputs. However, energy use results in the generation of undesirable outputs as byproducts of producing desirable outputs. Within a joint production framework of both desirable and undesirable outputs, this paper presents several DEA-type linear programming models for measuring energy efficiency performance. In addition to considering undesirable outputs, our models treat different energy sources as different inputs, so that changes in the energy mix can be accounted for in evaluating energy efficiency. The proposed models are applied to measure the energy efficiency performance of 12 OPEC countries, and the results obtained are presented.
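Since the abstract does not reproduce the exact linear programs, the sketch below implements one common DEA-type formulation with undesirable outputs (an environmental DEA technology in which the undesirable output of the evaluated unit is scaled by an efficiency factor θ); it illustrates the model family, not necessarily the authors' model:

```python
import numpy as np
from scipy.optimize import linprog

def env_dea_efficiency(X, Y, B, o):
    """Environmental DEA efficiency of unit o.

    X: (m, n) inputs (one row per energy source), Y: (s, n) desirable outputs,
    B: (p, n) undesirable outputs, for n countries (DMUs).
    Solves: min theta  s.t.  X@l <= x_o,  Y@l >= y_o,  B@l = theta*b_o,  l >= 0.
    """
    n = X.shape[1]
    c = np.zeros(n + 1)
    c[-1] = 1.0                                    # decision vector: [lambda, theta]
    A_ub = np.vstack([np.hstack([X, np.zeros((X.shape[0], 1))]),    #  X@l <= x_o
                      np.hstack([-Y, np.zeros((Y.shape[0], 1))])])  # -Y@l <= -y_o
    b_ub = np.concatenate([X[:, o], -Y[:, o]])
    A_eq = np.hstack([B, -B[:, [o]]])              # B@l - theta*b_o = 0
    b_eq = np.zeros(B.shape[0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    return res.fun  # theta <= 1; lower means more undesirable output is removable
```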

Keywords: energy efficiency, undesirable outputs, data envelopment analysis

Procedia PDF Downloads 705
4955 Prediction Model of Body Mass Index of Young Adult Students of Public Health Faculty of University of Indonesia

Authors: Yuwaratu Syafira, Wahyu K. Y. Putra, Kusharisupeni Djokosujono

Abstract:

Background/Objective: Body Mass Index (BMI) serves various purposes, including measuring the prevalence of obesity in a population and formulating a patient's diet at a hospital, and is calculated as body weight (kg)/body height (m)². However, the BMI of an individual who has difficulty carrying their weight or standing up straight cannot necessarily be measured. The aim of this study was to form a prediction model for the BMI of young adult students of the Public Health Faculty of the University of Indonesia. Subject/Method: This study used a cross-sectional design with a total sample of 132 respondents, consisting of 58 males and 74 females aged 21-30. The dependent variable was BMI, and the independent variables were sex and anthropometric measurements, which included ulna length, arm length, tibia length, knee height, mid-upper arm circumference, and calf circumference. Anthropometric information was measured and recorded in a single sitting. Simple and multiple linear regression analyses were used to create the prediction equation for BMI. Results: The male respondents had an average BMI of 24.63 kg/m² and the female respondents an average of 22.52 kg/m². A total of 17 variables were analysed for their correlation with BMI. Bivariate analysis showed that the variable with the strongest correlation with BMI was mid-upper arm circumference/√ulna length (MUAC/√UL) (r = 0.926 for males and r = 0.886 for females). Furthermore, MUAC alone also has a very strong correlation with BMI (r = 0.913 for males and r = 0.877 for females). Prediction models formed from either MUAC/√UL or MUAC alone both produce highly accurate predictions of BMI. However, measuring MUAC/√UL is considered inconvenient, which may cause difficulties when applied in the field. Conclusion: The prediction model considered most ideal to estimate BMI is: male BMI (kg/m²) = 1.109(MUAC (cm)) - 9.202 and female BMI (kg/m²) = 0.236 + 0.825(MUAC (cm)), based on its high accuracy and the convenience of measuring MUAC in the field.
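The two final equations translate directly into code; a small sketch (the function name is ours):

```python
def predict_bmi(muac_cm, sex):
    """BMI (kg/m^2) predicted from mid-upper arm circumference (cm),
    using the regression equations reported above."""
    if sex == "male":
        return 1.109 * muac_cm - 9.202
    if sex == "female":
        return 0.236 + 0.825 * muac_cm
    raise ValueError("sex must be 'male' or 'female'")

print(predict_bmi(28.0, "male"))    # 21.85
print(predict_bmi(26.0, "female"))  # 21.69
```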

Keywords: body mass index, mid-upper arm circumference, prediction model, ulna length

Procedia PDF Downloads 191
4954 Calibration of 2D and 3D Optical Measuring Instruments in Industrial Environments at Submillimeter Range

Authors: Alberto Mínguez-Martínez, Jesús de Vicente y Oliva

Abstract:

Modern manufacturing processes have led to the miniaturization of systems and, as a result, parts at the micro- and nanoscale are produced. This trend seems set to become increasingly important in the near future. Besides, as a requirement of Industry 4.0, the digitalization of models of production and processes makes it very important to ensure that the dimensions of newly manufactured parts meet the specifications of the models. It is thereby possible to reduce scrap and the cost of non-conformities while ensuring the stability of production. To ensure the quality of manufactured parts, it becomes necessary to carry out traceable measurements at scales below one millimeter. Providing adequate traceability to the SI unit of length (the meter) for 2D and 3D measurements at this scale is a problem with no unique solution in industrial environments. Researchers in the field of dimensional metrology all around the world are working on this issue. A solution for industrial environments, even an incomplete one, would enable working with some traceability. At this point, we believe that the study of surfaces can provide a first approximation to a solution. Among the different options proposed in the literature, areal topography methods may be the most relevant because they can be compared to measurements performed using Coordinate Measuring Machines (CMMs). These measuring methods give (x, y, z) coordinates for each point, expressed in two different ways: either the z coordinate as a function of x, denoted z(x), for each Y-axis coordinate, or as a function of the x and y coordinates, denoted z(x, y). Among others, optical measuring instruments, mainly microscopes, are extensively used to carry out measurements at scales below one millimeter because they are non-destructive. In this paper, the authors propose a calibration procedure for the scales of optical measuring instruments, particularized for a confocal microscope, using material standards that are easy to find and calibrate in metrology and quality laboratories in industrial environments. Confocal microscopes are measuring instruments capable of filtering out the out-of-focus reflected light so that only the part of the surface that is in focus reaches the detector. By varying the focus and taking pictures at different Z levels, specialized software interpolates between the different planes and reconstructs the surface geometry into a 3D model. As is easy to deduce, it is necessary to give traceability to each axis. As a complementary result, the roughness parameter Ra will be traced to the reference. Although the solution is designed for a confocal microscope, it may be used for the calibration of other optical measuring instruments with minor changes.

Keywords: industrial environment, confocal microscope, optical measuring instrument, traceability

Procedia PDF Downloads 117
4953 Using Greywolf Optimized Machine Learning Algorithms to Improve Accuracy for Predicting Hospital Readmission for Diabetes

Authors: Vincent Liu

Abstract:

Machine learning (ML) algorithms can achieve high accuracy in predicting outcomes compared to classical models. Metaheuristic, nature-inspired algorithms can enhance traditional ML algorithms by optimizing them, for example by performing feature selection. We compare ten ML algorithms in predicting 30-day hospital readmission rates for diabetes patients in the US, using a dataset from the UCI Machine Learning Repository, with feature selection performed by the Greywolf nature-inspired algorithm. The baseline accuracy for the initial random forest model was 65%. After performing feature engineering, SMOTE for class balancing, and Greywolf optimization, the machine learning algorithms showed better metrics, including F1 scores, accuracy, and confusion matrices, with improvements ranging from 10% to 30%, and a best model of XGBoost with an accuracy of 95%. Applying machine learning in this way can improve patient outcomes, as unnecessary rehospitalizations can be prevented by focusing on patients who are at higher risk of readmission.
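A compact sketch of Grey-Wolf-style wrapper feature selection follows (synthetic stand-in data via `make_classification` rather than the UCI readmission dataset, and a random forest fitness in place of the full ten-model comparison; SMOTE and XGBoost are omitted for brevity):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=30, random_state=0)  # stand-in data

def fitness(mask):
    """Cross-validated accuracy of a classifier restricted to the selected features."""
    if not mask.any():
        return 0.0
    clf = RandomForestClassifier(n_estimators=30, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

n_wolves, dim, iters = 8, X.shape[1], 15
pos = rng.random((n_wolves, dim))                       # wolf positions in [0, 1]^dim
for t in range(iters):
    scores = np.array([fitness(p > 0.5) for p in pos])  # feature kept where pos > 0.5
    alpha, beta, delta = pos[np.argsort(-scores)[:3]]   # three best wolves lead
    a = 2 - 2 * t / iters                               # decreases linearly 2 -> 0
    new_pos = np.zeros_like(pos)
    for leader in (alpha, beta, delta):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        A, C = 2 * a * r1 - a, 2 * r2
        new_pos += leader - A * np.abs(C * leader - pos)  # encircling update
    pos = np.clip(new_pos / 3, 0, 1)

best = pos[np.argmax([fitness(p > 0.5) for p in pos])] > 0.5
print("selected features:", np.flatnonzero(best))
```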

Keywords: diabetes, machine learning, 30-day readmission, metaheuristic

Procedia PDF Downloads 21
4952 The Impact of Major Accounting Events on Managerial Ability and the Accuracy of Environmental Capital Expenditure Projections of the Environmentally Sensitive Industries

Authors: Jason Chen, Jennifer Chen, Shiyu Li

Abstract:

We examine whether managerial ability (MA), the passage of the Sarbanes-Oxley Act in 2002 (SOX), and corporate operational complexity affect the accuracy of the environmental capital expenditure projections of environmentally sensitive industries (ESI). Prior studies found that firms in the ESI manipulated their projected environmental capital expenditures as a tool to achieve corporate legitimation, and suggested that human factors must be examined to determine whether they are among the determinants. We use MA to proxy for these latent human factors and examine whether MA affects the accuracy of financial disclosures in the ESI. Extending Chen and Chen (2020), we further investigate whether (1) SOX and (2) complex operations and financial reporting, in conjunction with MA, affect firms' projection accuracy. We find, overall, that MA is positively correlated with a firm's projection accuracy in the annual 10-Ks. Furthermore, the results suggest that SOX has a positive, yet temporary, effect on MA, which leads to better accuracy. Finally, MA matters: firms with more complex operations and financial reporting make fewer projection errors than their less complex counterparts. These results suggest that MA is a determinant of the accuracy of environmental capital expenditure projections for firms in the ESI.

Keywords: managerial ability, environmentally sensitive industries, SOX, corporate operational complexity

Procedia PDF Downloads 112
4951 Study of Natural Patterns on Digital Image Correlation Using Simulation Method

Authors: Gang Li, Ghulam Mubashar Hassan, Arcady Dyskin, Cara MacNish

Abstract:

Digital image correlation (DIC) is a contactless full-field displacement and strain reconstruction technique commonly used in experimental mechanics. Compared with physical measuring devices, such as strain gauges, which provide very restricted coverage and are expensive to deploy widely, the DIC technique provides full-field coverage and relatively high accuracy using an inexpensive and simple experimental setup. It is very important to study the effect of natural patterns on the DIC technique because the preparation of artificial patterns is a time-consuming and hectic process. The objective of this research is to study the effect of using images with natural patterns on the performance of DIC. A systematic simulation method is used to build the simulated deformed images used in DIC. A parameter used in DIC (subset size) can affect the processing and accuracy of DIC and even cause DIC to fail. Regarding the picture parameters (correlation coefficient), higher similarity between two subsets can cause the DIC process to fail and make the result less accurate. Pictures of good and bad quality for DIC methods are presented and, more importantly, this provides a systematic way to evaluate the quality of a picture with natural patterns before the measurement devices are installed.
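DIC matches each subset by maximizing a correlation criterion; here is a minimal sketch of one common choice, the zero-normalized cross-correlation (an assumption on our part, since the abstract does not state which coefficient is used):

```python
import numpy as np

def zncc(f, g):
    """Zero-normalized cross-correlation between two equally sized subsets.
    Values near 1 indicate a confident match; similar-looking subsets elsewhere
    in the image raise the second-best peak and make matching ambiguous."""
    f = f - f.mean()
    g = g - g.mean()
    return float((f * g).sum() / np.sqrt((f ** 2).sum() * (g ** 2).sum()))
```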

Keywords: Digital Image Correlation (DIC), deformation simulation, natural pattern, subset size

Procedia PDF Downloads 391
4950 A Study of ZY3 Satellite Digital Elevation Model Verification and Refinement with Shuttle Radar Topography Mission

Authors: Bo Wang

Abstract:

As China's first high-resolution civil optical satellite, the ZY-3 satellite is able to obtain high-resolution multi-view images with three linear-array sensors. The images can be used to generate digital elevation models (DEM) through dense matching of stereo images. However, due to clouds, forest, water, and buildings covering the images, there are problems in the dense matching results, such as outliers and areas that fail to be matched (matching holes). This paper introduces an algorithm to verify the accuracy of DEMs generated by the ZY-3 satellite against the Shuttle Radar Topography Mission (SRTM). Since the accuracy of SRTM (internal accuracy: 5 m; external accuracy: 15 m) is relatively uniform worldwide, it may be used to improve the accuracy of ZY-3 DEMs. Based on the analysis of mass DEM and SRTM data, the processing can be divided into two aspects. First, the registration of the ZY-3 DEM and SRTM is performed using conjugate line features and area features matched between the two datasets. Then the ZY-3 DEM is refined by eliminating the matching outliers and filling the matching holes. The matching outliers are eliminated based on statistics of Local Vector Binning (LVB), and the matching holes are filled with elevations interpolated from SRTM. Accuracy statistics for the ZY-3 DEM are also computed.

Keywords: ZY-3 satellite imagery, DEM, SRTM, refinement

Procedia PDF Downloads 315
4949 Reliability of Diffusion Tensor Imaging in Differentiation of Salivary Gland Tumors

Authors: Sally Salah El Menshawy, Ghada M. Ahmed GabAllah, Doaa Khedr M. Khedr

Abstract:

Background: Our study aims to determine the diagnostic role of DTI in the differentiation of benign and malignant salivary gland lesions. Results: Our study included 50 patients (25 males and 25 females) divided into 4 groups (benign lesions n=20, malignant tumors n=13, post-operative changes n=10, and normal n=7). 28 patients had parotid gland lesions, 4 patients had submandibular gland lesions, and only 1 case had sublingual gland involvement. The mean fractional anisotropy (FA) and apparent diffusion coefficient (ADC) of malignant salivary gland tumors (n = 13) (0.380±0.082 and 0.877±0.234 × 10⁻³ mm² s⁻¹) were significantly different (P < 0.001) from those of benign tumors (n = 20) (0.147±0.03 and 1.47±0.605 × 10⁻³ mm² s⁻¹), respectively. The mean FA and ADC of post-operative changes (n = 10) were 0.211±0.069 and 1.63±0.20 × 10⁻³ mm² s⁻¹, while those of normal glands (n = 7) were 0.251±0.034 and 1.54±0.29 × 10⁻³ mm² s⁻¹, respectively. Using ADC to differentiate malignant from benign lesions gave an area under the curve (AUC) of 0.810, with an accuracy of 69.7%. ADC used to differentiate malignant lesions from post-operative changes gave an AUC of 1.0 and an accuracy of 95.7%. FA used to discriminate malignant from benign lesions gave an AUC of 1.0 and an accuracy of 93.9%. FA used to differentiate malignant lesions from post-operative changes gave an AUC of 0.923 and an accuracy of 95.7%. Combined FA and ADC differentiating malignant from benign lesions gave an AUC of 1.0 and an accuracy of 100%, and combined FA and ADC differentiating malignant lesions from post-operative changes likewise gave an AUC of 1.0 and an accuracy of 100%. Conclusion: Combined FA and ADC can differentiate malignant tumors from benign salivary gland lesions.

Keywords: diffusion tensor imaging, MRI, salivary gland, tumors

Procedia PDF Downloads 77
4948 The Limits of the Effectiveness of Digital Advertising: Demonstration by the Economic Approach of Measuring Advertising Effectiveness

Authors: Barkaoui Asma

Abstract:

In this article, we use the economic approach to measuring advertising effectiveness to show the margin gained by advertisers through digital communication. For economists, profit maximization depends on determining the optimal advertising budget; for this, they use the theories of the marginalist school to determine when the maximum level of benefit is reached. Using the economic approach, we show the significant return on investment available to advertisers. We then discuss the risk that consumers perceive advertising pressure.
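The marginalist condition appealed to here can be written compactly (a textbook statement, added for concreteness): with revenue $R(A)$ as a function of advertising spend $A$, the profit $\pi(A) = R(A) - A - C_0$ is maximal where

$$\frac{d\pi}{dA} = R'(A) - 1 = 0 \quad\Longrightarrow\quad R'(A^{*}) = 1,$$

i.e., the optimal budget $A^{*}$ is the point where one extra unit of advertising returns exactly one unit of revenue; beyond it, additional digital spend erodes profit even if reach still grows.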

Keywords: digital advertising, economic approach, effectiveness, pressure

Procedia PDF Downloads 271
4947 Spaces of Interpretation: Personal Space

Authors: Yehuda Roth

Abstract:

In quantum theory, a system's time evolution is predictable unless an observer performs a measurement, as the measurement process can randomize the system. This randomness appears when the measuring device does not accurately describe the measured item, i.e., when the states characterizing the measuring device appear as a superposition of those being measured. When such a mismatch occurs, the measured data randomly collapse into a single eigenstate of the measuring device. This scenario resembles the interpretation process, in which the observer does not experience an objective reality but interprets it based on preliminary descriptions initially ingrained in his or her mind. This distinction is the motivation for the present study, in which the collapse scenario is regarded as part of the observer's interpretation process. By adopting the formalism of quantum theory, we present a complete mathematical approach that describes the interpretation process. We demonstrate this process by applying the proposed interpretation formalism to the ambiguous image "My Wife and Mother-in-Law" to identify whether the woman in the picture is young or old.
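In the standard formalism (a textbook statement, included to fix notation), the mismatch can be written as follows: expanding the measured state $|\psi\rangle$ in the eigenbasis $\{|m_i\rangle\}$ of the measuring device,

$$|\psi\rangle = \sum_i c_i |m_i\rangle, \qquad \sum_i |c_i|^2 = 1,$$

measurement collapses the state onto a single $|m_i\rangle$ with probability $|c_i|^2$. In the interpretation reading, the $|m_i\rangle$ play the role of the observer's pre-learned categories (e.g., "young woman" and "old woman"), and the collapse corresponds to settling on one interpretation of the ambiguous image.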

Keywords: quantum-like interpretation, ambiguous image, determination, quantum-like collapse, classified representation

Procedia PDF Downloads 73
4946 Using Support Vector Machines for Measuring Democracy

Authors: Tommy Krieger, Klaus Gruendler

Abstract:

We present a novel approach for measuring democracy, which enables a very detailed and sensitive index. The method is based on Support Vector Machines, a mathematical algorithm for pattern recognition. Our implementation evaluates 188 countries in the period between 1981 and 2011. The Support Vector Machines Democracy Index (SVMDI) is continuous on the interval [0, 1] and robust to variations in the numerical process parameters. The algorithm introduced here can be used for every concept of democracy without additional adjustments, and due to its flexibility it is also a valuable tool for comparative studies.

Keywords: democracy, democracy index, machine learning, support vector machines

Procedia PDF Downloads 341
4945 A Non-Invasive Neonatal Jaundice Screening Device Measuring Bilirubin on Eyes

Authors: Li Shihao, Dieter Trau

Abstract:

Bilirubin is a yellow substance that is made when the body breaks down old red blood cells. High levels of bilirubin can cause jaundice, a condition that makes the newborn's skin and the white part of the eyes look yellow. Jaundice is a serial killer in developing countries of Southeast Asia, such as Myanmar, and in most parts of Africa, where jaundice screening is largely unavailable. Worldwide, 60% of newborns experience infant jaundice, and one in ten will require therapy to prevent serious complications and lifelong neurologic sequelae. Limitations of current solutions: - Blood tests: blood tests are painful, may be largely unavailable in poor areas of developing countries, and can be costly and unsafe due to insufficient investment and lack of access to health care systems. - Transcutaneous jaundice meters: 1) they provide reliable results only for Caucasian newborns, because current technologies measure bilirubin by the color of the skin; basically, the darker the skin, the harder the measurement; 2) current jaundice meters are not affordable for most underdeveloped areas in Africa, like Kenya and Togo; 3) fat tissue under the skin also influences the accuracy, giving overestimated results; 4) current jaundice meters are not reliable after treatment (phototherapy), because bilirubin levels underneath the skin are reduced first while overall levels may remain quite high. Thus, there is an urgent need for a low-cost non-invasive device that is effective not only for Caucasian babies but also for Asian and African newborns, to save lives at the most vulnerable time and prevent complications like brain damage. Instead of measuring bilirubin on the skin, we propose a new method that performs the measurement on the sclera, which avoids differences in skin pigmentation and ethnicity, since the sclera must be white regardless of racial background. This is a novel approach of measuring bilirubin by an optical method of light reflection off the white part of the eye. Moreover, the device is connected to a smart device, which provides a user-friendly interface and the ability to record clinical data continuously. A disposable eye cap is provided to avoid contamination and fix the distance to the eye.

Keywords: jaundice, bilirubin, non-invasive, sclera

Procedia PDF Downloads 213
4944 Using Machine Learning to Classify Different Body Parts and Determine Healthiness

Authors: Zachary Pan

Abstract:

Our general mission is to solve the problem of classifying images into different body part types and deciding whether each of them is healthy or not. For now, however, we determine healthiness for only one-sixth of the body parts, specifically the chest: we detect pneumonia in X-ray scans of those chest images. With this type of AI, doctors can use it as a second opinion when they take CT or X-ray scans of their patients. Another advantage of using this machine learning classifier is that it has no human weaknesses like fatigue. The overall approach is to split the problem into two parts: first classify the image, then determine whether it is healthy. In order to classify the image into a specific body part class, the body parts dataset must be split into test and training sets. We can then use many models, like neural networks or logistic regression models, and fit them using the training set. Using the test set, we can obtain a realistic estimate of the accuracy the models will have on images in the real world, since these testing images have never been seen by the models before. To increase this testing accuracy, we can also apply complex algorithms to the models, like multiplicative weight update. For the second part of the problem, determining whether the body part is healthy, we can have another dataset consisting of healthy and non-healthy images of the specific body part and once again split it into test and training sets. We then train another neural network on those training set images and use the testing set to estimate its accuracy. We do this process only for the chest images. A major conclusion is that convolutional neural networks are the most reliable and accurate at image classification. In classifying the images, the logistic regression model, the neural network, neural networks with multiplicative weight update, neural networks with the black box algorithm, and the convolutional neural network achieved 96.83, 97.33, 97.83, 96.67, and 98.83 percent accuracy, respectively. On the other hand, the overall accuracy of the model that determines whether the images are healthy or not is around 78.37 percent.
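A minimal convolutional classifier of the kind described (a sketch with toy layer sizes; the study's actual architectures are not specified in the abstract):

```python
import torch
from torch import nn

# A minimal convolutional classifier for 1-channel 64x64 scans; layer sizes
# are illustrative, not those used in the study.
class BodyPartCNN(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = BodyPartCNN()
logits = model(torch.randn(4, 1, 64, 64))  # batch of 4 dummy scans
print(logits.argmax(dim=1))                # predicted body-part class per scan
```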

Keywords: body part, healthcare, machine learning, neural networks

Procedia PDF Downloads 71
4943 The Concept of Accounting in Islamic Transactions

Authors: Ahmad Abdulkadir Ibrahim

Abstract:

The Islamic law of transactions laid down methods and instruments of accounting, and this paper analyzes its basic assumptions in the modern world. There is a need to examine the implications of accounting initiatives in the Muslim world and to attempt to outline the important characteristics of Islamic accounting, including how Islamic accounting resolves the problem of measuring the cost of murabaha goods in case of exchange-rate variation. The research discusses an analytical approach to the concept of Islamic accounting, as well as elaborating the jurisprudential matter and practical aspects of accounting in Islamic financial transactions. It also aims to alert practitioners of accounting in the Islamic world to the concept of accounting in Islamic jurisprudence and its historical development. The methodology adopted in this research is qualitative, through the consultation of relevant literature, focusing on a thematic study of the subject matter, followed by analysis and discussion of the contents of the materials used. It is concluded that Islamic accounting is unique in its norms, characterized by fairness, accuracy in measuring tools, truthfulness, mutual trust, moderation in making a profit, and tolerance. It is also qualified by capacity and flexibility in the tools and terminology used and invented by Islamic jurisprudence in the accounting system, which indicates its validity and consistency in any time and place. An important conclusion of the research also lies in the refutation of the popular idea that the Italian writer Luca Pacioli was the first to develop the basis of double-entry bookkeeping, in view of the proofs presented by Muslim scholars of earlier critical accounting developments, which cannot be ignored. It concludes further that Islamic jurisprudence establishes an accounting system codified on the foundations of a market free from usury, fraud, cheating, and unfair competition in all areas.

Keywords: accounting, Islamic accounting, Islamic transactions, Islamic jurisprudence, double entry, murabaha, characteristics

Procedia PDF Downloads 37
4942 Pull-In Instability Determination of Microcapacitive Sensor for Measuring Special Range of Pressure

Authors: Yashar Haghighatfar, Shahrzad Mirhosseini

Abstract:

Pull-in instability is a nonlinear and crucial effect that is important in the design of microelectromechanical system (MEMS) devices. In this paper, the appropriate electrostatic voltage range is determined for measuring fluid flow pressure via a microbeam-based micro pressure sensor. The microbeam deflection consists of two parts: the static deflection and a perturbation about the static deflection. A second-order equation involving the equivalent stiffness, mass, and damping matrices, derived using the Galerkin method, is introduced to predict pull-in instability due to the external voltage, and the reduced-order method is used to solve this second-order nonlinear equation of motion. Furthermore, in the present study, the micro capacitive pressure sensor is designed for measuring a specific fluid flow pressure range. The results show that the measurable pressure range can be optimized with respect to the damping field and the external voltage.
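For orientation, the simplest lumped parallel-plate idealization of this problem (a standard textbook reduction, not the paper's Galerkin beam model) reads

$$m\ddot{q} + c\dot{q} + kq = \frac{\varepsilon_0 A V^2}{2(d - q)^2},$$

whose static equilibrium loses stability at a deflection of $d/3$, giving the pull-in voltage

$$V_{PI} = \sqrt{\frac{8kd^3}{27\,\varepsilon_0 A}};$$

the beam version studied here replaces $m$, $c$, and $k$ by the Galerkin-reduced equivalent matrices, but the voltage limit arises in the same way.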

Keywords: MEMS, pull-in instability, electrostatically actuated microbeam, reduced order method

Procedia PDF Downloads 199
4941 Discussion as a Means to Improve Peer Assessment Accuracy

Authors: Jung Ae Park, Jooyong Park

Abstract:

Writing is an important learning activity that cultivates higher-level thinking. Effective and immediate feedback is necessary to help improve students' writing skills. Peer assessment can be an effective method for writing tasks because it allows students not only to receive quick feedback on their writing but also to examine different perspectives on the same topic. Peer assessment can be practiced frequently and has the advantage of immediate feedback. However, there is controversy about its accuracy. In this study, we tried to demonstrate experimentally how the accuracy of peer assessment can be improved. Participants (n=76) were randomly assigned to groups of 4 members. All the participants graded two sets of 4 essays on the same topic. They graded the first set twice and the second set, the posttest, once. After the first grading of the first set, each group in experimental condition 1 (the discussion group) was asked to discuss the results of the peer assessment and then to grade the essays again. Each group in experimental condition 2 (the reading group) was asked to read an expert's assessment of each essay and then to grade the essays again. In the control group, the participants were asked to grade the 4 essays twice in different orders. Afterwards, all the participants graded the second set of 4 essays. The mean score from the 4 participants was calculated for each essay. The accuracy of the peer assessment was measured by the Pearson correlation with the scores of the expert. The results were analyzed by two-way repeated measures ANOVA. A main effect of grading was observed: grading accuracy improved as the number of grading experiences increased. Analysis of posttest accuracy revealed that the score variation within a group of 4 participants decreased in both the discussion and reading conditions but not in the control condition. These results suggest that having students discuss their grading together can be an efficient means of improving peer assessment accuracy. By discussing, students can learn from others what to consider in grading and whether their grading is too strict or too lenient. Further research is needed to examine the exact cause of the improvement in grading accuracy.

Keywords: peer assessment, evaluation accuracy, discussion, score variations

Procedia PDF Downloads 244
4940 Variable-Fidelity Surrogate Modelling with Kriging

Authors: Selvakumar Ulaganathan, Ivo Couckuyt, Francesco Ferranti, Tom Dhaene, Eric Laermans

Abstract:

Variable-fidelity surrogate modelling offers an efficient way to approximate function data available in multiple degrees of accuracy, each with varying computational cost. In this paper, a Kriging-based variable-fidelity surrogate modelling approach is introduced to approximate such deterministic data. Initially, individual Kriging surrogate models, enhanced with gradient data of different degrees of accuracy, are constructed. These gradient-enhanced Kriging surrogate models are then strategically coupled using a recursive CoKriging formulation to provide an accurate surrogate model for the highest-fidelity data. While, intuitively, gradient data are useful for enhancing the accuracy of surrogate models, the primary motivation behind this work is to investigate whether it is also worthwhile to incorporate gradient data of varying degrees of accuracy.
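The coupling rests on the autoregressive structure underlying recursive CoKriging (of the Kennedy-O'Hagan type; stated here for reference, with notation chosen by us):

$$Z_{\mathrm{hi}}(x) = \rho\, Z_{\mathrm{lo}}(x) + Z_{d}(x),$$

where $Z_{\mathrm{lo}}$ is the (gradient-enhanced) Kriging model of the cheaper data, $Z_{d}$ an independent Gaussian process capturing the discrepancy, and $\rho$ a scaling factor estimated together with the hyperparameters; in the recursive formulation, $Z_{\mathrm{lo}}$ is fitted first and held fixed, and $Z_{d}$ is then fitted to the high-fidelity residuals.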

Keywords: Kriging, CoKriging, surrogate modelling, variable-fidelity modelling, gradients

Procedia PDF Downloads 525
4939 Application of Pattern Recognition Technique to the Quality Characterization of Superficial Microstructures in Steel Coatings

Authors: H. Gonzalez-Rivera, J. L. Palmeros-Torres

Abstract:

This paper describes the application of traditional computer vision techniques as a procedure for the automatic measurement of secondary dendrite arm spacing (SDAS) from microscopic images. The algorithm is capable of finding the lineal or curve-shaped secondary column of the main microstructure, measuring its length in micrometers and counting the number of spaces between dendrites. The automatic characterization was compared with a set of 1728 manually characterized images, leading to an accuracy of −0.27 µm for the length determination and a precision of ±2.78 counts for dendrite spacing counting, while also reducing the characterization time from 7 hours to 2 minutes.

Keywords: dendrite arm spacing, microstructure inspection, pattern recognition, polynomial regression

Procedia PDF Downloads 15
4938 The Pitch Diameter of Pipe Taper Thread Measurement and Uncertainty Using Three-Wire Probe

Authors: J. Kloypayan, W. Pimpakan

Abstract:

Pipe taper thread measurement and uncertainty evaluation normally use the four-wire probe according to JIS B 0262; besides, according to the EA-10/10 standard, pipe threads can be measured using the three-wire probe. This research proposes using the three-wire probe to measure the pitch diameter of the pipe taper thread. A measuring accessory component was designed, made, and assembled to one side of the ULM 828 CiM machine, so that this machine can be used to measure and calibrate both pipe threads and pipe taper threads. The equations and the expanded uncertainty for pitch diameter measurement were formulated. The experiment showed that the pipe taper thread had a pitch diameter of 19.165 mm with an expanded uncertainty of 1.88 µm. The experimental results were then compared with results from the National Institute of Metrology (Thailand); the equivalence ratio from the comparison showed that the two results agreed. Thus, the proposed method of measuring the pitch diameter of the pipe taper thread with the three-wire probe is acceptable.
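The underlying three-wire relation for a parallel thread (a standard formula, quoted for reference; a taper thread additionally requires a correction for the taper half-angle, which the paper's formulation and uncertainty budget account for) is

$$E = M - W\!\left(1 + \frac{1}{\sin\alpha}\right) + \frac{P}{2}\cot\alpha,$$

where $E$ is the pitch (simple effective) diameter, $M$ the measurement over the wires, $W$ the wire diameter, $P$ the pitch, and $\alpha$ the flank half-angle; for 55° Whitworth-form pipe threads ($\alpha = 27.5°$) this reduces to $E \approx M - 3.1657\,W + 0.9605\,P$.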

Keywords: pipe taper thread, three-wire probe, measurement and calibration, universal length measuring machine

Procedia PDF Downloads 378
4937 Factors Impacting Geostatistical Modeling Accuracy and Modeling Strategy of Fluvial Facies Models

Authors: Benbiao Song, Yan Gao, Zhuo Liu

Abstract:

Geostatistical modeling is the key technique for reservoir characterization, and the quality of geological models greatly influences the prediction of reservoir performance, but few studies have been done to quantify the factors impacting geostatistical reservoir modeling accuracy. In this study, 16 fluvial prototype models were established to represent different degrees of geological complexity, and 6 cases ranging from 16 to 361 wells were defined to reproduce all 16 prototype models by different methodologies, including SIS, object-based, and MPFS algorithms, together with different constraint parameters. A modeling accuracy ratio was defined to quantify the influence of each factor, and ten realizations were averaged to represent each accuracy ratio under the same modeling condition and parameter association. In total, 5760 simulations were run to quantify the relative contribution of each factor to the simulation accuracy, and the results can be used as a strategy guide for facies modeling under similar conditions. It is found that data density, geological trend, and geological complexity have a great impact on modeling accuracy. Modeling accuracy may reach up to 90% when channel sand width reaches 1.5 times the well spacing, under whatever condition, with the SIS and MPFS methods. When well density is low, the contribution of a geological trend may increase the modeling accuracy from 40% to 70%, while the use of a proper variogram may have a very limited contribution for the SIS method. This implies that when well data are dense enough to cover simple geobodies, little effort is needed to construct an acceptable model, whereas when geobodies are complex and data are insufficient, it is better to construct a set of robust geological trends than to rely on the variogram function. For the object-based method, the modeling accuracy does not increase with data density as obviously as for the SIS method, but it keeps a rational appearance when data density is low. The MPFS method shows a similar trend to the SIS method, but the use of a proper geological trend accompanied by a rational variogram may give better modeling accuracy than the MPFS method. This implies that the geological modeling strategy for a real reservoir case needs to be optimized through evaluation of the dataset, the geological complexity, the geological constraint information, and the modeling objective.
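For reference, the variogram discussed throughout is the empirical semivariogram (standard definition, not specific to this paper):

$$\hat{\gamma}(h) = \frac{1}{2N(h)} \sum_{(i,j):\, \lVert x_i - x_j \rVert \approx h} \bigl(z(x_i) - z(x_j)\bigr)^2,$$

where $N(h)$ is the number of data pairs separated by lag $h$. SIS honors only this two-point statistic, which explains why the variogram alone contributes little once geobodies become too complex for pairwise correlations to describe.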

Keywords: fluvial facies, geostatistics, geological trend, modeling strategy, modeling accuracy, variogram

Procedia PDF Downloads 236
4936 A Sensor Placement Methodology for Chemical Plants

Authors: Omid Ataei Nia, Karim Salahshoor

Abstract:

In this paper, a new precise and reliable sensor network design methodology is introduced for unit processes and operations, using the Constriction Coefficient Particle Swarm Optimization (CPSO) method. CPSO is introduced as a new search engine for optimal sensor network design. Furthermore, a Square Root Unscented Kalman Filter (SRUKF) algorithm is employed as the data reconciliation technique to enhance the stability and accuracy of the filter. The proposed design procedure incorporates precision, cost, observability, and reliability, together with importance-of-variables (IVs) as a novel measure in the Instrumentation Criteria (IC). To the best of our knowledge, no comprehensive approach has yet been proposed in the literature to take into account the importance of variables in the sensor network design procedure. In this paper, a specific weight is assigned to each sensor measuring a process variable in the network to indicate the importance of that variable over the others, so as to cater to the ultimate sensor network application requirements. A set of distinct scenarios was conducted to evaluate the performance of the proposed methodology on a simulated Continuous Stirred Tank Reactor (CSTR) as a highly nonlinear process plant benchmark. The obtained results reveal the efficacy of the proposed method, leading to a significant improvement in accuracy with respect to alternative sensor network design approaches and securing the allocation of sensors to the most important process variables.
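The constriction coefficient variant of PSO referred to here follows the Clerc-Kennedy scheme (a standard formulation, quoted for reference):

$$v_i \leftarrow \chi\bigl[v_i + c_1 r_1 (p_i - x_i) + c_2 r_2 (g - x_i)\bigr], \qquad \chi = \frac{2}{\bigl|2 - \varphi - \sqrt{\varphi^2 - 4\varphi}\bigr|}, \quad \varphi = c_1 + c_2 > 4,$$

with the common choice $c_1 = c_2 = 2.05$ (so $\varphi = 4.1$ and $\chi \approx 0.7298$), which guarantees convergent swarm dynamics without explicit velocity clamping.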

Keywords: constriction coefficient PSO, importance of variable, MRMSE, reliability, sensor network design, square root unscented Kalman filter

Procedia PDF Downloads 137
4935 Dimensional Accuracy of CNTs/PMMA Parts and Holes Produced by Laser Cutting

Authors: A. Karimzad Ghavidel, M. Zadshakouyan

Abstract:

Laser cutting is a very common production method for cutting 2D polymeric parts. The development of polymer composites with nano-fibers makes other properties, such as laser workability, important. The aim of this research is to investigate the influence of different laser cutting conditions on the dimensional accuracy of parts and holes made from poly methyl methacrylate (PMMA)/carbon nanotube (CNT) material. Experiments were carried out considering CNT content (at four levels: 0, 0.5, 1, and 1.5 wt.%), laser power (60, 80, and 100 W), and cutting speed (20, 30, and 40 mm/s) as input variable factors. The results reveal that adding CNTs improves the laser workability of PMMA and that increasing the power has a significant effect on part and hole size. The findings also show that cutting speed is an effective parameter for size accuracy. Finally, a statistical analysis of the results was performed, and the mathematical equations calculated by regression are presented for determining the relation between input and output factors.

Keywords: dimensional accuracy, PMMA, CNTs, laser cutting

Procedia PDF Downloads 279
4934 Accuracy Improvement of Traffic Participant Classification Using Millimeter-Wave Radar by Leveraging Simulator Based on Domain Adaptation

Authors: Tokihiko Akita, Seiichi Mita

Abstract:

A millimeter-wave radar is the sensor most robust to adverse environments, making it an essential environment recognition sensor for automated driving. However, its reflection signal is sparse and unstable, so it is difficult to obtain high recognition accuracy. Deep learning provides high recognition accuracy even for such signals, but requires large-scale datasets with ground truth; in particular, annotation for a millimeter-wave radar is very costly. As a solution, utilizing a simulator that can generate a huge annotated dataset is effective. Radar simulation is more difficult to match to real-world data than camera imagery, and recognition by deep learning with higher-order features trained on the simulator deviates further. We have attempted to improve the accuracy of traffic participant classification by fusing simulator and real-world data with a domain adaptation technique. Experimental results with our domain adaptation network show that classification accuracy can be improved even with only a few real-world samples.
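One common way to realize such domain adaptation is a domain-adversarial network with a gradient reversal layer (a sketch of the general technique with toy feature sizes; the abstract does not specify the paper's actual network):

```python
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; flips the gradient sign in backward,
    so the feature extractor learns domain-invariant features."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

feat = nn.Sequential(nn.Linear(64, 32), nn.ReLU())   # radar-feature encoder (toy sizes)
cls_head = nn.Linear(32, 4)                          # traffic-participant classes
dom_head = nn.Linear(32, 2)                          # simulated vs. real domain

x = torch.randn(8, 64)                               # a batch of radar features
h = feat(x)
class_logits = cls_head(h)                           # trained on labeled simulator data
domain_logits = dom_head(GradReverse.apply(h, 1.0))  # adversarial domain loss
```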

Keywords: millimeter-wave radar, object classification, deep learning, simulation, domain adaptation

Procedia PDF Downloads 61