Search results for: measuring accuracy

4653 Study of Human Upper Arm Girth during Elbow Isokinetic Contractions Based on a Smart Circumferential Measuring System

Authors: Xi Wang, Xiaoming Tao, Raymond C. H. So

Abstract:

As a convenient and noninvasive sensing approach, automatic limb girth measurement has been applied to detect the intention behind human motion from muscle deformation. Its sensing validity has been established by preliminary research but still requires more fundamental study, especially for kinetic contraction modes. Based on novel fabric strain sensors, a soft and smart limb girth measurement system that can measure limb girth in motion was developed by the authors’ group. Experiments were carried out on elbow isometric flexion and elbow isokinetic flexion (isokinetic contractions of the biceps) at 90°/s, 60°/s, and 120°/s for 10 subjects (2 canoeists and 8 ordinary people). After removal of the natural circumferential increments due to elbow position, the joint torque was found not to be uniformly sensitive to the limb circumferential strains but to decline as the elbow joint angle rises, regardless of the angular speed. Moreover, the maximum joint torque was found to be an exponential function of the joint’s angular speed. This research contributes to the application of automatic limb girth measurement during kinetic contractions and is useful for predicting the contraction level of voluntary skeletal muscles.

Keywords: fabric strain sensor, muscle deformation, isokinetic contraction, joint torque, limb girth strain

Procedia PDF Downloads 322
4652 CMOS Solid-State Nanopore DNA System-Level Sequencing Techniques Enhancement

Authors: Syed Islam, Yiyun Huang, Sebastian Magierowski, Ebrahim Ghafar-Zadeh

Abstract:

This paper presents system-level enhancements to CMOS solid-state nanopore techniques to speed up next-generation molecular recording and provide high-throughput channels. The discussion also considers the optimum number of base-pair (bp) measurements per channel, which plays an important role in enhancing the potential read accuracy. An estimate of effective power consumption yielded a suitable range of multi-channel configurations. A statistical nanopore bp extraction model can contribute to higher read accuracy for longer read lengths (read length > 200). Nanopore ionic current switching with a Time-Multiplexing (TM) based multichannel readout system contributed hardware savings.

Keywords: DNA, nanopore, amplifier, ADC, multichannel

Procedia PDF Downloads 435
4651 Measurement of Turbulence with Pitot Static Tube in Low Speed Subsonic Wind Tunnel

Authors: Gopikrishnan, Bharathiraja, Boopalan, Jensin Joshua

Abstract:

The Pitot static tube has proven its value and practicability in measuring fluid velocity for many years. With the aim of extending the use of such Pitot tube systems, one of the key enabling steps is to design and fabricate a highly sensitive Pitot tube for calibrating the subsonic wind tunnel. Calibration of a wind tunnel is normally carried out with several different instruments measuring a variety of parameters. Using too many instruments inside the tunnel may not only affect the fluid flow but also introduce drag and losses. It is therefore desirable to replace the separate instruments with a single system that provides all the required information. This highly sensitive Pitot tube was designed to ease the calibration process: it minimizes the use of separate instruments, and the single system is capable of calibrating the wind tunnel test section. The Pitot static tube is fully digital, so velocity data can be collected directly from the instrument. Since the turbulence quantities depend on velocity, the data collected from the Pitot static tube are then processed and the level of turbulence in the flow is calculated. The instrument is also capable of measuring the pressure distribution inside the wind tunnel and the flow angularity of the fluid. Thus, the well-designed, highly sensitive Pitot static tube is used both to calibrate the tunnel and to measure turbulence.
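
As a rough illustration of the post-processing described above, the sketch below converts Pitot static differential pressures into velocities via the incompressible relation v = √(2Δp/ρ) and estimates turbulence intensity as the ratio of the velocity standard deviation to its mean. The pressure samples, air density and sensor details are illustrative assumptions, not data from this study.

```python
# Hedged sketch: velocity and turbulence intensity from Pitot static readings.
# Sample values below are illustrative, not measurements from this study.
import math
import statistics

RHO_AIR = 1.204  # kg/m^3 at ~20 C (assumed)

def pitot_velocity(delta_p_pa, rho=RHO_AIR):
    """Incompressible Bernoulli: v = sqrt(2 * dp / rho)."""
    return math.sqrt(2.0 * delta_p_pa / rho)

# Hypothetical time series of differential pressures (Pa) from the digital tube.
delta_p_samples = [152.0, 149.5, 153.8, 150.9, 148.7, 154.2, 151.3]
velocities = [pitot_velocity(dp) for dp in delta_p_samples]

mean_v = statistics.mean(velocities)
std_v = statistics.stdev(velocities)
turbulence_intensity = std_v / mean_v   # often quoted as a percentage

print(f"mean velocity: {mean_v:.2f} m/s, "
      f"turbulence intensity: {100 * turbulence_intensity:.2f} %")
```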

Keywords: pitot static tube, turbulence, wind tunnel, velocity

Procedia PDF Downloads 503
4650 A POX Controller Module to Collect Web Traffic Statistics in SDN Environment

Authors: Wisam H. Muragaa, Kamaruzzaman Seman, Mohd Fadzli Marhusin

Abstract:

Software Defined Networking (SDN) is a new networking paradigm. It is designed to facilitate the way the network is managed, measured, debugged and controlled dynamically, and to make it suitable for modern applications. Generally, measurement methods can be divided into two categories: active and passive. Active measurement injects test packets into the network in order to monitor their behaviour (the ping tool is an example), while passive measurement monitors the existing traffic in order to derive measurement values. Both active and passive methods are useful for collecting traffic statistics and monitoring network traffic. Although previous work has focused on measuring traffic statistics in an SDN environment, it was limited to packet and byte rates for non-web traffic. In this study, a feasible method is designed to measure the number of packets and bytes over a given period and to obtain statistics for both web and non-web traffic. Web traffic refers to HTTP requests at the application layer, while non-web traffic refers to ICMP and TCP requests; this work is therefore more comprehensive than previous ones. With a module developed on the POX OpenFlow controller, information is collected from each active flow in the OpenFlow switch and presented on the Command Line Interface (CLI) and in the Wireshark interface. The statistics displayed on the CLI and in Wireshark include, among others, the protocol type, the number of bytes and the number of packets. In addition, the module shows, in the same statistics list, the number of flows added to the switch whenever traffic is generated from and to hosts. To carry out this work effectively, our Python module sends a statistics request message to the switch every five seconds, asking for its current port and flow statistics; the switch replies with the required information in a statistics reply message. The POX controller is thus notified of and updated with any changes in the network within a very short time. The aim of this study is therefore to compile a list of the important statistics elements collected from the whole network, to be used in further research, particularly research dealing with the detection of network attacks that cause a sudden rise in the number of packets and bytes, such as Distributed Denial of Service (DDoS).
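
A minimal sketch of such a POX module is given below, following the classic POX statistics API (a statistics request every five seconds, with FlowStatsReceived/PortStatsReceived handlers). The module layout and the TCP-port-80 test used to separate web from non-web traffic are assumptions for illustration rather than the authors' exact implementation.

```python
# Hedged sketch of a POX module in the spirit of the description above: poll
# each switch every five seconds for flow and port statistics and split the
# counters into web (TCP port 80, assumed to stand for HTTP) and non-web
# traffic. Module structure and names are assumptions, not the authors' code.
from pox.core import core
from pox.lib.recoco import Timer
import pox.openflow.libopenflow_01 as of

log = core.getLogger()

def _request_stats():
  # Send ofp_stats_request messages (flow + port) to every connected switch.
  for connection in core.openflow._connections.values():
    connection.send(of.ofp_stats_request(body=of.ofp_flow_stats_request()))
    connection.send(of.ofp_stats_request(body=of.ofp_port_stats_request()))

def _handle_flow_stats(event):
  web_packets = web_bytes = other_packets = other_bytes = 0
  for f in event.stats:
    if f.match.tp_src == 80 or f.match.tp_dst == 80:   # crude HTTP heuristic
      web_packets += f.packet_count
      web_bytes += f.byte_count
    else:
      other_packets += f.packet_count
      other_bytes += f.byte_count
  log.info("web: %s pkts / %s bytes, non-web: %s pkts / %s bytes (%s flows)",
           web_packets, web_bytes, other_packets, other_bytes, len(event.stats))

def _handle_port_stats(event):
  log.info("port stats from %s: %s entries", event.connection, len(event.stats))

def launch():
  core.openflow.addListenerByName("FlowStatsReceived", _handle_flow_stats)
  core.openflow.addListenerByName("PortStatsReceived", _handle_port_stats)
  Timer(5, _request_stats, recurring=True)   # poll every five seconds
```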

Keywords: mininet, OpenFlow, POX controller, SDN

Procedia PDF Downloads 209
4649 Variables for Measuring the Impact of the Social Enterprises in the Field of Community Development

Authors: A. Irudaya Veni Mary, M. Victor Louis Anthuvan, P. Christie, A. Indira

Abstract:

In India, social enterprises are working to create social value in various fields, including education, health, women and child development, environment protection and community development. Although social enterprises have brought about tremendous changes in the lives of their beneficiaries, the importance of their work is not thoroughly understood. One way for them to prove themselves is to measure their impact, which has received much attention in recent times. This paper focuses on the social value created by social enterprises in the field of community development. It also aims to put forth a research tool for measuring the social value created by social enterprises in this field. A close-ended interview schedule was prepared to measure social value creation and was administered to 60 beneficiaries of two social enterprises working in community development. The results show that the social enterprises have had four types of impact on the lives of their beneficiaries: economic, social, political and cultural. The study is limited to social enterprises that work towards community development. These empirical findings will enable the reader to understand the various types of social value created by social enterprises working in this field. The study will also serve as a guide for social enterprises engaged in community development activities to measure their impact and thereby improve their operations for the betterment of society. The paper is derived from empirical research carried out to describe the different types of social value created by social enterprises in India.

Keywords: social enterprise, social entrepreneurs, social impact, social value, tool for social impact measurement

Procedia PDF Downloads 251
4648 Enhanced Weighted Centroid Localization Algorithm for Indoor Environments

Authors: I. Nižetić Kosović, T. Jagušt

Abstract:

Lately, with the increasing number of location-based applications, the demand for highly accurate and reliable indoor localization has become urgent. This is a challenging problem due to measurement variance, which is the consequence of various factors such as obstacles, equipment properties and environmental changes in the complex nature of indoor environments. In this paper we propose a low-cost, custom-setup infrastructure solution and a localization algorithm based on the Weighted Centroid Localization (WCL) method. Localization accuracy is increased by several enhancements: calibration of the RSSI values obtained from the wireless nodes, repeated RSSI measurements to exclude deviating values from the position estimation, and consideration of the orientation of the device with respect to the wireless nodes. We conducted several experiments to evaluate the proposed algorithm; high accuracy of approximately 1 m was achieved.
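
For readers unfamiliar with WCL, the sketch below shows the core of the method: each anchor's position is weighted by an inverse power of its estimated distance, derived here from RSSI through a log-distance path-loss model. The anchor layout, path-loss parameters and weighting exponent are illustrative assumptions, not the calibrated values used in the paper.

```python
# Minimal sketch of Weighted Centroid Localization (WCL) from RSSI readings.
# Anchor positions and the path-loss parameters below are illustrative only.
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    """Log-distance path-loss model: estimate distance (m) from RSSI (dBm)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def weighted_centroid(anchors, rssi_readings, g=1.0):
    """anchors: {node_id: (x, y)}, rssi_readings: {node_id: RSSI in dBm}."""
    num_x = num_y = denom = 0.0
    for node_id, rssi in rssi_readings.items():
        d = rssi_to_distance(rssi)
        w = 1.0 / (d ** g)          # closer anchors get larger weights
        x, y = anchors[node_id]
        num_x += w * x
        num_y += w * y
        denom += w
    return num_x / denom, num_y / denom

# Hypothetical square deployment of four anchors and one RSSI snapshot.
anchors = {"A": (0.0, 0.0), "B": (5.0, 0.0), "C": (0.0, 5.0), "D": (5.0, 5.0)}
readings = {"A": -55.0, "B": -62.0, "C": -60.0, "D": -70.0}
print(weighted_centroid(anchors, readings))
```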

Keywords: indoor environment, received signal strength indicator, weighted centroid localization, wireless localization

Procedia PDF Downloads 213
4647 Grey Wolf Optimization Technique for Predictive Analysis of Products in E-Commerce: An Adaptive Approach

Authors: Shital Suresh Borse, Vijayalaxmi Kadroli

Abstract:

E-commerce companies nowadays implement the latest AI and ML techniques to improve their performance and prediction accuracy, which helps them earn large profits from the online market. Ant Colony Optimization, Genetic Algorithms, Particle Swarm Optimization, Neural Networks and GWO help many e-commerce companies upgrade their predictive performance. These algorithms provide near-optimal results in various applications, such as stock price prediction, prediction of drug-target interactions and user ratings of similar products on e-commerce sites. In this study, customer reviews play an important role in the prediction analysis: people show more interest in buying services and products suggested by other customers, which ultimately increases net profit. In this work, a convolutional neural network (CNN) is proposed to optimize the prediction accuracy for an e-commerce website, with the GWO algorithm used to optimize the hyperparameters of the CNN through an appropriate coding scheme. The model's results are verified by comparing them with those of a CNN whose hyperparameters were optimized by PSO on Amazon's customer review dataset. The experimental outcome shows that the proposed system using the GWO algorithm achieves superior performance in terms of accuracy, precision, recall, etc., in the prediction analysis compared with existing systems.
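
The sketch below illustrates the standard Grey Wolf Optimizer update (the alpha, beta and delta wolves pulling the rest of the pack, with the coefficient a decaying from 2 to 0), applied to a stand-in objective meant to represent a model's validation loss as a function of two hyperparameters. The objective, bounds and population settings are placeholders, not the paper's configuration.

```python
# Illustrative sketch of the Grey Wolf Optimizer (GWO) minimizing a generic
# objective (e.g., validation loss as a function of hyperparameters).
import numpy as np

def gwo(objective, bounds, n_wolves=10, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds]); hi = np.array([b[1] for b in bounds])
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    fitness = np.array([objective(w) for w in wolves])

    for t in range(n_iter):
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[0]], wolves[order[1]], wolves[order[2]]
        a = 2 - 2 * t / n_iter                      # linearly decreases 2 -> 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += (leader - A * D) / 3.0   # average of the three pulls
            wolves[i] = np.clip(new_pos, lo, hi)
            fitness[i] = objective(wolves[i])
    best = np.argmin(fitness)
    return wolves[best], fitness[best]

# Example: tune two hypothetical hyperparameters (learning rate, dropout).
obj = lambda x: (x[0] - 0.01) ** 2 + (x[1] - 0.3) ** 2   # stand-in for loss
print(gwo(obj, bounds=[(1e-4, 1e-1), (0.0, 0.8)]))
```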

Keywords: prediction analysis, e-commerce, machine learning, grey wolf optimization, particle swarm optimization, CNN

Procedia PDF Downloads 93
4646 Expression of Metallothionein Gen and Protein on Hepatopancreas, Gill and Muscle of Perna viridis Caused by Biotoxicity Hg, Pb and Cd

Authors: Yulia Irnidayanti, J. J. Josua, A. Sugianto

Abstract:

Jakarta Bay, into which 13 rivers flow, has deteriorated environmentally and is among the most polluted bays in Asia. The entry of waste into the waters of Jakarta Bay has caused pollution. Heavy metal contamination has raised pollution levels and may cause toxicity, down to the cellular level, in organisms living in the sea and may affect the ecological balance. Various approaches have been used to measure the impact of environmental degradation, such as measuring the levels of contaminants in the environment, including the accumulation of toxic compounds in the tissues of organisms. Biological responses, or biomarkers, are known to be sensitive indicators but require relevant predictions. In heavy metal pollution monitoring, analysis of aquatic biota is more important than analysis of the water itself. The metal content in aquatic biota usually increases over time because of the bioaccumulative nature of metals, so aquatic biota are best used as indicators of metal pollution in aquatic environments. Analysis of sea water at the coastal estuaries of Angke, Kaliadem and Panimbang detected the heavy metals cadmium, mercury and lead, but not zinc. Based on protein electrophoresis of metallothionein, heavy metals were found in the hepatopancreas, gill and muscle tissues, and the corresponding mRNA expression was also detected.

Keywords: gills, heavy metal, hepatopancreas, metallothionein, muscle

Procedia PDF Downloads 370
4645 Dynamic Compensation for Environmental Temperature Variation in the Coolant Refrigeration Cycle as a Means of Increasing Machine-Tool Precision

Authors: Robbie C. Murchison, Ibrahim Küçükdemiral, Andrew Cowell

Abstract:

Thermal effects are the largest source of dimensional error in precision machining, and a major proportion is caused by ambient temperature variation. The use of coolant is a primary means of mitigating these effects, but there has been limited work on coolant temperature control. This research critically explored whether CNC-machine coolant refrigeration systems adapted to actively compensate for ambient temperature variation could increase machining accuracy. Accuracy data were collected from operators’ checklists for a CNC 5-axis mill and statistically reduced to bias and precision metrics for observations of one day over a sample period of 27 days. Temperature data were collected using three USB dataloggers in ambient air, the chiller inflow, and the chiller outflow. The accuracy and temperature data were analysed using Pearson correlation, then the thermodynamics of the system were described using system identification with MATLAB. It was found that 75% of thermal error is reflected in the hot coolant temperature but that this is negligibly dependent on ambient temperature. The effect of the coolant refrigeration process on hot coolant outflow temperature was also found to be negligible. Therefore, the evidence indicated that it would not be beneficial to adapt coolant chillers to compensate for ambient temperature variation. However, it is concluded that hot coolant outflow temperature is a robust and accessible source of thermal error data which could be used for prevention strategy evaluation or as the basis of other thermal error strategies.

Keywords: CNC manufacturing, machine-tool, precision machining, thermal error

Procedia PDF Downloads 71
4644 Astronomical Panels of Measuring and Dividing Time in Ancient Egypt

Authors: Omnia Abd Elghany Zaki Mohamed Mahmoud

Abstract:

The ancient Egyptians used the stars to measure time or, more precisely, as one of the astronomical means of measuring time. These methods differed throughout the historical ages. They began with simple observation of astronomical phenomena, such as watching the movements of the stars in the sky to mark the year and to know the days and nights, alongside other means used to help set the time when the sky was overcast. Through archaeological evidence, the researcher tries to demonstrate the ancient Egyptians' knowledge of the stars of heaven and their movements from earliest prehistory. The astronomical information possessed by the Egyptians is not believed to have been limited or simple; it reached a nearly optimal level in terms of the importance of, and the goal behind, what the ancient Egyptians wanted to achieve, and it also helped them to know the time and the passage of time, which ultimately led to attempts to devise a system of timekeeping and the calculation of time. There are signs that the stellar creed was known and flourishing from pre-dynastic times, as is evident in the inscriptions dating back to that period. The Egyptians realized that some stars remain visible at night and were familiar with the daily journey of the stars; this is reflected in many paragraphs of the Pyramid Texts and their references to the ascent of the deceased king to the heavenly world among the stars of the eternal sky. The ancient Egyptians also linked timekeeping with the stellar doctrine: the lunar year was known to them, as was the stellar-solar year, which was based on the appearance of the star Sirius — the first means used to measure time and establish the stellar calendar.

Keywords: archaeology, astronomical panels, ancient Egypt, Egyptian

Procedia PDF Downloads 19
4643 Predicting Options Prices Using Machine Learning

Authors: Krishang Surapaneni

Abstract:

The goal of this project is to determine how to predict important aspects of options, including the ask price. We compare different machine learning models to find the best model, and the best hyperparameters for that model, for this purpose and data set. Option pricing is a relatively new field, and it can be complicated and intimidating, especially to inexperienced people, so we want to create a machine learning model that can predict important aspects of an option, which can aid future research. We tested multiple models and experimented with hyperparameter tuning, trying to find good parameters for each machine learning model. We tested three different models: a random forest regressor, a linear regressor, and an MLP (multi-layer perceptron) regressor. The most important feature in this experiment is the ask price; this is what we were trying to predict. In the field of stock price prediction, there is a large potential for error, so we cannot judge the models on whether they predict prices perfectly. Instead, we determined the accuracy of each model as the average percentage difference between the predicted and actual values, comparing the actual results in the testing data with the predictions made by the models. The linear regression model performed worst, with an average percentage error of 17.46%. The MLP regressor had an average percentage error of 11.45%, and the random forest regressor had an average percentage error of 7.42%.
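
A minimal sketch of this kind of comparison appears below: scikit-learn's random forest, linear regression and MLP regressors are each scored by the average percentage error on a held-out split. The features and targets are synthetic placeholders; the actual options dataset and tuned hyperparameters are not reproduced.

```python
# Hedged sketch: compare three regressors on ask-price prediction by
# average percentage error. Data here are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                       # stand-in option features
y = 50 + X @ rng.normal(size=6) + rng.normal(scale=0.5, size=1000)  # stand-in ask price

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "linear regression": LinearRegression(),
    "MLP": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    ape = np.abs((pred - y_test) / y_test).mean() * 100   # average percentage error
    print(f"{name}: {ape:.2f}% average percentage error")
```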

Keywords: finance, linear regression model, machine learning model, neural network, stock price

Procedia PDF Downloads 60
4642 Response Delay Model: Bridging the Gap in Urban Fire Disaster Response System

Authors: Sulaiman Yunus

Abstract:

The need to model response to urban fire disasters cannot be overemphasized, as recurrent fire outbreaks have gutted most cities of the world. This necessitates a prompt and efficient response system in order to mitigate the impact of the disaster. Promptness, as a function of time, is seen to be the fundamental determinant of the efficiency of a response system and of the magnitude of a fire disaster. Delay, resulting from several factors, is one of the major determinants of the promptness of a response system and, likewise, of the magnitude of a fire disaster. The Response Delay Model (RDM) intends to bridge the gap in urban fire disaster response systems by incorporating and synchronizing the delay moments when measuring the overall efficiency of a response system and determining the magnitude of a fire disaster. The model identifies two delay moments (pre-notification delay and intra-reflex-sequence delay) that can be elastic and that collectively play a significant role in influencing the efficiency of a response system. Because the elasticity of the delay moments varies, the model provides for measuring the length of delays in order to arrive at a standard average delay moment for different parts of the world, taking into consideration geographic location, level of preparedness and awareness, technological advancement, and socio-economic and environmental factors. It is recommended that participatory research be undertaken locally and globally to determine standard average delay moments within each phase of the system, so as to enable determining the efficiency of response systems and predicting fire disaster magnitudes.

Keywords: delay moment, fire disaster, reflex sequence, response, response delay moment

Procedia PDF Downloads 183
4641 Double Beta Decay Experiments in Novi Sad

Authors: Nataša Todorović, Jovana Nikolov

Abstract:

Despite the great interest in β⁻β⁻ decay, β⁺β⁺ decays are rarely investigated because of the low probability of detecting these processes with available low-level equipment. If β⁺β⁺, β⁺EC, or ECEC decay occurs in a thin sample of a material, the positrons are stopped and annihilated inside the material, leading to the emission of two or four coincident gamma photons with an energy of 511 keV. This paper presents the results of measurements of the double beta decay of the ⁶⁴Zn, ⁵⁰Cr, and ⁵⁴Fe isotopes. In the first experiment, 511-keV gamma rays originating from the annihilation of positrons in natural zinc were measured with a coincidence technique to obtain a non-zero value for the (0ν+2ν) half-life. In the second experiment, the result of measuring the double beta decay of ⁵⁰Cr is presented; it suggests a non-zero result at 95% CL and gives a lower limit for the half-life of this process. In the third experiment, neutrinoless ECEC decay of ⁵⁴Fe was examined. According to the decay theory, gamma rays are emitted whose energies do not coincide with the energies of gamma rays emitted by nuclei from known discrete excited states. An iron shield with an internal volume of 1 m³ and a thickness of 25 cm served as the source for measuring the (0ν+2ν) process in ⁵⁴Fe, whose abundance in natural iron is 5.4%. We obtain the following lower limits for the half-life of ⁵⁴Fe: T(0ν, K, K) > 4.4×10²⁰ yr, T(0ν, K, L) > 4.1×10²⁰ yr, and T(0ν, L, L) > 5.0×10²⁰ yr. For ⁵⁰Cr the limit for the half-life is T(0ν+2ν) > 1.3(6)×10¹⁸ yr, and for ⁶⁴Zn T(0ν+2ν, ECβ⁺) = 1.1(0.9)×10⁹ years.

Keywords: neutrinoless double beta decay, half-life, ⁶⁴Zn, ⁵⁰Cr, ⁵⁴Fe

Procedia PDF Downloads 89
4640 Food Irradiation in the Third Sector: Development and Validation of a Questionnaire as a Standard Measuring Instrument for Evaluation of Acceptance and Sensory Analysis of Irradiated Foods

Authors: Juliana Sagretti, Susy Sabato

Abstract:

Despite the poverty in the world, a third of all food produced is wasted. FAO, the Food and Agriculture Organization of the United Nations, points out the need to combine actions and new technologies to combat hunger and waste, in contrast to the high production of food in the world. The use of ionizing radiation in food has brought many positive results, such as extended shelf life and insect infestation control. Food banks are organizations that act at various points of the food chain to collect and distribute food to the needy. The aim of this study was to initiate a partnership between irradiation and the food bank through the development of a questionnaire to evaluate and disseminate the knowledge and acceptance of individuals in the food bank in Brazil. In addition, this study aimed to standardize a base questionnaire for future research assessing irradiated foods. For the construction of the questionnaire as a measuring instrument, a comprehensive and rigorous literature review was made, covering qualitative research, questionnaires, sensory evaluation and irradiated food. Three stages of pre-tests were necessary, and experts in the related fields were consulted. As a result, the questionnaire has three parts: personal questions, assertive and multiple-choice questions, and finally an informative question. The questionnaire was applied in the Ceagesp food bank, in the biggest food center in Brazil (data not shown).

Keywords: food bank, food irradiation, food waste, sustainability

Procedia PDF Downloads 306
4639 Modification of Newton Method in Two Point Block Backward Differentiation Formulas

Authors: Khairil I. Othman, Nur N. Kamal, Zarina B. Ibrahim

Abstract:

In this paper, we present a modified Newton method as a new strategy for improving the efficiency of the Two Point Block Backward Differentiation Formulas (BBDF) when solving stiff systems of ordinary differential equations (ODEs). These methods are constructed to produce two approximate solutions simultaneously at each iteration. The detailed implementation of the predictor-corrector BBDF in PE(CE)² mode with the modified Newton iteration is discussed. The proposed modification of BBDF is validated through numerical results on some standard problems found in the literature, and comparisons are made with the existing Block Backward Differentiation Formula. Numerical results show the advantage of using the new strategy for solving stiff ODEs in improving the accuracy of the solution.
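
The role of the Newton iteration inside an implicit backward differentiation corrector can be illustrated on a much simpler single-equation, single-step case; the sketch below applies Newton's method to the BDF2 corrector for a scalar stiff ODE. It is a simplified illustration of the idea, not the authors' two-point block (BBDF) scheme or their modified Newton strategy.

```python
# Hedged sketch: Newton iteration for the implicit corrector of a BDF2 step,
# shown for a scalar stiff ODE. This illustrates the Newton solve inside a
# backward differentiation formula; it is a simplification, not the BBDF.
def bdf2_step(f, dfdy, y_prev2, y_prev1, h, newton_iters=5, tol=1e-12):
    """Solve y_n - (4/3) y_{n-1} + (1/3) y_{n-2} = (2/3) h f(y_n) for y_n."""
    y = y_prev1                      # simple predictor
    for _ in range(newton_iters):
        g = y - (4.0 / 3.0) * y_prev1 + (1.0 / 3.0) * y_prev2 - (2.0 / 3.0) * h * f(y)
        g_prime = 1.0 - (2.0 / 3.0) * h * dfdy(y)
        delta = g / g_prime
        y -= delta
        if abs(delta) < tol:
            break
    return y

# Example: the stiff linear test problem y' = lambda * y with lambda = -50.
import math
lam = -50.0
f = lambda y: lam * y
dfdy = lambda y: lam
y0, y1 = 1.0, math.exp(lam * 0.05)   # exact values at t = 0 and t = 0.05
print(bdf2_step(f, dfdy, y0, y1, h=0.05))   # approximates exp(lam * 0.1)
```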

Keywords: newton method, two point, block, accuracy

Procedia PDF Downloads 332
4638 The Identification of Combined Genomic Expressions as a Diagnostic Factor for Oral Squamous Cell Carcinoma

Authors: Ki-Yeo Kim

Abstract:

Trends in genetics are shifting towards identifying differential co-expression of correlated genes rather than significant individual genes. Moreover, it is known that a combined biomarker pattern improves the discrimination of a specific cancer. Identification of a combined biomarker is also necessary for the early detection of invasive oral squamous cell carcinoma (OSCC). To identify a combined biomarker that could improve the discrimination of OSCC, we explored the appropriate number of genes in a combined gene set in order to attain the highest level of accuracy. After detecting a significant gene set containing the pre-defined number of genes, a combined expression was computed using the weights of the genes in the gene set; the weights were calculated with Principal Component Analysis (PCA). In this process, we used three public microarray datasets: one for identifying the combined biomarker and the other two for validation. Discrimination accuracy was measured by the out-of-bag (OOB) error. There was no relation between the significance of an individual gene and its discrimination accuracy, and the identified gene set included both significant and insignificant genes. One of the most significant gene sets for the classification of normal tissue versus OSCC included MMP1, SOCS3 and ACOX1. Furthermore, for the discrimination of oral dysplasia from OSCC, two combined biomarkers were identified. The combined genomic expression achieved better performance in discriminating between the different conditions than any single significant gene. Therefore, accurate diagnosis of cancer could be expected to be possible with a combined biomarker.
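
A compact sketch of the combined-score idea is given below: PCA supplies the gene weights that define the combined expression, and a random forest's out-of-bag (OOB) error measures discrimination. The expression matrix is synthetic and the pipeline is a simplified stand-in for the authors' procedure; only the gene names come from the abstract.

```python
# Hedged sketch: form a combined expression score from a small gene set with
# PCA weights and evaluate discrimination by random-forest out-of-bag error.
# The expression matrix here is synthetic; gene names are those cited above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
genes = ["MMP1", "SOCS3", "ACOX1"]
n_normal, n_oscc = 40, 40
# Synthetic log-expression: OSCC samples get a shifted mean for illustration.
X_normal = rng.normal(0.0, 1.0, size=(n_normal, len(genes)))
X_oscc = rng.normal(1.0, 1.0, size=(n_oscc, len(genes)))
X = np.vstack([X_normal, X_oscc])
y = np.array([0] * n_normal + [1] * n_oscc)

# PCA weights of the gene set define the combined expression (first component).
pca = PCA(n_components=1)
combined_expression = pca.fit_transform(X)        # shape (n_samples, 1)
print("gene weights:", dict(zip(genes, pca.components_[0].round(3))))

# Out-of-bag error of a random forest trained on the combined score.
rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(combined_expression, y)
print("OOB error:", round(1.0 - rf.oob_score_, 3))
```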

Keywords: oral squamous cell carcinoma, combined biomarker, microarray dataset, correlated genes

Procedia PDF Downloads 398
4637 U-Net Based Multi-Output Network for Lung Disease Segmentation and Classification Using Chest X-Ray Dataset

Authors: Jaiden X. Schraut

Abstract:

Medical image segmentation of chest X-rays is used to identify and differentiate lung cancer, pneumonia, COVID-19, and similar respiratory diseases. Widespread application of computer-supported perception methods in the diagnostic pipeline has been demonstrated to increase prognostic accuracy and to aid doctors in treating patients efficiently. Modern models attempt the tasks of segmentation and classification separately and thereby improve diagnostic efficiency; to further enhance this process, this paper proposes a multi-output network that follows a U-Net architecture for the image segmentation output and features an additional CNN module for an auxiliary classification output. The proposed model achieves a final Jaccard index of 0.9634 for image segmentation and a final accuracy of 0.9600 for classification on the COVID-19 radiography database.
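
The sketch below shows, in PyTorch, the overall shape of such a multi-output design: a small U-Net-style encoder/decoder with skip connections producing a segmentation map, plus an auxiliary classification head attached to the bottleneck features. Depth, channel widths, input size and number of classes are illustrative assumptions, not the paper's trained architecture.

```python
# Hedged sketch (PyTorch): a small U-Net-style encoder/decoder with skip
# connections for segmentation plus an auxiliary classification head on the
# bottleneck features. Sizes are illustrative, far smaller than the paper's.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MultiOutputUNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.seg_head = nn.Conv2d(16, 1, 1)             # per-pixel mask logits
        self.cls_head = nn.Sequential(                  # auxiliary classifier
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes)
        )

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.seg_head(d1), self.cls_head(b)      # (mask, class logits)

model = MultiOutputUNet(n_classes=4)
x = torch.randn(2, 1, 256, 256)                         # two grayscale chest X-rays
mask_logits, class_logits = model(x)
print(mask_logits.shape, class_logits.shape)            # (2,1,256,256), (2,4)
```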

Keywords: chest X-ray, deep learning, image segmentation, image classification

Procedia PDF Downloads 113
4636 Post-Earthquake Road Damage Detection by SVM Classification from Quickbird Satellite Images

Authors: Moein Izadi, Ali Mohammadzadeh

Abstract:

Detection of the damaged parts of roads after an earthquake is essential for coordinating rescuers. In this study, an approach is presented for the semi-automatic detection of damaged roads in a city using pre-event vector maps and both pre- and post-earthquake QuickBird satellite images. Damage is defined in this study as the debris of damaged buildings adjacent to the roads. Spectral and texture features are considered in an SVM classification step to detect damage. Finally, the proposed method is tested on QuickBird pan-sharpened images from the Bam City earthquake; the results show that an overall accuracy of 81% and a kappa coefficient of 0.71 are achieved for the damage detection. The obtained results indicate the efficiency and accuracy of the proposed approach.
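
To make the classification step concrete, the sketch below trains an RBF-kernel SVM on a handful of spectral and texture features and reports overall accuracy and the kappa coefficient; the features and their values are synthetic placeholders rather than quantities extracted from the QuickBird imagery.

```python
# Hedged sketch: SVM classification of road segments as damaged/undamaged from
# spectral and texture features. Feature values are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(1)
# Columns: mean reflectance per band (4 bands) + 2 texture measures
# (e.g. GLCM contrast and homogeneity) -- illustrative only.
X_intact = rng.normal(loc=[0.3, 0.35, 0.4, 0.5, 2.0, 0.8], scale=0.05, size=(120, 6))
X_damaged = rng.normal(loc=[0.4, 0.42, 0.45, 0.45, 5.0, 0.4], scale=0.08, size=(120, 6))
X = np.vstack([X_intact, X_damaged])
y = np.array([0] * 120 + [1] * 120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("overall accuracy:", round(accuracy_score(y_te, pred), 3))
print("kappa coefficient:", round(cohen_kappa_score(y_te, pred), 3))
```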

Keywords: SVM classifier, disaster management, road damage detection, quickBird images

Procedia PDF Downloads 602
4635 Upon One Smoothing Problem in Project Management

Authors: Dimitri Golenko-Ginzburg

Abstract:

A CPM network project with deterministic activity durations, in which activities require homogeneous resources with fixed capacities, is considered. The problem is to determine the optimal schedule of starting times for all network activities, within their maximal allowable limits (so as not to exceed the network's critical time), that minimizes the maximum resource requirement of the project at any point in time. In the case where a non-critical activity may start only at discrete moments within a pre-given time span, the problem becomes NP-complete, and an optimal solution may be obtained via a look-over algorithm. For the case when a look-over search requires too much computational time, an approximate algorithm is suggested, and the algorithm's performance ratio, i.e., its relative accuracy error, is determined. Experimentation has been undertaken to verify the suggested algorithm.
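
To make the smoothing objective concrete, the sketch below enumerates every combination of discrete start times (within each activity's allowable limits) for a toy three-activity example and keeps the schedule with the smallest peak resource requirement, which is the spirit of the look-over search; the activity data and horizon are invented for illustration.

```python
# Hedged sketch: exhaustive "look-over" search for the resource-smoothing
# problem on a toy example. Each activity has a duration, a fixed resource
# requirement and a set of allowable discrete start times (within its float).
from itertools import product

# (duration, resource units, allowable start times) -- illustrative data only.
activities = {
    "A": (3, 4, [0, 1, 2]),
    "B": (2, 3, [0, 2, 4]),
    "C": (4, 2, [1, 2, 3]),
}
horizon = 10   # project must finish within the critical time (assumed)

def peak_usage(starts):
    usage = [0] * horizon
    for name, s in starts.items():
        dur, res, _ = activities[name]
        for t in range(s, s + dur):
            usage[t] += res
    return max(usage)

best_starts, best_peak = None, float("inf")
for combo in product(*(opts for _, _, opts in activities.values())):
    starts = dict(zip(activities, combo))
    if all(s + activities[n][0] <= horizon for n, s in starts.items()):
        peak = peak_usage(starts)
        if peak < best_peak:
            best_starts, best_peak = starts, peak

print("best schedule:", best_starts, "peak resource requirement:", best_peak)
```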

Keywords: resource smoothing problem, CPM network, lookover algorithm, lexicographical order, approximate algorithm, accuracy estimate

Procedia PDF Downloads 283
4634 Bioreactor for Cell-Based Impedance Measuring with Diamond Coated Gold Interdigitated Electrodes

Authors: Roman Matejka, Vaclav Prochazka, Tibor Izak, Jana Stepanovska, Martina Travnickova, Alexander Kromka

Abstract:

Cell-based impedance spectroscopy is a suitable method for the electrical monitoring of cell activity, especially on substrates that cannot easily be inspected with an optical microscope (without fluorescent markers), such as decellularized tissues, nano-fibrous scaffolds, etc. A special sensor was developed for this measurement. It consists of a Corning glass substrate with gold interdigitated electrodes covered with a diamond layer, which provides a biocompatible, non-conductive surface for the cells. A special PPFC flow cultivation chamber was also developed; it fixes the sensor in place, and spring contacts connect the sensor pads with the external measuring device. The construction allows real-time live-cell imaging, and combining it with a perfusion system allows medium circulation and the generation of shear stress stimulation. The experimental evaluation consisted of several setups, including the bare sensor without any coating as well as collagen and fibrin coatings. Adipose-derived stem cells (ASC) and human umbilical vein endothelial cells (HUVEC) were seeded onto the sensor in the cultivation chamber, and the chamber was then installed in a microscope system for live-cell imaging. The impedance was measured with a vector impedance analyzer over the range from 10 Hz to 40 kHz. These impedance measurements were correlated with live-cell microscopic imaging and immunofluorescent staining. Analysis of the measured signals showed the response to cell adhesion to the substrates, their proliferation, and changes after shear stress stimulation, which are important parameters during cultivation. Further experiments are planned that will use decellularized tissue as a scaffold fixed on the sensor. This kind of impedance sensor can provide feedback about cell culture conditions on opaque surfaces and scaffolds that can be used in tissue engineering for the development of artificial prostheses. This work was supported by the Ministry of Health, grants No. 15-29153A and 15-33018A.

Keywords: bio-impedance measuring, bioreactor, cell cultivation, diamond layer, gold interdigitated electrodes, tissue engineering

Procedia PDF Downloads 279
4633 Machine Learning Based Approach for Measuring Promotion Effectiveness in Multiple Parallel Promotions’ Scenarios

Authors: Revoti Prasad Bora, Nikita Katyal

Abstract:

Promotion is a key element of the retail business, so analysing promotions to quantify their effectiveness in terms of revenue and/or margin is an essential activity in the retail industry. However, measuring the sales/revenue uplift relies on estimation, as the actual sales/revenue without the promotion is not observable. Furthermore, the presence of halo and cannibalization effects in a scenario with multiple parallel promotions complicates the problem. Calculating the baseline from inter-brand/competitor items, or treating the baseline as an interpretation of each item's unit sales in neighbouring non-promotional weeks when accounting for the impact of halo and cannibalization on revenue, may not capture the overall revenue uplift when multiple promotions run in parallel. Hence, this paper proposes a machine learning based method for calculating the revenue uplift that considers the impact of halo and cannibalization on both the baseline and the revenue. In the first part of the proposed methodology, the baseline of an item is calculated by incorporating the impact of the promotions on its related items; in the second part, the revenue of an item is calculated by considering both halo and cannibalization impacts. This methodology therefore enables correct calculation of the overall revenue uplift due to a given promotion.

Keywords: halo, cannibalization, promotion, baseline, temporary price reduction, retail, elasticity, cross price elasticity, machine learning, random forest, linear regression

Procedia PDF Downloads 154
4632 An Intelligent Steerable Drill System for Orthopedic Surgery

Authors: Wei Yao

Abstract:

A steerable and flexible drill is needed in orthopedic surgery. For example, osteoarthritis is a common condition affecting millions of people, for which joint replacement is an effective treatment that improves the quality and duration of life in elderly sufferers. Conventional surgery is not very accurate, and computer navigation and robotics can help increase the accuracy. In Total Hip Arthroplasty (THA), for example, robotic surgery is currently practiced mainly on the acetabular side, helping with cup positioning and orientation, whereas femoral stem positioning mostly uses the hand-rasping method rather than robots for accurate positioning. Another case for using a flexible drill in surgery is Anterior Cruciate Ligament (ACL) reconstruction: the majority of ACL reconstruction failures are primarily caused by technical mistakes and surgical errors resulting from drilling the anatomical bone tunnels required to accommodate the ligament graft. The proposed new steerable drill system will perform orthopedic surgery through curved tunneling, leading to better accuracy and patient outcomes. It may reduce intra-operative fractures, dislocations, early failure and leg length discrepancy by making possible a new level of precision. The technology is based on a robotically assisted, steerable, hand-held flexible drill with a drill-tip tracking device and a multi-modality navigation system. The critical differentiator is that this robotically assisted surgical technology allows the surgeon to prepare 'patient specific', more anatomically correct, curved bone tunnels during orthopedic surgery, rather than drilling straight holes as occurs with existing surgical tools. The flexible, steerable drill and its navigation system for femoral milling in total hip arthroplasty have been tested on sawbones to evaluate the accuracy of the positioning and orientation of the femoral stem relative to the pre-operative plan. The data show that the accuracy of the navigation system is better than that of the traditional hand-rasping method.

Keywords: navigation, robotic orthopedic surgery, steerable drill, tracking

Procedia PDF Downloads 148
4631 Accuracy of Autonomy Navigation of Unmanned Aircraft Systems through Imagery

Authors: Sidney A. Lima, Hermann J. H. Kux, Elcio H. Shiguemori

Abstract:

Unmanned Aircraft Systems (UAS) usually navigate through the Global Navigation Satellite System (GNSS) associated with an Inertial Navigation System (INS). However, GNSS accuracy can be degraded at any time, or the GNSS signal can even be lost, and there is also the possibility of malicious interference, known as jamming. An image navigation system can therefore solve the autonomy problem, because if GNSS is disabled or degraded, the image navigation system continues to provide coordinate information to the INS, allowing the system to remain autonomous. This work aims to evaluate the accuracy of positioning based on photogrammetric concepts. The methodology uses orthophotos and Digital Surface Models (DSM) as the reference representing the object space, and photographs obtained during the flight representing the image space. To calculate the coordinates of the perspective center and the camera attitudes, it is necessary to know the coordinates of homologous points in the object space (orthophoto coordinates and DSM altitude) and in the image space (column and row of the photograph). Thus, if the homologous points can be identified automatically in real time, the coordinates and attitudes can be calculated with their respective accuracies. With the methodology applied in this work, maximum errors of the order of 0.5 m in positioning and 0.6° in camera attitude were verified, so navigation through imagery can reach an accuracy equal to or better than that of GNSS receivers without differential correction. Navigating through the image is therefore a good alternative for enabling autonomous navigation.

Keywords: autonomy, navigation, security, photogrammetry, remote sensing, spatial resection, UAS

Procedia PDF Downloads 166
4630 Study on Energy Transfer in Collapsible Soil During Laboratory Proctor Compaction Test

Authors: Amritanshu Sandilya, M. V. Shah

Abstract:

Collapsible soils such as loess are a common geotechnical challenge due to their potential to undergo sudden and severe settlement under certain loading conditions. The need for fill engineering to expand developable land has grown significantly in recent years, which has created several difficulties in managing soil strength and stability during compaction. Numerous engineering problems, such as roadbed subsidence and pavement cracking, have been caused by insufficient fill strength; strict control of compaction parameters is therefore essential to reduce these distresses. Accurately measuring the degree of compaction, often represented by compactness, is an important component of compaction control, and the accuracy of laboratory studies is essential for credible predictions of how collapsible soils will behave under complicated loading situations. This study therefore investigates the energy transfer in collapsible soils during laboratory Proctor compaction tests to provide insight into how energy transfer can be optimized to achieve more accurate and reliable results in compaction testing. The compaction characteristics of loess soil, expressed in terms of energy, were studied at moisture contents corresponding to the dry side of optimum, the optimum, and the wet side of optimum, and at different compaction energy levels. The hammer impact force (E₀) and the force transmitted to the bottom of the soil (E) were measured using an impact load cell mounted at the bottom of the compaction mould. The variation in the energy consumption ratio (E/E₀) was observed and compared with the compaction curve of the soil. The results indicate that the plot of energy consumption ratio versus moisture content can serve as a reliable indicator of the compaction characteristics of the soil in terms of energy.

Keywords: soil compaction, proctor compaction test, collapsible soil, energy transfer

Procedia PDF Downloads 66
4629 A Method to Enhance the Accuracy of Digital Forensic in the Absence of Sufficient Evidence in Saudi Arabia

Authors: Fahad Alanazi, Andrew Jones

Abstract:

Digital forensics seeks to achieve the successful investigation of digital crimes by obtaining acceptable evidence from digital devices that can be presented in a court of law. The digital forensics investigation is therefore normally performed through a number of phases in order to achieve the required level of accuracy in the investigation process. Since 1984, a number of models and frameworks have been developed to support the digital investigation process. In this paper, we review a number of the investigation process models that have been produced over the years and introduce a proposed digital forensic model based on the scope of the Saudi Arabian investigation process. The proposed model is integrated with existing models of the investigation process and adds a new phase to deal with situations where there is initially insufficient evidence.

Keywords: digital forensics, process, metadata, Traceback, Saudi Arabia

Procedia PDF Downloads 334
4628 Automatic Number Plate Recognition System Based on Deep Learning

Authors: T. Damak, O. Kriaa, A. Baccar, M. A. Ben Ayed, N. Masmoudi

Abstract:

In the last few years, Automatic Number Plate Recognition (ANPR) systems have become widely used for safety, security, and commercial purposes, and several methods and techniques have been developed to achieve better accuracy and real-time execution. This paper proposes a computer vision algorithm for Number Plate Localization (NPL) and Character Segmentation (CS). In addition, it proposes an improved method for Optical Character Recognition (OCR) based on Deep Learning (DL) techniques. To recognize the number on the detected plate after the NPL and CS steps, a Convolutional Neural Network (CNN) algorithm is proposed. A DL model is developed using four convolution layers, two max-pooling layers, and six fully connected layers. The model was trained on a number image database on the NVIDIA Jetson TX2 target. The achieved accuracy is 95.84%.
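
A PyTorch sketch with the stated layer budget (four convolution layers, two max-pooling layers, six fully connected layers) is given below; the input size, channel widths and number of character classes are assumptions for illustration, not the authors' trained configuration.

```python
# Hedged sketch (PyTorch): CNN with 4 conv layers, 2 max-pooling layers and
# 6 fully connected layers for character recognition. Sizes are illustrative.
import torch
import torch.nn as nn

class PlateCharCNN(nn.Module):
    def __init__(self, n_classes=36):          # e.g. 0-9 plus A-Z (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_classes),           # sixth fully connected layer
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = PlateCharCNN()
logits = model(torch.randn(8, 1, 32, 32))       # batch of 32x32 character crops
print(logits.shape)                              # torch.Size([8, 36])
```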

Keywords: ANPR, CS, CNN, deep learning, NPL

Procedia PDF Downloads 284
4627 A Comparative Soft Computing Approach to Supplier Performance Prediction Using GEP and ANN Models: An Automotive Case Study

Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari

Abstract:

In multi-echelon supply chain networks, optimal supplier selection depends significantly on the accuracy of supplier performance prediction. Different multi-criteria decision-making methods, such as ANN, GA, fuzzy methods, AHP, etc., have previously been used to predict supplier performance, but the “black-box” characteristic of these methods is still a major concern to be resolved. Therefore, the primary objective of this paper is to implement an artificial intelligence-based gene expression programming (GEP) model and compare its prediction accuracy with that of an ANN. A full factorial design with a 95% confidence interval is initially applied to determine the appropriate set of criteria for supplier performance evaluation. A train-test approach is then applied to the ANN and the GEP separately. The training results are used to find the optimal network architecture, and the testing data determine the prediction accuracy of each method based on the root mean square error (RMSE) and the correlation coefficient (R²). The results of a case study conducted in Supplying Automotive Parts Co. (SAPCO), with more than 100 local and foreign supply chain members, revealed that gene expression programming significantly outperforms the ANN in predicting supplier performance, as indicated by the respective RMSE and R² values. Moreover, using GEP, a mathematical function was also derived, addressing the black-box issue of ANNs in modeling the performance prediction.

Keywords: supplier performance prediction, ANN, GEP, automotive, SAPCO

Procedia PDF Downloads 400
4626 Evaluate the Possibility of Using ArcGIS Basemaps as GCP for Large Scale Maps

Authors: Jali Octariady, Ida Herliningsih, Ade K. Mulyana, Annisa Fitria, Diaz C. K. Yuwana

Abstract:

Awareness of the importance of large-scale maps for the development of a country is growing in all walks of life, especially among governments in Indonesia. Various parties, especially local governments throughout Indonesia, have demanded the immediate availability of 1:5000 large-scale maps for regional development. In fact, however, 1:5000 maps are available for less than 5% of the territory of Indonesia. The unavailability of precise GCPs across the entire territory is one of the causes of the slow production of 1:5000 maps. This research was conducted to find an alternative solution to this problem by assessing the accuracy of ArcGIS basemap coordinates when used as GCPs for creating maps at a scale of 1:5000. The study compared GCP coordinates from field surveys using geodetic GPS with the coordinates from ArcGIS basemaps at various locations in Indonesia; the study areas were Lombok Island, Kupang City, Surabaya City and Kediri District. The differences between the coordinate values serve as the basis for assessing the accuracy of the ArcGIS basemap coordinates. The results across the study areas show considerable variation in the coordinate differences, ranging from millimeters (mm) to meters (m). This demonstrates the inconsistent quality of ArcGIS basemap coordinates. This inconsistency indicates that coordinates taken from ArcGIS basemaps are unreliable and cannot be used as GCPs for large-scale mapping across the entire territory of Indonesia.

Keywords: accuracy, ArcGIS base maps, GCP, large scale maps

Procedia PDF Downloads 354
4625 Prediction of PM₂.₅ Concentration in Ulaanbaatar with Deep Learning Models

Authors: Suriya

Abstract:

Rapid socio-economic development and urbanization have led to an increasingly serious air pollution problem in Ulaanbaatar (UB), the capital of Mongolia, and PM₂.₅ pollution has become its most pressing aspect. Monitoring and predicting the PM₂.₅ concentration in UB is therefore of great significance for the health of the local population and for environmental management. As yet, very few studies have used models to predict PM₂.₅ concentrations in UB. Using data from 0:00 on June 1, 2018, to 23:00 on April 30, 2020, we propose two deep learning models based on Bayesian-optimized LSTM (Bayes-LSTM) and CNN-LSTM. We utilized hourly observed data, including Himawari-8 (H8) aerosol optical depth (AOD), meteorology, and PM₂.₅ concentration, as input for the prediction of PM₂.₅ concentrations. The correlation strengths between meteorology, AOD, and PM₂.₅ were analysed using the grey correlation analysis method; the performance improvement obtained by including AOD as an input was tested, and the performance of the models was evaluated using the mean absolute error (MAE) and the root mean square error (RMSE). The prediction accuracies of both the Bayes-LSTM and CNN-LSTM deep learning models improved when AOD was included as an input parameter. The improvement was particularly pronounced for the CNN-LSTM model in the non-heating season; in the heating season, the prediction accuracy of the Bayes-LSTM model improved slightly, while that of the CNN-LSTM model decreased slightly. We thus propose two novel deep learning models for PM₂.₅ concentration prediction in UB, pioneer the use of AOD data from H8 for this purpose, and demonstrate that including AOD input data improves the performance of both proposed models.

Keywords: deep learning, AOD, PM2.5, prediction, Ulaanbaatar

Procedia PDF Downloads 26
4624 Tuning Cubic Equations of State for Supercritical Water Applications

Authors: Shyh Ming Chern

Abstract:

Cubic equations of state (EoS), popular due to their simple mathematical form, ease of use, semi-theoretical nature and reasonable accuracy, are normally fitted to vapor-liquid equilibrium P-v-T data. As a result, they often show poor accuracy in the region near and above the critical point. In this study, the performance of the renowned Peng-Robinson (PR) and Patel-Teja (PT) EoS's around the critical region has been examined against the P-v-T data of water. Both display large deviations at the critical point; for instance, the PR EoS exhibits discrepancies as high as 47% for the specific volume, 28% for the enthalpy departure and 43% for the entropy departure at the critical point. It is shown that incorporating P-v-T data of the supercritical region into the retuning of a cubic EoS can improve its performance above the critical point dramatically. Adopting a retuned acentric factor of 0.5491, instead of its genuine value of 0.344, for water in the PR EoS, and a new F of 0.8854, instead of its original value of 0.6898, for water in the PT EoS, reduces the discrepancies to about one third or less.
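
The retuning experiment above can be reproduced in outline with a few lines of code: the sketch below solves the Peng-Robinson cubic in the compressibility factor Z for water at an illustrative supercritical state point, once with the genuine acentric factor of 0.344 and once with the retuned value of 0.5491. The critical constants are standard literature values; the chosen state point and the comparison format are assumptions.

```python
# Hedged sketch: Peng-Robinson EoS for water, solving the cubic in Z and
# comparing the genuine acentric factor (0.344) with the retuned value
# (0.5491) discussed above. The state point is illustrative.
import numpy as np

R = 8.314462618               # J/(mol K)
TC, PC = 647.096, 22.064e6    # water critical temperature (K) and pressure (Pa)

def pr_molar_volume(T, P, omega):
    """Return molar volumes (m^3/mol) of the real roots of the PR EoS."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / TC))) ** 2
    a = 0.45724 * R ** 2 * TC ** 2 / PC * alpha
    b = 0.07780 * R * TC / PC
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)
    # Z^3 - (1 - B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B ** 2 - 2.0 * B,
              -(A * B - B ** 2 - B ** 3)]
    roots = np.roots(coeffs)
    z = roots[np.abs(roots.imag) < 1e-9].real
    return z * R * T / P

# Supercritical state point (illustrative): 700 K, 30 MPa.
T, P = 700.0, 30.0e6
for label, omega in [("genuine omega = 0.344", 0.344),
                     ("retuned omega = 0.5491", 0.5491)]:
    print(f"{label}: v = {pr_molar_volume(T, P, omega)} m^3/mol")
```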

Keywords: equation of state, EoS, supercritical water, SCW

Procedia PDF Downloads 503