Search results for: measurement accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6128
4658 Internet of Things Networks: Denial of Service Detection in Constrained Application Protocol Using Machine Learning Algorithm

Authors: Adamu Abdullahi, On Francisca, Saidu Isah Rambo, G. N. Obunadike, D. T. Chinyio

Abstract:

The paper discusses the threat of Denial of Service (DoS) attacks on the Constrained Application Protocol (CoAP) in Internet of Things (IoT) networks. As billions of IoT devices are expected to be connected to the internet in the coming years, these devices are vulnerable to attacks that disrupt their functioning. This research tackles the issue by applying mixed qualitative and quantitative methods for feature selection and extraction, together with clustering algorithms, to detect DoS attacks on CoAP using a Machine Learning Algorithm (MLA). The main objective of the research is to enhance the security scheme for CoAP in the IoT environment by analyzing the nature of DoS attacks and identifying a new set of features for detecting them in the IoT network environment. The aim is to demonstrate the effectiveness of the MLA in detecting DoS attacks and to compare it with conventional intrusion detection systems for securing CoAP in the IoT environment. Findings: The research identifies the appropriate node for detecting DoS attacks in the IoT network environment and demonstrates how to detect the attacks through the MLA. The detection accuracy in both the classification and network simulation environments shows that the k-means algorithm scored the highest percentage in the training and testing evaluations. The network simulation platform also achieved the highest overall accuracy, 99.93%. This work reviews conventional intrusion detection systems for securing CoAP in the IoT environment, and the DoS security issues associated with CoAP are discussed.
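A minimal, pure-Python sketch of the clustering step the abstract describes: k-means applied to per-client traffic features so that a dense, high-rate cluster can be flagged as DoS-like. The feature pair (requests/s, payload bytes) and all numbers are illustrative assumptions, not the paper's feature set.

```python
# Pure-Python k-means sketch for flagging DoS-like CoAP traffic clusters.
# Feature pair (requests/s, payload bytes) is an illustrative assumption.

def kmeans(points, k, iters=20):
    # Deterministic init: take the first k points as centroids.
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to the nearest centroid (squared Euclidean).
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # Move each centroid to its cluster mean.
                centroids[i] = [sum(xs) / len(cl) for xs in zip(*cl)]
    return centroids

# Toy traffic: normal clients vs. a high-rate, small-payload flood.
normal = [(5, 60), (6, 64), (4, 58)]
attack = [(300, 8), (320, 10), (310, 9)]
cents = sorted(kmeans(normal + attack, k=2))
```

The cluster with the extreme centroid can then be labelled as the attack class; the paper additionally evaluates the detector in a network simulation.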

Keywords: algorithm, CoAP, DoS, IoT, machine learning

Procedia PDF Downloads 81
4657 A Two-Stage Bayesian Variable Selection Method with the Extension of Lasso for Geo-Referenced Data

Authors: Georgiana Onicescu, Yuqian Shen

Abstract:

Due to the complex nature of geo-referenced data, multicollinearity of the risk factors in public health spatial studies is a commonly encountered issue, which leads to low parameter estimation accuracy because it inflates the variance in the regression analysis. To address this issue, we proposed a two-stage variable selection method by extending the least absolute shrinkage and selection operator (Lasso) to the Bayesian spatial setting, investigating the impact of risk factors on health outcomes. Specifically, in stage I, we performed variable selection using the Bayesian Lasso and several other variable selection approaches. Then, in stage II, we performed model selection with only the variables selected in stage I and compared the methods again. To evaluate the performance of the two-stage variable selection methods, we conducted a simulation study with different distributions for the risk factors, using geo-referenced count data as the outcome and Michigan as the research region. We considered the cases when all candidate risk factors are independently normally distributed, or follow a multivariate normal distribution with different correlation levels. Two other Bayesian variable selection methods, the binary indicator and the combination of binary indicator and Lasso, were considered and compared as alternative methods. The simulation results indicated that the proposed two-stage Bayesian Lasso variable selection method has the best performance for both the independent and dependent cases considered. When compared with the one-stage approach and the other two alternative methods, the two-stage Bayesian Lasso approach provides the highest estimation accuracy in all scenarios considered.
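At the core of the Lasso is the soft-thresholding operator, which shrinks small coefficients exactly to zero and thereby performs the variable selection used in stage I. A minimal sketch of the operator itself (not the full Bayesian spatial model):

```python
def soft_threshold(z, lam):
    """Lasso soft-thresholding: S(z, lam) = sign(z) * max(|z| - lam, 0)."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Coefficients with |z| <= lam are shrunk exactly to zero, which is what
# makes the Lasso a variable-selection device rather than pure shrinkage.
kept    = soft_threshold(3.0, 1.0)    # large effect survives, shrunk
dropped = soft_threshold(-0.5, 1.0)   # weak effect is selected out
```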

Keywords: Lasso, Bayesian analysis, spatial analysis, variable selection

Procedia PDF Downloads 146
4656 Creep Analysis and Rupture Evaluation of High Temperature Materials

Authors: Yuexi Xiong, Jingwu He

Abstract:

The structural components in an energy facility such as steam turbine machines operate under high stress and elevated temperature over extended periods of time, and thus creep deformation and creep rupture failure are important issues that need to be addressed in the design of such components. Numerous creep models are used for creep analysis, each with advantages and disadvantages in terms of accuracy and efficiency. The Isochronous Creep Analysis is one of the simplified approaches, in which a full time-dependent creep analysis is avoided and instead an elastic-plastic analysis is conducted at each time point. This approach has been established based on the rupture-dependent creep equations using the well-known Larson-Miller parameter. In this paper, some fundamental aspects of creep deformation and the rupture-dependent creep models are reviewed, and the analysis procedures using isochronous creep curves are discussed. Four rupture failure criteria are examined from creep fundamentals: the Stress Damage, Strain Damage, Strain Rate Damage, and Strain Capability criteria. The accuracy of these criteria in predicting creep life is discussed, and applications of the creep analysis procedures and failure predictions of simple models are presented. In addition, a new failure criterion is proposed to improve the accuracy and effectiveness of the existing criteria. Comparisons are made between the existing criteria and the new one using several example materials. Both strain increase and stress relaxation form a full picture of the creep behaviour of a material under high temperature over an extended time period. It is important to bear this in mind when dealing with creep problems. Accordingly, there are two sets of rupture-dependent creep equations. While the rupture strength vs. LMP equation shows how the rupture time depends on the stress level under load-controlled conditions, the strain rate vs. rupture time equation reflects how the rupture time behaves under strain-controlled conditions. Among the four existing failure criteria for rupture life prediction, the Stress Damage and Strain Damage criteria provide the most conservative and non-conservative predictions, respectively. The Strain Rate and Strain Capability criteria provide predictions in between, which are believed to be more accurate because the strain rate and strain capability are better-determined quantities than stress for reflecting creep rupture behaviour. A modified Strain Capability Criterion is proposed that makes use of the two sets of creep equations and is therefore considered more accurate than the original Strain Capability Criterion.
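The Larson-Miller parameter mentioned above ties temperature and rupture time together as LMP = T(C + log10 t_r). A small sketch of the relation and its inversion; the constant C = 20 and the 600 °C example are common but material-dependent assumptions, not values from the paper:

```python
import math

C = 20.0  # commonly used Larson-Miller constant (material-dependent assumption)

def lmp(temp_k, rupture_hours):
    """Larson-Miller parameter: LMP = T * (C + log10(t_r)), T in kelvin."""
    return temp_k * (C + math.log10(rupture_hours))

def rupture_time(temp_k, lmp_value):
    """Invert the LMP relation to get rupture time (hours) at a temperature."""
    return 10.0 ** (lmp_value / temp_k - C)

p = lmp(873.0, 10_000.0)      # 600 deg C, 10,000 h rupture life
t = rupture_time(873.0, p)    # round-trips to 10,000 h
```

In an isochronous analysis, the rupture-strength vs. LMP curve fitted for a given material plays the role of `lmp` here, letting a short high-temperature test stand in for a long lower-temperature one.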

Keywords: creep analysis, high temperature materials, rupture evaluation, steam turbine machines

Procedia PDF Downloads 292
4655 Investigation of Projected Organic Waste Impact on a Tropical Wetland in Singapore

Authors: Swee Yang Low, Dong Eon Kim, Canh Tien Trinh Nguyen, Yixiong Cai, Shie-Yui Liong

Abstract:

Nee Soon swamp forest is one of the last vestiges of tropical wetland in Singapore. Understanding the hydrological regime of the swamp forest and implications for water quality is critical to guide stakeholders in implementing effective measures to preserve the wetland against anthropogenic impacts. In particular, although current field measurement data do not indicate a concern with organic pollution, reviewing the ways in which the wetland responds to elevated organic waste influx (and the corresponding impact on dissolved oxygen, DO) can help identify potential hotspots, and the impact on the outflow from the catchment which drains into downstream controlled watercourses. An integrated water quality model is therefore developed in this study to investigate spatial and temporal concentrations of DO levels and organic pollution (as quantified by biochemical oxygen demand, BOD) within the catchment’s river network under hypothetical, projected scenarios of spiked upstream inflow. The model was developed using MIKE HYDRO for modelling the study domain, as well as the MIKE ECO Lab numerical laboratory for characterising water quality processes. Model parameters are calibrated against time series of observed discharges at three measurement stations along the river network. Over a simulation period of April 2014 to December 2015, the calibrated model predicted that a continuous spiked inflow of 400 mg/l BOD will elevate downstream concentrations at the catchment outlet to an average of 12 mg/l, from an assumed nominal baseline BOD of 1 mg/l. Levels of DO were decreased from an initial 5 mg/l to 0.4 mg/l. Though a scenario of spiked organic influx at the swamp forest’s undeveloped upstream sub-catchments is currently unlikely to occur, the outcomes nevertheless will be beneficial for future planning studies in understanding how the water quality of the catchment will be impacted should urban redevelopment works be considered around the swamp forest.
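The coupled BOD-DO response the study simulates in MIKE ECO Lab can be illustrated with the classic Streeter-Phelps oxygen-sag model (an illustration only, not the authors' model); the rate constants, loading, and saturation value below are assumptions chosen near the paper's reported concentrations:

```python
import math

def do_deficit(t, L0, D0, kd, ka):
    """Streeter-Phelps oxygen deficit D(t) downstream of a BOD load.
    L0: initial BOD (mg/l), D0: initial deficit (mg/l),
    kd: deoxygenation rate (1/day), ka: reaeration rate (1/day)."""
    return (kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t)) \
        + D0 * math.exp(-ka * t)

# Illustrative numbers only: 12 mg/l BOD as in the spiked-outlet scenario,
# initial deficit 3 mg/l (assumed saturation ~8 mg/l minus a 5 mg/l baseline).
DO_SAT = 8.0
deficits = [do_deficit(t, 12.0, 3.0, kd=0.3, ka=0.6) for t in range(10)]
do_profile = [DO_SAT - d for d in deficits]  # dissolved oxygen over days 0..9
```

The sag shape (DO dipping before reaeration recovers it) is what makes spatial hotspot identification along the river network meaningful.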

Keywords: hydrology, modeling, water quality, wetland

Procedia PDF Downloads 141
4654 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model

Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You

Abstract:

The separation of speech signals has become a research hotspot in the field of signal processing in recent years. It has many applications and influences in teleconferencing, hearing aids, speech recognition by machines, and so on. The sounds received are usually noisy. Identifying the sounds of interest and obtaining clear sounds in such an environment becomes a problem worth exploring, that is, the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for the problem of under-determined blind source separation. The method is mainly divided into two parts. Firstly, a clustering algorithm is used to estimate the mixing matrix from the observed signals. Then the signal is separated based on the known mixing matrix. In this paper, the problem of mixing matrix estimation is studied, and an improved algorithm to estimate the mixing matrix for speech signals in the UBSS model is proposed. The traditional potential algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper considers an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in traditional clustering algorithms, but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. The simulation results show that the approach in this paper not only improves the estimation accuracy but also applies to any mixing matrix.
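A toy sketch of the potential-function idea for two-channel mixtures: when the sources are sparse, observation vectors cluster along the directions of the mixing-matrix columns, and peaks of a kernel "potential" over the direction angle recover those columns. The Gaussian kernel, its width, and the 30°/70° toy columns below are assumptions, not the paper's improved potential function.

```python
import math
import random

def direction_angles(x1, x2):
    """Map each observation pair to a direction angle in [0, pi)."""
    angs = []
    for a, b in zip(x1, x2):
        if abs(a) + abs(b) < 1e-9:  # skip near-silent samples
            continue
        angs.append(math.atan2(b, a) % math.pi)  # sign-normalized direction
    return angs

def potential(theta, angles, sigma=0.05):
    """Gaussian-kernel potential; peaks sit at mixing-matrix column angles."""
    return sum(math.exp(-((theta - a) ** 2) / (2 * sigma ** 2)) for a in angles)

# Toy mixtures: two sparse sources active alternately, columns at 30 and 70 deg.
random.seed(0)
x1, x2 = [], []
for _ in range(200):
    s = random.gauss(0, 1)
    col = math.radians(30) if random.random() < 0.5 else math.radians(70)
    x1.append(s * math.cos(col))
    x2.append(s * math.sin(col))

angs = direction_angles(x1, x2)
scores = [potential(math.radians(d), angs) for d in range(180)]  # 1-deg grid
```

Each recovered peak angle yields one column (cos θ, sin θ) of the estimated mixing matrix; the paper's contribution is making this peak-finding robust at low SNR.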

Keywords: DBSCAN, potential function, speech signal, the UBSS model

Procedia PDF Downloads 135
4653 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting

Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas

Abstract:

The paper presents the results and industrial applications of production setup period estimation based on industrial data inherited from the field of polymer cutting. The literature on polymer cutting is very limited in terms of the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation, and painting industry branches. Typically, 20% of these parts are new work, which means that every five years almost the entire product portfolio is replaced in their low-series manufacturing environment. Consequently, it requires a flexible production system, where the estimation of the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters have been studied and grouped to create an adequate training information set for an artificial neural network as a basis for the estimation of the individual setup periods. In the first group, product information is collected, such as product name and number of items. The second group contains material data like material type and colour. In the third group, surface quality and tolerance information are collected, including the finest surface and tightest (or narrowest) tolerance. The fourth group contains setup data like machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partner's estimations were previously based. The artificial neural network model was trained on several thousands of real industrial data points.
The mean estimation accuracy of the setup periods' lengths was improved by 30%, and at the same time the deviation of the prognosis was improved by 50%. Furthermore, the influence of the mentioned parameter groups, considering the manufacturing order, was also investigated. The paper also highlights the manufacturing introduction experiences and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week more than 100 real industrial setup events occur, and the related data are collected.

Keywords: artificial neural network, low series manufacturing, polymer cutting, setup period estimation

Procedia PDF Downloads 245
4652 Submarine Topography and Beach Survey of Gang-Neung Port in South Korea, Using Multi-Beam Echo Sounder and Shipborne Mobile Light Detection and Ranging System

Authors: Won Hyuck Kim, Chang Hwan Kim, Hyun Wook Kim, Myoung Hoon Lee, Chan Hong Park, Hyeon Yeong Park

Abstract:

We conducted a submarine topography and beach survey between December 2015 and January 2016 using a multi-beam echo sounder, the EM3001 (Kongsberg), and a Shipborne Mobile LiDAR System. Our survey area was Anmok Beach in Gangneung, South Korea. We built the Shipborne Mobile LiDAR System for this survey. It includes a LiDAR (RIEGL LMS-420i), an IMU (Inertial Measurement Unit, MAGUS Inertial+), and RTK-GNSS (Real Time Kinematic Global Navigation Satellite System, LEIAC GS 15 GS25) for beach measurement, motion compensation of the LiDAR, and precise positioning. The system scans the beach with a laser from the moving vessel and was mounted on top of the vessel. Before the beach survey, we conducted an eight-circle IMU calibration survey to stabilize the IMU heading. The survey should run as close to the beach as possible, but our vessel could not come closer to the beach because of objects in the water. At the same time, we conducted a submarine topography survey using the EM3001 multi-beam echo sounder. A multi-beam echo sounder is a device that observes and records the submarine topography using sound waves. We mounted the multi-beam echo sounder on the left side of the vessel, which was also equipped with a motion sensor, DGNSS (Differential Global Navigation Satellite System), and an SV (sound velocity) sensor for the vessel's motion compensation, the vessel's position, and the sound velocity of seawater, respectively. The Shipborne Mobile LiDAR System reduced the time consumed by the beach survey compared with previous conventional survey methods.

Keywords: Anmok, beach survey, Shipborne Mobile LiDAR System, submarine topography

Procedia PDF Downloads 430
4651 Is It Important to Measure the Volumetric Mass Density of Nanofluids?

Authors: Z. Haddad, C. Abid, O. Rahli, O. Margeat, W. Dachraoui, A. Mataoui

Abstract:

The present study aims to measure the volumetric mass density of NiPd-heptane nanofluids synthesized using a one-step method known as thermal decomposition of metal-surfactant complexes. The particle concentration is up to 7.55 g/l and the temperature range of the experiment is from 20°C to 50°C. The measured values were compared with the mixture theory and good agreement between the theoretical equation and measurement were obtained. Moreover, the available nanofluids volumetric mass density data in the literature is reviewed.
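The mixture theory referenced above is a volume-weighted average of the particle and base-fluid densities, with the volume fraction obtained from the mass loading. A quick sketch; the NiPd and heptane density values are rough assumptions for illustration, not the paper's measurements:

```python
def nanofluid_density(c_gl, rho_p, rho_bf):
    """Mixture-theory density of a nanofluid.
    c_gl: particle loading in g/l (numerically equal to kg/m^3),
    rho_p / rho_bf: particle / base-fluid densities in kg/m^3."""
    phi = c_gl / rho_p                      # particle volume fraction
    return phi * rho_p + (1.0 - phi) * rho_bf

# Illustrative values: NiPd taken as ~9000 kg/m^3 (assumption),
# heptane ~684 kg/m^3, at the paper's maximum loading of 7.55 g/l.
rho = nanofluid_density(7.55, 9000.0, 684.0)
```

At such dilute loadings the predicted density sits only fractions of a percent above the base fluid, which is why careful measurement is needed to test the mixture rule at all.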

Keywords: NiPd nanoparticles, nanofluids, volumetric mass density, stability

Procedia PDF Downloads 403
4650 A Case Study on the Field Surveys and Repair of a Marine Approach-Bridge

Authors: S. H. Park, D. W. You

Abstract:

This study concerns the field survey and repair works on a marine approach-bridge. In order to evaluate the stability of the ground and the structure, field surveys such as exterior inspection, non-destructive inspection, measurement, and geophysical exploration are carried out. Numerical analysis is conducted at the same time to investigate the cause of the abutment displacement. In addition, repair works are applied to the damaged region in order to ensure long-term safety.

Keywords: field survey, expansion joint, repair, maintenance

Procedia PDF Downloads 292
4649 Providing Healthy Food in Primary and Secondary Schools of Saudi Arabia to Significantly Reduce Obesity and Improve Health by Using the Star Rating System for a Healthier Diet

Authors: Emran M. Badghish

Abstract:

Overweight and obesity have now become an epidemic around the globe, in high- as well as low-income regions. It is important to use preventive measures that are cost-effective. Schools are the essence of building societies, and engaging them in healthy nutrition offers a way to reach individuals at an early stage in life, with many positive and significant impacts. Aim: To provide healthy food in schools for children aged 5 to 18 years. Methods: Distribute healthy food in schools and implement a star rating system for healthier foods, ranging from five stars for the healthiest option to half a star for the unhealthiest. The star system was developed in Australia and should motivate children to consume the healthier nutritional options. Each canteen should maintain a minimum rating of 3.5 stars for the food provided. Outcome Measurement: Body mass index, as an indicator of overweight and obesity, should be checked at the beginning of the study and then annually for five years for all children. As a secondary measure, academic performance (grades) and a questionnaire on eating habits will be assessed at the start of the study and yearly thereafter. Expected Outcome: Lower health-risk behaviour and support for children in reaching their potential as they adapt to healthier eating. Nutrition during childhood has the potential to prevent obesity, type 2 diabetes, dental diseases, hypertension and, in later life, cardiovascular disease, osteoporosis, and a variety of cancers. In Australia, New South Wales has, since 2016, targeted a 5% reduction in childhood overweight and obesity by 2025. For Saudi Arabia, an even greater reduction is expected by 2023, as many of our children depend on school canteens. Conclusion: Introducing healthy food in schools is a preventive method that would significantly reduce the prevalence of obesity in Saudi Arabia and improve its general health.

Keywords: food, healthy, children, obesity, schools

Procedia PDF Downloads 194
4648 Influence of Alcohol Consumption on Attention in Wistar Albino Rats

Authors: Adekunle Adesina, Dorcas Adesina

Abstract:

This research investigated the influence of alcohol consumption on attention in Wistar albino rats. It was designed to test whether or not alcohol consumption affected visual and auditory attention. The sample comprised three male and three female albino rats, which were randomly assigned to three groups (one male and one female each): 1, 2, and 3. The first group, experimental group 1, received 4 ml of alcohol by cannula twice daily (morning and evening). The second group, experimental group 2, received 2 ml of alcohol by cannula twice daily (morning and evening). The third group, the control group, received only water (placebo). All of this took place over a period of 2 days. Three hypotheses were advanced and tested in the study. Hypothesis 1 stated that there would be no significant difference between the response speed of albino rats that consumed alcohol and those that consumed water on visual attention using the 5-CSRTT. This was confirmed (DF (2, 9) = 0.72, P < .05). Hypothesis 2 stated that albino rats that consumed alcohol would perform better than those that consumed water on auditory accuracy using the 5-CSRTT. This was also tested but not confirmed (DF (2, 9) = 2.10, P < .05). The third hypothesis, which stated that female albino rats that consumed alcohol would not perform better than male albino rats that consumed alcohol on auditory accuracy using the 5-CSRTT, was tested and not confirmed (DF (4) = 0.17, P < .05). Data were analyzed using one-way ANOVA and t-tests for independent measures. It is therefore recommended that government policies and programs be directed at reducing alcohol consumption to the barest minimum, especially among males, as it is detrimental to human auditory attention.

Keywords: alcohol, attention, influence, rats, Wistar

Procedia PDF Downloads 267
4647 An Enhanced Approach in Validating Analytical Methods Using Tolerance-Based Design of Experiments (DoE)

Authors: Gule Teri

Abstract:

The effective validation of analytical methods forms a crucial component of pharmaceutical manufacturing. However, traditional validation techniques can occasionally fail to fully account for inherent variations within datasets, which may result in inconsistent outcomes. This deficiency in validation accuracy is particularly noticeable when quantifying low concentrations of active pharmaceutical ingredients (APIs), excipients, or impurities, introducing a risk to the reliability of the results and, subsequently, the safety and effectiveness of the pharmaceutical products. In response to this challenge, we introduce an enhanced, tolerance-based Design of Experiments (DoE) approach for the validation of analytical methods. This approach distinctly measures variability with reference to tolerance or design margins, enhancing the precision and trustworthiness of the results. This method provides a systematic, statistically grounded validation technique that improves the truthfulness of results. It offers an essential tool for industry professionals aiming to guarantee the accuracy of their measurements, particularly for low-concentration components. By incorporating this innovative method, pharmaceutical manufacturers can substantially advance their validation processes, subsequently improving the overall quality and safety of their products. This paper delves deeper into the development, application, and advantages of this tolerance-based DoE approach and demonstrates its effectiveness using High-Performance Liquid Chromatography (HPLC) data for verification. This paper also discusses the potential implications and future applications of this method in enhancing pharmaceutical manufacturing practices and outcomes.

Keywords: tolerance-based design, design of experiments, analytical method validation, quality control, biopharmaceutical manufacturing

Procedia PDF Downloads 81
4646 Evaluating Forecasting Strategies for Day-Ahead Electricity Prices: Insights From the Russia-Ukraine Crisis

Authors: Alexandra Papagianni, George Filis, Panagiotis Papadopoulos

Abstract:

The liberalization of the energy market and the increasing penetration of fluctuating renewables (e.g., wind and solar power) have heightened the importance of the spot market for ensuring efficient electricity supply. This is further emphasized by the EU’s goal of achieving net-zero emissions by 2050. The day-ahead market (DAM) plays a key role in European energy trading, accounting for 80-90% of spot transactions and providing critical insights for next-day pricing. Therefore, short-term electricity price forecasting (EPF) within the DAM is crucial for market participants to make informed decisions and improve their market positioning. Existing literature highlights out-of-sample performance as a key factor in assessing EPF accuracy, with influencing factors such as predictors, forecast horizon, model selection, and strategy. Several studies indicate that electricity demand is a primary price determinant, while renewable energy sources (RES) like wind and solar significantly impact price dynamics, often lowering prices. Additionally, incorporating data from neighboring countries, due to market coupling, further improves forecast accuracy. Most studies predict up to 24 steps ahead using hourly data, while some extend forecasts using higher-frequency data (e.g., half-hourly or quarter-hourly). Short-term EPF methods fall into two main categories: statistical and computational intelligence (CI) methods, with hybrid models combining both. While many studies use advanced statistical methods, particularly through different versions of traditional AR-type models, others apply computational techniques such as artificial neural networks (ANNs) and support vector machines (SVMs). Recent research combines multiple methods to enhance forecasting performance. Despite extensive research on EPF accuracy, a gap remains in understanding how forecasting strategy affects prediction outcomes. While iterated strategies are commonly used, they are often chosen without justification. 
This paper contributes by examining whether the choice of forecasting strategy impacts the quality of day-ahead price predictions, especially for multi-step forecasts. We evaluate both iterated and direct methods, exploring alternative ways of conducting iterated forecasts on benchmark and state-of-the-art forecasting frameworks. The goal is to assess whether these factors should be considered by end-users to improve forecast quality. We focus on the Greek DAM using data from July 1, 2021, to March 31, 2022. This period is chosen due to significant price volatility in Greece, driven by its dependence on natural gas and limited interconnection capacity with larger European grids. The analysis covers two phases: pre-conflict (January 1, 2022, to February 23, 2022) and post-conflict (February 24, 2022, to March 31, 2022), following the Russia-Ukraine conflict that triggered an energy crisis. We use the mean absolute percentage error (MAPE) and symmetric mean absolute percentage error (sMAPE) for evaluation, as well as the Direction of Change (DoC) measure to assess the accuracy of price movement predictions. Our findings suggest that forecasters should evaluate all strategies across different horizons and models. Different strategies may be required for different horizons to optimize both accuracy and directional predictions, ensuring more reliable forecasts.
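The three evaluation measures named above are straightforward to compute. A minimal sketch; the DoC variant below scores the sign of the predicted move against the previous actual price, which is one reasonable convention among several:

```python
def mape(actual, forecast):
    """Mean absolute percentage error (%)."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

def smape(actual, forecast):
    """Symmetric MAPE (%), bounded and less sensitive to near-zero prices."""
    return 100.0 * sum(2.0 * abs(f - a) / (abs(a) + abs(f))
                       for a, f in zip(actual, forecast)) / len(actual)

def direction_of_change(actual, forecast):
    """Share of steps where the forecast gets the price movement's sign right."""
    hits = sum((a1 - a0) * (f1 - a0) > 0
               for a0, a1, f1 in zip(actual, actual[1:], forecast[1:]))
    return hits / (len(actual) - 1)

# Toy hourly prices (EUR/MWh) and one-step-ahead forecasts.
prices = [100.0, 110.0, 105.0, 120.0]
preds  = [100.0, 108.0, 107.0, 118.0]
```

MAPE alone can hide directional misses, which is why the paper reports DoC alongside the percentage errors.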

Keywords: short-term electricity price forecast, forecast strategies, forecast horizons, recursive strategy, direct strategy

Procedia PDF Downloads 11
4645 Data Augmentation for Early-Stage Lung Nodules Using Deep Image Prior and Pix2pix

Authors: Qasim Munye, Juned Islam, Haseeb Qureshi, Syed Jung

Abstract:

Lung nodules are commonly identified in computed tomography (CT) scans by experienced radiologists at a relatively late stage. Early diagnosis can greatly increase survival. We propose using a pix2pix conditional generative adversarial network to generate realistic images simulating early-stage lung nodule growth. We applied deep image prior to 2,341 slices from 895 computed tomography (CT) scans from the Lung Image Database Consortium (LIDC) dataset to generate pseudo-healthy medical images. From these images, 819 were chosen to train a pix2pix network. We observed that for most of the images, the pix2pix network was able to generate images in which the nodule increased in size and intensity across epochs. To evaluate the images, 400 generated images were chosen at random and shown to a medical student alongside their corresponding original images. Of these 400 generated images, 384 were judged satisfactory, meaning they resembled a nodule and were visually similar to the corresponding image. We believe that this generated dataset could be used as training data for neural networks to detect lung nodules at an early stage, or to improve the accuracy of such networks. This is particularly significant as datasets containing the growth of early-stage nodules are scarce. This project shows that the combination of deep image prior and generative models could open the door to creating larger datasets than is currently possible and has the potential to increase the accuracy of medical classification tasks.

Keywords: medical technology, artificial intelligence, radiology, lung cancer

Procedia PDF Downloads 72
4644 Effect of Knowledge of Bubble Point Pressure on Estimating PVT Properties from Correlations

Authors: Ahmed El-Banbi, Ahmed El-Maraghi

Abstract:

PVT properties are needed as input data in all reservoir, production, and surface facilities engineering calculations. In the absence of PVT reports on valid reservoir fluid samples, engineers rely on PVT correlations to generate the required PVT data. The accuracy of PVT correlations varies, and no correlation group has been found to provide accurate results for all oil types. The effect of inaccurate PVT data can be significant in engineering calculations and is well documented in the literature. Bubble point pressure can sometimes be obtained from external sources. In this paper, we show how to utilize the known bubble point pressure to improve the accuracy of calculated PVT properties from correlations. We conducted a systematic study using around 250 reservoir oil samples to quantify the effect of pre-knowledge of bubble point pressure. The samples spanned a wide range of oils, from very volatile oils to black oils and all the way to low-GOR oils. A method for shifting both undersaturated and saturated sections of the PVT properties curves to the correct bubble point is explained. Seven PVT correlation families were used in this study. All PVT properties (e.g., solution gas-oil ratio, formation volume factor, density, viscosity, and compressibility) were calculated using the correct bubble point pressure and the correlation estimated bubble point pressure. Comparisons between the calculated PVT properties and actual laboratory-measured values were made. It was found that pre-knowledge of bubble point pressure and using the shifting technique presented in the paper improved the correlation-estimated values by 10% to more than 30%. The most improvement was seen in the solution gas-oil ratio and formation volume factor.
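One simple way to realize the shifting idea described above is to rescale the pressure axis of the correlation-generated saturated branch so that its bubble point coincides with the externally known Pb. This is a sketch of one plausible variant, not necessarily the paper's exact procedure, and the curve values are toy numbers:

```python
def shift_to_known_pb(curve, pb_corr, pb_known):
    """Stretch the pressure axis of a saturated-branch PVT curve so the
    correlation's bubble point pb_corr lands on the lab-known pb_known.
    curve: list of (pressure_psia, property) points up to pb_corr."""
    scale = pb_known / pb_corr
    return [(p * scale, prop) for p, prop in curve]

# Toy saturated Rs curve from a correlation with Pb = 2000 psia,
# shifted to a lab-known Pb = 2400 psia (values are illustrative).
rs_curve = [(500.0, 150.0), (1000.0, 320.0), (2000.0, 600.0)]
shifted = shift_to_known_pb(rs_curve, pb_corr=2000.0, pb_known=2400.0)
```

The undersaturated branch would then be re-anchored at the shifted bubble point so the two branches meet at the correct Pb.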

Keywords: PVT data, PVT properties, PVT correlations, bubble point pressure

Procedia PDF Downloads 65
4643 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis

Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya

Abstract:

In this study, our goal was to perform tumor staging and subtype determination automatically, using different texture analysis approaches, for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). In particular, we introduced a texture analysis approach, Laws' texture filters, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III, or IV, was 14. The patients had ~45% adenocarcinoma (ADC) and ~55% squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed in the extraction of 51 features using first-order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). Selected textural features were used in automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with one-versus-one SVM. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and in the automatic classification of tumor stage and subtype.
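Of the feature families listed, the gray-level co-occurrence matrix (GLCM) is easy to illustrate: count how often gray-level pairs co-occur at a fixed pixel offset, normalize, and derive statistics such as contrast and energy. A minimal sketch on a 4-level toy image; the offset and level count are arbitrary choices, not the paper's settings:

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    counts = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[image[r][c]][image[r2][c2]] += 1
                total += 1
    return [[v / total for v in row] for row in counts]

def contrast(p):
    # Weighted by squared gray-level difference: high for abrupt transitions.
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

def energy(p):
    # Sum of squared probabilities: maximal for uniform (orderly) texture.
    return sum(v * v for row in p for v in row)

# A uniform patch gives zero contrast and maximal energy;
# a checkerboard would do the opposite.
flat = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
p = glcm(flat)
```

In the study, 51 such features (FOS, GLCM, GLRLM, Laws') feed the SFS feature selector before k-NN/SVM classification.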

Keywords: cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis

Procedia PDF Downloads 327
4642 Information Technology Service Management System Measurement Using ISO20000-1 and ISO15504-8

Authors: Imam Asrowardi, Septafiansyah Dwi Putra, Eko Subyantoro

Abstract:

Process assessments can improve IT service management system (IT SMS) processes, but the assessment method is not always transparent. This paper outlines a project to develop a solution-mediated process assessment tool that enables transparent and objective SMS process assessment. Using the international standards for SMS and process assessment, the tool is being developed following the international standard approach, in collaboration with and evaluated by expert judgment from committee members and ITSM practitioners.

Keywords: SMS, tools evaluation, ITIL, ISO service

Procedia PDF Downloads 482
4641 Path-Tracking Controller for Tracked Mobile Robot on Rough Terrain

Authors: Toshifumi Hiramatsu, Satoshi Morita, Manuel Pencelli, Marta Niccolini, Matteo Ragaglia, Alfredo Argiolas

Abstract:

Automation technologies are needed in agriculture to reduce labor. One of the most relevant problems in automated agriculture is controlling the robot along a predetermined path in the presence of rough or inclined terrain. Unfortunately, disturbances originating from interaction with the ground, such as slipping, make it quite difficult to achieve the required accuracy; in general, the robot must stay within 5-10 cm of the predetermined path. Moreover, the lateral velocity induced by gravity on an inclined field also affects slipping. In this paper, a path-tracking controller for tracked mobile robots moving on rough, inclined terrain such as vineyards is presented. The controller is composed of a disturbance observer and an adaptive controller based on the kinematic model of the robot. The disturbance observer measures the difference between the measured and the reference yaw rate and linear velocity in order to estimate slip. The adaptive controller then adapts the “virtual” parameters of the kinematic model, the Instantaneous Centers of Rotation (ICRs), and the target angular velocity reference is computed from the adapted parameters. This solution allows the effects of slip to be estimated without making the model too complex. Finally, the effectiveness of the proposed solution is tested in a simulation environment.
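As a hedged illustration of adapting a "virtual" ICR parameter from a yaw-rate discrepancy (the paper's actual observer and update law are not given in the abstract, so the symmetric skid-steer model and the gradient-style gain below are assumptions), one step might look like:

```python
def predicted_yaw_rate(v_r, v_l, y_icr):
    """Yaw rate from a skid-steer kinematic model whose tracks have a
    symmetric ICR lateral offset y_icr (the offset grows under slip)."""
    return (v_r - v_l) / (2.0 * y_icr)

def adapt_icr(y_icr, v_r, v_l, yaw_measured, gain=0.1):
    """One observer step: nudge the virtual ICR offset so the model's
    yaw rate tracks the measured one (illustrative update law)."""
    error = predicted_yaw_rate(v_r, v_l, y_icr) - yaw_measured
    # model over-predicting yaw means the real ICRs sit farther out
    return y_icr + gain * error
```

One adaptation step moves the model yaw rate toward the measurement, which is the qualitative behavior the disturbance observer needs.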

Keywords: the agricultural robot, autonomous control, path-tracking control, tracked mobile robot

Procedia PDF Downloads 174
4640 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Model assessment, in the Bayesian context, involves evaluating goodness-of-fit and comparing several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction that penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive CV variant, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS) and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to exact LOO-CV; they reuse the existing MCMC results and so avoid this computational expense. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights, which are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly; in TIS-LOO and PSIS-LOO, the larger weights are replaced by their modified, truncated counterparts. Although information criteria and LOO-CV are unable to reflect goodness-of-fit in an absolute sense, their differences can be used to measure the relative performance of the models of interest.
However, the use of these measures is only valid under specific circumstances. This study developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data: four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered approximations of exact LOO-CV, the study observed some drastic deviations in their results. However, there are some interesting relationships among the logarithms of the pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles were observed for models whose lppds had equal posterior variances. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, together with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
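The IS-LOO construction described above (reciprocal predictive densities as raw importance weights) can be sketched from a matrix of pointwise log-likelihoods over posterior draws; this is a generic illustration of the harmonic-mean form, not the study's code:

```python
import numpy as np
from scipy.special import logsumexp

def lppd_and_is_loo(loglik):
    """loglik: (S, n) array of pointwise log-likelihoods over S posterior
    draws. Returns the total lppd and the raw importance-sampling LOO
    estimate (weights are the reciprocal predictive densities, which
    reduces to a harmonic mean of the densities per observation)."""
    S = loglik.shape[0]
    lppd = logsumexp(loglik, axis=0) - np.log(S)       # log mean density
    loo = -(logsumexp(-loglik, axis=0) - np.log(S))    # -log mean reciprocal
    return lppd.sum(), loo.sum()
```

TIS-LOO and PSIS-LOO differ only in how the largest raw weights are truncated or smoothed before the weighted average is taken; by Jensen's inequality the IS-LOO estimate never exceeds the lppd.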

Keywords: cross-validation, importance sampling, information criteria, predictive accuracy

Procedia PDF Downloads 393
4639 On the Added Value of Probabilistic Forecasts Applied to the Optimal Scheduling of a PV Power Plant with Batteries in French Guiana

Authors: Rafael Alvarenga, Hubert Herbaux, Laurent Linguet

Abstract:

The uncertainty concerning the power production of intermittent renewable energy is one of the main barriers to the integration of such assets into the power grid. Efforts have thus been made to develop methods to quantify this uncertainty, allowing producers to make more reliable and profitable engagements related to their future power delivery. Even though a diversity of probabilistic approaches has been proposed in the literature with promising results, the added value of adopting such methods for scheduling intermittent power plants is still unclear. In this study, the profits obtained by a decision-making model used to optimally schedule an existing PV power plant connected to batteries are compared when the model is fed with deterministic and with probabilistic forecasts generated by two of the most recent methods proposed in the literature. Moreover, deterministic forecasts with different accuracy levels were used in the experiments, testing the capability of probabilistic methods to model progressively increasing uncertainty. Even though probabilistic approaches have developed considerably in the recent literature, the results obtained through a case study show that deterministic forecasts still provide the best performance if accurate, ensuring a gain of 14% on final profits compared to the average performance of probabilistic models conditioned on the same forecasts. As the accuracy of the deterministic forecasts progressively decreases, probabilistic approaches become competitive options, and they completely outperform deterministic forecasts when these are very inaccurate, generating 73% more profits in the case considered.

Keywords: PV power forecasting, uncertainty quantification, optimal scheduling, power systems

Procedia PDF Downloads 87
4638 Monitoring of Educational Achievements of Kazakhstani 4th and 9th Graders

Authors: Madina Tynybayeva, Sanya Zhumazhanova, Saltanat Kozhakhmetova, Merey Mussabayeva

Abstract:

One of the leading indicators of education quality is the level of students’ educational achievements. The modernization of the Kazakhstani education system has predetermined the need to improve the national system for assessing the quality of education, and the results of assessment greatly contribute to addressing questions about the current state of the educational system in the country. The monitoring of students’ educational achievements (MEAS) is the systematic measurement of the quality of education for compliance with the state obligatory standard of Kazakhstan. This systematic measurement is independent of educational organizations and approved by order of the Minister of Education and Science of Kazakhstan. The MEAS was conducted in the regions of Kazakhstan for the first time in April 2022 by the National Testing Centre; 105 thousand students from 1,436 schools of Kazakhstan took part in the testing. The measurement has no legal consequences either for students or for educational organizations. Students’ achievements were measured in three subject areas: reading, mathematics, and science literacy. The monitoring was accompanied by a survey of students, teachers, and school leaders, with the goal of identifying which contextual factors affect learning outcomes. The testing was carried out in a computer format. The test tasks of MEAS are ranked according to three levels of difficulty: basic, medium, and high. Fourth graders were asked to complete 30 closed-type tasks; the average score was 21 points out of 30, meaning 70% of tasks were successfully completed. Ninth grade students answered 75 test questions; their results are comparatively lower, with a success rate of 63%. MEAS did not reveal a statistically significant gap in results in terms of the language of instruction, territorial status, or type of school.
The trend of a narrowing gap in these indicators is also noted in recent international studies conducted across the country, in particular PISA for schools in Kazakhstan. However, there is a regional gap in MEAS performance: the difference between the highest- and lowest-scoring regions was 11% in task success in the 4th grade and 14% in the 9th grade. The results of 4th grade students in reading, mathematics, and science literacy are 71.5%, 70%, and 66.9%, respectively; the results of ninth-graders are 69.6%, 54%, and 60.8%, respectively. The surveys revealed that students’ educational achievements are considerably influenced by factors such as the subject competences of teachers, the school climate, and student motivation. Thus, the results of MEAS indicate the need for an integrated approach to improving the quality of education. In particular, a combination of improvements is required: to the content of curricula and textbooks, to the internal and external assessment of students’ educational achievements, to the educational programs of pedagogical specialties, and to advanced training courses.

Keywords: assessment, secondary school, monitoring, functional literacy, Kazakhstan

Procedia PDF Downloads 108
4637 Development of an Experiment for Impedance Measurement of Structured Sandwich Sheet Metals by Using a Full Factorial Multi-Stage Approach

Authors: Florian Vincent Haase, Adrian Dierl, Anna Henke, Ralf Woll, Ennes Sarradj

Abstract:

Structured sheet metals and structured sandwich sheet metals are three-dimensional, lightweight structures with increased stiffness which are used in the automotive industry. The impedance, a measure of the resistance of a structure to vibrations, will be determined for plain sheets, structured sheets, and structured sandwich sheets. The aim of this paper is to generate an experimental design that minimizes the cost and duration of the experiments. The design of experiments is used to reduce the large number of single tests required to determine the correlation between the impedance and its influencing factors. Full and fractional factorials are applied in order to systematize and plan the experiments. Their major advantages are high-quality results from a relatively small number of trials and the ability to identify the most important influencing factors, including their specific interactions. The developed full factorial experimental design for the study of plain sheets includes three factor levels. In contrast, the impedance analysis of structured sheets and structured sandwich sheets is split into three phases. The first phase consists of preliminary tests which identify relevant factor levels. These factor levels are subsequently employed in main tests, which have the objective of identifying complex relationships between the parameters and the reference variable. Post-tests can follow if additional factor levels or other factors need to be studied. By using full and fractional factorial experimental designs, the required number of tests is reduced by half. In the context of this paper, the benefits of applying design of experiments are presented. Furthermore, a multi-stage approach is shown that takes into account unrealizable factor combinations and minimizes experiments.
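A full factorial plan such as the three-level design described above can be enumerated directly as the Cartesian product of the factor levels; the factor names and levels below are invented for illustration, not taken from the study:

```python
from itertools import product

def full_factorial(levels):
    """All factor-level combinations of a full factorial plan.
    levels: dict mapping factor name -> list of its levels."""
    names = list(levels)
    return [dict(zip(names, combo))
            for combo in product(*(levels[n] for n in names))]

# hypothetical factors for an impedance study
plan = full_factorial({
    "sheet_type": ["plain", "structured", "sandwich"],
    "thickness_mm": [0.5, 1.0, 1.5],
    "excitation": ["low", "high"],
})
# 3 * 3 * 2 = 18 single tests instead of ad-hoc one-factor-at-a-time runs
```

A fractional design then keeps only a balanced subset of these runs, which is how the halving of the number of tests mentioned above is achieved.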

Keywords: structured sheet metals, structured sandwich sheet metals, impedance measurement, design of experiment

Procedia PDF Downloads 375
4636 Location Uncertainty – A Probabilistic Solution for Automatic Train Control

Authors: Monish Sengupta, Benjamin Heydecker, Daniel Woodland

Abstract:

New train control systems rely mainly on Automatic Train Protection (ATP) and Automatic Train Operation (ATO) to dynamically control the speed and hence the performance. The ATP and the ATO form the vital elements within the CBTC (Communication Based Train Control) and ERTMS (European Rail Traffic Management System) system architectures. Reliable and accurate measurement of train location, speed, and acceleration is vital to the operation of train control systems. In the past, all CBTC and ERTMS systems have deployed a balise or equivalent to correct the uncertainty element of the train location. Typically, a CBTC train is allowed to miss only one balise on the track, after which the Automatic Train Protection (ATP) system applies the emergency brake to halt the service. This is because the location uncertainty, which grows within the train control system, cannot tolerate missing more than one balise. Balises contribute a significant amount towards wayside maintenance, and studies have shown that balises on the track also form a constraint on future track layout changes and changes in speed profile. This paper investigates the causes of the location uncertainty that is currently experienced and considers whether it is possible to identify an effective filter to ascertain, in conjunction with appropriate sensors, more accurate speed, distance, and location for a CBTC-driven train without the need for any external balises. An appropriate sensor fusion algorithm and intelligent sensor selection methodology will be deployed to ascertain the railway location and speed measurement at the highest precision. Similar techniques are already in use in aviation, satellite, submarine, and other navigation systems. Developing a model for the speed control and the use of a Kalman filter is a key element in this research.
This paper will summarize the research undertaken and its significant findings, highlighting the potential for introducing alternative approaches to train positioning that would enable the removal of all trackside location-correction balises, leading to a substantial reduction in maintenance and more flexibility in future track design.
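A minimal sketch of the kind of estimator discussed, a linear Kalman filter over a [position, speed] state fed by odometry, with an optional absolute position fix standing in for a balise, might look as follows. All noise parameters are invented for illustration; the paper's actual sensor-fusion design is not reproduced here:

```python
import numpy as np

def kf_step(x, P, a, dt, z_speed, z_pos=None,
            q=0.05, r_speed=0.1, r_pos=0.5):
    """One predict/update cycle: state x = [position, speed],
    acceleration input a, odometry speed measurement z_speed, and an
    optional absolute position fix z_pos (e.g. a balise or GNSS)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    x = F @ x + np.array([0.5 * dt**2, dt]) * a   # predict
    P = F @ P @ F.T + q * np.eye(2)
    if z_pos is None:                             # odometry only
        H = np.array([[0.0, 1.0]])
        z = np.array([z_speed])
        R = np.array([[r_speed]])
    else:                                         # odometry + absolute fix
        H = np.eye(2)
        z = np.array([z_pos, z_speed])
        R = np.diag([r_pos, r_speed])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)                       # update
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Running it with speed measurements alone lets the position variance grow without bound, exactly the uncertainty growth that today forces the one-missed-balise rule, while a single absolute fix collapses it again.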

Keywords: ERTMS, CBTC, ATP, ATO

Procedia PDF Downloads 410
4635 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model

Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero

Abstract:

Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. This model provides the user with theoretical support for designing the lithium-ion battery parameters, such as the material particle size or the adjustment direction of the diffusion coefficient. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs), such as Fick’s law of diffusion and the MacInnes and Ohm’s equations, among other phenomena. Thus, to use the model efficiently in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. Several numerical methods available in the literature can be used to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability, and computational time. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and computationally fast. In this work, the accuracy of the method and its stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It combines the implicit and explicit Euler methods, with the advantage of being second order in time and intrinsically stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, so it is feasible for both short- and long-term tests.
This last remark is particularly important, as this discretization technique would allow the user to implement parameter estimation and optimization techniques, such as system or genetic parameter identification methods, using this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not suitable for handling sharp gradients, which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select the adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the unsuitability of the explicit Euler method for long-term tests will be presented. Afterwards, the Crank-Nicolson and the Chebyshev discretization methods will be compared in terms of accuracy and computational time under a wide range of battery operating scenarios. These include both long-term simulations for aging tests and short- and mid-term battery charge/discharge cycles, typically relevant in battery applications such as grid primary frequency and inertia control and electric vehicle braking and acceleration.
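For the electrolyte diffusion PDE mentioned above, a bare-bones Crank-Nicolson march on a 1-D grid illustrates the averaging of explicit and implicit second differences that gives the method its unconditional stability. This is a generic sketch with fixed (Dirichlet) end values and none of the DFN coupling terms:

```python
import numpy as np

def crank_nicolson_diffusion(u0, D, dx, dt, steps):
    """Crank-Nicolson march of u_t = D * u_xx on a 1-D grid.
    Boundary rows are left as identity, so the end values stay fixed."""
    n = len(u0)
    r = D * dt / (2.0 * dx**2)
    A = np.zeros((n, n))          # half-weighted second-difference operator
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = r
        A[i, i] = -2.0 * r
    I = np.eye(n)
    lhs, rhs = I - A, I + A       # (I - A) u_new = (I + A) u_old
    u = np.asarray(u0, float)
    for _ in range(steps):
        u = np.linalg.solve(lhs, rhs @ u)
    return u
```

Unlike explicit Euler, this recursion remains stable for any positive time step, which is the property that makes the scheme attractive for long aging simulations.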

Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods

Procedia PDF Downloads 25
4634 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes

Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono

Abstract:

Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart for diagnosing disease. The Active Shape Model (ASM) is a widely used approach for LV segmentation but suffers from the drawback that the initialization of the shape model is often not sufficiently close to the target, especially when dealing with the abnormal shapes found in disease. In this work, a two-step framework is proposed to improve the accuracy and speed of model-based segmentation. Firstly, a robust and efficient detector based on Hough forests is proposed to localize cardiac feature points, and these points are used to predict the initial fitting of the LV shape model. Secondly, to achieve more accurate and detailed segmentation, the ASM is applied to further fit the LV shape model to the cardiac ultrasound image. The performance of the proposed method is evaluated on a dataset of 800 cardiac ultrasound images, mostly of abnormal shapes. The proposed method is compared to several combinations of ASM and existing initialization methods. The experimental results demonstrate that the accuracy of feature point detection for initialization improved by 40% compared to the existing methods. Moreover, the proposed method significantly reduces the number of necessary ASM fitting loops, thus speeding up the whole segmentation process. The proposed method is therefore able to achieve more accurate and efficient segmentation results and is applicable to unusual heart shapes arising from cardiac diseases, such as left atrial enlargement.

Keywords: hough forest, active shape model, segmentation, cardiac left ventricle

Procedia PDF Downloads 341
4633 Investigating the Influence of Solidification Rate on the Microstructural, Mechanical and Physical Properties of Directionally Solidified Al-Mg Based Multicomponent Eutectic Alloys Containing High Mg Alloys

Authors: Fatih Kılıç, Burak Birol, Necmettin Maraşlı

Abstract:

The directional solidification process is generally used for homogeneous compound production, single crystal growth, refining (zone refining), and similar processes. The two most important parameters that control eutectic structures are the temperature gradient and the growth rate, which are called the solidification parameters. The solidification behavior and microstructure characteristics are an interesting topic due to their effects on the properties and performance of alloys with eutectic compositions. The solidification behavior of multicomponent and multiphase systems is an important parameter for determining various properties of these materials. Research has mostly been conducted on the solidification of pure materials or alloys containing two phases; there are very few studies in the literature about the multiphase reactions and microstructure formation of multicomponent alloys during solidification. It is therefore important to study the microstructure formation and the thermodynamic, thermophysical, and microstructural properties of these alloys. The production process is difficult due to the easy oxidation of magnesium, and therefore there is no comprehensive study concerning alloys containing high Mg (> 30 wt.% Mg). With an increasing amount of Mg in Al alloys, the specific weight decreases and the strength shows a slight increase, while ductility is lowered due to the formation of the β-Al8Mg5 phase. For this reason, the production, examination, and development of high-Mg-containing alloys will initiate the production of new advanced engineering materials. The original value of this research can be described as obtaining high-Mg-containing (> 30% Mg) Al-based multicomponent alloys by melting under vacuum; controlled directional solidification with various growth rates at a constant temperature gradient; and establishing the relationship between the solidification rate and the microstructural, mechanical, electrical, and thermal properties.
Therefore, within the scope of this research, several ternary or quaternary Al alloy compositions containing > 30% Mg were determined, and the effects of the directional solidification rate on the mechanical, electrical, and thermal properties of these alloys are investigated. Specifically, the influence of the growth rate on the microstructure parameters, microhardness, tensile strength, electrical conductivity, and thermal conductivity of directionally solidified high-Mg-containing Al-32,2Mg-0,37Si; Al-30Mg-12Zn; Al-32Mg-1,7Ni; Al-32,2Mg-0,37Fe; Al-32Mg-1,7Ni-0,4Si; and Al-33,3Mg-0,35Si-0,11Fe (wt.%) alloys over a wide range of growth rates (50-2500 µm/s) at a fixed temperature gradient will be investigated. The work is planned as: (a) directional solidification of the Al-Mg based Al-Mg-Si, Al-Mg-Zn, Al-Mg-Ni, Al-Mg-Fe, Al-Mg-Ni-Si, and Al-Mg-Si-Fe alloys within a wide range of growth rates (50-2500 µm/s) at a constant temperature gradient in a Bridgman-type solidification system, (b) analysis of the microstructure parameters of the directionally solidified alloys using optical light microscopy and scanning electron microscopy (SEM), (c) measurement of the microhardness and tensile strength of the directionally solidified alloys, (d) measurement of the electrical conductivity by the four-point probe technique at room temperature, and (e) measurement of the thermal conductivity by the linear heat flow method at room temperature.

Keywords: directional solidification, electrical conductivity, high Mg containing multicomponent Al alloys, microhardness, microstructure, tensile strength, thermal conductivity

Procedia PDF Downloads 261
4632 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values

Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi

Abstract:

A major challenge in medical studies, especially longitudinal ones, is the problem of missing measurements, which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's disease studies have focused on the delineation of Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN), which is essential for developing effective and early treatment methods. To address these challenges, this paper explores the potential of the eXtreme Gradient Boosting (XGBoost) algorithm for handling missing values in multiclass classification. We seek a generalized classification scheme where all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study, and in the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes AD, CN, EMCI, and LMCI. Using a 10-fold cross-validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62%, and a recall of 80.51%, supporting the more natural and promising multiclass classification.
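The reported figures combine accuracy with what appear to be macro-averaged precision and recall over the four classes. As a small self-contained illustration (not the study's pipeline, and without the XGBoost model itself, which handles missing features internally by learning a default split direction), these metrics can be computed from a confusion matrix:

```python
import numpy as np

def multiclass_metrics(y_true, y_pred, classes):
    """Accuracy, macro precision, and macro recall from a confusion
    matrix, e.g. for a four-class AD/CN/EMCI/LMCI task."""
    k = len(classes)
    idx = {c: i for i, c in enumerate(classes)}
    cm = np.zeros((k, k), int)       # rows: true class, cols: predicted
    for t, p in zip(y_true, y_pred):
        cm[idx[t], idx[p]] += 1
    tp = np.diag(cm).astype(float)
    precision = np.mean(tp / np.maximum(cm.sum(axis=0), 1))
    recall = np.mean(tp / np.maximum(cm.sum(axis=1), 1))
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision, recall
```

Macro averaging weights all four diagnostic classes equally, which matters here because the prodromal stages are typically less frequent than the controls.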

Keywords: eXtreme gradient boosting, missing data, Alzheimer's disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest

Procedia PDF Downloads 189
4631 Comparison between Photogrammetric and Structure from Motion Techniques in Processing Unmanned Aerial Vehicles Imageries

Authors: Ahmed Elaksher

Abstract:

Over the last few years, significant progress has been made and new approaches have been proposed for the efficient collection of 3D spatial data from unmanned aerial vehicles (UAVs) at reduced costs compared to imagery from satellites or manned aircraft. In these systems, a low-cost GPS unit provides the position and velocity of the vehicle, a low-quality inertial measurement unit (IMU) determines its orientation, and off-the-shelf cameras capture the images. Structure from Motion (SfM) and photogrammetry are the main tools for 3D surface reconstruction from images collected by these systems. Unlike traditional techniques, SfM allows the computation of calibration parameters using point correspondences across images without performing a rigorous laboratory or field calibration process, and it is more flexible in that it does not require consistent image overlap or the same rotation angles between successive photos. These benefits make SfM ideal for UAV aerial mapping. In this paper, a direct comparison between SfM Digital Elevation Models (DEMs) and those generated through traditional photogrammetric techniques was performed. Data was collected by a 3DR IRIS+ Quadcopter with a Canon PowerShot S100 digital camera. Twenty ground control points were randomly distributed on the ground and surveyed with a total station in a local coordinate system. Images were collected from an altitude of 30 meters with a ground resolution of nine mm/pixel. Data was processed with PhotoScan, VisualSFM, Imagine Photogrammetry, and a photogrammetric algorithm developed by the author. The algorithm starts by performing a laboratory camera calibration; the acquired imagery then undergoes an orientation procedure to determine the cameras’ positions and orientations. After the orientation is attained, correlation-based image matching is conducted to automatically generate three-dimensional surface models, followed by a refining step using sub-pixel image information for high matching accuracy.
Tests with different numbers and configurations of the control points were conducted. Camera calibration parameters estimated from commercial software and those obtained with laboratory procedures were comparable. Exposure station positions agreed to within a few centimeters, and the differences among orientation angles were insignificant, less than three seconds of arc. DEM differencing was performed between the generated DEMs, and vertical shifts of a few centimeters were found.
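DEM differencing of the kind described above reduces to cell-by-cell subtraction on co-registered grids; a tiny generic sketch of the summary statistics involved:

```python
import numpy as np

def dem_difference_stats(dem_a, dem_b):
    """Cell-by-cell DEM differencing: mean vertical shift and RMSE
    between two co-registered elevation grids of the same shape."""
    d = np.asarray(dem_a, float) - np.asarray(dem_b, float)
    return d.mean(), np.sqrt((d ** 2).mean())
```

The mean difference captures a systematic vertical shift (the few-centimeter offsets reported above), while the RMSE also reflects random disagreement between the two surfaces.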

Keywords: UAV, photogrammetry, SfM, DEM

Procedia PDF Downloads 295
4630 Jordan, Towards Eliminating Preventable Maternal Deaths

Authors: Abdelmanie Suleimat, Nagham Abu Shaqra, Sawsan Majali, Issam Adawi, Heba Abo Shindi, Anas Al Mohtaseb

Abstract:

The Government of Jordan recognizes that maternal mortality constitutes a grave public health problem. Over the past two decades, there has been significant progress in improving the quality of maternal health services, resulting in improved maternal and child health outcomes. Despite these efforts, the measurement and analysis of maternal mortality remained a challenge, with significant discrepancies from previous national surveys that inhibited accuracy. In response, with support from USAID, the Jordan Maternal Mortality Surveillance and Response (JMMSR) System was established to collect and analyze data and equip policymakers for decision-making, guided by interdisciplinary, multi-level advisory groups, with the aim of eliminating preventable maternal deaths. A 2016 Public Health Bylaw required the notification of deaths among women of reproductive age. The JMMSR system was launched in 2018 and continues annually, analyzing data received from health facilities to guide policy that prevents avoidable deaths. To date, there have been four annual national maternal mortality reports (2018-2021). Data are collected, reviewed by the advisory groups, and then consolidated in an annual report to inform and guide the Ministry of Health (MOH). The JMMSR system collects the information necessary to calculate an accurate maternal mortality ratio and assists in identifying the leading causes and contributing factors for each maternal death. Based on these data, national response plans are created. A monitoring and evaluation plan was designed to define, track, and improve implementation through indicators. Over the past four years, one of these indicators, ‘percent of facilities notifying respective health directorates of all deaths of women of reproductive age,’ increased annually from 82.16%, 92.95%, and 92.50% to 97.02%.
The Government of Jordan demonstrated its commitment to the JMMSR system by designating the MOH as the system's primary host and as the lead for developing and disseminating policies and procedures to standardize implementation. The data were translated into practical, evidence-based recommendations. The successful impact of these results deepened the understanding of maternal mortality in Jordan and convinced the MOH to amend the Bylaw, which now mandates electronic reporting of all births and neonatal deaths from health facilities, strengthening the JMMSR system and laying the groundwork for a stillbirth and neonatal mortality surveillance and response system.

Keywords: maternal health, maternal mortality, preventable maternal deaths, maternal morbidity

Procedia PDF Downloads 40
4629 Automatic Detection of Traffic Stop Locations Using GPS Data

Authors: Areej Salaymeh, Loren Schwiebert, Stephen Remias, Jonathan Waddell

Abstract:

Extracting information from new data sources has emerged as a crucial task in many traffic planning processes, such as identifying traffic patterns, route planning, traffic forecasting, and locating infrastructure improvements. Given the advanced technologies used to collect Global Positioning System (GPS) data from dedicated GPS devices, GPS-equipped phones, and navigation tools, intelligent data analysis methodologies are necessary to mine this raw data. In this research, an automatic detection framework is proposed to identify and classify the locations of stopped GPS waypoints into two main categories: signalized intersections or highway congestion. The Delaunay triangulation performs this assessment in the clustering phase. While most existing clustering algorithms require assumptions about the data distribution, the Delaunay triangulation triangulates geographical data points without such assumptions. The proposed method starts by removing noise from the data and normalizing it. The framework then identifies stoppage points by calculating the traveled distance. The last step uses clustering to form groups of waypoints corresponding to signalized traffic and highway congestion. A binary classifier is then applied to distinguish highway congestion from signalized stop points, using the length of each cluster to identify congestion. The proposed framework identifies stop positions and congestion points with high accuracy in around 99.2% of trials, showing that the two stop types can be distinguished with high accuracy even from limited GPS data.
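The clustering and classification steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes stop points are already projected to planar coordinates, uses `scipy.spatial.Delaunay` to triangulate them, drops triangulation edges longer than a cutoff so the remaining connected components form clusters, and then labels each cluster by its spatial extent (long, stretched clusters suggest a queue from highway congestion; compact ones suggest a signalized intersection). The `edge_threshold` and `length_threshold` parameters are hypothetical and would need tuning to real data.

```python
import numpy as np
from scipy.spatial import Delaunay

def cluster_stop_points(points, edge_threshold):
    """Cluster 2-D stop points via Delaunay triangulation: triangulate,
    discard edges longer than edge_threshold, and take the connected
    components of the remaining graph as clusters (union-find)."""
    tri = Delaunay(points)
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Each triangle contributes three edges; keep only the short ones.
    for simplex in tri.simplices:
        for a, b in ((0, 1), (1, 2), (0, 2)):
            i, j = int(simplex[a]), int(simplex[b])
            if np.linalg.norm(points[i] - points[j]) <= edge_threshold:
                union(i, j)

    return np.array([find(i) for i in range(n)])

def classify_cluster(cluster_points, length_threshold):
    """Binary classification by cluster length: a cluster whose largest
    spatial extent exceeds length_threshold is labeled congestion,
    otherwise a signalized stop."""
    extent = np.ptp(cluster_points, axis=0).max()
    return "congestion" if extent > length_threshold else "signalized"
```

A useful property of this approach, consistent with the abstract's claim, is that the triangulation makes no assumption about the points' distribution; the only tunable quantities are the two distance thresholds.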

Keywords: Delaunay triangulation, clustering, intelligent transportation systems, GPS data

Procedia PDF Downloads 276