Search results for: HRV metrics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 610

190 Diabetes Mellitus and Blood Glucose Variability Increase the 30-day Readmission Rate after Kidney Transplantation

Authors: Harini Chakkera

Abstract:

Background: Inpatient hyperglycemia is an established independent risk factor for hospital readmission among several patient cohorts. This has not been studied after kidney transplantation. Nearly one-third of patients who have undergone a kidney transplant reportedly experience 30-day readmission. Methods: Data on first-time solitary kidney transplantations were retrieved for the period between September 2015 and December 2018. Information was linked to the electronic health record to determine a diagnosis of diabetes mellitus and to extract glucometric and insulin therapy data. Univariate logistic regression analysis and the XGBoost algorithm were used to predict 30-day readmission. We report the average performance of the models on the testing set over five bootstrapped partitions of the data to ensure statistical significance. Results: The cohort included 1036 patients who received kidney transplantation, and 224 (22%) experienced 30-day readmission. The machine learning algorithm was able to predict 30-day readmission with an average AUC of 77.3% (95% CI 75.3-79.3%). We observed statistically significant differences in the presence of pretransplant diabetes, inpatient hyperglycemia, inpatient hypoglycemia, and minimum and maximum glucose values among those with higher 30-day readmission rates. The XGBoost model identified the index admission length of stay, the presence of hyper- and hypoglycemia, and recipient and donor BMI values as the most predictive risk factors for 30-day readmission. Additionally, significant variations in the therapeutic management of blood glucose by providers were observed. Conclusions: Suboptimal glucose metrics during hospitalization after kidney transplantation are associated with an increased risk of 30-day hospital readmission. Optimizing hospital blood glucose management, a modifiable factor, after kidney transplantation may reduce the risk of 30-day readmission.
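
A minimal sketch of the modeling setup described in the abstract: an XGBoost classifier scored by AUC over five bootstrapped partitions. The feature matrix and labels below are synthetic stand-ins, not the study's data.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1036                              # cohort size reported in the abstract
X = rng.normal(size=(n, 5))           # stand-ins for length of stay, min/max glucose, BMIs
y = rng.integers(0, 2, size=n)        # synthetic readmission labels

aucs = []
for seed in range(5):                 # five bootstrapped partitions, as in the abstract
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
    model.fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

print(f"mean AUC: {np.mean(aucs):.3f} (+/- {1.96 * np.std(aucs):.3f})")
```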

Keywords: kidney, transplant, diabetes, insulin

Procedia PDF Downloads 90
189 Remote Sensing through Deep Neural Networks for Satellite Image Classification

Authors: Teja Sai Puligadda

Abstract:

Detailed satellite images can play an important role in geographic study. The quantitative and qualitative information provided by satellite and remote sensing images reduces the complexity of work and saves time. Satellite remote sensing systems capture data/images at regular intervals, and the amount of data collected is often enormous and expands rapidly as technology develops. Satellite image classification encompasses interpreting remote sensing images, geographic data mining, and studying distinct vegetation types such as agricultural land and forests. One of the biggest challenges data scientists face while classifying satellite images is finding, among the available options, the classification algorithm best able to classify images with the utmost accuracy. In order to categorize satellite images, which is difficult due to the sheer volume of data, many academics are turning to deep learning algorithms. Because the CNN algorithm gives high accuracy in image recognition problems and automatically detects the important features without any human supervision, and the ANN algorithm stores information across the entire network (Abhishek Gupta, 2020), these two deep learning algorithms have been used for satellite image classification. This project focuses on remote sensing through Deep Neural Networks, i.e., ANN and CNN, with the DeepSat (SAT-4) Airborne dataset for classifying images. Thus, in this project of classifying satellite images, the ANN and CNN algorithms are implemented, evaluated, and compared, and their performance is analyzed through evaluation metrics such as accuracy and loss. Additionally, the neural network algorithm that gives the lowest bias and lowest variance in solving multi-class satellite image classification is identified.
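
An illustrative sketch (not the authors' exact architecture): a small Keras CNN sized for the DeepSat SAT-4 data, 28x28 images with 4 spectral bands and 4 land-cover classes, compiled to report the accuracy and loss metrics used in the project.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 4)),            # RGB + near-infrared bands
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),      # barren, trees, grassland, other
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",  # loss and accuracy, as evaluated
              metrics=["accuracy"])             # in the abstract
model.summary()
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=10)
```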

Keywords: artificial neural network, convolutional neural network, remote sensing, accuracy, loss

Procedia PDF Downloads 159
188 Evaluation of the Ability of COVID-19 Infected Sera to Induce Netosis Using an Ex-Vivo NETosis Monitoring Tool

Authors: Constant Gillot, Pauline Michaux, Julien Favresse, Jean-Michel Dogné, Jonathan Douxfils

Abstract:

Introduction: NETosis has emerged as a crucial yet paradoxical factor in severe COVID-19 cases. While neutrophil extracellular traps (NETs) help contain and eliminate viral particles, excessive NET formation can lead to hyperinflammation, exacerbating tissue damage and acute respiratory distress syndrome (ARDS). Aims: This study evaluates the relationship between COVID-19-infected sera and NETosis using an ex-vivo model. Methods: Sera from 8 post-admission COVID-19 patients who had received corticoid therapy were used to induce NETosis in neutrophils from a healthy donor. NET formation was tracked using fluorescent markers for DNA and neutrophil elastase (NE) every 2 minutes for 8 hours. The results were expressed as a percentage of DNA/NE released over time. Key metrics, including T50 (time to 50% release) and AUC (area under the curve, representing total NETosis potential), were calculated. A 27-cytokine screening kit was used to assess the cytokine composition of the sera. Results: COVID-19 sera induced NETosis in accordance with their cytokine profile. The AUC of NE and DNA release decreased with time following corticoid therapy, showing a significant reduction in 6 of the 8 patients (p<0.05). T50 also decreased in parallel with AUC for both markers. Cytokine concentrations decreased with time after therapy administration, and the concentrations of 14 cytokines correlated with NE release. Conclusion: This ex-vivo model successfully demonstrated the induction of NETosis by COVID-19 sera using two markers. A clear decrease in NETosis potential was observed over time with glucocorticoid therapy. This model can be a valuable tool for monitoring NETosis and investigating potential NETosis inducers and inhibitors.
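
A minimal sketch of the two kinetic metrics named in the abstract, T50 and AUC, computed from a release time series. The release curve below is a synthetic logistic toy curve, not patient data.

```python
import numpy as np

t = np.arange(0, 480, 2.0)                      # 8 h of readings every 2 min
release = 100 / (1 + np.exp(-(t - 180) / 40))   # % DNA/NE released (toy curve)

# AUC: trapezoidal area under the release curve (total NETosis potential)
auc = np.sum(0.5 * (release[1:] + release[:-1]) * np.diff(t))
# T50: time at which the curve crosses 50% release
t50 = np.interp(50.0, release, t)

print(f"T50 = {t50:.0f} min, AUC = {auc:.0f} %*min")
```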

Keywords: NETosis, COVID-19, cytokine storm, biomarkers

Procedia PDF Downloads 19
187 Understanding Cyber Kill Chains: Optimal Allocation of Monitoring Resources Using Cooperative Game Theory

Authors: Roy H. A. Lindelauf

Abstract:

Cyberattacks are complex processes consisting of multiple interwoven tasks conducted by a set of agents. Interdictions and defenses against such attacks often rely on cyber kill chain (CKC) models. A CKC is a framework that tries to capture the actions taken by a cyber attacker. There exists a growing body of literature on CKCs. Most of this work either a) describes the CKC with respect to one or more specific cyberattacks or b) discusses the tools and technologies used by the attacker at each stage of the CKC. Defenders, facing scarce resources, have to decide where to allocate their resources given the CKC and partial knowledge of the tools and techniques attackers use. In this presentation, CKCs are analyzed through the lens of covert projects, i.e., interrelated tasks that have to be conducted by agents (human and/or computer) with the aim of going undetected. Various aspects of covert project models have been studied abundantly in the operations research and game theory domains; think, for instance, of resource-limited interdiction actions that maximally delay the completion time of a weapons project. This presentation investigates both cooperative and non-cooperative game-theoretic covert project models and elucidates their relation to CKC modelling. To view a CKC as a covert project, each step in the CKC is broken down into tasks, and there are players, each of whom is capable of executing a subset of the tasks. Additionally, task inter-dependencies are represented by a schedule. Using multi-glove cooperative games, it is shown how a defender can optimize the allocation of his scarce resources (what, where, and how to monitor) against an attacker scheduling a CKC. This study presents and compares several cooperative game-theoretic solution concepts as metrics for assigning resources to the monitoring of agents.
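
A toy illustration of the cooperative-game view, a sketch in the spirit of multi-glove games rather than the paper's model: each player can execute a subset of CKC tasks, a coalition's value is the number of tasks it covers, and Shapley values indicate which agents monitoring resources should prioritize. All task and skill assignments are hypothetical.

```python
from itertools import permutations

tasks = {"recon", "delivery", "exploit"}
skills = {"A": {"recon"}, "B": {"delivery", "exploit"}, "C": {"recon", "exploit"}}

def value(coalition):
    """Coalition value: number of CKC tasks the coalition can cover."""
    covered = set().union(*(skills[p] for p in coalition)) if coalition else set()
    return len(covered & tasks)

players = list(skills)
shapley = dict.fromkeys(players, 0.0)
perms = list(permutations(players))
for order in perms:                  # average each player's marginal contribution
    seen = []
    for p in order:
        shapley[p] += (value(seen + [p]) - value(seen)) / len(perms)
        seen.append(p)

print(shapley)  # higher value -> agent more critical to the attack project
```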

Keywords: cyber defense, cyber kill chain, game theory, information warfare techniques

Procedia PDF Downloads 140
186 An Extensive Review of Drought Indices

Authors: Shamsulhaq Amin

Abstract:

Drought can arise from several hydrometeorological phenomena that result in insufficient precipitation, soil moisture, and surface and groundwater flow, leading to conditions that are considerably drier than the usual water content or availability. Drought is often assessed using indices that are associated with meteorological, agricultural, and hydrological phenomena. In order to effectively handle drought disasters, it is essential to accurately determine the kind, intensity, and extent of the drought through drought characterization. This information is critical for managing the drought before, during, and after the rehabilitation process. Over a hundred drought indices have been developed in the literature to assess drought disasters, encompassing a range of factors and variables. Some utilise solely hydrometeorological drivers, others employ remote sensing technology, and some incorporate a combination of both. Comprehending the entire notion of drought and taking into account drought indices, along with their calculation processes, is crucial for researchers in this discipline. Examining the many drought metrics used across different studies requires additional time and concentration. Hence, it is crucial to conduct a thorough examination of the approaches used in drought indices in order to identify the most straightforward ones and avoid discrepancies across scientific studies. For practical real-world application, categorizing indices according to their use in meteorological, agricultural, and hydrological contexts can help researchers maximize their efficiency. Users can then explore different indices at the same time, compare their convenience of use, and evaluate the benefits and drawbacks of each. Moreover, certain indices exhibit interdependence, which enhances comprehension of their connections and assists in making informed decisions about their suitability in various scenarios. This study provides a comprehensive assessment of various drought indices, analysing their types and computation methodologies in a detailed and systematic manner.
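
As a concrete instance of the computation methodologies the review covers, a minimal sketch of one widely used meteorological index, the Standardised Precipitation Index (SPI): monthly totals are fitted to a gamma distribution and mapped through an equiprobability transform to standard-normal quantiles. The precipitation series here is synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
precip = rng.gamma(shape=2.0, scale=30.0, size=360)  # 30 years of monthly totals (mm)

a, loc, scale = stats.gamma.fit(precip, floc=0)      # fit gamma (location fixed at 0)
cdf = stats.gamma.cdf(precip, a, loc=loc, scale=scale)
spi = stats.norm.ppf(cdf)                            # equiprobability transform

print("driest month SPI:", spi.min().round(2))       # SPI <= -2 indicates extreme drought
```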

Keywords: drought classification, drought severity, drought indices, agriculture, hydrological

Procedia PDF Downloads 41
185 ADP Approach to Evaluate the Blood Supply Network of Ontario

Authors: Usama Abdulwahab, Mohammed Wahab

Abstract:

This paper presents the application of uncapacitated facility location problems (UFLP) and 1-median problems to support decision making in blood supply chain networks. A plethora of factors makes blood supply-chain networks a complex yet vital problem for the regional blood bank: rapidly increasing demand, the criticality of the product, strict storage and handling requirements, and the vastness of the theater of operations. As in the UFLP, facilities can be opened at any of m predefined locations with given fixed costs, and clients have to be allocated to the open facilities. In classical location models, the allocation cost is the distance between a client and an open facility; in this model, the costs are the allocation cost, transportation costs, and inventory costs. In order to address this problem, the median algorithm is used to analyze inventory, evaluate supply chain status, monitor performance metrics at different levels of granularity, and detect potential problems and opportunities for improvement. Euclidean distance data for several Ontario cities (demand nodes) are used to test the developed algorithm. Sitation software, a Lagrangian relaxation algorithm, and branch-and-bound heuristics are used to solve this model. Computational experiments confirm the efficiency of the proposed approach. Compared to existing modeling and solution methods, the median algorithm approach not only provides a more general modeling framework but also leads to efficient solution times in general.
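
A minimal sketch of the 1-median building block (not the paper's full model, which adds transportation and inventory costs): pick the single facility site that minimizes total demand-weighted Euclidean distance. Coordinates and demand weights are hypothetical stand-ins for the Ontario demand nodes.

```python
import numpy as np

sites = np.array([[0, 0], [5, 2], [3, 7], [8, 8]], float)    # candidate facility sites
demand_xy = np.array([[1, 1], [4, 3], [6, 6], [2, 5]], float)  # demand node locations
demand_w = np.array([10, 25, 15, 5], float)                  # units required per node

# total weighted distance if a single facility is opened at each candidate site
costs = [(demand_w * np.linalg.norm(demand_xy - s, axis=1)).sum() for s in sites]
best = int(np.argmin(costs))
print(f"open facility at site {best}, total allocation cost {costs[best]:.1f}")
```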

Keywords: approximate dynamic programming, facility location, perishable product, inventory model, blood platelet, P-median problem

Procedia PDF Downloads 506
184 Early Gastric Cancer Prediction from Diet and Epidemiological Data Using Machine Learning in Mizoram Population

Authors: Brindha Senthil Kumar, Payel Chakraborty, Senthil Kumar Nachimuthu, Arindam Maitra, Prem Nath

Abstract:

Gastric cancer is predominantly caused by demographic and dietary factors compared to other cancer types. The aim of the study is to predict Early Gastric Cancer (EGC) from diet and lifestyle factors using supervised machine learning algorithms. For this study, 160 healthy individuals and 80 cases were selected, who had been followed for 3 years (2016-2019) at Civil Hospital, Aizawl, Mizoram. A dataset containing 11 features that are core risk factors for gastric cancer was extracted. Supervised machine learning algorithms: Logistic Regression, Naive Bayes, Support Vector Machine (SVM), Multilayer Perceptron, and Random Forest were used to analyze the dataset using Python Jupyter Notebook Version 3. The classification results were evaluated using the metrics: minimum_false_positives, brier_score, accuracy, precision, recall, F1_score, and the Receiver Operating Characteristic (ROC) curve. Data analysis showed Naive Bayes - 88, 0.11; Random Forest - 83, 0.16; SVM - 77, 0.22; Logistic Regression - 75, 0.25; and Multilayer Perceptron - 72, 0.27 with respect to accuracy (in percent) and brier_score. The Naive Bayes algorithm outperforms the others, with very low false positive rates, a low brier_score, and good accuracy. Naive Bayes classification results in predicting EGC were very satisfactory using only diet and lifestyle factors, which will be very helpful for physicians to educate patients and the public; thereby, gastric cancer mortality can be reduced or avoided with this knowledge mining work.
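
A minimal sketch of the evaluation loop described, assuming a feature matrix of the 11 diet and lifestyle risk factors and binary case/control labels; the data here are synthetic (sklearn's make_classification), not the Mizoram cohort.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, brier_score_loss

X, y = make_classification(n_samples=240, n_features=11, random_state=0)  # 160+80 cohort
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Random Forest", RandomForestClassifier(random_state=0)),
                  ("Logistic Regression", LogisticRegression(max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    brier = brier_score_loss(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: accuracy={acc:.2f}, brier_score={brier:.2f}")
```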

Keywords: Early Gastric cancer, Machine Learning, Diet, Lifestyle Characteristics

Procedia PDF Downloads 161
183 Airborne SAR Data Analysis for Impact of Doppler Centroid on Image Quality and Registration Accuracy

Authors: Chhabi Nigam, S. Ramakrishnan

Abstract:

This paper brings out the analysis of airborne Synthetic Aperture Radar (SAR) data to study the impact of the Doppler centroid on image quality and geocoding accuracy from the perspective of the Stripmap mode of data acquisition. Although in Stripmap mode the radar beam points at 90 degrees broadside (side-looking), a shift in the Doppler centroid is inevitable due to platform motion. Inaccurate estimation of the Doppler centroid leads to poor image quality and image misregistration. The effect of the Doppler centroid is analyzed in this paper using multiple sets of data collected from an airborne platform. Occurrences of ghost (ambiguous) targets and their power levels have been analyzed, as they impact the appropriate choice of PRF. The effect of aircraft attitude (roll, pitch, and yaw) on the Doppler centroid is also analyzed with the collected data sets. Various stages of the Range Doppler Algorithm (RDA) used for image formation in Stripmap mode (range compression, Doppler centroid estimation, azimuth compression, and range cell migration correction) are analyzed to find the performance limits and the dependence of the imaging geometry on the final image. The ability of Doppler centroid estimation to enhance imaging accuracy for registration is also illustrated in this paper. The paper also addresses the processing of low-squint SAR data and the challenges and performance limits imposed by the imaging geometry and platform dynamics on the final image quality metrics. Finally, the effect on various terrain types, including land, water, and bright scatterers, is also presented.
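
A minimal sketch of one standard Doppler-centroid estimator consistent with the RDA stage named in the abstract: the pulse-pair (correlation) method, where the centroid is proportional to the phase of the azimuth autocorrelation at a one-pulse lag. The azimuth signal below is simulated, not airborne data.

```python
import numpy as np

prf = 1500.0                                   # pulse repetition frequency (Hz)
f_dc_true = 120.0                              # simulated Doppler centroid (Hz)
n = 4096
t = np.arange(n) / prf
rng = np.random.default_rng(2)
signal = np.exp(2j * np.pi * f_dc_true * t) + 0.5 * (rng.normal(size=n)
                                                     + 1j * rng.normal(size=n))

# autocorrelation at one-pulse lag; its phase encodes the centroid
acf_lag1 = np.mean(np.conj(signal[:-1]) * signal[1:])
f_dc_est = prf * np.angle(acf_lag1) / (2 * np.pi)
print(f"estimated Doppler centroid: {f_dc_est:.1f} Hz (true {f_dc_true} Hz)")
```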

Keywords: ambiguous target, Doppler Centroid, image registration, Airborne SAR

Procedia PDF Downloads 218
182 Historical Analysis of the Landscape Changes and the Eco-Environment Effects on the Coastal Zone of Bohai Bay, China

Authors: Juan Zhou, Lusan Liu, Yanzhong Zhu, Kuixuan Lin, Wenqian Cai, Yu Wang, Xing Wang

Abstract:

During the past few decades, the number of coastal land reclamation projects for residential, commercial, and industrial purposes has increased in more and more coastal cities of China, leading to the destruction of wetlands and the loss of sensitive marine habitats. Meanwhile, the influences and nature of these projects attract widespread public and academic concern. To identify the trends in landscape change (especially coastal reclamation) and ecological environment change, to understand how the two interacted, and to offer a scientific basis for the development of regional plans, a case study was carried out in the Bohai Bay area based on the analysis of remote sensing data. Land use maps were created for 1954, 1970, 1981, 1990, 2000, and 2010. Landscape metrics were calculated and illustrated that the degree of reclamation change was linked to the hydrodynamic environment and the macrobenthos community. The results indicated that the worst loss of initial areas occurred during 1954-1970, with 65.6% lost, mostly to salt fields; by 2010, the coastal reclamation area had increased by more than 200 km² of artificial landscape. The numerical simulation of the tidal current field in 2003 and 2010 showed that the offshore flow velocity became faster (from 2-5 cm/s to 10-20 cm/s) and the flow direction shifted. These significant changes to the coastline were not conducive to the dispersal and degradation of pollutants. Additionally, analysis of the dominant macrobenthos from 1958 to 2012 showed that Musculus senhousei (Benson, 1842), a disturbance-tolerant species, spread very fast and has been the predominant species in recent years.

Keywords: Bohai Bay, coastal reclamation, landscape change, spatial patterns

Procedia PDF Downloads 290
181 Advancing Urban Sustainability through Data-Driven Machine Learning Solutions

Authors: Nasim Eslamirad, Mahdi Rasoulinezhad, Francesco De Luca, Sadok Ben Yahia, Kimmo Sakari Lylykangas, Francesco Pilla

Abstract:

With ongoing urbanization, cities face increasing environmental challenges impacting human well-being. To tackle these issues, data-driven approaches in urban analysis have gained prominence, leveraging urban data to promote sustainability. Integrating Machine Learning techniques enables researchers to analyze and predict complex environmental phenomena such as Urban Heat Island (UHI) occurrences in urban areas. This paper demonstrates the implementation of a data-driven approach and interpretable Machine Learning algorithms to conduct comprehensive data analyses for sustainable urban design. The developed framework and algorithms are demonstrated for Tallinn, Estonia, to develop sustainable urban strategies to mitigate urban heat waves. Geospatial data, preprocessed and labeled with UHI levels, are used to train various ML models, with Logistic Regression emerging as the best-performing model based on evaluation metrics. From it, a mathematical equation is derived that indicates whether an area does or does not exhibit UHI effects, providing insights into UHI occurrences based on buildings and urban features. The derived formula highlights the importance of building volume, height, area, and shape length in creating an urban environment with UHI impact. The data-driven approach and derived equation inform mitigation strategies and sustainable urban development in Tallinn and offer valuable guidance for other locations with varying climates.
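
An illustrative sketch of the described pipeline (hypothetical data, not the Tallinn set): fit an interpretable logistic regression on the building features the abstract highlights, then read the UHI equation directly off the fitted coefficients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["building_volume", "height", "area", "shape_length"]
X = rng.normal(size=(500, 4))                       # synthetic building features
y = (X @ np.array([1.2, 0.8, 0.5, 0.4])             # synthetic UHI labels
     + rng.normal(size=500) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
terms = " + ".join(f"{c:.2f}*{f}" for c, f in zip(clf.coef_[0], features))
print(f"logit P(UHI) = {clf.intercept_[0]:.2f} + {terms}")
```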

Keywords: data-driven approach, machine learning transparent models, interpretable machine learning models, urban heat island effect

Procedia PDF Downloads 37
180 Quantifying the Impact of Climate Change on Agritourism: The Transformative Role of Solar Energy in Enhancing Growth and Resilience in Eritrea

Authors: Beyene Daniel, Herbert Ntuli

Abstract:

Agritourism in Eritrea is increasingly threatened by climate change, manifesting through rising temperatures, shifting rainfall patterns, and resource scarcity. This study employs quantitative methods to assess the economic and environmental impacts of climate change on agritourism, utilizing metrics such as annual income fluctuations, changes in visitor numbers, and energy consumption patterns. The methodology relies on secondary data sourced from the World Bank, government reports, and academic publications to analyze the economic viability of integrating solar energy into agritourism operations. Key variables include the Benefits from Renewable Energy (BRE), encompassing cost savings from reduced energy expenses and the monetized value of avoided greenhouse gas emissions. Using a net present value (NPV) framework, the research compares the impact of solar energy against traditional fossil fuel sources by evaluating the Value of Reduced Greenhouse Gas Emissions (CO2) and the Value of Health-Related Costs (VHRC) due to air pollution. The preliminary findings indicate that the adoption of solar energy can enhance energy independence by up to 40%, reduce operational costs by 25%, and stabilize agritourism activities in climate-sensitive regions. This research aims to provide actionable insights for policymakers and stakeholders, supporting the sustainable development of agritourism in Eritrea and contributing to broader climate adaptation strategies. By employing a comprehensive cost-benefit analysis, the study highlights the economic advantages and environmental benefits of transitioning to renewable energy in the face of climate change.
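
A minimal sketch of the NPV framework described, with all monetary figures assumed for illustration: annual benefits combine energy cost savings (BRE), the monetized value of avoided CO2 emissions, and avoided health-related costs (VHRC), discounted over the project horizon.

```python
capex = 50_000.0          # up-front solar installation cost (assumed)
bre, co2, vhrc = 6_000.0, 1_500.0, 800.0   # assumed annual benefit streams
rate, years = 0.06, 20    # discount rate and project horizon (assumed)

annual_benefit = bre + co2 + vhrc
npv = -capex + sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))
print(f"NPV over {years} years: {npv:,.0f} (positive => solar beats the baseline)")
```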

Keywords: climate change, renewable energy, resilience, cost-benefit analysis

Procedia PDF Downloads 14
179 Adaptive Energy-Aware Routing (AEAR) for Optimized Performance in Resource-Constrained Wireless Sensor Networks

Authors: Innocent Uzougbo Onwuegbuzie

Abstract:

Wireless Sensor Networks (WSNs) are crucial for numerous applications, yet they face significant challenges due to resource constraints such as limited power and memory. Traditional routing algorithms like Dijkstra, Ad hoc On-Demand Distance Vector (AODV), and Bellman-Ford, while effective in path establishment and discovery, are not optimized for the unique demands of WSNs due to their large memory footprint and power consumption. This paper introduces the Adaptive Energy-Aware Routing (AEAR) model, a solution designed to address these limitations. AEAR integrates reactive route discovery, localized decision-making using geographic information, energy-aware metrics, and dynamic adaptation to provide a robust and efficient routing strategy. We present a detailed comparative analysis using a dataset of 50 sensor nodes, evaluating power consumption, memory footprint, and path cost across AEAR, Dijkstra, AODV, and Bellman-Ford algorithms. Our results demonstrate that AEAR significantly reduces power consumption and memory usage while optimizing path weight. This improvement is achieved through adaptive mechanisms that balance energy efficiency and link quality, ensuring prolonged network lifespan and reliable communication. The AEAR model's superior performance underlines its potential as a viable routing solution for energy-constrained WSN environments, paving the way for more sustainable and resilient sensor network deployments.
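
An illustrative sketch of the energy-aware metric idea (not the authors' AEAR implementation): run Dijkstra over an edge weight that blends link distance with a penalty for low residual energy at the next hop, so routes steer around energy-depleted nodes. Topology and energy values are hypothetical.

```python
import heapq

graph = {  # node -> [(neighbor, distance), ...]
    "s": [("a", 1.0), ("b", 2.0)],
    "a": [("t", 2.0)],
    "b": [("t", 1.0)],
    "t": [],
}
residual = {"s": 1.0, "a": 0.2, "b": 0.9, "t": 1.0}  # normalized remaining energy

def weight(dist, node, alpha=0.5):
    # blend of distance and an inverse-energy penalty for the receiving node
    return alpha * dist + (1 - alpha) / max(residual[node], 1e-6)

def dijkstra(src):
    best = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        cost, u = heapq.heappop(heap)
        if cost > best.get(u, float("inf")):
            continue
        for v, d in graph[u]:
            c = cost + weight(d, v)
            if c < best.get(v, float("inf")):
                best[v] = c
                heapq.heappush(heap, (c, v))
    return best

print(dijkstra("s"))  # routing cost steers around the energy-depleted node "a"
```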

Keywords: wireless sensor networks (WSNs), adaptive energy-aware routing (AEAR), routing algorithms, energy, efficiency, network lifespan

Procedia PDF Downloads 35
178 Recurrent Neural Networks for Complex Survival Models

Authors: Pius Marthin, Nihal Ata Tutkun

Abstract:

Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on deep learning approaches to survival modeling; however, their application to complex survival problems still needs improvement. In addition, the existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations of traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external auto-encoder (ExternalAE) as a feature selector to extract the complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.

Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayer perceptrons (MLPs)

Procedia PDF Downloads 89
177 Quantifying the Impact of Climate Change on Agritourism: The Transformative Role of Solar Energy in Enhancing Growth and Resilience in Eritrea

Authors: Beyene Daniel Abrha

Abstract:

Agritourism in Eritrea is increasingly threatened by climate change, manifesting through rising temperatures, shifting rainfall patterns, and resource scarcity. This study employs quantitative methods to assess the economic and environmental impacts of climate change on agritourism, utilizing metrics such as annual income fluctuations, changes in visitor numbers, and energy consumption patterns. The methodology relies on secondary data sourced from the World Bank, government reports, and academic publications to analyze the economic viability of integrating solar energy into agritourism operations. Key variables include the Benefits from Renewable Energy (BRE), encompassing cost savings from reduced energy expenses and the monetized value of avoided greenhouse gas emissions. Using a net present value (NPV) framework, the research compares the impact of solar energy against traditional fossil fuel sources by evaluating the Value of Reduced Greenhouse Gas Emissions (CO2) and the Value of Health-Related Costs (VHRC) due to air pollution. The preliminary findings indicate that the adoption of solar energy can enhance energy independence by up to 40%, reduce operational costs by 25%, and stabilize agritourism activities in climate-sensitive regions. This research aims to provide actionable insights for policymakers and stakeholders, supporting the sustainable development of agritourism in Eritrea and contributing to broader climate adaptation strategies. By employing a comprehensive cost-benefit analysis, the study highlights the economic advantages and environmental benefits of transitioning to renewable energy in the face of climate change.

Keywords: agritourism, climate change, renewable energy, cost-benefit analysis, resilience

Procedia PDF Downloads 10
176 The Role of Urban Agriculture in Enhancing Food Supply and Export Potential: A Case Study of Neishabour, Iran

Authors: Mohammadreza Mojtahedi

Abstract:

Rapid urbanization presents multifaceted challenges, including environmental degradation and public health concerns. As urban sprawl continues to be inevitable, it becomes essential to devise strategies to alleviate its pressures on natural ecosystems and elevate socio-economic benchmarks within cities. This research investigates urban agriculture's economic contributions, emphasizing its pivotal role in food provisioning and export potential. Adopting a descriptive-analytical approach, field survey data was primarily collected via questionnaires. The tool's validity was affirmed by expert opinions, and its reliability was secured by achieving a Cronbach's alpha score over 0.70 from 30 preliminary questionnaires. The research covers Neishabour's populace of 264,375, from which a sample size of 384 was extracted via Cochran's formula. Findings reveal the significance of urban agriculture in food supply and its potential for exports, underlined by a p-value < 0.05. Neishabour's urban farming can augment the export of organic commodities, fruits, vegetables, and ornamental plants, and foster product branding. Moreover, it supports the provision of fresh produce, bolstering dietary quality. Urban agriculture further impacts urban development metrics, enhancing environmental quality, job opportunities, income levels, and aesthetics, while promoting rainwater utilization. Popular cultivations include peaches, Damask roses, and poultry, tailored to available spaces. Structural equation modeling indicates urban agriculture's overarching influence, accounting for 56% of the variance, predominantly in food sufficiency and export proficiency.
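
A worked instance of Cochran's formula behind the sample size of 384, assuming the standard choices z = 1.96 (95% confidence), p = 0.5 (maximum variability), and e = 0.05 (margin of error):

```latex
n_0 = \frac{z^2 \, p(1-p)}{e^2}
    = \frac{(1.96)^2 \times 0.5 \times 0.5}{(0.05)^2}
    \approx 384
```

Applying the finite-population correction for N = 264,375 changes this only marginally (to about 383), so 384 is the working sample size.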

Keywords: urban agriculture, food supply, export potential, urban development, environmental health, structural equation modeling

Procedia PDF Downloads 56
175 Comparison of Deep Learning and Machine Learning Algorithms to Diagnose and Predict Breast Cancer

Authors: F. Ghazalnaz Sharifonnasabi, Iman Makhdoom

Abstract:

Breast cancer is a serious health concern that affects many people around the world. According to a study published in the Breast journal, the global burden of breast cancer is expected to increase significantly over the next few decades. The number of deaths from breast cancer has been increasing over the years, although the age-standardized mortality rate has decreased in some countries. It is important to be aware of the risk factors for breast cancer and to get regular check-ups to catch it early if it does occur. Machine learning techniques have been used to aid in the early detection and diagnosis of breast cancer. These techniques, which have been shown to be effective in predicting and diagnosing the disease, have become a research hotspot. In this study, we consider two deep learning approaches: Multi-Layer Perceptron (MLP) and Convolutional Neural Network (CNN). We also consider five machine learning algorithms: Decision Tree (C4.5), Naïve Bayes (NB), Support Vector Machine (SVM), the K-Nearest Neighbors (KNN) algorithm, and XGBoost (eXtreme Gradient Boosting), on the Breast Cancer Wisconsin Diagnostic dataset. We carried out the process of evaluating and comparing the classifiers, which involves selecting appropriate metrics to evaluate classifier performance and an appropriate tool to quantify that performance. The main purpose of the study is predicting and diagnosing breast cancer by applying the mentioned algorithms and discovering the most effective one with respect to the confusion matrix, accuracy, and precision. CNN outperformed all other classifiers and achieved the highest accuracy (0.982456). The work is implemented in the Anaconda environment based on the Python programming language.
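
A minimal sketch of the comparison metrics named above, read off a confusion matrix; the labels and predictions here are toy values, not the Wisconsin dataset results.

```python
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # 1 = malignant, 0 = benign (toy labels)
y_pred = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp} FP={fp} FN={fn} TN={tn}")
print(f"accuracy  = (TP+TN)/total = {accuracy_score(y_true, y_pred):.3f}")
print(f"precision = TP/(TP+FP)   = {precision_score(y_true, y_pred):.3f}")
```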

Keywords: breast cancer, multi-layer perceptron, Naïve Bayesian, SVM, decision tree, convolutional neural network, XGBoost, KNN

Procedia PDF Downloads 75
174 Evaluation of NASA POWER and CRU Precipitation and Temperature Datasets over a Desert-prone Yobe River Basin: An Investigation of the Impact of Drought in the North-East Arid Zone of Nigeria

Authors: Yusuf Dawa Sidi, Abdulrahman Bulama Bizi

Abstract:

The most dependable and precise source of climate data is often gauge observation. However, long-term records of gauge observations are unavailable in many regions around the world. In recent years, a number of gridded climate datasets with high spatial and temporal resolutions have emerged as viable alternatives to gauge-based measurements, but it is crucial to thoroughly evaluate their performance prior to utilising them in hydroclimatic applications. Therefore, this study aims to assess the effectiveness of the NASA Prediction of Worldwide Energy Resources (NASA POWER) and Climate Research Unit (CRU) datasets in accurately estimating precipitation and temperature patterns within the dry region of Nigeria from 1990 to 2020. The study employs widely used statistical metrics and the Standardised Precipitation Index (SPI) to capture the monthly variability of precipitation and temperature and the inter-annual anomalies in rainfall. The findings suggest that CRU exhibited superior performance compared to NASA POWER in terms of monthly precipitation and minimum and maximum temperatures, demonstrating a high correlation and much lower error values for both RMSE and MAE. Nevertheless, NASA POWER exhibited moderate agreement with gauge observations in replicating monthly precipitation. The analysis of the SPI reveals that the CRU product exhibits superior performance compared to NASA POWER in reflecting inter-annual variations in rainfall anomalies. The findings of this study indicate that, of the two, the CRU gridded product is the more favourable gridded precipitation product.
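
A minimal sketch of the statistical comparison used: RMSE, MAE, and Pearson correlation between a gridded product and gauge observations. Both series below are synthetic; with real data, the monthly gauge and gridded values for 1990-2020 would be aligned by station and month.

```python
import numpy as np

rng = np.random.default_rng(3)
gauge = rng.gamma(2.0, 25.0, size=372)            # monthly precipitation, 1990-2020
gridded = gauge + rng.normal(0, 12.0, size=372)   # gridded estimate with error

rmse = np.sqrt(np.mean((gridded - gauge) ** 2))
mae = np.mean(np.abs(gridded - gauge))
r = np.corrcoef(gridded, gauge)[0, 1]
print(f"RMSE={rmse:.1f} mm, MAE={mae:.1f} mm, r={r:.2f}")
```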

Keywords: CRU, climate change, precipitation, SPI, temperature

Procedia PDF Downloads 89
173 Implementing a Hospitalist Co-Management Service in Orthopaedic Surgery

Authors: Diane Ghanem, Whitney Kagabo, Rebecca Engels, Uma Srikumaran, Babar Shafiq

Abstract:

Hospitalist co-management of orthopaedic surgery patients is a growing trend across the country. It was created as a collaborative effort to provide overarching care to patients, with the goal of improving their postoperative care and decreasing in-hospital medical complications. The aim of this project is to provide a guide for implementing and optimizing a hospitalist co-management service in orthopaedic surgery. Key leaders from the hospitalist team, the orthopaedic team, and the quality, safety and service team were identified. Multiple meetings were convened to discuss the co-management service and determine the necessary building blocks behind an efficient and well-designed co-management framework. After meticulous deliberation, a consensus was reached on the final service agreement, and a written guide was drafted. Fundamental features of the service include the identification of service stakeholders and leaders, frequent consensus meetings, a well-defined framework with goals, program metrics, and unified commands, and a regular satisfaction assessment to update and improve the program. Identified pearls for co-managing orthopaedic surgery patients are standardization, timing, adequate patient selection, and two-way feedback between hospitalists and orthopaedic surgeons to optimize the protocols. Developing a service agreement is a constant work in progress, with meetings, discussions, revisions, and multiple piloting attempts before implementation. It is a partnership created to provide hospitals with a streamlined admission process where at-risk patients are identified early and patient care is optimized regardless of the number or nature of medical comorbidities. A well-established hospitalist co-management service can increase patient care quality and safety, as well as health care value.

Keywords: co-management, hospitalist co-management, implementation, orthopaedic surgery, quality improvement

Procedia PDF Downloads 88
172 A Case Study on Machine Learning-Based Project Performance Forecasting for an Urban Road Reconstruction Project

Authors: Soheila Sadeghi

Abstract:

In construction projects, predicting project performance metrics accurately is essential for effective management and successful delivery. However, conventional methods often depend on fixed baseline plans, disregarding the evolving nature of project progress and external influences. To address this issue, we introduce a distinct approach based on machine learning to forecast key performance indicators, such as cost variance and earned value, for each Work Breakdown Structure (WBS) category within an urban road reconstruction project. Our proposed model leverages time series forecasting techniques, namely Autoregressive Integrated Moving Average (ARIMA) and Long Short-Term Memory (LSTM) networks, to predict future performance by analyzing historical data and project progress. Additionally, the model incorporates external factors, including weather patterns and resource availability, as features to improve forecast accuracy. By harnessing the predictive capabilities of machine learning, our performance forecasting model enables project managers to proactively identify potential deviations from the baseline plan and take timely corrective measures. To validate the effectiveness of the proposed approach, we conduct a case study on an urban road reconstruction project, comparing the model's predictions with actual project performance data. The outcomes of this research contribute to the advancement of project management practices in the construction industry by providing a data-driven solution for enhancing project performance monitoring and control.
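
A minimal sketch of the time-series component described: an ARIMA forecast of one WBS category's monthly cost variance using statsmodels. The series is synthetic; the LSTM branch and the exogenous weather/resource features would be added analogously.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
cv = pd.Series(np.cumsum(rng.normal(0, 1.0, size=36)),                  # 36 months of
               index=pd.date_range("2021-01", periods=36, freq="MS"))   # cost variance

model = ARIMA(cv, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=6)       # six-month-ahead performance forecast
print(forecast.round(2))
```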

Keywords: project performance forecasting, machine learning, time series forecasting, cost variance, schedule variance, earned value management

Procedia PDF Downloads 39
171 People's Perspective on Water Commons in Trans-Boundary Water Governance: A Case Study from Nepal

Authors: Sristi Silwal

Abstract:

South Asian rivers support ecosystems and sustain the well-being of thousands of riparian communities. Rivers, however, are also sources of conflict between countries and one of the contested issues between governments of the region. Governments have signed treaties to harness some of the rivers, but their provisions have not been successful in improving the quality of life of those who depend on water as a common property resource. This paper will present a case study of the status of the water commons along the lower command areas of the Koshi, Gandaki, and Mahakali rivers. Nepal and India signed treaties for the development and management of these rivers in 1928, 1954, and 1966. The study investigated the perceptions of the local communities on climate-induced disasters, the provisions of the treaties such as water for irrigation, participation in decision-making, and the specific impacts on women. It looked at how the local communities coped with adversities. The study showed that the common pool resources are gradually getting degraded and flood events are increasing, while communities blame the 'other state' and the state administration for exacerbating these ills. The level of awareness about the provisions of the existing treaties is poor. The ongoing approach to trans-boundary water management has taken inadequate cognizance of these realities, as the dominant narrative perpetuates cooperation between the governments. The paper argues that ongoing discourses on trans-boundary water development and management need to use new metrics that take cognizance of the condition of the commons and of the people who depend on them for sustenance. In the absence of such narratives, the scale of degradation will increase, making those already marginalized more vulnerable to the impacts of global climate change.

Keywords: climate change vulnerability, conflict, cooperation, water commons

Procedia PDF Downloads 236
170 Accuracy of Peak Demand Estimates for Office Buildings Using Quick Energy Simulation Tool

Authors: Mahdiyeh Zafaranchi, Ethan S. Cantor, William T. Riddell, Jess W. Everett

Abstract:

The New Jersey Department of Military and Veterans Affairs (NJ DMAVA) operates over 50 facilities throughout the state of New Jersey, U.S. NJ DMAVA is under a mandate to move toward decarbonization, which will eventually include eliminating the use of natural gas and other fossil fuels for heating. At the same time, the organization requires increased resiliency regarding electric grid disruption. These competing goals necessitate adopting on-site renewables such as photovoltaic and geothermal power, as well as implementing power control strategies through microgrids. Planning for these changes requires a detailed understanding of current and future electricity use on yearly, monthly, and shorter time scales, as well as a breakdown of consumption by heating, ventilation, and air conditioning (HVAC) equipment. This paper discusses case studies of two buildings that were simulated using the QUick Energy Simulation Tool (eQUEST). Both buildings use electricity from the grid and photovoltaics; one building also uses natural gas. While electricity use data are available in hourly intervals and natural gas data are available in monthly intervals, the simulations were developed using monthly and yearly totals. This approach was chosen to reflect the information available for most NJ DMAVA facilities. Once completed, simulation results were compared to metrics recommended by several organizations to validate energy use simulations. In addition to yearly and monthly totals, the simulated peak demands were compared to actual monthly peak demand values. The simulations resulted in monthly peak demand values that were within 30% of the measured values. These benchmarks will help to assess future energy planning efforts for NJ DMAVA.

Keywords: building energy modeling, eQUEST, peak demand, smart meters

Procedia PDF Downloads 68
169 Transit-Oriented Development as a Tool for Building Social Capital

Authors: Suneet Jagdev

Abstract:

Rapid urbanization has resulted in informal settlements on the periphery of nearly all big cities in the developing world due to the lack of affordable housing options in the city. Residents of these communities have to travel long distances to get to work or search for jobs in these cities, and women, children, and elderly people are excluded from urban opportunities. Affordable and safe public transport facilities can help them expand their possibilities. The aim of this research is to identify social capital as another important element of livable cities, one that can be protected and nurtured through transit-oriented development as a tool providing real resources that can help these transit-oriented communities become self-sustainable. Social capital refers to the collective value of all social networks and the inclinations that arise from these networks to do things for each other. It is one of the key components responsible for building and maintaining democracy. Public spaces, pedestrian amenities, and social equity are the other essential parts of transit-oriented development models that will be analyzed in this research. The data have been collected through the analysis of several case studies, the urban design strategies implemented, their impact on perception and on the community's experience, and, finally, how these focused on social capital. Case studies have been evaluated on several metrics, namely ecological, financial, energy consumption, etc. A questionnaire and other tools were designed to collect data to analyze the research objective and reflect the dimension of social capital. The results of the questionnaire indicated that almost all participants have a positive attitude towards this dimension of building social capital with the aid of transit-oriented development. Statistical data on the identified key motivators against demographic characteristics have been generated based on the case studies used in the paper. The findings suggested that there is a direct relation between urbanization, transit-oriented developments, and social capital.

Keywords: better opportunities, low-income settlements, social capital, social inclusion, transit oriented development

Procedia PDF Downloads 331
168 Usability Evaluation of a Self-Report Mobile App for COVID-19 Symptoms: Supporting Health Monitoring in the Work Context

Authors: Kevin Montanez, Patricia Garcia

Abstract:

The confinement and restrictions adopted to avoid an exponential spread of COVID-19 have negatively impacted the Peruvian economy. In this context, industries offering essential products could continue operating, but they had to follow safety protocols and implement strategies to ensure employee health. In view of increasing internet access and mobile phone ownership, "Alerta Temprana", a mobile app, was developed for self-reporting COVID-19 symptoms in the work context. In this study, the usability of the mobile app "Alerta Temprana" was evaluated from the perspective of health monitors and workers. In addition to reporting the metrics related to the usability of the application, the utility of the system is also evaluated from the monitors' perspective. In this descriptive study, the participants used the mobile app for two months. Afterwards, the System Usability Scale (SUS) questionnaire was answered by the workers and monitors. A usefulness questionnaire with open questions was also used for the monitors. The data related to the use of the application were collected over one month. Furthermore, descriptive statistics and bivariate analysis were used. The workers rated the application as good (70.39). In the case of the monitors, usability was excellent (83.0). The most important feature for the monitors was the emails generated by the application. The average interaction per user was 30 seconds, and a total of 6172 self-reports were sent. Finally, a statistically significant association was found between the acceptability scale and the work area. The results of this study suggest that Alerta Temprana has the potential to be used for surveillance and health monitoring in any face-to-face context. Participants reported a high degree of ease of use. However, from the perspective of workers, SUS cannot diagnose usability issues, and we suggest using another standard usability questionnaire to improve "Alerta Temprana" for future use.
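
A minimal sketch of standard SUS scoring, as used for the 70.39 and 83.0 figures: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 onto a 0-100 range.

```python
def sus_score(responses):
    """responses: ten 1-5 Likert answers, item 1 first."""
    odd = sum(r - 1 for r in responses[0::2])    # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])   # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # one respondent -> 80.0
```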

Keywords: public health in informatics, mobile app, usability, self-report

Procedia PDF Downloads 117
167 The Crossroads of Corruption and Terrorism in the Global South

Authors: Stephen M. Magu

Abstract:

The 9/11 and Christmas Day bombing attacks in the United States are mostly associated with the inability of intelligence agencies to connect the dots based on intelligence that was already available. The 1998, 2002, 2013, and several 2014 terrorist attacks in Kenya, on the other hand, were probably driven by a completely different dynamic: the invisible hand of corruption. The World Bank and Transparency International annually compute the Worldwide Governance Indicators and the Corruption Perception Index, respectively. What is perhaps not adequately captured in the corruption metrics is the impact of corruption on terrorism. The World Bank data includes variables such as the control of corruption, (estimates of) government effectiveness, political stability and absence of violence/terrorism, regulatory quality, rule of law, and voice and accountability. TI's CPI does not include measures related to terrorism, but it is plausible to expect some impact of corruption on terrorism. This paper, by examining the incidence, frequency, and total number of terrorist attacks that have occurred since 1990, and further examining the specific cases of Kenya and Nigeria, argues that in addition to having major effects on governance, corruption has an even more frightening impact: that of facilitating and/or violating security mechanisms to the extent that foreign nationals can easily obtain identification that enables them to perpetrate major attacks targeting powerful countries' interests in countries with weak corruption-fighting mechanisms. The paper aims to model interactions that demonstrate the cost/benefit analysis and show agents' seemingly rational calculations to be non-rational, given the ultimate impact. It argues that the eradication of corruption is not just a matter of a better business environment but is implicit in national security, and that for anti-corruption crusaders this is an argument more potent than the economic cost / cost of doing business argument.

Keywords: corruption, global south, identification, passports, terrorism

Procedia PDF Downloads 422
166 Maximization of Lifetime for Wireless Sensor Networks Based on Energy Efficient Clustering Algorithm

Authors: Frodouard Minani

Abstract:

Since the last decade, wireless sensor networks (WSNs) have been used in many areas such as health care, agriculture, defense, the military, disaster-hit areas, and so on. Wireless Sensor Networks consist of a Base Station (BS) and a number of wireless sensors in order to monitor temperature, pressure, and motion in different environmental conditions. The key parameter in designing a protocol for Wireless Sensor Networks is energy efficiency: energy is the scarcest resource of sensor nodes, and it determines their lifetime. Maximizing a sensor node's lifetime is an important issue in the design of applications and protocols for Wireless Sensor Networks, and clustering sensor nodes is an effective topology control approach for helping to achieve this goal. In this paper, the researcher presents an energy-efficient protocol to prolong the network lifetime based on an energy-efficient clustering algorithm. The Low Energy Adaptive Clustering Hierarchy (LEACH) is a cluster-based routing protocol used to lower energy consumption and improve the lifetime of Wireless Sensor Networks. Minimizing energy dissipation and maximizing network lifetime are important matters in the design of applications and protocols for wireless sensor networks. The proposed system maximizes the lifetime of the Wireless Sensor Networks by choosing the farthest cluster head (CH) instead of the closest CH and forming clusters by considering parameter metrics such as node density, residual energy, and the distance between clusters (inter-cluster distance). In this paper, comparisons between the proposed protocol and comparative protocols in different scenarios have been made, and the simulation results showed that the proposed protocol performs well over other comparative protocols in various scenarios.
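
A minimal sketch of the classic LEACH cluster-head election rule the proposed protocol builds on (the proposal additionally weighs node density, residual energy, and inter-cluster distance): in round r, an eligible node becomes cluster head with probability T(n).

```python
import random

def leach_threshold(p, r):
    """T(n) = p / (1 - p * (r mod 1/p)) for nodes not yet CH in this epoch."""
    return p / (1 - p * (r % int(1 / p)))

p = 0.05                       # desired fraction of cluster heads per round
r = 7                          # current round
eligible = [f"node{i}" for i in range(20)]  # nodes not yet elected this epoch

heads = [n for n in eligible if random.random() < leach_threshold(p, r)]
print("elected cluster heads:", heads)
```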

Keywords: base station, clustering algorithm, energy efficient, sensors, wireless sensor networks

Procedia PDF Downloads 144
165 Adding Business Value in Enterprise Applications through Quality Matrices Using Agile

Authors: Afshan Saad, Muhammad Saad, Shah Muhammad Emaduddin

Abstract:

Nowadays, the business environment is so fast-paced that continuous improvement has become a major factor in the survival of an enterprise. We can see this in structural engineering, and even more so in the fast-paced world of information technology and software engineering. The agile methodologies, such as Scrum, have a dedicated step in the process that targets the improvement of the development process and software products. Pivotal to process improvement is gaining information that permits you to assess the state of the process and its products. From the status information, you can plan actions for improvement and also assess the success of those actions. This study builds a model that measures the quality of the software development process. Software quality depends on the functional and structural quality of the software products; in addition, the quality of the development process is also vital to improving software quality. Functional quality covers adherence to user requirements, while structural quality addresses the structure of the software product's source code with respect to its maintainability. Process quality concerns the consistency and predictability of the development process. The software quality model is applied in a business setting by gathering the data for the software metrics in the model. To assess the software quality model, we analyze the data and present it to the people involved in the agile software development process. The results from the application and the user feedback suggest that the model enables a reasonable assessment of software quality and that it can be used to support the continuous improvement of the development process and software products.

Keywords: Agile SDLC Tools, Agile Software development, business value, enterprise applications, IBM, IBM Rational Team Concert, RTC, software quality, software metrics

Procedia PDF Downloads 174
164 A Trend Based Forecasting Framework of the ATA Method and Its Performance on the M3-Competition Data

Authors: H. Taylan Selamlar, I. Yavuz, G. Yapar

Abstract:

It is difficult to make predictions, especially about the future, and making accurate predictions is not always easy. However, better predictions remain the foundation of all science; therefore, the development of accurate, robust, and reliable forecasting methods is very important. Numerous forecasting methods have been proposed and studied in the literature. There are still two dominant major forecasting approaches, Box-Jenkins ARIMA and Exponential Smoothing (ES), and new methods are still being derived from or inspired by them. After more than 50 years of widespread use, exponential smoothing is still one of the most practically relevant forecasting methods available due to its simplicity, robustness, and accuracy as an automatic forecasting procedure, especially in the famous M-Competitions. Despite their success and widespread use in many areas, ES models have some shortcomings that negatively affect the accuracy of forecasts. Therefore, a new forecasting method is proposed in this study to cope with these shortcomings; it will be called the ATA method. This new method is obtained from traditional ES models by modifying the smoothing parameters; therefore, both methods have similar structural forms, and ATA can be easily adapted to all of the individual ES models. However, ATA has many advantages due to its innovative new weighting scheme. In this paper, the focus is on modeling the trend component and handling seasonality patterns by utilizing classical decomposition. Therefore, the ATA method is expanded to the higher-order ES methods for additive, multiplicative, additive damped, and multiplicative damped trend components. The proposed models are called ATA trended models, and their predictive performances are compared to their counterpart ES models on the M3 competition data set, since it is still the most recent and comprehensive time-series data collection available. It is shown that the models outperform their counterparts in almost all settings, and when a model selection is carried out amongst these trended models, ATA outperforms all of the competitors in the M3-competition for both short-term and long-term forecasting horizons when the models' forecasting accuracies are compared based on popular error metrics.
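
A minimal sketch of the core weighting idea, under the hedged reading that ATA replaces the fixed ES smoothing constant with the time-varying weight p/t (so early observations are weighted fully at first and the weight then decays deterministically); this is an illustration of the weighting scheme, not the full trended ATA models.

```python
def ata_level(series, p=1):
    """Level recursion with the ATA-style weight p/t instead of a fixed alpha."""
    s = series[0]
    levels = [s]
    for t, x in enumerate(series[1:], start=2):
        w = p / t if t > p else 1.0      # weight decays as p/t once t exceeds p
        s = w * x + (1 - w) * s
        levels.append(s)
    return levels

print(ata_level([10, 12, 11, 13, 14, 13], p=2))
```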

Keywords: accuracy, exponential smoothing, forecasting, initial value

Procedia PDF Downloads 177
163 Hyper Parameter Optimization of Deep Convolutional Neural Networks for Pavement Distress Classification

Authors: Oumaima Khlifati, Khadija Baba

Abstract:

Pavement distress is the main factor responsible for the deterioration of road structure durability, damage to vehicles, and reduced driver comfort. Transportation agencies spend a high proportion of their funds on pavement monitoring and maintenance. The auscultation of pavement distress used to be based on manual surveys, which were extremely time-consuming, labor-intensive, and required domain expertise. Therefore, automatic distress detection is needed to reduce the cost of manual inspection and avoid more serious damage by implementing the appropriate remediation actions at the right time. Inspired by recent deep learning applications, this paper proposes an algorithm for automatic road distress detection and classification using a Deep Convolutional Neural Network (DCNN). In this study, the types of pavement distress are classified as transverse or longitudinal cracking, alligator cracking, pothole, and intact pavement. The dataset used in this work is composed of public asphalt pavement images. In order to learn the structure of the different types of distress, the DCNN models are trained and tested on a multi-label classification task. In addition, to get the highest accuracy for our model, we adjust the structural optimization hyperparameters, such as the number of convolution and max-pooling layers, the filters, the size of filters, the loss functions, the activation functions, and the optimizer, as well as the fine-tuning hyperparameters, which include batch size and learning rate. The optimization of the model is executed by checking all feasible combinations and selecting the best-performing one. After the model is optimized, performance metrics are calculated, describing the training and validation accuracies, precision, recall, and F1 score.
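
An illustrative sketch of the hyper-parameter sweep described: enumerate candidate structural and fine-tuning settings and keep the best validation performer. The parameter values and the scoring stub are placeholders, not the paper's exact search space.

```python
from itertools import product

grid = {
    "n_conv_blocks": [2, 3],
    "filters": [32, 64],
    "kernel_size": [3, 5],
    "batch_size": [16, 32],
    "learning_rate": [1e-3, 1e-4],
}

def train_and_validate(cfg):
    # Stand-in for building/training a DCNN with `cfg`; returns a dummy score
    # so the sweep runs end to end. Replace with real training and validation
    # (accuracy, precision, recall, F1) on the pavement-distress dataset.
    return 0.8 + 0.01 * cfg["n_conv_blocks"] - 0.05 * cfg["learning_rate"]

best_cfg, best_score = None, -1.0
for values in product(*grid.values()):        # all feasible combinations
    cfg = dict(zip(grid.keys(), values))
    score = train_and_validate(cfg)
    if score > best_score:
        best_cfg, best_score = cfg, score

print(best_cfg, round(best_score, 3))
```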

Keywords: pavement distress, hyperparameters, automatic classification, deep learning

Procedia PDF Downloads 93
162 Performance Evaluation of a Very High-Resolution Satellite Telescope

Authors: Walid A. Attia, Taher M. Bazan, Fawzy Eltohamy, Mahmoud Fathy

Abstract:

System performance evaluation is an essential stage in the design of high-resolution satellite telescopes prior to the development process. In this paper, a system performance evaluation of a very high-resolution satellite telescope is investigated. The evaluated system has a Korsch optical scheme design. This design has been compared in another paper with the three-mirror anastigmat (TMA) scheme design, and the former configuration showed better results. The investigated system is based on the Korsch optical design integrated with a time-delay and integration charge-coupled device (TDI-CCD) sensor to achieve a ground sampling distance (GSD) of 25 cm. The key performance metrics considered are the spatial resolution, the signal-to-noise ratio (SNR), and the total modulation transfer function (MTF) of the system. In addition, the national image interpretability rating scale (NIIRS) metric is assessed to predict image quality according to the modified general image quality equation (GIQE). Based on the orbital, optical, and detector parameters, the estimated GSD is found to be 25 cm. The SNR has been analyzed at different illumination conditions of target albedos and sun and sensor angles. The system MTF has been computed including diffraction, aberration, optical manufacturing, smear, and detector sampling as the main contributors to the MTF evaluation. Finally, the system performance evaluation results show that the computed MTF value is around 0.08 at the Nyquist frequency, the SNR is 130 at an albedo of 0.2 with nadir viewing angles, and the predicted NIIRS is on the order of 6.5, which implies very good system image quality.
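
For reference, one published form of the General Image Quality Equation (version 4), on which modified GIQEs build; the abstract's exact modification may differ. Here GSD is in inches, RER is the relative edge response, H the edge overshoot, and G the noise gain:

```latex
\mathrm{NIIRS} = 10.251 - a\,\log_{10}(\mathrm{GSD}) + b\,\log_{10}(\mathrm{RER})
                - 0.656\,H - 0.344\,\frac{G}{\mathrm{SNR}}
```

with a = 3.32 and b = 1.559 when RER >= 0.9, and a = 3.16 and b = 2.817 otherwise.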

Keywords: modulation transfer function, national image interpretability rating scale, signal to noise ratio, satellite telescope performance evaluation

Procedia PDF Downloads 384
161 An Automated Stock Investment System Using Machine Learning Techniques: An Application in Australia

Authors: Carol Anne Hargreaves

Abstract:

A key issue in stock investment is how to select representative features for stock selection. The objective of this paper is, firstly, to determine whether an automated stock investment system using machine learning techniques may be used to identify a portfolio of growth stocks that are highly likely to provide returns better than the stock market index. The second objective is to identify the technical features that best characterize whether a stock's price is likely to go up, and to identify the most important factors and their contribution to predicting the likelihood of the stock price going up. Unsupervised machine learning techniques, such as cluster analysis, were applied to the stock data to identify a cluster of stocks that was likely to go up in price (portfolio 1). Next, the principal component analysis technique was used to select stocks that were rated high on component one and component two (portfolio 2). Thirdly, a supervised machine learning technique, the logistic regression method, was used to select stocks with a high probability of their price going up (portfolio 3). The predictive models were validated with metrics such as sensitivity (recall), specificity, and overall accuracy for all models. All accuracy measures were above 70%. All portfolios outperformed the market by more than eight times. The top three stocks were selected for each of the three stock portfolios and traded in the market for one month. After one month, the return for each stock portfolio was computed and compared with the stock market index return. The returns were 23.87% for the principal component analysis portfolio, 11.65% for the logistic regression portfolio, and 8.88% for the K-means cluster portfolio, while the stock market performance was 0.38%. This study confirms that an automated stock investment system using machine learning techniques can identify top-performing stock portfolios that outperform the stock market.
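
An illustrative sketch of two of the three portfolio builders (synthetic features and labels, not the study's technical indicators): rank stocks on the first two principal components, and score them with logistic regression.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 8))                 # 200 stocks x 8 technical features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)  # price up?

# portfolio 2: stocks rated high on principal components one and two
Z = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
portfolio_pca = np.argsort(-(Z[:, 0] + Z[:, 1]))[:3]

# portfolio 3: stocks with the highest predicted probability of going up
proba = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
portfolio_lr = np.argsort(-proba)[:3]

print("PCA portfolio:", portfolio_pca, "| logistic portfolio:", portfolio_lr)
```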

Keywords: machine learning, stock market trading, logistic regression, cluster analysis, factor analysis, decision trees, neural networks, automated stock investment system

Procedia PDF Downloads 157