Search results for: patch metrics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 746


176 Tractography Analysis and the Evolutionary Origin of Schizophrenia

Authors: Mouktafi Amine, Tahiri Asmaa

Abstract:

A substantial body of traditional medical research has been devoted to managing and treating mental disorders. At present, to the best of our knowledge, the fundamental causes underlying the majority of psychological disorders remain to be explored further to inform early diagnosis, symptom management and treatment. The emerging field of evolutionary psychology is a promising avenue for addressing the origin of mental disorders, potentially leading to more effective treatments. Schizophrenia, as a topical mental disorder, has been linked to the evolutionary adaptation of the human brain, reflected in the brain connectivity and asymmetry directly linked to humans' higher cognition; other primates, in contrast, are our closest living representation of the brain structure and connectivity of our earliest common African ancestors. As proposed in the evolutionary psychology literature, the pathophysiology of schizophrenia is expressed in, and directly linked to, altered connectivity between the Hippocampal Formation (HF) and the Dorsolateral Prefrontal Cortex (DLPFC). This paper presents the results of tractography analysis on multiple open-access Diffusion Weighted Imaging (DWI) datasets of healthy subjects, schizophrenia-affected subjects and primates, illustrating the relevance of the connectivity of the aforementioned brain regions and the underlying evolutionary changes in the human brain. Deterministic fiber tracking and streamline analysis were used to generate connectivity matrices from the DWI datasets, which were overlaid to compute distances and highlight disconnectivity patterns, in conjunction with other fiber-tracking metrics: Fractional Anisotropy (FA), Mean Diffusivity (MD) and Radial Diffusivity (RD).
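The tensor-derived metrics named above follow standard closed-form expressions in the eigenvalues of the diffusion tensor. A minimal NumPy sketch of those formulas (illustrative only, not the paper's processing pipeline):

```python
import numpy as np

def diffusion_metrics(eigenvalues):
    """Compute FA, MD and RD from the three eigenvalues of a diffusion tensor."""
    l1, l2, l3 = eigenvalues
    md = (l1 + l2 + l3) / 3.0        # Mean Diffusivity: average eigenvalue
    rd = (l2 + l3) / 2.0             # Radial Diffusivity: mean of the two minor eigenvalues
    # Fractional Anisotropy: normalized dispersion of the eigenvalues, in [0, 1]
    num = np.sqrt((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
    den = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    fa = np.sqrt(1.5) * num / den
    return fa, md, rd

# Isotropic diffusion (equal eigenvalues) gives FA = 0; a single dominant
# eigenvalue drives FA toward 1, the signature of coherent fiber bundles.
fa_iso, md_iso, rd_iso = diffusion_metrics((1.0, 1.0, 1.0))
fa_fiber, md_fiber, rd_fiber = diffusion_metrics((1.0, 0.0, 0.0))
```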

Keywords: tractography, diffusion weighted imaging, schizophrenia, evolutionary psychology

Procedia PDF Downloads 6
175 EcoMush: Mapping Sustainable Mushroom Production in Bangladesh

Authors: A. A. Sadia, A. Emdad, E. Hossain

Abstract:

The increasing importance of mushrooms as a source of nutrition, health benefits, and even potential cancer treatment has raised awareness of the impact of climate-sensitive variables on their cultivation. Factors like temperature, relative humidity, air quality, and substrate composition play pivotal roles in shaping mushroom growth, especially in Bangladesh. Oyster mushrooms, a commonly cultivated variety in this region, are particularly vulnerable to climate fluctuations. This research explores the climatic dynamics affecting oyster mushroom cultivation, presents an approach to address these challenges, and provides tangible solutions to fortify the agro-economy, ensure food security, and promote the sustainability of this crucial food source. Using climate and production data, this study evaluates the performance of three clustering algorithms (KMeans, OPTICS, and BIRCH) based on various quality metrics. While each algorithm demonstrates specific strengths, the findings provide insights into their effectiveness for this specific dataset. The results yield essential information, pinpointing the optimal temperature range of 13°C-22°C, the unfavorable temperature threshold of 28°C and above, and the ideal relative humidity range of 75-85%, along with the suitable production regions in three different seasons: Kharif-1, Kharif-2, and Robi. Additionally, a user-friendly web application was developed to support mushroom farmers in making well-informed decisions about their cultivation practices. This platform offers valuable insights into the most advantageous periods for oyster mushroom farming, with the overarching goal of enhancing the efficiency and profitability of mushroom farming.
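The three clustering algorithms compared above are all available in scikit-learn. A hedged sketch of the comparison on hypothetical temperature/humidity data (not the study's dataset), scored with one common quality metric, the silhouette coefficient:

```python
import numpy as np
from sklearn.cluster import KMeans, OPTICS, Birch
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Hypothetical climate features: [temperature (degrees C), relative humidity (%)]
X = np.vstack([
    rng.normal([17, 80], [2, 3], size=(50, 2)),   # favourable growing conditions
    rng.normal([30, 60], [2, 3], size=(50, 2)),   # unfavourable conditions
])

models = {
    "KMeans": KMeans(n_clusters=2, n_init=10, random_state=0),
    "OPTICS": OPTICS(min_samples=10),
    "BIRCH": Birch(n_clusters=2),
}
scores = {}
for name, model in models.items():
    labels = model.fit_predict(X)
    mask = labels != -1            # OPTICS may mark outliers with label -1
    if len(set(labels[mask])) > 1:  # silhouette needs at least two clusters
        scores[name] = silhouette_score(X[mask], labels[mask])
```

On well-separated synthetic clusters all three algorithms score highly; the study's point is that they diverge on real, noisier climate data.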

Keywords: climate variability, mushroom cultivation, clustering techniques, food security, sustainability, web-application

Procedia PDF Downloads 35
174 Diabetes Mellitus and Blood Glucose Variability Increases the 30-day Readmission Rate after Kidney Transplantation

Authors: Harini Chakkera

Abstract:

Background: Inpatient hyperglycemia is an established independent risk factor for hospital readmission among several patient cohorts. This has not been studied after kidney transplantation. Nearly one-third of patients who have undergone a kidney transplant reportedly experience 30-day readmission. Methods: Data on first-time solitary kidney transplantations performed between September 2015 and December 2018 were retrieved. Information was linked to the electronic health record to determine a diagnosis of diabetes mellitus and extract glucometric and insulin therapy data. Univariate logistic regression analysis and the XGBoost algorithm were used to predict 30-day readmission. We report the average performance of the models on the testing set over five bootstrapped partitions of the data to ensure statistical significance. Results: The cohort included 1036 patients who received kidney transplantation, of whom 224 (22%) experienced 30-day readmission. The machine learning algorithm was able to predict 30-day readmission with an average AUC of 77.3% (95% CI 75.3-79.3%). We observed statistically significant differences in the presence of pretransplant diabetes, inpatient hyperglycemia, inpatient hypoglycemia, and minimum and maximum glucose values among those with higher 30-day readmission rates. The XGBoost model identified the index admission length of stay, the presence of hyper- and hypoglycemia, and recipient and donor BMI values as the most predictive risk factors for 30-day readmission. Additionally, significant variations in the therapeutic management of blood glucose by providers were observed. Conclusions: Suboptimal glucose metrics during hospitalization after kidney transplantation are associated with an increased risk of 30-day hospital readmission. Optimizing hospital blood glucose management, a modifiable factor, after kidney transplantation may reduce the risk of 30-day readmission.
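The evaluation protocol described above (average AUC over five bootstrapped partitions) can be sketched as follows. This is illustrative only: scikit-learn's GradientBoostingClassifier stands in for XGBoost, and the cohort is a synthetic stand-in for the real features (length of stay, glucose extremes, BMI):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for XGBoost
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic cohort mimicking the study's size and 22% readmission rate
X, y = make_classification(n_samples=1036, n_features=8, weights=[0.78, 0.22],
                           random_state=0)

aucs = []
for seed in range(5):  # five resampled train/test partitions
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=seed)
    clf = GradientBoostingClassifier(random_state=seed).fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

mean_auc = np.mean(aucs)
ci_half_width = 1.96 * np.std(aucs) / np.sqrt(len(aucs))  # normal-approx 95% CI
```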

Keywords: kidney, transplant, diabetes, insulin

Procedia PDF Downloads 51
173 Remote Sensing through Deep Neural Networks for Satellite Image Classification

Authors: Teja Sai Puligadda

Abstract:

Detailed satellite images can serve an important role in geographic study. The quantitative and qualitative information provided by satellite and remote sensing images reduces the complexity and duration of work. Satellite remote sensing systems capture data/images at regular intervals, and the amount of data collected is often enormous, expanding rapidly as technology develops. Satellite image classification encompasses interpreting remote sensing images, geographic data mining, and studying distinct vegetation types such as agricultural land and forests. One of the biggest challenges data scientists face while classifying satellite images is finding, among the available classification algorithms, the one best able to classify images with utmost accuracy. To categorize satellite images, which is difficult due to the sheer volume of data, many academics are turning to deep learning algorithms. Since the CNN algorithm gives high accuracy in image recognition problems and automatically detects the important features without human supervision, and the ANN algorithm stores information across the entire network (Abhishek Gupta, 2020), these two deep learning algorithms were used for satellite image classification. This project focuses on remote sensing through deep neural networks, i.e., ANN and CNN, with the DeepSat (SAT-4) Airborne dataset for classifying images. In this project of classifying satellite images, the algorithms ANN and CNN are implemented, evaluated and compared, and their performance is analyzed through evaluation metrics such as accuracy and loss. Additionally, the neural network algorithm giving the lowest bias and lowest variance in solving multi-class satellite image classification is identified.
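The two evaluation metrics named above, accuracy and (categorical cross-entropy) loss, can be sketched in a few lines of NumPy. The four-class toy example below merely mirrors SAT-4's four land-cover classes; it is not the project's code:

```python
import numpy as np

def categorical_cross_entropy(y_true, y_prob, eps=1e-12):
    """Mean cross-entropy loss for one-hot labels and predicted probabilities."""
    y_prob = np.clip(y_prob, eps, 1.0)
    return -np.mean(np.sum(y_true * np.log(y_prob), axis=1))

def accuracy(y_true, y_prob):
    """Fraction of samples whose highest-probability class matches the label."""
    return np.mean(np.argmax(y_prob, axis=1) == np.argmax(y_true, axis=1))

# Toy predictions over four classes (e.g. SAT-4's four land-cover categories)
y_true = np.eye(4)[[0, 1, 2, 3]]
y_prob = np.array([[0.7, 0.1, 0.1, 0.1],
                   [0.1, 0.6, 0.2, 0.1],
                   [0.2, 0.2, 0.5, 0.1],
                   [0.3, 0.3, 0.3, 0.1]])  # last sample is misclassified
acc = accuracy(y_true, y_prob)
loss = categorical_cross_entropy(y_true, y_prob)
```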

Keywords: artificial neural network, convolutional neural network, remote sensing, accuracy, loss

Procedia PDF Downloads 127
172 Understanding Cyber Kill Chains: Optimal Allocation of Monitoring Resources Using Cooperative Game Theory

Authors: Roy. H. A. Lindelauf

Abstract:

Cyberattacks are complex processes consisting of multiple interwoven tasks conducted by a set of agents. Interdiction of and defense against such attacks often rely on cyber kill chain (CKC) models. A CKC is a framework that tries to capture the actions taken by a cyber attacker. There exists a growing body of literature on CKCs. Most of this work either a) describes the CKC with respect to one or more specific cyberattacks or b) discusses the tools and technologies used by the attacker at each stage of the CKC. Defenders, facing scarce resources, have to decide where to allocate their resources given the CKC and partial knowledge of the tools and techniques attackers use. In this presentation, CKCs are analyzed through the lens of covert projects, i.e., interrelated tasks that have to be conducted by agents (human and/or computer) with the aim of going undetected. Various aspects of covert project models have been studied extensively in the operations research and game theory literature; think, for instance, of resource-limited interdiction actions that maximally delay the completion time of a weapons project. This presentation investigates both cooperative and non-cooperative game theoretic covert project models and elucidates their relation to CKC modelling. To view a CKC as a covert project, each step in the CKC is broken down into tasks, and each player is capable of executing a subset of those tasks. Additionally, task interdependencies are represented by a schedule. Using multi-glove cooperative games, it is shown how a defender can optimize the allocation of his scarce resources (what, where and how to monitor) against an attacker scheduling a CKC. This study presents and compares several cooperative game theoretic solution concepts as metrics for assigning resources to the monitoring of agents.
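For small player sets, the glove games mentioned above admit exact computation of one of the standard cooperative solution concepts, the Shapley value. A minimal sketch on the textbook glove game (not any specific CKC instance): players who hold scarce complementary resources receive most of the value, which is the intuition behind prioritizing the monitoring of pivotal agents:

```python
import math
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal contribution
    over all orderings in which the grand coalition can form."""
    shapley = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition.add(p)
            shapley[p] += value(coalition) - before
    n_fact = math.factorial(len(players))
    return {p: v / n_fact for p, v in shapley.items()}

# Glove game: players 0 and 1 hold left gloves, player 2 a right glove;
# a coalition's worth is the number of matched glove pairs it can form.
left, right = {0, 1}, {2}
value = lambda c: min(len(c & left), len(c & right))
phi = shapley_values([0, 1, 2], value)  # the scarce right-glove holder is pivotal
```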

Keywords: cyber defense, cyber kill chain, game theory, information warfare techniques

Procedia PDF Downloads 118
171 An Extensive Review of Drought Indices

Authors: Shamsulhaq Amin

Abstract:

Drought can arise from several hydrometeorological phenomena that result in insufficient precipitation, soil moisture, and surface and groundwater flow, leading to conditions that are considerably drier than the usual water content or availability. Drought is often assessed using indices associated with meteorological, agricultural, and hydrological phenomena. To handle drought disasters effectively, it is essential to accurately determine the type, intensity, and extent of the drought through drought characterization. This information is critical for managing the drought before, during, and after the rehabilitation process. Over a hundred drought indices have been proposed in the literature to evaluate drought disasters, encompassing a range of factors and variables. Some utilise solely hydrometeorological drivers, others employ remote sensing technology, and some incorporate a combination of both. Comprehending the entire notion of drought, and taking into account drought indices along with their calculation processes, is crucial for researchers in this discipline. Examining the many drought metrics scattered across different studies requires considerable time and concentration. Hence, a thorough examination of the approaches used in drought indices is needed to identify the most straightforward approach and avoid discrepancies between scientific studies. For practical, real-world application, categorizing indices by their use in meteorological, agricultural, and hydrological contexts can help researchers maximize their efficiency. Users can explore different indices side by side, compare their convenience of use, and evaluate the benefits and drawbacks of each. Moreover, certain indices exhibit interdependence; understanding these connections assists in making informed decisions about their suitability in various scenarios. This study provides a comprehensive assessment of various drought indices, analysing their types and computation methodologies in a detailed and systematic manner.
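As one concrete example of the computation methodologies surveyed, a simplified Standardized Precipitation Index (SPI) can be sketched as a gamma fit followed by a normal-quantile transform. The record below is synthetic, and the sketch omits operational SPI refinements such as explicit handling of zero-precipitation months:

```python
import numpy as np
from scipy import stats

def spi(precip):
    """Simplified SPI: fit a gamma distribution to the precipitation record,
    then map each value's cumulative probability to a standard-normal z-score."""
    shape, loc, scale = stats.gamma.fit(precip, floc=0)  # fix location at zero
    cdf = stats.gamma.cdf(precip, shape, loc=loc, scale=scale)
    # Clip to avoid infinite z-scores at the distribution tails
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))

rng = np.random.default_rng(1)
monthly_precip = rng.gamma(shape=2.0, scale=30.0, size=120)  # ten years, in mm
index = spi(monthly_precip)
# Negative values flag drier-than-usual months; below about -1 to -2 the
# literature typically speaks of moderate to extreme drought.
```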

Keywords: drought classification, drought severity, drought indices, agricultural, hydrological

Procedia PDF Downloads 13
170 ADP Approach to Evaluate the Blood Supply Network of Ontario

Authors: Usama Abdulwahab, Mohammed Wahab

Abstract:

This paper presents the application of the uncapacitated facility location problem (UFLP) and the 1-median problem to support decision making in blood supply chain networks. A plethora of factors make blood supply chain networks a complex yet vital problem for the regional blood bank: rapidly increasing demand; criticality of the product; strict storage and handling requirements; and the vastness of the theater of operations. As in the UFLP, facilities can be opened at any of m predefined locations with given fixed costs, and clients have to be allocated to the open facilities. In classical location models, the allocation cost is the distance between a client and an open facility; in this model, the costs are the allocation cost, transportation costs, and inventory costs. To address this problem, the median algorithm is used to analyze inventory, evaluate supply chain status, monitor performance metrics at different levels of granularity, and detect potential problems and opportunities for improvement. Euclidean distance data for several Ontario cities (demand nodes) are used to test the developed algorithm. Sitation software, a Lagrangian relaxation algorithm, and branch-and-bound heuristics are used to solve this model. Computational experiments confirm the efficiency of the proposed approach. Compared to existing modeling and solution methods, the median algorithm approach not only provides a more general modeling framework but also leads to efficient solution times in general.
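The 1-median subproblem described above can be sketched by brute force over candidate sites: pick the location minimizing total weighted Euclidean distance to all demand nodes. The coordinates below are hypothetical, not the Ontario demand-node data:

```python
import math

def one_median(demand_nodes, candidates, weights=None):
    """Brute-force 1-median: return the candidate site minimizing the total
    weighted Euclidean distance to all demand nodes."""
    weights = weights or [1.0] * len(demand_nodes)

    def total_cost(site):
        return sum(w * math.dist(site, node)
                   for w, node in zip(weights, demand_nodes))

    return min(candidates, key=total_cost)

# Hypothetical demand-node coordinates; candidates restricted to the nodes
# themselves, as is common in discrete p-median formulations
cities = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 5)]
best = one_median(cities, candidates=cities)
```

For p > 1 facilities the search is over subsets and becomes NP-hard, which is why the paper resorts to Lagrangian relaxation and branch-and-bound.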

Keywords: approximate dynamic programming, facility location, perishable product, inventory model, blood platelet, P-median problem

Procedia PDF Downloads 484
169 Early Gastric Cancer Prediction from Diet and Epidemiological Data Using Machine Learning in Mizoram Population

Authors: Brindha Senthil Kumar, Payel Chakraborty, Senthil Kumar Nachimuthu, Arindam Maitra, Prem Nath

Abstract:

Gastric cancer is predominantly caused by demographic and diet factors, as compared to other cancer types. The aim of the study is to predict Early Gastric Cancer (EGC) from diet and lifestyle factors using supervised machine learning algorithms. For this study, 160 healthy individuals and 80 cases were selected who had been followed for 3 years (2016-2019) at Civil Hospital, Aizawl, Mizoram. A dataset containing 11 features that are core risk factors for gastric cancer was extracted. Supervised machine learning algorithms: Logistic Regression, Naive Bayes, Support Vector Machine (SVM), Multilayer Perceptron, and Random Forest were used to analyze the dataset using Python Jupyter Notebook version 3. The classification results were evaluated using the metrics: minimum false positives, brier_score, accuracy, precision, recall, F1_score, and the Receiver Operating Characteristic (ROC) curve. With respect to accuracy (in percent) and brier_score, the results were: Naive Bayes - 88, 0.11; Random Forest - 83, 0.16; SVM - 77, 0.22; Logistic Regression - 75, 0.25; and Multilayer Perceptron - 72, 0.27. The Naive Bayes algorithm outperforms the others, with very low false positive rates, a low brier_score and good accuracy. The Naive Bayes classification results in predicting EGC were very satisfactory using only diet and lifestyle factors, which will be very helpful for physicians in educating patients and the public, so that mortality from gastric cancer can be reduced or avoided through this knowledge mining work.
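The two headline metrics above, accuracy and brier_score, are both available in scikit-learn. A hedged sketch of evaluating the best-performing classifier, Naive Bayes; the synthetic features merely stand in for the study's 11 diet and lifestyle risk factors and its 160/80 class split:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, brier_score_loss
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in: 240 subjects, 11 features, roughly 2:1 healthy-to-case ratio
X, y = make_classification(n_samples=240, n_features=11, n_informative=6,
                           weights=[2 / 3, 1 / 3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y,
                                          random_state=0)

clf = GaussianNB().fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
# Brier score: mean squared error of the predicted class-1 probabilities;
# lower is better, with 0 meaning perfectly calibrated and confident
brier = brier_score_loss(y_te, clf.predict_proba(X_te)[:, 1])
```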

Keywords: early gastric cancer, machine learning, diet, lifestyle characteristics

Procedia PDF Downloads 126
168 Airborne SAR Data Analysis for Impact of Doppler Centroid on Image Quality and Registration Accuracy

Authors: Chhabi Nigam, S. Ramakrishnan

Abstract:

This paper brings out the analysis of airborne Synthetic Aperture Radar (SAR) data to study the impact of the Doppler centroid on image quality and geocoding accuracy from the perspective of Stripmap mode data acquisition. Although in Stripmap mode the radar beam points at 90 degrees broadside (side looking), a shift in the Doppler centroid is inevitable due to platform motion. Inaccurate estimation of the Doppler centroid leads to poor image quality and image misregistration. The effect of the Doppler centroid is analyzed in this paper using multiple data sets collected from an airborne platform. Occurrences of ghost (ambiguous) targets and their power levels have been analyzed, which informs the appropriate choice of PRF. The effect of aircraft attitude (roll, pitch and yaw) on the Doppler centroid is also analyzed with the collected data sets. The stages of the Range Doppler Algorithm (RDA) used for image formation in Stripmap mode, namely range compression, Doppler centroid estimation, azimuth compression, and range cell migration correction, are analyzed to find the performance limits and the dependence of the imaging geometry on the final image. The ability of Doppler centroid estimation to enhance imaging accuracy for registration is also illustrated. The paper also addresses the processing of low-squint SAR data and the challenges and performance limits imposed by the imaging geometry and platform dynamics on the final image quality metrics. Finally, the effect on various terrain types, including land, water and bright scatterers, is also presented.
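One common approach to the Doppler centroid estimation stage is the correlation estimator: the centroid is proportional to the phase of the average azimuth lag-1 autocorrelation of the raw data. This is a generic textbook estimator sketched on a synthetic signal, not necessarily the processor used in the paper:

```python
import numpy as np

def doppler_centroid(data, prf):
    """Correlation Doppler centroid estimator.

    data: complex raw samples, shape (range_bins, azimuth_pulses).
    The centroid frequency equals PRF/(2*pi) times the phase of the
    averaged azimuth lag-1 autocorrelation.
    """
    acf = np.sum(data[:, 1:] * np.conj(data[:, :-1]))
    return prf * np.angle(acf) / (2.0 * np.pi)

# Synthetic check: a pure azimuth phase ramp at 100 Hz sampled at PRF = 1000 Hz
prf, fdc = 1000.0, 100.0
t = np.arange(256) / prf
signal = np.exp(2j * np.pi * fdc * t)[np.newaxis, :].repeat(4, axis=0)
estimate = doppler_centroid(signal, prf)
```

Note the estimator is only unambiguous for centroids within plus or minus PRF/2; resolving the integer PRF ambiguity is a separate step, which is one reason the ghost-target analysis above matters for the PRF choice.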

Keywords: ambiguous target, Doppler Centroid, image registration, Airborne SAR

Procedia PDF Downloads 189
167 Historical Analysis of the Landscape Changes and the Eco-Environment Effects on the Coastal Zone of Bohai Bay, China

Authors: Juan Zhou, Lusan Liu, Yanzhong Zhu, Kuixuan Lin, Wenqian Cai, Yu Wang, Xing Wang

Abstract:

During the past few decades, there has been an increase in the number of coastal land reclamation projects for residential, commercial and industrial purposes in more and more coastal cities of China, which has led to the destruction of wetlands and the loss of sensitive marine habitats. Meanwhile, the influence and nature of these projects attract widespread public and academic concern. This study aims to identify trends in landscape change (especially coastal reclamation) and in the ecological environment, to understand how the two interact, and to offer a scientific basis for the development of regional plans. A case study was carried out in the Bohai Bay area based on the analysis of remote sensing data. Land use maps were created for 1954, 1970, 1981, 1990, 2000 and 2010. Landscape metrics were calculated and showed that the degree of reclamation change was linked to the hydrodynamic environment and the macrobenthos community. The results indicated that the greatest loss of initial areas occurred during 1954-1970, with 65.6% lost, mostly to salt fields; by 2010, the coastal reclamation area had increased by more than 200 km² of artificial landscape. Numerical simulation of the tidal current field in 2003 and 2010 showed that the offshore flow velocity became faster (from 2-5 cm/s to 10-20 cm/s) and the flow direction deviated. These significant coastline changes were not conducive to the dispersal and degradation of pollutants. Additionally, analysis of the dominant macrobenthos from 1958 to 2012 showed that Musculus senhousei (Benson, 1842), a disturbance-tolerant species, spread very fast and has become the predominant species in recent years.
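Landscape metrics of the kind used above are typically computed from classified rasters. A minimal patch-level sketch using connected-component labelling on a toy land-use grid (not the Bohai Bay data): the number of patches and their areas are among the simplest patch metrics:

```python
import numpy as np
from scipy import ndimage

def patch_metrics(binary_map, cell_area=1.0):
    """Count patches of one land-cover class and return their areas,
    using 4-connected component labelling on a binary raster."""
    labels, n_patches = ndimage.label(binary_map)  # default: 4-connectivity
    sizes = ndimage.sum(binary_map, labels, index=list(range(1, n_patches + 1)))
    return n_patches, sizes * cell_area

# Toy raster: 1 = reclaimed land, 0 = other cover; two disjoint patches
raster = np.array([[1, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 1]])
n, areas = patch_metrics(raster, cell_area=0.25)  # hypothetical 0.25 km² cells
```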

Keywords: Bohai Bay, coastal reclamation, landscape change, spatial patterns

Procedia PDF Downloads 265
166 A Hebbian Neural Network Model of the Stroop Effect

Authors: Vadim Kulikov

Abstract:

The classical Stroop effect is the phenomenon that it takes more time to name the ink color of a printed word if the word denotes a conflicting color than if it denotes the same color. Over the last 80 years, there have been many variations of the experiment, revealing various mechanisms behind semantic, attentional, behavioral and perceptual processing. The Stroop task is known to exhibit asymmetry: reading the words out loud is hardly dependent on the ink color, but naming the ink color is significantly influenced by incongruent words. This asymmetry is reversed if, instead of naming the color, one has to point at a corresponding color patch. Other debated aspects are the notion of automaticity and how much of the effect is due to semantic versus response-stage interference. Is automaticity a continuous or an all-or-none phenomenon? There are many models and theories in the literature tackling these questions, which will be discussed in the presentation. None of them, however, seems to capture all the findings at once. A computational model is proposed, based on the philosophical idea developed by the author that the mind operates as a collection of different information-processing modalities, such as sensory and descriptive modalities, which produce emergent phenomena through mutual interaction and coherence. This is the framework theory, where 'framework' attempts to generalize the concepts of modality, perspective and 'point of view'. The architecture of this computational model consists of blocks of neurons, each block corresponding to one framework. In the simplest case there are four: visual color processing, text reading, speech production and attention selection modalities. In experiments where button pressing or pointing is required, a corresponding block is added. In the beginning, the weights of the neural connections are mostly set to zero.
The network is trained using Hebbian learning to establish connections (corresponding to 'coherence' in framework theory) between these different modalities. The amount of data fed into the network is meant to mimic the amount of practice a human encounters; in particular, it is assumed that converting written text into spoken words is a more practiced skill than converting visually perceived colors into spoken color names. After training, the network performs the Stroop task. The RTs are measured in a canonical way, as these are continuous-time recurrent neural networks (CTRNN). The above-described aspects of the Stroop phenomenon, along with many others, are replicated. The model is similar to some existing connectionist models but, as will be discussed in the presentation, has many advantages: it predicts more data, and its architecture is simpler and biologically more plausible.
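The Hebbian training step described above can be illustrated with the plain Hebbian rule, where co-activation strengthens a connection. The two-unit 'text' and 'speech' patterns below are a toy reduction of the model's modality blocks, showing how repeated practice builds a cross-modal association from zero weights:

```python
import numpy as np

def hebbian_update(W, x, y, lr=0.1):
    """Plain Hebbian rule: strengthen weights between co-active units,
    delta w_ij = lr * y_i * x_j."""
    return W + lr * np.outer(y, x)

x = np.array([1.0, 0.0])   # active 'text' unit (presented word)
y = np.array([0.0, 1.0])   # active 'speech' unit (spoken response)
W = np.zeros((2, 2))       # connections start at zero, as in the model
for _ in range(10):        # ten 'practice' presentations
    W = hebbian_update(W, x, y)
# Only the weight linking the co-active pair has grown; more presentations
# (more practice) would yield a stronger pathway, mirroring the assumed
# reading-versus-color-naming asymmetry.
```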

Keywords: connectionism, Hebbian learning, artificial neural networks, philosophy of mind, Stroop

Procedia PDF Downloads 244
165 Chloride Ion Channels Play a Role in Mediating Immune Response during Pseudomonas aeruginosa Infection

Authors: Hani M. Alothaid, Louise Robson, Richmond Muimo

Abstract:

Cystic fibrosis (CF) is a disease that affects respiratory function; in the EU it affects about 1 in 2,500 live births, with an average 40-year life expectancy. The disease is caused by mutations within the gene encoding the CFTR (Cystic Fibrosis Transmembrane conductance Regulator) chloride channel, leading to dysregulation of epithelial fluid transport and chronic lung inflammation, suggesting functional alterations of immune cells. In airways, CFTR has been found to form a functional complex with S100A10 and AnxA2 in a cAMP/PKA-dependent manner. The AnxA2-S100A10/CFTR multiprotein complex is also regulated by calcineurin. The aims of this study were i) to investigate whether chloride ion (Cl−) channels are activated by Pseudomonas aeruginosa lipopolysaccharide (LPS from PA), ii) to determine whether this activation is regulated by the cAMP/PKA/calcineurin pathway, and iii) to investigate the role of LPS-activated Cl− channels in the release of pro-inflammatory cytokines by immune cells. Human peripheral blood monocytes were used in the study. Whole-cell patch-clamp recordings showed that LPS from PA can activate Cl− channels, including CFTR and the outwardly-rectifying Cl− channel (ORCC). This activation appears to require an intact PKA/calcineurin signalling pathway. The outward conductance (Gout) in the presence of LPS was significantly inhibited by diisothiocyanatostilbene-disulfonic acid (DIDS), an ORCC blocker (p<0.001), and further suppressed by CFTR(inh)-172, a specific inhibitor of CFTR channels (p<0.001). Monocytes pre-incubated with a PKA inhibitor or a calcineurin inhibitor before stimulation with LPS from PA exhibited DIDS- and CFTR(inh)-172-insensitive currents. Activation of both ORCC and CFTR was, however, observed in response to monocyte exposure to LPS. Additionally, ELISA showed that CFTR and ORCC play a role in mediating the release of pro-inflammatory cytokines such as IL-1β upon exposure of monocytes to LPS.
This secretion was significantly inhibited when CFTR and ORCC were inhibited. Cl− may nevertheless play a role in IL-1β release independent of cAMP/PKA/calcineurin signalling, since IL-1β secretion was enhanced even when the cAMP/PKA/calcineurin pathway was inhibited. In conclusion, our data confirm that LPS from PA activates Cl− channels in human peripheral blood monocytes and that Cl− channels are involved in IL-1β release in monocytes upon exposure to LPS. However, PKA and calcineurin do not seem to influence the Cl−-dependent cytokine release.

Keywords: cystic fibrosis, CFTR, Annexin A2, S100A10, PP2B, PKA, outwardly-rectifying Cl− channel, Pseudomonas aeruginosa

Procedia PDF Downloads 150
164 Recurrent Neural Networks for Complex Survival Models

Authors: Pius Marthin, Nihal Ata Tutkun

Abstract:

Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on deep learning approaches to survival modeling; however, their application to complex survival problems still needs improvement. In addition, existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external auto-encoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.
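The RIW attention mechanism described above can be caricatured as a softmax over per-risk scores used to combine per-risk cumulative incidence functions. This is an illustrative guess at the mechanism's general shape, with hypothetical toy CIFs, not the authors' implementation:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def weighted_cif(cifs, attention_scores):
    """Combine per-risk CIFs with softmax-normalized attention weights
    (a sketch of 'risk information weights').

    cifs: array of shape (n_risks, n_time_points), each row non-decreasing.
    Returns a weighted CIF over time, shape (n_time_points,).
    """
    riw = softmax(attention_scores)   # one normalized weight per competing risk
    return riw @ cifs

# Two competing risks, five time points (toy, monotone CIFs)
cifs = np.array([[0.05, 0.10, 0.20, 0.30, 0.40],
                 [0.02, 0.05, 0.10, 0.15, 0.20]])
wcif = weighted_cif(cifs, attention_scores=np.array([1.0, 0.0]))
```

Because the weights are non-negative and each input CIF is non-decreasing, the combined curve stays a valid non-decreasing incidence curve.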

Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayers perceptrons (MLPs)

Procedia PDF Downloads 58
163 The Role of Urban Agriculture in Enhancing Food Supply and Export Potential: A Case Study of Neishabour, Iran

Authors: Mohammadreza Mojtahedi

Abstract:

Rapid urbanization presents multifaceted challenges, including environmental degradation and public health concerns. As urban sprawl continues inevitably, it becomes essential to devise strategies to alleviate its pressures on natural ecosystems and elevate socio-economic benchmarks within cities. This research investigates urban agriculture's economic contributions, emphasizing its pivotal role in food provisioning and export potential. Adopting a descriptive-analytical approach, field survey data was primarily collected via questionnaires. The tool's validity was affirmed by expert opinion, and its reliability was secured by achieving a Cronbach's alpha score over 0.70 on 30 preliminary questionnaires. The research covers Neishabour's population of 264,375, from which a sample size of 384 was extracted via Cochran's formula. Findings reveal the significance of urban agriculture in food supply and its potential for exports, underlined by a p-value < 0.05. Neishabour's urban farming can augment the export of organic commodities, fruits, vegetables and ornamental plants, and foster product branding. Moreover, it supports the provision of fresh produce, bolstering dietary quality. Urban agriculture further impacts urban development metrics, enhancing environmental quality, job opportunities, income levels and aesthetics, while promoting rainwater utilization. Popular cultivations include peaches, Damask roses, and poultry, tailored to available spaces. Structural equation modeling indicates urban agriculture's overarching influence, explaining 56% of the variance, predominantly in food sufficiency and export proficiency.
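The sample size quoted above follows from Cochran's formula at the conventional 95% confidence level and 5% margin of error (assumed here, since the abstract does not state them), with the finite-population correction:

```python
def cochran_sample_size(population, z=1.96, p=0.5, e=0.05):
    """Cochran's formula with finite-population correction.

    z: z-score for the confidence level (1.96 for 95%)
    p: estimated proportion (0.5 maximizes the required size)
    e: margin of error
    """
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)     # finite-population correction
    return round(n)

# Neishabour's population of 264,375 yields the abstract's sample of 384
sample = cochran_sample_size(264_375)
```

For populations this large the correction barely matters: the uncorrected n0 is already about 384.16.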

Keywords: urban agriculture, food supply, export potential, urban development, environmental health, structural equation modeling

Procedia PDF Downloads 33
162 Environmental Interactions in Riparian Vegetation Cover in an Urban Stream Corridor: A Case Study of Duzce Asar Suyu

Authors: Engin Eroğlu, Oktay Yıldız, Necmi Aksoy, Akif Keten, Mehmet Kıvanç Ak, Şeref Keskin, Elif Atmaca, Sertaç Kaya

Abstract:

Nowadays, green spaces in urban areas are under threat and decreasing their percentages in the urban areas because of increasing population, urbanization, migration, and some cultural changes in quality. An important element of the natural landscape water and water-related natural ecosystems are exposed to corruption due to these pressures. A landscape has owned many different types of elements or units, a more dominant structure than other landscapes as good or bad perceptible extent different direction and variable reveals a unique structure and character of the landscape. Whereas landscapes deal with two main groups as urban and rural according to their location on the world, especially intersection areas of urban and rural named semi-urban or semi-rural present variety landscape features. The main components of the landscape are defined as patch-matrix-corridor. The corridors include quite various vegetation types such as riparian, wetland and the others. In urban areas, natural water corridors are an important elements of the diversity of the riparian vegetation cover. In particular, water corridors attract attention with a natural diversity and lack of fragmentation, degradation and artificial results. Thanks to these features, without a doubt, water corridors are the important component of all cities in the world. These corridors not only divide the city into two separate sides, but also assured the ecological connectivity between the two sides of the city. The main objective of this study is to determine the vegetation and habitat features of urban stream corridor according to environmental interactions. Within this context, this study will be realized that 'Asar Suyu' is an important component of the city of Düzce. Moreover, the riparian zone touched contiguous area borders of the city and overlaid the urban development limits of the city, determining of characteristics of the corridor will be carried out as floristic and habitat analysis. 
Consequently, the vegetation structure and habitat features that mediate the interaction between riparian vegetation cover and the surrounding environment will be determined. This study presents the first results of The Scientific and Technological Research Council of Turkey project TUBITAK-116O596, 'Determining of Landscape Character of Urban Water Corridors as Visual and Ecological; A Case Study of Asar Suyu in Duzce'.

Keywords: corridor, Duzce, landscape ecology, riparian vegetation

Procedia PDF Downloads 314
161 Comparison of Deep Learning and Machine Learning Algorithms to Diagnose and Predict Breast Cancer

Authors: F. Ghazalnaz Sharifonnasabi, Iman Makhdoom

Abstract:

Breast cancer is a serious health concern that affects many people around the world. According to a study published in the Breast journal, the global burden of breast cancer is expected to increase significantly over the next few decades. The number of deaths from breast cancer has been increasing over the years, but the age-standardized mortality rate has decreased in some countries. It is important to be aware of the risk factors for breast cancer and to get regular check-ups to catch it early if it does occur. Machine learning techniques have been used to aid in the early detection and diagnosis of breast cancer. These techniques, which have been shown to be effective in predicting and diagnosing the disease, have become a research hotspot. In this study, we consider two deep learning approaches, the Multi-Layer Perceptron (MLP) and the Convolutional Neural Network (CNN), together with five machine learning algorithms: Decision Tree (C4.5), Naïve Bayesian (NB), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and XGBoost (eXtreme Gradient Boosting), on the Breast Cancer Wisconsin Diagnostic dataset. We carried out the process of evaluating and comparing the classifiers, which involves selecting appropriate metrics to evaluate classifier performance and selecting an appropriate tool to quantify that performance. The main purpose of the study is to predict and diagnose breast cancer by applying the mentioned algorithms and to discover the most effective of them with respect to confusion matrix, accuracy, and precision. CNN outperformed all other classifiers and achieved the highest accuracy (0.982456). The work is implemented in the Anaconda environment based on the Python programming language.
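The evaluation step described above, deriving metrics from a confusion matrix, can be sketched with the standard definitions. The label vectors below are illustrative, not results from the Wisconsin dataset:

```python
# Minimal sketch of confusion-matrix-based classifier metrics
# (accuracy and precision), as used to compare the models above.
# The label vectors are invented for illustration.

def confusion_counts(y_true, y_pred):
    """Return (TP, FP, TN, FN) for binary labels, 1 = malignant."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def accuracy(tp, fp, tn, fn):
    return (tp + tn) / (tp + fp + tn + fn)

def precision(tp, fp):
    return tp / (tp + fp) if (tp + fp) else 0.0

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
tp, fp, tn, fn = confusion_counts(y_true, y_pred)
print(accuracy(tp, fp, tn, fn), precision(tp, fp))
```

The same counts extend directly to recall and F1 if needed.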

Keywords: breast cancer, multi-layer perceptron, Naïve Bayesian, SVM, decision tree, convolutional neural network, XGBoost, KNN

Procedia PDF Downloads 46
160 Evaluation of NASA POWER and CRU Precipitation and Temperature Datasets over a Desert-prone Yobe River Basin: An Investigation of the Impact of Drought in the North-East Arid Zone of Nigeria

Authors: Yusuf Dawa Sidi, Abdulrahman Bulama Bizi

Abstract:

The most dependable and precise source of climate data is often gauge observation. However, long-term records of gauge observations are unavailable in many regions around the world. In recent years, a number of gridded climate datasets with high spatial and temporal resolutions have emerged as viable alternatives to gauge-based measurements. It is crucial, however, to thoroughly evaluate their performance prior to utilising them in hydroclimatic applications. Therefore, this study aims to assess the effectiveness of the NASA Prediction of Worldwide Energy Resources (NASA POWER) and Climate Research Unit (CRU) datasets in accurately estimating precipitation and temperature patterns within the dry region of Nigeria from 1990 to 2020. The study employs widely used statistical metrics and the Standardised Precipitation Index (SPI) to capture the monthly variability of precipitation and temperature and the inter-annual anomalies in rainfall. The findings suggest that CRU exhibited superior performance compared to NASA POWER for monthly precipitation and minimum and maximum temperatures, demonstrating a high correlation and much lower error values for both RMSE and MAE. Nevertheless, NASA POWER exhibited a moderate agreement with gauge observations in replicating monthly precipitation. The analysis of the SPI reveals that the CRU product reflects inter-annual variations in rainfall anomalies more accurately than NASA POWER. The findings of this study therefore favour the CRU gridded precipitation product for this region.
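The comparison metrics named above can be sketched in a few lines. The RMSE and MAE definitions are standard; the SPI shown here is simplified to a standardized anomaly (the full SPI first fits a gamma distribution to the precipitation record), and the sample values are invented:

```python
import math

# Sketch of the evaluation metrics: RMSE and MAE between a gridded
# product and gauge observations, plus a simplified SPI computed as a
# standardized precipitation anomaly. Values are illustrative.

def rmse(obs, est):
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)) / len(obs))

def mae(obs, est):
    return sum(abs(o - e) for o, e in zip(obs, est)) / len(obs)

def spi_zscore(precip):
    """Standardized anomaly of a precipitation series (simplified SPI)."""
    mean = sum(precip) / len(precip)
    sd = math.sqrt(sum((p - mean) ** 2 for p in precip) / len(precip))
    return [(p - mean) / sd for p in precip]

gauge = [10.0, 25.0, 40.0, 5.0]   # hypothetical monthly gauge totals (mm)
grid  = [12.0, 22.0, 38.0, 9.0]   # hypothetical gridded estimates (mm)
print(rmse(gauge, grid), mae(gauge, grid))
```

Lower RMSE/MAE against the gauge series is what distinguishes CRU from NASA POWER in the study.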

Keywords: CRU, climate change, precipitation, SPI, temperature

Procedia PDF Downloads 48
159 Implementing a Hospitalist Co-Management Service in Orthopaedic Surgery

Authors: Diane Ghanem, Whitney Kagabo, Rebecca Engels, Uma Srikumaran, Babar Shafiq

Abstract:

Hospitalist co-management of orthopaedic surgery patients is a growing trend across the country. It was created as a collaborative effort to provide overarching care to patients with the goal of improving their postoperative care and decreasing in-hospital medical complications. The aim of this project is to provide a guide for implementing and optimizing a hospitalist co-management service in orthopaedic surgery. Key leaders from the hospitalist team, orthopaedic team, and quality, safety and service team were identified. Multiple meetings were convened to discuss the co-management service and determine the necessary building blocks of an efficient and well-designed co-management framework. After meticulous deliberation, a consensus was reached on the final service agreement and a written guide was drafted. Fundamental features of the service include the identification of service stakeholders and leaders, frequent consensus meetings, a well-defined framework with goals, program metrics and unified commands, and a regular satisfaction assessment to update and improve the program. Identified pearls for co-managing orthopaedic surgery patients are standardization, timing, adequate patient selection, and two-way feedback between hospitalists and orthopaedic surgeons to optimize the protocols. Developing a service agreement is a constant work in progress, with meetings, discussions, revisions, and multiple piloting attempts before implementation. It is a partnership created to provide hospitals with a streamlined admission process where at-risk patients are identified early and patient care is optimized regardless of the number or nature of medical comorbidities. A well-established hospitalist co-management service can increase patient care quality and safety, as well as health care value.

Keywords: co-management, hospitalist co-management, implementation, orthopaedic surgery, quality improvement

Procedia PDF Downloads 56
158 Clinical and Chemokine Profile in Leprosy Patients During Multidrug Therapy (MDT) and Their Healthy Contacts: A Randomized Control Trial

Authors: Rohit Kothari

Abstract:

Background: Leprosy is a chronic granulomatous disease caused by Mycobacterium leprae (M. leprae). Reactions may interrupt its usual chronic course. Type-1 (T1R) and type-2 (T2R) lepra reactions are acute events that signify type-IV and type-III hypersensitivity responses, respectively. Various chemokines, such as CCL3, 5, 11, and CCL24, may be elevated during the course of leprosy or during reactions and may serve as markers of early diagnosis, response to therapy, and prognosis. Objective: To correlate CCL3, 5, 11, and CCL24 levels in leprosy patients on multidrug therapy, and in their family contacts after ruling out active disease, during leprosy treatment and during periods of lepra reactions. Methodology: This randomized control trial was conducted in 50 clinico-histopathologically diagnosed cases of leprosy in a tertiary care hospital in Bengaluru, India. Fifty of their family contacts were adequately examined and, where needed, investigated to rule out active disease. The two study groups comprised the leprosy cases and the age-, sex-, and area-of-residence-matched healthy contacts, who were given single-dose rifampicin prophylaxis, respectively. Blood samples were taken at baseline, at six months, and after one year in both groups (on completion of MDT in the leprosy cases), and also during periods of reaction if they occurred in the leprosy cases. Results: Our study found that at baseline, CCL5, 11, and 24 were higher in leprosy cases than in the healthy contacts, and the difference was statistically significant. CCL3 was also higher at baseline in leprosy cases; however, the difference was not statistically significant. At six months and one year, the levels of CCL5, 11, and 24 fell, and the decrease was statistically significant in the leprosy cases, whereas the levels remained almost static in all the healthy contacts.
Twenty leprosy patients developed a lepra reaction during the course of the year; during the reaction, the increases in CCL11 and 24 from baseline were statistically significant, whereas CCL3 and 5 did not rise significantly. One of the healthy contacts developed signs of leprosy in the form of a hypopigmented numb patch, which was confirmed clinico-histopathologically, and CCL11 and 24 were found to be higher, with a statistically significant difference from the baseline values. Conclusion: CCL5, 11, and 24 are sensitive markers for diagnosing leprosy, response to MDT, and prognosis, and are not increased in healthy contacts. CCL11 and 24 are sensitive markers of lepra reactions and may serve as early diagnostic modalities for identifying lepra reactions, and also leprosy in healthy contacts. To the best of our knowledge, this is the first study to evaluate these biomarkers in leprosy cases and their healthy contacts with a follow-up of up to one year, during which one contact developed the disease and the diagnosis was corroborated by these biomarkers.

Keywords: chemokine profile, healthy contacts, leprosy, lepra reactions

Procedia PDF Downloads 110
157 People's Perspective on Water Commons in Trans-Boundary Water Governance: A Case Study from Nepal

Authors: Sristi Silwal

Abstract:

South Asian rivers support ecosystems and sustain the well-being of thousands of riparian communities. Rivers, however, are also sources of conflict between countries and one of the contested issues between governments of the region. Governments have signed treaties to harness some of these rivers, but their provisions have not succeeded in improving the quality of life of those who depend on the water as a common property resource. This paper presents a study of the status of the water commons along the lower command areas of the Koshi, Gandak and Mahakali rivers. Nepal and India signed treaties for the development and management of these rivers in 1928, 1954 and 1966. The study investigated the perceptions of the local communities on climate-induced disasters, on provisions of the treaties such as water for irrigation, on participation in decision-making, and on the specific impacts on women. It looked at how the local communities coped with adversities. The study showed that the common pool resources are gradually being degraded and flood events are increasing, while communities blame the 'other state' and the state administration for exacerbating these ills. The level of awareness about the provisions of the existing treaties is poor. The ongoing approach to trans-boundary water management has taken inadequate cognizance of these realities, as the dominant narrative perpetuates cooperation between the governments. The paper argues that ongoing discourses on trans-boundary water development and management need to adopt new metrics that take cognizance of the condition of the commons and of the people who depend on them for sustenance. In the absence of such narratives, the scale of degradation will increase, making those already marginalized more vulnerable to the impacts of global climate change.

Keywords: climate change vulnerability, conflict, cooperation, water commons

Procedia PDF Downloads 207
156 Accuracy of Peak Demand Estimates for Office Buildings Using Quick Energy Simulation Tool

Authors: Mahdiyeh Zafaranchi, Ethan S. Cantor, William T. Riddell, Jess W. Everett

Abstract:

The New Jersey Department of Military and Veteran’s Affairs (NJ DMAVA) operates over 50 facilities throughout the state of New Jersey, U.S. NJ DMAVA is under a mandate to move toward decarbonization, which will eventually include eliminating the use of natural gas and other fossil fuels for heating. At the same time, the organization requires increased resiliency against electric grid disruption. These competing goals necessitate adopting on-site renewables such as photovoltaic and geothermal power, as well as implementing power control strategies through microgrids. Planning for these changes requires a detailed understanding of current and future electricity use on yearly, monthly, and shorter time scales, as well as a breakdown of consumption by heating, ventilation, and air conditioning (HVAC) equipment. This paper discusses case studies of two buildings that were simulated using the QUick Energy Simulation Tool (eQUEST). Both buildings use electricity from the grid and photovoltaics. One building also uses natural gas. While electricity use data are available in hourly intervals and natural gas data are available in monthly intervals, the simulations were developed using monthly and yearly totals. This approach was chosen to reflect the information available for most NJ DMAVA facilities. Once completed, simulation results are compared to metrics recommended by several organizations to validate energy use simulations. In addition to yearly and monthly totals, the simulated peak demands are compared to actual monthly peak demand values. The simulations resulted in monthly peak demand values that were within 30% of the measured values. These benchmarks will help to assess future energy planning efforts for NJ DMAVA.
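The validation step described above, checking that simulated monthly peaks fall within 30% of metered values, amounts to a simple percent-error test. The numbers below are illustrative, not NJ DMAVA data:

```python
# Sketch of peak-demand validation: percent error between simulated
# and measured monthly peaks, with a 30% acceptance band.
# The demand values are invented for illustration.

def percent_error(simulated, measured):
    return abs(simulated - measured) / measured * 100.0

measured_peaks_kw  = [120.0, 135.0, 180.0]   # metered monthly peaks
simulated_peaks_kw = [110.0, 150.0, 200.0]   # eQUEST-style model output

errors = [percent_error(s, m)
          for s, m in zip(simulated_peaks_kw, measured_peaks_kw)]
within_band = all(e <= 30.0 for e in errors)
print(errors, within_band)
```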

Keywords: building energy modeling, eQUEST, peak demand, smart meters

Procedia PDF Downloads 44
155 Transit-Oriented Development as a Tool for Building Social Capital

Authors: Suneet Jagdev

Abstract:

Rapid urbanization has resulted in informal settlements on the periphery of nearly all big cities in the developing world due to the lack of affordable housing options in the city. Residents of these communities have to travel long distances to get to work or to search for jobs, and women, children and elderly people are excluded from urban opportunities. Affordable and safe public transport can help them expand their possibilities. The aim of this research is to identify social capital as another important element of livable cities, one that can be protected and nurtured through transit-oriented development as a tool that provides real resources to help transit-oriented communities become self-sustainable. Social capital refers to the collective value of all social networks and the inclinations that arise from these networks to do things for each other. It is one of the key components responsible for building and maintaining democracy. Public spaces, pedestrian amenities and social equity are the other essential parts of transit-oriented development models analyzed in this research. The data have been collected through the analysis of several case studies, the urban design strategies implemented and their impact on the perception and experience of the community, and, finally, how these bear on social capital. Case studies have been evaluated on several metrics: ecological, financial, energy consumption, etc. A questionnaire and other tools were designed to collect data to analyze the research objective and reflect the dimension of social capital. The results of the questionnaire indicated that almost all participants have a positive attitude towards building social capital with the aid of transit-oriented development. Statistical data on the identified key motivators against demographic characteristics have been generated based on the case studies used for the paper.
The findings suggested that there is a direct relation between urbanization, transit-oriented developments, and social capital.

Keywords: better opportunities, low-income settlements, social capital, social inclusion, transit oriented development

Procedia PDF Downloads 306
154 Usability Evaluation of a Self-Report Mobile App for COVID-19 Symptoms: Supporting Health Monitoring in the Work Context

Authors: Kevin Montanez, Patricia Garcia

Abstract:

The confinement and restrictions adopted to avoid an exponential spread of COVID-19 have negatively impacted the Peruvian economy. In this context, industries offering essential products could continue operating, but they had to follow safety protocols and implement strategies to ensure employee health. In view of increasing internet access and mobile phone ownership, “Alerta Temprana”, a mobile app, was developed for self-reporting COVID-19 symptoms in the work context. In this study, the usability of the mobile app “Alerta Temprana” was evaluated from the perspective of health monitors and workers. In addition to reporting the metrics related to the usability of the application, the utility of the system was also evaluated from the monitors' perspective. In this descriptive study, the participants used the mobile app for two months. Afterwards, the System Usability Scale (SUS) questionnaire was answered by the workers and monitors. A usefulness questionnaire with open questions was also used for the monitors. The data related to the use of the application were collected during one month. Descriptive statistics and bivariate analysis were used. The workers rated the application as good (70.39). In the case of the monitors, usability was excellent (83.0). The most important feature for the monitors was the emails generated by the application. The average interaction per user was 30 seconds, and a total of 6172 self-reports were sent. Finally, a statistically significant association was found between the acceptability scale and the work area. The results of this study suggest that Alerta Temprana has the potential to be used for surveillance and health monitoring in any face-to-face work context. Participants reported a high degree of ease of use. However, because SUS alone cannot diagnose specific usability issues from the workers' perspective, we suggest complementing it with another standard usability questionnaire to improve "Alerta Temprana" for future use.
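The SUS scores reported above (70.39 for workers, 83.0 for monitors) come from the standard scoring rule: for each of the ten 1-to-5 Likert items, odd items contribute (response − 1) and even items (5 − response), and the sum is scaled by 2.5 to a 0–100 range. The response vector below is illustrative:

```python
# Standard System Usability Scale (SUS) scoring; the sample responses
# are invented, not taken from the study.

def sus_score(responses):
    """responses: ten Likert answers (1-5), items 1..10 in order."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # a fairly positive respondent
```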

Keywords: public health in informatics, mobile app, usability, self-report

Procedia PDF Downloads 80
153 The Crossroads of Corruption and Terrorism in the Global South

Authors: Stephen M. Magu

Abstract:

The 9/11 and Christmas bombing attacks in the United States are mostly associated with the inability of intelligence agencies to connect dots based on intelligence that was already available. The 1998, 2002, 2013 and several 2014 terrorist attacks in Kenya, on the other hand, were probably driven by a completely different dynamic: the invisible hand of corruption. The World Bank and Transparency International annually compute the Worldwide Governance Indicators and the Corruption Perception Index, respectively. What is perhaps not adequately captured in these corruption metrics is the impact of corruption on terrorism. The World Bank data include variables such as control of corruption, (estimates of) government effectiveness, political stability and absence of violence/terrorism, regulatory quality, rule of law, and voice and accountability. TI's CPI does not include measures related to terrorism, but it is plausible to expect some terrorism impact arising from corruption. This paper, by examining the incidence, frequency and total number of terrorist attacks that have occurred especially since 1990, and further examining the specific cases of Kenya and Nigeria, argues that in addition to having major effects on governance, corruption has an even more frightening impact: that of facilitating and/or violating security mechanisms to the extent that foreign nationals can easily obtain identification that enables them to perpetrate major attacks targeting powerful countries' interests in countries with weak corruption-fighting mechanisms. The paper aims to model interactions that expose agents' seemingly rational cost/benefit calculations as non-rational, given the ultimate impact.
It argues that the eradication of corruption is not just a matter of a better business environment, but is integral to national security, and that for anti-corruption crusaders this is an argument more potent than the economic cost-of-doing-business argument.

Keywords: corruption, global south, identification, passports, terrorism

Procedia PDF Downloads 395
152 Maximization of Lifetime for Wireless Sensor Networks Based on Energy Efficient Clustering Algorithm

Authors: Frodouard Minani

Abstract:

Since the last decade, wireless sensor networks (WSNs) have been used in many areas such as health care, agriculture, defense, the military, and disaster-hit areas. A wireless sensor network consists of a base station (BS) and a number of wireless sensors that monitor temperature, pressure, and motion under different environmental conditions. The key parameter in designing a protocol for WSNs is energy efficiency: energy is the scarcest resource of sensor nodes, and it determines their lifetime. Maximizing sensor node lifetime is therefore an important issue in the design of applications and protocols for WSNs, and clustering the sensor nodes is an effective topology control approach for achieving this goal. In this paper, the researcher presents a protocol to prolong the network lifetime based on an energy-efficient clustering algorithm. The Low Energy Adaptive Clustering Hierarchy (LEACH) is a cluster-based routing protocol used to lower energy consumption and improve the lifetime of WSNs. Minimizing energy dissipation and maximizing network lifetime are important matters in the design of applications and protocols for wireless sensor networks. The proposed scheme maximizes the lifetime of the WSN by choosing the farthest cluster head (CH) instead of the closest CH and forming the clusters by considering parameter metrics such as node density, residual energy, and distance between clusters (inter-cluster distance). In this paper, the proposed protocol is compared with alternative protocols in different scenarios, and the simulation results show that it performs well over the comparative protocols in various scenarios.
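The cluster-head selection idea described above can be sketched as a weighted score over the three named metrics. The weights, the scaling of each metric to [0, 1], and the node data are assumptions for illustration, not the paper's actual parameters:

```python
# Hedged sketch of cluster-head (CH) selection: score each candidate on
# residual energy, local node density, and inter-cluster distance, then
# pick the best-scoring node. Weights and node data are illustrative.

def ch_score(node, w_energy=0.5, w_density=0.3, w_distance=0.2):
    # Higher residual energy and density are better; per the proposal,
    # a larger (normalized) distance is also favoured.
    return (w_energy * node["residual_energy"]
            + w_density * node["density"]
            + w_distance * node["distance"])

def select_cluster_head(nodes):
    return max(nodes, key=ch_score)

nodes = [
    {"id": 1, "residual_energy": 0.9, "density": 0.4, "distance": 0.2},
    {"id": 2, "residual_energy": 0.6, "density": 0.8, "distance": 0.9},
    {"id": 3, "residual_energy": 0.8, "density": 0.5, "distance": 0.6},
]
print(select_cluster_head(nodes)["id"])
```

In a full protocol this selection would be re-run each round as residual energies drop, rotating the CH role among the nodes.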

Keywords: base station, clustering algorithm, energy efficient, sensors, wireless sensor networks

Procedia PDF Downloads 111
151 Adding Business Value in Enterprise Applications through Quality Matrices Using Agile

Authors: Afshan Saad, Muhammad Saad, Shah Muhammad Emaduddin

Abstract:

Nowadays the business environment is so fast-paced that continuous improvement has become a major factor in the survival of an enterprise. This holds for civil engineering, and even more so in the fast-paced world of information technology and software engineering. Agile methodologies such as Scrum include a dedicated step in the process that targets the improvement of the development process and software products. Crucial to process improvement is gaining information that allows you to assess the state of the process and its products. From this status information, you can plan actions for improvement and also evaluate the success of those actions. This study builds a model that measures the software quality of the development process. Software quality depends on the functional and structural quality of the software products; in addition, the quality of the development process is also important for improving software quality. Functional quality covers adherence to user requirements, while structural quality addresses the structure of the software product's source code with respect to its maintainability. Process quality relates to the consistency and predictability of the development process. The software quality model is applied in a business setting by gathering the data for the software metrics in the model. To evaluate the software quality model, we analyse the data and present it to the people involved in the agile software development process. The results from the application and the user feedback suggest that the model enables a reasonable assessment of software quality and that it can be used to support the continuous improvement of the development process and software products.

Keywords: Agile SDLC Tools, Agile Software development, business value, enterprise applications, IBM, IBM Rational Team Concert, RTC, software quality, software metrics

Procedia PDF Downloads 145
150 A Trend Based Forecasting Framework of the ATA Method and Its Performance on the M3-Competition Data

Authors: H. Taylan Selamlar, I. Yavuz, G. Yapar

Abstract:

It is difficult to make predictions, especially about the future, and making accurate predictions is not always easy. However, better predictions remain the foundation of all science, so the development of accurate, robust and reliable forecasting methods is very important. A large number of forecasting methods have been proposed and studied in the literature. Two major methods still dominate: Box-Jenkins ARIMA and Exponential Smoothing (ES), and new methods continue to be derived from or inspired by them. After more than 50 years of widespread use, exponential smoothing is still one of the most practically relevant forecasting methods available due to its simplicity, robustness and accuracy as an automatic forecasting procedure, especially in the famous M-Competitions. Despite its success and widespread use in many areas, ES models have some shortcomings that negatively affect the accuracy of forecasts. Therefore, a new forecasting method, called the ATA method, is proposed in this study to cope with these shortcomings. The new method is obtained from traditional ES models by modifying the smoothing parameters; the two approaches therefore have similar structural forms, and ATA can easily be adapted to each of the individual ES models, while gaining many advantages from its innovative new weighting scheme. In this paper, the focus is on modeling the trend component and handling seasonality patterns by utilizing classical decomposition. The ATA method is therefore expanded to higher-order ES methods for additive, multiplicative, additive damped and multiplicative damped trend components. The proposed models, called ATA trended models, are compared in predictive performance to their counterpart ES models on the M3-Competition data set, since it is still the most recent and comprehensive time-series data collection available.
It is shown that the models outperform their counterparts in almost all settings, and when a model selection is carried out amongst these trended models, ATA outperforms all of the competitors in the M3-Competition for both short-term and long-term forecasting horizons when the models' forecasting accuracies are compared using popular error metrics.
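The contrast between the constant ES weight and ATA's time-varying weighting can be sketched for the level-only case. The update below, where the weight on the newest observation is p/t once t exceeds the integer parameter p (and the level simply tracks the series before that), is a simplified reading of the method; the series and parameters are invented:

```python
# Illustrative sketch: simple exponential smoothing (constant weight
# alpha) versus a level-only ATA-style update whose weight p/t shrinks
# as more observations arrive. Trended ATA forms extend this idea.
# Data and parameters are made up for illustration.

def ses(series, alpha):
    level = series[0]
    for t in range(1, len(series)):
        level = alpha * series[t] + (1 - alpha) * level
    return level

def ata_level(series, p):
    level = series[0]
    for t in range(1, len(series)):
        n = t + 1                    # observations seen so far (1-based)
        w = p / n if n > p else 1.0  # early on the level tracks the data
        level = w * series[t] + (1 - w) * level
    return level

series = [10.0, 12.0, 11.0, 13.0]
print(ses(series, 0.5), ata_level(series, 2))
```

Unlike alpha, which must be optimized over a continuous range, p takes integer values bounded by the series length, which is part of what makes ATA convenient to automate.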

Keywords: accuracy, exponential smoothing, forecasting, initial value

Procedia PDF Downloads 157
149 Hyper Parameter Optimization of Deep Convolutional Neural Networks for Pavement Distress Classification

Authors: Oumaima Khlifati, Khadija Baba

Abstract:

Pavement distress is the main factor responsible for the deterioration of road structure durability, vehicle damage, and reduced driver comfort. Transportation agencies spend a high proportion of their funds on pavement monitoring and maintenance. The auscultation of pavement distress has been based on manual surveys, which are extremely time consuming, labor intensive, and require domain expertise. Automatic distress detection is therefore needed to reduce the cost of manual inspection and to avoid more serious damage by implementing the appropriate remediation actions at the right time. Inspired by recent deep learning applications, this paper proposes an algorithm for automatic road distress detection and classification using a Deep Convolutional Neural Network (DCNN). In this study, the types of pavement distress are classified as transverse or longitudinal cracking, alligator cracking, pothole, and intact pavement. The dataset used in this work is composed of public asphalt pavement images. In order to learn the structure of the different types of distress, the DCNN models are trained and tested as a multi-label classification task. In addition, to get the highest accuracy for our model, we adjust the structural hyperparameters, such as the number of convolution and max-pooling layers, the number and size of filters, the loss function, the activation functions, and the optimizer, as well as the fine-tuning hyperparameters, namely batch size and learning rate. The optimization of the model is executed by checking all feasible combinations and selecting the best-performing one. Once the model is optimized, its performance metrics are calculated, describing the training and validation accuracies, precision, recall, and F1 score.
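The exhaustive search described above, checking all feasible hyperparameter combinations and keeping the best, can be sketched with a Cartesian product. The candidate values are illustrative, and the `evaluate` stub stands in for actually training a DCNN:

```python
import itertools

# Sketch of exhaustive hyperparameter search: enumerate every
# combination of candidate settings and keep the best-scoring one.
# The search space and scoring stub are illustrative assumptions.

search_space = {
    "n_conv_blocks": [2, 3],
    "filter_size": [3, 5],
    "batch_size": [16, 32],
    "learning_rate": [1e-3, 1e-4],
}

def evaluate(config):
    # Placeholder: a real run would train the network with this config
    # and return its validation accuracy.
    return (config["n_conv_blocks"] * 0.1
            - config["filter_size"] * 0.01
            + config["learning_rate"])

keys = list(search_space)
combos = [dict(zip(keys, values))
          for values in itertools.product(*search_space.values())]
best = max(combos, key=evaluate)
print(len(combos), best)
```

The cost grows multiplicatively with each added hyperparameter, which is why exhaustive search is only practical for small candidate sets like this one.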

Keywords: distress pavement, hyperparameters, automatic classification, deep learning

Procedia PDF Downloads 60
148 Performance Evaluation of a Very High-Resolution Satellite Telescope

Authors: Walid A. Attia, Taher M. Bazan, Fawzy Eltohamy, Mahmoud Fathy

Abstract:

System performance evaluation is an essential stage in the design of high-resolution satellite telescopes prior to the development process. In this paper, a system performance evaluation of a very high-resolution satellite telescope is investigated. The evaluated system has a Korsch optical scheme design. This design has been compared in another paper with the three-mirror anastigmat (TMA) scheme design, and the former configuration showed better results. The investigated system is based on the Korsch optical design integrated with a time-delay and integration charge-coupled device (TDI-CCD) sensor to achieve a ground sampling distance (GSD) of 25 cm. The key performance metrics considered are the spatial resolution, the signal-to-noise ratio (SNR) and the total modulation transfer function (MTF) of the system. In addition, the National Image Interpretability Rating Scale (NIIRS) metric is assessed to predict image quality according to the modified general image quality equation (GIQE). Based on the orbital, optical and detector parameters, the estimated GSD is found to be 25 cm. The SNR has been analyzed for different illumination conditions of target albedos and sun and sensor angles. The system MTF has been computed, including diffraction, aberration, optical manufacturing, smear and detector sampling as the main contributors. Finally, the system performance evaluation results show that the computed MTF value is around 0.08 at the Nyquist frequency, the SNR is 130 at an albedo of 0.2 with nadir viewing angles, and the predicted NIIRS is of the order of 6.5, which implies very good image quality.
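The geometric GSD quoted above follows from the standard relation GSD = altitude × pixel pitch / focal length, and the Nyquist frequency at which the MTF is quoted is set by the pixel pitch. The orbit, pitch, and focal length below are illustrative assumptions chosen to land near the paper's 25 cm GSD, not the actual system parameters:

```python
# Back-of-envelope GSD and detector Nyquist frequency for a pushbroom
# imager. Parameter values are illustrative assumptions.

def ground_sampling_distance(altitude_m, pixel_pitch_m, focal_length_m):
    return altitude_m * pixel_pitch_m / focal_length_m

def nyquist_freq_cyc_per_mm(pixel_pitch_m):
    return 1.0 / (2.0 * pixel_pitch_m * 1e3)  # cycles per millimetre

altitude = 500e3      # assumed 500 km orbit
pitch = 7e-6          # assumed 7 um TDI-CCD pixel pitch
focal_length = 14.0   # assumed 14 m effective focal length

gsd = ground_sampling_distance(altitude, pitch, focal_length)
print(gsd, nyquist_freq_cyc_per_mm(pitch))
```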

Keywords: modulation transfer function, national image interpretability rating scale, signal to noise ratio, satellite telescope performance evaluation

Procedia PDF Downloads 354
147 An Automated Stock Investment System Using Machine Learning Techniques: An Application in Australia

Authors: Carol Anne Hargreaves

Abstract:

A key issue in stock investment is how to select representative features for stock selection. The first objective of this paper is to determine whether an automated stock investment system using machine learning techniques can identify a portfolio of growth stocks that are highly likely to provide returns better than the stock market index. The second objective is to identify the technical features that best characterize whether a stock’s price is likely to go up, and to identify the most important factors and their contribution to predicting the likelihood of the stock price going up. Unsupervised machine learning techniques, such as cluster analysis, were applied to the stock data to identify a cluster of stocks likely to go up in price (portfolio 1). Next, the principal component analysis technique was used to select stocks rated high on components one and two (portfolio 2). Thirdly, a supervised machine learning technique, logistic regression, was used to select stocks with a high probability of their price going up (portfolio 3). The predictive models were validated with metrics such as sensitivity (recall), specificity and overall accuracy; all accuracy measures were above 70%. All portfolios outperformed the market by more than eight times. The top three stocks were selected for each of the three portfolios and traded in the market for one month, after which the return for each portfolio was computed and compared with the stock market index return. The returns were 23.87% for the principal component analysis portfolio, 11.65% for the logistic regression portfolio and 8.88% for the K-means cluster portfolio, while the stock market return was 0.38%. This study confirms that an automated stock investment system using machine learning techniques can identify top-performing stock portfolios that outperform the stock market.
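The final portfolio step described above, ranking stocks by a model's predicted probability of a price rise, taking the top three, and comparing the equal-weighted one-month return with the index, can be sketched as follows. The probabilities and returns are invented for illustration; only the 0.38% index figure echoes the paper:

```python
# Sketch of top-3 portfolio construction from model probabilities.
# Stock tickers, probabilities, and returns are invented.

stocks = {
    "AAA": {"p_up": 0.91, "return_1m": 0.12},
    "BBB": {"p_up": 0.84, "return_1m": 0.07},
    "CCC": {"p_up": 0.77, "return_1m": -0.02},
    "DDD": {"p_up": 0.65, "return_1m": 0.03},
    "EEE": {"p_up": 0.40, "return_1m": -0.05},
}

# Rank by predicted probability of the price going up, keep the top 3.
top3 = sorted(stocks, key=lambda s: stocks[s]["p_up"], reverse=True)[:3]

# Equal-weighted one-month portfolio return versus the index.
portfolio_return = sum(stocks[s]["return_1m"] for s in top3) / len(top3)
index_return = 0.0038  # the paper's reported market return, 0.38%
print(top3, portfolio_return, portfolio_return > index_return)
```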

Keywords: machine learning, stock market trading, logistic regression, cluster analysis, factor analysis, decision trees, neural networks, automated stock investment system

Procedia PDF Downloads 131