Search results for: probabilistic classification vector machines
2069 Probabilistic Approach of Dealing with Uncertainties in Distributed Constraint Optimization Problems and Situation Awareness for Multi-agent Systems
Authors: Sagir M. Yusuf, Chris Baber
Abstract:
In this paper, we describe how Bayesian inferential reasoning contributes to obtaining well-satisfied predictions for Distributed Constraint Optimization Problems (DCOPs) with uncertainties. We also demonstrate how DCOPs can be merged with multi-agent knowledge understanding and prediction (i.e., Situation Awareness). The DCOP functions were merged with a Bayesian Belief Network (BBN) in the form of situation, awareness, and utility nodes. We describe how the uncertainties can be represented in the BBN and how an effective prediction can be made using the expectation-maximization algorithm or the conjugate gradient descent algorithm. The idea of variable prediction using Bayesian inference may reduce the number of variables in the agents' sampling domain and also allow the estimation of missing variables. Experimental results showed that the BBN produces more compelling predictions with samples containing uncertainties than with perfect samples. That is, Bayesian inference can help in handling the uncertainties and dynamism of DCOPs, which is a current issue in the DCOP community. We show how Bayesian inference can be formalized with Distributed Situation Awareness (DSA) using uncertain and missing agents' data. The whole framework was tested on a multi-UAV mission for forest fire searching. Future work focuses on augmenting the existing architecture to deal with dynamic DCOP algorithms and multi-agent information merging. Keywords: DCOP, multi-agent reasoning, Bayesian reasoning, swarm intelligence
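To make the Bayesian updating step concrete, the following is a minimal sketch of how an agent's belief about an uncertain situation variable could be revised from a noisy observation; the two-node structure and all probability values are illustrative assumptions, not the authors' BBN.

```python
# Minimal sketch of Bayesian updating for an uncertain agent variable.
# The network structure and all probability values are illustrative only.

# Prior over the hidden situation node S (e.g., "fire present" at a grid cell).
prior = {"fire": 0.2, "no_fire": 0.8}

# Likelihood of an agent's (noisy) sensor reading given the situation.
likelihood = {
    ("hot", "fire"): 0.9, ("hot", "no_fire"): 0.15,
    ("cool", "fire"): 0.1, ("cool", "no_fire"): 0.85,
}

def posterior(observation, prior, likelihood):
    """Apply Bayes' rule: P(S | obs) is proportional to P(obs | S) * P(S)."""
    unnorm = {s: likelihood[(observation, s)] * p for s, p in prior.items()}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

belief = posterior("hot", prior, likelihood)
print(belief)  # e.g. {'fire': 0.6, 'no_fire': 0.4}
```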
Procedia PDF Downloads 119
2068 A Strategy of Direct Power Control for PWM Rectifier Reducing Ripple in Instantaneous Power
Authors: T. Mohammed Chikouche, K. Hartani
Abstract:
Based on the analysis of basic direct torque control, a parallel master-slave control for four in-wheel permanent magnet synchronous motors (PMSM) fed by two three-phase inverters and used in an electric vehicle is proposed in this paper. A conventional system with multiple inverters and multiple machines comprises a three-phase inverter for each machine to be controlled. Another approach consists in using only one three-phase inverter to supply several permanent magnet synchronous machines. A modified direct torque control (DTC) algorithm is used for the control of the bi-machine traction system. Simulation results show that the proposed control strategy is well adapted for the synchronism of this system and provides good speed tracking performance. Keywords: electric vehicle, multi-machine single-inverter system, multi-machine multi-inverter control, in-wheel motor, master-slave control
Procedia PDF Downloads 221
2067 Renovate to nZEB of an Existing Building in the Mediterranean Area: Analysis of the Use of Renewable Energy Sources for the HVAC System
Authors: M. Baratieri, M. Beccali, S. Corradino, B. Di Pietra, C. La Grassa, F. Monteleone, G. Morosinotto, G. Puglisi
Abstract:
The energy renovation of existing buildings represents an important opportunity to increase the decarbonization and the sustainability of urban environments. In this context, the work carried out has the objective of demonstrating the technical and economic feasibility of an energy renovation of a public office building located on the island of Lampedusa in the Mediterranean Sea. By applying the Italian transpositions of European Directives 2010/31/EU and 2009/28/EC, the building has been renovated from its current energy requirement of 111.7 kWh/m² to 16.4 kWh/m². The result achieved classifies the building as nZEB (nearly Zero Energy Building) according to the Italian national definition. The analysis was carried out using in parallel a quasi-stationary software tool, normally used in the professional field, and a dynamic simulation model often used in the academic world. The proposed interventions cover the components of the building’s envelope, the heating-cooling system, and the supply of energy from renewable sources. On these latter points, the analysis has focused more on assessing two aspects that affect the supply of renewable energy. The first concerns the use of advanced logic control systems for air conditioning units in order to increase photovoltaic self-consumption. With these adjustments, a considerable increase in photovoltaic self-consumption and a decrease in the electricity exported to the island's electricity grid have been obtained. The second point concerned the evaluation of the building's energy classification considering the real efficiency of the heating-cooling plant. Energy plants normally have lower operational efficiency than designed due to multiple reasons; the resulting decrease in the energy classification of the building has been quantified. This study represents an important example for the evaluation of the best interventions for the energy renovation of buildings in the Mediterranean climate and a good description of the correct methodology to evaluate the resulting improvements. Keywords: heat pumps, HVAC systems, nZEB renovation, renewable energy sources
Procedia PDF Downloads 451
2066 Transmit Power Optimization for Cooperative Beamforming in Reverse-Link MIMO Ad-Hoc Networks
Authors: Younghyun Jeon, Seungjoo Maeng
Abstract:
In ad-hoc networks, the great interest in MIMO schemes has led to their combination, which is also utilized in applicable networks. We frame the problem as a Reverse-link MIMO Ad-hoc Network (RMAN) and propose a methodology to maximize the data rate with respect to power consumption using a node-cooperative beamforming technique. Based on the result of the mathematical optimization formulation, we design an algorithm that constructs the optimal orthogonal weight vector according to channel feedback and controls the transmission power according to the QoS-pricing value level. Simulation results show the validity of the proposed mathematical optimization result and algorithm, in that the sum-rate of each link converges to a fixed point. Keywords: ad-hoc network, MIMO, cooperative beamforming, transmit power
Procedia PDF Downloads 398
2065 A Performance Comparison between Conventional and Flexible Box Erecting Machines Using Dispatching Rules
Authors: Min Kyu Kim, Eun Young Lee, Dong Woo Son, Yoon Seok Chang
Abstract:
In this paper, we introduce a flexible box erecting machine (BEM) that swiftly and automatically transforms cardboard into a three-dimensional box. Recently, the parcel service and home-shopping industries have grown rapidly, and there is an increasing need for various box types to ship various products. However, workers cannot fold thousands of boxes manually in a day. As such, automatic BEMs are garnering greater attention. This study takes equipment operation into consideration as well as mechanical improvements in order to design a BEM that is able to outperform its conventional counterparts. We analyzed six dispatching rules – First In First Out (FIFO), Shortest Processing Time (SPT), Earliest Due Date (EDD), Setup Avoidance, EDD + SPT, and EDD + Setup Avoidance – to determine which one was most suitable for BEM operation. Consequently, SPT and Setup Avoidance were found to be the most critical rules, followed by EDD + Setup Avoidance, EDD + SPT, EDD, and FIFO. This hierarchy was valid for both our conventional BEM and our new flexible BEM from the viewpoint of processing time. We believe that this research can contribute to flexible BEM management, which has the potential to increase productivity and convenience. Keywords: automation, box erecting machine, dispatching rule, setup time
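To illustrate how such dispatching rules differ in practice, the sketch below orders a small hypothetical job queue under FIFO, SPT, EDD, and a simple setup-avoidance rule; the job data and the greedy setup logic are assumptions for illustration, not the authors' experimental setup.

```python
# Hypothetical job queue: (job_id, arrival_order, processing_time, due_date, box_type)
jobs = [
    ("J1", 0, 5, 20, "A"),
    ("J2", 1, 2, 15, "B"),
    ("J3", 2, 8, 30, "A"),
    ("J4", 3, 3, 10, "B"),
]

fifo = sorted(jobs, key=lambda j: j[1])   # First In First Out
spt = sorted(jobs, key=lambda j: j[2])    # Shortest Processing Time
edd = sorted(jobs, key=lambda j: j[3])    # Earliest Due Date

def setup_avoidance(jobs, current_type="A"):
    """Greedy rule: keep picking jobs of the current box type to avoid setups."""
    remaining, order = list(jobs), []
    while remaining:
        same = [j for j in remaining if j[4] == current_type]
        nxt = same[0] if same else remaining[0]
        order.append(nxt)
        remaining.remove(nxt)
        current_type = nxt[4]
    return order

for name, seq in [("FIFO", fifo), ("SPT", spt), ("EDD", edd),
                  ("SetupAvoid", setup_avoidance(jobs))]:
    print(name, [j[0] for j in seq])
```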
Procedia PDF Downloads 363
2064 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution
Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone
Abstract:
The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically minimizing reconstruction errors like mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation. We considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%. Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder
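The detection idea described above, flagging inputs whose reconstruction error is unusually high, can be sketched as follows; this minimal dense autoencoder and the 3-sigma threshold are simplifying assumptions and not the multi-modal spectral architecture used in the paper.

```python
import torch
import torch.nn as nn

# Minimal dense autoencoder; the real model is multi-modal and works on
# spectral RGB representations, so the sizes and layers here are assumptions.
class AE(nn.Module):
    def __init__(self, dim=3 * 64 * 64, latent=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(dim, latent), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(latent, dim), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x)).view_as(x)

def reconstruction_error(model, x):
    """Per-image MSE; high values flag likely adversarial inputs."""
    with torch.no_grad():
        recon = model(x)
    return ((recon - x) ** 2).flatten(1).mean(dim=1)

model = AE()
benign = torch.rand(8, 3, 64, 64)             # stand-in for benign images
errors = reconstruction_error(model, benign)
threshold = errors.mean() + 3 * errors.std()  # illustrative decision rule
print((errors > threshold).tolist())
```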
Procedia PDF Downloads 113
2063 Performance Study of ZigBee-Based Wireless Sensor Networks
Authors: Afif Saleh Abugharsa
Abstract:
The IEEE 802.15.4 standard is designed for low-rate wireless personal area networks (LR-WPAN) with a focus on enabling wireless sensor networks. It aims to provide low data rate, low power consumption, and low cost wireless networking at the device-level communication. The objective of this study is to investigate the performance of IEEE 802.15.4 based networks using a simulation tool. In this project, the network simulator 2 (NS2) was used to obtain several performance measures of wireless sensor networks. Three scenarios were considered: a multi-hop network with a single coordinator, a star topology, and an ad-hoc on-demand distance vector (AODV) scenario. Results such as packet delivery ratio, hop delay, and number of collisions are obtained from these scenarios. Keywords: ZigBee, wireless sensor networks, IEEE 802.15.4, low power, low data rate
Procedia PDF Downloads 433
2062 A Survey on Intelligent Techniques Based Modelling of Size Enlargement Process for Fine Materials
Authors: Mohammad Nadeem, Haider Banka, R. Venugopal
Abstract:
Granulation or agglomeration is a size enlargement process that transforms fine particulates into larger aggregates, since the fine size of available materials and minerals poses difficulty in their utilization. Though a long list of methods is available in the literature for the modeling of the granulation process to facilitate in-depth understanding and interpretation of the system, there is still scope for improvement using novel tools and techniques. Intelligent techniques, such as artificial neural networks, fuzzy logic, self-organizing maps, support vector machines and others, have emerged as compelling alternatives for dealing with imprecision and the complex non-linearity of these systems. The present study reviews the applications of intelligent techniques in the modeling of the size enlargement process for fine materials. Keywords: fine material, granulation, intelligent technique, modelling
Procedia PDF Downloads 374
2061 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam
Authors: Sahand Golmohammadi, Sana Hosseini Shirazi
Abstract:
Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, various methods of rock engineering classification are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and determination of support systems of underground structures in rock, including tunnels. In this method, six main parameters of the rock mass are required, namely, the rock quality designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water parameter (Jw) and stress reduction factor (SRF). In this regard, in order to achieve a reasonable and optimal design, identifying the parameters that are effective for the stability of the mentioned structures is one of the most important goals and the most necessary actions in rock engineering. Therefore, it is necessary to study the relationships between the parameters of a system, how they interact with each other and, ultimately, the whole system. In this research, an attempt has been made to determine the most effective parameters (key parameters) among the six rock mass parameters of the Q-system using the rock engineering system (RES) method, in order to improve the relationships between the parameters in the calculation of the Q value. The RES is, in fact, a method by which one can determine the degree of cause and effect of a system's parameters by constructing an interaction matrix. In this research, the geomechanical data collected from the water conveyor tunnel of Azad Dam were used to construct the interaction matrix of the Q-system. For this purpose, instead of using the conventional methods, which are always accompanied by defects such as uncertainty, the Q-system interaction matrix is coded using a technique that is essentially a statistical analysis of the data and a determination of the correlation coefficients between them. In this way, the effect of each parameter on the system is evaluated with greater certainty. The results of this study show that the formed interaction matrix provides a reasonable estimate of the effective parameters in the Q-system. Among the six parameters of the Q-system, the SRF and Jr parameters have the maximum and minimum effect on the system (cause), respectively, while the RQD and Jw parameters are the most and least affected by the system (effect), respectively. Therefore, by developing this method, we can obtain a more accurate rock mass classification relation by weighting the required parameters in the Q-system. Keywords: Q-system, rock engineering system, statistical analysis, rock mass, tunnel
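For reference, the Q value combines the six parameters as Q = (RQD/Jn) x (Jr/Ja) x (Jw/SRF), and the RES cause-effect ranking comes from row and column sums of the interaction matrix; the sketch below illustrates both, with an invented interaction matrix rather than the coded matrix derived from the Azad Dam data.

```python
import numpy as np

def q_value(rqd, jn, jr, ja, jw, srf):
    """Barton's Q-system rating: Q = (RQD/Jn) * (Jr/Ja) * (Jw/SRF)."""
    return (rqd / jn) * (jr / ja) * (jw / srf)

print(round(q_value(rqd=75, jn=9, jr=1.5, ja=2, jw=1, srf=2.5), 2))

# Illustrative RES interaction matrix for the six Q parameters
# (RQD, Jn, Jr, Ja, Jw, SRF); off-diagonal entries code how strongly the row
# parameter influences the column parameter. All values here are invented.
labels = ["RQD", "Jn", "Jr", "Ja", "Jw", "SRF"]
M = np.array([
    [0, 2, 1, 0, 0, 1],
    [2, 0, 1, 1, 1, 2],
    [1, 1, 0, 2, 1, 1],
    [0, 1, 2, 0, 1, 1],
    [0, 1, 1, 1, 0, 2],
    [1, 2, 1, 1, 2, 0],
])
cause = M.sum(axis=1)    # how much each parameter drives the system
effect = M.sum(axis=0)   # how much each parameter is driven by the system
for lab, c, e in zip(labels, cause, effect):
    print(f"{lab}: cause={c}, effect={e}, interactivity={c + e}")
```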
Procedia PDF Downloads 73
2060 Performance Evaluation of Distributed and Co-Located MIMO LTE Physical Layer Using Wireless Open-Access Research Platform
Authors: Ishak Suleiman, Ahmad Kamsani Samingan, Yeoh Chun Yeow, Abdul Aziz Bin Abdul Rahman
Abstract:
In this paper, we evaluate the benefits of a distributed 4x4 MIMO LTE downlink system compared to those of a co-located 4x4 MIMO LTE downlink system. The performance evaluation was carried out experimentally by using the Wireless Open-Access Research Platform (WARP), where the 4x4 MIMO LTE downlink transmission was examined in both the distributed and co-located configurations. The measured Error Vector Magnitude (EVM) results showed that the distributed technique achieved better system performance compared to the co-located arrangement. Keywords: multiple-input-multiple-output (MIMO), distributed MIMO, co-located MIMO, LTE
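As a reference for the metric used above, EVM compares received symbols against the ideal transmitted constellation; the sketch below computes it for a toy QPSK stream with additive noise, which is only a stand-in for the WARP measurements.

```python
import numpy as np

def evm_percent(tx_symbols, rx_symbols):
    """Error Vector Magnitude as a percentage of the reference RMS power."""
    err = rx_symbols - tx_symbols
    return 100 * np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(tx_symbols) ** 2))

# Toy QPSK example with additive noise standing in for the measured downlink.
rng = np.random.default_rng(0)
tx = (rng.choice([-1, 1], 1000) + 1j * rng.choice([-1, 1], 1000)) / np.sqrt(2)
rx = tx + 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
print(f"EVM = {evm_percent(tx, rx):.2f}%")
```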
Procedia PDF Downloads 423
2059 The Effect of Dynamic Eccentricity on the Stator Current Spectrum of 550 kW Induction Motor
Authors: Saleh Elawgali
Abstract:
In order to present the effect of dynamic eccentricity on the stator currents of squirrel cage induction machines, the current spectra of a 550 kW induction motor were calculated for the cases of full symmetry and dynamic eccentricity. The calculations presented in this paper are based on the Poly-Harmonic Model, accounting for static and dynamic eccentricity, stator and rotor slotting, parallel branches, as well as cage asymmetry. The calculations were followed by Fourier analysis of the stator currents in steady state operation. The paper presents the stator current spectra for the full symmetry and dynamic eccentricity cases and demonstrates the harmonics present in each case. The effect of dynamic eccentricity is demonstrated by comparing the current spectra of the dynamic eccentricity cases with that of the full symmetry case. Keywords: current spectrum, dynamic eccentricity, harmonics, induction machine, slot harmonic zone
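The Fourier analysis step can be illustrated as follows: a synthetic stator current with small eccentricity-like sidebands is transformed with an FFT and its dominant spectral lines are listed; the sampling rate, sideband frequencies, and amplitudes are illustrative assumptions, not the Poly-Harmonic Model output.

```python
import numpy as np

fs = 10_000            # sampling frequency [Hz], assumed
t = np.arange(0, 1.0, 1 / fs)
f_supply = 50          # supply frequency [Hz]

# Synthetic stator current: fundamental plus small sidebands such as those
# introduced by dynamic eccentricity (frequencies/amplitudes are illustrative).
i_stator = (np.sin(2 * np.pi * f_supply * t)
            + 0.02 * np.sin(2 * np.pi * (f_supply - 25) * t)
            + 0.02 * np.sin(2 * np.pi * (f_supply + 25) * t))

spectrum = np.abs(np.fft.rfft(i_stator)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=1 / fs)

# Report the strongest spectral lines (harmonic content of the current).
for idx in np.argsort(spectrum)[-5:][::-1]:
    print(f"{freqs[idx]:7.1f} Hz  amplitude {spectrum[idx]:.4f}")
```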
Procedia PDF Downloads 400
2058 Assessment of Rangeland Condition in a Dryland System Using UAV-Based Multispectral Imagery
Authors: Vistorina Amputu, Katja Tielboerger, Nichola Knox
Abstract:
Primary productivity in dry savannahs is constrained by moisture availability and is under increasing anthropogenic pressure. Thus, considering climate change and the unprecedented pace and scale of rangeland deterioration, methods for assessing the status of such rangelands should be easy to apply and should yield reliable, repeatable results that can be applied over large spatial scales. Global and local scale monitoring of rangelands, through satellite data and labor-intensive field measurements respectively, is limited in accurately assessing the spatiotemporal heterogeneity of vegetation dynamics and thus in providing the crucial information that detects degradation in its early stages. Fortunately, newly emerging techniques such as unmanned aerial vehicles (UAVs), associated miniaturized sensors and improving digital photogrammetric software provide an opportunity to transcend these limitations. Yet, they have not been extensively calibrated in natural systems to encompass their complexities if they are to be integrated for long-term monitoring. Limited research using drone technology has been conducted in arid savannahs, for example, to assess the health status of this dynamic two-layer vegetation ecosystem. In our study, we fill this gap by testing the relationship between UAV-estimated cover of rangeland functional attributes and field data collected in discrete sample plots in a Namibian dryland savannah along a degradation gradient. The first results are based on a supervised classification performed on the ultra-high resolution multispectral imagery to distinguish between rangeland functional attributes (bare, non-woody, and woody), with a relatively good match to the field observations. Integrating UAV-based observations to improve rangeland monitoring could greatly assist climate-adapted rangeland management. Keywords: arid savannah, degradation gradient, field observations, narrow-band sensor, supervised classification
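A supervised classification of multispectral pixels into the three functional attributes could be sketched as below; the random band reflectances stand in for the UAV imagery and field labels, so the numbers are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Toy pixel table: columns are multispectral band reflectances (e.g. green,
# red, red-edge, NIR); labels are the rangeland attributes. Values are synthetic.
rng = np.random.default_rng(42)
X = rng.random((300, 4))
y = rng.choice(["bare", "non-woody", "woody"], 300)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```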
Procedia PDF Downloads 134
2057 Predicting Football Player Performance: Integrating Data Visualization and Machine Learning
Authors: Saahith M. S., Sivakami R.
Abstract:
In the realm of football analytics, particularly focusing on predicting football player performance, the ability to forecast player success accurately is of paramount importance for teams, managers, and fans. This study introduces an elaborate examination of predicting football player performance through the integration of data visualization methods and machine learning algorithms. The research entails the compilation of an extensive dataset comprising player attributes, conducting data preprocessing, feature selection, model selection, and model training to construct predictive models. The analysis within this study will involve delving into feature significance using methodologies like Select Best and Recursive Feature Elimination (RFE) to pinpoint pertinent attributes for predicting player performance. Various machine learning algorithms, including Random Forest, Decision Tree, Linear Regression, Support Vector Regression (SVR), and Artificial Neural Networks (ANN), will be explored to develop predictive models. The evaluation of each model's performance utilizing metrics such as Mean Squared Error (MSE) and R-squared will be executed to gauge their efficacy in predicting player performance. Furthermore, this investigation will encompass a top player analysis to recognize the top-performing players based on the anticipated overall performance scores. Nationality analysis will entail scrutinizing the player distribution based on nationality and investigating potential correlations between nationality and player performance. Positional analysis will concentrate on examining the player distribution across various positions and assessing the average performance of players in each position. Age analysis will evaluate the influence of age on player performance and identify any discernible trends or patterns associated with player age groups. The primary objective is to predict a football player's overall performance accurately based on their individual attributes, leveraging data-driven insights to enrich the comprehension of player success on the field. By amalgamating data visualization and machine learning methodologies, the aim is to furnish valuable tools for teams, managers, and fans to effectively analyze and forecast player performance. This research contributes to the progression of sports analytics by showcasing the potential of machine learning in predicting football player performance and offering actionable insights for diverse stakeholders in the football industry.Keywords: football analytics, player performance prediction, data visualization, machine learning algorithms, random forest, decision tree, linear regression, support vector regression, artificial neural networks, model evaluation, top player analysis, nationality analysis, positional analysis
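A compressed sketch of the workflow described above, feature selection with SelectKBest and RFE followed by a regression model evaluated with MSE and R-squared, is given below; the synthetic attribute table and chosen parameters are assumptions rather than the study's actual dataset.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, RFE, f_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Synthetic stand-in for a player-attribute table (pace, passing, age, ...).
X, y = make_regression(n_samples=500, n_features=12, n_informative=6, noise=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

# Two feature-selection strategies, as in the abstract.
k_best = SelectKBest(score_func=f_regression, k=6).fit(X_tr, y_tr)
rfe = RFE(LinearRegression(), n_features_to_select=6).fit(X_tr, y_tr)
print("SelectKBest picks:", np.flatnonzero(k_best.get_support()))
print("RFE picks:        ", np.flatnonzero(rfe.get_support()))

# One of the candidate regressors, evaluated with MSE and R-squared.
model = RandomForestRegressor(n_estimators=300, random_state=1).fit(X_tr[:, k_best.get_support()], y_tr)
pred = model.predict(X_te[:, k_best.get_support()])
print("MSE:", mean_squared_error(y_te, pred), "R2:", r2_score(y_te, pred))
```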
Procedia PDF Downloads 38
2056 Rapid Monitoring of Earthquake Damages Using Optical and SAR Data
Authors: Saeid Gharechelou, Ryutaro Tateishi
Abstract:
Earthquakes are inevitable catastrophic natural disasters. The damage to buildings and man-made structures, where most human activities occur, is the major cause of casualties from earthquakes. A comparison of optical and SAR data is presented for the case of the Kathmandu valley, which was hard hit by the 2015 Nepal earthquake. Though many existing studies have conducted optical-data-based estimation or suggested the combined use of optical and SAR data for improved accuracy, finding cloud-free optical images when they are urgently needed is not assured. Therefore, this research specializes in developing a SAR-based technique with the target of rapid and accurate geospatial reporting. Considering the limited time available in a post-disaster situation, it offers quick computation based exclusively on two pairs of pre-seismic and co-seismic single look complex (SLC) images. The pre-seismic, co-seismic and post-seismic InSAR coherence was used to detect the change in the damaged area. In addition, ground truth data from the field were applied to the optical data through random forest classification for detection of the damaged area. The ground truth data collected in the field were used to assess the accuracy of the supervised classification approach. A higher accuracy was obtained from the optical data than from the integrated optical-SAR data. Since the availability of cloud-free images when urgently needed for an earthquake event is not assured, further research on improving SAR-based damage detection is suggested. It is expected that quick reporting of the post-disaster damage situation, quantified by the rapid earthquake assessment, should assist in channeling the rescue and emergency operations and in informing the public about the scale of damage. Keywords: Sentinel-1A data, Landsat-8, earthquake damage, InSAR, rapid damage monitoring, 2015-Nepal earthquake
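The coherence-change step can be sketched as a simple difference between pre-seismic and co-seismic coherence maps; the synthetic maps and the 0.3 drop threshold below are assumptions for illustration only.

```python
import numpy as np

# Toy coherence maps (values in [0, 1]) standing in for pre-seismic and
# co-seismic InSAR coherence; the damage threshold of 0.3 is an assumption.
rng = np.random.default_rng(1)
coh_pre = np.clip(0.8 + 0.1 * rng.standard_normal((100, 100)), 0, 1)
coh_co = coh_pre.copy()
coh_co[40:60, 40:60] -= 0.5          # simulated coherence loss over damaged blocks
coh_co = np.clip(coh_co, 0, 1)

coherence_drop = coh_pre - coh_co
damage_mask = coherence_drop > 0.3   # pixels flagged as potentially damaged
print("flagged pixels:", int(damage_mask.sum()), "of", damage_mask.size)
```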
Procedia PDF Downloads 172
2055 Optimization of Air Pollution Control Model for Mining
Authors: Zunaira Asif, Zhi Chen
Abstract:
Sustainable air quality management is recognized as one of the most serious environmental concerns in mining regions. Mining operations emit various types of pollutants which have significant impacts on the environment. This study presents a stochastic control strategy, developing an air pollution control model to achieve a cost-effective solution. The optimization method is formulated to predict the cost of treatment using linear programming with an objective function and multiple constraints. The constraints mainly focus on two factors: the production of metal should not exceed the available resources, and air quality should meet the standard criteria for each pollutant. The applicability of this model is explored through a case study of an open pit metal mine in Utah, USA. The method simultaneously uses meteorological data as a dispersion transfer function to reflect the practical local conditions. The probabilistic analysis and the uncertainties in the meteorological conditions are handled by Monte Carlo simulation. Reasonable results have been obtained to select the optimized treatment technology for PM2.5, PM10, NOx, and SO2. An additional comparison analysis shows that the baghouse is the least-cost option for particulate matter, compared to the electrostatic precipitator and wet scrubbers, whereas non-selective catalytic reduction and dry flue gas desulfurization are suitable for NOx and SO2 reduction, respectively. Thus, this model can aid planners in reducing these pollutants at a marginal cost by suggesting pollution control devices, while accounting for dynamic meteorological conditions and mining activities. Keywords: air pollution, linear programming, mining, optimization, treatment technologies
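A minimal sketch of the linear programming formulation, minimizing treatment cost subject to an emission-removal constraint, is shown below with scipy; the device costs, removal efficiencies, and the 95% removal target are invented placeholders, not the calibrated values from the case study.

```python
from scipy.optimize import linprog

# Decision variables: fraction of flue gas routed to each control device
# [baghouse, electrostatic precipitator, wet scrubber]; all numbers are
# illustrative, not the paper's calibrated values.
cost = [1.0, 1.8, 2.4]                  # relative annualised cost per unit treated
pm_removal = [0.99, 0.97, 0.90]         # PM removal efficiency of each device

# Minimise cost subject to: total treated fraction = 1, overall PM removal >= 0.95.
A_eq, b_eq = [[1, 1, 1]], [1.0]
A_ub = [[-e for e in pm_removal]]       # -sum(eff * x) <= -0.95  <=>  removal >= 0.95
b_ub = [-0.95]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 3, method="highs")
print(res.x, res.fun)
```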
Procedia PDF Downloads 208
2054 Changing Emphases in Mental Health Research Methodology: Opportunities for Occupational Therapy
Authors: Jeffrey Chase
Abstract:
Historically, the profession of Occupational Therapy was closely tied to the treatment of those suffering from mental illness; more recently, and especially in the U.S., the percentage of OTs identifying as working in the mental health area has declined significantly, despite the estimate that by 2020 behavioral health disorders will surpass physical illnesses as the major cause of disability worldwide. In the U.S., less than 10% of OTs identify themselves as working with the mentally ill and/or practicing in mental health settings. Such a decline has implications for both those suffering from mental illness and the profession of Occupational Therapy. One reason cited for the decline of OT in mental health has been the limited research in the discipline addressing mental health practice. Despite significant advances in technology and growth in the field of neuroscience, major institutions and funding sources such as the National Institute of Mental Health (NIMH) have noted that research into the etiology and treatment of mental illness has met with limited success over the past 25 years. One major reason posited by NIMH is that research has been limited by how we classify individuals, which is based mostly on what is observable. A new classification system being developed by NIMH, the Research Domain Criteria (RDoC), aims to look beyond mere descriptors of disorders for common neural, genetic, and physiological characteristics that cut across multiple supposedly separate disorders. The hope is that by classifying individuals along RDoC measures, both reliability and validity will improve, resulting in greater advances in the field. As a result of this change, NIH and NIMH will prioritize research funding for projects using the RDoC model. Multiple disciplines across many different settings will be required for RDoC or similar classification systems to be developed. During this shift in research methodology, OT has an opportunity to reassert itself into the research and treatment of mental illness, both by developing new ways to more validly classify individuals and by documenting the legitimacy of previously ill-defined and poorly validated disorders such as sensory integration disorder. Keywords: global mental health and neuroscience, research opportunities for ot, greater integration of ot in mental health research, research and funding opportunities, research domain criteria (rdoc)
Procedia PDF Downloads 275
2053 Automatic Diagnosis of Electrical Equipment Using Infrared Thermography
Authors: Y. Laib Dit Leksir, S. Bouhouche
Abstract:
Analysis and processing of the databases resulting from infrared thermal measurements made on electrical installations requires the development of new tools in order to obtain correct information additional to that from visual inspections. Consequently, methods based on the capture of infrared digital images show great potential and are employed increasingly in various fields. However, there is an enormous need for the development of effective techniques to analyse these databases in order to extract relevant information relating to the state of the equipment. Our goal consists in introducing recent modeling techniques based on new methods of image and signal processing to develop mathematical models in this field. The aim of this work is to capture the anomalies existing in electrical equipment during an inspection of some machines using a FLIR A40 camera. Afterwards, we use binarisation techniques in order to select the region of interest, and we compare the thermal-image results obtained with these methods to choose the best one. Keywords: infrared thermography, defect detection, troubleshooting, electrical equipment
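As an illustration of the binarisation step, the sketch below applies a hand-coded Otsu threshold to a synthetic thermal image to isolate a hot region of interest; the image values are invented, and this is only one of several binarisation options the authors compare.

```python
import numpy as np

def otsu_threshold(img):
    """Compute a global Otsu threshold for an 8-bit thermal image array."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Synthetic "thermal image": a warm region (hot spot) on a cooler background.
img = np.full((64, 64), 90, dtype=np.uint8)
img[20:30, 20:30] = 200
t = otsu_threshold(img)
mask = img > t          # binarised region of interest (candidate anomaly)
print("threshold:", t, "hot pixels:", int(mask.sum()))
```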
Procedia PDF Downloads 476
2052 Automated Prediction of HIV-associated Cervical Cancer Patients Using Data Mining Techniques for Survival Analysis
Authors: O. J. Akinsola, Yinan Zheng, Rose Anorlu, F. T. Ogunsola, Lifang Hou, Robert Leo-Murphy
Abstract:
Cervical Cancer (CC) is the 2nd most common cancer among women living in low and middle-income countries, with no associated symptoms during its formative stages. With advancing and innovative medical research, numerous preventive measures are being utilized, but the incidence of cervical cancer cannot be curtailed by the application of screening tests alone. The mortality associated with invasive cervical cancer can be nipped in the bud through early-stage detection. This study selected an array of different top feature selection techniques aimed at developing a model that could validly diagnose the risk factors of cervical cancer. A retrospective clinic-based cohort study was conducted on 178 HIV-associated cervical cancer patients in Lagos University Teaching Hospital, Nigeria (U54 data repository) in April 2022. The outcome measure was the automated prediction of HIV-associated cervical cancer cases, while the predictor variables included demographic information, reproductive history, birth control, sexual history, and cervical cancer screening history for invasive cervical cancer. The proposed technique was assessed with the R and Python programming languages to produce the model by utilizing classification algorithms for the detection and diagnosis of cervical cancer disease. Four machine learning classification algorithms were used: Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), and K-Nearest Neighbor (KNN). The data were split into training and testing sets in the ratio 80:20, the numerical features were standardized, and hyperparameter tuning was carried out to train and test the machine learning models. Fitting features were selected for the detection and diagnosis of cervical cancer from the characteristics in the dataset, using the contribution of various selection methods, for the classification of cervical cancer into healthy or diseased status. The mean age of patients was 49.7±12.1 years, the mean age at pregnancy was 23.3±5.5 years, the mean age at first sexual experience was 19.4±3.2 years, while the mean BMI was 27.1±5.6 kg/m2. A larger percentage of the patients were married (62.9%), while most of them had at least two sexual partners (72.5%). Age of patients (OR=1.065, p<0.001**), marital status (OR=0.375, p=0.011**), number of pregnancy live-births (OR=1.317, p=0.007**), and use of birth control pills (OR=0.291, p=0.015**) were found to be significantly associated with HIV-associated cervical cancer. On the top ten features (variables) considered in the analysis, RF gave the best overall model performance, with an accuracy of 72.0%, a precision of 84.6%, a recall of 84.6% and an F1-score of 74.0%, while LR had an accuracy of 74.0%, a precision of 70.0%, a recall of 70.0% and an F1-score of 70.0%. The RF model identified 10 features predictive of developing cervical cancer. The age of patients was considered the most important risk factor, followed by the number of pregnancy live-births, marital status, and use of birth control pills. The study shows that data mining techniques could be used to identify women living with HIV at high risk of developing cervical cancer in Nigeria and other sub-Saharan African countries. Keywords: associated cervical cancer, data mining, random forest, logistic regression
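A condensed sketch of the modeling workflow, 80:20 split, standardization, and two of the four classifiers with accuracy, precision, recall, and F1 reported, is given below; the synthetic clinical table and hyperparameter grid are assumptions, not the U54 repository data.

```python
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Synthetic stand-in for the clinical table (age, parity, birth-control use, ...).
rng = np.random.default_rng(0)
X = rng.random((178, 10))
y = rng.integers(0, 2, 178)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)

models = {
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "RF": GridSearchCV(RandomForestClassifier(random_state=0),
                       {"n_estimators": [100, 300], "max_depth": [3, None]}, cv=5),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name,
          f"acc={accuracy_score(y_te, pred):.2f}",
          f"prec={precision_score(y_te, pred):.2f}",
          f"rec={recall_score(y_te, pred):.2f}",
          f"f1={f1_score(y_te, pred):.2f}")
```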
Procedia PDF Downloads 84
2051 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models
Authors: V. Mantey, N. Findlay, I. Maddox
Abstract:
The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to manually examine. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in populated regions. The primary challenge with detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials will be difficult to separate due to a similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that training models until they are overfitting the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires fewer input data. The model developed for this study has also been fine-tuned using existing, open-source, building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.Keywords: building detection, disaster relief, mask-RCNN, satellite mapping
Procedia PDF Downloads 169
2050 Outcome of Unilateral Retinoblastoma: A Ten Years Experience of Children's Cancer Hospital Egypt
Authors: Ahmed Elhussein, Hossam El-Zomor, Adel Alieldin, Mahmoud A. Afifi, Abdullah Elhusseiny, Hala Taha, Amal Refaat, Soha Ahmed, Mohamed S. Zagloul
Abstract:
Background: A majority of children with retinoblastoma (60%) have disease in one eye only (unilateral disease). This is a retrospective study to evaluate two different treatment modalities in those patients, aimed at saving their lives and vision. Methods: Four hundred and four patients were diagnosed with unilateral intraocular retinoblastoma at Children's Cancer Hospital Egypt (CCHE) during the period from July 2007 until December 2017. Management strategies included primary enucleation versus ocular salvage treatment. Results: Patients presented at a mean age of 24.5 months (range 1.2-154.3 months). According to the international retinoblastoma classification, group D (n=172, 42%) was the most common, followed by group E (n=142, 35%), group C (n=63, 16%), and group B (n=27, 7%). All patients were alive at the end of the study except four who died, with a 5-year overall survival of 98.3% [CI, (96.5-100%)]. Patients presenting with advanced disease and poor visual prognosis (n=241, 59.6%) underwent primary enucleation, with 6 cycles of adjuvant chemotherapy if they had high-risk features in the enucleated eye; only four patients out of the 241 ended up with either extraocular metastasis (n=3) or death (n=1). Systemic chemotherapy and focal therapy were the primary treatment for those who presented with a favorable disease status and good visual prognosis (n=163, 40.4%); seventy-seven of them (47%) ended up with a pre-defined event (enucleation, EBRT, off-protocol chemotherapy or secondary malignancy). Ocular survival for patients who received primary chemotherapy + focal therapy was [50.9% (CI, 43.5-59.6%)] at 3 years and [46.9% (CI, 39.3-56%)] at 5 years. Comparison between upfront enucleation and primary chemotherapy for the occurrence of extraocular metastasis revealed no statistical difference between them except in group D (p value). For the occurrence of death, there was no statistical difference in any classification group. Conclusion: In retinoblastoma, primary chemotherapy is a reasonable option and has a good probability of ocular salvage without increasing the risk of metastasis in comparison to upfront enucleation, except in group D. Keywords: CCHE, chemotherapy, enucleation, retinoblastoma
Procedia PDF Downloads 155
2049 Evaluation of the Efficacy of Basic Life Support Teaching in Second and Third Year Medical Students
Authors: Bianca W. O. Silva, Adriana C. M. Andrade, Gustavo C. M. Lucena, Virna M. S. Lima
Abstract:
Introduction: Basic life support (BLS) involves the immediate recognition of cardiopulmonary arrest. Each year, 359,400 and 275,000 individuals with cardiac arrest are attended to in emergency departments in the USA and Europe, respectively. Brazilian data show that 200,000 cardiac arrests occur every year, half of them outside the hospital. Medical schools around the world teach BLS in the first years of the course, but studies show that there is a decline in this knowledge as the years go by, affecting the chain of survival. The objective was to analyze the knowledge of medical students about BLS and the retention of this learning throughout the course. Methods: This study included 150 students who were in the second and third years of a medical school in Salvador, Bahia, Brazil. The instrument of data collection was a structured questionnaire composed of 20 questions based on the 2015 American Heart Association guideline. The Pearson chi-square test was used in order to study the association of previous training, sex and semester with the degree of knowledge of the students. The Kruskal-Wallis test was used to evaluate the different scores obtained across the various semesters. The number of correct answers was described by the average and quartiles. Results: Regarding the degree of knowledge, 19.6% of the female students reached the optimal classification, a better outcome than that achieved by the male participants. Of those with previous training, 33.33% were classified as good or optimal; none of the other students reached the optimal classification, and only 2.2% of them were classified as bad (those who did not reach 52.6% of correct answers). The analysis of the degree of knowledge for each semester revealed that the 5th semester had the highest outcome: 30.5%. However, the knowledge presented by the semesters was generally unsatisfactory, since 50% of the students, or more, demonstrated knowledge levels classified as bad or regular. When comparing the different semesters and the achieved scores, the p-value was 0.831. Conclusion: It is important to focus on the training of medical professionals who are capable of facing emergency situations, improving the systematization of care, and thereby increasing the victims' possibility of survival. Keywords: basic life support, cardiopulmonary resuscitation, education, medical students
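For reference, the two tests mentioned above can be run as in the sketch below; the contingency table and per-semester score lists are invented examples, not the study's data.

```python
from scipy.stats import chi2_contingency, kruskal

# Hypothetical contingency table: previous BLS training (rows) vs. knowledge
# classification (columns: bad/regular, good, optimal). Counts are invented.
table = [[40, 25, 10],
         [30, 25, 20]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")

# Hypothetical score lists per semester for the Kruskal-Wallis comparison.
sem3 = [10, 12, 11, 13, 9]
sem4 = [11, 13, 12, 14, 10]
sem5 = [12, 14, 13, 15, 11]
stat, p_kw = kruskal(sem3, sem4, sem5)
print(f"H={stat:.2f}, p={p_kw:.3f}")
```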
Procedia PDF Downloads 186
2048 Hydrodynamic Analysis of Journal Bearing Operating With Nanolubricants
Authors: R. Hariprakash, K. Prabhakaran Nair
Abstract:
In this paper, the static and dynamic characteristics of hydrodynamic journal bearings operating under nano lubricants are presented. Hydrodynamic journal bearings are used in the turbo machines of power plants to support heavy loads. In power plants, bearings fail because of their inability to support the heavy load, due to various reasons. Bearing failures force the power plant to be shut down. The load carrying capacity of a journal bearing mainly depends upon the viscosity of the lubricant. The addition of nano particles to a commercially available lubricant may enhance the viscosity of the lubricant and, in turn, change the performance characteristics. In the literature, though many studies have been carried out for hydrodynamic bearings operating under Newtonian and non-Newtonian lubricants, studies on hydrodynamic bearings operating under nano lubricants are scarce. Thus, it is felt that there is a need to recompute the performance characteristics of journal bearings operating under nano lubricants. Keywords: hydrodynamic, journal, bearing, analysis
Procedia PDF Downloads 435
2047 Hardware Error Analysis and Severity Characterization in Linux-Based Server Systems
Authors: Nikolaos Georgoulopoulos, Alkis Hatzopoulos, Konstantinos Karamitsios, Konstantinos Kotrotsios, Alexandros I. Metsai
Abstract:
In modern server systems, business critical applications run on different types of infrastructure, such as cloud systems, physical machines and virtualization. Often, due to high load and over time, various hardware faults occur in servers that translate to errors, resulting in malfunction or even server breakdown. The CPU, RAM and hard drive (HDD) are the hardware parts that concern server administrators the most regarding errors. In this work, selected RAM, HDD and CPU errors that have been observed or can be simulated in kernel ring buffer log files from two groups of Linux servers are investigated. Moreover, a severity characterization is given for each error type. A better understanding of such errors can lead to more efficient analysis of the kernel logs that are usually exploited for fault diagnosis and prediction. In addition, this work summarizes ways of simulating hardware errors in RAM and HDD, in order to test the error detection and correction mechanisms of a Linux server. Keywords: hardware errors, kernel logs, Linux servers, RAM, hard disk, CPU
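One way such kernel ring buffer entries could be grouped by component and severity is sketched below; the regular expressions, the sample dmesg lines, and the severity map are illustrative assumptions rather than the full pattern set used in the work.

```python
import re
from collections import Counter

# Hypothetical patterns for a few error types seen in kernel ring buffer
# (dmesg) output, mapped to a hardware component and a severity level.
PATTERNS = {
    r"EDAC .*[CU]E memory read error": "RAM",
    r"Machine Check Exception|mce: \[Hardware Error\]": "CPU",
    r"blk_update_request: I/O error|ata\d+\.\d+: failed command": "HDD",
}
SEVERITY = {"RAM": "critical", "CPU": "critical", "HDD": "major"}

def classify(lines):
    counts = Counter()
    for line in lines:
        for pattern, component in PATTERNS.items():
            if re.search(pattern, line):
                counts[(component, SEVERITY[component])] += 1
    return counts

sample_log = [
    "[1234.5] EDAC MC0: 1 CE memory read error on CPU_SrcID#0",
    "[2345.6] blk_update_request: I/O error, dev sda, sector 1953520",
]
print(classify(sample_log))
```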
Procedia PDF Downloads 155
2046 Adopting Cloud-Based Techniques to Reduce Energy Consumption: Toward a Greener Cloud
Authors: Sandesh Achar
Abstract:
The cloud computing industry has set new goals for better service delivery and deployment, so that anyone can access services such as computation, application, and storage anytime. Cloud computing promises new possibilities for approaching sustainable solutions to deploy and advance services in this distributed environment. This work explores energy-efficient approaches and how cloud-based architecture can reduce energy consumption levels amongst enterprises leveraging cloud computing services. Adopting cloud-based networking, databases, and server machines provides a comprehensive means of achieving the potential gains in energy efficiency that cloud computing offers. In energy-efficient cloud computing, virtualization is one aspect that can integrate several technologies to achieve consolidation and better resource utilization. Moreover, the Green Cloud Architecture for cloud data centers is discussed in terms of cost, performance, and energy consumption, and appropriate solutions for various application areas are provided. Keywords: greener cloud, cloud computing, energy efficiency, energy consumption, metadata tags, green cloud advisor
Procedia PDF Downloads 87
2045 Machine Learning Techniques to Predict Cyberbullying and Improve Social Work Interventions
Authors: Oscar E. Cariceo, Claudia V. Casal
Abstract:
Machine learning offers a set of techniques to promote social work interventions and can support practitioners' decisions by predicting new behaviors based on data produced by organizations, service agencies, users, clients or individuals. Machine learning techniques include a set of generalizable algorithms that are data-driven, which means that rules and solutions are derived by examining data, based on the patterns that are present within any data set. In other words, the goal of machine learning is to teach computers through 'examples', by using training data to test specific hypotheses and predict what a certain outcome would be, based on a current scenario, and to improve on that experience. Machine learning can be classified into two general categories depending on the nature of the problem that the technique needs to tackle. First, supervised learning involves a dataset whose output is already known. Supervised learning problems are categorized into regression problems, which involve predicting quantitative variables using a continuous function, and classification problems, which seek to predict results for discrete qualitative variables. For social work research, machine learning generates predictions as a key element for improving social interventions on complex social issues by providing better inference from data and establishing more precise estimated effects, for example in services that seek to improve their outcomes. This paper presents the results of a classification algorithm to predict cyberbullying among adolescents. Data were retrieved from the National Polyvictimization Survey conducted by the government of Chile in 2017. A logistic regression model was created to predict whether an adolescent would experience cyberbullying based on the interaction and behavior of gender, age, grade, type of school, and self-esteem sentiments. The model can predict with an accuracy of 59.8% whether an adolescent will suffer cyberbullying. These results can help to promote programs to prevent cyberbullying at schools and improve evidence-based practice. Keywords: cyberbullying, evidence based practice, machine learning, social work research
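A minimal sketch of the logistic regression step is shown below, reporting coefficients as odds ratios and a predicted probability for one respondent; the synthetic survey features stand in for the Polyvictimization Survey variables.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the survey features used in the paper
# (gender, age, grade, school type, self-esteem); values are invented.
rng = np.random.default_rng(7)
df = pd.DataFrame({
    "gender": rng.integers(0, 2, 500),
    "age": rng.integers(12, 18, 500),
    "grade": rng.integers(7, 12, 500),
    "public_school": rng.integers(0, 2, 500),
    "self_esteem": rng.normal(0, 1, 500),
})
y = rng.integers(0, 2, 500)  # 1 = experienced cyberbullying

clf = LogisticRegression(max_iter=1000).fit(df, y)
odds_ratios = np.exp(clf.coef_[0])
for feature, orat in zip(df.columns, odds_ratios):
    print(f"{feature}: odds ratio = {orat:.2f}")
print("predicted risk for first respondent:", clf.predict_proba(df.iloc[[0]])[0, 1])
```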
Procedia PDF Downloads 168
2044 Computational Identification of Signalling Pathways in Protein Interaction Networks
Authors: Angela U. Makolo, Temitayo A. Olagunju
Abstract:
The knowledge of signaling pathways is central to understanding the biological mechanisms of organisms, since it has been identified that in eukaryotic organisms the number of signaling pathways determines the number of ways the organism will react to external stimuli. Signaling pathways are studied using protein interaction networks constructed from protein-protein interaction data obtained through high throughput experimental procedures. However, these high throughput methods are known to produce very high rates of false positive and false negative interactions. In order to construct a useful protein interaction network from this noisy data, computational methods are applied to validate the protein-protein interactions. In this study, a computational technique to identify signaling pathways from a protein interaction network constructed using validated protein-protein interaction data was designed. A weighted interaction graph of the Saccharomyces cerevisiae (Baker's Yeast) organism was constructed, using the proteins as the nodes and the interactions between them as edges. The weights were obtained using a Bayesian probabilistic network to estimate the posterior probability of interaction between two proteins given the gene expression measurement as biological evidence. Only interactions above a threshold were accepted for the network model. A pathway was formalized as a simple path in the interaction network from a starting protein to an ending protein of interest. We were able to identify some pathway segments, one of which is a segment of the pathway that signals the start of the process of meiosis in S. cerevisiae. Keywords: Bayesian networks, protein interaction networks, Saccharomyces cerevisiae, signalling pathways
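The path-finding idea, thresholding weighted edges and enumerating simple paths between a start and end protein, can be sketched with networkx as below; the toy edge weights and protein names are illustrative and not the Bayesian posteriors computed in the study.

```python
import networkx as nx

# Toy weighted interaction network; edge weights stand in for the posterior
# interaction probabilities estimated from gene-expression evidence.
edges = [("STE2", "STE4", 0.92), ("STE4", "STE5", 0.88), ("STE5", "FUS3", 0.81),
         ("STE4", "FUS3", 0.55), ("FUS3", "STE12", 0.90)]
threshold = 0.6

G = nx.Graph()
G.add_weighted_edges_from((u, v, w) for u, v, w in edges if w >= threshold)

# A candidate pathway segment = a simple path between a start and end protein.
for path in nx.all_simple_paths(G, source="STE2", target="STE12"):
    score = sum(G[u][v]["weight"] for u, v in zip(path, path[1:]))
    print(path, round(score, 2))
```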
Procedia PDF Downloads 545
2043 Decision Support: How Explainable A.I. Can Improve Transparency and Trust with Human Users
Authors: Devon Brown, Liu Chunmei
Abstract:
This paper presents an analysis, as part of the researcher's dissertation topic, focusing on the intersection of affective and analytical directed acyclic graphs (DAGs) in the context of Decision Support Systems (DSS). The researcher's work involves analyzing decision theory models, like affective and Bayesian decision theory models, and how they could be implemented under an Affective Computing Framework using Information Fusion and Human-Centered Design. Additionally, the researcher is beginning work on an Affective-Analytic Decision Framework (AADF) model for their dissertation research and is looking to merge logic and analytic models with empathetic insights into affective DAGs. Data-collection efforts begin in Fall 2024; in preparation for these efforts, this paper analyzes previous research in this area, introduces the AADF framework, and proposes conceptual models for consideration. For this paper, the research emphasis is placed on analyzing Bayesian networks and Markov models, which offer probabilistic techniques for handling uncertainty in decision-making. Ideally, including affect in analytic models will ensure that algorithms can increase user trust by incorporating emotional states and the user's experience, with the goal of developing emotionally intelligent A.I. systems that can begin to navigate the complex fabric of human emotion during decision-making. Keywords: decision support systems, explainable AI, HCAI techniques, affective-analytical decision framework
Procedia PDF Downloads 21
2042 Numerical Simulation of Magnetohydrodynamic (MHD) Blood Flow in a Stenosed Artery
Authors: Sreeparna Majee, G. C. Shit
Abstract:
Unsteady blood flow through stenosed arteries has been numerically investigated to gain an idea of the physiological blood flow pattern in diseased arteries. The blood is treated as a Newtonian fluid, and the arterial wall is considered to be rigid with a deposition of plaque in its lumen. For direct numerical simulation, the vorticity-stream function formulation has been adopted to solve the problem using an implicit finite difference method, by developing the well-known Peaceman-Rachford Alternating Direction Implicit (ADI) scheme. The effects of the magnetic parameter and Reynolds number on velocity and wall shear stress are studied and presented quantitatively over the entire arterial segment. The streamlines have been plotted to understand the flow pattern in the stenosed artery, which shows significant alterations downstream of the stenosis in the presence of a magnetic field. The results show that there are nominal changes in the flow pattern when the magnetic field strength is increased up to 8 T, which can have remarkable relevance for MRI machines. Keywords: magnetohydrodynamics, blood flow, stenosis, energy dissipation
Procedia PDF Downloads 276
2041 Calibration and Validation of ArcSWAT Model for Estimation of Surface Runoff and Sediment Yield from Dhangaon Watershed
Authors: M. P. Tripathi, Priti Tiwari
Abstract:
The Soil and Water Assessment Tool (SWAT) is a distributed parameter, continuous time model and was tested on daily and fortnightly bases for a small agricultural watershed (Dhangaon) of Chhattisgarh state in India. The SWAT model has recently been interfaced with ArcGIS and is called ArcSWAT. The watershed and sub-watershed boundaries, drainage networks, slope and texture maps were generated in the ArcGIS environment of ArcSWAT. A supervised classification method was used for land use/cover classification from satellite imageries of the years 2009 and 2012. Manning's roughness coefficient 'n' for overland flow and channel flow and the Fraction of Field Capacity (FFC) were calibrated for the monsoon seasons of the years 2009 and 2010. The model was validated on a daily basis for the years 2011 and 2012 by using the observed daily rainfall and temperature data. The calibration and validation results revealed that the model predicted the daily surface runoff and sediment yield satisfactorily. Sensitivity analysis showed that the annual sediment yield was inversely proportional to the overland and channel 'n' values, whereas annual runoff and sediment yields were directly proportional to the FFC. The model was also tested (calibrated and validated) for fortnightly runoff and sediment yield for the years 2009-10 and 2011-12, respectively. Simulated values of fortnightly runoff and sediment yield for the calibration and validation years compared well with their observed counterparts. The calibration and validation results revealed that the ArcSWAT model could be used for identification of critical sub-watersheds and for developing management scenarios for the Dhangaon watershed. Further, the model should be tested for simulating the surface runoff and sediment yield using generated rainfall and temperature data before applying it to the development of management scenarios for critical or priority sub-watersheds. Keywords: watershed, hydrologic and water quality, ArcSWAT model, remote sensing, GIS, runoff and sediment yield
Procedia PDF Downloads 379
2040 Comparative Study of Three Artificial Intelligence Techniques for Rain Domain in Precipitation Forecast
Authors: Nabilah Filzah Mohd Radzuan, Andi Putra, Zalinda Othman, Azuraliza Abu Bakar, Abdul Razak Hamdan
Abstract:
Precipitation forecasting is important to avoid natural disaster incidents, which can cause losses in the affected area. This paper reviews three techniques, logistic regression, decision tree, and random forest, which are used in making precipitation forecasts. Combining these techniques through the vector auto-regression (VAR) model helps in finding the advantages and strengths of each technique in the forecast process. The data set contains variables of the rain domain. Adopting artificial intelligence techniques in the rain domain enables the forecast process to be easier and more systematic for precipitation forecasting. Keywords: logistic regression, decision tree, random forest, VAR model
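A minimal sketch of comparing the three reviewed techniques on a rain-domain dataset is given below; the synthetic features and the cross-validated accuracy metric are assumptions for illustration, not the paper's evaluation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic rain-domain features (humidity, pressure, temperature, wind speed)
# and a binary "rain tomorrow" target; data and feature names are assumptions.
rng = np.random.default_rng(3)
X = rng.random((400, 4))
y = (X[:, 0] + 0.3 * rng.standard_normal(400) > 0.5).astype(int)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=4, random_state=3),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=3),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```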
Procedia PDF Downloads 446