Search results for: mixed dataset
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3756

3336 Alleviation of Thermal Stress in Pinus ponderosa by Plant-Growth Promoting Rhizobacteria Isolated from Mixed-Conifer Forests

Authors: Kelli G. Thorup, Kristopher A. Blee

Abstract:

Climate change increases the occurrence of extreme weather events: wildfires, drought, and rising summer temperatures, all of which dramatically reduce forest growth and increase tree mortality in the mixed-conifer forests of the Sierra Nevada, California. However, microbiota living in mutualistic relationships within plant rhizospheres have been found to mitigate the effects of suboptimal environmental conditions. The goal of this research is to isolate native beneficial bacteria, plant-growth promoting rhizobacteria (PGPR), that can alleviate heat stress in Pinus ponderosa seedlings. Bacteria were isolated from the rhizosphere of Pinus ponderosa juveniles located in a mixed-conifer stand and further characterized for PGP potential based on their ability to produce key growth-regulatory phytohormones, including auxin, cytokinin, and gibberellic acid. Out of ten soil samples taken, sixteen colonies were isolated and qualitatively confirmed to produce indole-3-acetic acid (auxin) using Salkowski’s reagent. Future testing will be conducted to quantitatively assess phytohormone production in the bacterial isolates. Furthermore, bioassays will be performed to determine the isolates’ ability to increase tolerance in heat-stressed Pinus ponderosa seedlings. Upon completion of this research, a PGPR could be utilized to support the growth and transplantation of conifer seedlings as summer temperatures continue to rise due to the effects of climate change.

Keywords: conifer, heat-stressed, phytohormones, Pinus ponderosa, plant-growth promoting rhizobacteria

Procedia PDF Downloads 97
3335 Effects of Radiation on Mixed Convection in Power Law Fluids along Vertical Wedge Embedded in a Saturated Porous Medium under Prescribed Surface Heat Flux Condition

Authors: Qaisar Ali, Waqar A. Khan, Shafiq R. Qureshi

Abstract:

Heat transfer in power law fluids across cylindrical surfaces has copious engineering applications. These applications span areas such as underwater pollution, biomedical engineering, filtration systems, chemical, petroleum, polymer, and food processing, recovery of geothermal energy, crude oil extraction, pharmaceuticals, and thermal energy storage. The quantum of research work with diversified conditions to study the effects of combined heat transfer and fluid flow across porous media has increased considerably over the last few decades. The non-Newtonian fluids of greatest practical interest are highly viscous and are therefore often processed in the laminar flow regime. Several studies have been performed to investigate the effects of free and mixed convection in Newtonian fluids along vertical and horizontal cylinders embedded in a saturated porous medium, whereas very few analyses have been performed on power law fluids along a wedge. In this study, a boundary layer analysis of radiation-mixed convection in power law fluids along a vertical wedge in a porous medium has been carried out using an implicit finite difference method (the Keller box method). Steady, two-dimensional laminar flow has been considered under a prescribed surface heat flux condition. The Darcy, Boussinesq, and Rosseland approximations are assumed to be valid. Neglecting viscous dissipation effects and the radiative heat flux in the flow direction, the boundary layer equations governing mixed convection flow over a vertical wedge are transformed into dimensionless form. A single mathematical model represents the cases of a vertical wedge, cone, and plate through a geometry parameter. Both similar and non-similar solutions have been obtained, and results for the non-similar case have been presented.
The effects of the radiation parameter, variable heat flux parameter, wedge angle parameter m, and mixed convection parameter have been studied for both Newtonian and non-Newtonian fluids. The results are also compared with the available data for heat transfer in the prescribed range of parameters and found to be in good agreement. Detailed results for the dimensionless local Nusselt number and the temperature and velocity fields have also been presented for both Newtonian and non-Newtonian fluids. Analysis of the data revealed that as the radiation parameter or wedge angle increases, the Nusselt number decreases, whereas it increases with the heat flux parameter at a given value of the mixed convection parameter. It is also observed that as viscosity increases, the skin friction coefficient increases, which tends to reduce the velocity. Moreover, pseudoplastic fluids are more heat conductive than Newtonian fluids, which in turn are more heat conductive than dilatant fluids. All fluids behave identically in the pure forced convection domain.

Keywords: porous medium, power law fluids, surface heat flux, vertical wedge

Procedia PDF Downloads 286
3334 Consideration of Failed Fuel Detector Location through Computational Flow Dynamics Analysis on Primary Cooling System Flow with Two Outlets

Authors: Sanghoon Bae, Hanju Cha

Abstract:

The failed fuel detector (FFD) in a research reactor is a crucial instrument for detecting, at an early stage, anomalies from failed fuels around the primary cooling system (PCS) outlet upstream of the decay tank. The FFD is considered a mandatory sensor for ensuring the integrity of fuel assemblies and mitigating the consequences of a failed fuel accident. For the FFD to function effectively, its location should be determined by considering the effect of the coolant flow around the two outlets. To this end, a computational flow dynamics (CFD) analysis should first be performed to establish how the coolant outlet flow, including radioactive materials from failed fuels, is mixed and discharged through the outlet plenum within a certain number of seconds. The analysis result shows that the outlet flow is well mixed regardless of the position of the failed fuel and ultimately illustrates the effect of the detector location.

Keywords: computational flow dynamics (CFD), failed fuel detector (FFD), fresh fuel assembly (FFA), spent fuel assembly (SFA)

Procedia PDF Downloads 224
3333 Optimization Model for Support Decision for Maximizing Production of Mixed Fresh Fruit Farms

Authors: Andrés I. Ávila, Patricia Aros, César San Martín, Elizabeth Kehr, Yovana Leal

Abstract:

Planning models for fresh products are very useful tools for improving net profits. To obtain an efficient supply chain model, several functions should be considered so that multiple operational units can be simulated together. We consider a linear programming model to help farmers decide which area should be planted with each of three kinds of export fruit, taking their future investment into account. We impose area, investment, water, minimal productivity unit, and harvest restrictions to develop a monthly based model that computes the average income over five years. Field conditions such as area, water availability, and initial investment are also required as inputs. Using Chilean costs and the dollar-peso exchange rate, we can simulate several scenarios to understand the possible risks associated with this market. This tool also helps support decisions by both the government and individual farmers.
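
The planting-decision model above is described only at a high level. As a rough illustration of the kind of optimization involved, the sketch below enumerates integer-hectare allocations for three hypothetical fruits under area, water, and budget limits; all figures are invented for illustration and are not from the study, which solves a full (mixed integer) linear program over Chilean cost data.

```python
from itertools import product

# Hypothetical per-hectare figures (NOT from the paper): expected monthly
# income (USD), water use (m^3), and initial investment (USD) per hectare.
FRUITS = {
    "apple":     {"income": 420, "water": 55, "invest": 900},
    "cherry":    {"income": 610, "water": 70, "invest": 1500},
    "blueberry": {"income": 730, "water": 90, "invest": 2100},
}

def best_allocation(total_area, water_cap, budget, step=1):
    """Enumerate integer-hectare allocations (a brute-force stand-in for the
    paper's mixed-integer program) and return the income-maximising plan
    that respects the area, water, and investment constraints."""
    names = list(FRUITS)
    best, best_income = None, -1
    ranges = [range(0, total_area + 1, step) for _ in names]
    for alloc in product(*ranges):
        if sum(alloc) > total_area:
            continue
        water = sum(a * FRUITS[n]["water"] for a, n in zip(alloc, names))
        cost = sum(a * FRUITS[n]["invest"] for a, n in zip(alloc, names))
        if water > water_cap or cost > budget:
            continue
        income = sum(a * FRUITS[n]["income"] for a, n in zip(alloc, names))
        if income > best_income:
            best, best_income = dict(zip(names, alloc)), income
    return best, best_income

plan, income = best_allocation(total_area=10, water_cap=700, budget=15000)
```

A real solver (e.g. an LP/MIP library) replaces the enumeration once the number of decision variables grows, but the feasibility checks mirror the restrictions listed in the abstract.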

Keywords: mixed integer problem, fresh fruit production, support decision model, agricultural and biosystems engineering

Procedia PDF Downloads 413
3332 Predicting the Next Offensive Play Type That Will Be Implemented to Maximize the Defense’s Chances of Success in the National Football League

Authors: Chris Schoborg, Morgan C. Wang

Abstract:

In the realm of the National Football League (NFL), substantial dedication of time and effort is invested by both players and coaches in meticulously analyzing the game footage of their opponents. The primary aim is to anticipate the actions of the opposing team. Defensive players and coaches are especially focused on deciphering their adversaries' intentions to effectively counter their strategies. Acquiring insights into the specific play type and its intended direction on the field would confer a significant competitive advantage. This study establishes pre-snap information as the cornerstone for predicting both the play type (e.g., deep pass, short pass, or run) and its spatial trajectory (right, left, or center). The dataset for this research spans the regular NFL season data for all 32 teams from 2013 to 2022. This dataset is acquired using the nflreadr package, which conveniently extracts play-by-play data from NFL games and imports it into the R environment as structured datasets. In this study, we employ a recently developed machine learning algorithm, XGBoost. The final predictive model achieves an impressive lift of 2.61. This signifies that the presented model is 2.61 times more effective than random guessing—a significant improvement. Such a model has the potential to markedly enhance defensive coaches' ability to formulate game plans and adequately prepare their players, thus mitigating the opposing offense's yardage and point gains.
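
The lift statistic reported above has a simple form: the model's hit rate divided by the hit rate of a reference guessing strategy. A minimal sketch follows; the uniform-random baseline is an assumption for illustration, since the abstract does not spell out how its baseline is computed.

```python
def lift(y_true, y_pred, n_classes=None):
    """Lift = model hit rate / hit rate of uniform random guessing
    over n_classes equally likely outcomes."""
    hits = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    k = n_classes if n_classes is not None else len(set(y_true))
    return hits / (1.0 / k)

# With three play types (run, short pass, deep pass), uniform guessing is
# right 1/3 of the time; a hypothetical model that is right on 87 of 100
# plays therefore has lift 0.87 / (1/3) = 2.61.
example = lift(["run"] * 100, ["run"] * 87 + ["deep"] * 13, n_classes=3)
```

A lift of 2.61 thus corresponds to being 2.61 times more accurate than chance under this baseline, matching the interpretation given in the abstract.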

Keywords: lift, NFL, sports analytics, XGBoost

Procedia PDF Downloads 37
3331 Comparative Assessment of hCG with Estrogen in Increasing Pregnancy Rate in Mixed Parity Buffaloes

Authors: Sanan Raza, Tariq Abbas, Ahmad Yar Qamar, Muhammad Younus, Hamayun Khan, Mujahid Zafar

Abstract:

Water buffaloes contribute significantly to Asian agriculture. The objective of this study was to evaluate the efficacy of two synchronization protocols in enhancing the pregnancy rate of 105 mixed-parity buffaloes, particularly in the summer season. Buffaloes are seasonal breeders, showing greater fertility from October to January in the subtropical environment of Pakistan. In the current study, 105 lactating buffaloes of mixed parity were used, with normal estrous cycles, ages ranging from 5 to 9 years, weights between 400 and 650 kg, BCS 4 ± 0.5 (on a 1-5 scale), and lactation numbers from first to fifth. Experimental animals were divided into three groups based on corpus luteum morphometry. Morphometry of the corpus luteum was done using rectal palpation and ultrasonography. All animals were injected i.m. with 25 mg of PG (cloprostenol). In Group 1 (n=35), hCG was administered at a follicular size of 10 mm, scanned after detection of heat. Similarly, Group 2 (n=35) received 25 mg of estradiol benzoate (EB) i.m. after confirmation of a follicular size of 10 mm by ultrasound. Likewise, the buffaloes of Group 3 (n=35), serving as the control, were administered normal saline. All buffaloes of the three groups were inseminated 12 h after hCG, EB, and normal saline administration, respectively. Pregnancy was assessed by ultrasound on the 18th and 45th days post insemination. Pregnancy rates on the 18th day were 38.2%, 34.5%, and 27.3% for G1, G2, and G3, respectively, indicating no difference between the hCG and EB groups, with the control group having a lower conception rate than both. Similarly, on the 42nd day, the rates were 40.4% and 32.7% for G1 and G2, significantly higher than for G3 (26.6%, control group). Also, hCG- and EB-treated buffaloes had a higher probability of pregnancy than the control group. Based on the findings of the current study, it seems reasonable to conclude that the use of hCG and EB is associated with improved pregnancy rates in the non-breeding season of buffaloes.

Keywords: buffalo, hCG, EB, pregnancy rate, follicle, insemination

Procedia PDF Downloads 779
3330 Out of Hospital Cardiac Arrest in Kuala Lumpur: A Mixed Method Study on Incidence, Adherence to Protocol, and Issues

Authors: Mohd Said Nurumal, Sarah Sheikh Abdul Karim

Abstract:

Information regarding the incidence of out-of-hospital cardiac arrest, including outcomes, in Malaysia is limited and fragmented. This study aims to identify the incidence of out-of-hospital cardiac arrest and adherence to protocol, and also to explore the issues faced by pre-hospital personnel in managing cardiac arrest victims in Kuala Lumpur, Malaysia. A mixed method approach combining qualitative and quantitative study designs was used. The 285 pre-hospital care data sheets for out-of-hospital cardiac arrest during 2011 were examined using checklists to identify the incidence and adherence to protocol. Nine semi-structured interviews and two focus group discussions were performed. Based on the overall out-of-hospital cardiac arrest cases that occurred in 2011 (n=285), the survival rate was 16.8%. For adherence to protocol, only 89 (41.8%) of the cases adhered to the given protocol, and 124 did not. The qualitative information provided insight into the issues related to out-of-hospital cardiac arrest in every aspect. All the relevant qualitative data were merged into a few categories of issues that could affect the management of out-of-hospital cardiac arrest performed by the pre-hospital care team. One of the essential elements in the handling of out-of-hospital cardiac arrest by pre-hospital care is to increase survival rates and achieve excellent outcomes by adhering to protocols based on international standard benchmarks. Measures are needed to strengthen the quick activation of the pre-hospital care service, prompt bystander cardiopulmonary resuscitation, early defibrillation, and timely advanced cardiac life support, and also to tackle all the issues highlighted in the qualitative results.

Keywords: pre-hospital care, out of hospital cardiac arrest, incidence, protocol, mixed method research

Procedia PDF Downloads 393
3329 Springback Prediction for Sheet Metal Cold Stamping Using Convolutional Neural Networks

Authors: Lei Zhu, Nan Li

Abstract:

Cold stamping has been widely applied in the automotive industry for the mass production of a great range of automotive panels. Predicting the springback to ensure the dimensional accuracy of the cold-stamped components is a critical step. The main approaches for the prediction and compensation of springback in cold stamping include running Finite Element (FE) simulations and conducting experiments, which require forming process expertise and can be time-consuming and expensive for the design of cold stamping tools. Machine learning technologies have been proven and successfully applied in learning complex system behaviours from representative samples. These technologies exhibit promising potential to be used as supporting design tools for metal forming technologies. This study, for the first time, presents a novel application of a Convolutional Neural Network (CNN) based surrogate model to predict the springback fields for variable U-shape cold bending geometries. A dataset is created based on the U-shape cold bending geometries and the corresponding FE simulation results. The dataset is then used to train the CNN surrogate model. The results show that the surrogate model can achieve near-indistinguishable full-field predictions in real time when compared with the FE simulation results. The application of CNNs for efficient springback prediction can be adopted in industrial settings to aid both conceptual and final component designs for designers without manufacturing knowledge.
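
The surrogate-model idea, in its simplest form, replaces an expensive FE run with a cheap prediction learned from previous FE results. The paper's CNN maps full geometries to springback fields; the toy sketch below shrinks this to one scalar geometry parameter and a nearest-neighbour lookup over invented FE results, purely to illustrate the workflow.

```python
# Hypothetical FE "database": bend radius (mm) -> springback angle (degrees).
# All numbers are invented for illustration; the real study maps U-shape
# geometries to full springback fields with a CNN.
fe_results = {2.0: 3.3, 4.0: 5.5, 6.0: 8.1, 8.0: 11.1, 10.0: 14.5}

def surrogate(radius):
    """Cheapest possible surrogate: reuse the FE result of the most similar
    previously simulated geometry instead of running a new FE simulation."""
    nearest = min(fe_results, key=lambda r: abs(r - radius))
    return fe_results[nearest]
```

The design point is the same as in the abstract: once the surrogate is trained (here, merely stored), each query costs microseconds instead of an FE solve, enabling real-time use during tool design.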

Keywords: springback, cold stamping, convolutional neural networks, machine learning

Procedia PDF Downloads 125
3328 Sentiment Analysis of Chinese Microblog Comments: Comparison between Support Vector Machine and Long Short-Term Memory

Authors: Xu Jiaqiao

Abstract:

Text sentiment analysis is an important branch of natural language processing. This technology is widely used in public opinion analysis and web content recommendation. At present, mainstream sentiment analysis methods fall into three categories: methods based on a sentiment dictionary, on traditional machine learning, and on deep learning. This paper analyzes and compares the advantages and disadvantages of the SVM method from traditional machine learning and the Long Short-Term Memory (LSTM) method from deep learning in the field of Chinese sentiment analysis, using Chinese comments on Sina Microblog as the dataset. Firstly, this paper classifies and labels the original comment dataset obtained by a web crawler, and then uses Jieba word segmentation to tokenize the original dataset and remove stop words. After that, text feature vectors are extracted and document word vectors are built to facilitate model training. Finally, the SVM and LSTM models are trained respectively. The resulting accuracy of the LSTM model is 85.80%, while that of the SVM is 91.07%. At the same time, the LSTM model needs only 2.57 seconds, while the SVM model needs 6.06 seconds. Therefore, this paper concludes that, compared with the SVM model, the LSTM model is worse in accuracy but faster in processing speed.
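
The pipeline described above (crawl, segment, vectorise, classify, then score both accuracy and runtime) can be sketched in miniature. In this toy stand-in, jieba segmentation and the real SVM/LSTM models are replaced by whitespace tokens and a tiny invented sentiment lexicon; only the evaluation harness mirrors the accuracy-versus-speed comparison in the text.

```python
import time

# Tiny illustrative lexicon (hypothetical; the paper uses learned models).
POS, NEG = {"good", "great", "happy"}, {"bad", "sad", "awful"}

def classify(text):
    """Lexicon classifier: 1 = positive, 0 = negative."""
    toks = text.lower().split()
    score = sum(t in POS for t in toks) - sum(t in NEG for t in toks)
    return 1 if score >= 0 else 0

def evaluate(samples):
    """Return (accuracy, wall-clock seconds), the two axes compared
    in the abstract."""
    start = time.perf_counter()
    correct = sum(classify(text) == label for text, label in samples)
    elapsed = time.perf_counter() - start
    return correct / len(samples), elapsed

data = [("good great day", 1), ("awful bad service", 0), ("happy here", 1)]
acc, secs = evaluate(data)
```

Swapping `classify` for a trained SVM or LSTM predictor while keeping `evaluate` fixed is what makes the paper's accuracy/runtime comparison like-for-like.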

Keywords: sentiment analysis, support vector machine, long short-term memory, Chinese microblog comments

Procedia PDF Downloads 63
3327 Image Segmentation with Deep Learning of Prostate Cancer Bone Metastases on Computed Tomography

Authors: Joseph M. Rich, Vinay A. Duddalwar, Assad A. Oberai

Abstract:

Prostate adenocarcinoma is the most common cancer in males, with osseous metastases as the most common site of metastatic prostate carcinoma (mPC). Treatment monitoring is based on the evaluation and characterization of lesions on multiple imaging studies, including Computed Tomography (CT). Monitoring of the osseous disease burden, including follow-up of lesions and identification and characterization of new lesions, is a laborious task for radiologists. Deep learning algorithms are increasingly used to perform tasks such as identification and segmentation of osseous metastatic disease and provide accurate information regarding the metastatic burden. Here, nnUNet was used to produce a model which can segment CT scan images of prostate adenocarcinoma vertebral bone metastatic lesions. nnUNet is an open-source Python package that adds optimizations to the deep learning-based U-Net architecture but has not been extensively combined with transfer learning techniques due to the absence of readily available functionality for this method. The IRB-approved study dataset includes imaging studies from patients with mPC who were enrolled in clinical trials at the University of Southern California (USC) Health Science Campus and Los Angeles County (LAC)/USC medical center. Manual segmentation of metastatic lesions was completed by an expert radiologist, Dr. Vinay Duddalwar (20+ years in radiology and oncologic imaging), to serve as ground truth for the automated segmentation. Despite nnUNet’s success on some medical segmentation tasks, it only produced an average Dice Similarity Coefficient (DSC) of 0.31 on the USC dataset. DSC results fell into a bimodal distribution, with most scores falling either over 0.66 (reasonably accurate) or at 0 (no lesion detected). Applying more aggressive data augmentation techniques dropped the DSC to 0.15, and reducing the number of epochs reduced the DSC to below 0.1.
Datasets have been identified for transfer learning, which involve balancing between size and similarity of the dataset. Identified datasets include the Pancreas data from the Medical Segmentation Decathlon, Pelvic Reference Data, and CT volumes with multiple organ segmentations (CT-ORG). Some of the challenges of producing an accurate model from the USC dataset include small dataset size (115 images), 2D data (as nnUNet generally performs better on 3D data), and the limited amount of public data capturing annotated CT images of bone lesions. Optimizations and improvements will be made by applying transfer learning and generative methods, including incorporating generative adversarial networks and diffusion models in order to augment the dataset. Performance with different libraries, including MONAI and custom architectures with Pytorch, will be compared. In the future, molecular correlations will be tracked with radiologic features for the purpose of multimodal composite biomarker identification. Once validated, these models will be incorporated into evaluation workflows to optimize radiologist evaluation. Our work demonstrates the challenges of applying automated image segmentation to small medical datasets and lays a foundation for techniques to improve performance. As machine learning models become increasingly incorporated into the workflow of radiologists, these findings will help improve the speed and accuracy of vertebral metastatic lesions detection.
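
The Dice Similarity Coefficient used above compares a predicted mask with the ground-truth mask: twice the overlap divided by the total foreground of both. A minimal pure-Python version for binary masks:

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks (nested lists):
    DSC = 2|P ∩ T| / (|P| + |T|). 1.0 is perfect overlap, 0.0 is none."""
    p = [bool(v) for row in pred for v in row]
    t = [bool(v) for row in truth for v in row]
    inter = sum(1 for a, b in zip(p, t) if a and b)
    total = sum(p) + sum(t)
    return 2.0 * inter / total if total else 1.0

truth = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]          # 4-pixel "lesion"
pred = [[0, 0, 0, 0],
        [0, 1, 1, 1],
        [0, 1, 1, 1],
        [0, 0, 0, 0]]           # 6-pixel over-segmentation
```

Note that an all-zero prediction against a non-empty truth scores 0, which is exactly the "no lesion detected" mode of the bimodal DSC distribution described above.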

Keywords: deep learning, image segmentation, medicine, nnUNet, prostate carcinoma, radiomics

Procedia PDF Downloads 73
3326 Generative Adversarial Network Based Fingerprint Anti-Spoofing Limitations

Authors: Yehjune Heo

Abstract:

Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems of fingerprint anti-spoofing is that it is not robust to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper contains experimental and comparative results with currently popular GAN-based methods, using realistic synthesis of fingerprints in training in order to increase performance. Among the various GAN models, the popular StyleGAN is used for the experiments. The CNN models were first trained with a dataset that did not contain generated fake images, and the accuracy along with the mean average error rate were recorded. Then, the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and various CNN models were trained. The best performance of each CNN model trained with the dataset of generated fake images was recorded, along with the accuracy and the mean average error rate in each case. We observe that current GAN-based approaches need significant improvements in anti-spoofing performance, although the overall quality of the synthesized fingerprints seems to be reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches towards improved generalization with a small number of samples, by focusing on what GAN-based approaches should and should not learn.

Keywords: anti-spoofing, CNN, fingerprint recognition, GAN

Procedia PDF Downloads 166
3325 Durrmeyer Type Modification of q-Generalized Bernstein Operators

Authors: Ruchi, A. M. Acu, Purshottam N. Agrawal

Abstract:

The purpose of this paper is to introduce the Durrmeyer type modification of q-generalized Bernstein operators, which include the Bernstein polynomials in the particular case α = 0. We investigate the rate of convergence by means of the Lipschitz class and the Peetre’s K-functional. Also, we define the bivariate case of the Durrmeyer type modification of q-generalized Bernstein operators and study the degree of approximation with the aid of the partial modulus of continuity and the Peetre’s K-functional. Finally, we introduce the GBS (Generalized Boolean Sum) of the Durrmeyer type modification of q-generalized Bernstein operators and investigate the approximation of Bögel continuous and Bögel differentiable functions with the aid of the Lipschitz class and the mixed modulus of smoothness.
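
For readers unfamiliar with the underlying operators: the classical Bernstein operator, the special case the generalized construction reduces to, is B_n(f; x) = Σ_{k=0}^{n} f(k/n) C(n,k) x^k (1-x)^(n-k) on [0, 1]. The short numerical sketch below covers only this classical case, not the q-Durrmeyer modification introduced in the paper.

```python
from math import comb

def bernstein(f, n, x):
    """Classical Bernstein operator B_n(f; x) on [0, 1]:
    sum over k = 0..n of f(k/n) * C(n, k) * x^k * (1 - x)^(n - k)."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))
```

Two standard facts make good sanity checks: B_n reproduces constants and linear functions exactly, and B_n(f) converges uniformly to f for continuous f as n grows (e.g. B_n(t²; x) = x² + x(1-x)/n).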

Keywords: Bögel continuous, Bögel differentiable, generalized Boolean sum, Peetre’s K-functional, Lipschitz class, mixed modulus of smoothness

Procedia PDF Downloads 187
3324 Productive Engagements and Psychological Wellbeing of Older Adults: An Analysis of the HRS Dataset

Authors: Mohammad Didar Hossain

Abstract:

Background/Purpose: The purpose of this study was to examine the associations between productive engagements and the psychological well-being of older adults in the U.S. by analyzing cross-sectional data from a secondary dataset. Specifically, this paper analyzed the associations of four different types of productive engagement, including current work status, caregiving to family members, volunteering, and religious strength, with psychological well-being as the outcome variable. Methods: Data and sample: The study used data from the Health and Retirement Study (HRS). The HRS is a nationally representative prospective longitudinal cohort study that has been conducting biennial surveys since 1992 of community-dwelling individuals 50 years of age or older on diverse issues. This analysis was based on the 2016 (cross-sectional) wave of the HRS dataset, and the data collection period was April 2016 through August 2017. The samples were recruited from a multistage, national area-clustered probability sampling frame. Measures: Four variables were considered as predictors in this analysis. Firstly, current working status was a binary variable measured as 0 = Yes and 1 = No. The second and third variables were caregiving and volunteering, respectively, both measured as 0 = Regularly, 1 = Irregularly. Finally, strength in religion was measured as 0 = Agree and 1 = Disagree. The outcome (well-being) variable was measured as 0 = High level of well-being, 1 = Low level of well-being. Control variables included age, measured in years; education, in the categories 0 = Low level of education, 1 = High level of education; and sex, in the categories 0 = male, 1 = female. Analysis and Results: Besides descriptive statistics, binary logistic regression analyses were applied to examine the associations between the independent and dependent variables.
The results showed that, among the four independent variables, three, including working status (OR: .392, p<.001), volunteering (OR: .471, p<.003), and strength in religion (OR: .588, p<.003), were significantly associated with psychological well-being while controlling for age, gender, and education. No significant association was found between the caregiving engagement of older adults and their psychological well-being. Conclusions and Implications: The findings of this study are mostly consistent with previous studies, except for caregiving engagement and its impact on older adults’ well-being. The findings therefore support proactive initiatives, from the micro to the macro level, to facilitate opportunities for productive engagement among older adults, which may ultimately benefit their psychological well-being and life satisfaction in later life.
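
The odds ratios above come from binary logistic regression; since the model fits log-odds, an odds ratio is simply exp(β) for that predictor's coefficient. The small sketch below shows the conversion and its reading; the coefficients are back-derived from the reported ORs, not refit from HRS data.

```python
import math

def or_to_beta(odds_ratio):
    """Logistic-regression coefficient implied by an odds ratio."""
    return math.log(odds_ratio)

def beta_to_or(beta):
    """Inverse conversion: OR = exp(beta)."""
    return math.exp(beta)

# Reported ORs, with the outcome coded 1 = low well-being:
reported = {"working status": 0.392, "volunteering": 0.471, "religion": 0.588}
betas = {k: or_to_beta(v) for k, v in reported.items()}
# OR < 1 (i.e. beta < 0) means lower odds of the outcome coded "1" for a
# one-unit increase in the predictor, under the study's 0/1 coding.
```

This is why all three significant predictors, with ORs below 1, are read as protective against low well-being given the variable coding described above.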

Keywords: productive engagements, older adults, psychological wellbeing, productive aging

Procedia PDF Downloads 137
3323 Low Temperature Powder Synthesis of La1-xMgxAlO3 through the Sol-Gel Method

Authors: R. Benakcha, M. Omari

Abstract:

Powders of La1-xMgxAlO3 (0 ≤ x ≤ 5) oxides with large surface areas were synthesized by a sol-gel process utilizing citric acid (CA). Heating a mixed solution of CA, EtOH, and nitrates of lanthanum, aluminium, and magnesium at 70°C gave a transparent gel without any precipitation. The formation of pure perovskite La1-xMgxAlO3 occurred when the precursor was heat-treated at 800°C for 6 h. No X-ray diffraction evidence for the presence of crystalline impurities was obtained. The La1-xMgxAlO3 powders prepared by the sol-gel method have considerably large surface areas, in the range of 12.9–20 m^2 g^-1, compared with 0.3 m^2 g^-1 for LaAlO3 obtained by the conventional solid-state reaction. The structural characteristics were examined by means of conventional techniques, namely X-ray diffraction, infrared spectroscopy, thermogravimetric and differential thermal analysis (TG-DTA), and BET specific surface area measurement. Pore diameters and crystallite sizes are in the 8.8–11.28 nm and 25.4–30.5 nm ranges, respectively. The sol-gel method is a simple technique that has several advantages. In addition to not requiring high temperatures, it has the potential to synthesize many kinds of mixed oxides and to obtain other homogeneous materials of high purity. It also allows the forming of a variety of materials: very fine powders, fibers, and films.

Keywords: aluminate, lanthan, perovskite, sol-gel

Procedia PDF Downloads 256
3322 A Comparative Study on Deep Learning Models for Pneumonia Detection

Authors: Hichem Sassi

Abstract:

Pneumonia, being a respiratory infection, has garnered global attention due to its rapid transmission and relatively high mortality rates. Timely detection and treatment play a crucial role in significantly reducing mortality associated with pneumonia. Presently, X-ray diagnosis stands out as a reasonably effective method. However, the manual scrutiny of a patient's X-ray chest radiograph by a proficient practitioner usually requires 5 to 15 minutes. In situations where cases are concentrated, this places immense pressure on clinicians for timely diagnosis. Relying solely on the visual acumen of imaging doctors proves to be inefficient, particularly given the low speed of manual analysis. Therefore, the integration of artificial intelligence into the clinical image diagnosis of pneumonia becomes imperative. Additionally, AI recognition is notably rapid, with convolutional neural networks (CNNs) demonstrating superior performance compared to human counterparts in image identification tasks. To conduct our study, we utilized a dataset comprising chest X-ray images obtained from Kaggle, encompassing a total of 5216 training images and 624 test images, categorized into two classes: normal and pneumonia. Employing five mainstream network algorithms, we undertook a comprehensive analysis to classify these diseases within the dataset, subsequently comparing the results. The integration of artificial intelligence, particularly through improved network architectures, stands as a transformative step towards more efficient and accurate clinical diagnoses across various medical domains.
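
The five-model comparison described above reduces to scoring each candidate on the same held-out test split and ranking the results. In the toy harness below, the "models" are invented threshold rules standing in for the real networks, which the abstract does not name; only the like-for-like evaluation structure is the point.

```python
def accuracy(model, samples):
    """Fraction of (feature, label) samples the model classifies correctly."""
    return sum(model(x) == y for x, y in samples) / len(samples)

# Hypothetical stand-ins for the "five mainstream network algorithms":
# each "model" is a threshold rule on a single score in [0, 1].
models = {
    "net_a": lambda x: int(x > 0.5),
    "net_b": lambda x: int(x > 0.3),
    "net_c": lambda x: int(x > 0.7),
}
# One shared held-out set (score, label) so the comparison is fair.
test = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
ranking = sorted(((accuracy(m, test), name) for name, m in models.items()),
                 reverse=True)
```

Keeping the test split fixed across candidates, as done here, is what makes the accuracy figures of different architectures directly comparable.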

Keywords: deep learning, computer vision, pneumonia, models, comparative study

Procedia PDF Downloads 35
3321 The Outcome of Using Machine Learning in Medical Imaging

Authors: Adel Edwar Waheeb Louka

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Algorithms designed to improve the experience of medical professionals can raise both the efficiency and the accuracy of diagnosis. X-rays in particular are a fast and relatively inexpensive test for diagnosing disease, yet they have not been widely used to detect and diagnose COVID-19, mainly because of low diagnostic accuracy and confounding with pneumonia, another respiratory disease. Research in this field suggests, however, that artificial neural networks can diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays labelled COVID-19, normal, and pneumonia. The classification model uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning, followed by a deep neural network that finalizes feature extraction and predicts the diagnosis for the input image. This model was trained on 4,035 images and validated on 807 images held out from training. The training images are cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture to extract the lung mask from the chest X-ray image; it is trained on 8,577 images with a 20% validation split. Both models are evaluated on the external dataset, and their accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is that these models will elevate the experience of medical professionals and provide insight into the future of the methods used.
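The evaluation metrics listed above follow directly from a confusion matrix; a minimal sketch in plain Python (the function names and label strings are illustrative, not taken from the study):

```python
def classification_metrics(y_true, y_pred, positive):
    """Accuracy, precision, recall and F1 for one class (one-vs-rest)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

def iou(mask_a, mask_b):
    """Intersection over union of two binary masks (flat sequences of 0/1),
    as used to score the predicted lung mask against the reference mask."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0
```

For the three-class problem, `classification_metrics` would be called once per label and the results averaged.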

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 35
3320 Comparison and Validation of a dsDNA Biomimetic Quality Control Reference for NGS-Based BRCA CNV Analysis versus MLPA

Authors: A. Delimitsou, C. Gouedard, E. Konstanta, A. Koletis, S. Patera, E. Manou, K. Spaho, S. Murray

Abstract:

Background: There remains a lack of international standard control reference materials for next generation sequencing-based approaches or device calibration. We have designed and validated dsDNA biomimetic reference materials for such targeted approaches, incorporating proprietary motifs (patent pending) for device/test calibration. They enable internal single-sample calibration, removing the need to compare samples against pooled historical population-based data or statistical modelling approaches. We have validated this approach for BRCA copy number variation (CNV) analysis using iQRS™-CNVSUITE versus multiplex ligation-dependent probe amplification (MLPA). Methods: Standard BRCA CNV analysis was compared between MLPA and next generation sequencing in a cohort of 198 breast/ovarian cancer patients. Next generation sequencing-based CNV analysis of samples spiked with iQRS™ dsDNA biomimetics was performed using the proprietary CNVSUITE software. MLPA analyses were performed on an ABI-3130 sequencer and analysed with Coffalyser software. Results: Concordance of BRCA CNV events between MLPA and CNVSUITE indicated an overall sensitivity of 99.88% and specificity of 100% for iQRS™-CNVSUITE. The negative predictive value of iQRS™-CNVSUITE for BRCA was 100%, allowing accurate exclusion of any event; the positive predictive value was 99.88%, with no discrepancy between MLPA and iQRS™-CNVSUITE. For device calibration purposes, precision was 100%; spiking of patient DNA demonstrated linearity to 1% (±2.5%) and a range from 100 copies. After training iQRS™-CNVSUITE with spiked iQRS™ calibrator and control mocks, traditional training was supplemented by predefining (locking down) the calibrator-to-sample cut-off for amplicon gain or loss, based on a relative ratio threshold. BRCA CNV analysis using iQRS™-CNVSUITE was successfully validated and ISO 15189 accredited, and now enters CE-IVD performance evaluation. Conclusions: Including a reference control competitor (iQRS™ dsDNA mimetic) in next generation sequencing offers a more robust, sample-independent approach to assessing CNV events than MLPA. The approach simplifies data analysis, improves independent sample analysis, and allows direct comparison to an internal reference control for sample-specific quantification. Our iQRS™ biomimetic reference materials allow single-sample CNV analytics and further decentralisation of diagnostics to single patient sample assessment.
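The single-sample calibration idea can be sketched as calling each amplicon from its read-depth ratio against the spiked-in internal calibrator. A minimal sketch, assuming hypothetical amplicon names and gain/loss thresholds (not the proprietary iQRS™-CNVSUITE rules):

```python
def call_cnv(sample_depths, calibrator_depths, loss_cut=0.75, gain_cut=1.25):
    """Call a per-amplicon copy-number state from read depth relative to an
    internal calibrator measured in the same sample. The 0.75 / 1.25
    relative-ratio thresholds are illustrative only."""
    calls = {}
    for amplicon, depth in sample_depths.items():
        ratio = depth / calibrator_depths[amplicon]
        if ratio < loss_cut:
            calls[amplicon] = "loss"
        elif ratio > gain_cut:
            calls[amplicon] = "gain"
        else:
            calls[amplicon] = "normal"
    return calls
```

Because the calibrator is spiked into the same tube, each sample carries its own reference and no population-based depth pool is needed.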

Keywords: validation, diagnostics, oncology, copy number variation, reference material, calibration

Procedia PDF Downloads 51
3319 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

Data arising in our environment frequently have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling such data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques derive from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset; therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. Prior to usage, however, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v2.19) with varying numbers of clusters, cluster sizes, and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2 and failed for almost all combinations of MQL. Power was adequate for most combinations under all estimation methods except MQL1. Moreover, models were fitted to a real-life dataset using the four methods, and the performance of the test was compared for each model.
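The simulated setting (clustered binary responses whose within-cluster correlation is induced by a random intercept) can be sketched as below; the parameter names and values are illustrative, not those used in the study:

```python
import math
import random

def simulate_two_level_binary(n_clusters, cluster_size, beta0, sigma_u, rng):
    """Generate binary responses from a random-intercept logistic model:
    logit(p_ij) = beta0 + u_j, with cluster effect u_j ~ N(0, sigma_u^2).
    Returns a list of (cluster_id, response) pairs."""
    data = []
    for j in range(n_clusters):
        u_j = rng.gauss(0.0, sigma_u)                    # shared by cluster j
        p_j = 1.0 / (1.0 + math.exp(-(beta0 + u_j)))     # inverse logit
        for _ in range(cluster_size):
            data.append((j, 1 if rng.random() < p_j else 0))
    return data
```

Larger `sigma_u` yields stronger intra-cluster correlation, which is one of the factors varied in the simulation design.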

Keywords: goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, penalized quasi-likelihood, power, quasi-likelihood, type-I error

Procedia PDF Downloads 121
3318 Artificial Neural Network Approach for Modeling Very Short-Term Wind Speed Prediction

Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Juan C. Seck-Tuoh-Mora, Norberto Hernandez-Romero, Irving Barragán-Vite

Abstract:

Wind speed forecasting is an important issue in planning wind power generation facilities: accurate wind speed prediction allows good performance of wind turbines for electricity generation. A model based on artificial neural networks (ANNs) is presented in this work. A dataset of atmospheric information (air temperature, atmospheric pressure, wind direction, and wind speed) for Pachuca, Hidalgo, México, was used to train the network. The data were downloaded from the web page of the National Meteorological Service of the Mexican government; records were gathered over three months at ten-minute intervals. An iterative algorithm built 1,110 ANN configurations from this dataset, spanning one to three hidden layers with 1 to 10 neurons per hidden layer. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which learns the relationship between input and output values. The best-performing model contains three hidden layers with 9, 6, and 5 neurons, respectively; the coefficient of determination obtained was r² = 0.9414 and the root mean squared error was 1.0559. In summary, the ANN approach is suitable for predicting wind speed in Pachuca City: the r² value denotes a good fit to the gathered records, and the resulting ANN model can be used in planning wind power generation grids.
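The figure of 1,110 configurations follows from enumerating one to three hidden layers with 1 to 10 neurons each (10 + 10² + 10³ = 1,110); a sketch of that enumeration, with the per-configuration training step omitted:

```python
from itertools import product

def hidden_layer_configs(max_layers=3, max_neurons=10):
    """Enumerate every hidden-layer architecture with 1..max_layers layers,
    each layer holding 1..max_neurons neurons."""
    configs = []
    for n_layers in range(1, max_layers + 1):
        for sizes in product(range(1, max_neurons + 1), repeat=n_layers):
            configs.append(sizes)   # e.g. (9, 6, 5) = three hidden layers
    return configs
```

Each tuple would be handed to the training routine in turn, and the configuration with the best validation r² kept; (9, 6, 5) is the winner reported above.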

Keywords: wind power generation, artificial neural networks, wind speed, coefficient of determination

Procedia PDF Downloads 88
3317 Efficient Human Motion Detection Feature Set by Using Local Phase Quantization Method

Authors: Arwa Alzughaibi

Abstract:

Human motion detection is a challenging task due to a number of factors, including variable appearance, posture, and a wide range of illumination conditions and backgrounds. The first need of such a model is therefore a reliable feature set that can discriminate between a human and a non-human form with a fair amount of confidence, even under difficult conditions. Richer representations make the classification task easier and improve the results that can be achieved. The aim of this paper is to investigate reliable and accurate models that detect human motion under varying illumination levels and backgrounds. Different feature sets are tried and tested, including Histogram of Oriented Gradients (HOG), Deformable Parts Model (DPM), Locally Decorrelated Channel Features (LDCF), and Aggregate Channel Features (ACF). We propose an efficient and reliable human motion detection approach that combines histogram of oriented gradients (HOG) and local phase quantization (LPQ) as the feature set and implements a search pruning algorithm based on optical flow to reduce the number of false positives. Experimental results show that combining the local phase quantization descriptor with the histogram of oriented gradients performs well over a larger range of illumination conditions and backgrounds than state-of-the-art human detectors. The Area under the ROC Curve (AUC) of the proposed method reached 0.781 on the UCF dataset and 0.826 on the CDW dataset, indicating that it performs comparably better than the HOG, DPM, LDCF, and ACF methods.
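At the heart of LPQ, the signs of the real and imaginary parts of four short-term Fourier transform coefficients are packed into an 8-bit code per pixel, and the codes are histogrammed into a 256-bin descriptor. A sketch of just the quantization step; the component ordering is one common convention, and the coefficient values in the test are invented:

```python
def lpq_codeword(coeffs):
    """Pack four complex STFT coefficients into an 8-bit LPQ code.
    Bit j is set when the j-th component of
    [Re f1, Re f2, Re f3, Re f4, Im f1, Im f2, Im f3, Im f4]
    is non-negative."""
    components = [c.real for c in coeffs] + [c.imag for c in coeffs]
    code = 0
    for j, value in enumerate(components):
        if value >= 0:
            code |= 1 << j
    return code
```

Because only the signs of the coefficients are kept, the code is invariant to blur that preserves local phase, which is what makes LPQ robust across illumination conditions.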

Keywords: human motion detection, histograms of oriented gradients, local phase quantization

Procedia PDF Downloads 234
3316 Investigation of Biogas from Slaughterhouse and Dairy Farm Waste

Authors: Saadelnour Abdueljabbar Adam

Abstract:

Wastes from slaughterhouses in most towns in Sudan are often poorly managed and sometimes discharged into adjoining streams due to poor implementation of standards, causing environmental and public health hazards; dairy farms likewise produce large amounts of manure. This paper presents a solution for the organic waste from cow dairy farms and slaughterhouses. We present the findings of an experimental investigation of biogas production in which cow manure, blood, and rumen content were mixed in three proportions by volume: 72.3%, 61%, and 39% manure; 6%, 8.5%, and 22% blood; and 21.7%, 30.5%, and 39% rumen content, for bio-digesters 1, 2, and 3, respectively. The paper analyses the quantitative and qualitative composition of the biogas: gas content and methane concentration. The highest biogas output, 0.116 L/g dry matter, together with a high-quality biogas of 85% methane, came from bio-digester 1 (72.3% manure, 6% blood, and 21.7% rumen content), which is useful for combustion and energy production. Bio-digesters 2 and 3 gave 0.012 L/g and 0.013 L/g dry matter, respectively, with a weaker methane concentration (50%).

Keywords: anaerobic digestion, bio-digester, blood, cow manure, rumen content

Procedia PDF Downloads 543
3315 Studying Second Language Development from a Complex Dynamic Systems Perspective

Authors: L. Freeborn

Abstract:

This paper discusses the application of complex dynamic systems theory (DST) to the study of individual differences in second language development. This transdisciplinary framework allows researchers to view the trajectory of language development as a dynamic, non-linear process. A DST approach views language as multi-componential, consisting of multiple complex systems and nested layers. These components and systems continuously interact and influence each other at both the macro- and micro-level. Dynamic systems theory aims to explain and describe the development of the language system, rather than make predictions about its trajectory. Such a holistic and ecological approach to second language development allows researchers to include research methods from neurological, cognitive, and social perspectives. A DST perspective would involve in-depth analyses as well as mixed-methods research. To illustrate, a neurobiological approach to second language development could include non-invasive neuroimaging techniques such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) to investigate areas of brain activation during language-related tasks. A cognitive framework would further include behavioural research methods to assess the influence of intelligence and personality traits, as well as individual differences in foreign language aptitude, such as phonetic coding ability and working memory capacity. Exploring second language development from a DST approach would also benefit from perspectives drawn from applied linguistics, regarding the teaching context, second language input, and the role of affective factors such as motivation. In this way, applying mixed research methods from neurobiological, cognitive, and social approaches would enable researchers to form a more holistic view of the dynamic and complex processes of second language development.

Keywords: dynamic systems theory, mixed methods, research design, second language development

Procedia PDF Downloads 111
3314 A Mixed Approach to Assess Information System Risk, Operational Risk, and Congolese Microfinance Institutions Performance

Authors: Alfred Kamate Siviri, Angelus Mafikiri Tsongo, Jean Robert Kala Kamdjoug

Abstract:

Digitalization and well-organized information systems have been identified as relevant measures for mitigating operational risks within organizations. Unfortunately, information systems come with new threats that can cause severe damage and rapid organizational lockout. This study aims to measure perceived information system risks and their effects on operational risks within microfinance institutions in D.R. Congo. The factors influencing operational risk are also identified, and the links between operational risk, other risks, and performance are assessed. The study proposes a research model drawing on a combination of the resource-based view, dynamic capabilities, agency theory, the information system security model, and social theories of risk. We therefore suggest adopting mixed-methods research with the aim of expanding the existing literature on perceived operational risk assessment and its links with other risks and performance, with a focus on IT risk.

Keywords: Democratic Republic Congo, information system risk, microfinance performance, operational risk

Procedia PDF Downloads 200
3313 Study of Mixed Convection in a Vertical Channel Filled with a Reactive Porous Medium in the Absence of Local Thermal Equilibrium

Authors: Hamid Maidat, Khedidja Bouhadef, Djamel Eddine Ameziani, Azzedine Abdedou

Abstract:

This work presents a numerical simulation of convective heat transfer in a vertical plane channel filled with a heat-generating porous medium, in the absence of local thermal equilibrium. The walls are maintained at a constant temperature and the inlet velocity is uniform. The flow field is described by the Darcy-Brinkman model and the thermal field by a two-equation energy model. A dimensionless formulation is developed for a parametric study based on dimensionless groups such as the interstitial Biot number, the thermal conductivity ratio, and the volumetric heat generation. The governing equations are solved using the finite volume method, yielding results on the thermal field in the porous channel and on the existence or absence of local thermal equilibrium.
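Under the stated assumptions, the governing equations take a standard form; the notation below is assumed, not taken from the paper: a Darcy-Brinkman momentum equation, plus one energy equation per phase coupled through an interstitial heat-exchange term.

```latex
% Darcy--Brinkman momentum equation (Boussinesq approximation, porosity \varepsilon)
\rho_f \frac{1}{\varepsilon}\frac{\partial \mathbf{u}}{\partial t}
  = -\nabla p + \frac{\mu_{\mathrm{eff}}}{\varepsilon}\nabla^2 \mathbf{u}
    - \frac{\mu_f}{K}\mathbf{u} + \rho_f g \beta \,(T_f - T_0)\,\mathbf{e}_z

% Two-equation (local thermal non-equilibrium) energy model, fluid phase:
\varepsilon (\rho c_p)_f \frac{\partial T_f}{\partial t}
  + (\rho c_p)_f \,\mathbf{u}\cdot\nabla T_f
  = \varepsilon k_f \nabla^2 T_f + h_{sf}\, a_{sf}\,(T_s - T_f)

% Solid phase, with volumetric heat generation q''':
(1-\varepsilon)(\rho c_p)_s \frac{\partial T_s}{\partial t}
  = (1-\varepsilon)\, k_s \nabla^2 T_s - h_{sf}\, a_{sf}\,(T_s - T_f)
    + (1-\varepsilon)\, q'''
```

The interstitial Biot number mentioned above is built from the exchange coefficient $h_{sf} a_{sf}$; setting it large forces $T_s \to T_f$ and recovers the local-thermal-equilibrium limit.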

Keywords: local thermal non equilibrium model, mixed convection, porous medium, power generation

Procedia PDF Downloads 585
3312 FRATSAN: A New Software for Fractal Analysis of Signals

Authors: Hamidreza Namazi

Abstract:

Fractal analysis assesses the fractal characteristics of data. It consists of several methods for assigning fractal characteristics to a dataset, which may be a theoretical dataset or a pattern or signal extracted from phenomena including natural geometric objects, sound, market fluctuations, heart rates, digital images, molecular motion, and networks. Fractal analysis is now widely used in all areas of science. An important limitation is that arriving at an empirically determined fractal dimension does not necessarily prove that a pattern is fractal; other essential characteristics have to be considered. For this purpose, a Visual C++ based software package called FRATSAN (FRActal Time Series ANalyser) was developed, which extracts information from signals through three measures: fractal dimension, Jeffrey's measure, and the Hurst exponent. After computing these measures, the software plots a graph for each. Besides computing the three measures, the software can also classify whether or not a signal is fractal. The software uses a dynamic method of analysis for all measures: a sliding window equal to 10% of the total number of data entries is moved one data entry at a time to obtain each measure. This makes the computation very sensitive to slight changes in the data, giving the user an acute analysis. To test the performance of the software, a set of EEG signals was given as input and the results were computed and plotted. The software is useful not only for fundamental fractal analysis of signals but also for other purposes; for instance, by analysing the Hurst exponent plot of an EEG signal in patients with epilepsy, the onset of seizure can be predicted from sudden changes in the plot.
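FRATSAN's internals are not described, but the Hurst exponent it reports can be estimated with the classical rescaled-range (R/S) method that such tools typically build on. A minimal pure-Python sketch, with illustrative window sizes:

```python
import math

def rescaled_range(series):
    """R/S statistic of one window: range of mean-adjusted cumulative
    deviations divided by the window's standard deviation."""
    n = len(series)
    mean = sum(series) / n
    cum, dev = 0.0, []
    for x in series:
        cum += x - mean
        dev.append(cum)
    r = max(dev) - min(dev)
    s = math.sqrt(sum((x - mean) ** 2 for x in series) / n)
    return r / s

def hurst_exponent(series, sizes=(8, 16, 32, 64)):
    """Estimate H as the least-squares slope of log(R/S) against
    log(window size), averaging R/S over non-overlapping windows."""
    xs, ys = [], []
    for n in sizes:
        rs_values = [rescaled_range(series[i:i + n])
                     for i in range(0, len(series) - n + 1, n)]
        xs.append(math.log(n))
        ys.append(math.log(sum(rs_values) / len(rs_values)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

Applied over a sliding window, as FRATSAN does, sudden jumps in the resulting H-versus-time plot are the kind of change the abstract suggests could flag seizure onset.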

Keywords: EEG signals, fractal analysis, fractal dimension, hurst exponent, Jeffrey’s measure

Procedia PDF Downloads 437
3311 Transformer Fault Diagnostic Predicting Model Using Support Vector Machine with Gradient Descent Optimization

Authors: R. O. Osaseri, A. R. Usiobaifo

Abstract:

The power transformer, responsible for voltage transformation, is of great relevance in the power system, and the oil-immersed transformer is widely used all over the world. Prompt and proper maintenance of the transformer is of utmost importance, and the dissolved gas content in power transformer oil is of enormous importance in detecting incipient transformer faults. Accurate prediction of incipient faults in transformer oil is needed to facilitate prompt maintenance, reduce cost, and minimize error. Fault prediction and diagnosis have been the focus of many researchers, and many previous works have reported the use of artificial intelligence to predict incipient transformer failures. In this study, a machine learning technique was employed, using gradient descent algorithms and a Support Vector Machine (SVM) to predict incipient transformer fault diagnoses. The method focuses on creating a system that improves its performance using previous results and historical data. The system design has two phases: training and testing. The gradient descent algorithm is trained with a training dataset, and the learned model is then applied to a set of new data; these two datasets are used to establish the accuracy of the proposed model. The resulting transformer fault diagnostic model, based on an SVM with gradient descent, showed satisfactory diagnostic capability, predicting incipient transformer failures with a higher success rate than existing diagnostic methods.
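The combination described (an SVM fitted by gradient descent) can be sketched as sub-gradient descent on the regularized hinge loss of a linear SVM. The two-feature toy points in the test stand in for dissolved-gas features and are invented for illustration:

```python
import random

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200, seed=0):
    """Linear SVM trained by stochastic sub-gradient descent on the
    L2-regularized hinge loss; labels y must be +1 / -1."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        order = list(range(len(X)))
        rng.shuffle(order)
        for i in order:
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            # regularization sub-gradient always; hinge term only if margin < 1
            for j in range(len(w)):
                grad = lam * w[j] - (y[i] * X[i][j] if margin < 1 else 0.0)
                w[j] -= lr * grad
            if margin < 1:
                b += lr * y[i]
    return w, b

def predict(w, b, x):
    """Sign of the decision function: +1 (e.g. faulty) or -1 (healthy)."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

In a dissolved-gas application, each row of `X` would hold gas concentrations (H₂, CH₄, C₂H₂, etc.) and the labels would mark historically confirmed fault versus normal records.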

Keywords: diagnostic model, gradient descent, machine learning, support vector machine (SVM), transformer fault

Procedia PDF Downloads 294
3310 Interactive Solutions for the Multi-Objective Capacitated Transportation Problem with Mixed Constraints under Fuzziness

Authors: Aquil Ahmed, Srikant Gupta, Irfan Ali

Abstract:

In this paper, we study a multi-objective capacitated transportation problem (MOCTP) with mixed constraints. The paper comprises the modelling and optimisation of an MOCTP in a fuzzy environment in which some goals are fractional and some are linear. In real-life applications of fuzzy goal programming (FGP) with multiple objectives, it is difficult for the decision maker(s) to determine the goal value of each objective precisely, as the goal values are imprecise or uncertain. We also developed a linearization of the fractional goals for solving the MOCTP. Imprecision in the parameters is handled through fuzzy set theory by treating these parameters as trapezoidal fuzzy numbers, and the α-cut approach is used to obtain crisp values of the parameters. Numerical examples illustrate the method for solving the MOCTP.
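The α-cut of a trapezoidal fuzzy number (a, b, c, d) is the crisp interval obtained by slicing its membership function at height α; a small sketch of that step:

```python
def alpha_cut(a, b, c, d, alpha):
    """Alpha-cut of the trapezoidal fuzzy number (a, b, c, d):
    the interval [a + alpha*(b - a), d - alpha*(d - c)], for 0 <= alpha <= 1.
    alpha = 0 returns the support [a, d]; alpha = 1 returns the core [b, c]."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return (a + alpha * (b - a), d - alpha * (d - c))
```

In the MOCTP, each imprecise cost or capacity parameter would be replaced by such an interval at a chosen α before the crisp problem is solved.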

Keywords: capacitated transportation problem, multi objective linear programming, multi-objective fractional programming, fuzzy goal programming, fuzzy sets, trapezoidal fuzzy number

Procedia PDF Downloads 408
3309 Data Augmentation for Early-Stage Lung Nodules Using Deep Image Prior and Pix2pix

Authors: Qasim Munye, Juned Islam, Haseeb Qureshi, Syed Jung

Abstract:

Lung nodules are commonly identified in computed tomography (CT) scans by experienced radiologists at a relatively late stage, yet early diagnosis can greatly increase survival. We propose using a pix2pix conditional generative adversarial network to generate realistic images simulating early-stage lung nodule growth. We applied deep image prior to 2,341 slices from 895 CT scans in the Lung Image Database Consortium (LIDC) dataset to generate pseudo-healthy medical images; from these, 819 were chosen to train a pix2pix network. For most of the images, the pix2pix network generated images in which the nodule increased in size and intensity across epochs. To evaluate the output, 400 generated images were chosen at random and shown to a medical student beside their corresponding original images. Of these 400, 384 were judged satisfactory, meaning they resembled a nodule and were visually similar to the corresponding image. We believe this generated dataset could be used as training data for neural networks that detect lung nodules at an early stage, or to improve the accuracy of such networks. This is particularly significant because datasets containing the growth of early-stage nodules are scarce. The project shows that combining deep image prior with generative models could open the door to creating larger datasets than currently possible and has the potential to increase the accuracy of medical classification tasks.

Keywords: medical technology, artificial intelligence, radiology, lung cancer

Procedia PDF Downloads 45
3308 Hysteresis Behaviour of Mass Concrete Mixed with Plastic Fibre under Compression

Authors: A. A. Okeola, T. I. Sijuade

Abstract:

Unreinforced concrete is a comparatively brittle material when exposed to tensile stresses; the required tensile strength is conventionally provided by steel reinforcement, but the strength of concrete may also be improved tremendously by the addition of fibre. This study investigated the compressive strength of mass concrete mixed with different percentages of plastic fibre. Twelve concrete cube samples with varied percentages of plastic fibre were tested under compression loading after 7, 14, and 28 days of water-submerged curing. The results show that the compressive strength of plastic fibre reinforced concrete increased with curing age for all fibre dosages. The density of the plastic fibre reinforced concrete (PFRC) also increased with curing age, which implies that the concrete absorbs water during curing, aiding its hydration. The lowest compressive strength obtained with plastic fibre still exceeded the targeted 20 N/mm² recommended for construction work, showing that PFRC can be used where significant loading is expected.
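Cube compressive strength is simply the failure load divided by the area of the loaded face; a small check against the 20 N/mm² target, assuming a standard 150 mm cube and illustrative loads (neither is stated in the abstract):

```python
def cube_strength(load_kn, side_mm=150.0):
    """Compressive strength in N/mm^2 from the failure load in kN for a
    cube crushed on a side_mm x side_mm face (150 mm assumed here)."""
    return load_kn * 1000.0 / (side_mm * side_mm)
```

For a 150 mm cube, a failure load of 450 kN corresponds to exactly the 20 N/mm² threshold, so any sample failing above that load meets the target.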

Keywords: compressive strength, concrete, curing, density, plastic fibre

Procedia PDF Downloads 386
3307 The Intense Fascination of Ancient Egypt: A Cross-Cultural Phenomenological Study

Authors: Patrick Andrew McCoy

Abstract:

The intense fascination with ancient Egypt has persisted for thousands of years and across cultures globally, known popularly as ‘Egyptomania,’ ‘Tutmania,’ ‘Mummymania,’ and ‘Orientalism.’ A review of the literature indicates that the psychological themes underlying this behaviour are curiosity, escapism, existentialism, religiosity and spirituality, and cultural, racial, and ethnic identity. A mixed-methods study using established tools is initiated to explore these themes and discover additional motivators. Objectives: The purpose of the study is to explore the themes underlying the intense fascination with ancient Egypt. These themes are cross-cultural phenomena that motivate people in their interactions with other cultures, interactions that have been both beneficial and combative. Methodology: In the mixed-methods design, a quantitative (QUAN) survey measures participants’ strong fascination with ancient Egypt within the psychological themes derived from the literature review. A qualitative (QUAL) survey of open-ended questions explores participants’ exposure to ancient Egypt that may have influenced their fascination, and the behaviours resulting from the phenomenon. The themes are explored in the QUAN and QUAL data to discover which themes are established and to infer the psychological motivations of the phenomenon. Main Contributions: The study will inform several scientific disciplines, including psychology, anthropology, Egyptology, and tourism, and seeks to benefit the tourism industry not only in Egypt but, with generalizability, cultural tourism industries in other countries.

Keywords: cross-cultural psychology, international psychology, mixed-methods, identity, ancient Egypt, phenomenology, escapism, curiosity, existentialism, religiosity, spirituality

Procedia PDF Downloads 102