Search results for: deep approaches
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5623

5383 Development of Risk-Based Ambient Air Quality Standards in the Russian Federation on the Basis of Risk Assessment Procedures Harmonized with International Approaches

Authors: Nina V. Zaitseva, Pavel Z. Shur, Nina G. Atiskova

Abstract:

Nowadays, the harmonization of sanitary and hygienic standards of environmental quality with international standards is a crucial part of Russia's integration into the international community. Harmonization of Russian and international ambient air quality standards may be achieved through risk-based standards development. In this paper, approaches to risk-based standards development and examples of their implementation are presented.

Keywords: harmonization, health risk assessment, evolutionary modelling, benchmark level, nickel, manganese

Procedia PDF Downloads 367
5382 Neural Network Based Decision Trees Using Machine Learning for Alzheimer's Diagnosis

Authors: P. S. Jagadeesh Kumar, Tracy Lin Huan, S. Meenakshi Sundaram

Abstract:

Alzheimer’s disease is one of the most prevalent ailments, and no immediate remission or effective therapy has been established to date. A probable explosion in the number of patients in the upcoming years has led to considerable interest in early detection of the disorder, which could conceivably lead to better treatment outcomes. Complex deterioration of the brain is an observable symptom of the disease, alongside its unique genetic signature. Machine learning, together with deep learning and decision trees, reinforces the ability to learn characteristics from multi-dimensional data and thus simplifies automatic classification of Alzheimer’s disease. Classification tests were designed and carried out to evaluate the prospect of Alzheimer’s disease classification based on machine learning approaches. The results showed that decision trees trained with a deep neural network produced excellent results compared to related pattern classification methods.
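A minimal sketch of the underlying idea (not the authors' implementation; the data, network size and tree depth here are placeholders): a small neural network learns a representation of multi-dimensional features, and a decision tree is then trained on that learned representation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for multi-dimensional clinical / imaging features
X, y = make_classification(n_samples=600, n_features=40, n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# 1) Train a small neural network on the raw features
net = MLPClassifier(hidden_layer_sizes=(64, 16), max_iter=500, random_state=0).fit(X_tr, y_tr)

# 2) Re-use the network's hidden representation as input features
def hidden_features(mlp, X):
    # forward pass through the hidden layers only (default relu activations)
    h = X
    for W, b in zip(mlp.coefs_[:-1], mlp.intercepts_[:-1]):
        h = np.maximum(h @ W + b, 0.0)
    return h

# 3) Train a decision tree on the learned representation
tree = DecisionTreeClassifier(max_depth=5, random_state=0)
tree.fit(hidden_features(net, X_tr), y_tr)
print("tree-on-NN-features accuracy:", accuracy_score(y_te, tree.predict(hidden_features(net, X_te))))
```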

Keywords: Alzheimer's diagnosis, decision trees, deep neural network, machine learning, pattern classification

Procedia PDF Downloads 272
5381 Restricted Boltzmann Machines and Deep Belief Nets for Market Basket Analysis: Statistical Performance and Managerial Implications

Authors: H. Hruschka

Abstract:

This paper presents the first comparison of the performance of the restricted Boltzmann machine and the deep belief net on binary market basket data relative to binary factor analysis and the two best-known topic models, namely latent Dirichlet allocation and the correlated topic model. This comparison shows that the restricted Boltzmann machine and the deep belief net are superior to both binary factor analysis and topic models. Managerial implications that differ between the investigated models are treated as well. The restricted Boltzmann machine is defined as a joint Boltzmann distribution of hidden variables and observed variables (purchases). It comprises one layer of observed variables and one layer of hidden variables. Note that variables of the same layer are not connected. The comparison also includes deep belief nets with three layers. The first layer is a restricted Boltzmann machine based on category purchases. Hidden variables of the first layer are used as input variables by the second-layer restricted Boltzmann machine, which then generates second-layer hidden variables. Finally, in the third layer, hidden variables are related to purchases. A public data set is analyzed which contains one month of real-world point-of-sale transactions in a typical local grocery outlet. It consists of 9,835 market baskets referring to 169 product categories. This data set is randomly split into two halves. One half is used for estimation; the other serves as holdout data. Each model is evaluated by the log likelihood for the holdout data. Performance of the topic models is disappointing, as the holdout log likelihood of the correlated topic model (which is better than latent Dirichlet allocation) is lower by more than 25,000 compared to the best binary factor analysis model. On the other hand, binary factor analysis on its own is clearly surpassed by both the restricted Boltzmann machine and the deep belief net, whose holdout log likelihoods are higher by more than 23,000. Overall, the deep belief net performs best. We also interpret hidden variables discovered by binary factor analysis, the restricted Boltzmann machine and the deep belief net. Hidden variables, characterized by the product categories to which they are related, differ strongly between these three models. To derive managerial implications, we assess the effect of promoting each category on total basket size, i.e., the number of purchased product categories, due to each category's interdependence with all the other categories. The investigated models lead to very different implications, as they disagree about which categories are associated with higher basket size increases due to a promotion. Of course, recommendations based on better performing models should be preferred. The impressive performance advantages of the restricted Boltzmann machine and the deep belief net suggest continuing research with appropriate extensions. Including predictors, especially marketing variables such as price, seems to be an obvious next step. It might also be feasible to take a more detailed perspective by considering purchases of brands instead of purchases of product categories.
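As a point of reference, a restricted Boltzmann machine can be fitted to binary basket data with scikit-learn; the sketch below uses synthetic baskets with the same dimensions as the data set described above, and the library reports a pseudo-log-likelihood, which only approximates the exact holdout log likelihood used in the paper.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in for 9,835 baskets x 169 product categories (binary purchases)
baskets = (rng.random((9835, 169)) < 0.05).astype(float)
train, holdout = train_test_split(baskets, test_size=0.5, random_state=0)

rbm = BernoulliRBM(n_components=20, learning_rate=0.05, n_iter=30, random_state=0)
rbm.fit(train)

# Pseudo-log-likelihood of held-out baskets (an approximation of the holdout criterion)
print("mean holdout pseudo-log-likelihood:", rbm.score_samples(holdout).mean())

# Hidden-unit activations can be inspected to see which categories load on which
# latent variable, analogous to the interpretation step described in the paper.
hidden = rbm.transform(holdout)
```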

Keywords: binary factor analysis, deep belief net, market basket analysis, restricted Boltzmann machine, topic models

Procedia PDF Downloads 169
5380 Deciphering Orangutan Drawing Behavior Using Artificial Intelligence

Authors: Benjamin Beltzung, Marie Pelé, Julien P. Renoult, Cédric Sueur

Abstract:

To this day, it is not known whether drawing is a specifically human behavior or whether this behavior finds its origins in ancestor species. An interesting window onto this question is to analyze drawing behavior in species genetically close to humans, such as non-human primates. A good candidate for this approach is the orangutan, which shares 97% of our genes and exhibits multiple human-like behaviors. Focusing on figurative aspects may not be suitable for orangutans’ drawings, which may appear as scribbles but may still carry meaning. Manual feature selection would introduce an anthropocentric bias, as the features selected by humans may not match those relevant to orangutans. In the present study, we used deep learning to analyze the drawings of a female orangutan named Molly († in 2011), who produced 1,299 drawings in her last five years as part of a behavioral enrichment program at the Tama Zoo in Japan. We investigated multiple ways to decipher Molly’s drawings. First, we demonstrate the existence of differences between seasons by training a deep learning model to classify Molly’s drawings according to the seasons. Then, to understand and interpret these seasonal differences, we analyze how the information spreads within the network, from shallow to deep layers, where early layers encode simple local features and deep layers encode more complex and global information. More precisely, we investigate the impact of feature complexity on classification accuracy through feature extraction fed to a Support Vector Machine. Last, we leverage style transfer to dissociate features associated with drawing style from those describing the representational content and analyze the relative importance of these two types of features in explaining seasonal variation. Content features were relevant for the classification, showing the presence of meaning in these non-figurative drawings and the ability of deep learning to decipher these differences. The style of the drawings was also relevant, as style features encoded enough information for a classification better than random. The accuracy of style features was higher for deeper layers, highlighting the variation of style between seasons in Molly’s drawings. Through this study, we demonstrate how deep learning can help find meaning in non-figurative drawings and interpret these differences.
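A minimal sketch of the feature-extraction-plus-SVM step (the backbone, layer choice and data here are assumptions, not the authors' exact setup): features from a shallow and a deep layer of a pretrained CNN are compared by how well a Support Vector Machine separates the seasons.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Pretrained CNN used only as a fixed feature extractor
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
shallow = tf.keras.Model(base.input, base.get_layer("block1_pool").output)  # simple local features
deep = tf.keras.Model(base.input, base.get_layer("block5_pool").output)     # complex global features

# Stand-ins: drawings resized to 224x224 RGB, with one season label per image
images = np.random.rand(40, 224, 224, 3).astype("float32")
seasons = np.repeat([0, 1, 2, 3], 10)

for name, extractor in [("shallow", shallow), ("deep", deep)]:
    feats = extractor.predict(images, verbose=0).reshape(len(images), -1)
    acc = cross_val_score(SVC(kernel="linear"), feats, seasons, cv=4).mean()
    print(name, "layer -> SVM accuracy:", acc)
```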

Keywords: cognition, deep learning, drawing behavior, interpretability

Procedia PDF Downloads 132
5379 Improved Rare Species Identification Using Focal Loss Based Deep Learning Models

Authors: Chad Goldsworthy, B. Rajeswari Matam

Abstract:

The use of deep learning for species identification in camera trap images has revolutionised our ability to study, conserve and monitor species in a highly efficient and unobtrusive manner, with state-of-the-art models achieving accuracies surpassing that of manual human classification. The high imbalance of camera trap datasets, however, results in poor accuracies for minority (rare or endangered) species due to their relative insignificance to the overall model accuracy. This paper investigates the use of Focal Loss, in comparison to the traditional Cross Entropy loss function, to improve the identification of minority species in the “255 Bird Species” dataset from Kaggle. The results show that, although Focal Loss slightly decreased the accuracy of the majority species, it was able to increase the F1-score by 0.06 and improve the identification of the bottom two, five and ten (minority) species by 37.5%, 15.7% and 10.8%, respectively, as well as improving overall accuracy by 2.96%.
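For illustration, a common Keras-style implementation of the focal loss is sketched below; the gamma and alpha values are the usual defaults, not necessarily those tuned in this paper.

```python
import tensorflow as tf

def focal_loss(gamma=2.0, alpha=0.25):
    """Focal loss for one-hot labels and softmax probabilities."""
    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
        ce = -y_true * tf.math.log(y_pred)            # per-class cross entropy
        weight = alpha * tf.pow(1.0 - y_pred, gamma)  # down-weights easy (well-classified) examples
        return tf.reduce_sum(weight * ce, axis=-1)
    return loss

# Usage with any species classifier:
# model.compile(optimizer="adam", loss=focal_loss(), metrics=["accuracy"])
```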

Keywords: convolutional neural networks, data imbalance, deep learning, focal loss, species classification, wildlife conservation

Procedia PDF Downloads 151
5378 Effect of Punch and Die Profile Radii on the Maximum Drawing Force and the Total Consumed Work in Deep Drawing of a Flat Ended Cylindrical Brass

Authors: A. I. O. Zaid

Abstract:

Deep drawing is considered to be one of the most widely used sheet metal forming processes, particularly in the automobile and aircraft industries. It is widely used for manufacturing a large number of body and spare parts. In its simplest form, it may be defined as a secondary forming process by which a sheet metal is formed into a cylinder or the like by subjecting the sheet to a compressive force through a punch with a flat end of the same geometry as the required shape of the cylinder end, while the sheet is held by a blank holder which hinders its movement but does not stop it. The punch and die profile radii play an important role in this process. In this paper, the effects of punch and die profile radii on the autographic record, the minimum thickness strain location (where cracks normally start and cause fracture), the maximum deep drawing force and the total consumed work in the drawing of flat-ended cylindrical brass cups are investigated. Five punches and five dies, each having different profile radii, were manufactured for this investigation. Furthermore, their effect on the quality of the drawn cups is also presented and discussed. It was found that the die profile radius has more effect on the maximum drawing force and the total consumed work than the punch profile radius.

Keywords: punch and die profile radii, deep drawing process, maximum drawing force, total consumed work, quality of produced parts, flat ended cylindrical brass cups

Procedia PDF Downloads 319
5377 Algorithmic Skills Transferred from Secondary CSI Studies into Tertiary Education

Authors: Piroska Biró, Mária Csernoch, János Máth, Kálmán Abari

Abstract:

Testing the first-year students of Informatics at the University of Debrecen revealed that students start their tertiary studies in programming with a low level of programming knowledge and algorithmic skills. The possible reasons which led the students to this very unfortunate result were examined. The results of the test were compared to the students’ results in the school leaving exams and to their self-assessment values. It was found that there is only a slight connection between the students’ results in the test and in the school leaving exams, especially at the intermediate level. Beyond this, the school leaving exams do not seem to enable students to evaluate their own abilities.

Keywords: deep and surface approaches, metacognitive abilities, programming and algorithmic skills, school leaving exams, tracking code

Procedia PDF Downloads 359
5376 Estimating Gait Parameter from Digital RGB Camera Using Real Time AlphaPose Learning Architecture

Authors: Murad Almadani, Khalil Abu-Hantash, Xinyu Wang, Herbert Jelinek, Kinda Khalaf

Abstract:

Gait analysis is used by healthcare professionals as a tool to gain a better understanding of movement impairment and to track progress. In most circumstances, monitoring patients in their real-life environments with low-cost equipment such as cameras and wearable sensors is preferable. Inertial sensors, on the other hand, cannot provide enough information on angular dynamics. This research offers a method for tracking 2D joint coordinates using cutting-edge vision algorithms and a single RGB camera. We provide an end-to-end comprehensive deep learning pipeline for marker-less gait parameter estimation, which, to our knowledge, has never been done before. To make our pipeline function in real time for real-world applications, we leverage the AlphaPose human posture prediction model and a deep learning transformer. We tested our approach on the well-known GPJATK dataset, obtaining promising results.
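As an illustration of how gait parameters follow from 2D joint coordinates (the keypoints below are hypothetical pixel values, and the geometry is generic rather than the authors' transformer-based pipeline):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, e.g. hip-knee-ankle."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical 2D keypoints (pixel coordinates) returned by a pose model for one frame
hip, knee, ankle = (320, 240), (330, 330), (325, 420)
print("knee flexion angle:", joint_angle(hip, knee, ankle))

# Repeating this per frame yields the angular dynamics (joint-angle trajectories)
# that inertial sensors alone cannot provide.
```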

Keywords: gait analysis, human pose estimation, deep learning, real time gait estimation, AlphaPose, transformer

Procedia PDF Downloads 89
5375 Disentangling Biological Noise in Cellular Images with a Focus on Explainability

Authors: Manik Sharma, Ganapathy Krishnamurthi

Abstract:

The cost of some drugs and medical treatments has risen so much in recent years that many patients are having to go without. One of the more surprising reasons behind this cost is how long it takes to bring new treatments to market. Despite improvements in technology and science, research and development continue to lag. In fact, finding a new treatment takes, on average, more than 10 years and costs hundreds of millions of dollars. A cellular image classification model could make researchers more efficient: if successful, it could dramatically improve the industry's ability to model cellular images according to their relevant biology, in turn greatly decreasing the cost of treatments and ensuring these treatments get to patients faster. This work aims at solving a part of this problem by creating a cellular image classification model which can decipher the genetic perturbations in cells (occurring naturally or artificially). Another interesting question addressed is what makes the deep-learning model decide in a particular fashion, which can further help in demystifying the mechanism of action of certain perturbations and paves the way towards the explainability of the deep-learning model.
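The paper does not name a specific explainability technique; one common choice is input-gradient saliency, sketched below under the assumption of a trained Keras classifier. It highlights the pixels that most influence the predicted perturbation class.

```python
import tensorflow as tf

def saliency_map(model, image, class_index):
    """image: (H, W, C) float array; returns |d score / d pixel| per pixel."""
    x = tf.convert_to_tensor(image[None, ...])
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x, training=False)[0, class_index]  # score of the class of interest
    grads = tape.gradient(score, x)[0]
    return tf.reduce_max(tf.abs(grads), axis=-1)          # collapse channels to one heat map

# Usage (hypothetical names): sal = saliency_map(trained_model, cell_image, predicted_class)
```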

Keywords: cellular images, genetic perturbations, deep-learning, explainability

Procedia PDF Downloads 88
5374 A Deep Learning Based Approach for Dynamically Selecting Pre-processing Technique for Images

Authors: Revoti Prasad Bora, Nikita Katyal, Saurabh Yadav

Abstract:

Pre-processing plays an important role in various image processing applications. Most of the time, due to the similar nature of images, a particular pre-processing step or a set of pre-processing steps is sufficient to produce the desired results. However, in the education domain, there is a wide variety of images in various respects, such as images with line-based diagrams, chemical formulas, mathematical equations, etc. Hence a single pre-processing step or a set of pre-processing steps may not yield good results. Therefore, a deep-learning-based approach for dynamically selecting a relevant pre-processing technique for each image is proposed. The proposed method works as a classifier to detect hidden patterns in the images and predicts the relevant pre-processing technique needed for the image. This approach was evaluated on an image similarity matching problem, but it can be adapted to other use cases as well. Experimental results showed significant improvement in average similarity ranking with the proposed method as opposed to static pre-processing techniques.
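A minimal sketch of the dynamic-selection idea (the routines, image size and selector model are assumptions, not the authors' code): a classifier predicts which pre-processing routine suits an image, and a dispatcher applies it before the downstream similarity matching.

```python
import numpy as np
import cv2  # OpenCV for the example routines

def binarize(img):  # e.g. for line-based diagrams; img must be 8-bit grayscale
    return cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

def denoise(img):   # e.g. for photographed equations
    return cv2.medianBlur(img, 3)

def sharpen(img):   # e.g. for faint chemical formulas
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    return cv2.filter2D(img, -1, kernel)

PIPELINES = {0: binarize, 1: denoise, 2: sharpen}

def preprocess(img_gray, selector_model):
    """selector_model: a trained Keras-style classifier over downscaled images (assumed)."""
    x = cv2.resize(img_gray, (64, 64))[None, ..., None] / 255.0
    choice = int(np.argmax(selector_model.predict(x, verbose=0)))  # predicted technique
    return PIPELINES[choice](img_gray)
```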

Keywords: deep-learning, classification, pre-processing, computer vision, image processing, educational data mining

Procedia PDF Downloads 122
5373 Chitin Crystalline Phase Transition Promoted by Deep Eutectic Solvent

Authors: Diana G. Ramirez-Wong, Marius Ramirez, Regina Sanchez-Leija, Adriana Rugerio, R. Araceli Mauricio-Sanchez, Martin A. Hernandez-Landaverde, Arturo Carranza, John A. Pojman, Josue D. Mota-Morales, Gabriel Luna-Barcenas

Abstract:

Chitin films were prepared using alpha-chitin from shrimp shells as the raw material and a simple precipitation-evaporation method. A choline chloride:urea deep eutectic solvent (DES) was used to disperse chitin and compared against hexafluoroisopropanol (HFIP). A careful analysis of the chemical and crystalline structure was carried out along the synthesis of the films, revealing crystalline-phase transitions. The full conversion of the alpha- to beta-, or alpha- to gamma-chitin structure was detected by XRD and NMR on the films. The synthesis of highly crystalline monophasic gamma-chitin films was achieved using the DES, whereas HFIP helps to promote the beta-phase. These results encourage further study of DESs as good processing media to control the final properties of chitin-based materials.

Keywords: chitin, deep eutectic solvent, polymorph, phase transformation

Procedia PDF Downloads 511
5372 Developing API Economy: Associating Value to APIs and Microservices in an Enterprise

Authors: Mujahid Sultan

Abstract:

The IT industry has seen many transformations in the Software Development Life Cycle (SDLC) methodologies and development approaches. SDLCs range from waterfall to agile, and the development approaches from monolith to microservices. Management, orchestration, and monetization of microservices have created an API economy in the modern enterprise. There are two approaches to API design, code first and design first. Design first is gaining popularity in the industry as this allows capturing the API needs from the stakeholders rather than the development teams guesstimating the needs and associating a monetary value with the APIs and microservices. In this publication, we describe an approach to organizing and creating stakeholder needs and requirements for designing microservices and APIs.

Keywords: requirements engineering, enterprise architecture, APIs, microservices, DevOps, continuous delivery, continuous integration, stakeholder viewpoints

Procedia PDF Downloads 162
5371 Working Fluids in Absorption Chillers: Investigation of the Use of Deep Eutectic Solvents

Authors: L. Cesari, D. Alonso, F. Mutelet

Abstract:

Interest in cold production using absorption chillers has been increasing for many years. In fact, absorption cycles replace the compressor and thus reduce electrical consumption. The devices also allow waste heat generated through industrial activities to be recovered and cooled to a moderate temperature in accordance with regulatory guidelines. Many working fluids have been investigated but, to the authors' best knowledge, could not compete with the commonly used {H2O + LiBr} and {H2O + NH3}. Yet, the corrosion, toxicity and crystallization phenomena of these mixtures prevent the development of the absorption technology. This work investigates the possible use of a glyceline deep eutectic solvent (DES) and CO2 as a working fluid in an absorption chiller. To do so, good knowledge of the mixtures is required. Experimental measurements (vapor-liquid equilibria, density, and heat capacity) were performed to complete the data lacking in the literature. The performance of the mixtures was quantified by calculation of the coefficient of performance (COP). The results show that working fluids containing DES + CO2 are an interesting alternative and open different avenues of working mixtures for absorption chillers.
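For reference, the coefficient of performance of an absorption chiller is conventionally defined as the cooling delivered at the evaporator divided by the heat supplied to the generator (plus the usually small pump work); this is the standard definition and not necessarily the exact formulation used by the authors.

```latex
\mathrm{COP} = \frac{\dot{Q}_{\text{evaporator}}}{\dot{Q}_{\text{generator}} + \dot{W}_{\text{pump}}}
\approx \frac{\dot{Q}_{\text{evaporator}}}{\dot{Q}_{\text{generator}}}
```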

Keywords: absorption devices, deep eutectic solvent, energy valorization, experimental data, simulation

Procedia PDF Downloads 91
5370 Inculcating the Reading and Writing Approaches through Community-Based Teacher Workshops: A Case of Primary Schools in Limpopo Province

Authors: Tsebe Wilfred Molotja, Mahlapahlapane Themane, Kgetja Maruma

Abstract:

It is globally accepted that reading in primary schools serves as a foundational basis for good reading skills, which is evident in students’ academic success throughout their studies. However, the PIRLS (2016) report on literacy performance found that primary school learners are not able to read as fluently as expected. The results from ANA (2012) also indicated that South African learners achieved the lowest results compared to learners globally. The purpose of this study is to investigate the approaches employed by educators in developing learners’ reading and writing skills and to workshop them on the best reading and writing approaches to implement. The study adopted an explorative qualitative design in which 27 educators from primary schools around the University of Limpopo were purposefully sampled to participate. Data were collected through interviews and classroom observation during class visits facilitated by research assistants. The study found that teachers are aware of different approaches to developing learners’ reading and writing skills, even though these are not aligned with the curriculum. However, the problem is with implementation, as the conditions in the classrooms are not conducive to it. The study recommends that more workshops be held to capacitate teachers with pedagogical approaches to teaching reading. An appeal is also made to the Department of Basic Education to make classrooms conducive for teaching and learning to take place.

Keywords: academic success, reading and writing, community based, approaches

Procedia PDF Downloads 70
5369 Wolof Voice Response Recognition System: A Deep Learning Model for Wolof Audio Classification

Authors: Krishna Mohan Bathula, Fatou Bintou Loucoubar, FNU Kaleemunnisa, Christelle Scharff, Mark Anthony De Castro

Abstract:

Voice recognition algorithms such as automatic speech recognition and text-to-speech systems for African languages can play an important role in bridging the digital divide of Artificial Intelligence in Africa, contributing to the establishment of a fully inclusive information society. This paper proposes a deep learning model that can classify user responses as inputs for an interactive voice response system. A dataset with the Wolof language words ‘yes’ and ‘no’ was collected as audio recordings. A two-stage data augmentation approach was adopted to enhance the dataset size required by the deep neural network. Data preprocessing and feature engineering with Mel-Frequency Cepstral Coefficients were implemented. Convolutional Neural Networks (CNNs) have proven to be very powerful in image classification and are promising for audio processing when sounds are transformed into spectra. To perform voice response classification, the recordings are transformed into sound frequency feature spectra and then classified with a deep CNN model, following an image classification methodology. The inference model of this trained and reusable Wolof voice response recognition system can be integrated with many applications associated with both web and mobile platforms.
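A minimal sketch of the MFCC-plus-CNN pipeline (sampling rate, shapes and layer sizes are assumptions, not the authors' configuration): recordings are turned into MFCC "images" and fed to a small Keras CNN that separates the Wolof ‘yes’/‘no’ classes.

```python
import librosa
import numpy as np
import tensorflow as tf

def to_mfcc(path, n_mfcc=40, max_frames=44):
    """Load a recording and return a fixed-size MFCC matrix shaped like an image."""
    y, sr = librosa.load(path, sr=16000)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    m = np.pad(m, ((0, 0), (0, max(0, max_frames - m.shape[1]))))[:, :max_frames]
    return m[..., None]  # (n_mfcc, frames, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(40, 44, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # 'yes' vs 'no'
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(np.stack([to_mfcc(p) for p in wav_paths]), labels, epochs=20)  # wav_paths, labels assumed
```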

Keywords: automatic speech recognition, interactive voice response, voice response recognition, wolof word classification

Procedia PDF Downloads 90
5368 Author Profiling: Prediction of Learners’ Gender on a MOOC Platform Based on Learners’ Comments

Authors: Tahani Aljohani, Jialin Yu, Alexandra. I. Cristea

Abstract:

The more an educational system knows about a learner, the more personalised interaction it can provide, which leads to better learning. However, asking a learner directly is potentially disruptive, and often ignored by learners. Especially in the booming realm of Massive Open Online Course (MOOC) platforms, only a very low percentage of users disclose demographic information about themselves. Thus, in this paper, we aim to predict learners’ demographic characteristics by proposing an approach using linguistically motivated deep learning architectures for learner profiling, particularly targeting gender prediction on a FutureLearn MOOC platform. Additionally, we tackle the difficult problem of predicting the gender of learners based on their comments only, which are often available across MOOCs. The most common current approaches to text classification use the Long Short-Term Memory (LSTM) model, treating sentences as sequences. However, human language also has structure. In this research, rather than considering sentences as plain sequences, we hypothesise that higher semantic- and syntactic-level sentence processing based on linguistics will render a richer representation. We thus evaluate the traditional LSTM against other bleeding-edge models which take syntactic structure into account, such as the tree-structured LSTM, the Stack-augmented Parser-Interpreter Neural Network (SPINN) and the Structure-Aware Tag Augmented model (SATA). Additionally, we explore using different word-level encoding functions. We have implemented these methods on our MOOC dataset, which proved the most performant, and compared against a public sentiment analysis dataset that is further used to cross-examine the models' results.
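For context, a minimal version of the plain-sequence LSTM baseline is sketched below (vocabulary size and layer widths are assumptions); the tree-structured LSTM, SPINN and SATA variants evaluated in the paper are not reproduced here.

```python
import tensorflow as tf

VOCAB = 20000  # assumed vocabulary size after tokenizing learner comments

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 128),               # word-level encoding
    tf.keras.layers.LSTM(64),                             # comments treated as plain sequences
    tf.keras.layers.Dense(1, activation="sigmoid"),       # gender as a binary label
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(padded_comment_ids, gender_labels, validation_split=0.2, epochs=5)  # names assumed
```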

Keywords: deep learning, data mining, gender prediction, MOOCs

Procedia PDF Downloads 116
5367 Defect Classification of Hydrogen Fuel Pressure Vessels using Deep Learning

Authors: Dongju Kim, Youngjoo Suh, Hyojin Kim, Gyeongyeong Kim

Abstract:

Acoustic Emission Testing (AET) is widely used to test the structural integrity of operational hydrogen storage containers, and clustering algorithms are frequently used in pattern recognition methods to interpret AET results. However, the interpretation of AET results can vary from user to user, as the tuning of the relevant parameters relies on the user's experience and knowledge of AET. Therefore, it is necessary to use a deep learning model instead, to identify patterns in acoustic emission (AE) signal data that can be used to classify defects. In this paper, a deep learning-based model for classifying the types of defects in hydrogen storage tanks, using AE sensor waveforms, is proposed. As hydrogen storage tanks are commonly constructed using carbon fiber reinforced polymer composite (CFRP), a defect classification dataset was collected through a tensile test on a CFRP specimen with an AE sensor attached. The classification model, using a one-dimensional convolutional neural network (1-D CNN) and synthetic minority oversampling technique (SMOTE) data augmentation, achieved 91.09% accuracy for each defect. It is expected that the deep learning classification model in this paper, used with AET, will help in evaluating the operational safety of hydrogen storage containers.
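A minimal sketch of the SMOTE-plus-1-D-CNN pipeline (waveform length, class count and layer sizes are assumptions, not the configuration used in the paper):

```python
import numpy as np
import tensorflow as tf
from imblearn.over_sampling import SMOTE

WAVEFORM_LEN, N_CLASSES = 1024, 4
# Stand-ins: X holds AE waveforms, y holds integer defect labels
X = np.random.randn(300, WAVEFORM_LEN)
y = np.random.randint(0, N_CLASSES, 300)

# Oversample minority defect classes before training
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

model = tf.keras.Sequential([
    tf.keras.layers.Reshape((WAVEFORM_LEN, 1), input_shape=(WAVEFORM_LEN,)),
    tf.keras.layers.Conv1D(16, 9, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, 9, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(X_bal, y_bal, epochs=30, validation_split=0.2)
```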

Keywords: acoustic emission testing, carbon fiber reinforced polymer composite, one-dimensional convolutional neural network, SMOTE data augmentation

Procedia PDF Downloads 68
5366 A Review of Machine Learning for Big Data

Authors: Devatha Kalyan Kumar, Aravindraj D., Sadathulla A.

Abstract:

Big data are now rapidly expanding in engineering, science and many other domains. The potential of large or massive data is undoubtedly significant, and it makes sense that new ways of thinking and new learning techniques are required to address the various big data challenges. Machine learning is continuously unleashing its power in a wide range of applications. This paper reviews the latest advances in research on machine learning for big data processing. First, the machine learning techniques used in recent studies are surveyed, such as deep learning, representation learning, transfer learning, active learning, and distributed and parallel learning. The paper then focuses on the challenges of machine learning for big data and possible solutions.

Keywords: active learning, big data, deep learning, machine learning

Procedia PDF Downloads 410
5365 Vulnerability of People to Climate Change: Influence of Methods and Computation Approaches on Assessment Outcomes

Authors: Adandé Belarmain Fandohan

Abstract:

Climate change has become a major concern globally, particularly in rural communities that have to find rapid coping solutions. Several vulnerability assessment approaches have been developed in the last decades. This comes with a higher risk that different methods will result in different conclusions, thereby making comparisons difficult and decision-making inconsistent across areas. The effect of methods and computational approaches on estimates of people's vulnerability was assessed using data collected from the Gambia. Twenty-four indicators reflecting the vulnerability components (exposure, sensitivity, and adaptive capacity) were selected for this purpose. Data were collected through household surveys and key informant interviews. One hundred and fifteen respondents were surveyed across six communities and two administrative districts. Results were compared over three computational approaches: the maximum value transformation normalization, the z-score transformation normalization, and simple averaging. Regardless of the approach used, communities that have high exposure to climate change and extreme events were the most vulnerable. Furthermore, vulnerability was strongly related to the socio-economic characteristics of farmers. The survey evidenced variability in vulnerability among communities and administrative districts. Comparing outputs across approaches, overall, people in the study area were found to be highly vulnerable using the simple average and maximum value transformation, whereas they were only moderately vulnerable using the z-score transformation approach. It is suggested that assessment-approach-induced discrepancies be accounted for in international debates, so that assessment approaches can be harmonized/standardized to make outputs comparable across regions. This will also likely increase the relevance of decision-making for adaptation policies.
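A minimal sketch of the three computational approaches on the same indicator matrix (random stand-in data with the study's dimensions, aggregated into a single equally weighted index for simplicity, whereas the study weights indicators within exposure, sensitivity and adaptive capacity):

```python
import numpy as np

indicators = np.random.rand(115, 24)  # stand-in: 115 respondents x 24 indicators

max_norm = indicators / indicators.max(axis=0)                             # maximum value transformation
z_norm = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)   # z-score transformation
raw = indicators                                                            # simple averaging (no scaling)

for name, mat in [("max-value", max_norm), ("z-score", z_norm), ("simple average", raw)]:
    index = mat.mean(axis=1)  # per-household vulnerability index
    print(f"{name:15s} mean index = {index.mean():.3f}")

# The z-score index is centred near zero while the other two are not, so the same
# vulnerability cut-offs classify households differently across approaches, which is
# the kind of discrepancy the paper reports.
```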

Keywords: maximum value transformation, simple averaging, vulnerability assessment, West Africa, z-score transformation

Procedia PDF Downloads 80
5364 Domain-Specific Deep Neural Network Model for Classification of Abnormalities on Chest Radiographs

Authors: Nkechinyere Joy Olawuyi, Babajide Samuel Afolabi, Bola Ibitoye

Abstract:

This study collected a preprocessed dataset of chest radiographs and formulated a deep neural network model for detecting abnormalities. It also evaluated the performance of the formulated model and implemented a prototype of it. This was with the view to developing a deep neural network model to automatically classify abnormalities in chest radiographs. In order to achieve the overall purpose of this research, a large set of chest x-ray images was sourced and collected from the CheXpert dataset, an online repository of annotated chest radiographs compiled by the Machine Learning Research Group, Stanford University. The chest radiographs were preprocessed into a format that can be fed into a deep neural network; the preprocessing techniques used were standardization and normalization. The classification problem was formulated as a multi-label binary classification model, which used a convolutional neural network architecture to decide whether an abnormality was present or not in the chest radiographs. The classification model was evaluated using specificity, sensitivity, and the Area Under the Curve (AUC) score as parameters. A prototype of the classification model was implemented using the Keras open-source deep learning framework in the Python programming language. Based on the ROC AUC, the model was able to classify Atelectasis, Support Devices, Pleural Effusion, Pneumonia, a normal CXR (no finding), Pneumothorax, and Consolidation. However, Lung Opacity and Cardiomegaly had a probability of less than 0.5 and thus were classified as absent. Precision, recall, and F1 score values were all 0.78; this implies that the numbers of false positives and false negatives are the same, revealing some measure of label imbalance in the dataset. The study concluded that the developed model is sufficient to classify abnormalities present in chest radiographs as present or absent.
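A minimal sketch of the multi-label binary formulation in Keras (the backbone and input size are assumptions; the label list follows the abnormalities named above): one sigmoid output per abnormality, trained with a binary cross-entropy loss.

```python
import tensorflow as tf

LABELS = ["Atelectasis", "Cardiomegaly", "Consolidation", "Pleural Effusion",
          "Pneumonia", "Pneumothorax", "Lung Opacity", "Support Devices", "No Finding"]

base = tf.keras.applications.DenseNet121(weights=None, include_top=False, input_shape=(224, 224, 1))
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(len(LABELS), activation="sigmoid"),  # independent probability per label
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",                      # one binary decision per abnormality
              metrics=[tf.keras.metrics.AUC(multi_label=True)])
# model.fit(images, label_matrix, epochs=10)  # label_matrix: (n, len(LABELS)) array of 0/1
```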

Keywords: transfer learning, convolutional neural network, radiograph, classification, multi-label

Procedia PDF Downloads 87
5363 An Integrated Label Propagation Network for Structural Condition Assessment

Authors: Qingsong Xiong, Cheng Yuan, Qingzhao Kong, Haibei Xiong

Abstract:

Deep-learning-driven approaches based on vibration responses have attracted increasing attention in rapid structural condition assessment, while obtaining sufficient measured training data with corresponding labels is relatively costly and even inaccessible in practical engineering. This study proposes an integrated label propagation network for structural condition assessment, which is able to diffuse the labels from continuously generated measurements of the intact structure to those of damage scenarios with missing labels. The integrated network embeds damage-sensitive feature extraction by a deep autoencoder and pseudo-label propagation by optimized fuzzy clustering, the architecture and mechanism of which are elaborated. With a sophisticated network design and specific strategies for improving performance, the present network extends the strengths of self-supervised representation learning, unsupervised fuzzy clustering and supervised classification algorithms into an integrated framework for assessing damage conditions. Both numerical simulations and full-scale laboratory shaking table tests of a two-story building structure were conducted to validate its capability of detecting post-earthquake damage. The identification accuracy of the present network was 0.95 in numerical validations and 0.86 on average in the laboratory case studies. It should be noted that the whole training procedure of all the models involved in the network does not rely upon any labeled data from damage scenarios, but only several samples of the intact structure, which indicates a significant advantage in model adaptability and feasible applicability in practice.
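A simplified sketch of the pipeline's three stages (KMeans stands in for the paper's optimized fuzzy clustering, and a plain linear classifier for the final supervised stage; data and layer sizes are placeholders):

```python
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = np.random.randn(500, 128).astype("float32")  # stand-in vibration-response features

# 1) Self-supervised feature extraction with an autoencoder
inp = tf.keras.Input((128,))
h = tf.keras.layers.Dense(64, activation="relu")(inp)
code = tf.keras.layers.Dense(16, activation="relu")(h)       # damage-sensitive bottleneck
out = tf.keras.layers.Dense(128)(tf.keras.layers.Dense(64, activation="relu")(code))
auto = tf.keras.Model(inp, out)
auto.compile(optimizer="adam", loss="mse")
auto.fit(X, X, epochs=20, verbose=0)
encoder = tf.keras.Model(inp, code)

# 2) Pseudo-label generation by clustering in the learned feature space
feats = encoder.predict(X, verbose=0)
pseudo = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(feats)

# 3) Supervised condition classifier trained on the propagated pseudo-labels
clf = LogisticRegression(max_iter=1000).fit(feats, pseudo)
```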

Keywords: autoencoder, condition assessment, fuzzy clustering, label propagation

Procedia PDF Downloads 76
5362 Software Cloning and Agile Environment

Authors: Ravi Kumar, Dhrubajit Barman, Nomi Baruah

Abstract:

Software Cloning has grown into an active area in the software engineering research community, yielding numerous techniques, various tools and other methods for clone detection and removal. Copying and modifying a block of code is identified as cloning, as it is the most basic means of software reuse. Agile Software Development is an approach currently being used in various software projects, as it helps respond to the unpredictability of building software through incremental, iterative work cadences. Software Cloning has been introduced to the Agile Environment, and many Agile Software Development approaches use the concept of Software Cloning. This paper discusses the various Agile Software Development approaches. It also discusses the degree to which the Software Cloning concept is being introduced in the Agile Software Development approaches.

Keywords: agile environment, refactoring, reuse, software cloning

Procedia PDF Downloads 504
5361 Severe Bone Marrow Edema on Sacroiliac Joint MRI Increases the Risk of Low BMD in Patients with Axial Spondyloarthritis

Authors: Kwi Young Kang

Abstract:

Objective: To determine the association between inflammatory and structural lesions on sacroiliac joint (SIJ) MRI and BMD and to identify risk factors for low BMD in patients with axial spondyloarthritis (axSpA). Methods: Seventy-six patients who fulfilled the ASAS axSpA criteria were enrolled. All underwent SIJ MRI and BMD measurement at the lumbar spine, femoral neck, and total hip. Inflammatory and structural lesions on SIJ MRI were scored. Laboratory tests and assessment of radiographic and disease activity were performed at the time of MRI. The association between SIJ MRI findings and BMD was evaluated. Results: Among the 76 patients, 14 (18%) had low BMD. Patients with low BMD showed significantly higher bone marrow edema (BME) and deep BME scores on MRI than those with normal BMD (p<0.047 and 0.007, respectively). Inflammatory lesions on SIJ MRI correlated with BMD at the femoral neck and total hip. Multivariate analysis identified the presence of deep BME on SIJ MRI, increased CRP, and sacroiliitis on X-ray as risk factors for low BMD (OR: 5.6, 14.6, and 2.5, respectively). Conclusion: The presence of deep BME on SIJ MRI, increased CRP levels, and severity of sacroiliitis on X-ray were independent risk factors for low BMD.

Keywords: axial spondyloarthritis, sacroiliac joint MRI, bone mineral density, sacroiliitis

Procedia PDF Downloads 510
5360 Approaches to Counseling as Done by Traditional Cultural Healers in North America

Authors: Lewis Mehl-Madrona, Barbara Mainguy

Abstract:

We describe the type of counseling done by traditional cultural healers in North America. We follow an autoethnographic course of development through the first author's integration of mainstream training with Native-American heritage and study with traditional medicine people. We assembled traditional healing elders from North America and discussed with them their practices and their philosophies of healing. We draw parallels between their approaches and some European-based philosophies and religious thought, including the work of Heidegger, Levin, Fox, Kierkegaard, and others. An example of the treatment process with a depressed client is provided, and similarities and differences with conventional psychotherapies are described.

Keywords: indigenous approaches to counseling, indigenous bodywork, indigenous healing, North American indigenous people

Procedia PDF Downloads 247
5359 Recurrent Neural Networks with Deep Hierarchical Mixed Structures for Chinese Document Classification

Authors: Zhaoxin Luo, Michael Zhu

Abstract:

In natural languages, there are always complex semantic hierarchies. Obtaining a feature representation based on these complex semantic hierarchies becomes the key to the success of the model. Several RNN models have recently been proposed that use latent indicators to obtain the hierarchical structure of documents. However, a model that only uses a single layer of latent indicators cannot capture the true hierarchical structure of the language, especially a complex language like Chinese. In this paper, we propose a deep layered model that stacks arbitrarily many RNN layers equipped with latent indicators. By using EM and training hierarchically, our model solves the computational problem of stacking RNN layers and makes it possible to stack arbitrarily many of them. Our deep hierarchical model not only achieves results comparable to large pre-trained models on the Chinese short text classification problem but also achieves state-of-the-art results on the Chinese long text classification problem.
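For context, a plain stacked-RNN baseline for document classification is sketched below (vocabulary size, layer widths and class count are assumptions); the latent indicators and hierarchical EM training that constitute the paper's contribution are omitted.

```python
import tensorflow as tf

VOCAB = 50000  # assumed vocabulary size; Chinese text is typically tokenized at character or word level

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 128),
    tf.keras.layers.LSTM(128, return_sequences=True),  # lower layer: local structure
    tf.keras.layers.LSTM(128, return_sequences=True),  # middle layer
    tf.keras.layers.LSTM(64),                           # top layer: document-level summary
    tf.keras.layers.Dense(10, activation="softmax"),    # document classes (assumed count)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```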

Keywords: natural language processing, recurrent neural network, hierarchical structure, document classification, Chinese

Procedia PDF Downloads 41
5358 Preventing the Drought of Lakes by Using Deep Reinforcement Learning in France

Authors: Farzaneh Sarbandi Farahani

Abstract:

The drought and decline in lake levels in recent years, due to global warming and excessive use of the water resources feeding lakes, are of great concern, and this research provides a structure to investigate this issue. First, the information required for simulating lake drought is provided, with strong references and the necessary assumptions. An Entity-Component-System (ECS) structure has been used for the simulation, which allows assumptions to be incorporated flexibly. Three major user groups (industry, agriculture, and domestic users) consume water from groundwater and surface water (i.e., streams, rivers and lakes). Lake Mead has been considered for the simulation, and the information necessary to investigate its drought is also provided. The results are presented in the form of a scenario-based design and optimal strategy selection. For optimal strategy selection, a deep reinforcement learning algorithm is developed to select the best set of strategies among all possible projects. These results can provide a better view of how to plan to prevent lake drought.
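A minimal sketch of the strategy-selection idea using tabular Q-learning in place of the paper's deep reinforcement learning agent (states, strategies and the reward function are hypothetical):

```python
import numpy as np

N_STATES, N_STRATEGIES = 10, 4  # discretized lake levels x candidate management strategies
Q = np.zeros((N_STATES, N_STRATEGIES))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(state, action):
    """Hypothetical environment: returns the next lake-level state and a reward that
    rewards higher lake levels and penalizes costly restriction strategies."""
    next_state = int(np.clip(state + np.random.choice([-1, 0, 1]) + (action > 1), 0, N_STATES - 1))
    reward = next_state - 0.5 * action
    return next_state, reward

state = 5
for _ in range(20000):
    action = np.random.randint(N_STRATEGIES) if np.random.rand() < eps else int(np.argmax(Q[state]))
    nxt, r = step(state, action)
    Q[state, action] += alpha * (r + gamma * Q[nxt].max() - Q[state, action])
    state = nxt

print("best strategy per lake-level state:", np.argmax(Q, axis=1))
```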

Keywords: drought simulation, Lake Mead, entity component system programming, deep reinforcement learning

Procedia PDF Downloads 64
5357 Breakthrough Highly-Effective Extraction of Perfluoroctanoic Acid Using Natural Deep Eutectic Solvents

Authors: Sana Eid, Ahmad S. Darwish, Tarek Lemaoui, Maguy Abi Jaoude, Fawzi Banat, Shadi W. Hasan, Inas M. AlNashef

Abstract:

Addressing the growing challenge of per- and polyfluoroalkyl substances (PFAS) pollution in water bodies, this study introduces natural deep eutectic solvents (NADESs) as a pioneering solution for the efficient extraction of perfluorooctanoic acid (PFOA), one of the most persistent and concerning PFAS pollutants. Among the tested NADESs, trioctylphosphine oxide: lauric acid (TOPO:LauA) in a 1:1 molar ratio was distinguished as the most effective, achieving an extraction efficiency of approximately 99.52% at a solvent-to-feed (S:F) ratio of 1:2, room temperature, and neutral pH. This efficiency is achieved within a notably short mixing time of only one min, which is significantly less than the time required by conventional methods, underscoring the potential of TOPO:LauA for rapid and effective PFAS remediation. TOPO:LauA maintained consistent performance across various operational parameters, including a range of initial PFOA concentrations (0.1 ppm to 1000 ppm), temperatures (15 °C to 100 °C), pH values (3 to 9), and S:F ratios (2:3 to 1:7), demonstrating its versatility and robustness. Furthermore, its effectiveness was consistently high over seven consecutive extraction cycles, highlighting TOPO:LauA as a sustainable, environmentally friendly alternative to hazardous organic solvents, with promising applications for reliable, repeatable use in combating persistent water pollutants such as PFOA.
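For reference, extraction efficiency is conventionally computed from the PFOA concentration in the aqueous phase before and after contact with the solvent; this standard definition is assumed here rather than quoted from the paper.

```latex
E\,(\%) = \frac{C_{\text{initial}} - C_{\text{final}}}{C_{\text{initial}}} \times 100
```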

Keywords: deep eutectic solvents, natural deep eutectic solvents, perfluorooctanoic acid, water remediation

Procedia PDF Downloads 35
5356 Heavy Metal Distribution in Tissues of Two Commercially Important Fish Species, Euryglossa orientalis and Psettodes erumei

Authors: Reza Khoshnood, Zahra Khoshnood, Ali Hajinajaf, Farzad Fahim, Behdokht Hajinajaf, Farhad Fahim

Abstract:

In 2013, 24 fish samples were taken from two fishery regions in Bandar-Abbas and Bandar-Lengeh, the fishing grounds north of the Hormoz Strait (Persian Gulf) near the Iranian coastline. The two flat fishes were oriental sole (Euryglossa orientalis) and deep flounder (Psettodes erumei). Using the ROPME method (MOOPAM) for chemical digestion, Cd concentration was measured with a non-flame atomic absorption spectrophotometry technique. The average concentration of Cd in the edible muscle tissue of deep flounder was 0.15±0.06 µg g-1 in Bandar-Abbas and 0.10±0.05 µg g-1 in Bandar-Lengeh. The corresponding values for oriental sole were 0.20±0.13 and 0.13±0.11 µg g-1. The average concentration of Cd in the liver tissue of deep flounder was 0.22±0.05 µg g-1 in Bandar-Abbas and 0.20±0.04 µg g-1 in Bandar-Lengeh. The corresponding values for oriental sole were 0.31±0.09 and 0.24±0.13 µg g-1 in Bandar-Abbas and Bandar-Lengeh, respectively.

Keywords: trace metal, Euryglossa orientalis, Psettodes erumei, Persian Gulf

Procedia PDF Downloads 640
5355 Recurrent Neural Networks for Complex Survival Models

Authors: Pius Marthin, Nihal Ata Tutkun

Abstract:

Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to the strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on the deep learning approach to survival modeling; however, its application to complex survival problems still needs to be improved. In addition, the existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF) and an external auto-encoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.

Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayers perceptrons (MLPs)

Procedia PDF Downloads 59
5354 Computational Approaches for Ballistic Impact Response of Stainless Steel 304

Authors: A. Mostafa

Abstract:

This paper presents a numerical study on the determination of the ballistic limit velocity (V50) of stainless steel 304 (SS 304) used in manufacturing security screens. The simulated ballistic impact tests were conducted on clamped sheets with different thicknesses using the ABAQUS/Explicit nonlinear finite element (FE) package. The ballistic limit velocity was determined using three approaches, namely: numerical tests based on material properties, FE-calculated residual velocities and FE-calculated residual energies. The Johnson-Cook plasticity and failure criterion was utilized to simulate the dynamic behaviour of the SS 304 under various strain rates, while the well-known Lambert-Jonas equation was used for the data regression of the residual velocity and energy model. Good agreement between the investigated numerical methods was achieved. Additionally, the dependence of the ballistic limit velocity on the sheet thickness was observed. The proposed approaches present viable and cost-effective assessment methods of the ballistic performance of SS 304, which will support the development of robust security screen systems.
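For reference, the Lambert-Jonas relation used for the regression links residual velocity to impact velocity; in its usual form (a and p are fitted parameters, with the ballistic limit taken as V50):

```latex
V_r = a\left(V_i^{\,p} - V_{50}^{\,p}\right)^{1/p}, \qquad V_i > V_{50}
```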

Keywords: ballistic velocity, stainless steel, numerical approaches, security screen

Procedia PDF Downloads 136