Search results for: Abaqus Python scripting
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 437

137 Evaluation of Seismic Behavior of Steel Shear Wall with Opening with Stiffener and Beam with Reduced Cross Section under Cyclic Loading with Finite Element Analysis Method

Authors: Masoud Mahdavi

Abstract:

During an earthquake, the structure is subjected to seismic loads that induce tension in the members of the building. The use of energy-dissipation elements in the structure reduces the share of seismic forces carried by the main members of the building (especially the columns). The steel plate shear wall, one of the most widely used types of energy-dissipation element, has evolved considerably, and regular perforation of its inner plate is now common practice. In the present study, a one-story steel plate shear wall (with dimensions of 447 × 6/246 cm) was modeled using the finite element method in Abaqus software in three different configurations, to which a cyclic load was applied. The steel shear wall has a horizontal element (beam) with a reduced beam section (RBS). The openings in the interior plate of the models were created with progressively increasing area, which makes the effect of increasing the opening area on the seismic performance of the steel shear wall clear. It was found that, as the opening area in the steel shear wall (with reduced-section beam) increased, the total displacement and plastic strain indicators increased, the structural capacity and total energy indicators decreased, and the von Mises stress index did not change much.

Keywords: steel plate shear wall with opening, cyclic loading, reduced cross-section beam, finite element method, Abaqus software

Procedia PDF Downloads 99
136 Numerical Investigation of Material Behavior During Non-Equal Channel Multi Angular Extrusion

Authors: Mohamed S. El-Asfoury, Ahmed Abdel-Moneim, Mohamed N. A. Nasr

Abstract:

The current study uses finite element modeling to investigate and analyze a modified form of conventional equal channel multi-angular pressing (ECMAP), using non-equal channels, and its effect on workpiece plastic deformation. The modified process, non-equal channel multi-angular extrusion (NECMAE), is modeled using a two-dimensional plane-strain finite element model built in the commercial software ABAQUS. The workpiece material is pure aluminum. The model was first validated by comparing its results to analytical solutions for single-pass equal channel angular extrusion (ECAP), as well as to previously published data. The model was then used to examine the effects of different percentages of area reduction (in the second stage) on material plastic deformation, corner gap, and required load. Three levels of area reduction were modeled (10%, 30%, and 50%) and compared to single-pass and double-pass ECAP. Cases with a higher area reduction were found to have smaller corner gaps, higher and more uniform plastic deformation, and higher required loads. These results are mainly attributed to the back-pressure effect exerted by the second stage, as well as the strain hardening experienced during the first stage.

Keywords: non-equal channel angular extrusion, multi-pass, severe plastic deformation, back pressure, Finite Element Modelling (FEM)

Procedia PDF Downloads 402
135 Multiscale Hub: An Open-Source Framework for Practical Atomistic-To-Continuum Coupling

Authors: Masoud Safdari, Jacob Fish

Abstract:

Despite the vast amount of existing theoretical knowledge, the implementation of a universal multiscale modeling, analysis, and simulation software framework remains challenging. Existing multiscale software and solutions are often domain-specific and closed-source, and demand a high level of experience and skill in both multiscale analysis and programming. Furthermore, the tools currently available for atomistic-to-continuum (AtC) multiscaling are developed under assumptions such as users having access to high-performance computing facilities. These and many other challenges have reduced the adoption of multiscale methods in academia and especially in industry. In the current work, we introduce Multiscale Hub (MsHub), an effort towards making AtC more accessible through cloud services. As a joint effort between academia and industry, MsHub provides a universal web-enabled framework for practical multiscaling. Developed on top of the widely used scientific programming language Python, the package currently provides an open-source, comprehensive, easy-to-use framework for AtC coupling. MsHub offers an easy-to-use interface to prominent molecular dynamics and continuum multiphysics packages such as LAMMPS and MFEM (a free, lightweight, scalable C++ library for finite element methods). In this work, we first report on the design philosophy of MsHub and on the challenges and issues faced in its implementation. MsHub takes advantage of a comprehensive set of AtC tools and algorithms that can be used for a variety of governing physics. We then briefly report the key AtC algorithms implemented in MsHub. Finally, we conclude with a few examples illustrating the capabilities of the package and its future directions.

Keywords: atomistic, continuum, coupling, multiscale

Procedia PDF Downloads 155
134 Development of Medical Intelligent Process Model Using Ontology Based Technique

Authors: Emmanuel Chibuogu Asogwa, Tochukwu Sunday Belonwu

Abstract:

An urgent demand for creative solutions has been created by the rapid expansion of medical knowledge, the complexity of patient care, and the requirement for more precise decision-making. The creation of a Medical Intelligent Process Model (MIPM) using ontology-based techniques appears to be a promising way to overcome this obstacle and unleash the full potential of healthcare systems. The work is motivated by the lack of quick access to relevant medical information and of advanced tools for treatment planning and clinical decision-making, both of which ontology-based techniques can provide. The aim of this work is to develop a structured, knowledge-driven framework that leverages an ontology, a formal representation of domain knowledge, to enhance various aspects of healthcare. The Object-Oriented Analysis and Design Methodology (OOADM) was adopted in the design of the system, as we desired to build a usable and evolvable application. The medical dataset used to test the model was obtained from Kaggle. The ontology-based technique was used together with a confusion matrix, MySQL, Python, Hypertext Markup Language (HTML), Hypertext Preprocessor (PHP), Cascading Style Sheets (CSS), JavaScript, Dreamweaver, and Fireworks. According to test results evaluated with the confusion matrix, both the accuracy and the overall effectiveness of the medical intelligent process improved by 20% compared to the previous system. The model is therefore recommended for use by healthcare professionals.

Keywords: ontology-based, model, database, OOADM, healthcare

Procedia PDF Downloads 45
133 Behavior of Composite Reinforced Concrete Circular Columns with Glass Fiber Reinforced Polymer I-Section

Authors: Hiba S. Ahmed, Abbas A. Allawi, Riyadh A. Hindi

Abstract:

Pultruded fiber-reinforced polymer (FRP) materials come in a broad range of shapes, such as bars, I-sections, C-sections, and other structural sections. These FRP materials are starting to compete with steel as structural materials because of their high resistance, low self-weight, and low maintenance costs, especially in corrosive environments. This study evaluates the effectiveness of glass fiber reinforced polymer (GFRP) in hybrid columns built by combining GFRP profiles with concrete columns, motivated by their low cost and high structural efficiency. To achieve the aims of this study, nine circular columns with a diameter of 150 mm and a height of 1000 mm were cast using normal concrete with a compressive strength of 35 MPa. The research involved three types of reinforcement: hybrid circular columns of type (IG) with a GFRP I-section and a 1% steel bar reinforcement ratio; hybrid circular columns of type (IS) with a steel I-section and a 1% steel bar reinforcement ratio (where the cross-sectional area of the I-section was the same for GFRP and steel); and a reference column (R) without an I-section. The columns were tested to investigate the ultimate capacity, axial and lateral deformation, strain in the longitudinal and transverse reinforcement, and failure mode under different loading conditions (concentric, and eccentric with eccentricities of 25 mm and 50 mm). In the second part, a finite element model will be built in ABAQUS software to validate the experimental results.

Keywords: composite, columns, reinforced concrete, GFRP, axial load

Procedia PDF Downloads 26
132 Perforation Analysis of the Aluminum Alloy Sheets Subjected to High Rate of Loading and Heated Using Thermal Chamber: Experimental and Numerical Approach

Authors: A. Bendarma, T. Jankowiak, A. Rusinek, T. Lodygowski, M. Klósak, S. Bouslikhane

Abstract:

An analysis of the mechanical characteristics and dynamic behavior of aluminum alloy sheet in perforation tests, based on experiments coupled with numerical simulation, is presented. Impact problems (penetration and perforation) of metallic plates have been of interest for a long time, and experimental, analytical, and numerical studies have been carried out to analyze the perforation process in detail. Based on these approaches, the ballistic properties of the material have been studied. A laser sensor measures the initial and residual velocities during the experiments, yielding the ballistic curve and the ballistic limit. The energy balance is also reported, together with the energy absorbed by the aluminum. A high-speed camera helps to estimate the failure time and to calculate the impact force. A wide range of initial impact velocities, from 40 up to 180 m/s, was covered during the tests. The mass of the conical-nose projectile is 28 g, its diameter is 12 mm, and the thickness of the aluminum sheet is 1.0 mm. The ABAQUS/Explicit finite element code has been used to simulate the perforation process. The numerically obtained ballistic curve was verified experimentally, and the failure patterns are presented using optimal mesh densities that ensure stable results. Good agreement between the numerical and experimental results is observed.
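The energy balance mentioned above follows directly from the projectile's kinetic-energy loss. A minimal sketch (not the authors' code; the velocity pairs below are illustrative, not measured values):

```python
# Energy absorbed by the plate = kinetic-energy loss of the projectile
# between impact (initial velocity v_i) and exit (residual velocity v_r).

def absorbed_energy(mass_kg, v_initial, v_residual):
    """Return the energy (J) absorbed by the target plate."""
    return 0.5 * mass_kg * (v_initial**2 - v_residual**2)

# Projectile mass from the study: 28 g = 0.028 kg.
# Velocity pairs below are illustrative, not measured values.
for vi, vr in [(60.0, 0.0), (120.0, 85.0), (180.0, 155.0)]:
    print(f"v_i={vi:6.1f} m/s  v_r={vr:6.1f} m/s  "
          f"E_abs={absorbed_energy(0.028, vi, vr):6.1f} J")
```

A plot of v_r against v_i from such pairs is the ballistic curve; the ballistic limit is the highest v_i for which v_r stays zero.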

Keywords: aluminum alloy, ballistic behavior, failure criterion, numerical simulation

Procedia PDF Downloads 290
131 Early Gastric Cancer Prediction from Diet and Epidemiological Data Using Machine Learning in Mizoram Population

Authors: Brindha Senthil Kumar, Payel Chakraborty, Senthil Kumar Nachimuthu, Arindam Maitra, Prem Nath

Abstract:

Gastric cancer is predominantly caused by demographic and dietary factors compared to other cancer types. The aim of the study is to predict early gastric cancer (EGC) from diet and lifestyle factors using supervised machine learning algorithms. For this study, 160 healthy individuals and 80 cases who had been followed for 3 years (2016-2019) at Civil Hospital, Aizawl, Mizoram, were selected. A dataset containing 11 features that are core risk factors for gastric cancer was extracted. Supervised machine learning algorithms (logistic regression, naive Bayes, support vector machine (SVM), multilayer perceptron, and random forest) were used to analyze the dataset in a Python Jupyter Notebook. The classification results were evaluated using the following metrics: minimum false positives, Brier score, accuracy, precision, recall, F1 score, and the receiver operating characteristic (ROC) curve. With respect to accuracy (in percent) and Brier score, the results were: naive Bayes, 88 and 0.11; random forest, 83 and 0.16; SVM, 77 and 0.22; logistic regression, 75 and 0.25; and multilayer perceptron, 72 and 0.27. The naive Bayes algorithm outperforms the others, with very low false positive rates, a low Brier score, and good accuracy. Its classification results in predicting EGC are very satisfactory using only diet and lifestyle factors, which will help physicians educate patients and the public, so that gastric cancer mortality can be reduced or avoided through this knowledge-mining work.
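The evaluation metrics named above (accuracy, precision, recall, F1, Brier score) all derive from the confusion matrix and the predicted probabilities. A small illustrative sketch, with made-up labels and probabilities rather than the study's data:

```python
# Compute confusion-matrix-based metrics and the Brier score
# for a binary classifier (illustrative data, not the study's).

def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def metrics(y_true, y_pred, y_prob):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    # Brier score: mean squared error of the predicted probabilities.
    brier = sum((p - t) ** 2 for t, p in zip(y_true, y_prob)) / len(y_true)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "brier": brier}

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_prob = [0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3]
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]
print(metrics(y_true, y_pred, y_prob))
```

A lower Brier score means the predicted probabilities are better calibrated, which is why the abstract reports it alongside accuracy.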

Keywords: early gastric cancer, machine learning, diet, lifestyle characteristics

Procedia PDF Downloads 125
130 Multi-Point Dieless Forming Product Defect Reduction Using Reliability-Based Robust Process Optimization

Authors: Misganaw Abebe Baye, Ji-Woo Park, Beom-Soo Kang

Abstract:

The product quality of multi-point dieless forming (MDF) is known to depend on the process parameters. Moreover, variations in friction and material properties may have a substantially adverse influence on final product quality. This study proposes compensating for MDF product defects by minimizing the sensitivity to noise-parameter variations. This is attained with a reliability-based robust optimization (RRO) technique that obtains the optimal settings of the controllable process parameters. Initially, two MDF finite element (FE) simulations of an AA3003-H14 saddle shape showed a substantial amount of dimpling, wrinkling, and shape error. FE analyses were subsequently performed in the commercial software ABAQUS to obtain the correlation between the control-parameter settings and the noise variation with regard to the product defects. The best prediction models were chosen from a family of metamodels to replace the computationally expensive FE simulation. A genetic algorithm (GA) was applied to determine the optimal settings of the control parameters, and Monte Carlo analysis (MCA) was executed to determine how the noise-parameter variation affects final product quality. Finally, the RRO FE simulation and the experimental results show that amending the control parameters in the final forming process leads to a considerably better-quality product.
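The Monte Carlo step in this kind of workflow propagates noise-parameter scatter through the cheap metamodel instead of the FE model. A hedged sketch: the quadratic "surrogate" and the noise ranges (friction coefficient, yield-stress scatter) below are illustrative assumptions, not the study's fitted metamodel.

```python
import random

def surrogate_shape_error(punch_stroke, friction, yield_scatter):
    """Hypothetical metamodel: shape error as a function of one control
    parameter (punch stroke) and two noise parameters."""
    return (0.4 * (punch_stroke - 1.2) ** 2
            + 2.0 * (friction - 0.10) ** 2
            + 0.5 * abs(yield_scatter))

def monte_carlo(punch_stroke, n=5000, seed=1):
    """Sample the noise parameters and return mean/std of the response."""
    rng = random.Random(seed)
    samples = [surrogate_shape_error(punch_stroke,
                                     rng.uniform(0.05, 0.15),  # friction
                                     rng.gauss(0.0, 0.05))     # yield scatter
               for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var ** 0.5

mean, std = monte_carlo(punch_stroke=1.2)
print(f"shape error: mean={mean:.4f}, std={std:.4f}")
```

In the full RRO loop, a GA would search over the control parameters (here, `punch_stroke`) for the setting whose Monte Carlo mean and spread are both small.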

Keywords: dimpling, multi-point dieless forming, reliability-based robust optimization, shape error, variation, wrinkling

Procedia PDF Downloads 225
129 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record

Authors: Raghavi C. Janaswamy

Abstract:

In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the electronic health record (EHR). In most systems, the data, in the form of plain text and images, are stored and processed in a relational format. However, the intrinsic structural restrictions and complex joins of relational databases limit their widespread utility. In this regard, the design and development of realistic mappings and deep connections as real-time objects offer unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. Patient conditions are predicted as a node-classification task on graph-based open-source EHR data from the Synthea database, stored in TigerGraph. The Synthea dataset is leveraged because it is voluminous and closely represents real-world data. The graph model is built from the heterogeneous EHR data using Python modules: pyTigerGraph to get nodes and edges from the TigerGraph database, PyTorch to tensorize the nodes and edges, and PyTorch Geometric (PyG) to train the graph neural network (GNN), adopting self-supervised learning techniques with autoencoders to generate node embeddings and eventually perform node classification using those embeddings. The model predicts patient conditions ranging from common to rare. The outcome opens up opportunities for data querying toward better predictions and accuracy.
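The core idea behind GNN node classification is message passing over the graph's adjacency structure. A minimal NumPy sketch of one GCN-style layer (a stand-in for the pyTigerGraph/PyTorch Geometric pipeline itself; the toy graph and weights are invented):

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One GCN-style layer: average neighbour features via the
    symmetrically normalized adjacency (with self-loops), then
    apply a linear map and ReLU."""
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt     # symmetric normalization
    return np.maximum(a_norm @ features @ weight, 0.0)

# Toy graph: 4 "patient" nodes in a chain, 3 input features, 2 output channels.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
w = rng.normal(size=(3, 2))
print(gcn_layer(adj, x, w).shape)
```

Stacking such layers and training the weights against condition labels is, in essence, what PyG automates at scale.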

Keywords: electronic health record, graph neural network, heterogeneous data, prediction

Procedia PDF Downloads 63
128 Investigating the Minimum RVE Size to Simulate Poly (Propylene carbonate) Composites Reinforced with Cellulose Nanocrystals as a Bio-Nanocomposite

Authors: Hamed Nazeri, Pierre Mertiny, Yongsheng Ma, Kajsa Duke

Abstract:

The background of the present study is the use of environmentally friendly biopolymer and biocomposite materials. Among recently introduced biopolymers, poly(propylene carbonate) (PPC) has been gaining attention. This study focuses on the size of the representative volume element (RVE) needed to simulate PPC composites reinforced by cellulose nanocrystals (CNCs) as a bio-nanocomposite. Before manufacturing such nanocomposites, numerical modeling should be implemented to explore and predict the mechanical properties, which may be accomplished by creating and studying a suitable RVE. Modeling of composites with rod-shaped fillers has been reported in other studies under the assumption that the fillers are unidirectionally aligned; modeling non-aligned filler dispersions is considerably more difficult. This study investigates the minimum RVE size that enables subsequent FEA modeling. The matrix and nano-fillers were modeled in the finite element software ABAQUS, assuming randomly dispersed fillers with a filler mass fraction of 1.5%. To simulate the filler dispersion, a Monte Carlo technique was employed, and the numerical simulation was used to find the composite elastic moduli. Starting with a single filler particle, the number of particles was increased to assess the minimum number of filler particles that satisfies the RVE requirements, providing the composite elastic modulus in a reliable fashion.
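The Monte Carlo dispersion step can be sketched as random placement with overlap rejection. The sketch below uses circular fillers in 2-D as a stand-in for the rod-shaped CNCs, and the RVE size, filler radius, and filler count are illustrative, not the study's values:

```python
import random

def disperse_fillers(n_fillers, rve_size=100.0, radius=3.0, seed=42,
                     max_tries=10000):
    """Place n_fillers non-overlapping circles at random inside a
    square RVE, rejecting candidates that overlap placed fillers."""
    rng = random.Random(seed)
    centres = []
    tries = 0
    while len(centres) < n_fillers and tries < max_tries:
        tries += 1
        x = rng.uniform(radius, rve_size - radius)
        y = rng.uniform(radius, rve_size - radius)
        # accept only if the candidate clears every placed filler
        if all((x - cx) ** 2 + (y - cy) ** 2 >= (2 * radius) ** 2
               for cx, cy in centres):
            centres.append((x, y))
    return centres

centres = disperse_fillers(20)
print(f"placed {len(centres)} non-overlapping fillers")
```

In an RVE study, each accepted geometry would be meshed and solved (e.g. in ABAQUS), and the scatter of the resulting elastic moduli across realizations indicates whether the RVE is large enough.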

Keywords: biocomposite, Monte Carlo method, nanocomposite, representative volume element

Procedia PDF Downloads 421
127 Finite Element Analysis of Cold Formed Steel Screwed Connections

Authors: Jikhil Joseph, S. R. Satish Kumar

Abstract:

Steel structures are commonly used for rapid erection and multistory construction due to their inherent advantages. However, the high accuracy required in detailing and the heavier sections make them difficult to erect in place and to transport. Cold-formed steel sections, which are specially made by reducing carbon and other alloying elements, are nowadays used to make thin-walled structures. Various types of connections have been reported and practiced for thin-walled members, such as bolting, riveting, welding, and other mechanical connections; self-drilling screw connections are commonly used for cold-formed purlin-sheeting connections. In this paper, an attempt is made to develop a moment-resisting frame that can be rapidly and remotely constructed with thin-walled sections and self-drilling screws. Semi-rigid moment connections are developed with rectangular thin-walled tubes and screws. The finite element analysis program ABAQUS is used for modelling the screwed connections. The various modelling procedures for simulating the connection behavior, such as the tie-constraint model, the oriented spring model, and solid interaction modelling, are compared and critically reviewed. From the experimental validations, solid interaction modelling was identified as the most accurate and is used for predicting the connection behavior. From the finite element analysis, hysteresis curves and modes of failure were identified. Parametric studies were performed on the connection model to optimize the connection configuration and obtain the desired connection characteristics.

Keywords: buckling, cold formed steel, finite element analysis, screwed connections

Procedia PDF Downloads 155
126 Developing an Automated Protocol for the Wristband Extraction Process Using Opentrons

Authors: Tei Kim, Brooklynn McNeil, Kathryn Dunn, Douglas I. Walker

Abstract:

To better characterize the relationship between complex chemical exposures and disease, our laboratory uses an approach that combines low-cost polydimethylsiloxane (silicone) wristband samplers, which absorb many of the chemicals we are exposed to, with untargeted high-resolution mass spectrometry (HRMS) to characterize thousands of chemicals at a time. In studies with human populations, these wristbands can provide an important measure of our environment; however, there is a need to use this approach in large cohorts to study exposures associated with disease. To facilitate the use of silicone samplers in large-scale population studies, the goal of this project was to establish automated sample preparation methods that improve the throughput, robustness, and scalability of analytical methods for silicone wristbands. Using the Opentrons OT-2 automated liquid handling platform, which provides a low-cost and open-source framework for automated pipetting, we created two separate workflows that translate the manual wristband preparation method into a fully automated protocol requiring only minor intervention by the operator. These protocols include a sequence generation step, which defines the location of all plates and labware according to user-specified settings, and a transfer protocol that includes all necessary instrument parameters and instructions for automated solvent extraction of the wristband samplers. The protocols were written in Python and uploaded to GitHub for use by others in the research community. The results show that it is possible to establish automated, open-source methods for the preparation of silicone wristband samplers to support profiling of many environmental exposures. Ongoing studies include deployment in longitudinal cohorts to investigate the relationship between personal chemical exposure and disease.
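A sequence generation step like the one described above typically maps sample IDs to labware positions before the transfer protocol runs. A hedged, robot-independent sketch (well ordering, sample names, and the 500 µL volume are illustrative assumptions, not the published protocol):

```python
import string

def well_sequence(n_samples, rows=string.ascii_uppercase[:8], columns=12):
    """Yield 96-well-plate positions (A1, B1, ..., H1, A2, ...) column-first."""
    wells = [f"{row}{col}" for col in range(1, columns + 1) for row in rows]
    if n_samples > len(wells):
        raise ValueError("more samples than wells on one plate")
    return wells[:n_samples]

def transfer_list(sample_ids, volume_ul=500):
    """Pair each wristband sample with a destination well and a
    (hypothetical) extraction-solvent volume."""
    return [{"sample": sid, "dest": well, "volume_ul": volume_ul}
            for sid, well in zip(sample_ids, well_sequence(len(sample_ids)))]

plan = transfer_list([f"WB-{i:03d}" for i in range(1, 11)])
print(plan[0])
```

In the actual workflow, a list like `plan` would be consumed by the Opentrons transfer protocol, which adds the instrument-specific pipetting parameters.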

Keywords: bioinformatics, automation, opentrons, research

Procedia PDF Downloads 84
125 Comparison of Deep Learning and Machine Learning Algorithms to Diagnose and Predict Breast Cancer

Authors: F. Ghazalnaz Sharifonnasabi, Iman Makhdoom

Abstract:

Breast cancer is a serious health concern that affects many people around the world. According to a study published in the journal The Breast, the global burden of breast cancer is expected to increase significantly over the next few decades. The number of deaths from breast cancer has been increasing over the years, although the age-standardized mortality rate has decreased in some countries. It is important to be aware of the risk factors for breast cancer and to get regular check-ups to catch it early if it does occur. Machine learning techniques have been used to aid in the early detection and diagnosis of breast cancer; these techniques, which have been shown to be effective in predicting and diagnosing the disease, have become a research hotspot. In this study, we consider two deep learning approaches, the multilayer perceptron (MLP) and the convolutional neural network (CNN), as well as five machine learning algorithms: decision tree (C4.5), naive Bayes (NB), support vector machine (SVM), k-nearest neighbors (KNN), and XGBoost (eXtreme Gradient Boosting), on the Breast Cancer Wisconsin Diagnostic dataset. We evaluated and compared the classifiers by selecting appropriate metrics for classifier performance and an appropriate tool to quantify it. The main purpose of the study is to predict and diagnose breast cancer using the mentioned algorithms and to discover the most effective of them with respect to the confusion matrix, accuracy, and precision. The CNN outperformed all other classifiers and achieved the highest accuracy (0.982456). The work is implemented in the Anaconda environment using the Python programming language.
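A comparison of this kind can be sketched with scikit-learn, whose built-in `load_breast_cancer` is the same Breast Cancer Wisconsin Diagnostic dataset. This is an illustrative sketch, not the authors' code: the train/test split and default hyperparameters are assumptions, and XGBoost and the deep models are omitted to keep it self-contained.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

classifiers = {
    "NB": GaussianNB(),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(random_state=0),
}
results = {}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    results[name] = accuracy_score(y_test, y_pred)
    # tn, fp, fn, tp — the confusion-matrix entries the study compares on
    print(name, round(results[name], 3), confusion_matrix(y_test, y_pred).ravel())
```

Swapping in feature scaling, cross-validation, or additional classifiers changes the ranking, which is why the study reports several metrics rather than accuracy alone.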

Keywords: breast cancer, multi-layer perceptron, Naïve Bayesian, SVM, decision tree, convolutional neural network, XGBoost, KNN

Procedia PDF Downloads 46
124 Finite Element Evaluation of the Effect of Regular Cavities on the Inner Plate of the Steel Plate Shear Wall

Authors: Seyyed Abbas Mojtabavi, Mojtaba Fatzaneh Moghadam, Masoud Mahdavi

Abstract:

The steel plate shear wall is one of the most common and widely used energy-dissipation systems in structures and, with the increase in metal-structure construction, is used today as a damping system. In the present study, a steel plate shear wall with dimensions of 5 × 3 m and a thickness of 0.024 m was modeled over 2 floors of total height from the base level using the finite element method in Abaqus software. The loading is applied as a concentrated load at the top of the shear wall on the second floor, using a buckle-type analysis step. The mesh is applied in the length and width directions of the shear wall with sizes of 0.02 and 0.033, respectively, using sweep-type meshing. It was found that, in the steel plate shear wall with cavities (CSPSW), compared to the SPSW model, S (Mises), Smax (In-Plane Principal), Smax (In-Plane Principal-ABS), and Smax (Min Principal) increased by 53%, 70%, 68%, and 43%, respectively. The presence of the cavities increases the resulting stresses, but it also moves the critical stresses and deformations away from the inner surface of the shear wall to the intended sections (the regular cavities). This can be suggested as a strategy for seismic design and structural improvement: transferring possible earthquake and storm damage to a desired, pre-designed location in the structure.

Keywords: steel plate shear wall, Abaqus software, finite element method, boundary element, seismic structural improvement, von Mises stress

Procedia PDF Downloads 71
123 Assessing the Effect of the Position of the Cavities on the Inner Plate of the Steel Shear Wall under Time History Dynamic Analysis

Authors: Masoud Mahdavi, Mojtaba Farzaneh Moghadam

Abstract:

The seismic forces caused by waves created deep in the earth during an earthquake strike the structure and cause the building to vibrate. Large seismic forces cause extensive damage in low-strength sections of the structure. The use of modern steel shear walls in steel structures increases the strength of the building and its main members (the columns) by reducing and dissipating seismic forces during earthquakes. In the present study, an attempt was made to evaluate a type of steel shear wall that has regular holes in the inner plate by building a finite element model in Abaqus software. The steel plate shear wall, measuring 6000 × 3000 mm (one floor) with a thickness of 3 mm, was modeled with four different opening configurations of the same cross-sectional area. The shear wall was subjected to 5-second time-history dynamic loading using three accelerograms: El Centro, Imperial Valley, and Kobe. The results showed that increasing the distance between the geometric center of the hole and the geometric center of the inner plate (increasing the RCS index) causes the total maximum acceleration to be transferred from the perimeter of the hole to the horizontal and vertical beams. The results also show that there is no direct relationship between the RCS index and the total acceleration in the steel shear wall, and that the RCS index is independent of the peak ground acceleration of the earthquake.

Keywords: hollow steel plate shear wall, time history analysis, finite element method, Abaqus software

Procedia PDF Downloads 85
122 Micro-Meso 3D FE Damage Modelling of Woven Carbon Fibre Reinforced Plastic Composite under Quasi-Static Bending

Authors: Aamir Mubashar, Ibrahim Fiaz

Abstract:

This research presents a three-dimensional finite element modelling strategy to simulate damage at the micro-meso level in a quasi-static three-point bending analysis of a 2/2 twill woven carbon fibre reinforced plastic (CFRP) composite, using the cohesive zone modelling technique. A meso-scale finite element model comprising a number of plies was developed in the commercial finite element code Abaqus/Explicit. The interfaces between the plies were explicitly modelled using cohesive zone elements to allow for debonding through crack initiation and propagation. The load-deflection response of the CFRP within the quasi-static range was obtained and compared with data in the literature, providing validation of the model at the global scale. The outputs of the global model were then used to develop a simulation model capturing the micro-meso scale material features. The sub-model consisted of a refined-mesh representative volume element (RVE) modelled in the TexGen software, which was then embedded with cohesive elements in the finite element environment. The developed strategy successfully predicted the overall load-deflection response and the damage in the global model and sub-model at the flexure limit of the specimen. A detailed analysis of the effects of the micro-scale features was carried out.

Keywords: woven composites, multi-scale modelling, cohesive zone, finite element model

Procedia PDF Downloads 114
121 Domain-Specific Deep Neural Network Model for Classification of Abnormalities on Chest Radiographs

Authors: Nkechinyere Joy Olawuyi, Babajide Samuel Afolabi, Bola Ibitoye

Abstract:

This study collected and preprocessed a dataset of chest radiographs, formulated a deep neural network model for detecting abnormalities, evaluated the performance of the model, and implemented a prototype, with the view of developing a deep neural network model that automatically classifies abnormalities in chest radiographs. A large set of chest X-ray images was collected from the CheXpert dataset, an online repository of annotated chest radiographs compiled by the Machine Learning Research Group at Stanford University. The radiographs were preprocessed, using standardization and normalization, into a format that can be fed into a deep neural network. The classification problem was formulated as a multi-label binary classification model, which used a convolutional neural network architecture to decide whether each abnormality was present or not in a chest radiograph. The model was evaluated using specificity, sensitivity, and Area Under Curve (AUC) score as the parameters. A prototype was implemented using the Keras open-source deep learning framework in the Python programming language. Based on the ROC AUC, the model was able to classify atelectasis, support devices, pleural effusion, pneumonia, a normal CXR (no finding), pneumothorax, and consolidation; however, lung opacity and cardiomegaly had probabilities of less than 0.5 and were thus classified as absent. Precision, recall, and F1 score values were all 0.78, which implies that the numbers of false positives and false negatives are the same, revealing some measure of label imbalance in the dataset. The study concluded that the developed model is sufficient to classify abnormalities in chest radiographs as present or absent.

Keywords: transfer learning, convolutional neural network, radiograph, classification, multi-label

Procedia PDF Downloads 87
120 Evaluation of Forming Properties on AA 5052 Aluminium Alloy by Incremental Forming

Authors: A. Anbu Raj, V. Mugendiren

Abstract:

Sheet metal forming is a vital manufacturing process used in the automobile, aerospace, and agricultural industries, among others. Incremental forming is a promising process providing a short and inexpensive way of forming complex three-dimensional parts without using a die. The aim of this research is to study the forming behaviour of AA 5052 aluminium alloy using incremental forming, and also to study the Forming Limit Diagram (FLD) of cone-shaped AA 5052 aluminium alloy at room temperature and various annealing temperatures. Initially, the surface roughness and wall thickness obtained through incremental forming of AA 5052 aluminium alloy sheet at room temperature are optimized by controlling the effects of the forming parameters. The central composite design (CCD) was utilized to plan the experiments. The step depth, feed rate, and spindle speed were considered as input parameters in this study; the surface roughness and wall thickness were used as output responses. The process performances, namely average thickness and surface roughness, were evaluated. The optimized results are taken for minimum surface roughness and maximum wall thickness, with the optima determined based on response surface methodology and analysis of variance. The FLD is constructed for AA 5052 aluminium alloy at room temperature and various annealing temperatures using the optimized process parameters from the response surface methodology. The cone has higher formability than the square pyramid, as well as a higher wall thickness distribution. Finally, the FLDs of the cone and square pyramid shapes at room temperature and various annealing temperatures are compared experimentally and in simulation with Abaqus software.
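
The CCD-plus-response-surface step can be sketched as follows: a coded central composite design for the three factors (step depth, feed rate, spindle speed) and a least-squares fit of a full second-order model. The design levels and the synthetic response coefficients are illustrative assumptions, not the study's data:

```python
import numpy as np
from itertools import product

# Coded central composite design (CCD) for three factors (step depth,
# feed rate, spindle speed): 8 factorial corners, 6 axial points, and
# 1 center point. Levels and coefficients are illustrative only.
corners = np.array(list(product([-1.0, 1.0], repeat=3)))
axial = np.array([[ 1.5, 0, 0], [-1.5, 0, 0], [0,  1.5, 0],
                  [0, -1.5, 0], [0, 0,  1.5], [0, 0, -1.5]])
center = np.zeros((1, 3))
X = np.vstack([corners, axial, center])

def design_matrix(X):
    """Full second-order model: intercept, linear, pure quadratic,
    and two-factor interaction terms."""
    d, f, s = X.T
    return np.column_stack([np.ones(len(X)), d, f, s,
                            d * d, f * f, s * s, d * f, d * s, f * s])

# Synthetic surface-roughness response generated from a known quadratic
# law, so the fit can be checked against the ground truth:
true_coef = np.array([1.3, 0.5, 0.2, -0.1, 0.05, 0.0, 0.0, 0.3, 0.0, 0.0])
y = design_matrix(X) @ true_coef

# Ordinary least squares recovers the second-order response surface:
coef, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
```

A CCD is used precisely because its factorial, axial, and center points together make every term of the quadratic model estimable; the fitted surface is then optimized (e.g. for minimum roughness) as described above.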

Keywords: incremental forming, response surface methodology, optimization, wall thickness, surface roughness

Procedia PDF Downloads 313
119 FlameCens: Visualization of Expressive Deviations in Music Performance

Authors: Y. Trantafyllou, C. Alexandraki

Abstract:

Music interpretation refers to the way musicians shape their performance by deliberately deviating from composers’ intentions, which are commonly communicated via some form of music transcription, such as a music score. For transcribed and non-improvised music, expression is manifested by introducing subtle deviations in tempo, dynamics and articulation during the evolution of the performance. This paper presents an application, named FlameCens, which, given two recordings of the same piece of music, presumably performed by different musicians, allows visualising deviations in tempo and dynamics during playback. The application may also compare a certain performance to the music score of that piece (i.e. a MIDI file), which may be thought of as an expression-neutral representation of that piece, hence depicting the expressive cues employed by certain performers. FlameCens uses the Dynamic Time Warping algorithm to compare two audio sequences, based on CENS (Chroma Energy distribution Normalized Statistics) audio features. Expressive deviations are illustrated by a moving flame, which is generated by an animation of particles. The length of the flame is mapped to deviations in dynamics, while the slope of the flame is mapped to tempo deviations, so that faster tempo tilts the slope to the right and slower tempo tilts it to the left; a constant slope signifies no tempo deviation. The detected deviations in tempo and dynamics can additionally be recorded in a text file, which allows for offline investigation. Moreover, in the case of monophonic music, the colour of the particles is used to convey the pitch of the notes during performance. FlameCens has been implemented in Python and is openly available via GitHub. The application has been experimentally validated for different music genres including classical, contemporary, jazz and popular music. These experiments revealed that FlameCens can be a valuable tool for music specialists (i.e. musicians or musicologists) to investigate the expressive performance strategies employed by different musicians, as well as for music audiences to enhance their listening experience.
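
The Dynamic Time Warping comparison described above can be sketched minimally in Python. Real CENS features are 12-dimensional chroma vectors per frame; this toy version reduces each frame to a single value (an assumption made for brevity) and shows how DTW absorbs a purely temporal (tempo) deviation:

```python
# Minimal dynamic-time-warping sketch of the alignment step described
# above. Real CENS features are 12-dimensional chroma vectors; here each
# frame is reduced to a single number for brevity.
def dtw_cost(a, b, dist=lambda x, y: abs(x - y)):
    """Classic DP recurrence: D[i][j] = d(a_i, b_j) + min of the three
    predecessor cells (insertion, deletion, match)."""
    INF = float("inf")
    D = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[len(a)][len(b)]

reference = [0.1, 0.5, 0.9, 0.5, 0.1]
performance = [0.1, 0.5, 0.5, 0.9, 0.5, 0.1]  # same phrase, played slower
cost = dtw_cost(reference, performance)
```

A time-warped copy of the same phrase aligns at zero cost; the local slope of the optimal warping path is what the flame animation maps to tempo deviation, while residual distances along the path reflect deviations in dynamics.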

Keywords: audio synchronization, computational music analysis, expressive music performance, information visualization

Procedia PDF Downloads 106
118 Application of Continuum Damage Concept to Simulation of the Interaction between Hydraulic Fractures and Natural Fractures

Authors: Anny Zambrano, German Gonzalez, Yair Quintero

Abstract:

The continuum damage concept is used to study the interaction between hydraulic fractures and natural fractures. The objective is to represent the path of, and the relation between, these two fracture types and to predict their complex behaviour without the need to pre-define their direction, as occurs in other finite element applications, providing results more consistent with the physical behaviour of the phenomenon. The approach uses finite element simulations in Abaqus software to model damage fracturing, i.e. the fracturing process by damage propagation in a rock. The phenomenon is modeled in two dimensions (2D), so that the fracture is represented by a line and the crack front by a point. The model considers nonlinear constitutive behaviour, finite strain, time-dependent deformation, complex boundary conditions, strain hardening and softening, and strain-based damage evolution in compression and tension. The complete governing equations are provided, and the method is described in detail to permit readers to replicate all results. The model is compared to models that are published and available. Comparisons focus on five interactions between natural fractures (NF) and hydraulic fractures: fracture arrested at an NF, crossing an NF with or without offset, branching at intersecting NFs, branching at the end of an NF, and NF dilation due to shear slippage. The most significant new finding is that it is not necessary to pre-define propagation paths, and that the stress condition can be evaluated as a dominant factor in the process. This is important because the model can represent more realistically the complex hydraulic fractures that are generated, and it can be a valuable tool to predict potential problems and different geometries of the fracture network in the process of fracturing due to fluid injection.

Keywords: continuum damage, hydraulic fractures, natural fractures, complex fracture network, stiffness

Procedia PDF Downloads 304
117 Sexual Consent: Exploring the Perceptions of Heterosexual, Gay, and Bisexual Men

Authors: Shulamit Sternin, Raymond M. McKie, Carter Winberg, Robb N. Travers, Terry P. Humphreys, Elke D. Reissing

Abstract:

Issues surrounding sexual consent negotiation have become a major topic of societal concern. The majority of current research focuses on the complexities of sexual consent negotiations and the multitude of nuanced issues that surround the consent obtainment of heterosexual adults in post-secondary educational institutions. To date, the only study that has addressed sexual consent negotiation behaviour in same-sex relationships focused on the extent to which individuals used a variety of different verbal and nonverbal sexual consent behaviours to initiate or respond to sexual activity. The results were consistent with trends found within heterosexual individuals, thus suggesting that the current understanding of sexual consent negotiation, which is grounded in heterosexual research, can serve as a strong foundation for further exploration of sexual consent negotiation within same-sex relationships. The current study qualitatively investigated the differences between heterosexual men and gay and bisexual men (GBM) in their understanding of sexual consent negotiation. Exploring how the perceptions of GBM differ from those of heterosexual males provides insight into some of the unique challenges faced by GBM. Data were collected from a sample of 252 heterosexual men and 314 GBM from Canada, the United States, and Western Europe. Participants responded to the question, 'Do you think sexual consent and sex negotiation is different for heterosexual men compared to gay men? If so, how?' by completing an online survey. Responses were analysed following Braun and Clarke’s (2006) six-phase thematic analysis guidelines. Inter-rater coding was validated using Cohen’s Kappa, calculated at ϰ = 0.84, indicating a very strong level of agreement between raters. The final thematic structure yielded four major themes: understanding of sexual interaction, unique challenges, scripted role, and universal consent. Respondents spoke to their understanding of sexual interaction, believing GBM sexual consent negotiation to be faster and more immediate. This was linked to perceptions of emotional attachment and the idea that sexual interaction and emotional involvement were distinct and separate processes in GBM sexual consent negotiation, which was not believed to be the case in heterosexual interactions. Unique challenges, such as different protection concerns, role declaration, and the sexualization of spaces, were understood to hold differing levels of consideration for heterosexual men and GBM. The perception of a clearly defined sexual script for GBM was suggested as a factor that may create ambiguity surrounding sexual consent negotiation, which in turn has significant implications for unwanted sexual experiences among GBM. By broadening the scope of the current understanding of sexual consent negotiation to focus on both heterosexual and GBM populations, the current study has revealed variations in the perception of sexual consent negotiation between these two populations. These differences may be understood within the context of sexual scripting theory and masculinity gender role theory. We suggest that sexual consent negotiation is a health risk factor for GBM that has not yet been adequately understood and addressed. Awareness of the perceptions that surround the sexual consent negotiation of both GBM and heterosexual men has implications for public knowledge, which in turn can better inform policy making, education, future research, and clinical treatment.

Keywords: sexual consent, negotiation, heterosexual men, GBM, sexual script

Procedia PDF Downloads 172
116 An Overview of Domain Models of Urban Quantitative Analysis

Authors: Mohan Li

Abstract:

Nowadays, intelligent research technology is more and more important than traditional research methods in urban research work, and this proportion will greatly increase in the next few decades. Frequently, such analysis cannot be carried out without some software engineering knowledge, and domain models of urban research become necessary when applying software engineering knowledge to urban work. In many urban planning practice projects, making rational models, feeding in reliable data, and providing enough computation all provide indispensable assistance in producing good urban planning; throughout the work process, domain models can optimize workflow design. At present, human beings have entered the era of big data. The amount of digital data generated by cities every day will increase at an exponential rate, and new data forms are constantly emerging. How to select a suitable data set from the massive amount of data, and how to manage and process it, have become abilities that more and more planners and urban researchers need to possess. This paper summarizes, and makes predictions about, the technologies and technological iterations that may affect urban research in the future, helping researchers discover urban problems and implement targeted sustainable urban strategies. These are summarized into seven major domain models: the urban and rural regional domain model, urban ecological domain model, urban industry domain model, development dynamic domain model, urban social and cultural domain model, urban traffic domain model, and urban space domain model. These seven domain models can be used to guide the construction of systematic urban research topics and help researchers organize a series of intelligent analytical tools, such as Python, R, GIS, etc. They make full use of quantitative spatial analysis, machine learning, and other technologies to achieve higher efficiency and accuracy in urban research, assisting people in making reasonable decisions.

Keywords: big data, domain model, urban planning, urban quantitative analysis, machine learning, workflow design

Procedia PDF Downloads 155
115 Coupled Hydro-Geomechanical Modeling of Oil Reservoir Considering Non-Newtonian Fluid through a Fracture

Authors: Juan Huang, Hugo Ninanya

Abstract:

Oil has been used as a source of energy and as a feedstock for materials, such as asphalt or rubber, for many years. This is the reason why new technologies have been implemented over time. However, research still needs to continue due to the new challenges engineers face every day, such as unconventional reservoirs. Various numerical methodologies have been applied in petroleum engineering as tools to optimize the production of reservoirs before drilling a wellbore, although not all of them are equally effective for studying fracture propagation. Analytical methods, like those based on linear elastic fracture mechanics, fail to give a reasonable prediction when simulating fracture propagation in ductile materials, whereas numerical methods based on the cohesive zone method (CZM) allow the elastoplastic behaviour of a reservoir to be represented through a constitutive model; therefore, predictions in terms of displacements and pressure will be more reliable. In this work, a coupled hydro-geomechanical model of horizontal wells in fractured rock was developed using ABAQUS; both the extended finite element method and cohesive elements were used to represent predefined fractures in a 2-D model. A power law representing the rheological behaviour of the fluid (shear-thinning, power index < 1) through the fractures, with a leak-off rate permeating to the matrix, was considered. Results are presented in terms of aperture and length of the fracture, pressure within the fracture, and fluid loss. A high infiltration rate into the matrix was observed as the power index decreases. A sensitivity analysis is finally performed to identify the most influential factor on fluid loss.
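
A minimal sketch of the power-law (Ostwald-de Waele) rheology mentioned above, with illustrative values assumed for the consistency index K and power index n:

```python
def apparent_viscosity(shear_rate, K=1.0, n=0.6):
    """Ostwald-de Waele power law: mu_app = K * gamma_dot**(n - 1).
    A power index n < 1 gives the shear-thinning behaviour assumed for
    the fracturing fluid; the K and n values here are illustrative."""
    return K * shear_rate ** (n - 1.0)

# Shear-thinning: apparent viscosity drops as the shear rate grows,
# consistent with the higher leak-off observed as n decreases.
mu_low_rate = apparent_viscosity(1.0)
mu_high_rate = apparent_viscosity(100.0)
```

Setting n = 1 recovers the Newtonian limit (constant viscosity equal to K), which is a useful sanity check when coupling this law to a leak-off model.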

Keywords: fracture, hydro-geomechanical model, non-Newtonian fluid, numerical analysis, sensitivity analysis

Procedia PDF Downloads 180
114 Improving Search Engine Performance by Removing Indexes to Malicious URLs

Authors: Durga Toshniwal, Lokesh Agrawal

Abstract:

As the web continues to play an increasing role in information exchange and in conducting daily activities, computer users have become the target of miscreants who infect hosts with malware or adware for financial gain. Unfortunately, even a single visit to a compromised web site enables the attacker to detect vulnerabilities in the user’s applications and force the download of a multitude of malware binaries. We provide an approach to effectively scan the web for so-called drive-by downloads. Drive-by downloads are the result of URLs that attempt to exploit their visitors and cause malware to be installed and run automatically. To scan the web for malicious pages, the first step is to use a crawler to collect URLs that live on the Internet, and then to apply fast prefiltering techniques to reduce the number of pages that need to be examined by precise, but slower, analysis tools (such as honeyclients or antivirus programs). Although this technique is effective, it requires a substantial amount of resources, mainly because the crawler encounters many legitimate pages on the web that need to be filtered out. In this paper, to characterize the nature of this rising threat, we present the implementation of a web crawler in Python and an approach to search the web more efficiently for pages that are likely to be malicious, filtering out benign pages and passing the remaining pages to an antivirus program for malware detection. Our approach starts from an initial seed of known malicious web pages. Using these seeds, our system generates search engine queries to identify other malicious pages that are similar to the ones in the initial seed. By doing so, it leverages the crawling infrastructure of search engines to retrieve URLs that are much more likely to be malicious than a random page on the web. The results show that this guided approach is able to identify malicious web pages more efficiently than random crawling-based approaches.
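
The prefilter-then-analyse pipeline can be sketched as follows; the lexical heuristics, URLs, and the stubbed slow analyser are illustrative assumptions, not the paper's actual filters:

```python
# Toy sketch of the guided-crawling pipeline described above: collect
# candidate URLs, apply a fast lexical prefilter, and pass only the
# survivors to a slow analyser (stubbed here). Heuristics and URLs are
# illustrative only, not the paper's actual filtering rules.
SUSPICIOUS_TOKENS = ("download.php?id=", ".exe", "free-codec", "crack")

def prefilter(url):
    """Cheap lexical check that discards obviously benign pages."""
    return any(token in url.lower() for token in SUSPICIOUS_TOKENS)

def scan(candidate_urls, slow_analyser):
    """Run the expensive analyser only on URLs passing the prefilter."""
    return [u for u in candidate_urls if prefilter(u) and slow_analyser(u)]

candidates = [
    "http://example.org/news/article1.html",
    "http://bad.example.com/free-codec/download.php?id=7",
    "http://example.net/setup.exe",
]
flagged = scan(candidates, slow_analyser=lambda url: True)  # stub analyser
```

The point of the design is economic: the prefilter is cheap enough to run on every crawled URL, so the expensive honeyclient or antivirus step sees only a small, suspicious subset.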

Keywords: web crawler, malwares, seeds, drive-by-downloads, security

Procedia PDF Downloads 211
113 A Convolutional Neural Network Based Vehicle Theft Detection, Location, and Reporting System

Authors: Michael Moeti, Khuliso Sigama, Thapelo Samuel Matlala

Abstract:

One of the principal challenges that the world is confronted with is insecurity. The crime rate is increasing exponentially, and protecting our physical assets, especially in the motoring industry, is becoming impossible through our own strength alone. The need to develop technological solutions that detect and report theft without any human interference is therefore inevitable. This is critical, especially for vehicle owners, to ensure theft detection and speedy identification towards recovery efforts in cases where a vehicle is missing or attempted theft is taking place. The vehicle theft detection system uses a Convolutional Neural Network (CNN) to recognize the driver's face, captured using an installed mobile phone device. The location identification function uses a Global Positioning System (GPS) to determine the real-time location of the vehicle; upon identification of the location, Global System for Mobile Communications (GSM) technology is used to report or notify the vehicle owner about the whereabouts of the vehicle. The installed mobile app was implemented in Python, as it is widely regarded as the best choice for machine learning, allowing easy access to machine learning algorithms through its extensive library ecosystem. The graphical user interface was developed in Java, as it is better suited for mobile development, and Google's online database (Firebase) was used for application storage. The system integration test was performed using a simple percentage analysis. Sixty (60) vehicle owners participated in this study as a sample, and questionnaires were used to establish the acceptability of the developed system. The results indicate the efficiency of the proposed system; consequently, the paper proposes that the system can effectively monitor the vehicle at any given place, even if it is driven outside its normal jurisdiction. Moreover, the system can be used as a database to detect, locate and report missing vehicles to different security agencies.

Keywords: CNN, location identification, tracking, GPS, GSM

Procedia PDF Downloads 130
112 Dislocation Density-Based Modeling of the Grain Refinement in Surface Mechanical Attrition Treatment

Authors: Reza Miresmaeili, Asghar Heydari Astaraee, Fereshteh Dolati

Abstract:

In the present study, a dislocation density-based model was developed to simulate grain refinement in surface mechanical attrition treatment (SMAT). The correlation between SMAT time and the development of plastic strain, on the one hand, and dislocation density evolution, on the other, was established to simulate the grain refinement in SMAT. A dislocation density-based constitutive material law was implemented using a VUHARD subroutine. A random sequence of shots is taken into consideration in the multiple-impacts model, implemented in the Python programming language using a random function. The simulation technique was to model each impact in a separate run and then transfer the results of each run as initial conditions for the next run (impact). The developed Finite Element (FE) model of multiple impacts describes the coverage evolution in SMAT. Simulations were run to coverage levels as high as 4500%. It is shown that the coverage implemented in the FE model is equal to the experimental coverage, and that the numerical SMAT coverage parameter conforms adequately to the well-known Avrami model. Comparison between numerical results and experimental measurements of residual stresses and the depth of deformation layers confirms the performance of the established FE model for surface engineering evaluations in SMA treatment. X-ray diffraction (XRD) studies of grain refinement, including the resultant grain size and dislocation density, were conducted to validate the established model; the full width at half-maximum of the XRD profiles can be used to measure the grain size. Numerical results and experimental measurements of grain refinement illustrate good agreement and show the capability of the established FE model to predict the gradient microstructure in SMA treatment.
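
The random shot sequence and its Avrami-type coverage law can be sketched outside Abaqus as follows; the target size, indent radius, and grid resolution are illustrative assumptions, not the study's actual parameters:

```python
import math
import random

# Random multiple-impact sequence sketched in plain Python: shot centres
# drawn with the random module, coverage measured on a grid and compared
# with the Avrami-type law C = 1 - exp(-k * N). The target size, indent
# radius, and grid resolution below are illustrative assumptions.
random.seed(42)
TARGET, INDENT_RADIUS, GRID = 10.0, 1.0, 100  # mm, mm, cells per side

def simulate_coverage(num_shots):
    """Fraction of surface cells indented by at least one random shot."""
    hit = [[False] * GRID for _ in range(GRID)]
    cell = TARGET / GRID
    for _ in range(num_shots):
        x, y = random.uniform(0, TARGET), random.uniform(0, TARGET)
        for i in range(GRID):
            cx = (i + 0.5) * cell
            for j in range(GRID):
                cy = (j + 0.5) * cell
                if (cx - x) ** 2 + (cy - y) ** 2 <= INDENT_RADIUS ** 2:
                    hit[i][j] = True
    return sum(row.count(True) for row in hit) / GRID ** 2

coverage_50 = simulate_coverage(50)
# Avrami-type prediction with k = (indent area) / (target area):
avrami_50 = 1.0 - math.exp(-math.pi * INDENT_RADIUS ** 2 * 50 / TARGET ** 2)
```

In a real SMAT script the sampled (x, y) positions would drive the per-impact runs, each restarted from the previous run's state; the grid tally here only illustrates why random impacts saturate coverage along an Avrami-type curve (edge clipping makes the simulated value sit slightly below the ideal law).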

Keywords: dislocation density, grain refinement, severe plastic deformation, simulation, surface mechanical attrition treatment

Procedia PDF Downloads 111
111 Evaluation of the Impact of Telematics Use on Young Drivers’ Driving Behaviour: A Naturalistic Driving Study

Authors: WonSun Chen, James Boylan, Erwin Muharemovic, Denny Meyer

Abstract:

In Australia, drivers aged between 18 and 24 have remained at high risk of road fatality over the last decade. Despite the successful implementation of the Graduated Licensing System (GLS), which supports young drivers in the early phases of driving, the road fatality statistics for these drivers remain high. In response to these statistics, studies conducted in Australia before the start of the COVID-19 pandemic demonstrated the benefits of using telematics devices for improving driving behaviour. However, the impact of COVID-19 lockdowns on young drivers’ driving behaviour has emerged as a global concern. Therefore, this naturalistic study aimed to evaluate and compare the driving behaviour (such as acceleration, braking, speeding, etc.) of young drivers with the adoption of in-vehicle telematics devices. Forty-two drivers aged between 18 and 30 and residing in the Australian state of Victoria participated in this study during the period of May to October 2022. All participants drove with the telematics devices during the first 30-day period. At the start of the second 30-day period, twenty-one participants were randomised to an intervention group, where they were provided with an additional telematics ray device that gave visual feedback to the drivers, especially when they engaged in aggressive driving behaviour. The remaining twenty-one participants continued their driving journeys without the extra telematics ray device (control group). The telematics data enabled the assessment of changes in the driving behaviour of these young drivers using a machine learning approach in Python. Results are expected to show that participants from the intervention group improve their driving behaviour compared to those from the control group; furthermore, the telematics data enable such improvements in driving behaviour to be assessed and quantified. The findings from this study are anticipated to shed some light on guiding the development of customised campaigns and interventions to further address the high road fatality rate among young drivers in Australia.
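
One behaviour metric such an analysis could extract, counting harsh-braking events from a sampled longitudinal-acceleration trace, can be sketched as follows; the threshold and trace values are hypothetical, not the study's telematics data:

```python
# Illustrative sketch of one driving-behaviour metric: counting
# harsh-braking events in a longitudinal-acceleration trace. The
# threshold and the sample trace are hypothetical values.
HARSH_BRAKE_THRESHOLD = -3.0  # m/s^2, assumed value

def count_harsh_brakes(accel_trace, threshold=HARSH_BRAKE_THRESHOLD):
    """Count contiguous runs of samples at or below the threshold,
    so one sustained braking manoeuvre counts as a single event."""
    events, in_event = 0, False
    for a in accel_trace:
        if a <= threshold and not in_event:
            events, in_event = events + 1, True
        elif a > threshold:
            in_event = False
    return events

trace = [0.2, -1.0, -3.5, -4.1, -0.5, 0.0, -3.2, -1.0]
harsh_brakes = count_harsh_brakes(trace)
```

Per-trip counts of events like this, together with speeding and acceleration features, are the kind of inputs a machine learning comparison of intervention and control groups would consume.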

Keywords: driving behaviour, naturalistic study, telematics data, young drivers

Procedia PDF Downloads 94
110 Simulating Studies on Phosphate Removal from Laundry Wastewater Using Biochar: Dubinin Approach

Authors: Eric York, James Tadio, Silas Owusu Antwi

Abstract:

Laundry wastewater contains a diverse range of chemical pollutants that can have detrimental effects on human health and the environment. In this study, simulation studies were conducted in Python (Spyder v3.2) to assess the efficacy of biochar in removing PO₄³⁻ from wastewater. Through modeling and simulation, the mechanisms involved in the adsorption of phosphate by biochar were studied by altering variables specific to the phosphate from common laundry phosphate detergents, such as the aqueous solubility, initial concentration, and temperature, using the Dubinin approach (DA). Results showed that the concentrations equilibrated near the highest values for Sugar beet-120 mgL⁻¹, Tailing-85 mgL⁻¹, CaO-rich-50 mgL⁻¹, Eggshell and rice straw-48 mgL⁻¹, Undaria Pinnatifida Roots-190 mgL⁻¹, Ca-Alginate Granular Beads-240 mgL⁻¹, Laminaria Japonica Powder-900 mgL⁻¹, Pine sawdust-57 mgL⁻¹, Rice hull-190 mgL⁻¹, Sesame straw-470 mgL⁻¹, Sugar Bagasse-380 mgL⁻¹, Miscanthus Giganteus-240 mgL⁻¹, Wood Bc-130 mgL⁻¹, Pine-25 mgL⁻¹, Sawdust-6.8 mgL⁻¹, Sewage Sludge-, Rice husk-12 mgL⁻¹, and Maize straw-1800 mgL⁻¹ (with Corncob-117 mgL⁻¹), while Peanut, Eucalyptus polybractea-, and Crawfish equilibrated at a similar concentration. CO₂-activated Thalia, sewage sludge biochar, and Broussonetia Papyrifera Leaves equilibrated just at the lower concentration. Only Soybean Stover exhibited a sharp rise-and-fall peak at mid-concentration, at 2 mgL⁻¹. The modelling results were consistent with experimental findings from the literature, ensuring the accuracy, repeatability, and reliability of the simulation study. The simulation study provided insights into the adsorption of PO₄³⁻ from wastewater by biochar, in terms of the concentration per volume that can ideally be adsorbed under the given conditions. The study showed that applying the principle experimentally in real wastewater, with all its complexity, is warranted and not far-fetched.
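
A minimal sketch of a Dubinin-Astakhov-type uptake calculation, with the adsorption potential written in terms of aqueous solubility, concentration, and temperature as the abstract describes; all parameter values are illustrative assumptions, not fitted biochar data:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def dubinin_uptake(C, Cs, T, q_max, E, n=2):
    """Dubinin-Astakhov-type isotherm adapted to solution adsorption:
    adsorption potential A = R*T*ln(Cs/C), where Cs is the aqueous
    solubility, and uptake q = q_max * exp(-(A/E)**n). With n = 2 this
    reduces to the Dubinin-Radushkevich form. Parameter values used
    below are illustrative, not fitted biochar data."""
    A = R * T * math.log(Cs / C)
    return q_max * math.exp(-((A / E) ** n))

# Uptake approaches q_max as the concentration approaches solubility:
q_low = dubinin_uptake(C=1.0, Cs=100.0, T=298.0, q_max=120.0, E=8000.0)
q_high = dubinin_uptake(C=90.0, Cs=100.0, T=298.0, q_max=120.0, E=8000.0)
```

Sweeping C, Cs, and T through this expression is one way to reproduce the kind of concentration-equilibration curves reported above for the different biochars.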

Keywords: simulation studies, phosphate removal, biochar, adsorption, wastewater treatment

Procedia PDF Downloads 75
109 Design of Hybrid Auxetic Metamaterials for Enhanced Energy Absorption under Compression

Authors: Ercan Karadogan, Fatih Usta

Abstract:

Auxetic materials have a negative Poisson’s ratio (NPR), which is not often found in nature. They are metamaterials that have potential applications in many engineering fields. Mechanical metamaterials are synthetically designed structures with unusual mechanical properties that depend on the properties of the matrix structure. They have the following special characteristics: improved shear modulus, increased energy absorption, and high fracture toughness. Non-auxetic materials compress transversely when they are stretched: the system is naturally inclined to keep its density constant, so the transversal compression increases the density to balance the loss in the longitudinal direction. This study proposes to improve the crushing performance of hybrid auxetic materials. The re-entrant honeycomb structure has been combined with a star honeycomb, an S-shaped unit cell, a double arrowhead, and a structurally hexagonal re-entrant honeycomb in 9 × 9 cell arrays, i.e., 9 cells in the lateral direction and 9 in the vertical direction. Finite Element (FE) and experimental methods have been used to determine the compression behaviour of the developed hybrid auxetic structures. The FE models have been developed using Abaqus software. Specimens made of polymer plastic materials have been 3D printed and subjected to compression loading. The results are compared in terms of specific energy absorption and strength. This paper describes the quasi-static crushing behaviour of two types of hybrid lattice structures (auxetic + auxetic and auxetic + non-auxetic). The results show that the developed hybrid structures can be useful for controlling collapse mechanisms and present larger energy absorption compared to conventional re-entrant auxetic structures.
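
The specific-energy-absorption comparison can be sketched as follows, with an illustrative force-displacement curve standing in for the measured or simulated crushing response:

```python
# Sketch of the crushing-performance metric used above: specific energy
# absorption (SEA) is the area under the force-displacement curve
# divided by specimen mass. The curve and mass below are illustrative.
def specific_energy_absorption(force_N, disp_m, mass_kg):
    """SEA in J/kg: integrate F dx by the trapezoidal rule, then
    divide the absorbed energy by the specimen mass."""
    energy = sum((f0 + f1) / 2.0 * (x1 - x0)
                 for (f0, x0), (f1, x1) in zip(zip(force_N, disp_m),
                                               zip(force_N[1:], disp_m[1:])))
    return energy / mass_kg

force = [0.0, 200.0, 180.0, 220.0, 210.0]   # N, illustrative crush curve
disp = [0.000, 0.005, 0.010, 0.015, 0.020]  # m
sea = specific_energy_absorption(force, disp, mass_kg=0.05)
```

Normalising by mass is what makes lattices of different cell types and relative densities comparable, which is why SEA rather than raw absorbed energy is the figure of merit above.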

Keywords: auxetic materials, compressive behavior, metamaterials, negative Poisson’s ratio

Procedia PDF Downloads 73
108 Fundamental Natural Frequency of Chromite Composite Floor System

Authors: Farhad Abbas Gandomkar, Mona Danesh

Abstract:

This paper aims to determine the Fundamental Natural Frequency (FNF) of a structural composite floor system known as Chromite. To achieve this purpose, the FNFs of the studied panels are determined by developing Finite Element Models (FEMs) in the ABAQUS program. The American Institute of Steel Construction (AISC), in Steel Design Guide Series 11, presents a fundamental formula to calculate the FNF of a steel-framed floor system; this formula has been used to verify the results of the FEMs. The variability of the FNF of the studied system is determined under various parameters, such as the dimensions of the floor, boundary conditions, rigidity of the main and secondary beams around the floor, thickness of the concrete slab, height of the composite joists, distance between the composite joists, thickness of the top and bottom flanges of the open web steel joists, and the addition of a tie beam perpendicular to the composite joists. The results show that changes in the dimensions of the system, its boundary conditions, the rigidity of the main beam, and the addition of a tie beam significantly change the FNF of the system, by up to 452.9%, 50.8%, -52.2%, and 52.6%, respectively. In addition, increasing the thickness of the concrete slab increases the FNF of the system by up to 10.8%. Furthermore, the results demonstrate that variations in the rigidity of the secondary beam, the height of the composite joists, the distance between the composite joists, and the thickness of the top and bottom flanges of the open web steel joists change the FNF of the studied system only insignificantly, by up to -0.02%, -3%, -6.1%, and 0.96%, respectively. Finally, the results of this study help the designer predict the occurrence of resonance, assess comfortableness, and apply the design criteria for the studied system.
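
The AISC Design Guide 11 estimate referred to above is commonly stated as f_n = 0.18·sqrt(g/Δ), with Δ the midspan deflection under the supported weight; a minimal sketch, with an illustrative deflection value, is:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def fundamental_frequency(delta_m):
    """AISC Design Guide 11 estimate f_n = 0.18 * sqrt(g / delta),
    where delta is the midspan deflection (m) of the floor system
    under the weight it supports."""
    return 0.18 * math.sqrt(G / delta_m)

# Example: 5 mm of midspan deflection (illustrative value)
f = fundamental_frequency(0.005)
```

The formula makes the stiffness trade-offs in the abstract intuitive: anything that increases deflection (less rigid main beams, larger spans) lowers the FNF, while stiffening measures such as a tie beam raise it.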

Keywords: fundamental natural frequency, Chromite composite floor system, finite element method, low and high frequency floors, comfortableness, resonance

Procedia PDF Downloads 431