Search results for: deep acting
1636 Tornado Disaster Impacts and Management: Learning from the 2016 Tornado Catastrophe in Jiangsu Province, China
Authors: Huicong Jia, Donghua Pan
Abstract:
As a key component of disaster reduction management, emergency relief and reconstruction is an important process. Based on disaster system theory, this study analyzed the Jiangsu tornado along the tornado disaster chain, from the formation mechanism of the disaster through to the economic losses, loss of life, and damage to social infrastructure. The study then assessed the emergency relief and reconstruction efforts using an analytic hierarchy process (AHP) method. The results were as follows: (1) An unstable weather system was the root cause of the tornado. The potentially hazardous local environment, acting in concert with the terrain and the river network, gathered energy from the unstable atmosphere. The wind belt passed through a densely populated district with vulnerable infrastructure and other hazard-prone elements, which led to an accumulative disaster situation and the triggering of a catastrophe. (2) The tornado was accompanied by a hailstorm, an important triggering factor for a tornado catastrophe chain reaction. (3) The evaluation index (EI) of the emergency relief and reconstruction effect for the '6.23' tornado disaster in Yancheng was 91.5; compared with relief work in areas affected by disasters of similar magnitude, the response was more successful than previously experienced. The results provide new insights for studies of disaster systems and of recovery measures in response to tornado catastrophes in China.
Keywords: China, disaster system, emergency relief, tornado catastrophe
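The evaluation index in result (3) comes from an AHP weighting of relief indicators. Below is a minimal sketch of that scoring step, assuming a hypothetical 3×3 pairwise-comparison matrix and hypothetical indicator scores; none of these numbers are the study's data.

```python
# Minimal AHP scoring sketch: weights from the principal eigenvector of a
# pairwise-comparison matrix, then a weighted evaluation index (EI).
import numpy as np

# Pairwise comparison of three relief indicators (hypothetical judgments).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
idx = np.argmax(np.real(eigvals))
w = np.abs(np.real(eigvecs[:, idx]))
w /= w.sum()                              # indicator weights

# Consistency check (random index RI = 0.58 for a 3x3 matrix).
lam_max = np.real(eigvals)[idx]
ci = (lam_max - A.shape[0]) / (A.shape[0] - 1)
assert ci / 0.58 < 0.1, "pairwise judgments are inconsistent"

scores = np.array([95.0, 90.0, 85.0])     # hypothetical indicator scores (0-100)
ei = float(w @ scores)                    # weighted evaluation index
print(f"weights = {w.round(3)}, EI = {ei:.1f}")
```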
Procedia PDF Downloads 272
1635 The Quality of Public Space in Mexico City: Current State and Trends
Authors: Mildred Moreno Villanueva
Abstract:
Public space is essential for strengthening the social and urban fabric and social cohesion; therein lies the importance of its study. Hence, the aim of this paper is to analyze the quality of public space in the XXI century in both quantitative and qualitative terms. In this article, the concept of public space includes open spaces such as parks, public squares, and walking areas. For this analysis, we take Mexico City as the case study. It has a population of nearly 9 million inhabitants and is composed of sixteen boroughs. We consider both existing public spaces and government interventions to build new public spaces and improve existing ones. Results show that, on the one hand, there is quantitatively no equitable distribution of public spaces, owing both to the growth of the city itself and to the absence of political will to create them. Another factor is the evolution of this city, which has grown merely in a 'patched pattern' in which public space has played no role at all, with a total absence of urban design. On the other hand, even the boroughs with the most public spaces have shown no interest in making these spaces qualitatively inclusive and open to the general population with integration in mind. Therefore, urban projects that privatize public space seem to be the rule, rather than efforts to rehabilitate existing public spaces. Hence, state intervention should reinforce its role as an agent of social change acting for the benefit of the majority of the inhabitants by promoting more inclusive public spaces.
Keywords: exclusion, inclusion, Mexico City, public space
Procedia PDF Downloads 623
1634 Effect of Yb and Sm Doping on Thermoluminescence and Optical Properties of LiF Nanophosphor
Authors: Rakesh Dogra, Arun Kumar, Arvind Kumar Sharma
Abstract:
This paper reports the thermoluminescence and optical properties of rare-earth-doped lithium fluoride (LiF) nanophosphor synthesized via a chemical route. The rare earth impurities (Yb and Sm) were observed to increase the deep trap center capacity, which, in turn, enhances the radiation resistance of the LiF. This suggests that these materials are viable as high-dose thermoluminescent detectors at high temperature. Further, optical absorption measurements revealed the formation of radiation-induced stable color centers in LiF at room temperature, which are independent of the rare earth dopant.
Keywords: lithium fluoride, thermoluminescence, UV-VIS spectroscopy, gamma radiation
Procedia PDF Downloads 154
1633 Effect of TERGITOL NP-9 and PEG-10 Oleyl Phosphate as Surfactant and Corrosion Inhibitor on Tribo-Corrosion Performance of Carbon Steel in Emulsion-Based Drilling Fluids
Authors: Mohammadjavad Palimi, D. Y. Li, E. Kuru
Abstract:
Emulsion-based drilling fluids containing mineral oil are commonly used for drilling operations; they generate a lubricating film that prevents direct contact between moving metal parts, thus reducing friction, wear, and corrosion. For long-lasting lubrication, the thin lubricating film formed on the metal surface should possess good anti-wear and anti-corrosion capabilities. This study investigates the effects of two additives, TERGITOL NP-9 and PEG-10 oleyl phosphate, acting as surfactant and corrosion inhibitor, respectively, on the tribo-corrosion behavior of 1018 carbon steel immersed in 5% KCl solution at room temperature. A pin-on-disc tribometer attached to an electrochemical system was used to investigate the corrosive wear of the steel immersed in emulsion-based fluids containing the surfactant and corrosion inhibitor. The wear track and the surface chemistry and composition of the protective film formed on the steel surface were analyzed with an optical profilometer, SEM, and SEM-EDX. The results demonstrate that the corrosion inhibitor significantly improved the performance of the emulsion-based drilling fluids, remarkably reducing corrosion, the coefficient of friction (COF), and wear.
Keywords: corrosion inhibitor, emulsion-based drilling fluid, tribo-corrosion, friction, wear
Procedia PDF Downloads 75
1632 Deep-Learning Coupled with Pragmatic Categorization Method to Classify the Urban Environment of the Developing World
Authors: Qianwei Cheng, A. K. M. Mahbubur Rahman, Anis Sarker, Abu Bakar Siddik Nayem, Ovi Paul, Amin Ahsan Ali, M. Ashraful Amin, Ryosuke Shibasaki, Moinul Zaber
Abstract:
Thomas Friedman, in his famous book, argued that the world in this 21st century is flat and will continue to become flatter. This is attributed to rapid globalization and the interdependence of humanity, which have engendered a tremendous inflow of migration toward urban spaces. To keep the urban environment sustainable, policy makers need to plan based on extensive analysis of the urban environment. With the advent of high-definition satellite images, high-resolution data, computational methods such as deep neural network analysis, and hardware capable of high-speed processing, urban planning is seeing a paradigm shift. Legacy data on urban environments are now being complemented with high-volume, high-frequency data. However, the first step in understanding urban space lies in a useful categorization of the space that is usable for data collection, analysis, and visualization. In this paper, we propose a pragmatic categorization method that is readily usable for machine analysis and show the applicability of the methodology in a developing-world setting. Categorization for planning sustainable urban spaces should encompass buildings and their surroundings. However, the state of the art is mostly dominated by classification of building structures, building types, etc., and largely represents the developed world. Hence, these methods and models are not sufficient for developing countries such as Bangladesh, where the surrounding environment is crucial for categorization. Moreover, these categorizations propose small-scale classifications, which give limited information, have poor scalability, and are slow to compute in real time. Our proposed method is divided into two steps: categorization and automation. We categorize the urban area in terms of informal and formal spaces, taking the surrounding environment into account. A 50 km × 50 km Google Earth image of Dhaka, Bangladesh, was visually annotated and categorized by an expert, and consequently a map was drawn. The categorization is based broadly on two dimensions: the state of urbanization and the architectural form of the urban environment. Consequently, the urban space is divided into four categories: 1) highly informal area; 2) moderately informal area; 3) moderately formal area; and 4) highly formal area. In total, sixteen sub-categories were identified. For semantic segmentation and automatic categorization, Google's DeepLabV3+ model was used. The model uses atrous convolution to analyze different layers of texture and shape, which allows us to enlarge the field of view of the filters to incorporate larger context. Imagery encompassing 70% of the urban space was used to train the model, and the remaining 30% was used for testing and validation. The model is able to segment with 75% accuracy and 60% Mean Intersection over Union (mIoU). In this paper, we propose a pragmatic categorization method that is readily applicable for automatic use in both developing- and developed-world contexts. The method can be augmented for real-time socio-economic comparative analysis among cities. It can be an essential tool for policy makers planning future sustainable urban spaces.
Keywords: semantic segmentation, urban environment, deep learning, urban building, classification
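The enlarged field of view mentioned above comes from atrous (dilated) convolution, the core operation of DeepLabV3+. Here is a minimal PyTorch sketch of the effect; the dilation rates are illustrative values like those of an ASPP module, not the study's configuration.

```python
# Atrous convolution demo: raising the dilation rate enlarges the effective
# receptive field of a 3x3 kernel without adding parameters.
import torch
import torch.nn as nn

x = torch.randn(1, 3, 256, 256)  # a dummy 3-channel satellite tile

for rate in (1, 6, 12, 18):
    conv = nn.Conv2d(3, 8, kernel_size=3, dilation=rate, padding=rate)
    y = conv(x)
    # Effective kernel span of a 3x3 kernel with dilation r is (2r + 1).
    print(f"rate={rate:2d} -> output {tuple(y.shape)}, "
          f"effective kernel {2 * rate + 1}x{2 * rate + 1}")
```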
Procedia PDF Downloads 194
1631 Brain Age Prediction Based on Brain Magnetic Resonance Imaging by 3D Convolutional Neural Network
Authors: Leila Keshavarz Afshar, Hedieh Sajedi
Abstract:
Estimation of biological brain age from MR images has been much addressed in recent years owing to its importance for the early diagnosis of diseases such as Alzheimer's. In this paper, we use a 3D Convolutional Neural Network (CNN) to provide a method for estimating the biological age of the brain. The 3D-CNN model is trained on normalized MRI data. In addition, to reduce computation while preserving overall performance, a subset of informative slices is selected for age estimation. With this method, the biological age of individuals was estimated from the selected normalized data with a Mean Absolute Error (MAE) of 4.82 years.
Keywords: brain age estimation, biological age, 3D-CNN, deep learning, T1-weighted image, SPM, preprocessing, MRI, Canny, gray matter
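A minimal PyTorch sketch of a 3D-CNN age regressor trained with an MAE (L1) objective, as in the abstract. The layer sizes and input volume shape are illustrative assumptions, not the authors' architecture.

```python
# Tiny 3D-CNN regressor: conv blocks -> global pooling -> scalar age output,
# optimized with L1 loss (identical to mean absolute error).
import torch
import torch.nn as nn

class BrainAge3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

model = BrainAge3DCNN()
mri = torch.randn(4, 1, 64, 64, 64)            # batch of normalized volumes
age = torch.tensor([63.0, 71.5, 55.2, 80.1])   # chronological ages (toy)
loss = nn.L1Loss()(model(mri), age)            # L1 loss == MAE in years
loss.backward()
print(f"MAE on this batch: {loss.item():.2f} years")
```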
Procedia PDF Downloads 151
1630 Finite Element Analysis of Connecting Rod
Authors: Mohammed Mohsin Ali H., Mohamed Haneef
Abstract:
The connecting rod transmits the piston load to the crank, causing the latter to turn and thus converting the reciprocating motion of the piston into the rotary motion of the crankshaft. Connecting rods are subjected to forces generated by mass and fuel combustion. This study investigates and compares the fatigue behavior of forged steel, powder-forged, and ASTM A514 cold-quenched steel connecting rods. The objective is to suggest a new material with reduced weight and cost and increased fatigue life. This has entailed performing a detailed load analysis. Therefore, this study has dealt with two subjects: first, dynamic load and stress analysis of the connecting rod, and second, optimization for material, weight, and cost. In the first part of the study, the loads acting on the connecting rod as a function of time were obtained. Based on the observations of the dynamic FEA, static FEA, and the load analysis results, the load for the optimization study was selected. It is the conclusion of this study that the connecting rod can be designed and optimized under a load range comprising a tensile load and a compressive load. The tensile load corresponds to a 360° crank angle at the maximum engine speed, and the compressive load corresponds to the peak gas pressure. Furthermore, the existing connecting rod can be replaced with a new one made of ASTM A514 cold-quenched steel that is 12% lighter and 28% cheaper.
Keywords: connecting rod, ASTM A514 cold-quenched material, static analysis, fatigue analysis, stress life approach
Procedia PDF Downloads 303
1629 The Interdisciplinary Synergy Between Computer Engineering and Mathematics
Authors: Mitat Uysal, Aynur Uysal
Abstract:
Computer engineering and mathematics share a deep and symbiotic relationship, with mathematics providing the foundational theories and models for computer engineering advancements. From algorithm development to optimization techniques, mathematics plays a pivotal role in solving complex computational problems. This paper explores key mathematical principles that underpin computer engineering, illustrating their significance through a case study that demonstrates the application of optimization techniques using Python code. The case study addresses the well-known vehicle routing problem (VRP), an extension of the traveling salesman problem (TSP), and solves it using a genetic algorithm.
Keywords: VRP, TSP, genetic algorithm, computer engineering, optimization
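The abstract's case study solves the VRP with a genetic algorithm in Python. The compact sketch below illustrates that approach under stated assumptions: random city coordinates, a fixed vehicle count, permutation encoding, order crossover, and swap mutation; the paper's actual instance and GA settings may differ.

```python
# Genetic algorithm for a toy VRP: a permutation of cities is split into
# contiguous routes, one per vehicle, all starting and ending at the depot.
import random
import math

random.seed(0)
N_CITIES, N_VEHICLES = 12, 3
DEPOT = (0.0, 0.0)
CITIES = [(random.uniform(-10, 10), random.uniform(-10, 10))
          for _ in range(N_CITIES)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def total_distance(perm):
    total, size = 0.0, math.ceil(len(perm) / N_VEHICLES)
    for i in range(0, len(perm), size):
        route = [DEPOT] + [CITIES[j] for j in perm[i:i + size]] + [DEPOT]
        total += sum(dist(route[k], route[k + 1]) for k in range(len(route) - 1))
    return total

def order_crossover(p1, p2):
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]                       # copy a slice from parent 1
    rest = [g for g in p2 if g not in child]   # fill the rest in p2's order
    for i in range(len(child)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

pop = [random.sample(range(N_CITIES), N_CITIES) for _ in range(60)]
for gen in range(200):
    pop.sort(key=total_distance)
    elite = pop[:20]                           # truncation selection
    children = []
    while len(children) < 40:
        p1, p2 = random.sample(elite, 2)
        c = order_crossover(p1, p2)
        if random.random() < 0.2:              # swap mutation
            i, j = random.sample(range(N_CITIES), 2)
            c[i], c[j] = c[j], c[i]
        children.append(c)
    pop = elite + children

print(f"best total distance: {total_distance(min(pop, key=total_distance)):.2f}")
```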
Procedia PDF Downloads 20
1628 The Impact of Varying the Detector and Modulation Types on Inter Satellite Link (ISL) Realizing the Allowable High Data Rate
Authors: Asmaa Zaki M., Ahmed Abd El Aziz, Heba A. Fayed, Moustafa H. Aly
Abstract:
ISLs are the most popular choice for deep space communications because these links are attractive alternatives to present-day microwave links. This paper explores the allowable high data rate of this link over different orbits, which is affected by variations in modulation scheme and detector type. Moreover, the objective of this paper is to optimize and analyze the performance of the ISL in terms of Q-factor and Minimum Bit Error Rate (Min-BER) for different detector types and system parameters.
Keywords: free space optics (FSO), field of view (FOV), inter satellite link (ISL), optical wireless communication (OWC)
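The Q-factor and BER figures of merit used above are linked by the textbook relation BER = 0.5·erfc(Q/√2), commonly applied in optical link budgets. A small sketch of that relation follows; the Q values are illustrative, not taken from the paper.

```python
# Q-factor to BER conversion for an optical receiver.
import math

for q in (3, 6, 7):
    ber = 0.5 * math.erfc(q / math.sqrt(2))
    print(f"Q = {q}: BER ≈ {ber:.2e}")
# Q = 6 yields BER ≈ 1e-9, a common acceptance threshold for optical links.
```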
Procedia PDF Downloads 400
1627 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation
Authors: Jonathan Gong
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to improve the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19; this underuse is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field suggests that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays under the labels COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 separate images. The training images share an important feature: they are cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture to extract the lung mask from the chest X-ray image; it is trained on 8577 images and validated on a 20% validation split. The models' accuracy, precision, recall, F1-score, IoU, and loss are computed on an external validation dataset. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
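A minimal Keras sketch of the transfer-learning classifier described above: a pre-trained DenseNet201 backbone feeding a small dense head that predicts COVID-19 / normal / pneumonia. The head sizes are assumptions, and the autoencoder stage of the paper's pipeline is omitted for brevity.

```python
# DenseNet201 transfer learning: frozen ImageNet backbone + trainable head.
import tensorflow as tf

base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False                       # freeze the pre-trained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),  # 3 diagnostic classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # with a tf.data pipeline
```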
Procedia PDF Downloads 133
1626 Proniosomes as a Drug Carrier for Topical Delivery of Tolnaftate
Authors: Mona Mahmoud Abou Samra, Alaa Hamed Salama, Ghada Awad, Soheir Said Mansy
Abstract:
Proniosomes are well documented for topical drug delivery and are preferred over other vesicular systems because they are biodegradable, biocompatible, and non-toxic, possess skin penetration ability, and prolong the release of drugs by acting as a depot in the deeper layers of the skin. Proniosomal drug delivery is preferred over niosomes because of the improved stability of the system. The present investigation aimed at the formulation development and performance evaluation of a proniosomal gel as a vesicular drug carrier system for the antifungal drug tolnaftate. Proniosomes were developed using different nonionic surfactants, such as Span 60 and Span 65, with cholesterol in different molar ratios by the coacervation phase-separation method, in the presence or absence of either lecithin or Phospholipon 80 H. Proniosomal gel formulations of tolnaftate were characterized for vesicular shape and size, entrapment efficiency, rheological properties, and drug release. The effects of the surfactants and additives on the entrapment efficiency, particle size, and percentage of drug released were studied. The proniosomal formulations selected for topical delivery of tolnaftate were subjected to a microbiological study in male rats infected with Trichophyton rubrum, the main cause of tinea pedis, compared with the free drug and a market product, and the results were recorded.
Keywords: fungal infection, proniosome, tolnaftate, Trichophyton rubrum
Procedia PDF Downloads 516
1625 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequences read from the fragmented DNA sequences and stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use and time-consuming, and they rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most such methods use the concept of word and sentence embeddings to create a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach, metagenome2vec, that classifies patients into disease groups directly from raw metagenomic reads. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction. Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine
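Step (i) of the pipeline, building a k-mer vocabulary and learning embeddings, can be sketched with a word2vec-style model. The reads and the value of k below are toy placeholders; the paper's exact embedding setup may differ.

```python
# K-mer tokenization of reads, then skip-gram embeddings with gensim.
from gensim.models import Word2Vec

def kmers(read, k=4):
    return [read[i:i + k] for i in range(len(read) - k + 1)]

reads = [
    "ATCGGCTAAGCTTACG",
    "GGCTAACGTTAGCATC",
    "TTACGGATCGGCTAAG",
]
corpus = [kmers(r) for r in reads]       # each read -> a "sentence" of k-mers

model = Word2Vec(sentences=corpus, vector_size=16, window=5,
                 min_count=1, sg=1, epochs=50)
vec = model.wv["GGCT"]                   # numerical embedding of one k-mer
print(vec.shape, model.wv.most_similar("GGCT", topn=2))
```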
Procedia PDF Downloads 127
1624 Stationary Gas Turbines in Power Generation: Past, Present and Future Challenges
Authors: Michel Moliere
Abstract:
In the coming decades, the thermal power generation segment will survive only if it achieves profound transformations, including drastic abatement of CO2 emissions and strong efficiency gains. In this challenging perspective, stationary gas turbines appear to be serious candidates to lead the energy transition. Indeed, during the past decades, these turbomachines have made brisk technological advances in terms of efficiency, reliability, fuel flexibility (including the combustion of hydrogen), and the ability to hybridize with renewables. It is, therefore, timely to summarize the progress achieved by gas turbines in the recent past and to examine their assets for facing the challenges of the energy transition.
Keywords: energy transition, gas turbines, decarbonization, power generation
Procedia PDF Downloads 213
1623 Segmented Pupil Phasing with Deep Learning
Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan
Abstract:
Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows one to fit a large telescope within a reduced volume (JWST) or into an even smaller one (a standard CubeSat). CubeSats have tight constraints on the available computational budget and the allowed payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet one of the challenges of wavefront sensing is the non-linearity between image intensity and phase aberrations. Moreover, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested neural networks (NNs) for wavefront sensing, especially convolutional NNs, which are well known as non-linear and image-friendly problem solvers. Aims: In this paper we study the prospect of using NNs to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without dedicated wavefront sensing. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. To reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope's focal-plane detector, used for imaging, serves as the wavefront sensor. In this work, we study a point source, i.e., the Point Spread Function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual WFE, below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computation time of less than 30 ms, which translates to a small computational burden. These results motivate further study of larger aberrations and noise.
Keywords: wavefront sensing, deep learning, deployable telescope, space telescope
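The core idea, regressing segment phasing errors directly from a PSF image with a VGG-style network, can be sketched in PyTorch as below. The number of segments (here 6), the single-channel input adaptation, and the nm-scale targets are illustrative assumptions, not the authors' setup.

```python
# VGG16 adapted for regression: single-channel PSF in, piston errors out.
import torch
import torch.nn as nn
from torchvision.models import vgg16

N_SEGMENTS = 6
net = vgg16(weights=None)
# PSF images are single-channel; adapt the first convolution accordingly.
net.features[0] = nn.Conv2d(1, 64, kernel_size=3, padding=1)
# Replace the 1000-class head with one regression output per segment.
net.classifier[6] = nn.Linear(4096, N_SEGMENTS)

psf = torch.randn(8, 1, 224, 224)            # batch of simulated PSFs
target = torch.randn(8, N_SEGMENTS) * 50.0   # piston errors in nm (simulated)
loss = nn.MSELoss()(net(psf), target)
loss.backward()
print(f"training loss: {loss.item():.1f}")
```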
Procedia PDF Downloads 106
1622 A Simple Approach to Reliability Assessment of Structures via Anomaly Detection
Authors: Rims Janeliukstis, Deniss Mironovs, Andrejs Kovalovs
Abstract:
Operational Modal Analysis (OMA) is widely applied as a method for Structural Health Monitoring, identifying and assessing structural damage by tracking changes in the identified modal parameters over time. Unfortunately, modal parameters also depend on external factors such as temperature and loads. Any structural condition assessment using modal parameters should take those external factors into consideration; otherwise there is a high chance of false positives. A method of structural reliability assessment based on an anomaly detection technique called Mahalanobis Squared Distance (MSD) is proposed. It requires a set of reference conditions to learn the healthy state of a structure, to which all future observations are compared. In this study, structural modal parameters (natural frequency and mode shape), as well as the ambient temperature and the loads acting on the structure, are used as features. Numerical tests were performed on a finite element model of a carbon fibre reinforced polymer composite beam with delamination damage at various locations and of various severities. The advantages of the demonstrated approach include relatively few computational steps, the ability to distinguish between healthy and damaged conditions, and discrimination between different damage severities. It is anticipated to be promising for reliability assessment of mass-produced structural parts.
Keywords: operational modal analysis, reliability assessment, anomaly detection, damage, Mahalanobis squared distance
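A minimal numpy sketch of the MSD scheme described above: learn the mean and covariance of healthy reference features, then score new feature vectors against a threshold. All data here are synthetic placeholders, not the beam model's values.

```python
# Mahalanobis squared distance anomaly detector for SHM-style features
# (e.g. natural frequency, temperature, load).
import numpy as np

rng = np.random.default_rng(1)
healthy = rng.normal([12.0, 20.0, 5.0], [0.1, 2.0, 0.5], size=(500, 3))

mu = healthy.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def msd(x):
    d = x - mu
    return float(d @ cov_inv @ d)

# Threshold: e.g. the 99th percentile of MSD over the reference set itself.
threshold = np.percentile([msd(x) for x in healthy], 99)

new_healthy = np.array([12.05, 21.0, 5.1])
damaged = np.array([11.40, 21.0, 5.1])   # shifted natural frequency
for x in (new_healthy, damaged):
    flag = "ANOMALY" if msd(x) > threshold else "ok"
    print(f"MSD = {msd(x):7.2f} -> {flag}")
```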
Procedia PDF Downloads 118
1621 Mathematical Modelling of Slag Formation in an Entrained-Flow Gasifier
Authors: Girts Zageris, Vadims Geza, Andris Jakovics
Abstract:
Gasification processes are of great interest due to their generation of renewable energy in the form of syngas from biodegradable waste. It is, therefore, important to study the factors that play a role in the efficiency of gasification and in the longevity of the machines in which it takes place. This study focuses on the latter, aiming to optimize an entrained-flow gasifier by reducing slag formation on its walls and thereby reducing maintenance costs. A CFD mathematical model of an entrained-flow gasifier is constructed: the geometry of an actual gasifier is rendered in 3D and appropriately meshed. The turbulent gas flow in the gasifier is then modeled with the realizable k-ε approach, taking devolatilization, combustion, and coal gasification into account. Several such simulations are conducted, obtaining results for different air inlet positions and tracking particles of varying sizes undergoing devolatilization and gasification. The model identifies potentially problematic zones where most particles collide with the gasifier walls, indicating risk regions where ash deposits are most likely to form. In conclusion, the effects of air inlet positioning and of the particle sizes admitted to the main gasifier tank on the formation of an ash layer are discussed, and possible solutions for decreasing the number of undesirable deposits are proposed. Additionally, an estimate is given of the impact of factors such as temperature, gas properties, gas content, and the different forces acting on the particles undergoing gasification.
Keywords: biomass particles, gasification, slag formation, turbulence k-ε modelling
Procedia PDF Downloads 287
1620 Targeting Calcium Dysregulation for Treatment of Dementia in Alzheimer's Disease
Authors: Huafeng Wei
Abstract:
Alzheimer's disease (AD) is the leading cause of dementia internationally, without effective treatments. Increasing evidence suggests that disruption of intracellular calcium homeostasis, primarily pathological elevation of cytosolic and mitochondrial calcium concentrations and reduction of endoplasmic reticulum (ER) calcium concentrations, plays critical upstream roles in the multiple pathologies and the associated neurodegeneration, impaired neurogenesis, and synaptic and cognitive dysfunction observed in various AD preclinical studies. The most recent Food and Drug Administration (FDA)-approved drug for AD dementia treatment, memantine, exerts its therapeutic effects by ameliorating N-methyl-D-aspartate (NMDA) glutamate receptor overactivation and the subsequent calcium dysregulation. More research is needed to develop other drugs targeting calcium dysregulation at multiple pharmacological sites of action for future effective AD dementia treatment. In particular, calcium channel blockers used to treat hypertension, and dantrolene, used to treat muscle spasm and malignant hyperthermia, could be repurposed for this purpose. In our own research, intranasal administration of dantrolene significantly increased its brain concentrations and durations, rendering it a more effective therapeutic drug with fewer side effects for chronic AD dementia treatment. This review summarizes the progress of various studies repurposing drugs that target calcium dysregulation as potential disease-modifying treatments for AD dementia.
Keywords: Alzheimer's, calcium, cognitive dysfunction, dementia, neurodegeneration, neurogenesis
Procedia PDF Downloads 187
1619 Multibody Constrained Dynamics of Y-Method Installation System for a Large Scale Subsea Equipment
Authors: Naeem Ullah, Menglan Duan, Mac Darlington Uche Onuoha
Abstract:
The lowering of subsea equipment into deep waters is a challenging job due to the harsh offshore environment. Many researchers have introduced various installation systems to deploy payloads safely into the deep oceans. In general practice, dual floating vessels are not employed, owing to the prevalent safety risks and hazards caused by the ever-increasing dynamical effects arising from the mutual interaction between the bodies. However, on favorable grounds such as economy, the Y-method, in which two conventional tugboats support the equipment by two independent strands connected to a tri-plate above the equipment, has been employed to study the multibody dynamics of dual-barge lifting operations. In this study, the two tugboats and the suspended payload (the Y-method) are deployed for the lowering of subsea equipment into deep waters as a multibody dynamic system, with two wire ropes used for the lifting and installation operation. Six degrees of freedom (6-dof) are considered for each body, establishing a coupled 18-dof multibody model via the embedding, or velocity transformation, technique. The fundamental advantage of this technique is that the constraint forces are eliminated directly, and no extra computational effort is required for their elimination. The inertial frame of reference is taken at the surface of the water as the time-independent frame, and floating frames of reference are introduced in each body as time-dependent frames in order to formulate the velocity transformation matrix. The local transformation of the generalized coordinates to the inertial frame of reference is executed using the Euler angle approach. Spherical joints are articulated among the bodies as the kinematic joints. The hydrodynamic force, the two strand forces, the hydrostatic force, and the mooring forces are taken into consideration as the external forces. The radiation part of the hydrodynamic force is obtained from the Cummins equation, while the wave-exciting part is obtained from force response amplitude operators (RAOs) computed with the commercial solver OpenFOAM. The strand force is obtained by treating the wire rope as an elastic spring. The nonlinear hydrostatic force is obtained by pressure integration at each time step of the wave motion. The mooring forces are evaluated using Faltinsen's analytical approach. The fourth-order Runge-Kutta method is employed to integrate the coupled equations of motion of the 18-dof multibody model. The results are correlated with a simulated OrcaFlex model; moreover, the OrcaFlex results are compared with the MOSES model from previous studies. The multibody dynamics of the single-barge lifting operation from former studies are compared with those of the established dual-barge lifting operation, and the dynamics of the dual-barge operation are found to be larger in magnitude. It is noticed that the traction at the top connection point of the cable decreases with increasing length and becomes almost constant after passing through the splash zone.
Keywords: dual barge lifting operation, Y-method, multibody dynamics, shipbuilding, installation of subsea equipment
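The fourth-order Runge-Kutta scheme used to integrate the coupled equations of motion can be sketched as below. For brevity, a single-dof damped oscillator stands in for the 18-dof lifting system; the stepping logic generalizes to any first-order form dy/dt = f(t, y).

```python
# Classic RK4 stepping for equations of motion in first-order state form.
import numpy as np

def f(t, y):
    # y = [position, velocity]; a damped oscillator as a stand-in system.
    m, c, k = 1.0, 0.3, 4.0
    return np.array([y[1], (-c * y[1] - k * y[0]) / m])

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

t, h = 0.0, 0.01
y = np.array([1.0, 0.0])        # initial heave offset, zero velocity
for _ in range(1000):           # 10 s of simulated motion
    y = rk4_step(f, t, y, h)
    t += h
print(f"state at t = {t:.1f} s: position {y[0]:+.4f}, velocity {y[1]:+.4f}")
```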
Procedia PDF Downloads 206
1618 An Architecture Based on Capsule Networks for the Identification of Handwritten Signature Forgery
Authors: Luisa Mesquita Oliveira Ribeiro, Alexei Manso Correa Machado
Abstract:
The handwritten signature is a unique means of recognizing an individual, used to authenticate documents and to carry out investigations in the criminal, legal, and banking areas, among other applications. Signature verification is based on large amounts of biometric data, as signatures are simple and easy to acquire, among other characteristics. Given this scenario, signature forgery is a worldwide recurring problem, and fast and precise techniques are needed to prevent crimes of this nature. This article presents a study of the efficiency of the Capsule Network in analyzing and recognizing signatures. The chosen architecture achieved an accuracy of 98.11% and 80.15% for the CEDAR and GPDS databases, respectively.
Keywords: biometrics, deep learning, handwriting, signature forgery
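Two defining ingredients of a capsule network (following Sabour et al., 2017) are the "squash" nonlinearity, which keeps capsule vector lengths in [0, 1), and the margin loss used for classification. The sketch below illustrates both; it is not the authors' full signature-verification architecture.

```python
# Capsule building blocks: squash nonlinearity and margin loss.
import torch

def squash(s, dim=-1, eps=1e-8):
    norm2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)

def margin_loss(lengths, labels, m_pos=0.9, m_neg=0.1, lam=0.5):
    # lengths: (batch, classes) capsule lengths; labels: one-hot targets.
    pos = labels * torch.clamp(m_pos - lengths, min=0) ** 2
    neg = lam * (1 - labels) * torch.clamp(lengths - m_neg, min=0) ** 2
    return (pos + neg).sum(dim=1).mean()

caps = squash(torch.randn(4, 2, 16))        # 4 samples, 2 classes, 16-d capsules
lengths = caps.norm(dim=-1)                 # class confidence ~ vector length
labels = torch.eye(2)[torch.tensor([0, 1, 1, 0])]  # genuine vs. forged (toy)
print(f"margin loss: {margin_loss(lengths, labels).item():.4f}")
```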
Procedia PDF Downloads 87
1617 Discussion of Blackness in Wrestling
Authors: Jason Michael Crozier
Abstract:
The wrestling territories of the mid-twentieth century in the United States are widely considered the birthplace of modern professional wrestling and, by many professional wrestlers, a beacon of hope for the easing of racial tensions during the civil rights era and beyond. The performers writing on this period speak of racial equality but fail to acknowledge the exploitation of black athletes as racialized capital commodities who suffered the challenges of systemic racism, codified by a false narrative of aspirational exceptionalism and of equality measured by audience diversity. The promoters' ability to equate racial and capital exploitation with equality leads to a broader discussion of the history of Muscular Christianity in the United States and the exploitation of black bodies. Narratives of racial erasure that dominate the historical discourse on athleticism and exceptionalism have redefined how blackness exists and how physicality and race are conceived of in sport and entertainment spaces. When discussing the implications of race in professional wrestling, it is important to examine the role of promotions as 'imagined communities' where the social agency of wrestlers is defined and quantified based on their 'desired elements' as performers. The intentionally vague nature of this language masks a deep history of racialization that has been perpetuated by promoters and never fully examined by scholars. Sympathetic racism and the omission of cultural identity are also key factors in the limitations and racial barriers placed upon black athletes in the squared circle. The use of sympathetic racism within professional wrestling during the twentieth century placed black athletes into two distinct categories: the 'black savage' or the 'black minstrel'. Black wrestlers of the twentieth century were defined by their strength as a capital commodity and by their physicality rather than their knowledge of the business and in-ring skill. These performers had little agency to shape their own character development inside and outside the ring. Promoters would often create personas that heavily racialized the performer by tying them to a regional past or memory, such as that of slavery in the Deep South, using dog collar matches and adorning black characters with chains. Promoters softened cultural memory by satirizing the historical legacy of slavery and black identity.
Keywords: sympathetic racism, social agency, racial commodification, stereotyping
Procedia PDF Downloads 137
1616 Effect of Wind Braces to Earthquake Resistance of Steel Structures
Authors: H. Gokdemir
Abstract:
All structures are subject to vertical and lateral loads. Under these loads, structures deform, and the deformations of structural elements must not exceed their capacity if structural stability is to be maintained. Lateral loads in particular cause critical deformations because of their random directions and magnitudes. Wind load is one such lateral load, able to act in any direction with any magnitude. Although wind has nearly no effect on reinforced concrete structures, it must be considered for steel structures, roof systems, and slender structures such as minarets. Therefore, every structure must be able to resist wind loads acting parallel and perpendicular to any side. One effective method of resisting lateral loads is to assemble cross steel elements between columns, known as wind bracing. These cross elements increase the lateral rigidity of a structure and prevent the deformation capacity of the structural system from being exceeded. This means that cross elements are also effective in resisting earthquake loads. In this paper, the effects of wind bracing on the earthquake resistance of structures are studied. Structural models (with and without wind bracing) are generated, and these models are solved under both earthquake and wind loads with different seismic zone parameters. The calculations lead to the conclusion that in low seismic risk zones, wind bracing can easily resist earthquake loads, and no additional reinforcement for earthquake loads is necessary. Similarly, in high seismic risk zones, cross elements designed for earthquakes resist wind loads too.
Keywords: wind bracings, earthquake, steel structures, vertical and lateral loads
Procedia PDF Downloads 474
1615 Increasing Efficiency, Performance and Safety of Aircraft during Takeoff and Landing by Interpreting Electromagnetism
Authors: Sambit Supriya Dash
Abstract:
The aerospace industry has evolved over the last century and continues to grow by moving toward more fuel-efficient, cheaper, simpler, more convenient, and safer flight stages. In this paper, aircraft accident records are studied; about 71% of accidents are found to occur on runways during takeoff and landing. By introducing the concept of interpreting electromagnetism, the causes of bounced touchdown and flare failure, such as landing impact loads and instability, could be eliminated. During takeoff, the rate of fuel consumption is observed to be at its maximum; by applying the concept of interpreting electromagnetism, a remarkable amount of fuel is saved, which can be used in an emergency due to lack of fuel or in the case of an extended flight. A complete setup of the concept and its effects and characteristics are studied and illustrated with references to a few popular aircraft. A series of strong, controlled electromagnets is embedded below the runway, along and beside the centre line, aligned with the line of force acting through the wing-fuselage aerodynamic centre. By virtue of their controllable strength, they can contribute to the performance and fuel efficiency of the aircraft. This ensures a perfect takeoff with less fuel consumption, followed by a safe cruise stage, which in turn ensures a short and safe landing, eliminating the known failures due to bounced touchdowns and flare failure.
Keywords: efficiency, electromagnetism, performance, reduced fuel consumption, safety
Procedia PDF Downloads 236
1614 Exploring the Synergistic Effects of Aerobic Exercise and Cinnamon Extract on Metabolic Markers in Insulin-Resistant Rats through Advanced Machine Learning and Deep Learning Techniques
Authors: Masoomeh Alsadat Mirshafaei
Abstract:
The present study explores the effect of an 8-week aerobic training regimen combined with cinnamon extract on serum irisin and leptin levels in insulin-resistant rats. Additionally, this research leverages various machine learning (ML) and deep learning (DL) algorithms to model the complex interdependencies between exercise, nutrition, and metabolic markers, offering a novel approach to obesity and diabetes research. Forty-eight Wistar rats were randomly divided into four groups: control, training, cinnamon, and training plus cinnamon. The training protocol was conducted over 8 weeks, with sessions 5 days a week at 75-80% VO2 max. The cinnamon and training-cinnamon groups were injected with 200 ml/kg/day of cinnamon extract. Data analysis included serum data, dietary intake, exercise intensity, and metabolic response variables, with blood samples collected 72 hours after the final training session. The dataset was analyzed using one-way ANOVA (P<0.05) and fed into various ML and DL models, including Support Vector Machines (SVM), Random Forest (RF), and Convolutional Neural Networks (CNN). Traditional statistical methods indicated that aerobic training, with and without cinnamon extract, significantly increased serum irisin and decreased leptin levels. Among the algorithms, the CNN model performed best at identifying specific interactions between cinnamon extract concentration and exercise intensity, optimizing the increase in irisin and the decrease in leptin. The CNN model achieved an accuracy of 92%, outperforming the SVM (85%) and RF (88%) models in predicting the optimal conditions for metabolic marker improvements. The study demonstrated that advanced ML and DL techniques can uncover nuanced relationships and potential cellular responses to exercise and dietary supplements that are not evident through traditional methods. These findings advocate the integration of advanced analytical techniques in nutritional science and exercise physiology, paving the way for personalized health interventions in managing obesity and diabetes.
Keywords: aerobic training, cinnamon extract, insulin resistance, irisin, leptin, convolutional neural networks, exercise physiology, support vector machines, random forest
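The model-comparison step described above can be sketched with scikit-learn, benchmarking an SVM against a random forest by cross-validated accuracy. The feature matrix here is synthetic; the study's real inputs were serum, dietary, and exercise variables.

```python
# Cross-validated accuracy comparison of two classifiers on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=240, n_features=8, random_state=0)

models = [
    ("SVM", SVC(kernel="rbf", C=1.0)),
    ("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0)),
]
for name, clf in models:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean accuracy = {acc:.3f}")
```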
Procedia PDF Downloads 44
1613 Mathematical Modeling of the Operating Process and a Method to Determine the Design Parameters in an Electromagnetic Hammer Using Solenoid Electromagnets
Authors: Song Hyok Choe
Abstract:
This study presents a method to determine the optimum design parameters of a manual electromagnetic hammer using solenoid electromagnets, based on a mathematical model of its operating process. The operating process of the electromagnetic hammer depends on the circuit scheme of the power controller. Mathematical modeling of the operating process was carried out by considering the energy transfer in the forward and reverse windings and the electromagnetic force acting on the impact and brake pistons. Using the developed mathematical model, the initial design data of the manual electromagnetic hammer proposed in this paper were encoded and analyzed in Matlab. A measurement experiment was also carried out to check the accuracy of the developed mathematical model. The relative errors of the analytical results for the measured stroke distance of the impact piston, the peak forward stroke current, and the peak reverse stroke current were −4.65%, 9.08%, and 9.35%, respectively. It was thus shown that the mathematical model of the operating process of the electromagnetic hammer is reasonably accurate and can be used to determine its design parameters. Accordingly, the design parameters that provide the required impact energy in the manual electromagnetic hammer were determined using the developed model. The proposed method will be used for the further design and development of various types of percussion rock drills.
Keywords: solenoid electromagnet, electromagnetic hammer, stone processing, mathematical modeling
Procedia PDF Downloads 52
1612 Understanding Seismic Behavior of Masonry Buildings in Earthquake
Authors: Alireza Mirzaee, Soosan Abdollahi, Mohammad Abdollahi
Abstract:
Unreinforced masonry (URM) walls are vulnerable to horizontal loads such as wind and seismic loading. This is due to the low tensile strength of masonry and of the mortar connections between the brick units. URM structures are still widely used in the world as infill walls and are commonly constructed with door and window openings. This research investigates the behavior of a URM wall with openings under horizontal load and develops the load-drift relationship of the wall. The finite element (FE) method was chosen to numerically simulate the behavior of URM with openings; ABAQUS, commercially available FE software with an explicit solver, was employed. To ensure that the numerical model accurately represents the behavior of a URM wall, the model was validated for a URM wall without openings using available experimental results: the load-displacement relationship of the numerical model agrees well with the experiments, and the FE model reproduces the same load-displacement curve shape. After validating the model, a parametric study was conducted on URM walls with openings to investigate the influence of the opening area and the pre-compressive load on the horizontal load capacity of the wall. The results showed that increasing the opening area decreases the wall's capacity to resist horizontal loading. It is also clearly observed that the capacity of the wall increases with the pre-compressive load applied at its top.
Keywords: masonry constructions, performance at earthquake, MSJC-08 (ASD), bearing wall, tie-column
Procedia PDF Downloads 253
1611 Fatigue Life Evaluation of Al6061/Al2O3 and Al6061/SiC Composites under Uniaxial and Multiaxial Loading Conditions
Authors: C. E. Sutton, A. Varvani-Farahani
Abstract:
Fatigue damage and life prediction of particle metal matrix composites (PMMCs) under uniaxial and multiaxial loading conditions were investigated. Three PMM composite materials, Al6061/Al2O3/20p-T6, Al6061/Al2O3/22p-T6, and Al6061/SiC/17w-T6, tested under tensile, torsion, and combined tension-torsion fatigue cycling, were evaluated with various fatigue damage models. The fatigue damage models of Smith-Watson-Topper (SWT), Ellyin, Brown-Miller, Fatemi-Socie, and Varvani were compared for their capability to assess the fatigue damage of materials undergoing various loading conditions. Fatigue life prediction results were then evaluated by implementing material-dependent coefficients that factored the effects of the particle reinforcement into the earlier developed Varvani model. The critical plane-energy approach takes the critical plane as the plane of crack initiation and early-stage crack growth. The strain energy density is calculated on the critical plane, incorporating the stress and strain components acting on that plane. This approach successfully evaluated fatigue damage values versus fatigue lives within a narrower band, for both uniaxial and multiaxial loading conditions, than the other damage approaches studied in this paper.
Keywords: fatigue damage, life prediction, critical plane approach, energy approach, PMM composites
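Of the compared models, the Smith-Watson-Topper parameter is the simplest to illustrate: SWT = σ_max·ε_a, equated to the material life curve (σ_f'²/E)(2N)^(2b) + σ_f'ε_f'(2N)^(b+c) to solve for cycles to failure N. The sketch below uses hypothetical aluminium-alloy constants, not the PMMC properties from the study.

```python
# SWT damage parameter and a numerical life solve on the strain-life curve.
from scipy.optimize import brentq

E, sf, ef, b, c = 72e3, 800.0, 0.35, -0.09, -0.65   # MPa; hypothetical values

def swt_life_curve(N):
    return (sf ** 2 / E) * (2 * N) ** (2 * b) + sf * ef * (2 * N) ** (b + c)

sigma_max, eps_a = 300.0, 0.004       # max stress (MPa), strain amplitude (toy)
swt = sigma_max * eps_a               # damage parameter on the critical plane

# Solve swt_life_curve(N) = swt for the number of cycles to failure.
N_f = brentq(lambda N: swt_life_curve(N) - swt, 1e1, 1e9)
print(f"SWT = {swt:.3f} MPa -> predicted life ≈ {N_f:,.0f} cycles")
```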
Procedia PDF Downloads 404
1610 Teachers' Design and Implementation of Collaborative Learning Tasks in Higher Education
Authors: Bing Xu, Kerry Lee, Jason M. Stephen
Abstract:
Collaborative learning (CL) has been regarded as a way to help students gain knowledge and improve social skills. In China, lecturers in higher education institutions have commonly adopted CL in their daily practice. However, such a strategy cannot be effective when designed and applied inappropriately. Previous research has hardly focused on how CL is applied in Chinese universities. The present study aims to gain a deep understanding of how Chinese lecturers design and implement CL tasks. The researchers interviewed ten lecturers from different faculties in various universities in China and used the Group Learning Activity Instructional Design (GLAID) framework to analyse the data. We found that not all lecturers pay enough attention to the eight essential components proposed by GLAID when designing CL tasks, especially the components of Structure and Guidance. Meanwhile, only a small proportion of lecturers used formative assessment to help students improve learning. We also discuss the strengths and limitations of CL design and provide suggestions to lecturers who intend to use CL in class. Research Objectives: The aims of the present research are threefold. We intend to 1) gain a deep understanding of how Chinese lecturers design and implement collaborative learning (CL) tasks, 2) identify strengths and limitations of CL design in higher education, and 3) give suggestions on how to improve the design and implementation. Research Methods: This research adopted qualitative methods. We used semi-structured interviews to ask ten Chinese lecturers how they designed and implemented CL tasks in their courses. The interview protocol comprised 9 questions focusing on the eight components of GLAID. Underpinned by the GLAID framework, we then utilized the coding reliability thematic analysis method to analyse the research data. The coding was done by two PhD students whose research field is CL; Cohen's Kappa was 0.772, indicating good inter-coder reliability. Contribution: Though CL has been commonly adopted in China, few studies have paid attention to the details of how lecturers design and implement CL tasks in practice. This research addresses that gap and found that not all lecturers were aware of how to design CL; many found it difficult to structure the task and guide the students in collaboration, and thus to ensure student engagement in CL. In summary, this research advocates for teacher training; otherwise, students may not attain the expected learning outcomes.
Keywords: collaborative learning, higher education, task design, GLAID framework
Procedia PDF Downloads 102
1609 Prediction of Flow Around a NACA 0015 Profile
Authors: Boukhadia Karima
Abstract:
Fluid mechanics is the study of the laws of fluid motion and of fluids' interaction with solid bodies. This project illustrates that interaction through in-depth studies validated by experiments in the TE44 wind tunnel, ensuring the efficiency, accuracy, and reliability of the tests on a NACA0015 profile. A symmetric NACA0015 was placed in a subsonic wind tunnel, and measurements were made of the pressure on the upper and lower surfaces of the wing and of the velocity across the vortex trailing downstream from the tip of the wing. The aim of this work is to investigate experimentally the pressure distribution over the profile in a free airflow and the aerodynamic forces acting on it. The addition of a rounded lateral edge to the wing tip was found to eliminate the secondary vortex near the wing tip but had little effect on the downstream characteristics of the trailing vortex. The increase in wing lift near the tip due to the presence of the trailing vortex was evident in the surface pressure but was not captured by circulation-box measurements. The circumferential velocity within the vortex was found to reach free-stream values and produce high core rotational speeds. Near the wing, the trailing vortex is asymmetric and contains definite zones where the streamwise velocity both exceeds and falls behind the free-stream value. When referenced to the free-stream velocity, the maximum vertical velocity of the vortex is directly dependent on α and independent of Re. A numerical study was also conducted with the CFD code FLUENT 6.0, and the results are compared with the experiments.
Keywords: CFD code, NACA profile, detachment, angle of incidence, wind tunnel
Procedia PDF Downloads 414
1608 Depth Estimation in DNN Using Stereo Thermal Image Pairs
Authors: Ahmet Faruk Akyuz, Hasan Sakir Bilge
Abstract:
Depth estimation using stereo images is a challenging problem in computer vision, and many studies have addressed it. With advances in machine learning, the problem is now often tackled with neural network-based solutions. The images used in these studies are mostly in the visible spectrum. However, the need to use the infrared (IR) spectrum for depth estimation has emerged, because it gives better results than the visible spectrum under some conditions. We therefore recommend using thermal-thermal (IR) image pairs for depth estimation. In this study, we used two well-known networks (PSMNet, FADNet) with minor modifications to demonstrate the viability of this idea.
Keywords: thermal stereo matching, deep neural networks, CNN, depth estimation
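Whether the disparity map comes from PSMNet, FADNet, or classic matching, the final depth recovery rests on the geometric relation depth = focal_length × baseline / disparity. A short sketch follows; the camera parameters are illustrative, not those of the thermal rig in the paper.

```python
# Disparity-to-depth conversion for a rectified stereo pair.
import numpy as np

focal_px = 640.0      # focal length in pixels (assumed)
baseline_m = 0.20     # distance between the two thermal cameras (assumed)

disparity = np.array([80.0, 40.0, 16.0, 8.0])   # disparities in pixels
depth = focal_px * baseline_m / disparity       # depths in metres
for d, z in zip(disparity, depth):
    print(f"disparity {d:5.1f} px -> depth {z:5.2f} m")
```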
Procedia PDF Downloads 284
1607 Global Modeling of Drill String Dragging and Buckling in 3D Curvilinear Bore-Holes
Authors: Valery Gulyayev, Sergey Glazunov, Elena Andrusenko, Nataliya Shlyun
Abstract:
Enhancement of the technology and techniques for drilling deep directional oil and gas wells is of essential industrial significance because these wells make it possible to increase productivity and output. They are generally used for drilling in hard and shale formations, which is why their drilling processes are accompanied by emergencies and failures. As practice corroborates, the principal drawback in drilling long curvilinear bore-wells is the need to overcome substantial force hindrances caused by the simultaneous action of gravity, contact, and friction forces. These forces depend primarily on the technological regime, the drill string stiffness, and the bore-hole tortuosity and length. They can lead to Eulerian buckling of the drill string and to its sticking. To predict and exclude these states, special mathematical models and computer simulation methods should play a dominant role. At the same time, one might note that these mechanical phenomena are very complex, and only simplified approaches ('soft-string drag and torque models') are commonly used for their analysis. Considering that the cost of directional wells now increases substantially with the complexity of their geometry and their length, the price of mistakes in simulating drill string behavior with simplified approaches can be very high, so the problem of elaborating correct software is urgent. This paper deals with simulating the regimes of drilling deep curvilinear bore-wells with prescribed imperfect geometries of their axial lines. On the basis of the theory of curvilinear flexible elastic rods, methods of differential geometry, and numerical analysis methods, a 3D 'stiff-string drag and torque model' of drill string bending and the appropriate software are elaborated for simulating tripping-in, tripping-out, and drilling operations. Computer calculations show that the contact and friction forces can be calculated and regulated to provide predesigned trouble-free modes of operation. The elaborated mathematical models and software can be used to predict and exclude emergency situations at the design and realization stages of the drilling process.
Keywords: curvilinear drilling, drill string tripping in and out, contact forces, resistance forces
Procedia PDF Downloads 149