Search results for: deep work
14648 A Metallography Study of Secondary A226 Aluminium Alloy Used in Automotive Industries
Authors: Lenka Hurtalová, Eva Tillová, Mária Chalupová, Juraj Belan, Milan Uhríčik
Abstract:
The secondary alloy A226 is used for many automotive castings produced by mould casting and high-pressure die-casting. This alloy has excellent castability, good mechanical properties, and cost-effectiveness. Production of primary aluminium alloys is a heavy source of environmental pollution, and the European Union's calls for emission reduction and lower energy consumption are therefore increasing the production of recycled (secondary) aluminium cast alloys. This contribution deals with the influence of recycling on the quality of castings made from A226 in the automotive industry. The properties of castings made from the secondary aluminium alloy were compared with the required properties of primary aluminium alloys. The effect of recycling on microstructure was observed using a combination of different analytical techniques (light microscopy after black-white etching, scanning electron microscopy (SEM) after deep etching, and energy-dispersive X-ray analysis (EDX)). These techniques were used to identify the various structural parameters by which the secondary alloy microstructure was compared with the primary alloy microstructure. Keywords: A226 secondary aluminium alloy, deep etching, mechanical properties, recycling foundry aluminium alloy
Procedia PDF Downloads 541
14647 Elevated Temperature Shot Peening for M50 Steel
Authors: Xinxin Ma, Guangze Tang, Shuxin Yang, Jinguang He, Fan Zhang, Peiling Sun, Ming Liu, Minyu Sun, Liqin Wang
Abstract:
As a traditional surface hardening technique, shot peening is widely used in industry. Shot peening forms a residual compressive stress in the surface, which is beneficial for improving the fatigue life of metal materials. At the same time, very fine grains and high-density defects are generated in the surface layer, which enhances the surface hardness as well. However, most of these processes are carried out at room temperature, and for a high-strength steel such as M50 the thickness of the strengthened layer is limited. In order to obtain a thicker strengthened surface layer, elevated-temperature shot peening was carried out in this work using Φ1 mm cast iron balls at a speed of 80 m/s. Considering that the tempering temperature of M50 steel is about 550 °C, the processing temperature was in the range of 300 to 500 °C. The effect of the processing temperature and processing time on the distribution of residual stress and on the surface hardness was investigated. The working temperature of M50 steel can be as high as 315 °C, and because the defects formed by shot peening become unstable at higher working temperatures, it is worth understanding what happens during the shot peening process and what happens when the strengthened samples are held at a given temperature. In this work, the shot peening time was varied from 2 to 10 min, and after the strengthening process the samples were annealed at temperatures from 200 to 500 °C for up to 60 h. The results show that the maximum residual compressive stress is near 900 MPa. Compared with room-temperature shot peening, the strengthened layer of the 500 °C sample is about twice as deep. The surface hardness increases with the processing temperature, while the saturation peening time decreases. After annealing, the residual compressive stress decreases; however, for the 500 °C peened sample, even after annealing at 500 °C for 20 h the residual compressive stress is still over 600 MPa. Moreover, SEM shows that the grain size in the surface layer remains very small. Keywords: shot peening, M50 steel, residual compressive stress, elevated temperature
Procedia PDF Downloads 456
14646 Image Processing-Based Maize Disease Detection Using Mobile Application
Authors: Nathenal Thomas
Abstract:
Corn, also known as maize (scientific name Zea mays subsp. mays), is a widely produced agricultural product in the food chain and in many other applications. Corn is highly adaptable: it comes in many different types, is employed in many industrial processes, and tolerates a wide range of agro-climatic conditions. In Ethiopia, maize is among the most widely grown crops, and small-scale corn farming may be a household's only source of food. These facts show that the country's demand for this crop is very high while, conversely, its productivity is low for a variety of reasons. The most damaging factor contributing to this imbalance between supply and demand is corn disease, and the failure to diagnose diseases in maize plants until it is too late is one of the most important factors limiting crop output in Ethiopia. This study aids the early detection of such diseases and supports farmers during cultivation, directly affecting the amount of maize produced. Diseases of maize plants, such as northern leaf blight and cercospora leaf spot, have distinct, visible symptoms. This study aims to detect the most frequent and damaging maize diseases using deep learning, an efficient subset of machine learning, applied to image processing. Deep learning uses networks that can be trained from unlabeled data without supervision (unsupervised), simulating the processes the human brain goes through when digesting data; its applications include speech recognition, language translation, object classification, and decision-making. The Convolutional Neural Network (CNN), also known as a ConvNet, is a deep learning class widely used for image classification, object detection, face recognition, and related problems. This research uses a CNN as the state-of-the-art algorithm to detect maize diseases from photographs of maize leaves taken with a mobile phone. Keywords: CNN, zea mays subsp, leaf blight, cercospora leaf spot
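The convolution-ReLU-pooling pipeline at the heart of a CNN classifier like the one described can be sketched in a few lines (a generic numpy illustration with made-up kernel weights, not the authors' trained model):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation: the core CNN operation."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, s=2):
    H, W = x.shape
    return x[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s).max(axis=(1, 3))

# Toy 6x6 "leaf patch" with a left-to-right gradient and an edge kernel
img = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
feat = max_pool(relu(conv2d(img, kernel)))
print(feat.shape)  # (2, 2)
```

A real classifier stacks many such layers with learned kernels and ends in a softmax over the disease classes.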
Procedia PDF Downloads 74
14645 Basic Research on Applying Temporary Work Engineering at the Design Phase
Authors: Jin Woong Lee, Kyuman Cho, Taehoon Kim
Abstract:
The application of constructability is increasingly required not only in the construction phase but throughout the whole project. In particular, properly applying construction experience and knowledge during the design phase minimizes inefficiencies such as design changes and improves constructability during the construction phase. To apply this knowledge effectively, engineering efforts should proceed alongside design progress. Among many engineering technologies, engineering for temporary works, including facilities, equipment, and related construction methods, is important for improving constructability. Therefore, as basic research, this study investigates the applicability of temporary work engineering during the design phase in the building construction industry. The results indicate that applying temporary work engineering has a substantial impact on construction cost reduction and constructability improvement. In contrast to the existing design-bid-build method, the turnkey and CM (construction management) procurement methods currently being implemented in Korea are expected to significantly shape the direction of temporary work engineering. To introduce temporary work engineering, training for expert and professional organizations is required first, and the lack of client awareness should be addressed. The results of this study are expected to be useful as reference material for developing more effective temporary work engineering tasks and work processes in the future. Keywords: Temporary Work Engineering, Design Phase, Constructability, Building Construction
Procedia PDF Downloads 386
14644 Towards Long-Range Pixels Connection for Context-Aware Semantic Segmentation
Authors: Muhammad Zubair Khan, Yugyung Lee
Abstract:
Deep learning has recently achieved enormous success in semantic image segmentation. Previously developed U-Net-inspired architectures operate with repeated strided and pooling operations, leading to spatial data loss, and they lack long-range pixel connections that would preserve context knowledge and reduce spatial loss in prediction. This article develops an encoder-decoder architecture with bi-directional LSTMs embedded in long skip connections and densely connected convolution blocks. The network non-linearly combines the feature maps across encoder-decoder paths to find dependencies and correlations between image pixels. Additionally, the densely connected convolutional blocks are kept in the final encoding layer to reuse features and prevent redundant data sharing. The method applies batch normalization to reduce internal covariate shift in the data distributions. The empirical evidence shows a promising response for our method compared with other semantic segmentation techniques. Keywords: deep learning, semantic segmentation, image analysis, pixels connection, convolution neural network
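The bi-directional recurrence placed in the skip connections can be illustrated with a simplified tanh cell standing in for the LSTM (random, untrained weights; a real implementation would use full LSTM gates):

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_pass(xs, Wx, Wh):
    """One recurrence direction; a tanh cell stands in for the LSTM (sketch)."""
    h = np.zeros(Wh.shape[0])
    out = []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h)
        out.append(h)
    return np.stack(out)

def bidirectional(xs, Wx, Wh):
    """Concatenate forward and backward passes, as in a bi-directional LSTM."""
    fwd = rnn_pass(xs, Wx, Wh)
    bwd = rnn_pass(xs[::-1], Wx, Wh)[::-1]
    return np.concatenate([fwd, bwd], axis=1)

# One row of a skip-connection feature map: 10 positions, 4 channels each
xs = rng.normal(size=(10, 4))
Wx = 0.1 * rng.normal(size=(8, 4))   # input weights (random, untrained)
Wh = 0.1 * rng.normal(size=(8, 8))   # recurrent weights (random, untrained)
h = bidirectional(xs, Wx, Wh)
print(h.shape)  # (10, 16): each position sees context from both directions
```

The concatenated output is what lets every spatial position carry information from both ends of the row, which is the long-range connection the architecture aims for.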
Procedia PDF Downloads 102
14643 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequences read from the fragmented DNA and stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use and time-consuming, and they rely on a large number of parameters that often introduce variability and affect the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most such methods use the concept of word and sentence embeddings, which creates a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which a sequence is most likely to come; and (iv) training a multiple instance learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to each genome's influence on the prediction. Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks. Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine
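Steps (i) and (ii) of the pipeline can be sketched as follows (toy reads and random embeddings in place of the learned ones; the real vocabulary and embeddings come from training on millions of reads):

```python
import numpy as np

def kmers(read, k=4):
    """Step (i): tokenize a DNA read into overlapping k-mers ("words")."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

# Toy corpus of reads (real data: millions of reads parsed from fastq files)
reads = ["ACGTACGTAC", "TTGACGTACG", "ACGTTTGACG"]
vocab = sorted({km for r in reads for km in kmers(r)})
index = {km: i for i, km in enumerate(vocab)}

# Random k-mer embeddings stand in for the learned ones; step (ii) builds a
# read embedding, here simply the mean of the read's k-mer vectors
rng = np.random.default_rng(1)
E = rng.normal(size=(len(vocab), 8))

def read_embedding(read, k=4):
    return E[[index[km] for km in kmers(read, k)]].mean(axis=0)

X = np.stack([read_embedding(r) for r in reads])  # one vector per read
print(X.shape)  # (3, 8)
```

In the full approach, steps (iii) and (iv) consume these read vectors: a genome identifier groups them, and a multiple instance learning classifier maps the bag of vectors to a phenotype.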
Procedia PDF Downloads 125
14642 Sedimentological Study of Bivalve Fossils Site Locality in Hong Hoi Formation in Lampang, Thailand
Authors: Kritsada Moonpa, Kannipa Motanated, Weerapan Srichan
Abstract:
The Hong Hoi Formation is a Middle Triassic deep-marine succession exposed in outcrops throughout the Lampang Basin of northern Thailand. The primary goal of this research is to diagnose the paleoenvironment, petrographic composition, and sediment sources of the Hong Hoi Formation in Ban Huat, Ngao District. The Triassic Hong Hoi Formation was chosen because its outcrops are continuous and fossils are well exposed and abundant. The depositional environment is reconstructed through sedimentological studies along with facies analysis. The Hong Hoi Formation is petrographically divided into two major facies: sandstones with mudstone interbeds, and mudstones or shales with sandstone interbeds. Sandstone beds are lithic arenites and lithic greywackes, dominated by volcanic lithic fragments. Sedimentary structures, paleocurrent data, and the lithofacies arrangement indicate that the formation was deposited in a deep-marine abyssal plain environment. The sedimentological and petrographic features suggest that during deposition the Hong Hoi Formation received sediment from a nearby volcanic arc, indicating that intensive volcanic activity within the Sukhothai Arc during the Middle Triassic was the main sediment source. Keywords: Sukhothai zone, petrography, Hong Hoi formation, Lampang, Triassic
Procedia PDF Downloads 213
14641 Segmented Pupil Phasing with Deep Learning
Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan
Abstract:
Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows a large telescope to fit within a reduced volume (JWST) or an even smaller one (a standard CubeSat). CubeSats have tight constraints on the available computational budget and the allowed payload volume. At the same time, they undergo thermal gradients leading to large, evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet one of the challenges of wavefront sensing is the non-linearity between image intensity and phase aberrations; moreover, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested neural networks (NNs) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear, image-friendly problem solvers. Aims: We study in this paper the prospect of using an NN to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without a dedicated wavefront sensor. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution. To reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope's focal-plane detector, used for imaging, is also used as the wavefront sensor. In this work, we study a point source, i.e. the Point Spread Function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual WFE, i.e. below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computational time of less than 30 ms, which translates into a small computational burden. These results motivate further studies with higher aberrations and noise. Keywords: wavefront sensing, deep learning, deployable telescope, space telescope
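The focal-plane image that serves as the network's input can be sketched with scalar Fourier optics: the PSF is the squared modulus of the Fourier transform of the complex pupil function. Here a hypothetical piston error on one half of the aperture stands in for a segment phasing error (not the authors' optical model):

```python
import numpy as np

N = 64
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
pupil = (x**2 + y**2 < (N // 4)**2).astype(float)  # circular aperture

# Hypothetical phasing error: a 0.5 rad piston on the right half of the pupil
piston = np.where(x >= 0, 0.5, 0.0)

def psf(pupil, phase):
    """Focal-plane intensity |FFT(pupil * exp(i*phase))|^2: the NN input."""
    field = pupil * np.exp(1j * phase)
    return np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2

perfect = psf(pupil, np.zeros_like(piston))
aberrated = psf(pupil, piston)
strehl = aberrated.max() / perfect.max()  # peak drops when segments dephase
print(round(strehl, 3))
```

The network's task is essentially the inverse map: from such an intensity pattern back to the piston (and tip/tilt) values of each segment.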
Procedia PDF Downloads 104
14640 Bidirectional Long Short-Term Memory-Based Signal Detection for Orthogonal Frequency Division Multiplexing With All Index Modulation
Authors: Mahmut Yildirim
Abstract:
This paper proposes bidirectional long short-term memory (Bi-LSTM) network-aided deep learning (DL)-based signal detection for orthogonal frequency division multiplexing with all index modulation (OFDM-AIM), namely Bi-DeepAIM. OFDM-AIM was developed to increase the spectral efficiency of OFDM with index modulation (OFDM-IM), a promising multi-carrier technique for communication systems beyond 5G. Due to its strong classification ability, Bi-LSTM is considered an alternative to the maximum likelihood (ML) algorithm used for signal detection in the classical OFDM-AIM scheme. The performance of Bi-DeepAIM is compared with LSTM network-aided DL-based OFDM-AIM (DeepAIM) and classic OFDM-AIM with ML-based signal detection, using bit error rate (BER) performance and computational time as criteria. Simulation results show that Bi-DeepAIM achieves better BER performance than DeepAIM and lower computation time in signal detection than ML-AIM. Keywords: bidirectional long short-term memory, deep learning, maximum likelihood, OFDM with all index modulation, signal detection
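As background, the index-modulation idea underlying OFDM-IM (which OFDM-AIM extends by exploiting all activation patterns) can be sketched as a lookup from index bits to active-subcarrier patterns; the subblock size and number of active subcarriers here are illustrative choices, not the paper's parameters:

```python
from itertools import combinations
from math import comb, floor, log2

# In a subblock of n subcarriers, k are activated; the chosen pattern itself
# carries floor(log2(C(n, k))) bits in classic OFDM-IM (illustrative sizes)
n, k = 8, 4
patterns = list(combinations(range(n), k))
index_bits = floor(log2(comb(n, k)))
print(comb(n, k), index_bits)  # 70 patterns, 6 index bits per subblock

def map_index_bits(value):
    """Look up the activation pattern for an integer of index_bits bits."""
    assert 0 <= value < 2 ** index_bits
    return patterns[value]

print(map_index_bits(0))  # (0, 1, 2, 3): the first pattern in lexicographic order
```

The detector's job, whether ML or a (Bi-)LSTM, is to recover both the activation pattern and the symbols carried on the active subcarriers from the noisy received subblock.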
Procedia PDF Downloads 72
14639 Violence Detection and Tracking on Moving Surveillance Video Using Machine Learning Approach
Authors: Abe Degale D., Cheng Jian
Abstract:
Violent action recognition is crucial when creating automated video surveillance systems. In recent years, hand-crafted feature detectors have been the primary method for achieving violence detection, such as the recognition of fighting activity, and researchers have also looked into learning-based representational models. On benchmark datasets created especially for the detection of violent sequences in sports and movies, these methods produced good accuracy results; however, the Hockey dataset's videos with surveillance camera motion present challenges for these algorithms in learning discriminating features. Deep representation-based methods have shown success in image recognition and human activity detection. For the purpose of detecting violent images and identifying aggressive human behaviours, this research proposes a deep representation-based model using the transfer learning idea. The results show that the suggested approach outperforms state-of-the-art accuracy levels by learning the most discriminating features, attaining 99.34% and 99.98% accuracy on the Hockey and Movies datasets, respectively. Keywords: violence detection, faster RCNN, transfer learning, surveillance video
Procedia PDF Downloads 106
14638 Sea-Land Segmentation Method Based on the Transformer with Enhanced Edge Supervision
Authors: Lianzhong Zhang, Chao Huang
Abstract:
Sea-land segmentation is a basic step in many tasks, such as sea surface monitoring and ship detection. Existing sea-land segmentation algorithms have poor segmentation accuracy, and their parameter adjustment is cumbersome and struggles to meet actual needs. In addition, current sea-land segmentation adopts traditional deep learning models based on Convolutional Neural Networks (CNNs). The transformer architecture has achieved great success in the field of natural images, but its application to radar images is less studied. Therefore, this paper proposes a sea-land segmentation method based on the transformer architecture with strengthened edge supervision. It uses a self-attention mechanism with a gating strategy to better learn relative position bias, and it introduces an additional edge supervision branch. The decoder stage allows the feature information of the two branches to interact, thereby improving the edge precision of the sea-land segmentation. On the Gaofen-3 satellite image dataset, the experimental results show that the proposed method effectively improves the accuracy of sea-land segmentation, especially at sea-land edges. The mean IoU (Intersection over Union), edge precision, overall precision, and F1 score reach 96.36%, 84.54%, 99.74%, and 98.05%, respectively, which are superior to mainstream segmentation models and of high practical value. Keywords: SAR, sea-land segmentation, deep learning, transformer
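The reported metrics can be reproduced on a toy binary mask with a few lines of numpy (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def seg_metrics(pred, truth):
    """Pixel-wise IoU, precision, recall, F1 for a binary sea/land mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    iou = tp / (tp + fp + fn)
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec)
    return iou, prec, rec, f1

truth = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0]])
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 1, 0]])
iou, prec, rec, f1 = seg_metrics(pred, truth)
print(round(iou, 2), round(f1, 2))  # 0.6 0.75
```

Edge precision is the same precision computation restricted to pixels within a band around the ground-truth coastline.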
Procedia PDF Downloads 181
14637 Amplifying Sine Unit-Convolutional Neural Network: An Efficient Deep Architecture for Image Classification and Feature Visualizations
Authors: Jamshaid Ul Rahman, Faiza Makhdoom, Dianchen Lu
Abstract:
Activation functions play a decisive role in determining the capacity of deep neural networks (DNNs), as they enable neural networks to capture the inherent nonlinearities present in the data fed to them. Prior research on activation functions primarily focused on the utility of monotonic or non-oscillatory functions, until the Growing Cosine Unit (GCU) broke this taboo for a number of applications. In this paper, a Convolutional Neural Network (CNN) model named ASU-CNN is proposed, which utilizes the recently designed Amplifying Sine Unit (ASU) activation function across its layers. The effect of this non-monotonic, oscillatory function is inspected through feature-map visualizations from different convolutional layers. The proposed network is optimized with Adam using a fine-tuned learning rate. The network achieves promising results on both training and testing data for the classification of CIFAR-10. The experimental results affirm the computational feasibility and efficacy of the proposed model for tasks in the field of computer vision. Keywords: amplifying sine unit, activation function, convolutional neural networks, oscillatory activation, image classification, CIFAR-10
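The exact closed form of ASU is given in the paper; as a stand-in, the Growing Cosine Unit mentioned above, GCU(x) = x·cos(x), illustrates what distinguishes the oscillatory family from a monotonic activation such as ReLU:

```python
import numpy as np

def gcu(x):
    """Growing Cosine Unit: x * cos(x), the published oscillatory activation."""
    return x * np.cos(x)

def relu(x):
    return np.maximum(x, 0.0)

x = np.linspace(-6.0, 6.0, 1001)
# GCU is non-monotonic (its output rises and falls), unlike monotonic ReLU
print(bool(np.any(np.diff(gcu(x)) < 0)), bool(np.all(np.diff(relu(x)) >= 0)))  # True True
```

Dropping such an oscillatory function into a CNN is a one-line change per layer, which is what makes comparisons like ASU-CNN vs. a ReLU baseline straightforward.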
Procedia PDF Downloads 111
14636 Combining Work and Study: A Solution for Stronger University-Industry Linkage
Authors: Payam Najafi, Behnam Ebrahimi, Hamid Montazerolghaem, Safoura Akbari-Alavijeh, Rasoul Tarkesh Esfahani
Abstract:
The combination of work and study has recently gained much attention due to industries' crucial demand for skillfully trained youth. Nevertheless, the distance between university and industry makes this combination challenging. According to the OECD (2012), in most countries there is only a limited link between students' field of study and their area of work while studying. On the other hand, the high unemployment rates among the specialized workforce that are common in developing countries highlight the need to strengthen this relationship. The Innovative Center of Isfahan Chamber of Commerce has defined a project called 'POUYESH', which helps students find work opportunities related to their field of study and supports industries in supplying their needed workforce. The present research seeks to explore the effect of this running project, as a model of combining work and study, on the university-industry linkage. Keywords: work and study, university-industry linkage, POUYESH project, field of study
Procedia PDF Downloads 184
14635 Exploratory Analysis of A Review of Nonexistence Polarity in Native Speech
Authors: Deawan Rakin Ahamed Remal, Sinthia Chowdhury, Sharun Akter Khushbu, Sheak Rashed Haider Noori
Abstract:
Native speech-to-text synthesis has its own leverage for mankind. Speaking in different accents is common, but communication between people with two different accent types can be quite difficult, and the problem is motivated by the wrong perception of meaning that such differences can produce. Many automatic speech recognition systems have therefore been deployed to detect text. Overall, this paper reviews NSTTR (Native Speech Text to Text Recognition) synthesis compared with text-to-text recognition. The review exposes many text-to-text recognition systems that are at a very early stage of complying with native speech recognition, and discusses the progression of chatbots, linguistic theory, and the rule-based approach. In recent years, deep learning has become an overwhelming topic in text-to-text learning for detecting the nature of language. To the best of our knowledge, a huge number of people in the subcontinent speak the Bangla language but with different accents in different regions; this study therefore elaborates the contradictory discussions and achievements of existing works and the future needs of Bangla-language acoustic accents. Keywords: TTR, NSTTR, text to text recognition, deep learning, natural language processing
Procedia PDF Downloads 132
14634 Internal Family Systems Parts-Work: A Revolutionary Approach to Reducing Suicide Lethality
Authors: Bill D. Geis
Abstract:
Even with significantly increased spending, suicide rates continue to climb, with alarming increases among traditionally low-risk groups. This has caused clinicians and researchers to call for a complete rethinking of all assumptions about suicide prevention, assessment, and intervention. One form of therapy, Internal Family Systems Therapy (IFST), most familiar to trauma therapists and involving direct work with suicidal parts, affords tremendous promise for a meaningful and sustained diminishment of lethal suicide risk. Developed by Richard Schwartz, Internal Family Systems Therapy proposes that we are all greatly influenced by internal parts, frozen by developmental adversities, and that these often contradictory parts contribute invisibly to mood, distress, and behavior. In making research videos of patients from our database and discussing their suicide attempts, it is clear that many persons who attempt suicide are in altered states at the time of their attempt and influenced by factors other than conscious intent. Suicide intervention using this therapy involves direct work with suicidal parts and with other interacting parts that generate distress and despair. Internal Family Systems theory posits that deep experiences of pain, fear, aloneness, and distress are defended by a range of parts that attempt to contain the pain through internal activities that unwittingly push forward inhibition, fear, self-doubt, hopelessness, desires to cut and engage in destructive behavior, addictive behavior, and even suicidal actions. These suicidal parts are often created (and 'frozen') at young ages, and such very young parts do not understand the consequences of their influence. Experience suggests that suicidal parts can create impulsive risk behind the scenes when pain is high and emotional support is reduced, with significant crisis potential. This understanding of latent suicide risk is consistent with many of our video accounts of serious suicidal acts, compiled in a database of 1104 subjects. Since 2016, consent has been obtained and records kept for 23 highly suicidal patients with initial Intention-to-Die ratings (0 = no intent, 10 = conviction to die) between 5 and 10. Using IFST parts-work intervention, risk was reduced to 0-1 in 67% of these highly suicidal patients, and to 4 or lower in 83% of cases. There were no suicide deaths. Case illustrations will be offered. Keywords: suicide, internal family systems therapy, crisis management, suicide prevention
Procedia PDF Downloads 41
14633 Wave State of Self: Findings of Synchronistic Patterns in the Collective Unconscious
Authors: R. Dimitri Halley
Abstract:
The research within Jungian psychology presented here is on the wave state of Self. What has been discovered via shared dreaming, independently correlating dreams across dreamers, lies beyond the Self stage, in the deepest layer or wave state of Self: the very quantum ocean in which the Self archetype is embedded. A quantum wave, a rhyming of meaning constituting synergy across several dreamers, was discovered in dreams and in extensive shared dream work with small groups at a post-therapy stage. Within the format of shared dreaming, we find synergy patterns beyond what Jung called the Self archetype. Jung led us up to the phase of individuation and delivered the baton to Von Franz to work out the next, synchronistic stage, proposed here as the finding of the quantum patterns making up the wave state of Self. These enfolded synchronistic patterns have been found in the group format of shared dreaming among individuals approximating individuation, and their unfolding is carried by belief and faith. The reason for this format and operating system is that beyond therapy, in living reality, we find no science, no thinking or even awareness in the therapeutic sense, but rather a state of mental processing resembling a spiritual attitude. Thinking as such is linear and cannot contain the deepest layer of Self, the quantum core of the human being; it is self-reflection that is the container for the process at the wave state of Self. Observation locks us into an outside-in reactive flow from a first-person perspective, and hence toward the surface we see to believe, whereas here the direction of focus shifts to inside-out/intrinsic. The operating system, or language, at the wave level of Self is thus belief and synchronicity. Belief has up to now been almost the sole province of organized religions but was viewed by Jung as an inherent property of the process of individuation. The shared-dreaming stage of the synchronistic patterns forms a larger story constituting a deep connectivity unfolding around individual Selves. Dreams of independent dreamers form larger patterns that come together like puzzles into a larger story, and in this sense this group work builds on Jung as a post-individuation collective stage. Shared dream correlations will be presented, illustrating a larger story in terms of trails of shared synchronicity. Keywords: belief, shared dreaming, synchronistic patterns, wave state of self
Procedia PDF Downloads 196
14632 A Criterion for Evaluating Plastic Loads: Plastic Work-Tangent Criterion
Authors: Ying Zhang
Abstract:
In the ASME Boiler and Pressure Vessel Code, the plastic load is defined by applying the twice-elastic-slope (TES) criterion of plastic collapse to a characteristic load-deformation curve for the vessel. Several other plastic criteria, such as the tangent intersection (TI) criterion and the plastic work (PW) criterion, have been proposed in the literature, but all exhibit a practical limitation: it is difficult to define the load parameter for vessels subject to several combined loads. An alternative criterion, the plastic work-tangent (PWT) criterion, for evaluating the plastic load in pressure vessel design by analysis is presented in this paper. According to the plastic work-load curve, when the tangent variation is less than a given value in the plastic phase, the corresponding load is the plastic load. Application of the proposed criterion is illustrated by considering the elastic-plastic response of the lower head of a reactor pressure vessel (RPV) and an RPV nozzle intersection. It is proposed that the PWT criterion more fully represents the constraining effect of material strain hardening on the spread of plastic deformation and is more efficient for evaluating the plastic load. Keywords: plastic load, plastic work, strain hardening, plastic work-tangent criterion
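The criterion can be sketched numerically on a synthetic plastic work-load curve (the curve shape, yield load, and tolerance below are illustrative assumptions, not the paper's RPV data):

```python
import numpy as np

# Synthetic plastic work vs load curve (illustrative only: zero plastic work
# in the elastic range, quadratic growth past a hypothetical yield load 0.6)
P = np.linspace(0.0, 1.0, 200)                      # normalized load parameter
W = np.where(P < 0.6, 0.0, 10.0 * (P - 0.6) ** 2)   # plastic work

t = np.gradient(W, P)                               # tangent dW/dP
rel_var = np.abs(np.diff(t)) / np.maximum(t[1:], 1e-12)

# PWT sketch: take as plastic load the first load at which the tangent
# variation falls below a chosen tolerance within the plastic phase
tol = 0.02
candidates = P[1:][(t[1:] > 1e-6) & (rel_var < tol)]
P_plastic = candidates[0]
print(round(P_plastic, 3))
```

In a real analysis, P and W would come from an elastic-plastic finite element run, and the tolerance would be calibrated against the TES/TI results.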
Procedia PDF Downloads 355
14631 Empirical Correlation for Measurement of Thermal Diffusivity of Spherical Shaped Food Products under Forced Convection Environment
Authors: M. Riaz, Inamur Rehman, Abhishek Sharma
Abstract:
The present work develops an experimental method for determining the variation of thermal diffusivity with temperature for selected regular-shaped solid fruits and vegetables subjected to forced-convection cooling. Experimental investigations were carried out on the chosen samples (potato and brinjal), which are of approximately spherical geometry. The variation of temperature within the food product was measured at several locations from the centre to the skin, under a forced-convection environment, using a deep freezer maintained at -10 °C. The method uses the one-dimensional Fourier equation applied to regular shapes; for this, the experimental temperature data obtained from cylindrical and spherical products during pre-cooling were utilised. Such temperature and thermal diffusivity profiles can be readily used, together with other information such as degradation rate, to evaluate thermal treatments based on cold-air cooling for the storage of perishable food products. Keywords: thermal diffusivity, skin temperature, precooling, forced convection, regular shaped
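The one-dimensional Fourier (heat) equation for a sphere underlying the method can be sketched with an explicit finite-difference scheme (the material value, geometry, and boundary treatment are illustrative assumptions, not the measured data):

```python
import numpy as np

# dT/dt = alpha * (d2T/dr2 + (2/r) dT/dr): radial heat conduction in a sphere
alpha = 1.4e-7            # m^2/s, an assumed product thermal diffusivity
R, nr = 0.03, 31          # 3 cm radius, 31 radial nodes
r = np.linspace(0.0, R, nr)
dr = r[1] - r[0]
dt = 0.2 * dr ** 2 / alpha  # satisfies the explicit-scheme stability limit

T = np.full(nr, 20.0)     # initial product temperature, deg C
T_air = -10.0             # forced-convection freezer air temperature

for _ in range(500):
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt * (
        (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dr ** 2
        + (2.0 / r[1:-1]) * (T[2:] - T[:-2]) / (2.0 * dr)
    )
    Tn[0] = Tn[1]         # symmetry at the centre
    Tn[-1] = T_air        # idealized boundary: skin held at air temperature
    T = Tn

print(round(T[0], 1), round(T[-1], 1))  # centre temperature lags the skin
```

Fitting the measured centre-to-skin temperature histories against such a model is what allows the diffusivity to be backed out as a function of temperature.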
Procedia PDF Downloads 459
14630 Comparative Study of Deep Reinforcement Learning Algorithm Against Evolutionary Algorithms for Finding the Optimal Values in a Simulated Environment Space
Authors: Akshay Paranjape, Nils Plettenberg, Robert Schmitt
Abstract:
Traditional optimization methods such as evolutionary algorithms are widely used in production processes to find an optimal or near-optimal solution of control parameters based on the simulated environment space of a process. These algorithms are computationally intensive and therefore do not provide the opportunity for real-time optimization. This paper utilizes the Deep Reinforcement Learning (DRL) framework to find an optimal or near-optimal solution for control parameters. A model based on maximum a posteriori policy optimization (Hybrid-MPO) that can handle both numerical and categorical parameters is used as a benchmark for comparison. A comparative study shows that DRL can find optimal solutions of similar quality to evolutionary algorithms while requiring significantly less time, making it preferable for real-time optimization. The results are confirmed in a large-scale validation study on datasets from production and other fields. A trained XGBoost model is used as a surrogate for process simulation. Finally, multiple ways to improve the model are discussed.
Keywords: reinforcement learning, evolutionary algorithms, production process optimization, real-time optimization, hybrid-MPO
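The evolutionary-algorithm side of the comparison can be sketched as a small (mu + lambda) evolution strategy maximizing a surrogate objective over two control parameters. Everything here is an illustrative assumption (the toy objective, the population settings); the paper's actual baselines and its Hybrid-MPO benchmark are not reproduced.

```python
# Hedged sketch: a tiny elitist (mu + lambda) evolution strategy maximizing
# a toy surrogate over two control parameters, standing in for the
# computationally intensive evolutionary baseline described above.
import random

def surrogate(x, y):
    # Stand-in for a trained process-simulation surrogate; peak at (2, -1).
    return -((x - 2.0) ** 2 + (y + 1.0) ** 2)

def evolve(generations=200, mu=5, lam=20, sigma=0.3, seed=0):
    rng = random.Random(seed)
    parents = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(mu)]
    for _ in range(generations):
        # Each child is a Gaussian perturbation of a random parent.
        children = [(px + rng.gauss(0, sigma), py + rng.gauss(0, sigma))
                    for _ in range(lam)
                    for px, py in [rng.choice(parents)]]
        pool = parents + children            # elitism: parents survive too
        pool.sort(key=lambda p: surrogate(*p), reverse=True)
        parents = pool[:mu]
    return parents[0]

best = evolve()
print(round(best[0], 1), round(best[1], 1))
```

The many surrogate evaluations inside the loop are exactly the cost that motivates replacing the search with a trained DRL policy at inference time.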
Procedia PDF Downloads 112
14629 Modelling for Roof Failure Analysis in an Underground Cave
Authors: M. Belén Prendes-Gero, Celestino González-Nicieza, M. Inmaculada Alvarez-Fernández
Abstract:
Roof collapse is one of the most frequent problems in mines in most countries, even now. There are many reasons that may cause a roof to collapse, namely the stress activities in the mining process, lack of vigilance and carelessness, or the complexity of the geological structure and irregular operations. This work is the result of the analysis of an accident produced in the “Mary” coal exploitation located in northern Spain. In this accident, the roof of a crossroad of galleries excavated to exploit the “Morena” layer, 700 m deep, collapsed. The paper collects the work done by the forensic team to determine the causes of the incident, together with its conclusions and recommendations. Initially, the available documentation (geology, geotechnics, mining, etc.) and the accident area were reviewed. After that, laboratory and on-site tests were carried out to characterize the behaviour of the rock materials and the support used (metal frames and shotcrete). With this information, different failure hypotheses were simulated to find the one that best fits reality. For this work, the three-dimensional finite difference software FLAC 3D was employed. The results of the study confirmed that the detachment originated as a consequence of a slide in the layer wall, due to the large roof span present at the place of the accident, and was probably triggered by an insufficient protection pillar. The results allowed some corrective measures to be established to avoid future risks: for example, the dimensions of the protection zones that must remain unexploited and their interaction with the crossing areas between galleries, or the use of supports more adequate for these conditions, in which the significant deformations may discourage the use of rigid supports such as shotcrete. Finally, a seismic control grid was proposed as a predictive system.
Its efficiency was tested throughout the investigation period, employing three monitoring units that detected new (although smaller) incidents in other similar areas of the mine. These new incidents show that the use of explosives produces vibrations, which are a new risk factor to analyse in the near future.
Keywords: forensic analysis, hypothesis modelling, roof failure, seismic monitoring
Procedia PDF Downloads 115
14628 A Hybrid Feature Selection and Deep Learning Algorithm for Cancer Disease Classification
Authors: Niousha Bagheri Khulenjani, Mohammad Saniee Abadeh
Abstract:
Learning from very big datasets is a significant problem for most present data mining and machine learning algorithms. MicroRNA (miRNA) is one of the important big genomic and non-coding datasets representing the genome sequences. In this paper, a hybrid method for the classification of miRNA data is proposed. Due to the variety of cancers and the high number of genes, analyzing the miRNA dataset has been a challenging problem for researchers. The number of features relative to the number of samples is high, and the data suffer from being imbalanced. A feature selection method has been used to select the features with more ability to distinguish classes and to eliminate obscure features. Afterward, a Convolutional Neural Network (CNN) classifier for the classification of cancer types is utilized, which employs a Genetic Algorithm to find optimized hyper-parameters of the CNN. In order to make the classification process by the CNN faster, a Graphics Processing Unit (GPU) is recommended for performing the mathematical operations in a parallel way. The proposed method is tested on a real-world dataset with 8,129 patients, 29 different types of tumors, and 1,046 miRNA biomarkers, taken from The Cancer Genome Atlas (TCGA) database.
Keywords: cancer classification, feature selection, deep learning, genetic algorithm
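A genetic algorithm over CNN hyper-parameters can be sketched with selection, crossover, and mutation over a small discrete search space. In the sketch below a cheap surrogate stands in for "train the CNN and return validation accuracy"; the search space, fitness values, and GA settings are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch: a genetic algorithm searching CNN hyper-parameters.
# fitness() is a stand-in for actually training the network; a real run
# would fit the CNN with each candidate and return validation accuracy.
import random

SPACE = {
    "lr": [1e-2, 1e-3, 1e-4],
    "filters": [16, 32, 64, 128],
    "kernel": [3, 5, 7],
}

def fitness(ind):
    # Assumed surrogate with a peak at lr=1e-3, filters=64, kernel=3.
    score = 0.70
    score += 0.10 if ind["lr"] == 1e-3 else 0.0
    score += 0.08 if ind["filters"] == 64 else 0.0
    score += 0.05 if ind["kernel"] == 3 else 0.0
    return score

def random_ind(rng):
    return {k: rng.choice(v) for k, v in SPACE.items()}

def ga(pop_size=12, generations=15, seed=1):
    rng = random.Random(seed)
    pop = [random_ind(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = {k: rng.choice([a[k], b[k]]) for k in SPACE}  # crossover
            if rng.random() < 0.2:                                # mutation
                k = rng.choice(list(SPACE))
                child[k] = rng.choice(SPACE[k])
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = ga()
print(best)
```

Because each fitness call in the real pipeline is a full CNN training run, this outer loop is precisely where the GPU parallelism recommended in the abstract pays off.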
Procedia PDF Downloads 111
14627 Work Demand and Prevalence of Work-Related Musculoskeletal Disorders: A Case Study of Pakistan Aviation Maintenance Workers
Authors: Muzamil Mahmood, Afshan Naseem, Muhammad Zeeshan Mirza, Yasir Ahmad, Masood Raza
Abstract:
The purpose of this research is to analyze how aviation maintenance workers’ characteristics and work demand affect their development of work-related musculoskeletal disorders (WMSDs). Guided by the literature on task characteristics, work demand, and WMSDs, data were collected from 128 aviation maintenance workers of private and public airlines. The data were then analyzed through descriptive and inferential statistics. It was found that task characteristics have a significant positive effect on WMSDs, and an increase in the tasks performed by aviation maintenance workers leads to an increase in WMSDs. Work demand did not have a significant effect on WMSDs. The task characteristics of aviation maintenance workers moderate the relationship between their work demand and WMSDs. This reveals that the task characteristics of aviation maintenance workers enhance the effect of work demand on WMSDs. The task characteristics of aviation maintenance workers are challenging and unpredictable. Consequently, WMSDs are prevalent among aviation maintenance workers, while their work demand alone does not influence the development of WMSDs. The Pakistan Civil Aviation Authority should minimize the intensity of the tasks assigned to aviation maintenance workers by introducing work arrangements such as task sharing, job rotation, and possibly teleworking to enhance flexibility. The Human Resource and Recruitment Department needs to consider the ability and fitness levels of potential aviation maintenance workers during recruitment. In addition, regular physical activities and ergonomic policies should be put in place by the management of the Pakistan Civil Aviation Authority to reduce the incidence of WMSDs.
Keywords: work related musculoskeletal disorders, ergonomics, occupational health and safety, human factors
Procedia PDF Downloads 163
14626 Similar Script Character Recognition on Kannada and Telugu
Authors: Gurukiran Veerapur, Nytik Birudavolu, Seetharam U. N., Chandravva Hebbi, R. Praneeth Reddy
Abstract:
This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities between characters. Recognizing the characters requires exhaustive datasets, but only a few are publicly available. As a result, we decided to create a dataset for one language (the source language), train the model with it, and then test it with the target language. Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on pictures with noise and varying lighting. A dataset of 45,150 images containing printed Kannada characters was created. The Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations. Manual labelling was employed to ensure the accuracy of the character labels. Deep learning models, namely a CNN (Convolutional Neural Network) and a Visual Attention neural network (VAN), were used to experiment with the dataset. A Visual Attention neural network (VAN) architecture was adopted, incorporating additional channels for Canny edge features, as the results obtained with this approach were good. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better with Canny edge features applied than with a model that solely used the original grayscale images. The accuracy of the model was found to be 80.11% for Telugu characters and 98.01% for Kannada words when tested with these languages. This model, which makes use of cutting-edge machine learning techniques, shows excellent accuracy when identifying and categorizing characters from these scripts.
Keywords: base characters, modifiers, guninthalu, aksharas, vattakshara, VAN
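The "additional channels for Canny edge features" idea amounts to stacking the grayscale image with an edge map before it enters the network. The sketch below uses a simple gradient-magnitude threshold as a stand-in for the Canny detector (which also involves Gaussian smoothing, non-maximum suppression, and hysteresis); the toy image, shapes, and threshold are illustrative assumptions.

```python
# Hedged sketch: build a 2-channel input [grayscale, edge map] of the kind
# the VAN described above could consume. A plain gradient-magnitude
# threshold stands in for the full Canny edge detector here.

def edge_map(img, thresh=50):
    """Binary edge map from horizontal/vertical intensity differences."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            if abs(gx) + abs(gy) > thresh:
                edges[y][x] = 255
    return edges

def stack_channels(img):
    """Return a (2, H, W) nested list: [grayscale, edge map]."""
    return [img, edge_map(img)]

# A toy 4x4 "character": dark stroke on a light background.
img = [
    [255, 255, 255, 255],
    [255,  10,  10, 255],
    [255,  10,  10, 255],
    [255, 255, 255, 255],
]
x = stack_channels(img)
print(len(x), len(x[0]), len(x[0][0]))  # channels, height, width
```

In practice one would use an actual Canny implementation (e.g. OpenCV's) and pass the stacked array to the network as its input channels.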
Procedia PDF Downloads 53
14625 Development of Polymer Nano-Particles as in vivo Imaging Agents for Photo-Acoustic Imaging
Authors: Hiroyuki Aoki
Abstract:
Molecular imaging has attracted much attention as a way to visualize a tumor site in a living body on the basis of biological functions. The fluorescence in vivo imaging technique has been widely employed as a useful modality for small animals in pre-clinical research. However, it is difficult to observe a site deep inside a body because of the short penetration depth of light. The photo-acoustic effect is the generation of a sound wave following light absorption. Because the sound wave is less susceptible to absorption by tissues, an in vivo imaging method based on the photo-acoustic effect can observe deep inside a living body. The current study developed an in vivo imaging agent for the photo-acoustic imaging method. Nano-particles of poly(lactic acid) incorporating an indocyanine dye were developed as a bio-compatible imaging agent with strong light absorption. A tumor site inside a mouse body was successfully observed in a photo-acoustic image. Photo-acoustic imaging with a polymer nano-particle agent would be a powerful method to visualize a tumor.
Keywords: nano-particle, photo-acoustic effect, polymer, dye, in vivo imaging
Procedia PDF Downloads 155
14624 Stock Market Prediction Using Convolutional Neural Network That Learns from a Graph
Authors: Mo-Se Lee, Cheol-Hwi Ahn, Kee-Young Kwahk, Hyunchul Ahn
Abstract:
Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, the CNN (Convolutional Neural Network), which is known as an effective solution for recognizing and classifying images, has been popularly applied to classification and prediction problems in various fields. In this study, we try to apply a CNN to stock market prediction, one of the most challenging tasks in machine learning research. Specifically, we propose to apply the CNN as a binary classifier that predicts the stock market direction (up or down) by using a graph as its input. That is, our proposal is to build a machine learning algorithm that mimics a person who looks at a chart and predicts whether the trend will go up or down. Our proposed model consists of four steps. In the first step, it divides the dataset into intervals of 5 days, 10 days, 15 days, and 20 days. It then creates graphs for each interval in step 2. In the next step, CNN classifiers are trained using the graphs generated in the previous step. In step 4, it optimizes the hyper-parameters of the trained model by using the validation dataset. To validate our model, we will apply it to the prediction of the KOSPI200 for 1,986 days over eight years (from 2009 to 2016). The experimental dataset will include 14 technical indicators, such as CCI, Momentum, and ROC, as well as the daily closing price of the KOSPI200 of the Korean stock market.
Keywords: convolutional neural network, deep learning, Korean stock market, stock market prediction
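Step 2 of the pipeline above, turning a price window into an image, can be sketched by rasterizing each window of closing prices onto a small binary grid and labelling the sample with the next day's direction. The grid size, min-max scaling, and toy prices below are illustrative assumptions, not the paper's charting procedure.

```python
# Hedged sketch: rasterize a window of closing prices into a small binary
# "chart" image a CNN could consume, with an up/down label taken from the
# move on the day after the window.

def window_to_image(prices, height=8):
    """One price per column; min-max scaling maps each price to a row,
    giving a (height x len(prices)) 0/1 grid."""
    lo, hi = min(prices), max(prices)
    span = (hi - lo) or 1.0
    img = [[0] * len(prices) for _ in range(height)]
    for col, p in enumerate(prices):
        row = int((hi - p) / span * (height - 1))  # top row = highest price
        img[row][col] = 1
    return img

def make_sample(closes, start, window=5):
    """(image, label) pair: label 1 if the close after the window is up."""
    w = closes[start:start + window]
    label = 1 if closes[start + window] > closes[start + window - 1] else 0
    return window_to_image(w), label

closes = [100.0, 101.5, 101.0, 102.2, 103.0, 102.4, 104.1]
img, label = make_sample(closes, 0)
print(label, sum(sum(r) for r in img))
```

Sliding this window across the full series yields the graph-label pairs on which the CNN classifiers of step 3 would be trained.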
Procedia PDF Downloads 425
14623 Vital Pulp Therapy: A Paradigm Shift in Treating Irreversible Pulpitis
Authors: Fadwa Chtioui
Abstract:
Vital Pulp Therapy (VPT) is nowadays challenging the deep-rooted dogma that root canal treatment is the only therapeutic option for permanent teeth diagnosed with irreversible pulpitis or carious pulp exposure. Histologic and clinical research has shown that a compromised dental pulp can be treated without the full removal or excavation of all healthy pulp, and the outcome of partial or full pulpotomy followed by a tricalcium-silicate-based dressing seems to show promising results in maintaining pulp vitality and preserving affected teeth in the long term. By reviewing recent advances in the techniques of VPT and their clinical effectiveness and safety in permanent teeth with irreversible pulpitis, this work provides a new understanding of pulp pathophysiology and defense mechanisms, and will reform dental practitioners' decision-making in treating irreversible pulpitis, shifting from root canal therapy to vital pulp therapy by taking advantage of the biological effects of tricalcium silicate materials.
Keywords: irreversible pulpitis, vital pulp therapy, pulpotomy, Tricalcium Silicate
Procedia PDF Downloads 60
14622 Intelligent Campus Monitoring: YOLOv8-Based High-Accuracy Activity Recognition
Authors: A. Degale Desta, Tamirat Kebamo
Abstract:
Background: Recent advances in computer vision and pattern recognition have significantly improved activity recognition through video analysis, particularly with the application of Deep Convolutional Neural Networks (CNNs). One-stage detectors now enable efficient video-based recognition by simultaneously predicting object categories and locations. Such advancements are highly relevant in educational settings, where CCTV surveillance could automatically monitor academic activities, enhancing security and classroom management. However, current datasets and recognition systems lack the specific focus on campus environments necessary for practical application in these settings. Objective: This study aims to address this gap by developing a dataset and testing an automated activity recognition system specifically tailored for educational campuses. The EthioCAD dataset was created to capture various classroom activities and teacher-student interactions, facilitating reliable recognition of academic activities using deep learning models. Method: EthioCAD, a novel video-based dataset, was created with a design science research approach to encompass teacher-student interactions across three domains and 18 distinct classroom activities. Using the Roboflow AI framework, the data were processed, with 4.224 KB of frames and 33.485 MB of images managed for frame extraction, labeling, and organization. The Ultralytics YOLOv8 model was then implemented within Google Colab to evaluate the dataset’s effectiveness, achieving high mean Average Precision (mAP) scores. Results: The YOLOv8 model demonstrated robust activity recognition within campus-like settings, achieving an mAP50 of 90.2% and an mAP50-95 of 78.6%. These results highlight the potential of EthioCAD, combined with YOLOv8, to provide reliable detection and classification of classroom activities, supporting automated surveillance needs on educational campuses.
Discussion: The high performance of YOLOv8 on the EthioCAD dataset suggests that automated activity recognition for surveillance is feasible within educational environments. This system addresses current limitations in campus-specific data and tools, offering a tailored solution for academic monitoring that could enhance the effectiveness of CCTV systems in these settings. Conclusion: The EthioCAD dataset, alongside the YOLOv8 model, provides a promising framework for automated campus activity recognition. This approach lays the groundwork for future advancements in CCTV-based educational surveillance systems, enabling more refined and reliable monitoring of classroom activities.
Keywords: deep CNN, EthioCAD, deep learning, YOLOv8, activity recognition
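The mAP50 figure reported above rests on a simple matching rule: a predicted box counts as a true positive when its intersection over union (IoU) with a ground-truth box is at least 0.50, while mAP50-95 averages over thresholds from 0.50 to 0.95. A minimal sketch of that IoU check follows; the box coordinates and class names are illustrative assumptions.

```python
# Hedged sketch: the IoU >= 0.5 matching rule behind the mAP50 metric.
# Boxes are (x1, y1, x2, y2) in pixels; values are made up for illustration.

def iou(a, b):
    """Intersection over union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

gt = (10, 10, 50, 50)          # ground-truth box for a classroom activity
pred_good = (12, 14, 52, 50)   # close detection -> counts at IoU 0.5
pred_bad = (40, 40, 90, 90)    # mostly off-target -> rejected

print(round(iou(gt, pred_good), 2), iou(gt, pred_good) >= 0.5)
print(round(iou(gt, pred_bad), 2), iou(gt, pred_bad) >= 0.5)
```

Full mAP additionally ranks detections by confidence and integrates precision over recall per class, which frameworks such as Ultralytics compute automatically.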
Procedia PDF Downloads 10
14621 The Effect of Austenitization Conditioning on the Mechanical Properties of Cr-Mo-V Hot Work Tool Steel with Different Nitrogen Addition
Authors: Iting Chiang, Cheng-Yu Wei, Chin-Teng Kuo, Po-Sheng Hsu, Yo-Lun Yang, Yung-Chang Kang, Chien-Chon Chen, Chih-Yuan Chen
Abstract:
In recent years, it has been reported that microalloying nitrogen into traditional Cr-Mo-V hot work tool steels can achieve better high-temperature mechanical properties, which has led to this metallurgical approach being widely utilized in several commercial advanced hot work tool steels. Although the performance of hot work tool steel can be improved by the alloy composition design strategy, the influence of processing parameters on the mechanical properties, especially on the service life of hot work tool steel, is still not fully understood. A longer service life of hot work tool steel decreases the manufacturing cost effectively and has thus become a research hot spot. According to several previous studies, it is generally acknowledged that the service life of hot work tool steels can be increased effectively when the steels possess higher hardness and toughness, because the formation and propagation of microcracks within the steel can then be inhibited effectively. Therefore, in the present research, experiments were designed primarily to explore and analyze the synergistic effect of nitrogen content and austenitization conditioning on the mechanical properties of hot work tool steels. Regardless of the nitrogen content, the results indicated that the hardness of the hot work tool steels increased as the austenitization treatment was executed at a higher temperature. On the other hand, an optimum toughness of the hot work tool steel can be achieved when the austenitization treatment is performed in a suitable temperature range. A possible explanation of this metallurgical phenomenon is also proposed and analyzed in the present research.
Keywords: hot work tool steel, Cr-Mo-V, toughness, hardness, TEM
Procedia PDF Downloads 59
14620 Cantilever Shoring Piles with Prestressing Strands: An Experimental Approach
Authors: Hani Mekdash, Lina Jaber, Yehia Temsah
Abstract:
Underground space is becoming a necessity nowadays, especially in highly congested urban areas. Retaining underground excavations with shoring systems is essential in order to protect adjoining structures from potential damage or collapse. Reinforced Concrete Piles (RCP) supported by multiple rows of tie-back anchors are a commonly used type of shoring system in deep excavations. However, executing anchors can sometimes be challenging because they might illegally trespass on neighboring properties or be obstructed by infrastructure and other underground facilities. A technique is proposed in this paper that involves the addition of eccentric high-strength steel strands to the RCP section through ducts, without providing the pile with lateral supports. The strands are then vertically stressed externally on the pile cap using a hydraulic jack, creating a compressive strengthening force in the concrete section. An experimental study of the behaviour of the shoring wall with pre-stressed piles during the execution of an open excavation in an urban area (Beirut city) is presented, followed by numerical analysis using finite element software. Based on the experimental results, this technique is proven to be cost-effective and provides flexible and sustainable construction of shoring works.
Keywords: deep excavation, prestressing, pre-stressed piles, shoring system
Procedia PDF Downloads 117
14619 Machine Learning Predictive Models for Hydroponic Systems: A Case Study Nutrient Film Technique and Deep Flow Technique
Authors: Kritiyaporn Kunsook
Abstract:
Machine learning algorithms (MLAs) such as artificial neural networks (ANNs), decision trees, support vector machines (SVMs), Naïve Bayes, and an ensemble classifier by voting are powerful data-driven methods that are still relatively rarely used for classifying this kind of system, and thus have not been thoroughly evaluated against each other in this field. The performances of a series of MLAs (ANNs, decision tree, SVMs, Naïve Bayes, and an ensemble classifier by voting) in prospectively modelling hydroponic system techniques are compared based on the accuracy of each model. The classification of hydroponic systems covers test samples from vegetables grown with the Nutrient Film Technique (NFT) and the Deep Flow Technique (DFT). The features, which are characteristics of the vegetables, comprise harvest height and width, temperature, required light, and color. The results indicate that the classification accuracy is 98% for the ANNs, 98% for the decision tree, 97.33% for the SVMs, 96.67% for Naïve Bayes, and 98.96% for the ensemble classifier by voting, respectively.
Keywords: artificial neural networks, decision tree, support vector machines, naïve Bayes, ensemble classifier by voting
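The "ensemble classifier by voting" that scores best above combines the predictions of several base models by majority vote. The sketch below shows the idea with three deliberately simple base classifiers on toy NFT/DFT data; the feature values, rules, and thresholds are illustrative assumptions, and a real study would vote over trained ANN, decision-tree, SVM, and Naïve Bayes models instead.

```python
# Hedged sketch: a hard-voting ensemble over three simple base classifiers
# predicting NFT vs DFT from toy plant features (harvest height, width).
from collections import Counter

# (height_cm, width_cm) -> technique; tiny synthetic training set.
train = [((22, 10), "NFT"), ((24, 11), "NFT"), ((20, 9), "NFT"),
         ((35, 18), "DFT"), ((38, 20), "DFT"), ((33, 17), "DFT")]

def nearest_neighbor(x):
    return min(train,
               key=lambda s: (s[0][0] - x[0]) ** 2 + (s[0][1] - x[1]) ** 2)[1]

def height_rule(x):
    return "DFT" if x[0] > 28 else "NFT"

def centroid_rule(x):
    def centroid(label):
        pts = [s[0] for s in train if s[1] == label]
        return (sum(p[0] for p in pts) / len(pts),
                sum(p[1] for p in pts) / len(pts))
    c_nft, c_dft = centroid("NFT"), centroid("DFT")
    d = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return "NFT" if d(x, c_nft) < d(x, c_dft) else "DFT"

def vote(x):
    # Hard voting: each base classifier casts one vote; majority wins.
    votes = [nearest_neighbor(x), height_rule(x), centroid_rule(x)]
    return Counter(votes).most_common(1)[0][0]

print(vote((23, 10)), vote((36, 19)))
```

Majority voting can beat each base model because their individual errors need not coincide, which matches the ensemble's top accuracy reported above.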
Procedia PDF Downloads 372