Search results for: computational accuracy
4871 OpenMP Parallelization of Three-Dimensional Magnetohydrodynamic Code FOI-PERFECT
Authors: Jiao F. Huang, Shi Chen, Shu C. Duan, Gang H. Wang
Abstract:
Due to its complex spatial structure as well as dynamic temporal evolution, an analytic solution of an X-pinch process is out of the question, and numerical simulation becomes an important tool in X-pinch studies. Intrinsically, simulations of X-pinch are three-dimensional (3D) because of the specific structure of its load. Furthermore, in order to resolve both its μm-scales and ns-durations, fine spatial mesh grids and short time steps are usually adopted. The resulting large computational scale makes the parallelization of codes a vital problem to be solved if any practical simulations are to be carried out. In this work, we report the OpenMP parallelization of our 3D magnetohydrodynamic (MHD) code FOI-PERFECT. Results of test runs confirm that computational efficiency has been improved after parallelization, and both the sequential and parallel versions give the same physical results under the same initial conditions.
Keywords: MHD simulation, OpenMP, parallelization, X-pinch
Procedia PDF Downloads 340
4870 Low Complexity Deblocking Algorithm
Authors: Jagroop Singh Sidhu, Buta Singh
Abstract:
A low computational deblocking filter including three frequency-related modes (smooth mode, intermediate mode, and non-smooth mode for low-frequency, mid-frequency, and high-frequency regions, respectively) is proposed. The suggested approach requires zero additions, zero subtractions, zero multiplications (for the intermediate region), no divisions (for the non-smooth region), and no comparisons. The suggested method thus keeps the computation low and is suitable for block-based image coding systems. A comparison of the average number of operations per pixel vector for each block (smooth, non-smooth, and intermediate) between the filter suggested by Chen and the proposed filter shows that the proposed filter keeps the computation lower and is thus suitable for fast processing algorithms.
Keywords: blocking artifacts, computational complexity, non-smooth, intermediate, smooth
Procedia PDF Downloads 462
4869 Numerical Simulation of Fluid Structure Interaction Using Two-Way Method
Authors: Samira Laidaoui, Mohammed Djermane, Nazihe Terfaya
Abstract:
Fluid-structure coupling is a natural phenomenon in which two continua of different types, a fluid and a structure, act reciprocally on each other; its description draws on both elasticity and fluid mechanics. Such problems are formulated with the relations of continuum mechanics and are mostly solved with numerical methods. Solving them is a computational challenge because of the complex geometries, the intricate physics of fluids, and the complicated fluid-structure interactions. The way in which the interaction between fluid and solid is described gives the largest opportunity for reducing the computational effort. In this paper, a fluid-structure interaction problem is investigated with a two-way coupling method. The Arbitrary Lagrangian-Eulerian (ALE) formulation was used with a dynamic grid, where the solid is described by a Lagrangian formulation and the fluid by an Eulerian formulation. The simulation was carried out in the ANSYS software.
Keywords: ALE, coupling, FEM, fluid-structure, interaction, one-way method, two-way method
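A minimal sketch of a partitioned two-way coupling loop is given below, using a toy one-degree-of-freedom structure and an algebraic stand-in for the fluid solver with under-relaxation on the interface; the solver functions and coefficients are hypothetical placeholders, not the ANSYS setup used in the paper.

```python
import numpy as np

# Toy stand-ins for the fluid and structure solvers (illustrative only).
def solve_fluid(displacement, p_inf=100.0, stiffness_f=50.0):
    """Return the fluid load on the interface for a given wall displacement."""
    return p_inf - stiffness_f * displacement

def solve_structure(load, k_s=200.0):
    """Return the static displacement of a 1-DOF structure under the load."""
    return load / k_s

# Two-way (strongly coupled) iteration with under-relaxation,
# repeated until the interface displacement stops changing.
d = 0.0
omega = 0.5  # under-relaxation factor
for it in range(100):
    f = solve_fluid(d)            # fluid step: load from current wall position
    d_new = solve_structure(f)    # structure step: displacement from fluid load
    if abs(d_new - d) < 1e-10:
        break
    d = (1 - omega) * d + omega * d_new  # relax the interface update
print(f"converged after {it} iterations, displacement = {d:.6f}")
```

In a real two-way simulation, the same exchange of interface loads and displacements is repeated inside each time step, with the fluid mesh moved according to the ALE formulation.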
Procedia PDF Downloads 678
4868 Improving the Performance of Deep Learning in Facial Emotion Recognition with Image Sharpening
Authors: Ksheeraj Sai Vepuri, Nada Attar
Abstract:
We as humans use words with accompanying visual and facial cues to communicate effectively. Classifying facial emotion using computer vision methodologies has been an active research area in the field. In this paper, we propose a simple method for facial expression recognition that enhances accuracy. We tested our method on the FER-2013 dataset, which contains static images. Instead of using histogram equalization to preprocess the dataset, we used an Unsharp Mask to emphasize texture and details and to sharpen the edges. We also used ImageDataGenerator from the Keras library for data augmentation. Then we used a Convolutional Neural Network (CNN) model to classify the images into 7 different facial expressions, yielding an accuracy of 69.46% on the test set. Our results show that using image preprocessing such as the sharpening technique with a CNN model can improve performance, even when the CNN model is relatively simple.
Keywords: facial expression recognition, image preprocessing, deep learning, CNN
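A brief sketch of a pipeline in this spirit follows: an unsharp-mask preprocessing function fed into Keras data augmentation and a small CNN with a 7-class softmax. The layer sizes, augmentation settings, and unsharp-mask parameters are illustrative assumptions, not the exact configuration reported in the paper.

```python
from scipy.ndimage import gaussian_filter
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def unsharp_mask(img, amount=1.5, sigma=1.0):
    """Sharpen edges by adding back the difference from a Gaussian-blurred copy."""
    blurred = gaussian_filter(img, sigma=(sigma, sigma, 0))
    return img + amount * (img - blurred)

# Augmentation with the sharpening applied as a preprocessing function.
datagen = ImageDataGenerator(rescale=1.0 / 255,
                             rotation_range=10,
                             horizontal_flip=True,
                             preprocessing_function=unsharp_mask)

# A small CNN classifying the 7 FER-2013 expression classes.
model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(48, 48, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(datagen.flow_from_directory("fer2013/train", target_size=(48, 48),
#                                       color_mode="grayscale"), epochs=30)
```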
Procedia PDF Downloads 143
4867 Plant Identification Using Convolution Neural Network and Vision Transformer-Based Models
Authors: Virender Singh, Mathew Rees, Simon Hampton, Sivaram Annadurai
Abstract:
Plant identification is a challenging task that aims to identify the family, genus, and species according to plant morphological features. Automated deep learning-based computer vision algorithms are widely used for identifying plants and can help users narrow down the possibilities. However, numerous morphological similarities between and within species render correct classification difficult. In this paper, we tested custom convolution neural network (CNN) and vision transformer (ViT) based models using the PyTorch framework to classify plants. We used a large dataset of 88,000 images provided by the Royal Horticultural Society (RHS) and a smaller dataset of 16,000 images from the PlantClef 2015 dataset for classifying plants at the genus and species levels, respectively. Our results show that for classifying plants at the genus level, ViT models perform better than the CNN-based models ResNet50 and ResNet-RS-420 and other state-of-the-art CNN-based models suggested in previous studies on a similar dataset. The ViT model achieved a top accuracy of 83.3% for classifying plants at the genus level. For classifying plants at the species level, ViT models also perform better than ResNet50 and ResNet-RS-420, with a top accuracy of 92.5%. We show that the correct set of augmentation techniques plays an important role in classification success. In conclusion, these results could help end users, professionals, and the general public alike in identifying plants more quickly and with improved accuracy.
Keywords: plant identification, CNN, image processing, vision transformer, classification
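A minimal PyTorch sketch of fine-tuning a pretrained ViT for genus-level classification is shown below. torchvision's vit_b_16 is used as a hypothetical stand-in for the ViT variants evaluated in the paper, and the class count and augmentations are illustrative, not the RHS dataset's actual configuration.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

num_genera = 100  # illustrative class count, not the RHS dataset's actual number

# Load an ImageNet-pretrained ViT and replace its classification head.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, num_genera)

# Augmentations of the kind the paper highlights as important for success.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step on a batch of (N, 3, 224, 224) images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```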
Procedia PDF Downloads 104
4866 Investigation of Flow Characteristics on Upstream and Downstream of Orifice Using Computational Fluid Dynamics
Authors: War War Min Swe, Aung Myat Thu, Khin Cho Thet, Zaw Moe Htet, Thuzar Mon
Abstract:
The orifice hole diameter, the main design parameter, was selected according to the range of throttle diameter ratios that gives the required discharge coefficient. The discharge coefficient is determined for different diameter ratios. A discharge coefficient of 0.958 occurred at a throttle diameter ratio of 0.5. The throttle hole diameter is 80 mm. The flow analysis is done numerically using ANSYS 17.0 computational fluid dynamics. The flow velocity was analyzed upstream and downstream of the orifice meter. The downstream velocity of the non-standard orifice meter is 2.5% greater than that of the standard orifice meter. The differential pressure is 515.379 Pa in the standard orifice.
Keywords: CFD-CFX, discharge coefficients, flow characteristics, inclined
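As a worked illustration of how the reported values relate, the short sketch below applies the standard orifice-meter equation to the stated discharge coefficient, diameter ratio, and differential pressure. The fluid density is an assumed value (water) and is not taken from the abstract.

```python
import math

# Reported values from the study.
Cd = 0.958          # discharge coefficient at beta = 0.5
beta = 0.5          # throttle (orifice-to-pipe) diameter ratio
d = 0.080           # orifice hole diameter, m
dp = 515.379        # differential pressure across the standard orifice, Pa

rho = 1000.0        # assumed water density, kg/m^3 (not stated in the abstract)

# Standard orifice-meter relation: Q = Cd * A_t * sqrt(2*dp / (rho*(1 - beta^4)))
A_t = math.pi * d**2 / 4
Q = Cd * A_t * math.sqrt(2 * dp / (rho * (1 - beta**4)))
print(f"volumetric flow rate ≈ {Q * 1000:.2f} L/s")
```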
Procedia PDF Downloads 145
4865 Bridging the Data Gap for Sexism Detection in Twitter: A Semi-Supervised Approach
Authors: Adeep Hande, Shubham Agarwal
Abstract:
This paper presents a study on identifying sexism in online texts using various state-of-the-art deep learning models based on BERT. We experimented with different feature sets and model architectures and evaluated their performance using precision, recall, F1 score, and accuracy metrics. We also explored the use of the pseudolabeling technique to improve model performance. Our experiments show that the best-performing models were based on BERT, and the multilingual model achieved an F1 score of 0.83. Furthermore, pseudolabeling significantly improved the performance of the BERT-based models, with the best results achieved when it was applied. Our findings suggest that BERT-based models with pseudolabeling hold great promise for identifying sexism in online texts with high accuracy.
Keywords: large language models, semi-supervised learning, sexism detection, data sparsity
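The pseudolabeling idea is sketched schematically below, with a simple TF-IDF plus logistic-regression classifier used as a lightweight stand-in for the BERT models in the paper; the confidence threshold and toy texts are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labelled and unlabelled pools (placeholders for the real data).
labelled_texts = ["she should stay in the kitchen", "great talk by the professor"]
labels = np.array([1, 0])                      # 1 = sexist, 0 = not sexist
unlabelled_texts = ["women can't drive", "nice weather today", "what a great match"]

vec = TfidfVectorizer()
X_lab = vec.fit_transform(labelled_texts)
X_unl = vec.transform(unlabelled_texts)

clf = LogisticRegression(max_iter=1000).fit(X_lab, labels)

# Pseudolabeling: keep only unlabelled examples the model is confident about,
# add them to the training set with their predicted labels, and retrain.
threshold = 0.9
probs = clf.predict_proba(X_unl)
confident = probs.max(axis=1) >= threshold
pseudo_labels = probs.argmax(axis=1)[confident]

X_aug = np.vstack([X_lab.toarray(), X_unl.toarray()[confident]])
y_aug = np.concatenate([labels, pseudo_labels])
clf = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
```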
Procedia PDF Downloads 70
4864 Comparing Accuracy of Semantic and Radiomics Features in Prognosis of Epidermal Growth Factor Receptor Mutation in Non-Small Cell Lung Cancer
Authors: Mahya Naghipoor
Abstract:
Purpose: Non-small cell lung cancer (NSCLC) is the most common lung cancer type. Epidermal growth factor receptor (EGFR) mutation is the main cause of NSCLC. Computed tomography (CT) is used for the diagnosis and prognosis of lung cancers because of its low price and minimal invasiveness. Semantic analyses of qualitative CT features are based on visual evaluation by a radiologist; however, the naked eye may not be able to assess all image features. On the other hand, radiomics provides the opportunity for quantitative analysis of CT image features. The aim of this review study was to compare the accuracy of semantic and radiomics features in the prognosis of EGFR mutation in NSCLC. Methods: For this purpose, the keywords non-small cell lung cancer, epidermal growth factor receptor mutation, semantic, radiomics, feature, receiver operating characteristic curve (ROC), and area under the curve (AUC) were searched in PubMed and Google Scholar. In total, 29 papers were reviewed, and the AUCs of the ROC analyses for semantic and radiomics features were compared. Results: The results showed that the reported AUC values for semantic features (ground glass opacity, shape, margins, lesion density, and presence or absence of air bronchogram, emphysema, and pleural effusion) were 41%-79%. For radiomics features (kurtosis, skewness, entropy, texture, standard deviation (SD), and wavelet), the AUC values were 50%-86%. Conclusions: In conclusion, the accuracy of radiomics analysis is slightly higher than that of semantic analysis in the prognosis of EGFR mutation in NSCLC.
Keywords: lung cancer, radiomics, computer tomography, mutation
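For readers unfamiliar with how such an AUC comparison is carried out, a small sketch follows using synthetic semantic and radiomics feature matrices; the data are random placeholders, not values from the reviewed papers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)                      # EGFR mutation status (synthetic)

# Synthetic feature sets: 7 semantic features vs. 6 radiomics features.
X_semantic = rng.normal(size=(n, 7)) + 0.3 * y[:, None]
X_radiomics = rng.normal(size=(n, 6)) + 0.6 * y[:, None]

def auc_of(X, y):
    """Cross-validated AUC of a logistic model on one feature set."""
    scores = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                               cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, scores)

print(f"semantic AUC  ≈ {auc_of(X_semantic, y):.2f}")
print(f"radiomics AUC ≈ {auc_of(X_radiomics, y):.2f}")
```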
Procedia PDF Downloads 167
4863 Transformers in Gene Expression-Based Classification
Authors: Babak Forouraghi
Abstract:
A genetic circuit is a collection of interacting genes and proteins that enable individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and the production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expressions in genetic circuit designs. The main reason for using transformers is their innate ability (the attention mechanism) to take into account the semantic context present in long DNA chains, which is heavily dependent on the spatial representation of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, greatly suffer from the vanishing gradient and low-efficiency problems when they sequentially process past states and compress contextual information into a bottleneck with long input sequences. In other words, these architectures are not equipped with the necessary attention mechanisms to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations of previous approaches, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with its attention mechanism. In a previous work on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machine, and Artificial Neural Networks, were able to achieve reasonably high R2 accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier does not depend on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
Keywords: transformers, generative ai, gene expression design, classification
Procedia PDF Downloads 59
4862 Management of Acute Appendicitis with Preference on Delayed Primary Suturing of Surgical Incision
Authors: N. A. D. P. Niwunhella, W. G. R. C. K. Sirisena
Abstract:
Appendicitis is one of the most frequently encountered abdominal emergencies worldwide. Proper clinical diagnosis and appendicectomy with minimal postoperative complications are therefore priorities. The aim of this study was to ascertain the overall management of acute appendicitis in Sri Lanka, with special reference to delayed primary suturing of the surgical site, and to compare it with other local and international treatment outcomes. Data were collected prospectively from 155 patients who underwent appendicectomy following clinical and radiological diagnosis with ultrasonography. Histological assessment was done for all the specimens. All perforated appendices were managed with delayed primary closure. Patients were followed up for 28 days to assess complications. The mean age at presentation was 27 years; the mean pre-operative waiting time following admission was 24 hours; the average hospital stay was 72 hours; the accuracy of clinical diagnosis of appendicitis as confirmed by histology was 87.1%; the postoperative wound infection rate was 8.3%, and among these, 5% had perforated appendices; 4 patients had postoperative complications managed without re-opening. There was no fistula formation or mortality reported. The current study was compared with previously published data: a comparison of the management of acute appendicitis in Sri Lanka vs. the United Kingdom (UK). The diagnosis in the current study was equally accurate, but postoperative complications were significantly reduced (current study: 9.6%; compared Sri Lankan study: 16.4%; compared UK study: 14.1%). During recent years, there has been an exponential rise in the use of Computerised Tomography (CT) imaging in the assessment of patients with acute appendicitis. Even so, the diagnostic accuracy achieved without CT and the treatment outcomes of acute appendicitis in this study match other local studies as well as the UK data. Therefore, CT usage has not increased the diagnostic accuracy of acute appendicitis significantly. In particular, delayed primary closure may have reduced the postoperative wound infection rate for ruptured appendices; we therefore suggest this approach for further evaluation as a safe and effective practice in other hospitals worldwide.
Keywords: acute appendicitis, computerised tomography, diagnostic accuracy, delayed primary closure
Procedia PDF Downloads 167
4861 Using Divergent Nozzle with Aerodynamic Lens to Focus Nanoparticles
Authors: Hasan Jumaah Mrayeh, Fue-Sang Lien
Abstract:
ANSYS Fluent is used for the Computational Fluid Dynamics (CFD) simulations behind the efficient lens and nozzle design explained in this paper. We have designed and characterized an aerodynamic lens and a divergent nozzle for a focusing flow that transmits sub-25 nm particles through the aerodynamic lens. The design of the lens and nozzle has been improved using CFD for particle trajectories. We set up a case for calculating 25 nm nanoparticles flowing through the aerodynamic lens and divergent nozzle. The nanoparticles are transported by air, which is pumped into the aerodynamic lens through the nozzle at 1 atmosphere of pressure. We have also developed a computational methodology that can determine the exact focusing characteristics of aerodynamic lens systems. Particle trajectories were traced using the Lagrangian approach. The simulation shows the ability of the aerodynamic lens to focus 25 nm particles when a divergent nozzle is used.
Keywords: aerodynamic lens, divergent nozzle, ANSYS Fluent, Lagrange approach
Procedia PDF Downloads 306
4860 A Computational Study of Very High Turbulent Flow and Heat Transfer Characteristics in Circular Duct with Hemispherical Inline Baffles
Authors: Dipak Sen, Rajdeep Ghosh
Abstract:
This paper presents a computational study of steady-state, three-dimensional, very high turbulent flow and heat transfer characteristics in a constant-temperature-surfaced circular duct fitted with 90° hemispherical inline baffles. The computations are based on the realizable k-ɛ model with the standard wall function considering the finite volume method, and the SIMPLE algorithm has been implemented. Computations are carried out for Reynolds numbers, Re, ranging from 80,000 to 120,000, a Prandtl number, Pr, of 0.73, and pitch ratios, PR, of 1, 2, 3, 4, and 5 based on the hydraulic diameter of the channel, the hydrodynamic entry length, the thermal entry length, and the test section. Ansys Fluent 15.0 software has been used to solve the flow field. The study reveals that the circular pipe with baffles has a higher Nusselt number and friction factor compared to the smooth circular pipe without baffles. The maximum Nusselt number and friction factor are obtained for PR = 5 and PR = 1, respectively. The Nusselt number increases as the pitch ratio increases in the range of the study; the friction factor, however, decreases up to PR = 3, after which it remains almost constant up to PR = 5. The thermal enhancement factor increases with increasing pitch ratio but decreases slightly with increasing Reynolds number in the range of the study and becomes almost constant at higher Reynolds numbers. The computational results reveal that the optimum thermal enhancement factor of the 90° inline hemispherical baffles is about 1.23 for pitch ratio 5 at Reynolds number 120,000. They also show that the optimum pitch ratio at which the baffles should be installed in such very high turbulent flows is 5. The results show that pitch ratio and Reynolds number play an important role in both the fluid flow and heat transfer characteristics.
Keywords: friction factor, heat transfer, turbulent flow, circular duct, baffle, pitch ratio
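A short sketch of how a thermal enhancement factor of this kind is evaluated appears below, assuming the common Dittus-Boelter and Blasius correlations for the smooth-pipe baselines; the paper does not state which baseline correlations were used, and the baffled-duct values in the sketch are placeholders chosen only for illustration.

```python
Re = 120_000          # Reynolds number
Pr = 0.73             # Prandtl number

# Smooth-pipe baselines (assumed correlations):
Nu_s = 0.023 * Re**0.8 * Pr**0.4          # Dittus-Boelter
f_s = 0.316 * Re**-0.25                   # Blasius

# Placeholder values for the baffled duct at PR = 5 (illustrative only).
Nu_b = 1.6 * Nu_s
f_b = 2.2 * f_s

# Thermal enhancement factor at equal pumping power.
TEF = (Nu_b / Nu_s) / (f_b / f_s) ** (1.0 / 3.0)
print(f"Nu_s = {Nu_s:.1f}, f_s = {f_s:.4f}, TEF = {TEF:.2f}")
```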
Procedia PDF Downloads 372
4859 Approach of Measuring System Analyses for Automotive Part Manufacturing
Authors: S. Homrossukon, S. Sansureerungsigun
Abstract:
This work aims to introduce an efficient, standardized approach to measurement system analysis for the automotive industry. The study started with a literature review on measurement system management and analysis. An approach to measurement system management was then constructed. This approach was validated by collecting current measurement system data using the equipment of interest, including a vernier caliper and a micrometer. The accuracy and precision of their measurements were analyzed. Finally, the measurement system was improved and evaluated. The study showed that the vernier caliper did not meet its measurement characteristics in terms of linearity, whereas all the equipment lacked the required measurement precision characteristics. Consequently, the causes of measurement variation in the equipment of interest were identified. After the improvement, their measurement performance was found to be acceptable against the required standard. Finally, the standardized approach for analyzing automotive measurement systems was concluded.
Keywords: automotive part manufacturing measurement, measuring accuracy, measuring precision, measurement system analyses
Procedia PDF Downloads 311
4858 Frequency Recognition Models for Steady State Visual Evoked Potential Based Brain Computer Interfaces (BCIs)
Authors: Zeki Oralhan, Mahmut Tokmakçı
Abstract:
SSVEP-based brain-computer interface (BCI) systems have been preferred because of their high information transfer rate (ITR) and practical use. ITR is the parameter describing overall BCI performance, and one requirement for a high ITR value is that the BCI system has high accuracy. In this study, we investigated how to recognize SSVEPs in a shorter time and with a lower error rate. In the experiment, there were 8 flickers on a liquid crystal display (LCD). Participants gazed at the flicker with a 12 Hz frequency and a 50% duty cycle ratio on the LCD for 10 seconds. During the experiment, EEG signals were acquired via an EEG device. The EEG data were filtered in the preprocessing session. After that, the Canonical Correlation Analysis (CCA), Multiset CCA (MsetCCA), phase constrained CCA (PCCA), and Multiway CCA (MwayCCA) methods were applied to the data. The highest average accuracy value was reached when MsetCCA was applied.
Keywords: brain computer interface, canonical correlation analysis, human computer interaction, SSVEP
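A minimal sketch of the standard CCA-based frequency recognition step follows, using synthetic multichannel EEG and sinusoidal reference signals; the channel count, harmonics, and candidate frequencies are illustrative, and MsetCCA, PCCA, and MwayCCA extend this same idea.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

fs, T = 250, 2.0                       # sampling rate (Hz) and window length (s)
t = np.arange(0, T, 1 / fs)
candidate_freqs = [8, 10, 12, 15]      # flicker frequencies on the display

# Synthetic 8-channel EEG segment containing a 12 Hz SSVEP plus noise.
rng = np.random.default_rng(1)
eeg = 0.5 * np.sin(2 * np.pi * 12 * t)[:, None] + rng.normal(size=(t.size, 8))

def reference(f, n_harmonics=2):
    """Sine/cosine reference set for one candidate frequency."""
    refs = [fn(2 * np.pi * h * f * t) for h in range(1, n_harmonics + 1)
            for fn in (np.sin, np.cos)]
    return np.column_stack(refs)

def canonical_corr(X, Y):
    """Largest canonical correlation between two signal sets."""
    cca = CCA(n_components=1)
    u, v = cca.fit_transform(X, Y)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

# Score each candidate frequency and pick the one with the largest correlation.
scores = {f: canonical_corr(eeg, reference(f)) for f in candidate_freqs}
print("recognized frequency:", max(scores, key=scores.get), "Hz", scores)
```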
Procedia PDF Downloads 266
4857 Computational Modeling of Combustion Wave in Nanoscale Thermite Reaction
Authors: Kyoungjin Kim
Abstract:
Nanoscale thermites, such as the composite mixture of nano-sized aluminum and molybdenum trioxide powders, possess several technical advantages, such as a much higher reaction rate and shorter ignition delay, when compared to conventional energetic formulations made of micron-sized metal and oxidizer particles. In this study, the self-propagation of the combustion wave in compacted pellets of nanoscale thermite composites is modeled and computationally investigated by utilizing the activation energy reduction of aluminum particles due to nanoscale particle sizes. The present computational model predicts the speed of combustion wave propagation, which is in good agreement with the corresponding thermite reaction experiments. Several characteristics of the thermite reaction in nanoscale composites are also discussed, including the ignition delay and combustion wave structures.
Keywords: nanoparticles, thermite reaction, combustion wave, numerical modeling
Procedia PDF Downloads 380
4856 Thermal and Acoustic Design of Mobile Hydraulic Vehicle Engine Room
Authors: Homin Kim, Hyungjo Byun, Jinyoung Do, Yongil Lee, Hyunho Shin, Seungbae Lee
Abstract:
The engine room of a mobile hydraulic vehicle is densely packed with an engine and many hydraulic components, most of which generate heat and sound. Though a hydraulic oil cooler, ATF cooler, axle oil cooler, etc. are added to the vehicle cooling system, overheating may cause downgraded performance and frequent failures. In order to improve the thermal and acoustic environment of the engine room, computational approaches based on Computational Fluid Dynamics (CFD) and the Boundary Element Method (BEM) are used together with the necessary modal analysis of the belt-driven system. The engine room design layout and process, which satisfies the design objectives for sound power level and for the temperature levels of the radiator water, charge air cooler, transmission, and hydraulic oil coolers, is discussed.
Keywords: acoustics, CFD, engine room design, mobile hydraulics
Procedia PDF Downloads 327
4855 Medical Neural Classifier Based on Improved Genetic Algorithm
Authors: Fadzil Ahmad, Noor Ashidi Mat Isa
Abstract:
This study introduces an improved genetic algorithm procedure that focuses the search around near-optimal solutions corresponding to a group of elite chromosomes. This is achieved through a novel crossover technique known as Segmented Multi Chromosome Crossover. It preserves the highly important information contained in a gene segment of an elite chromosome and allows an offspring to carry information from the gene segments of multiple chromosomes. In this way, the algorithm has a better chance of effectively exploring the solution space. The improved GA is applied to the automatic and simultaneous parameter optimization and feature selection of an artificial neural network in the pattern recognition of medical problems, namely cancer and diabetes. The experimental results show that the average classification accuracy on the cancer and diabetes datasets has improved by 0.1% and 0.3%, respectively, using the new algorithm.
Keywords: genetic algorithm, artificial neural network, pattern classification, classification accuracy
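The abstract does not spell out the operator's exact mechanics, so the following is only a hedged interpretation: a sketch in which a fixed set of gene segments is filled by randomly chosen donors from an elite pool. The segment boundaries, pool size, and real-valued encoding are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n_genes = 12
segments = [(0, 4), (4, 8), (8, 12)]      # assumed gene-segment boundaries

# A small pool of elite chromosomes (placeholder encodings of ANN
# parameters / feature-selection bits).
elite_pool = rng.random((5, n_genes))

def segmented_multi_chromosome_crossover(elite_pool, segments, rng):
    """Build an offspring whose segments come from multiple elite parents."""
    offspring = np.empty(elite_pool.shape[1])
    for start, end in segments:
        donor = rng.integers(elite_pool.shape[0])   # pick an elite donor per segment
        offspring[start:end] = elite_pool[donor, start:end]
    return offspring

child = segmented_multi_chromosome_crossover(elite_pool, segments, rng)
print(child)
```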
Procedia PDF Downloads 474
4854 Environmental Monitoring by Using Unmanned Aerial Vehicle (UAV) Images and Spatial Data: A Case Study of Mineral Exploitation in Brazilian Federal District, Brazil
Authors: Maria De Albuquerque Bercot, Caio Gustavo Mesquita Angelo, Daniela Maria Moreira Siqueira, Augusto Assucena De Vasconcellos, Rodrigo Studart Correa
Abstract:
Mining is an important socioeconomic activity in Brazil, although it negatively impacts the environment. Mineral operations cause irreversible changes in topography, removal of vegetation and topsoil, habitat destruction, displacement of fauna, loss of biodiversity, soil erosion, and siltation of watercourses, and have the potential to exacerbate climate change. Due to these impacts and its pollution potential, mining activity in Brazil is legally subject to environmental licensing. Unlicensed mining operations, or operations that do not abide by the terms of an obtained license, are treated as environmental crimes in the country. This work reports a case analyzed in the Forensic Institute of the Brazilian Federal District Civil Police. The case consisted of detecting illegal aspects of sand exploitation at a licensed mine in the Federal District, near the city of Brasilia. The fieldwork covered an area of roughly 6 ha, which was surveyed with an unmanned aerial vehicle (UAV) (PHANTOM 3 ADVANCED). The overflight with the UAV took about 20 min, with a maximum flight height of 100 m. A total of 592 georeferenced UAV images were obtained and processed in photogrammetric software (AGISOFT PHOTOSCAN 1.1.4), which generated a mosaic of geo-referenced images and a 3D model in less than six working hours. The 3D model was analyzed in forensic software (MAPTEK I-SITE FORENSIC 2.2) for accurate modeling and volumetric analysis. To ensure the 3D model was a true representation of the mine site, the coordinates of ten control points and reference measures were taken during fieldwork and compared to the respective spatial data in the model. Finally, these spatial data were used to measure the mining area, excavation depth, and volume of exploited sand. Results showed that the mine holder had not complied with some terms and conditions stated in the granted license, such as sand exploitation beyond the authorized extent, depth, and volume. The ease, accuracy, and speed of the procedures used in this case highlight the employment of UAV imagery and computational photogrammetry as efficient tools for outdoor forensic exams, especially on environmental issues.
Keywords: computational photogrammetry, environmental monitoring, mining, UAV
Procedia PDF Downloads 319
4853 A Computer-Aided System for Detection and Classification of Liver Cirrhosis
Authors: Abdel Hadi N. Ebraheim, Eman Azomi, Nefisa A. Fahmy
Abstract:
This paper designs and implements a computer-aided system (CAS) to help detect and diagnose liver cirrhosis in patients with Chronic Hepatitis C. Our system reduces the features (tests) the patient is asked to undergo to their minimal, most informative subset, with a diagnostic accuracy above 99%, hence saving both time and cost. As classifiers, we use the Support Vector Machine (SVM) with cross-validation, a Multilayer Perceptron Neural Network (MLP), and a Generalized Regression Neural Network (GRNN) that employs a basis of radial functions for functional approximation. Our system is tested on 199 subjects, 99 of them with Chronic Hepatitis C. The subjects were selected from the outpatient clinic of the National Hepatology and Tropical Medicine Research Institute (NHTMRI).
Keywords: liver cirrhosis, artificial neural network, support vector machine, multi-layer perceptron, classification, accuracy
Procedia PDF Downloads 461
4852 Applying Unmanned Aerial Vehicle on Agricultural Damage: A Case Study of the Meteorological Disaster on Taiwan Paddy Rice
Authors: Chiling Chen, Chiaoying Chou, Siyang Wu
Abstract:
Taiwan is located in the western Pacific Ocean, at the intersection of continental and marine climates. Typhoons frequently strike Taiwan and bring meteorological disasters, i.e., heavy flooding, landslides, loss of life and property, etc. Global climate change brings more extreme meteorological disasters. Thus, developing techniques to improve disaster prevention and mitigation is needed, and improving rescue processes and rehabilitation is important as well. In this study, UAVs (Unmanned Aerial Vehicles) are applied to take instant images for improving the disaster investigation and rescue processes. Paddy rice fields in central Taiwan are the study area; they were struck by heavy rain during the monsoon season in June 2016. UAV images provide high ground resolution (3.5 cm) together with 3D point clouds to develop image discrimination techniques and a digital surface model (DSM) for rice lodging. Firstly, supervised image classification with the Maximum Likelihood Method (MLD) is used to delineate the area of rice lodging. Secondly, the 3D point clouds generated by Pix4D Mapper are used to develop the DSM for classifying the lodging levels of the paddy rice. As results, the discrimination accuracy of rice lodging is 85% by supervised image classification, and the classification accuracy of the lodging level is 87% by DSM. Therefore, UAVs not only provide instant images of agricultural damage after a meteorological disaster, but the image discrimination of rice lodging also reaches acceptable accuracy (>85%). In the future, UAV and image discrimination technologies will be applied to different crop fields, and the results of image discrimination will be overlapped with the administrative boundaries of paddy rice to establish a GIS-based assist system for agricultural damage discrimination. Therefore, the time and labor required for damage detection and monitoring would be greatly reduced.
Keywords: Monsoon, supervised classification, Pix4D, 3D point clouds, discriminate accuracy
Procedia PDF Downloads 300
4851 Effect of Geometry on the Aerodynamic Performance of Darrieus H Type Vertical Axis Wind Turbine
Authors: Belkheir Noura, Rabah Kerfah, Boumehani Abdellah
Abstract:
The influence of solidity variations on the aerodynamic performance of an H-type vertical axis wind turbine is studied in this paper. The wind turbine model used in this paper is a three-blade wind turbine with the symmetrical airfoil NACA0021. The length of the chord is 0.265 m. Numerical investigations were carried out for different solidities by changing the radius and blade number. A two-dimensional model of the wind turbine is employed. A Reynolds-Averaged Navier–Stokes approach, completed by the k-ω SST turbulence model, is used. The moving mesh capability of a computational fluid dynamics (CFD) solver is used. For each value of the solidity, the aerodynamic performance and the characteristics of the flow field are studied at several values of the tip speed ratio, from λ = 0.5 to λ = 3, with an incoming wind speed of 8 m/s. The results show that increasing the number of blades reduces the maximum value of the power coefficient of the wind turbine. Also, a VAWT with lower solidity can obtain its maximum Cp at a higher tip speed ratio. The effects of changing the radius and the blade number on aerodynamic performance are almost the same. Finally, for validation, experimental data from the literature and the computational results were compared. In conclusion, studying the influence of solidity on the performance of the wind turbine provides a reference for the design of H-type vertical axis wind turbines.
Keywords: wind energy, darrieus h type vertical axis wind turbine, computational fluid dynamic, solidity
Procedia PDF Downloads 96
4850 Performance Improvement in a Micro Compressor for Micro Gas Turbine Using Computational Fluid Dynamics
Authors: Kamran Siddique, Hiroyuki Asada, Yoshifumi Ogami
Abstract:
The micro gas turbine (MGT) nowadays has a wide variety of applications, from drones to hybrid electric vehicles. As microfabrication technology gets better, the size of the MGT is getting smaller. The overall performance of the MGT depends on its individual components, and each component's performance is interrelated with that of the other components. Therefore, careful consideration needs to be given to each and every individual component of the MGT. In this study, the focus is on improving the performance of the compressor in order to improve the overall performance of the MGT. Computational Fluid Dynamics (CFD) is performed using the software FLUENT to analyze the design of a micro compressor. Operating parameters like mass flow rate and RPM, and design parameters like inner blade angle (IBA), outer blade angle (OBA), blade thickness, and number of blades are varied to study their effect on the performance of the compressor. The pressure ratio is used as a tool to measure the performance of the compressor: the higher the pressure ratio, the better the design. In the study, the target mass flow rate is 0.2 g/s and the RPM is to be less than or equal to 900,000. So far, a pressure ratio above 3 has been achieved at a 0.2 g/s mass flow rate with 5 rotor blades, 0.36 mm blade thickness, 94.25 degrees OBA, and 10.46 degrees IBA. The design in this study differs from the regular centrifugal compressor used in conventional gas turbines in that the compressor is designed with ease of manufacturability in mind. Thus, this study proposes a compressor design which has a good pressure ratio and, at the same time, is easy to manufacture using current microfabrication technologies.
Keywords: computational fluid dynamics, FLUENT, microfabrication, RPM
Procedia PDF Downloads 162
4849 Traffic Sign Recognition System Using Convolutional Neural Network
Authors: Devineni Vijay Bhaskar, Yendluri Raja
Abstract:
We propose a model for traffic sign detection based on Convolutional Neural Networks (CNN). We first convert the original image into a grayscale image with support vector machines, then use convolutional neural networks with fixed and learnable layers for detection and recognition. The fixed layer can reduce the number of attention areas to detect and crop the limits very close to the boundaries of traffic signs. The learnable layers can raise the accuracy of detection significantly. In addition, we use bootstrap procedures to improve the accuracy and avoid the overfitting problem. On the German Traffic Sign Detection Benchmark, we obtained competitive results, with an area under the precision-recall curve (AUC) of 99.49% in the group “Risk” and an AUC of 96.62% in the group “Obligatory”.
Keywords: convolutional neural network, support vector machine, detection, traffic signs, bootstrap procedures, precision-recall curve
Procedia PDF Downloads 122
4848 A Lightweight Pretrained Encrypted Traffic Classification Method with Squeeze-and-Excitation Block and Sharpness-Aware Optimization
Authors: Zhiyan Meng, Dan Liu, Jintao Meng
Abstract:
Dependable encrypted traffic classification is crucial for improving cybersecurity and handling the growing amount of data. Large language models have shown that learning from large datasets can be effective, making pre-trained methods for encrypted traffic classification popular. However, attention-based pre-trained methods face two main issues: their large number of parameters is not suitable for low-computation environments such as mobile devices and real-time applications, and they often overfit by getting stuck in local minima. To address these issues, we developed a lightweight transformer model, which reduces the computational parameters through lightweight vocabulary construction and a Squeeze-and-Excitation block. We use sharpness-aware optimization to avoid local minima during pre-training and capture temporal features with relative positional embeddings. Our approach keeps the model's classification accuracy high for downstream tasks. We conducted experiments on four datasets: USTC-TFC2016, VPN 2016, Tor 2016, and CICIOT 2022. Even with fewer than 18 million parameters, our method achieves classification results similar to those of methods with ten times as many parameters.
Keywords: sharpness-aware optimization, encrypted traffic classification, squeeze-and-excitation block, pretrained model
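A compact PyTorch sketch of a Squeeze-and-Excitation block for 1D feature sequences of the kind a traffic transformer produces is given below; the reduction ratio and dimensions are illustrative, and this is the generic SE design rather than the paper's exact module.

```python
import torch
import torch.nn as nn

class SEBlock1D(nn.Module):
    """Squeeze-and-Excitation over the channel dimension of (N, C, L) features."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),  # squeeze to a bottleneck
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),  # excite back to C
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squeeze: global average pool over the sequence dimension.
        s = x.mean(dim=-1)                 # (N, C)
        # Excitation: per-channel gates in (0, 1).
        w = self.fc(s).unsqueeze(-1)       # (N, C, 1)
        return x * w                       # re-weight channels

# Example: re-weight a batch of 256-channel embeddings of length 128.
se = SEBlock1D(channels=256, reduction=8)
out = se(torch.randn(4, 256, 128))
print(out.shape)  # torch.Size([4, 256, 128])
```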
Procedia PDF Downloads 30
4847 Developing Computational Thinking in Early Childhood Education
Authors: Kalliopi Kanaki, Michael Kalogiannakis
Abstract:
Nowadays, in the digital era, the early acquisition of basic programming skills and knowledge is encouraged, as it facilitates students’ exposure to computational thinking and empowers their creativity, problem-solving skills, and cognitive development. More and more researchers and educators investigate the introduction of computational thinking in K-12, since it is expected to be a fundamental skill for everyone by the middle of the 21st century, just like reading, writing, and arithmetic are at the moment. In this paper, a doctoral research in progress is presented, which investigates the infusion of computational thinking into the science curriculum in early childhood education. The whole attempt aims to develop young children’s computational thinking by introducing them to the fundamental concepts of object-oriented programming in an enjoyable, yet educational framework. The backbone of the research is the digital environment PhysGramming (an abbreviation of Physical Science Programming), which provides children with the opportunity to create their own digital games, turning them from passive consumers into active creators of technology. PhysGramming deploys an innovative hybrid schema of visual and text-based programming techniques, with emphasis on object-orientation. Through PhysGramming, young students are familiarized with basic object-oriented programming concepts, such as classes, objects, and attributes, while, at the same time, getting a view of object-oriented programming syntax. Nevertheless, the most noteworthy feature of PhysGramming is that children create their own digital games within the context of physical science courses, in a way that provides familiarization with the basic principles of object-oriented programming and computational thinking, even though no specific reference is made to these principles. Attuned to the ethical guidelines of educational research, the interventions were conducted in two classes of second grade. The interventions were designed with respect to the thematic units of the curriculum of physical science courses, as a part of the learning activities of the class. PhysGramming was integrated into the classroom after short introductory sessions. During the interventions, 6-7 year old children worked in pairs on computers and created their own digital games (group games, matching games, and puzzles). The authors participated in these interventions as observers in order to achieve a realistic evaluation of the proposed educational framework concerning its applicability in the classroom and its educational and pedagogical perspectives. To better examine whether the objectives of the research are met, the investigation was focused on six criteria: the educational value of PhysGramming, its engaging and enjoyable characteristics, its child-friendliness, its appropriateness for the purpose that is proposed, its ability to monitor the user’s progress, and its individualizing features. In this paper, the functionality of PhysGramming and the philosophy of its integration in the classroom are both described in detail. Information about the implemented interventions and the results obtained is also provided. Finally, several limitations of the research conducted that deserve attention are noted.
Keywords: computational thinking, early childhood education, object-oriented programming, physical science courses
Procedia PDF Downloads 120
4846 Simulation of Optimal Runoff Hydrograph Using Ensemble of Radar Rainfall and Blending of Runoffs Model
Authors: Myungjin Lee, Daegun Han, Jongsung Kim, Soojun Kim, Hung Soo Kim
Abstract:
Recently, localized heavy rainfall and typhoons have occurred frequently due to climate change, and the resulting damage is becoming larger. Therefore, we may need more accurate predictions of rainfall and runoff. However, gauge rainfall has limited accuracy in space. Radar rainfall is better than gauge rainfall for explaining the spatial variability of rainfall, but it is mostly underestimated and involves uncertainty. Therefore, an ensemble of radar rainfall was simulated using an error structure, together with gauge rainfall, to overcome this uncertainty. The simulated ensemble was used as the input data for the rainfall-runoff models to obtain an ensemble of runoff hydrographs. Previous studies have discussed the accuracy of rainfall-runoff models: even if the same input data, such as rainfall, are used for runoff analysis with different models in the same basin, the models can give different results because of the uncertainty involved in them. Therefore, we used two models, the SSARR model, which is a lumped model, and the Vflo model, which is a distributed model, and tried to simulate the optimum runoff considering the uncertainty of each rainfall-runoff model. The study basin is located in the Han River basin, and we obtained one integrated runoff hydrograph, which is an optimum runoff hydrograph, using blending methods such as Multi-Model Super Ensemble (MMSE), Simple Model Average (SMA), and Mean Square Error (MSE). From this study, we could confirm the accuracy of the rainfall and rainfall-runoff models using ensemble scenarios and various rainfall-runoff models, and we can use this result to study flood control measures under climate change. Acknowledgements: This work is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 18AWMP-B083066-05).
Keywords: radar rainfall ensemble, rainfall-runoff models, blending method, optimum runoff hydrograph
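A small sketch of two of the blending schemes named above, the Simple Model Average and an inverse-MSE weighted blend, applied to toy hydrographs follows. The hydrograph values are placeholders, and the MMSE blend, which fits regression weights, is omitted here.

```python
import numpy as np

# Toy observed hydrograph and two simulated hydrographs (m^3/s).
observed = np.array([10, 40, 120, 260, 180, 90, 40, 20], dtype=float)
ssarr = np.array([12, 35, 100, 240, 200, 100, 45, 25], dtype=float)   # lumped model
vflo = np.array([8, 45, 135, 280, 165, 80, 35, 18], dtype=float)      # distributed model

# Simple Model Average (SMA): equal weights.
sma = (ssarr + vflo) / 2

# MSE-based blend: weights inversely proportional to each model's MSE.
mse = np.array([np.mean((observed - ssarr) ** 2),
                np.mean((observed - vflo) ** 2)])
w = (1 / mse) / np.sum(1 / mse)
mse_blend = w[0] * ssarr + w[1] * vflo

for name, q in [("SSARR", ssarr), ("Vflo", vflo), ("SMA", sma), ("MSE blend", mse_blend)]:
    print(f"{name:9s} RMSE = {np.sqrt(np.mean((observed - q) ** 2)):6.2f}")
```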
Procedia PDF Downloads 280
4845 Virtual Prototyping of LED Chip Scale Packaging Using Computational Fluid Dynamic and Finite Element Method
Authors: R. C. Law, Shirley Kang, T. Y. Hin, M. Z. Abdullah
Abstract:
LED technology has been evolving aggressively in recent years, from the incandescent bulb of older days to packages as small as the chip scale package, and it will continue to stay bright in the future. As such, there is tremendous pressure to stay competitive in the market by optimizing products to the next level of performance and reliability with the shortest time to market. This changes the conventional way of product design and development to virtual prototyping by means of Computer Aided Engineering (CAE), which comprises the deployment of the Finite Element Method (FEM) and Computational Fluid Dynamics (CFD). FEM accelerates the investigation for early detection of failures such as cracks, improves the thermal performance of the system, and enhances solder joint reliability. CFD helps to simulate the flow pattern of the molding material as a function of different temperatures and molding parameter settings to evaluate failures like voids and displacement. This paper briefly discusses the procedures and applications of FEM in thermal stress and solder joint reliability and of CFD in compression molding of LED CSP. The integration of virtual prototyping in product development has greatly reduced the time to market. Many successful achievements, with a minimized number of evaluation iterations required across material, process setting, and package architecture variants, have been realized with this approach.
Keywords: LED, chip scale packaging (CSP), computational fluid dynamic (CFD), virtual prototyping
Procedia PDF Downloads 287
4844 Machine Learning Classification of Fused Sentinel-1 and Sentinel-2 Image Data Towards Mapping Fruit Plantations in Highly Heterogenous Landscapes
Authors: Yingisani Chabalala, Elhadi Adam, Khalid Adem Ali
Abstract:
Mapping smallholder fruit plantations using optical data is challenging due to morphological landscape heterogeneity and crop types having overlapping spectral signatures. Furthermore, cloud cover limits the use of optical sensing, especially in subtropical climates where it is persistent. This research assessed the effectiveness of Sentinel-1 (S1) and Sentinel-2 (S2) data for mapping fruit trees and co-existing land-use types by using support vector machine (SVM) and random forest (RF) classifiers independently. These classifiers were also applied to fused data from the two sensors. Feature ranks were extracted using the RF mean decrease accuracy (MDA) and forward variable selection (FVS) to identify optimal spectral windows for classifying fruit trees. Based on RF MDA and FVS, the SVM classifier resulted in relatively high classification accuracy, with an overall accuracy (OA) of 91.6% and a kappa coefficient of 0.91, when applied to the fused satellite data. Application of SVM to S1, S2, the S2 selected variables, and the S1S2 fusion independently produced OA = 27.64% and kappa coefficient = 0.13; OA = 87% and kappa coefficient = 86.89%; OA = 69.33% and kappa coefficient = 69%; and OA = 87.01% and kappa coefficient = 87%, respectively. Results also indicated that the optimal spectral bands for fruit tree mapping are green (B3) and SWIR_2 (B10) for S2, whereas for S1 it is the vertical-horizontal (VH) polarization band. Including the textural metrics from the VV channel improved the discrimination of crops and co-existing land use/cover types. The fusion approach proved robust and well-suited for accurate smallholder fruit plantation mapping.
Keywords: smallholder agriculture, fruit trees, data fusion, precision agriculture
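A brief sketch of a feature-ranking and SVM classification workflow of this kind is shown below on synthetic fused features; permutation importance is used here as a stand-in for the RF mean decrease accuracy, and the feature counts and class labels are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
# Synthetic fused feature stack: e.g., 10 S2 bands + 2 S1 polarizations (VV, VH).
X = rng.normal(size=(n, 12))
y = (X[:, 2] + X[:, 10] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # toy classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Rank features with a random forest (permutation importance as an MDA stand-in).
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:4]
print("top-ranked feature indices:", top)

# Train an SVM on the selected spectral window.
svm = SVC(kernel="rbf").fit(X_tr[:, top], y_tr)
print("SVM accuracy on selected features:", svm.score(X_te[:, top], y_te))
```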
Procedia PDF Downloads 54
4843 A Mechanical Diagnosis Method Based on Vibration Fault Signal Down-Sampling and the Improved One-Dimensional Convolutional Neural Network
Authors: Bowei Yuan, Shi Li, Liuyang Song, Huaqing Wang, Lingli Cui
Abstract:
Convolutional neural networks (CNN) have received extensive attention in the field of fault diagnosis, and many fault diagnosis methods use CNNs for fault type identification. However, when the amount of raw data collected by sensors is massive, the neural network needs to perform a time-consuming classification task. In this paper, a mechanical fault diagnosis method based on vibration signal down-sampling and an improved one-dimensional convolutional neural network is proposed. Through robust principal component analysis, the low-rank feature matrix of a large amount of raw data can be separated, and down-sampling is then realized to reduce the amount of subsequent calculation. In the improved one-dimensional CNN, a smaller convolution kernel is used to reduce the number of parameters and the computational complexity, and regularization is introduced before the fully connected layer to prevent overfitting. In addition, the multiple connected layers can better generalize classification results without cumbersome parameter adjustments. The effectiveness of the method is verified by monitoring the signals of a centrifugal pump test bench, and the average test accuracy is above 98%. When compared with the traditional deep belief network (DBN) and support vector machine (SVM) methods, this method has better performance.
Keywords: fault diagnosis, vibration signal down-sampling, 1D-CNN
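A brief PyTorch sketch of a 1D CNN in the spirit described above follows, with small convolution kernels and dropout/weight-decay regularization before the fully connected layers. The channel sizes, kernel size, and class count are illustrative assumptions, and the RPCA down-sampling step is assumed to have been applied upstream.

```python
import torch
import torch.nn as nn

class Small1DCNN(nn.Module):
    """1D CNN with small kernels for classifying down-sampled vibration segments."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),   # small kernel
            nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(0.5),                               # regularization before FC
            nn.Linear(32 * 256, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):                                  # x: (N, 1, 1024)
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = Small1DCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
logits = model(torch.randn(8, 1, 1024))                    # segments of 1024 samples
print(logits.shape)                                        # torch.Size([8, 4])
```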
Procedia PDF Downloads 131
4842 Localization Mobile Beacon Using RSSI
Authors: Sallama Resen, Celal Öztürk
Abstract:
Distance estimation between two nodes has a wide scope of surveillance and tracking applications. This paper suggests Bluetooth Low Energy (BLE) technology as a medium for transmitting and receiving signals in small indoor areas. As an example, BLE communication technologies are used in child safety domains. A local network is designed to detect a child's position in an indoor school area, consisting of Mobile Beacons (MB), Access Points (AP), and Smart Phones (SP), where the MBs are attached to children's shoes as wearable sensors. This paper presents a technique that can detect the mobile beacons' positions and help find children's locations within a dynamic environment. By means of Bluetooth beacons attached to a child's shoes, the distance between the MB and the teacher's SP is estimated with an accuracy of less than one meter. The simulation results show that high accuracy of position coordinates is achieved for multiple mobile beacons in different environments.
Keywords: bluetooth low energy, child safety, mobile beacons, received signal strength
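A minimal sketch of the log-distance path-loss model commonly used to turn a BLE RSSI reading into a distance estimate is shown below; the transmit power at 1 m and the path-loss exponent are assumed calibration values, not ones reported in the paper.

```python
def rssi_to_distance(rssi_dbm: float, tx_power_dbm: float = -59.0, n: float = 2.0) -> float:
    """Estimate distance (m) from RSSI using the log-distance path-loss model.

    tx_power_dbm: RSSI measured at 1 m from the beacon (assumed calibration value).
    n: path-loss exponent (about 2 in free space, higher indoors).
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * n))

# Example readings from a mobile beacon in a child's shoe.
for rssi in (-59, -65, -75):
    print(f"RSSI {rssi:4d} dBm  ->  ~{rssi_to_distance(rssi):.2f} m")
```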
Procedia PDF Downloads 346