Search results for: gradient boosting machine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3664

484 Advances of Image Processing in Precision Agriculture: Using Deep Learning Convolution Neural Network for Soil Nutrient Classification

Authors: Halimatu S. Abdullahi, Ray E. Sheriff, Fatima Mahieddine

Abstract:

Agriculture is essential to the continuous existence of human life, as humans depend on it directly for the production of food. The exponential rise in population calls for a rapid increase in food production, with technology applied to reduce laborious work and maximize output. Technology can aid and improve agriculture in several ways, both pre-planting and post-harvest, by using computer vision and image processing to determine soil nutrient composition and to apply farm inputs such as fertilizers, herbicides, and water in the right amount, at the right time, and in the right place, as well as for weed detection and early detection of pests and diseases. This is precision agriculture, which is regarded as the solution required to achieve these goals. There has been significant improvement in image and data processing, which had previously been a major challenge. A database of images is collected through remote sensing and analyzed, and a model is developed to determine the right treatment plans for different crop types and different regions. Features of vegetation images must be extracted, classified, segmented, and finally fed into the model. Different techniques have been applied to these processes, from neural networks, support vector machines, and fuzzy logic approaches to, most recently, the deep learning approach of convolutional neural networks, which has generated excellent results in image classification. Here, a deep convolutional neural network is used to determine the soil nutrients required in a plantation for maximum production. Experimental results on the developed model yielded an average accuracy of 99.58%.
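The abstract does not specify the network architecture; as a hedged illustration of the kind of convolutional classifier described, the following is a minimal Keras sketch. The input size, layer widths, and number of nutrient classes are assumptions, not the authors' configuration.

```python
# Minimal illustrative CNN for soil-nutrient image classification.
# All architectural choices (input size, filters, class count) are assumptions.
from tensorflow.keras import layers, models

NUM_CLASSES = 4            # hypothetical number of nutrient classes
INPUT_SHAPE = (64, 64, 3)  # hypothetical RGB patch size

model = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10, validation_split=0.2)
```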

Keywords: convolution, feature extraction, image analysis, validation, precision agriculture

Procedia PDF Downloads 313
483 Improving Pneumatic Artificial Muscle Performance Using Surrogate Model: Roles of Operating Pressure and Tube Diameter

Authors: Van-Thanh Ho, Jaiyoung Ryu

Abstract:

In soft robotics, the optimization of fluid dynamics through pneumatic methods plays a pivotal role in enhancing operational efficiency and reducing energy loss. This is particularly crucial when replacing conventional techniques such as cable-driven electromechanical systems. The pneumatic model employed in this study represents a sophisticated framework designed to efficiently channel pressure from a high-pressure reservoir to various muscle locations on the robot's body. This intricate network involves a branching system of tubes. The study introduces a comprehensive pneumatic model, encompassing the components of a reservoir, tubes, and Pneumatically Actuated Muscles (PAM). The development of this model is rooted in the principles of shock tube theory. Notably, the study leverages experimental data to enhance the understanding of the interplay between the PAM structure and the surrounding fluid. This improved interactive approach involves the use of morphing motion, guided by a contraction function. The study's findings demonstrate a high degree of accuracy in predicting pressure distribution within the PAM. The model's predictive capabilities ensure that the error in comparison to experimental data remains below a threshold of 10%. Additionally, the research employs a machine learning model, specifically a surrogate model based on the Kriging method, to assess and quantify uncertainty factors related to the initial reservoir pressure and tube diameter. This comprehensive approach enhances our understanding of pneumatic soft robotics and its potential for improved operational efficiency.
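Kriging is equivalent to Gaussian-process regression; as a hedged sketch of how such a surrogate could be fit over the two uncertain inputs (initial reservoir pressure and tube diameter), the following uses scikit-learn. The training points, kernel settings, and response values are placeholders, not the study's data.

```python
# Illustrative Kriging (Gaussian-process) surrogate over two design inputs.
# Training data here are synthetic placeholders, not the study's measurements.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# X: [reservoir pressure (kPa), tube diameter (mm)]; y: e.g., peak PAM pressure
X = np.array([[300, 4], [300, 6], [400, 4], [400, 6], [500, 5]], dtype=float)
y = np.array([210.0, 185.0, 265.0, 240.0, 290.0])  # hypothetical responses

kernel = ConstantKernel(1.0) * RBF(length_scale=[100.0, 1.0])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predict with an uncertainty estimate at a new operating point
mean, std = gp.predict(np.array([[350, 5]]), return_std=True)
print(f"predicted pressure: {mean[0]:.1f} +/- {std[0]:.1f}")
```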

Keywords: pneumatic artificial muscles, pressure drop, morphing motion, branched network, surrogate model

Procedia PDF Downloads 96
482 Opposed Piston Engine Crankshaft Strength Calculation Using Finite Element Method

Authors: Konrad Pietrykowski, Michał Gęca, Michał Bialy

Abstract:

The paper presents the results of a crankshaft strength simulation. The crankshaft was taken from an opposed piston engine. Calculations were made using the finite element method (FEM) in Abaqus software, which makes it possible to perform strength tests of individual machine parts as well as their assemblies. The crankshaft used in the calculations will be installed in a two-stroke aviation research engine. The assumptions for the calculations were obtained from the AVL Boost software (a one-dimensional engine cycle model) and from a multibody model developed in the MSC Adams software. The research engine will be equipped with three combustion chambers and two crankshafts. In order to shorten the calculation time, only one crankcase analysis was performed. The section of the shaft subjected to the greatest forces resulting from engine operation was selected. Calculations were made for two cases: the maximum piston force, when the maximum bending load occurs, and the maximum torque. A cast iron material was adopted, for which Poisson's ratio, density, and Young's modulus were specified. The computational mesh contained 1,977,473 tetrahedral elements; this element type was chosen because of the complex design of the crankshaft. Results are presented in the form of stress distribution and displacement maps on the surface and inside the geometry of the shaft. The results show the locations of tensile stresses; however, allowable stresses are not exceeded at any point. The shaft can thus be applied to the engine in its present form. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK 'PZL-KALISZ' S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.

Keywords: aircraft diesel engine, crankshaft, finite element method, two-stroke engine

Procedia PDF Downloads 180
481 Thick Data Techniques for Identifying Abnormality in Video Frames for Wireless Capsule Endoscopy

Authors: Jinan Fiaidhi, Sabah Mohammed, Petros Zezos

Abstract:

Capsule endoscopy (CE) is an established noninvasive diagnostic modality for investigating small bowel disease. CE has a pivotal role in assessing patients with suspected bleeding or in identifying evidence of active Crohn's disease in the small bowel. However, CE produces lengthy videos of at least eighty thousand frames, at a rate of 2 frames per second, and gastroenterologists cannot dedicate 8 to 15 hours to reading the CE video frames to arrive at a diagnosis. This is why analyzing CE videos with modern artificial intelligence techniques becomes a necessity. However, machine learning, including deep learning, has failed to report robust results because of the lack of large samples to train its neural nets. In this paper, we describe a thick data approach that learns from a few anchor images. We use well-established datasets like KVASIR and CrohnIPI to filter candidate frames that include interesting anomalies in any CE video. We identify candidate frames based on feature extraction that provides representative measures of the anomaly, such as the size of the anomaly and its color contrast against the image background, and later feed these features to a decision tree that can classify the candidate frames as showing a condition like Crohn's disease. Our thick data approach achieved an accuracy in detecting Crohn's disease, based on the presence of ulcer areas in the candidate frames, of 89.9% for KVASIR and 83.3% for CrohnIPI. We are continuing our research to fine-tune the approach by adding more thick data methods to enhance diagnosis accuracy.
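As a hedged sketch of the final stage of the pipeline described above, the following trains a decision tree on handcrafted frame features such as anomaly area and color contrast. The feature columns, thresholds, and labels are synthetic placeholders, not the paper's measurements.

```python
# Illustrative final-stage classifier: a decision tree over handcrafted
# frame features (anomaly area, color contrast vs. background).
# Features and labels below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# columns: [anomaly area (px), contrast vs. background (0-1)]
X = rng.uniform([0, 0.0], [500, 1.0], size=(200, 2))
y = ((X[:, 0] > 150) & (X[:, 1] > 0.4)).astype(int)  # toy "ulcer present" rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```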

Keywords: thick data analytics, capsule endoscopy, Crohn’s disease, siamese neural network, decision tree

Procedia PDF Downloads 155
480 A Novel Methodology for Browser Forensics to Retrieve Searched Keywords from Windows 10 Physical Memory Dump

Authors: Dija Sulekha

Abstract:

Nowadays, a good percentage of reported cybercrimes involve the usage of the Internet, directly or indirectly, in committing the crime. Usually, web browsers leave traces of browsing activities on the host computer's hard disk, which investigators can use to identify the Internet-based activities of a suspect. But criminals involved in organized crime disable the browser's file generation features to hide the evidence of their illegal activities on the Internet. In such cases, even though browser files are not generated on the storage media of the system, traces of recent and ongoing activities are generated in the physical memory of the system. As a result, the analysis of a physical memory dump collected from the suspect's machine retrieves a wealth of forensically crucial information related to the suspect's browsing history. This information enables cyber forensic investigators to concentrate on a few highly relevant artefacts when doing the offline forensic analysis of storage media. This paper addresses the reconstruction of web browsing activities by conducting live forensics to identify searched terms, downloaded files, visited sites, email headers, email IDs, etc., from physical memory dumps collected from Windows 10 systems. Well-known entry points are available for retrieving all the above artefacts except searched terms. The paper describes a novel methodology to retrieve searched terms from Windows 10 physical memory. The searched terms retrieved in this way can then be used for advanced file and keyword searches in the storage media files reconstructed from file system recovery in offline forensics.
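The paper's specific entry points are not reproduced here; as a hedged illustration of the general idea of carving query terms out of raw memory, the following scans a dump for search-engine query-string patterns in both ASCII and UTF-16LE encodings. The dump path and the query-parameter patterns are hypothetical, and matches spanning chunk boundaries are ignored in this sketch.

```python
# Illustrative scan of a raw memory dump for search-query URL patterns.
# The dump path and parameter names ("q", "query", "search_query") are
# assumptions, not the paper's methodology.
import re

PATTERN_ASCII = re.compile(rb"[?&](?:q|query|search_query)=([a-zA-Z0-9+%._\-]{1,64})")
# Same idea in UTF-16LE: each ASCII byte is followed by a NUL byte
PATTERN_U16 = re.compile(rb"[?&]\x00q\x00=\x00((?:[a-zA-Z0-9+%._\-]\x00){1,64})")

hits = set()
with open("memdump.raw", "rb") as f:          # hypothetical dump file
    while chunk := f.read(64 * 1024 * 1024):  # stream in 64 MiB chunks
        for m in PATTERN_ASCII.finditer(chunk):
            hits.add(m.group(1).decode("ascii", "replace"))
        for m in PATTERN_U16.finditer(chunk):
            hits.add(m.group(1).decode("utf-16-le", "replace"))

for term in sorted(hits):
    print(term)
```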

Keywords: browser forensics, digital forensics, live forensics, physical memory forensics

Procedia PDF Downloads 115
479 Power and Wear Reduction Using Composite Links of Crank-Rocker Mechanism with Optimum Transmission Angle

Authors: Khaled M. Khader, Mamdouh I. Elimy

Abstract:

Reducing energy consumption has become a major concern for all countries of the world in recent decades, and power saving is currently a principal goal of most industrial countries. It is well known that fossil fuels are the main pillar of development for the world's countries. Unfortunately, the increased rate of fossil fuel consumption will lead to serious problems caused by expected fuel depletion. Moreover, the emission of dangerous gases and vapors during fuel burning leads to severe environmental problems. Consequently, most engineering sectors, especially the mechanical sectors, are looking to improve their machines while reducing energy consumption. The planar crank-rocker mechanism is among the most widely applied in mechanical systems and is one of the most significant machine components for obtaining oscillatory motion. The transmission angle of this mechanism can be considered optimal when its extreme values deviate equally from 90°. The transmission angle plays an important role in decreasing the required driving power and improving the dynamic properties of the mechanism. Hence, an appropriate selection of link lengths that assures an optimum transmission angle leads to a decrease in the driving power. Moreover, mechanism links manufactured from composite materials are lightweight, which decreases the required driving torque. Furthermore, wear and corrosion problems can be treated by using composite links instead of metal ones. This paper deals with improving the performance of the crank-rocker mechanism using composite links, owing to their flexural elastic modulus and stiffness, in addition to the high damping of composite materials.
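As a hedged numerical aside, the extreme transmission angles of a four-bar crank-rocker follow the standard relation at crank angles 0° and 180°, where the coupler-rocker diagonal equals d − a and d + a. The link lengths below are hypothetical example values, not the paper's design.

```python
# Illustrative check of crank-rocker transmission-angle extremes.
# Link lengths (a: crank, b: coupler, c: rocker, d: ground) are example values.
from math import acos, degrees

def transmission_angle_extremes(a, b, c, d):
    """Extreme transmission angles occur at crank angles 0 and 180 deg,
    where the coupler-rocker diagonal equals d - a and d + a."""
    mu_at_0 = degrees(acos((b**2 + c**2 - (d - a)**2) / (2 * b * c)))
    mu_at_180 = degrees(acos((b**2 + c**2 - (d + a)**2) / (2 * b * c)))
    return mu_at_0, mu_at_180

a, b, c, d = 30.0, 100.0, 90.0, 110.0  # hypothetical lengths (mm)
mu_min, mu_max = transmission_angle_extremes(a, b, c, d)
print(f"extremes: {mu_min:.1f} deg, {mu_max:.1f} deg")
# Optimum when the extremes deviate equally from 90 deg:
print(f"deviations from 90 deg: {abs(90 - mu_min):.1f}, {abs(mu_max - 90):.1f}")
```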

Keywords: composite material, crank-rocker mechanism, transmission angle, design techniques, power saving

Procedia PDF Downloads 302
478 Design of Low-Emission Catalytically Stabilized Combustion Chamber Concept

Authors: Annapurna Basavaraju, Andreas Marn, Franz Heitmeir

Abstract:

The Advisory Council for Aeronautics Research in Europe (ACARE) calls for an overall reduction of NOx emissions by 80% in its Vision 2020. Moreover, small turbo engines have higher fuel-specific emissions compared to large engines due to their limited combustion chamber size. In order to fulfill these requirements, novel combustion concepts are essential. This motivates research into a catalytically stabilized combustion chamber using hydrogen in small jet engines, which is designed and investigated both numerically and experimentally during this project. Catalytic combustion concepts can also be adopted for low-calorific fuels and are therefore not constrained to hydrogen. However, hydrogen has a high heating value and has the major advantage of producing only nitrogen oxides as pollutants during combustion, thus eliminating concern over other emissions such as carbon monoxide. In the present work, the combustion chamber is designed based on the 'rich catalytic, lean burn' concept. The experiments are conducted for the characteristic operating range of an existing engine, which has been tested successfully at the Institute of Thermal Turbomachinery and Machine Dynamics (ITTM), Technical University Graz. Since efficient combustion is a result of proper mixing of the fuel-air mixture, considerable significance is given to the selection of an appropriate mixer. This led to the design of three diverse mixer configurations, which are investigated experimentally and numerically. Subsequently, the best mixer will be fitted in the main combustion chamber and used throughout the experiments. Furthermore, temperatures and pressures will be recorded at various locations inside the combustion chamber, and the exhaust emissions will be analyzed. The instrumented combustion chamber will be inspected at engine-relevant inlet conditions for nine different sets of catalysts at the Hot Flow Test Facility (HFTF) of the institute.

Keywords: catalytic combustion, gas turbine, hydrogen, mixer, NOx emissions

Procedia PDF Downloads 303
477 Greenhouse Controlled with Graphical Plotting in Matlab

Authors: Bruno R. A. Oliveira, Italo V. V. Braga, Jonas P. Reges, Luiz P. O. Santos, Sidney C. Duarte, Emilson R. R. Melo, Auzuir R. Alexandria

Abstract:

This project aims to build a controlled greenhouse: a structure in which a given range of temperatures (°C), produced by the radiation of an incandescent lamp, can be maintained, characterizing a kind of on-off control. Its differentiator is the plotting of temperature-versus-time graphs in MATLAB via serial communication, so the greenhouse can be connected to a computer and its parameters monitored. The control was implemented with a PIC 16F877A microcontroller, which converts analog signals to digital, performs serial communication through the MAX232 IC, and drives the switching transistors; the PIC is programmed in BASIC. There is also a cooling system consisting of two 12 V fans mounted on the side structure, one used for ventilation and the other for exhaust. The internal temperature is measured with an LM35DZ sensor. Another mechanism used in the construction comprises a reed switch and a magnet, which detect the door position; a buzzer sounds when the door is open. In addition, LEDs help identify the current operating state. To facilitate human-machine communication, an LCD display shows the real-time temperature and other information. Taking into account the limitations of the construction materials and the current-carrying structure, the design operates without major problems up to approximately 65 to 70 °C. The project is effective under these conditions, that is, when information is required about a given material tested at temperatures that are not very high. The implemented automation facilitates temperature control and provides a structure that ensures a suitable environment for the most diverse applications.
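The original system runs on a PIC in BASIC with MATLAB plotting; as a language-neutral, hedged sketch of just the on-off control logic with hysteresis, the following Python simulation reproduces the idea. The setpoint, hysteresis band, and toy thermal constants are assumptions.

```python
# Illustrative simulation of the greenhouse's on-off (hysteresis) control.
# The real system runs on a PIC16F877A with MATLAB plotting; this sketch
# only mirrors the control logic. Setpoints and thermal constants are
# assumptions.
import matplotlib.pyplot as plt

SETPOINT, BAND = 67.0, 2.0   # target temperature (C) and hysteresis band
temp, lamp_on = 25.0, True
history = []

for step in range(600):              # 600 one-second steps
    if temp >= SETPOINT + BAND:
        lamp_on = False              # too hot: lamp off, fans vent
    elif temp <= SETPOINT - BAND:
        lamp_on = True               # too cold: lamp back on
    heating = 2.5 if lamp_on else 0.0
    temp += heating - 0.05 * (temp - 25.0)   # toy heat balance vs. ambient
    history.append(temp)

plt.plot(history)
plt.xlabel("time (s)")
plt.ylabel("temperature (C)")
plt.title("On-off control with hysteresis (simulated)")
plt.show()
```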

Keywords: greenhouse, microcontroller, temperature, control, MATLAB

Procedia PDF Downloads 401
476 Nanostructure Formation and Characterization of Eco-Friendly Banana Peels Nanosorbent

Authors: Opeyemi Atiba-Oyewo, Maurice S. Onya, Christian Wolkersdorfer

Abstract:

The nanostructure formation and characterization of an eco-friendly banana peel nanosorbent are thoroughly described in this paper. The transformation of the material during mechanical milling, which enhances properties such as microstructure and surface area with a view to solving current problems of water pollution and water quality, was studied. Mechanical milling was performed in a continuous planetary milling machine with ethanol as the process control agent, and samples were taken at intervals between 10 h and 30 h to examine the structural changes. The samples were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), transmission electron microscopy (TEM), and Brunauer-Emmett-Teller (BET) analysis. Results revealed three typical structures with different grain sizes, lattice strains, and shapes, and the deformation mechanisms in these structures were found to differ; further fracturing of particles increased the surface area, which was confirmed by BET analysis. XRD shows high densities of dislocations in large crystallites, implying that dislocation slip is the dominant deformation mechanism. SEM revealed the morphological properties of the materials at different milling times, TEM confirmed the nanostructure of the particles and fibres, and FTIR identified the functional groups responsible for the sorbent's capacity to coordinate and remove metal ions, such as the carboxylic and amine groups at absorption bands of 1730 and 889 cm⁻¹, respectively. However, the choice of this sorbent material for the sorption of any contaminant will depend on the composition of the effluent to be treated.

Keywords: banana peels, eco-friendly, mechanical milling, nanosorbent, nanostructure, water quality

Procedia PDF Downloads 253
475 A Prediction of Cutting Forces Using Extended Kienzle Force Model Incorporating Tool Flank Wear Progression

Authors: Wu Peng, Anders Liljerehn, Martin Magnevall

Abstract:

In metal cutting, tool wear gradually changes the micro-geometry of the cutting edge. Today there is a significant gap in understanding the impact these geometrical changes have on the cutting forces, which govern tool deflection and heat generation in the cutting zone. Accurate models and understanding of the interaction between the workpiece and cutting tool lead to improved accuracy in simulating the cutting process. These simulations are useful in several application areas, e.g., optimization of insert geometry and machine tool monitoring. This study aims to develop an extended Kienzle force model accounting for the effects that rake angle variations and tool flank wear have on the cutting forces. The starting point is cutting force measurements from orthogonal turning tests of pre-machined flanges with well-defined widths, using triangular coated inserts to assure orthogonal conditions. The cutting forces were measured by a dynamometer for a set of three different rake angles, and wear progression was monitored during machining by an optical measuring collaborative robot. The method uses the measured cutting forces together with the inserts' flank wear progression to extend the mechanistic cutting force model with flank wear as an input parameter. The adapted cutting force model is validated in a turning process with commercial cutting tools and shows significant capability in predicting cutting forces while accounting for tool flank wear and inserts with different rake angles. The results of this study suggest that the nonlinear effect of tool flank wear and the interaction between the workpiece and the cutting tool can be captured by the developed cutting force model.
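The base Kienzle relation is Fc = b · kc1.1 · h^(1−mc); the paper extends it with flank wear as an input, but the fitted form is not given here. As a hedged sketch, the following applies a simple linear wear correction, which is an assumed form for illustration only.

```python
# Illustrative Kienzle cutting-force calculation with a simple flank-wear
# correction. The base relation Fc = b * kc11 * h**(1 - mc) is standard;
# the linear wear term (1 + k_vb * VB) is an assumed form, not the
# paper's fitted model.
def cutting_force(b_mm, h_mm, kc11, mc, vb_mm=0.0, k_vb=0.0):
    """b: chip width (mm), h: uncut chip thickness (mm),
    kc11: specific cutting force (N/mm^2) at b = h = 1 mm,
    mc: Kienzle exponent, vb: flank wear land width (mm)."""
    fc_sharp = b_mm * kc11 * h_mm ** (1.0 - mc)
    return fc_sharp * (1.0 + k_vb * vb_mm)

# Hypothetical values for a steel workpiece
print(cutting_force(b_mm=2.0, h_mm=0.18, kc11=1900.0, mc=0.25))   # sharp tool
print(cutting_force(b_mm=2.0, h_mm=0.18, kc11=1900.0, mc=0.25,
                    vb_mm=0.3, k_vb=0.8))                          # worn tool
```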

Keywords: cutting force, Kienzle model, predictive model, tool flank wear

Procedia PDF Downloads 106
474 Nano Sol Based Solar Responsive Smart Window for Aircraft

Authors: K. A. D. D. Kuruppu, R. M. De Silva, K. M. N. De Silva

Abstract:

This research work was based on developing a solar-responsive aircraft window panel that can act as a self-cleaning surface and also as a surface that degrades volatile organic compounds (VOCs) present in aircraft cabin areas. Further, this surface has the potential to harvest solar energy. A transparent inorganic nano sol solution was prepared and characterized using X-ray diffraction, particle size analysis, and FT-IR. An existing nanomaterial showing similar characteristics was also used to compare efficiencies with the newly prepared nano sol. The nano sol solution was coated separately onto four cleaned aircraft window pieces using a spin coater. The existing nanomaterial was dissolved to prepare a solution of similar concentration to the nano sol solution, and four pre-cleaned aircraft window pieces were coated with it; the remaining four cleaned window pieces were kept entirely uncoated as control samples. All the window pieces were allowed to dry at room temperature, and all twelve were uniform in every factor other than the type of coating. The surface morphologies of the samples were analyzed using SEM. The photocatalytic degradation of VOCs was determined after exposing each sample to toluene gas, followed by analysis with UV-VIS spectroscopy. The self-cleaning capability was analyzed after applying several types of stains to the window pieces, with each sample again analyzed by UV-VIS spectroscopy. The highest photocatalytic degradation of volatile organic compounds and of stains was obtained for the samples coated with the nano sol solution. The experimental results therefore clearly show the potential of using this nano sol on aircraft window pieces, providing self-cleaning behavior as well as efficient photocatalytic degradation of VOC gases and ensuring a safer environment inside aircraft cabins.

Keywords: aircraft, nano, smart windows, solar

Procedia PDF Downloads 255
473 Development of a Regression Based Model to Predict Subjective Perception of Squeak and Rattle Noise

Authors: Ramkumar R., Gaurav Shinde, Pratik Shroff, Sachin Kumar Jain, Nagesh Walke

Abstract:

Advancements in electric vehicles have significantly reduced powertrain noise and the number of moving components in vehicles. As a result, in-cab noises have become more noticeable to passengers inside the car. To ensure a comfortable ride for drivers and other passengers, it has become crucial to eliminate undesirable component noises during the development phase. Standard practices are followed to identify the severity of noises based on subjective ratings, but it can be a tedious process to rate each development sample and make changes to reduce the noise, and the severity rating can vary from jury to jury, making it challenging to arrive at a definitive conclusion. To address this, an automotive component was identified for evaluating a squeak and rattle noise issue. Physical tests were carried out for random and sine excitation profiles, with the aim of assessing the noise subjectively through a jury rating method and evaluating it objectively by measuring the noise. A suitable jury evaluation method was selected, and the recorded sounds were replayed for jury rating. Objective sound quality metrics, viz. loudness, sharpness, roughness, fluctuation strength, and overall sound pressure level (SPL), were measured. Based on this, correlation coefficients were established to identify the sound quality metrics that contribute most to the identified noise issue. Regression analysis was then performed to establish the relationship between the subjective and objective data, and a mathematical model was prepared using artificial intelligence and machine learning algorithms. The developed model was able to predict the subjective rating with good accuracy.
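As a hedged sketch of the regression step, the following fits a multiple linear regression of jury rating on the five objective metrics named above. The data are synthetic placeholders with an arbitrary toy relationship, not the study's measurements.

```python
# Illustrative regression of subjective jury rating on objective sound
# quality metrics (loudness, sharpness, roughness, fluctuation strength,
# SPL). Data are synthetic placeholders, not the study's measurements.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 40
X = np.column_stack([
    rng.uniform(4, 20, n),     # loudness (sone)
    rng.uniform(1, 3, n),      # sharpness (acum)
    rng.uniform(0.5, 3, n),    # roughness (asper)
    rng.uniform(0.2, 1.5, n),  # fluctuation strength (vacil)
    rng.uniform(40, 80, n),    # overall SPL (dB)
])
# Toy jury rating (1-10), driven here mostly by loudness and SPL
y = 10 - 0.25 * X[:, 0] - 0.05 * X[:, 4] + rng.normal(0, 0.3, n)

model = LinearRegression().fit(X, y)
print("R^2:", model.score(X, y))
print("coefficients:", model.coef_)
print("predicted rating:", model.predict([[12, 2.0, 1.5, 0.8, 65]])[0])
```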

Keywords: BSR, noise, correlation, regression

Procedia PDF Downloads 78
472 Reimagining the Management of Telco Supply Chain with Blockchain

Authors: Jeaha Yang, Ahmed Khan, Donna L. Rodela, Mohammed A. Qaudeer

Abstract:

Traditional supply chain silos still exist today due to the difficulty of establishing trust between various partners and to technological barriers across industries. Companies lose opportunities and revenue and inadvertently make poor business decisions, resulting in further challenges. Blockchain technology can bring a new level of transparency by sharing information in a distributed ledger in a decentralized manner, creating a basis of trust for business. Blockchain is a loosely coupled, hub-style communication network in which trading partners can work indirectly with each other for simpler integration, while still working together through the orchestration of their supply chain operations under a coherent, jointly developed process. Blockchain increases efficiency, lowers costs, and improves interoperability to strengthen and automate the supply chain management process while all partners share the risk. The blockchain ledger is built to track the inventory lifecycle for supply chain transparency and keeps a journal of inventory movement for real-time reconciliation. The State design pattern is used to capture the lifecycle (behavior) of inventory management as a state machine within a common, transparent, and coherent process, which creates an opportunity for trading partners to become more responsive to changes or improvements in the process, to reconcile discrepancies, and to comply with internal governance and external regulations. It enables end-to-end, inter-company visibility at the unit level for more accurate demand planning, with better insight into order fulfillment and replenishment.
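As a hedged sketch of the state-machine idea, the following compact Python example models an inventory item's lifecycle with an explicit transition table (rather than full State-pattern classes) and a journal mirroring the ledger. The states and transitions are assumptions for illustration, not the paper's model.

```python
# Illustrative state machine for an inventory item's lifecycle, in the
# spirit of the State design pattern described above. States and
# transitions are assumptions.
from enum import Enum, auto

class InventoryState(Enum):
    ORDERED = auto()
    IN_TRANSIT = auto()
    RECEIVED = auto()
    CONSIGNED = auto()
    SOLD = auto()

# Allowed transitions; each applied transition would be recorded on-ledger
TRANSITIONS = {
    InventoryState.ORDERED: {InventoryState.IN_TRANSIT},
    InventoryState.IN_TRANSIT: {InventoryState.RECEIVED},
    InventoryState.RECEIVED: {InventoryState.CONSIGNED, InventoryState.SOLD},
    InventoryState.CONSIGNED: {InventoryState.SOLD},
    InventoryState.SOLD: set(),
}

class InventoryItem:
    def __init__(self, sku: str):
        self.sku = sku
        self.state = InventoryState.ORDERED
        self.journal = [self.state]          # mirrors the ledger's journal

    def transition(self, new_state: InventoryState):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"{self.state.name} -> {new_state.name} not allowed")
        self.state = new_state
        self.journal.append(new_state)

item = InventoryItem("SKU-123")
item.transition(InventoryState.IN_TRANSIT)
item.transition(InventoryState.RECEIVED)
print([s.name for s in item.journal])
```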

Keywords: supply chain management, inventory traceability, perpetual inventory system, inventory lifecycle, blockchain, inventory consignment, supply chain transparency, digital thread, demand planning, Hyperledger Fabric

Procedia PDF Downloads 89
471 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data

Authors: K. Sathishkumar, V. Thiagarasu

Abstract:

Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians to understand pathophysiological mechanisms, to make diagnoses and prognoses, and to choose treatment plans. DNA microarray technology has made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenge of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process for revealing natural structures and identifying interesting patterns in the underlying data. This work presents an analysis of several algorithms proposed to deal with gene expression data effectively. Existing algorithms such as the Support Vector Machine (SVM), the k-means algorithm, and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations, and their performance is evaluated to determine the best approach. In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior, and processing time, a hybrid clustering-based optimization approach is proposed.
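As a hedged sketch of a clustering-plus-classification pipeline of the kind analyzed above, the following clusters genes with k-means and then classifies samples with an SVM on the cluster-mean profiles. The expression matrix and labels are synthetic placeholders, and the specific pipeline is illustrative rather than the paper's proposed hybrid.

```python
# Illustrative gene-expression pipeline: cluster genes with k-means, then
# classify samples with an SVM on the reduced representation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(60, 500))      # 60 samples x 500 genes (synthetic)
y = rng.integers(0, 2, size=60)     # two disease classes (synthetic)

# Cluster genes (columns) to reveal co-expressed groups
gene_clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X.T)

# Reduce each gene cluster to its mean profile, then classify samples
X_reduced = np.column_stack(
    [X[:, gene_clusters == k].mean(axis=1) for k in range(10)]
)
scores = cross_val_score(SVC(kernel="rbf"), X_reduced, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```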

Keywords: microarray technology, gene expression data, clustering, gene selection

Procedia PDF Downloads 323
470 Studies on the Characterization and Machinability of Duplex Stainless Steel 2205 during Dry Turning

Authors: Gaurav D. Sonawane, Vikas G. Sargade

Abstract:

The present investigation studies the effect of advanced Physical Vapor Deposition (PVD) coatings on cutting temperature, residual stresses, and surface roughness during the turning of Duplex Stainless Steel (DSS) 2205. Austenite stabilizers like nickel, manganese, and molybdenum reduce the cost of DSS. Surface integrity (SI) plays an important role in determining corrosion resistance and fatigue life, and resistance to various types of corrosion makes DSS suitable for applications in critical environments such as heat exchangers, desalination plants, seawater pipes, and marine components. However, low thermal conductivity, poor chip control, and non-uniform tool wear make DSS very difficult to machine. Cemented carbide tools (M grade) were used to turn DSS in a dry environment. AlTiN and AlTiCrN coatings were deposited using the advanced PVD technique of High Power Impulse Magnetron Sputtering (HiPIMS). Experiments were conducted at cutting speeds of 100 m/min, 140 m/min, and 180 m/min, with a constant feed and depth of cut of 0.18 mm/rev and 0.8 mm, respectively. AlTiCrN-coated tools, followed by AlTiN-coated tools, outperformed uncoated tools due to properties such as lower thermal conductivity, higher adhesion strength, and hardness. Residual stresses were found to be compressive for all the tools used in dry turning, increasing the fatigue life of the machined component. Higher cutting temperatures were observed for coated tools due to their lower thermal conductivity, which results in much less tool wear than for uncoated tools. Surface roughness with uncoated tools was found to be three times higher than with coated tools, owing to the lower coefficient of friction of the coatings used.

Keywords: cutting temperature, DSS2205, dry turning, HiPIMS, surface integrity

Procedia PDF Downloads 131
469 Analysis of Compressive and Tensile Response of Pumpkin Flesh, Peel and Unpeeled Tissues Using Experimental and FEA

Authors: Maryam Shirmohammadi, Prasad K. D. V. Yarlagadda, YuanTong Gu

Abstract:

Mechanical damage to agricultural crops during and after harvesting can cause a high volume of tissue damage. Uniaxial compression and tensile loading were performed on flesh and peel samples of pumpkin. To investigate the structural changes in the tissue, scanning electron microscopy (SEM) was used to capture the change in cellular structure before and after loading for the tensile, compression, and indentation tests. To obtain the mechanical properties of the tissue required for the finite element analysis (FEA) model, laser measurement sensors were used to record the lateral displacement of the tissue under compression loading. Uniaxial force-versus-deformation data were recorded using a universal testing machine for both tensile and compression tests. The experimental results were employed to develop a material model with failure criteria, and the simulation results were compared with those obtained by experiment. Although modelling the behaviour of food materials is not a new concept, the majority of previous studies have focused on elastic behaviour and damage within the linear limit; this study, as a first, has developed FEA models for tensile and compressive loading of pumpkin flesh and peel samples using both elastic and elasto-plastic material types. In addition, pumpkin peel and flesh tissues were considered as two different materials with different properties under mechanical loading. The tensile and compression loadings were used to develop the material model of a composite structure for an FEA model of the mechanical peeling of pumpkin as a tough-skinned vegetable.

Keywords: compressive and tensile response, finite element analysis, Poisson's ratio, elastic modulus, elastic and plastic response, rupture and bio-yielding

Procedia PDF Downloads 330
468 Detecting Indigenous Languages: A System for Maya Text Profiling and Machine Learning Classification Techniques

Authors: Alejandro Molina-Villegas, Silvia Fernández-Sabido, Eduardo Mendoza-Vargas, Fátima Miranda-Pestaña

Abstract:

The automatic detection of indigenous languages in digital texts is essential to promote their inclusion in digital media. Underrepresented languages, such as Maya, are often excluded from language detection tools like Google's language-detection library, LANGDETECT. This study addresses these limitations by developing a hybrid language detection solution that accurately distinguishes Maya (YUA) from Spanish (ES). Two strategies are employed: the first focuses on creating a profile for the Maya language within the LANGDETECT library, while the second involves training a Naive Bayes classification model with two categories, YUA and ES. The process includes comprehensive data preprocessing steps, such as cleaning, normalization, tokenization, and n-gram counting, applied to text samples collected from various sources, including articles from La Jornada Maya, a major newspaper in Mexico and the only media outlet that includes a Maya section. After the training phase, a portion of the data is used to create the YUA profile within LANGDETECT, which achieves an accuracy rate above 95% in identifying the Maya language during testing. Additionally, the Naive Bayes classifier, trained and tested on the same database, achieves an accuracy close to 98% in distinguishing between Maya and Spanish, with further validation through the F1 score, recall, and logarithmic scoring, without signs of overfitting. This strategy, which combines the LANGDETECT profile with a Naive Bayes model, provides an adaptable framework that can be extended to other underrepresented languages in future research. It fills a gap in natural language processing and supports the preservation and revitalization of these languages.
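As a hedged sketch of the Naive Bayes strategy, the following trains a character n-gram classifier to separate YUA from ES. The handful of training sentences are rough placeholders standing in for the study's much larger corpus.

```python
# Illustrative character n-gram Naive Bayes classifier for distinguishing
# Maya (YUA) from Spanish (ES). The training sentences are tiny
# placeholders, not the study's corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "Bix a beel",                  # Maya (placeholder samples)
    "In k'aaba'e Jorge",
    "Ma'alob k'iin ti' teech",
    "Buenos dias a todos",         # Spanish (placeholder samples)
    "Me llamo Jorge",
    "Que tengas un buen dia",
]
train_labels = ["YUA", "YUA", "YUA", "ES", "ES", "ES"]

clf = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(1, 3), lowercase=True),
    MultinomialNB(),
)
clf.fit(train_texts, train_labels)
print(clf.predict(["Ma'alob aak'ab", "Hasta manana"]))
```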

Keywords: indigenous languages, language detection, Maya language, Naive Bayes classifier, natural language processing, low-resource languages

Procedia PDF Downloads 15
467 Gender Bias in Natural Language Processing: Machines Reflect Misogyny in Society

Authors: Irene Yi

Abstract:

Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Machines have been trained on millions of human books, only to find that, over the course of human history, derogatory and sexist adjectives have been used significantly more frequently to describe females than males in history and literature. This is extremely problematic, both as training data and as an outcome of natural language processing. As machines take on more responsibilities, it is crucial to ensure that they do not carry forward historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. The results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text so as to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly given their semantic and syntactic context. The paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies. The progression of society goes hand in hand not only with its language but with how machines process those natural languages. These ideas are all vital to the development of natural language models in technology, and they must be taken into account immediately.

Keywords: gendered grammar, misogynistic language, natural language processing, neural networks

Procedia PDF Downloads 118
466 Evaluation of Cultural Landscape Perception in Waterfront Historic Districts Based on Multi-source Data - Taking Venice and Suzhou as Examples

Authors: Shuyu Zhang

Abstract:

Waterfront historic districts, a type of historic district bordering waters such as seas, lakes, and rivers, have a relatively special urban form. In past preservation and renewal of traditional historic districts, discussion has centred on the land area, and the waterfront and marginal spaces are easily overlooked. However, the waterfront space of a historic district, as a cultural landscape heritage combining historic buildings and landscape elements, has strong ecological and sustainability value. At the same time, Suzhou and Venice, as sister water cities in history, have many waterfront spaces that can be compared in urban form and at other levels. Therefore, this paper focuses on the waterfront historic districts of Venice and Suzhou, establishes quantitative evaluation indicators for environmental perception, draws comparisons, and promotes the renewal and activation of the entire historic district by improving the spatial quality and vitality of the waterfront area. The paper uses multi-source data for the analysis: the Baidu Maps and Google Maps APIs are used to crawl street views of the waterfront historic districts, machine learning algorithms are used to analyze the proportion of cultural landscape elements in the street-view pictures, such as the green view rate, and space syntax software is used for quantitative selectivity analysis, so as to establish environmental perception evaluation indicators for the waterfront historic districts. Finally, by comparing the waterfront historic districts of Venice and Suzhou, the paper reveals their similarities, differences, and characteristics, and hopes to provide a reference for the heritage preservation and renewal of other waterfront historic districts.
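The study computes the green view rate with machine learning segmentation; as a hedged, greatly simplified stand-in, the following estimates the rate by counting green-dominant pixels in HSV space. The image path and thresholds are assumptions.

```python
# Illustrative green-view-rate estimate for one street-view image. A simple
# HSV color threshold stands in for the study's segmentation model; the
# image file and thresholds are assumptions.
import numpy as np
from PIL import Image

img = Image.open("streetview.jpg").convert("HSV")   # hypothetical image file
h, s, v = np.moveaxis(np.asarray(img), -1, 0).astype(int)

# PIL stores hue in 0-255; green is roughly hue 60-110 on this scale
green = (h >= 60) & (h <= 110) & (s > 40) & (v > 40)
print(f"green view rate: {green.mean():.1%}")
```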

Keywords: waterfront historic district, cultural landscape, perception, multi-source data

Procedia PDF Downloads 195
465 Detect Critical Thinking Skill in Written Text Analysis. The Use of Artificial Intelligence in Text Analysis vs Chat/Gpt

Authors: Lucilla Crosta, Anthony Edwards

Abstract:

Companies and the marketplace nowadays struggle to find employees with skills adequate to the anticipated growth of their businesses. At least half of workers will need to undertake some form of up-skilling in the next five years in order to remain aligned with the demands of the market. To meet these challenges, there is a clear need to explore the potential uses of AI (artificial intelligence)-based tools in assessing the transversal skills (critical thinking, communication, and soft skills of different types in general) of workers and adult students, while empowering them to develop those same skills in a reliable, trustworthy way. Companies seek workers with key transversal skills that can make a difference now and in the future. Critical thinking, however, seems to be one of the most important of these skills, bringing unexplored ideas and company growth in business contexts. What employers have been reporting for years now is that this skill is lacking in the majority of workers and adult students, and this is particularly visible through their writing. This paper investigates how critical thinking and communication skills are currently developed in higher education environments at the postgraduate level through the use of AI tools. It analyses the use of a branch of AI, namely machine learning with big data, and of neural network analysis. It also examines the potential effect of acquiring these skills through AI tools and the kind of impact this has on employability. The paper draws on researchers and studies at both the national (Italy and UK) and international levels in higher education. The issues associated with the development and use of one specific AI tool, Edulai, are examined in detail. Finally, comparisons are made between such tools and the more recent phenomenon of ChatGPT, and their benefits and drawbacks are analysed.

Keywords: critical thinking, artificial intelligence, higher education, soft skills, ChatGPT

Procedia PDF Downloads 105
464 Numerical Analysis of Solar Cooling System

Authors: Nadia Allouache, Mohamed Belmedani

Abstract:

Solar energy is a sustainable, virtually inexhaustible, and environmentally friendly alternative to the fossil fuels available. It is a renewable and economical energy that can be harnessed sustainably over the long term and thus stabilizes energy costs. Solar cooling technologies have been developed to curb the growing electricity consumption for air conditioning and to displace the peak load during hot summer days. This work presents a numerical analysis of the thermal and solar performance of an annular finned adsorber, the most important component of the adsorption solar refrigerating system. Different adsorbent/adsorbate pairs, namely activated carbon AC35/methanol, activated carbon AC35/ethanol, and activated carbon BPL/ammonia, are considered in this study. Modeling the adsorption cooling machine requires solving the equations describing the energy and mass transfer in the tubular finned adsorber. The Wilson and Dubinin-Astakhov models of the solid-adsorbate equilibrium are used to calculate the adsorbed quantity. The porous medium and the fins are contained in the annular space, and the adsorber is heated by solar energy. The effects of key parameters on the adsorbed quantity and on the thermal and solar performance are analysed and discussed. The AC35/methanol pair is the best pair, compared to the BPL/ammonia and AC35/ethanol pairs, in terms of system performance, and the system performance is sensitive to the fin geometry. For data measured on clear days in July 2023 in Algeria and Morocco, the performance of the cooling system is most significant in Algeria.
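As a hedged worked example of the Dubinin-Astakhov equilibrium model named above, the following computes the adsorbed quantity W = W0·exp(−(A/E)^n) with adsorption potential A = R·T·ln(Psat/P). The parameter values are placeholders, not the paper's fitted constants.

```python
# Illustrative Dubinin-Astakhov estimate of the adsorbed quantity
# W = W0 * exp(-(A/E)^n), with adsorption potential A = R*T*ln(Psat/P).
# Parameter values below are placeholders, not the paper's constants.
from math import exp, log

R = 8.314  # J/(mol K)

def adsorbed_quantity(T, P, Psat, W0, E, n):
    """T: temperature (K), P: pressure (Pa), Psat: saturation pressure (Pa),
    W0: limiting adsorption capacity (kg/kg), E: characteristic energy
    (J/mol), n: heterogeneity exponent."""
    A = R * T * log(Psat / P)          # adsorption potential
    return W0 * exp(-((A / E) ** n))

# Hypothetical AC35/methanol-like parameters
print(adsorbed_quantity(T=300.0, P=8_000.0, Psat=18_600.0,
                        W0=0.33, E=7_000.0, n=2.0))
```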

Keywords: activated carbon AC35-methanol pair, activated carbon AC35-ethanol pair, activated carbon BPL-ammonia pair, annular finned adsorber, performance coefficients, numerical analysis, solar cooling system

Procedia PDF Downloads 53
463 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features

Authors: Bushra Zafar, Usman Qamar

Abstract:

Large sample sizes and high dimensionality render conventional data mining methodologies ineffective. Data mining techniques are important tools for extracting knowledge from a variety of databases; they provide supervised learning in the form of classification to design models that describe vital data classes, where the structure of the classifier is based on the class attribute. Classification efficiency and accuracy are often influenced to a great extent by noisy and undesirable features in real application data sets. The inherent nature of a data set greatly hampers its quality analysis and leaves quite few practical approaches to use. To our knowledge, we present for the first time an approach for investigating the structure and quality of datasets by providing a targeted analysis of the localization of noisy and irrelevant features. Machine learning relies primarily on feature selection as a pre-processing step, which allows a few features to be selected as a subset, reducing the space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of the given data sample by searching for a small set of important features that may yield good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is employed, with an external classifier for discriminative feature selection; features are selected based on their number of occurrences in the chosen chromosomes. Sample datasets have been used to demonstrate the proposed idea effectively. The proposed method improved the average accuracy across different datasets to about 95%. Experimental results illustrate that the proposed algorithm increases the accuracy of prediction of different diseases.
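As a hedged sketch of a genetic-algorithm wrapper with a KNN evaluator, the following evolves binary feature masks scored by cross-validated accuracy. The population size, rates, dataset, and fitness are illustrative assumptions, not the paper's exact encoding.

```python
# Illustrative GA wrapper for feature selection with a KNN evaluator.
# Population size, crossover/mutation rates, and the dataset are
# assumptions; the paper's exact encoding is not reproduced here.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

def fitness(mask):
    if not mask.any():
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, mask], y, cv=3).mean()

pop = rng.random((20, n_features)) < 0.5          # random feature masks
for generation in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]  # keep the best half
    children = []
    for _ in range(10):
        p1, p2 = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, n_features)         # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        flip = rng.random(n_features) < 0.02      # bit-flip mutation
        children.append(np.logical_xor(child, flip))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", int(best.sum()), "accuracy:", fitness(best))
```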

Keywords: data mining, genetic algorithm, KNN algorithm, wrapper-based feature selection

Procedia PDF Downloads 315
462 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs

Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa

Abstract:

Hatchery performance is critical to the profitability of poultry breeder operations. Some extrinsic parameters of eggs and breeders increase or decrease hatchability. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chickens' eggs and to determine the most efficient model for classifying hatchability rates greater than 90%. Seven extrinsic parameters were considered: egg weight, moisture loss, breeder age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the variables most influencing hatchability. First, the correlation between each parameter and hatchability was checked; then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), k-Nearest Neighbours (kNN), Support Vector Machines (SVM) with a linear kernel, and Random Forest (RF) algorithms were applied to classify hatchability, using binary classification techniques. Hatchability was negatively correlated with egg weight, breeder age, shell width, and shell length, while positive correlations were identified with moisture loss, number of fertilised eggs, and shell thickness. Multiple linear regression models were more accurate than single linear models, giving the highest coefficient of determination (R²) of 94% and minimum AIC and BIC values. According to the classification results, RF, CART, and kNN achieved the highest accuracy values of 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying whether breeder outcomes are economically profitable in a commercial hatchery.
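As a hedged sketch of the best-performing classifier, the following fits a random forest to the seven extrinsic parameters for the binary ">90% hatchability" label. The data and the toy labeling rule are synthetic placeholders, not the study's hatchery records.

```python
# Illustrative random-forest classification of hatchability (>90% vs. not)
# from the seven extrinsic parameters listed above. Data are synthetic
# placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 300
X = np.column_stack([
    rng.normal(55, 5, n),      # egg weight (g)
    rng.normal(11, 2, n),      # moisture loss (%)
    rng.normal(45, 10, n),     # breeder age (weeks)
    rng.integers(80, 150, n),  # fertilised eggs per batch
    rng.normal(42, 2, n),      # shell width (mm)
    rng.normal(55, 2, n),      # shell length (mm)
    rng.normal(0.35, 0.03, n), # shell thickness (mm)
])
# Toy rule: thicker shells and more fertilised eggs -> hatchability > 90%
y = ((X[:, 6] > 0.35) & (X[:, 3] > 100)).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(rf, X, y, cv=5).mean())
```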

Keywords: classification models, egg weight, fertilised eggs, multiple linear regression

Procedia PDF Downloads 86
461 Local Directional Encoded Derivative Binary Pattern Based Coral Image Classification Using Weighted Distance Gray Wolf Optimization Algorithm

Authors: Annalakshmi G., Sakthivel Murugan S.

Abstract:

This paper presents a local directional encoded derivative binary pattern (LDEDBP) feature extraction method that can be applied to the classification of submarine coral reef images. The classification of coral reef images using texture features is difficult due to the dissimilarities among class samples. In the proposed method, texture features are extracted using LDEDBP: the approach extracts the complete structural arrangement of the local region using the local binary pattern (LBP) and also extracts edge information using the local directional pattern (LDP) from the edge response available in a particular region, thereby achieving extra discriminative feature value. Typically, the LDP extracts the edge details in all eight directions. Integrating edge responses with the local binary pattern achieves a more robust texture descriptor than the other descriptors used in texture feature extraction methods. Finally, the proposed technique is applied to an extreme learning machine (ELM) with a meta-heuristic algorithm known as the weighted distance grey wolf optimizer (GWO) to optimize the input weights and biases of single-hidden-layer feed-forward neural networks (SLFN). In the empirical results, ELM-WDGWO demonstrated better performance in terms of accuracy on all coral datasets, namely RSMAS, EILAT, EILAT2, and MLC, compared with other state-of-the-art algorithms, achieving the highest overall classification accuracy of 94%.

Keywords: feature extraction, local directional pattern, ELM classifier, GWO optimization

Procedia PDF Downloads 163
460 Evaluation of Possible Application of Cold Energy in Liquefied Natural Gas Complexes

Authors: А. I. Dovgyalo, S. O. Nekrasova, D. V. Sarmin, A. A. Shimanov, D. A. Uglanov

Abstract:

Usually, liquefied natural gas (LNG) gasification is performed using atmospheric heat, while producing the liquefied gas consumes a considerable amount of energy (about 1 kW∙h per 1 kg of LNG). This study offers a number of solutions that allow the cold energy of LNG to be used. First, it evaluates the application of turbines installed behind the evaporator in an LNG complex; from their expansion work, additional energy can be obtained and converted into electricity. At an LNG consumption of G = 1000 kg/h, an expansion work capacity of about 10 kW can be reached. Herewith, an open Rankine cycle is realized, in which a low-capacity cryo-pump (about 500 W) performs its normal function of providing the cycle pressure. The application of a Stirling engine within the LNG complex, also discussed, gives another possibility to realize the cold energy. Considering that the efficiency coefficient of a Stirling engine reaches 50%, an LNG consumption of G = 1000 kg/h may yield a capacity of about 142 kW from such a thermal machine; the capacity of the pump required to compensate for pressure losses as the LNG passes through the hydraulic channel will be about 500 W. Apart from the above-mentioned converters, thermoelectric generating packages (TGP), which are now widely used, can be proposed. At present, the modern thermoelectric generator line provides electric capacity with an efficiency coefficient of up to 15%. In the proposed complex, it is suggested to install the thermoelectric generator on the evaporator in such a way that the cold end contacts the evaporator's surface and the hot end the atmosphere. At an LNG consumption of G = 1000 kg/h and the specified efficiency coefficient, the heat flow capacity Qh will be about 32 kW, the derivable net electric power will be P = 4.2 kW, and the number of packages will amount to about 104 pieces. The calculations performed demonstrate the promise of research in this field of propulsion plant development and allow the energy-saving potential of liquefied natural gas and other cryogenic technologies to be realized.

Keywords: cold energy, gasification, liquefied natural gas, electricity

Procedia PDF Downloads 272
459 Effect of Tool Size and Cavity Depth on Response Characteristics during Electric Discharge Machining on Superalloy Metal - An Experimental Investigation

Authors: Sudhanshu Kumar

Abstract:

The electrical discharge machining (EDM) process is one of the most applicable machining processes for removing material from hard-to-machine materials such as superalloys. The EDM process converts electrical energy into sparks that erode the metal in the presence of a dielectric medium. In the present investigation, the superalloy Inconel 718 was selected as the workpiece and electrolytic copper as the tool electrode. An attempt has been made to understand the effect of tool size at varying cavity depths during hole drilling by EDM. For a systematic investigation, tool size (in terms of tool diameter) and cavity depth, along with other important electrical parameters, namely peak current, pulse-on time, and servo voltage, were varied over three levels, and the experiments were designed using the fractional factorial (Taguchi) method. Each experiment was repeated twice under the same conditions in order to understand the variability within the experiments. The effects of the parameter variations were evaluated in terms of material removal rate, tool wear rate, and surface roughness. Results reveal that a change in tool diameter significantly affects the response characteristics: the larger tool diameter yielded a 13% higher material removal rate than the smaller one. The analysis of the effect of cavity depth is notable: there is no significant effect of cavity depth on material removal rate, tool wear rate, or surface quality. This indicates that experiments to analyse the effects of other parameters can be performed even at smaller cavity depths, reducing the cost and time of experiments. Further, statistical analysis has been carried out to identify the interaction effects between parameters.

Keywords: EDM, Inconel 718, material removal rate, roughness, tool wear, tool size

Procedia PDF Downloads 214
458 Comparison of Support Vector Machines and Artificial Neural Network Classifiers in Characterizing Threatened Tree Species Using Eight Bands of WorldView-2 Imagery in Dukuduku Landscape, South Africa

Authors: Galal Omer, Onisimo Mutanga, Elfatih M. Abdel-Rahman, Elhadi Adam

Abstract:

Threatened tree species (TTS) play a significant role in ecosystem functioning and services, land use dynamics, and other socio-economic aspects, including ecological, economic, livelihood, security-based, and well-being benefits. The development of techniques for mapping and monitoring TTS is thus critical for understanding the functioning of ecosystems. The advent of advanced imaging systems and supervised learning algorithms has provided an opportunity to classify TTS over a fragmenting landscape. Recently, vegetation maps have been produced using advanced imaging systems such as WorldView-2 (WV-2) and robust classification algorithms such as support vector machines (SVM) and artificial neural networks (ANN). However, delineation of TTS in a fragmenting landscape using high-resolution imagery has largely remained elusive due to the complexity of the species structure and distribution. Therefore, the objective of the current study was to examine the utility of advanced WV-2 data for mapping TTS in the fragmenting Dukuduku indigenous forest of South Africa using the SVM and ANN classification algorithms. The results showed the robustness of the two machine learning algorithms, with an overall accuracy (OA) of 77.00% (total disagreement = 23.00%) for SVM and 75.00% (total disagreement = 25.00%) for ANN using all eight bands of WV-2 (8B). This study concludes that the SVM and ANN classification algorithms with WV-2 8B have the potential to classify TTS in the Dukuduku indigenous forest, offering relatively accurate information that is important for forest managers making informed decisions regarding the management and conservation of TTS.
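As a hedged sketch of the SVM-versus-ANN comparison, the following evaluates both classifiers on 8-band pixel spectra. The data are synthetic stand-ins for labelled WorldView-2 pixels, and the model settings are illustrative, not the study's configuration.

```python
# Illustrative comparison of SVM and ANN classifiers on synthetic 8-band
# pixel spectra, echoing the WV-2 workflow above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
n_classes, n_per_class = 4, 100
# Each synthetic class gets a distinct mean spectrum across the 8 bands
X = np.vstack([rng.normal(loc=c * 0.3, scale=0.5, size=(n_per_class, 8))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [
    ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf"))),
    ("ANN", make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(32,),
                                        max_iter=1000, random_state=0))),
]:
    clf.fit(X_tr, y_tr)
    print(name, "OA:", accuracy_score(y_te, clf.predict(X_te)))
```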

Keywords: artificial neural network, threatened tree species, indigenous forest, support vector machines

Procedia PDF Downloads 513
457 Sea Level Rise and Sediment Supply Explain Large-Scale Patterns of Saltmarsh Expansion and Erosion

Authors: Cai J. T. Ladd, Mollie F. Duggan-Edwards, Tjeerd J. Bouma, Jordi F. Pages, Martin W. Skov

Abstract:

Salt marshes are valued for their role in coastal flood protection, carbon storage, and for supporting biodiverse ecosystems. As a biogeomorphic landscape, marshes evolve through the complex interactions between sea level rise, sediment supply, and wave/current forcing, as well as socio-economic factors. Climate change and direct human modification could lead to a global decline in marsh extent if left unchecked. Whilst the processes of saltmarsh erosion and expansion are well understood, empirical evidence on the key drivers of long-term lateral marsh dynamics is lacking. In a GIS, saltmarsh areal extent in 25 estuaries across Great Britain was calculated from historical maps and aerial photographs, at intervals of approximately 30 years between 1846 and 2016. Data on the key perceived drivers of lateral marsh change (namely sea level rise rates, suspended sediment concentration, bedload sediment flux rates, and the frequency of both river flood and storm events) were collated from national monitoring centres. Continuous datasets did not extend beyond 1970, so the predictor variables that best explained the rate of change of marsh extent between 1970 and 2016 were identified using a Partial Least Squares Regression model. Information about the spread of Spartina anglica (an invasive marsh plant responsible for marsh expansion around the globe) and about coastal engineering works that may have affected marsh extent was also recorded from historical documents, and their impacts on long-term, large-scale change in marsh extent were assessed. Results showed that salt marshes in the northern regions of Great Britain expanded at an average of 2.0 ha/yr, whilst marshes in the south eroded at an average of 5.3 ha/yr. Spartina invasion and coastal engineering works could not explain these trends, since a trend of either expansion or erosion preceded these events. Results from the Partial Least Squares Regression model indicated that the rate of relative sea level rise (RSLR) and the availability of suspended sediment (SSC) best explained the patterns of marsh change. RSLR increased from 1.6 to 2.8 mm/yr as SSC decreased from 404.2 to 78.56 mg/l along the north-to-south gradient of Great Britain, coinciding with the shift from marsh expansion to erosion. Regional differences in RSLR and SSC are due to isostatic rebound since deglaciation and to tidal amplitudes, respectively. Low RSLR combined with high SSC likely leads to sediment accumulation at the coast suitable for colonisation by marsh plants, and thus to lateral expansion. In contrast, high RSLR is unlikely to be offset by deposition under low SSC, so the average water depth at the marsh edge increases, allowing larger wind-waves to trigger marsh erosion. Current global declines in sediment flux to the coast are likely to diminish the resilience of salt marshes to RSLR. Monitoring and managing suspended sediment supply is not commonplace, but may be critical to mitigating coastal impacts from climate change.
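
The driver analysis described above can be sketched with scikit-learn's PLSRegression. In the hypothetical Python example below, rows are estuaries and columns stand in for candidate drivers (RSLR, SSC, bedload flux, flood and storm frequency); the component loadings indicate which drivers carry most of the explained variance. The data and column choices are illustrative assumptions, not the study's dataset.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(42)
    # Hypothetical predictors for 25 estuaries: RSLR (mm/yr), SSC (mg/l),
    # bedload flux, river-flood frequency, storm frequency.
    X = rng.random((25, 5))
    # Response: rate of change of marsh extent (ha/yr), illustrative values.
    y = rng.normal(size=(25, 1))

    pls = PLSRegression(n_components=2)
    pls.fit(X, y)

    # Loadings show how strongly each driver contributes to the latent
    # components that explain variation in marsh change.
    print(pls.x_loadings_)
    print("R^2 on the fitted data:", pls.score(X, y))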

Keywords: lateral saltmarsh dynamics, sea level rise, sediment supply, wave forcing

Procedia PDF Downloads 134
456 Geographic Information System Based Multi-Criteria Subsea Pipeline Route Optimisation

Authors: James Brown, Stella Kortekaas, Ian Finnie, George Zhang, Christine Devine, Neil Healy

Abstract:

The use of GIS as an analysis tool for engineering decision making is now best practice in the offshore industry. GIS enables multidisciplinary data integration, analysis, and visualisation, which allows the presentation of large and intricate datasets in a simple map interface accessible to all project stakeholders. Presenting integrated geoscience and geotechnical data in GIS enables decision makers to be well-informed. This paper is a successful case study of how GIS spatial analysis techniques were applied to help select the most favourable pipeline route. Routing a pipeline through any natural environment presents numerous obstacles, whether topographical, geological, engineering, or financial. Where the pipeline is subjected to external hydrostatic water pressure and is carrying pressurised hydrocarbons, the requirement to safely route the pipeline through hazardous terrain becomes absolutely paramount. This study illustrates how the application of modern, GIS-based pipeline routing techniques enabled the identification of a single most-favourable pipeline route crossing a challenging seabed terrain. Conventional approaches to pipeline route determination focus on manual avoidance of primary constraints whilst endeavouring to minimise route length. Such an approach is qualitative, subjective, and liable to bias towards the discipline and expertise involved in the routing process. For very short routes traversing benign seabed topography in shallow water, this approach may be sufficient, but for deepwater geohazardous sites, an automated, multi-criteria, and quantitative approach is essential. This study combined multiple routing constraints using modern least-cost-routing algorithms deployed in GIS, hitherto unachievable with conventional approaches. The least-cost-routing procedure begins with the assignment of geocost across the study area. Geocost is defined as a numerical penalty score representing the hazard posed by each routing constraint (e.g. slope angle, rugosity, vulnerability to debris flows) to the pipeline. All geocosted routing constraints are combined to generate a composite geocost map that is used to compute the least-geocost route between two defined terminals. The analyses were applied to select the most favourable pipeline route for a potential gas development in deep water. The study area is geologically complex, with a series of incised, potentially active canyons carved into a steep escarpment and evidence of extensive debris flows. A similar debris flow in the future could cause significant damage to a poorly-placed pipeline. Protruding inter-canyon spurs offer lower-gradient options for ascending an escarpment, but their vulnerability to periodic failure is not well understood. Close collaboration between geoscientists, pipeline engineers, geotechnical engineers and, of course, the gas export pipeline operator guided the analyses and assignment of geocosts. Shorter route length, less severe slope angles, and geohazard avoidance were the primary drivers in identifying the most favourable route.
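
The least-cost-routing step maps naturally onto raster path-finding routines. The Python sketch below uses scikit-image's route_through_array to trace the least-geocost route across a composite geocost raster; the raster values and terminal coordinates are hypothetical placeholders for whatever composite surface the geocosting produces.

    import numpy as np
    from skimage.graph import route_through_array

    # Hypothetical composite geocost raster: each cell sums the penalty scores
    # from all routing constraints (slope angle, rugosity, debris-flow exposure).
    rng = np.random.default_rng(7)
    geocost = rng.random((200, 200)) + 0.1  # strictly positive cell costs

    start, end = (5, 5), (190, 185)  # pipeline terminals in raster coordinates
    path, total_cost = route_through_array(
        geocost, start, end,
        fully_connected=True,  # allow diagonal moves between cells
        geometric=True,        # weight diagonal steps by sqrt(2)
    )
    print("cells on route:", len(path), "| total geocost:", round(total_cost, 2))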

Keywords: geocost, geohazard, pipeline route determination, pipeline route optimisation, spatial analysis

Procedia PDF Downloads 405
455 Domain-Specific Deep Neural Network Model for Classification of Abnormalities on Chest Radiographs

Authors: Nkechinyere Joy Olawuyi, Babajide Samuel Afolabi, Bola Ibitoye

Abstract:

This study collected a preprocessed dataset of chest radiographs and formulated a deep neural network model for detecting abnormalities. It also evaluated the performance of the formulated model and implemented a prototype of it. This was with the view to developing a deep neural network model to automatically classify abnormalities in chest radiographs. To achieve the overall purpose of this research, a large set of chest X-ray images was sourced and collected from the CheXpert dataset, an online repository of annotated chest radiographs compiled by the Machine Learning Research Group, Stanford University. The chest radiographs were preprocessed into a format that can be fed into a deep neural network; the preprocessing techniques used were standardization and normalization. The classification problem was formulated as a multi-label binary classification model, which used a convolutional neural network architecture to decide whether each abnormality was present or not in the chest radiographs. The classification model was evaluated using specificity, sensitivity, and the Area Under the Curve (AUC) score as the parameters. A prototype of the classification model was implemented using the Keras open-source deep learning framework in the Python programming language. Based on the ROC AUC scores, the model was able to classify atelectasis, support devices, pleural effusion, pneumonia, a normal CXR (no finding), pneumothorax, and consolidation. However, lung opacity and cardiomegaly had probabilities of less than 0.5 and were thus classified as absent. Precision, recall, and F1 score values were all 0.78; this implies that the numbers of False Positives and False Negatives are the same, revealing some measure of label imbalance in the dataset. The study concluded that the developed model is sufficient to classify abnormalities in chest radiographs as present or absent.
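
A minimal Keras sketch of such a multi-label setup is shown below: a small convolutional network with a sigmoid output per finding and a binary cross-entropy loss, so each abnormality is scored independently and counted as present above a 0.5 threshold. The architecture, input size, and label count are illustrative assumptions, not the study's exact model.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_LABELS = 9  # e.g. atelectasis, effusion, pneumonia, ... (illustrative)

    model = models.Sequential([
        layers.Input(shape=(224, 224, 1)),   # grayscale radiograph, assumed size
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        # Sigmoid (not softmax) gives each finding an independent probability;
        # a finding is treated as present when its score exceeds 0.5.
        layers.Dense(NUM_LABELS, activation="sigmoid"),
    ])

    model.compile(
        optimizer="adam",
        loss="binary_crossentropy",  # one binary decision per label
        metrics=[tf.keras.metrics.AUC(multi_label=True)],
    )
    model.summary()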

Keywords: transfer learning, convolutional neural network, radiograph, classification, multi-label

Procedia PDF Downloads 124