Search results for: image repair
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3253

2563 Comparative Stem Cells Therapy for Regeneration of Liver Fibrosis

Authors: H. M. Imam, H. M. Rezk, A. F. Tohamy

Abstract:

Background: Human umbilical cord blood (HUCB) is considered a unique source of stem cells; it contains different types of progenitor cells that can differentiate into hepatocytes. Aims: To investigate the potential of human umbilical cord mesenchymal stem cells (hUCMSCs) to repair liver damage in the rat, using a CCl4-induced model of liver damage and progressive fibrosis. Methods: Rats were injected with 0.5 ml/kg CCl4 to induce liver damage and progressive liver fibrosis. hUCMSCs were then transplanted through the tail vein at a dose of 1×10⁶ cells/rat 72 hours after CCl4 injection, without any immunosuppressant. At 6 and 8 weeks after transplantation, blood samples were collected to assess liver function (ALT, AST, GGT and ALB) and the level of procollagen III as a marker of liver fibrosis. In addition, hepatic tissue regeneration was assessed histopathologically and immunohistochemically using anti-human monoclonal antibodies against CD34, CK19 and albumin. Results: Biochemical and histopathological analysis showed significantly greater recovery from liver damage in the transplanted group. In addition, HUCB stem cells transdifferentiated into functional hepatocytes in rats with hepatic injury, improving liver structure and function. Conclusion: Our findings suggest that transplantation of hUCMSCs may be a novel therapeutic approach for treating liver fibrosis and a potential option for the treatment of liver cirrhosis.

Keywords: carbon tetrachloride, liver fibrosis, mesenchymal stem cells, rat

Procedia PDF Downloads 332
2562 Budget Optimization for Maintenance of Bridges in Egypt

Authors: Hesham Abd Elkhalek, Sherif M. Hafez, Yasser M. El Fahham

Abstract:

Allocating a limited budget to maintain a bridge network and selecting effective maintenance strategies for each bridge are challenging tasks for maintenance managers and decision makers. In Egypt, bridges deteriorate continuously, and in many cases maintenance works are performed only in response to user complaints. The objective of this paper is to develop a practical and reliable framework to manage the maintenance, repair, and rehabilitation (MR&R) activities of a bridge network considering performance and budget limits. The model solves an optimization problem that maximizes the average condition of the entire network for the limited available budget using a Genetic Algorithm (GA). The framework contains bridge inventory, condition assessment, repair cost calculation, deterioration prediction, and maintenance optimization. The developed model takes into account multiple parameters, including serviceability requirements, budget allocation, element importance for structural safety and serviceability, bridge impact on the network, and traffic. A questionnaire was conducted to complete the research scope. The proposed model is implemented in software with a friendly user interface, and the framework provides a multi-year maintenance plan for the entire network for up to five years. A case study of ten bridges, with data collected from transportation authorities in Egypt, is presented to validate and test the proposed model under different scenarios. The results are reasonable, feasible, and within an acceptable domain.
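
For readers who want a concrete picture of the optimization step, the toy sketch below (not the authors' software; all condition scores, action costs, and GA parameters are invented) allocates one of three maintenance actions to each bridge so that the average network condition is maximized under a budget cap:

```python
# Illustrative toy GA for budget-constrained bridge maintenance planning.
# All numbers (costs, condition gains, GA parameters) are hypothetical.
import random

random.seed(42)

N_BRIDGES = 10
BUDGET = 500.0                      # available budget (hypothetical units)
ACTIONS = [                         # (cost, condition improvement) per action
    (0.0, 0.0),                     # do nothing
    (40.0, 1.0),                    # minor repair
    (120.0, 2.5),                   # major rehabilitation
]
base_condition = [random.uniform(3.0, 7.0) for _ in range(N_BRIDGES)]  # 0-9 scale

def fitness(plan):
    """Average post-maintenance condition; infeasible plans are penalized."""
    cost = sum(ACTIONS[a][0] for a in plan)
    cond = [min(9.0, base_condition[i] + ACTIONS[a][1]) for i, a in enumerate(plan)]
    penalty = max(0.0, cost - BUDGET)          # soft budget constraint
    return sum(cond) / N_BRIDGES - 0.01 * penalty

def crossover(p1, p2):
    cut = random.randrange(1, N_BRIDGES)
    return p1[:cut] + p2[cut:]

def mutate(plan, rate=0.1):
    return [random.randrange(len(ACTIONS)) if random.random() < rate else a
            for a in plan]

pop = [[random.randrange(len(ACTIONS)) for _ in range(N_BRIDGES)]
       for _ in range(60)]
for _ in range(200):                           # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:20]                           # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(40)]

best = max(pop, key=fitness)
print("best plan:", best, "avg condition: %.2f" % fitness(best))
```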

Keywords: bridge management systems (BMS), cost optimization, condition assessment, fund allocation, Markov chain

Procedia PDF Downloads 287
2561 Secret Sharing in Visual Cryptography Using NVSS and Data Hiding Techniques

Authors: Misha Alexander, S. B. Waykar

Abstract:

Visual cryptography is an encryption technique that transforms a secret image into random, noisy pixel shares. Because of their noisy texture, shares transmitted over a network attract the attention of attackers. To address this issue, the Natural Visual Secret Sharing (NVSS) scheme was introduced; it uses natural shares, in either digital or printed form, to generate the noisy secret share. This scheme greatly reduces the transmission risk, but it distorts the retrieved secret image through variations in the settings and properties of the digital devices used to capture the natural image during the encryption/decryption phases. This paper proposes a new NVSS scheme that extracts the secret key from multiple randomly selected, unaltered natural images. To further improve the security of the shares, data hiding techniques such as steganography and alpha-channel watermarking are proposed.
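
As a minimal sketch of the share-generation idea (not the authors' NVSS scheme; here the key material is simply the pixel values of a natural image), a noisy share is the XOR of the secret with the natural image, and the same XOR recovers it:

```python
# Minimal sketch of XOR-based secret sharing with a natural image as key
# material. This is an illustration, not the NVSS scheme from the paper.
import numpy as np

rng = np.random.default_rng(0)
secret = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in secret image
natural = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in natural image

noisy_share = secret ^ natural       # transmitted share looks like random noise
recovered = noisy_share ^ natural    # receiver holds the same natural image
assert np.array_equal(recovered, secret)
```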

Keywords: decryption, encryption, natural visual secret sharing, natural images, noisy share, pixel swapping

Procedia PDF Downloads 397
2560 Numerical Implementation and Testing of Fractioning Estimator Method for the Box-Counting Dimension of Fractal Objects

Authors: Abraham Terán Salcedo, Didier Samayoa Ochoa

Abstract:

This work presents a numerical implementation of a method, named the fractioning estimator, for estimating the box-counting dimension of self-avoiding curves in the plane: fractal objects captured in digital images. Classical digital image processing techniques, such as noise filtering, contrast manipulation, and thresholding, are used to obtain binary images suitable for the computations of the fractioning estimator. A user interface is developed for performing the image processing operations and for testing the fractioning estimator on captured images of real-life fractal objects. To analyze the results, the estimations obtained with the fractioning estimator are compared to those obtained with other methods already implemented in available software for computing and estimating the box-counting dimension.
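
The fractioning estimator itself is not reproduced here; the classical box-counting estimate it is benchmarked against fits in a few lines: count occupied boxes at several scales and fit the slope of log N(s) against log(1/s):

```python
# Classical box-counting dimension estimate on a binary image (comparison
# baseline, not the fractioning estimator described in the abstract).
import numpy as np

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h, w = binary.shape
        # Count boxes of side s containing at least one foreground pixel.
        occupied = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if binary[i:i + s, j:j + s].any():
                    occupied += 1
        counts.append(occupied)
    # Slope of log N(s) vs log(1/s) estimates the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Example: a straight line should give a dimension near 1.
img = np.zeros((128, 128), dtype=bool)
np.fill_diagonal(img, True)
print("estimated dimension: %.2f" % box_counting_dimension(img))
```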

Keywords: box-counting, digital image processing, fractal dimension, numerical method

Procedia PDF Downloads 78
2559 The Long-Term Effects of Immediate Implantation, Early Implantation, and Delayed Implantation in the Aesthetic Area

Authors: Xing Wang, Lin Feng, Xuan Zou, Hongchen Liu

Abstract:

Immediate implantation after tooth extraction is considered the ideal way to retain the alveolar bone, but some scholars believe the aesthetic results of early implantation are more reliable. In this retrospective study, 89 patients were followed for up to 5 years. Assessment indicators included implant survival (peri-implant infection, implant loosening and shedding, crowns and occlusion), aesthetics (gum color and fullness, papilla height, probing depth, X-ray alveolar crest height, the patient's own aesthetic satisfaction, doctors' aesthetic score), repair of defects around the implant (changes in peri-implant bone height and thickness, use of autologous bone graft, use of absorbable or non-absorbable repair materials), treatment time, cost, and the use of antibiotics. The results demonstrated no significant difference in long-term success rate between immediate, early, and delayed implantation (p > 0.05). However, the immediate implantation group achieved better aesthetic results after two years (p < 0.05), at an increased risk of complications and failures (p < 0.05). High-risk indicators include gingival recession, labial bone wall damage, thin gingival biotypes, and poor implant position and occlusal restoration. Regardless of the implantation method selected, the extraction method and bone defect augmentation techniques were significant factors for the aesthetic outcome (p < 0.05).

Keywords: immediate implantation, long-term effects, aesthetics area, dental implants

Procedia PDF Downloads 353
2558 A Combination of Anisotropic Diffusion and Sobel Operator to Enhance the Performance of the Morphological Component Analysis for Automatic Crack Detection

Authors: Ankur Dixit, Hiroaki Wagatsuma

Abstract:

Crack detection on concrete bridges is an important and recurring task in civil engineering. Conventionally, humans inspect the bridge for cracks to maintain its quality and reliability, but this process is slow and costly. To overcome these limitations, we used a drone with a digital camera to take images of the bridge deck, which are then processed by morphological component analysis (MCA). MCA is a powerful application of sparse coding that explores the separation of an image into components. In this paper, MCA is used to decompose the image into coarse and fine components using two dictionaries, namely anisotropic diffusion and the wavelet transform. Anisotropic diffusion is an adaptive smoothing process that adjusts the diffusion coefficient using gray level and gradient as features. Cracks are enhanced by subtracting the diffused coarse image from the original image, and the result is treated with a Sobel edge detector and binary filtering to exhibit the cracks clearly. Our results demonstrate that the proposed MCA framework, using anisotropic diffusion followed by the Sobel operator and binary filtering, may contribute to the automation of crack detection even in severe open-field conditions such as bridge decks.
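
A compressed, hypothetical version of the enhancement stage (Perona-Malik diffusion stands in for the full MCA decomposition, and all parameter values are invented) looks like this:

```python
# Hedged sketch of the enhancement stage only: anisotropic diffusion gives a
# coarse image, subtraction exposes fine detail (cracks), then Sobel edges and
# a binary threshold produce a crack mask.
import numpy as np
from scipy import ndimage

def anisotropic_diffusion(img, n_iter=20, kappa=100.0, lam=0.2):
    u = img.astype(float)
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbors.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance (Perona-Malik g = exp(-(|grad u|/kappa)^2)).
        u += lam * sum(d * np.exp(-(d / kappa) ** 2) for d in (dn, ds, de, dw))
    return u

def detect_cracks(img, edge_thresh=10.0):
    coarse = anisotropic_diffusion(img)          # smooth background component
    fine = img.astype(float) - coarse            # residual carries crack detail
    edges = np.hypot(ndimage.sobel(fine, 0), ndimage.sobel(fine, 1))
    return edges > edge_thresh                   # binary crack mask

demo = np.full((64, 64), 128.0)
demo[32, :] -= 60.0                              # synthetic dark crack line
print("crack pixels found:", detect_cracks(demo).sum())
```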

Keywords: anisotropic diffusion, coarse component, fine component, MCA, Sobel edge detector, wavelet transform

Procedia PDF Downloads 169
2557 Improved Processing Speed for Text Watermarking Algorithm in Color Images

Authors: Hamza A. Al-Sewadi, Akram N. A. Aldakari

Abstract:

Copyright protection and proof of ownership for digital multimedia are nowadays achieved by digital watermarking techniques. This paper proposes a text watermarking algorithm for protecting the property rights and judging the ownership of color images. Embedding is achieved by inserting text elements randomly into the color image as noise. The YIQ image processing model was found to be faster than other image processing methods and is therefore adopted for the embedding process. An optional step of encrypting the text watermark before embedding is also suggested for applications that require it; the text can be encrypted using any enciphering technique, adding more difficulty for attackers. Experiments showed an embedding speed improvement of more than double that of other considered systems (such as the least significant bit method and separate color code methods), and a fairly acceptable peak signal-to-noise ratio (PSNR) with low mean square error values for watermarking purposes.
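
As a hedged sketch of the embedding idea (the RGB-to-YIQ matrix below is the standard NTSC one, but the perturbation scheme and key handling are invented, not the paper's algorithm):

```python
# Illustrative sketch: convert RGB to YIQ, perturb the luminance (Y) channel
# with watermark text bits at key-selected positions, then convert back.
import numpy as np

RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])
YIQ2RGB = np.linalg.inv(RGB2YIQ)

def embed_text(rgb, text, key=1234, strength=2.0):
    yiq = rgb.astype(float) @ RGB2YIQ.T
    flat_y = yiq[..., 0].reshape(-1)
    rng = np.random.default_rng(key)                  # key selects positions
    pos = rng.choice(flat_y.size, size=len(text) * 8, replace=False)
    bits = np.unpackbits(np.frombuffer(text.encode(), dtype=np.uint8))
    flat_y[pos] += strength * (2.0 * bits - 1.0)      # +/- perturbation per bit
    yiq[..., 0] = flat_y.reshape(yiq[..., 0].shape)
    return np.clip(yiq @ YIQ2RGB.T, 0, 255).astype(np.uint8)

image = np.random.default_rng(0).integers(0, 256, (128, 128, 3), dtype=np.uint8)
marked = embed_text(image, "owner: Alice")            # hypothetical watermark text
print("mean absolute change:", np.abs(marked.astype(int) - image).mean())
```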

Keywords: steganography, watermarking, time complexity measurements, private keys

Procedia PDF Downloads 140
2556 Automatic Early Breast Cancer Segmentation Enhancement by Image Analysis and Hough Transform

Authors: David Jurado, Carlos Ávila

Abstract:

Detection of early signs of breast cancer development is crucial to quickly diagnose the disease and to define an adequate treatment that increases the patient's survival probability. Computer Aided Detection systems (CADs), along with modern data techniques such as Machine Learning (ML) and Neural Networks (NN), have shown an overall improvement in digital mammography cancer diagnosis, reducing false positive and false negative rates and becoming important tools for the diagnostic evaluations performed by specialized radiologists. However, ML and NN-based algorithms rely on datasets that might introduce issues into the segmentation tasks. In the present work, an automatic segmentation and detection algorithm is described. This algorithm uses image processing techniques along with the Hough transform to automatically identify microcalcifications, which are highly correlated with breast cancer development in the early stages. Automatic segmentation of high-contrast objects is done using edge extraction and the circle Hough transform, which provides the geometrical features needed for an automatic mask design that extracts statistical features of the regions of interest. The results shown in this study prove the potential of this tool for further diagnostics and classification of mammographic images, due to its low sensitivity to noisy images and low-contrast mammograms.
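
As an illustrative sketch of the detection core (synthetic image; radii and peak counts are invented), edge extraction followed by a circle Hough transform can localize small circular objects:

```python
# Hedged sketch: Canny edges followed by a circle Hough transform to localize
# small bright circular spots (stand-ins for microcalcifications).
import numpy as np
from skimage import draw, feature, transform

# Synthetic "mammogram" patch with two bright circular spots.
img = np.zeros((128, 128))
for r, c, rad in [(40, 40, 6), (90, 80, 4)]:
    rr, cc = draw.disk((r, c), rad)
    img[rr, cc] = 1.0

edges = feature.canny(img, sigma=1.0)                  # edge map
radii = np.arange(3, 9)                                # candidate radii (pixels)
hough = transform.hough_circle(edges, radii)           # accumulator per radius
_, cx, cy, found_r = transform.hough_circle_peaks(hough, radii, total_num_peaks=2)

for x, y, r in zip(cx, cy, found_r):
    print(f"circle at ({x}, {y}) radius {r}")          # ROI centers for masking
```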

Keywords: breast cancer, segmentation, X-ray imaging, Hough transform, image analysis

Procedia PDF Downloads 75
2555 Impact of Green Marketing Mix Strategy and CSR on Organizational Performance: An Empirical Study of Manufacturing Sector of Pakistan

Authors: Syeda Shawana Mahasan, Muhammad Farooq Akhtar

Abstract:

The objective of this study is to analyze the influence of the green marketing mix strategy and corporate social responsibility (CSR) on organizational performance, taking into account the mediating effect of corporate image; the impact of frugal innovation and corporate activism is also examined. Data were gathered from executives at top, middle, and lower levels of management in 550 manufacturing enterprises of different sizes, ranging from small to medium to large. The collected responses were processed and analyzed using SMART PLS version 4.0.0.0. The PLS-SEM analysis demonstrates that the green marketing mix strategy and corporate social responsibility have a significant impact on organizational performance; it is therefore imperative for organizations to effectively adopt environmentally sustainable and socially conscious methods within their operations. The results indicate that corporate image plays a key mediating role in the relationship between the green marketing mix strategy, corporate social responsibility, and organizational performance, which demonstrates the imperative for organizations to actively enhance their favorable reputation among stakeholders. The combination of frugal innovation and corporate activism enhances the connection between corporate image and organizational performance. The current study assists managers in recognizing the significance of these constructs in maintaining the long-term performance of the organization.

Keywords: green marketing mix strategy, CSR, corporate image, organizational performance, frugal innovation, corporate activism

Procedia PDF Downloads 27
2554 MRI Quality Control Using Texture Analysis and Spatial Metrics

Authors: Kumar Kanudkuri, A. Sandhya

Abstract:

Typically, in a clinical MRI setting, several protocols are run, each indicated for a specific anatomy and disease condition. These protocols, or the parameters within them, can change over time due to updated recommendations from physician groups, software updates, or the availability of new technologies. Most of the time, the changes are made by the MRI technologist for time, coverage, physiological, or Specific Absorption Rate (SAR) reasons. Providing proper guidelines to MRI technologists is therefore important, so that they do not change parameters in ways that negatively impact image quality. Typically, a standard American College of Radiology (ACR) MRI phantom is used for Quality Control (QC) in order to guarantee that the primary objectives of MRI are met. Visual evaluation of quality, however, depends on the operator/reviewer and may vary between operators, and even for the same operator at different times. Overcoming these constraints is essential for a more impartial evaluation of quality, which makes quantitative estimation of image quality (IQ) metrics very important for MRI quality control. To address this problem, we propose a robust, open-source, automated MRI image quality control tool. The designed and developed automatic analysis tool measures MRI IQ metrics, including Signal to Noise Ratio (SNR), SNR Uniformity (SNRU), Visual Information Fidelity (VIF), Feature Similarity (FSIM), Gray Level Co-occurrence Matrix (GLCM) texture features, slice thickness accuracy, slice position accuracy, and high-contrast spatial resolution, and provided good accuracy in its assessments. A standardized quality report is generated that incorporates the metrics that impact diagnostic quality.
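
As a hedged sketch of one of the listed metrics (a single-image phantom SNR with invented ROI positions; an ACR protocol prescribes exact ROIs and methods):

```python
# Illustrative phantom SNR: mean of a signal ROI at the image center divided by
# the standard deviation of a background noise ROI in a corner.
import numpy as np

def phantom_snr(img, roi=16):
    h, w = img.shape
    signal = img[h//2 - roi:h//2 + roi, w//2 - roi:w//2 + roi]   # center ROI
    noise = img[:roi, :roi]                                      # background corner
    return signal.mean() / noise.std()

rng = np.random.default_rng(1)
phantom = rng.normal(5.0, 2.0, (256, 256))          # background noise
phantom[64:192, 64:192] += 200.0                    # bright phantom region
print("SNR: %.1f" % phantom_snr(phantom))
```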

Keywords: ACR MRI phantom, MRI image quality metrics, SNRU, VIF, FSIM, GLCM, slice thickness accuracy, slice position accuracy

Procedia PDF Downloads 156
2553 A Comprehensive Study and Evaluation on Image Fashion Features Extraction

Authors: Yuanchao Sang, Zhihao Gong, Longsheng Chen, Long Chen

Abstract:

Clothing fashion represents a human's aesthetic appreciation of everyday outfits and appetite for fashion, and it reflects developments in society, humanity, and economics. However, modelling fashion by machine is extremely challenging because fashion is too abstract to be efficiently described by machines; even human beings can hardly reach a consensus about fashion. In this paper, we are dedicated to answering a fundamental fashion-related question: which image feature best describes clothing fashion? To address this issue, we designed and evaluated various image features, ranging from traditional low-level hand-crafted features, through mid-level style-awareness features, to various currently popular deep neural network-based features that have shown state-of-the-art performance in various vision tasks. In summary, we tested the following nine feature representations: color, texture, shape, style, convolutional neural networks (CNNs), CNNs with distance metric learning (CNNs&DML), AutoEncoder, CNNs with multiple layer combination (CNNs&MLC), and CNNs with dynamic feature clustering (CNNs&DFC). Finally, we validated the performance of these features on two publicly available datasets. Quantitative and qualitative experimental results on both intra-domain and inter-domain fashion clothing image retrieval show that deep learning based feature representations far outperform traditional hand-crafted ones. Additionally, among all deep learning based methods, CNNs with explicit feature clustering perform best, which shows that feature clustering is essential for discriminative fashion feature representation.
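
As a minimal sketch of the strongest family evaluated, deep CNN features for retrieval (the ResNet-18 backbone and cosine ranking here are illustrative choices, not the paper's setup):

```python
# Hedged sketch: a pretrained CNN as a feature extractor for clothing-image
# retrieval, ranking a gallery by cosine similarity to a query.
import torch
import torchvision.models as models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # drop classifier, keep 512-d features
backbone.eval()

@torch.no_grad()
def embed(images):                         # images: float tensor (N, 3, 224, 224)
    return torch.nn.functional.normalize(backbone(images), dim=1)

query, gallery = torch.rand(1, 3, 224, 224), torch.rand(8, 3, 224, 224)
scores = embed(query) @ embed(gallery).T   # cosine similarity via unit vectors
print("ranked gallery indices:", scores.argsort(descending=True).squeeze().tolist())
```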

Keywords: convolutional neural network, feature representation, image processing, machine modelling

Procedia PDF Downloads 135
2552 Lime-Based Products as a Maintainable Option for Repair and Restoration of Historic Buildings in India

Authors: Adedayo Jeremiah Adeyekun, Samuel Oluwagbemiga Ishola

Abstract:

This research studies the use of traditional building materials for the repair and refurbishment of historic buildings in India, aiming to provide an authentic treatment of historical buildings that takes the new standards of the rehabilitation process into consideration. Lime can prove to be an effective solution over modern impervious materials due to its compatibility with traditional building methods and materials. For example, its elastoplastic properties allow it to accommodate movement due to settlement or moisture/temperature changes without cracking. The use of lime also enhances workability, water retention, and bond characteristics. Lime is a natural, traditional material, but it is also sustainable and energy-efficient, with production powered by biomass and emissions up to 25% less than cementitious materials. However, there is a lack of comprehensive data on the impact of lime-based materials on the energy efficiency and thermal properties of traditional buildings and structures. Although lime mortars, renders, and plasters were largely superseded by cement-based products in the first half of the 20th century, lime has a long and proven track record dating back to ancient times; it was used by the Egyptians in 4000 BC to construct the pyramids. This does not mean that lime is an outdated technology, nor is it difficult to use as a material. In fact, lime has a growing place in modern construction, with increasing numbers of designers choosing lime-based products because of their special properties. To carry out this research, historic buildings will be surveyed, and information will be derived from textbooks and journals related to architectural restoration.

Keywords: lime, materials, historic, buildings, sustainability

Procedia PDF Downloads 161
2551 Optimal Sequential Scheduling of Imperfect Maintenance Last Policy for a System Subject to Shocks

Authors: Yen-Luan Chen

Abstract:

Maintenance has a great impact on production capacity and product quality, and therefore deserves continuous improvement. A maintenance procedure performed before a failure is called preventive maintenance (PM). Sequential PM, which specifies that a system should be maintained at a sequence of intervals with unequal lengths, is one of the most commonly used PM policies. This article proposes a generalized sequential PM policy for a system subject to shocks, with imperfect maintenance and random working time. The shocks arrive according to a non-homogeneous Poisson process (NHPP) with a varied intensity function in each maintenance interval. When a shock occurs, the system suffers one of two types of failure with number-dependent probabilities: a type-I (minor) failure, which is rectified by a minimal repair, or a type-II (catastrophic) failure, which is removed by corrective maintenance (CM). Imperfect maintenance is carried out to improve the system failure characteristics due to the altered shock process. The sequential preventive maintenance-last (PML) policy specifies that the system is maintained, before any CM occurs, at a planned time Ti or at the completion of a working time in the i-th maintenance interval, whichever occurs last; at the N-th maintenance, the system is replaced rather than maintained. This article is the first to take up the sequential PML policy with random working time and imperfect maintenance in reliability engineering. The optimal preventive maintenance schedule that minimizes the mean cost rate of a replacement cycle is derived analytically and characterized in terms of its existence and uniqueness. The proposed models provide a general framework for analyzing maintenance policies in reliability theory.
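
As an illustrative, generic renewal-reward form of the objective being minimized (not the paper's exact derivation), with minimal repair, PM, and replacement costs denoted c_m, c_p, and c_R:

```latex
% Generic renewal-reward mean cost rate for a sequential PM policy
% (illustrative form only; the paper derives its own expression).
% E[M_i]: expected minimal repairs in interval i under the NHPP intensity,
% E[L]: expected length of one replacement cycle.
\[
  C(T_1,\dots,T_N)
  = \frac{c_m \sum_{i=1}^{N} \mathbb{E}[M_i] + c_p\,(N-1) + c_R}
         {\mathbb{E}[L]},
  \qquad
  \mathbb{E}[M_i] = \int_{0}^{T_i} p_i(t)\,\lambda_i(t)\,dt ,
\]
where $\lambda_i(t)$ is the shock intensity in the $i$-th interval and
$p_i(t)$ is the probability that a shock causes a type-I (minor) failure.
```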

Keywords: optimization, preventive maintenance, random working time, minimal repair, replacement, reliability

Procedia PDF Downloads 266
2550 Comprehensive Evaluation of COVID-19 Through Chest Images

Authors: Parisa Mansour

Abstract:

Coronavirus disease 2019 (COVID-19) was discovered at the end of 2019 and has since spread rapidly to countries around the world. Computed tomography (CT) images have been used as an important alternative to the time-consuming RT-PCR test. However, manual segmentation of CT images alone is a major challenge as the number of suspected cases increases, so accurate and automatic segmentation of COVID-19 infections is urgently needed. Because the imaging features of COVID-19 infection are varied and often similar to the background, existing medical image segmentation methods cannot achieve satisfactory performance. In this work, we build a deep convolutional neural network adapted for the segmentation of chest CT images with COVID-19 infections. First, we maintain a large and novel chest CT image database containing 165,667 annotated chest CT images from 861 patients with confirmed COVID-19. Inspired by the observation that the boundary of an infected lung can be improved by global intensity adjustment, we introduce a feature-variation block into the proposed deep CNN, which adjusts global feature statistics to segment the COVID-19 infection; the proposed block effectively and adaptively improves performance in different cases. We combine features of different scales by proposing a progressive atrous spatial pyramid fusion scheme to deal with advanced infection regions of various sizes and shapes. We conducted experiments on data collected in China and Germany and showed that the proposed deep CNN can effectively produce impressive performance.
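
As a hedged sketch of the atrous-spatial-pyramid idea named above (channel counts and dilation rates are invented, and the paper's progressive fusion and feature-variation block are not reproduced):

```python
# Illustrative atrous spatial pyramid pooling (ASPP) block: parallel dilated
# convolutions gather context at multiple scales, then a 1x1 conv fuses them.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch=64, out_ch=64, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)  # 1x1 fusion

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]
        return self.fuse(torch.cat(feats, dim=1))

x = torch.rand(1, 64, 32, 32)          # a feature map from a CT encoder
print(ASPP()(x).shape)                 # torch.Size([1, 64, 32, 32])
```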

Keywords: chest, COVID-19, chest image, coronavirus, CT image, chest CT

Procedia PDF Downloads 50
2549 Classification of Digital Chest Radiographs Using Image Processing Techniques to Aid in Diagnosis of Pulmonary Tuberculosis

Authors: A. J. S. P. Nileema, S. Kulatunga, S. H. Palihawadana

Abstract:

A computer-aided detection (CAD) system was developed for the diagnosis of pulmonary tuberculosis (PTB) from digital chest X-rays, using MATLAB image processing techniques and a statistical approach. The study comprised 200 digital chest radiographs collected from the National Hospital for Respiratory Diseases, Welisara, Sri Lanka. Pre-processing was done to remove identification details. Lung fields were segmented and then divided into four quadrants (right upper, left upper, right lower, and left lower) using image processing techniques in MATLAB. Contrast, correlation, homogeneity, energy, entropy, and maximum probability texture features were extracted using the gray level co-occurrence matrix (GLCM) method. Descriptive statistics and normal distribution analysis were performed using SPSS. Based on the radiologists' interpretation, chest radiographs were classified manually into PTB-positive (PTBP) and PTB-negative (PTBN) classes. Features with a standard normal distribution were analyzed using an independent-sample t-test between PTBP and PTBN radiographs. Among the six features tested, contrast, correlation, energy, entropy, and maximum probability showed a statistically significant difference between the two classes at the 95% confidence level, and could therefore be used in the classification of chest radiographs for PTB diagnosis. Using the resulting value ranges of these five texture features, a classification algorithm was defined to recognize and classify the quadrant images: if the texture feature values of a tested quadrant fall within the defined region, it is identified as a PTBP (abnormal) quadrant and labeled 'Abnormal' in red with its border highlighted in red; if the values fall outside the defined range, it is identified as PTBN (normal) and labeled 'Normal' in blue, with no changes to the image outline. The developed classification algorithm has shown a high sensitivity of 92%, which makes it an efficient CAD system, with a modest specificity of 70%.
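
The study used MATLAB; as an illustrative re-creation of the feature-extraction step in Python (the quadrant image and GLCM parameters are invented), the texture properties named above can be computed as follows:

```python
# Hedged sketch of GLCM texture feature extraction for one lung quadrant.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
quadrant = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # stand-in lung quadrant

glcm = graycomatrix(quadrant, distances=[1], angles=[0],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "correlation", "homogeneity", "energy")}
# Entropy and maximum probability are not graycoprops built-ins; compute directly.
p = glcm[:, :, 0, 0]
features["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))
features["max_probability"] = p.max()
print(features)
```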

Keywords: chest radiographs, computer aided detection, image processing, pulmonary tuberculosis

Procedia PDF Downloads 115
2548 Toward Subtle Change Detection and Quantification in Magnetic Resonance Neuroimaging

Authors: Mohammad Esmaeilpour

Abstract:

One of the important open problems in medical image processing is the detection and quantification of small changes. In this poster, we investigate how algebraic decomposition techniques can be used to semi-automatically detect and quantify subtle changes in Magnetic Resonance (MR) neuroimaging volumes. We focus mostly on the low-rank components of the matrices obtained by decomposing MR image pairs acquired over a period of time. In addition, a skilled neuroradiologist helps the algorithm distinguish between noise and small changes.
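
As a hedged illustration of the low-rank idea (a rank-1 truncated SVD on a synthetic image pair, not the poster's actual decomposition or data): the low-rank part captures structure shared by both scans, and the residual concentrates candidate changes plus noise.

```python
# Illustrative low-rank/residual split of an MR image pair via truncated SVD.
import numpy as np

rng = np.random.default_rng(0)
baseline = rng.normal(size=(128, 128))
followup = baseline.copy()
followup[60:68, 60:68] += 3.0                 # a small simulated change

# Stack the pair as columns of one matrix and take a truncated SVD.
M = np.column_stack([baseline.ravel(), followup.ravel()])
U, s, Vt = np.linalg.svd(M, full_matrices=False)
low_rank = s[0] * np.outer(U[:, 0], Vt[0])    # rank-1 shared component
residual = (M - low_rank)[:, 1].reshape(128, 128)

mask = np.zeros((128, 128), dtype=bool)
mask[60:68, 60:68] = True
print("mean |residual| in change region:", np.abs(residual[mask]).mean())
print("mean |residual| elsewhere:", np.abs(residual[~mask]).mean())
```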

Keywords: magnetic resonance neuroimaging, subtle change detection and quantification, algebraic decomposition, basis functions

Procedia PDF Downloads 467
2547 Scar Removal Strategy for Fingerprints Using Diffusion

Authors: Mohammad A. U. Khan, Tariq M. Khan, Yinan Kong

Abstract:

Fingerprint image enhancement is one of the most important steps in an automatic fingerprint identification system (AFIS) and directly affects the overall efficiency of the AFIS. Conventional fingerprint enhancement methods, such as Gabor and anisotropic filters, fill the gaps in ridge lines but fail to tackle scar lines. To deal with this problem, we propose a method for enhancing ridges and valleys crossed by scars, so that true minutiae points can be extracted accurately. Our results have shown improved enhancement performance.

Keywords: fingerprint image enhancement, removing noise, coherence, enhanced diffusion

Procedia PDF Downloads 509
2546 Small Text Extraction from Documents and Chart Images

Authors: Rominkumar Busa, Shahira K. C., Lijiya A.

Abstract:

Text recognition is an important area of computer vision that deals with detecting and recognizing text in images. Optical Character Recognition (OCR) is by now a mature area with very good recognition accuracy. However, when the same OCR methods are applied to text with small font sizes, such as the text in chart images, the recognition rate falls below 30%. This work aims to extract small text from images using a deep learning model, a CRNN trained with CTC loss. Text recognition accuracy is found to improve when image enhancement by super-resolution is applied before the CRNN. We further observe that the text recognition rate increases by 18% with the proposed method, which applies super-resolution and character segmentation followed by a CRNN with CTC loss. The efficiency of the proposed method suggests that further pre-processing of chart-image text and other small text will improve accuracy further, thereby helping text extraction from chart images.
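
As a hedged sketch of the recognition model named above (layer sizes, alphabet, and input shape are invented, and the super-resolution front end is omitted):

```python
# Illustrative CRNN with CTC loss for line-text recognition: a small CNN
# produces a feature sequence over image width, a BiLSTM models it, and CTC
# aligns the per-step class scores with the unsegmented label sequence.
import torch
import torch.nn as nn

N_CLASSES = 37                      # e.g. 26 letters + 10 digits + CTC blank (0)

class CRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(   # (B, 1, 32, W) -> (B, 64, 8, W/4)
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.LSTM(64 * 8, 128, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, N_CLASSES)

    def forward(self, x):
        f = self.cnn(x)                             # (B, C, H, W')
        f = f.permute(0, 3, 1, 2).flatten(2)        # (B, W', C*H): width as time
        out, _ = self.rnn(f)
        return self.fc(out).log_softmax(2)          # (B, W', classes)

model, ctc = CRNN(), nn.CTCLoss(blank=0)
images = torch.rand(4, 1, 32, 128)                  # batch of text-line crops
logp = model(images).permute(1, 0, 2)               # CTC wants (T, B, classes)
targets = torch.randint(1, N_CLASSES, (4, 10))      # dummy label sequences
loss = ctc(logp, targets,
           input_lengths=torch.full((4,), logp.size(0), dtype=torch.long),
           target_lengths=torch.full((4,), 10, dtype=torch.long))
print("CTC loss:", loss.item())
```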

Keywords: small text extraction, OCR, scene text recognition, CRNN

Procedia PDF Downloads 120
2545 Microstructure and Hardness Changes on T91 Weld Joint after Heating at 560°C

Authors: Suraya Mohamad Nadzir, Badrol Ahmad, Norlia Berahim

Abstract:

T91 steel has been used as a construction material for superheater tubes in sub-critical and supercritical boilers. This steel was developed with higher creep strength than conventional low-alloy steel. However, it is also susceptible to material degradation due to its sensitivity to heat treatment, especially post-weld heat treatment (PWHT) after a weld repair process. Reviews of the PWHT process show that the holding temperature may differ from one batch of samples to another depending on the material composition. This issue has been reviewed by many researchers, and one potential solution is a weld repair process without PWHT, which is possible with the temper bead welding technique. However, studies have shown that the hardness value across a weld joint without PWHT is much higher than the recommended hardness value. Based on these findings, a study was carried out to evaluate the microstructure and hardness changes of a T91 weld joint after heating at 560°C for varying durations, in order to evaluate the possibility of a self-tempering process during the in-service period. In this study, the T91 weld joint was heated in an air furnace at 560°C for durations of 50 and 150 hours, with a controlled heating rate of 200°C/hour and a cooling rate of about 100°C/hour. Following this process, samples were prepared for microstructure examination and hardness evaluation. The results show a fully tempered martensite structure and an acceptable hardness value after 50 hours of heating, indicating that thin components such as T91 superheater tubes are able to self-temper during service hours.

Keywords: T91, weld-joint, tempered martensite, self-tempering

Procedia PDF Downloads 372
2544 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles

Authors: Nozar Kishi, Babak Kamrani, Filmon Habte

Abstract:

Natural hazards such as earthquakes and tropical storms are very frequent and highly destructive in Japan, which experiences, on average, more than 10 tropical cyclones within damaging reach every year, as well as earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance, reinsurance, NGOs, and governmental institutions. KCC's (Karen Clark and Company) catastrophe models are constituted of four modular segments: 1) stochastic event sets that represent the statistics of past events, 2) hazard attenuation functions that model the local intensity, 3) vulnerability functions that address the repair need for local buildings exposed to the hazard, and 4) a financial module addressing policy conditions that estimates the resulting losses. The events module is comprised of events (faults or tracks) of different intensities with corresponding probabilities, based on the same statistics as observed in the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions relating the hazard intensity to repair need as a percentage of the replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions with similar typhoon climatology, and into earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, a set of stochastic events is then developed that yields events with intensities corresponding to the annual occurrence probabilities of interest to financial communities, such as 0.01, 0.004, etc. The intensities corresponding to these probabilities (called CEs, Characteristic Events) are selected through a super-stratified sampling approach based on the primary uncertainty. Region-specific hazard intensity attenuation functions followed by vulnerability models lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, from post-and-lintel Shinkabe and Okabe wood to steel-reinforced concrete (SRC) and high-rise construction.
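
The super-stratified sampling itself is not described in detail in the abstract; as a rough, hypothetical illustration of the underlying idea, characteristic-event intensities can be read off the empirical exceedance curve of a simulated catalog:

```python
# Illustrative sketch: pick the hazard intensity whose annual exceedance
# probability (AEP) matches a return period of interest. Catalog is synthetic.
import numpy as np

rng = np.random.default_rng(7)
YEARS = 100_000
# Synthetic catalog: yearly maximum wind speed (m/s) for one region, heavy-tailed.
annual_max = 25.0 + rng.gumbel(loc=10.0, scale=6.0, size=YEARS)

for aep in (0.01, 0.004):                   # 100-year and 250-year events
    ce_intensity = np.quantile(annual_max, 1.0 - aep)
    print(f"AEP {aep}: characteristic intensity ~ {ce_intensity:.1f} m/s")
```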

Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM

Procedia PDF Downloads 261
2543 Advances of Image Processing in Precision Agriculture: Using Deep Learning Convolution Neural Network for Soil Nutrient Classification

Authors: Halimatu S. Abdullahi, Ray E. Sheriff, Fatima Mahieddine

Abstract:

Agriculture is essential to the continuous existence of human life, as humans depend directly on it for the production of food. The exponential rise in population calls for a rapid increase in food production, with the application of technology to reduce laborious work and maximize production. Technology can aid and improve agriculture in several ways, through pre-planning and post-harvest, by the use of computer vision technology: image processing can determine the soil nutrient composition and support the right amount, right time, and right place application of farm input resources such as fertilizers, herbicides, and water, as well as weed detection and early detection of pests and diseases. This is precision agriculture, which is thought to be the solution required to achieve our goals. There has been significant improvement in the areas of image processing and data processing, which had been a major challenge. A database of images is collected through remote sensing and analyzed, and a model is developed to determine the right treatment plans for different crop types and different regions. Features of vegetation images need to be extracted, classified, segmented, and finally fed into the model. Different techniques have been applied in these processes, from neural networks, support vector machines, and fuzzy logic approaches to, most recently, the most effective approach, generating excellent results: deep learning with convolutional neural networks for image classification. Here, a deep convolutional neural network is used to determine the soil nutrients required in a plantation for maximum production. The experimental results of the developed model yielded an average accuracy of 99.58%.

Keywords: convolution, feature extraction, image analysis, validation, precision agriculture

Procedia PDF Downloads 311
2542 Mitochondrial Apolipoprotein A-1 Binding Protein Promotes Repolarization of Inflammatory Macrophage by Repairing Mitochondrial Respiration

Authors: Hainan Chen, Jina Qing, Xiao Zhu, Ling Gao, Ampadu O. Jackson, Min Zhang, Kai Yin

Abstract:

Objective: Editing macrophage activation to dampen inflammatory diseases by promoting the repolarization of inflammatory (M1) macrophages to anti-inflammatory (M2) macrophages is highly dependent on mitochondrial respiration. Recent studies have suggested that mitochondrial apolipoprotein A-1 binding protein (APOA1BP) is essential for repairing the cellular metabolite NADHX to NADH, which is necessary for mitochondrial function. The exact role of APOA1BP in the repolarization of M1 to M2, however, is uncertain. Materials and methods: THP-1-derived macrophages were incubated with LPS (10 ng/ml) or/and IL-4 (100 U/ml) for 24 hours. Biochemical parameters of oxidative phosphorylation and M1/M2 markers were analyzed after overexpression of APOA1BP in the cells. Results: Compared with control and IL-4-exposed M2 cells, APOA1BP was downregulated in M1 macrophages. APOA1BP restored the decline in mitochondrial function, improving the metabolic and phenotypic reprogramming of M1 to M2 macrophages. Blocking oxidative phosphorylation with oligomycin blunted the effects of APOA1BP on M1-to-M2 repolarization. Mechanistically, LPS triggered the hydration of NADH and increased its hydrate NADHX, which inhibits cellular NADH dehydrogenases, a key component of the electron transport chain for oxidative phosphorylation. APOA1BP decreased the level of NADHX by converting R-NADHX to biologically useful S-NADHX. A mutant of APOA1BP at aspartate 188, the binding site of NADHX, failed to repair oxidative phosphorylation, thereby preventing repolarization. Conclusions: Restoring mitochondrial function by increasing mitochondrial APOA1BP might be useful to improve the reprogramming of inflammatory macrophages into anti-inflammatory cells to control inflammatory diseases.

Keywords: inflammatory diseases, macrophage repolarization, mitochondrial respiration, apolipoprotein A-1 binding protein, NADHX, NADH

Procedia PDF Downloads 168
2541 3D Microscopy, Image Processing, and Analysis of Lymphangiogenesis in Biological Models

Authors: Thomas Louis, Irina Primac, Florent Morfoisse, Tania Durre, Silvia Blacher, Agnes Noel

Abstract:

In vitro and in vivo lymphangiogenesis assays are essential for the identification of potential lymphangiogenic agents and the screening of pharmacological inhibitors. In the present study, we analyse three biological models: in vitro lymphatic endothelial cell spheroids, the in vivo ear sponge assay, and in vivo lymph node colonisation by tumour cells. These assays provide suitable 3D models to test pro- and anti-lymphangiogenic factors or drugs. 3D images were acquired by confocal laser scanning and light sheet fluorescence microscopy; virtual scan microscopy followed by 3D reconstruction with image-aligning methods was also used to obtain 3D images of whole large sponge and ganglion samples. 3D reconstruction, image segmentation, skeletonisation, and other image processing algorithms are described. Fixed and time-lapse imaging techniques are used to analyse the behaviour of lymphatic endothelial cell spheroids. The study of cell spatial distribution in spheroid models enables the detection of interactions between cells and the identification of invasion hierarchy and guidance patterns. Global measurements such as volume, length, and density of lymphatic vessels are taken in both in vivo models, and branching density and tortuosity evaluation are also proposed to determine structural complexity. These properties, combined with vessel spatial distribution, are evaluated in order to determine the extent of lymphangiogenesis. Lymphatic endothelial cell invasion and lymphangiogenesis were evaluated under various experimental conditions, and comparing these conditions enables the identification of lymphangiogenic agents and a better comprehension of their roles in the lymphangiogenesis process. The proposed methodology is validated by its application on the three presented models.

Keywords: 3D image segmentation, 3D image skeletonisation, cell invasion, confocal microscopy, ear sponges, light sheet microscopy, lymph nodes, lymphangiogenesis, spheroids

Procedia PDF Downloads 367
2540 Optimizing Super Resolution Generative Adversarial Networks for Resource-Efficient Single-Image Super-Resolution via Knowledge Distillation and Weight Pruning

Authors: Hussain Sajid, Jung-Hun Shin, Kum-Won Cho

Abstract:

Image super-resolution is a common computer vision problem with many important applications. Generative adversarial networks (GANs) have driven remarkable advances in single-image super-resolution (SR) by recovering photo-realistic images. However, the high memory requirements of GAN-based SR models (mainly the generators) lead to performance degradation and increased energy consumption, making them difficult to deploy on resource-constrained devices. To relieve this problem, this paper introduces an optimized, highly efficient architecture for the SRGAN generator by combining model compression techniques, namely knowledge distillation and pruning, which together reduce the storage requirements of the model and improve its performance. Our method begins by distilling knowledge from a large pre-trained model into a lightweight model using different loss functions. Iterative weight pruning is then applied to the distilled model to remove less significant weights based on their magnitude, resulting in a sparser network. Knowledge distillation reduces the model size by 40%; pruning then reduces it further by 18%. To accelerate the learning process, we employ the Horovod framework for distributed training on a cluster of 2 nodes with 8 GPUs each, resulting in improved training performance and faster convergence. Experimental results on various benchmarks demonstrate that the proposed compressed model significantly outperforms state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and image quality for x4 super-resolution tasks.
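
The abstract does not spell out the training recipe; the sketch below illustrates the two compression steps it names, with toy stand-in networks (the real work targets an SRGAN generator) and PyTorch's built-in magnitude pruning:

```python
# Hedged sketch: distill a small "student" against a large "teacher", then
# apply magnitude pruning to the student. Models, data, and rates are toy.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

teacher = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(64, 3, 3, padding=1))          # stand-in SR teacher
student = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 3, 3, padding=1))          # lightweight student

def distill_loss(lr_batch):
    with torch.no_grad():
        target = teacher(lr_batch)           # teacher output as soft target
    return nn.functional.l1_loss(student(lr_batch), target)

opt = torch.optim.Adam(student.parameters(), lr=1e-4)
for _ in range(5):                           # a few toy distillation steps
    loss = distill_loss(torch.rand(2, 3, 32, 32))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Magnitude pruning: zero the smallest 18% of conv weights in the student.
for module in student:
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.18)
zeros = sum((m.weight == 0).sum().item() for m in student if isinstance(m, nn.Conv2d))
total = sum(m.weight.numel() for m in student if isinstance(m, nn.Conv2d))
print(f"sparsity after pruning: {zeros / total:.1%}")
```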

Keywords: single-image super-resolution, generative adversarial networks, knowledge distillation, pruning

Procedia PDF Downloads 86
2539 A Method of Manufacturing Low Cost Utility Robots and Vehicles

Authors: Gregory E. Ofili

Abstract:

Introduction and Objective: Climate change and a global economy mean farmers must adapt and gain access to affordable and reliable automation technologies. Key barriers include a lack of transportation, electricity, and internet service, coupled with costly enabling technologies and limited local subject matter expertise. Methodology/Approach: Resourcefulness is essential to mechanization on a farm. This runs contrary to the tech industry practice of planned obsolescence and disposal. One solution is plug-and-play hardware that allows farmers to assemble, repair, program, and service their own fleet of industrial machines. To that end, we developed a method of manufacturing low-cost utility robots, transport vehicles, and solar/wind energy harvesting systems, all running on the open-source Robot Operating System (ROS). We demonstrate this technology by fabricating a utility robot and an all-terrain (4x4) utility vehicle constructed of aluminum trusses, weighing just 40 pounds yet capable of transporting 200 pounds of cargo, and on sale for less than $2,000. Conclusions and Policy Implications: Electricity, internet, and automation are essential for productivity and competitiveness, and under planned obsolescence the priorities of technology suppliers are not aligned with the farmer's realities. This patent-pending method of manufacturing low-cost industrial robots and electric vehicles has met its objective: low-cost machines the farmer can assemble, program, and repair with basic hand tools.

Keywords: automation, robotics, utility robot, small-hold farm, robot operating system

Procedia PDF Downloads 64
2538 A Study of Common Carotid Artery Behavior from B-Mode Ultrasound Image for Different Gender and BMI Categories

Authors: Nabilah Ibrahim, Khaliza Musa

Abstract:

An increment in intima-media thickness (IMT), which involves changes in the diameter of the carotid artery, is one of the early signs of atherosclerotic lesions. Manual measurement of arterial diameter is time-consuming and lacks reproducibility. This study therefore reports an automatic approach to characterize arterial diameter behavior, in a tracked region, for different gender and body mass index (BMI) categories, with BMI divided into underweight, normal, and overweight. Canny edge detection is applied to the B-mode image to extract the carotid wall boundary. The results show a significant difference of 2.5% in arterial diameter between the male and female groups. In addition, the significant result for the BMI categories is a decrease in arterial diameter in proportion to BMI.
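
As an illustrative sketch of the measurement idea (synthetic two-wall image; the paper's tracking and statistics are not reproduced), the diameter can be taken as the distance between the detected wall edges in each image column:

```python
# Hedged sketch: Canny edges on a B-mode-like strip, then per-column vertical
# distance between the upper and lower wall edges as the lumen diameter.
import numpy as np
from skimage import feature

# Synthetic "longitudinal B-mode" strip: two bright horizontal walls.
img = np.zeros((100, 200))
img[30:33, :] = 1.0        # near wall
img[70:73, :] = 1.0        # far wall

edges = feature.canny(img, sigma=2.0)
diameters = []
for col in range(img.shape[1]):
    rows = np.flatnonzero(edges[:, col])
    if rows.size >= 2:
        diameters.append(rows.max() - rows.min())   # pixels between outer edges

print("mean diameter: %.1f pixels" % np.mean(diameters))
```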

Keywords: B-mode ultrasound image, carotid artery diameter, Canny edge detection, body mass index

Procedia PDF Downloads 436
2537 Adversarial Attacks and Defenses on Deep Neural Networks

Authors: Jonathan Sohn

Abstract:

Deep neural networks (DNNs) have shown state-of-the-art performance in many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks; these aim to alter the results of a DNN by slightly modifying its inputs. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on adversarial attacks and defenses for DNN-based image classification. Two types of adversarial attacks are studied: the fast gradient sign method (FGSM) attack and the projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate input images into different categories; an adversarial attack slightly alters an image to move it over a decision boundary, causing the DNN to misclassify it. The FGSM attack obtains the gradient with respect to the image and updates the image once, based on the gradient, to cross the decision boundary. The PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also the targeted attack, designed to make the network classify an image into a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training: instead of training the neural network only with clean examples, we explicitly let it learn from adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively. With FGSM training as a defense method, the classification accuracy greatly improves, from 39.50% to 92.31% under FGSM attacks and from 34.01% to 75.63% under PGD attacks. To further improve the classification accuracy under adversarial attacks, we can use the stronger PGD training, which improves accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. Notably, neither FGSM nor PGD training affects the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks; PGD attacks and defenses are overall significantly more effective than FGSM methods.
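
As a concrete illustration of the FGSM step described above (toy model and random data, not the paper's MNIST setup), one signed-gradient update per image suffices:

```python
# Hedged FGSM sketch: perturb the input by epsilon times the sign of the loss
# gradient with respect to the input.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # toy MNIST-shaped classifier
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, epsilon=0.1):
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # One signed-gradient step pushes the input toward the decision boundary.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

images = torch.rand(8, 1, 28, 28)                # stand-in image batch
labels = torch.randint(0, 10, (8,))
adv = fgsm(images, labels)
print("max per-pixel change:", (adv - images).abs().max().item())  # ~= epsilon
```

A PGD attack would repeat a smaller version of this step several times, projecting back into the epsilon-ball around the original image after each step.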

Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning

Procedia PDF Downloads 189
2536 Normalized Compression Distance Based Scene Alteration Analysis of a Video

Authors: Lakshay Kharbanda, Aabhas Chauhan

Abstract:

In this paper, an application of Normalized Compression Distance (NCD) to detect notable scene alterations in videos is presented. Several research groups have developed methods for image classification using NCD, a computable approximation to the Normalized Information Distance (NID), by studying the degree of similarity between images. The timeframes where significant aberrations between the frames of a video have occurred are identified by obtaining a threshold NCD value, using two compressors, LZMA and BZIP2, and defining scene alterations using pixel difference percentage metrics.
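
The NCD itself is only a few lines: with C(.) the compressed length, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)). A minimal sketch using the two compressors named above (the frame bytes here are synthetic stand-ins):

```python
# Minimal NCD sketch with the LZMA and BZIP2 compressors from the standard library.
import bz2
import lzma

def ncd(x: bytes, y: bytes, compress=lzma.compress) -> float:
    cx, cy, cxy = len(compress(x)), len(compress(y)), len(compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

frame_a = bytes(range(256)) * 40            # stand-in for a serialized video frame
frame_b = frame_a[:-512] + b"\x00" * 512    # slightly altered frame
frame_c = bytes(reversed(frame_a))          # more heavily altered frame

for name, other in [("similar", frame_b), ("altered", frame_c)]:
    print(name, "NCD(lzma): %.3f" % ncd(frame_a, other),
          "NCD(bz2): %.3f" % ncd(frame_a, other, bz2.compress))
```

Frames whose NCD to their predecessor exceeds a chosen threshold are flagged as scene alterations.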

Keywords: image compression, Kolmogorov complexity, normalized compression distance, root mean square error

Procedia PDF Downloads 336
2535 Congenital Diaphragmatic Hernia Outcomes in a Low-Volume Center

Authors: Michael Vieth, Aric Schadler, Hubert Ballard, J. A. Bauer, Pratibha Thakkar

Abstract:

Introduction: Congenital diaphragmatic hernia (CDH) is a condition characterized by the herniation of abdominal contents into the thoracic cavity, requiring postnatal surgical repair. Previous literature suggests improved CDH outcomes at high-volume regional referral centers compared to low-volume centers. The purpose of this study was to examine CDH outcomes at Kentucky Children's Hospital (KCH), a low-volume center, compared to the Congenital Diaphragmatic Hernia Study Group (CDHSG). Methods: A retrospective chart review was performed at KCH for neonates with CDH from 2007-2019, subdivided into two cohorts: those requiring ECMO therapy and those not requiring ECMO therapy. Basic demographic data and measures of mortality and morbidity, including ventilator days and length of stay, were compared to the CDHSG. Measures of morbidity for the ECMO cohort were collected, including duration of ECMO, clinical bleeding, intracranial hemorrhage, sepsis, need for continuous renal replacement therapy (CRRT), need for sildenafil at discharge, timing of surgical repair, and total ventilator days. Statistical analysis was performed using IBM SPSS Statistics version 28; one-sample t-tests and the one-sample Wilcoxon signed-rank test were utilized as appropriate. Results: There were 27 neonatal patients with CDH at KCH from 2007-2019, 9 of whom required ECMO therapy. Birth weight and gestational age were similar between KCH and the CDHSG (2.99 kg vs 2.92 kg, p = 0.655; 37.0 weeks vs 37.4 weeks, p = 0.51). About half of the patients were inborn in both cohorts (52% vs 56%, p = 0.676). The KCH cohort had significantly more Caucasian patients (96% vs 55%, p < 0.001). Unadjusted mortality was similar in both groups (KCH 70% vs CDHSG 72%, p = 0.857). Using ECMO utilization (KCH 78% vs CDHSG 52%, p = 0.118) and need for surgical repair (KCH 95% vs CDHSG 85%, p = 0.060) as proxies for severity, the two groups' mortality rates were comparable. No significant difference was noted for pulmonary outcomes such as average ventilator days (KCH 43.2 vs CDHSG 17.3, p = 0.078) and home oxygen dependency (KCH 44% vs CDHSG 24%, p = 0.108). Average length of hospital stay at KCH was similar to the CDHSG (64.4 vs 49.2, p = 1.000). Conclusion: Our study demonstrates that outcomes in CDH patients are independent of a center's case volume. Management of CDH with a standardized approach in a low-volume center can yield similar outcomes. These data support the treatment of patients with CDH at low-volume centers as opposed to transferring them to higher-volume centers.

Keywords: ECMO, case volume, congenital diaphragmatic hernia, congenital diaphragmatic hernia study group, neonate

Procedia PDF Downloads 88
2534 Quality Analysis of Vegetables Through Image Processing

Authors: Abdul Khalique Baloch, Ali Okatan

Abstract:

The quality analysis of food and vegetables from images is a hot topic nowadays, with researchers improving on previous findings through different techniques and methods. In this research, we review the literature, identify gaps, propose an improved approach, design the algorithm, and develop software to measure quality from images, where the accuracy of the results compares favorably with previous work. The application uses an open-source dataset and the Python language with the TensorFlow Lite framework. The focus of this research is sorting food and vegetables from images: the application sorts and grades produce after processing the images, producing fewer errors than human-based manual grading. Digital picture datasets were created, and the collected images were arranged by class. The classification accuracy of the system was about 94%. As fruits and vegetables play a main role in day-to-day life, their quality is essential in evaluating agricultural produce; customers always want to buy good-quality fruits and vegetables. Many customers suffer from unhealthy fruits and vegetables from suppliers, with no proper quality measurement process followed by hotel managements. The developed software measures the quality of fruits and vegetables from images, indicating whether the produce is fresh or rotten. Algorithms reviewed in this work include digital image techniques, ResNet, VGG16, CNNs, and transfer learning for grading and feature extraction.
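
As a hedged sketch of the transfer-learning setup the abstract names (a torchvision ResNet-18 stands in for the paper's TensorFlow Lite pipeline; the data and labels below are random stand-ins):

```python
# Illustrative transfer learning: freeze a pretrained backbone and train only
# a small fresh-vs-rotten classification head.
import torch
import torch.nn as nn
import torchvision.models as models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                  # freeze pretrained features
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # fresh vs rotten head

opt = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(3):                           # toy training steps on random data
    x = torch.rand(4, 3, 224, 224)           # stand-in produce photos
    y = torch.randint(0, 2, (4,))            # stand-in fresh/rotten labels
    loss = loss_fn(backbone(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final toy loss: %.3f" % loss.item())
```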

Keywords: deep learning, computer vision, image processing, rotten fruit detection, fruits quality criteria, vegetables quality criteria

Procedia PDF Downloads 63