Search results for: images about Japan and Japanese people
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9774

9444 Recognition of Objects in a Maritime Environment Using a Combination of Pre- and Post-Processing of the Polynomial Fit Method

Authors: R. R. Hordijk, O. J. G. Somsen

Abstract:

Traditionally, radar systems are the eyes and ears of a ship. However, these systems have their drawbacks, and nowadays they are extended with systems that work with video and photos. Processing the data from these videos and photos is, however, very labour-intensive, and efforts are being made to automate this process. A major problem when trying to recognize objects in water is that the 'background' is not homogeneous, so traditional image recognition techniques do not work well. The main question is whether a method can be developed that automates this recognition process. A large number of parameters are involved in facilitating the identification of objects in such images. One is varying the resolution. In this research, the resolution of some images was reduced to the extreme value of 1% of the original to reduce clutter before the polynomial fit (pre-processing). It turned out that the searched-for object was clearly recognizable, as its grey value was well above the average. Another approach is to take two images of the same scene shortly after each other and compare the results. Because the water (waves) fluctuates much faster than an object floating in it, one can expect the object to be the only stable item in the two images. Both methods (pre-processing and comparing two images of the same scene) delivered useful results. Though it is too early to conclude that these methods can solve all image problems, they are certainly worthwhile for further research.
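The two approaches described above can be sketched in a few lines. The following is an illustrative toy example, not the authors' implementation: block averaging stands in for the extreme resolution reduction, and a simple per-pixel difference stands in for the two-frame comparison; all sizes, thresholds, and the synthetic 'water' data are assumptions.

```python
import numpy as np

def block_average(img, factor):
    """Reduce resolution by averaging non-overlapping factor x factor blocks."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    return img[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor).mean(axis=(1, 3))

def stable_object_mask(frame_a, frame_b, tol=5.0):
    """Pixels that barely change between two frames taken shortly apart:
    water fluctuates rapidly, while a floating object stays stable."""
    return np.abs(frame_a.astype(float) - frame_b.astype(float)) < tol

# Synthetic scene: noisy 'water' with one bright object that is stable across frames.
rng = np.random.default_rng(0)
water_a = rng.uniform(0, 80, (100, 100))
water_b = rng.uniform(0, 80, (100, 100))
for frame in (water_a, water_b):
    frame[40:50, 40:50] = 200.0       # the floating object is identical in both frames

small = block_average(water_a, 10)    # extreme resolution reduction before the fit
mask = stable_object_mask(water_a, water_b)
```

After the reduction, the object's block mean sits well above the image average, and the difference mask keeps only the stable object pixels, mirroring the two findings reported in the abstract.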

Keywords: image processing, image recognition, polynomial fit, water

Procedia PDF Downloads 517
9443 The Image of Saddam Hussein and Collective Memory: The Semiotics of Ba'ath Regime's Mural in Iraq (1980-2003)

Authors: Maryam Pirdehghan

Abstract:

During the Ba'ath Party's rule in Iraq, propaganda was utilized to justify Saddam Hussein's rule and to promote his image in the collective memory as the greatest Arab leader. Consequently, urban walls were routinely covered with images of Saddam. Relying on these images, the regime aimed to provide a basis for evoking meanings in public opinion, which would supposedly strengthen Saddam's power and reconstruct facts to legitimize his political ideology. Nonetheless, Saddam was not always portrayed with common and explicit elements; in certain periods of his rule, the paintings depicted him in an unusual context, where various historical and contemporary elements were combined in a narrative background. Therefore, an understanding of the implied socio-political references of these elements is required to fully elucidate the impact of these images on forming the memory and collective unconscious of the Iraqi people. To obtain such an understanding, one needs to address the following questions: a) How is Saddam Hussein portrayed in murals created during his rule? b) What elements and mythical-historical narratives are found in the paintings? c) Which of Saddam's political views were impressed on the collective memory through murals? Employing visual semiotics, this study reveals that during Saddam Hussein's regime, the paintings were initially simple portraits but gradually transformed into narrative images, characterized by a complex network of historical, mythical, and religious elements. These elements demonstrate the transformation of a secular-nationalist politician into a Muslim ruler who tried to instill three major policies in domestic and international relations: the Arabization of Iraq together with the propagation of pan-Arabism ideology (first period), the implementation of anti-Israel policy (second period), and the implementation of anti-American-British policy (last period).

Keywords: Ba'ath Party, Saddam Hussein, mural, Iraq, propaganda, collective memory

Procedia PDF Downloads 301
9442 Dark and Bright Envelopes for Dehazing Images

Authors: Zihan Yu, Kohei Inoue, Kiichi Urahama

Abstract:

We present a method for dehazing images. A dark envelope image is derived with the bilateral minimum filter, and a bright envelope is derived with the bilateral maximum filter. The ambient light and transmission of the scene are estimated from these two envelope images, and an image without haze is reconstructed from the estimated ambient light and transmission.
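A minimal numerical sketch of the envelope idea follows. Note the hedges: for brevity it uses plain local minimum/maximum filters rather than true bilateral (edge-preserving) ones, and the standard haze model I = J·t + A·(1−t), the window size, and the synthetic data are assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def dehaze(img, size=7, t0=0.1):
    """Envelope-based dehazing sketch for a grayscale image in [0, 1]."""
    dark = minimum_filter(img, size=size)     # dark envelope (plain local min here)
    bright = maximum_filter(img, size=size)   # bright envelope (plain local max here)
    A = bright.max()                          # ambient light from the bright envelope
    t = np.clip(1.0 - dark / A, t0, 1.0)      # transmission estimate
    J = (img - A) / t + A                     # invert the haze model I = J*t + A*(1-t)
    return np.clip(J, 0.0, 1.0), A, t

# Synthetic uniform haze over a random low-contrast scene.
rng = np.random.default_rng(1)
clean = rng.uniform(0.0, 0.6, (64, 64))
hazy = clean * 0.5 + 1.0 * (1 - 0.5)          # true t = 0.5, true A = 1.0
recovered, A_est, t_est = dehaze(hazy)
```

The recovered image regains contrast that haze had compressed, which is the qualitative behaviour the abstract describes.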

Keywords: image dehazing, bilateral minimum filter, bilateral maximum filter, local contrast

Procedia PDF Downloads 245
9441 Intelligent Rheumatoid Arthritis Identification System Based on Image Processing and Neural Classifier

Authors: Abdulkader Helwan

Abstract:

Rheumatoid arthritis is a chronic inflammatory disorder that affects the joints by damaging body tissues. There is therefore an urgent need for an effective intelligent identification system for knee rheumatoid arthritis, especially in its early stages. This paper develops a new intelligent system for the identification of rheumatoid arthritis of the knee utilizing image processing techniques and a neural classifier. The system involves two principal stages. The first is the image processing stage, in which the images are processed using techniques such as RGB-to-grayscale conversion, rescaling, median filtering, background extraction, image subtraction, segmentation using Canny edge detection, and feature extraction using pattern averaging. The extracted features are then used as inputs to the neural network, which classifies the X-ray knee images as normal or abnormal (arthritic) based on a backpropagation learning algorithm; the network was trained on 400 normal and abnormal X-ray knee images. The system was tested on 400 X-ray images, and the network showed good performance during that phase, achieving a good identification rate of 97%.
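Two of the pipeline steps, median filtering and pattern averaging, are easy to illustrate. The sketch below is a hypothetical reconstruction, not the paper's code; the grid size and the synthetic stand-in for an X-ray image are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def pattern_average(img, grid=(8, 8)):
    """Pattern averaging: split the image into a grid of segments and use each
    segment's mean intensity as one feature, shrinking the image to
    grid[0] * grid[1] inputs for the neural classifier."""
    h, w = img.shape
    gh, gw = grid
    feats = np.empty(gh * gw)
    for i in range(gh):
        for j in range(gw):
            seg = img[i * h // gh:(i + 1) * h // gh, j * w // gw:(j + 1) * w // gw]
            feats[i * gw + j] = seg.mean()
    return feats

rng = np.random.default_rng(2)
xray = rng.uniform(0, 255, (64, 64))      # stand-in for a grayscale knee X-ray
denoised = median_filter(xray, size=3)    # median filtering step from the pipeline
features = pattern_average(denoised)      # 64 features feed the backpropagation network
```

Because the segments tile the image exactly, the feature vector preserves the global mean intensity while discarding fine spatial detail, which is what makes pattern averaging a cheap input reduction for the classifier.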

Keywords: rheumatoid arthritis, intelligent identification, neural classifier, segmentation, backpropagation

Procedia PDF Downloads 517
9440 Fluoride Immobilization in Plaster Board Waste: A Safety Measure to Prevent Soil and Water Pollution

Authors: Venkataraman Sivasankar, Kiyoshi Omine, Hideaki Sano

Abstract:

The leaching of fluoride from Plaster Board Waste (PBW) is quite feasible in soil and water environments. The Ministry of the Environment, Japan, recommends a standard limit of 0.8 mgL⁻¹ or less for fluoride. Although the utilization of PBW as a substitute for cement is rather meritorious, its fluoride leaching behavior deteriorates the quality of soil and water and is therefore regarded as a demerit. In view of this fluoride leaching problem, the present research focuses on immobilizing fluoride in PBW. The immobilization experiments were conducted with four chemical systems operated by DAHP (diammonium hydrogen phosphate) and by phosphoric acid carbonization of bamboo mass coupled with certain inorganic reactions using reagents such as calcium hydroxide, sodium hydroxide, and aqueous ammonia. Fluoride immobilization was determined after shaking the reactor contents, including the plaster board waste, for 24 h at 25˚C. In the DAHP system, the immobilization of fluoride was evident from leached fluoride in the ranges 0.071-0.12 mgL⁻¹, 0.026-0.14 mgL⁻¹, and 0.068-0.12 mgL⁻¹ for reaction temperatures of 30˚C, 50˚C, and 90˚C, respectively, with a final pH of 6.8. The other chemical systems, designated PACCa, PACAm, and PACNa, could also immobilize fluoride in PBW, and the resulting solutions contained fluoride below the Japanese environmental standard of 0.8 mgL⁻¹. In the PACAm and PACCa systems, the calcium concentration was undetectable, which points to the formation of phosphate compounds. The immobilization of fluoride was found to be inversely proportional to the increase in the volume of leaching solvent and the dose of PBW. Characterization studies of PBW and of the solid after fluoride immobilization were done using FTIR (Fourier transform infrared spectroscopy), Raman spectroscopy, FE-SEM (field emission scanning electron microscopy) with EDS (energy dispersive spectroscopy), XRD (X-ray diffraction), and XPS (X-ray photoelectron spectroscopy).
The results revealed the formation of new calcium phosphate compounds such as apatite, monetite, and hydroxylapatite. The participation of these new compounds in fluoride immobilization seems indispensable, through the exchange mechanism of hydroxyl and fluoride groups. Acknowledgment: The first author thanks the Japan Society for the Promotion of Science (JSPS) for the award of the fellowship (ID No. 16544).

Keywords: characterization, fluoride, immobilization, plaster board waste

Procedia PDF Downloads 139
9439 Blind Data Hiding Technique Using Interpolation of Subsampled Images

Authors: Singara Singh Kasana, Pankaj Garg

Abstract:

In this paper, a blind data hiding technique based on interpolation of subsampled versions of a cover image is proposed. A subsampled image is taken as a reference image, and an interpolated image is generated from this reference image. The difference between the original cover image and the interpolated image is then used to embed secret data. Comparisons with existing interpolation-based techniques show that the proposed technique provides higher embedding capacity and better visual quality of the marked images. Moreover, the performance of the proposed technique is more stable across different images.
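To make the idea concrete, here is a deliberately simplified blind scheme in the same spirit, an assumption-laden sketch and not the proposed technique itself: pixels on the even grid serve as the subsampled reference, every other pixel carries one secret bit on top of its nearest-neighbour interpolation, and the receiver re-derives the bits from the marked image alone.

```python
import numpy as np

def embed(cover, bits):
    """Blind embedding sketch: the even grid is kept intact as the reference;
    every other pixel becomes its nearest-neighbour interpolation plus one secret bit."""
    stego = cover.astype(int).copy()
    h, w = cover.shape
    k = 0
    for i in range(h):
        for j in range(w):
            if i % 2 == 0 and j % 2 == 0:
                continue                                  # reference pixel stays untouched
            interp = cover[(i // 2) * 2, (j // 2) * 2]    # nearest reference pixel
            stego[i, j] = int(interp) + bits[k]
            k += 1
    return stego

def extract(stego):
    """Recompute the interpolation from the reference grid and read the differences."""
    h, w = stego.shape
    bits = []
    for i in range(h):
        for j in range(w):
            if i % 2 == 0 and j % 2 == 0:
                continue
            interp = stego[(i // 2) * 2, (j // 2) * 2]
            bits.append(int(stego[i, j] - interp))
    return bits

rng = np.random.default_rng(3)
cover = rng.integers(0, 250, (8, 8))
secret = [1, 0, 1, 1, 0, 1, 0, 0] * 6     # 48 bits = 64 pixels minus 16 reference pixels
marked = embed(cover, secret)
recovered_bits = extract(marked)
```

Extraction needs no side information because the reference grid is never modified, which is the defining property of a blind scheme.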

Keywords: interpolation, image subsampling, PSNR, SIM

Procedia PDF Downloads 561
9438 Design and Implementation of Image Super-Resolution for Myocardial Image

Authors: M. V. Chidananda Murthy, M. Z. Kurian, H. S. Guruprasad

Abstract:

Super-resolution is the technique of intelligently upscaling images while avoiding artifacts or blurring; it deals with the recovery of a high-resolution image from one or more low-resolution images. Single-image super-resolution is the process of obtaining a high-resolution image from a set of low-resolution observations by signal processing. While super-resolution has been demonstrated to improve the quality of scaled-down images in the image domain, its effect on Fourier-based techniques remains unknown. Super-resolution substantially improved the spatial resolution of the patient LGE images by sharpening the edges of the heart and the scar. This paper investigates the effects of single-image super-resolution on Fourier-based and image-based methods of scale-up. First, a training phase uses pairs of low-resolution and high-resolution images to obtain a dictionary. In the test phase, patches are generated, along with the difference between the high-resolution image and an image interpolated from the low-resolution image. Next, a simulated image is obtained by applying a convolution between the dictionary image and the extracted patches. Finally, the super-resolution image is obtained by combining the fused image with the difference between the high-resolution and interpolated images. Super-resolution reduces image errors and improves image quality.
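The 'interpolation plus stored difference' step can be illustrated with a toy example. This sketch is not the paper's dictionary-learning method: a single stored residual plays the role of the dictionary, cubic-spline zoom plays the role of the interpolation, and all sizes are assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def upscale(lr, factor=2):
    """Plain interpolation baseline (cubic spline) that super-resolution improves on."""
    return zoom(lr, factor, order=3)

# Training phase (sketch): the 'dictionary' here is just the stored residual between
# one high-resolution image and the interpolation of its low-resolution version.
rng = np.random.default_rng(4)
hr_train = rng.uniform(0, 1, (32, 32))
lr_train = hr_train[::2, ::2]              # simulated low-resolution observation
residual = hr_train - upscale(lr_train)    # high-frequency detail lost by interpolation

# Test phase (sketch): add the stored residual back to the interpolated image.
sr = upscale(lr_train) + residual
```

On this single training pair the reconstruction is exact by construction; the point of the sketch is only to show how the residual carries the detail that interpolation alone cannot recover.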

Keywords: image dictionary creation, image super-resolution, LGE images, patch extraction

Procedia PDF Downloads 355
9437 Using Machine Learning to Classify Different Body Parts and Determine Healthiness

Authors: Zachary Pan

Abstract:

Our general mission is to solve the problem of classifying images into different body part types and deciding whether each of them is healthy or not. For now, however, we will determine healthiness for only one-sixth of the body parts, specifically the chest: we will detect pneumonia in X-ray scans of those chest images. With this type of AI, doctors can use it as a second opinion when they are taking CT or X-ray scans of their patients. Another advantage of using this machine learning classifier is that it has no human weaknesses like fatigue. The overall approach to this problem is to split it into two parts: first, classify the image, then determine if it is healthy. In order to classify the image into a specific body part class, the body parts dataset must be split into test and training sets. We can then use many models, like neural networks or logistic regression models, and fit them using the training set. Using the test set, we can obtain a realistic estimate of the accuracy the models will have on images in the real world, since these testing images have never been seen by the models before. To increase this testing accuracy, we can also apply more complex algorithms to the models, like multiplicative weight update. For the second part of the problem, determining if the body part is healthy, we can take another dataset consisting of healthy and non-healthy images of the specific body part and once again split it into test and training sets. We then use another neural network to train on those training set images and use the testing set to figure out its accuracy. We do this process only for the chest images. A major conclusion reached is that convolutional neural networks are the most reliable and accurate at image classification.
In classifying the images, the logistic regression model, the neural network, the neural network with multiplicative weight update, the neural network with the black box algorithm, and the convolutional neural network achieved 96.83 percent, 97.33 percent, 97.83 percent, 96.67 percent, and 98.83 percent accuracy, respectively. On the other hand, the overall accuracy of the model that determines whether the images are healthy or not is around 78.37 percent.
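The train/test protocol in the first part can be sketched with a tiny logistic-regression classifier written from scratch. This is an illustration only; the feature vectors, class separation, and learning rate are all synthetic assumptions, not the study's data or models.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for image feature vectors of two body-part classes.
n = 200
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, 4)), rng.normal(1.0, 1.0, (n // 2, 4))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Held-out split: test accuracy estimates performance on images never seen in training.
idx = rng.permutation(n)
train, test = idx[:150], idx[150:]

w, b = np.zeros(4), 0.0
for _ in range(500):                       # plain gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-(X[train] @ w + b)))
    w -= 0.5 * (X[train].T @ (p - y[train]) / len(train))
    b -= 0.5 * (p - y[train]).mean()

test_pred = (1.0 / (1.0 + np.exp(-(X[test] @ w + b))) > 0.5).astype(int)
test_acc = (test_pred == y[test]).mean()
```

Because the test indices were held out of the fitting loop, `test_acc` is an honest estimate of generalization, which is exactly the role the test set plays in the abstract's protocol.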

Keywords: body part, healthcare, machine learning, neural networks

Procedia PDF Downloads 78
9436 A Comparative Study of Medical Image Segmentation Methods for Tumor Detection

Authors: Mayssa Bensalah, Atef Boujelben, Mouna Baklouti, Mohamed Abid

Abstract:

Image segmentation has a fundamental role in analysis and interpretation for many applications. The automated segmentation of organs and tissues throughout the body using computed imaging has been increasing rapidly; indeed, it represents one of the most important parts of clinical diagnostic tools. In this paper, we present a thorough literature review of recent methods of tumour segmentation from medical images, which are briefly explained along with the recent contributions of various researchers. This review is followed by a comparison of these methods in order to define new directions for developing and improving the performance of tumour-area segmentation from medical images.

Keywords: features extraction, image segmentation, medical images, tumor detection

Procedia PDF Downloads 148
9435 Digital Portfolio as Mediation to Enhance Willingness to Communicate in English

Authors: Saeko Toyoshima

Abstract:

This research discusses whether performance tasks with technology enhance students' willingness to communicate. The present study investigated how Japanese learners of English would change their attitude to communication in their target language by experiencing a performance task, called a 'digital portfolio', in the classroom, applying the concepts of action research. The study adopted questionnaires including four-point Likert-scale and open-ended questions as mixed-methods research. There were 28 students in the class. Many Japanese university students with low proficiency (A1 in the Common European Framework of Reference for Languages) have difficulty communicating in English due to their low proficiency and the lack of practice in and outside of the classroom at the secondary level. They need to mediate between themselves in the worlds of L1 and L2 by completing a performance task for communication. This paper introduces the practice of a CALL class in which A1-level students made digital portfolios related to the topics of TED® (Technology, Entertainment, Design) Talk materials. The students had a 'Portfolio Session' twice in one term, once in the middle and once at the end of the course, where they introduced their portfolios to their classmates and to international students in English. The present study asked the students to answer a questionnaire about willingness to communicate twice, once at the end of the first term and once at the end of the second term. The Likert-scale responses were statistically analyzed with a t-test, and the answers to the open-ended questions were analyzed to clarify the difference between the two administrations. The results showed that the students developed a more positive attitude to communication in English and enhanced their willingness to communicate through the experience of the task.
An implication of this paper is that making and presenting a portfolio as a performance task would lead students to construct themselves in English and enable them to communicate with others enjoyably and autonomously.

Keywords: action research, digital portfolio, computer-assisted language learning, ELT with CALL system, mixed methods research, Japanese English learners, willingness to communicate

Procedia PDF Downloads 106
9434 Segmentation of Gray Scale Images of Dropwise Condensation on Textured Surfaces

Authors: Helene Martin, Solmaz Boroomandi Barati, Jean-Charles Pinoli, Stephane Valette, Yann Gavet

Abstract:

In the present work, we developed an image processing algorithm to measure water droplet characteristics during dropwise condensation on pillared surfaces. The main problem in this process is the similarity in shape and size between the water droplets and the pillars. The developed method divides droplets into four main groups based on their size and applies a corresponding algorithm to segment each group. These algorithms generate binary images of droplets based on both their geometrical and intensity properties. Information related to droplet evolution over time, including mean radius and drop count per unit area, is then extracted from the binary images. The developed image processing algorithm is verified against manual detection and applied to two different sets of images corresponding to two kinds of pillared surfaces.
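A skeletal version of the size-grouping and measurement steps, using simple connected-component labelling on a synthetic droplet image. The group thresholds and droplet layout are illustrative assumptions; the paper's per-group segmentation algorithms are more involved.

```python
import numpy as np
from scipy import ndimage

# Synthetic condensation image: four bright circular droplets on a dark background.
img = np.zeros((80, 80))
yy, xx = np.mgrid[:80, :80]
for cy, cx, r in [(20, 20, 3), (20, 60, 6), (60, 20, 10), (60, 60, 14)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = 1.0

binary = img > 0.5                          # intensity-based segmentation step
labels, n_drops = ndimage.label(binary)     # connected droplets
sizes = ndimage.sum(binary, labels, range(1, n_drops + 1))
mean_radius = np.sqrt(sizes / np.pi).mean() # mean equivalent (area-based) radius
density = n_drops / binary.size             # drop count per unit area
# Four size groups, echoing the paper's grouping idea (thresholds are invented).
groups = np.digitize(np.sqrt(sizes / np.pi), [4.0, 8.0, 12.0])
```

Each droplet lands in a different size group here, so a per-group algorithm could then be dispatched on `groups`, which is the structure the abstract describes.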

Keywords: dropwise condensation, textured surface, image processing, watershed

Procedia PDF Downloads 209
9433 A Study of the Frequency of Individual Support for Pupils with Developmental Disabilities or Suspected Developmental Disabilities in Regular Japanese School Classes - From a Questionnaire Survey of Teachers

Authors: Maho Komura

Abstract:

The purpose of this study was to determine, from a questionnaire survey of teachers, the status of implementation of individualized support for pupils with suspected developmental disabilities in regular elementary school classes in Japan. In inclusive education, the goal is for all pupils to learn in the same place as much as possible while receiving the individualized support they need. However, in Japanese school culture, a strong 'homogeneity' sometimes surfaces, and it has been pointed out that it is difficult to provide individualized support from the viewpoint of formal equality. We therefore conducted this study to examine whether the frequency of implementation differs depending on the content of the individualized support and to consider the direction of future individualized support. The subjects of the survey were 196 public elementary school teachers who had been in charge of regular classes within the past five years. In the survey, individualized support was defined as individualized consideration, including reasonable accommodation (e.g., reducing the amount of homework for pupils with learning difficulties, or allowing pupils with behavioral concerns to use the library or infirmary when they are unstable), and did not include support provided to the entire class or to all pupils enrolled in the class (e.g., changing classroom rules). The respondents were asked to choose one answer from four options, ranging from 'very much' to 'not at all', regarding the degree to which they implemented each of the nine individual support items, which were set with reference to previous studies. The results made clear that the majority of teachers had pupils with developmental disabilities, or pupils requiring consideration in terms of learning and behavior, and that the majority of teachers had experience in providing individualized support to these pupils.
Investigating the content of the individualized support that had been implemented, it became clear that the frequency of implementation varied with the type of support. Individualized support that allowed pupils to perform the same learning tasks was implemented more frequently, while individualized support that allowed different learning tasks or the use of places other than the classroom was implemented less frequently. This suggests that flexible support methods tailored to each pupil may not have been considered.

Keywords: inclusive education, individualized support, regular class, elementary school

Procedia PDF Downloads 114
9432 Leukocyte Detection Using Image Stitching and Color Overlapping Windows

Authors: Lina, Arlends Chris, Bagus Mulyawan, Agus B. Dharmawan

Abstract:

Blood cell analysis plays a significant role in the diagnosis of human health. As an alternative to the traditional technique conducted by laboratory technicians, this paper presents an automatic white blood cell (leukocyte) detection system using image stitching and color overlapping windows. The advantage of this method is that it provides a detection technique for white blood cells that is robust to imperfect blood cell shapes and varying image quality. The input to this application is a set of images from a microscope-slide translation video. The preprocessing stage is performed by stitching the input images: first, the overlapping parts of the images are determined, and then the stitching and blending of two input images are performed. Next, color overlapping windows are applied for white blood cell detection, which consists of color filtering, window candidate checking, window marking, finding window overlaps, and window cropping. Experimental results show that this method achieves an average detection accuracy of 82.12% on the leukocyte images.
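The color filtering and window marking steps might look roughly like the following sketch. The RGB thresholds, window size, step, and fill ratio are invented for illustration and are not the paper's parameters.

```python
import numpy as np

def color_filter(rgb, min_blue=150, max_green=100):
    """Leukocyte nuclei stain dark purple: strong blue, weak green (illustrative thresholds)."""
    return (rgb[..., 2] >= min_blue) & (rgb[..., 1] <= max_green)

def mark_windows(mask, win=8, step=4, min_fill=0.3):
    """Slide overlapping windows over the filtered mask and mark a window as a
    candidate when enough filtered pixels fall inside it."""
    h, w = mask.shape
    hits = []
    for i in range(0, h - win + 1, step):
        for j in range(0, w - win + 1, step):
            if mask[i:i + win, j:j + win].mean() >= min_fill:
                hits.append((i, j))
    return hits

# Synthetic slide: one leukocyte-coloured region on a pale pink background.
img = np.full((32, 32, 3), [230, 180, 190], dtype=np.uint8)   # background
img[10:20, 12:22] = [120, 60, 200]                            # purple cell region
mask = color_filter(img)
candidates = mark_windows(mask)
```

Because the windows overlap (step smaller than the window size), an imperfectly shaped cell is still covered by several candidates, which is the robustness property the abstract claims for this stage.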

Keywords: color overlapping windows, image stitching, leukocyte detection, white blood cell detection

Procedia PDF Downloads 291
9431 Quality Analysis of Vegetables Through Image Processing

Authors: Abdul Khalique Baloch, Ali Okatan

Abstract:

The quality analysis of food and vegetables from images is a hot topic nowadays, with researchers improving on previous findings through different techniques and methods. In this research, we review the literature, identify gaps in it, and propose an improved approach: we design an algorithm, develop software to measure quality from images, and compare the results with previous work, showing better accuracy. The application uses an open-source dataset and the Python language with the TensorFlow Lite framework. The focus of this research is to sort food and vegetables from images: after processing the images, the application sorts and grades them, producing fewer errors than manual, human-based grading. Digital picture datasets were created, and the collected images were arranged by class. The classification accuracy of the system was about 94%. As fruits and vegetables play a main role in day-to-day life, their quality is essential in evaluating agricultural produce, and customers always want to buy good-quality fruits and vegetables. This document is about the quality detection of fruits and vegetables using images. Many customers suffer because of unhealthy fruits and vegetables from suppliers, and no proper quality-measurement procedure is followed by hotel managements. The developed software measures the quality of fruits and vegetables from images and reports whether they are fresh or rotten. Some of the methods reviewed in this work include digital images, ResNet, VGG16, CNN, and transfer learning for grading feature extraction.
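As a contrast to the deep models reviewed above, even a crude colour-statistics baseline separates obviously fresh from obviously rotten samples. The sketch below is exactly that kind of toy baseline; the score weights and synthetic colour patches are assumptions and are unrelated to the paper's CNN/transfer-learning pipeline.

```python
import numpy as np

def freshness_score(rgb):
    """Toy colour-statistics baseline (not the paper's CNN pipeline): rotten produce
    tends toward dark brown, so score by brightness and the share of green."""
    mean = rgb.reshape(-1, 3).mean(axis=0)
    brightness = mean.mean() / 255.0
    green_ratio = mean[1] / (mean.sum() + 1e-9)
    return 0.5 * brightness + 0.5 * green_ratio

fresh = np.full((16, 16, 3), [80, 200, 60], dtype=np.uint8)    # bright green sample
rotten = np.full((16, 16, 3), [70, 50, 30], dtype=np.uint8)    # dark brown sample
label = "fresh" if freshness_score(fresh) > freshness_score(rotten) else "rotten"
```

Such hand-crafted features are what the learned ResNet/VGG16 representations replace, and the gap between them is precisely why the reviewed systems use transfer learning.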

Keywords: deep learning, computer vision, image processing, rotten fruit detection, fruits quality criteria, vegetables quality criteria

Procedia PDF Downloads 52
9430 Smartphone Photography in Urban China

Authors: Wen Zhang

Abstract:

The smartphone plays a significant role in media convergence, and smartphone photography is reconstructing the way we communicate and think. This article aims to explore the smartphone photography practices of urban Chinese smartphone users and the images produced by smartphones from a techno-cultural perspective. The analysis consists of two types of data: one is semi-structured interviews with 21 participants, and the other consists of the images created by the participants. The findings are organised in two parts. The first part summarises the current tendencies of capturing, editing, sharing and archiving digital images via smartphones. The second part shows that food and the selfie/anti-selfie are the preferred subjects of smartphone photographic images from a technical and multi-purpose perspective, and demonstrates that screenshots and image-texts are new genres of non-photographic images frequently made with smartphones, which contribute to improving operational efficiency, disseminating information and sharing knowledge. The analyses illustrate the positive interplay between smartphones and photography enthusiasm and practices based on the diffusion of innovations theory, which also makes us rethink the value of photographs and the practice of 'photographic seeing' from the screen itself.

Keywords: digital photography, image-text, media convergence, photographic-seeing, selfie/anti-selfie, smartphone, technological innovation

Procedia PDF Downloads 338
9429 The Role of Artificial Intelligence in Creating Personalized Health Content for Elderly People: A Systematic Review Study

Authors: Mahnaz Khalafehnilsaz, Rozina Rahnama

Abstract:

Introduction: The elderly population is growing rapidly, and with this growth comes an increased demand for healthcare services. Artificial intelligence (AI) has the potential to revolutionize the delivery of healthcare services to the elderly population. In this study, the various ways in which AI is used to create health content for elderly people, and its transformative impact on the healthcare industry, are explored. Method: A systematic review of the literature was conducted to identify studies that have investigated the role of AI in creating health content specifically for elderly people. Several databases, including PubMed, Scopus, and Web of Science, were searched for relevant articles published between 2000 and 2022. The search strategy employed a combination of keywords related to AI, personalized health content, and the elderly. Studies that utilized AI to create health content for elderly individuals were included, while those that did not meet the inclusion criteria were excluded. A total of 20 articles that met the inclusion criteria were identified. Findings: The findings of this review highlight the diverse applications of AI in creating health content for elderly people. One significant application is the use of natural language processing (NLP), which involves the creation of chatbots and virtual assistants capable of providing personalized health information and advice to elderly patients. AI is also utilized in the field of medical imaging, where algorithms analyze medical images such as X-rays, CT scans, and MRIs to detect diseases and abnormalities. Additionally, AI enables the development of personalized health content for elderly patients by analyzing large amounts of patient data to identify patterns and trends that can inform healthcare providers in developing tailored treatment plans.
Conclusion: AI is transforming the healthcare industry by providing a wide range of applications that can improve patient outcomes and reduce healthcare costs. From creating chatbots and virtual assistants to analyzing medical images and developing personalized treatment plans, AI is revolutionizing the way healthcare is delivered to elderly patients. Continued investment in this field is essential to ensure that elderly patients receive the best possible care.

Keywords: artificial intelligence, health content, older adult, healthcare

Procedia PDF Downloads 46
9428 Multimodal Rhetoric in the Wildlife Documentary, “My Octopus Teacher”

Authors: Visvaganthie Moodley

Abstract:

While rhetoric goes back as far as Aristotle, who focalised its meaning as the 'art of persuasion', most scholars have focused on the elocutio and dispositio canons, neglecting the rhetorical impact of multimodal texts such as documentaries. Film documentaries are becoming increasingly rhetorical, and they are often used by wildlife conservationists to influence people to become more mindful about humanity's connection with nature. This paper examines the award-winning film documentary 'My Octopus Teacher', which depicts naturalist Craig Foster's unique discovery of, and relationship with, a female octopus at the southern tip of Africa, the Cape of Storms in South Africa. It is anchored in Leech and Short's (2007) framework of linguistic and stylistic categories, comprising lexical items, grammatical features, figures of speech and other rhetorical features, and cohesiveness, with particular foci on diction, anthropomorphic language, metaphors and symbolism. It also draws on Kress and van Leeuwen's (2006) multimodal analysis to show how verbal cues (the narrator's commentary), visual images in motion, visual images as metaphors and symbolism, and aural sensory images such as music and sound synergise for rhetorical effect. In addition, the analysis of 'My Octopus Teacher' is guided by Nichols's (2010) narrative theory; by the features of a documentary, which foreground the credibility of the narrative as a text representing real events and real people; and by its modes of construction, viz., the poetic, expository, observational and participatory modes and their integration, which forge documentaries as multimodal texts. This paper presents a multimodal rhetoric discussion of the sequence of salient episodes captured in the slow-moving one-and-a-half-hour documentary.
These are: (i) The prologue: on the brink of something extraordinary; (ii) The day it all started; (iii) The narrator's turmoil: getting back into the ocean; (iv) The incredible encounter with the octopus; (v) Establishing a relationship; (vi) Outwitting the predatory pyjama shark; (vii) The cycle of life; and (viii) The conclusion: lessons from an octopus. The paper argues that wildlife documentaries, characterized by plausibility, and which provide researchers a lens to examine ideologies about animals and humans, offer an assimilation of the various senses, vocal, visual and audial, for engaging viewers in a stylized, compelling way; they have the ability to persuade people to think and act in particular ways. As multimodal texts, with their use of lexical items; diction; anthropomorphic language; linguistic, visual and aural metaphors and symbolism; and depictions of anthropocentrism, wildlife documentaries are powerful resources for promoting wildlife conservation and for conscientizing people about the need to establish a harmonious relationship with nature and humans alike.

Keywords: documentaries, multimodality, rhetoric, style, wildlife, conservation

Procedia PDF Downloads 74
9427 The Effect of Restaurant Residuals on Performance of Japanese Quail

Authors: A. A. Saki, Y. Karimi, H. J. Najafabadi, P. Zamani, Z. Mostafaie

Abstract:

Restaurant residuals are important for reasons such as the competition between human and animal consumption of cereals, increasing environmental pollution, and the high cost of producing livestock products. Restaurant residuals have a high nutritional value (high protein and energy), so they can potentially replace part of poultry diets, especially for Japanese quail. Increasing costs, pressures, and problems associated with waste excretion reinforce the need to re-evaluate and utilize waste as livestock and poultry feed. This study aimed to investigate the effects of different levels of restaurant residuals on the performance of 300 laying Japanese quails. The experiment included 5 treatments, 4 replicates, and 15 quails per replicate, from 10 to 18 weeks of age, in a completely randomized design (CRD). Treatment 1 was a basal diet of corn and soybean meal (without restaurant residuals), and treatments 2, 3, 4 and 5 were the basal diet containing 5, 10, 15 and 20% restaurant residuals, respectively. There were no significant effects of restaurant residual level on body weight (BW), feed conversion ratio (FCR), egg production percentage (EP) or egg mass (EM) between treatments (P > 0.05). However, feed intake (FI) in the 5% treatment was significantly higher than in the 20% treatment (P < 0.05). Egg weight (EW) was higher with 20% restaurant residuals than with 10% (P < 0.05). Yolk weight (YW) in the treatments containing 10 and 20% restaurant residuals was significantly higher than in the control (P < 0.05). Egg white weight (EWW) in the 20 and 5% treatments was significantly increased compared with 10% (P < 0.05). 
Furthermore, EW, the egg weight to shell surface area ratio and the egg surface area in the 20% treatment were significantly higher than in the control and 10% treatments (P < 0.05). Overall, the results of this study show that restaurant residuals at levels of 10 and 15 percent could replace part of the laying quail ration without any adverse effect.

Keywords: by-product, laying quail, performance, restaurant residuals

Procedia PDF Downloads 151
9426 Multi-Atlas Segmentation Based on Dynamic Energy Model: Application to Brain MR Images

Authors: Jie Huo, Jonathan Wu

Abstract:

Segmentation of anatomical structures in medical images is essential for scientific inquiry into the complex relationships between biological structure and clinical diagnosis, treatment and assessment. As a method of incorporating prior knowledge and the anatomical structure similarity between a target image and atlases, multi-atlas segmentation has been successfully applied to segmenting a variety of medical images, including brain, cardiac, and abdominal images. The basic idea of multi-atlas segmentation is to transfer the labels in atlases to the coordinates of the target image by matching each target patch to the atlas patches in its neighborhood. However, this technique is limited by the pairwise registration between the target image and the atlases. In this paper, a novel multi-atlas segmentation approach is proposed by introducing a dynamic energy model. First, the target is mapped to each atlas image by minimizing the dynamic energy function; then, the segmentation of the target image is generated by weighted fusion based on the energy. The method is tested on the MICCAI 2012 Multi-Atlas Labeling Challenge dataset, which includes 20 target images and 15 atlas images. The paper also analyzes the influence of different parameters of the dynamic energy model on the segmentation accuracy and measures the Dice coefficient obtained with different feature terms in the energy model. The highest mean Dice coefficient obtained with the proposed method is 0.861, which is competitive with recently published methods.
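The two generic steps the abstract relies on – weighted label fusion across atlases and evaluation with the Dice coefficient – can be sketched as follows. This is an illustrative sketch only: the paper's energy-derived weights are not public, so the `weights` argument and function names here are hypothetical stand-ins.

```python
def fuse_labels(atlas_labels, weights):
    """Fuse per-voxel atlas labels by weighted voting.

    atlas_labels: list of label lists (one flat label list per atlas).
    weights: one fusion weight per atlas (e.g. derived from an energy).
    """
    n_voxels = len(atlas_labels[0])
    fused = []
    for v in range(n_voxels):
        votes = {}
        for labels, w in zip(atlas_labels, weights):
            votes[labels[v]] = votes.get(labels[v], 0.0) + w
        fused.append(max(votes, key=votes.get))  # label with most weight
    return fused

def dice(seg_a, seg_b, label=1):
    """Dice coefficient between two segmentations for one label."""
    a = {i for i, x in enumerate(seg_a) if x == label}
    b = {i for i, x in enumerate(seg_b) if x == label}
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))
```

For instance, with three atlases voting `[1, 1, 0, 0]`, `[1, 0, 0, 0]` and `[1, 1, 1, 0]` under weights 0.5, 0.3 and 0.2, the fused result follows the weighted majority at every voxel.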

Keywords: brain MRI segmentation, dynamic energy model, multi-atlas segmentation, energy minimization

Procedia PDF Downloads 316
9425 Best Timing for Capturing Satellite Thermal Images, Asphalt, and Concrete Objects

Authors: Toufic Abd El-Latif Sadek

Abstract:

The asphalt object represents asphalted areas such as roads, and the concrete object represents concrete areas such as concrete buildings. Efficient extraction of asphalt and concrete objects from a single satellite thermal image requires capturing that image at a specific time, avoiding the times at which asphalt, concrete, and other objects have close or identical brightness values, so that extraction and subsequent analysis are effective. Seven sample objects were used in this study: asphalt, concrete, metal, rock, dry soil, vegetation, and water. It has been found that the best timing for capturing satellite thermal images to extract the two objects asphalt and concrete from one satellite thermal image, saving time and money, occurs at a specific time in different months. A table is presented that shows the optimal timing for capturing satellite thermal images to extract these two objects effectively.
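The selection criterion described above – choosing a capture time at which asphalt and concrete are well separated in brightness from each other and from the other sample objects – can be sketched as a simple search. The function name and all brightness values below are hypothetical illustrations, not the study's measurements.

```python
def best_capture_time(brightness_by_time, targets=("asphalt", "concrete")):
    """Return the time whose smallest target-vs-other brightness gap is largest."""
    best_time, best_gap = None, -1.0
    for time, levels in brightness_by_time.items():
        gaps = [
            abs(levels[t] - levels[other])
            for t in targets
            for other in levels
            if other != t
        ]
        worst = min(gaps)  # the most easily confused object pair at this time
        if worst > best_gap:
            best_time, best_gap = time, worst
    return best_time, best_gap
```

With illustrative values where asphalt and concrete nearly coincide in the early morning but diverge in the afternoon, the search would select the afternoon capture.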

Keywords: asphalt, concrete, satellite thermal images, timing

Procedia PDF Downloads 302
9424 PathoPy2.0: Application of Fractal Geometry for Early Detection and Histopathological Analysis of Lung Cancer

Authors: Rhea Kapoor

Abstract:

Fractal dimension provides a way to characterize non-geometric shapes like those found in nature. The purpose of this research is to estimate the Minkowski fractal dimension of human lung images for early detection of lung cancer. Lung cancer is the leading cause of death among all types of cancer, and an early histopathological analysis will help reduce deaths primarily due to late diagnosis. A Python application program, PathoPy2.0, was developed for analyzing medical images in pixelated format and estimating the Minkowski fractal dimension using a new box-counting algorithm that allows windowing of images for more accurate calculation in the suspected areas of cancerous growth. Benchmark geometric fractals were used to validate the accuracy of the program, and changes in the fractal dimension of lung images were used to indicate the presence of abnormalities in the lung. The accuracy of the program for the benchmark examples was between 93% and 99% of the known fractal dimensions. Fractal dimension values were then calculated for lung images, from the National Cancer Institute, taken over time to correctly detect the presence of cancerous growth. For example, as the fractal dimension for a given lung increased from 1.19 to 1.27 due to cancerous growth, this represents a significant change in a quantity that lies between 1 and 2 for 2-D images. Based on the results obtained on many lung test cases, it was concluded that the fractal dimension of human lungs can be used to diagnose lung cancer early. The ideas behind PathoPy2.0 can also be applied to study patterns in the electrical activity of the human brain and DNA matching.
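The core box-counting estimate can be sketched as follows; PathoPy2.0's windowed algorithm is not public, so this is a plain baseline over a binary pixel grid (a filled square yields a dimension of 2 and a thin line yields 1, matching the expected range for 2-D images).

```python
import math

def box_count(image, box):
    """Count boxes of side `box` containing at least one foreground pixel."""
    rows, cols = len(image), len(image[0])
    count = 0
    for r in range(0, rows, box):
        for c in range(0, cols, box):
            if any(image[i][j]
                   for i in range(r, min(r + box, rows))
                   for j in range(c, min(c + box, cols))):
                count += 1
    return count

def fractal_dimension(image, box_sizes=(1, 2, 4, 8)):
    """Least-squares slope of log N(s) versus log(1/s)."""
    xs = [math.log(1.0 / s) for s in box_sizes]
    ys = [math.log(box_count(image, s)) for s in box_sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)
```

A cancer-screening pipeline in this spirit would binarize each lung image (or window of it) and track how the estimated dimension changes over time.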

Keywords: fractals, histopathological analysis, image processing, lung cancer, Minkowski dimension

Procedia PDF Downloads 153
9423 Promoting Local Products through One Village One Product and Customer Satisfaction

Authors: Wardoyo, Humairoh

Abstract:

In today’s global competition, the world economy depends heavily upon high-technology and capital-intensive industries that are mainly owned by well-established, developed economies such as the United States of America, the United Kingdom, Japan, and South Korea. Indonesia, as a developing country, is also steering its economic activities towards becoming an industrial country, although with a slightly different approach. For example, following the concept of one village one product (OVOP) implemented in Japan, Indonesia adopted this concept by promoting local traditional products to improve the incomes of village people and to enhance local economic activities. Analyzing how the OVOP program increases local people’s income and influences customer satisfaction was the objective of this paper. Behavioral intention to purchase and re-purchase, customer satisfaction and promotion are key factors for local products to play significant roles in improving local income and the economy of the region. The concepts of OVOP and the key factors that influence the economic activities of local people and the region are described and explained in the paper. The results of a case study based on 300 respondents, customers of a local restaurant in Tangerang City, Banten Province, Indonesia, indicated that local product, service quality and behavioral intention individually have a significant influence on customer satisfaction, whereas simultaneous tests of the variables indicated a positive and significant influence on behavioral intention through customer satisfaction as the intervening variable.

Keywords: behavioral intention, customer satisfaction, local products, one village one product (OVOP)

Procedia PDF Downloads 282
9422 A New 3D Shape Descriptor Based on Multi-Resolution and Multi-Block CS-LBP

Authors: Nihad Karim Chowdhury, Mohammad Sanaullah Chowdhury, Muhammed Jamshed Alam Patwary, Rubel Biswas

Abstract:

In content-based 3D shape retrieval systems, achieving high search performance has become an important research problem. A challenging aspect of this problem is to find an effective shape descriptor which can discriminate similar shapes adequately. To address this problem, we propose a new shape descriptor for 3D shape models by combining multi-resolution analysis with the multi-block center-symmetric local binary pattern (CS-LBP) operator. Given an arbitrary 3D shape, we first apply pose normalization and generate a set of multi-viewed 2D rendered images. Second, we apply a Gaussian multi-resolution filter to generate several levels of images from each 2D rendered image. Then, overlapped sub-images are computed for each image level of a multi-resolution image. Our unique multi-block CS-LBP comes next: it allows the center to be composed of m-by-n rectangular pixels instead of a single pixel. This process is repeated for all the 2D rendered images, derived from both ‘depth-buffer’ and ‘silhouette’ rendering. Finally, we concatenate all the feature vectors into a one-dimensional histogram as our proposed 3D shape descriptor. Through several experiments, we demonstrate that our proposed 3D shape descriptor outperforms previous methods on a benchmark dataset.
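The center-symmetric LBP operator at the heart of the descriptor can be sketched for a single pixel as follows. This is an illustrative single-pixel version: the paper's multi-block variant averages m-by-n rectangular blocks before comparing them, which is not reproduced here.

```python
def cs_lbp(image, r, c, threshold=0.0):
    """4-bit CS-LBP code: compare the 4 centre-symmetric neighbour pairs
    of pixel (r, c) in an 8-neighbourhood."""
    # Neighbours clockwise from the top-left corner around (r, c).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for i in range(4):  # centre-symmetric pairs (0,4), (1,5), (2,6), (3,7)
        dr1, dc1 = offsets[i]
        dr2, dc2 = offsets[i + 4]
        diff = image[r + dr1][c + dc1] - image[r + dr2][c + dc2]
        if diff > threshold:
            code |= 1 << i
    return code
```

Compared with plain LBP's 8-bit codes, comparing only the 4 opposing pairs halves the histogram length and is known to be more robust on flat regions, which is why CS-LBP is attractive for dense multi-block descriptors.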

Keywords: 3D shape retrieval, 3D shape descriptor, CS-LBP, overlapped sub-images

Procedia PDF Downloads 429
9421 Seashore Debris Detection System Using Deep Learning and Histogram of Gradients-Extractor Based Instance Segmentation Model

Authors: Anshika Kankane, Dongshik Kang

Abstract:

Marine debris has a significant influence on coastal environments, damaging biodiversity and causing loss and damage to the marine and ocean sectors. A functional, cost-effective and automatic approach is proposed to address this problem. Computer vision combined with a deep learning-based model is used to identify and categorize marine debris of seven kinds at different beach locations in Japan. This research compares state-of-the-art deep learning models with a suggested model architecture that is utilized as a feature extractor for debris categorization. The model is proposed to detect seven categories of litter using a manually constructed debris dataset, with the help of Mask R-CNN for instance segmentation and a shape matching network called HOGShape, so that debris can be cleaned up in time by clean-up organizations using the system’s warning notifications. The manually constructed dataset for this system was created by annotating images taken by a fixed KaKaXi camera using the CVAT annotation tool with seven category labels. A HOG feature extractor pre-trained with LIBSVM is used, along with multiple template matching between HOG maps of images and HOG maps of templates, to improve the predicted masked images obtained via Mask R-CNN training. The system is intended to alert clean-up organizations in a timely manner with warning notifications based on live recorded beach debris data. The suggested network improves the misclassified debris masks of debris objects with different illuminations, shapes and viewpoints, and of litter with occlusions and vague visibility.
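A minimal histogram-of-oriented-gradients computation of the kind the HOG-based matching relies on can be sketched as follows. Real HOG pipelines add cell grids and block normalization, so this is only an illustrative baseline over a single grayscale patch, not the extractor used in the paper.

```python
import math

def hog_histogram(patch, bins=9):
    """Unsigned-orientation gradient histogram (0-180 degrees) of a patch.

    Gradients are central differences; each interior pixel votes its
    gradient magnitude into the bin of its gradient orientation.
    """
    rows, cols = len(patch), len(patch[0])
    hist = [0.0] * bins
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = patch[r][c + 1] - patch[r][c - 1]
            gy = patch[r + 1][c] - patch[r - 1][c]
            magnitude = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(angle / 180.0 * bins) % bins] += magnitude
    return hist
```

Template matching as described above then reduces to comparing such histograms (or grids of them) between a detected debris mask and each shape template.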

Keywords: computer vision, debris, deep learning, fixed live camera images, histogram of gradients feature extractor, instance segmentation, manually annotated dataset, multiple template matching

Procedia PDF Downloads 88
9420 Development of People's Participation in Environmental Development in Pathumthani Province

Authors: Sakapas Saengchai

Abstract:

This study of the development of people's participation in environmental development used a qualitative research method. Data were collected through participant observation, in-depth interviews and group discussions in Pathumthani province. The study indicated that: 1) People should be aware of environmental information from government agencies. 2) People in the community should be able to brainstorm and exchange information about community environmental development. 3) People should have a role alongside community leaders. 4) People in the community should have a role in implementing projects and activities for developing the environment. 5) Citizens, community leaders and the village committee have directed the development of the area, maintaining the community environment through shared decisions, by emphasizing participation, self-reliance, mutual help, and responsibility for one's own community. Community empowerment strengthens the sustainable spatial development of the environment.

Keywords: people, participation, community, environment

Procedia PDF Downloads 259
9419 Image Quality and Dose Optimisations in Digital and Computed Radiography X-ray Radiography Using Lumbar Spine Phantom

Authors: Elhussaien Elshiekh

Abstract:

A study was performed to manage and compare radiation doses and image quality during Lumbar spine PA and Lumbar spine LAT x-ray radiography using Computed Radiography (CR) and Digital Radiography (DR). Standard exposure factors such as kV, mAs and FFD used for imaging the lumbar spine anthropomorphic phantom were obtained from the average exposure factors used with CR in five radiology centres. The lumbar spine phantom was imaged using CR and DR systems. Entrance surface air kerma (ESAK) was calculated from the X-ray tube output and the patient exposure factors. Images were evaluated using a visual grading system based on the European Guidelines on Quality Criteria for diagnostic radiographic images. The ESAK corresponding to each image was measured at the surface of the phantom. Six experienced specialists evaluated hard copies of all the images; the image score (IS) was calculated for each image as the average score of the six evaluators. The IS value was also used to determine whether an image was diagnostically acceptable. The optimum recommended exposure factors found here for Lumbar spine PA and Lumbar spine LAT were, respectively, (80 kVp, 25 mAs at 100 cm FFD) and (75 kVp, 15 mAs at 100 cm FFD) for the CR system, and (80 kVp, 15 mAs at 100 cm FFD) and (75 kVp, 10 mAs at 100 cm FFD) for the DR system. For Lumbar spine PA, the lowest ESAK values required to obtain a diagnostically acceptable image were 0.80 mGy for DR and 1.20 mGy for CR. Similarly, for the Lumbar spine LAT projection, the lowest ESAK values were 0.62 mGy for DR and 0.76 mGy for CR. At standard kVp and mAs values, image quality did not vary significantly between the CR and DR systems, but at higher kVp and mAs values, the DR images were found to be of better quality than the CR images. In addition, the lower limit of entrance skin dose consistent with diagnostically acceptable DR images was 40% lower than that for CR images.
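The indirect ESAK estimate from tube output can be sketched as follows. The abstract does not give its exact formula, so this is the standard inverse-square scaling of the measured output; backscatter correction is omitted and all numeric values in the example are hypothetical.

```python
def esak_mGy(output_mGy_per_mAs, mAs, ref_distance_cm, ffd_cm,
             phantom_thickness_cm):
    """Entrance surface air kerma via inverse-square scaling of tube output.

    output_mGy_per_mAs: tube output measured at ref_distance_cm for the
    chosen kVp; the output is scaled by the mAs and corrected from the
    reference distance to the focus-to-skin (entrance surface) distance.
    """
    fsd_cm = ffd_cm - phantom_thickness_cm  # focus-to-skin distance
    return output_mGy_per_mAs * mAs * (ref_distance_cm / fsd_cm) ** 2
```

For example, a hypothetical output of 0.05 mGy/mAs at 100 cm, 25 mAs, a 100 cm FFD and a 20 cm phantom gives an ESAK of about 1.95 mGy, in the same range as the values reported above.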

Keywords: image quality, dosimetry, radiation protection, optimization, digital radiography, computed radiography

Procedia PDF Downloads 36
9418 Automatic Detection and Classification of Diabetic Retinopathy Using Retinal Fundus Images

Authors: A. Biran, P. Sobhe Bidari, A. Almazroe, V. Lakshminarayanan, K. Raahemifar

Abstract:

Diabetic Retinopathy (DR) is a severe retinal disease caused by diabetes mellitus. It leads to blindness when it progresses to the proliferative level. Early indications of DR are the appearance of microaneurysms, hemorrhages and hard exudates. In this paper, an automatic algorithm for the detection of DR is proposed. The algorithm is based on a combination of several image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering and thresholding. A Support Vector Machine (SVM) classifier is then used to classify retinal images as normal or abnormal, the latter covering non-proliferative and proliferative DR. The proposed method has been tested on images selected from the Structured Analysis of the Retina (STARE) database using MATLAB code. The method reliably detects DR: the sensitivity, specificity and accuracy of this approach are 90%, 87.5%, and 91.4%, respectively.
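The Gabor filtering step can be illustrated with a self-contained kernel generator. The sigma, theta, lambda and gamma values the authors used are not given in the abstract, so the defaults below are placeholders; the pipeline was implemented in MATLAB, and this Python sketch only shows the kernel construction.

```python
import math

def gabor_kernel(size=7, sigma=2.0, theta=0.0, lambd=4.0, gamma=0.5):
    """Real (cosine) Gabor kernel as a size-by-size list of lists.

    A Gaussian envelope (aspect ratio gamma, scale sigma) modulates a
    cosine wave of wavelength lambd along orientation theta.
    """
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xp = x * math.cos(theta) + y * math.sin(theta)   # rotated x
            yp = -x * math.sin(theta) + y * math.cos(theta)  # rotated y
            envelope = math.exp(-(xp * xp + gamma * gamma * yp * yp)
                                / (2.0 * sigma * sigma))
            row.append(envelope * math.cos(2.0 * math.pi * xp / lambd))
        kernel.append(row)
    return kernel
```

Convolving a fundus image with a bank of such kernels at several orientations highlights vessel-like and lesion-like structures before thresholding and SVM classification.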

Keywords: diabetic retinopathy, fundus images, STARE, Gabor filter, support vector machine

Procedia PDF Downloads 278
9417 Communicating Safety: Warnings, Appeals for Compliance and Visual Resources of Meaning

Authors: Sean McGovern

Abstract:

Discourses, in Foucault's sense of the term, exist as alternate knowledges about some aspect of reality. Discourses act as cognitive frameworks for how social matters are understood and legitimated. Alternate social discourses can stand competing and in conflict or be effectively interwoven. Discourses of public safety, for instance, can alternately be formulated in terms of physical risk; as a matter of social responsibility; or in terms of penalties and litigation. This research study investigates discourses of safety used in public transportation and consumer products in the Japanese cultural context. Employing a social semiotic analytic approach, it examines how posters, consumer manuals and other forms of visual (written and pictorial) warnings have been designed to influence behavioral compliance. The presentation identifies specific ways in which Japanese cultural sensibilities and social needs inform cultural design principles that operate in the visual domain. It makes the case that societies are not uniform in the way that objects and actions are represented and that visual forms of meaning are culturally shaped in ways consistent with social understandings and values.

Keywords: communication design, culture, discourse, public safety

Procedia PDF Downloads 257
9416 Glaucoma Detection in Retinal Tomography Using the Vision Transformer

Authors: Sushish Baral, Pratibha Joshi, Yaman Maharjan

Abstract:

Glaucoma is a chronic eye condition that causes irreversible vision loss. Early detection and treatment are critical to preventing vision loss because the disease can be asymptomatic. Multiple deep learning algorithms have been used for the identification of glaucoma. Transformer-based architectures, which use the self-attention mechanism to encode long-range dependencies and acquire highly expressive representations, have recently become popular. Convolutional architectures, on the other hand, lack knowledge of long-range dependencies in the image due to their intrinsic inductive biases. These observations motivate this thesis to look at transformer-based solutions and investigate the viability of adopting transformer-based network designs for glaucoma detection. Using retinal fundus images of the optic nerve head to develop a viable algorithm to assess the severity of glaucoma necessitates a large number of well-curated images. Initially, data are generated by augmenting ocular images. The ocular images are then pre-processed to make them ready for further processing. The system is trained on the pre-processed images, and it classifies input images as normal or glaucoma based on the features retrieved during training. The Vision Transformer (ViT) architecture is well suited to this task, as its self-attention mechanism supports structural modeling. Extensive experiments are run on a common dataset, and the results are thoroughly validated and visualized.
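The scaled dot-product self-attention at the core of the ViT can be sketched as follows. Patch splitting, learned projections, positional embeddings and multiple heads are omitted, so this is a minimal single-head illustration of how every patch embedding attends to every other.

```python
import math

def self_attention(q, k, v):
    """softmax(Q K^T / sqrt(d)) V for lists of d-dimensional vectors."""
    d = len(q[0])
    out = []
    for qi in q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d)
                  for kj in k]
        # Numerically stable softmax over the scores.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Attention-weighted average of the value vectors.
        out.append([sum(w * vj[t] for w, vj in zip(weights, v))
                    for t in range(len(v[0]))])
    return out
```

Because every output is a weighted average over all positions, a patch at the optic nerve head can directly attend to distant retinal regions, which is the long-range modeling advantage the abstract contrasts with convolutions.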

Keywords: glaucoma, vision transformer, convolutional architectures, retinal fundus images, self-attention, deep learning

Procedia PDF Downloads 175
9415 A Comparative Study between Japan and the European Union on Software Vulnerability Public Policies

Authors: Stefano Fantin

Abstract:

The present analysis results from research undertaken in the course of the European-funded project EUNITY, which targets the gaps in research and development on cybersecurity and privacy between Europe and Japan. Under these auspices, the research presents a study of the policy approaches of Japan, the EU and a number of Member States of the Union with regard to the handling and discovery of software vulnerabilities, with the aim of identifying methodological differences and similarities. The research builds upon a functional comparative analysis of both public policies and legal instruments from the identified jurisdictions. The result of this analysis is based on semi-structured interviews with EUNITY partners, as well as on the researcher's participation in a recent report on software vulnerabilities by the Centre for European Policy Studies. The European Union presents a rather fragmented legal framework on software vulnerabilities. The presence of a number of different pieces of legislation at the EU level (including the Network and Information Security Directive, the Critical Infrastructure Directive, the Directive on Attacks against Information Systems and the proposal for a Cybersecurity Act), with no clear focus on this subject, makes it difficult for both national governments and end-users (software owners, researchers and private citizens) to gain a clear understanding of the Union's approach. Additionally, the current data protection reform package (the General Data Protection Regulation) seems to create legal uncertainty around security research. To date, at the Member State level, a few efforts towards transparent practices have been made, namely by the Netherlands, France, and Latvia. This research explains what policy approach such countries have taken. Japan started implementing a coordinated vulnerability disclosure policy in 2004. To date, two amendments to the framework can be registered (2014 and 2017). 
The framework is further complemented by a series of instruments allowing researchers to disclose any new discovery responsibly. However, the policy has started to lose its efficiency due to a significant increase in reports made to the authority in charge. To conclude, the research reveals two asymmetric policy approaches, time-wise and content-wise. The analysis will, therefore, conclude with a series of policy recommendations based on the lessons learned from both regions, towards a common approach to the security of European and Japanese markets, industries and citizens.

Keywords: cybersecurity, vulnerability, European Union, Japan

Procedia PDF Downloads 136