Search results for: image charge
2674 Video Club as a Pedagogical Tool to Shift Teachers’ Image of the Child
Authors: Allison Tucker, Carolyn Clarke, Erin Keith
Abstract:
Introduction: In education, the determination to uncover privileged practices requires critical reflection to be placed at the center of both pre-service and in-service teacher education. Confronting deficit thinking about children’s abilities and shifting to holding an image of the child as capable and competent is necessary for teachers to engage in responsive pedagogy that meets children where they are in their learning and builds on strengths. This paper explores the ways in which early elementary teachers' perceptions of the assets of children might shift through the pedagogical use of video clubs. Video club is a pedagogical practice whereby teachers record and view short videos with the intended purpose of deepening their practices. The use of video club as a learning tool has been an extensively documented practice. In this study, a video club is used to watch short recordings of playing children to identify the assets of their students. Methodology: The study on which this paper is based asks the question: What are the ways in which teachers’ image of the child and teaching practices evolve through the use of video club focused on the strengths of children demonstrated during play? Using critical reflection, it aims to identify and describe participants’ experiences of examining their personally held image of the child through the pedagogical tool video club, and how that image influences their practices, specifically in implementing play pedagogy. Teachers enrolled in a graduate-level play pedagogy course record and watch videos of their own students as a means to notice and reflect on the learning that happens during play. Using a co-constructed viewing protocol, teachers identify student strengths and consider their pedagogical responses. Video club provides a framework for teachers to critically reflect in action, return to the video to rewatch the children or themselves and discuss their noticings with colleagues. 
Critical reflection occurs when there is focused attention on identifying the ways in which actions perpetuate or challenge issues of inherent power in education. When the image of the child held by the teacher is from a deficit position and is influenced by hegemonic dimensions of practice, critical reflection is essential in naming and addressing power imbalances, biases, and practices that are harmful to children and become barriers to their thriving. The data comprise teacher reflections, analyzed using phenomenology. Phenomenology seeks to understand and appreciate how individuals make sense of their experiences. Teacher reflections are individually read, and the researchers determine pools of meaning. Categories are identified by each researcher, after which commonalities are named through a recursive process of returning to the data until no new themes emerge and saturation is reached. Findings: The final analysis and interpretation of the data are forthcoming. However, emergent analysis of the teacher reflections reveals the ways in which the use of video club deepened teachers' awareness of their image of the child. It shows video club to be a promising pedagogical tool for prompting in-service teachers to create opportunities for play and for challenging deficit thinking about children and their abilities to thrive in learning.
Keywords: asset-based teaching, critical reflection, image of the child, video club
Procedia PDF Downloads 104
2673 Music Note Detection and Dictionary Generation from Music Sheet Using Image Processing Techniques
Authors: Muhammad Ammar, Talha Ali, Abdul Basit, Bakhtawar Rajput, Zobia Sohail
Abstract:
Music note detection has been an area of study for the past few years and plays a role in generating music files from sheet music. We propose a method to detect music notes on sheet music using basic thresholding and blob detection. Subsequently, we create a notes dictionary using a semi-supervised learning approach: after note detection, the new symbols found in each test image are added to the dictionary, making the detection semi-automatic. The experiments are performed both on images from a dataset and on captured images. The developed approach showed almost 100% accuracy on the dataset images, whereas varying results were observed on the captured images.
Keywords: music note, sheet music, optical music recognition, blob detection, thresholding, dictionary generation
Procedia PDF Downloads 179
2672 Facile Synthesis of Sulfur Doped TiO2 Nanoparticles with Enhanced Photocatalytic Activity
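The detection step described here (global thresholding followed by blob labeling) can be sketched in a few lines; the following NumPy-only code is an illustrative reconstruction on a synthetic sheet, not the authors' implementation, and the threshold value and 4-connectivity are assumptions:

```python
import numpy as np

def detect_blobs(img, thresh=128):
    """Threshold a grayscale image and label connected dark regions
    (note heads appear as dark blobs on a light sheet). Returns centroids."""
    binary = img < thresh                      # dark pixels become foreground
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(binary)):
        if labels[seed]:
            continue
        current += 1
        labels[seed] = current
        stack = [seed]                         # iterative flood fill, 4-connectivity
        while stack:
            r, c = stack.pop()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                        and binary[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    stack.append((nr, nc))
    return [tuple(np.mean(np.argwhere(labels == i), axis=0))
            for i in range(1, current + 1)]

# synthetic "sheet": white background with two dark 3x3 note heads
sheet = np.full((20, 40), 255, dtype=np.uint8)
sheet[5:8, 10:13] = 0
sheet[12:15, 30:33] = 0
print(len(detect_blobs(sheet)))  # 2
```

A real system would additionally remove staff lines before labeling and filter blobs by size and shape, which this sketch omits.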
Authors: Vishnu V. Pillai, Sunil P. Lonkar, Akhil M. Abraham, Saeed M. Alhassan
Abstract:
An effective technology for wastewater treatment is in great demand in order to counter the water pollution caused by organic pollutants. Photocatalytic oxidation is widely used to remove such unsafe contaminants. Among the semiconducting metal oxides, robust and thermally stable TiO2 has emerged as a fascinating material for photocatalysis. Enhanced catalytic activity is observed for nanostructured TiO2 due to its higher surface area, chemical stability and higher oxidation ability. However, high charge carrier recombination and a wide band gap restrict the photocatalytic activity of TiO2 to the UV region. It is desirable to develop a photocatalyst that can efficiently absorb visible light, which occupies the main part of the solar spectrum. Hence, in order to extend its photocatalytic efficiency under visible light, TiO2 nanoparticles are often doped with metallic or non-metallic elements. Non-metallic doping of TiO2 has attracted much attention because metallic doping suffers from low thermal stability and enhanced recombination of charge carriers. Among the non-metallic dopants, sulfur doped TiO2 is the most widely used photocatalyst in environmental purification. However, most S-TiO2 synthesis techniques use toxic chemicals and complex procedures. Hence, a facile, scalable and environmentally benign preparation process for S-TiO2 is highly desirable. In the present work, we demonstrate a new and facile solid-state reaction method for S-TiO2 synthesis that uses abundant elemental sulfur as the S source and moderate temperatures. The resulting nano-sized S-TiO2 has been successfully employed as a visible light photocatalyst for methylene blue dye removal from aqueous media.
Keywords: ecofriendly, nanomaterials, methylene blue, photocatalysts
Procedia PDF Downloads 347
2671 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification
Authors: Hung-Sheng Lin, Cheng-Hsuan Li
Abstract:
Over the past few years, kernel-based algorithms have been widely used to extend linear feature extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE) to their nonlinear versions: kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect the nonlinear directions with the largest nonlinear variance or the largest class separability under the given kernel function, and they have been applied to improve target detection and image classification for hyperspectral images. Double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and performs well in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique: for each sample, there are two corresponding nearest proportions of samples, the self-class nearest proportion and the other-class nearest proportion. The term "nearest proportion" used here considers both local information and more global information. With these settings, the effect of the overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high dimensional inference problems, particularly in small data-size situations. Hence, an improved estimator based on shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to the advantages of DNP, KDNP surpasses DNP in the experimental results.
According to the experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, NWFE, and their kernel versions, KPCA, GDA, and KNWFE.
Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest proportion feature extraction
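The kernelization recipe referred to here follows the same pattern as KPCA: compute a kernel matrix, centre it in feature space, and eigendecompose. A minimal NumPy sketch of RBF kernel PCA is shown below as an illustration of that recipe only; the DNP-specific within/between scatter construction is not reproduced, and the `gamma` value and data are made up:

```python
import numpy as np

def rbf_kernel_pca(X, gamma=1.0, n_components=2):
    """Kernel PCA with an RBF kernel: the nonlinear analogue of PCA
    obtained by eigendecomposing the centred kernel matrix."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one     # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)                # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]    # keep the largest components
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                             # projections of the training data

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
Z = rbf_kernel_pca(X, gamma=0.5, n_components=2)
print(Z.shape)  # (50, 2)
```

Replacing the centred kernel matrix here with kernelized scatter matrices is what turns such a sketch into a KDNP-style method.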
Procedia PDF Downloads 342
2670 An Optimal Steganalysis Based Approach for Embedding Information in Image Cover Media with Security
Authors: Ahlem Fatnassi, Hamza Gharsellaoui, Sadok Bouamama
Abstract:
This paper deals with the fields of steganography and steganalysis. Steganography involves hiding information in a cover media to obtain the stego media, in such a way that the cover media is perceived not to have any embedded message by its unintended recipients. Steganalysis is the mechanism of detecting the presence of hidden information in the stego media, and it can lead to the prevention of disastrous security incidents. In this paper, we provide a critical review of the steganalysis algorithms available for analyzing the characteristics of an image stego media against the corresponding cover media, and we examine the process of embedding information and detecting it. We anticipate that this paper can also give a clear picture of the current trends in steganography, so that appropriate steganalysis algorithms can be developed and improved.
Keywords: optimization, heuristics and metaheuristics algorithms, embedded systems, low-power consumption, steganalysis heuristic approach
Procedia PDF Downloads 290
2669 Automated 3D Segmentation System for Detecting Tumor and Its Heterogeneity in Patients with High Grade Ovarian Epithelial Cancer
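As a concrete illustration of the embedding side, the classic least-significant-bit (LSB) scheme hides message bits in pixel LSBs, changing each pixel value by at most 1. This toy NumPy sketch is a generic textbook example of embedding, not a method proposed or reviewed in the paper:

```python
import numpy as np

def embed_lsb(cover, bits):
    """Hide a bit string in the least significant bits of a grayscale cover image."""
    stego = cover.copy().ravel()
    stego[:len(bits)] = (stego[:len(bits)] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Read the message back out of the first n_bits pixel LSBs."""
    return (stego.ravel()[:n_bits] & 1).tolist()

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, message)
print(extract_lsb(stego, 8))  # [1, 0, 1, 1, 0, 0, 1, 0]
```

Statistical steganalysis exploits exactly the LSB regularities this scheme introduces, which is why more elaborate embedding methods exist.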
Authors: Dimitrios Binas, Marianna Konidari, Charis Bourgioti, Lia Angela Moulopoulou, Theodore Economopoulos, George Matsopoulos
Abstract:
High grade ovarian epithelial cancer (OEC) is a fatal gynecological cancer, and the poor prognosis of this entity is closely related to considerable intratumoral genetic heterogeneity. By examining imaging data, it is possible to assess the heterogeneity of tumorous tissue. This study proposes a methodology for aligning, segmenting and finally visualizing information from various magnetic resonance imaging series in order to construct 3D models of heterogeneity maps from the same tumor in OEC patients. The proposed system may be used as an adjunct digital tool by health professionals for personalized medicine, as it allows for an easy visual assessment of the heterogeneity of the examined tumor.
Keywords: image segmentation, ovarian epithelial cancer, quantitative characteristics, image registration, tumor visualization
Procedia PDF Downloads 209
2668 Plagiarism Detection for Flowchart and Figures in Texts
Authors: Ahmadu Maidorawa, Idrissa Djibo, Muhammad Tella
Abstract:
This paper presents a method for detecting flowchart and figure plagiarism based on shape-oriented image processing and multimedia retrieval. The method retrieves flowcharts with ranked similarity according to different matching sets. Plagiarism detection is a well-known concern in the academic arena: copying other people's work is considered a serious offense that needs to be checked. Many plagiarism detection systems, such as Turnitin, have been developed to provide these checks. Most, if not all, discard the figures and charts before checking for plagiarism. Discarding the figures and charts leaves loopholes that people can exploit: figures and charts can be plagiarized easily without current plagiarism systems detecting it. Very few papers discuss flowchart plagiarism detection. Therefore, there is a need to develop a system that will detect plagiarism in figures and charts.
Keywords: flowchart, multimedia retrieval, figures similarity, image comparison, figure retrieval
Procedia PDF Downloads 463
2667 The Design of Imaginable Urban Road Landscape
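One simple way to compare figures for near-duplicate detection is a perceptual average hash: block-average the image, threshold at the mean, and compare fingerprints by Hamming distance. The sketch below is a hypothetical illustration of this general idea, not the matching method used by the authors:

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Downsample an image to hash_size x hash_size by block averaging and
    threshold at the mean: a compact fingerprint for near-duplicate figures."""
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    small = img[:bh * hash_size, :bw * hash_size].reshape(
        hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def hamming_similarity(h1, h2):
    """Fraction of matching hash bits (1.0 = identical fingerprints)."""
    return 1.0 - np.mean(h1 != h2)

rng = np.random.default_rng(2)
fig = rng.random((64, 64))
noisy = np.clip(fig + rng.normal(0, 0.02, fig.shape), 0, 1)  # lightly perturbed copy
other = rng.random((64, 64))                                  # unrelated figure
sim_copy = hamming_similarity(average_hash(fig), average_hash(noisy))
sim_other = hamming_similarity(average_hash(fig), average_hash(other))
print(sim_copy > sim_other)
```

A perturbed copy of a figure keeps nearly all its hash bits, while an unrelated figure agrees only about half the time, which is what makes ranked similarity retrieval possible.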
Authors: Wang Zhenzhen, Wang Xu, Hong Liangping
Abstract:
With the rapid development of cities, the way that people commute has changed greatly; meanwhile, people have come to demand more of their physical and psychological environment in the contemporary world. However, current urban road landscapes ignore these changes: their landscape elements are often boring, confusing, fragmented, and lacking in integrity and hierarchy. Under such circumstances, in order to shape beautiful, identifiable and unique road landscapes, this article concentrates on the target of imaginability. This paper analyses the main elements of the urban road landscape, the concept of the image and its generation mechanism, and then discusses the necessity and connotation of building imaginable urban road landscapes as well as the main problems existing in current urban road landscapes in terms of imaginability. Finally, this paper proposes how to design an imaginable urban road landscape in detail, based on a specific case.
Keywords: identifiability, imaginability, road landscape, the image of the city
Procedia PDF Downloads 439
2666 Representation of the Iranian Community in the Videos of the Instagram Page of the World Health Organization Representative in Iran
Authors: Naeemeh Silvari
Abstract:
The spread and epidemic of the coronavirus caused many aspects of the social life of people around the world to face various challenges. In this regard, and in order to improve people's living conditions, the World Health Organization has tried to publish the necessary instructions for its audiences around the world through its media capacities. Considering the importance of cultural differences in health communication and the distinct needs of people in different societies, some content was produced and published exclusively for particular audiences. This research studies, as a case study, six videos published on the official page of the World Health Organization in Iran. The published content has little semantic affinity with Iranian culture, and it tends to show a uniform image of the Middle East dominated by the image of the culture of the developing Arab countries.
Keywords: corona, representation, semiotics, Instagram, health communication
Procedia PDF Downloads 92
2665 Breast Cancer Metastasis Detection and Localization through Transfer-Learning Convolutional Neural Network Classification Based on Convolutional Denoising Autoencoder Stack
Authors: Varun Agarwal
Abstract:
Introduction: With the advent of personalized medicine, histopathological review of whole slide images (WSIs) for cancer diagnosis presents an exceedingly time-consuming, complex task. Specifically, detecting metastatic regions in WSIs of sentinel lymph node biopsies necessitates a fully scanned, holistic evaluation of the image. Thus, digital pathology, low-level image manipulation algorithms, and machine learning provide significant advancements in improving the efficiency and accuracy of WSI analysis. Using Camelyon16 data, this paper proposes a deep learning pipeline to automate and ameliorate breast cancer metastasis localization and WSI classification. Methodology: The model broadly follows five stages: region of interest detection, WSI partitioning into image tiles, convolutional neural network (CNN) image-segment classification, probabilistic mapping of tumor localizations, and further processing for whole-WSI classification. Transfer learning is applied to the task, with the implementation of Inception-ResNetV2, an effective CNN classifier that uses residual connections to enhance feature representation by adding the convolved outputs of the inception unit to its input. Moreover, in order to augment the performance of the transfer learning CNN, a stack of convolutional denoising autoencoders (CDAE) is applied to produce embeddings that enrich the image representation. Through a saliency-detection algorithm, visual training segments are generated, which are then processed through a denoising autoencoder (primarily consisting of convolutional, leaky rectified linear unit, and batch normalization layers) and subsequently a contrast-normalization function. A spatial pyramid pooling algorithm extracts the key features from the processed image, creating a viable feature map for the CNN that minimizes spatial resolution and noise.
Results and Conclusion: The simplified and effective architecture of the fine-tuned transfer learning Inception-ResNetV2 network, enhanced with the CDAE stack, yields state-of-the-art performance in WSI classification and tumor localization, achieving AUC scores of 0.947 and 0.753, respectively. The convolutional feature retention and compilation with the residual connections to inception units, synergized with the input denoising algorithm, enable the pipeline to serve as an effective, efficient tool in the histopathological review of WSIs.
Keywords: breast cancer, convolutional neural networks, metastasis mapping, whole slide images
Procedia PDF Downloads 129
2664 Kinoform Optimisation Using Gerchberg-Saxton Iterative Algorithm
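Of the stages above, spatial pyramid pooling is the easiest to make concrete: a feature map is pooled over grids of several sizes so that tiles of any size yield a fixed-length vector. The NumPy illustration below is a generic sketch of that idea; the level set (1, 2, 4), max-pooling choice, and channel count are assumptions, not values from the paper:

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a C x H x W feature map over 1x1, 2x2 and 4x4 grids and
    concatenate, producing a fixed-length vector regardless of tile size."""
    c, h, w = fmap.shape
    pooled = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                # integer grid boundaries; guard against empty bins
                r0, r1 = (i * h) // n, max(((i + 1) * h) // n, (i * h) // n + 1)
                c0, c1 = (j * w) // n, max(((j + 1) * w) // n, (j * w) // n + 1)
                pooled.append(fmap[:, r0:r1, c0:c1].max(axis=(1, 2)))
    return np.concatenate(pooled)  # length = C * (1 + 4 + 16)

fmap = np.random.default_rng(3).random((32, 13, 17))  # odd tile size on purpose
vec = spatial_pyramid_pool(fmap)
print(vec.shape)  # (672,)
```

Because the output length depends only on the channel count and the pyramid levels, differently sized WSI tiles can all feed the same downstream classifier.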
Authors: M. Al-Shamery, R. Young, P. Birch, C. Chatwin
Abstract:
Computer Generated Holography (CGH) is employed to create digitally defined coherent wavefronts. A CGH can be created by using different techniques, such as the detour-phase technique or direct phase modulation to create a kinoform. The detour-phase technique was one of the first techniques used to generate holograms digitally. Its disadvantage is that the reconstructed image often has poor quality due to the limited dynamic range it is possible to record using a medium with reasonable spatial resolution. The kinoform (phase-only hologram) is an alternative technique. In this method, the phase of the original wavefront is recorded but the amplitude is constrained to be constant. The original object does not need to exist physically, so the kinoform can be used to reconstruct an almost arbitrary wavefront. However, the image reconstructed by this technique contains high levels of noise and is not identical to the reference image. To improve the reconstruction quality of the kinoform, iterative techniques such as the Gerchberg-Saxton (GS) algorithm are employed. In this paper, the GS algorithm is described for the optimisation of a kinoform used for the reconstruction of a complex wavefront. Iterations of the GS algorithm are applied to determine the phase at a plane (with a known amplitude distribution, often taken as uniform) that satisfies given phase and amplitude constraints in a corresponding Fourier plane. The GS algorithm can be used in this way to enhance the reconstruction quality of the kinoform. Different images are employed as the reference object and their kinoforms are synthesised using the GS algorithm. The quality of the reconstructed images is quantified to demonstrate the enhanced reconstruction quality achieved by using this method.
Keywords: computer generated holography, digital holography, Gerchberg-Saxton algorithm, kinoform
Procedia PDF Downloads 531
2663 Shattering Negative Stigmas, Creating Empathy and Willingness to Advocate for Unpopular Endangered Species: Evidence from Shark Watching in Israel
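The GS iteration described above alternates between the hologram plane (where the amplitude is constrained to be uniform) and the Fourier plane (where the amplitude is constrained to the target image), keeping the phase from the previous step each time. A minimal NumPy sketch, with an arbitrary square target and iteration count chosen for illustration:

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=50, seed=0):
    """Find a phase-only hologram (kinoform) whose Fourier transform
    magnitude approximates the target amplitude."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)  # random starting phase
    for _ in range(n_iter):
        far = np.fft.fft2(np.exp(1j * phase))            # propagate to Fourier plane
        far = target_amp * np.exp(1j * np.angle(far))    # impose target amplitude
        near = np.fft.ifft2(far)                         # propagate back
        phase = np.angle(near)                           # impose unit amplitude
    return phase

# target: a bright square on a dark background, unit-normalised
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
target /= np.linalg.norm(target)

def recon_error(phase):
    rec = np.abs(np.fft.fft2(np.exp(1j * phase)))
    rec /= np.linalg.norm(rec)
    return np.linalg.norm(rec - target)

initial = np.random.default_rng(0).uniform(0, 2 * np.pi, target.shape)
print(recon_error(gerchberg_saxton(target)) < recon_error(initial))  # True
```

Each iteration is a projection onto one of the two amplitude constraints, which is why the reconstruction error is non-increasing over the iterations.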
Authors: Nurit Carmi
Abstract:
There are many endangered species that are not popular but whose conservation is, nonetheless, important. The present study deals with sharks, which suffer from demonization and, accordingly, from public indifference to their deteriorating conservation status. We used the seasonal appearance of sharks in the Israeli coastal zone to study public perceptions of and attitudes towards sharks prior to ("control group") and after ("visitors") shark watching during a visit to an information center. We found that the sharks' image was significantly more positive among the "visitors" than in the control group. Visiting the information center was strongly related to a more positive shark image, more positive attitudes toward shark conservation, and greater willingness to act to preserve sharks.
Keywords: wildlife tourism, shark conservation, attitudes towards animals, human-animal relationships, Smith's salience index
Procedia PDF Downloads 162
2662 Endocardial Ultrasound Segmentation Using Level Set Method
Authors: Daoudi Abdelaziz, Mahmoudi Saïd, Chikh Mohamed Amine
Abstract:
This paper presents a fully automatic segmentation method for the left ventricle at end systole (ES) and end diastole (ED) in ultrasound images, by means of an implicit deformable model (level set) based on the geodesic active contour model. A pre-processing Gaussian smoothing stage is applied to the image, which is essential for a good segmentation. Before the segmentation phase, we automatically locate the area of the left ventricle using a detection approach based on the Hough transform. The result obtained is then used to automate the initialization of the level set model: this initial curve (zero level set) deforms to search for the endocardial border in the image. Quantitative evaluation was performed on a data set composed of 15 subjects, with a comparison to ground truth (manual segmentation).
Keywords: level set method, Hough transform, Gaussian smoothing, left ventricle, ultrasound images
Procedia PDF Downloads 463
2661 Experimental Modeling of Spray and Water Sheet Formation Due to Wave Interactions with Vertical and Slant Bow-Shaped Model
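The Hough-based localization step can be illustrated with a minimal circle-centre voting scheme: every edge pixel votes for all centres that would place it on a circle of a given radius, and the accumulator peak marks the most likely centre. This NumPy sketch assumes a known radius and a synthetic edge map, and is not the authors' implementation:

```python
import numpy as np

def hough_circle_center(edges, radius):
    """Accumulate centre votes for a known radius and return the peak."""
    acc = np.zeros(edges.shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    for r, c in zip(*np.nonzero(edges)):
        # candidate centres lie on a circle of the same radius around the edge pixel
        rr = np.round(r - radius * np.sin(thetas)).astype(int)
        cc = np.round(c - radius * np.cos(thetas)).astype(int)
        ok = (rr >= 0) & (rr < acc.shape[0]) & (cc >= 0) & (cc < acc.shape[1])
        np.add.at(acc, (rr[ok], cc[ok]), 1)
    return np.unravel_index(np.argmax(acc), acc.shape)

# synthetic edge map: a circle of radius 10 centred at (32, 40)
edges = np.zeros((64, 80), dtype=bool)
t = np.linspace(0, 2 * np.pi, 200)
edges[np.round(32 + 10 * np.sin(t)).astype(int),
      np.round(40 + 10 * np.cos(t)).astype(int)] = True
print(hough_circle_center(edges, radius=10))
```

In the ventricle-detection setting, such a peak would seed the zero level set, which the geodesic active contour then refines toward the endocardial border.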
Authors: Armin Bodaghkhani, Bruce Colbourne, Yuri S. Muzychka
Abstract:
The process of spray-cloud formation and the flow kinematics produced by breaking wave impact on vertical and slant lab-scale bow-shaped models were experimentally investigated. Bubble Image Velocimetry (BIV) and Image Processing (IP) techniques were applied to study the various types of wave-model impacts. Different waves were generated in a tow tank to investigate the effects of wave characteristics, such as wave phase velocity and wave steepness, on droplet velocities and on the process of spray cloud formation. The phase ensemble-averaged vertical velocity and turbulent intensity were computed. A high-speed camera and diffused LED backlights were utilized to capture images for further post-processing. Pressure sensors and capacitive wave probes were used to measure the wave impact pressure and the free surface profile at different locations on the model and wave tank, respectively. Droplet sizes and velocities were measured using the BIV and IP techniques, which trace bubbles and droplets by correlating the texture in successive images. The impact pressure and droplet size distributions were compared to several previous experimental models, and satisfactory agreement was achieved. The distribution of droplets in front of both models is demonstrated. Due to the highly transient nature of spray formation, the drag coefficient for several stages of this transient displacement, for various droplet size ranges and different Reynolds numbers, was calculated based on the ensemble average method. The experimental results show that the slant model produces less spray than the vertical model, and the droplets generated from the wave impact with the slant model have a lower velocity compared with the vertical model.
Keywords: spray characteristics, droplet size and velocity, wave-body interactions, bubble image velocimetry, image processing
Procedia PDF Downloads 299
2660 Characterization of Inertial Confinement Fusion Targets Based on Transmission Holographic Mach-Zehnder Interferometer
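Velocimetry techniques such as BIV recover displacements by correlating the texture of successive frames. The minimal FFT cross-correlation sketch below illustrates the core idea on a synthetic pattern with a known integer shift; real BIV processing interrogates local windows and fits sub-pixel peaks, which this sketch omits:

```python
import numpy as np

def displacement_fft(frame_a, frame_b):
    """Estimate the integer-pixel displacement between two frames from the
    peak of their FFT-based cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(frame_a).conj() * np.fft.fft2(frame_b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped FFT indices to signed shifts
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(4)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(3, -5), axis=(0, 1))  # texture moved 3 down, 5 left
print(displacement_fft(frame_a, frame_b))  # (3, -5)
```

Dividing the recovered displacement by the inter-frame time converts it to a velocity, which is how droplet velocity fields are built up window by window.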
Authors: B. Zare-Farsani, M. Valieghbal, M. Tarkashvand, A. H. Farahbod
Abstract:
To provide the conditions for nuclear fusion driven by high-energy, high-power laser beams, the spherical capsules must have a high degree of symmetry and surface uniformity in order to reduce Rayleigh-Taylor hydrodynamic instabilities. In this paper, we have used digital microscopic holography based on a Mach-Zehnder interferometer to study the quality of targets for inertial fusion. The interferometric pattern of the target was registered by a CCD camera and analyzed with the Holovision software. The uniformity of the surface and the shell thickness are investigated and measured in the reconstructed image. We measured the shell thickness in different zones and obtained a non-uniformity of 22.82 percent.
Keywords: inertial confinement fusion, Mach-Zehnder interferometer, digital holographic microscopy, image reconstruction, Holovision
Procedia PDF Downloads 302
2659 Real-Space Mapping of Surface Trap States in CIGSe Nanocrystals Using 4D Electron Microscopy
Authors: Riya Bose, Ashok Bera, Manas R. Parida, Anirudhha Adhikari, Basamat S. Shaheen, Erkki Alarousu, Jingya Sun, Tom Wu, Osman M. Bakr, Omar F. Mohammed
Abstract:
This work reports the visualization of charge carrier dynamics on the surface of copper indium gallium selenide (CIGSe) nanocrystals in real space and time using four-dimensional scanning ultrafast electron microscopy (4D S-UEM), and correlates it with the optoelectronic properties of the nanocrystals. The surface of the nanocrystals plays a key role in controlling their applicability for light emitting and light harvesting purposes. Typically for quaternary systems like CIGSe, which have many desirable attributes for optoelectronic applications, the relative abundance of surface trap states acting as non-radiative recombination centres for charge carriers remains a major bottleneck preventing further advancement and commercial exploitation of devices based on these nanocrystals. Though ultrafast spectroscopic techniques can determine the presence of picosecond carrier trapping channels, because of the relatively large penetration depth of the laser beam, the information obtained comes mainly from the bulk of the nanocrystals. Selective mapping of such ultrafast dynamical processes on the surfaces of nanocrystals remains a key challenge, so far out of reach of purely optical time-resolved laser techniques. In S-UEM, the optical pulse generated from a femtosecond (fs) laser system is used to generate electron packets from the tip of the scanning electron microscope, instead of the continuous electron beam used in the conventional setup. This pulse is synchronized with another optical excitation pulse that initiates carrier dynamics in the sample. The principle of S-UEM is to detect the secondary electrons (SEs) generated in the sample, which are emitted from the first few nanometers of the top surface. Constructed at different time delays between the optical and electron pulses, these SE images give direct and precise information about the carrier dynamics on the surface of the material of interest.
In this work, we report selective mapping of the surface dynamics of CIGSe nanocrystals in real space and time by applying 4D S-UEM. We show that the trap states can be considerably passivated by ZnS shelling of the nanocrystals, and that the carrier dynamics can be significantly slowed down. We also compare and discuss the S-UEM kinetics with the carrier dynamics obtained from conventional ultrafast time-resolved techniques. Additionally, a direct effect of the trap state removal can be observed in the enhanced photoresponse of the nanocrystals after shelling. Direct observation of surface dynamics will not only provide a profound understanding of the photo-physical mechanisms on nanocrystal surfaces but also help unlock their full potential for light emitting and harvesting applications.
Keywords: 4D scanning ultrafast microscopy, charge carrier dynamics, nanocrystals, optoelectronics, surface passivation, trap states
Procedia PDF Downloads 293
2658 Automatic Measurement of Garment Sizes Using Deep Learning
Authors: Maulik Parmar, Sumeet Sandhu
Abstract:
The online fashion industry experiences high product return rates. Many returns are due to size/fit mismatches: the size scale on labels can vary across brands, the size parameters may not capture all fit measurements, or the product may have manufacturing defects. Warehouse quality checking of garment sizes can be semi-automated to improve speed and accuracy. This paper presents an approach for automatically measuring garment sizes from a single image of the garment, using deep learning to learn garment keypoints. The paper focuses on the waist size measurement of jeans and can be easily extended to other garment types and measurements. Experimental results show that this approach can greatly improve the speed and accuracy of today's manual measurement process.
Keywords: convolutional neural networks, deep learning, distortion, garment measurements, image warping, keypoints
Procedia PDF Downloads 307
2657 A Single Feature Probability-Object Based Image Analysis for Assessing Urban Landcover Change: A Case Study of Muscat Governorate in Oman
Authors: Salim H. Al Salmani, Kevin Tansey, Mohammed S. Ozigis
Abstract:
The study of the growth of built-up areas and settlement expansion is a major exercise that city managers undertake to establish previous and current developmental trends, so as to ensure that settlement expansion needs are matched by appropriate levels of services and infrastructure. This research demonstrates the potential of satellite image processing, harnessing a single feature probability-object based image analysis (SFP-OBIA) technique, for assessing the urban growth dynamics of the Muscat Governorate in Oman for the years 1990, 2002 and 2013. This need is fueled by the continuous expansion of the Muscat Governorate beyond predicted levels of infrastructural provision. Landsat images of 1990, 2002 and 2013 were downloaded and preprocessed to ensure appropriate radiometric and geometric standards. A novel approach of probability filtering of the target feature segment was implemented to derive the spatial extent of the final built-up area of the Muscat Governorate for the three years. This proved to be a useful technique, as accuracy assessment results of 55%, 70%, and 71% were recorded for the urban landcover of 1990, 2002 and 2013 respectively. Furthermore, the Normalized Difference Built-Up Index (NDBI) was derived for the various images and used to consolidate the results of the SFP-OBIA through a linear regression model and visual comparison. The results show various hotspots where urbanization has taken place sporadically. Specifically, settlement in the districts (Wilayat) of Al-Amarat, Muscat, and Qurayyat experienced tremendous change between 1990 and 2002, while the districts (Wilayat) of Al-Seeb, Bawshar, and Muttrah experienced more sporadic changes between 2002 and 2013.
Keywords: urban growth, single feature probability, object based image analysis, landcover change
Procedia PDF Downloads 273
2656 Vibration Imaging Method for Vibrating Objects with Translation
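The NDBI used here to consolidate the classification follows the standard formulation NDBI = (SWIR − NIR) / (SWIR + NIR), which is positive over built-up surfaces because they reflect more strongly in the shortwave infrared than in the near infrared. A toy NumPy illustration with made-up reflectance values:

```python
import numpy as np

def ndbi(swir, nir):
    """Normalized Difference Built-Up Index: positive where SWIR > NIR,
    which is characteristic of built-up surfaces."""
    swir = swir.astype(float)
    nir = nir.astype(float)
    return (swir - nir) / np.maximum(swir + nir, 1e-9)  # guard against divide-by-zero

# toy 2x2 scene: left column built-up (SWIR > NIR), right column vegetated
swir = np.array([[0.30, 0.10], [0.32, 0.08]])
nir = np.array([[0.20, 0.40], [0.22, 0.38]])
built_up_mask = ndbi(swir, nir) > 0
print(built_up_mask)
```

For Landsat TM/ETM+ scenes such as those used in this study, the SWIR and NIR inputs would be bands 5 and 4 respectively; thresholding the index at zero gives a first-cut built-up mask.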
Authors: Kohei Shimasaki, Tomoaki Okamura, Idaku Ishii
Abstract:
We propose a vibration imaging method for high frame rate (HFR)-video-based localization of vibrating objects with large translations. When the ratio of the translation speed of a target to its vibration frequency is large, obtaining its frequency response in image intensities becomes difficult because one or no waves are observable at the same pixel. Our method can precisely localize moving objects with vibration by virtually translating multiple image sequences for pixel-level short-time Fourier transform to observe multiple waves at the same pixel. The effectiveness of the proposed method is demonstrated by analyzing several HFR videos of flying insects in real scenarios.Keywords: HFR video analysis, pixel-level vibration source localization, short-time Fourier transform, virtual translation
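The idea of virtual translation can be reproduced in a toy 1D setting: a vibrating source moving at a known speed shows no stable oscillation at any fixed pixel, but shifting each frame back by the accumulated translation makes the full time series, and hence its Fourier peak, observable at one pixel. A NumPy sketch in which the motion model, speed, and frequency are all made up for illustration:

```python
import numpy as np

# A vibrating source translating at 1 px/frame across a 1D image sequence.
n_frames, width, speed, freq = 64, 128, 1, 8   # freq in cycles per sequence
frames = np.zeros((n_frames, width))
for t in range(n_frames):
    x = 10 + speed * t                          # current position of the source
    frames[t, x] = 1.0 + np.sin(2 * np.pi * freq * t / n_frames)

# virtual translation: undo the known motion before the per-pixel FFT
stabilized = np.array([np.roll(frames[t], -speed * t) for t in range(n_frames)])
spectrum = np.abs(np.fft.rfft(stabilized[:, 10]))  # time series at one pixel
print(int(np.argmax(spectrum[1:]) + 1))  # 8  (the vibration frequency recovered)
```

Without the stabilization step, pixel 10 is bright for only one frame and no spectral peak emerges; the full method searches over candidate translation speeds rather than assuming one.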
Procedia PDF Downloads 107
2655 The Death of Ruan Lingyu: Leftist Aesthetics and Cinematic Reality in the 1930s Shanghai
Authors: Chen Jin
Abstract:
This paper re-examines the New Women Incident of 1935 Shanghai from the perspective of the influence of leftist cinematic aesthetics on public discourse in 1930s Shanghai, and accordingly provides an original means of interpreting the death of Ruan Lingyu. On 8th March 1935, Ruan Lingyu, the queen of Chinese silent film, committed suicide by overdosing on sleeping tablets. Her last words, ‘gossip is a fearful thing’, interlink her destiny with the protagonist she played in the film The New Women (Cai Chusheng, 1935). The coincidence was constantly questioned by the masses following her suicide, constituting the enduring question: ‘who killed Ruan Lingyu?’ Responding to this query, previous scholars have primarily analyzed the characters played by women -particularly new women as part of the leftist movement or public discourse of 1930s Shanghai- as a means of approaching the truth. Nevertheless, alongside her status as a public celebrity, Ruan Lingyu also exists as a screen image of mechanical reproduction. The overlap between her screen image and personal destiny has attracted limited academic focus in terms of the effect and implications of leftist aesthetics of reality in relation to her death, which itself has provided impetus to this research. With the reconfiguration of early Chinese film theory in the 1980s, early discourses on the relationship between cinematic reality and consciousness proposed by Hou Yao and Gu Kenfu in the 1920s were integrated into the category of Chinese film ontology, which constitutes a transcultural contrast with the Euro-American ontology that advocates the representation of reality. The discussion of Hou and Gu overlaps cinematic reality with affect, which emphasizes the empathy of cinema that is directly reflected in the leftist aesthetics of the 1930s.
As the main purpose of leftist cinema was to encourage revolution through depicting social reality truthfully, Ruan Lingyu became renowned for her natural and realistic acting, playing leading roles in several esteemed leftist films. The realistic reproduction and natural acting together constitute the empathy of leftist films, which establishes a dialogue with the virtuous female image within 1930s public discourse. On this basis, this research takes Chinese cinematic ontology and affect theory as the theoretical foundation for investigating the relationship between the screen image of Ruan Lingyu reproduced by the leftist film The New Women and the female image in 1930s public discourse. Through contextualizing Ruan Lingyu’s death within the Chinese leftist movement, the essay indicates that the empathy embodied within leftist cinematic reality limited viewers’ cognition of the actress: they projected their sentiments for the perfect screen image onto Ruan Lingyu’s image in reality. Essentially, Ruan Lingyu was imprisoned in her own perfect replication. Consequently, this article argues that alongside leftist anti-female consciousness, the leftist aesthetics of reality restricted women to a passive position within public discourse, which ultimately played a role in facilitating the death of Ruan Lingyu.
Keywords: cinematic reality, leftist aesthetics, Ruan Lingyu, The New Women
Procedia PDF Downloads 118
2654 Added Value of 3D Ultrasound Image Guided Hepatic Interventions by X Matrix Technology
Authors: Ahmed Abdel Sattar Khalil, Hazem Omar
Abstract:
Background: Image-guided hepatic interventions are integral to the management of infective and neoplastic liver lesions. Over the past decades, 2D ultrasound was used to guide hepatic interventions; with recent advances in ultrasound technology, 3D ultrasound can now be used instead. The aim of this study was to illustrate the added value of 3D image-guided hepatic interventions using x matrix technology. Patients and Methods: This prospective study was performed on 100 patients divided into two groups: group A included 50 patients managed under 2D ultrasonography probe guidance, and group B included 50 patients managed under 3D x matrix ultrasonography probe guidance. Thermal ablation was done for 70 patients: 40 RFA (20 with the 2D probe and 20 with the 3D x matrix probe) and 30 MWA (15 with the 2D probe and 15 with the 3D x matrix probe). Chemical ablation (PEI) was done on 20 patients (10 with the 2D probe and 10 with the 3D x matrix probe). Drainage of hepatic collections and biopsy of undiagnosed hepatic focal lesions were done on 10 patients (5 with the 2D probe and 5 with the 3D x matrix probe). Results: The efficacy of ultrasonography-guided hepatic interventions with the 3D x matrix probe was higher than with the 2D probe, but not significantly so (p = 0.705 and 0.5428 for RFA and MWA, respectively; 0.5312 for PEI; 0.2918 for drainage of hepatic collections and biopsy). Complications related to the use of the 3D x matrix probe were significantly fewer than with the 2D probe (p = 0.003). Procedure time was shorter with the 3D x matrix probe than with the 2D probe (p = 0.08 and 0.34 for RFA and PEI) and significantly shorter for MWA and for drainage of hepatic collections and biopsy (p = 0.02 and 0.001, respectively).
Conclusions: 3D ultrasonography-guided hepatic interventions with the x matrix probe offer better efficacy, fewer complications, and shorter procedure times than 2D ultrasonography-guided hepatic interventions.
Keywords: 3D, X matrix, 2D, ultrasonography, MWA, RFA, PEI, drainage of hepatic collections, biopsy
Procedia PDF Downloads 93
2653 The Layout Analysis of Handwriting Characters and the Fusion of Multi-style Ancient Books’ Background
Authors: Yaolin Tian, Shanxiong Chen, Fujia Zhao, Xiaoyu Lin, Hailing Xiong
Abstract:
Ancient books are significant cultural inheritances, and their background textures convey potential historical information. However, multi-style texture recovery of ancient books has received little attention. Restricted by insufficient ancient textures and a complex handling process, the generation of ancient textures confronts new challenges. For instance, training without sufficient data usually brings about overfitting or mode collapse, so some of the outputs are prone to be fake. Recently, image generation and style transfer based on deep learning have been widely applied in computer vision; breakthroughs within the field make it possible to conduct research on multi-style texture recovery of ancient books. Under these circumstances, we propose a layout analysis and image fusion system. Firstly, we train models using Deep Convolutional Generative Adversarial Networks (DCGAN) to synthesize multi-style ancient textures; then, we analyze layouts based on our proposed Position Rearrangement (PR) algorithm to adjust the layout structure of the foreground content; at last, we fuse the rearranged foreground texts with the generated background. In the experiments, diversified samples such as ancient Yi, Jurchen, and Seal script were selected as training sets. The performance of different fine-tuning models was gradually improved by adjusting the DCGAN model in both parameters and structure. To evaluate the results scientifically, the cross-entropy loss function and the Fréchet Inception Distance (FID) were selected as assessment criteria. Eventually, model M8 achieved the lowest FID score: compared with the DCGAN model proposed by Radford et al., the FID score of M8 improved by 19.26%, profoundly enhancing the quality of the synthetic images.
Keywords: deep learning, image fusion, image generation, layout analysis
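The FID criterion used above to rank the fine-tuned models compares Gaussian statistics of (Inception) features from real and generated images: ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^½). A minimal NumPy sketch of the formula itself, with feature extraction omitted:

```python
import numpy as np

def _sqrtm_psd(mat):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(mat)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_inception_distance(mu1, sigma1, mu2, sigma2):
    """FID between two Gaussians fitted to feature sets:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2)).
    Uses Tr((S1 S2)^(1/2)) = Tr((S2^(1/2) S1 S2^(1/2))^(1/2)),
    whose argument is symmetric PSD and so safe for eigh."""
    diff = mu1 - mu2
    s2_half = _sqrtm_psd(sigma2)
    covmean = _sqrtm_psd(s2_half @ sigma1 @ s2_half)
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * np.trace(covmean))

# Identical distributions give FID 0; shifting the mean raises it.
mu, sigma = np.zeros(3), np.eye(3)
fid_same = frechet_inception_distance(mu, sigma, mu, sigma)          # 0.0
fid_shifted = frechet_inception_distance(mu, sigma, mu + 1.0, sigma)  # 3.0
```

In the paper's setting, μ and Σ would be estimated from Inception-network activations of real and DCGAN-generated textures; a lower value (as for model M8) means the generated distribution sits closer to the real one.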
Procedia PDF Downloads 156
2652 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest
Authors: Peter Baji
Abstract:
In recent years, a growing body of literature emphasizes the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although there are different responses from the public sector to decrease traffic congestion in urban regions, the most effective public intervention is congestion charging. Because travel is an economic asset, its consumption can be controlled effectively by extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with different methods has a long history in transport sciences, but until recently, there was not sufficient data for evaluating road traffic flow patterns on the scale of the entire road system of a larger urban area. European cities (e.g., London, Stockholm, Milan) in which congestion charges have already been introduced designated a particular charging zone in their downtown, but this protects only the users and inhabitants of the CBD (Central Business District) area. Through the use of Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a solution for a fairer and smarter congestion pricing method in cities. The case study area of the research contains three bordering districts of Budapest which are linked by one main road. The first district (5th) is the original downtown that is affected by the congestion charge plans of the city. The second district (13th) lies in the transition zone and has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data of the research was collected with the help of Google’s Distance Matrix API (Application Programming Interface), which provides estimated future traffic data via travel times between freely fixed coordinate pairs.
From the difference between free-flow and congested travel time data, the daily congestion patterns and hot spots are detectable on all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; however, there are frequently congested areas outside the downtown whose inhabitants also need protection. The conclusion of this case study is that cities can develop a real-time, place-based congestion charge system that encourages car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer solution for decreasing the negative environmental effects of urban road transportation than protecting a very limited downtown area.
Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study
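A congestion indicator of the kind described — the gap between congested and free-flow travel times — can be sketched from a parsed Distance Matrix API response. The response fragment below is illustrative; in the real API, `duration_in_traffic` is returned only when a departure time is requested:

```python
def congestion_delays(response):
    """Extract per-route congestion delay (seconds) from a parsed Google
    Distance Matrix API JSON response: duration_in_traffic minus the
    free-flow duration. Routes that failed return None."""
    delays = []
    for row in response["rows"]:
        for element in row["elements"]:
            if element.get("status") != "OK":
                delays.append(None)
                continue
            free_flow = element["duration"]["value"]
            in_traffic = element.get("duration_in_traffic",
                                     element["duration"])["value"]
            delays.append(in_traffic - free_flow)
    return delays

# Illustrative fragment shaped like the Distance Matrix API's JSON output:
sample = {"rows": [{"elements": [
    {"status": "OK",
     "duration": {"value": 600},              # free-flow estimate
     "duration_in_traffic": {"value": 840}},  # with predicted traffic
    {"status": "ZERO_RESULTS"},
]}]}
delays = congestion_delays(sample)   # [240, None]
```

Polling many coordinate pairs at different times of day and mapping where this delay spikes is, in essence, how the hot spots described above are revealed.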
Procedia PDF Downloads 194
2651 Segmentation of Korean Words on Korean Road Signs
Authors: Lae-Jeong Park, Kyusoo Chung, Jungho Moon
Abstract:
This paper introduces an effective method of segmenting Korean text (place names in Korean) from a Korean road sign image. A Korean advanced directional road sign is composed of several types of visual information, such as arrows, place names in Korean and English, and route numbers. Automatic classification of this visual information and extraction of Korean place names from road sign images make it possible to avoid a great deal of manual input to a database system for nationwide management of road signs. We propose a series of problem-specific heuristics that correctly segment Korean place names, the most crucial information, by effectively leaving out non-text content. The experimental results on a dataset of 368 road sign images show a detection rate of 96% per Korean place name and 84% per road sign image.
Keywords: segmentation, road signs, characters, classification
Procedia PDF Downloads 442
2650 Automatic Extraction of Water Bodies Using Whole-R Method
Authors: Nikhat Nawaz, S. Srinivasulu, P. Kesava Rao
Abstract:
Feature extraction plays an important role in many remote sensing applications, and automatic extraction of water bodies is of great significance in tasks such as change detection and image retrieval. This paper presents a procedure for automatic extraction of water information from remote sensing images. The algorithm uses the relative location of the R-colour component in the chromaticity diagram. This method is then integrated with the spatial scale transformation of the whole method, which is based on a water index fitted from a spectral library. Experimental results demonstrate the improved accuracy and effectiveness of the integrated method for automatic extraction of water bodies.
Keywords: feature extraction, remote sensing, image retrieval, chromaticity, water index, spectral library, integrated method
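The r-component of the chromaticity diagram referred to above is simply R normalized by the channel sum; a toy sketch, assuming that water pixels sit toward the low-r (blue-green) side of the diagram (the threshold and pixel values are illustrative, not the paper's fitted water index):

```python
import numpy as np

def r_chromaticity(rgb, eps=1e-9):
    """Normalized r coordinate of the chromaticity diagram: R / (R + G + B).
    Water pixels tend toward low r (blue-green dominance)."""
    rgb = np.asarray(rgb, dtype=float)
    return rgb[..., 0] / (rgb.sum(axis=-1) + eps)

# Toy pixels: a blue-ish "water" pixel vs. a red-ish "soil" pixel.
pixels = np.array([[20, 90, 140],    # water-like: low r
                   [120, 80, 40]])   # soil-like: high r
r = r_chromaticity(pixels)           # [0.08, 0.5]
water_mask = r < 1.0 / 3.0           # below the achromatic point r = 1/3
```

The chromaticity normalization discards overall brightness, which is what makes a location-based rule in the diagram more robust than raw-channel thresholds.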
Procedia PDF Downloads 382
2649 Sociocultural Influences on Men of Color’s Body Image Concerns: A Structural Equation Modeling Study
Authors: Zikun Li, Regine Talleyrand
Abstract:
Negative body image is one of the most common causes of eating disorders, and it does not happen only to women. Despite the increasing attention that researchers and practitioners have been paying to the male population and their body image concerns, men of color have yet to be fully represented or studied. Given the consensus that the sociocultural experiences of people of color may play a significant role in their health and well-being, this study assessed the mechanism through which sociocultural factors may influence men of color’s perceptions of body image. In particular, it untangled how interpersonal and media pressure, as well as ethnic-racial identities and perceptions, impact body dissatisfaction in terms of muscularity, body fat, and height in men of color, and how this mechanism is moderated across different ethnic-racial groups. A structural equation modeling approach was applied. With a sample of 181 self-identified Black, Indigenous, and People of Color male participants aged 20-50 (M = 33.33, SD = 6.9) surveyed on Amazon’s MTurk platform, the proposed model achieved a modestly acceptable fit, χ²(836) = 1412.184, CFI = 0.900, RMSEA = 0.062 [0.056, 0.067], SRMR = 0.088, and explained 89.5% of the variance in body dissatisfaction. The results showed that of all the direct effects on body dissatisfaction, interpersonal appearance pressure was the strongest (β = 0.410***), followed by media appearance pressure (β = 0.272**) and self-hatred feeling (β = 0.245**). The ethnic-racial factors (i.e., stereotype endorsement, ethnic-racial salience, and nationalistic assimilation) statistically influenced body dissatisfaction through the mediators of media appearance pressure and/or self-hatred feeling.
Furthermore, the moderation analysis between Black/African American men and non-Black/African American men revealed substantial differences in how ethnic/racial identity impacts one’s perception of body image; Black/African American men were influenced by sociocultural factors at a higher level than their counterparts. The impacts of demographic characteristics (i.e., SES, weight, height) on body dissatisfaction were also examined. Instead of considering interpersonal and media appearance pressure as two subscales under one construct, this study treated them as two separate and distinct sociocultural factors; the good model fit supports this decision and encourages scholars to reconsider the impacts of the two sources of social pressure on body dissatisfaction. In addition, the study provided empirical evidence of a moderation effect within the population of men of color, which reveals the heterogeneity across ethnic-racial groups and implies the necessity of studying individual ethnic-racial groups so as to better understand the mechanism of sociocultural influences on men of color’s body dissatisfaction. These findings strengthen the current understanding of the body image concerns existing among men of color and provide empirical evidence for practitioners to offer tailored health prevention and treatment options for this growing population in the United States.
Keywords: men of color, body image concerns, sociocultural factors, structural equation modeling
Procedia PDF Downloads 68
2648 Refined Edge Detection Network
Authors: Omar Elharrouss, Youssef Hmamouche, Assia Kamal Idrissi, Btissam El Khamlichi, Amal El Fallah-Seghrouchni
Abstract:
Edge detection is one of the most challenging tasks in computer vision, due to the complexity of detecting edges or boundaries in real-world images that contain objects of different types and scales, such as trees and buildings, as well as various backgrounds. It is also a key task for many computer vision applications. Using a set of backbones as well as attention modules, deep-learning-based methods have improved the detection of edges compared with traditional methods like Sobel and Canny. However, images of complex scenes still represent a challenge for these methods, and the edges detected by existing approaches suffer from non-refined results, with output images containing many erroneous edges. To overcome this, in this paper, using the mechanism of residual learning, a refined edge detection network (RED-Net) is proposed. By maintaining the high resolution of edges during the training process and conserving the resolution of the edge image at each network stage, we connect the pooling outputs at each stage with the output of the previous layer. Also, after each layer, we use an affine batch normalization layer as an erosion operation for homogeneous regions in the image. The proposed method is evaluated on the most challenging datasets, including BSDS500, NYUD, and Multicue. The obtained results outperform existing edge detection networks in terms of performance metrics and quality of output images.
Keywords: edge detection, convolutional neural networks, deep learning, scale-representation, backbone
Procedia PDF Downloads 102
2647 Artificial Intelligence Based Analysis of Magnetic Resonance Signals for the Diagnosis of Tissue Abnormalities
Authors: Kapila Warnakulasuriya, Walimuni Janaka Mendis
Abstract:
In this study, an artificial intelligence-based approach is developed to diagnose abnormal tissues in human or animal bodies by analyzing magnetic resonance signals. As opposed to the conventional method of generating an image from the magnetic resonance signals, which is then evaluated by a radiologist for the diagnosis of abnormalities, in the discussed approach the magnetic resonance signals are analyzed by an artificial intelligence algorithm without having to generate or analyze an image. The AI-based program compares magnetic resonance signals with millions of possible magnetic resonance waveforms that can be generated from various types of normal tissue. Waveforms generated by abnormal tissues are then identified, and images of the abnormal tissues, with their possible locations in the body, are generated for further diagnostic tests.
Keywords: magnetic resonance, artificial intelligence, magnetic waveform analysis, abnormal tissues
Procedia PDF Downloads 88
2646 Spatial Object-Oriented Template Matching Algorithm Using Normalized Cross-Correlation Criterion for Tracking Aerial Image Scene
Authors: Jigg Pelayo, Ricardo Villar
Abstract:
Following the development of aerial laser scanning in the Philippine geospatial industry, research on remote sensing and machine vision technology has become a trend. Object detection via template matching is one such application, characterized as fast and real-time. This paper provides a robust pattern matching algorithm based on the normalized cross-correlation (NCC) criterion function, subjected to object-based image analysis (OBIA) and utilizing high-resolution aerial imagery and low-density LiDAR data. The height information from laser scanning provides an effective partitioning order, improving the hierarchical class feature pattern and allowing unnecessary calculation to be skipped. Since detection is executed on the object-oriented platform, mathematical morphology and multi-level filter algorithms were established to effectively avoid the influence of noise, small distortions, and fluctuating image saturation that affect the rate of feature recognition. Furthermore, the scheme was evaluated to assess its performance in different situations and to inspect the computational complexity of the algorithms. Its effectiveness is demonstrated in areas of Misamis Oriental province, achieving an overall accuracy above 91%. The results also portray the potential and efficiency of the implemented algorithm under different lighting conditions.
Keywords: algorithm, LiDAR, object recognition, OBIA
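The NCC criterion function at the heart of the method has a standard form; a brute-force NumPy sketch (illustrative only — the paper's height-based partitioning exists precisely to avoid this exhaustive search):

```python
import numpy as np

def ncc(patch, template, eps=1e-9):
    """Normalized cross-correlation between an image patch and a template;
    returns a value in [-1, 1], invariant to brightness and contrast shifts."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum()) + eps
    return float((p * t).sum() / denom)

def match_template(image, template):
    """Slide the template over the image and return the top-left corner
    of the best NCC match (naive O(image * template) scan)."""
    th, tw = template.shape
    ih, iw = image.shape
    scores = np.full((ih - th + 1, iw - tw + 1), -2.0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            scores[y, x] = ncc(image[y:y + th, x:x + tw], template)
    return np.unravel_index(scores.argmax(), scores.shape)

# Plant the template inside a larger image and recover its position.
rng = np.random.default_rng(0)
image = rng.random((20, 20))
template = image[7:12, 4:9].copy()
best = match_template(image, template)   # (7, 4)
```

Because both patch and template are mean-subtracted and normalized, the score is unaffected by the global brightness changes that plague raw correlation — the property that makes NCC attractive under the varying lighting conditions mentioned above.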
Procedia PDF Downloads 242
2645 A Fast Version of the Generalized Multi-Directional Radon Transform
Authors: Ines Elouedi, Atef Hammouda
Abstract:
This paper presents a new fast version of the generalized Multi-Directional Radon Transform. The new method uses the inverse Fast Fourier Transform to compute the generalized Radon projections faster. We prove that the fast algorithm leads to almost the same results as the original one, but with a considerably lower computation cost. The projection result of the fast method is a parameterized Radon space in which a high-valued pixel allows the detection of a curve in the original image. The proposed fast inversion algorithm leads to an exact reconstruction of the initial image from the Radon space. We show examples of the impact of this algorithm on the pattern recognition domain.
Keywords: fast generalized multi-directional Radon transform, curve, exact reconstruction, pattern recognition
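FFT-based acceleration of Radon-type transforms typically rests on the projection-slice theorem: the 1D FFT of a projection equals a central slice of the image's 2D FFT. A NumPy sketch at the axis-aligned angle — illustrative of the principle, not of the authors' generalized multi-directional algorithm:

```python
import numpy as np

# Projection-slice theorem: the 1D FFT of a Radon projection equals a
# central slice of the image's 2D FFT, which is what lets Radon-type
# transforms be computed and inverted through FFTs.
rng = np.random.default_rng(1)
image = rng.random((64, 64))

# Radon projection at angle 0: integrate (sum) along the vertical axis.
projection = image.sum(axis=0)
slice_from_projection = np.fft.fft(projection)

# Central horizontal slice (row 0) of the 2D FFT of the same image.
slice_from_2d_fft = np.fft.fft2(image)[0, :]

max_error = np.max(np.abs(slice_from_projection - slice_from_2d_fft))
# max_error is at floating-point noise level: the two slices coincide.
```

For other angles the slice runs through the 2D spectrum at the corresponding orientation, which is why a transform built on FFTs can evaluate many projection directions cheaply and, as the abstract notes, be inverted exactly.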
Procedia PDF Downloads 276