Search results for: image stabilization
1520 Surgical Imaging in Ancient Egypt
Authors: Ahmed Hefny Mohamed El-Badwy
Abstract:
This research aims to study the science of surgery and its imaging in ancient Egypt: how surgical cases were diagnosed, whether caused by injury or by disease requiring surgical intervention, and how they were treated. The ancient Egyptian physician tried to move beyond magical and theological thinking toward a stand-alone experimental science. Physicians were able to distinguish between diseases and divided them into internal and external diseases, a division that persists in modern medicine to this day. There is no evidence of the extent of human knowledge of medicine and surgery in prehistoric times except skeletal remains, yet it is not far-fetched that people of those times were familiar with some means of treatment. Surgery in the Stone Age was rudimentary: flint, trimmed in a particular way, was used as a lancet to slit and open the skin, and wooden tree branches were used to make splints to treat bone fractures. Surgery developed further when copper was discovered, which contributed to the advancement of Egyptian civilization, and more advanced tools, such as the knife and the scalpel, appeared in the operating theater. There is evidence of surgery performed in ancient Egypt during the dynastic period (3200–323 BC). The climate and environmental conditions preserved medical papyri and human remains that confirm the Egyptians' knowledge of surgical methods, including sedation. The ancient Egyptians attained great skill in surgery, as evidenced by scenes depicting pathological conditions and surgical procedures; the image alone, however, is not sufficient to prove a pathology, its presence in ancient Egypt, and its method of treatment. A number of medical papyri, especially the Edwin Smith and Ebers papyri, prove the ancient Egyptian surgeon's knowledge of conditions requiring surgical intervention; otherwise, their diagnosis and method of treatment would not be described with such accuracy in these texts. 
Some surgeries are described in the surgical section of the Ebers papyrus (recipes 863 to 877). The level of surgery in ancient Egypt was high: they performed operations such as hernia and aneurysm repair. However, no lengthy account of the various surgeries has reached us; the surgeon usually wrote only "treated surgically". It is evident from the Ebers papyrus that sharp surgical tools and cautery were used in operations where bleeding was expected, such as hernias, arterial sacs, and tumors.
Keywords: ancient egypt, egypt, archaeology, the ancient egyptian
Procedia PDF Downloads 69
1519 Automatic Differential Diagnosis of Melanocytic Skin Tumours Using Ultrasound and Spectrophotometric Data
Authors: Kristina Sakalauskiene, Renaldas Raisutis, Gintare Linkeviciute, Skaidra Valiukeviciene
Abstract:
Cutaneous melanoma is a melanocytic skin tumour (MST) with a very poor prognosis: it is highly resistant to treatment and tends to metastasize. Melanoma thickness is one of the most important biomarkers for disease stage, prognosis, and surgery planning. In this study, we hypothesized that automatic analysis of spectrophotometric images and high-frequency 2D ultrasound data can improve the differential diagnosis of cutaneous melanoma and provide additional information about tumour penetration depth. This paper presents a novel complex automatic system for non-invasive melanocytic skin tumour differential diagnosis and penetration-depth evaluation. The system comprises region-of-interest segmentation in spectrophotometric images and high-frequency ultrasound data, quantitative parameter evaluation, informative feature extraction, and classification with a linear regression classifier. Segmentation of the melanocytic skin tumour region in the ultrasound image is based on calculation of the parametric integrated backscattering coefficient. Segmentation of the optical image is based on Otsu thresholding. In total, 29 quantitative tissue characterization parameters were evaluated from the ultrasound data (11 acoustical, 4 shape, and 15 textural parameters), along with 55 quantitative features of dermatoscopic and spectrophotometric images (using total melanin, dermal melanin, blood, and collagen SIAgraphs acquired with the SIAscope spectrophotometric imaging device). In total, 102 melanocytic skin lesions (including 43 cutaneous melanomas) were examined using the SIAscope and an ultrasound system with a 22 MHz center-frequency single-element transducer. The diagnosis and Breslow thickness (pT) of each MST were evaluated during routine histological examination after excision and used as a reference. 
The results of this study show that automatic analysis of spectrophotometric and high-frequency ultrasound data can improve the non-invasive classification accuracy of early-stage cutaneous melanoma and provide supplementary information about tumour penetration depth.
Keywords: cutaneous melanoma, differential diagnosis, high-frequency ultrasound, melanocytic skin tumours, spectrophotometric imaging
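As an illustration of the Otsu thresholding step cited above for optical-image segmentation, here is a minimal sketch of the classic algorithm (maximize between-class variance over all gray levels); the function and variable names are illustrative, not the authors' code:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level maximizing between-class variance.

    `gray` is a 2-D array of integer intensities in [0, 255].
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()  # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * prob[:t]).sum() / w0  # class means
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# A bimodal toy "lesion" image: dark background (~20), bright region (~200)
img = np.full((32, 32), 20, dtype=np.int64)
img[8:24, 8:24] = 200
t = otsu_threshold(img)
mask = img >= t  # segmented region of interest
```

Production code would typically call a library routine (e.g. scikit-image's `threshold_otsu`) rather than the explicit loop shown here.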
Procedia PDF Downloads 270
1518 Multi-Dimensional Experience of Processing Textual and Visual Information: Case Study of Allocations to Places in the Mind’s Eye Based on Individual’s Semantic Knowledge Base
Authors: Joanna Wielochowska, Aneta Wielochowska
Abstract:
While the relationship between scientific areas such as cognitive psychology, neurobiology, and philosophy of mind has been emphasized in recent decades of research, concepts and discoveries made in these fields overlap and complement each other in their quest for answers to similar questions. The object of the following case study is to describe, analyze, and illustrate the nature and characteristics of a certain cognitive experience that appears to display features of synaesthesia, or rather of high-level synaesthesia (ideasthesia). The research was conducted on its two authors, monozygotic twins (both polysynaesthetes) who experience involuntary associations of identical nature. The authors attempted to identify which cognitive and conceptual dependencies may guide this experience. Operating with self-introduced nomenclature, the described phenomenon, the multi-dimensional processing of textual and visual information, is defined as a relationship that involuntarily and immediately couples content introduced by means of text or image with a sensation of appearing in a certain place in the mind’s eye. More precisely: (I) defining a concept introduced by textual content during the activity of reading or writing, or (II) defining a concept introduced by visual content during the activity of looking at an image, with the simultaneous sensation of being allocated to a given place in the mind’s eye. A place can then be defined as a cognitive representation of a certain concept. While processing information, a person immediately and involuntarily feels as if appearing in a certain place themselves, just like a character in a story, ‘observing’ a venue or scenery from one or more perspectives and angles. That forms a unique and unified experience, constituting a background mental landscape of the text or image being looked at. 
We came to the conclusion that semantic allocations to a given place can be divided into categories and subcategories and are naturally linked with an individual’s semantic knowledge base. A place can be defined as a representation of one’s unique idea of a given concept as established in their semantic knowledge base. The multi-level structure of the selectivity of places in the mind’s eye, as a reaction to given information (one stimulus), draws comparisons to structures and patterns found in botany. Double-flowered varieties of flowers and the whorled arrangement characteristic of the components of some flower species were given as an illustrative example. A composition of petals that fan out from a single point and wrap around a stem inspired the idea that, just as in nature, in philosophy of mind there are patterns driven by a logic specific to a given phenomenon. The study intertwines terms perceived through a philosophical lens, such as the definition of meaning, the subjectivity of meaning, the mental atmosphere of places, and others. Analysis of this rare experience aims to contribute to the constantly developing theoretical framework of the philosophy of mind and to influence how the human semantic knowledge base, and the processing of given content in terms of distinguishing between information and meaning, is researched.
Keywords: information and meaning, information processing, mental atmosphere of places, patterns in nature, philosophy of mind, selectivity, semantic knowledge base, senses, synaesthesia
Procedia PDF Downloads 124
1517 Surgical Imaging in Ancient Egypt
Authors: Haitham Nabil Zaghlol Hasan
Abstract:
This research aims to study the science of surgery and its imaging in ancient Egypt: how surgical cases were diagnosed, whether caused by injury or by disease requiring surgical intervention, and how they were treated. The ancient Egyptian physician tried to move beyond magical and theological thinking toward a stand-alone experimental science. Physicians were able to distinguish between diseases and divided them into internal and external diseases, a division that persists in modern medicine to this day. There is no evidence of the extent of human knowledge of medicine and surgery in prehistoric times except skeletal remains, yet it is not far-fetched that people of those times were familiar with some means of treatment. Surgery in the Stone Age was rudimentary: flint, trimmed in a particular way, was used as a lancet to slit and open the skin, and wooden tree branches were used to make splints to treat bone fractures. Surgery developed further when copper was discovered, which contributed to the advancement of Egyptian civilization, and more advanced tools, such as the knife and the scalpel, appeared in the operating theater. There is evidence of surgery performed in ancient Egypt during the dynastic period (3200–323 BC). The climate and environmental conditions preserved medical papyri and human remains that confirm the Egyptians' knowledge of surgical methods, including sedation. The ancient Egyptians attained great skill in surgery, as evidenced by scenes depicting pathological conditions and surgical procedures; the image alone, however, is not sufficient to prove a pathology, its presence in ancient Egypt, and its method of treatment. A number of medical papyri, especially the Edwin Smith and Ebers papyri, prove the ancient Egyptian surgeon's knowledge of conditions requiring surgical intervention; otherwise, their diagnosis and method of treatment would not be described with such accuracy in these texts. 
Some surgeries are described in the surgical section of the Ebers papyrus (recipes 863 to 877). The level of surgery in ancient Egypt was high: they performed operations such as hernia and aneurysm repair. However, no lengthy account of the various surgeries has reached us; the surgeon usually wrote only: "treated surgically". It is evident from the Ebers papyrus that sharp surgical tools and cautery were used in operations where bleeding was expected, such as hernias, arterial sacs, and tumors.
Keywords: egypt, ancient egypt, civilization, archaeology
Procedia PDF Downloads 69
1516 Application of Functionalized Magnetic Particles as Demulsifier for Oil‐in‐Water Emulsions
Authors: Hamideh Hamedi, Nima Rezaei, Sohrab Zendehboudi
Abstract:
Separating emulsified oil contamination from wastewater or produced water is of interest to various industries, and the application of magnetic particles (MPs) for separating dispersed and emulsified oil from wastewater is becoming more popular. MPs must be stabilized by developing a coating layer on their surfaces to prevent agglomeration and enhance dispersibility. In this research, we study the effects of the coating material, size, and concentration of iron oxide MPs on oil separation efficiency, using oil adsorption capacity measurements. We functionalize both micro- and nanoparticles of Fe3O4 using sodium dodecyl sulfate (SDS) as an anionic surfactant, cetyltrimethylammonium bromide (CTAB) as a cationic surfactant, and stearic acid (SA). The chemical structures and morphologies of these particles are characterized using scanning electron microscopy (SEM), transmission electron microscopy (TEM), and energy-dispersive X-ray spectroscopy (EDX). The oil-water separation results indicate that a low dosage of the CTAB-coated magnetic nanoparticles (0.5 g/L MNP-CTAB) yields the highest oil adsorption capacity (nearly 100%) for a 1000 ppm dodecane-in-water emulsion containing ultra-small droplets (250–300 nm), while the separation efficiency of the same dosage of bare MNPs is around 57.5%. Demulsification results for magnetic microparticles (MMPs) likewise reveal that functionalizing the particles with CTAB increases oil removal efficiency from 86.3% for bare MMPs to 92% for MMP-CTAB. Comparing the results for the different coating materials implies that the major interaction is electrostatic attraction between negatively charged oil droplets and positively charged MNP-CTAB and MMP-CTAB. Furthermore, the synthesized nanoparticles can be recycled and reused; after ten cycles, the oil adsorption capacity decreases only slightly, to near 95%. 
In conclusion, functionalized magnetic particles with high oil separation efficiency could be used effectively in the treatment of oily wastewater. Finally, optimization of the adsorption process is required, considering the effective system variables and fluid properties.
Keywords: oily wastewater treatment, emulsions, oil-water separation, adsorption, magnetic nanoparticles
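The removal-efficiency percentages quoted above follow from a simple mass balance on oil concentration before and after magnetic separation. A short sketch (the 425 ppm residual concentration is an illustrative assumption chosen to reproduce the ~57.5% figure, not a measured value from the study):

```python
def oil_removal_efficiency(c_initial_ppm, c_final_ppm):
    """Percentage of emulsified oil removed, from concentrations in ppm."""
    return 100.0 * (c_initial_ppm - c_final_ppm) / c_initial_ppm

# Hypothetical example: a 1000 ppm dodecane-in-water emulsion treated with
# bare MNPs, leaving 425 ppm oil, corresponds to the ~57.5% quoted above.
eff = oil_removal_efficiency(1000.0, 425.0)
```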
Procedia PDF Downloads 107
1515 Object-Scene: Deep Convolutional Representation for Scene Classification
Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang
Abstract:
Traditional image classification is based on encoding schemes (e.g., Fisher Vector, Vector of Locally Aggregated Descriptors) over low-level image features (e.g., SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. In scene classification, objects are scattered with different sizes, categories, layouts, and numbers, so it is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while incorporating object-centric and scene-centric information. First, to exploit object-centric and scene-centric information, two CNNs trained separately on the ImageNet and Places datasets are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. Analyzing the performance of the CNNs at multiple scales shows that each CNN works better in a different scale range; a scale-wise CNN adaptation is reasonable, since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the per-scale representations are merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since a different number of features is extracted at each scale. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, a simple yet efficient way to classify the scene categories. 
Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which suggests that the representation can be applied to other visual recognition tasks.
Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization
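The abstract does not spell out the exact form of "scale-wise normalization followed by average pooling"; one plausible reading, sketched below under that assumption, is to L2-normalize each scale's Fisher vector (so scales with different feature counts carry equal weight) and then average-pool across scales:

```python
import numpy as np

def scale_wise_merge(per_scale_vectors):
    """Merge per-scale descriptors into one vector.

    Each scale's vector is L2-normalized, then the normalized vectors
    are average-pooled into a single representation.
    """
    normed = []
    for v in per_scale_vectors:
        n = np.linalg.norm(v)
        normed.append(v / n if n > 0 else v)
    return np.mean(normed, axis=0)

# Two hypothetical per-scale Fisher vectors with very different magnitudes
rng = np.random.default_rng(0)
fvs = [rng.standard_normal(8) * s for s in (1.0, 10.0)]
merged = scale_wise_merge(fvs)
```

A useful property of this scheme: rescaling any single scale's vector (e.g., because more local features were pooled there) leaves the merged descriptor unchanged.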
Procedia PDF Downloads 331
1514 Riesz Mixture Model for Brain Tumor Detection
Authors: Mouna Zitouni, Mariem Tounsi
Abstract:
This research introduces an application of the Riesz mixture model to medical image segmentation for accurate diagnosis and treatment of brain tumors. We propose a pixel classification technique based on the Riesz distribution, derived from an extended Bartlett decomposition; to our knowledge, this is the first study to address this approach. The Expectation-Maximization (EM) algorithm is implemented for parameter estimation. A comparative analysis, using both synthetic and real brain images, demonstrates the superiority of the Riesz model over a recent method based on the Wishart distribution.
Keywords: EM algorithm, segmentation, Riesz probability distribution, Wishart probability distribution
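The EM machinery used above has the same E-step/M-step structure regardless of the component density. As a stand-in (the Riesz density itself is beyond a short sketch), here is EM for a 1-D two-component Gaussian mixture; only the component density would change for the Riesz case:

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """EM for a 1-D two-component Gaussian mixture (illustrative stand-in).

    Returns mixing weights, means, and variances after `iters` rounds.
    """
    mu = np.array([x.min(), x.max()], dtype=float)   # spread-out init
    var = np.array([x.var(), x.var()], dtype=float)
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component per sample
        pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = pi * pdf
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances from responsibilities
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Two well-separated synthetic "tissue classes"
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(10.0, 1.0, 500)])
pi, mu, var = em_two_gaussians(x)
```

For pixel classification, each pixel would then be assigned to the component with the highest posterior responsibility.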
Procedia PDF Downloads 17
1513 The Impact of Social Customer Relationship Management on Brand Loyalty and Reducing Co-Destruction of Value by Customers
Authors: Sanaz Farhangi, Habib Alipour
Abstract:
The main objective of this paper is to explore how social media, as a critical platform, can increase interactions between the tourism sector and its stakeholders. Nowadays, human interactions through social media in many areas, especially tourism, provide various experiences and information that users share and discuss. Organizations and firms can gain customer loyalty through social media platforms despite consumers' negative image of a product or service; such a negative image can be reduced through constant communication between producers and consumers, especially with the availability of new technology. Effective management of customer relationships in social media therefore creates an extraordinary opportunity for organizations to enhance value and brand loyalty. In this study, we seek to develop a conceptual model addressing how factors such as social media, social customer relationship management (SCRM), and customer engagement affect brand loyalty and diminish co-destruction of value. To support this model, we scanned the relevant literature across a comprehensive range of ideas in marketing and customer relationship management. This allows us to explore whether there is any relationship between social media, customer engagement, SCRM, co-destruction, and brand loyalty. SCRM is explored as a moderating factor in the relationship between customer engagement and social media in securing brand loyalty and diminishing co-destruction of the company’s value. Although numerous studies have been conducted on the impact of social media on customers and marketing behavior, there are few studies investigating the relationship between SCRM, brand loyalty, and negative e-WOM, which results in the reduction of the co-destruction of value by customers. This study is an important contribution to the tourism and hospitality industry in orienting customer behavior in social media using SCRM. 
This study revealed that through social media platforms, management can generate discussion and engagement about products and services, which helps customers feel positively towards the firm and its products. The study also revealed that customers’ complaints through social media have a multi-purpose effect: they can degrade the value of the product, but at the same time they motivate the firm to overcome its weaknesses and correct its shortcomings. The study also has implications for managers and practitioners, especially in the tourism and hospitality sector. Future research directions and limitations of the research are also discussed.
Keywords: brand loyalty, co-destruction, customer engagement, SCRM, tourism and hospitality
Procedia PDF Downloads 116
1512 Elucidating the Genetic Determinism of Seed Protein Plasticity in Response to the Environment Using Medicago truncatula
Authors: K. Cartelier, D. Aime, V. Vernoud, J. Buitink, J. M. Prosperi, K. Gallardo, C. Le Signor
Abstract:
Legumes can produce protein-rich seeds without nitrogen fertilizer through root symbiosis with nitrogen-fixing rhizobia. Rich in lysine, these proteins are used for human nutrition and animal feed. However, the instability of seed protein yield and quality under environmental fluctuations limits the wider use of legumes such as pea. Breeding efforts are needed to optimize and stabilize seed nutritional value, which requires identifying the genetic determinism of seed protein plasticity in response to the environment. Towards this goal, we studied the plasticity of the protein content and composition of seeds from a collection of 200 Medicago truncatula ecotypes grown under four controlled conditions (optimal, drought, and winter/spring sowing). A quantitative analysis of one-dimensional protein profiles of these mature seeds was performed, and plasticity indices were calculated for each abundant protein band. Genome-wide association studies (GWAS) on these data identified major GWAS hotspots, from which a list of candidate genes was obtained. A gene ontology enrichment analysis revealed an over-representation of genes involved in several amino acid metabolic pathways, which led us to propose that environmental variations are likely to modulate the amino acid balance, thus impacting seed protein composition. The selection of candidate genes controlling the plasticity of seed protein composition was refined using transcriptomics data from developing Medicago truncatula seeds. The pea orthologs of key genes were identified for functional studies by means of TILLING (Targeting Induced Local Lesions in Genomes) lines in this crop. We will present how this study highlighted mechanisms that could govern seed protein plasticity, providing new cues towards the stabilization of legume seed quality.
Keywords: GWAS, Medicago truncatula, plasticity, seed, storage proteins
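The abstract does not specify which plasticity index was computed for each protein band; one common choice, sketched here purely as an assumption, is the coefficient of variation of a band's abundance across the four growth conditions:

```python
import numpy as np

def plasticity_index(band_abundance_by_env):
    """Coefficient of variation of a protein band's abundance across
    environments. This is one common plasticity measure; the study's
    exact index is not stated, so this is an assumption.
    """
    a = np.asarray(band_abundance_by_env, dtype=float)
    return a.std(ddof=1) / a.mean()

# Illustrative abundances of one band under the four conditions
# (optimal, drought, winter sowing, spring sowing) -- not real data.
pi_band = plasticity_index([1.00, 0.60, 0.95, 0.85])
```

A band with identical abundance in every environment has an index of zero (no plasticity); higher values indicate stronger environmental responsiveness.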
Procedia PDF Downloads 142
1511 Artificial Intelligence Based Method in Identifying Tumour Infiltrating Lymphocytes of Triple Negative Breast Cancer
Authors: Nurkhairul Bariyah Baharun, Afzan Adam, Reena Rahayu Md Zin
Abstract:
The tumor microenvironment (TME) in breast cancer is mainly composed of cancer cells, immune cells, and stromal cells. The interaction between cancer cells and their microenvironment plays an important role in tumor development, progression, and treatment response. The TME in breast cancer includes tumor-infiltrating lymphocytes (TILs), which are implicated in killing tumor cells. TILs can be found in the tumor stroma (sTILs) and within the tumor (iTILs), and TILs in triple-negative breast cancer (TNBC) have been demonstrated to have prognostic and potentially predictive value. The International Immuno-Oncology Biomarker Working Group (TIL-WG) has developed a guideline focused on the assessment of sTILs using hematoxylin and eosin (H&E)-stained slides. Following the guideline, pathologists assess sTILs by "eyeballing" the H&E-stained slide. This method has low precision and poor interobserver reproducibility, is time-consuming for a comprehensive evaluation, and counts only sTILs. The TIL-WG has therefore recommended that any algorithm for computational assessment of TILs utilize the guidelines provided, to overcome the limitations of manual assessment and thus deliver highly accurate and reliable TILs detection and classification for reproducible and quantitative measurement. This study was carried out to develop a TNBC digital whole-slide-image (WSI) dataset from H&E-stained and IHC (CD4+ and CD8+)-stained slides. TNBC cases were retrieved from the database of the Department of Pathology, Hospital Canselor Tuanku Muhriz (HCTM). TNBC cases diagnosed between 2010 and 2021 with no history of other cancer and with tissue blocks available were included in the study (n=58). Tissue blocks were sectioned at approximately 4 µm for H&E and IHC staining. The H&E staining was performed according to a well-established protocol. 
Indirect IHC staining was also performed on the tissue sections using the protocol from the Diagnostic BioSystems PolyVue™ Plus Kit, USA. The slides were stained with rabbit monoclonal CD8 antibody (SP16) and rabbit monoclonal CD4 antibody (EP204). The selected and quality-checked slides were then scanned using a high-resolution whole-slide scanner (Pannoramic DESK II DW slide scanner) to digitize the tissue image at a pixel resolution of 20x magnification. A manual TILs (sTILs and iTILs) assessment was then carried out by two appointed pathologists, who scored TILs from the digital WSIs following the guideline developed by the TIL-WG in 2014, with the result expressed as the percentage of sTILs and iTILs per mm² of stromal and tumour area in the tissue. Following this, we aim to develop an automated digital image scoring framework that incorporates key elements of the manual guidelines (including both sTILs and iTILs), using manually annotated data, for robust and objective quantification of TILs in TNBC. From this study, we have developed a digital dataset of TNBC H&E- and IHC (CD4+ and CD8+)-stained slides. We hope that an automated scoring method can provide quantitative and interpretable TILs scores that correlate with the manual pathologist-derived sTILs and iTILs scores and thus have potential prognostic implications.
Keywords: automated quantification, digital pathology, triple negative breast cancer, tumour infiltrating lymphocytes
Procedia PDF Downloads 116
1510 Assessment of Seeding and Weeding Field Robot Performance
Authors: Victor Bloch, Eerikki Kaila, Reetta Palva
Abstract:
Field robots are an important tool for enhancing the efficiency and decreasing the climatic impact of food production. A number of commercial field robots exist; however, since this technology is still new, the robots' advantages and limitations, as well as methods for using them optimally, are still unclear. In this study, the performance of a commercial field robot for seeding and weeding was assessed. A 2-ha research sugar beet field with 0.5 m row width was used for testing, which included robotic sowing of sugar beet and weeding five times during the first two months of growth. About three and five percent of the field were used as untreated and chemically weeded control areas, respectively. Plant detection was based on the exact plant location, without image processing. The robot was equipped with six seeding and weeding tools, including passive between-row harrow hoes and active hoes cutting within rows between the plants, and it moved at a maximum speed of 0.9 km/h. The robot's performance was assessed by image processing. Field images were collected by a 27-Mpixel action camera mounted on the robot at a height of 2 m and by a drone with a 16-Mpixel camera flying at a height of 4 m. To detect plants and weeds, a YOLO model was trained with transfer learning from two available datasets. A preliminary analysis of the entire field showed that in the areas treated by the robot, the average weed density varied across the field from 6.8 to 9.1 weeds/m² (compared with 0.8 in the chemically treated area and 24.3 in the untreated area), the average weed density inside rows was 2.0-2.9 weeds/m (compared with 0 in the chemically treated area), and the emergence rate was 90-95%. This information about the robot's performance is highly important for the application of robotics to field tasks. 
With the help of the developed method, performance can be assessed several times during growth, according to the robotic weeding frequency. When the method is used by farmers, they can know the field condition and the efficiency of the robotic treatment across the whole field. Farmers and researchers can develop optimal strategies for using the robot, such as seeding and weeding timing, robot settings, and plant and field parameters and geometry. Robot producers gain quantitative information from an actual working environment and can improve the robots accordingly.
Keywords: agricultural robot, field robot, plant detection, robot performance
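The density and emergence figures above reduce to simple ratios once the detector has counted weeds and plants in a frame of known ground coverage. A minimal sketch (the frame area and counts are hypothetical, chosen only to land inside the reported ranges):

```python
def weed_density(num_weed_detections, image_area_m2):
    """Weeds per square metre, from detector counts in a georeferenced frame."""
    return num_weed_detections / image_area_m2

def emergence_rate(plants_detected, seeds_sown):
    """Percentage of sown seeds that emerged as detectable plants."""
    return 100.0 * plants_detected / seeds_sown

# Hypothetical frame covering 1.5 m x 1.0 m of field with 12 weed detections
density = weed_density(12, 1.5 * 1.0)   # falls in the reported 6.8-9.1 range

# Hypothetical row segment: 931 plants detected from 980 seeding positions
rate = emergence_rate(931, 980)         # falls in the reported 90-95% range
```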
Procedia PDF Downloads 87
1509 Chloroform-Formic Acid Solvent Systems for Nanofibrous Polycaprolactone Webs
Authors: I. Yalcin Enis, J. Vojtech, T. Gok Sadikoglu
Abstract:
In this study, polycaprolactone (PCL) was dissolved in a chloroform:ethanol solvent system at a concentration of 18% w/v. Separately, 1, 2, 4, and 6 droplets of formic acid were added to prepared 10 ml PCL-chloroform:ethanol solutions. Fibrous webs were produced by the electrospinning technique. The morphology of the webs was investigated using scanning electron microscopy (SEM), and fiber diameters were measured with the ImageJ software. The effect of adding formic acid to the commonly used chloroform solvent on fiber morphology was examined.
Keywords: chloroform, electrospinning, formic acid, polycaprolactone, fiber
Procedia PDF Downloads 276
1508 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels
Authors: Joshua Buli, David Pietrowski, Samuel Britton
Abstract:
Processing SAR data usually requires constraining the extent in the Fourier domain, as well as approximations and interpolations onto a planar surface, to form an exploitable image. This entails a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional plane imagery. The data can be interpolated into a ground-plane projection, with or without terrain as a component, to view SAR data in an image domain comparable to what a human would see and so ease interpretation. An alternative but computationally heavy method that makes use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched filtering, motion compensation, etc.), the data is then range-compressed, and lastly, the contribution of each pulse to each specific point in space is determined by searching the time-history data for the reflectivity values of each pulse, summed over the entire collection. This yields a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing now allow this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase-history data size and 3D point-cloud size. Backprojection algorithms are embarrassingly parallel, since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independent of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing for an accurate reflectivity representation of a scene. 
Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model or even a cloud of points (collected from any sensor capable of measuring ground topography) can be used as a basis for the backprojection technique. This technique minimizes interpolations and modifications of the raw data, maintaining maximum data integrity. This innovative processing allows SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization
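The per-voxel sum over pulses described above can be sketched as follows. This is a drastically simplified CPU/NumPy illustration of the backprojection inner loop (no phase correction, nearest-bin lookup instead of interpolation, invented variable names); a GPU version would assign each voxel to its own thread:

```python
import numpy as np

def backproject(voxels, platform_pos, range_profiles, range_bins):
    """Sum each pulse's range-compressed return into every 3-D voxel.

    voxels:         (V, 3) reference points (any surface, not just a plane)
    platform_pos:   (P, 3) antenna position at each pulse
    range_profiles: (P, R) complex range-compressed samples per pulse
    range_bins:     (R,)  range corresponding to each profile sample
    """
    image = np.zeros(len(voxels), dtype=complex)
    for p in range(len(platform_pos)):
        # Range from this pulse's antenna position to every voxel;
        # each voxel's update is independent, hence GPU-friendly.
        r = np.linalg.norm(voxels - platform_pos[p], axis=1)
        idx = np.clip(np.searchsorted(range_bins, r), 0, len(range_bins) - 1)
        image += range_profiles[p, idx]
    return image

# Toy collection: one pulse from the origin, a single return at range 50 m
bins = np.linspace(0.0, 100.0, 201)
profiles = np.zeros((1, 201), dtype=complex)
profiles[0, np.searchsorted(bins, 50.0)] = 1.0
vox = np.array([[50.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
img = backproject(vox, np.zeros((1, 3)), profiles, bins)
# The voxel at 50 m picks up the return; the voxel at 10 m does not.
```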
Procedia PDF Downloads 85
1507 Intrastromal Donor Limbal Segments Implantation as a Surgical Treatment of Progressive Keratoconus: Clinical and Functional Results
Authors: Mikhail Panes, Sergei Pozniak, Nikolai Pozniak
Abstract:
Purpose: To evaluate the effectiveness of intrastromal donor limbal segment implantation for the treatment of progressive keratoconus, with respect to the main characteristics of corneal endothelial cells. Setting: Outpatient ophthalmic clinic. Methods: Twenty patients (20 eyes) with progressive keratoconus of Amsler grade II-III were recruited. The worse eye was treated with transplantation of donor limbal segments into the recipient corneal stroma, while the fellow eye was left untreated as a control for functional and morphological changes. Furthermore, twenty patients (20 eyes) without progressive keratoconus were used as controls for corneal endothelial cell changes. All patients underwent a complete ocular examination including uncorrected and corrected distance visual acuity (UDVA, CDVA), slit lamp examination, fundus examination, corneal topography and pachymetry, auto-keratometry, anterior segment optical coherence tomography, and corneal endothelial specular microscopy. Results: After two years, statistically significant improvements in UDVA and CDVA (on average two lines for UDVA and three to four lines for CDVA) were noted. In addition, corneal astigmatism decreased from 5.82 ± 2.64 to 1.92 ± 1.4 D. There were no statistically significant changes in mean spherical equivalent, keratometry, or pachymetry indicators. It should be noted that after two years there were also no significant changes in the number or form of corneal endothelial cells, which can be regarded as stabilization of the disease process. In untreated control eyes, there was a general trend towards worsening of UDVA, CDVA and corneal thickness, while corneal astigmatism increased. Conclusion: Intrastromal donor segment implantation is a safe technique for keratoconus treatment.
Intrastromal donor segments implantation is an efficient procedure to stabilize and improve progressive keratoconus.
Keywords: corneal endothelial cells, intrastromal donor limbal segments, progressive keratoconus, surgical treatment of keratoconus
Procedia PDF Downloads 281
1506 Parallel Version of Reinhard’s Color Transfer Algorithm
Authors: Abhishek Bhardwaj, Manish Kumar Bajpai
Abstract:
An image, with its content and color schema, presents an effective mode of information sharing and processing. By changing its color schema, different visions and prospects are revealed to users. This phenomenon of color transfer is used by social media and other entertainment channels. The algorithm of Reinhard et al. was the first to solve this color transfer problem. In this paper, we make the algorithm efficient by introducing domain parallelism among different processors. We also comment on the factors that affect the speedup of this problem. Finally, by analyzing the experimental data, we propose a novel and efficient parallel Reinhard's algorithm.
Keywords: Reinhard et al.'s algorithm, color transfer, parallelism, speedup
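For readers unfamiliar with the underlying method, the core of Reinhard et al.'s transfer is matching per-channel mean and standard deviation. A minimal sketch follows; note that the original algorithm works in the decorrelated lαβ color space, while this illustration applies the same statistics matching directly to RGB channels for brevity.

```python
import numpy as np

def reinhard_transfer(source, target):
    """Per-channel mean/std matching in Reinhard et al.'s style.

    Simplification: applied to RGB directly; the original first converts
    to the decorrelated lab (l-alpha-beta) space."""
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(src)
    for ch in range(3):                  # each channel is independent work
        s_mu, s_sd = src[..., ch].mean(), src[..., ch].std()
        t_mu, t_sd = tgt[..., ch].mean(), tgt[..., ch].std()
        out[..., ch] = (src[..., ch] - s_mu) / (s_sd + 1e-12) * t_sd + t_mu
    return out

rng = np.random.default_rng(0)
src = rng.uniform(0, 255, (64, 64, 3))   # synthetic source image
tgt = rng.uniform(50, 200, (64, 64, 3))  # synthetic target palette
result = reinhard_transfer(src, tgt)
```

Because every channel (and every pixel within a channel) is transformed independently, the channel loop is a natural axis along which to introduce the domain parallelism discussed in the paper.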
Procedia PDF Downloads 614
1505 Remote Sensing Application in Environmental Researches: Case Study of Iran Mangrove Forests Quantitative Assessment
Authors: Neda Orak, Mostafa Zarei
Abstract:
Environmental assessment is an important stage in environmental management, and various methods and techniques have been produced and implemented for it. Remote sensing (RS) is widely used in many scientific and research fields such as geology, cartography, geography, agriculture, forestry, land use planning, and the environment. It can show cyclical changes in earth-surface objects, and it can delineate the limits of earth phenomena on the basis of recorded changes and deviations in electromagnetic reflectance. This research assesses mangrove forests using RS techniques. Quantitative analysis of the mangrove forests in the Basatin and Bidkhoon estuaries was the aim of this research. It was carried out using Landsat satellite images from 1975-2013, matched to ground control points. These mangroves are the last distribution in the northern hemisphere, so the work can provide a good background for better management of this important ecosystem. Landsat has provided valuable images for earth change detection. This research used the MSS, TM, ETM+, and OLI sensors from 1975, 1990, 2000, and 2003-2013. Changes were studied after essential corrections such as error fixing, band combination, and georeferencing, using the 2012 image as the base, by maximum likelihood supervised classification and the IPVI index. A 2004 Google Earth image and ground points collected by GPS (2010-2012) were used to validate the changes obtained from the satellite images. Results showed the mangrove area in Bidkhoon in 2012 was 1119072 m2 by GPS, 1231200 m2 by maximum likelihood supervised classification, and 1317600 m2 by IPVI. The Basatin areas are, respectively: 466644 m2, 88200 m2, and 63000 m2. Final results show the forests have declined naturally, and in Basatin due to human activities. The loss was offset by planting over many years, although the declining trend has resumed in recent years. Thus, satellite images show a high capability for estimating all such environmental processes.
This research showed a high correlation between the images and indexes such as IPVI and NDVI and the ground control points.
Keywords: IPVI index, Landsat sensor, maximum likelihood supervised classification, Nayband National Park
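For reference, the IPVI used above is computed from the near-infrared and red bands as NIR / (NIR + Red), which is algebraically equal to (NDVI + 1) / 2 and bounded to [0, 1]. A small sketch with hypothetical reflectance values (the vegetation threshold is illustrative, not the study's value):

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index, range [-1, 1].
    return (nir - red) / (nir + red)

def ipvi(nir, red):
    # Infrared Percentage Vegetation Index, range [0, 1],
    # algebraically equal to (NDVI + 1) / 2.
    return nir / (nir + red)

# Hypothetical reflectance bands (e.g., Landsat TM band 4 = NIR, band 3 = red).
nir = np.array([[0.5, 0.6], [0.4, 0.3]])
red = np.array([[0.1, 0.2], [0.3, 0.25]])

mask = ipvi(nir, red) > 0.6     # simple vegetation mask; threshold is illustrative
```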
Procedia PDF Downloads 293
1504 Detecting Tomato Flowers in Greenhouses Using Computer Vision
Authors: Dor Oppenheim, Yael Edan, Guy Shani
Abstract:
This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination conditions, complex growth conditions, and different flower sizes. The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real-world difficulties in a greenhouse, which include varying lighting conditions, shadowing, and occlusion, while considering the computational limitations of the simple processor in the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images. Then, segmentation on the hue, saturation and value channels is performed accordingly, and classification is done according to the size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel, using two different RGB cameras: an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various periods along the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle of the images, periods throughout the day, different cameras, and thresholding types were performed. Precision, recall and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than any other angle. Acquiring images in the afternoon resulted in the best precision and recall results.
Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best results in precision and recall, and the best F1 score. The precision and recall average for all the images when using these values was 74% and 75% respectively with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.
Keywords: agricultural engineering, image processing, computer vision, flower detection
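The hue-band segmentation reported above (hue values of 0.12-0.18) can be illustrated with a small sketch. This is not the authors' implementation: the RGB-to-hue conversion is written out by hand in NumPy, and the two test pixels are synthetic.

```python
import numpy as np

def rgb_to_hue(img):
    """Hue channel in [0, 1) from an RGB float image with values in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx = img.max(axis=-1)
    mn = img.min(axis=-1)
    d = np.where(mx == mn, 1.0, mx - mn)      # avoid divide-by-zero on gray pixels
    h = np.zeros_like(mx)
    h = np.where(mx == r, (g - b) / d % 6.0, h)
    h = np.where(mx == g, (b - r) / d + 2.0, h)
    h = np.where(mx == b, (r - g) / d + 4.0, h)
    h = np.where(mx == mn, 0.0, h)            # undefined hue -> 0
    return h / 6.0

def yellow_flower_mask(img, lo=0.12, hi=0.18):
    # Hue band reported best in the study for yellow tomato flowers.
    h = rgb_to_hue(img)
    return (h >= lo) & (h <= hi)

# Two test pixels: saturated yellow (hue = 1/6 ~ 0.167) and pure red (hue = 0).
img = np.array([[[1.0, 1.0, 0.0], [1.0, 0.0, 0.0]]])
mask = yellow_flower_mask(img)
```

In practice the same masking is usually done with a library conversion (e.g., an HSV color-space transform) rather than a hand-written one; the hand-written version is shown only to keep the hue interval explicit.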
Procedia PDF Downloads 329
1503 Verification of a Simple Model for Rolling Isolation System Response
Authors: Aarthi Sridhar, Henri Gavin, Karah Kelly
Abstract:
Rolling Isolation Systems (RISs) are simple and effective means to mitigate earthquake hazards to equipment in critical and precious facilities, such as hospitals, network collocation facilities, supercomputer centers, and museums. The RIS works by isolating components from floor acceleration, reducing the inertial forces felt by the subsystem. The RIS consists of two platforms with counter-facing concave surfaces (dishes) in each corner. Steel balls lie inside the dishes and allow relative motion between the top and bottom platforms. Formerly, a mathematical model for the dynamics of RISs was developed using Lagrange's equations (LE) and experimentally validated. A new mathematical model was developed using Gauss's Principle of Least Constraint (GPLC) and verified by comparing impulse response trajectories of the GPLC model and the LE model in terms of the peak displacements and accelerations of the top platform. Mathematical models for the RIS are tedious to derive because of the non-holonomic rolling constraints imposed on the system. However, using Gauss's Principle of Least Constraint to find the equations of motion removes some of the obscurity and yields a system that can be easily extended. Though the GPLC model requires more state variables, the equations of motion are far simpler. The non-holonomic constraint is enforced in terms of accelerations and therefore requires additional constraint stabilization methods to prevent numerical integration from driving the system unstable. The GPLC model allows the incorporation of more physical aspects of the RIS, such as the contribution of the vertical velocity of the platform to the kinetic energy and the mass of the balls. This mathematical model for the RIS is a tool to predict the motion of the isolation platform.
The ability to statistically quantify the expected responses of the RIS is critical in the implementation of earthquake hazard mitigation.
Keywords: earthquake hazard mitigation, earthquake isolation, Gauss’s Principle of Least Constraint, nonlinear dynamics, rolling isolation system
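The need for constraint stabilization mentioned above can be demonstrated on a toy problem. The sketch below is not the RIS model: it uses a unit-mass particle on the circle C(x, y) = x^2 + y^2 - 1 = 0, enforces the constraint at acceleration level, and adds the classic Baumgarte terms 2*alpha*Cdot + beta^2*C. Without them, integration error makes the constraint drift; with them, the drift stays bounded.

```python
import math

def simulate(alpha, beta, steps=20000, dt=1e-3):
    """Explicit-Euler integration of a particle constrained to the unit circle.

    The multiplier lam is chosen so that, analytically,
    d2C/dt2 + 2*alpha*dC/dt + beta**2 * C = 0;
    alpha = beta = 0 reduces to a pure acceleration-level constraint."""
    qx, qy = 1.0, 0.0          # position on the constraint surface
    vx, vy = 0.0, 1.0          # tangential velocity
    for _ in range(steps):
        C = qx * qx + qy * qy - 1.0
        Cdot = 2.0 * (qx * vx + qy * vy)
        lam = -(2.0 * (vx * vx + vy * vy) + 2.0 * alpha * Cdot + beta * beta * C) \
              / (4.0 * (qx * qx + qy * qy))
        ax, ay = lam * 2.0 * qx, lam * 2.0 * qy   # constraint force only
        qx, qy = qx + dt * vx, qy + dt * vy       # explicit Euler
        vx, vy = vx + dt * ax, vy + dt * ay
    return abs(qx * qx + qy * qy - 1.0)           # constraint drift after the run

drift_raw = simulate(alpha=0.0, beta=0.0)    # acceleration-level only: drifts
drift_stab = simulate(alpha=5.0, beta=5.0)   # with Baumgarte stabilization
```

The gain values 5.0 are illustrative; choosing them for a real model is itself a tuning problem.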
Procedia PDF Downloads 250
1502 Machine Learning Approach for Automating Electronic Component Error Classification and Detection
Authors: Monica Racha, Siva Chandrasekaran, Alex Stojcevski
Abstract:
Engineering programs focus on promoting students' personal and professional development by ensuring that students acquire technical and professional competencies during their four-year studies. The traditional engineering laboratory provides an opportunity for students to "practice by doing," and laboratory facilities aid them in obtaining insight into and understanding of their discipline. Due to rapid technological advancements and the current COVID-19 outbreak, traditional labs are transforming into virtual learning environments. Aim: To better understand the limitations of the physical laboratory, this research study aims to use a Machine Learning (ML) algorithm that interfaces with the Augmented Reality HoloLens and predicts image behavior to classify and detect electronic components. The automated electronic component error classification and detection system automatically detects and classifies the position of all components on a breadboard using the ML algorithm. This research will assist first-year undergraduate engineering students in conducting laboratory practices without any supervision. With the help of the HoloLens and the ML algorithm, students will reduce component placement errors on a breadboard and increase the efficiency of simple laboratory practices conducted virtually. Method: Images of breadboards, resistors, capacitors, transistors, and other electrical components will be collected using the HoloLens 2 and stored in a database. The collected image dataset will then be used for training a machine learning model. The raw images will be cleaned, processed, and labeled to facilitate further analysis of component error classification and detection. For instance, when students conduct laboratory experiments, the HoloLens captures images of students placing different components on a breadboard. The images are forwarded to the server for detection in the background.
A hybrid Convolutional Neural Network (CNN) and Support Vector Machine (SVM) algorithm will be used to train on the dataset for object recognition and classification. The convolution layers extract image features, which are then classified using the SVM. With adequately labeled training data, the model will predict and categorize components and assess whether students have placed them correctly. As a result, the data acquired through the HoloLens include images of students assembling electronic components. The system constantly checks whether students position components appropriately on the breadboard and connect them into a functioning circuit. When students misplace any component, the HoloLens predicts the error before the user places the component in the incorrect position and prompts students to correct their mistakes. This hybrid CNN and SVM approach to automating electronic component error classification and detection eliminates component connection problems and minimizes the risk of component damage. Conclusion: These augmented reality smart glasses powered by machine learning provide a wide range of benefits to supervisors, professionals, and students. They help customize the learning experience, which is particularly beneficial in large classes with limited time. The study determines the accuracy with which machine learning algorithms can forecast whether students are making the correct decisions and completing their laboratory tasks.
Keywords: augmented reality, machine learning, object recognition, virtual laboratories
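The hybrid idea described above, convolutional features fed to an SVM classifier, can be sketched compactly. Everything below is illustrative: the filters are fixed by hand rather than learned, the 8x8 "component" images are synthetic, and the SVM is a bare hinge-loss subgradient descent rather than a library implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d_valid(img, kern):
    """Naive valid-mode 2D convolution (correlation), enough for the sketch."""
    kh, kw = kern.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

FILTERS = [np.array([[1.0, -1.0]]),        # responds to vertical edges
           np.array([[1.0], [-1.0]])]      # responds to horizontal edges

def features(img):
    # "Convolution layer": mean absolute filter response, one value per filter.
    return np.array([np.abs(conv2d_valid(img, f)).mean() for f in FILTERS])

def make_image(vertical):
    # Synthetic stand-in for a component photo: a bright stripe plus noise.
    img = rng.normal(0.0, 0.05, (8, 8))
    if vertical:
        img[:, 3:5] += 1.0
    else:
        img[3:5, :] += 1.0
    return img

X = np.array([features(make_image(v)) for v in [True, False] * 40])
y = np.array([1.0, -1.0] * 40)

# Linear SVM trained by subgradient descent on the regularized hinge loss.
w, b, lam, lr = np.zeros(2), 0.0, 1e-3, 0.1
for _ in range(200):
    margins = y * (X @ w + b)
    active = margins < 1.0
    grad_w, grad_b = lam * w, 0.0
    if active.any():
        grad_w = grad_w - (y[active, None] * X[active]).mean(axis=0)
        grad_b = -y[active].mean()
    w -= lr * grad_w
    b -= lr * grad_b

pred = np.sign(X @ w + b)
accuracy = (pred == y).mean()
```

In a real pipeline the feature extractor would be a trained CNN (e.g., on the HoloLens image dataset) and the SVM a library class; the split of work, deep features then a margin classifier, is the part this sketch demonstrates.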
Procedia PDF Downloads 134
1501 Improvement in Blast Furnace Performance Using Softening - Melting Zone Profile Prediction Model at G Blast Furnace, Tata Steel Jamshedpur
Authors: Shoumodip Roy, Ankit Singhania, K. R. K. Rao, Ravi Shankar, M. K. Agarwal, R. V. Ramna, Uttam Singh
Abstract:
The productivity of a blast furnace and the quality of the hot metal produced are significantly dependent on the smoothness and stability of furnace operation. The permeability of the furnace bed, as well as the gas flow pattern, influences the steady control of process parameters. The softening-melting zone that is formed inside the furnace contributes largely to the distribution of the gas flow and the bed permeability. A better shape of the softening-melting zone enhances the performance of the blast furnace, thereby reducing fuel rates and improving furnace life. Therefore, a predictive model of the softening-melting zone profile can be utilized to control and improve furnace operation. The shape of the softening-melting zone depends upon the physical and chemical properties of the agglomerates and iron ore charged into the furnace. Variations in the agglomerate proportion in the burden at G Blast Furnace disturbed the furnace stability. During such circumstances, analysis showed that a W-shaped softening-melting zone profile had formed inside the furnace. The formation of the W-shaped zone resulted in poor bed permeability and non-uniform gas flow. There was a significant increase in heat loss at the lower zone of the furnace. The fuel demand increased, and a huge production loss was incurred. Therefore, visibility of the softening-melting zone profile was necessary in order to proactively optimize the process parameters and thereby operate the furnace smoothly. Using stave temperatures, a model was developed that predicted the shape of the softening-melting zone inside the furnace. It was observed that the furnace operated smoothly when the zone had an inverse-V shape and poorly when it had a W shape. This model helped to control the heat loss, optimize the burden distribution, and lower the fuel rate at G Blast Furnace, TSL Jamshedpur. As a result of furnace stabilization, productivity increased by 10% and the fuel rate was reduced by 80 kg/thm.
Details of the process have been discussed in this paper.
Keywords: agglomerate, blast furnace, permeability, softening-melting
Procedia PDF Downloads 252
1500 Celebrity Culture and Social Role of Celebrities in Türkiye during the 1990s: The Case of Türkiye, Newspaper, Radio, Televison (TGRT) Channel
Authors: Yelda Yenel, Orkut Acele
Abstract:
In a media-saturated world, celebrities have become ubiquitous figures, encountered both in public spaces and within the privacy of our homes, seamlessly integrating into daily life. From Alexander the Great to contemporary media personalities, the image of the celebrity has persisted throughout history, manifesting in various forms and contexts. Over time, as the relationship between society and the market evolved, so too did the roles and behaviors of celebrities. These transformations offer insights into the cultural climate, revealing shifts in habits and worldviews. In Türkiye, the emergence of private television channels brought an influx of celebrities into everyday life, making them a pervasive part of daily routines. To understand modern celebrity culture, it is essential to examine the ideological functions of media within political, economic, and social contexts. Within this framework, celebrities serve as both reflections and creators of cultural values and, at times, act as intermediaries, offering insights into the society of their era. Starting its broadcasting life in 1992 with religious films and religious conversation programs, the Türkiye Newspaper, Radio, Television channel (TGRT) later changed its appearance, slogan, and the celebrities it featured in response to the political atmosphere. Celebrities played a critical role in the transformation from the existing slogan 'Peace has come to the screen' to 'Watch and see what will happen'. Celebrities hold significant roles in society, and their images are produced and circulated by various actors, including media organizations and public relations teams. Understanding these dynamics is crucial for analyzing their influence and impact. This study aims to explore Turkish society in the 1990s, focusing on TGRT and its visual and discursive characteristics regarding celebrity figures such as Seda Sayan.
The first section examines the historical development of celebrity culture and its transformations, guided by the conceptual framework of celebrity studies. The complex and interconnected image of celebrity, as introduced by post-structuralist approaches, plays a fundamental role in making sense of existing relationships. This section traces the existence and functions of celebrities from antiquity to the present day. The second section explores the economic, social, and cultural contexts of 1990s Türkiye, focusing on the media landscape and visibility that became prominent in the neoliberal era following the 1980s. This section also discusses the political factors underlying TGRT's transformation, such as the 1997 military memorandum. The third section analyzes TGRT as a case study, focusing on its significance as an Islamic television channel and the shifts in its public image, categorized into two distinct periods. The channel’s programming, which aligned with Islamic teachings, and the celebrities who featured prominently during these periods became the public face of both TGRT and the broader society. In particular, the transition to a more 'secular' format during TGRT's second phase is analyzed, focusing on changes in celebrity attire and program formats. This study reveals that celebrities are used as indicators of ideology, benefiting from this instrumentalization by enhancing their own fame and reflecting the prevailing cultural hegemony in society.
Keywords: celebrity culture, media, neoliberalism, TGRT
Procedia PDF Downloads 30
1499 Application of Low-order Modeling Techniques and Neural-Network Based Models for System Identification
Authors: Venkatesh Pulletikurthi, Karthik B. Ariyur, Luciano Castillo
Abstract:
System identification from turbulent wakes can provide a tactical advantage: the ability to prepare for and predict the trajectory of an opponent's movements. A low-order modeling technique, POD, is used to predict the object from the wake pattern and is compared with a pre-trained image-recognition neural network (NN) that classifies the wake patterns into objects. It is demonstrated that low-order modeling with POD predicts the objects better than the pre-trained NN by ~30%.
Keywords: bluff body wakes, low-order modeling, neural network, system identification
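A minimal sketch of the POD step may help: the snapshot matrix is decomposed by SVD, the singular values give the modal energies, and a truncated reconstruction yields the low-order model. The traveling-wave "wake" data here are synthetic and stand in for the wake measurements; they are not the study's data.

```python
import numpy as np

# Synthetic snapshot matrix: columns are flow snapshots of a traveling wave,
# which is exactly two spatial modes (sin x and cos x) with time coefficients.
x = np.linspace(0, 2 * np.pi, 64)
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
snapshots = np.array([np.sin(x - ti) for ti in t]).T       # shape (64, 40)
rng = np.random.default_rng(0)
snapshots += 0.01 * rng.normal(size=snapshots.shape)       # measurement noise

# POD via SVD: columns of U are the POD modes, s**2 the modal energies.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)

r = 2                                                      # truncation rank
reduced = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]            # low-order model
err = np.linalg.norm(snapshots - reduced) / np.linalg.norm(snapshots)
```

The retained mode coefficients (`np.diag(s[:r]) @ Vt[:r, :]`) are the low-dimensional signature one would then feed to a classifier or compare against templates for object prediction.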
Procedia PDF Downloads 180
1498 Post 2014 Afghanistan and Its Implications on Pakistan
Authors: Naad-E-Ali Sulehria
Abstract:
This paper unfolds the facts and findings of the Afghan scenario, particularly its implications for Pakistan. At present, the post-2014 withdrawal of US and ISAF combat forces from Afghan soil is one of the most topical issues among analysts of international relations. The current situation in Afghanistan, its future prospects, the elements shaping Afghanistan's internal dynamics, and the exploitation of its resources by other states and non-state actors are discussed accordingly. Moreover, the reasons behind such a paradigm shift in US foreign policy are examined with first-hand knowledge. The paper investigates three questions as its areas of discussion: What is the current image of Afghanistan in today's world? What are its future prospects? And what sort of Afghanistan does Pakistan foresee?
Keywords: Afghanistan, Pakistan, new great game, Taliban
Procedia PDF Downloads 300
1497 Clustering the Wheat Seeds Using SOM Artificial Neural Networks
Authors: Salah Ghamari
Abstract:
In this study, the ability of self-organizing map (SOM) artificial neural networks to cluster wheat seed varieties according to their morphological properties was considered. The SOM is a type of unsupervised competitive learning. Experimentally, five morphological features of 300 seeds (covering three varieties: gaskozhen, Md, and sardari) were obtained using image processing techniques. The results show that the artificial neural network has good performance (90.33% accuracy) in classification of the wheat varieties despite their high similarity. The highest classification accuracy (100%) was achieved for sardari.
Keywords: artificial neural networks, clustering, self organizing map, wheat variety
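A minimal SOM sketch (illustrative only, not the study's implementation) shows the unsupervised competitive learning involved: each sample pulls its best matching unit and that unit's grid neighbors toward it, so distinct varieties settle on distant map units. Two synthetic 2-D clusters stand in for the five morphological features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "varieties": tight clusters of 2-D morphological features.
a = rng.normal([0.2, 0.2], 0.02, (50, 2))
b = rng.normal([0.8, 0.8], 0.02, (50, 2))
data = np.vstack([a, b])

n_units = 10                                   # 1-D map of 10 neurons
weights = rng.uniform(0, 1, (n_units, 2))
grid = np.arange(n_units)

for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)                # decaying learning rate
    sigma = max(3.0 * (1 - epoch / 30), 0.5)   # shrinking neighborhood radius
    for x in rng.permutation(data):
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best matching unit
        h = np.exp(-((grid - bmu) ** 2) / (2 * sigma**2))     # neighborhood kernel
        weights += lr * h[:, None] * (x - weights)

# After training, each cluster center should have a nearby map unit.
bmu_a = np.argmin(np.linalg.norm(weights - a.mean(axis=0), axis=1))
bmu_b = np.argmin(np.linalg.norm(weights - b.mean(axis=0), axis=1))
```

The learning-rate and neighborhood schedules are conventional defaults, not values from the paper.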
Procedia PDF Downloads 656
1496 Reviving Sustainable Architecture in Non-Western Culture
Authors: Khaled Asfour
Abstract:
Going for LEED certification is the latest concern in Egyptian practice, one that only materialized during the last four years. Egyptian Consultant Group (ECG), together with Credit Agricole, had the vision to design a headquarters (Cairo) that delivers a serious sustainable design. The bank is a strong advocate of "green banking" and supports renewable energy and energy-saving projects. Its HQ in Cairo has passed all the hurdles to become the first platinum LEED certificate holder in Egypt. With this design, Egyptian practice has finally re-engaged in a serious way with its long-standing traditions in sustainable architecture. Perhaps the closest to our memory are the medieval houses of Cairo. A few centuries later, these qualities disappeared with the advent of the Modern Movement, which focused more on standard modernist image-making than on the real localized quality of living environments. The first person to note this disappearance was Hassan Fathy, half a century ago. Despite international applause for his efforts, he had no effect on the prevailing local practice, which continued senselessly adopting recycled modernist templates. Egyptian society was not ready to accept any reference to historic architecture. A few decades later, disciples of Hassan Fathy sought to tackle this lack of interest in green architecture in a different way. Mohamed Awad introduced into his designs sustainable ideals inspired by traditional architecture rather than directly recycling historic forms and images. Despite its success, this approach did not go far enough to influence the prevailing practice. Since the year 2000, the Egyptian economy has ebbed and flowed dramatically. This staggering fluctuation, coupled with an energy crisis, has disillusioned architects and clients on the issue of modern image-making. No more shining architecture under the sun with the high running cost of fossil fuel; they sought instead to adopt contemporary green measures that offer pleasant living while saving on energy.
A revival is on its way but is very slow and timid. The paper will present this problem of reviving sustainable architecture. How this process can be accelerated in order to have a stronger impact on current practice will be addressed through the works of Mario Cucinella and Norman Foster.
Keywords: LEED certification, Hassan Fathy, medieval architecture, Mario Cucinella, Norman Foster
Procedia PDF Downloads 491
1495 Implementation of a Method of Crater Detection Using Principal Component Analysis in FPGA
Authors: Izuru Nomura, Tatsuya Takino, Yuji Kageyama, Shin Nagata, Hiroyuki Kamata
Abstract:
We propose a method of crater detection from images of the lunar surface captured by a small space probe. We use principal component analysis (PCA) to detect craters. Nevertheless, considering the severe environment of space, it is impossible to use a generic computer in practice. Accordingly, we have to implement the method in an FPGA. This paper compares the FPGA with a generic computer in terms of the processing time of the crater detection method using principal component analysis.
Keywords: crater, PCA, eigenvector, strength value, FPGA, processing time
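A sketch of the PCA machinery behind the keywords ("eigenvector", "strength value") may be useful, with the caveat that the authors' exact formulation is not given in the abstract. Here, eigenvectors of training crater patches span a crater subspace, and a candidate's strength value is taken as how well the patch is reconstructed in that subspace; the ring template and all sizes are assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16

# Idealized circular crater rim used to generate training patches.
yy, xx = np.mgrid[0:n, 0:n]
r = np.hypot(yy - n / 2 + 0.5, xx - n / 2 + 0.5)
ring = np.exp(-((r - 4.0) ** 2) / 2.0)

# Training set: noisy crater patches, flattened to vectors.
train = np.array([(ring + 0.05 * rng.normal(size=(n, n))).ravel() for _ in range(40)])
mean = train.mean(axis=0)
cov = np.cov((train - mean).T)
evals, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
basis = evecs[:, ::-1][:, :5]             # top-5 eigenvectors span the subspace

def strength(patch):
    """Strength value: 1 minus the relative reconstruction error in the
    crater subspace (an 'eigen-template' style score)."""
    x = patch.ravel()
    v = x - mean
    recon = mean + basis @ (basis.T @ v)
    return 1.0 - np.linalg.norm(x - recon) / (np.linalg.norm(x) + 1e-12)

crater_score = strength(ring + 0.05 * rng.normal(size=(n, n)))
noise_score = strength(rng.normal(size=(n, n)))
```

The fixed-size matrix-vector products in `strength` are the part that maps cleanly onto FPGA logic; the eigen-decomposition is done once, offline.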
Procedia PDF Downloads 555
1494 A Survey and Analysis on Inflammatory Pain Detection and Standard Protocol Selection Using Medical Infrared Thermography from Image Processing View Point
Authors: Mrinal Kanti Bhowmik, Shawli Bardhan Jr., Debotosh Bhattacharjee
Abstract:
Human skin, having a temperature above absolute zero, discharges infrared radiation related to the body temperature. Differences in the infrared radiation from the skin surface reflect abnormalities present in the human body. Based on these differences, detecting and forecasting the temperature variation of the skin surface is the main objective of using Medical Infrared Thermography (MIT) as a diagnostic tool for pain detection. Medical Infrared Thermography is a non-invasive imaging technique that records and monitors the temperature flow in the body by receiving the infrared radiation emitted from the skin and representing it as a thermogram. The intensity of the thermogram measures the inflammation at the skin surface related to pain in the human body. Analysis of thermograms provides automated anomaly detection associated with suspicious pain regions by following several image processing steps. The paper presents a rigorous study-based survey of the processing and analysis of thermograms, based on previous works published in the area of infrared thermal imaging for detecting inflammatory pain diseases like arthritis, spondylosis, shoulder impingement, etc. The study also explores the performance analysis of thermogram processing together with thermogram acquisition protocols, thermography camera specifications, and the types of pain detected by thermography, in a summarized tabular format. The tabular format provides a clear structural vision of the past works. As its major contribution, the paper introduces a new thermogram acquisition standard for inflammatory pain detection in the human body to enhance the performance rate. The FLIR T650sc infrared camera, with high sensitivity and resolution, is adopted to increase the accuracy of thermogram acquisition and analysis.
The survey of previous research work highlights that intensity-distribution-based comparison of comparable, symmetric regions of interest, together with their statistical analysis, yields adequate results in identifying and detecting physiological disorders related to inflammatory diseases.
Keywords: acquisition protocol, inflammatory pain detection, medical infrared thermography (MIT), statistical analysis
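The symmetric-ROI statistical comparison highlighted above can be sketched as follows. The thermogram, ROI coordinates, and the 0.5 degC decision threshold are all illustrative assumptions of the example, not values from the surveyed papers.

```python
import numpy as np

def asymmetry(thermogram, roi_rows, roi_cols_left, roi_cols_right):
    """Compare simple statistics of two mirrored regions of interest."""
    left = thermogram[roi_rows][:, roi_cols_left]
    right = thermogram[roi_rows][:, roi_cols_right]
    return {
        "delta_mean": float(left.mean() - right.mean()),
        "delta_std": float(left.std() - right.std()),
    }

rng = np.random.default_rng(0)
body = 33.0 + 0.1 * rng.normal(size=(64, 64))   # baseline skin temperature map, degC
body[20:30, 10:20] += 1.5                        # simulated inflamed left-side region

stats = asymmetry(body, np.arange(20, 30), np.arange(10, 20), np.arange(44, 54))
suspicious = abs(stats["delta_mean"]) > 0.5      # illustrative flagging threshold
```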
Procedia PDF Downloads 342
1493 Transformation of Aluminum Unstable Oxyhydroxides in Ultrafine α-Al2O3 in Presence of Various Seeds
Authors: T. Kuchukhidze, N. Jalagonia, Z. Phachulia, R. Chedia
Abstract:
Ceramics based on aluminum oxide have a wide application range because of their unique properties, for example, wear resistance, dielectric characteristics, and the ability to operate at high temperatures and in corrosive atmospheres. Low-temperature synthesis of α-Al2O3 is an energy-economical process, and it is relevant for developing technologies for corundum ceramics fabrication. In the present work, possibilities of low-temperature transformation of oxyhydroxides into α-Al2O3 in the presence of small amounts of rare-earth element compounds (also Th, Re) have been discussed. Unstable aluminum oxyhydroxides have been obtained by hydrolysis of aluminum isopropoxide, nitrates, sulphate, and chloride in an alkaline environment at 80-90ºC. β-Al(OH)3 has been obtained from aluminum powder by ultrasonic treatment. Drying of the oxyhydroxide sol has been conducted in the presence of various types of seeds, whose amount reaches 0.1-0.2% (mas). Neodymium, holmium, thorium, lanthanum, cerium, gadolinium, and dysprosium nitrates and rhenium carbonyls have been used as seeds, added to the sol specimens in an amount of 0.1-0.2% (mas) calculated on the metals. Annealing of the obtained gels is carried out at 70-1100ºC for 2 hrs. The specimen transforms into α-Al2O3 at 1100ºC. At this temperature, in the presence of lanthanum and gadolinium, the transformation proceeds to 70-85%. In the presence of thorium, stabilization of the γ- and θ-phases takes place. It is established that thorium inhibits α-phase generation at 1100ºC, while in all other doped specimens the α-phase is generated at lower temperatures (1000-1050ºC).
During the work, the following devices have been used: X-ray diffractometer DRON-3M (Cu-Kα, Ni filter, 2º/min), high temperature vacuum furnace OXY-GON, electronic scanning microscopes Nikon ECLIPSE LV 150 and NMM-800TRF, planetary mill Pulverisette 7 premium line, SHIMADZU Dynamic Ultra Micro Hardness Tester DUH-211S, and Analysette 12 DynaSizer.
Keywords: α-alumina, combustion, phase transformation, seeding
Procedia PDF Downloads 393
1492 Sensory Ethnography and Interaction Design in Immersive Higher Education
Authors: Anna-Kaisa Sjolund
Abstract:
The doctoral thesis examines interaction design and sensory ethnography as tools to create immersive education environments. In recent years, there has been increasing interest and discussion among researchers and educators in immersive education, such as augmented reality tools and virtual glasses, and in the possibilities of utilizing them in education at all levels. Using virtual devices as learning environments, it is possible to create multisensory learning environments. Sensory ethnography in this study refers to the way the senses shape the information dynamics in immersive learning environments. The past decade has seen the rapid development of virtual world research and virtual ethnography. Christine Hine's Virtual Ethnography offers an anthropological explanation of net behavior and communication change. Despite her groundbreaking work, time has changed users' communication styles and brought new solutions for doing ethnographic research. Virtual reality, with all its new potential and its consideration of all the senses, has come to the fore. Film and image have played an important role in cultural research for centuries; only the focus has changed in different times and in different fields of research. According to Karin Becker, the role of the image in our society is information flow, and she identified two meanings of what the research of visual culture is. Images and pictures are the artifacts of visual culture. Images can be viewed as a symbolic language that allows digital storytelling. By combining the sense of sight with the other senses, such as hearing, touch, taste, smell, and balance, the use of a virtual learning environment offers students a way to more easily absorb large amounts of information. It also offers teachers different ways to produce study material. In this article, sensory ethnography is used as a research tool to approach the core question.
Sensory ethnography is used to describe information dynamics in immersive environments through interaction design. An immersive education environment is understood as a three-dimensional, interactive learning environment in which audiovisual aspects are central but all senses can be taken into consideration. When designing learning environments, or any digital service, interaction design is always needed. The question of what interaction design is remains justified, because there is no simple or consistent view of what interaction design is, how it can be used as a research method, or whether it is only a description of practical actions. When discussing immersive learning environments or their construction, consideration should be given to interaction design and sensory ethnography.
Keywords: immersive education, sensory ethnography, interaction design, information dynamics
Procedia PDF Downloads 137
1491 Detection and Classification of Strabismus Using Convolutional Neural Network and Spatial Image Processing
Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson
Abstract:
Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using a Haar cascade, facial landmark estimation, face alignment, aligned-face landmark detection, segmentation of the eye region, and detection of strabismus using a VGG 16 convolutional neural network. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using the facial landmarks, the eye region is segmented from the aligned face and fed into the VGG 16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies its type (exotropia, esotropia, or vertical deviation). If stage 1 detects strabismus, the eye-region image is fed into stage 2, which starts with estimation of the pupil center coordinates using a Mask R-CNN deep neural network. Then, the distance between the pupil coordinates and the eye landmarks is calculated, along with the angle that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. The model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced a TPR of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviation, respectively.
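The eye-region segmentation step in the stage-1 pipeline above can be illustrated with a minimal sketch: given eye landmarks as (x, y) pixel coordinates on the aligned face, crop a padded bounding box around them. The function name, padding value, and landmark format are illustrative assumptions, not details from the authors' implementation.

```python
import numpy as np

def segment_eye_region(face_img, eye_landmarks, pad=4):
    """Crop a padded bounding box around the eye landmarks of an
    aligned face image; landmarks are (x, y) pixel coordinates."""
    xs = [int(x) for x, _ in eye_landmarks]
    ys = [int(y) for _, y in eye_landmarks]
    h, w = face_img.shape[:2]
    x0, x1 = max(min(xs) - pad, 0), min(max(xs) + pad, w)
    y0, y1 = max(min(ys) - pad, 0), min(max(ys) + pad, h)
    return face_img[y0:y1, x0:x1]

# Crop from a dummy 100x100 grayscale face with two eye-corner landmarks:
face = np.zeros((100, 100), dtype=np.uint8)
eye = segment_eye_region(face, [(30, 40), (70, 45)])
# In the described pipeline, a crop like this would be resized and
# fed to the trained VGG 16 classifier.
```

Clipping the box to the image bounds keeps the crop valid even when landmarks fall near the face border.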
This method also had an FPR of 5.26%, 5.55%, and 0% for esotropia, exotropia, and vertical deviation, respectively. The addition of one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes).
Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation
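The stage-2 distance-and-angle characterization can be sketched in a few lines: take the pupil center and a reference point derived from the eye landmarks (here, the midpoint of the two eye corners), then compute the Euclidean distance and the angle from the horizontal axis. This is a hypothetical reading of the geometry the abstract describes; the function name and the choice of reference point are assumptions.

```python
import math

def misalignment(pupil, inner_corner, outer_corner):
    """Distance and angle (degrees from the horizontal axis) of the
    pupil center relative to the midpoint of the eye corners."""
    mx = (inner_corner[0] + outer_corner[0]) / 2.0
    my = (inner_corner[1] + outer_corner[1]) / 2.0
    dx, dy = pupil[0] - mx, pupil[1] - my
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

# A pupil displaced 3 px horizontally and 4 px vertically (image
# coordinates, so y grows downward) from the eye-corner midpoint:
dist, angle = misalignment(pupil=(53.0, 44.0),
                           inner_corner=(60.0, 40.0),
                           outer_corner=(40.0, 40.0))
# dist = 5.0, angle ≈ 53.13 degrees
```

In such a sketch, a predominantly horizontal displacement would point toward esotropia or exotropia and a predominantly vertical one toward vertical deviation, matching the classification targets above.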
Procedia PDF Downloads 93