Search results for: malicious images detector
1107 Hybrid Deep Learning and FAST-BRISK 3D Object Detection Technique for Bin-Picking Application
Authors: Thanakrit Taweesoontorn, Sarucha Yanyong, Poom Konghuayrob
Abstract:
Robotic arms have gained popularity in various industries due to their accuracy and efficiency. This research proposes a method for bin-picking tasks using a cobot, combining the YOLOv5 CNN model for object detection and pose estimation with traditional feature detection (FAST), feature description (BRISK), and matching algorithms. By integrating these algorithms and using a small-scale depth-sensor camera to capture depth and color images, the system achieves real-time object detection and accurate pose estimation, enabling the robotic arm to pick objects correctly in both position and orientation. Furthermore, the proposed method is implemented within the ROS framework to provide a seamless platform for robotic control and integration. This integration of robotics, cameras, and AI technology contributes to the development of industrial robotics, opening up new possibilities for automating challenging tasks and improving overall operational efficiency.
Keywords: robotic vision, image processing, applications of robotics, artificial intelligence
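As a rough illustration of the FAST detector named above, the sketch below implements the FAST segment-test criterion in pure NumPy: a pixel is a corner if a long contiguous arc of the 16-pixel Bresenham circle around it is uniformly brighter or darker than the center. The threshold and the arc length of 9 are the common defaults, assumed here rather than taken from the paper.

```python
import numpy as np

def fast_corner_score(img, r, c, threshold=20):
    """Minimal FAST segment test at pixel (r, c) of a grayscale image."""
    # 16-pixel Bresenham circle of radius 3 around (r, c)
    offsets = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
               (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]
    center = int(img[r, c])
    ring = np.array([int(img[r + dr, c + dc]) for dr, dc in offsets])
    brighter = ring > center + threshold
    darker = ring < center - threshold

    def max_run(mask):
        # longest contiguous run of True, allowing wrap-around on the circle
        doubled = np.concatenate([mask, mask])
        best = run = 0
        for v in doubled:
            run = run + 1 if v else 0
            best = max(best, min(run, len(mask)))
        return best

    # corner if at least 9 of 16 contiguous ring pixels are all brighter or all darker
    return max(max_run(brighter), max_run(darker)) >= 9
```

In the real pipeline, corners found this way would be described with BRISK and matched between the color image and a reference model of the object.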
Procedia PDF Downloads 97
1106 A Study on Real-Time Fluorescence-Photoacoustic Imaging System for Mouse Thrombosis Monitoring
Authors: Sang Hun Park, Moung Young Lee, Su Min Yu, Hyun Sang Jo, Ji Hyeon Kim, Chul Gyu Song
Abstract:
A near-infrared light source is suitable for real-time use during surgery as the light source of a fluorescence imaging system, since it does not interfere with the surgical field of view. However, fluorescence images carry no depth information. In this paper, we built a molecular imaging system for monitoring thrombi using both fluorescence and photoacoustic imaging. Fluorescence imaging was performed in a phantom experiment to locate the thrombus precisely, and photoacoustic imaging was used to determine its depth. In the phantom experiments, the fluorescence image was confirmed to become visibly sharper when the concentration of the contrast agent was 25 μg/ml. The phantom experiment demonstrated the feasibility of combining fluorescence and photoacoustic imaging using an indocyanine green contrast agent. For early diagnosis of cardiovascular diseases, more active research on the fusion of different molecular imaging devices is required.
Keywords: fluorescence, photoacoustic, indocyanine green, carotid artery
Procedia PDF Downloads 601
1105 Production, Quality Control, and Biodistribution Assessment of 111In-BPAMD as a New Bone Imaging Agent
Authors: H. Yousefnia, A. Aghanejad, A. Mirzaei, R. Enayati, A. R. Jalilian, S. Zolghadri
Abstract:
Bone metastases often occur at an early stage of tumour disease; however, their symptoms are recognized rather late. The aim of this study was the preparation and quality control of 111In-BPAMD for diagnostic purposes. 111In was produced at the Agricultural, Medical, and Industrial Research School (AMIRS) by means of a 30 MeV cyclotron via the natCd(p,x)111In reaction. Complexation of 111In with BPAMD was carried out using an acidic solution of 111InCl3 and BPAMD in pure water. The effects of various parameters such as temperature, ligand concentration, pH, and time on the radiolabeling yield were studied. 111In-BPAMD was prepared successfully with a radiochemical purity of 95% under the optimized conditions (100 µg of BPAMD, pH = 5, at 90 °C for 1 h), as measured by the ITLC method. The final solution was injected into wild-type mice, and the biodistribution was determined for up to 72 h. SPECT images were acquired at 2 and 24 h post-injection. Both the biodistribution studies and SPECT imaging indicated high bone uptake, while accumulation in other organs was approximately negligible. The results show that 111In-BPAMD can be used as an excellent tracer for the diagnosis of bone metastases by SPECT imaging.
Keywords: biodistribution, BPAMD, 111In, SPECT
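Biodistribution counts followed over 72 h have to be corrected for the physical decay of 111In (half-life about 67.3 h). A minimal sketch of that standard correction, with a hypothetical helper name and invented count values, could look like:

```python
# Hypothetical helper: decay-correct 111In counts back to injection time,
# as needed when biodistribution is followed for up to 72 h.
IN111_HALF_LIFE_H = 67.3  # 111In physical half-life, ~2.80 days

def decay_correct(measured_counts, elapsed_hours, half_life_hours=IN111_HALF_LIFE_H):
    """Scale counts measured after `elapsed_hours` back to time zero (A = A0 * 2^(-t/T1/2))."""
    return measured_counts * 2.0 ** (elapsed_hours / half_life_hours)
```

For example, counts measured exactly one half-life (67.3 h) after injection are doubled to recover the activity at time zero.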
Procedia PDF Downloads 561
1104 Multi-Spectral Medical Images Enhancement Using Weber's Law
Authors: Muna F. Al-Sammaraie
Abstract:
The aim of this research is to present multi-spectral image enhancement methods for the common case in which a digital image populates only a small portion of the available range of digital values. A quantitative measure of image enhancement is also presented; this measure is related to Weber's Law of the human visual system. Several image enhancement techniques have been proposed over the decades, but although most require many advanced and critical steps, the perceived quality of the results is often unsatisfactory. This study involves changing the original pixel values so that more of the available range is used, thereby increasing the contrast between features and their backgrounds. It consists of reading the binary image byte-wise, pixel by pixel, and displaying it; calculating the statistics of the image; automatically enhancing the color of the image based on those statistics using the proposed algorithms; and working with the RGB color bands. Finally, the enhanced image is displayed along with its histogram. A number of experimental results illustrate the performance of these algorithms. In particular, the quantitative measure has helped to select the optimal processing parameters and transform.
Keywords: image enhancement, multi-spectral, RGB, histogram
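The "use more of the available range" step described above is a per-band linear contrast stretch. The sketch below is a minimal NumPy version of that idea, not the authors' exact algorithm; the band statistics used (min/max) and the 8-bit output range are assumptions.

```python
import numpy as np

def stretch_band(band, out_min=0, out_max=255):
    """Linearly map a band's occupied value range onto [out_min, out_max]."""
    lo, hi = band.min(), band.max()
    if hi == lo:                       # flat band: nothing to stretch
        return np.full_like(band, out_min, dtype=np.uint8)
    scaled = (band.astype(float) - lo) / (hi - lo)
    return (out_min + scaled * (out_max - out_min)).round().astype(np.uint8)

def enhance_rgb(img):
    """Stretch each RGB band independently, as the abstract describes band-wise work."""
    return np.dstack([stretch_band(img[..., k]) for k in range(img.shape[-1])])
```

After stretching, the histogram of each band spans the full 0-255 range, which is what the displayed histogram in the paper is meant to show.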
Procedia PDF Downloads 328
1103 Mesoporous Carbon Ceramic SiO2/C Prepared by Sol-Gel Method and Modified with Cobalt Phthalocyanine and Used as an Electrochemical Sensor for Nitrite
Authors: Abdur Rahim, Lauro Tatsuo Kubota, Yoshitaka Gushikem
Abstract:
Carbon ceramic mesoporous SiO2/50wt%C (SBET = 170 m2 g-1), where C is graphite, was prepared by the sol-gel method. Scanning electron microscopy images and the respective element mapping showed that, within the magnification used, no phase segregation was detectable. The material presented an electric conductivity of 0.49 S cm-1. It was used to support cobalt phthalocyanine, prepared in situ, to ensure a homogeneous dispersion of the electroactive complex in the pores of the matrix. The surface density of cobalt phthalocyanine on the matrix surface was 0.015 mol cm-2. A pressed disk made with SiO2/50wt%C/CoPc was used to fabricate an electrode, which was tested as a sensor for nitrite determination by an electrochemical technique. A linear response range between 0.039 and 0.42 mmol l−1 with a correlation coefficient r = 0.9996 was obtained. The electrode was chemically very stable and presented very high sensitivity for this analyte, with a limit of detection LOD = 1.087 x 10-6 mol L-1.
Keywords: SiO2/C/CoPc, sol-gel method, electrochemical sensor, nitrite oxidation, carbon ceramic material, cobalt phthalocyanine
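The reported linear range, correlation coefficient, and LOD follow from a standard least-squares calibration. The sketch below shows that workflow with invented sensor readings (the currents, blank noise, and the 3-sigma/slope LOD convention are assumptions, not the paper's data):

```python
import numpy as np

# Fabricated calibration points within the reported linear range (mmol/L vs. uA).
conc = np.array([0.039, 0.10, 0.20, 0.30, 0.42])
current = np.array([0.80, 2.01, 4.02, 5.97, 8.41])

slope, intercept = np.polyfit(conc, current, 1)   # least-squares calibration line
pred = slope * conc + intercept
r = np.corrcoef(current, pred)[0, 1]              # correlation coefficient of the fit

sigma_blank = 0.02                 # assumed std. dev. of blank measurements (uA)
lod = 3.0 * sigma_blank / slope    # 3*sigma/slope limit of detection (mmol/L)
```

With real measurements, `r` close to 1 confirms linearity over the working range, and the LOD falls well below the lowest calibration point, as in the abstract.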
Procedia PDF Downloads 317
1102 Variability of the X-Ray Sun during Descending Period of Solar Cycle 23
Authors: Zavkiddin Mirtoshev, Mirabbos Mirkamalov
Abstract:
We have analyzed time series of full-disk integrated soft X-ray (SXR) and hard X-ray (HXR) emission from the solar corona from 2004 January 1 to 2009 December 31, covering the descending phase of solar cycle 23. We employed the daily X-ray index (DXI) derived from X-ray observations of the Solar X-ray Spectrometer (SOXS) mission in four energy bands: 4-5.5 and 5.5-7.5 keV (SXR), and 15-20 and 20-25 keV (HXR). Applying the Lomb-Scargle periodogram technique to the DXI time series observed by the Silicium detector reveals several short and intermediate periodicities of the X-ray corona. The DXI explicitly shows periods of 13.6 days, 26.7 days, 128.5 days, 151 days, 180 days, 220 days, 270 days, 1.24 years, and 1.54 years in both the SXR and HXR energy bands. Although all periods are above the 70% confidence level in all energy bands, they show stronger power in HXR emission than in SXR emission. These periods are distinctly clear in three bands but not unambiguously clear in the 5.5-7.5 keV band, which might be due to the presence of Fe and Fe/Ni line features that frequently vary with small-scale flares such as micro-flares. The regular 27-day rotation and the 13.5-day period of sunspots on the invisible side of the Sun are found to be stronger in the HXR band than in the SXR band. The flare-activity Rieger periods (150 and 180 days) and the near-Rieger period of 220 days are very strong in HXR emission, as expected. On the other hand, our study reveals a strong 270-day periodicity in SXR emission that may be connected with the tachocline, similar to a fundamental rotation period of the Sun. The 1.24-year and 1.54-year periodicities are well observable in both the SXR and HXR channels.
These long-term periodicities must also be connected with the tachocline and should be regarded as a consequence of variation in rotational modulation over long time scales. The 1.24-year and 1.54-year periods are also of great importance and significance for the formation and evolution of life on Earth, and therefore they also have great astrobiological importance. We gratefully acknowledge support by the Indian Centre for Space Science and Technology Education in Asia and the Pacific (CSSTEAP, a centre affiliated to the United Nations) and the Physical Research Laboratory (PRL) at Ahmedabad, India. This work was done under the supervision of Prof. Rajmal Jain, and the paper consists of material from a pilot project and the research part of the M.Tech program carried out during the Space and Atmospheric Science Course.
Keywords: corona, flares, solar activity, X-ray emission
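The Lomb-Scargle periodogram used on the DXI series handles the unevenly spaced daily data directly. Below is a minimal pure-NumPy implementation of the classical Lomb-Scargle power (not the authors' code), demonstrated on a synthetic signal with the 26.7-day rotation period reported in the abstract:

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classical Lomb-Scargle periodogram at angular frequencies `freqs`."""
    y = y - y.mean()
    power = np.empty(len(freqs))
    for i, w in enumerate(freqs):
        # time offset tau makes the sine/cosine terms orthogonal
        tau = np.arctan2(np.sum(np.sin(2 * w * t)), np.sum(np.cos(2 * w * t))) / (2 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power

# Synthetic, unevenly sampled "DXI" with a 26.7-day period
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 2000, 300))          # observation days
y = np.sin(2 * np.pi * t / 26.7)
periods = np.linspace(5, 200, 2000)
power = lomb_scargle(t, y, 2 * np.pi / periods)
best_period = periods[np.argmax(power)]
```

On the real DXI, significance of each peak would then be judged against a confidence level, as the 70% threshold is in the paper.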
Procedia PDF Downloads 345
1101 Climate Change Effects of Vehicular Carbon Monoxide Emission from Road Transportation in Part of Minna Metropolis, Niger State, Nigeria
Authors: H. M. Liman, Y. M. Suleiman, A. A. David
Abstract:
Poor air quality, often considered one of the greatest environmental threats facing the world today, is caused mainly by the emission of carbon monoxide into the atmosphere. Carbon monoxide is the principal air pollutant, and one prominent source of its emission is the transportation sector. Not much was known about the emission levels of carbon monoxide, the primary pollutant from road transportation, in the study area. Therefore, this study assessed the levels of carbon monoxide emission from road transportation in Minna, Niger State. An MSA Altair gas alert detector was used to take carbon monoxide readings in parts per million (PPM) at road intersections during the peak and off-peak periods of vehicular movement, and the Global Positioning System (GPS) coordinates of the intersections were recorded in the Universal Transverse Mercator (UTM) system. Bar charts were plotted comparing the carbon monoxide emission levels recorded in the field against the scientifically established, internationally accepted safe limit of 8.7 parts per million of carbon monoxide in the atmosphere. Further statistical analysis of the field data was carried out using the Statistical Package for the Social Sciences (SPSS) software and Microsoft Excel to show the variance of the emission levels of each parameter in the study area. The results established that the emission levels of atmospheric carbon monoxide from road transportation in the study area exceeded the internationally accepted safe limit of 8.7 parts per million. In addition, among the four parameters, the morning peak had the highest average emission level at 24.5 PPM, followed by the evening peak with 22.84 PPM, the morning off-peak with 15.33 PPM, and the evening off-peak with 12.94 PPM.
Based on these results, two recommendations are made for mitigating poor air quality by reducing carbon monoxide emissions from transportation. First, introducing urban mass transit would reduce the number of vehicles on the roads, and hence the emissions from the many vehicles it replaces, while also providing a cheaper means of transportation for the masses. Second, encouraging the use of vehicles powered by alternative energy sources such as solar, electricity, and biofuel would also lower emission levels, since these alternatives, unlike fossil-fuel diesel and petrol vehicles, emit little or no carbon monoxide.
Keywords: carbon monoxide, climate change emissions, road transportation, vehicular
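The comparison of the four reported averages against the 8.7 PPM safe limit is simple arithmetic; a few lines of Python make the exceedance explicit (the averages are taken from the abstract, the exceedance-ratio presentation is our own):

```python
# Average CO readings reported in the study (PPM) vs. the 8.7 PPM safe limit.
SAFE_LIMIT_PPM = 8.7
averages = {
    "morning peak": 24.5,
    "evening peak": 22.84,
    "morning off-peak": 15.33,
    "evening off-peak": 12.94,
}

# How many times over the limit each period is
exceedance = {k: round(v / SAFE_LIMIT_PPM, 2) for k, v in averages.items()}
worst = max(averages, key=averages.get)
```

Every period exceeds the limit; the morning peak is roughly 2.8 times over it.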
Procedia PDF Downloads 375
1100 Spatio-Temporal Dynamic of Woody Vegetation Assessment Using Oblique Landscape Photographs
Authors: V. V. Fomin, A. P. Mikhailovich, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova
Abstract:
Ground-level landscape photos can be used as a source of objective data on woody vegetation and its dynamics. We propose a method for processing, analyzing, and presenting ground photographs with the following features: 1) the researcher forms a holistic representation of the study area as a set of overlapping ground-level landscape photographs; 2) characteristics of the landscape and of the objects and phenomena present in the photographs are defined or obtained; 3) new textual descriptions and annotations for the photographs are created, or existing ones supplemented; 4) single or multiple photographs are used to develop specialized geoinformation layers, schematic maps, or thematic maps; 5) quantitative data describing both the images as a whole and the displayed objects and phenomena are determined using algorithms for automated image analysis. Each photo is matched with a polygonal geoinformation layer: a sector consisting of areas corresponding to the parts of the landscape visible in the photo. Visibility areas are calculated in a geoinformation system within each sector using a digital relief model of the study area and visibility-analysis functions. Superimposing the visibility sectors corresponding to various camera viewpoints allows the landscape photos to be matched with each other to create a complete and coherent representation of the space in question. User-defined data or phenomena on the images can then be superimposed over the visibility sector in the form of map symbols. The spatial superposition of geoinformation layers over the visibility sector creates opportunities for image geotagging using quantitative data obtained from raster or vector layers within the sector, with the ability to generate annotations in natural language.
The proposed method has proven itself well for relatively open and clearly visible areas with well-defined relief, for example, in mountainous areas in the treeline ecotone. When the polygonal visibility-sector layers for a large number of camera viewpoints are topologically superimposed, a layer is formed showing which sections of the entire study area are visible in the photographs. As a result of this overlapping of sectors, areas that do not appear in any photo are identified as gaps. From this procedure, it becomes possible to determine which photos display a specific area and from which camera viewpoints it is visible; this information may be obtained either as a query on the map or as a query on the layer's attribute table. The method was tested using repeated photos taken from forty camera viewpoints on the Ray-Iz mountain massif (Polar Urals, Russia) from 1960 to 2023. It has been used successfully in combination with other ground-based and remote-sensing methods to study the climate-driven dynamics of woody vegetation in the Polar Urals. Acknowledgment: This research was collaboratively funded by the Russian Ministry for Science and Education, project No. FEUG-2023-0002 (image representation), and the Russian Science Foundation, project No. 24-24-00235 (automated textual description).
Keywords: woody, vegetation, repeated, photographs
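The superposition of visibility sectors reduces, in the simplest discrete case, to counting how many cameras see each terrain cell and flagging unseen cells as gaps. The toy sketch below (cell IDs and camera names are hypothetical; real sectors would come from GIS viewshed analysis) captures that bookkeeping:

```python
def coverage_map(visibility, study_area):
    """Overlay per-camera visibility sets over the study-area cells.

    visibility: {camera_id: set of visible cell ids}
    study_area: set of all cell ids
    Returns (visibility counts, cameras seeing each cell, uncovered gap cells).
    """
    counts = {cell: 0 for cell in study_area}
    seen_by = {cell: [] for cell in study_area}
    for cam, cells in visibility.items():
        for cell in cells:
            counts[cell] += 1
            seen_by[cell].append(cam)
    gaps = {cell for cell, n in counts.items() if n == 0}
    return counts, seen_by, gaps
```

A map query for "which photos show this area" is then just a lookup in `seen_by`, mirroring the attribute-table query described above.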
Procedia PDF Downloads 89
1099 Effects of Data Correlation in a Sparse-View Compressive Sensing Based Image Reconstruction
Authors: Sajid Abas, Jon Pyo Hong, Jung-Ryun Le, Seungryong Cho
Abstract:
Computed tomography and laminography are heavily investigated within a compressive sensing based image reconstruction framework to reduce the dose to patients as well as to radiosensitive devices such as multilayer microelectronic circuit boards. Researchers are actively working on optimizing compressive sensing based iterative image reconstruction algorithms to obtain better-quality images. However, the effects of the sampled data's properties on the reconstructed image's quality, particularly under insufficiently sampled data conditions, have not been explored in computed laminography. In this paper, we investigated the effects of two data properties, sampling density and data incoherence, on the image reconstructed by conventional computed laminography and by a recently proposed method called the spherical sinusoidal scanning scheme. We found that in a compressive sensing based image reconstruction framework, the image quality mainly depends upon the data incoherence when the data is uniformly sampled.
Keywords: computed tomography, computed laminography, compressive sensing, low-dose
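As a toy illustration of the compressive sensing principle behind such reconstructions (not the authors' laminography algorithm), the sketch below recovers a sparse signal from under-determined measurements with ISTA, the basic iterative soft-thresholding solver for the l1-regularized problem; the matrix sizes, regularization weight, and iteration count are invented:

```python
import numpy as np

def ista(A, b, lam=0.02, n_iter=800):
    """ISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - b) / L              # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

# Under-determined "sparse-view" measurement: 40 incoherent samples of a
# 100-dimensional signal with only 4 nonzero entries.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[7, 23, 55, 90]] = [1.0, -1.0, 1.5, -0.5]
x_hat = ista(A, A @ x_true)
```

The incoherence of the random measurement matrix is what makes the recovery work, which is the same property the paper examines for laminography sampling geometries.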
Procedia PDF Downloads 464
1098 Hybrid Conductive Polymer Composites: Effect of Mixed Fillers and Polymer Blends on Pyroresistive Properties
Authors: Eric Asare, Jamie Evans, Mark Newton, Emiliano Bilotti
Abstract:
High-density polyethylene (HDPE) filled with silver-coated glass flakes (5 µm) was investigated, and the effects on the positive temperature coefficient (PTC) of adding a second filler (100 µm silver-coated glass flakes) or a second matrix (polypropylene elastomer) to the composite were examined. The addition of the secondary filler improved the electrical properties of the composite: the bigger flakes acted as bridges between the small flakes, which helped to enhance conductivity. The PTC behaviour of the composite was also improved by the addition of the bigger flakes, owing to the increase in the separation distance between particles that they cause. The addition of a small amount of polypropylene elastomer not only enhanced the PTC effect but also substantially improved the flexibility of the composite while reducing the overall filler content. SEM images showed that the fillers were dispersed in the HDPE phase.
Keywords: positive temperature coefficient, conductive polymer composite, electrical conductivity, high density polyethylene
Procedia PDF Downloads 471
1097 A Proposal of Multi-Modal Teaching Model for College English
Authors: Huang Yajing
Abstract:
Multimodal discourse refers to the phenomenon of communicating through various senses such as hearing, vision, and touch, using various means and symbolic resources such as language, images, sounds, and movements. With the development of modern technology and multimedia, language and technology have become inseparable, and foreign language teaching is becoming increasingly multimodal. Teacher-student communication draws on multiple senses and uses multiple symbol systems to construct and interpret meaning, and the classroom is a semiotic space where multimodal discourses are intertwined. Multimodal teaching of college English rationally combines traditional teaching methods with various modern teaching methods, mobilizing and coordinating them into a joint force that promotes teaching and learning. Multimodal teaching makes full and reasonable use of various meaning resources and can maximize the advantages of multimedia and network environments. Based upon these theories of multimodal discourse and multimedia technology, the present paper proposes a multimodal teaching model for college English in China.
Keywords: multimodal discourse, multimedia technology, English education, applied linguistics
Procedia PDF Downloads 68
1096 The Pedagogical Functions of Arts and Cultural-Heritage Education with ICTs in Museums – A Case Study of FINNA and Google Art
Authors: Pei Zhao, Sara Sintonen, Heikki Kynäslahti
Abstract:
Digital museums and art galleries have become popular in museum education and management, and museum and art gallery websites are among the most effective and efficient channels. Google, a corporation specializing in Internet-related services and products, not only puts high-resolution images of artworks online but also uses augmented reality in its digital art gallery. The Google Art Project provides users with a platform for appreciating and learning about art. After the Google Art Project, more and more countries released their own museum and art gallery websites, like British Painting by the BBC and FINNA in Finland. The pedagogical function of these websites is one of their most important functions. In this paper, we use the Google Art Project and FINNA as case studies to investigate what kinds of pedagogical functions exist in these websites. Finally, the paper gives recommendations for the future development of digital museums and their websites, especially of their pedagogical functions.
Keywords: arts education, cultural-heritage education, education with ICTs, pedagogical functions
Procedia PDF Downloads 548
1095 A Topological Approach for Motion Track Discrimination
Authors: Tegan H. Emerson, Colin C. Olson, George Stantchev, Jason A. Edelberg, Michael Wilson
Abstract:
Detecting small targets at range is difficult because an image sub-region containing the target does not carry enough spatial information to let correlation-based methods differentiate it from dynamic confusers present in the scene. This lack of spatial information also rules out most state-of-the-art deep learning image-based classifiers. Here, we use characteristics of target tracks extracted from video sequences as data from which to derive distinguishing topological features that help robustly differentiate targets of interest from confusers. In particular, we calculate persistent homology from time-delayed embeddings of dynamic statistics calculated from motion tracks extracted from a wide field-of-view video stream. In short, we use topological methods to extract features related to target motion dynamics that are useful for classification and disambiguation, and we show that small targets can be detected at range with high probability.
Keywords: motion tracks, persistence images, time-delay embedding, topological data analysis
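The time-delay embedding step named above turns a 1-D track statistic into a point cloud on which persistent homology can be computed. A minimal NumPy sketch of that embedding (the dimension and delay values are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def time_delay_embedding(x, dim=3, delay=2):
    """Map a 1-D series x to points (x[i], x[i+delay], ..., x[i+(dim-1)*delay])."""
    x = np.asarray(x)
    n = len(x) - (dim - 1) * delay      # number of embedded points
    return np.column_stack([x[k * delay: k * delay + n] for k in range(dim)])
```

The resulting point cloud would then be fed to a persistent-homology library (e.g., a Vietoris-Rips filtration) to produce the persistence images used as classification features.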
Procedia PDF Downloads 114
1094 Study of Polyphenol Profile and Antioxidant Capacity in Italian Ancient Apple Varieties by Liquid Chromatography
Authors: A. M. Tarola, R. Preti, A. M. Girelli, P. Campana
Abstract:
Safeguarding, studying, and enhancing biodiversity play an important and indispensable role in re-launching agriculture, and ancient local varieties are therefore a precious resource for genetic and health improvement. In order to protect biodiversity through the recovery and valorization of autochthonous varieties, we analyzed 12 samples of four ancient apple cultivars representative of Friuli Venezia Giulia, selected by local farmers working on a project for the recovery of ancient apple cultivars. The aim of this study is to evaluate the polyphenolic profile and the antioxidant capacity that characterize the organoleptic and functional qualities of this fruit species, in addition to its beneficial properties for health. In particular, for each variety, the following compounds were analyzed in both the peel and the pulp: gallic acid, catechin, chlorogenic acid, epicatechin, caffeic acid, coumaric acid, ferulic acid, rutin, phlorizin, phloretin, and quercetin, in order to highlight any differences between the edible parts of the apple. The analysis of the individual phenolic compounds was performed by High Performance Liquid Chromatography (HPLC) coupled with a diode array UV detector (DAD), the antioxidant capacity was estimated using an in vitro assay based on a free radical scavenging method, and the total phenolic content was determined using the Folin-Ciocalteu method. The results show that catechins are the most abundant polyphenols, reaching 140-200 μg/g in the pulp and 400-500 μg/g in the peel, with epicatechin predominating. Catechins and phlorizin, a dihydrochalcone typical of apples, are always present in larger quantities in the peel. Total phenolic content was positively correlated with antioxidant activity in both apple pulp (r2 = 0.850) and peel (r2 = 0.820). Comparing the results highlighted differences between the varieties analyzed and between the edible parts (pulp and peel) of the apple.
In particular, the apple peel is richer in polyphenolic compounds than the pulp, and flavonols are present exclusively in the peel. In conclusion, polyphenols, being antioxidant substances, confirm the benefits of fruit in the diet, especially for the prevention and treatment of degenerative diseases, and they also proved to be a good marker for the characterization of different apple cultivars. The importance of protecting biodiversity in agriculture was also highlighted through the exploitation of native products and ancient, now-forgotten varieties of apples.
Keywords: apple, biodiversity, polyphenols, antioxidant activity, HPLC-DAD, characterization
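The reported r2 values are squared Pearson correlation coefficients between total phenolic content and antioxidant activity. The sketch below shows that computation on invented data points (the paper's raw measurements are not given in the abstract):

```python
import numpy as np

# Fabricated paired measurements for illustration only:
# total phenolic content vs. radical-scavenging antioxidant activity (arbitrary units).
tpc = np.array([1.2, 1.8, 2.4, 3.1, 3.9, 4.6])
aa = np.array([10.5, 15.2, 21.0, 26.8, 33.1, 40.2])

r = np.corrcoef(tpc, aa)[0, 1]   # Pearson correlation coefficient
r_squared = r ** 2               # the r2 statistic quoted in the abstract
```

With the paper's pulp and peel datasets, this yields the reported r2 = 0.850 and 0.820, respectively.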
Procedia PDF Downloads 136
1093 Estimation of Grinding Force and Material Characterization of Ceramic Matrix Composite
Authors: Lakshminarayanan, Vijayaraghavan, Krishnamurthy
Abstract:
The ever-increasing demand for high efficiency in automotive and aerospace applications requires new materials suited to high-temperature applications. Ceramic matrix composites nowadays find applications in high-strength, high-temperature environments. In this work, Al2O3 and SiC ceramic materials in particulate form are taken as matrix and reinforcement, respectively. They are blended together by ball milling and compacted in a cold compaction machine using the powder metallurgy technique. Scanning electron microscope images of the samples are taken in order to verify proper blending of the powders. Micro-hardness testing of the samples is also carried out on a Vickers micro-hardness tester. Surface grinding of the samples is performed on a surface grinding machine in order to estimate the grinding forces, and the surface roughness of the ground samples is measured with a surface profilometer. The results are promising.
Keywords: ceramic matrix composite, cold compaction, material characterization, particulate and surface grinding
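Vickers micro-hardness is computed from the indentation load and the mean indent diagonal by the standard formula HV = 1.8544 F / d^2 (F in kgf, d in mm). A one-line sketch with invented test values:

```python
# Standard Vickers hardness formula; the load and diagonal below are
# illustrative values, not measurements from the paper.
def vickers_hardness(force_kgf, diagonal_mm):
    """HV = 1.8544 * F / d^2, with load F in kgf and mean indent diagonal d in mm."""
    return 1.8544 * force_kgf / diagonal_mm ** 2
```

For example, a 0.5 kgf load leaving a 50 µm (0.05 mm) diagonal gives HV ≈ 371.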
Procedia PDF Downloads 242
1092 Establishing Correlation between Urban Heat Island and Urban Greenery Distribution by Means of Remote Sensing and Statistics Data to Prioritize Revegetation in Yerevan
Authors: Linara Salikhova, Elmira Nizamova, Aleksandra Katasonova, Gleb Vitkov, Olga Sarapulova
Abstract:
While most European cities conduct research on heat-related risks, there is a research gap in the Caucasus region, particularly in Yerevan, Armenia. This study aims to test the method of establishing a correlation between urban heat islands (UHI) and urban greenery distribution for prioritization of heat-vulnerable areas for revegetation. Armenia has failed to consider measures to mitigate UHI in urban development strategies despite a 2.1°C increase in average annual temperature over the past 32 years. However, planting vegetation in the city is commonly used to deal with air pollution and can be effective in reducing UHI if it prioritizes heat-vulnerable areas. The research focuses on establishing such priorities while considering the distribution of urban greenery across the city. The lack of spatially explicit air temperature data necessitated the use of satellite images to achieve the following objectives: (1) identification of land surface temperatures (LST) and quantification of temperature variations across districts; (2) classification of massifs of land surface types using normalized difference vegetation index (NDVI); (3) correlation of land surface classes with LST. Examination of the heat-vulnerable city areas (in this study, the proportion of individuals aged 75 years and above) is based on demographic data (Census 2011). Based on satellite images (Sentinel-2) captured on June 5, 2021, NDVI calculations were conducted. The massifs of the land surface were divided into five surface classes. Due to capacity limitations, the average LST for each district was identified using one satellite image from Landsat-8 on August 15, 2021. In this research, local relief is not considered, as the study mainly focuses on the interconnection between temperatures and green massifs. The average temperature in the city is 3.8°C higher than in the surrounding non-urban areas. The temperature excess ranges from a low in Norq Marash to a high in Nubarashen. 
Norq Marash and Avan have the highest tree and grass coverage proportions, with 56.2% and 54.5%, respectively. In other districts, the balance of wastelands and buildings is three times higher than the grass and trees, ranging from 49.8% in Quanaqer-Zeytun to 76.6% in Nubarashen. Studies have shown that decreased tree and grass coverage within a district correlates with a higher temperature increase. The temperature excess is highest in Erebuni, Ajapnyak, and Nubarashen districts. These districts have less than 25% of their area covered with grass and trees. On the other hand, Avan and Norq Marash districts have a lower temperature difference, as more than 50% of their areas are covered with trees and grass. According to the findings, a significant proportion of the elderly population (35%) aged 75 years and above reside in the Erebuni, Ajapnyak, and Shengavit neighborhoods, which are more susceptible to heat stress with an LST higher than in other city districts. The findings suggest that the method of comparing the distribution of green massifs and LST can contribute to the prioritization of heat-vulnerable city areas for revegetation. The method can become a rationale for the formation of an urban greening program.
Keywords: heat-vulnerability, land surface temperature, urban greenery, urban heat island, vegetation
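The NDVI used in the study to classify land-surface types from Sentinel-2 imagery is the standard ratio (NIR − Red) / (NIR + Red). A minimal NumPy sketch (the reflectance values in the test are illustrative, not the paper's data):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance arrays."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-12)   # small epsilon avoids division by zero
```

Dense vegetation pushes NDVI toward +1, while built-up surfaces and wastelands sit near zero, which is what lets the NDVI map be split into the five surface classes described above.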
Procedia PDF Downloads 72
1091 Exploring Relationship between Attention and Consciousness
Authors: Aarushi Agarwal, Tara Singh, Anju Lata Singh, Trayambak Tiwari, Indramani Lal Singh
Abstract:
The interdependent relationship between attention and consciousness has long been debated. To test its nature, a dual-task paradigm has been used to manipulate awareness and attention simultaneously. While performing a central discrimination task that is attentionally demanding, participants also perform a simple discrimination task in the periphery in the near absence of attention. Individual-level analysis of performance accuracy in the single- and dual-task conditions showed above-chance performance, i.e., more than 80%. To widen the understanding of the extent of discrimination carried out in the near absence of attention, a natural image and its geometric equivalent shape were presented in the periphery; synthetic objects yielded lower performance than natural objects in the dual-task condition. The gaze plots and heatmaps indicate that peripheral performance did not necessarily involve saccades every time, verifying that the discrimination in the periphery occurred in the near absence of attention. Thus, our studies show an interdependent relationship between attention and awareness.
Keywords: attention, awareness, dual task paradigm, natural and geometric images
Procedia PDF Downloads 518
1090 Calibration of 2D and 3D Optical Measuring Instruments in Industrial Environments at Submillimeter Range
Authors: Alberto Mínguez-Martínez, Jesús de Vicente y Oliva
Abstract:
Modern manufacturing processes have led to the miniaturization of systems and, as a result, parts are produced at the micro- and nanoscale. This trend seems set to become increasingly important in the near future. Moreover, as a requirement of Industry 4.0, the digitalization of production models and processes makes it very important to ensure that the dimensions of newly manufactured parts meet the specifications of the models. In this way, scrap and the cost of non-conformities can be reduced while ensuring the stability of production. To ensure the quality of manufactured parts, it becomes necessary to carry out traceable measurements at scales below one millimeter. Providing adequate traceability to the SI unit of length (the meter) for 2D and 3D measurements at this scale is a problem with no unique solution in industrial environments, and researchers in the field of dimensional metrology around the world are working on this issue. A solution for industrial environments, even if incomplete, will enable working with some traceability. At this point, we believe the study of surfaces could provide a first approximation to a solution. Among the different options proposed in the literature, areal topography methods may be the most relevant because they can be compared to measurements performed using Coordinate Measuring Machines (CMMs). These measuring methods give (x, y, z) coordinates for each point, expressing the z coordinate in one of two ways: as a function of x, denoted z(x), for each Y-axis coordinate, or as a function of both x and y, denoted z(x, y). Among others, optical measuring instruments, mainly microscopes, are extensively used to carry out measurements at scales below one millimeter because they are non-destructive.
In this paper, the authors propose a calibration procedure for the scales of optical measuring instruments, particularized for a confocal microscope, using material standards that are easy to find and calibrate in metrology and quality laboratories in industrial environments. Confocal microscopes are measuring instruments capable of filtering out-of-focus reflected light so that, when the light reaches the detector, it is possible to take pictures of the part of the surface that is in focus. By taking pictures at different Z levels of focus, specialized software interpolates between the different planes and reconstructs the surface geometry into a 3D model. As is easy to deduce, it is necessary to give traceability to each axis. As a complementary result, the roughness parameter Ra will be traced to the reference. Although the solution is designed for a confocal microscope, it may be applied to the calibration of other optical measuring instruments with minor changes. Keywords: industrial environment, confocal microscope, optical measuring instrument, traceability
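Since the procedure traces the roughness parameter Ra to a reference, it is worth recalling how Ra is computed from a measured profile. The sketch below assumes the standard definition, Ra as the mean absolute deviation of the profile heights from their mean line; the profile values are illustrative, not measured data.

```python
# Sketch: arithmetic-mean roughness Ra from a sampled profile z(x),
# following the standard definition Ra = (1/n) * sum(|z_i - z_mean|).
# The profile values below are illustrative, not measured data.

def roughness_ra(profile):
    """Return Ra (same units as the input heights) for a list of z values."""
    mean_z = sum(profile) / len(profile)          # mean line of the profile
    return sum(abs(z - mean_z) for z in profile) / len(profile)

if __name__ == "__main__":
    z = [0.12, -0.08, 0.05, -0.11, 0.09, -0.07]   # heights in micrometres
    print(f"Ra = {roughness_ra(z):.4f} um")
```

In practice the same computation would be run on the profile extracted from the calibrated confocal measurement rather than on a hand-typed list.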
Procedia PDF Downloads 156
1089 Protocol for Dynamic Load Distributed Low Latency Web-Based Augmented Reality and Virtual Reality
Authors: Rohit T. P., Sahil Athrij, Sasi Gopalan
Abstract:
Currently, the content entertainment industry is dominated by mobile devices. As trends slowly shift towards Augmented/Virtual Reality applications, the computational demands on these devices are increasing exponentially, and we are already reaching the limits of hardware optimization. This paper proposes a software solution to this problem. By leveraging the capabilities of cloud computing, we can offload work from mobile devices to dedicated rendering servers that are far more powerful. But this introduces the problem of latency. This paper introduces a protocol that can achieve a high-performance, low-latency Augmented/Virtual Reality experience. There are two parts to the protocol. 1) In-flight compression: The main cause of latency in the system is the time required to transmit the camera frame from client to server. The round-trip time is directly proportional to the amount of data transmitted, so it can be reduced by compressing the frames before sending. Standard compression algorithms like JPEG yield only minor size reductions. Since the images to be compressed are consecutive camera frames, there are few changes between two consecutive images, so inter-frame compression is preferred. Inter-frame compression can be implemented efficiently using WebGL, but WebGL implementations limit the precision of floating-point numbers to 16 bits on most devices. This can introduce noise to the image due to rounding errors, which add up over time. This can be solved using an improved inter-frame compression algorithm: the algorithm detects changes between frames and reuses unchanged pixels from the previous frame, eliminating the need for floating-point subtraction and thereby cutting down on noise. Change detection is also improved drastically by taking the weighted average difference of pixels instead of the absolute difference.
The kernel weights for this comparison can be fine-tuned to match the type of image to be compressed. 2) Dynamic load distribution: Conventional cloud computing architectures work by offloading as much work as possible to the servers, but this approach can strain bandwidth and drive up server costs. The most optimal solution is obtained when the device utilizes 100% of its resources and the rest is done by the server. The protocol balances the load between server and client by performing a fraction of the computation on the device, depending on the power of the device and network conditions, and is responsible for dynamically partitioning the tasks. Special flags communicate the workload fraction between client and server and are updated at a constant interval of time (or frames). The whole protocol is designed to be client-agnostic. Flags are available to the client for resetting the frame, indicating latency, switching mode, etc. The server can react to client-side changes on the fly and adapt by switching to different pipelines. The server is designed to spread the load effectively and thereby scale horizontally, which is achieved by isolating client connections into different processes. Keywords: 2D kernelling, augmented reality, cloud computing, dynamic load distribution, immersive experience, mobile computing, motion tracking, protocols, real-time systems, web-based augmented reality application
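The change-detection step described in this abstract can be sketched in miniature. This is a minimal illustration rather than the authors' implementation: the Gaussian-like kernel weights and the threshold are assumed values, and real frames would be processed on the GPU rather than with Python lists.

```python
# Sketch of kernel-weighted inter-frame change detection: instead of a
# per-pixel absolute difference, each pixel is compared using a weighted
# average of differences over a small neighbourhood (the "2D kernel"),
# and only changed pixels are encoded. Kernel weights and threshold are
# illustrative assumptions, not values from the paper.

def weighted_diff(prev, curr, x, y, kernel):
    """Weighted neighbourhood difference of two greyscale frames at (x, y)."""
    h, w = len(curr), len(curr[0])
    k = len(kernel) // 2
    total, weight_sum = 0.0, 0.0
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            # clamp neighbourhood coordinates at the frame border
            yy = min(max(y + dy, 0), h - 1)
            xx = min(max(x + dx, 0), w - 1)
            wgt = kernel[dy + k][dx + k]
            total += wgt * abs(curr[yy][xx] - prev[yy][xx])
            weight_sum += wgt
    return total / weight_sum

def encode_changes(prev, curr, kernel, threshold):
    """Return a sparse list of (x, y, value) for pixels judged changed."""
    changes = []
    for y in range(len(curr)):
        for x in range(len(curr[0])):
            if weighted_diff(prev, curr, x, y, kernel) > threshold:
                changes.append((x, y, curr[y][x]))
    return changes

if __name__ == "__main__":
    KERNEL = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]   # assumed Gaussian-like weights
    prev = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
    curr = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
    print(encode_changes(prev, curr, KERNEL, threshold=30.0))  # → [(1, 1, 200)]
```

Only the single changed pixel is transmitted; the client reuses all other pixels from the previous frame, which is the bandwidth saving the protocol relies on.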
Procedia PDF Downloads 74
1088 Synoptic Analysis of a Heavy Flood in the Province of Sistan-Va-Balouchestan: Iran January 2020
Authors: N. Pegahfar, P. Ghafarian
Abstract:
In this research, the synoptic weather conditions during the heavy flood of 10-12 January 2020 in the Sistan-va-Balouchestan Province of Iran are analyzed. To this aim, reanalysis data from the National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR), NCEP Global Forecasting System (GFS) analysis data, measured data from a surface station, and satellite images from the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT) were used for 9 to 12 January 2020. Atmospheric parameters in both the lower and upper troposphere were examined, including absolute vorticity, wind velocity, temperature, geopotential height, relative humidity, and precipitation. Results indicated that both lower-level and upper-level currents were strong. In addition, the transport of a large amount of humidity from the Oman Sea and the Red Sea to the south and southeast of Iran (Sistan-va-Balouchestan Province) led to vast, unexpected precipitation and then a heavy flood. Keywords: Sistan-va-Balouchestan Province, heavy flood, synoptic, analysis data
Procedia PDF Downloads 102
1087 Quantum Dots Incorporated in Biomembrane Models for Cancer Marker
Authors: Thiago E. Goto, Carla C. Lopes, Helena B. Nader, Anielle C. A. Silva, Noelio O. Dantas, José R. Siqueira Jr., Luciano Caseli
Abstract:
Quantum dots (QDs) are semiconductor nanocrystals that can be employed in biological research as a tool for fluorescence imaging, with the potential to expand in vivo and in vitro analysis as biomarkers of cancerous cells. In particular, cadmium selenide (CdSe) magic-sized quantum dots (MSQDs) exhibit stable luminescence that is feasible for biological applications, especially for the imaging of tumor cells. It is therefore of interest to know the mechanisms by which such QDs mark biological cells, and simplified models are a suitable strategy for this. Among these models, Langmuir films of lipids formed at the air-water interface seem adequate, since they can mimic half a membrane. They are monomolecular films that form spontaneously when organic solutions of amphiphilic compounds are spread on a liquid-gas interface. After solvent evaporation, the monomolecular film is formed, and a variety of techniques, including tensiometric, spectroscopic, and optical ones, can be applied. When the monolayer is formed by membrane lipids at the air-water interface, it serves as a model for half a membrane, with the aqueous subphase representing the external or internal compartment of the cell. These films can be transferred to solid supports, forming the so-called Langmuir-Blodgett (LB) films, and a wider variety of techniques can then be used to characterize the film, allowing for the construction of devices and sensors. With these ideas in mind, the objective of this work was to investigate the specific interactions of CdSe MSQDs with tumorigenic and non-tumorigenic cells, using Langmuir monolayers and LB films of lipids and specific cell extracts as membrane models for the diagnosis of cancerous cells.
Surface pressure-area isotherms and polarization-modulation infrared reflection-absorption spectroscopy (PM-IRRAS) showed an intrinsic interaction between the quantum dots, inserted in the aqueous subphase, and Langmuir monolayers constructed either of selected lipids or of non-tumorigenic and tumorigenic cell extracts. The quantum dots expanded the monolayers and changed the PM-IRRAS spectra of the lipid monolayers. The mixed films were then compressed to high surface pressures and transferred from the floating monolayer to solid supports by the LB technique. Images of the films were obtained with atomic force microscopy (AFM) and confocal microscopy, which provided information about their morphology. Similarities and differences between films of different compositions representing cell membranes, with or without CdSe MSQDs, were analyzed. The results indicated that the interaction of the quantum dots with the bioinspired films is modulated by the lipid composition. The properties of the normal-cell monolayer were not significantly altered, whereas the monolayer models of tumorigenic cells presented significant alterations. The images therefore exhibited a stronger effect of CdSe MSQDs on the models representing cancerous cells. As an important implication of these findings, one may envisage new bioinspired surfaces based on molecular recognition for biomedical applications. Keywords: biomembrane, Langmuir monolayers, quantum dots, surfaces
Procedia PDF Downloads 196
1086 Application of 3D Apparel CAD for Costume Reproduction
Authors: Zi Y. Kang, Tracy D. Cassidy, Tom Cassidy
Abstract:
3D apparel CAD is a remarkable product of advanced technology that enables intuitive design, visualisation, and evaluation of garments through stereoscopic drape simulation. Progressive improvements in 3D apparel CAD have led to increasingly realistic clothing simulation, which is used not only in design development but also in presentation, promotion, and communication for fashion as well as for other industries such as film, games, and social network services. As a result, 3D clothing technology is becoming more ubiquitous in human culture and everyday life today. This study takes that phenomenon to imply that the technology has reached maturity, and that it is time to inspect its current status and explore potential uses that create cultural value. For this reason, this study aims to generate virtual costumes as culturally significant objects using 3D apparel CAD and to assess its capability and applicability, as well as audience attitudes towards clothing simulation, through comparison with physical counterparts. Since access to costume collections is often limited due to conservation concerns, the technology may make a valuable contribution to the democratization of culture and knowledge for museums and their audiences. This study is expected to provide foundational knowledge for the development of clothing technology and for expanding the boundary of its practical uses. To prevent any potential damage, two replicas of costumes from the 1860s and 1920s at the Museum of London were chosen as samples. Their structural, visual, and physical characteristics were measured and collected using patterns, scanned images of fabrics, and objective fabric measurements with a scale, KES-F (Kawabata Evaluation System of Fabrics), and Titan.
The commercial software DC Suite 5.0 was utilised to create virtual costumes from the collected data, and the following outcomes were produced for evaluation: images of the virtual costumes and video clips showing static and dynamic simulation. Focus groups were arranged with fashion design students and members of the public, who were shown the outcomes together with the physical samples, fabric swatches, and photographs. The similarities, applications, and acceptance of virtual costumes were assessed through discussion and a questionnaire. The findings show that the technology can produce realistic or plausible simulation, but the expression of some factors, such as fine details and lightweight materials, requires improvement. While the public group viewed the virtual costumes as interesting and futuristic replacements for the physical objects, the fashion student group noted more differences in detail and preferred the physical garments, highlighting the absence of tangibility. However, both groups underlined the advantages and potential of virtual costumes as effective and useful visual references for educational and exhibitory purposes. Although 3D apparel CAD has sufficient capacity to assist the garment design process, it has limits in producing identical replications, and more study on the accurate reproduction of details and drape is needed for its technical improvement. Nevertheless, the virtual costumes in this study demonstrated the technology's potential to contribute to the creation of cultural and educational value through its applicability and as an engaging way to offer 3D visual information. Keywords: digital clothing technology, garment simulation, 3D Apparel CAD, virtual costume
Procedia PDF Downloads 221
1085 SEM Analysis of the Effectiveness of the Acid Etching on Cat Enamel
Authors: C. Gallottini, W. Di Mari, C. De Carolis, A. Dolci, G. Dolci, L. Gallottini, G. Barraco, S. Eramo
Abstract:
The aim of this paper is to summarize the literature on the micromorphology and composition of cat enamel and to present an original SEM experiment on how it responds to etching with orthophosphoric acid for the times recommended in the veterinary literature (30", 45", 60"), which are derived from research and experience on human enamel. Twenty-one cat teeth were randomly divided into three groups of 7 (A, B, C): Group A was etched for 30 seconds with 40% orthophosphoric acid on a circular area of the coronal enamel about 2 mm in diameter; Groups B and C received the same treatment for 45 and 60 seconds, respectively. The samples were observed by SEM at a constant magnification of 1000x, framing in particular the border area between enamel exposed and not exposed to etching in order to highlight differences. The images were analyzed by three blinded operators experienced in electron microscopy. In cat enamel, etching for the times considered is not optimally effective for adhesive purposes, and the presence of a thick prismless layer could explain this situation. Clinically, this condition may be improved in a manner similar to what is proposed for the enamel of human deciduous teeth: a 1 mm bevel or chamfer on the cavity contour to expose the prismatic enamel and increase the bonding surface. Keywords: cat enamel, SEM, veterinary dentistry, acid etching
Procedia PDF Downloads 307
1084 Effect of Sewing Speed on the Physical Properties of Firefighter Sewing Threads
Authors: Adnan Mazari, Engin Akcagun, Antonin Havelka, Funda Buyuk Mazari, Pavel Kejzlar
Abstract:
This article experimentally investigates various physical properties of special fire-retardant sewing threads under different sewing speeds. Aramid threads are commonly used for sewing firefighter clothing due to their high strength and high melting temperature. Three types of aramid threads with different linear densities were used for sewing at speeds of 2,000 to 4,000 r/min. The needle temperature was measured at the different sewing speeds, and the tensile properties of the threads were measured before and after the sewing process. The results show that friction and abrasion during sewing cause a significant loss in the tensile properties of the threads, and that the needle temperature rises to nearly 300 °C at a machine speed of 4,000 r/min. Scanning electron microscope images taken before and after sewing show no melting spots but significant damage to the yarn. It was also found that a machine speed of 2,000 r/min is ideal for sewing firefighter clothing, balancing higher tensile properties with production. Keywords: Kevlar, needle temperature, Nomex, sewing
Procedia PDF Downloads 532
1083 Dual-Channel Reliable Breast Ultrasound Image Classification Based on Explainable Attribution and Uncertainty Quantification
Authors: Haonan Hu, Shuge Lei, Dasheng Sun, Huabin Zhang, Kehong Yuan, Jian Dai, Jijun Tang
Abstract:
This paper focuses on the classification of breast ultrasound images and investigates the reliability measurement of classification results. A dual-channel evaluation framework was developed based on the proposed inference-reliability and predictive-reliability scores. For the inference-reliability evaluation, human-aligned, doctor-agreed inference rationales based on the improved feature attribution algorithm SP-RISA are applied. Uncertainty quantification is used to evaluate the predictive reliability via test-time enhancement. The effectiveness of this reliability evaluation framework has been verified on the clinical breast ultrasound dataset YBUS, and its robustness has been verified on the public dataset BUSI. The expected calibration errors on both datasets are significantly lower than those of traditional evaluation methods, which demonstrates the effectiveness of the proposed reliability measurement. Keywords: medical imaging, ultrasound imaging, XAI, uncertainty measurement, trustworthy AI
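The predictive-reliability idea can be illustrated with a toy sketch: the same input is perturbed several times, the classifier is run on each copy, and the spread of its outputs serves as an uncertainty score. The stand-in classifier, the noise model, and all parameter values below are assumptions for illustration; the paper's actual model, the SP-RISA attribution, and its specific test-time enhancement scheme are not reproduced here.

```python
# Toy sketch of uncertainty scoring via test-time perturbation: run a
# classifier on several noisy copies of one input and report the mean
# prediction together with its spread. Everything here (the classifier,
# the noise level, the augmentation count) is an illustrative assumption.
import random
import statistics

def toy_classifier(pixels):
    """Stand-in model: 'malignant' probability from mean pixel intensity."""
    m = sum(pixels) / len(pixels)
    return min(max(m / 255.0, 0.0), 1.0)

def tta_uncertainty(pixels, n_augment=32, noise=5.0, seed=0):
    """Mean prediction and its std-dev over noisy augmented copies."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_augment):
        augmented = [p + rng.gauss(0.0, noise) for p in pixels]
        preds.append(toy_classifier(augmented))
    return statistics.mean(preds), statistics.stdev(preds)

if __name__ == "__main__":
    mean_p, spread = tta_uncertainty([120, 130, 125, 128] * 64)
    print(f"prediction={mean_p:.3f}, uncertainty={spread:.4f}")
```

A prediction whose spread stays small under perturbation would be scored as reliable; a large spread flags the prediction for caution, which is the intuition behind calibration-aware evaluation.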
Procedia PDF Downloads 101
1082 Thin Films of Copper Oxide Deposited by Sol-Gel Spin Coating Method: Effect of Annealing Temperature on Structural and Optical Properties
Authors: Touka Nassim, Tabli Dalila
Abstract:
In this study, CuO thin films synthesized via a simple sol-gel method were deposited on glass substrates by the spin-coating technique and annealed at various temperatures. The samples were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), Fourier-transform infrared (FT-IR) and Raman spectroscopy, and UV-visible spectroscopy. Structural characterization by XRD reveals that the as-prepared films were of the tenorite phase and had a high level of purity and crystallinity. The crystallite size of the CuO films was affected by the annealing temperature and was estimated in the range of 20-31.5 nm. SEM images show a homogeneous distribution of spherical nanoparticles over the surface of the films annealed at 350 and 450 °C. Vibrational spectroscopy revealed vibration modes specific to monoclinic CuO, at 289 cm−1 on the Raman spectra and around 430-580 cm−1 on the FT-IR spectra. Electronic investigation by UV-visible spectroscopy showed that the films have high absorbance in the visible region and that their optical band gap increases from 2.40 to 2.66 eV (a blue shift) as the annealing temperature increases from 350 to 550 °C. Keywords: sol-gel, spin coating method, copper oxide, thin films
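Crystallite sizes in this range are typically estimated from XRD peak broadening via the Scherrer equation, D = Kλ/(β cos θ). The abstract does not state which method was used, so this worked example is an assumption, and the peak width and angle below are illustrative values for a CuO reflection rather than data from the study.

```python
# Worked example of the Scherrer estimate commonly used to obtain
# crystallite size from XRD peak broadening: D = K * lambda / (beta * cos(theta)).
# Peak width and angle are illustrative assumptions, not data from this study.
import math

def scherrer_size(wavelength_nm, fwhm_rad, two_theta_deg, k=0.9):
    """Crystallite size in nm from an XRD peak (FWHM in radians)."""
    theta = math.radians(two_theta_deg / 2.0)     # Bragg angle, not 2-theta
    return k * wavelength_nm / (fwhm_rad * math.cos(theta))

if __name__ == "__main__":
    # Cu K-alpha wavelength; a ~0.29 deg FWHM peak near 2-theta = 38 deg
    d = scherrer_size(0.15406, math.radians(0.29), 38.0)
    print(f"D = {d:.1f} nm")  # falls within the 20-31.5 nm range reported above
```

Broader peaks (larger β) give smaller crystallites, which is consistent with the reported size increasing as annealing improves crystallinity.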
Procedia PDF Downloads 160
1081 Scalable CI/CD and Scalable Automation: Assisting in Optimizing Productivity and Fostering Delivery Expansion
Authors: Solanki Ravirajsinh, Kudo Kuniaki, Sharma Ankit, Devi Sherine, Kuboshima Misaki, Tachi Shuntaro
Abstract:
In software development life cycles, the absence of scalable CI/CD significantly impacts organizations, leading to increased overall maintenance costs, prolonged release delivery times, heightened manual effort, and difficulty in meeting tight deadlines. Implementing CI/CD with standard serverless technologies using cloud services overcomes all of the above-mentioned issues and helps organizations improve efficiency and deliver faster without the need to manage server maintenance and capacity. By integrating scalable CI/CD with scalable automation testing, productivity, quality, and agility are enhanced while repetitive work and manual effort are reduced. Scalable CI/CD for development can be implemented using cloud services such as ECS (Elastic Container Service), AWS Fargate, ECR (to store Docker images with all dependencies), serverless computing (serverless virtual machines), cloud logs (for monitoring errors and logs), security groups (for inside/outside access to the application), Docker containerization (Docker-based images and container techniques), Jenkins (a CI/CD build management tool), and code management tools (GitHub, Bitbucket, AWS CodeCommit); such a setup can efficiently handle the demands of diverse development environments and accommodate dynamic workloads, increasing efficiency for faster delivery with good quality. CI/CD pipelines encourage collaboration among development, operations, and quality assurance teams by providing a centralized platform for automated testing, deployment, and monitoring. Scalable CI/CD streamlines the development process by automatically fetching the latest code from the repository every time the process starts, building the application based on the branches, testing it with a scalable automation testing framework, and deploying the builds. Developers can focus more on writing code and less on managing infrastructure, as it scales based on need.
Serverless CI/CD eliminates the need to manage and maintain traditional CI/CD infrastructure, such as servers and build agents, reducing operational overhead and allowing teams to allocate resources more efficiently. Scalable CI/CD adjusts the application's scale according to usage, alleviating concerns about scalability, maintenance costs, and resource needs. Building scalable automation testing on cloud services (ECR, ECS Fargate, Docker, EFS, serverless computing) lets organizations run more than 500 test cases in parallel, aiding the detection of race conditions and performance issues while reducing execution time. Scalable CI/CD offers flexibility, dynamically adjusting to varying workloads and demands and allowing teams to scale resources up or down as needed. It optimizes costs by paying only for resources as they are used, and it increases reliability. Scalable CI/CD pipelines employ automated testing and validation processes to detect and prevent errors early in the development cycle. Keywords: achieve parallel execution, cloud services, scalable automation testing, scalable continuous integration and deployment
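The parallel test fan-out can be illustrated in miniature. In the setup above the fan-out happens across ECS/Fargate containers; the hedged sketch below shows the same shape locally, with a thread pool distributing independent test cases across workers. The "test cases" and their failure pattern are trivial stand-ins, not part of the described system.

```python
# Minimal local sketch of the parallel test fan-out idea: distribute
# independent test cases across a worker pool and collect pass/fail
# results. In the described architecture the workers would be containers
# on ECS/Fargate; the test cases here are synthetic stand-ins.
from concurrent.futures import ThreadPoolExecutor

def run_test_case(case_id):
    """Stand-in for one independent automated test case."""
    passed = (case_id % 97) != 0          # synthetic failure pattern
    return case_id, passed

def run_suite(n_cases, max_workers=32):
    """Fan n_cases out across a worker pool and collect results in order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(run_test_case, range(n_cases)))
    failed = [cid for cid, ok in results if not ok]
    return len(results), failed

if __name__ == "__main__":
    total, failed = run_suite(500)
    print(f"ran {total} cases, {len(failed)} failed: {failed}")
```

Because each test case is independent, the suite scales with the number of workers; this independence is also what makes the container-per-connection isolation mentioned above straightforward.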
Procedia PDF Downloads 44
1080 The Hybridization of Muslim Spaces in Germany: A Historical Perspective on the Perception of Muslims
Authors: Alex Konrad
Abstract:
In 2017, about 4.5 million Muslims lived in Germany. They can practice their faith openly, mostly in well-equipped community centers. At the same time, right-wing politicians and media allege that all Muslims tend to be radical and undemocratic. Both perspectives are rooted in an interacting development that began in the 1970s. German authorities closed the 'King Fahd Academy' international school in Bonn in summer 2017 because they accused the school administration of attracting Islamists. Only 30 years earlier, German authorities and labor unions had directed their requests for pastoral care of the Muslim communities in Germany to the Turkish and Saudi administrations. This study traces the leading and misleading tracks of Muslim life and its perception in Germany from a historical point of view. Most Muslims came as so-called 'Gastarbeiter' (migrant workers) from Turkey and Morocco to West Germany in the 1960s and 1970s. Until the late 1970s, German society recognized them solely as a workforce and broadly ignored their religious needs. The Iranian Revolution of 1979 caused widespread hysteria about Islamic radicalization and likewise shifted the German perception of migrant workers: for the first time, the majority society saw them as religious people. Media and self-proclaimed 'experts' on Islam suspected Muslims in Germany of subversive and undemocratic beliefs. On the upside, Muslims obtained the opportunity to be heard by German society and authorities. In the ensuing decades, Muslims and Islamophiles fought a discursive struggle against right-wing politicians, 'experts', and media with monolithic views. In the 1990s, Muslims managed to establish a solid infrastructure of Islamic community centers throughout Germany. Their religious life became visible and contributed to diversifying the common monolithic image of Muslims in Germany as insane fundamentalists.
However, the media and many 'experts' promoted the fundamentalist narrative, which at the same time gained more and more acceptance in German society. This study uses archival sources from German authorities and Islamic communities, together with local and national media, to approach these contemporary historical debates closely. In addition, contributions by Muslims and Islamophiles in Germany, for example in magazines, event reports, and internal communications, which reveal their quotidian struggle for more acceptance, are used as sources. The inclusion of widely publicized books, documentaries, and newspaper articles about Islam as a menace to Europe contributes to a balanced analysis of the contemporary debates and views. Theoretically, the study applies the Third Space approach. Muslims in Germany fought the othering imposed by the German majority society; their chief purpose was not to be marginalized in either spatial meaning, discursive or physical. Therefore, they established realities of life as hybrids in Germany. This study reconstructs the development of the perception of Muslims in Germany. It claims that self-proclaimed experts and politicians with monolithic views maintained the hegemonic discursive positions and coined the German images of Muslims. Nevertheless, Muslims in Germany accomplished a Muslim presence in everyday life that became an integral part of society and the public sphere. This is how Muslims hybridized religious spaces in Germany. Keywords: experts, fundamentalism, Germany, hybridization, Islamophobia, migrant workers
Procedia PDF Downloads 226
1079 The Guide Presentation: The Grand Palace
Authors: Nuchanat Handumrongkul, Danaya Darnsawasdi, Anantachai Aeka
Abstract:
This research was conducted to provide a model for tour guides' oral presentations. Its purpose is to analyze the content used by tour guides in order to develop French-language teaching and study for tourism. The study employed audio recordings of these presentations, made in authentic situations as an interview method, with four guides as respondents and information providers. The data were analyzed through content analysis. The results showed that the tour guides described eight important items, giving more importance to details at Wat Phra Kaew, or the Temple of the Emerald Buddha, than at the palaces. They favored the buildings on the upper terrace, Buddhist cosmology, the decoration techniques, the royal chapel, the mural paintings, Thai offerings to Buddha images, and palaces with architectural features and functions, including royal ceremonies and others. This information represents the Thai characteristics of each building and other related content. The findings were used as a manual showing guides how to describe a tourist attraction, especially the temple and other related cultural topics of interest. Keywords: guide, guide presentation, Grand Palace, Buddhist cosmology
Procedia PDF Downloads 500
1078 Determination of Critical Organ Doses for Liver Scintigraphy Using Cr-51
Authors: O. Maranci, A. B. Tugrul
Abstract:
Scintigraphy is a diagnostic imaging method used in nuclear medicine: radiopharmaceuticals administered to the patient emit radiation, which is captured by gamma cameras to form two-dimensional images. Liver scintigraphy is widely used in nuclear medicine, and the gamma radioisotopes Tc-99m and Cr-51 can both be used for this purpose. The use of Cr-51 matters more for the patient's organ dose, as it has a higher energy and a longer half-life than Tc-99m. This study aims to determine the doses delivered to the critical organs of the patient during liver scintigraphy with the Cr-51 gamma radioisotope. Because conducting experimental studies on patients is extremely difficult for the determination of critical organ doses, a torso phantom was utilized to simulate liver scintigraphy, using 20 mini packages of Cr-51 placed on the organ. The radioisotope was produced by irradiation in the central thimble of the TRIGA Mark II reactor at 250 kW power. As a result of the study, the doses to the different critical organs were determined and evaluated. Keywords: critical organ doses, liver, scintigraphy, TRIGA Mark-II
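The half-life point can be made concrete with the standard decay law, A(t) = A0·exp(-ln 2 · t / T_half). The half-lives used below (about 27.7 days for Cr-51 and 6.0 hours for Tc-99m) are standard literature values, not measurements from this study.

```python
# Worked comparison of the half-life argument above: after one day a
# Cr-51 source retains almost all of its activity, while Tc-99m has gone
# through four half-lives. Half-lives are standard literature values.
import math

def remaining_fraction(t_hours, half_life_hours):
    """Fraction of initial activity left after t_hours of decay."""
    return math.exp(-math.log(2.0) * t_hours / half_life_hours)

if __name__ == "__main__":
    f_cr51 = remaining_fraction(24.0, 27.7 * 24.0)   # Cr-51, T_half ~27.7 d
    f_tc99m = remaining_fraction(24.0, 6.0)          # Tc-99m, T_half ~6.0 h
    print(f"after 24 h: Cr-51 {f_cr51:.3f}, Tc-99m {f_tc99m:.4f}")
```

The much slower decay of Cr-51 (roughly 97.5% remaining after a day, versus 1/16 for Tc-99m) illustrates why its contribution to the cumulative organ dose deserves the attention this study gives it.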
Procedia PDF Downloads 556