Search results for: ultrasound images.
645 Combined Feature Based Hyperspectral Image Classification Technique Using Support Vector Machines

Authors: Mrs. K. Kavitha, S. Arivazhagan

Abstract:

A spatial classification technique incorporating a state-of-the-art feature extraction algorithm is proposed in this paper for classifying the heterogeneous classes present in hyperspectral images. Classification accuracy can be improved only if both the feature extraction and the classifier selection are appropriate. As the classes in hyperspectral images are assumed to have different textures, textural classification is adopted. Run Length feature extraction is employed along with Principal Components and Independent Components. A hyperspectral image of the Indiana site taken by AVIRIS is used for the experiment. Among the original 220 bands, a subset of 120 bands is selected. The Gray Level Run Length Matrix (GLRLM) is calculated for the first forty of these bands, and from the GLRLMs the Run Length features of individual pixels are derived. Principal Components are calculated for the next forty bands, and Independent Components for the remaining forty. As Principal and Independent Components are able to represent the textural content of pixels, they are treated as features. The combination of Run Length features, Principal Components, and Independent Components forms the Combined Features used for classification. An SVM with a Binary Hierarchical Tree is used to classify the hyperspectral image. The results are validated against ground truth, and accuracies are calculated.
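
As an illustration of the combined-features classification step, a minimal Python sketch follows; the paper gives no implementation, so the feature dimensions, class count, and the use of concatenation are assumptions.

```python
# Illustrative sketch: combining per-pixel feature sets and classifying with an SVM.
# The three feature arrays (run-length, PCA, ICA) are assumed precomputed, one row per pixel.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pixels = 1000
run_length = rng.normal(size=(n_pixels, 11))   # stand-in GLRLM run-length features
pca_feats = rng.normal(size=(n_pixels, 40))    # principal components
ica_feats = rng.normal(size=(n_pixels, 40))    # independent components
labels = rng.integers(0, 8, size=n_pixels)     # hypothetical class labels

# "Combined Features": concatenate the three feature sets per pixel
combined = np.hstack([run_length, pca_feats, ica_feats])

clf = SVC(kernel="rbf")        # the paper arranges binary SVMs in a hierarchical tree
clf.fit(combined, labels)
print(clf.score(combined, labels))
```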

Keywords: Multi-class, Run Length features, PCA, ICA, classification and Support Vector Machines.

644 An Edge Detection and Filtering Mechanism of Two Dimensional Digital Objects Based on Fuzzy Inference

Authors: Ayman A. Aly, Abdallah A. Alshnnaway

Abstract:

The general idea behind the filter is to average a pixel using other pixel values from its neighborhood, while simultaneously taking care of important image structures such as edges. The main concern of the proposed filter is to distinguish between variations of the captured digital image caused by noise and those caused by image structure. Edges give an image its appearance of depth and sharpness; a loss of edges makes the image appear blurred or unfocused. However, noise smoothing and edge enhancement are traditionally conflicting tasks: since most noise filtering behaves like a low-pass filter, blurring of edges and loss of detail is a natural consequence, and techniques to remedy this inherent conflict often generate new noise during enhancement. In this work, a new fuzzy filter is presented for the reduction of additive noise in images. The filter consists of three stages: (1) define fuzzy sets in the input space to compute a fuzzy derivative for eight different directions, (2) construct a set of IF-THEN rules to perform fuzzy smoothing according to the contributions of neighboring pixel values, and (3) define fuzzy sets in the output space to obtain the filtered image with preserved edges. Experimental results on two-dimensional objects show the feasibility of the proposed approach.
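
A minimal sketch of stage (1), the eight-direction derivative computation that the fuzzy sets would operate on; the membership functions and the IF-THEN rule base are omitted, and the wrap-around border handling here is an assumption.

```python
# Directional derivatives for the 8 neighbours of each pixel, the raw inputs
# for the fuzzy sets of stage (1). Borders wrap here (np.roll); a real filter
# would pad instead. All names are illustrative.
import numpy as np

def directional_derivatives(img: np.ndarray) -> dict:
    offsets = {"N": (-1, 0), "NE": (-1, 1), "E": (0, 1), "SE": (1, 1),
               "S": (1, 0), "SW": (1, -1), "W": (0, -1), "NW": (-1, -1)}
    derivs = {}
    for name, (dy, dx) in offsets.items():
        shifted = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
        derivs[name] = shifted.astype(float) - img.astype(float)  # neighbour minus centre
    return derivs

img = np.random.randint(0, 256, (64, 64))
d = directional_derivatives(img)
print({k: v.shape for k, v in d.items()})
```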

Keywords: Additive noise, edge preserving filtering, fuzzy image filtering, noise reduction, two dimensional mechanical images.

643 Evaluation of Coastal Erosion in the Jurisdiction of the Municipalities of Puerto Colombia and Tubará, Atlántico, Colombia in Google Earth Engine with Landsat and Sentinel 2 Images

Authors: Francisco Javier Reyes Salazar, Héctor Mauricio Ramírez

Abstract:

Coastal zones are home to mangrove swamps, coral reefs, and seagrass ecosystems, which are among the most biodiverse and fragile on the planet. These areas support a great diversity of marine life; they are also extraordinarily important to humans for the provision of food, water, wood, and other associated goods and services, and they contribute to climate regulation. The central problem identified is the lack of an automated model that generates information on the dynamics of coastline change and coastal erosion. In this paper, coastlines from 1984 to 2020 were determined on the Google Earth Engine platform from Landsat and Sentinel images. We then computed the Modified Normalized Difference Water Index (MNDWI) and used the Digital Shoreline Analysis System (DSAS) v5.0. Starting from the 2020 coastline, the 10-year prediction (year 2031) shows an erosion of 238.32 hectares and an accretion of 181.96 hectares, while the 20-year prediction (year 2041) shows an erosion of 544.04 hectares and an accretion of 133.94 hectares. The erosion and accretion of Playa Muelle in the municipality of Puerto Colombia were established; it is expected to register the highest value of erosion. The land cover that presented the greatest change was artificialized territories.
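
A hedged sketch of the MNDWI step in the Google Earth Engine Python API (it requires an authenticated Earth Engine session): the Sentinel-2 band names (B3 green, B11 SWIR1) are standard, while the collection, dates, location, and the zero water threshold are illustrative choices, not the paper's.

```python
# MNDWI = (Green - SWIR1) / (Green + SWIR1), computed on a Sentinel-2 median composite.
import ee

ee.Initialize()

s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
        .filterDate("2020-01-01", "2020-12-31")
        .filterBounds(ee.Geometry.Point(-74.95, 10.99))  # approx. Puerto Colombia
        .median())

mndwi = s2.normalizedDifference(["B3", "B11"]).rename("MNDWI")
water = mndwi.gt(0)  # positive MNDWI is a common (assumed) water threshold
```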

Keywords: Coastline, coastal erosion, MNDWI, Google Earth Engine, Colombia.

642 Objects Extraction by Cooperating Optical Flow, Edge Detection and Region Growing Procedures

Authors: C. Lodato, S. Lopes

Abstract:

The image segmentation method described in this paper has been developed as a pre-processing stage for methodologies and tools for content-based video/image indexing and retrieval. The method solves the problem of extracting whole objects from the background, producing images of single complete objects from videos or photos. The extracted images are used to calculate the object visual features necessary for both the indexing and the retrieval processes. The segmentation algorithm is based on the cooperation of an optical flow evaluation method, edge detection, and region growing procedures. The optical flow estimator belongs to the class of differential methods. It can detect motions ranging from a fraction of a pixel to a few pixels per frame, achieves good results in the presence of noise without the need for a pre-filtering stage, and includes a specialised model for moving object detection. The first task of the presented method exploits the cues from motion analysis to detect moving areas. Objects and background are then refined using edge detection and seeded region growing procedures, respectively. All tasks are performed iteratively until objects and background are completely resolved. The method has been applied to a variety of indoor and outdoor scenes in which objects of different types and shapes appear on variously textured backgrounds.
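
An illustrative sketch of the motion-analysis stage: a dense optical-flow field thresholded into candidate moving areas, with edges extracted as cues for the refinement stage. The paper uses its own differential estimator; OpenCV's Farneback flow, the synthetic frames, and the threshold value are stand-ins.

```python
import cv2
import numpy as np

# Two synthetic frames: a bright square moves 3 px to the right between them.
prev_gray = np.zeros((120, 160), dtype=np.uint8)
gray = np.zeros_like(prev_gray)
cv2.rectangle(prev_gray, (40, 40), (70, 70), 255, -1)
cv2.rectangle(gray, (43, 40), (73, 70), 255, -1)

flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
magnitude = np.linalg.norm(flow, axis=2)
moving = (magnitude > 0.5).astype(np.uint8) * 255   # candidate moving areas
edges = cv2.Canny(gray, 100, 200)                   # cues for edge/region refinement
print(int(np.count_nonzero(moving)))
```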

Keywords: Image Segmentation, Motion Detection, Object Extraction, Optical Flow

641 Face Recognition Using Principal Component Analysis, K-Means Clustering, and Convolutional Neural Network

Authors: Zukisa Nante, Wang Zenghui

Abstract:

Face recognition is the problem of identifying or recognizing individuals in an image. This paper investigates a possible method to solve this problem: an amalgamation of Principal Component Analysis (PCA), K-Means clustering, and a Convolutional Neural Network (CNN) for a face recognition system. It is trained and evaluated using the ORL dataset, which consists of 400 face images in 40 classes, with 10 images per class. Firstly, PCA enables the use of a smaller network, which reduces the training time of the CNN: redundancy is removed and the variance is preserved with a smaller number of coefficients. Secondly, the K-Means clustering model is trained on the PCA-compressed data, which selects cluster centers with better characteristics. Lastly, the K-Means features serve as initial values and input data for the CNN. The accuracy and performance of the proposed method were tested against other face recognition techniques, namely PCA, Support Vector Machine (SVM), and K-Nearest Neighbour (kNN). In our experiments, the suggested method achieved the highest performance after 90 epochs: 99% accuracy and F1-score, 99% precision, and 99% recall in 463.934 seconds. It outperformed PCA, which obtained 97%, and kNN, which obtained 84%. This method therefore proved efficient in identifying faces in images.
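
A sketch of the PCA and K-Means front end on ORL-sized data, with the CNN stage only indicated in a comment; the component and cluster counts below are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

X = np.random.rand(400, 112 * 92)   # stand-in for 400 flattened ORL images (112 x 92)
y = np.repeat(np.arange(40), 10)    # 40 subjects x 10 images

pca = PCA(n_components=50)          # keep most of the variance with far fewer coefficients
X_pca = pca.fit_transform(X)

kmeans = KMeans(n_clusters=40, n_init=10, random_state=0).fit(X_pca)
features = kmeans.transform(X_pca)  # distances to the 40 centres, used as CNN input

# 'features' (with labels y) would then be fed to a small CNN classifier.
print(features.shape)               # (400, 40)
```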

Keywords: Face recognition, Principal Component Analysis, PCA, Convolutional Neural Network, CNN, Rectified Linear Unit, ReLU, feature extraction.

640 Implementing a Visual Servoing System for Robot Controlling

Authors: Maryam Vafadar, Alireza Behrad, Saeed Akbari

Abstract:

Nowadays, with emerging applications such as robot control in image processing, artificial vision for visual servoing is a rapidly growing discipline, and human-machine interaction plays a significant role in controlling robots. This paper presents a new algorithm for visual servoing, based on spatio-temporal volumes, that aims to control robots. In this algorithm, after applying the necessary pre-processing to the video frames, a spatio-temporal volume is constructed for each gesture and a feature vector is extracted. These volumes are then analyzed for matching in two consecutive stages. For hand gesture recognition and classification, we tested different classifiers, including k-Nearest Neighbor, learning vector quantization, and back-propagation neural networks. We tested the proposed algorithm on the collected data set, and the results showed a correct gesture recognition rate of 99.58 percent. We also tested the algorithm on noisy images, where it showed a correct recognition rate of 97.92 percent.

Keywords: Back propagation neural network, Feature vector, Hand gesture recognition, k-Nearest Neighbor, Learning vector quantization neural network, Robot control, Spatio-temporal volume, Visual servoing

639 Object Identification with Color, Texture, and Object-Correlation in CBIR System

Authors: Awais Adnan, Muhammad Nawaz, Sajid Anwar, Tamleek Ali, Muhammad Ali

Abstract:

The need for efficient information retrieval has increased more than ever in recent years because of the frequent use of digital information in our lives. A great deal of work exists in the area of textual information, but much less progress has been made for multimedia information. For text-based information, technologies such as data mining and data marts, which grew out of the basic database concepts of the 1960s, are now in routine use. In image search, and especially in image identification, computerized systems are still at a very early stage. One main reason for this is the widespread roots of image search, in which many areas such as artificial intelligence, statistics, image processing, and pattern recognition play a role. Even human psychology, perception, and cultural diversity have their share in the design of a good and efficient image recognition and retrieval system. A new object-based search technique is presented in this paper in which objects in the image are identified on the basis of their geometrical shape and other features such as color and texture, while object correlation augments the search process. To stay focused on object identification, simple images are selected for this work to reduce the role of segmentation in the overall process; however, the same technique can also be applied to other images.

Keywords: Object correlation, geometrical shape, color, texture, features, contents.

638 Pre-Operative Tool for Facial-Post-Surgical Estimation and Detection

Authors: Ayat E. Ali, Christeen R. Aziz, Merna A. Helmy, Mohammed M. Malek, Sherif H. El-Gohary

Abstract:

Goal: The purpose of this project was to predict the outcome of plastic surgery using pre-operative images of plastic surgery patients and to show this prediction on a screen, allowing a comparison between the current appearance and the appearance after surgery. Methods: To this end, we implemented software that uses data collected from the internet on facial skin diseases, skin burns, and pre- and post-operative images of plastic surgeries; the post-surgical prediction is made using the K-Nearest Neighbor (KNN) algorithm. We also designed and fabricated a smart mirror divided into two parts, a screen and a reflective mirror, so that the patient's pre- and post-operative appearance can be shown at the same time. Results: We worked on several skin conditions, such as vitiligo, skin burns, and wrinkles. We classified the three degrees of burns using a KNN classifier with 60% accuracy, and we succeeded in segmenting the vitiligo-affected area. Our future work will include covering more skin diseases, classifying them, and predicting the post-surgical appearance, as well as going deeper into facial deformities and plastic surgeries such as nose reshaping and face slimming. Conclusion: Our project gives a prediction that relates strongly to the real post-surgical appearance and reduces diagnostic disagreement among doctors. Significance: The mirror may have broad societal appeal, as it narrows the gap between patient satisfaction and medical standards.
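
A minimal sketch of the burn-degree classification step; the paper does not specify its feature extraction, so the feature vectors, sample counts, and split below are stand-ins.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

X = np.random.rand(120, 48)          # stand-in image feature vectors, one row per image
y = np.random.randint(1, 4, 120)     # burn degrees 1-3

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)  # k is an assumed value
print(knn.score(X_te, y_te))
```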

Keywords: K-nearest neighbor, face detection, vitiligo, bone deformity.

637 A FE-Based Scheme for Computing Wave Interaction with Nonlinear Damage and Generation of Harmonics in Layered Composite Structures

Authors: R. K. Apalowo, D. Chronopoulos

Abstract:

A Finite Element (FE) based scheme is presented for quantifying guided wave interaction with Localised Nonlinear Structural Damage (LNSD) within structures of arbitrary layering and geometric complexity. The through-thickness mode shape of the structure is obtained through a wave and finite element (WFE) method. This is applied in a time-domain FE simulation in order to generate time-harmonic excitation for a specific wave mode. Interaction of the wave with LNSD within the system is computed through an element activation and deactivation iteration. The scheme is validated against experimental measurements and against a WFE-FE methodology for calculating wave interaction with damage. Case studies of guided wave interaction with a crack and a delamination are presented to verify the robustness of the proposed method in classifying and identifying damage.

Keywords: Layered Structures, nonlinear ultrasound, wave interaction with nonlinear damage, wave finite element, finite element.

636 Monitoring the Effect of Doxorubicin Liposomal in VX2 Tumor Using Magnetic Resonance Imaging

Authors: Ren-Jy Ben, Jo-Chi Jao, Chiu-Ya Liao, Ya-Ru Tsai, Lain-Chyr Hwang, Po-Chou Chen

Abstract:

Cancer is still one of the most serious diseases threatening human lives, so early diagnosis and effective treatment of tumors are very important issues. Animal carcinoma models provide a simulation tool for studies of pathogenesis, biological characteristics, and therapeutic effects. Recently, drug delivery systems have been rapidly developed to improve therapeutic effects. Liposomes play an increasingly important role in clinical diagnosis and therapy by delivering a pharmaceutical or contrast agent to targeted sites; they can be absorbed and excreted by the human body and are known to be harmless. This study aimed to compare the therapeutic effects of encapsulated (doxorubicin liposomal, LipoDox) and un-encapsulated (doxorubicin, Dox) anti-tumor drugs using magnetic resonance imaging (MRI). Twenty-four New Zealand rabbits implanted with VX2 carcinoma in the left thigh were divided into three groups of eight rabbits each: a control group (untreated), a Dox-treated group, and a LipoDox-treated group. MRI scans were performed three days after tumor implantation on a 1.5 T GE Signa HDxt whole-body MRI scanner with a high-resolution knee coil. After a 3-plane localizer scan, three-dimensional (3D) fast spin echo (FSE) T2-weighted imaging (T2WI) was used for tumor volumetric quantification. Afterwards, two-dimensional (2D) spoiled gradient recalled echo (SPGR) dynamic contrast-enhanced (DCE) MRI was used for tumor perfusion evaluation. DCE-MRI was designed to acquire four baseline images, followed by injection of the contrast agent Gd-DOTA through the ear vein of the rabbit; a series of 32 images was then acquired to observe the signal change over time in the tumor and muscle. MRI scanning was scheduled on a weekly basis for four weeks to observe tumor progression longitudinally. The Dox and LipoDox treatments were administered three times in the first week, starting immediately after the first MRI scan, i.e., three days after VX2 tumor implantation. ImageJ was used to quantify tumor volume and the time-course signal enhancement on DCE images. The changes in tumor size showed that the growth of VX2 tumors was effectively inhibited in both the LipoDox-treated and Dox-treated groups. Furthermore, the tumor volume of the LipoDox-treated group was significantly lower than that of the Dox-treated group, which implies that LipoDox has a better therapeutic effect than Dox. The signal intensity of the LipoDox-treated group was also significantly lower than that of the other two groups, which implies that the targeted therapeutic drug remained in the tumor tissue. This study provides a radiation-free and non-invasive MRI method for the therapeutic monitoring of targeted liposomes in an animal tumor model.
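
An illustrative sketch of the DCE time-course quantification: percent signal enhancement per time point relative to the four baseline images. The ROI means are random stand-ins for the ImageJ measurements.

```python
import numpy as np

roi_means = np.random.rand(36) * 100 + 200   # mean ROI signal: 4 baseline + 32 dynamic images
baseline = roi_means[:4].mean()

enhancement = (roi_means[4:] - baseline) / baseline * 100.0   # percent enhancement over time
print(enhancement.round(1))
```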

Keywords: Doxorubicin, dynamic contrast-enhanced MRI, lipodox, magnetic resonance imaging, VX2 tumor model.

635 Ice Load Measurements on Known Structures Using Image Processing Methods

Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka

Abstract:

This study employs a method based on image analysis and structure information to detect ice accumulated on known structures. The icing of marine vessels and offshore structures causes significant reductions in their efficiency and creates unsafe working conditions, so image processing methods are used to measure ice loads automatically. Most image processing methods are developed based on analyses of captured images. In this method, ice loads on structures are calculated by defining structure coordinates and processing captured images. A pyramidal structure with nine cylindrical bars is designed as the known structure of the experimental setup, and asymmetric ice accumulated on the structure in a cold room represents the actual experimental case. Camera intrinsic and extrinsic parameters are used to express the structure coordinates in the image coordinate system according to the camera location and angle. Thresholding is applied to the captured images to detect the iced structure in a binary image. The ice thickness of each element is calculated by combining the information from the binary image and the structure coordinates, and the thicknesses of the structure elements are obtained by averaging the ice diameters from different camera views. Comparison between ice load measurements using this method and the actual ice loads shows positive correlations within an acceptable range of error. The method can be applied to complex structures by defining the structure and camera coordinates.
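
A sketch of the image-side steps: thresholding a frame into a binary iced-structure mask and reading an apparent element diameter from it. The synthetic frame, the Otsu threshold, and the single scan-line measurement are illustrative simplifications.

```python
import cv2
import numpy as np

# Synthetic stand-in for a captured frame: a bright iced bar on a dark background.
frame = np.zeros((120, 120), dtype=np.uint8)
cv2.rectangle(frame, (50, 10), (70, 110), 200, -1)       # iced bar, ~20 px wide
frame = cv2.GaussianBlur(frame, (5, 5), 0)

_, binary = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

row = binary[60]                                         # scan line across the bar
iced_width_px = int(np.count_nonzero(row))               # apparent (bar + ice) diameter
# Subtracting the known bare-bar diameter (from the structure coordinates) and
# averaging over camera views yields the per-element ice thickness.
print(iced_width_px)
```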

Keywords: Camera calibration, Ice detection, ice load measurements, image processing.

634 Formation of Protective Aluminum-Oxide Layer on the Surface of Fe-Cr-Al Sintered-Metal-Fibers via Multi-Stage Thermal Oxidation

Authors: Loai Ben Naji, Osama M. Ibrahim, Khaled J. Al-Fadhalah

Abstract:

The objective of this paper is to investigate the formation and adhesion of a protective aluminum-oxide (Al2O3, alumina) layer on the surface of iron-chromium-aluminum alloy (Fe-Cr-Al) sintered-metal-fibers. The oxide-scale layer was developed via multi-stage thermal oxidation at 930 °C for 1 hour, followed by 1 hour at 960 °C, and finally 2 hours at 990 °C. Scanning Electron Microscope (SEM) images show that the multi-stage thermal oxidation resulted in the formation of predominantly platelet-like Al2O3 and whiskers. The SEM images also reveal non-uniform oxide-scale growth on the surface of the fibers. Furthermore, peeling/spalling of the alumina protective layer occurred after minimal handling, which indicates weak adhesion forces between the protective layer and the base metal alloy. Energy Dispersive Spectroscopy (EDS) analysis of the heat-treated Fe-Cr-Al sintered-metal-fibers confirmed the high aluminum content on the surface of the protective layer and the low aluminum content on the exposed base metal surface. In conclusion, failure of the oxide-scale protective layer exposes the base metal alloy to further oxidation, and the fragile, non-uniform oxide scale is not suitable as a support for catalysts.

Keywords: High-temperature oxidation, alumina protective layer, iron-chromium-aluminum alloy, sintered-metal-fibers.

633 Extraction of Natural Colorant from the Flowers of Flame of Forest Using Ultrasound

Authors: Sunny Arora, Meghal A. Desai

Abstract:

With the impetus towards green consumerism and the implementation of sustainable techniques, the consumption of natural products and the use of environmentally friendly techniques have gained accelerated acceptance. Butein, a natural colorant, has many medicinal properties apart from its use in the dyeing industry. Extraction of butein from the flowers of flame of forest was carried out using an ultrasonication bath. Solid loading (2-6 g), extraction time (30-50 min), solvent volume (30-50 mL), and solvent type (methanol, ethanol, and water) were studied to maximize the yield of butein using the Taguchi method. The highest butein yield of 4.67% (w/w) was obtained using 4 g of plant material, 40 min of extraction time, and 30 mL of methanol as the solvent. The present method provided a large reduction in extraction time compared to the conventional method of extraction. Hence, the outcome of the present investigation could be used to develop the method at a larger scale.

Keywords: Butein, flowers of flame of forest, Taguchi method, ultrasonic bath.

632 Evaluation of Carbon Dioxide Pressure through Radial Velocity Difference in Arterial Blood Modeled by Drift Flux Model

Authors: Aicha Rima Cheniti, Hatem Besbes, Joseph Haggege, Christophe Sintes

Abstract:

In this paper, we are interested in determining the carbon dioxide pressure in arterial blood from the radial velocity difference. The blood was modeled as a two-phase mixture (an aqueous carbon dioxide solution with carbon dioxide gas) using the drift flux model and the Young-Laplace equation. The mixture velocity distributions determined from the considered model permitted the calculation of the radial velocity distributions for different values of the mean mixture pressure, and of the mean carbon dioxide pressure once the mean mixture pressure is known. The radial velocity distributions are used to deduce a method for calculating the mean mixture pressure from the radial velocity difference between two positions, which is measured by ultrasound. The mean carbon dioxide pressure is then deduced from the mean mixture pressure.

Keywords: Mean carbon dioxide pressure, mean mixture pressure, mixture velocity, radial velocity difference.

631 3D Modelling and Numerical Analysis of Human Inner Ear by Means of Finite Elements Method

Authors: C. Castro-Egler, A. Durán-Escalante, A. García-González

Abstract:

This paper presents a method to generate a finite element model of the human inner ear auditory system. The geometric model was built from 2D images of a virtual model of the temporal bones; a point cloud was obtained manually from those images to construct a complete mesh of hexahedral elements. The main difference from predecessor models is the spiral shape of the cochlea, with its three scalae completely defined: the scala tympani, scala media, and scala vestibuli, separated by the basilar membrane and Reissner's membrane. To validate this model, numerical simulations were run with two configurations: an isolated inner ear and a whole model of the human auditory system. Ideal displacement conditions are applied over the oval window in the isolated inner ear model. The whole model is made up of the outer auditory canal, the tympanic membrane, the ossicular chain, and the inner ear; its boundary condition is 1 Pa over the auditory canal entrance. The numerical simulations by FEM were performed as a harmonic analysis over the frequency range 100-10,000 Hz in 100 Hz steps. The following results were obtained: the basilar membrane displacement, the scala media pressure along the cochlea length, and the transfer function of the middle ear normalized by the pressure at the tympanic membrane. The basilar membrane displacements and the pressure in the scala media make it possible to validate the frequency response of the basilar membrane.

Keywords: Finite elements method, human auditory system model, numerical analysis, 3D modelling cochlea.

630 Sperm Identification Using Elliptic Model and Tail Detection

Authors: Vahid Reza Nafisi, Mohammad Hasan Moradi, Mohammad Hosain Nasr-Esfahani

Abstract:

Conventional assessment of human semen is highly subjective, with considerable intra- and inter-laboratory variability. Computer-Assisted Sperm Analysis (CASA) systems provide a rapid and automated assessment of sperm characteristics, together with improved standardization and quality control. However, the outcome of CASA systems is sensitive to the experimental method. While conventional CASA systems use digital microscopes with phase-contrast accessories, producing higher-contrast images, we have used raw semen samples (no staining materials) and a regular light microscope with a digital camera attached directly to its eyepiece, to ensure low cost and simple assembly of the system. However, since accurately finding the sperm cells in the semen image is the first step of the examination and analysis, any error in this step can affect the outcome of the analysis. This article introduces and explains an algorithm for finding sperm cells in low-contrast images. First, an image enhancement algorithm is applied to remove extraneous particles from the image. Then, the foreground particles (including sperm cells and round cells) are segmented from the background. Finally, based on certain features and criteria, sperm cells are separated from the other cells.
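
A hedged sketch of the three-step chain (enhance, segment foreground, select ellipse-like cells); the enhancement method, the synthetic frame, and the shape criteria values are invented for illustration and are not the paper's.

```python
import cv2
import numpy as np

# Synthetic stand-in for a low-contrast frame: two ellipse-like "heads" and a round cell.
img = np.full((200, 200), 180, dtype=np.uint8)
cv2.ellipse(img, (60, 60), (10, 5), 30, 0, 360, 90, -1)     # head-like ellipse
cv2.ellipse(img, (140, 120), (11, 6), -20, 0, 360, 90, -1)  # head-like ellipse
cv2.circle(img, (100, 170), 9, 90, -1)                      # round cell

enhanced = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(img)  # enhancement step
_, fg = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
heads = []
for c in contours:
    if len(c) < 5:                             # fitEllipse needs at least 5 points
        continue
    (cx, cy), axes, angle = cv2.fitEllipse(c)
    minor, major = sorted(axes)
    if 1.3 < major / max(minor, 1e-6) < 3.0:   # elongated, head-like shape (assumed range)
        heads.append((cx, cy))
print(len(heads), "candidate sperm heads")     # expected: 2
```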

Keywords: Computer-Assisted Sperm Analysis (CASA), Sperm identification, Tail detection, Elliptic shape model.

629 Unpacking Chilean Preservice Teachers’ Beliefs on Practicum Experiences through Digital Stories

Authors: Claudio Díaz, Mabel Ortiz

Abstract:

An EFL teacher education programme in Chile takes five years to train a future teacher of English. Preservice teachers are prepared to learn an advanced level of English and to teach the language from 5th to 12th grade in the Chilean educational system. In the context of their first EFL Methodology course in year four, preservice teachers have to create a five-minute digital story that starts from a critical incident they have experienced as teachers-to-be during their observations or interventions in schools. A critical incident can be defined as a happening, a specific incident, or an event either observed by them or involving them; the happening sparks their thinking and may make them subsequently think differently about the particular event. When they create their digital stories, preservice teachers put technology, teaching practice, and theory together to narrate a story that is complemented by still images, moving images, text, sound effects, and music. The story should be told as a personal narrative explaining the critical incident. This presentation focuses on the creation process of 50 Chilean preservice teachers' digital stories, highlighting the critical incidents from which their stories started. It also unpacks preservice teachers' beliefs and reflections on their teaching practices in schools. These beliefs are coded and categorized through content analysis to reveal preservice teachers' most deeply rooted conceptions about English teaching and learning in Chilean schools. The findings seem to indicate that preservice teachers' beliefs are strongly mediated by contextual and affective factors.

Keywords: Beliefs, Digital stories, Preservice teachers, Practicum.

628 Ultrasound Assisted Method to Increase the Aluminum Dissolve Rate from Acidified Water

Authors: Wen Po Cheng, Chi Hua Fu, Ping Hung Chen, Ruey Fang Yu

Abstract:

Aluminum salt, which is generally present as a solid phase in water purification sludge (WPS), can be dissolved and recovered in a liquid phase by adding strong acid to the sludge solution. According to reaction kinetics, the quantity of dissolved aluminum salt is high when the reactant is in the form of small particles with a large specific surface area, and the reaction rate is high when the reaction temperature is high. Therefore, in this investigation, the WPS solution was treated with ultrasonic waves to break down the sludge, and different acids (1 N HCl and 1 N H2SO4) were used to acidify it, at dosages that yielded a solution pH of less than two. The results indicate that the quantity of aluminum dissolved in the H2SO4-acidified solution exceeded that in the HCl-acidified solution. Additionally, the ultrasonic treatment increased both the rate of aluminum dissolution and the amount dissolved. The quantity of aluminum dissolved at 60 °C was 1.5 to 2.0 times that at 25 °C.

Keywords: Coagulant, Aluminum, Ultrasonic, Acidification, Temperature, Sludge.

627 Image Transmission via Iterative Cellular-Turbo System

Authors: Ersin Gose, Kenan Buyukatak, Onur Osman, Osman N. Ucan

Abstract:

To compress 2D images while improving bit error performance and enhancing the images, a new scheme called the Iterative Cellular-Turbo System (IC-TS) is introduced. In IC-TS, the original image is partitioned into 2^N quantization levels, where N is the number of bit planes. Each of the N bit planes is coded by a turbo encoder and transmitted over an Additive White Gaussian Noise (AWGN) channel. At the receiver side, the bit planes are re-assembled, taking into consideration the neighborhood relationships of pixels in 2D images. Each noisy bit-plane value of the image is evaluated iteratively using the IC-TS structure, which is composed of an equalization block, the Iterative Cellular Image Processing Algorithm (ICIPA), and a turbo decoder, with an iterative feedback link between ICIPA and the turbo decoder. ICIPA uses the mean and standard deviation of the estimated values in each pixel neighborhood. The scheme yields highly satisfactory results in both Bit Error Rate (BER) and image enhancement performance for Signal-to-Noise Ratio (SNR) values below -1 dB, compared to the traditional turbo coding scheme and 2D filtering applied separately. Compression can also be achieved with IC-TS: less memory storage is used, and the data rate can be increased by up to N-1 times simply by choosing a smaller number of bit slices, sacrificing resolution. Hence, it is concluded that IC-TS is a promising approach for 2D image transmission, recovery of noisy signals, and image compression.
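
A minimal sketch of the bit-plane split that feeds the turbo encoder; the turbo coding, AWGN channel, and ICIPA iterations are outside this snippet.

```python
import numpy as np

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in 8-bit image
N = 8                                                      # 2^N = 256 quantization levels
planes = [(img >> b) & 1 for b in range(N)]                # LSB first; each plane coded separately

reassembled = sum(p.astype(np.uint16) << b for b, p in enumerate(planes)).astype(np.uint8)
assert np.array_equal(img, reassembled)                    # lossless when all planes are kept
# Dropping low-order planes trades resolution for rate, as the scheme exploits.
```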

Keywords: Iterative Cellular Image Processing Algorithm (ICIPA), Turbo Coding, Iterative Cellular Turbo System (IC-TS), Image Compression.

626 Screening of Congenital Heart Diseases with Fetal Phonocardiography

Authors: F. Kovács, K. Kádár, G. Hosszú, Á. T. Balogh, T. Zsedrovits, N. Kersner, A. Nagy, Gy. Jeney

Abstract:

The paper presents a novel screening method to indicate congenital heart diseases (CHDs) which could otherwise remain undetected because of their low severity: pregnancies not belonging to the high-risk population are not subject to regular fetal monitoring with ultrasound echocardiography. A CHD is a morphological defect of the heart that causes turbulent blood flow, and the turbulence appears as a murmur that can be detected by fetal phonocardiography (fPCG). The proposed method applies measurements on the maternal abdomen, and sophisticated processing of the recorded sound signal identifies the fetal heart murmur. The paper describes the challenges and the additional advantages of the fPCG method, including the possibility of measurements at home and its combination with the prescribed regular cardiotocographic (CTG) monitoring. The proposed screening process, implemented on a telemedicine system, provides enhanced safety against hidden cardiac diseases.

Keywords: Cardiac murmurs, fetal phonocardiography, screening of CHDs, telemedicine system.

625 A Supervised Learning Data Mining Approach for Object Recognition and Classification in High Resolution Satellite Data

Authors: Mais Nijim, Rama Devi Chennuboyina, Waseem Al Aqqad

Abstract:

Advances in the spatial and spectral resolution of satellite images have led to tremendous growth in large image databases. The data acquired through satellites, radars, and sensors contain important geographical information that can be used for remote sensing applications such as region planning and disaster management. Spatial data classification and object recognition are important tasks for many applications, but classifying and identifying objects manually from images is difficult. Object recognition is often treated as a classification problem, and this task can be performed using machine-learning techniques. Although many machine-learning algorithms exist, the classification here is done using supervised classifiers such as Support Vector Machines (SVM), since the area of interest is known. We propose a classification method that considers neighboring pixels in a region for feature extraction and evaluates classifications precisely according to neighboring classes for the semantic interpretation of a region of interest (ROI). A dataset was created for training and testing purposes; the attributes were generated from pixel intensity values and mean reflectance values. We demonstrate the benefits of using knowledge discovery and data-mining techniques on image data for accurate information extraction and classification from high-spatial-resolution remote sensing imagery.
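
A rough sketch of the neighborhood-feature idea: each pixel is described by its band values plus the mean of a small window around it, then classified with an SVM. The window size, band count, labels, and subsampling are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.svm import SVC

bands = np.random.rand(4, 128, 128)                           # stand-in multispectral image
mean3 = np.stack([uniform_filter(b, size=3) for b in bands])  # 3x3 neighbourhood mean per band

X = np.concatenate([bands, mean3]).reshape(8, -1).T   # (n_pixels, 8) feature rows
y = np.random.randint(0, 2, X.shape[0])               # e.g., waterbody / non-water labels

svm = SVC(kernel="rbf").fit(X[::50], y[::50])         # subsampled training for speed
print(svm.score(X[::200], y[::200]))
```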

Keywords: Remote sensing, object recognition, classification, data mining, waterbody identification, feature extraction.

624 Spectroscopic and SEM Investigation of TCPP in Titanium Matrix

Authors: R.Rahimi, F.Moharrami

Abstract:

Titanium gels doped with a water-soluble cationic porphyrin were synthesized by the sol–gel polymerization of Ti(OC4H9)4. In this work, we investigate the spectroscopic properties, along with SEM images, of tetracarboxyphenyl porphyrin (TCPP) incorporated into the porous matrix produced by the sol–gel technique.

Keywords: TCPP, Titanium matrix, UV/Vis spectroscopy, SEM.

623 Evaluating the Radiation Dose Involved in Interventional Radiology Procedures

Authors: Kholood Baron

Abstract:

Radiologic interventional studies use fluoroscopic imaging guidance to perform both diagnostic and therapeutic procedures. These can result in high radiation doses being delivered to patients and to the radiology team, due to prolonged fluoroscopy times and the large number of images taken, even when dose-minimizing techniques and modern fluoroscopic tools are applied. Moreover, these procedures are part of the everyday routine of interventional radiologists, assistant nurses, and radiographers. It is therefore important to estimate the radiation dose they receive in order to give objective advice and reduce the exposure of both the patient and the radiology team. The aim of this study was to determine the total radiation dose reaching the radiologist and the patient during an interventional procedure, and to determine the impact of certain parameters on the patient dose. The radiation dose was measured with thermoluminescent dosimeters (TLDs). Physicians, patients, nurses, and radiographers wore TLDs during 12 interventional radiology procedures performed in two hospitals, Mubarak and Chest Hospital. This study highlights the need for interventional radiologists to be mindful of the radiation doses received by both patients and medical staff during interventional radiology procedures. The findings emphasize the impact of factors such as fluoroscopy duration and the number of images taken on the patient dose. By raising awareness and providing insights into optimizing techniques and protective measures, this research contributes to the overall goal of reducing radiation doses and ensuring the safety of patients and medical staff.

Keywords: Dosimetry, radiation dose, interventional radiology procedures, patient radiation dose.

622 Bioactivity of Peptides from Two Mushrooms

Authors: Parisa Farzaneh, Azade Harati

Abstract:

Mushrooms, or macro-fungi, are an important superfood containing many bioactive compounds, particularly bio-peptides. In this research, mushroom proteins were extracted with buffer, or buffer plus salt (0.15 M), in an ultrasound bath to release the intercellular protein. The largest fraction of the mushroom proteins was categorized as albumins. The proteins were then hydrolyzed into peptides by endogenous and exogenous proteases, including gastrointestinal enzymes. The potency of the endogenous proteases was higher in Agaricus bisporus than in Terfezia claveryi, as their activity was terminated only at 75 °C for 15 min. The blanching process, the endogenous enzymes, and the mixtures of gastrointestinal enzymes (pepsin-trypsin-α-chymotrypsin or trypsin-α-chymotrypsin) produced hydrolysates with different antioxidant and antibacterial activities. Peptide fractions produced with different cut-off ultrafilters also showed various levels of radical scavenging, lipid peroxidation inhibition, and antibacterial activity. The bio-peptides with the superior bioactivities (the T. claveryi fraction below 3 kDa) were resistant to various environmental conditions (pH and temperature). They are therefore good candidates for addition to nutraceutical and pharmaceutical preparations or functional foods, even during processing.

Keywords: Bio-peptides, mushrooms, gastrointestinal enzymes, bioactivities.

621 Template-Based Object Detection through Partial Shape Matching and Boundary Verification

Authors: Feng Ge, Tiecheng Liu, Song Wang, Joachim Stahl

Abstract:

This paper presents a novel template-based method to detect objects of interest in real images by shape matching. To locate a target object that has a similar shape to a given template boundary, the proposed method integrates three components: contour grouping, partial shape matching, and boundary verification. In the first component, low-level image features, including edges and corners, are grouped into a set of perceptually salient closed contours using an extended ratio-contour algorithm. In the second component, we develop a partial shape matching algorithm to identify the fractions of detected contours that partly match given template boundaries. Specifically, we represent template boundaries and detected contours using landmarks, and apply a greedy algorithm to search for matched landmark subsequences. For each matched fraction between a template and a detected contour, we estimate an affine transform that maps the whole template to a hypothetical boundary. In the third component, we provide an efficient algorithm based on oriented edge lists to determine the target boundary from the hypothetical boundaries by checking each of them against image edges. We evaluate the proposed method on recognizing and localizing 12 template leaves in a data set of real images with cluttered backgrounds, illumination variations, occlusions, and image noise. The experiments demonstrate the high performance of the proposed method.
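
A sketch of the affine-estimation step in the second component, using synthetic matched landmarks; OpenCV's estimateAffine2D stands in for the paper's own estimation procedure.

```python
import cv2
import numpy as np

# Matched landmark pairs: template points and their (synthetic) contour correspondences.
template_pts = np.array([[0, 0], [10, 0], [10, 20], [0, 20], [5, 25]], dtype=np.float32)
A_true = np.array([[1.2, 0.1, 5.0], [-0.1, 0.9, 3.0]], dtype=np.float32)   # ground-truth affine
contour_pts = cv2.transform(template_pts.reshape(-1, 1, 2), A_true).reshape(-1, 2)

# Estimate the affine transform, then map the whole template to a hypothetical boundary.
A_est, inliers = cv2.estimateAffine2D(template_pts, contour_pts)
hypothetical = cv2.transform(template_pts.reshape(-1, 1, 2),
                             A_est.astype(np.float32)).reshape(-1, 2)
print(np.allclose(A_est, A_true, atol=1e-3))   # recovers the transform from exact matches
```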

Keywords: Object detection, shape matching, contour grouping.

620 Statistical Feature Extraction Method for Wood Species Recognition System

Authors: Mohd Iz'aan Paiz Bin Zamri, Anis Salwa Mohd Khairuddin, Norrima Mokhtar, Rubiyah Yusof

Abstract:

Effective statistical feature extraction and classification are important in image-based automatic inspection and analysis. An automatic wood species recognition system is designed to perform wood inspection at customs checkpoints to avoid the mislabeling of timber, which results in a loss of income for the timber industry. The system focuses on analyzing the statistical pore properties of wood images. This paper proposes a fuzzy-based feature extractor that mimics experts' knowledge of wood texture to extract the pore distribution properties from the wood surface texture. The proposed feature extractor consists of two steps, namely pore extraction and fuzzy pore management; in total, 38 statistical features are extracted from each wood image. A backpropagation neural network is then used to classify the wood species based on these statistical features. A comprehensive set of experiments on a database of 5200 macroscopic images from 52 tropical wood species was used to evaluate the performance of the proposed feature extractor. The advantage of the proposed technique is that it mimics the experts' interpretation of wood texture, which allows human involvement in analyzing the texture. Experimental results show the efficiency of the proposed method.
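
A minimal sketch of the classification back end: a backpropagation network over the 38 statistical pore features. The feature values, network size, and split are stand-ins, and the fuzzy extractor itself is not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X = np.random.rand(5200, 38)            # 38 features per macroscopic image (stand-in values)
y = np.repeat(np.arange(52), 100)       # 52 species x 100 images

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)  # assumed topology
net.fit(X_tr, y_tr)                     # trained by backpropagation
print(net.score(X_te, y_te))
```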

Keywords: Classification, fuzzy, inspection system, image analysis.

619 A New High Speed Neural Model for Fast Character Recognition Using Cross Correlation and Matrix Decomposition

Authors: Hazem M. El-Bakry

Abstract:

Neural processors have shown good results for detecting a given character in an input matrix. In this paper, a new idea to speed up the operation of neural processors for character detection is presented. Such processors are designed based on cross correlation, computed in the frequency domain, between the input matrix and the weights of the neural networks. The approach reduces the computation steps required by these faster neural networks for the searching process. The principle of the divide-and-conquer strategy is applied through image decomposition: each image is divided into small sub-images, and each one is tested separately using a single faster neural processor. Furthermore, faster character detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time, using the same number of faster neural networks. In contrast to using faster neural processors alone, the speed-up ratio increases with the size of the input image when faster neural processors are combined with image decomposition. Moreover, the problem of local sub-image normalization in the frequency domain is solved, and the effect of image normalization on the speed-up ratio of character detection is discussed. Simulation results show that local sub-image normalization through weight normalization is faster than sub-image normalization in the spatial domain, and the overall speed-up ratio of the detection process increases when the normalization of weights is done off-line.
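
A minimal sketch of the core operation, cross correlation computed in the frequency domain via the correlation theorem, together with the image-decomposition idea; sizes and values are illustrative.

```python
import numpy as np

x = np.random.rand(256, 256)    # input matrix (e.g., an image or sub-image)
w = np.random.rand(256, 256)    # neural weights, zero-padded to the input size

# Correlation theorem: corr(x, w) = IFFT( FFT(x) * conj(FFT(w)) )
corr = np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(w))))
print(corr.shape)

# Image decomposition: small sub-images can be tested independently (and in parallel).
subs = [x[i:i + 64, j:j + 64] for i in range(0, 256, 64) for j in range(0, 256, 64)]
print(len(subs))                # 16 sub-images of 64 x 64
```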

Keywords: Fast Character Detection, Neural Processors, Cross Correlation, Image Normalization, Parallel Processing.

618 Development of a Non-invasive System to Measure the Thickness of the Subcutaneous Adipose Tissue Layer for Human

Authors: Hyuck Ki Hong, Young Chang Jo, Yeon Shik Choi, Beom Joon Kim, Hyo Derk Park

Abstract:

To measure the thickness of the subcutaneous adipose tissue layer, a non-invasive optical measurement system (λ = 1300 nm) is introduced. Animal and human subjects were used for the experiments. The results for human subjects were compared with ultrasound device measurements, and a high correlation (r = 0.94 for n = 11) was observed. There are two modes in the corresponding signals measured by the optical system, which can be explained by two-layered and three-layered tissue models. If the target tissue is thinner than a critical thickness, the data detected using the diffuse reflectance method follow the three-layered tissue model, so the signal increases as the thickness increases. On the other hand, if the target tissue is thicker than the critical thickness, the data follow the two-layered tissue model, so the signal decreases as the thickness increases.

Keywords: Subcutaneous adipose tissue layer, non-invasive measurement system, two-layered and three-layered tissue models.

617 FZP Design Considering Spherical Wave Incidence

Authors: Sergio Pérez-López, Daniel Tarrazó-Serrano, José M. Fuster, Pilar Candelas, Constanza Rubio

Abstract:

Fresnel Zone Plates (FZPs) are widely used in many areas, such as optics, microwaves, and acoustics. In the design of FZPs, plane wave incidence is typically assumed, but that is not usually the case in ultrasound, especially in applications where a piston emitter is placed at a certain distance from the lens. In these cases, control of the focal distance is very important, and with the usual Fresnel equation a displacement of the focus from the theoretical distance is observed because of the plane wave assumption. In this work, a comparison between an FZP designed for plane wave incidence and an FZP designed for a point source is presented for the case of a piston emitter. The influence of the main piston parameters on the final focusing profile has been studied. Numerical models and experimental results are shown, and they prove that when spherical wave incidence is considered for the piston case, fine control of the focal distance is possible in comparison with the classical design method.
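
A hedged sketch contrasting the two design rules: for plane waves the zone radii satisfy sqrt(r_n^2 + F^2) = F + n*lambda/2, while for a point source at distance d the path condition becomes sqrt(r_n^2 + d^2) + sqrt(r_n^2 + F^2) = d + F + n*lambda/2, solved numerically below. The frequency, focal length, and source distance are example values only, not the paper's configuration.

```python
import numpy as np
from scipy.optimize import brentq

c, f0 = 1482.0, 250e3            # speed of sound in water (m/s), frequency (Hz); example values
lam = c / f0
F, d = 50e-3, 150e-3             # focal length and point-source (piston) distance, in metres

def plane_radius(n: int) -> float:
    # Classical plane-wave design: r_n = sqrt(n*lam*F + (n*lam/2)^2)
    return float(np.sqrt(n * lam * F + (n * lam / 2) ** 2))

def point_source_radius(n: int) -> float:
    # Spherical-incidence design: solve the half-wavelength path condition for r_n.
    g = lambda r: np.hypot(r, d) + np.hypot(r, F) - (d + F + n * lam / 2)
    return brentq(g, 1e-9, 0.5)

for n in range(1, 6):
    print(n, plane_radius(n), point_source_radius(n))  # point-source radii come out smaller
```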

Keywords: Focusing, Fresnel zone plate, ultrasound, spherical wave incidence, piston emitter.

616 An Advanced Stereo Vision Based Obstacle Detection with a Robust Shadow Removal Technique

Authors: Saeid Fazli, Hajar Mohammadi D., Payman Moallem

Abstract:

This paper presents a robust method to detect obstacles in stereo images using a shadow removal technique and color information. Stereo-vision-based obstacle detection aims to detect obstacles and compute their depth using stereo matching and a disparity map. The proposed method is divided into three phases: the first detects obstacles and removes shadows, the second performs matching, and the last computes depth. In the first phase, we propose a robust method for detecting obstacles in stereo images using a shadow removal technique based on color information in HSI space. In the matching phase, we use Normalized Cross-Correlation (NCC) matching with a 5 × 5 window: we prepare an empty matching table τ and grow disparity components by drawing a seed s from a seed set S, computed using the Canny edge detector, and adding it to τ. In this way we achieve higher performance than previous works [2,17]. The resulting fast stereo matching algorithm visits only a small fraction of the disparity space in order to find a semi-dense disparity map, growing from a small set of correspondence seeds. The obstacles identified in phase one that appear in the disparity map of phase two enter the third phase, depth computation. Finally, experimental results are presented to show the effectiveness of the proposed method.
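
A minimal sketch of the 5 × 5 NCC score used when matching a left-image window against a candidate right-image window; the seed-growing logic and matching table τ are omitted, and the synthetic disparity is for illustration.

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation of two equally sized windows, in [-1, 1]."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

left = np.random.randint(0, 256, (64, 64))
right = np.roll(left, 3, axis=1)            # synthetic disparity of 3 pixels

y, x, disp = 30, 30, 3
w_l = left[y - 2:y + 3, x - 2:x + 3]                     # 5 x 5 window in the left image
w_r = right[y - 2:y + 3, x - 2 + disp:x + 3 + disp]      # candidate at the tested disparity
print(ncc(w_l, w_r))                                     # ~1.0 at the correct disparity
```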

Keywords: Obstacle detection, stereo vision, shadow removal, color, stereo matching.
