Search results for: multi sensor image fusion
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8212

7582 Noninvasive Disease Diagnosis through Breath Analysis Using DNA-functionalized SWNT Sensor Array

Authors: W. J. Zhang, Y. Q. Du, M. L. Wang

Abstract:

Noninvasive diagnosis of diseases via breath analysis has attracted considerable scientific and clinical interest for many years and has become increasingly promising with the rapid advancement of nanotechnology and biotechnology. The volatile organic compounds (VOCs) in exhaled breath, which are mainly blood borne, provide particularly valuable information about an individual's physiological and pathophysiological condition. Additionally, breath analysis is noninvasive, real-time, painless and agreeable to patients. We have developed a wireless sensor array based on single-stranded DNA (ssDNA)-decorated single-walled carbon nanotubes (SWNT) for the detection of a number of physiological indicators in breath. Eight DNA sequences were used to functionalize SWNT sensors to detect trace amounts of methanol, benzene, dimethyl sulfide, hydrogen sulfide, acetone and ethanol, which are indicators of heavy smoking, excessive drinking, and diseases such as lung cancer, breast cancer, cirrhosis and diabetes. Our tests indicated that DNA-functionalized SWNT sensors exhibit great selectivity, sensitivity, reproducibility, and repeatability. Furthermore, different molecules can be distinguished through pattern recognition enabled by this sensor array. Thus, the DNA-SWNT sensor array has great potential for chemical or biomolecular detection in the noninvasive diagnosis of diseases and in health monitoring.
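For illustration only, the sketch below shows how pattern recognition over an eight-element sensor array of this kind could distinguish analytes; the response values, class labels and the PCA-plus-nearest-neighbour pipeline are assumptions, not the authors' implementation.

# Illustrative sketch: classifying analytes from an 8-element ssDNA-SWNT sensor array.
# The response matrix below is synthetic; a real system would use measured conductance changes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
analytes = ["methanol", "benzene", "dimethyl sulfide", "acetone", "ethanol"]

# Assumed mean response pattern (relative conductance change) of the 8 sensors per analyte.
patterns = rng.uniform(-1.0, 1.0, size=(len(analytes), 8))

# Simulate 20 noisy exposures per analyte.
X = np.vstack([p + 0.05 * rng.standard_normal((20, 8)) for p in patterns])
y = np.repeat(analytes, 20)

model = make_pipeline(PCA(n_components=3), KNeighborsClassifier(n_neighbors=5))
model.fit(X, y)

# Classify a new, unseen exposure of the third analyte.
test = patterns[2] + 0.05 * rng.standard_normal(8)
print(model.predict(test.reshape(1, -1))[0])   # expected: "dimethyl sulfide"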

Keywords: breath analysis, diagnosis, DNA-SWNT sensor array, noninvasive

Procedia PDF Downloads 348
7581 Development of Polymeric Fluorescence Sensor for the Determination of Bisphenol-A

Authors: Neşe Taşci, Soner Çubuk, Ece Kök Yetimoğlu, M. Vezir Kahraman

Abstract:

Bisphenol-A (BPA), 2,2-bis(4-hydroxyphenyl)propane, is one of the highest-volume chemicals in use worldwide. Studies have shown that BPA may have negative effects on the central nervous system and on the immune and endocrine systems. Several analytical methods for the analysis of BPA have been reported, including electrochemical processes, chemical oxidation, ozonation, spectrophotometric and chromatographic techniques. Compared with conventional analytical techniques, optical sensors are reliable, fast, low-cost and easy to use, and they stand out as a much more advantageous method because of their high precision and sensitivity. In this work, a new photocured polymeric fluorescence sensor was prepared and characterized for Bisphenol-A (BPA) analysis. Characterization of the membrane was carried out by Attenuated Total Reflectance Fourier Transform Infrared Spectroscopy (ATR-FTIR) and Scanning Electron Microscopy (SEM). The response characteristics of the sensor, including dynamic range, pH effect and response time, were systematically investigated. Acknowledgment: This work was supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under Grant 115Y469.

Keywords: bisphenol-a, fluorescence, photopolymerization, polymeric sensor

Procedia PDF Downloads 236
7580 An Improved Image Steganography Technique Based on Least Significant Bit Insertion

Authors: Olaiya Folorunsho, Comfort Y. Daramola, Joel N. Ugwu, Lawrence B. Adewole, Olufisayo S. Ekundayo

Abstract:

In today's world, there is a tremendous rise in the usage of the internet, since almost all communication and information sharing is done over the web. At the same time, there is continuous growth in unauthorized access to confidential data. This has posed a challenge to information security experts, whose major goal is to curtail the menace. One of the approaches to securing the safe delivery of data/information to the rightful destination without any modification is steganography. Steganography is the art of hiding information inside other, seemingly innocuous information. This research paper aimed at designing a secure algorithm using an image steganographic technique based on the Least Significant Bit (LSB) method for embedding data into a bitmap (BMP) image in order to enhance security and reliability. In the LSB approach, the basic idea is to replace the LSBs of the pixels of the cover image with the bits of the message to be hidden, without significantly destroying the properties of the cover image. The system was implemented using the C# programming language of the Microsoft .NET framework. The performance of the proposed system was evaluated through a benchmarking test analyzing parameters such as Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR). The results showed that image steganography performed considerably well in securing data hiding and information transmission over networks.
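A minimal sketch of the LSB idea described above, written in Python rather than the authors' C# implementation; the cover array, message and the MSE/PSNR check are illustrative assumptions.

# Minimal LSB embed/extract sketch on a grayscale cover array (the paper uses BMP covers in C#).
import numpy as np

def embed_lsb(cover, message: bytes):
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = cover.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("message too long for this cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite least significant bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bytes: int):
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in for a BMP image
stego = embed_lsb(cover, b"secret")
print(extract_lsb(stego, 6))                                      # b'secret'

mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
psnr = 10 * np.log10(255 ** 2 / mse) if mse > 0 else float("inf")
print(f"MSE={mse:.4f}, PSNR={psnr:.1f} dB")                       # distortion stays very small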

Keywords: steganography, image steganography, least significant bits, bit map image

Procedia PDF Downloads 266
7579 Tank Barrel Surface Damage Detection Algorithm

Authors: Tomáš Dyk, Stanislav Procházka, Martin Drahanský

Abstract:

The article proposes a new algorithm for detecting damaged areas of a tank barrel based on an image of the barrel's inner surface. The damage position is calculated using image processing techniques such as edge detection, discrete wavelet transformation and image segmentation for accurate contour detection. The algorithm can detect surface damage in smoothbore and even in rifled tank barrels. The algorithm also calculates the volume of the detected damage from a depth map generated, for example, by a distance measurement unit. The proposed method was tested on data obtained by a tank barrel scanning device, which generates both surface image data and a depth map. The article also discusses tank barrel scanning devices and how surface damage impacts material resistance.
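The following is a hedged sketch of a pipeline in the spirit of the abstract (edge detection, contour segmentation, and a damage-volume estimate from a depth map); the synthetic surface image, depth map, thresholds and scan resolution are assumptions, not the authors' algorithm.

# Sketch: detect damaged regions in a barrel-surface image and estimate their volume from a depth map.
import cv2
import numpy as np

# Synthetic stand-ins: a flat surface image with one dark pit, and a matching depth map (mm).
surface = np.full((200, 200), 180, dtype=np.uint8)
cv2.circle(surface, (100, 100), 15, 60, -1)           # simulated damage spot
depth = np.zeros((200, 200), dtype=np.float32)
cv2.circle(depth, (100, 100), 15, 0.8, -1)            # 0.8 mm deep pit

blurred = cv2.GaussianBlur(surface, (5, 5), 0)
edges = cv2.Canny(blurred, 40, 120)                   # candidate damage boundaries
print("Canny edge pixels:", int(np.count_nonzero(edges)))

_, mask = cv2.threshold(blurred, 120, 255, cv2.THRESH_BINARY_INV)  # dark regions treated as damage
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

pixel_area_mm2 = 0.05 * 0.05                          # assumed scan resolution: 0.05 mm per pixel
for c in contours:
    region = np.zeros_like(mask)
    cv2.drawContours(region, [c], -1, 255, -1)
    volume = float(np.sum(depth[region == 255])) * pixel_area_mm2
    print(f"damage area: {cv2.contourArea(c):.0f} px, estimated volume: {volume:.3f} mm^3")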

Keywords: barrel, barrel diagnostic, image processing, surface damage detection, tank

Procedia PDF Downloads 137
7578 Quality Assurance in Cardiac Disorder Detection Images

Authors: Anam Naveed, Asma Andleeb, Mehreen Sirshar

Abstract:

In this article, image processing techniques are applied to cardiac images to enhance image quality. Two types of methodologies are considered in the survey: invasive techniques and non-invasive techniques. Image processing methods that improve cardiac image quality and reduce the amount of radiation exposure in invasive techniques are explored. Image processing algorithms for enhancing the quality of non-invasive cardiac images are also described. Besides these two methodologies, a third methodology is applied to live streaming of the heart rate in an ECG window to extract necessary information, remove noise and enhance quality. Sensitivity analyses have been carried out to investigate the impact of cardiac images on the diagnosis of coronary artery disease and how image enhancement helps the cardiologist diagnose the disease. The paper evaluates the strengths and weaknesses of the different techniques applied to improve image quality and draws a conclusion. Some specific limitations apply to the whole survey: the patient's heart rate must be 70-75 beats/minute during angiography, and patient weight and the amount of radiation exposure are similarly constrained.

Keywords: cardiac images, CT angiography, critical analysis, exposure radiation, invasive techniques, non-invasive techniques

Procedia PDF Downloads 352
7577 Edge Detection in Low Contrast Images

Authors: Koushlendra Kumar Singh, Manish Kumar Bajpai, Rajesh K. Pandey

Abstract:

The edges of low contrast images are not clearly distinguishable to the human eye, and it is difficult to find the edges and boundaries in them. The present work presents a new approach for low contrast images. A Chebyshev-polynomial-based fractional-order filter is used for the filtering operation: the input image is preprocessed by this filter. The Laplacian of Gaussian method is then applied to the preprocessed image for edge detection. The algorithm has been tested on two test images.
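A brief sketch of the Laplacian of Gaussian step on a preprocessed low-contrast image; a plain Gaussian blur stands in for the paper's Chebyshev-polynomial fractional-order prefilter, which is an assumption made here for illustration.

# Sketch: LoG edge detection via zero crossings on a low-contrast test image.
import numpy as np
from scipy import ndimage

def log_edges(image, sigma=2.0, threshold=0.01):
    pre = ndimage.gaussian_filter(image.astype(float), 1.0)      # placeholder preprocessing step
    log = ndimage.gaussian_laplace(pre, sigma=sigma)
    # Zero crossings: sign changes between a pixel and its right/lower neighbour.
    zc = np.zeros_like(log, dtype=bool)
    zc[:, :-1] |= np.sign(log[:, :-1]) != np.sign(log[:, 1:])
    zc[:-1, :] |= np.sign(log[:-1, :]) != np.sign(log[1:, :])
    return zc & (np.abs(log) > threshold * np.abs(log).max())    # keep only strong crossings

# Low-contrast test image: a faint square on a flat background.
img = np.full((128, 128), 100.0)
img[40:90, 40:90] = 104.0
print(int(log_edges(img).sum()), "edge pixels found")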

Keywords: low contrast image, fractional order differentiator, Laplacian of Gaussian (LoG) method, chebyshev polynomial

Procedia PDF Downloads 636
7576 Thermal Image Segmentation Method for Stratification of Freezing Temperatures

Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka

Abstract:

The study uses an image analysis technique employing thermal imaging to measure the percentage of areas with various temperatures on a freezing surface. An image segmentation method using threshold values is applied to a sequence of images recording the freezing process. The phenomenon is transient, and temperatures change rapidly as the surface reaches the freezing point and completes the freezing process. Freezing salt water is subject to salt rejection, which makes the freezing point dynamic and dependent on the salinity at the phase interface. For a specific freezing area, nucleation starts from one side and ends at the other, causing a dynamic, transient temperature in that area. Thermal cameras are able to reveal differences in temperature due to their sensitivity to infrared radiance. Using an experimental setup, a video is recorded by a thermal camera to monitor radiance and temperatures during the freezing process. Image processing techniques are applied to all frames to detect and classify temperatures on the surface. An image segmentation method is used to find contours of equal temperature on the icing surface. Each segment is obtained from the temperature range appearing in the image and the corresponding pixel values. Using the contours extracted from the image and the camera parameters, stratified areas with different temperatures are calculated. To observe temperature contours on the icing surface with the thermal camera, a salt water sample is dropped on a cold surface at a temperature of -20°C. A thermal video is recorded for 2 minutes to observe the temperature field. Examining the results obtained by the method against the experimental observations verifies the accuracy and applicability of the method.
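A minimal sketch of the threshold-based stratification step is given below; the synthetic temperature frame and the temperature bands are assumptions chosen for illustration.

# Sketch: segment a thermal frame into temperature bands and report the area percentage of each.
import numpy as np

rng = np.random.default_rng(1)
yy, xx = np.indices((240, 320))
# Synthetic temperature frame (deg C): a warm droplet cooling on a -20 C plate, plus sensor noise.
frame = -20.0 + 18.0 * np.exp(-((yy - 120) ** 2 + (xx - 160) ** 2) / 2500.0)
frame += rng.normal(0.0, 0.2, frame.shape)

bands = [(-25, -15), (-15, -10), (-10, -5), (-5, 0)]   # assumed stratification bands (deg C)
for lo, hi in bands:
    mask = (frame >= lo) & (frame < hi)
    print(f"{lo:>4} to {hi:>3} C: {100.0 * mask.sum() / frame.size:5.1f} % of the surface")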

Keywords: ice contour boundary, image processing, image segmentation, salt ice, thermal image

Procedia PDF Downloads 320
7575 The Study of Suan Sunandha Rajabhat University’s Image among People in Bangkok

Authors: Sawitree Suvanno

Abstract:

The objective of this study is to investigate the image of Suan Sunandha Rajabhat University (SSRU) among people in Bangkok. The study was conducted as quantitative research, and questionnaires were used to collect data from a sample group of 360 people. Descriptive and inferential statistics were used in the data analysis. The results showed that the SSRU's image among people in Bangkok is at the "rather true" level of the questionnaire scale in all aspects measured. The aspects with the highest averages are that: 1) the university is considered royal-oriented and conservative; 2) the instructional supplies, buildings and venue promote Thai art and tradition; 3) the university administration is moral and honest; and 4) the curriculum produces skillful students and graduates. Additionally, people in Bangkok with different professions view the SSRU's image differently at the 0.05 significance level; there is no significant difference by gender, age or income.

Keywords: Bangkok, demographics, image, Suan Sunandha Rajabhat University

Procedia PDF Downloads 247
7574 Brand Equity Tourism Destinations: An Application in Wine Regions Comparing Visitors' and Managers' Perspectives

Authors: M. Gomez, A. Molina

Abstract:

The concept of brand equity in the wine tourism area is an interesting topic for exploring the factors that determine it. The aim of this study is to address this gap by investigating wine tourism destination brand equity and understanding the impact that the denomination of origin (DO) brand image and the destination image have on brand equity. Managing and monitoring the branding of wine tourism destinations is crucial to attracting tourist arrivals. The multiplicity of stakeholders involved in the branding process calls for research that, unlike previous studies, adopts a broader perspective and incorporates both an internal and an external perspective. Therefore, this gap has been addressed by comparing managers' and visitors' approaches to wine tourism destination brand equity. A survey questionnaire was used for data collection purposes. The hypotheses were tested on winery managers and winery visitors, each holding a different position relative to wine tourism destination brand equity. All the interviews were conducted face-to-face. The survey instrument included several scales related to DO brand image, destination image, and wine tourism destination brand equity. All items were measured on seven-point Likert scales. Partial least squares was used to analyze the accuracy of the scales and the structural model, and multi-group analysis was used to identify differences in the path coefficients and to test the hypotheses. The results show that the positive influence of DO brand image on wine tourism destination brand equity is stronger for wineries than for visitors, but there are no significant differences between the two groups. However, there are significant differences in the positive effect of destination brand image on both wine tourism destination brand equity and DO brand image. The results of this study are important for consultants, practitioners, and policy makers. The gap between managers and visitors calls for the development of campaigns to enhance the image that visitors hold and, thus, increase tourist arrivals. Events such as wine gatherings and gastronomic symposiums held at universities and culinary schools, as well as participation in business meetings, can enhance perceptions and, in turn, the added value and brand equity of the wine tourism destinations. The images of destinations and DOs can help strengthen the brand equity of the wine tourism destinations, especially for visitors. Thus, the development and reinforcement of favorable, strong, and unique destination associations and DO associations are important to increase that value. Joint campaigns are advisable to enhance the images of destinations and DOs and, as a consequence, the value of the wine tourism destination brand.

Keywords: brand equity, managers, visitors, wine tourism

Procedia PDF Downloads 134
7573 Treatment of Interferograms Image of Perturbation Processes in Metallic Samples by Optical Method

Authors: Daira Radouane, Naim Boudmagh, Hamada Adel

Abstract:

The aim of this work is to use the shearing technique with an image-shearing mechanism: a Wollaston prism. We want to characterize this prism so that it can later be employed in shearing analysis. A Wollaston prism is a prism made of a birefringent material, i.e., a material having two indices of refraction. The prism is cleaved so as to present the directions associated with these indices in its entrance face. It should be noted that these directions are perpendicular to each other.

Keywords: non-destructive testing, aluminium, interferometry, image processing

Procedia PDF Downloads 331
7572 Facility Anomaly Detection with Gaussian Mixture Model

Authors: Sunghoon Park, Hank Kim, Jinwon An, Sungzoon Cho

Abstract:

The Internet of Things allows one to collect data from facilities, which are then used to monitor them and even predict malfunctions in advance. Conventional quality control methods focus on setting a normal range for a sensor value, defined between a lower control limit and an upper control limit, and declaring anything falling outside it an anomaly. However, interactions among sensor values are ignored, leading to suboptimal performance. We propose a multivariate approach which takes many sensor values into account at the same time. In particular, a Gaussian Mixture Model is used, trained to maximize the likelihood using the Expectation-Maximization algorithm. The number of Gaussian component distributions is determined by the Bayesian Information Criterion. The negative log-likelihood is used as an anomaly score. The actual usage scenario is as follows: for each instance of sensor values from a facility, an anomaly score is computed; if it is larger than a threshold, an alarm goes off and a human expert intervenes and checks the system. Real-world data from a building energy system was used to test the model.
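A compact sketch of the scoring scheme described above, using scikit-learn; the synthetic sensor data, the candidate component counts and the 99.5th-percentile alarm threshold are assumptions.

# Sketch: GMM-based anomaly scoring. Component count chosen by BIC; anomaly score is the
# negative log-likelihood; an alarm fires when the score exceeds a threshold.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
normal = rng.multivariate_normal([20.0, 0.5, 101.3], np.diag([1.0, 0.01, 0.2]), size=2000)

# Model selection over the number of Gaussian components via BIC.
candidates = [GaussianMixture(n_components=k, random_state=0).fit(normal) for k in range(1, 6)]
gmm = min(candidates, key=lambda m: m.bic(normal))

threshold = np.percentile(-gmm.score_samples(normal), 99.5)   # assumed operating point

def check(sample):
    score = -gmm.score_samples(np.atleast_2d(sample))[0]      # negative log-likelihood
    return score, score > threshold

print(check([20.1, 0.49, 101.2]))   # typical reading -> no alarm expected
print(check([27.0, 0.90, 99.0]))    # unusual reading -> alarm expected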

Keywords: facility anomaly detection, gaussian mixture model, anomaly score, expectation maximization algorithm

Procedia PDF Downloads 272
7571 High Speed Motion Tracking with Magnetometer in Nonuniform Magnetic Field

Authors: Jeronimo Cox, Tomonari Furukawa

Abstract:

Magnetometers have become more popular in inertial measurement units (IMU) for their ability to correct estimations using the earth's magnetic field. Accelerometer- and gyroscope-based packages fail with dead-reckoning errors accumulated over time. Localization in robotic applications with magnetometer-inclusive IMUs has become popular as a way to track the odometry of slower-speed robots. With high-speed motions, the accumulated error increases over smaller periods of time, making them difficult to track with an IMU. Tracking a high-speed motion is especially difficult with limited observability. Visual obstruction of motion leaves motion-tracking cameras unusable. When motions are too dynamic for estimation techniques reliant on the observability of the gravity vector, the use of magnetometers is further justified. As available magnetometer calibration methods are limited by the assumption that background magnetic fields are uniform, estimation in nonuniform magnetic fields is problematic. Hard iron distortion is a distortion of the magnetic field by other objects that produce magnetic fields. This kind of distortion is often observed as an offset of the center of the data points from the origin when a magnetometer is rotated. The magnitude of hard iron distortion depends on proximity to distortion sources. Soft iron distortion is more related to the scaling of the axes of magnetometer sensors. Hard iron distortion is the larger contributor to the error of attitude estimation with magnetometers. Indoor environments or spaces inside ferrite-based structures, such as building reinforcements or a vehicle, often cause distortions that vary with proximity. As positions correlate to areas of distortion, methods of magnetometer localization include producing spatial maps of the magnetic field and collecting distortion signatures to better aid location tracking. The goal of this paper is to compare magnetometer methods that do not need pre-produced magnetic field maps, since mapping the magnetic field in some spaces can be costly and inefficient. Dynamic measurement fusion is used to track the motion of a multi-link system. Conventional calibration by data collection of rotation at a static point, real-time estimation of calibration parameters at each time step, and the use of two magnetometers for determining local hard iron distortion are compared to confirm the robustness and accuracy of each technique. With opposite-facing magnetometers, hard iron distortion can be accounted for regardless of position, rather than assuming that hard iron distortion is constant regardless of positional change. The motion measured is a repeatable planar motion of a two-link system connected by revolute joints. The links are translated on a moving base to impulse rotation of the links. The joints are equipped with absolute encoders and the motion is recorded with cameras to enable ground-truth comparison with each of the magnetometer methods. While the two-magnetometer method accounts for local hard iron distortion, the method fails where the magnetic field direction in space is inconsistent.
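As a hedged illustration of the hard-iron part of the calibration problem, the sketch below estimates the offset as the centre of a sphere fitted to magnetometer readings by linear least squares; the field strength, offset and noise level are assumptions, and it does not reproduce the paper's two-magnetometer or real-time methods.

# Sketch: estimate the hard-iron offset as the centre of a sphere fitted to magnetometer readings.
import numpy as np

rng = np.random.default_rng(0)
true_offset = np.array([12.0, -7.5, 3.0])        # assumed hard-iron bias (uT)
field = 48.0                                     # assumed local field magnitude (uT)

# Simulated readings: points on a sphere of radius `field`, shifted by the offset, plus noise.
dirs = rng.standard_normal((500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
readings = field * dirs + true_offset + rng.normal(0, 0.3, (500, 3))

# Linear least squares: |p|^2 = 2 p.c + (r^2 - |c|^2)  ->  A x = b with x = [cx, cy, cz, k].
A = np.hstack([2.0 * readings, np.ones((len(readings), 1))])
b = np.sum(readings ** 2, axis=1)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated offset:", np.round(x[:3], 2), " true offset:", true_offset)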

Keywords: motion tracking, sensor fusion, magnetometer, state estimation

Procedia PDF Downloads 84
7570 Reliable and Energy-Aware Data Forwarding under Sink-Hole Attack in Wireless Sensor Networks

Authors: Ebrahim Alrashed

Abstract:

Wireless sensor networks are vulnerable to attacks from adversaries attempting to disrupt their operations. Sink-hole attacks are a type of attack in which an adversary node drops data forwarded through it, thereby affecting the reliability and accuracy of the network. Since sensor nodes have limited battery power, it is essential that any solution to the sink-hole attack problem be very energy-aware. In this paper, we present a reliable and energy-efficient scheme to forward data from source nodes to the base station while under sink-hole attack. The scheme also detects sink-hole attack nodes and avoids paths that include them.

Keywords: energy-aware routing, reliability, sink-hole attack, WSN

Procedia PDF Downloads 396
7569 MAS Capped CdTe/ZnS Core/Shell Quantum Dot Based Sensor for Detection of Hg(II)

Authors: Dilip Saikia, Suparna Bhattacharjee, Nirab Adhikary

Abstract:

In this work, we present the synthesis and characterization of CdTe/ZnS core/shell (CS) quantum dots (QDs). The CS QDs are used as a fluorescence probe to design a simple, cost-effective and ultrasensitive sensor for the detection of toxic Hg(II) in an aqueous medium. Mercaptosuccinic acid (MSA) has been used as the capping agent for the synthesis of the CdTe/ZnS CS QDs. A photoluminescence quenching mechanism has been used in the Hg(II) detection experiment. The designed sensing technique shows a remarkably low detection limit of about 1 picomolar (pM). Here, the CS QDs are synthesized by a simple one-pot aqueous method. The synthesized CS QDs are characterized using advanced diagnostic tools such as UV-vis, photoluminescence, XRD, FTIR, TEM and zeta potential analysis. The interaction between the CS QDs and the Hg(II) ions results in the quenching of the photoluminescence (PL) intensity of the QDs via an excited-state electron transfer mechanism. The proposed mechanism is explained using cyclic voltammetry and zeta potential analysis. The designed sensor is found to be highly selective towards Hg(II) ions. Analysis of real samples such as drinking water and tap water has been carried out, and the CS QDs show remarkably good results. Using this simple sensing method, we have designed a prototype low-cost electronic device for the detection of Hg(II) in an aqueous medium. The experimental findings of the designed sensor are cross-checked using AAS analysis.
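To illustrate how quenching data of this kind are commonly turned into a calibration curve, the sketch below fits a Stern-Volmer-style linear relation between F0/F and Hg(II) concentration; the numbers, and the use of a Stern-Volmer model at all, are assumptions rather than the authors' analysis.

# Sketch: Stern-Volmer-style calibration of PL quenching, F0/F = 1 + Ksv * [Hg(II)].
import numpy as np

conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0]) * 1e-12          # assumed Hg(II) concentrations (M)
intensity = np.array([1000.0, 962.0, 927.0, 833.0, 715.0, 556.0])  # assumed PL intensities (a.u.)

f0_over_f = intensity[0] / intensity
ksv, intercept = np.polyfit(conc, f0_over_f, 1)                    # slope = Stern-Volmer constant

def estimate_concentration(measured_intensity):
    return (intensity[0] / measured_intensity - intercept) / ksv

print(f"Ksv = {ksv:.3e} 1/M")
print(f"unknown sample at 880 a.u. -> ~{estimate_concentration(880.0) * 1e12:.1f} pM Hg(II)")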

Keywords: photoluminescence, quantum dots, quenching, sensor

Procedia PDF Downloads 266
7568 Urban Growth Analysis Using Multi-Temporal Satellite Images, Non-stationary Decomposition Methods and Stochastic Modeling

Authors: Ali Ben Abbes, ImedRiadh Farah, Vincent Barra

Abstract:

Remotely sensed data are a significant source for monitoring and updating land use/cover databases. Nowadays, change detection in urban areas is a subject of intensive research, and timely and accurate data on the spatio-temporal changes of urban areas are required. The data extracted from multi-temporal satellite images are usually non-stationary; in fact, the changes evolve in time and space. This paper proposes a methodology for change detection in urban areas by combining a non-stationary decomposition method and stochastic modeling. We consider as input to our methodology a sequence of satellite images I1, I2, … In at different periods (t = 1, 2, ..., n). Firstly, preprocessing of the multi-temporal satellite images is applied (e.g., radiometric, atmospheric and geometric corrections). The systematic study of global urban expansion in our methodology can be approached in two ways: the first considers the urban area as a single object, as opposed to non-urban areas (e.g., vegetation, bare soil and water), with the objective of extracting the urban mask; the second aims to obtain more detailed knowledge of the urban area by distinguishing different types of tissue within it. In order to validate our approach, we used a database of Tres Cantos-Madrid in Spain, derived from Landsat over the period from January 2004 to July 2013 by collecting two frames per year at a spatial resolution of 25 meters. The obtained results show the effectiveness of our method.

Keywords: multi-temporal satellite image, urban growth, non-stationary, stochastic model

Procedia PDF Downloads 428
7567 Exploring the Impact of Dual Brand Image on Continuous Smartphone Usage Intention

Authors: Chiao-Chen Chang, Yang-Chieh Chin

Abstract:

The mobile phone is no longer confined to communication. In the case of smartphones, consumers are only willing to pay for products whose added value corresponds to their appetites, such as multiple applications, camera upgrades, the appearance of the phone and so on. Moreover, in the mature stage of today's smartphone industry, the strategy that manufacturers used to gain competitive advantage through hardware and software differentiation is no longer valid. Thus, this research starts from brand image to examine whether consumers' buying intention focuses on the smartphone brand or the operating system; at the same time, perceived value and customer satisfaction are added between brand image and continuous usage intention to investigate the impact of these two facets on continuous usage intention. This study verifies the correlation, fit, and relationships between the variables that lie within the conceptual framework. The results of structural equation modeling show that brand image has a positive impact on continuous usage intention, and that firms can affect consumer perceived value and customer satisfaction through the creation of the brand image. They also show that the brand image of the smartphone and the brand image of the operating system have a positive impact on customer perceived value and customer satisfaction. Furthermore, perceived value has a positive impact on satisfaction, and both satisfaction and perceived value are positively related to continuous usage intention. Last but not least, the brand image of the smartphone has a more remarkable impact on customers than the brand image of the operating system. In addition, this study extends the results to management practice and suggests that manufacturers provide fine product design and hardware.

Keywords: smartphone, brand image, perceived value, continuous usage intention

Procedia PDF Downloads 197
7566 Using Satellite Images Datasets for Road Intersection Detection in Route Planning

Authors: Fatma El-Zahraa El-Taher, Ayman Taha, Jane Courtney, Susan Mckeever

Abstract:

Understanding road networks plays an important role in navigation applications such as self-driving vehicles and route planning for individual journeys. Intersections of roads are essential components of road networks. Understanding the features of an intersection, from a simple T-junction to larger multi-road junctions, is critical to decisions such as crossing roads or selecting the safest routes. The identification and profiling of intersections from satellite images is a challenging task. While deep learning approaches offer the state of the art in image classification and detection, the availability of training datasets is a bottleneck in this approach. In this paper, a labelled satellite image dataset for the intersection recognition problem is presented. It consists of 14,692 satellite images of Washington DC, USA. To support other users of the dataset, an automated download and labelling script is provided for dataset replication. The challenges of construction and fine-grained feature labelling of a satellite image dataset are examined, including the issue of how to address features that are spread across multiple images. Finally, the accuracy of the detection of intersections in satellite images is evaluated.

Keywords: satellite images, remote sensing images, data acquisition, autonomous vehicles

Procedia PDF Downloads 144
7565 The Influence of Noise on Aerial Image Semantic Segmentation

Authors: Pengchao Wei, Xiangzhong Fang

Abstract:

Noise is ubiquitous in this world, and denoising is an essential technology, especially in image semantic segmentation, where noise is generally categorized into two main types, i.e., feature noise and label noise. The main focus of this paper is modeling label noise and investigating the behavior of different types of label noise on image semantic segmentation tasks using K-Nearest-Neighbor and Convolutional Neural Network classifiers. Performance with and without label noise is evaluated and illustrated in this paper. In addition, the influence of feature noise on the image semantic segmentation task is also investigated, and a feature noise reduction method is applied to mitigate its influence on the learning procedure.
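A small sketch of a label-noise experiment in this style, using a KNN classifier on toy pixel features; the synthetic two-class data and the symmetric noise model are assumptions, and the paper's CNN experiments are not reproduced here.

# Sketch: inject symmetric label noise at several rates and measure how a KNN classifier degrades.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Toy pixel features (e.g. colour + position) for a two-class segmentation problem.
X = rng.standard_normal((3000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_train, y_train, X_test, y_test = X[:2000], y[:2000], X[2000:], y[2000:]

for noise_rate in [0.0, 0.1, 0.3, 0.5]:
    noisy = y_train.copy()
    flip = rng.random(len(noisy)) < noise_rate          # symmetric label noise
    noisy[flip] = 1 - noisy[flip]
    clf = KNeighborsClassifier(n_neighbors=15).fit(X_train, noisy)
    acc = clf.score(X_test, y_test)                     # evaluated against clean labels
    print(f"label noise {noise_rate:.0%}: test accuracy {acc:.3f}")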

Keywords: convolutional neural network, denoising, feature noise, image semantic segmentation, k-nearest-neighbor, label noise

Procedia PDF Downloads 220
7564 Filtering and Reconstruction System for Grey-Level Forensic Images

Authors: Ahd Aljarf, Saad Amin

Abstract:

Images are an important source of information used as evidence during any investigation process, and their clarity and accuracy are essential and of the utmost importance for any investigation. Images are vulnerable to losing blocks and to having noise added to them, either through alteration or when the image was originally captured; therefore, a high-performance image processing system and its implementation are very important from a forensic point of view. This paper focuses on improving the quality of forensic images. For various reasons, the packets that store image data can be affected, corrupted or even lost because of noise; for example, sending the image through a wireless channel can cause loss of bits. These types of errors generally degrade the visual display quality of the forensic images. Two image problems are covered: noise and block loss. Information transmitted through any means of communication may be altered from its original state or even lose important data due to channel noise. Therefore, a system is introduced to improve the quality and clarity of forensic images.
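The sketch below illustrates the two repairs the abstract targets, impulse-noise filtering and lost-block reconstruction, using a median filter and simple neighbourhood averaging; the synthetic image, noise model and block position are assumptions, not the authors' system.

# Sketch: remove impulse noise with a median filter and fill a lost block by neighbourhood averaging.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
image = np.tile(np.linspace(50, 200, 128), (128, 1)).astype(np.uint8)  # synthetic grey-level image

# Simulate channel damage: salt-and-pepper noise plus one lost 16x16 block.
damaged = image.copy()
noise_mask = rng.random(image.shape) < 0.05
damaged[noise_mask] = rng.choice(np.array([0, 255], dtype=np.uint8), size=int(noise_mask.sum()))
lost = np.zeros(image.shape, dtype=bool)
lost[40:56, 60:76] = True
damaged[lost] = 0

filtered = ndimage.median_filter(damaged, size=3)                      # impulse-noise removal

# Naive block reconstruction: grow the filled region inward using the mean of known neighbours.
restored = filtered.astype(float)
known = ~lost
restored[lost] = 0.0
for _ in range(10):
    num = ndimage.uniform_filter(restored * known, size=5)
    den = ndimage.uniform_filter(known.astype(float), size=5)
    newly = (den > 1e-6) & ~known
    restored[newly] = num[newly] / den[newly]
    known |= newly

print(f"reconstruction MSE: {np.mean((restored - image.astype(float)) ** 2):.1f}")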

Keywords: image filtering, image reconstruction, image processing, forensic images

Procedia PDF Downloads 366
7563 X-Ray Detector Technology Optimization In CT Imaging

Authors: Aziz Ikhlef

Abstract:

Most multi-slice CT scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of runs and connections required for front-illuminated diodes. In backlit diodes, the electronic noise has already been improved because of the reduction of the load capacitance due to the routing reduction. This translates into better image quality in low-signal applications, improving low-dose imaging in a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, significant concerns have been raised in both the medical and regulatory communities about the radiation dose received by the patient. In order to reduce individual exposure and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT or dual-energy CT, in which projection data at two different tube potentials are collected. One of the approaches utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper presents an overview of detector technologies and image chain improvements which have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize crosstalk, noise and temporal/spatial resolution.

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 271
7562 Design, Analysis and Obstacle Avoidance Control of an Electric Wheelchair with Sit-Sleep-Seat Elevation Functions

Authors: Waleed Ahmed, Huang Xiaohua, Wilayat Ali

Abstract:

Wheelchair users are generally exposed to physical and psychological health problems, e.g., pressure sores and pain in the hip joint, associated with seating posture or being inactive in a wheelchair for a long time. A reclining wheelchair with back, thigh, and leg adjustment helps in daily life activities and health preservation. The seat-elevating function of an electric wheelchair allows a user with lower limb amputation to reach different heights. An electric wheelchair is expected to ease the lives of elderly and disabled people by giving them mobility support and decreasing the percentage of accidents caused by users' narrow sight or joystick operation errors. Thus, this paper proposes the design, analysis and obstacle avoidance control of an electric wheelchair with sit-sleep-seat elevation functions. A 3D model of the wheelchair was designed in SolidWorks and later used for multi-body dynamics (MBD) analysis and to verify the driving control system. The control system uses a fuzzy algorithm to avoid obstacles by taking distance information from the ultrasonic sensor and the user-specified direction from the joystick's operation. The proposed fuzzy driving control system focuses on the direction and velocity of the wheelchair. The wheelchair model has been examined and proven in MSC Adams (Automated Dynamic Analysis of Mechanical Systems). The designed fuzzy control algorithm is implemented on the Gazebo robotic 3D simulator using the Robot Operating System (ROS) middleware. The proposed wheelchair design enhances mobility and quality of life by improving the user's functional capabilities. Simulation results verify the non-accidental behavior of the electric wheelchair.
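A toy sketch of a distance-and-direction fuzzy rule base of the kind described above; the membership functions, rule table and output speeds are invented for illustration and are not the authors' controller.

# Toy Mamdani-style fuzzy rule base: obstacle distance (ultrasonic) and joystick deflection -> speed factor.
def tri(x, a, b, c):
    """Triangular membership function; a == b or b == c gives a flat (shoulder) edge."""
    left = 1.0 if b == a else (x - a) / (b - a)
    right = 1.0 if c == b else (c - x) / (c - b)
    return max(min(left, right), 0.0)

def fuzzy_speed(distance_m, joystick):
    # Fuzzify inputs (membership breakpoints are assumptions chosen for illustration).
    near = tri(distance_m, 0.0, 0.0, 1.0)
    medium = tri(distance_m, 0.5, 1.25, 2.0)
    far = tri(distance_m, 1.5, 3.0, 3.0)
    straight = tri(abs(joystick), 0.0, 0.0, 0.5)
    turning = tri(abs(joystick), 0.3, 1.0, 1.0)

    # Assumed rule base: stop when near and heading straight, creep when near but turning,
    # slow at medium range, full speed when far.
    rules = [(min(near, straight), 0.0), (min(near, turning), 0.2), (medium, 0.5), (far, 1.0)]

    # Weighted-average defuzzification.
    total = sum(w for w, _ in rules)
    return sum(w * v for w, v in rules) / total if total > 0 else 0.0

for d in (0.3, 1.0, 2.5):
    print(f"distance {d:.1f} m, joystick straight -> speed factor {fuzzy_speed(d, 0.0):.2f}")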

Keywords: fuzzy logic control, joystick, multi body dynamics, obstacle avoidance, scissor mechanism, sensor

Procedia PDF Downloads 129
7561 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission

Authors: Tingwei Shu, Dong Zhou, Chengjun Guo

Abstract:

Semantic communication is an emerging form of communication that realizes intelligent communication by extracting the semantic information of data at the source, transmitting it, and recovering the data at the receiving end. It can effectively solve the problem of data transmission in situations of large data volume, low SNR and restricted bandwidth. With the development of deep learning, semantic communication has matured further and is gradually being applied in fields such as the Internet of Things, Unmanned Aerial Vehicle cluster communication and remote sensing scenarios. We propose an improved semantic communication system for situations where the data volume is huge and spectrum resources are limited during the transmission of remote sensing images. At the transmitter, we need to extract the semantic information of remote sensing images, but there are some problems. A traditional semantic communication system based on a Convolutional Neural Network cannot take into account both the global and the local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. Therefore, we adopt an improved Vision-Transformer-based structure as the semantic encoder instead of the mainstream CNN-based one to extract the image semantic features. In this paper, we first perform pre-processing operations on the remote sensing images to improve their resolution in order to obtain images with more semantic information. We use the wavelet transform to decompose the image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency components, and finally perform the inverse wavelet transform to obtain the preprocessed image. We adopt the improved Vision-Transformer structure as the semantic coder to extract and transmit the semantic information of remote sensing images. The Vision-Transformer structure can better handle the huge data volume, extract better image semantic features, and adopts a multi-layer self-attention mechanism to better capture the correlation between semantic features and reduce redundant features. Secondly, to improve the coding efficiency, we reduce the quadratic complexity of the self-attention mechanism to linear so as to improve the image data processing speed of the model. We conducted experimental simulations on the RSOD dataset and compared the designed system with a semantic communication system based on a CNN and with image coding methods such as BPG and JPEG to verify that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.
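A short sketch of the wavelet-domain preprocessing step as described above, using PyWavelets; the wavelet family (Haar), the interpolation factor and the toy image are assumptions.

# Sketch: decompose an image with a 2D DWT, interpolate the sub-bands (bicubic for the
# low-frequency band, bilinear for the high-frequency bands), then inverse-transform to
# obtain an up-scaled, detail-preserving image.
import numpy as np
import pywt
from scipy.ndimage import zoom

def wavelet_upscale(image, factor=2):
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
    cA_up = zoom(cA, factor, order=3)                                 # bicubic for low-frequency
    highs_up = tuple(zoom(c, factor, order=1) for c in (cH, cV, cD))  # bilinear for high-frequency
    return pywt.idwt2((cA_up, highs_up), "haar")

img = np.random.default_rng(0).uniform(0, 255, (64, 64))   # stand-in for a remote sensing tile
out = wavelet_upscale(img)
print(img.shape, "->", out.shape)                           # (64, 64) -> (128, 128)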

Keywords: semantic communication, transformer, wavelet transform, data processing

Procedia PDF Downloads 78
7560 An Online 3D Modeling Method Based on a Lossless Compression Algorithm

Authors: Jiankang Wang, Hongyang Yu

Abstract:

This paper proposes a portable online 3D modeling method. The method first utilizes a depth camera to collect data and compresses the depth data using a frame-by-frame lossless data compression method. The color image is encoded using the H.264 format. After the cloud receives the color image and depth image, a 3D modeling method based on BundleFusion is used to complete the 3D modeling. The results of this study indicate that the method is portable, online and highly efficient, and that it has a wide range of application prospects.
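A hedged sketch of one way frame-by-frame lossless depth compression can be done, here as a temporal delta followed by zlib; the delta-plus-zlib scheme and the synthetic frames are assumptions, not the paper's codec.

# Sketch: lossless frame-by-frame compression of 16-bit depth frames using temporal deltas + zlib.
import zlib
import numpy as np

def compress_frame(frame, previous=None):
    delta = frame if previous is None else frame.astype(np.int32) - previous.astype(np.int32)
    return zlib.compress(delta.astype(np.int32).tobytes(), level=6)

def decompress_frame(blob, shape, previous=None):
    delta = np.frombuffer(zlib.decompress(blob), dtype=np.int32).reshape(shape)
    if previous is None:
        return delta.astype(np.uint16)
    return (previous.astype(np.int32) + delta).astype(np.uint16)

frame0 = np.tile(np.linspace(800, 3000, 640).astype(np.uint16), (480, 1))   # synthetic depth ramp (mm)
frame1 = frame0.copy()
frame1[100:200, 100:200] += 5                                               # small scene change

blob0 = compress_frame(frame0)
blob1 = compress_frame(frame1, previous=frame0)
restored1 = decompress_frame(blob1, frame1.shape, previous=frame0)

assert np.array_equal(restored1, frame1)                                    # lossless round trip
print(f"key frame: {frame0.nbytes} B raw -> {len(blob0)} B; delta frame: {frame1.nbytes} B raw -> {len(blob1)} B")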

Keywords: 3D reconstruction, bundlefusion, lossless compression, depth image

Procedia PDF Downloads 82
7559 Hardness Analysis of Samples of Friction Stir Welded Joints of (Al-Cu)

Authors: Upamanyu Majumder, Angshuman Das

Abstract:

Friction Stir Welding (FSW) is a solid-state joining process. Unlike fusion welding techniques, it does not involve operation above the melting-point temperature of the metals, only above the recrystallization temperature, nor does it involve fusion of any other material. FSW of aluminium has been commercialized, and joining of dissimilar metals has recently been studied. Friction stir welding was introduced and patented in 1991 by The Welding Institute. For this paper, a total of nine samples each of copper and aluminium (dissimilar metals) were welded using the FSW process, and Vickers hardness tests were conducted on each of the samples.

Keywords: friction stir welding (FSW), recrystallization temperature, dissimilar metals, aluminium-copper, Vickers hardness test

Procedia PDF Downloads 354
7558 Smart Sensor Data to Predict Machine Performance with IoT-Based Machine Learning and Artificial Intelligence

Authors: C. J. Rossouw, T. I. van Niekerk

Abstract:

The global manufacturing industry is utilizing the internet and cloud-based services to further explore the anatomy of manufacturing processes and optimize them in support of the movement into the Fourth Industrial Revolution (4IR). From a third-world and African perspective, the 4IR is hindered by the fact that many manufacturing systems developed during the third industrial revolution are not inherently equipped to utilize the internet and the services of the 4IR, which slows the progression of third-world manufacturing industries into the 4IR. This research focuses on the development of a non-invasive and cost-effective cyber-physical IoT system that exploits a machine’s vibration to expose semantic characteristics in the manufacturing process and utilizes these results through a real-time cloud-based machine condition monitoring system, with the intention of optimizing the system. A microcontroller-based IoT sensor was designed to acquire a machine’s mechanical vibration data, process it in real-time, and transmit it to a cloud-based platform via Wi-Fi and the internet. Time-frequency Fourier analysis was applied to the vibration data to form an image representation of the machine’s behaviour. This data was used to train a Convolutional Neural Network (CNN) to learn semantic characteristics in the machine’s behaviour and relate them to a state of operation. The same data was also used to train a Convolutional Autoencoder (CAE) to detect anomalies in the data. Real-time edge-based artificial intelligence was achieved by deploying the CNN and CAE on the sensor to analyse the vibration. A cloud platform was deployed to visualize the vibration data and the results of the CNN and CAE in real-time. The cyber-physical IoT system was deployed on a semi-automated metal granulation machine with a set of trained machine learning models. Using a single sensor, the system was able to accurately visualize three states of the machine’s operation in real-time. The system was also able to detect a variance in the material being granulated. The research demonstrates how non-IoT manufacturing systems can be equipped with edge-based artificial intelligence to establish a remote machine condition monitoring system.
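A brief sketch of the time-frequency step that turns raw vibration samples into an image-like representation suitable for a CNN; the simulated signal, sampling rate and STFT settings are assumptions.

# Sketch: turn a vibration signal into a time-frequency image (spectrogram) for a CNN input.
import numpy as np
from scipy import signal

fs = 4000                                   # assumed sample rate of the vibration sensor (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
# Simulated machine vibration: a 120 Hz running tone plus a 650 Hz burst (e.g. a fault signature).
vib = 0.8 * np.sin(2 * np.pi * 120 * t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
vib[4000:5000] += 0.6 * np.sin(2 * np.pi * 650 * t[4000:5000])

freqs, times, Sxx = signal.spectrogram(vib, fs=fs, nperseg=256, noverlap=128)
image = 10 * np.log10(Sxx + 1e-12)          # dB-scaled spectrogram, usable as a CNN input "image"
image = (image - image.min()) / (image.max() - image.min())   # normalise to [0, 1]
print("spectrogram image shape (freq bins x time frames):", image.shape)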

Keywords: IoT, cyber-physical systems, artificial intelligence, manufacturing, vibration analytics, continuous machine condition monitoring

Procedia PDF Downloads 88
7557 X-Ray Detector Technology Optimization in Computed Tomography

Authors: Aziz Ikhlef

Abstract:

Most multi-slice Computed Tomography (CT) scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of runs and connections required for front-illuminated diodes. In backlit diodes, the electronic noise has already been improved because of the reduction of the load capacitance due to the routing reduction. This translates into better image quality in low-signal applications, improving low-dose imaging in a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, significant concerns have been raised in both the medical and regulatory communities about the radiation dose received by the patient. In order to reduce individual exposure and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT or dual-energy CT, in which projection data at two different tube potentials are collected. One of the approaches utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper presents an overview of detector technologies and image chain improvements which have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize crosstalk, noise and temporal/spatial resolution.

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 194
7556 Measuring Corporate Brand Loyalties in Business Markets: A Case for Caution

Authors: Niklas Bondesson

Abstract:

Purpose: This paper examines how different facets of attitudinal brand loyalty are determined by different brand image elements in business markets. Design/Methodology/Approach: Statistical analysis is applied to data from a web survey covering 226 professional packaging buyers in eight countries. Findings: The results reveal that different brand loyalty facets have different antecedents. Affective brand loyalties (or loyalty 'feelings') are mainly driven by customer associations with service relationships, whereas customers’ loyalty intentions (to purchase and recommend a brand) are triggered by associations with the general reputation of the company. The findings also indicate that willingness to pay a price premium is a distinct form of loyalty, with unique determinants. Research implications: Theoretically, the paper suggests that corporate B2B brand loyalty needs to be conceptualised with more refinement than has been done in extant B2B branding work. Methodologically, the paper highlights that single-item approaches can be fruitful when measuring B2B brand loyalty, and that multi-item scales can conceal important nuances in terms of understanding why customers are loyal. Practical implications: A loyalty 'silver metric' is an attractive idea, but this study indicates that firms that rely too much on one single type of brand loyalty risk missing important building blocks. Originality/Value/Contribution: The major contribution is a more multi-faceted conceptualisation, and measurement, of corporate B2B brand loyalty and its brand image determinants than extant work has provided.

Keywords: brand equity, business-to-business branding, industrial marketing, buying behaviour

Procedia PDF Downloads 413
7555 Contrast Enhancement of Masses in Mammograms Using Multiscale Morphology

Authors: Amit Kamra, V. K. Jain, Pragya

Abstract:

Mammography is a widely used technique for breast cancer screening. There are various other techniques for breast cancer screening, but mammography is the most reliable and effective one. The images obtained through mammography are of low contrast, which makes them difficult for radiologists to interpret. Hence, a high-quality image is mandatory for processing the image and extracting any kind of information from it. Many contrast enhancement algorithms have been developed over the years. In the present work, an efficient morphology-based technique is proposed for contrast enhancement of masses in mammographic images. The proposed method is based on multiscale morphology and takes into consideration the scale of the structuring element. The proposed method is compared with other state-of-the-art techniques. The experimental results show that the proposed method is better both qualitatively and quantitatively than the other standard contrast enhancement techniques.
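A condensed sketch of multiscale top-hat enhancement in the spirit of the abstract; the structuring-element sizes, weighting and synthetic image are assumptions, not the authors' exact formulation.

# Sketch: multiscale morphological contrast enhancement -- add bright top-hats and subtract dark
# bottom-hats accumulated over structuring elements of increasing scale.
import cv2
import numpy as np

def multiscale_morph_enhance(image, scales=(3, 7, 11, 15)):
    out = image.astype(np.float32)
    for k in scales:
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k, k))
        tophat = cv2.morphologyEx(image, cv2.MORPH_TOPHAT, se).astype(np.float32)
        bothat = cv2.morphologyEx(image, cv2.MORPH_BLACKHAT, se).astype(np.float32)
        out += (tophat - bothat) / len(scales)        # emphasise bright masses, suppress dark background
    return np.clip(out, 0, 255).astype(np.uint8)

# Synthetic low-contrast "mammogram": a faint bright blob on a dim background.
img = np.full((256, 256), 90, dtype=np.uint8)
cv2.circle(img, (128, 128), 30, 110, -1)
enhanced = multiscale_morph_enhance(img)
print("contrast before:", int(img.max()) - int(img.min()), " after:", int(enhanced.max()) - int(enhanced.min()))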

Keywords: enhancement, mammography, multi-scale, mathematical morphology

Procedia PDF Downloads 423
7554 Simulation of Forest Fire Using Wireless Sensor Network

Authors: Mohammad F. Fauzi, Nurul H. Shahba M. Shahrun, Nurul W. Hamzah, Mohd Noah A. Rahman, Afzaal H. Seyal

Abstract:

In this paper, we propose a simulation system using a Wireless Sensor Network (WSN) distributed around the forest for early forest fire detection and for locating the affected areas. In Brunei Darussalam, approximately 78% of the nation is covered by forest. Since the forest is Brunei's most precious natural asset, it is very important to protect and conserve it. The hot climate in Brunei Darussalam can lead to forest fires, which can be a fatal threat to the preservation of our forest. The process consists of getting data from the sensors, analyzing the data and producing an alert. The key factors that we analyze are the surrounding temperature, wind speed and wind direction, and the humidity of the air and soil.
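A minimal sketch of the analysis-and-alert step over the listed readings; the threshold values and sample readings are assumptions chosen for illustration.

# Sketch: raise an alert when a node's readings suggest fire risk (thresholds are illustrative).
def fire_alert(reading):
    hot = reading["temperature_c"] > 45          # unusually high ambient temperature
    dry_air = reading["air_humidity_pct"] < 30   # dry air
    dry_soil = reading["soil_humidity_pct"] < 15 # dry soil
    windy = reading["wind_speed_ms"] > 8         # wind that could spread a fire
    score = sum([hot, dry_air, dry_soil, windy])
    return score >= 3, score

nodes = [
    {"node": "A1", "temperature_c": 52, "air_humidity_pct": 22, "soil_humidity_pct": 10,
     "wind_speed_ms": 9, "wind_dir_deg": 140},
    {"node": "B4", "temperature_c": 31, "air_humidity_pct": 78, "soil_humidity_pct": 40,
     "wind_speed_ms": 3, "wind_dir_deg": 90},
]
for r in nodes:
    alert, score = fire_alert(r)
    print(f"node {r['node']}: risk score {score}/4 -> {'ALERT' if alert else 'normal'}")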

Keywords: forest fire monitor, humidity, wind direction, wireless sensor network

Procedia PDF Downloads 453
7553 Low Light Image Enhancement with Multi-Stage Interconnected Autoencoders Integration in Pix to Pix GAN

Authors: Muhammad Atif, Cang Yan

Abstract:

The enhancement of low-light images is a significant area of study aimed at improving the quality of images captured in challenging lighting environments. Recently, methods based on convolutional neural networks (CNNs) have gained prominence as they offer state-of-the-art performance. However, many CNN-based approaches rely on increasing the size and complexity of the neural network. In this study, we propose an alternative method for improving low-light images using an autoencoder-based multiscale knowledge transfer model. Our method leverages the power of three autoencoders, where the encoders of the first two autoencoders are directly connected to the decoder of the third autoencoder. Additionally, the decoders of the first two autoencoders are connected to the encoder of the third autoencoder. This architecture enables effective knowledge transfer, allowing the third autoencoder to learn and benefit from the enhanced knowledge extracted by the first two autoencoders. We further integrate the proposed model into the Pix to Pix GAN framework. By integrating our proposed model as the generator in the GAN framework, we aim to produce enhanced images that not only exhibit improved visual quality but also possess a more authentic and realistic appearance. The experimental results, both qualitative and quantitative, show that our method outperforms state-of-the-art methodologies.

Keywords: low light image enhancement, deep learning, convolutional neural network, image processing

Procedia PDF Downloads 80