Search results for: Image entropy.
604 Analysis of Aiming Performance for Games Using Mapping Method of Corneal Reflections Based on Two Different Light Sources
Authors: Yoshikazu Onuki, Itsuo Kumazawa
Abstract:
The fundamental motivation of this paper is to determine how gaze estimation can be used effectively in games. In gaming, precise estimation of the point of gaze is not always essential; the ability to move a cursor onto an intended target accurately is equally significant. Moreover, from a game-production point of view, expressing head movement and gaze movement separately is sometimes advantageous for conveying a sense of presence: panning a background image with head movement while moving a cursor according to gaze movement is a representative example. The widely used technique of POG estimation is based on the relative position between the center of the corneal reflection of infrared light sources and the center of the pupil. However, computing the pupil center requires relatively complicated image processing, so the resulting delay is a concern, since minimizing input latency is one of the most significant requirements in games. In this paper, a method is proposed that estimates head movement using only the corneal reflections of two infrared light sources placed at different locations. Furthermore, a method to control a cursor using gaze movement together with head movement is proposed. The proposed methods are evaluated using game-like applications; the results confirm performance similar to conventional methods while achieving aiming control with lower computational cost and stress-free, intuitive operation.
Keywords: Point-of-gaze, gaze estimation, head movement, corneal reflections, two infrared light sources, game.
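A minimal sketch of the idea in this abstract: track the two corneal glints of the infrared sources and derive a head-movement signal from their geometry alone, avoiding pupil-center computation. The specific mapping below (midpoint for translation, spacing for distance) is an assumption for illustration, not the paper's calibrated method.

```python
# Hedged sketch: derive a head-movement signal from two corneal reflections.
import numpy as np

def head_signal(glint_left, glint_right):
    """glint_left/right: (x, y) image coordinates of the two corneal glints.
    Assumption: the glint midpoint shifts mainly with head movement, while
    the glint spacing changes with distance to the light sources."""
    g1 = np.asarray(glint_left, dtype=float)
    g2 = np.asarray(glint_right, dtype=float)
    midpoint = (g1 + g2) / 2.0           # proxy for head translation
    spacing = np.linalg.norm(g2 - g1)    # proxy for head distance (scale)
    return midpoint, spacing

# A cursor controller could then pan the background with the midpoint motion
# and move the cursor with a separate gaze term, per the separation above.
```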
603 Fuzzy Uncertainty Theory for Stealth Fighter Aircraft Selection in Entropic Fuzzy TOPSIS Decision Analysis Process
Authors: C. Ardil
Abstract:
The purpose of this paper is to present fuzzy TOPSIS in an entropic fuzzy environment. Due to the ambiguous concepts often represented in decision data, exact values are insufficient to model real-life situations. In this paper, the rating of each alternative is defined in fuzzy linguistic terms, which can be expressed with triangular fuzzy numbers. The weight of each criterion is then derived from the decision matrix using the entropy weighting method. Next, a vertex method is proposed to calculate the distance between two triangular fuzzy numbers. According to the TOPSIS concept, a closeness coefficient is defined to determine the ranking order of all alternatives by simultaneously calculating the distances to both the fuzzy positive-ideal solution (FPIS) and the fuzzy negative-ideal solution (FNIS). Finally, an illustrative example of selecting stealth fighter aircraft is shown at the end of this article to highlight the procedure of the proposed method. Correlation analysis and validation analysis using TOPSIS, WSM, and WPM methods were performed to compare the ranking order of the alternatives.
Keywords: stealth fighter aircraft selection, fuzzy uncertainty theory (FUT), fuzzy entropic decision (FED), fuzzy linguistic variables, triangular fuzzy numbers, multiple criteria decision making analysis, MCDMA, TOPSIS, WSM, WPM
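A minimal sketch of two computational steps named in this abstract: the entropy weighting of the decision matrix and the vertex-method distance between triangular fuzzy numbers, followed by the closeness coefficient. The matrix values and shapes are illustrative assumptions, not the paper's data.

```python
# Hedged sketch of entropy weighting, vertex distance, and closeness coefficient.
import numpy as np

def entropy_weights(X):
    """Entropy weighting on an m x n decision matrix X (assumes all entries > 0)."""
    P = X / X.sum(axis=0)                            # column-normalize
    m = X.shape[0]
    e = -(P * np.log(P)).sum(axis=0) / np.log(m)     # entropy per criterion
    d = 1.0 - e                                      # degree of divergence
    return d / d.sum()

def vertex_distance(a, b):
    """Vertex-method distance between triangular fuzzy numbers a=(a1,a2,a3), b."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.sqrt(((a - b) ** 2).mean())            # sqrt((1/3) * sum of squares)

def closeness(d_pos, d_neg):
    """Closeness coefficient from distances to FPIS (d_pos) and FNIS (d_neg)."""
    return d_neg / (d_pos + d_neg)
```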
602 A Process of Forming a Single Competitive Factor in the Digital Camera Industry
Authors: Kiyohiro Yamazaki
Abstract:
This paper examines the process by which a single competitive factor forms in the digital camera industry, from the viewpoint of the product platform. To make product development easier and to increase the rate of product introduction, companies concentrate their development efforts on improving and strengthening certain product attributes, and in this process a product platform is formed continuously. The formation of such a platform raises the product development efficiency of individual companies, but it also involves a trade-off: it tends to unify the competitive factors across the whole industry. This research analyzes product specification data collected from the web pages of digital camera companies. Specifically, all product specifications released in Japan from 1995 to 2003 were collected, the composition of image sensors and optical lenses was analyzed, and product platforms shared by multiple products were identified and their application discussed. The analysis shows that product platforms emerged from the development of standard products for the major market segments. Every major company built product platforms around image sensors and optical lenses, and as a result the competitive factors became unified across the entire industry. In other words, platformization improved the product development efficiency of individual firms, but it also caused the industry's competitive factors to converge.
Keywords: Digital camera industry, product evolution trajectory, product platform, unification of competitive factors.
601 Automated Video Surveillance System for Detection of Suspicious Activities during Academic Offline Examination
Authors: G. Sandhya Devi, G. Suvarna Kumar, S. Chandini
Abstract:
This research work aims to develop a system that analyzes video to identify students who engage in malpractice or suspicious activities during an academic offline examination. Automated video surveillance provides an optimal solution for monitoring students and flagging a malpractice event immediately. The work is organized into three modules. The first module performs an impersonation check using a PCA-based face recognition method, cross-checking each student's face against profiles in the database; the presence or absence of a student is also determined in this module through an image registration technique, in which a grid is formed from the images captured by the frontal camera at predetermined positions. The second module detects facial malpractices, such as a student conversing with another to obtain unauthorized information, based on a threshold range evaluated from the state of the mouth (open or closed). The third module identifies unauthorized material or gadgets in the examination hall by training positive samples of the objects through several stages; here, a top-view camera feed is analyzed to detect the suspicious activities. The system automatically alerts the administration when suspicious activity is identified, thereby reducing the error rate caused by manual monitoring. This work improves upon our previous work published on identifying suspicious activities of examinees in an offline examination.
Keywords: Impersonation, image registration, incrimination, object detection, threshold evaluation.
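A minimal sketch of the first module's PCA-based impersonation check, assuming flattened gallery images and a claimed identity to verify; the component count and distance threshold are illustrative assumptions.

```python
# Hedged sketch of a PCA ("eigenfaces") identity verification step.
import numpy as np
from sklearn.decomposition import PCA

def enroll(faces):
    """faces: (n_students, h*w) flattened gallery face images."""
    pca = PCA(n_components=min(50, len(faces) - 1)).fit(faces)
    return pca, pca.transform(faces)

def verify(pca, gallery_codes, probe, claimed_id, threshold=2500.0):
    """Accept the claimed identity if the probe is nearest to the claimed
    student's gallery code and that distance is below the threshold."""
    code = pca.transform(probe.reshape(1, -1))
    dists = np.linalg.norm(gallery_codes - code, axis=1)
    return dists.argmin() == claimed_id and dists[claimed_id] < threshold
```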
600 Near-Infrared Hyperspectral Imaging Spectroscopy to Detect Microplastics and Pieces of Plastic in Almond Flour
Authors: H. Apaza, L. Chévez, H. Loro
Abstract:
Plastic and microplastic pollution in the human food chain is a serious health problem that calls for more elaborate techniques capable of identifying their presence in different kinds of food. Hyperspectral imaging is an optical technique that can detect the presence of different materials in an image and can therefore be used to detect plastics and microplastics in a scene. Doing so requires statistical techniques that must be evaluated and compared in order to find the most efficient ones. In this work, two problems related to the presence of plastics are addressed: the first is to detect and identify pieces of plastic immersed in almond seeds, and the second is to detect and quantify microplastic in almond flour. To this end, we analyze hyperspectral images taken in the range of 900 to 1700 nm using four unmixing techniques: least squares unmixing (LSU), non-negatively constrained least squares unmixing (NCLSU), fully constrained least squares unmixing (FCLSU), and scaled constrained least squares unmixing (SCLSU). The NCLSU, FCLSU, and SCLSU techniques locate the region where the plastic is found and also quantify the amount of microplastic contained in the almond flour. The SCLSU technique estimated an abundance of 13.03% microplastics and 86.97% almond flour, compared to the prepared mixture of 16.66% microplastics and 83.33% almond flour. The results show the feasibility of applying near-infrared hyperspectral image analysis to the detection of plastic contaminants in food.
Keywords: Food, plastic, microplastic, NIR hyperspectral imaging, unmixing.
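A minimal sketch of the pixel-wise NCLSU step named above, assuming the endmember spectra (plastic, almond flour) are given; FCLSU would additionally enforce the sum-to-one constraint.

```python
# Hedged sketch of non-negatively constrained least squares unmixing (NCLSU).
import numpy as np
from scipy.optimize import nnls

def nclsu(cube, endmembers):
    """cube: (rows, cols, bands) hyperspectral image;
    endmembers: (bands, n_materials) assumed known spectra.
    Returns per-pixel abundances of shape (rows, cols, n_materials)."""
    rows, cols, bands = cube.shape
    abund = np.zeros((rows, cols, endmembers.shape[1]))
    for r in range(rows):
        for c in range(cols):
            abund[r, c], _ = nnls(endmembers, cube[r, c])
    return abund

# FCLSU additionally enforces sum(abundances) == 1 per pixel, e.g. by
# augmenting the system with a heavily weighted row of ones (a common trick).
```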
599 Constitutive Equations for Human Saphenous Vein Coronary Artery Bypass Graft
Authors: Hynek Chlup, Lukas Horny, Rudolf Zitny, Svatava Konvickova, Tomas Adamek
Abstract:
Coronary artery bypass grafts (CABG) are widely studied with respect to the hemodynamic conditions that play an important role in the development of restenosis. However, papers concerned with constitutive modeling of CABG are lacking in the literature. The purpose of this study is to find a constitutive model for CABG tissue. A sample of CABG obtained during an autopsy underwent an inflation–extension test. Displacements were recorded by CCD cameras and subsequently evaluated by digital image correlation. Pressure–radius and axial force–elongation data were used to fit the material model. The tissue was modeled as a one-layered composite reinforced by two families of helical fibers. The material is assumed to be locally orthotropic, nonlinear, incompressible, and hyperelastic. Material parameters are estimated for two strain energy functions (SEF): the first is the classical exponential; the second is logarithmic, which allows interpretation in terms of limiting (finite) strain extensibility. The presented material parameters are estimated by optimization based on the radial and axial equilibrium equations of a thick-walled tube. Both material models fit the experimental data successfully, although the exponential model fits the relationship between axial force and axial strain significantly better than the logarithmic one.
Keywords: Constitutive model, coronary artery bypass graft, digital image correlation, fiber reinforced composite, inflation test, saphenous vein.
598 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images
Authors: U. Datta
Abstract:
The main objective of this study is to find a suitable approach to monitoring land infrastructure growth over a period of time using multispectral satellite images. Bi-temporal change detection methods cannot capture the continuous change occurring over a long period. To achieve the objective, the approach used here estimates a statistical model from a series of multispectral images acquired over a long period, assuming no considerable change during that period, and then compares it with multispectral image data obtained at a later time. Change is estimated pixel-wise, using a statistical composite hypothesis technique over a defined region. The generalized likelihood ratio test (GLRT) is used to detect changed pixels against the probabilistic model estimated for each corresponding pixel. Detection assumes that the images have been co-registered prior to estimation; to minimize error due to imperfect co-registration, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 acquired between 2015 and 2018 are used for this purpose. The method faces several challenges. The first and foremost is obtaining a sufficiently large number of datasets for multivariate distribution modelling, since many images must be discarded due to cloud coverage; in addition, imperfect modelling raises the probability of false alarm. The overall conclusion is that the probabilistic method described in this paper gives promising results that merit further investigation.
Keywords: Co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection.
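A minimal sketch of the pixel-wise GLRT test described above, assuming a multivariate Gaussian model per pixel estimated from the cloud-free reference series; the regularization and threshold are illustrative assumptions.

```python
# Hedged sketch of pixel-wise GLRT change detection under a Gaussian model.
import numpy as np

def glrt_change_map(ref_series, test_img, threshold):
    """ref_series: (T, rows, cols, bands) reference stack (no-change period);
    test_img: (rows, cols, bands). Returns a boolean change map."""
    mean = ref_series.mean(axis=0)                       # per-pixel mean vector
    T, rows, cols, bands = ref_series.shape
    change = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            d = ref_series[:, r, c, :] - mean[r, c]
            cov = d.T @ d / (T - 1) + 1e-6 * np.eye(bands)   # regularized covariance
            resid = test_img[r, c] - mean[r, c]
            # with an estimated Gaussian model the GLR statistic reduces to a
            # Mahalanobis distance of the new observation from the model
            stat = resid @ np.linalg.solve(cov, resid)
            change[r, c] = stat > threshold
    return change
```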
597 Increase of Organization in Complex Systems
Authors: Georgi Yordanov Georgiev, Michael Daly, Erin Gombos, Amrit Vinod, Gajinder Hoonjan
Abstract:
Measures of complexity and entropy have not converged to a single quantitative description of the levels of organization of complex systems, yet such a measure is increasingly needed in all disciplines studying complex systems. To address this problem, starting from the most fundamental principle in physics, a new measure of the quantity of organization and the rate of self-organization in complex systems, based on the principle of least (stationary) action, is applied to a model system: the central processing unit (CPU) of computers. The quantity of organization across several generations of CPUs shows a double exponential rate of change of organization with time. The exact functional dependence has a fine, S-shaped structure, revealing some of the mechanisms of self-organization. The principle of least action helps to explain the mechanism of increase of organization through quantity accumulation and through constraint and curvature minimization, with an attractor at the least average sum of actions of all elements over all motions. This approach can help describe, quantify, measure, manage, design, and predict the future behavior of complex systems so as to achieve the highest rates of self-organization and improve their quality. It can be applied to other complex systems in physics, chemistry, biology, ecology, economics, cities, network theory, and other fields where complex systems are present.
Keywords: Organization, self-organization, complex system, complexification, quantitative measure, principle of least action, principle of stationary action, attractor, progressive development, acceleration, stochastic.
596 A Perceptually Optimized Foveation Based Wavelet Embedded Zero Tree Image Coding
Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf
Abstract:
In this paper, we propose a Perceptually Optimized Foveation-based Embedded ZeroTree Image Coder (POEFIC) that applies perceptual weighting to wavelet coefficients before the SPIHT encoding algorithm, in order to reach a targeted bit rate with improved perceptual quality around a given fixation point, which determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates properties of the human visual system (HVS); this metric plays an important role in our POEFIC quality assessment. The POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception: it weights the wavelet coefficients according to that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed from (1) foveation masking, to remove or reduce considerable high frequencies in peripheral regions; (2) luminance and contrast masking; and (3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. The experimental results show that our coder achieves very good performance in terms of quality measurement.
Keywords: DWT, linear-phase 9/7 filter, Foveation Filtering, CSF implementation approaches, 9/7 Wavelet JND Thresholds and Wavelet Error Sensitivity WES, Luminance and Contrast masking, standard SPIHT, Objective Quality Measure, Probability Score PS.
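A minimal sketch of the per-subband weighting step that precedes the encoder, assuming a 9/7-family wavelet; the per-level weights below are placeholders, not the paper's CSF/foveation values, and the SPIHT encoder itself is not reproduced.

```python
# Hedged sketch: scale wavelet detail subbands by perceptual weights.
import numpy as np
import pywt

def weight_subbands(img, weights, wavelet="bior4.4", level=3):
    """weights: dict mapping decomposition level -> scalar weight (assumed).
    Scales detail coefficients prior to a zerotree-style encoder such as SPIHT."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]]                        # approximation band left unweighted
    for lev, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        w = weights.get(lev, 1.0)
        out.append((cH * w, cV * w, cD * w))
    return out                               # would then feed the SPIHT encoder
```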
595 Effective Dose and Size Specific Dose Estimation with and without Tube Current Modulation for Thoracic Computed Tomography Examinations: A Phantom Study
Authors: S. Gharbi, S. Labidi, M. Mars, M. Chelli, F. Ladeb
Abstract:
The purpose of this study is to reduce the radiation dose of chest CT examinations by adding tube current modulation (TCM) to a standard CT protocol. A scan of an anthropomorphic male Alderson phantom was performed on a 128-slice scanner. The effective dose (ED) in scans with and without mAs modulation was estimated by multiplying the dose-length product (DLP) by a conversion factor, and the results were compared with values obtained with the CT-Expo software. The size-specific dose estimate (SSDE) was obtained by multiplying the volume CT dose index (CTDIvol) by a size conversion factor related to the phantom's effective diameter. Image quality was assessed objectively through signal-to-noise ratio (SNR) measurements in the phantom, and SPSS software was used for data analysis. With CARE Dose 4D enabled, ED was lowered by 48.35% and 51.51% according to the DLP and CT-Expo methods, respectively. ED ranged between 7.01 mSv and 6.6 mSv for the standard protocol, versus between 3.62 mSv and 3.2 mSv with TCM. Similar results were found for SSDE: the dose without TCM was 16.25 mGy and was lowered by 48.8% with TCM. The calculated SNR values differed significantly (p = 0.03 < 0.05), the highest being measured on images acquired with TCM and reconstructed with filtered back projection (FBP). In conclusion, this study demonstrates the potential of the TCM technique to reduce SSDE and ED while preserving image quality at a high diagnostic reference level for thoracic CT examinations.
Keywords: Anthropomorphic phantom, computed tomography, CT-expo, radiation dose.
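A minimal worked example of the two dose conversions described above. The numeric values are illustrative placeholders, not the study's measurements; 0.014 mSv/(mGy·cm) is a commonly cited adult chest conversion factor, assumed here.

```python
# Hedged sketch of the ED and SSDE conversions (illustrative values).
DLP = 350.0        # dose-length product, mGy*cm (assumed)
K_CHEST = 0.014    # chest ED conversion factor, mSv/(mGy*cm) (typical adult value)
CTDI_VOL = 10.0    # volume CT dose index, mGy (assumed)
F_SIZE = 1.3       # size factor for the phantom's effective diameter (assumed)

effective_dose = DLP * K_CHEST     # ED = DLP x conversion factor  -> mSv
ssde = CTDI_VOL * F_SIZE           # SSDE = CTDIvol x size factor  -> mGy
print(f"ED = {effective_dose:.2f} mSv, SSDE = {ssde:.1f} mGy")
```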
594 Anti-Homomorphism in Fuzzy Ideals
Authors: K. Chandrasekhara Rao, V. Swaminathan
Abstract:
The anti-homomorphic image of fuzzy ideals, fuzzy ideals of near-rings, and anti ideals are discussed in this note. A necessary and sufficient condition has been established for a near-ring anti ideal to be characteristic.
Keywords: Fuzzy ideals, anti fuzzy subgroup, anti fuzzy ideals, anti-homomorphism, lower α-level cut.
593 A Study on the Average Information Ratio of Perfect Secret-Sharing Schemes for Access Structures Based On Bipartite Graphs
Authors: Hui-Chuan Lu
Abstract:
A perfect secret-sharing scheme is a method of distributing a secret among a set of participants in such a way that only qualified subsets of participants can recover the secret, while the joint share of the participants in any unqualified subset is statistically independent of the secret. The collection of all qualified subsets is called the access structure of the perfect secret-sharing scheme. In a graph-based access structure, each vertex of a graph G represents a participant and each edge of G represents a minimal qualified subset. The average information ratio of a perfect secret-sharing scheme realizing the access structure based on G is defined as AR = (∑_{v∈V(G)} H(v)) / (|V(G)| · H(s)), where s is the secret and v also denotes the share of participant v, both regarded as random variables, and H is the Shannon entropy. The infimum of the average information ratio over all perfect secret-sharing schemes realizing a given access structure is called the optimal average information ratio of that access structure. Most known results give upper or lower bounds on this quantity. In this paper, we study access structures based on bipartite graphs and determine the exact values of the optimal average information ratio of some infinite classes of them.
Keywords: secret-sharing scheme, average information ratio, star covering, core sequence.
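A minimal numeric illustration of the AR definition above, with assumed share-entropy values (in bits) rather than a concrete scheme construction.

```python
# Hedged sketch: compute the average information ratio from given entropies.
share_entropy = {"v1": 1.0, "v2": 1.5, "v3": 1.5, "v4": 1.0}  # H(share of v), assumed
secret_entropy = 1.0                                          # H(s), assumed

AR = sum(share_entropy.values()) / (len(share_entropy) * secret_entropy)
print(AR)  # (1.0 + 1.5 + 1.5 + 1.0) / (4 * 1.0) = 1.25
```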
592 Experimental Correlation for Erythrocyte Aggregation Rate in Population Balance Modeling
Authors: Erfan Niazi, Marianne Fenech
Abstract:
Red blood cells (RBCs), or erythrocytes, tend to form chain-like aggregates called rouleaux under low shear rates. This is a reversible process, and rouleaux disaggregate at high shear rates. RBC aggregation therefore occurs in the microcirculation, where low shear rates are present, but not under normal physiological conditions in large arteries. Numerical modeling of RBC interactions is fundamental to analytical models of blood flow in the microcirculation. Population balance modeling (PBM) is particularly useful for studying problems in which particles agglomerate and break up in two-phase flow systems; in this method, elementary particles lose their individual identity through continuous destruction and recreation by break-up and agglomeration. The aim of this study is to determine the RBC aggregation rate in a dynamic situation. Simplified PBM was used previously to find the aggregation rate from static microscope observations of RBC aggregation in a drop of blood. To find the aggregation rate in a dynamic situation, we propose an experimental setup based on RBC sedimentation: as the RBCs settle, they interact and aggregate to form rouleaux, and disaggregation can be neglected owing to the low shear stress. A high-speed camera acquires video-microscopic images of the process, and the sizes of the aggregates and the sedimentation velocity are extracted using image processing techniques. Based on data collected from 5 healthy human blood samples, the aggregation rate was estimated as 2.7×10³ (±0.3×10³) s⁻¹.
Keywords: Red blood cell, Rouleaux, microfluidics, image processing, population balance modeling.
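A minimal sketch of the aggregate-size extraction step mentioned above: threshold a microscopy frame and label connected components. The threshold, intensity convention, and pixel area are illustrative assumptions.

```python
# Hedged sketch: extract aggregate sizes from a video-microscopy frame.
import numpy as np
from scipy import ndimage

def aggregate_sizes(frame, threshold=0.5, pixel_area_um2=0.25):
    """frame: 2D grayscale image scaled to [0, 1].
    Returns the area of each detected aggregate in um^2."""
    mask = frame < threshold                  # cells assumed darker than plasma
    labels, n = ndimage.label(mask)           # connected-component labeling
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))  # pixels per blob
    return np.asarray(sizes) * pixel_area_um2
```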
591 Brand Identity Creation for Thai Halal Brands
Authors: Pibool Waijittragum
Abstract:
The purpose of this paper is to synthesize research results on the brand identities of Thai Halal brands as they relate to the way of life of Thai Muslims, and to translate those identities into packaging and label designs. The expected benefit is an alternative marketing strategy for the brand-building process for Halal products in Thailand. Four elements of marketing strategy necessary for brand identity creation form the research framework: attributes, benefits, values, and personality. The research methodology combined qualitative and quantitative approaches: 19 marketing experts with dynamic roles in Thai consumer products were interviewed, and a field survey of 122 Thai Muslims, selected from 175 Muslim communities in Bangkok, was conducted. Data were analyzed across five categories of Thai Halal products: 1) meat; 2) vegetables and fruits; 3) instant foods and garnishing ingredients; 4) beverages, desserts, and snacks; 5) hygienic daily products.
The results suggest suitable approaches to the brand identities of Thai Halal brands: 1) the benefit approach, expressing the characteristics of the product together with its benefit, where the resulting packaging design should be clear and display a fresh product; 2) the value approach, expressing the product values that shape consumers' perception, where the packaging design should look simple and use a trustworthy image; and 3) the personality approach, reflecting consumers' self-image, where the packaging design should look sincere, enjoyable, merry, and flamboyant, and use a humorous image.
Keywords: Marketing strategies, Brand identity, Packaging and Label Design, Thai Halal products.
590 A Spatial Hypergraph Based Semi-Supervised Band Selection Method for Hyperspectral Imagery Semantic Interpretation
Authors: Akrem Sellami, Imed Riadh Farah
Abstract:
Hyperspectral imagery (HSI) typically provides a wealth of information captured across a wide range of the electromagnetic spectrum for each pixel in the image; hence, a pixel in HSI is a high-dimensional vector of intensities with a large spectral range and a high spectral resolution, and semantic interpretation is a challenging task of HSI analysis. In this paper, we focus on object classification as HSI semantic interpretation. HSI classification still faces several issues, among them the spatial variability of spectral signatures, the high number of spectral bands, and the high cost of labeling true samples. The high number of spectral bands combined with the low number of training samples poses the problem of the curse of dimensionality. To resolve this problem, we introduce a dimensionality reduction process intended to improve the classification of HSI. The presented approach is a semi-supervised band selection method based on a spatial hypergraph embedding model that represents higher-order relationships, with different weights for the spatial neighbors corresponding to the centroid pixel. This semi-supervised band selection was developed to select bands useful for object classification. The approach is evaluated on AVIRIS and ROSIS HSIs and compared to other dimensionality reduction methods; the experimental results demonstrate its efficacy relative to many existing dimensionality reduction methods for HSI classification.
Keywords: Hyperspectral image, spatial hypergraph, dimensionality reduction, semantic interpretation, band selection, feature extraction.
589 Selecting the Best Sub-Region Indexing the Images in the Case of Weak Segmentation Based On Local Color Histograms
Authors: Mawloud Mosbah, Bachir Boucheham
Abstract:
The color histogram is the oldest method used by CBIR systems for indexing images. Global histograms, however, do not include spatial information, which is why later techniques attempt to overcome this limitation by involving a segmentation task as a preprocessing step. Local histograms employ weak segmentation, while other methods, such as the color coherence vector (CCV), are based on strong segmentation. Indexing based on local histograms consists of splitting the image into N overlapping blocks or sub-regions and computing the histogram of each block. The dissimilarity between two images then reduces to computing the distances between the N local histograms of both images, yielding N*N values; generally, the lowest value is used to rank images, meaning that the lowest value designates which sub-region is used to index the images of the collection being queried. In this paper, we examine the local histogram indexing method in order to compare its results against those of the global histogram. We also address another noteworthy issue that arises when relying on local histograms: which of the N*N values to trust when comparing images, in other words, which of the sub-regions to use for indexing. Based on the results achieved here, relying on local histograms, which imposes extra overhead on the system through the additional segmentation step, does not necessarily produce better results. In addition, we propose some ideas for selecting the local histogram used to encode the image, rather than simply relying on the local histogram with the lowest distance to the query histograms.
Keywords: CBIR, Color Global Histogram, Color Local Histogram, Weak Segmentation, Euclidean Distance.
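A minimal sketch of block-wise local color histograms and the minimum-distance matching rule described above. The grid size and bin count are assumptions, and non-overlapping blocks are used for simplicity where the paper uses overlapping ones.

```python
# Hedged sketch: local color histograms and the N*N minimum-distance rule.
import numpy as np

def local_histograms(img, grid=4, bins=8):
    """img: (h, w, 3) uint8 image. Returns grid*grid normalized histograms."""
    h, w, _ = img.shape
    hists = []
    for i in range(grid):
        for j in range(grid):
            block = img[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogramdd(block.reshape(-1, 3),
                                     bins=(bins,) * 3, range=[(0, 256)] * 3)
            hists.append(hist.ravel() / hist.sum())
    return np.array(hists)

def min_block_distance(hists_a, hists_b):
    """Lowest Euclidean distance over all N*N block pairs (the ranking value)."""
    d = np.linalg.norm(hists_a[:, None, :] - hists_b[None, :, :], axis=2)
    return d.min()
```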
588 LSGENSYS - An Integrated System for Pattern Recognition and Summarisation
Authors: Hema Nair
Abstract:
This paper presents a new system, developed in Java®, for pattern recognition and pattern summarisation in multi-band (RGB) satellite images. The system design is described in some detail, and results of testing the system to analyse and summarise patterns in SPOT MS images and LANDSAT images are discussed.
Keywords: Pattern recognition, image analysis, feature extraction, blackboard component, linguistic summary.
587 Faster Pedestrian Recognition Using Deformable Part Models
Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia
Abstract:
Deformable part models (DPM) achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and by using lookup tables to speed up feature computation; this approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location, with the range of acceptable sizes and positions set by the statistical distribution of bounding boxes in labelled images. With this approach there is no need to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and a doubling of speed. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon; supersampling at 1.5 times the original scale yields an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best-performing DPM-based method. By allowing a small loss in precision, the computational time can be brought down to our target of 100 ms per image, yielding a solution that is faster and still more precise than all publicly available DPM implementations.
Keywords: Autonomous vehicles, deformable part model, DPM, pedestrian recognition.
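A minimal sketch of the frequency-domain trick mentioned above: scoring a feature map against a part filter via FFT-based correlation instead of a sliding window. This only illustrates that one step; the full DPM pipeline (HOG features, parts, latent placement) is not reproduced.

```python
# Hedged sketch: FFT-based cross-correlation of a feature map with a filter.
import numpy as np
from scipy.signal import fftconvolve

def fft_correlate(feature_map, part_filter):
    """Valid-mode cross-correlation: flipping the filter turns FFT-based
    convolution into correlation, giving the same scores as a sliding window
    but in O(n log n) instead of O(n * filter_size)."""
    return fftconvolve(feature_map, np.flip(part_filter), mode="valid")
```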
586 Progressive AAM Based Robust Face Alignment
Authors: Daehwan Kim, Jaemin Kim, Seongwon Cho, Yongsuk Jang, Sun-Tae Chung, Boo-Gyoun Kim
Abstract:
AAM has been successfully applied to face alignment, but its performance is very sensitive to initial values: if the initial values are even slightly far from the global optimum, AAM-based face alignment may well converge to a local minimum. In this paper, we propose a progressive AAM-based face alignment algorithm that first finds the feature parameter vector fitting the inner facial feature points and then localizes the feature points of the whole face using this information. The algorithm exploits the fact that the feature points of the inner part of the face are less variable and less affected by the background surrounding the face than those of the outer part (such as the chin contour). The algorithm consists of two stages. The modeling and relation-derivation stage constructs two AAM models, an inner-face AAM model and a whole-face AAM model, and derives the relation matrix between the inner-face AAM parameter vector and the whole-face AAM parameter vector. The fitting stage then aligns the face progressively in two phases: in the first phase, the algorithm finds the feature parameter vector fitting the inner-face AAM model to a new input face image; in the second phase, it localizes the whole-face feature points based on the whole-face AAM model, using the initial parameter vector estimated from the inner feature parameter vector obtained in the first phase and the relation matrix obtained in the first stage. Experiments verify that the proposed progressive AAM-based face alignment algorithm is more robust with respect to pose, illumination, and face background than the conventional basic AAM-based algorithm.
Keywords: Face alignment, AAM, facial feature detection, model matching.
585 Evaluation Techniques of Photography in Visual Communications in Iran
Authors: Firouzeh Keshavarzi
Abstract:
Although a photograph can in itself be a graphic work, in the field of graphic design images are usually prepared around an advertising idea, and the photographer draws on his or her own knowledge and skills to help realize the design. Knowledge of photography clearly gives the photographer and the designer the means to reach a higher level of quality. A graphic designer need not be a skilled photographer, and may delegate the execution of an idea to an expert photographer. Photographic technologies and methods can be applied in all fields of graphic art, but in Iran they are applied mostly in works such as packaging, posters, billboards, advertising, brochures, and catalogs. In this study, we review how photographic images and techniques should be used in graphic design and what impact they have had on Iranian graphic art. Photographic techniques and procedures can support design and help advance its goals, but technique alone cannot determine the idea: what matters is thinking about design and photography so that creativity can flourish, with photography serving as an effective tool in the graphic designer's mind. Computer software greatly promotes creative techniques, but it, too, remains a tool. Images are used in many fields, above all the graphic arts, not merely for documentation but also for their beauty, and today's graphics owe much to photographic style. Graphic works try to affect their audience, and the photograph is an important factor in attracting attention; given how readily people grasp images, using an image instead of words allows large messages and concepts to be conveyed in the shortest time. Posters, advertisements, brochures, catalogs, and the packaging of highly diverse agricultural, industrial, and food products could hardly exist without images. Today, the use of photographic images plays a major role in enriching graphic design.
Keywords: Photo, photography techniques, contacts, graphic designer, visual communications, Iran.
584 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data
Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L Duan
Abstract:
The conditional density characterizes the distribution of a response variable y given a predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of conditional density estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, we extend NF neural networks to the case where an external x is present. Specifically, we use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zP, zN]. The zP component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zN component is a high-dimensional independent Gaussian vector, which explains the variations in y not or less related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), requires only a simple modification of the common normalizing flow framework while significantly improving the interpretation of the latent component, since zP represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject id, from the other random variations. Further, the experiments show that an unconditional NF neural network based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.
Keywords: Conditional density estimation, image generation, normalizing flow, supervised dimension reduction.
583 Application of Particle Image Velocimetry in the Analysis of Scale Effects in Granular Soil
Authors: Zuhair Kadhim Jahanger, S. Joseph Antony
Abstract:
Systematic studies of the scale effects of strip footings on different sand packings remain scarce in the literature. In this research, the variation of ultimate bearing capacity and the deformation pattern of soil beneath strip footings of different widths, under plane-strain conditions on the surface of loose, medium-dense, and dense sand, are studied systematically using experimental and noninvasive methods for measuring microscopic deformations. The presented analyses are based on model-scale compression tests analysed using the particle image velocimetry (PIV) technique. Upper-bound analysis of the current study shows that the maximum vertical displacement of the sand under the ultimate load increases with the width of the footing, but at a rate that decreases with the relative density of the sand, whereas the relative vertical displacement in the sand decreases as the footing width increases. Good agreement is observed between experimental results for different footing widths and relative densities. The experimental analyses show a pronounced scale effect for strip surface footings: the bearing capacity factor Nγ decreases rapidly up to footing widths B = 0.25 m, 0.35 m, and 0.65 m for loose, medium-dense, and dense sand, respectively, after which there is no significant decrease in Nγ. Both the deformation modes of the soil and the ultimate bearing capacity values are affected by the footing width. The obtained results could be used to improve settlement calculations for foundations interacting with granular soil.
Keywords: PIV, granular mechanics, scale effect, upper bound analysis.
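A minimal sketch of the core PIV step behind the analysis above: estimating the displacement of an interrogation window between two frames by locating the cross-correlation peak. Window extraction, subpixel refinement, and calibration to physical units are omitted.

```python
# Hedged sketch: PIV displacement of one interrogation window via correlation.
import numpy as np
from scipy.signal import fftconvolve

def piv_displacement(win_a, win_b):
    """win_a, win_b: same-size grayscale interrogation windows from two frames.
    Returns (dy, dx), the displacement of the pattern from frame A to frame B."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = fftconvolve(b, np.flip(a), mode="full")   # cross-correlation map
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # zero displacement puts the peak at the map center (h-1, w-1)
    return (peak[0] - (win_a.shape[0] - 1),
            peak[1] - (win_a.shape[1] - 1))
```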
582 A Comparative Analysis of Multiple Criteria Decision Making Analysis Methods for Strategic, Tactical, and Operational Decisions in Military Fighter Aircraft Selection
Authors: C. Ardil
Abstract:
This paper presents a comparative analysis of multiple criteria decision making analysis methods for strategic, tactical, and operational decisions in military fighter aircraft selection for air force fleet planning. The evaluation criteria governing the decision analysis process are determined from the literature for three existing military combat aircraft. The military fighter aircraft selection problem is structured using the "preference analysis for reference ideal solution (PARIS)" approach in multiple criteria decision making analysis (MCDMA). Systematic comparisons were made with existing MCDMA methods (PARIS and TOPSIS) to verify the stability and accuracy of the results obtained. The proposed integrated MCDMA systematic approach is expected to address the issues encountered in the aircraft selection process. The comparative analysis results show that the proposed method is an effective and accurate tool that can help analysts make better strategic, tactical, and operational decisions.
Keywords: aircraft, military fighter aircraft selection, multiple criteria decision making, multiple criteria decision making analysis, mean weight, entropy weight, MCDMA, PARIS, TOPSIS, Saab Gripen, Dassault Rafale, Eurofighter Typhoon
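A minimal sketch of a crisp TOPSIS ranking with the mean-weight scheme named in the keywords (entropy weights could be substituted via the method shown under entry 603). The decision matrix below is illustrative, not the paper's aircraft data.

```python
# Hedged sketch of crisp TOPSIS with mean weights (illustrative matrix).
import numpy as np

def topsis(X, weights, benefit):
    """X: m alternatives x n criteria; benefit: bool mask, True = larger-is-better."""
    R = X / np.linalg.norm(X, axis=0)            # vector normalization
    V = R * weights                              # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)               # closeness coefficient

X = np.array([[8.0, 6.5, 7.0],                   # three alternatives (assumed scores)
              [7.5, 8.0, 6.0],
              [9.0, 7.0, 8.5]])
w = np.full(3, 1 / 3)                            # mean weights
print(topsis(X, w, benefit=np.array([True, True, True])))
```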
581 Emotions Triggered by Children’s Literature Images
Abstract:
Images and illustrations assume an increasingly relevant role in communicating meanings and triggering emotions in contemporary texts, regardless of the age group for which the texts are intended or their nature. It is no coincidence that children's books are full of illustrations and that the image-to-text ratio decreases as the age group grows. The vast majority of children's books can be considered multimodal texts containing text and images/illustrations that interact with each other to provide the young reader with a broader and more creative understanding of the book's narrative. This interaction is very diverse, ranging from images/illustrations that are not essential for understanding the storytelling to those that contribute significantly to the meaning of the story. Usually, these books are also read by adults, namely by parents, educators, and teachers, who act as mediators between the book and the children, explaining aspects that are, or seem to be, too complex for the child's context. It should be noted that some books labeled as children's books are clearly intended for both children and adults. In this work, following a qualitative and interpretative methodology based on written productions, participant observation, and field notes, we describe the perceptions of future teachers of the 1st cycle of basic education, attending a master's degree at a Portuguese university, about the role of the image in literary and non-literary texts, namely in mathematical texts, and how these can constitute precious resources for emotional regulation and for the design of creative didactic situations. The analysis of the collected data provided evidence of the evolution of the participants' perception of the crucial role of images in children's literature, not only as an emotional regulator for young readers but also as a creative source for the design of meaningful didactic situations crossing scientific areas other than the mother tongue, namely mathematics.
Keywords: Children’s literature, emotions, multimodal texts, soft skills.
580 Image Segmentation Using Suprathreshold Stochastic Resonance
Authors: Rajib Kumar Jha, P.K.Biswas, B.N.Chatterji
Abstract:
In this paper, a new concept of the partial complement of a graph G is introduced, and using it a new graph parameter, called the completion number of a graph G and denoted by c(G), is defined. Some basic properties of the completion number are studied, and upper bounds for the completion number of several classes of graphs are obtained; the paper also includes a characterization.
Keywords: Completion number, maximum independent subset, partial complements, partial self-complementary.
579 Landcover Mapping Using Lidar Data and Aerial Image and Soil Fertility Degradation Assessment for Rice Production Area in Quezon, Nueva Ecija, Philippines
Authors: Eliza. E. Camaso, Guiller. B. Damian, Miguelito. F. Isip, Ronaldo T. Alberto
Abstract:
Land-cover maps are important for many scientific, ecological, and land management purposes, and during the last decades a rapid decrease in soil fertility has been observed, attributed to land use practices such as rice cultivation. High-precision land-cover maps, which are important for economic management, are not yet available for the area. To ensure accurate land-cover mapping and automatic land use and cover detection, remote sensing is a very suitable tool. This study not only provides high-precision land-cover maps but also estimates the rice production area that has undergone chemical degradation due to fertility decline. Land cover was delineated and classified into pre-defined classes to achieve proper detection of features. After generating the land-cover map of the area of high-intensity rice cultivation, a soil fertility degradation assessment of the rice production area was carried out to assess the impact of the soils used in agricultural production. Using simple spatial analysis functions in ArcGIS, the land-cover map of the Municipality of Quezon in Nueva Ecija, Philippines was overlaid with the fertility decline maps from the Land Degradation Assessment Philippines - Bureau of Soils and Water Management (LADA-Philippines-BSWM) to determine the area of rice crops in which nitrogen, phosphorus, zinc, and sulfur deficiencies were most likely induced by high dosages of urea and imbalanced N:P fertilization. The results show that 80.00% of fallow land and 99.81% of the rice production area exhibit high soil fertility decline.
Keywords: Aerial image, land-cover, LiDAR, soil fertility degradation.
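A minimal sketch of the overlay step described above, done on rasters rather than in ArcGIS: intersect a land-cover map with a fertility-decline map to measure the affected rice area. The class codes and pixel size are hypothetical.

```python
# Hedged sketch: raster overlay of land cover and fertility decline.
import numpy as np

RICE, HIGH_DECLINE = 3, 2          # hypothetical raster class codes
PIXEL_AREA_HA = 0.01               # e.g. 10 m x 10 m pixels (assumed)

def affected_rice_area(landcover, fertility):
    """landcover, fertility: integer rasters of identical shape.
    Returns (degraded rice area in ha, fraction of rice area affected)."""
    rice = landcover == RICE
    degraded = rice & (fertility == HIGH_DECLINE)
    share = degraded.sum() / max(rice.sum(), 1)
    return degraded.sum() * PIXEL_AREA_HA, share
```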
578 A Novel Impulse Detector for Filtering of Highly Corrupted Images
Authors: Umesh Ghanekar
Abstract:
As the performance of a filtering system depends upon the accuracy of its noise detection scheme, in this paper we present a new scheme for impulse noise detection based on two levels of decision: in the first stage the corrupted pixels are identified coarsely, and in the second stage it is finally decided whether the pixel under consideration is really corrupted or not. The efficacy of the proposed filter has been confirmed by extensive simulations.
Keywords: Impulse detection, noise removal, image filtering.
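A minimal sketch of a two-level impulse detector in the spirit described above. The specific decision rules (median-deviation tests) and thresholds are assumptions for illustration, not the paper's scheme.

```python
# Hedged sketch: two-stage impulse detection followed by selective filtering.
import numpy as np
from scipy.ndimage import median_filter

def detect_impulses(img, coarse_tol=40, fine_tol=25):
    """img: 2D uint8 image. Returns a boolean mask of pixels judged corrupted."""
    med3 = median_filter(img.astype(float), size=3)
    coarse = np.abs(img - med3) > coarse_tol     # level 1: coarse candidates
    med5 = median_filter(img.astype(float), size=5)
    fine = np.abs(img - med5) > fine_tol         # level 2: confirm with wider window
    return coarse & fine

def filter_image(img):
    mask = detect_impulses(img)
    out = img.copy()
    out[mask] = median_filter(img, size=3)[mask] # replace only detected pixels
    return out
```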
577 Disparity Estimation for Objects of Interest
Authors: Yen San Yong, Hock Woon Hon
Abstract:
An algorithm for estimating the disparity of objects of interest is proposed. The algorithm uses image shifting and the overlapping area to estimate the disparity value, from which the depth of the objects of interest can be obtained. The algorithm can operate at different levels of accuracy; however, as the accuracy increases, the processing speed decreases. The algorithm is tested with static stereo images and with sequences of stereo images, and the experimental results are presented in this paper.
Keywords: Stereo vision, binocular parallax.
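A minimal sketch of disparity-by-shifting as described above: shift the right image horizontally and score the overlapping area. The SAD matching score is an assumption; the step parameter reflects the stated accuracy/speed trade-off.

```python
# Hedged sketch: disparity from horizontal shifting and overlap matching.
import numpy as np

def disparity_by_shifting(left_roi, right_roi, max_shift=64, step=1):
    """left_roi, right_roi: same-size grayscale crops of the object of interest.
    Returns the shift whose overlapping area matches best (lowest mean SAD).
    Smaller step = higher accuracy but slower, as noted in the abstract."""
    best_shift, best_score = 0, np.inf
    for d in range(0, max_shift, step):
        overlap_l = left_roi[:, d:]                       # overlap after shift
        overlap_r = right_roi[:, :right_roi.shape[1] - d]
        score = np.abs(overlap_l.astype(float) - overlap_r).mean()
        if score < best_score:
            best_shift, best_score = d, score
    return best_shift                                     # disparity in pixels
```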
576 Validation on 3D Surface Roughness Algorithm for Measuring Roughness of Psoriasis Lesion
Authors: M.H. Ahmad Fadzil, Esa Prakasa, Hurriyatul Fitriyah, Hermawan Nugroho, Azura Mohd Affandi, S.H. Hussein
Abstract:
Psoriasis is a widespread skin disease affecting up to 2% of the population, with plaque psoriasis accounting for about 80% of cases. It can be identified as a red lesion, and at higher severity the lesion is usually covered with rough scale. Psoriasis Area Severity Index (PASI) scoring is the gold standard method for measuring psoriasis severity, and scaliness is one of the PASI parameters that must be quantified. The surface roughness of a lesion can be used as a scaliness feature, since scale on the lesion surface makes the lesion rougher. Dermatologists usually assess severity through their tactile sense, which requires direct contact between doctor and patient; the problem is that the doctor may not assess the lesion objectively. In this paper, a digital image analysis technique is developed to determine the scaliness of a psoriasis lesion objectively and provide the PASI scaliness score. The psoriasis lesion is modelled as a rough surface, created by superimposing a smooth average (curve) surface with a triangular waveform. For roughness determination, polynomial surface fitting is used to estimate the average surface, followed by subtraction of the average surface from the rough surface to give the elevation surface (surface deviations). The roughness index is calculated by applying the average-roughness equation to the height-map matrix. The roughness algorithm was tested on 444 lesion models; in validation, only 6 models could not be accepted (percentage error greater than 10%), and these errors were due to scanned image quality. The algorithm was also validated for roughness measurement on abrasive papers with flat surfaces: the Pearson correlation coefficient between the grade value (G) of the abrasive paper and Ra is -0.9488, which shows a strong relation between G and Ra. The algorithm could be improved by surface filtering, especially to overcome problems with noisy data.
Keywords: Psoriasis, roughness algorithm, polynomial surface fitting.
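A minimal sketch of the roughness pipeline described above: fit a low-order polynomial surface, subtract it to get the elevation surface, and apply the average-roughness (Ra) formula. The polynomial order is an assumption.

```python
# Hedged sketch: polynomial surface fit and average roughness (Ra).
import numpy as np

def average_roughness(height_map, order=2):
    """height_map: 2D array of surface heights. Returns Ra of the deviations."""
    h, w = height_map.shape
    y, x = np.mgrid[0:h, 0:w]
    # design matrix for a full 2D polynomial of the given order
    cols = [x.ravel() ** i * y.ravel() ** j
            for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, height_map.ravel(), rcond=None)
    average_surface = (A @ coef).reshape(h, w)   # smooth estimated surface
    elevation = height_map - average_surface     # surface deviations
    return np.abs(elevation).mean()              # Ra = mean absolute deviation
```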
575 Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach
Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar
Abstract:
Uncontrolled growth of abnormal cells in the lung, in the form of a tumor, can be either benign (non-cancerous) or malignant (cancerous). Patients with lung cancer (LC) have an average life expectancy of five years, so early diagnosis, detection, and prediction reduce the need for treatment options carrying the risk of invasive surgery and increase the survival rate. Computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) are commonly used for early detection of cancer. A Gaussian filter together with a median filter is used for smoothing and noise removal, and histogram equalization (HE) for image enhancement gives the best results without requiring further refinement. The lung cavities are extracted, the background other than the two lung cavities is completely removed, and the right and left lungs are segmented separately. Region property measurements (area, perimeter, diameter, centroid, and eccentricity) are taken for the segmented tumor image, while texture is characterized by gray-level co-occurrence matrix (GLCM) functions; feature extraction provides the region of interest (ROI) given as input to the classifiers. Two levels of classification are employed: k-nearest neighbor (KNN) determines whether the patient's condition is normal or abnormal, while an artificial neural network (ANN) identifies the cancer stage. The discrete wavelet transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technology shows encouraging results for real-time information and online detection in future research.
Keywords: ANN, DWT, GLCM, KNN, ROI, artificial neural networks, discrete wavelet transform, gray-level co-occurrence matrix, k-nearest neighbor, region of interest.
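A minimal sketch of the texture-feature and first-level KNN step named above, using scikit-image GLCM properties. The distances, angles, property set, and neighbor count are illustrative assumptions; the DWT features and second-level ANN are not reproduced here.

```python
# Hedged sketch: GLCM texture features feeding a KNN normal/abnormal classifier.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(roi):
    """roi: 2D uint8 lung region. Returns a small GLCM texture feature vector."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# first-level classifier: normal vs abnormal (labeled ROIs assumed available)
# X = np.stack([glcm_features(r) for r in rois]); y = labels
# clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
```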