Search results for: content based image retrieval (CBIR)
33065 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores
Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan
Abstract:
Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a specific feature vector, whose dimensionality depends on the pre-trained neural network, from a facial image. However, to improve the efficiency of parameter calculation, such algorithms generally reduce image detail by pooling. This operation overlooks the fine details that forensic experts attend to most closely. In our experiment, we adopted a variety of deep-learning-based face recognition algorithms and compared a large number of naturally collected face images against known frontal ID photos of the same individuals. Downscaling and manual editing were performed on the test images. The results showed that deep-learning-based facial recognition algorithms detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were examined to evaluate the accuracy of the biometric systems and the forensic experts. The experiments showed that the biometric systems were skilled at distinguishing category features, whereas the forensic experts were better at discovering the individual features of human faces. In the proposed approach, fusion was performed at the score level. At the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of objective methods of facial comparison and provides a novel method for human-machine collaboration in this field.
Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics
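The abstract does not give the fusion formula; as a minimal sketch, score-level fusion can be illustrated as a weighted combination of normalized machine and examiner scores, plus a toy Gaussian likelihood-ratio model. All names, weights, and distribution parameters here are hypothetical illustrations, not the authors' method:

```python
import math

def fuse_scores(machine_score, examiner_score, w_machine=0.6):
    """Score-level fusion: convex combination of two similarity
    scores assumed to be pre-normalized to [0, 1]."""
    return w_machine * machine_score + (1 - w_machine) * examiner_score

def log10_lr(score, genuine_mean=0.8, impostor_mean=0.2, std=0.1):
    """Toy log10 likelihood ratio under equal-variance Gaussian
    models of the genuine and impostor score distributions."""
    g = -((score - genuine_mean) ** 2) / (2 * std ** 2)
    i = -((score - impostor_mean) ** 2) / (2 * std ** 2)
    return (g - i) / math.log(10)

fused = fuse_scores(0.9, 0.7)  # 0.6*0.9 + 0.4*0.7 = 0.82
```

A positive log-LR favors the same-source hypothesis, a negative one the different-source hypothesis; the fusion weight would in practice be tuned on held-out genuine/impostor score data.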
Procedia PDF Downloads 130
33064 Effect of Threshold Configuration on Accuracy in Upper Airway Analysis Using Cone Beam Computed Tomography
Authors: Saba Fahham, Supak Ngamsom, Suchaya Damrongsri
Abstract:
Objective: The objective was to determine the optimal threshold of the Romexis software for airway volume and minimum cross-sectional area (MCA) analysis, using ImageJ as the gold standard. Materials and Methods: A total of ten cone-beam computed tomography (CBCT) images were collected. The airway volume and MCA of each patient were analyzed using the automatic airway segmentation function in the CBCT DICOM viewer (Romexis). Airway volume and MCA measurements were conducted on each CBCT sagittal view with fifteen different threshold values in the Romexis software, ranging from 300 to 1000. Duplicate DICOM files, in axial view, were imported into ImageJ for concurrent airway volume and MCA analysis as the gold standard. The airway volume and MCA measured with Romexis and ImageJ were compared using a t-test with Bonferroni correction, and statistical significance was set at p < 0.003. Results: Concerning airway volume, thresholds of 600 to 850, as well as 1000, produced results that were not significantly different from those obtained with ImageJ. Regarding MCA, thresholds from 400 to 850 in Romexis Viewer showed no difference from ImageJ. Notably, within the threshold range of 600 to 850, no statistically significant differences were observed in either the airway volume or the MCA analysis compared with ImageJ. Conclusion: This study demonstrated that using Planmeca Romexis Viewer 6.4.3.3 within the threshold range of 600 to 850 yields airway volume and MCA measurements with no statistically significant difference from measurements obtained with ImageJ. This outcome holds implications for diagnosing upper airway obstructions and for post-orthodontic surgical monitoring.
Keywords: airway analysis, airway segmentation, cone beam computed tomography, threshold
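The p < 0.003 criterion follows from a Bonferroni correction over the fifteen threshold comparisons (0.05 / 15 ≈ 0.0033). A short sketch of the calculation, with made-up per-threshold p-values for illustration:

```python
def bonferroni_alpha(family_alpha, n_tests):
    """Per-comparison significance level under Bonferroni correction."""
    return family_alpha / n_tests

# fifteen Romexis thresholds compared against the ImageJ gold standard
alpha = bonferroni_alpha(0.05, 15)  # ~0.0033, reported as p < 0.003

# hypothetical per-threshold p-values from the t-tests
p_values = {600: 0.20, 500: 0.001, 1000: 0.08}
# thresholds whose measurements are NOT significantly different from ImageJ
not_different = [t for t, p in p_values.items() if p >= alpha]
```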
Procedia PDF Downloads 44
33063 The Intersection/Union Region Computation for Drosophila Brain Images Using Encoding Schemes Based on Multi-Core CPUs
Authors: Ming-Yang Guo, Cheng-Xian Wu, Wei-Xiang Chen, Chun-Yuan Lin, Yen-Jen Lin, Ann-Shyn Chiang
Abstract:
With more and more Drosophila driver and neuron images available, finding the similarity relationships among them has become an important task for functional inference. A general problem is how to find a Drosophila driver image that can cover a set of Drosophila driver/neuron images. To solve this problem, the intersection/union region for a set of images must first be computed; a comparison step then calculates the similarities between that region and other images. In this paper, three encoding schemes, namely Integer, Boolean, and Decimal, are proposed to encode each image as a one-dimensional structure. The intersection/union region of these images can then be computed using compare operations, Boolean operators, and a lookup-table method. Finally, the comparison step is performed as in the union region computation, and the similarity score is calculated using the Tanimoto coefficient. The methods for the region computation were also implemented in a multi-core CPU environment with OpenMP. The experimental results show that, in the encoding phase, the Boolean scheme performs best, while in the region computation phase, the Decimal scheme performs best when the number of images is large. The speedup ratio reaches 12 on 16 CPUs. This work was supported by the Ministry of Science and Technology under grant MOST 106-2221-E-182-070.
Keywords: Drosophila driver image, Drosophila neuron images, intersection/union computation, parallel processing, OpenMP
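As an illustration of the Boolean scheme, each image can be encoded as an integer bit mask so that intersection and union reduce to bitwise AND/OR, and similarity to the Tanimoto coefficient. This is a serial sketch of the idea on toy data; the paper's OpenMP parallelization and lookup tables are omitted:

```python
from functools import reduce

def encode_boolean(pixels, threshold=0):
    """Boolean scheme: encode an image (flattened pixel list) as an
    integer bit mask, one bit per above-threshold pixel."""
    mask = 0
    for i, p in enumerate(pixels):
        if p > threshold:
            mask |= 1 << i
    return mask

def tanimoto(a, b):
    """Tanimoto coefficient |A∩B| / |A∪B| of two bit masks."""
    inter = bin(a & b).count("1")
    union = bin(a | b).count("1")
    return inter / union if union else 1.0

images = [[1, 0, 1], [1, 1, 0]]  # two tiny binary "images"
masks = [encode_boolean(img) for img in images]
intersection = reduce(lambda x, y: x & y, masks)  # bitwise AND over the set
union = reduce(lambda x, y: x | y, masks)         # bitwise OR over the set
```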
Procedia PDF Downloads 239
33062 Quality Rabbit Skin Gelatin with Acetic Acid Extract
Authors: Wehandaka Pancapalaga
Abstract:
This study aimed to analyze the water content, yield, fat content, protein content, viscosity, gel strength, pH, melting point, and organoleptic properties of rabbit skin gelatin extracted with different levels of acetic acid. The material used in this study was skin from male Rex rabbits. The treatments were P1 = extraction with 2% acetic acid (v/v); P2 = extraction with 3% acetic acid (v/v); P3 = extraction with 4% acetic acid (v/v); P5 = extraction with 5% acetic acid (v/v). The results showed that a greater concentration of acetic acid in the extraction reduces the water content and fat content of rabbit skin gelatin but increases its protein content, viscosity, pH, gel strength, yield, and melting point. There were no differences in texture, color, and smell between rabbit skin gelatin and cow skin gelatin. The results showed that the quality of rabbit skin gelatin complies with the Indonesian National Standard (SNI). In conclusion, extraction with 5% acetic acid produces the best quality gelatin.
Keywords: gelatin, skin rabbit, acetic acid extraction, quality
Procedia PDF Downloads 417
33061 Enhancing of Biogas Production from Slaughterhouse and Dairy Farm Waste with Pasteurization
Authors: Mahmoud Hassan Onsa, Saadelnour Abdueljabbar Adam
Abstract:
Wastes from slaughterhouses in most towns in Sudan are often poorly managed and sometimes discharged into adjoining streams due to poor enforcement of standards, causing environmental and public health hazards; there is also a large amount of manure from dairy farms. This paper presents anaerobic digestion and biogas production as a solution for organic waste from dairy farms and slaughterhouses. It reports the findings of an experimental investigation of biogas production with and without pasteurization using cow manure, blood, and rumen content, mixed at two proportions: 72.3% manure, 21.7% rumen content, and 6% blood for bio-digester 1, with 62% dry matter at the start and without pasteurization; and 72.3% manure, 21.7% rumen content, and 6% blood for bio-digester 2, with 10% dry matter and pasteurization. The paper analyzes the quantitative and qualitative composition of the biogas: gas content and methane concentration. Bio-digester 2 gave the highest biogas output, 2.9 mL/g dry matter/day, together with a high-quality biogas of 87.4% methane content, useful for combustion and energy production, and a healthy bio-fertilizer; bio-digester 1 gave 1.68 mL/g dry matter/day with 85% methane content, which is useful for combustion and energy production and can be considered a new technology of drier bio-digesters.
Keywords: anaerobic digestion, bio-digester, blood, cow manure, rumen content
Procedia PDF Downloads 727
33060 The Effects of Different Amounts of Additional Moisture on the Physical Properties of Cow Pea (Vigna unguiculata (L.) Walp.) Extrudates
Authors: L. Strauta, S. Muižniece-Brasava
Abstract:
Even though legumes possess high nutritional value and a rather high protein content for products of plant origin, they are underutilized, mostly due to their lengthy cooking time. To increase the presence of legume-based products in the human diet, new extruded products were made of cow peas (Vigna unguiculata (L.) Walp.). As is known, adding different amounts of moisture to the flour before extrusion can change the physical properties of the extruded product. Experiments were carried out to estimate the optimal moisture content for cow pea extrusion. After extrusion, the pH level had dropped from 6.7 to 6.5. The lowest hardness was observed in the samples with an additional 9 g 100 g⁻¹ of moisture, at 28±4 N; the volume mass of the samples with an additional 9 g 100 g⁻¹ of water was 263±3 g L⁻¹, and all samples were approximately 7±1 mm long.
Keywords: cow pea, extrusion-cooking, moisture, size
Procedia PDF Downloads 207
33059 A Convolutional Neural Network-Based Model for Lassa fever Virus Prediction Using Patient Blood Smear Image
Authors: A. M. John-Otumu, M. M. Rahman, M. C. Onuoha, E. P. Ojonugwa
Abstract:
A Convolutional Neural Network (CNN) model for predicting Lassa fever was built using the Python 3.8.0 programming language, alongside the Keras 2.2.4 and TensorFlow 2.6.1 libraries as the development environment, in order to reduce the currently high risk of Lassa fever in West Africa, particularly in Nigeria. The study was prompted by major flaws in existing conventional laboratory methods for diagnosing Lassa fever (RT-PCR), as well as flaws reported in the literature for AI-based techniques used for probing and prognosis of Lassa fever. A total of 15,679 blood smear microscopic images were collected. The proposed model was trained on 70% of the dataset and tested on the remaining 30% of the microscopic images to avoid overfitting. A 3x3x3 convolution filter was used in the proposed system to extract features from the microscopic images. The proposed CNN-based model achieved a recall of 96%, a precision of 93%, an F1 score of 95%, and an accuracy of 94% in predicting and classifying the images into clean or infected samples. Based on empirical evidence from the literature consulted, the proposed model outperformed the other existing AI-based techniques evaluated. If properly deployed, the model will assist physicians, medical laboratory scientists, and patients in making accurate diagnoses of Lassa fever cases, allowing the mortality rate due to the Lassa fever virus to be reduced through sound decision-making.
Keywords: artificial intelligence, ANN, blood smear, CNN, deep learning, Lassa fever
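The reported recall, precision, F1 score, and accuracy are standard confusion-matrix quantities. A small sketch, with made-up labels rather than the paper's data, shows how they are computed for the clean/infected classification:

```python
def classification_metrics(y_true, y_pred, positive="infected"):
    """Precision, recall, F1, and accuracy for a binary classifier."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)  # true positives
    fp = sum(t != positive and p == positive for t, p in pairs)  # false positives
    fn = sum(t == positive and p != positive for t, p in pairs)  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = sum(t == p for t, p in pairs) / len(pairs)
    return precision, recall, f1, accuracy

# hypothetical blood smear labels
truth = ["infected", "clean", "infected", "clean"]
preds = ["infected", "infected", "infected", "clean"]
precision, recall, f1, accuracy = classification_metrics(truth, preds)
```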
Procedia PDF Downloads 120
33058 Monocular Depth Estimation Benchmarking with Thermal Dataset
Authors: Ali Akyar, Osman Serdar Gedik
Abstract:
Depth estimation is a challenging computer vision task that involves estimating the distance between objects in a scene and the camera; it predicts how far each pixel in the 2D image is from the capture point. Several important Monocular Depth Estimation (MDE) studies are based on Vision Transformers (ViT). We benchmark three major studies. The first aims to build a simple and powerful foundation model that deals with any image under any condition. The second proposes a method that mixes multiple datasets during training together with a robust training objective. The third combines generalization performance with state-of-the-art results on specific datasets. Although there are studies using thermal images, we wanted to benchmark these three non-thermal, state-of-the-art studies on a hybrid image dataset captured with Multi-Spectral Dynamic Imaging (MSX) technology. MSX technology produces detailed thermal images by bringing the thermal and visual spectrums together. Thanks to this technology, our dataset images are not blurry and poorly detailed like ordinary thermal images; on the other hand, they are not captured under the ideal lighting conditions of RGB images. We compared the three methods under test on our thermal dataset, which has not been done before. Additionally, we propose an image enhancement deep learning model for thermal data that helps extract the features required for monocular depth estimation. The experimental results demonstrate that, after applying our proposed model, the performance of the three methods under test increased significantly for thermal image depth prediction.
Keywords: monocular depth estimation, thermal dataset, benchmarking, vision transformers
Procedia PDF Downloads 32
33057 Effect of Alkaline Activator, Water, Superplasticiser and Slag Contents on the Compressive Strength and Workability of Slag-Fly Ash Based Geopolymer Mortar Cured under Ambient Temperature
Authors: M. Al-Majidi, A. Lampropoulos, A. Cundy
Abstract:
Geopolymer (cement-free) concrete is the most promising green alternative to ordinary Portland cement concrete and other cementitious materials. While a range of different geopolymer concretes have been produced, a common feature of these concretes is a heat curing treatment, which is essential to provide sufficient mechanical properties at an early age. However, there are several practical issues with applying heat curing to large-scale structures. The purpose of this study is to develop cement-free concrete without heat curing treatment. Experimental investigations were carried out in two phases. In the first phase (Phase A), the optimum contents of water, polycarboxylate-based superplasticizer, and potassium silicate activator in the mix were determined. In the second phase (Phase B), the effect of incorporating ground granulated blast furnace slag (GGBFS) on the compressive strength of fly ash (FA) and slag based geopolymer mixtures was evaluated. Setting time and workability tests were also conducted alongside the compressive tests. The results showed that as the slag content increased, the setting time was reduced while the compressive strength improved. The obtained compressive strength was in the range of 40-50 MPa for the 50% slag replacement mixtures. Furthermore, the results indicated that increasing the water and superplasticizer contents retarded the setting time and slightly reduced the compressive strength. The compressive strength of the examined mixes increased considerably as the potassium silicate content was increased.
Keywords: fly ash, geopolymer, potassium silicate, slag
Procedia PDF Downloads 222
33056 Utilization of Rice and Corn Bran with Dairy By-Product in Tarhana Production
Authors: Kübra Aktaş, Nihat Akin
Abstract:
Tarhana is a traditional Turkish fermented food. It is widely consumed as a soup and includes many different ingredients, such as wheat flour, various vegetables and spices, yoghurt, and baker's yeast. It can also be enriched by adding other ingredients, and thus its nutritional properties can be enhanced. In this study, tarhana was supplemented with two different types of bran (rice bran and corn bran) and WPC (whey protein concentrate powder) to improve its nutritional and functional properties. Some chemical properties of tarhana containing the two brans at different levels (0, 5, 10, and 15%) and WPC (0, 5, and 10%) were investigated. The results indicated that the addition of WPC increased the ash content in tarhanas fortified with rice and corn bran. The highest antioxidant and phenolic content values were obtained with the addition of rice bran to the tarhana formulation. Compared to tarhana with corn bran, rice bran addition gave higher oil content values. The cellulose content of the tarhana samples was between 0.75% and 2.74%, and corn bran had an improving effect on the cellulose contents of the samples. In terms of protein content, the addition of WPC raised the protein content of the samples.
Keywords: corn, rice, tarhana, whey
Procedia PDF Downloads 334
33055 Image Segmentation: New Methods
Authors: Flaurence Benjamain, Michel Casperance
Abstract:
We first present a comparative study of three mathematical theories for fusing information sources. This study aims to identify the characteristics inherent in the theory of possibilities, the theory of belief functions (DST), and the theory of plausible and paradoxical reasoning, in order to establish a strategy of choice that allows us to adopt the most appropriate theory for solving a fusion problem, taking into account the acquired information and the imperfections that accompany it. Using the new theory of plausible and paradoxical reasoning, also called Dezert-Smarandache Theory (DSmT), to fuse multi-source information requires, as a first step, the generation of the composite events, which is in general difficult. We therefore present a new approach to constructing pertinent paradoxical classes based on gray-level histograms, which also reduces the cardinality of the hyper-powerset. Secondly, we developed a new technique for ordering and coding generalized focal elements. This method is exploited, in particular, to calculate the Dezert-Smarandache cardinality. We then present an experiment on the classification of a remote sensing image that illustrates the given methods, and we compare the result obtained with DSmT against those obtained with DST and the theory of possibilities.
Keywords: segmentation, image, approach, vision computing
Procedia PDF Downloads 275
33054 Uplift Modeling Approach to Optimizing Content Quality in Social Q/A Platforms
Authors: Igor A. Podgorny
Abstract:
TurboTax AnswerXchange is a social Q/A system supporting users working on federal and state tax returns. Content quality and popularity in the AnswerXchange can be predicted with propensity models using attributes of the question and answer. Using uplift modeling, we identify features of questions and answers that can be modified during the question-asking and question-answering experience in order to optimize AnswerXchange content quality. We demonstrate that adding details to questions always results in increased question popularity, which can be used to promote good quality content. Responding to close-ended questions assertively improves content quality in the AnswerXchange in 90% of cases. Answering knowledge questions with web links increases the likelihood of receiving a negative vote from 60% of the askers. Our findings provide a rationale for employing the uplift modeling approach in AnswerXchange operations.
Keywords: customer relationship management, human-machine interaction, text mining, uplift modeling
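The core uplift quantity behind such findings is the difference in outcome rates between a treated group (e.g. questions posted with added details) and a control group. A minimal sketch with hypothetical data, not AnswerXchange figures:

```python
def rate(outcomes):
    """Fraction of positive outcomes (1 = question became popular)."""
    return sum(outcomes) / len(outcomes)

def uplift(treated, control):
    """Uplift = P(outcome | treated) - P(outcome | control)."""
    return rate(treated) - rate(control)

with_details = [1, 1, 0, 1, 1]     # questions posted with added details
without_details = [0, 1, 0, 0, 1]  # questions posted as-is
lift = uplift(with_details, without_details)  # 0.8 - 0.4 = 0.4
```

A positive uplift means the modification (here, adding details) causally improves the outcome rate relative to leaving the content unchanged; production uplift models estimate this per-instance rather than in aggregate.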
Procedia PDF Downloads 244
33053 The Effect of Unconscious Exposure to Religious Concepts on Mutual Stereotypes of Jews and Muslims in Israel
Authors: Lipaz Shamoa-Nir, Irene Razpurker-Apfeld
Abstract:
This research examined the impact of subliminal exposure to religious content on the mutual attitudes of majority group members (Jews) and minority group members (Muslims). Participants were subliminally exposed to religious concepts (e.g., a mezuzah, yarmulke, or veil) and then filled in questionnaires assessing their stereotypes of out-group members. Each participant was primed with in-group religious concepts, out-group concepts, or neutral ones. The findings show that the Muslim participants were not influenced by the religious content to which they were exposed, while the Jewish participants perceived the Muslims as less 'hostile' when subliminally exposed to religious concepts, regardless of concept type (out-group/in-group). This research highlights the influence of evoked religious content on out-group attitudes even when the perceiver is unaware of the prime content. The power that exposure to content in a non-native language has in activating attitudes towards the out-group is also discussed.
Keywords: intergroup attitudes, stereotypes, majority-minority, religious out-group, implicit content, native language
Procedia PDF Downloads 245
33052 Video Stabilization Using Feature Point Matching
Authors: Shamsundar Kulkarni
Abstract:
Video capture by non-professionals leads to unanticipated effects such as image distortion and image blurring. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper, an algorithm is proposed to stabilize jittery videos. A stable output video is attained without the jitter caused by the shaking of a handheld camera during video recording. First, salient points in each frame of the input video are identified and processed, followed by optimizing and stabilizing the video. Optimization addresses the quality of the video stabilization. The method has shown good results in terms of stabilization and removed distortion from output videos recorded in different circumstances.
Keywords: video stabilization, point feature matching, salient points, image quality measurement
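One common way to remove jitter once per-frame motion has been estimated from matched salient points is to smooth the accumulated camera path and correct each frame by the difference. The following is a one-dimensional sketch of that idea (real stabilizers apply it to x, y, and rotation), offered as an assumption about how such a pipeline typically works, not as the paper's exact algorithm:

```python
def smooth_path(dx_per_frame, radius=2):
    """Accumulate per-frame translations into a camera path, smooth
    it with a moving average, and return the per-frame corrections
    (smoothed position minus raw position) to apply to each frame."""
    path, total = [], 0.0
    for d in dx_per_frame:
        total += d
        path.append(total)  # cumulative camera trajectory
    n = len(path)
    corrections = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        smoothed = sum(path[lo:hi]) / (hi - lo)  # windowed average
        corrections.append(smoothed - path[i])
    return corrections

# steady 1 px/frame pan: the middle frame needs no correction at all
corrections = smooth_path([1, 1, 1, 1, 1])
```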
Procedia PDF Downloads 313
33051 A Comparative Study of Particle Image Velocimetry (PIV) and Particle Tracking Velocimetry (PTV) for Airflow Measurement
Authors: Sijie Fu, Pascal-Henry Biwolé, Christian Mathis
Abstract:
Among modern airflow measurement methods, Particle Image Velocimetry (PIV) and Particle Tracking Velocimetry (PTV), as visual and non-intrusive measurement techniques, are playing an increasingly important role. This paper conducts a comparative experimental study of airflow measurement employing both techniques under the same conditions. Velocity vector fields, velocity contour fields, vorticity profiles, and turbulence profiles are selected as the comparison indexes. The results show that the performance of both the PIV and PTV techniques for airflow measurement is satisfactory, but some differences exist between the two techniques, suggesting that the choice of measurement technique should be based on a comprehensive consideration of the application.
Keywords: airflow measurement, comparison, PIV, PTV
Procedia PDF Downloads 424
33050 Hierarchical Cluster Analysis of Raw Milk Samples Obtained from Organic and Conventional Dairy Farming in Autonomous Province of Vojvodina, Serbia
Authors: Lidija Jevrić, Denis Kučević, Sanja Podunavac-Kuzmanović, Strahinja Kovačević, Milica Karadžić
Abstract:
In the present study, Hierarchical Cluster Analysis (HCA) was applied in order to determine the differences between milk samples originating from a conventional dairy farm (CF) and an organic dairy farm (OF) in AP Vojvodina, Republic of Serbia. The clustering was based on the average saturated fatty acid (SFA) and unsaturated fatty acid (UFA) contents obtained for each season; thus, the HCA included the annual SFA and UFA content values. The clustering procedure was carried out using Euclidean distances and the single-linkage algorithm. The obtained dendrograms indicated that the clustering of UFA in the OF was much more uniform than the clustering of UFA in the CF. In the OF, spring stands out from the rest of the year; similarly, in the CF, winter is separated from the rest of the year. These results could be expected, because the fatty acid composition is greatly influenced by the season and the nutrition of the dairy cows during the year.
Keywords: chemometrics, clustering, food engineering, milk quality
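A naive sketch of the clustering used here: agglomerative merging with single linkage (minimum pairwise Euclidean distance between clusters), applied to hypothetical (SFA, UFA) season averages rather than the study's data:

```python
import math

def single_linkage_merges(points):
    """Agglomerative clustering with single linkage and Euclidean
    distance; returns the sequence of (cluster_a, cluster_b, distance)
    merges, with new clusters numbered from len(points) upward."""
    clusters = {i: [p] for i, p in enumerate(points)}
    merges, next_id = [], len(points)
    while len(clusters) > 1:
        best = None
        ids = sorted(clusters)
        for ai, a in enumerate(ids):
            for b in ids[ai + 1:]:
                # single linkage: minimum distance over all cross pairs
                d = min(math.dist(p, q)
                        for p in clusters[a] for q in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        clusters[next_id] = clusters.pop(a) + clusters.pop(b)
        merges.append((a, b, d))
        next_id += 1
    return merges

# hypothetical (SFA%, UFA%) averages: two similar seasons, one outlier
merges = single_linkage_merges([(70.0, 30.0), (70.5, 29.5), (60.0, 40.0)])
```

In practice, a library routine such as SciPy's `scipy.cluster.hierarchy.linkage` with `method="single"` computes the same merge sequence far more efficiently.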
Procedia PDF Downloads 281
33049 Using Set Up Candid Clips as Viral Marketing via New Media
Authors: P. Suparada, D. Eakapotch
Abstract:
This research's objectives were to analyze the use of new media in the form of set-up candid clips as it affects the product and presenter, to study the effectiveness of using new media in the form of set-up candid clips to increase circulation and audience satisfaction, and to use the information and knowledge gained to develop communication for publicizing and advertising via new media. This is qualitative research based on questionnaires from 50 randomly sampled representatives and in-depth interviews with experts in the publicity and advertising fields. The findings indicated positive and negative effects on the brand and presenter images of the products named "Scotch 100" and "Snickers", which used set-up candid clips via new media for publicity and advertising in Thailand. The findings will be useful for the fields of publicity and advertising in new media forms.
Keywords: candid clip, effect, new media, social network
Procedia PDF Downloads 223
33048 Comparative Analysis of Classical and Parallel Inpainting Algorithms Based on Affine Combinations of Projections on Convex Sets
Authors: Irina Maria Artinescu, Costin Radu Boldea, Eduard-Ionut Matei
Abstract:
This paper is a comparative study of two classical variants of parallel projection methods for solving the convex feasibility problem, together with their equivalents that involve variable weights in the construction of the solutions. We used a graphical representation of these methods for inpainting a convex area of an image in order to investigate their effectiveness in image reconstruction applications. We also present a numerical analysis of the convergence of the four algorithms in terms of the average number of steps and the execution time, in a classical CPU implementation and, alternatively, in a parallel GPU implementation.
Keywords: convex feasibility problem, convergence analysis, inpainting, parallel projection methods
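The parallel projection methods compared in the paper iterate a weighted combination of the projections of the current point onto each constraint set. A one-dimensional sketch with intervals as the convex sets, using fixed weights (the paper's variable-weight variants would update the weights at each step, and real inpainting works on pixel vectors rather than scalars):

```python
def project_interval(x, lo, hi):
    """Euclidean projection of a point onto the interval [lo, hi]."""
    return min(max(x, lo), hi)

def averaged_projections(x0, intervals, weights=None, steps=200):
    """Parallel projection method for the convex feasibility problem:
    each iterate is a convex combination of the projections of the
    current point onto all of the sets."""
    n = len(intervals)
    w = weights or [1.0 / n] * n  # default: equal weights summing to 1
    x = x0
    for _ in range(steps):
        x = sum(wi * project_interval(x, lo, hi)
                for wi, (lo, hi) in zip(w, intervals))
    return x

# the sets [0, 2] and [1, 3] intersect in [1, 2]; iterates converge there
x = averaged_projections(10.0, [(0.0, 2.0), (1.0, 3.0)])
```

Any point already in the intersection is a fixed point of the iteration, which is what makes the method suitable for feasibility problems.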
Procedia PDF Downloads 174
33047 X-Ray Detector Technology Optimization in CT Imaging
Authors: Aziz Ikhlef
Abstract:
Most multi-slice CT scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of traces and connections required by front-illuminated diodes. In back-lit diodes, the electronic noise is improved because of the reduction in load capacitance due to the reduced routing. This translates into better image quality in low-signal applications, improving low-dose imaging in a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT or dual-energy CT, in which projection data at two different tube potentials are collected. One approach utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples.
In addition, this paper presents an overview of the detector technologies and image chain improvements that have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties, such as light output, afterglow, primary speed, and crosstalk, to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize for crosstalk, noise, and temporal/spatial resolution.
Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts
Procedia PDF Downloads 271
33046 Large Neural Networks Learning From Scratch With Very Few Data and Without Explicit Regularization
Authors: Christoph Linse, Thomas Martinetz
Abstract:
Recent findings have shown that neural networks generalize even in over-parametrized regimes with zero training error. This is surprising, since it runs completely against traditional machine learning wisdom. In our empirical study, we fortify these findings in the domain of fine-grained image classification. We show that very large Convolutional Neural Networks with millions of weights do learn with only a handful of training samples and without image augmentation, explicit regularization, or pretraining. We train the architectures ResNet018, ResNet101, and VGG19 on subsets of the difficult benchmark datasets Caltech101, CUB_200_2011, FGVCAircraft, Flowers102, and StanfordCars with 100 classes and more, perform a comprehensive comparative study, and draw implications for the practical application of CNNs. Finally, we show that VGG19 with 140 million weights learns to distinguish airplanes and motorbikes with up to 95% accuracy using only 20 training samples per class.
Keywords: convolutional neural networks, fine-grained image classification, generalization, image recognition, over-parameterized, small data sets
Procedia PDF Downloads 88
33045 Body Composition Analysis of Wild Labeo Bata in Relation to Body Size and Condition Factor from Chenab, Multan, Pakistan
Authors: Muhammad Naeem, Amina Zubari, Abdus Salam, Syed Ali Ayub Bukhari, Naveed Ahmad Khan
Abstract:
Seventy-three wild Labeo bata of different body sizes, ranging from 8.20-16.00 cm in total length and 7.4-86.19 g in body weight, were studied for the analysis of body composition parameters (water content, ash content, fat content, and protein content) in relation to body size and condition factor. The mean percentages in whole wet body weight were 77.71% water, 3.42% ash, 2.20% fat, and 16.65% protein. A highly significant positive correlation was observed between condition factor and body weight (r = 0.243). Protein content, organic content, and ash (% wet body weight) increase with increasing percent water content in Labeo bata, while these constituents (% dry body weight) and fat content (% wet and dry body weight) have no influence on percent water. It was observed that variations in the body constituents have no association with body weight or length.
Keywords: Labeo bata, body size, body composition, condition factor
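The abstract does not state which condition factor was used; assuming the common Fulton formula K = 100·W/L³ (W in g, L in cm), the calculation for the smallest and largest fish reported would look like:

```python
def fulton_condition_factor(weight_g, total_length_cm):
    """Fulton's condition factor K = 100 * W / L^3 (an assumption;
    the abstract does not specify which formula was used)."""
    return 100.0 * weight_g / total_length_cm ** 3

k_small = fulton_condition_factor(7.4, 8.20)     # smallest fish in the study
k_large = fulton_condition_factor(86.19, 16.00)  # largest fish in the study
```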
Procedia PDF Downloads 497
33044 Computer Countenanced Diagnosis of Skin Nodule Detection and Histogram Augmentation: Extracting System for Skin Cancer
Authors: S. Zith Dey Babu, S. Kour, S. Verma, C. Verma, V. Pathania, A. Agrawal, V. Chaudhary, A. Manoj Puthur, R. Goyal, A. Pal, T. Danti Dey, A. Kumar, K. Wadhwa, O. Ved
Abstract:
Background: Skin cancer is now a pressing issue in the field of medical science, and its rising incidence is seriously affecting health and well-being worldwide. Methods: The extracted image of a skin tumor cannot be used directly for diagnosis, since the stored image contains disturbances. Our approach locates the foreground of the extracted appearance of the skin, and image partitioning (segmentation) models are applied to sort out the disturbance in the picture. Results: After partitioning, feature extraction is performed using a genetic algorithm (GA); finally, classification is performed between the training and test data to evaluate a large set of images, which helps doctors make the right prediction. To improve on the existing system, we set our objectives with an analysis: the efficiency of the natural selection process and the enrichment of the histogram are essential in that respect, and the GA is applied with its accuracy to reduce the false-positive rate. Conclusions: The objective of this work is to improve effectiveness; the GA accomplishes this task by bringing down the false-positive rate. The work combines deep learning and medical image processing, which provides superior accuracy, and the proportional types of handling create reusability without errors.
Keywords: computer-aided system, detection, image segmentation, morphology
Procedia PDF Downloads 150
33043 Towards Visual Personality Questionnaires Based on Deep Learning and Social Media
Authors: Pau Rodriguez, Jordi Gonzalez, Josep M. Gonfaus, Xavier Roca
Abstract:
Image sharing in social networks has increased exponentially in the past years: officially, there are 600 million Instagrammers uploading around 100 million photos and videos per day. Consequently, there is a need for new tools to understand the content expressed in shared images, which will greatly benefit social media communication and enable broad and promising applications in education, advertisement, entertainment and psychology. Following these trends, our work takes advantage of the existing relationship between text and personality, already demonstrated by multiple researchers, to show that a relationship between images and personality exists as well. To achieve this goal, we consider that images posted on social networks are typically conditioned on specific words, or hashtags; therefore, any relationship between text and personality can also be observed through the posted images. Our proposal makes use of recent neural-network-based image understanding models to process the vast amount of data generated by social users and determine the images most correlated with personality traits. The final aim is to train a weakly-supervised image-based model for personality assessment that can be used even when textual data are not available, which is an increasing trend. The procedure is as follows: we explore the images publicly shared by users whose accompanying texts or hashtags are most strongly related to the personality traits described by the OCEAN model. These images are used for personality prediction since they have the potential to convey more complex ideas, concepts and emotions. As a result, the use of images in personality questionnaires will provide a deeper understanding of respondents than words alone.
In other words, from the images posted with specific tags, we train a deep learning model based on neural networks that learns to extract a personality representation from a picture and uses it to automatically find the personality that best explains the picture. A deep neural network model is thus learned from thousands of images associated with hashtags correlated with OCEAN traits. We then analyze the network activations to identify the pictures that maximally activate the neurons: the most characteristic visual features per personality trait emerge, since the filters of the convolutional layers are learned to be optimally activated depending on each personality trait. For example, among the pictures that maximally activate the high Openness trait, we see pictures of books, the moon, and the sky. For high Conscientiousness, most of the images are photographs of food, especially healthy food. The high Extraversion output is mostly activated by pictures of many people. In high Agreeableness images, we mostly see flowers. Lastly, for the Neuroticism trait, the high score is maximally activated by pets such as cats or dogs. In summary, despite the huge intra-class and inter-class variability of the images associated with each OCEAN trait, we found consistent visual patterns among the images whose hashtags are most correlated with each trait.
Keywords: emotions and effects of mood, social impact theory in social psychology, social influence, social structure and social networks
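The activation-analysis step described above, once per-image activations of a trait neuron are available, reduces to ranking images by activation; a minimal sketch (the image names and activation values are hypothetical, standing in for real network outputs):

```python
def top_activating_images(activations, k=3):
    """Return the k image ids whose activations of a trait neuron are highest.

    `activations` maps image id -> activation of the neuron associated with
    one OCEAN trait (the values used below are hypothetical).
    """
    ranked = sorted(activations.items(), key=lambda kv: kv[1], reverse=True)
    return [img for img, _ in ranked[:k]]

# Hypothetical activations of a high-Openness neuron over candidate images
openness_neuron = {"book.jpg": 0.91, "moon.jpg": 0.87, "selfie.jpg": 0.12,
                   "sky.jpg": 0.83, "burger.jpg": 0.05}
print(top_activating_images(openness_neuron))
```

Applied to the real model, this ranking is what surfaces the books, moon, and sky imagery reported for the Openness trait.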
Procedia PDF Downloads 196
33042 Prosperous Digital Image Watermarking Approach by Using DCT-DWT
Authors: Prabhakar C. Dhavale, Meenakshi M. Pawar
Abstract:
Every day, tons of data are embedded in digital media or distributed over the internet. The data are distributed so widely that they can easily be replicated without error, putting the rights of their owners at risk. Even when encrypted for distribution, data can easily be decrypted and copied. One way to discourage illegal duplication is to insert information, known as a watermark, into potentially valuable data in such a way that it is impossible to separate the watermark from the data. These challenges motivated researchers to carry out intense research in the field of watermarking. A watermark is a form, image or text impressed onto paper that provides evidence of its authenticity; digital watermarking is an extension of the same concept. There are two types of watermarks: visible and invisible. In this project, we have concentrated on implementing watermarks in images. The main consideration for any watermarking scheme is its robustness to various attacks.
Keywords: watermarking, digital, DCT-DWT, security
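The abstract does not detail the DCT-DWT scheme itself, but the transform-domain idea can be illustrated with a hedged pure-Python sketch: one level of the 2D Haar wavelet transform, with a watermark bit embedded additively in the low-frequency (LL) band. This is a toy stand-in for the paper's method, and `alpha` is a hypothetical embedding strength:

```python
def haar2d(img):
    """One level of the 2D Haar transform (averages/differences, rows then
    columns). Returns the LL, LH, HL, HH sub-bands of a 2D list whose
    dimensions are even."""
    def step(rows):
        lo = [[(r[i] + r[i + 1]) / 2 for i in range(0, len(r), 2)] for r in rows]
        hi = [[(r[i] - r[i + 1]) / 2 for i in range(0, len(r), 2)] for r in rows]
        return lo, hi

    def transpose(m):
        return [list(c) for c in zip(*m)]

    L, H = step(img)                 # horizontal pass
    LL, LH = step(transpose(L))      # vertical pass on the low band
    HL, HH = step(transpose(H))      # vertical pass on the high band
    return tuple(transpose(b) for b in (LL, LH, HL, HH))

def embed_bit(ll_band, bit, alpha=4.0):
    """Additively embed a single watermark bit into every LL coefficient."""
    delta = alpha if bit else -alpha
    return [[c + delta for c in row] for row in ll_band]

img = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
LL, LH, HL, HH = haar2d(img)
marked = embed_bit(LL, bit=True)
```

Embedding in the LL band rather than the raw pixels is what gives wavelet-domain watermarks their robustness: the mark is spread over low-frequency content that survives compression and mild filtering.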
Procedia PDF Downloads 422
33041 Brand Content Optimization: A Major Challenge for Sellers on Marketplaces
Authors: Richardson Ciguene, Bertrand Marron, Nicolas Habert
Abstract:
Today, more and more consumers are purchasing their products and services online. At the same time, the penetration rate of very small and medium-sized businesses on marketplaces continues to increase, which directly intensifies competition between sellers. Thus, only the best-optimized offers are ranked well by the algorithms and are visible to consumers. However, it is almost impossible to know all the Brand Content rules and criteria established by marketplaces, which is essential to optimizing product sheets, especially since these rules change constantly. In this paper, we examine the question of Brand Content optimization through the case of Amazon, in order to capture the scientific dimension behind the subject. We then present the genesis of our research project, DEEPERFECT, which aims to develop original methods and effective tools to help sellers on marketplaces optimize their branded content.
Keywords: e-commerce, scoring, marketplace, Amazon, brand content, product sheets
Procedia PDF Downloads 123
33040 A Study of Some Water Relations and Soil Salinity Using Geotextile Mat under Sprinkler System
Abstract:
This work aimed to study the influence of a geotextile material under sprinkler irrigation on the availability of soil moisture and the salinity of the top 40 cm of the soil profile. A field experiment was carried out to measure soil moisture content, soil salinity and water application efficiency under a sprinkler irrigation system. The results indicated that mats placed at 20 cm depth increased the availability of soil moisture in the root zone. The results further showed increases in water application efficiency due to the geotextile material. In addition, soil salinity in the root zone decreased because of the increased soil moisture content.
Keywords: geotextile, moisture content, sprinkler irrigation
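Water application efficiency, one of the quantities measured above, is conventionally the ratio of water stored in the root zone to water applied; a minimal sketch of that standard definition (the depths used in the example are hypothetical, not the study's measurements):

```python
def application_efficiency(stored_mm, applied_mm):
    """Water application efficiency (%): depth of water stored in the root
    zone divided by depth of water applied, times 100."""
    return 100.0 * stored_mm / applied_mm

# Hypothetical example: 40 mm retained in the root zone out of 50 mm applied
print(application_efficiency(40.0, 50.0))  # 80.0
```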
Procedia PDF Downloads 399
33039 Image Processing of Scanning Electron Microscope Micrograph of Ferrite and Pearlite Steel for Recognition of Micro-Constituents
Authors: Subir Gupta, Subhas Ganguly
Abstract:
In this paper, we demonstrate a new area of application of image processing to metallurgical images, creating more opportunity for structure-property-correlation-based approaches to alloy design. The present exercise focuses on the development of image processing tools suitable for phase segmentation, grain boundary detection and recognition of micro-constituents in SEM micrographs of ferrite and pearlite steels. A comprehensive set of micrographs has been experimentally developed, encompassing variation of ferrite and pearlite volume fractions, with images taken at different magnifications (500X, 1000X, 1500X, 2000X, 3000X and 5000X) under a scanning electron microscope. The variation in volume fraction was achieved using four plain carbon steels containing 0.1, 0.22, 0.35 and 0.48 wt% C, heat treated under annealing and normalizing treatments. The obtained pool of micrographs was arbitrarily divided into training and test sets. Statistical recognition features for the ferrite and pearlite constituents were learned from the training set of micrographs and then applied to the test set. Analysis of the results shows that the developed strategy can successfully detect the micro-constituents across the wide range of magnifications and volume fractions with an accuracy of about ±5%.
Keywords: SEM micrograph, metallurgical image processing, ferrite pearlite steel, microstructure
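A simple baseline for the volume-fraction estimate behind the ±5% accuracy figure is area counting after intensity thresholding, since pearlite generally etches darker than ferrite; a minimal sketch with a hypothetical threshold (a baseline illustration, not the paper's statistical recognition method):

```python
def phase_fractions(gray, threshold=128):
    """Estimate pearlite/ferrite area fractions in a grayscale micrograph by
    counting pixels below/above an intensity threshold (dark = pearlite)."""
    flat = [p for row in gray for p in row]
    dark = sum(1 for p in flat if p < threshold)
    pearlite = dark / len(flat)
    return {"pearlite": pearlite, "ferrite": 1.0 - pearlite}

# Toy 4x4 "micrograph": 4 dark (pearlite-like) pixels out of 16
toy = [[200, 200, 40, 40],
       [200, 200, 40, 40],
       [200, 200, 200, 200],
       [200, 200, 200, 200]]
print(phase_fractions(toy))  # {'pearlite': 0.25, 'ferrite': 0.75}
```

By stereology, the area fraction measured this way estimates the volume fraction of each constituent, which is what links pixel counting back to the alloy's carbon content.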
Procedia PDF Downloads 199
33038 X-Ray Detector Technology Optimization in Computed Tomography
Authors: Aziz Ikhlef
Abstract:
Most multi-slice Computed Tomography (CT) scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes have been investigated for CT machines to overcome the challenge of the higher number of traces and connections required by front-illuminated diodes. In backlit diodes, electronic noise is improved because routing reduction lowers the load capacitance; this translates into better image quality in low-signal applications, improving low-dose imaging in a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT, or dual-energy CT, in which projection data at two different tube potentials are collected. One approach utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples.
In addition, this paper will present an overview of detector technologies and image chain improvements which have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), optimized for crosstalk, noise and temporal/spatial resolution.
Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts
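The residual-signal constraint behind fast-kVp switching can be made concrete with a single-exponential model of scintillator decay; a hedged sketch (the one-exponential model and all numbers below are illustrative assumptions, not measured detector data):

```python
import math

def residual_fraction(primary_decay_us, sample_period_us):
    """Fraction of one sample's scintillation light persisting into the next
    sample, assuming a single-exponential primary decay with the given time
    constant (both arguments in microseconds)."""
    return math.exp(-sample_period_us / primary_decay_us)

# Hypothetical comparison at a 200 us sampling period:
slow = residual_fraction(primary_decay_us=100.0, sample_period_us=200.0)
fast = residual_fraction(primary_decay_us=3.0, sample_period_us=200.0)
```

Under this toy model a slow scintillator leaks roughly 13% of each sample into the next, while a fast one leaves essentially nothing, which is why primary speed dominates the choice of scintillator for dual-energy acquisition.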
Procedia PDF Downloads 194
33037 The Isolation of Enterobacter Ludwigii Strain T976 from Nicotiana Tabacum L. Yunyan 97 and Its Application Study
Authors: Gao Qin, Hu Liwei, Dong Xiangzhou, Zhu Qifa, Cheng Tingming, Zhao Limei, Yang Mengmeng, Zhai Zhen, Dai Huaxin, Liang Taibo, Zhang Shixiang, Xue Chaoqun
Abstract:
The functional strain T976, capable of starch degradation, was isolated from Nicotiana tabacum L. Yunyan 97 tobacco leaves; the ratio of the diameter of the transparent starch-hydrolysis circle to the colony diameter was 4.14, and 16S rDNA sequencing identified the strain as Enterobacter ludwigii. Enterobacter ludwigii T976 was then fermented, and the fermentation broth was sprayed on Yunyan 97 plants in the vigorous growing stage. After a single spraying, the starch content of upper leaves decreased slightly, from 3.77% to 3.1%, the reducing sugar content increased from 4.39% to 5.53%, and the total sugar content increased from 5.82% to 7.39%. The chemical composition was also checked after three sprayings: the starch content of middle leaves decreased from 5.63% to 3.74%, while total sugar and reducing sugar decreased slightly; the starch content of upper leaves decreased from 7.62% to 4.78%, with total sugar and reducing sugar again decreasing slightly; and the starch content of middle leaves decreased from 6.27% to 3.62%, with little change in total sugar and reducing sugar, while other chemical components remained in a suitable range.
Keywords: nicotiana tabacum, yunyan 97, leaf, starch, degradation, enterobacter ludwigii
Procedia PDF Downloads 56
33036 Quantification and Thermal Behavior of Rice Bran Oil, Sunflower Oil and Their Model Blends
Authors: Harish Kumar Sharma, Garima Sengar
Abstract:
Rice bran oil is considered nutritionally superior to many other fats and oils. Therefore, model blends prepared from pure rice bran oil (RBO) and sunflower oil (SFO) were explored for changes in different physicochemical parameters. A repeated deep-fat frying process was carried out using dried potato in order to study the thermal behaviour of pure rice bran oil, sunflower oil and their model blends. Pure rice bran oil and sunflower oil showed good thermal stability during the repeated deep-fat frying cycles, although the model blend of 60% RBO + 40% SFO showed better suitability during repeated deep-fat frying than the remaining blended oils. The quantification of pure rice bran oil in the blended oils, physically refined rice bran oil (PRBO): SnF (sunflower oil), was carried out by different methods. The study revealed that regression equations based on the oryzanol content, palmitic acid composition and iodine value can be used for the quantification. Rice bran oil can easily be quantified in blended oils by HPLC based on the oryzanol content, even at the 1% level. The palmitic acid content can also be used as an indicator to quantify rice bran oil at or above the 20% level in blended oils, whereas the method based on ultrasonic velocity, acoustic impedance and relative association showed initial promise.
Keywords: rice bran oil, sunflower oil, frying, quantification
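The regression-based quantification can be sketched with ordinary least squares on a calibration of oryzanol concentration against blend composition. RBO is rich in oryzanol while SFO contains essentially none, so the relation is close to linear; the calibration numbers below are illustrative assumptions, not the study's data:

```python
def fit_line(x, y):
    """Ordinary least squares fit for y = a*x + b; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical calibration: oryzanol (mg/g of blend) vs. % RBO in the blend
oryzanol = [0.0, 3.0, 7.5, 15.0]
rbo_pct = [0.0, 20.0, 50.0, 100.0]
a, b = fit_line(oryzanol, rbo_pct)

# Estimate RBO content of an unknown blend measured at 1.5 mg/g oryzanol
estimated_rbo = a * 1.5 + b
```

With the HPLC-measured oryzanol value plugged into the fitted line, even low RBO levels become quantifiable, consistent with the 1% detection level reported above.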
Procedia PDF Downloads 308