Graph Codes: 2D Projections of Multimedia Feature Graphs for Fast and Effective Retrieval

Authors: Stefan Wagenpfeil, Felix Engel, Paul McKevitt, Matthias Hemmje

Abstract:

Multimedia indexing and retrieval is generally designed and implemented by employing feature graphs. These graphs typically contain a significant number of nodes and edges to reflect the level of detail in feature detection. A higher level of detail increases the effectiveness of the results but also leads to more complex graph structures. However, graph-traversal-based similarity algorithms are inefficient and computationally intensive, especially for large data structures. To deliver fast and effective retrieval, an efficient similarity algorithm, particularly for large graphs, is mandatory. Hence, in this paper, we define a projection of feature graphs into a 2D space (the Graph Code), together with corresponding algorithms for indexing and retrieval. We show that calculations in this space can be performed more efficiently than graph traversals due to a simpler processing model and a high level of parallelisation. As a consequence, we demonstrate that retrieval effectiveness also increases substantially, as Graph Codes facilitate more levels of detail in feature fusion. Thus, Graph Codes provide a significant increase in efficiency and effectiveness (especially for multimedia indexing and retrieval) and can be applied to images, videos, audio, and text information.
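To make the idea concrete: a feature graph can be encoded as a matrix whose diagonal marks the presence of detected vocabulary terms and whose off-diagonal cells carry relationship types, so that similarity becomes a cell-wise matrix comparison rather than a graph traversal. The Python sketch below illustrates this principle; the encoding details, function names, and the similarity measure are simplified assumptions for illustration, not the paper's exact Graph Code definition or metric.

import numpy as np

def graph_code(nodes, edges):
    """Project a feature graph into a 2D matrix (illustrative sketch).

    nodes: list of feature-vocabulary terms (e.g. detected labels)
    edges: dict mapping (term_a, term_b) -> relationship-type id
    Diagonal cells mark the presence of a feature term; off-diagonal
    cells carry the relationship type between two terms.
    """
    index = {term: i for i, term in enumerate(nodes)}
    m = np.zeros((len(nodes), len(nodes)), dtype=np.int32)
    np.fill_diagonal(m, 1)                   # node present
    for (a, b), rel_type in edges.items():
        m[index[a], index[b]] = rel_type     # edge becomes a matrix cell
    return index, m

def similarity(code_a, code_b):
    """Toy similarity: fraction of matrix cells that agree over the
    shared vocabulary terms. Operates on matrix cells only, with no
    graph traversal, so it vectorises and parallelises easily."""
    (idx_a, m_a), (idx_b, m_b) = code_a, code_b
    shared = sorted(set(idx_a) & set(idx_b))
    if not shared:
        return 0.0
    rows_a = [idx_a[t] for t in shared]
    rows_b = [idx_b[t] for t in shared]
    sub_a = m_a[np.ix_(rows_a, rows_a)]      # restrict to shared terms
    sub_b = m_b[np.ix_(rows_b, rows_b)]
    return float((sub_a == sub_b).mean())

# Example: two small (hypothetical) image feature graphs
gc1 = graph_code(["person", "dog", "park"],
                 {("person", "dog"): 2, ("dog", "park"): 3})
gc2 = graph_code(["person", "dog", "beach"],
                 {("person", "dog"): 2})
print(similarity(gc1, gc2))

For the two toy graphs above, the shared terms "person" and "dog" carry identical cells, so the sketch reports a similarity of 1.0; the matrix comparison touches each cell once instead of traversing node and edge structures.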

Keywords: indexing, retrieval, multimedia, graph code, graph algorithm

