Negative sampling for triplet-based loss: improving representation in self-supervised representation learning

dc.contributor.author: Manuel Alejandro Goyo
dc.contributor.author: Hidalgo Barrientos, Mauricio Fernando
dc.date.accessioned: 2024-12-03T13:06:57Z
dc.date.available: 2024-12-03T13:06:57Z
dc.date.issued: 2024-11-17
dc.description.abstract: Significant strides have been made in artificial neural networks across various fields, but effective training requires extensive labeled data, and acquiring such annotated data is both costly and labor-intensive. To address this challenge, Self-Supervised Representation Learning (SSRL) has emerged as a promising solution. One prominent SSRL method, Contrastive Self-Supervised Learning (CSL), enhances feature representations by discerning similarities and differences among samples in the feature space. Yet accurately identifying dissimilar samples remains a persistent issue, limiting CSL's effectiveness. In response, this paper proposes an enhancement to CSL: explicit negative sampling strategies that use a binary classification algorithm within the feature space to precisely distinguish between similar and dissimilar features. Additionally, Triplet Loss, originally designed for tasks such as person re-identification and face recognition, is incorporated to further refine feature learning. Experimental evaluations on the CIFAR-10 and SVHN datasets validate the proposed method's superiority in content-based image retrieval (CBIR) and classification tasks, with significant improvements in metrics such as mean average precision (MAP), accuracy, recall, precision, and F1-score compared to existing techniques. This framework advances SSRL by enabling scalable neural network training on large datasets with minimal annotation, effectively bridging the gap between supervised and unsupervised learning paradigms.
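The abstract describes combining triplet loss with an explicit negative-sampling step in feature space. The following is a minimal sketch, not the authors' implementation, assuming PyTorch; a simple cosine-similarity test stands in for the paper's binary classifier when picking negatives from a candidate pool, and the helper name select_negatives and all constants are illustrative assumptions.

import torch
import torch.nn.functional as F

# Sketch only: a cosine-similarity test stands in for the paper's binary
# classifier; select_negatives and all constants are illustrative assumptions.

def select_negatives(anchors: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
    """For each anchor, return the least-similar candidate as its negative."""
    # (batch, pool) cosine-similarity matrix between anchors and candidates.
    sim = F.cosine_similarity(anchors.unsqueeze(1), candidates.unsqueeze(0), dim=-1)
    return candidates[sim.argmin(dim=1)]

def triplet_loss(anchor, positive, negative, margin: float = 0.2) -> torch.Tensor:
    """Standard triplet margin loss: max(d(a, p) - d(a, n) + margin, 0)."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    anchors = torch.randn(8, 128)                     # embeddings of one augmented view
    positives = anchors + 0.05 * torch.randn(8, 128)  # embeddings of a second view
    pool = torch.randn(32, 128)                       # candidate pool for negatives
    negatives = select_negatives(anchors, pool)
    print(triplet_loss(anchors, positives, negatives).item())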
dc.identifier.citation: In: Hernández-García, Ruben; Barrientos, Ricardo J.; Velastin, Sergio A. (eds.). Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications: 27th Iberoamerican Congress, CIARP 2024, Talca, Chile, November 26-29, 2024, Proceedings, Part I. Springer, 2025. pp. 133-147
dc.identifier.doi: https://doi.org/10.1007/978-3-031-76607-7_10
dc.identifier.isbn: 9783031766060
dc.identifier.isbn: 9783031766077
dc.identifier.issn: 0302-9743
dc.identifier.issn: 1611-3349
dc.identifier.orcid: https://orcid.org/0000-0003-3191-3673
dc.identifier.uri: http://hdl.handle.net/20.500.12254/3955
dc.language.iso: en
dc.publisher: Springer
dc.relation.ispartof: Lecture Notes in Computer Science
dc.relation.ispartof: Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications
dc.relation.ispartofseries: Lecture Notes in Computer Science; 15368
dc.rights: Attribution-NonCommercial-ShareAlike 3.0 Chile (CC BY-NC-SA 3.0 CL)
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/cl/
dc.subject: Self-supervised representation learning
dc.subject: Self-supervision
dc.subject: Triplet loss
dc.subject: Negative sampling
dc.subject: Image classification
dc.subject: Computer vision
dc.title: Negative sampling for triplet-based loss: improving representation in self-supervised representation learning
dc.type: book-chapter
Files

Original bundle
Name: Referencia Negative_Sampling.pdf
Size: 172.39 KB
Format: Adobe Portable Document Format
Description: Reference text

License bundle
Name: license.txt
Size: 347 B
Description: Item-specific license agreed upon at submission