This is a sentence-transformers model fine-tuned from intfloat/multilingual-e5-base. It maps learning outcomes, course descriptions, and similar educational texts, as well as ESCO skill descriptions, to a 768-dimensional dense vector space and can be used for semantic search, text classification, clustering, and more.
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
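The listing above is the base transformer followed by mean pooling and L2 normalization. For reference, here is a rough equivalent using transformers directly; this is only a sketch, using the base checkpoint as a stand-in for the fine-tuned weights and placeholder example texts.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Stand-in: the base checkpoint; the fine-tuned model loads the same way from the Hub.
tokenizer = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-base")
model = AutoModel.from_pretrained("intfloat/multilingual-e5-base")

texts = ["query: example learning outcome", "passage: example ESCO skill description"]
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling over non-padding tokens (pooling_mode_mean_tokens=True above)
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# L2 normalization (the Normalize() module above)
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([2, 768])
```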
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'query: Erweitern Sie Ihr Beratungsprofil: Lernen Sie, wie Sie Betriebe bei der Reduzierung von Chemikalien nach aktuellen Umweltvorschriften unterstützen – für mehr Nachhaltigkeit und Sicherheit!',
'passage: Zur Verringerung des Chemikalieneinsatzes beraten: Zur Verringerung des Einsatzes von Chemikalien wie Pestiziden und der Emissionen verschiedener chemischer Stoffe beraten, um deren Auswirkungen auf die Umwelt zu begrenzen und die Risiken für den Menschen zu verringern. Bezüglich geltender Vorschriften auf dem Laufenden bleiben.',
'passage: Sägetechniken: Verschiedene Sägetechniken zur Verwendung manueller und elektrischer Sägen.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, 0.6994, -0.0105],
# [ 0.6994, 1.0000, -0.0066],
# [-0.0105, -0.0066, 1.0000]])
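For semantic search, encode the ESCO skill descriptions once with the "passage: " prefix and retrieve the best matches for a "query: "-prefixed learning outcome. A minimal sketch; the corpus texts and the model id are placeholders.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence_transformers_model_id")

# Placeholder corpus of ESCO skill descriptions (use the "passage: " prefix)
corpus = [
    "passage: Advise on reducing the use of chemicals and their environmental impact.",
    "passage: Sawing techniques: various techniques for using manual and electric saws.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Learning outcome / course description (use the "query: " prefix)
query = "query: Learn how to support companies in reducing chemical use in line with current regulations."
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine-similarity search over the embeddings
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.4f}  {corpus[hit['corpus_id']]}")
```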
Evaluated with InformationRetrievalEvaluator on the learning_outcome_esco_pairs dataset:

| Metric | Value |
|---|---|
| cosine_accuracy@1 | 0.8582 |
| cosine_accuracy@3 | 0.9546 |
| cosine_accuracy@5 | 0.9716 |
| cosine_accuracy@10 | 0.983 |
| cosine_precision@1 | 0.8582 |
| cosine_precision@3 | 0.3963 |
| cosine_precision@5 | 0.2556 |
| cosine_precision@10 | 0.1359 |
| cosine_recall@1 | 0.7239 |
| cosine_recall@3 | 0.8962 |
| cosine_recall@5 | 0.9324 |
| cosine_recall@10 | 0.9639 |
| cosine_ndcg@5 | 0.8942 |
| cosine_ndcg@10 | 0.9054 |
| cosine_mrr@10 | 0.9092 |
| cosine_map@100 | 0.8753 |
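These metrics were computed with InformationRetrievalEvaluator. A minimal sketch of running such an evaluation; the queries, corpus, and relevance judgments below are illustrative placeholders, not the actual evaluation split.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("sentence_transformers_model_id")

# Placeholder data: query id -> text, doc id -> text, query id -> relevant doc ids
queries = {"q1": "query: example learning outcome"}
corpus = {"d1": "passage: matching ESCO skill description", "d2": "passage: unrelated skill"}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="learning_outcome_esco_pairs",
)
results = evaluator(model)
# Recent sentence-transformers versions return a dict with keys such as
# "learning_outcome_esco_pairs_cosine_ndcg@10"
print(results)
```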
Size: 9,562 training samples
Columns: anchor, positive, and negative
Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|---|---|---|---|
| type | string | string | list |
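Each row is an (anchor, positive, negative) triple: a "query: "-prefixed learning outcome, the matching "passage: "-prefixed ESCO skill, and a list of hard-negative skills. An illustrative sketch of this layout as a Hugging Face Dataset; the texts are placeholders.

```python
from datasets import Dataset

# Placeholder rows mirroring the anchor / positive / negative columns above
train_dataset = Dataset.from_dict({
    "anchor": ["query: example learning outcome"],
    "positive": ["passage: matching ESCO skill description"],
    "negative": [["passage: unrelated skill 1", "passage: unrelated skill 2"]],
})
print(train_dataset)  # features: ['anchor', 'positive', 'negative'], num_rows: 1
```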
Loss: CachedMultipleNegativesRankingLoss with these parameters:
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"mini_batch_size": 32,
"gather_across_devices": false
}
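CachedMultipleNegativesRankingLoss implements the GradCache technique from the paper cited below: the in-batch contrastive loss is computed in mini-batches of 32 with cached gradients, so the effective batch size can grow without a matching growth in GPU memory. A sketch of constructing the loss with the parameters above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

model = SentenceTransformer("intfloat/multilingual-e5-base")

loss = CachedMultipleNegativesRankingLoss(
    model=model,
    scale=20.0,              # temperature: similarities are scaled by 20 before the softmax
    similarity_fct=cos_sim,  # cosine similarity between anchor and candidate embeddings
    mini_batch_size=32,      # embeddings are computed 32 at a time to bound memory
)
```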
Non-Default Hyperparameters:
- `per_device_train_batch_size`: 32
- `gradient_accumulation_steps`: 2
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 8
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `save_only_model`: True
- `fp16`: True

All Hyperparameters:
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 8
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: True
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

Training Logs:

| Epoch | Step | Training Loss | cosine_ndcg@10 |
|---|---|---|---|
| 0 | 50 | 4.287 | 0.8414 |
| 1 | 100 | 0.7882 | 0.8823 |
| 1 | 150 | 0.6427 | 0.8970 |
| 2 | 200 | 0.4974 | 0.8960 |
| 3 | 250 | 0.4099 | 0.8987 |
| 3 | 300 | - | 0.9054 |
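Putting the pieces together, the non-default hyperparameters above correspond roughly to the following trainer setup. This is a sketch, not the exact training script: the output_dir, dataset variable, and loss wiring are assumptions.

```python
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss

model = SentenceTransformer("intfloat/multilingual-e5-base")
loss = CachedMultipleNegativesRankingLoss(model, scale=20.0, mini_batch_size=32)

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",  # assumption
    per_device_train_batch_size=32,
    gradient_accumulation_steps=2,
    learning_rate=2e-5,
    weight_decay=0.01,
    num_train_epochs=8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    save_only_model=True,
    fp16=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # the anchor/positive/negative dataset described above
    loss=loss,
)
trainer.train()
```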
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
Base model
intfloat/multilingual-e5-base