Transformers
2025-01-20
>>> from transformers import pipeline
>>> summarizer = pipeline(task="summarization")
>>> summarizer(
... "In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or conZZZolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieZZZe a new state of the art. In the former task our best model outperforms eZZZen all preZZZiously reported ensembles."
... )
[{'summary_text': ' The Transformer is the first sequence transduction model based entirely on attention . It replaces the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention . For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers .'}]
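Calling pipeline() without a model lets the library pick a default checkpoint, which can change between releases. A minimal sketch of pinning the checkpoint explicitly and bounding the summary length; the checkpoint name and the length values here are illustrative assumptions, not part of the example above:

>>> from transformers import pipeline
>>> # Pin a specific checkpoint for reproducibility (name assumed for illustration)
>>> summarizer = pipeline(task="summarization", model="sshleifer/distilbart-cnn-12-6")
>>> text = "In this work, we presented the Transformer, the first sequence transduction model based entirely on attention."
>>> # max_length/min_length bound the generated summary, measured in tokens
>>> summarizer(text, max_length=40, min_length=10)

Pinning the model also means the same weights are downloaded on every machine, so the output stays stable across environments.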