Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in AISTATS, 2025
This paper introduces the concept of in-context estimation (ICE), in which pre-trained transformers adapt to new tasks from a small number of in-context examples (prompts), without explicit optimization. It proves that single-layer softmax attention transformers (SATs) optimally solve a subclass of ICE problems, and demonstrates that multi-layer transformers efficiently handle a broader class of ICE problems, outperforming standard approaches. The study shows that transformers achieve near-optimal performance with only a few context examples, rivaling estimators that have perfect knowledge of the latent context.
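The ICE setting described above can be illustrated with a minimal sketch (this is not the paper's code; the function names, the scalar Gaussian observation model, and the temperature value are illustrative assumptions): a single softmax-attention head reads context pairs of noisy observations and latent values, then estimates the latent value for a new query observation as an attention-weighted average of the context latents.

```python
import numpy as np

# Hypothetical illustration of in-context estimation with one softmax
# attention head. Context pairs (y_i, x_i) are noisy observations y_i of
# latent values x_i; the query observation y_q is mapped to an estimate
# of its latent value via attention over the context.

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_estimate(y_context, x_context, y_query, temperature=0.1):
    # Similarity of the query observation to each context observation;
    # closer observations receive larger attention weights.
    scores = -(y_context - y_query) ** 2 / temperature
    weights = softmax(scores)
    # Estimate = attention-weighted average of the context latent values.
    return weights @ x_context

rng = np.random.default_rng(0)
x = rng.normal(size=32)               # latent values (unknown at test time)
y = x + 0.1 * rng.normal(size=32)     # noisy observations forming the prompt
x_query = 0.5
y_query = x_query + 0.1 * rng.normal()
est = attention_estimate(y, x, y_query)
```

With enough context pairs, this attention-weighted average behaves like a kernel estimator of the latent value, which is the intuition behind attention layers acting as in-context estimators.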
Recommended citation: Vishnu Teja Kunde, Vicram Rajagopalan, Chandra Shekhara Kaushik Valmeekam, Krishna Narayanan, Srinivas Shakkottai, Dileep Kalathil, and Jean-Francois Chamberland. "Transformers are Provably Optimal In-context Estimators for Wireless Communications." arXiv preprint, arXiv:2311.00226, 2025.
Download Paper
Published:
This is a description of your talk, which is a markdown file that can be all markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.