As generic machine translation (MT) quality has improved, the need for targeted benchmarks that explore fine-grained aspects of quality has increased (Freitag et al., 2021; Isabelle et al., 2017). In particular, gender accuracy in translation (Choubey et al., 2021; Saunders and Byrne, 2020) has implications for output fluency, translation accuracy, and ethics. In this paper, we introduce MT-GenEval, a benchmark for evaluating gender accuracy in translation from English into eight widely spoken languages. MT-GenEval complements existing benchmarks by providing realistic, gender-balanced, counterfactual data in eight language pairs where the gender of individuals is unambiguous in the input segment, including multi-sentence segments requiring inter-sentential gender agreement. Our data and code are publicly available under a CC BY-SA 3.0 license.