Inverse probability weighting (IPW) is a general tool in survey sampling and causal inference, used in both Horvitz–Thompson estimators, which normalize by the sample size, and Hájek/self-normalized estimators, which normalize by the sum of the inverse probability weights. In this work we study a family of IPW estimators, first proposed by Trotter and Tukey in the context of Monte Carlo problems, that are normalized by an affine combination of these two terms. We show how selecting an estimator from this family in a data-dependent way, so as to minimize asymptotic variance, leads to an iterative procedure that converges to an estimator with connections to regression control methods. We refer to this estimator as an adaptively normalized estimator. For mean estimation in survey sampling, the asymptotic variance of this estimator is never worse than that of the Horvitz–Thompson or Hájek estimators, and is strictly smaller except in edge cases. Going further, we show that adaptive normalization can be used to propose improvements to the augmented IPW (AIPW) estimator, average treatment effect (ATE) estimators, and policy learning objectives. Appealingly, these proposals preserve both the asymptotic efficiency of AIPW and the regret bounds for policy learning with IPW objectives, and deliver consistent finite-sample improvements in simulations for all three tasks: mean estimation, ATE estimation, and policy learning.
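The two normalizations, and the affine family interpolating between them, can be illustrated with a small importance-sampling sketch. This is our own toy setup, not code from the paper: we estimate the mean of a target distribution `p` from draws from a proposal `q`, and the parameter `lam` and the helper `affine_ipw` are hypothetical names for the affine combination described above; `lam = 0` recovers the Horvitz–Thompson normalization and `lam = 1` the Hájek normalization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (illustrative assumption): estimate mu = E_p[X] = 1 for
# p = N(1, 1) using n draws from the proposal q = N(0, 1).
n = 2_000
x = rng.normal(0.0, 1.0, size=n)              # draws from q
log_w = -0.5 * (x - 1.0) ** 2 + 0.5 * x ** 2  # log p(x) - log q(x)
w = np.exp(log_w)                             # inverse probability weights

# Horvitz–Thompson (unnormalized IPW): divide by the sample size n.
ht = (w * x).sum() / n

# Hájek (self-normalized IPW): divide by the sum of the weights.
hajek = (w * x).sum() / w.sum()

# Trotter–Tukey family: divide by an affine combination of the two
# normalizing terms, indexed by a scalar lam.
def affine_ipw(lam):
    return (w * x).sum() / ((1 - lam) * n + lam * w.sum())

# The endpoints of the family recover the two classical estimators.
assert np.isclose(affine_ipw(0.0), ht)
assert np.isclose(affine_ipw(1.0), hajek)
```

In the paper's procedure, `lam` is then chosen in a data-dependent way to minimize the estimated asymptotic variance, rather than being fixed at either endpoint.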