$$
\begin{aligned}
& R_{ij} = x_i - x_j, \quad r_{ij} = |R_{ij}|, \quad \hat{R}_{ij} = R_{ij} / r_{ij} \\
& \text{and} \quad \tilde{E}(x) = \sum_{i \neq j} \phi(r_{ij}),
\end{aligned}
$$
then
$$
\begin{aligned}
\langle H(x) u, u \rangle &= \sum_{i \neq j} u_{ij}^T h_{ij} u_{ij}, \quad \text{where} \\
h_{ij} &= \hat{R}_{ij} \otimes \hat{R}_{ij} \, \phi''(r_{ij}) + (I - \hat{R}_{ij} \otimes \hat{R}_{ij}) \frac{\phi'(r_{ij})}{r_{ij}}.
\end{aligned}
$$
We then specify the preconditioner by making each "local" Hessian $h_{ij}$ positive, rather than the global Hessian $H$; that is, we define
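As a concrete check of the block formula, the sketch below uses a hypothetical harmonic pair potential $\phi(r) = (r-1)^2$ (an assumption for illustration, not a potential from the text) and verifies that $h_{ij} = \hat{R}\otimes\hat{R}\,\phi''(r) + (I - \hat{R}\otimes\hat{R})\,\phi'(r)/r$ agrees with a finite-difference Hessian of $R \mapsto \phi(|R|)$:

```python
import numpy as np

# Hypothetical harmonic pair potential phi(r) = (r - 1)^2 (illustration only):
phi   = lambda r: (r - 1.0) ** 2
dphi  = lambda r: 2.0 * (r - 1.0)      # phi'(r)
ddphi = lambda r: 2.0                  # phi''(r)

def pair_hessian_block(R):
    """3x3 block h = Rhat (x) Rhat phi''(r) + (I - Rhat (x) Rhat) phi'(r)/r."""
    r = np.linalg.norm(R)
    Rhat = R / r
    Proj = np.outer(Rhat, Rhat)        # projector onto the bond direction
    return Proj * ddphi(r) + (np.eye(3) - Proj) * (dphi(r) / r)

# Compare against a central finite-difference Hessian of R -> phi(|R|).
R = np.array([0.8, 0.3, -0.5])
h = 1e-5
H_fd = np.zeros((3, 3))
for a in range(3):
    for b in range(3):
        ea, eb = np.eye(3)[a], np.eye(3)[b]
        H_fd[a, b] = (phi(np.linalg.norm(R + h*ea + h*eb))
                      - phi(np.linalg.norm(R + h*ea - h*eb))
                      - phi(np.linalg.norm(R - h*ea + h*eb))
                      + phi(np.linalg.norm(R - h*ea - h*eb))) / (4 * h * h)

print(np.max(np.abs(pair_hessian_block(R) - H_fd)))  # agreement up to FD error
```

The projector $\hat{R}\otimes\hat{R}$ splits the curvature into a bond-stretch part (governed by $\phi''$) and a bond-bending part (governed by $\phi'/r$), which is exactly what the decomposition of $h_{ij}$ expresses.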
$$
\begin{aligned}
\langle P(x) u, u \rangle &= \sum_{i \neq j} u_{ij}^T p_{ij} u_{ij}, \quad \text{where} \\
p_{ij} &= \hat{R}_{ij} \otimes \hat{R}_{ij} \, |\phi''(r_{ij})| + (I - \hat{R}_{ij} \otimes \hat{R}_{ij}) \left|\frac{\phi'(r_{ij})}{r_{ij}}\right|.
\end{aligned}
$$
This guarantees that $P(x)$ will be positive, hence we can set $P_n = P(x_n)$ in the preconditioned steepest descent method.
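A minimal assembly sketch, again with the hypothetical harmonic $\phi(r) = (r-1)^2$ and assuming the usual convention $u_{ij} = u_i - u_j$ (relative displacement, not stated explicitly in the excerpt): each pair then contributes $+p_{ij}$ to the $(i,i)$ and $(j,j)$ blocks of $P$ and $-p_{ij}$ to the $(i,j)$ and $(j,i)$ blocks, and the absolute values make every $p_{ij}$, hence $P(x)$, positive semidefinite (translations span the kernel):

```python
import numpy as np

phi   = lambda r: (r - 1.0) ** 2   # hypothetical pair potential (illustration)
dphi  = lambda r: 2.0 * (r - 1.0)
ddphi = lambda r: 2.0

def precond_block(R):
    """p_ij with |phi''| and |phi'/r| -- positive semidefinite by construction."""
    r = np.linalg.norm(R)
    Rhat = R / r
    Proj = np.outer(Rhat, Rhat)
    return Proj * abs(ddphi(r)) + (np.eye(3) - Proj) * abs(dphi(r) / r)

def assemble_P(x):
    """Assemble the 3N x 3N preconditioner from pair blocks.

    With u_ij = u_i - u_j, each pair (counted once here; the text's sum over
    i != j counts both orderings, a harmless factor of 2) contributes
    +p_ij on the diagonal blocks and -p_ij on the off-diagonal blocks.
    """
    N = len(x)
    P = np.zeros((3 * N, 3 * N))
    for i in range(N):
        for j in range(i + 1, N):
            p = precond_block(x[i] - x[j])
            I3, J3 = slice(3 * i, 3 * i + 3), slice(3 * j, 3 * j + 3)
            P[I3, I3] += p
            P[J3, J3] += p
            P[I3, J3] -= p
            P[J3, I3] -= p
    return P

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 3))        # 5 particles in 3D
P = assemble_P(x)
print(np.linalg.eigvalsh(P).min())     # nonnegative up to roundoff
```

In a preconditioned steepest descent step one would then solve $P_n s_n = -\nabla E(x_n)$ (e.g. with a sparse Cholesky or conjugate-gradient solver) and update $x_{n+1} = x_n + \alpha_n s_n$.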
This construction can be readily generalised to a wide range of bond types and optimisation schemes; see [75 , 74 , 59 ] for the details and prototype applications.