Minimizing deviations in hypergraph matrices

Defining good rows of the incidence matrix provides a basis for establishing the stability of the hypergraph even at extreme noise levels.

To begin, let us define the inequality that determines good rows of the incidence matrix. A row $i$ of the matrix $\gamma(\Upsilon)$ will in our case be called good if for each $j = 1, \ldots, r$ the following inequality holds:

$$\epsilon_{ij} \leq \frac{1}{r} - \frac{1}{r^2} - \frac{2}{r \ln r}$$
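
As a quick illustration, here is a minimal Python sketch of this test on a row of deviations. The function names `good_row_threshold` and `is_good_row`, and the sample data, are assumptions introduced purely for illustration, not part of the original construction.

```python
import math

def good_row_threshold(r: int) -> float:
    """Threshold 1/r - 1/r^2 - 2/(r ln r) from the good-row definition."""
    return 1.0 / r - 1.0 / r**2 - 2.0 / (r * math.log(r))

def is_good_row(eps_row, r: int) -> bool:
    """A row is good if every deviation eps_ij stays below the threshold."""
    t = good_row_threshold(r)
    return all(e <= t for e in eps_row)

# Example: a row of small alternating deviations for r = 100 (sample data).
r = 100
eps_row = [(-1) ** j * 1e-4 for j in range(r)]
print(is_good_row(eps_row, r))  # True: all deviations are below the threshold
```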

Using this fact and the previously determined basis, we obtain:

$$k_i(\gamma(\Upsilon)) \leq 6 + \frac{37}{r} + O\!\left(\frac{1}{r^2}\right) \sum_{j=1}^r \epsilon_{ij}^2$$

Next, applying the upper bound, we obtain:

$$k_i(\gamma(\Upsilon)) \leq r \ln r + \frac{30}{6} \ln r - \frac{259 \ln r}{36} + \frac{r \ln r}{r} + O\!\left(\frac{1}{r^2}\right) \sum_{j=1}^r \epsilon_{ij}^2$$
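
For a rough numerical comparison of the two bounds, here is a minimal Python sketch. Replacing the $O(1/r^2)$ terms by $C/r^2$ with $C = 1$ is an assumption made purely for illustration, and the function names and sample data are hypothetical.

```python
import math

C = 1.0  # assumed constant standing in for the O(1/r^2) terms; illustration only

def k_bound_first(eps_row, r):
    """First bound: 6 + 37/r + (C/r^2) * sum of eps_ij^2."""
    s = sum(e * e for e in eps_row)
    return 6 + 37 / r + (C / r**2) * s

def k_bound_second(eps_row, r):
    """Second bound: r ln r + (30/6) ln r - (259/36) ln r + ln r + (C/r^2) * sum."""
    s = sum(e * e for e in eps_row)
    lr = math.log(r)
    return r * lr + (30 / 6) * lr - (259 / 36) * lr + lr + (C / r**2) * s

r = 100
eps_row = [1e-4] * r
print(k_bound_first(eps_row, r), k_bound_second(eps_row, r))
```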

With this established, consider the case of good rows with $\epsilon_{ij} < 0$:

$$\left(\frac{1}{r^2} + \epsilon_{ij}\right) \ln(1 + r^2 \epsilon_{ij}) \geq \epsilon_{ij} + \frac{1}{2r^2} (r^2 \epsilon_{ij})^2 = \epsilon_{ij} + \frac{r^2 \epsilon_{ij}^2}{2}$$
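
In terms of $x = r^2 \epsilon_{ij}$, with $x \in (-1, 0)$ so that the logarithm is defined, this is the elementary inequality $(1+x)\ln(1+x) \geq x + \frac{x^2}{2}$. The following sketch spot-checks it numerically; the grid is an arbitrary choice:

```python
import math

# Spot-check (1 + x) * ln(1 + x) >= x + x**2 / 2 for x in (-1, 0),
# which is the substitution x = r^2 * eps_ij in the bound above.
for k in range(1, 1000):
    x = -k / 1001.0          # samples in (-0.999, 0)
    lhs = (1 + x) * math.log(1 + x)
    rhs = x + x * x / 2
    assert lhs >= rhs, (x, lhs, rhs)
print("inequality holds on the sampled grid")
```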

If $\epsilon_{ij} > 0$ but $\epsilon_{ij} \leq \frac{1}{r \ln r} - \frac{1}{r^2}$, we use an estimate of the function $\varphi(x) = (1 + x) \ln (1 + x)$. It is known that $\varphi(x) > x + \frac{x^2}{2(1 + x/3)}$ for $x > 0$. Setting $x = r^2 \epsilon_{ij}$, we get the following estimate:

$$\left(\frac{1}{r^2} + \epsilon_{ij}\right) \ln(1 + r^2 \epsilon_{ij}) \geq \epsilon_{ij} + \frac{r^2 \epsilon_{ij}^2}{2\left(1 + \frac{r^2 \epsilon_{ij}}{3}\right)}$$
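
The bound $\varphi(x) > x + \frac{x^2}{2(1 + x/3)}$ is the standard comparison underlying Bernstein-type inequalities; here is a quick numerical spot-check, with the grid chosen arbitrarily:

```python
import math

# Spot-check phi(x) = (1 + x) * ln(1 + x) > x + x**2 / (2 * (1 + x / 3))
# for a range of positive x values.
for k in range(1, 2000):
    x = k / 100.0            # samples in (0, 20]
    phi = (1 + x) * math.log(1 + x)
    bound = x + x * x / (2 * (1 + x / 3))
    assert phi > bound, (x, phi, bound)
print("phi(x) exceeds the Bernstein-type bound on the sampled grid")
```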

Estimating the denominator: since here $\epsilon_{ij} \leq \frac{1}{r \ln r}$, we have $1 + \frac{r^2 \epsilon_{ij}}{3} \leq 1 + \frac{r}{3 \ln r} \leq 1 + r \ln r$, so:

$$\left(\frac{1}{r^2} + \epsilon_{ij}\right) \ln(1 + r^2 \epsilon_{ij}) \geq \epsilon_{ij} + \frac{r^2 \epsilon_{ij}^2}{2(1 + r \ln r)}$$
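
As a concrete sanity check of this chained estimate on the range $0 < \epsilon_{ij} \leq \frac{1}{r \ln r} - \frac{1}{r^2}$, the sketch below compares both sides for one sample value of $r$; the choice of $r$ and the grid are arbitrary:

```python
import math

r = 1000
upper = 1 / (r * math.log(r)) - 1 / r**2   # right end of the eps range
for k in range(1, 100):
    eps = k / 100.0 * upper                # samples in (0, upper]
    lhs = (1 / r**2 + eps) * math.log(1 + r**2 * eps)
    rhs = eps + r**2 * eps**2 / (2 * (1 + r * math.log(r)))
    assert lhs >= rhs, (eps, lhs, rhs)
print("chained lower bound holds on the sampled grid")
```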

Further, if $\epsilon_{ij} \in \left( \frac{1}{r \ln r} - \frac{1}{r^2},\; \frac{1}{r} - \frac{2 \ln \ln r}{\ln r} \right)$, we estimate the summand as follows:

$$\left(\frac{1}{r^2} + \epsilon_{ij}\right) \ln(1 + r^2 \epsilon_{ij}) \geq \left(\frac{1}{r^2} + \epsilon_{ij}\right) \ln \left( \frac{r \ln r}{r \ln r - r} \right)$$

Using the inequality $1 > r \epsilon_{ij} \left(1 - \frac{1}{r} - \frac{2 \ln \ln r}{\ln r}\right) - 1$:

$$\left(\frac{1}{r^2} + \epsilon_{ij}\right) \ln(1 + r^2 \epsilon_{ij}) \geq \epsilon_{ij} + r \ln r \, \epsilon_{ij} \left(1 + \frac{2 \ln \ln r}{\ln r} + O\!\left(\left(\frac{\ln \ln r}{\ln r}\right)^2\right) \left(1 - \frac{\ln \ln r}{\ln r}\right)\right)$$

Thus, for a good row with sufficiently large $r$ we get:

$$h_i(\gamma(\Upsilon)) - k_i(\gamma(\Upsilon)) \geq \frac{r^2}{4} \sum_{j:\, \epsilon_{ij} < 0} \epsilon_{ij}^2 + \frac{r}{2} \sum_{j:\, \epsilon_{ij} > 0} \epsilon_{ij}^2$$

The rows of the matrix remain good even under a high level of noise $\epsilon_{ij}$ (after several operations), and a good row indicates that the hypergraph is stable.

The inequality also shows that for good rows of the matrix, the difference between $h_i(\gamma(\Upsilon))$ and $k_i(\gamma(\Upsilon))$ (at the time of recalculation) is, for large $r$, bounded from below by the weighted sum of squared noise terms $\frac{r^2}{4} \sum_{j : \epsilon_{ij} < 0} \epsilon_{ij}^2 + \frac{r}{2} \sum_{j : \epsilon_{ij} > 0} \epsilon_{ij}^2$.
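
A short sketch of how this lower bound might be computed for one row; the sign-based split of the noise terms mirrors the inequality above, while the function name and sample data are assumptions for illustration:

```python
def deviation_lower_bound(eps_row, r):
    """Lower bound (r^2/4) * sum_{eps<0} eps^2 + (r/2) * sum_{eps>0} eps^2
    on h_i(gamma(Upsilon)) - k_i(gamma(Upsilon)) for a good row."""
    neg = sum(e * e for e in eps_row if e < 0)
    pos = sum(e * e for e in eps_row if e > 0)
    return (r**2 / 4) * neg + (r / 2) * pos

# Example: alternating +/- 1e-4 deviations; the negative terms dominate
# the bound because of the r^2/4 weight.
r = 100
eps_row = [(-1) ** j * 1e-4 for j in range(r)]
print(deviation_lower_bound(eps_row, r))
```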

A large value of $r$ thus allows the hypergraph system to remain stable, as shown above.
