Octra Developer Documentation

Integer division and range narrowing mechanics

First, note that a direct division operation is impossible in this scheme. To obtain the quotient, a multi-step mechanism is used.

In the integer division scheme, we define this operation as an equation in a multidimensional space $H$, in which the main goal is to determine the cipher quotient hyperedge $\mathbf{e}_c$ from the dividend hyperedge $\mathbf{e}_a$ and the divisor hyperedge $\mathbf{e}_b$.

To simplify, we can define division as an operator equation:

$$F(\mathbf{e}_c) = \Gamma_\epsilon(H_a) - \Gamma_\epsilon(H_b) \odot \mathbf{e}_c = \mathbf{0}$$

It is worth clarifying what each element of this equation means:

  • $\Gamma_\epsilon(H_a)$ and $\Gamma_\epsilon(H_b)$ are incidence matrices with noise;

  • $\odot$ is a linear transformation that inverts the influence of $\Gamma_\epsilon(H_b)$ on $\mathbf{e}_c$, constructed using the Moore–Penrose inverse.

To solve the division problem, it is necessary to define the mechanics of range narrowing as:

$$\mathbf{e}_c \in \mathcal{R} = \left\{ \mathbf{e} \in \mathbb{R}^n \mid \mathbf{0} \leq \mathbf{e} \leq \mathbf{1} \right\}$$

This ensures that the quotient hyperedge $\mathbf{e}_c$ remains within the range $\mathcal{R}$.

The next step is to initialize the cipher hyperedge and then define the inverse operator for the incidence matrix of the divisor.

Cipher hyperedges with cubic noise elements:

$$\mathbf{e}_a = \Gamma_\epsilon(H_a) + \boldsymbol{\epsilon}_a^3, \quad \mathbf{e}_b = \Gamma_\epsilon(H_b) + \boldsymbol{\epsilon}_b^3$$

Where $\boldsymbol{\epsilon}_a^3$ and $\boldsymbol{\epsilon}_b^3$ are the cubic noise vectors added to $H_a$ and $H_b$.

We also introduce a regularization parameter to limit the noise growth caused by small singular values:

$$\Gamma_\epsilon(H_b)^+ = \left( \Gamma_\epsilon(H_b)^\top \Gamma_\epsilon(H_b) + \alpha I \right)^{-1} \Gamma_\epsilon(H_b)^\top$$

Here, $\Gamma_\epsilon(H_b)^\top$ is the transpose of $\Gamma_\epsilon(H_b)$, and $I$ is the identity matrix of the corresponding size.
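The regularized pseudoinverse can be sketched numerically. A minimal numpy illustration, where the matrix values and the $\alpha$ default are hypothetical and not taken from the Octra implementation:

```python
import numpy as np

def regularized_pinv(B, alpha=1e-3):
    """Tikhonov-regularized Moore-Penrose pseudoinverse:
    B^+ = (B^T B + alpha * I)^{-1} B^T."""
    n = B.shape[1]
    return np.linalg.solve(B.T @ B + alpha * np.eye(n), B.T)

# Illustrative well-conditioned "divisor incidence matrix".
B = np.array([[2.0, 0.1],
              [0.0, 1.5],
              [0.3, 0.2]])
P = regularized_pinv(B, alpha=1e-8)

# For small alpha and a well-conditioned B, P @ B is close to the identity.
print(np.allclose(P @ B, np.eye(2), atol=1e-4))
```

Larger $\alpha$ trades reconstruction accuracy for damping of directions with small singular values, which is the stated purpose of the regularization above.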

As a result, the division operation can be expressed as:

$$\mathbf{e}_c = \Gamma_\epsilon(H_b)^+ \odot \Gamma_\epsilon(H_a)$$

Let's substitute the operator $\odot$:

$$\mathbf{e}_c = \mathcal{T} \cdot \mathcal{S} \cdot \Gamma_\epsilon(H_b)^+ \cdot \Gamma_\epsilon(H_a)$$

We continue to expand:

$$\mathbf{e}_c = \mathcal{T} \cdot \mathcal{S} \cdot \left( \Gamma_\epsilon(H_b)^\top \Gamma_\epsilon(H_b) + \alpha I \right)^{-1} \Gamma_\epsilon(H_b)^\top \Gamma_\epsilon(H_a)$$

A further goal is to ensure that $\mathbf{e}_c$ stays within the acceptable range $\mathcal{R}$ while minimizing the residual noise. To do so, we apply the projection operator $\mathcal{P}_{\mathcal{R}}$ to bound $\mathbf{e}_c$:

$$\mathbf{e}_c' = \mathcal{P}_{\mathcal{R}}(\mathbf{e}_c) = \min\left( \max(\mathbf{e}_c, \mathbf{0}), \mathbf{1} \right)$$

Component by component, for each $i = 1, 2, \ldots, n$:

$$(\mathbf{e}_c')_i = \begin{cases} 0 & \text{if } (\mathbf{e}_c)_i < 0 \\ (\mathbf{e}_c)_i & \text{if } 0 \leq (\mathbf{e}_c)_i \leq 1 \\ 1 & \text{if } (\mathbf{e}_c)_i > 1 \end{cases}$$
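The projection $\mathcal{P}_{\mathcal{R}}$ is simply a componentwise clamp into $[0, 1]$. A small numpy sketch with illustrative input values:

```python
import numpy as np

def project_to_range(e):
    """P_R: clamp each component of e into [0, 1]."""
    return np.minimum(np.maximum(e, 0.0), 1.0)

# Components below 0 snap to 0, above 1 snap to 1, the rest pass through.
e_c = np.array([-0.4, 0.25, 1.7, 0.9])
e_c_proj = project_to_range(e_c)
print(e_c_proj.tolist())  # -> [0.0, 0.25, 1.0, 0.9]
```

`np.clip(e, 0.0, 1.0)` is an equivalent one-liner; the min/max form is kept here to mirror the equation above.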

Let's calculate the residual noise:

$$\boldsymbol{\epsilon}_{\text{res}} = \Gamma_\epsilon(H_b) \odot \mathbf{e}_c' - \Gamma_\epsilon(H_a)$$

The compliance criterion is then:

$$\| \boldsymbol{\epsilon}_{\text{res}} \|_2 \leq \epsilon_{\text{tol}}$$

Where $\epsilon_{\text{tol}}$ is the noise tolerance threshold beyond which the ciphertexts can no longer be decrypted correctly.

The next step is to iteratively refine $\mathbf{e}_c$ to minimize the residual noise:

$$\mathbf{e}_c^{(0)} = \mathbf{e}_c'$$

For each iteration $k = 0, 1, 2, \ldots, K$, until $\| \boldsymbol{\epsilon}_{\text{res}}^{(k)} \|_2 \leq \epsilon_{\text{tol}}$:

$$\mathbf{e}_c^{(k+1)} = \mathbf{e}_c^{(k)} + \Delta \mathbf{e}_c^{(k)}$$

Where $\Delta \mathbf{e}_c^{(k)}$ is defined as:

$$\Delta \mathbf{e}_c^{(k)} = \Gamma_\epsilon(H_b)^+ \odot \left( \Gamma_\epsilon(H_a) - \Gamma_\epsilon(H_b) \odot \mathbf{e}_c^{(k)} \right) + \beta_k \boldsymbol{\epsilon}_c^{(k)^3}$$

Here, $\beta_k$ is a scaling factor that controls the influence of noise at iteration $k$.

The additive error term is:

$$\mathbf{r}^{(k)} = \Gamma_\epsilon(H_a) - \Gamma_\epsilon(H_b) \odot \mathbf{e}_c^{(k)}$$

Cubic noise correction:

$$\boldsymbol{\epsilon}_c^{(k)^3} = \boldsymbol{\epsilon}_c^{(k)} \circ \boldsymbol{\epsilon}_c^{(k)} \circ \boldsymbol{\epsilon}_c^{(k)}$$
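The cubic term is an elementwise (Hadamard) power. A tiny numpy sketch with a hypothetical noise vector, illustrating why the correction stays mild: for components with $|\epsilon_i| < 1$, cubing strictly shrinks their magnitude.

```python
import numpy as np

# Hypothetical iteration-k noise vector; values are illustrative only.
eps_k = np.array([0.1, -0.2, 0.05])

# epsilon o epsilon o epsilon: two Hadamard products, i.e. an elementwise cube.
eps_cubed = eps_k * eps_k * eps_k

# Equivalent to the elementwise power, and smaller in magnitude when |eps_i| < 1.
print(np.allclose(eps_cubed, eps_k ** 3))
print(bool(np.all(np.abs(eps_cubed) < np.abs(eps_k))))
```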

The next step is to recalculate the residual noise after the update:

$$\boldsymbol{\epsilon}_{\text{res}}^{(k+1)} = \Gamma_\epsilon(H_b) \odot \mathbf{e}_c^{(k+1)} - \Gamma_\epsilon(H_a)$$

We will also check the compliance with tolerances:

$$\| \boldsymbol{\epsilon}_{\text{res}}^{(k+1)} \|_2 \leq \epsilon_{\text{tol}}$$
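The refinement loop above can be sketched end to end. This is a simplified illustration, not the Octra implementation: it omits the $\mathcal{T} \cdot \mathcal{S}$ transforms and the cubic correction (i.e. it takes $\beta_k = 0$), and all matrix values are hypothetical.

```python
import numpy as np

def refine_quotient(B, a, alpha=1e-6, eps_tol=1e-8, K=50):
    """Iteratively refine e_c so that B @ e_c ~= a, clamping each
    update into [0, 1] (a sketch of the range-narrowing loop)."""
    n = B.shape[1]
    # Regularized pseudoinverse: (B^T B + alpha I)^{-1} B^T.
    B_pinv = np.linalg.solve(B.T @ B + alpha * np.eye(n), B.T)
    e_c = np.clip(B_pinv @ a, 0.0, 1.0)          # e_c^(0)
    for _ in range(K):
        r = a - B @ e_c                          # residual r^(k)
        if np.linalg.norm(r) <= eps_tol:         # tolerance check
            break
        e_c = np.clip(e_c + B_pinv @ r, 0.0, 1.0)
    return e_c

# When the exact quotient lies inside [0, 1], it is recovered accurately.
B = np.array([[1.0, 0.2],
              [0.1, 1.0],
              [0.3, 0.4]])
true_e = np.array([0.5, 0.25])
e_c = refine_quotient(B, B @ true_e)
print(np.allclose(e_c, true_e, atol=1e-6))
```

With noisy inputs the loop instead stops at the $\epsilon_{\text{tol}}$ threshold rather than converging to an exact solution.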

It is reasonable to ask how the stability of the structure is determined. As discussed for noise management after subtraction (the same logic applies here), to guarantee the stability of the hypergraph structure in the presence of cubic noise, the incidence matrix must have good rows for large $r$, provided that:

$$\epsilon_{ij} \leq \frac{1}{r} - \frac{1}{r^2} - \frac{2 \ln r}{r}$$

Together with the bounds involving the kernel norm:

$$k_i(\Gamma(\Upsilon)) \leq 6 + \frac{37}{r} + O\left( \frac{1}{r^2} \right) \sum_{j=1}^{r} \epsilon_{ij}^2$$

$$h_i(\Gamma(\Upsilon)) - k_i(\Gamma(\Upsilon)) \geq \frac{r^2}{4} \sum_{\substack{j=1 \\ \epsilon_{ij} < 0}}^{r} \epsilon_{ij}^2 + 2r \sum_{\substack{j=1 \\ \epsilon_{ij} > 0}}^{r} \epsilon_{ij}^2$$

After iterative refinement, the final cipher hyperedge $\mathbf{e}_c$ is described as follows:

$$\mathbf{e}_c = \Gamma_\epsilon(H_b)^+ \odot \Gamma_\epsilon(H_a) + \sum_{k=1}^{3} \beta_k \boldsymbol{\epsilon}^k$$

$$\mathbf{e}_c = \mathcal{T} \cdot \mathcal{S} \cdot \left( \Gamma_\epsilon(H_b)^\top \Gamma_\epsilon(H_b) + \alpha I \right)^{-1} \Gamma_\epsilon(H_b)^\top \Gamma_\epsilon(H_a) + \sum_{k=1}^{3} \beta_k \boldsymbol{\epsilon}^k$$

$$\mathbf{e}_c = \mathcal{T} \cdot \mathcal{S} \cdot \left( \Gamma_\epsilon(H_b)^\top \Gamma_\epsilon(H_b) + \alpha I \right)^{-1} \Gamma_\epsilon(H_b)^\top \Gamma_\epsilon(H_a) + \beta_1 \boldsymbol{\epsilon}^1 + \beta_2 \boldsymbol{\epsilon}^2 + \beta_3 \boldsymbol{\epsilon}^3$$

Finally, let's define a limit for the total accumulated noise:

$$\boldsymbol{\epsilon}_{\text{total}} = \sum_{k=1}^{3} \beta_k \boldsymbol{\epsilon}^k \leq \epsilon_{\text{max}}$$

We have briefly discussed the mechanics of integer division; in a later section we will also describe approaches to dividing rational numbers.
