---
title: "Math: every flavour we render"
url: https://mdfy.app/y3zQ915R
updated: 2026-05-14T18:15:49.480Z
source: "mdfy.app"
---
# Math: every flavour we render

> A tour of what KaTeX gives us inside the renderer. Useful as a copy-paste reference for new docs.

## Inline math

Euler's identity is $e^{i\pi} + 1 = 0$. The Cauchy-Schwarz inequality is $|\langle u, v \rangle| \le \|u\|\|v\|$ — the same inequality that limits how strongly correlated two embeddings can be when their norms are bounded. We rely on it implicitly every time we threshold a cosine-similarity score.
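The bound is easy to check numerically. A minimal sketch with made-up vectors (the values are illustrative, not real embeddings):

```python
import math

def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

u = [0.3, -1.2, 0.5]
v = [1.1, 0.4, -0.7]

# Cauchy-Schwarz: |<u, v>| <= ||u|| * ||v||
assert abs(dot(u, v)) <= norm(u) * norm(v)

# Which is exactly why cosine similarity always lands in [-1, 1].
cos = dot(u, v) / (norm(u) * norm(v))
assert -1.0 <= cos <= 1.0
```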

A more workaday example: the sigmoid is $\sigma(x) = \frac{1}{1 + e^{-x}}$, and our reranker uses softmax $\text{softmax}(x_i) = \frac{e^{x_i}}{\sum_j e^{x_j}}$ to normalise the logit scores before we threshold.
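Both functions fit in a few lines of plain Python (the logit values below are illustrative, not from the actual reranker):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax(xs):
    # Subtract the max before exponentiating for numerical stability;
    # the result is mathematically unchanged.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

assert sigmoid(0) == 0.5
probs = softmax([2.0, 1.0, 0.1])
assert abs(sum(probs) - 1.0) < 1e-12  # a proper probability distribution
```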

## Block math

The Gaussian integral, foundational for normalising kernels:

$$\int_{-\infty}^{\infty} e^{-x^2}\, dx = \sqrt{\pi}$$
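A quick numeric sanity check of that identity, using a midpoint rule over $[-10, 10]$ (the tails beyond that contribute on the order of $e^{-100}$, i.e. nothing at double precision):

```python
import math

# Approximate the Gaussian integral with 100k midpoint-rule slices.
n = 100_000
a, b = -10.0, 10.0
h = (b - a) / n
total = sum(math.exp(-((a + (i + 0.5) * h) ** 2)) for i in range(n)) * h

assert abs(total - math.sqrt(math.pi)) < 1e-6
```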

The HNSW search complexity, roughly:

$$O(\log N) \text{ expected search, with } O(M \cdot \log N) \text{ insertion cost per vector}$$

where $N$ is the number of indexed vectors and $M$ is the max-connections parameter, which we set to 16 in our pgvector index.
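Back-of-envelope for what $O(\log N)$ buys at different index sizes (the sizes are illustrative, not our actual row counts):

```python
import math

# Expected search work grows only logarithmically with index size:
# going from 1M to 100M vectors adds only ~7 "levels" of work.
for n in (10_000, 1_000_000, 100_000_000):
    print(f"N={n:>11,}  log2(N) = {math.log2(n):.1f}")
```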

## Matrices

$$
A = \begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
\quad
B = \begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix}
$$
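Since $B$ is the identity, $AB = A$ — checkable with a tiny hand-rolled helper (nested lists, no dependencies; the entries of $A$ here are made up):

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[1, 0], [0, 1]]  # the identity, as in the block above

assert matmul(A, B) == A
```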

## Aligned equations

$$
\begin{aligned}
y &= mx + b \\
m &= \frac{y_2 - y_1}{x_2 - x_1} \\
b &= y_1 - m \cdot x_1
\end{aligned}
$$
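The aligned definitions above, as a runnable check with hypothetical points:

```python
def line_through(p1, p2):
    """Slope m and intercept b of the line through two points."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)
    b = y1 - m * x1
    return m, b

# Through (1, 3) and (3, 7): slope 2, intercept 1, i.e. y = 2x + 1.
m, b = line_through((1.0, 3.0), (3.0, 7.0))
assert (m, b) == (2.0, 1.0)
```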

## What we *don't* render (yet)

Custom macros. KaTeX supports `\newcommand`, but we don't wire it through the editor — adding it would require either a per-doc macro registry or a hub-wide one. Filed as "v7 maybe" if anyone asks.
