tsb ships numpy/scipy-style numeric utility functions, all implemented
from scratch with no external dependencies:
digitize, histogram, linspace, arange,
percentileOfScore, zscore, minMaxNormalize,
coefficientOfVariation.
Map each value to the index of the bin it falls into. Mirrors numpy.digitize.
Indices are 0-based; values below the first edge return -1.
import { digitize, seriesDigitize, Series } from "tsb";
// Bin scores against edges [33, 66, 100]: [33,66) → 0, [66,100) → 1, ≥ 100 → 2; below 33 → -1
const scores = [15, 45, 70, 33, 100];
const edges = [33, 66, 100];
const bins = digitize(scores, edges);
// → [-1, 0, 1, 0, 2]
// 15 < 33       → -1 (below the first edge)
// 45 ∈ [33,66)  → bin 0
// 70 ∈ [66,100) → bin 1
// 33 ∈ [33,66)  → bin 0 (left edge inclusive, right=false default)
// 100 ≥ 100     → bin 2 (at or past the last edge)
// Series version preserves the index
const s = new Series({ data: [15, 45, 70], index: ["Alice","Bob","Carol"] });
seriesDigitize(s, [33, 66, 100]);
// Series: Alice → -1, Bob → 0, Carol → 1
Count how many values fall in each bin. Mirrors numpy.histogram.
import { histogram } from "tsb";
const data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
// Default: 10 equal-width bins
const { counts, binEdges } = histogram(data);
// Custom: 5 bins, density normalised
const { counts: d, binEdges: e } = histogram(data, { bins: 5, density: true });
// Explicit edges
histogram(data, { binEdges: [1, 4, 7, 10] });
// counts: [ 3, 3, 4 ]
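Equal-width binning itself is straightforward. The hypothetical helper below (not tsb's source) shows the usual approach: derive k+1 edges from the data range, floor-divide each value into a bin, and clamp the maximum into the last bin so its right edge is inclusive:

```typescript
// Sketch of equal-width histogram binning (illustrative, not tsb's
// internals). With k bins over [min, max], bin width is (max - min) / k.
function histogramSketch(data: number[], bins = 10) {
  const lo = Math.min(...data);
  const hi = Math.max(...data);
  const width = (hi - lo) / bins;
  const binEdges = Array.from({ length: bins + 1 }, (_, i) => lo + i * width);
  const counts = new Array(bins).fill(0);
  for (const x of data) {
    // Clamp so x === max lands in the last bin, not one past the end
    const i = Math.min(Math.floor((x - lo) / width), bins - 1);
    counts[i]++;
  }
  return { counts, binEdges };
}
```

Density normalisation, as in the `density: true` option above, would then divide each count by `(data.length * width)` so the histogram integrates to 1.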
Generate evenly spaced sequences, mirroring numpy.linspace and numpy.arange.
import { linspace, arange } from "tsb";
// 5 values from 0 to 1 (inclusive)
linspace(0, 1, 5);
// → [0, 0.25, 0.5, 0.75, 1]
// 0..4
arange(5);
// → [0, 1, 2, 3, 4]
// From 2 to 10, step 2
arange(2, 10, 2);
// → [2, 4, 6, 8]
// Descending
arange(5, 0, -1);
// → [5, 4, 3, 2, 1]
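Both generators can be sketched in a few lines (illustrative names, not tsb's source): linspace divides [start, stop] into num − 1 equal steps, endpoint inclusive, while arange mirrors numpy's one-, two-, and three-argument call shapes:

```typescript
// linspace: num points from start to stop, both endpoints included
function linspaceSketch(start: number, stop: number, num = 50): number[] {
  if (num === 1) return [start];
  const step = (stop - start) / (num - 1);
  return Array.from({ length: num }, (_, i) => start + i * step);
}

// arange: half-open [start, stop) with a signed step
function arangeSketch(start: number, stop?: number, step = 1): number[] {
  if (stop === undefined) {
    stop = start; // arangeSketch(5) means arangeSketch(0, 5, 1)
    start = 0;
  }
  const out: number[] = [];
  for (let x = start; step > 0 ? x < stop : x > stop; x += step) out.push(x);
  return out;
}
```

Note the asymmetry the examples above rely on: linspace includes its endpoint, arange excludes it.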
Compute what percentile a given score falls at within a dataset.
Mirrors scipy.stats.percentileofscore.
import { percentileOfScore } from "tsb";
const grades = [55, 60, 70, 75, 80, 85, 90, 95];
// What percentile is a score of 75?
percentileOfScore(grades, 75); // 50 ("rank", the default)
percentileOfScore(grades, 75, "weak"); // 50 (≤ 75: 4/8 = 50%)
percentileOfScore(grades, 75, "strict"); // 37.5 (< 75: 3/8 = 37.5%)
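The three kinds differ only in how they count. A sketch following scipy.stats.percentileofscore's definitions (illustrative helper, not tsb's source): strict is the share of values below the score, weak the share at or below it, and rank adds one for a score present in the data and halves the combined count, so a tied score splits the difference:

```typescript
// Sketch of the three counting modes, mirroring scipy's definitions.
function percentileOfScoreSketch(
  data: number[],
  score: number,
  kind: "rank" | "weak" | "strict" = "rank",
): number {
  const n = data.length;
  const strict = data.filter((x) => x < score).length; // values <  score
  const weak = data.filter((x) => x <= score).length;  // values <= score
  if (kind === "weak") return (100 * weak) / n;
  if (kind === "strict") return (100 * strict) / n;
  // "rank": count the score itself once if present, then average
  const present = weak > strict ? 1 : 0;
  return ((strict + weak + present) * 50) / n;
}
```

For the grades above and a score of 75: strict = 3, weak = 4, so rank gives (3 + 4 + 1) × 50 / 8 = 50.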
Transform values to zero mean and unit variance. Mirrors scipy.stats.zscore.
Missing values are propagated; zero-variance data returns all NaN.
import { zscore, Series } from "tsb";
const s = new Series({ data: [2, 4, 4, 4, 5, 5, 7, 9], name: "values" });
const z = zscore(s);
// z.values → [-1.5, -0.5, -0.5, -0.5, 0, 0, 1, 2] (population std, ddof=0, the scipy default)
// With sample std (ddof=1) instead
const zSample = zscore(s, { ddof: 1 });
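The transform itself is just z = (x − mean) / std. A sketch over a plain array (tsb's version works on a Series; the helper name is illustrative), assuming ddof = 0 as the default to match scipy.stats.zscore:

```typescript
// z-score: subtract the mean, divide by the standard deviation.
// ddof = 0 gives the population std; ddof = 1 the sample std.
function zscoreSketch(data: number[], ddof = 0): number[] {
  const n = data.length;
  const mean = data.reduce((a, b) => a + b, 0) / n;
  const ss = data.reduce((a, x) => a + (x - mean) ** 2, 0);
  const std = Math.sqrt(ss / (n - ddof));
  return data.map((x) => (x - mean) / std);
}

zscoreSketch([2, 4, 4, 4, 5, 5, 7, 9]);
// → [-1.5, -0.5, -0.5, -0.5, 0, 0, 1, 2]
```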
Scale all values to the interval [0, 1] (or a custom range).
Mirrors sklearn MinMaxScaler.
import { minMaxNormalize, Series } from "tsb";
const s = new Series({ data: [0, 25, 50, 75, 100] });
minMaxNormalize(s).values;
// → [0, 0.25, 0.5, 0.75, 1]
// Scale to [-1, 1]
minMaxNormalize(s, { featureRangeMin: -1, featureRangeMax: 1 }).values;
// → [-1, -0.5, 0, 0.5, 1]
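The scaling formula is x′ = rangeMin + (x − min) / (max − min) × (rangeMax − rangeMin). A sketch over a plain array, assuming max > min (the helper name is illustrative; the option names follow the example above):

```typescript
// Min-max scaling: map [min, max] of the data onto the feature range.
function minMaxNormalizeSketch(
  data: number[],
  featureRangeMin = 0,
  featureRangeMax = 1,
): number[] {
  const lo = Math.min(...data);
  const hi = Math.max(...data);
  return data.map(
    (x) =>
      featureRangeMin +
      ((x - lo) / (hi - lo)) * (featureRangeMax - featureRangeMin),
  );
}
```

Constant data would divide by zero here; a real implementation has to pick a convention for that case.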
Dimensionless measure of dispersion: std / |mean|.
Useful for comparing spread across datasets with different units.
import { coefficientOfVariation, Series } from "tsb";
// Dataset A: [10, 20, 30] mean=20, std=10 → CV=0.5
coefficientOfVariation(new Series({ data: [10, 20, 30] }));
// Dataset B: [100, 200, 300] same shape, higher scale → CV=0.5
coefficientOfVariation(new Series({ data: [100, 200, 300] }));
// CV with population std
coefficientOfVariation(new Series({ data: [1, 2, 3, 4, 5] }), { ddof: 0 });
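Putting the formula together: CV = std / |mean|. A sketch over a plain array, with sample std (ddof = 1) as the default, which is what the std=10 in the worked example above implies (illustrative helper, not tsb's source):

```typescript
// Coefficient of variation: dispersion relative to the mean's magnitude.
function cvSketch(data: number[], ddof = 1): number {
  const n = data.length;
  const mean = data.reduce((a, b) => a + b, 0) / n;
  const ss = data.reduce((a, x) => a + (x - mean) ** 2, 0);
  return Math.sqrt(ss / (n - ddof)) / Math.abs(mean);
}

cvSketch([10, 20, 30]);   // → 0.5
cvSketch([100, 200, 300]); // → 0.5: scaling the data leaves CV unchanged
```

The scale invariance is the whole point: multiplying every value by a constant multiplies std and |mean| alike, so the ratio stays put.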