Analyzing Text

Language has been the heartbeat of Taters from day one. My own background is rooted in this area, as are the project's origins: the psychology of verbal behavior. What follows is some general information to help you grapple with the idea of using text for psychometric purposes, regardless of the analytic method used, from dictionary-based analyses to transformer-based methods and everything in between and beyond.

If you're new to this space and want a single on-ramp, start here:

  • Kennedy, B., Ashokkumar, A., Boyd, R. L., & Dehghani, M. (2022). Text analysis for Psychology: Methods, principles, and practices. In M. Dehghani & R. L. Boyd (Eds.), The handbook of language analysis in psychology (pp. 3–62). The Guilford Press.

For broader field context and the "verbal behavior" perspective:

  • Boyd, R. L., & Schwartz, H. A. (2021). Natural language analysis and the psychology of verbal behavior: The past, present, and future states of the field. Journal of Language and Social Psychology, 40(1), 21–41. https://doi.org/10.1177/0261927X20967028

  • Boyd, R. L., & Markowitz, D. M. (2025). Verbal behavior and the future of social science. American Psychologist, 80(3), 411–433. https://doi.org/10.1037/amp0001319


Using text analysis methods in Taters

Most methods accept analysis-ready CSVs (text_id,text), raw CSVs (with text_cols / optional id_cols / optional group_by), or a folder of .txt files. Below, you'll find short descriptions and information drawn from the full API. Note that this page is really intended as a starting point for wrapping your head around some core concepts; it is not exhaustive, and it necessarily reflects my own perspectives and knowledge, along with the limitations of both.
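
For concreteness, an "analysis-ready" CSV is nothing more than a two-column table with the headers text_id and text. A minimal, made-up example:

    text_id,text
    participant_01,"I went to the store and then came straight home."
    participant_02,"Honestly, this has been the best week I've had in a while."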


Dictionary-based analyses

What & why

Dictionary methods treat language as evidence of attention and style: the words we use (including the "little" ones) systematically reflect cognitive, affective, and social processes. Decades of work show that transparent category counts can be highly diagnostic — especially with function words and psychologically motivated lexicons. A few starting points:

  • Pennebaker, J. W. (2011). The secret life of pronouns: What our words say about us. Bloomsbury.

  • Tausczik, Y. R., & Pennebaker, J. W. (2010). The psychological meaning of words: LIWC and computerized text analysis methods. Journal of Language and Social Psychology, 29(1), 24–54. https://doi.org/10.1177/0261927X09351676

  • Boyd, R. L. (2017). Psychological text analysis in the digital humanities. In S. Hai-Jew (Ed.), Data Analytics in Digital Humanities (pp. 161–189). Springer. https://doi.org/10.1007/978-3-319-54499-1_7

API: analyze texts with an arbitrary number of dictionaries

Compute LIWC-style dictionary features for text rows and write a wide features CSV.

The function supports exactly one of three input modes:

  1. analysis_csv — Use a prebuilt file with columns text_id and text.
  2. csv_path — Gather text from an arbitrary CSV using text_cols (and optional id_cols/group_by) to produce an analysis-ready file.
  3. txt_dir — Gather text from a folder of .txt files.

If out_features_csv is omitted, the default output path is ./features/dictionary/<analysis_ready_filename>. Multiple dictionaries are supported; passing a directory discovers all .dic, .dicx, and .csv dictionary files recursively in a stable order. Global columns (e.g., word counts, punctuation) are emitted once (from the first dictionary) and each dictionary contributes a namespaced block.
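
As a quick illustration of the analysis_csv mode, the call below scores a prebuilt file against every dictionary found under a folder. The paths are placeholders, and the import path is an assumption based on the source location shown further down (src\taters\text\analyze_with_dictionaries.py):

>>> from taters.text.analyze_with_dictionaries import analyze_with_dictionaries
>>> analyze_with_dictionaries(
...     analysis_csv="gathered/interviews.csv",   # must contain text_id,text
...     dict_paths=["dictionaries/"],             # any .dic/.dicx/.csv found recursively
... )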

Parameters:

csv_path : str or Path, default None
    Source CSV to gather from. Mutually exclusive with txt_dir and analysis_csv.
txt_dir : str or Path, default None
    Folder containing .txt files to gather from. Mutually exclusive with other modes.
analysis_csv : str or Path, default None
    Prebuilt analysis-ready CSV with exactly two columns: text_id and text.
out_features_csv : str or Path, default None
    Output file path. If None, defaults to ./features/dictionary/<analysis_ready_filename>.
overwrite_existing : bool, default False
    If False and the output file already exists, skip processing and return the path.
dict_paths : Sequence[str or Path], required
    One or more dictionary inputs (files or directories). Supported extensions: .dic, .dicx, .csv. Directories are expanded recursively.
encoding : str, default "utf-8-sig"
    Text encoding used for reading/writing CSV files.
text_cols : Sequence[str], default ("text",)
    When gathering from a CSV, name(s) of the column(s) containing text.
id_cols : Sequence[str] or None, default None
    Optional ID columns to carry into grouping when gathering from CSV.
mode : {"concat", "separate"}, default "concat"
    Gathering behavior when multiple text columns are provided. "concat" joins them into one text field using joiner; "separate" creates one row per column.
group_by : Sequence[str] or None, default None
    Optional grouping keys used during CSV gathering (e.g., ["speaker"]).
delimiter : str, default ","
    Delimiter for reading/writing CSV files.
joiner : str, default " "
    Separator used when concatenating multiple text chunks in "concat" mode.
num_buckets : int, default 512
    Number of temporary hash buckets used during scalable CSV gathering.
max_open_bucket_files : int, default 64
    Maximum number of bucket files kept open concurrently during gathering.
tmp_root : str or Path or None, default None
    Root directory for temporary gathering artifacts.
recursive : bool, default True
    When gathering from a text folder, recurse into subdirectories.
pattern : str, default "*.txt"
    Glob pattern for selecting text files when gathering from a folder.
id_from : {"stem", "name", "path"}, default "stem"
    How to derive text_id for gathered .txt files.
include_source_path : bool, default True
    If True, include the absolute source path as an additional column when gathering from a text folder.
relative_freq : bool, default True
    Emit relative frequencies instead of raw counts, when supported by the dictionary engine.
drop_punct : bool, default True
    Drop punctuation prior to analysis (dictionary-dependent).
rounding : int, default 4
    Decimal places to round numeric outputs. Use None to disable rounding.
retain_captures : bool, default False
    Pass-through flag to the underlying analyzer to retain capture groups, if applicable.
wildcard_mem : bool, default True
    Pass-through optimization flag for wildcard handling in the analyzer.

Returns:

Path
    Path to the written features CSV.

Raises:

FileNotFoundError
    If input files/folders or any dictionary file cannot be found.
ValueError
    If input modes are misconfigured (e.g., multiple sources provided or none), required columns are missing from the analysis-ready CSV, or unsupported dictionary extensions are encountered.

Examples:

Run on a transcript CSV, grouped by speaker:

>>> analyze_with_dictionaries(
...     csv_path="transcripts/session.csv",
...     text_cols=["text"], id_cols=["speaker"], group_by=["speaker"],
...     dict_paths=["dictionaries/liwc/LIWC-22 Dictionary (2022-01-27).dicx"]
... )
PosixPath('.../features/dictionary/session.csv')

Notes

If overwrite_existing is False and the output exists, the existing file path is returned without recomputation.
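
A second, hypothetical call illustrating the txt_dir mode (all paths below are placeholders):

>>> analyze_with_dictionaries(
...     txt_dir="data/essays",                    # one .txt file per participant
...     pattern="*.txt",
...     id_from="stem",                           # text_id becomes the filename stem
...     dict_paths=["dictionaries/my_lexicon.dicx"],
... )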

Source code in src\taters\text\analyze_with_dictionaries.py
def analyze_with_dictionaries(
    *,
    # ----- Input source (choose exactly one, or pass analysis_csv directly) -----
    csv_path: Optional[Union[str, Path]] = None,
    txt_dir: Optional[Union[str, Path]] = None,
    analysis_csv: Optional[Union[str, Path]] = None,  # if provided, gathering is skipped

    # ----- Output -----
    out_features_csv: Optional[Union[str, Path]] = None,
    overwrite_existing: bool = False,  # if the file already exists, let's not overwrite by default

    # ----- Dictionaries -----
    dict_paths: Sequence[Union[str, Path]], # LIWC2007 (.dic) or LIWC-22 format (.dicx, .csv)

    # ====== SHARED I/O OPTIONS ======
    encoding: str = "utf-8-sig",

    # ====== CSV GATHER OPTIONS ======
    # Only used when csv_path is provided
    text_cols: Sequence[str] = ("text",),
    id_cols: Optional[Sequence[str]] = None,
    mode: Literal["concat", "separate"] = "concat",
    group_by: Optional[Sequence[str]] = None,
    delimiter: str = ",",
    joiner: str = " ",
    num_buckets: int = 512,
    max_open_bucket_files: int = 64,
    tmp_root: Optional[Union[str, Path]] = None,

    # ====== TXT FOLDER GATHER OPTIONS ======
    # Only used when txt_dir is provided
    recursive: bool = True,
    pattern: str = "*.txt",
    id_from: Literal["stem", "name", "path"] = "stem",
    include_source_path: bool = True,

    # ====== ANALYZER OPTIONS (passed through to ContentCoder) ======
    relative_freq: bool = True,
    drop_punct: bool = True,
    rounding: int = 4,
    retain_captures: bool = False,
    wildcard_mem: bool = True,
) -> Path:
    """
    Compute LIWC-style dictionary features for text rows and write a wide features CSV.

    The function supports exactly one of three input modes:

    1. ``analysis_csv`` — Use a prebuilt file with columns ``text_id`` and ``text``.
    2. ``csv_path`` — Gather text from an arbitrary CSV using ``text_cols`` (and optional
    ``id_cols``/``group_by``) to produce an analysis-ready file.
    3. ``txt_dir`` — Gather text from a folder of ``.txt`` files.

    If ``out_features_csv`` is omitted, the default output path is
    ``./features/dictionary/<analysis_ready_filename>``. Multiple dictionaries are supported;
    passing a directory discovers all ``.dic``, ``.dicx``, and ``.csv`` dictionary files
    recursively in a stable order. Global columns (e.g., word counts, punctuation) are emitted
    once (from the first dictionary) and each dictionary contributes a namespaced block.

    Parameters
    ----------
    csv_path : str or pathlib.Path, optional
        Source CSV to gather from. Mutually exclusive with ``txt_dir`` and ``analysis_csv``.
    txt_dir : str or pathlib.Path, optional
        Folder containing ``.txt`` files to gather from. Mutually exclusive with other modes.
    analysis_csv : str or pathlib.Path, optional
        Prebuilt analysis-ready CSV with exactly two columns: ``text_id`` and ``text``.
    out_features_csv : str or pathlib.Path, optional
        Output file path. If ``None``, defaults to
        ``./features/dictionary/<analysis_ready_filename>``.
    overwrite_existing : bool, default=False
        If ``False`` and the output file already exists, skip processing and return the path.
    dict_paths : Sequence[str or pathlib.Path]
        One or more dictionary inputs (files or directories). Supported extensions:
        ``.dic``, ``.dicx``, ``.csv``. Directories are expanded recursively.
    encoding : str, default="utf-8-sig"
        Text encoding used for reading/writing CSV files.
    text_cols : Sequence[str], default=("text",)
        When gathering from a CSV, name(s) of the column(s) containing text.
    id_cols : Sequence[str] or None, optional
        Optional ID columns to carry into grouping when gathering from CSV.
    mode : {"concat", "separate"}, default="concat"
        Gathering behavior when multiple text columns are provided. ``"concat"`` joins them
        into one text field using ``joiner``; ``"separate"`` creates one row per column.
    group_by : Sequence[str] or None, optional
        Optional grouping keys used during CSV gathering (e.g., ``["speaker"]``).
    delimiter : str, default=","
        Delimiter for reading/writing CSV files.
    joiner : str, default=" "
        Separator used when concatenating multiple text chunks in ``"concat"`` mode.
    num_buckets : int, default=512
        Number of temporary hash buckets used during scalable CSV gathering.
    max_open_bucket_files : int, default=64
        Maximum number of bucket files kept open concurrently during gathering.
    tmp_root : str or pathlib.Path or None, optional
        Root directory for temporary gathering artifacts.
    recursive : bool, default=True
        When gathering from a text folder, recurse into subdirectories.
    pattern : str, default="*.txt"
        Glob pattern for selecting text files when gathering from a folder.
    id_from : {"stem", "name", "path"}, default="stem"
        How to derive ``text_id`` for gathered ``.txt`` files.
    include_source_path : bool, default=True
        If ``True``, include the absolute source path as an additional column when gathering
        from a text folder.
    relative_freq : bool, default=True
        Emit relative frequencies instead of raw counts, when supported by the dictionary engine.
    drop_punct : bool, default=True
        Drop punctuation prior to analysis (dictionary-dependent).
    rounding : int, default=4
        Decimal places to round numeric outputs. Use ``None`` to disable rounding.
    retain_captures : bool, default=False
        Pass-through flag to the underlying analyzer to retain capture groups, if applicable.
    wildcard_mem : bool, default=True
        Pass-through optimization flag for wildcard handling in the analyzer.

    Returns
    -------
    pathlib.Path
        Path to the written features CSV.

    Raises
    ------
    FileNotFoundError
        If input files/folders or any dictionary file cannot be found.
    ValueError
        If input modes are misconfigured (e.g., multiple sources provided or none),
        required columns are missing from the analysis-ready CSV, or unsupported
        dictionary extensions are encountered.

    Examples
    --------
    Run on a transcript CSV, grouped by speaker:

    >>> analyze_with_dictionaries(
    ...     csv_path="transcripts/session.csv",
    ...     text_cols=["text"], id_cols=["speaker"], group_by=["speaker"],
    ...     dict_paths=["dictionaries/liwc/LIWC-22 Dictionary (2022-01-27).dicx"]
    ... )
    PosixPath('.../features/dictionary/session.csv')

    Notes
    -----
    If ``overwrite_existing`` is ``False`` and the output exists, the existing file path
    is returned without recomputation.
    """


    # 1) Produce or accept the analysis-ready CSV (must have columns: text_id,text)
    if analysis_csv is not None:
        analysis_ready = Path(analysis_csv)
        if not analysis_ready.exists():
            raise FileNotFoundError(f"analysis_csv not found: {analysis_ready}")
    else:
        if (csv_path is None) == (txt_dir is None):
            raise ValueError("Provide exactly one of csv_path or txt_dir (or pass analysis_csv).")

        if csv_path is not None:
            analysis_ready = Path(
                csv_to_analysis_ready_csv(
                    csv_path=csv_path,
                    text_cols=list(text_cols),
                    id_cols=list(id_cols) if id_cols else None,
                    mode=mode,
                    group_by=list(group_by) if group_by else None,
                    delimiter=delimiter,
                    encoding=encoding,
                    joiner=joiner,
                    num_buckets=num_buckets,
                    max_open_bucket_files=max_open_bucket_files,
                    tmp_root=tmp_root,
                )
            )
        else:
            analysis_ready = Path(
                txt_folder_to_analysis_ready_csv(
                    root_dir=txt_dir,
                    recursive=recursive,
                    pattern=pattern,
                    encoding=encoding,
                    id_from=id_from,
                    include_source_path=include_source_path,
                )
            )

    # 1b) Decide default features path if not provided:
    #     <cwd>/features/dictionary/<analysis_ready_filename>
    if out_features_csv is None:
        out_features_csv = Path.cwd() / "features" / "dictionary" / analysis_ready.name
    out_features_csv = Path(out_features_csv)
    out_features_csv.parent.mkdir(parents=True, exist_ok=True)

    if not overwrite_existing and Path(out_features_csv).is_file():
        print("Dictionary content coding output file already exists; returning existing file.")
        return out_features_csv


    # 2) Validate dictionaries
    def _expand_dict_inputs(paths):
        """
        Normalize dictionary inputs into a unique, ordered list of files.

        Parameters
        ----------
        paths : Iterable[Union[str, pathlib.Path]]
            Files or directories. Directories are expanded recursively to files with
            extensions ``.dic``, ``.dicx``, or ``.csv``.

        Returns
        -------
        list[pathlib.Path]
            Deduplicated, resolved file paths in stable order.

        Raises
        ------
        FileNotFoundError
            If a referenced file or directory does not exist.
        ValueError
            If a file has an unsupported extension or if no dictionary files are found.
        """

        out = []
        seen = set()
        for p in map(Path, paths):
            if p.is_dir():
                # Find .dic/.dicx/.csv under this folder (recursive), stable order
                found = find_files(
                    root_dir=p,
                    extensions=[".dic", ".dicx", ".csv"],
                    recursive=True,
                    absolute=True,
                    sort=True,
                )
                for f in found:
                    fp = Path(f).resolve()
                    if fp.suffix.lower().lstrip(".") in {"dic", "dicx", "csv"}:
                        if fp not in seen:
                            out.append(fp)
                            seen.add(fp)
            else:
                if not p.exists():
                    raise FileNotFoundError(f"Dictionary path not found: {p}")
                fp = p.resolve()
                if fp.suffix.lower().lstrip(".") not in {"dic", "dicx", "csv"}:
                    raise ValueError(f"Unsupported dictionary extension: {fp.name}")
                if fp not in seen:
                    out.append(fp)
                    seen.add(fp)
        if not out:
            raise ValueError("No dictionary files found. Supply .dic/.dicx/.csv files or folders containing them.")
        return out

    dict_paths = _expand_dict_inputs(dict_paths)

    # 3) Stream the analysis-ready CSV into the analyzer → features CSV
    def _iter_items_from_csv(
        path: Path, *, id_col: str = "text_id", text_col: str = "text"
    ) -> Iterable[Tuple[str, str]]:
        """
        Stream ``(text_id, text)`` pairs from an analysis-ready CSV.

        Parameters
        ----------
        path : pathlib.Path
            Path to the analysis-ready CSV file.
        id_col : str, default="text_id"
            Name of the identifier column to read.
        text_col : str, default="text"
            Name of the text column to read.

        Yields
        ------
        tuple[str, str]
            ``(text_id, text)`` for each row; missing text values are emitted as empty strings.

        Raises
        ------
        ValueError
            If the required columns are not present in the CSV header.
        """

        with path.open("r", newline="", encoding=encoding) as f:
            reader = csv.DictReader(f, delimiter=delimiter)
            if id_col not in reader.fieldnames or text_col not in reader.fieldnames:
                raise ValueError(
                    f"Expected columns '{id_col}' and '{text_col}' in {path}; found {reader.fieldnames}"
                )
            for row in reader:
                yield str(row[id_col]), (row.get(text_col) or "")

    # Use multi_dict_analyzer as the middle layer (new API)
    mda.analyze_texts_to_csv(
        items=_iter_items_from_csv(analysis_ready),
        dict_files=dict_paths,
        out_csv=out_features_csv,
        relative_freq=relative_freq,
        drop_punct=drop_punct,
        rounding=rounding,
        retain_captures=retain_captures,
        wildcard_mem=wildcard_mem,
        id_col_name="text_id",
        encoding=encoding,
    )

    return out_features_csv

Lexical richness analyses

What & why

Lexical richness/diversity asks: how varied is a speaker's vocabulary use? Classic measures (e.g., TTR, RTTR/CTTR, Herdan's C, Yule's K/I) capture type–token structure, while modern, length-robust metrics (MTLD, MATTR, HD-D, VOCD/D) reduce text-length bias and are widely used in psycholinguistics and language assessment. In Taters, these metrics are computed per text (or per group such as source,speaker) with tokenization and reproducibility controls (window sizes, draws, seeds). A closely related codebase is the lexicalrichness package (https://github.com/LSYS/lexicalrichness), which the implementation below adapts; the API notes describe where it diverges.

Tip: Results can vary with tokenization choices (e.g., handling of hyphens) and window/draw parameters. For strict comparability with prior work or other toolkits, keep those settings consistent and document them in your analysis.
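
One simple way to follow that advice is to pass the relevant settings explicitly rather than relying on defaults, so that they are recorded in your analysis code. A sketch with placeholder paths, using the function documented below:

>>> analyze_lexical_richness(
...     analysis_csv="gathered/essays.csv",
...     msttr_window=100, mattr_window=100,
...     mtld_threshold=0.72, hdd_draws=42,
...     vocd_ntokens=50, vocd_within_sample=100,
...     vocd_iterations=3, vocd_seed=42,
... )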

API: analyze texts with lexical diversity metrics

Compute lexical richness/diversity metrics for each text row and write a features CSV. The implementation draws heavily from https://github.com/LSYS/lexicalrichness but makes several key changes, with the goals of minimizing dependencies, attempting some speed optimizations by using a grid search instead of precise curve fitting, and making some principled decisions around punctuation/hyphenation that differ from the original. Note that these decisions are not objectively "better" than the original; rather, they reflect my own experiences and intuitions about what makes sense.

This function accepts (a) an analysis-ready CSV (with columns text_id,text), (b) a raw CSV plus instructions for gathering/aggregation, or (c) a folder of .txt files. For each resulting row of text, it tokenizes words and computes a suite of classical lexical richness measures (e.g., TTR, Herdan's C, Yule's K, MTLD, MATTR, HDD, VOCD). Results are written as a wide CSV whose rows align with the rows in the analysis-ready table (or the gathered group_by rows), preserving any non-text metadata columns.

Parameters:

csv_path : str or Path, default None
    Source CSV to gather from. Use with text_cols, optional id_cols, and optional group_by. Exactly one of csv_path, txt_dir, or analysis_csv must be provided (unless analysis_csv is given, which skips gathering).
txt_dir : str or Path, default None
    Folder of .txt files to gather. File identifiers are created from filenames via id_from and (optionally) a source_path column when include_source_path=True.
analysis_csv : str or Path, default None
    Existing analysis-ready CSV with columns text_id,text. When provided, all gathering options are ignored and the file is used as-is.
out_features_csv : str or Path, default None
    Output CSV path. If omitted, defaults to ./features/lexical-richness/<analysis_ready_filename>.
overwrite_existing : bool, default False
    If False and out_features_csv exists, the function short-circuits and returns the existing path without recomputation.
encoding : str, default "utf-8-sig"
    Encoding for reading/writing CSVs.
text_cols : sequence of str, default ("text",)
    Text column(s) to use when csv_path is provided. When multiple columns are given, they are combined according to mode (concat or separate).
id_cols : sequence of str, default None
    Columns to carry through unchanged into the analysis-ready CSV prior to analysis (e.g., ["source","speaker"]). These will also appear in the output features CSV.
mode : {"concat", "separate"}, default "concat"
    Gathering behavior when multiple text_cols are provided. "concat" joins values using joiner; "separate" produces separate rows per text column.
group_by : sequence of str, default None
    If provided, texts are grouped by these columns before analysis (e.g., ["source","speaker"]). With mode="concat", all texts in a group are joined into one blob per group; with mode="separate", they remain separate rows.
delimiter : str, default ","
    CSV delimiter used for input and output.
joiner : str, default " "
    String used to join text fields when mode="concat".
num_buckets : int, default 512
    Internal streaming/gather parameter to control temporary file bucketing (passed through to the gatherer).
max_open_bucket_files : int, default 64
    Maximum number of temporary files simultaneously open during gathering.
tmp_root : str or Path, default None
    Temporary directory root for the gatherer. Defaults to a system temp location.
recursive : bool, default True
    When txt_dir is provided, whether to search subdirectories for .txt files.
pattern : str, default "*.txt"
    Glob pattern for discovering text files under txt_dir.
id_from : {"stem", "name", "path"}, default "stem"
    How to construct text_id for .txt inputs: file stem, full name, or relative path.
include_source_path : bool, default True
    When txt_dir is used, include a source_path column in the analysis-ready CSV.
msttr_window : int, default 100
    Window size for MSTTR (Mean Segmental TTR). Must be smaller than the number of tokens in the text to produce a value.
mattr_window : int, default 100
    Window size for MATTR (Moving-Average TTR). Must be smaller than the number of tokens.
mtld_threshold : float, default 0.72
    MTLD threshold for factor completion. A higher threshold yields shorter factors and typically lower MTLD values; the default follows common practice.
hdd_draws : int, default 42
    Sample size n for HD-D (Hypergeometric Distribution Diversity). Must be less than the number of tokens to produce a value.
vocd_ntokens : int, default 50
    Maximum sample size used to estimate VOCD (D). For each N in 35..vocd_ntokens, the function computes the average TTR over many random samples (vocd_within_sample).
vocd_within_sample : int, default 100
    Number of random samples drawn per N when estimating VOCD.
vocd_iterations : int, default 3
    Repeat-estimate count for VOCD. The best-fit D from each repetition is averaged.
vocd_seed : int, default 42
    Seed for the VOCD random sampler (controls reproducibility across runs).

Returns:

Path
    Path to the written features CSV.

Notes

Tokenization and preprocessing. Texts are lowercased, digits are removed, and punctuation characters are replaced with spaces prior to tokenization. As a result, hyphenated forms such as "state-of-the-art" will be split into separate tokens ("state", "of", "the", "art"). This choice yields robust behavior across corpora but can produce different numeric results than implementations that remove hyphens (treating "state-of-the-art" as a single token). If you require strict parity with a hyphen-removal scheme, adapt the internal preprocessing accordingly.
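
A minimal sketch of that preprocessing (an approximation for illustration, not the package's exact code):

import re
import string

def simple_tokenize(text: str) -> list:
    # Lowercase, strip digits, replace punctuation with spaces, then split on
    # whitespace -- so "state-of-the-art" becomes four separate tokens.
    text = text.lower()
    text = re.sub(r"\d+", "", text)
    text = text.translate(str.maketrans({c: " " for c in string.punctuation}))
    return text.split()

simple_tokenize("State-of-the-art results in 2024!")
# -> ['state', 'of', 'the', 'art', 'results', 'in']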

Metrics. The following measures are emitted per row (values are None when a text is too short to support the computation):

  • ttr: Type-Token Ratio (|V| / N)
  • rttr: Root TTR (|V| / sqrt(N))
  • cttr: Corrected TTR (|V| / sqrt(2N))
  • herdan_c: Herdan's C (log |V| / log N)
  • summer_s: Summer's S (log log |V| / log log N)
  • dugast: Dugast's U ((log N)^2 / (log N − log |V|))
  • maas: Maas a^2 ((log N − log |V|) / (log N)^2)
  • yule_k: Yule's K (dispersion of frequencies; higher = less diverse)
  • yule_i: Yule's I (inverse of K, scaled)
  • herdan_vm: Herdan's Vm
  • simpson_d: Simpson's D (repeat-probability across tokens)
  • msttr_{msttr_window}: Mean Segmental TTR over fixed segments
  • mattr_{mattr_window}: Moving-Average TTR over a sliding window
  • mtld_{mtld_threshold}: Measure of Textual Lexical Diversity (bidirectional)
  • hdd_{hdd_draws}: HD-D (expected proportion of types in a sample of size hdd_draws)
  • vocd_{vocd_ntokens}: VOCD (D) estimated by fitting TTR(N) to a theoretical curve
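
To make the simplest of these concrete, here is a toy illustration of a few closed-form measures using the formulas exactly as stated above (these are not Taters functions, just the arithmetic):

import math

tokens = ["the", "cat", "sat", "on", "the", "mat"]
N = len(tokens)          # tokens: 6
V = len(set(tokens))     # types: 5

ttr = V / N                               # 0.8333
rttr = V / math.sqrt(N)                   # 2.0412
cttr = V / math.sqrt(2 * N)               # 1.4434
herdan_c = math.log(V) / math.log(N)      # 0.8982
maas = (math.log(N) - math.log(V)) / math.log(N) ** 2   # approx. 0.0568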

VOCD estimation. VOCD is fit without external optimization libraries: the function performs a coarse grid search over candidate D values (minimizing squared error between observed mean TTRs and a theoretical TTR(N; D) curve) for multiple repetitions, then averages the best D across repetitions. This generally tracks SciPy-based curve fits closely; you can widen the search grid or add a fine local search if tighter agreement is desired.
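
A compact sketch of that estimation strategy (the theoretical curve shown is the standard vocd-D model, TTR(N, D) = (D/N) * (sqrt(1 + 2N/D) - 1); Taters' exact grid and curve handling may differ):

import math
import random

def fit_vocd_d(tokens, n_min=35, n_max=50, within_sample=100, seed=42):
    # 1) Observed mean TTR at each sample size N.
    rng = random.Random(seed)
    mean_ttr = {}
    for n in range(n_min, n_max + 1):
        if n > len(tokens):
            break
        ttrs = [len(set(rng.sample(tokens, n))) / n for _ in range(within_sample)]
        mean_ttr[n] = sum(ttrs) / len(ttrs)
    if not mean_ttr:
        return None  # text too short to estimate D

    # 2) Theoretical TTR as a function of N for a candidate D.
    def curve(n, d):
        return (d / n) * (math.sqrt(1 + 2 * n / d) - 1)

    # 3) Coarse grid search over D, minimizing squared error against the means.
    grid = [d / 100.0 for d in range(100, 20001, 25)]   # D from 1.00 to 200.00
    return min(grid, key=lambda d: sum((mean_ttr[n] - curve(n, d)) ** 2 for n in mean_ttr))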

Output shape. The output CSV includes all non-text columns from the analysis-ready CSV (e.g., text_id, plus any id_cols) and appends one column per metric. When a group-by is specified during gathering, each output row corresponds to one group (e.g., one (source, speaker)).

Raises:

FileNotFoundError
    If analysis_csv is provided but the file does not exist.
ValueError
    If none or more than one of csv_path, txt_dir, or analysis_csv are provided, or if the analysis-ready CSV is missing required columns (text_id, text).

Examples:

Analyze an existing analysis-ready CSV (utterance-level):

>>> analyze_lexical_richness(
...     analysis_csv="transcripts_all.csv",
...     out_features_csv="features/lexical-richness.csv",
...     overwrite_existing=True,
... )

Gather from a transcript CSV and aggregate per (source, speaker):

>>> analyze_lexical_richness(
...     csv_path="transcripts/session.csv",
...     text_cols=["text"],
...     id_cols=["source", "speaker"],
...     group_by=["source", "speaker"],
...     mode="concat",
...     out_features_csv="features/lexical-richness.csv",
... )
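
A further, hypothetical example that builds the analysis-ready file first and then reuses it (useful when several analyzers will run over the same gathered text; the paths and the availability of the gathering helper as a direct import are assumptions):

>>> gathered = csv_to_analysis_ready_csv(
...     csv_path="transcripts/session.csv",
...     text_cols=["text"],
...     id_cols=["source", "speaker"],
...     group_by=["source", "speaker"],
... )
>>> analyze_lexical_richness(analysis_csv=gathered)
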
See Also

  • analyze_readability : Parallel analyzer producing readability indices.
  • csv_to_analysis_ready_csv : Helper for building the analysis-ready table from a CSV.
  • txt_folder_to_analysis_ready_csv : Helper for building the analysis-ready table from a folder of .txt files.

Source code in src\taters\text\analyze_lexical_richness.py
def analyze_lexical_richness(
    *,
    # ----- Input source (exactly one unless analysis_csv is provided) ----------
    csv_path: Optional[Union[str, Path]] = None,
    txt_dir: Optional[Union[str, Path]] = None,
    analysis_csv: Optional[Union[str, Path]] = None,  # if provided, gathering is skipped

    # ----- Output --------------------------------------------------------------
    out_features_csv: Optional[Union[str, Path]] = None,
    overwrite_existing: bool = False,

    # ====== SHARED I/O OPTIONS ======
    encoding: str = "utf-8-sig",

    # ====== CSV GATHER OPTIONS ======
    text_cols: Sequence[str] = ("text",),
    id_cols: Optional[Sequence[str]] = None,
    mode: Literal["concat", "separate"] = "concat",
    group_by: Optional[Sequence[str]] = None,
    delimiter: str = ",",
    joiner: str = " ",
    num_buckets: int = 512,
    max_open_bucket_files: int = 64,
    tmp_root: Optional[Union[str, Path]] = None,

    # ====== TXT FOLDER GATHER OPTIONS ======
    recursive: bool = True,
    pattern: str = "*.txt",
    id_from: Literal["stem", "name", "path"] = "stem",
    include_source_path: bool = True,

    # ====== Metric hyperparameters (optional) ======
    msttr_window: int = 100,
    mattr_window: int = 100,
    mtld_threshold: float = 0.72,
    hdd_draws: int = 42,
    vocd_ntokens: int = 50,
    vocd_within_sample: int = 100,
    vocd_iterations: int = 3,
    vocd_seed: int = 42,
) -> Path:
    """
    Compute lexical richness/diversity metrics for each text row and write a features CSV.
    Draws heavily from https://github.com/LSYS/lexicalrichness but makes several key changes 
    with the goals of minimizing dependencies, attempting to make some speed optimizations with 
    grid search instead of precise curve specifications, and making some principled decisions 
    around punctuation/hyphenation that differ from the original. Note that these decisions are
    not objectively "better" than the original but, instead, reflect my own experiences/intuitions 
    about what makes sense.

    This function accepts (a) an *analysis-ready* CSV (with columns `text_id,text`), (b) a
    raw CSV plus instructions for gathering/aggregation, or (c) a folder of `.txt` files.
    For each resulting row of text, it tokenizes words and computes a suite of classical
    lexical richness measures (e.g., TTR, Herdan's C, Yule's K, MTLD, MATTR, HDD, VOCD).
    Results are written as a wide CSV whose rows align with the rows in the analysis-ready
    table (or the gathered `group_by` rows), preserving any non-text metadata columns.

    Parameters
    ----------
    csv_path : str or Path, optional
        Source CSV to *gather* from. Use with `text_cols`, optional `id_cols`, and
        optional `group_by`. Exactly one of `csv_path`, `txt_dir`, or `analysis_csv`
        must be provided (unless `analysis_csv` is given, which skips gathering).
    txt_dir : str or Path, optional
        Folder of `.txt` files to gather. File identifiers are created from filenames
        via `id_from` and (optionally) a `source_path` column when `include_source_path=True`.
    analysis_csv : str or Path, optional
        Existing analysis-ready CSV with columns `text_id,text`. When provided, all
        gathering options are ignored and the file is used as-is.
    out_features_csv : str or Path, optional
        Output CSV path. If omitted, defaults to
        `./features/lexical-richness/<analysis_ready_filename>`.
    overwrite_existing : bool, default False
        If `False` and `out_features_csv` exists, the function short-circuits and
        returns the existing path without recomputation.
    encoding : str, default "utf-8-sig"
        Encoding for reading/writing CSVs.
    text_cols : sequence of str, default ("text",)
        Text column(s) to use when `csv_path` is provided. When multiple columns are
        given, they are combined according to `mode` (`concat` or `separate`).
    id_cols : sequence of str, optional
        Columns to carry through unchanged into the analysis-ready CSV prior to analysis
        (e.g., `["source","speaker"]`). These will also appear in the output features CSV.
    mode : {"concat", "separate"}, default "concat"
        Gathering behavior when multiple `text_cols` are provided. `"concat"` joins
        values using `joiner`; `"separate"` produces separate rows per text column.
    group_by : sequence of str, optional
        If provided, texts are grouped by these columns before analysis (e.g.,
        `["source","speaker"]`). With `mode="concat"`, all texts in a group are joined
        into one blob per group; with `mode="separate"`, they remain separate rows.
    delimiter : str, default ","
        CSV delimiter used for input and output.
    joiner : str, default " "
        String used to join text fields when `mode="concat"`.
    num_buckets : int, default 512
        Internal streaming/gather parameter to control temporary file bucketing
        (passed through to the gatherer).
    max_open_bucket_files : int, default 64
        Maximum number of temporary files simultaneously open during gathering.
    tmp_root : str or Path, optional
        Temporary directory root for the gatherer. Defaults to a system temp location.
    recursive : bool, default True
        When `txt_dir` is provided, whether to search subdirectories for `.txt` files.
    pattern : str, default "*.txt"
        Glob pattern for discovering text files under `txt_dir`.
    id_from : {"stem", "name", "path"}, default "stem"
        How to construct `text_id` for `.txt` inputs: file stem, full name, or relative path.
    include_source_path : bool, default True
        When `txt_dir` is used, include a `source_path` column in the analysis-ready CSV.
    msttr_window : int, default 100
        Window size for MSTTR (Mean Segmental TTR). Must be smaller than the number of tokens
        in the text to produce a value.
    mattr_window : int, default 100
        Window size for MATTR (Moving-Average TTR). Must be smaller than the number of tokens.
    mtld_threshold : float, default 0.72
        MTLD threshold for factor completion. A higher threshold yields shorter factors and
        typically lower MTLD values; the default follows common practice.
    hdd_draws : int, default 42
        Sample size `n` for HD-D (Hypergeometric Distribution Diversity). Must be less than
        the number of tokens to produce a value.
    vocd_ntokens : int, default 50
        Maximum sample size used to estimate VOCD (D). For each `N` in 35..`vocd_ntokens`,
        the function computes the average TTR over many random samples (`vocd_within_sample`).
    vocd_within_sample : int, default 100
        Number of random samples drawn per `N` when estimating VOCD.
    vocd_iterations : int, default 3
        Repeat-estimate count for VOCD. The best-fit D from each repetition is averaged.
    vocd_seed : int, default 42
        Seed for the VOCD random sampler (controls reproducibility across runs).

    Returns
    -------
    Path
        Path to the written features CSV.

    Notes
    -----
    **Tokenization and preprocessing.**
    Texts are lowercased, digits are removed, and punctuation characters are
    replaced with spaces prior to tokenization. As a result, hyphenated forms such
    as `"state-of-the-art"` will be split into separate tokens (`"state"`, `"of"`,
    `"the"`, `"art"`). This choice yields robust behavior across corpora but can
    produce different numeric results than implementations that *remove* hyphens
    (treating `"state-of-the-art"` as a single token). If you require strict parity
    with a hyphen-removal scheme, adapt the internal preprocessing accordingly.

    **Metrics.**
    The following measures are emitted per row (values are `None` when a text is
    too short to support the computation):
    - ``ttr``: Type-Token Ratio (|V| / N)
    - ``rttr``: Root TTR (|V| / sqrt(N))
    - ``cttr``: Corrected TTR (|V| / sqrt(2N))
    - ``herdan_c``: Herdan's C (log |V| / log N)
    - ``summer_s``: Summer's S (log log |V| / log log N)
    - ``dugast``: Dugast's U ((log N)^2 / (log N − log |V|))
    - ``maas``: Maas a^2 ((log N − log |V|) / (log N)^2)
    - ``yule_k``: Yule's K (dispersion of frequencies; higher = less diverse)
    - ``yule_i``: Yule's I (inverse of K, scaled)
    - ``herdan_vm``: Herdan's Vm
    - ``simpson_d``: Simpson's D (repeat-probability across tokens)
    - ``msttr_{msttr_window}``: Mean Segmental TTR over fixed segments
    - ``mattr_{mattr_window}``: Moving-Average TTR over a sliding window
    - ``mtld_{mtld_threshold}``: Measure of Textual Lexical Diversity (bidirectional)
    - ``hdd_{hdd_draws}``: HD-D (expected proportion of types in a sample of size ``hdd_draws``)
    - ``vocd_{vocd_ntokens}``: VOCD (D) estimated by fitting TTR(N) to a theoretical curve

    **VOCD estimation.**
    VOCD is fit without external optimization libraries: the function performs a
    coarse grid search over candidate D values (minimizing squared error between
    observed mean TTRs and a theoretical TTR(N; D) curve) for multiple repetitions,
    then averages the best D across repetitions. This generally tracks SciPy-based
    curve fits closely; you can widen the search grid or add a fine local search
    if tighter agreement is desired.

    **Output shape.**
    The output CSV includes all non-text columns from the analysis-ready CSV
    (e.g., `text_id`, plus any `id_cols`) and appends one column per metric. When
    a group-by is specified during gathering, each output row corresponds to one
    group (e.g., one `(source, speaker)`).

    Raises
    ------
    FileNotFoundError
        If `analysis_csv` is provided but the file does not exist.
    ValueError
        If none or more than one of `csv_path`, `txt_dir`, or `analysis_csv` are provided,
        or if the analysis-ready CSV is missing required columns (`text_id`, `text`).

    Examples
    --------
    Analyze an existing analysis-ready CSV (utterance-level):

    >>> analyze_lexical_richness(
    ...     analysis_csv="transcripts_all.csv",
    ...     out_features_csv="features/lexical-richness.csv",
    ...     overwrite_existing=True,
    ... )

    Gather from a transcript CSV and aggregate per (source, speaker):

    >>> analyze_lexical_richness(
    ...     csv_path="transcripts/session.csv",
    ...     text_cols=["text"],
    ...     id_cols=["source", "speaker"],
    ...     group_by=["source", "speaker"],
    ...     mode="concat",
    ...     out_features_csv="features/lexical-richness.csv",
    ... )

    See Also
    --------
    analyze_readability : Parallel analyzer producing readability indices.
    csv_to_analysis_ready_csv : Helper for building the analysis-ready table from a CSV.
    txt_folder_to_analysis_ready_csv : Helper for building the analysis-ready table from a folder of .txt files.
    """
    # 1) Accept or produce analysis-ready CSV
    if analysis_csv is not None:
        analysis_ready = Path(analysis_csv)
        if not analysis_ready.exists():
            raise FileNotFoundError(f"analysis_csv not found: {analysis_ready}")
    else:
        if (csv_path is None) == (txt_dir is None):
            raise ValueError("Provide exactly one of csv_path or txt_dir (or pass analysis_csv).")

        if csv_path is not None:
            analysis_ready = Path(
                csv_to_analysis_ready_csv(
                    csv_path=csv_path,
                    text_cols=list(text_cols),
                    id_cols=list(id_cols) if id_cols else None,
                    mode=mode,
                    group_by=list(group_by) if group_by else None,
                    delimiter=delimiter,
                    encoding=encoding,
                    joiner=joiner,
                    num_buckets=num_buckets,
                    max_open_bucket_files=max_open_bucket_files,
                    tmp_root=tmp_root,
                )
            )
        else:
            analysis_ready = Path(
                txt_folder_to_analysis_ready_csv(
                    root_dir=txt_dir,
                    recursive=recursive,
                    pattern=pattern,
                    encoding=encoding,
                    id_from=id_from,
                    include_source_path=include_source_path,
                )
            )

    # 2) Decide default features path
    if out_features_csv is None:
        out_features_csv = Path.cwd() / "features" / "lexical-richness" / analysis_ready.name
    out_features_csv = Path(out_features_csv)
    out_features_csv.parent.mkdir(parents=True, exist_ok=True)

    if not overwrite_existing and out_features_csv.is_file():
        print(f"Lexical richness output file already exists; returning existing file: {out_features_csv}")
        return out_features_csv

    # 3) Stream analysis-ready CSV and compute metrics per row
    metrics_fixed = [
        "ttr",
        "rttr",
        "cttr",
        "herdan_c",
        "summer_s",
        "dugast",
        "maas",
        "yule_k",
        "yule_i",
        "herdan_vm",
        "simpson_d",
    ]
    # dynamic metric names (with params baked into column names)
    m_msttr = f"msttr_{msttr_window}"
    m_mattr = f"mattr_{mattr_window}"
    m_mtld  = f"mtld_{str(mtld_threshold).replace('.', '_')}"
    m_hdd   = f"hdd_{hdd_draws}"
    m_vocd  = f"vocd_{vocd_ntokens}"
    metric_names = metrics_fixed + [m_msttr, m_mattr, m_mtld, m_hdd, m_vocd]

    with analysis_ready.open("r", newline="", encoding=encoding) as fin, \
         out_features_csv.open("w", newline="", encoding=encoding) as fout:
        reader = csv.DictReader(fin, delimiter=delimiter)

        if "text_id" not in reader.fieldnames or "text" not in reader.fieldnames:
            raise ValueError(
                f"Expected columns 'text_id' and 'text' in {analysis_ready}; "
                f"found {reader.fieldnames}"
            )

        passthrough_cols = [c for c in reader.fieldnames if c != "text"]
        fieldnames = passthrough_cols + metric_names
        writer = csv.DictWriter(fout, fieldnames=fieldnames, delimiter=delimiter)
        writer.writeheader()

        for row in reader:
            txt = (row.get("text") or "").strip()
            toks = _tokenize(txt) if txt else []
            out_row: Dict[str, Any] = {k: row.get(k) for k in passthrough_cols}

            # fixed metrics
            out_row["ttr"]        = ttr(toks)
            out_row["rttr"]       = rttr(toks)
            out_row["cttr"]       = cttr(toks)
            out_row["herdan_c"]   = herdan_c(toks)
            out_row["summer_s"]   = summer_s(toks)
            out_row["dugast"]     = dugast(toks)
            out_row["maas"]       = maas(toks)
            out_row["yule_k"]     = yule_k(toks)
            out_row["yule_i"]     = yule_i(toks)
            out_row["herdan_vm"]  = herdan_vm(toks)
            out_row["simpson_d"]  = simpson_d(toks)

            # parameterized metrics
            out_row[m_msttr] = msttr(toks, segment_window=msttr_window)
            out_row[m_mattr] = mattr(toks, window_size=mattr_window)
            out_row[m_mtld]  = mtld(toks, threshold=mtld_threshold)
            out_row[m_hdd]   = hdd(toks, draws=hdd_draws)
            out_row[m_vocd]  = vocd(
                toks,
                ntokens=vocd_ntokens,
                within_sample=vocd_within_sample,
                iterations=vocd_iterations,
                seed=vocd_seed,
            )

            writer.writerow(out_row)

    return out_features_csv


Transformer-based analyses

Archetypes (theory-driven, embedding-based)

Archetype analysis encodes each text with a Sentence-Transformers model and measures similarity to curated seed phrases (one CSV per construct). The model handles nuance; your archetype definitions provide direction in embedding space. A short illustrative sketch follows the references below. Recent examples:

  • Varadarajan, V., Lahnala, A., Ganesan, A. V., Dey, G., Mangalik, S., Bucur, A.-M., Soni, N., Rao, R., Lanning, K., Vallejo, I., Flek, L., Schwartz, H. A., Welch, C., & Boyd, R. L. (2024). Archetypes and entropy: Theory-driven extraction of evidence for suicide risk. In Proceedings of CLPsych 2024 (pp. 278–291). https://aclanthology.org/2024.clpsych-1.28

  • Lahnala, A., Varadarajan, V., Flek, L., Schwartz, H. A., & Boyd, R. L. (2025). Unifying the extremes: Developing a unified model for detecting and predicting extremist traits and radicalization. Proceedings of the International AAAI Conference on Web and Social Media, 19, 1051–1067. https://doi.org/10.1609/icwsm.v19i1.35860

  • Soni, N., Nilsson, A. H., Mahwish, S., Varadarajan, V., Schwartz, H. A., & Boyd, R. L. (2025). Who we are, where we are: Mental health at the intersection of person, situation, and large language models. In Proceedings of CLPsych 2025 (pp. 300–313). https://aclanthology.org/2025.clpsych-1.27/

  • Atari, M., Omrani, A., & Dehghani, M. (2023). Contextualized construct representation: Leveraging psychometric scales to advance theory-driven text analysis. OSF. https://doi.org/10.31234/osf.io/m93pd

  • Chen, Y., Li, S., Li, Y., & Atari, M. (2024). Surveying the dead minds: Historical-psychological text analysis with contextualized construct representation (CCR) for classical Chinese. In Y. Al-Onaizan, M. Bansal, & Y.-N. Chen (Eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 2597–2615). Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.emnlp-main.151

  • Simchon, A., Hadar, B., & Gilead, M. (2023). A computational text analysis investigation of the relation between personal and linguistic agency. Communications Psychology, 1(1), 23. https://doi.org/10.1038/s44271-023-00020-1
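
To build intuition for what this looks like in practice, here is a rough, standalone sketch of the general idea: embed texts and seed phrases, then score each text by its similarity to each construct's seeds. The construct names and phrases are made up, and Taters' own scoring (mean-centering, optional Fisher z-transform, per-archetype aggregation) differs in its details; see the API below.

from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("sentence-transformers/all-roberta-large-v1")

texts = ["I can't stop worrying about tomorrow.", "We had a calm, easy afternoon."]
seeds = {
    "anxiety": ["I feel nervous and on edge", "I am constantly worried"],
    "calm": ["I feel relaxed and at peace", "everything feels settled"],
}

def embed(strings):
    vecs = model.encode(strings, convert_to_numpy=True)
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)   # unit length

text_vecs = embed(texts)
for construct, phrases in seeds.items():
    prototype = embed(phrases).mean(axis=0)          # average the seed embeddings
    prototype /= np.linalg.norm(prototype)
    scores = text_vecs @ prototype                   # cosine similarity per text
    print(construct, np.round(scores, 3))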

API: Analyze texts with/for archetypes

Compute archetype scores for text rows and write a wide, analysis-ready features CSV.

This function supports three input modes:

  1. analysis_csv — Use a prebuilt CSV with exactly two columns: text_id and text.
  2. csv_path — Gather text from an arbitrary CSV by specifying text_cols (and optionally id_cols and group_by) to construct an analysis-ready CSV on the fly.
  3. txt_dir — Gather text from a folder of .txt files.

Archetype scoring is delegated to a middle layer that embeds text with a Sentence-Transformers model and evaluates cosine similarity to one or more archetype CSVs. If out_features_csv is omitted, the default path is ./features/archetypes/<analysis_ready_filename>.

Parameters:

csv_path : str or Path, default None
    Source CSV for gathering. Mutually exclusive with txt_dir and analysis_csv.
txt_dir : str or Path, default None
    Folder of .txt files to gather from. Mutually exclusive with the other input modes.
analysis_csv : str or Path, default None
    Precomputed analysis-ready CSV containing exactly the columns text_id and text.
out_features_csv : str or Path, default None
    Output path for the features CSV. If None, defaults to ./features/archetypes/<analysis_ready_filename>.
overwrite_existing : bool, default False
    If False and the output file already exists, skip recomputation and return the existing path.
archetype_csvs : Sequence[str or Path], required
    One or more archetype CSVs (name → seed phrases). Directories are allowed and expanded recursively to all .csv files.
encoding : str, default "utf-8-sig"
    Text encoding for CSV I/O.
delimiter : str, default ","
    Field delimiter for CSV I/O.
text_cols : Sequence[str], default ("text",)
    When gathering from a CSV: column(s) that contain text. Used only if csv_path is provided.
id_cols : Sequence[str], default None
    When gathering from a CSV: optional ID columns to carry into grouping (e.g., ["speaker"]).
mode : {"concat", "separate"}, default "concat"
    Gathering behavior when multiple text_cols are provided. "concat" joins into a single text field; "separate" creates one row per text column.
group_by : Sequence[str], default None
    Optional grouping keys used during gathering (e.g., ["speaker"]). In "concat" mode, members are concatenated into one row per group.
joiner : str, default " "
    Separator used when concatenating multiple text chunks.
num_buckets : int, default 512
    Number of temporary hash buckets used for scalable CSV gathering.
max_open_bucket_files : int, default 64
    Maximum number of bucket files to keep open concurrently during gathering.
tmp_root : str or Path, default None
    Root directory for temporary files used by gathering.
recursive : bool, default True
    When gathering from a text folder, whether to recurse into subdirectories.
pattern : str, default "*.txt"
    Filename glob used when gathering from a text folder.
id_from : {"stem", "name", "path"}, default "stem"
    How to derive the text_id when gathering from a text folder.
include_source_path : bool, default True
    Whether to include the absolute source path as an additional column when gathering from a text folder.
model_name : str, default "sentence-transformers/all-roberta-large-v1"
    Sentence-Transformers model used to embed text for archetype scoring.
mean_center_vectors : bool, default True
    If True, mean-center embedding vectors prior to scoring.
fisher_z_transform : bool, default False
    If True, apply the Fisher z-transform to correlations.
rounding : int, default 4
    Number of decimal places to round numeric outputs. Use None to disable rounding.

Returns:

Path
    Path to the written features CSV.

Raises:

FileNotFoundError
    If an input file or folder does not exist, or an archetype CSV path is invalid.
ValueError
    If required arguments are incompatible or missing (e.g., no input mode chosen), or if the analysis-ready CSV lacks text_id/text columns.

Examples:

Run on a transcript CSV, grouped by speaker:

>>> analyze_with_archetypes(
...     csv_path="transcripts/session.csv",
...     text_cols=["text"],
...     id_cols=["speaker"],
...     group_by=["speaker"],
...     archetype_csvs=["dictionaries/archetypes"],
...     model_name="sentence-transformers/all-roberta-large-v1",
... )
PosixPath('.../features/archetypes/session.csv')

Notes

If out_features_csv exists and overwrite_existing=False, the existing path is returned without recomputation. Directories passed in archetype_csvs are expanded recursively to all .csv files and deduplicated before scoring.

Source code in src\taters\text\analyze_with_archetypes.py
def analyze_with_archetypes(
    *,
    # ----- Input source (choose exactly one, OR pass analysis_csv to skip gathering) -----
    csv_path: Optional[Union[str, Path]] = None,
    txt_dir: Optional[Union[str, Path]] = None,
    analysis_csv: Optional[Union[str, Path]] = None,   # <- NEW: skip gathering if provided

    # ----- Output -----
    out_features_csv: Optional[Union[str, Path]] = None,
    overwrite_existing: bool = False,  # if the file already exists, let's not overwrite by default

    # ----- Archetype CSVs (one or more) -----
    archetype_csvs: Sequence[Union[str, Path]],

    # ====== SHARED I/O OPTIONS ======
    encoding: str = "utf-8-sig",
    delimiter: str = ",",

    # ====== CSV GATHER OPTIONS (when csv_path is provided) ======
    text_cols: Sequence[str] = ("text",),
    id_cols: Optional[Sequence[str]] = None,
    mode: Literal["concat", "separate"] = "concat",
    group_by: Optional[Sequence[str]] = None,
    joiner: str = " ",
    num_buckets: int = 512,
    max_open_bucket_files: int = 64,
    tmp_root: Optional[Union[str, Path]] = None,

    # ====== TXT FOLDER GATHER OPTIONS (when txt_dir is provided) ======
    recursive: bool = True,
    pattern: str = "*.txt",
    id_from: Literal["stem", "name", "path"] = "stem",
    include_source_path: bool = True,

    # ====== Archetyper scoring options ======
    model_name: str = "sentence-transformers/all-roberta-large-v1",
    mean_center_vectors: bool = True,
    fisher_z_transform: bool = False,
    rounding: int = 4,
) -> Path:
    """
    Compute archetype scores for text rows and write a wide, analysis-ready features CSV.

    This function supports three input modes:

    1. ``analysis_csv`` — Use a prebuilt CSV with exactly two columns: ``text_id`` and ``text``.
    2. ``csv_path`` — Gather text from an arbitrary CSV by specifying ``text_cols`` (and optionally
    ``id_cols`` and ``group_by``) to construct an analysis-ready CSV on the fly.
    3. ``txt_dir`` — Gather text from a folder of ``.txt`` files.

    Archetype scoring is delegated to a middle layer that embeds text with a Sentence-Transformers
    model and evaluates cosine similarity to one or more archetype CSVs. If ``out_features_csv`` is
    omitted, the default path is ``./features/archetypes/<analysis_ready_filename>``.

    Parameters
    ----------
    csv_path : str or pathlib.Path, optional
        Source CSV for gathering. Mutually exclusive with ``txt_dir`` and ``analysis_csv``.
    txt_dir : str or pathlib.Path, optional
        Folder of ``.txt`` files to gather from. Mutually exclusive with the other input modes.
    analysis_csv : str or pathlib.Path, optional
        Precomputed analysis-ready CSV containing exactly the columns ``text_id`` and ``text``.
    out_features_csv : str or pathlib.Path, optional
        Output path for the features CSV. If ``None``, defaults to
        ``./features/archetypes/<analysis_ready_filename>``.
    overwrite_existing : bool, default=False
        If ``False`` and the output file already exists, skip recomputation and return the existing path.
    archetype_csvs : Sequence[str or pathlib.Path]
        One or more archetype CSVs (name → seed phrases). Directories are allowed and expanded
        recursively to all ``.csv`` files.
    encoding : str, default="utf-8-sig"
        Text encoding for CSV I/O.
    delimiter : str, default=","
        Field delimiter for CSV I/O.
    text_cols : Sequence[str], default=("text",)
        When gathering from a CSV: column(s) that contain text. Used only if ``csv_path`` is provided.
    id_cols : Sequence[str], optional
        When gathering from a CSV: optional ID columns to carry into grouping (e.g., ``["speaker"]``).
    mode : {"concat", "separate"}, default="concat"
        Gathering behavior when multiple ``text_cols`` are provided. ``"concat"`` joins into a single
        text field; ``"separate"`` creates one row per text column.
    group_by : Sequence[str], optional
        Optional grouping keys used during gathering (e.g., ``["speaker"]``). In ``"concat"`` mode,
        members are concatenated into one row per group.
    joiner : str, default=" "
        Separator used when concatenating multiple text chunks.
    num_buckets : int, default=512
        Number of temporary hash buckets used for scalable CSV gathering.
    max_open_bucket_files : int, default=64
        Maximum number of bucket files to keep open concurrently during gathering.
    tmp_root : str or pathlib.Path, optional
        Root directory for temporary files used by gathering.
    recursive : bool, default=True
        When gathering from a text folder, whether to recurse into subdirectories.
    pattern : str, default="*.txt"
        Filename glob used when gathering from a text folder.
    id_from : {"stem", "name", "path"}, default="stem"
        How to derive the ``text_id`` when gathering from a text folder.
    include_source_path : bool, default=True
        Whether to include the absolute source path as an additional column when gathering from a text folder.
    model_name : str, default="sentence-transformers/all-roberta-large-v1"
        Sentence-Transformers model used to embed text for archetype scoring.
    mean_center_vectors : bool, default=True
        If ``True``, mean-center embedding vectors prior to scoring.
    fisher_z_transform : bool, default=False
        If ``True``, apply the Fisher z-transform to correlations.
    rounding : int, default=4
        Number of decimal places to round numeric outputs. Use ``None`` to disable rounding.

    Returns
    -------
    pathlib.Path
        Path to the written features CSV.

    Raises
    ------
    FileNotFoundError
        If an input file or folder does not exist, or an archetype CSV path is invalid.
    ValueError
        If required arguments are incompatible or missing (e.g., no input mode chosen),
        or if the analysis-ready CSV lacks ``text_id``/``text`` columns.

    Examples
    --------
    Run on a transcript CSV, grouped by speaker:

    >>> analyze_with_archetypes(
    ...     csv_path="transcripts/session.csv",
    ...     text_cols=["text"],
    ...     id_cols=["speaker"],
    ...     group_by=["speaker"],
    ...     archetype_csvs=["dictionaries/archetypes"],
    ...     model_name="sentence-transformers/all-roberta-large-v1",
    ... )
    PosixPath('.../features/archetypes/session.csv')

    Notes
    -----
    If ``out_features_csv`` exists and ``overwrite_existing=False``, the existing path is returned
    without recomputation. Directories passed in ``archetype_csvs`` are expanded recursively to
    all ``.csv`` files and deduplicated before scoring.
    """


    # 1) Use analysis-ready CSV if given; otherwise gather from csv_path or txt_dir
    if analysis_csv is not None:
        analysis_ready = Path(analysis_csv)
        if not analysis_ready.exists():
            raise FileNotFoundError(f"analysis_csv not found: {analysis_ready}")
    else:
        if (csv_path is None) == (txt_dir is None):
            raise ValueError("Provide exactly one of csv_path or txt_dir (or pass analysis_csv).")
        if csv_path is not None:
            analysis_ready = Path(
                csv_to_analysis_ready_csv(
                    csv_path=csv_path,
                    text_cols=list(text_cols),
                    id_cols=list(id_cols) if id_cols else None,
                    mode=mode,
                    group_by=list(group_by) if group_by else None,
                    delimiter=delimiter,
                    encoding=encoding,
                    joiner=joiner,
                    num_buckets=num_buckets,
                    max_open_bucket_files=max_open_bucket_files,
                    tmp_root=tmp_root,
                )
            )
        else:
            analysis_ready = Path(
                txt_folder_to_analysis_ready_csv(
                    root_dir=txt_dir,
                    recursive=recursive,
                    pattern=pattern,
                    encoding=encoding,
                    id_from=id_from,
                    include_source_path=include_source_path,
                )
            )

    # 1b) Decide default features path if not provided:
    #     ./features/archetypes/<analysis_ready_filename> (relative to the current working directory)
    if out_features_csv is None:
        out_features_csv = Path.cwd() / "features" / "archetypes" / analysis_ready.name
    out_features_csv = Path(out_features_csv)
    out_features_csv.parent.mkdir(parents=True, exist_ok=True)

    if not overwrite_existing and Path(out_features_csv).is_file():
        print("Archetypes output file already exists; returning existing file.")
        return out_features_csv


    # 2) Resolve/validate archetype CSVs
    # Allow passing either:
    #   • one or more CSV files, or
    #   • one or more directories containing CSVs (recursively).
    #
    # We lean on the shared find_files helper to avoid redundancy.
    resolved_archetype_csvs: list[Path] = []

    for src in archetype_csvs:
        src_path = Path(src)
        if src_path.is_dir():
            # find all *.csv under this folder (recursive)
            found = find_files(
                root_dir=src_path,
                extensions=[".csv"],
                recursive=True,
                absolute=True,
                sort=True,
            )
            resolved_archetype_csvs.extend(Path(f) for f in found)
        else:
            resolved_archetype_csvs.append(src_path)

    # De-dup, normalize, and sort
    archetype_csvs = sorted({p.resolve() for p in resolved_archetype_csvs})

    if not archetype_csvs:
        raise ValueError(
            "No archetype CSVs found. Pass one or more CSV files, or a directory containing CSV files with your archetypes."
        )
    for p in archetype_csvs:
        if not p.exists():
            raise FileNotFoundError(f"Archetype CSV not found: {p}")



    # 3) Stream (text_id, text) → middle layer → features CSV
    def _iter_items_from_csv(path: Path, *, id_col: str = "text_id", text_col: str = "text") -> Iterable[Tuple[str, str]]:
        """
        Stream ``(text_id, text)`` pairs from an analysis-ready CSV.

        Parameters
        ----------
        path : pathlib.Path
            Path to the analysis-ready CSV containing at least ``text_id`` and ``text``.
        id_col : str, default="text_id"
            Name of the identifier column to read.
        text_col : str, default="text"
            Name of the text column to read.

        Yields
        ------
        tuple of (str, str)
            The ``(text_id, text)`` for each row. Missing text values are emitted as empty strings.

        Raises
        ------
        ValueError
            If the required columns are not present in the CSV header.
        """
        with path.open("r", newline="", encoding=encoding) as f:
            reader = csv.DictReader(f, delimiter=delimiter)
            if id_col not in reader.fieldnames or text_col not in reader.fieldnames:
                raise ValueError(
                    f"Expected columns '{id_col}' and '{text_col}' in {path}; found {reader.fieldnames}"
                )
            for row in reader:
                yield str(row[id_col]), (row.get(text_col) or "")

    maa.analyze_texts_to_csv(
        items=_iter_items_from_csv(analysis_ready),
        archetype_csvs=archetype_csvs,
        out_csv=out_features_csv,
        model_name=model_name,
        mean_center_vectors=mean_center_vectors,
        fisher_z_transform=fisher_z_transform,
        rounding=rounding,
        encoding=encoding,
        delimiter=delimiter,
        id_col_name="text_id",
    )

    return out_features_csv

Sentence embeddings (general semantic features)

When you want task-agnostic semantic features (for clustering, similarity, or regression/classification), use taters.text.extract_sentence_embeddings. Taters splits each row of text into sentences, embeds each sentence, and averages the sentence vectors into a single row-level vector. Optional L2 normalization makes cosine comparisons straightforward. A conceptual sketch of this pooling step follows the references below. Example applications:

  • Kjell, O. N. E., Sikström, S., Kjell, K., & Schwartz, H. A. (2022). Natural language analyzed with AI-based transformers predict traditional subjective well-being measures approaching the theoretical upper limits in accuracy. Scientific Reports, 12, 3918. https://doi.org/10.1038/s41598-022-07520-w

  • Nilsson, A. H., Schwartz, H. A., Rosenthal, R. N., McKay, J. R., Vu, H., Cho, Y.-M., Mahwish, S., Ganesan, A. V., & Ungar, L. (2024). Language-based EMA assessments help understand problematic alcohol consumption. PLOS ONE, 19(3), e0298300. https://doi.org/10.1371/journal.pone.0298300
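
To make the pooling step concrete, here is a minimal, self-contained sketch of the same idea. It is not a verbatim copy of Taters' internals; it assumes sentence-transformers is installed and that NLTK's punkt tokenizer data is available.

import numpy as np
from nltk.tokenize import sent_tokenize               # requires the NLTK "punkt" data
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-roberta-large-v1")

def row_embedding(text: str) -> np.ndarray:
    # Split the row into sentences, falling back to the whole text if splitting yields nothing.
    sentences = sent_tokenize(text) or [text]
    # Embed each sentence, then average into one row-level vector.
    emb = model.encode(sentences, convert_to_numpy=True)   # shape: (n_sentences, D)
    vec = emb.mean(axis=0)
    # Optional L2 normalization so cosine similarity reduces to a dot product.
    norm = float(np.linalg.norm(vec))
    return vec if norm < 1e-12 else vec / norm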

API: extract sentence embeddings

Average sentence embeddings per row of text and write a wide features CSV.

Supports three mutually exclusive input modes:

  1. analysis_csv — Use a prebuilt file with columns text_id and text.
  2. csv_path — Gather from a CSV using text_cols (and optional id_cols/group_by) to build an analysis-ready CSV.
  3. txt_dir — Gather from a folder of .txt files.

For each row, the text is split into sentences (NLTK if available; otherwise a regex fallback). Each sentence is embedded with a Sentence-Transformers model and the vectors are averaged into one row-level embedding. Optionally, vectors are L2-normalized. The output CSV schema is:

text_id, e0, e1, ..., e{D-1}

If out_features_csv is omitted, the default is ./features/sentence-embeddings/<analysis_ready_filename>. When overwrite_existing is False and the output exists, the function returns the existing path without recomputation.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| csv_path | str or Path | Source CSV to gather from. Mutually exclusive with txt_dir and analysis_csv. | None |
| txt_dir | str or Path | Folder of .txt files to gather from. Mutually exclusive with the other modes. | None |
| analysis_csv | str or Path | Prebuilt analysis-ready CSV containing exactly text_id and text. | None |
| out_features_csv | str or Path | Output features CSV path. If None, a default path is derived from the analysis-ready filename under ./features/sentence-embeddings/. | None |
| overwrite_existing | bool | If False and the output file already exists, skip processing and return it. | False |
| encoding | str | CSV I/O encoding. | "utf-8-sig" |
| delimiter | str | CSV field delimiter. | "," |
| text_cols | Sequence[str] | When gathering from a CSV: column(s) containing text. | ("text",) |
| id_cols | Sequence[str] | When gathering from a CSV: optional ID columns to carry through. | None |
| mode | {"concat", "separate"} | Gathering behavior if multiple text_cols are provided. "concat" joins them with joiner; "separate" creates one row per column. | "concat" |
| group_by | Sequence[str] | Optional grouping keys used during CSV gathering (e.g., ["speaker"]). | None |
| joiner | str | Separator used when concatenating text in "concat" mode. | " " |
| num_buckets | int | Number of temporary hash buckets for scalable gathering. | 512 |
| max_open_bucket_files | int | Maximum number of bucket files kept open concurrently during gathering. | 64 |
| tmp_root | str or Path | Root directory for temporary gathering artifacts. | None |
| recursive | bool | When gathering from a text folder, recurse into subdirectories. | True |
| pattern | str | Glob pattern for selecting text files. | "*.txt" |
| id_from | {"stem", "name", "path"} | How to derive text_id when gathering from a text folder. | "stem" |
| include_source_path | bool | Whether to include the absolute source path as an additional column when gathering from a text folder. | True |
| model_name | str | Sentence-Transformers model name or path. | "sentence-transformers/all-roberta-large-v1" |
| batch_size | int | Batch size for model encoding. | 32 |
| normalize_l2 | bool | If True, L2-normalize each row's final vector. | True |
| rounding | int or None | If provided, round floats to this many decimals (useful for smaller files). | None |
| show_progress | bool | Show a progress bar during embedding. | False |

Returns:

| Type | Description |
| --- | --- |
| Path | Path to the written features CSV. |

Raises:

| Type | Description |
| --- | --- |
| FileNotFoundError | If an input file or directory does not exist. |
| ImportError | If sentence-transformers is not installed. |
| ValueError | If input modes are misconfigured (e.g., multiple or none provided), or if the analysis-ready CSV lacks text_id/text. |

Examples:

Compute row-level embeddings from a transcript CSV, grouped by speaker:

>>> analyze_with_sentence_embeddings(
...     csv_path="transcripts/session.csv",
...     text_cols=["text"], id_cols=["speaker"], group_by=["speaker"],
...     model_name="sentence-transformers/all-roberta-large-v1",
...     normalize_l2=True
... )
PosixPath('.../features/sentence-embeddings/session.csv')
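
Or gather directly from a folder of .txt files (the folder name here is hypothetical); the function again returns the path to the written features CSV:

>>> analyze_with_sentence_embeddings(
...     txt_dir="transcripts/notes",
...     pattern="*.txt",
...     id_from="stem",
... )
PosixPath('.../features/sentence-embeddings/...')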
Notes
  • Rows with no recoverable sentences produce empty feature cells (not zeros).
  • The embedding dimensionality D is taken from the model and used to construct header columns e0..e{D-1}.
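
Once the features CSV exists, downstream use is ordinary tabular work. A minimal sketch (hypothetical path; assumes pandas and numpy are installed) that loads the embeddings and computes pairwise cosine similarities between rows:

import numpy as np
import pandas as pd

feats = pd.read_csv("features/sentence-embeddings/session.csv")
feats = feats.dropna()                      # rows with no recoverable sentences have empty cells
ids = feats["text_id"].tolist()
X = feats.drop(columns=["text_id"]).to_numpy(dtype=float)

# With normalize_l2=True the row vectors are unit length, so cosine similarity
# is just the dot product between rows.
similarity = X @ X.T
print(pd.DataFrame(similarity, index=ids, columns=ids).round(3))
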
Source code in src\taters\text\extract_sentence_embeddings.py
def analyze_with_sentence_embeddings(
    *,
    # ----- Input source (choose exactly one, or pass analysis_csv directly) -----
    csv_path: Optional[Union[str, Path]] = None,
    txt_dir: Optional[Union[str, Path]] = None,
    analysis_csv: Optional[Union[str, Path]] = None,

    # ----- Output -----
    out_features_csv: Optional[Union[str, Path]] = None,
    overwrite_existing: bool = False,  # if the file already exists, let's not overwrite by default

    # ====== SHARED I/O OPTIONS ======
    encoding: str = "utf-8-sig",
    delimiter: str = ",",

    # ====== CSV GATHER OPTIONS (when csv_path is provided) ======
    text_cols: Sequence[str] = ("text",),
    id_cols: Optional[Sequence[str]] = None,
    mode: Literal["concat", "separate"] = "concat",
    group_by: Optional[Sequence[str]] = None,
    joiner: str = " ",
    num_buckets: int = 512,
    max_open_bucket_files: int = 64,
    tmp_root: Optional[Union[str, Path]] = None,

    # ====== TXT FOLDER GATHER OPTIONS (when txt_dir is provided) ======
    recursive: bool = True,
    pattern: str = "*.txt",
    id_from: Literal["stem", "name", "path"] = "stem",
    include_source_path: bool = True,

    # ====== SentenceTransformer options ======
    model_name: str = "sentence-transformers/all-roberta-large-v1",
    batch_size: int = 32,
    normalize_l2: bool = True,       # set True if you want unit-length vectors
    rounding: Optional[int] = None,   # None = full precision; e.g., 6 for ~float32-ish text
    show_progress: bool = False,
) -> Path:
    """
    Average sentence embeddings per row of text and write a wide features CSV.

    Supports three mutually exclusive input modes:

    1. ``analysis_csv`` — Use a prebuilt file with columns ``text_id`` and ``text``.
    2. ``csv_path`` — Gather from a CSV using ``text_cols`` (and optional
    ``id_cols``/``group_by``) to build an analysis-ready CSV.
    3. ``txt_dir`` — Gather from a folder of ``.txt`` files.

    For each row, the text is split into sentences (NLTK if available; otherwise
    a regex fallback). Each sentence is embedded with a Sentence-Transformers
    model and the vectors are averaged into one row-level embedding. Optionally,
    vectors are L2-normalized. The output CSV schema is:

    ``text_id, e0, e1, ..., e{D-1}``

    If ``out_features_csv`` is omitted, the default is
    ``./features/sentence-embeddings/<analysis_ready_filename>``. When
    ``overwrite_existing`` is ``False`` and the output exists, the function
    returns the existing path without recomputation.

    Parameters
    ----------
    csv_path : str or pathlib.Path, optional
        Source CSV to gather from. Mutually exclusive with ``txt_dir`` and ``analysis_csv``.
    txt_dir : str or pathlib.Path, optional
        Folder of ``.txt`` files to gather from. Mutually exclusive with the other modes.
    analysis_csv : str or pathlib.Path, optional
        Prebuilt analysis-ready CSV containing exactly ``text_id`` and ``text``.
    out_features_csv : str or pathlib.Path, optional
        Output features CSV path. If ``None``, a default path is derived from the
        analysis-ready filename under ``./features/sentence-embeddings/``.
    overwrite_existing : bool, default=False
        If ``False`` and the output file already exists, skip processing and return it.

    encoding : str, default="utf-8-sig"
        CSV I/O encoding.
    delimiter : str, default=","
        CSV field delimiter.

    text_cols : Sequence[str], default=("text",)
        When gathering from a CSV: column(s) containing text.
    id_cols : Sequence[str], optional
        When gathering from a CSV: optional ID columns to carry through.
    mode : {"concat", "separate"}, default="concat"
        Gathering behavior if multiple ``text_cols`` are provided. ``"concat"`` joins
        them with ``joiner``; ``"separate"`` creates one row per column.
    group_by : Sequence[str], optional
        Optional grouping keys used during CSV gathering (e.g., ``["speaker"]``).
    joiner : str, default=" "
        Separator used when concatenating text in ``"concat"`` mode.
    num_buckets : int, default=512
        Number of temporary hash buckets for scalable gathering.
    max_open_bucket_files : int, default=64
        Maximum number of bucket files kept open concurrently during gathering.
    tmp_root : str or pathlib.Path, optional
        Root directory for temporary gathering artifacts.

    recursive : bool, default=True
        When gathering from a text folder, recurse into subdirectories.
    pattern : str, default="*.txt"
        Glob pattern for selecting text files.
    id_from : {"stem", "name", "path"}, default="stem"
        How to derive ``text_id`` when gathering from a text folder.
    include_source_path : bool, default=True
        Whether to include the absolute source path as an additional column when
        gathering from a text folder.

    model_name : str, default="sentence-transformers/all-roberta-large-v1"
        Sentence-Transformers model name or path.
    batch_size : int, default=32
        Batch size for model encoding.
    normalize_l2 : bool, default=True
        If ``True``, L2-normalize each row's final vector.
    rounding : int or None, default=None
        If provided, round floats to this many decimals (useful for smaller files).
    show_progress : bool, default=False
        Show a progress bar during embedding.

    Returns
    -------
    pathlib.Path
        Path to the written features CSV.

    Raises
    ------
    FileNotFoundError
        If an input file or directory does not exist.
    ImportError
        If ``sentence-transformers`` is not installed.
    ValueError
        If input modes are misconfigured (e.g., multiple or none provided),
        or if the analysis-ready CSV lacks ``text_id``/``text``.

    Examples
    --------
    Compute row-level embeddings from a transcript CSV, grouped by speaker:

    >>> analyze_with_sentence_embeddings(
    ...     csv_path="transcripts/session.csv",
    ...     text_cols=["text"], id_cols=["speaker"], group_by=["speaker"],
    ...     model_name="sentence-transformers/all-roberta-large-v1",
    ...     normalize_l2=True
    ... )
    PosixPath('.../features/sentence-embeddings/session.csv')

    Notes
    -----
    - Rows with no recoverable sentences produce **empty** feature cells (not zeros).
    - The embedding dimensionality ``D`` is taken from the model and used to
    construct header columns ``e0..e{D-1}``.
    """

    # pre-check that nltk sent_tokenizer is usable
    use_nltk = _ensure_nltk_punkt(verbose=True)

    # 1) analysis-ready CSV
    if analysis_csv is not None:
        analysis_ready = Path(analysis_csv)
        if not analysis_ready.exists():
            raise FileNotFoundError(f"analysis_csv not found: {analysis_ready}")
    else:
        if (csv_path is None) == (txt_dir is None):
            raise ValueError("Provide exactly one of csv_path or txt_dir (or pass analysis_csv).")
        if csv_path is not None:
            analysis_ready = Path(
                csv_to_analysis_ready_csv(
                    csv_path=csv_path,
                    text_cols=list(text_cols),
                    id_cols=list(id_cols) if id_cols else None,
                    mode=mode,
                    group_by=list(group_by) if group_by else None,
                    delimiter=delimiter,
                    encoding=encoding,
                    joiner=joiner,
                    num_buckets=num_buckets,
                    max_open_bucket_files=max_open_bucket_files,
                    tmp_root=tmp_root,
                )
            )
        else:
            analysis_ready = Path(
                txt_folder_to_analysis_ready_csv(
                    root_dir=txt_dir,
                    recursive=recursive,
                    pattern=pattern,
                    encoding=encoding,
                    id_from=id_from,
                    include_source_path=include_source_path,
                )
            )

    # 1b) default output path
    if out_features_csv is None:
        out_features_csv = Path.cwd() / "features" / "sentence-embeddings" / analysis_ready.name
    out_features_csv = Path(out_features_csv)
    out_features_csv.parent.mkdir(parents=True, exist_ok=True)

    if not overwrite_existing and Path(out_features_csv).is_file():
        print("Sentence embedding feature output file already exists; returning existing file.")
        return out_features_csv

    # 2) load model
    if SentenceTransformer is None:
        raise ImportError(
            "sentence-transformers is required. Install with `pip install sentence-transformers`."
        )
    print(f"Loading sentence-transformer model: {model_name}")
    model = SentenceTransformer(model_name)
    dim = int(getattr(model, "get_sentence_embedding_dimension", lambda: 768)())

    # 3) header
    header = ["text_id"] + [f"e{i}" for i in range(dim)]

    # 4) stream rows → split → encode → average → (optional) L2 normalize → write
    def _norm(v: np.ndarray) -> np.ndarray:
        if not normalize_l2:
            return v
        n = float(np.linalg.norm(v))
        return v if n < 1e-12 else (v / n)

    print("Extracting embeddings...")
    with out_features_csv.open("w", newline="", encoding=encoding) as f:
        writer = csv.writer(f)
        writer.writerow(header)

        for text_id, text in _iter_items_from_csv(analysis_ready, encoding=encoding, delimiter=delimiter):
            sents = _split_sentences(text)
            if not sents:
                vec = None
            else:
                emb = model.encode(
                    sents,
                    batch_size=batch_size,
                    convert_to_numpy=True,
                    normalize_embeddings=False,
                    show_progress_bar=show_progress,
                )
                # Average across sentences → one vector
                vec = emb.mean(axis=0).astype(np.float32, copy=False)


            # L2-normalize and round only if we have a vector
            if vec is None:
                values = [""] * dim  # <- write empty cells, not zeros/NaNs
            else:
                vec = _norm(vec)  # no-op when normalize_l2=False or the norm is ~0
                if rounding is not None:
                    values = [round(float(x), int(rounding)) for x in vec.tolist()]
                else:
                    values = [float(x) for x in vec.tolist()]

            writer.writerow([text_id] + values)

    return out_features_csv

Practical notes

  • Interpretability vs. nuance: Dictionaries are directly interpretable; embeddings are flexible and expressive. Many projects benefit from both, and because each analyzer writes a wide CSV keyed by text_id, the feature sets are easy to combine (see the sketch after these notes).
  • Construct validity: Whether you are counting words or embedding them, the theory matters. Tie features to constructs you can define, defend, and, ideally, test across datasets.
  • Reproducibility: Taters standardizes I/O, refuses to overwrite existing outputs unless you ask (overwrite_existing=True), and writes predictable outputs under ./features/*/, so your analyses are easy to rerun and audit later.
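
To combine feature sets, a minimal sketch using pandas; the filenames are hypothetical, so substitute whatever your runs actually produced:

import pandas as pd

# Both analyzers write wide CSVs keyed by text_id, so an inner join lines the
# feature sets up row by row. Paths below are hypothetical examples.
dict_feats = pd.read_csv("features/dictionary/session.csv")
emb_feats = pd.read_csv("features/sentence-embeddings/session.csv")

combined = dict_feats.merge(emb_feats, on="text_id", how="inner")
combined.to_csv("session_combined_features.csv", index=False)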