word-frequency
A word frequency counter analyzes a body of text and lists every distinct word with its occurrence count, sorted from most to least frequent – useful for SEO keyword density checks, content analysis, vocabulary profiling, and translation prep. The ZTools Word Frequency Counter handles pasted text of any size, normalizes case for fair counting, optionally filters out common stopwords (the, a, and, of…) so meaningful content words rise to the top, computes percentage shares, and exports results as CSV for further analysis in spreadsheets.
Use cases
SEO keyword density check
Paste an article draft and see which words appear most. This confirms target keywords are featured naturally and flags accidental overuse that can trigger Google's keyword-stuffing penalties.
Vocabulary analysis for language learners
Pasting a 10,000-word novel reveals its 200 most common words. Memorize those and you can read most of the rest – the Pareto principle applied to language acquisition.
Content audit and editorial review
A long-form piece overuses "however" and "moreover". The frequency counter surfaces tics so editors can vary the prose.
Translation memory and glossary building
Analyze a source corpus and export the 500 most frequent words. That is the core glossary translators should settle before tackling the rest.
How it works
- Paste your text – Any size: articles, books, transcripts, code, scraped pages.
- Configure tokenization – Lowercase: case-insensitive counting (default on). Strip punctuation: yes/no. Min length: filter out very short tokens (1–2 characters are usually noise).
- Toggle stopword filtering – Built-in lists for English, Spanish, French, and German. Removes function words (the, of, and, is) so content words dominate the rankings.
- Read the frequency table – Sorted by count, descending. Each row shows word | count | percentage. View the top 10, top 50, or the full list.
- Export as CSV – For deeper analysis in Excel, R, or Python. Format: word,count,percentage.
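The pipeline above can be sketched in a few dozen lines of Python. This is a minimal illustration, not the tool's actual code: the stopword set here is a tiny ad-hoc subset, and the tokenizer is a simple regex that keeps letters and apostrophes.

```python
import csv
import io
import re
from collections import Counter

# Tiny illustrative stopword subset -- the real tool ships fuller lists
# for English, Spanish, French, and German.
STOPWORDS = {"the", "a", "an", "and", "of", "is", "to", "in", "on", "it"}

def word_frequencies(text, lowercase=True, min_length=1, filter_stopwords=False):
    """Tokenize text and return (word, count, percentage) rows, count-descending."""
    if lowercase:
        text = text.lower()
    # Strip punctuation by keeping only letters and apostrophes.
    tokens = re.findall(r"[a-zA-Z']+", text)
    tokens = [t for t in tokens if len(t) >= min_length]
    if filter_stopwords:
        tokens = [t for t in tokens if t not in STOPWORDS]
    counts = Counter(tokens)
    # Percentages are relative to the tokens that survived filtering.
    total = sum(counts.values()) or 1  # guard against empty input
    return [(word, n, round(100 * n / total, 2))
            for word, n in counts.most_common()]

def to_csv(rows):
    """Export rows in the word,count,percentage format described above."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["word", "count", "percentage"])
    writer.writerows(rows)
    return buf.getvalue()
```

Calling `word_frequencies("the cat sat on the mat. the cat slept.")` yields rows starting with `('the', 3, 33.33)` and `('cat', 2, 22.22)`, which `to_csv` then serializes for spreadsheet import.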
Examples
Input: "the cat sat on the mat. the cat slept."
Output: the (3), cat (2), sat (1), on (1), mat (1), slept (1)
Input: Article with stopwords filtered
Output: product (15), feature (12), customer (10), value (8) – content words ready for SEO review.
Input: Novel chapter (5,000 words)
Output: Top 10 words shown. Stopwords filtered. Reveals the chapter's thematic vocabulary.
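The first example above can be reproduced with a few lines of Python – a sketch of the counting step only, not the tool's actual implementation:

```python
import re
from collections import Counter

text = "the cat sat on the mat. the cat slept."
# Lowercase, drop punctuation, count occurrences.
counts = Counter(re.findall(r"[a-z']+", text.lower()))
# most_common() sorts by count descending, matching the tool's table:
# the (3), cat (2), then the remaining words once each.
print(counts.most_common())
```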
Frequently asked questions
What are stopwords and should I filter them?
Stopwords are very common function words (the, of, and, is, to) that don't carry topical meaning. Filtering them surfaces the "real" content words. Keep them in for readability analysis (Flesch-Kincaid), filter them for keyword research and topic detection.
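The effect of stopword filtering is easy to demonstrate. A small sketch, using a tiny ad-hoc stopword set rather than the tool's built-in lists:

```python
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "is", "to", "a", "on"}  # tiny illustrative subset

text = "the product is the best product on the market"
tokens = re.findall(r"[a-z]+", text.lower())

unfiltered = Counter(tokens).most_common(1)
filtered = Counter(t for t in tokens if t not in STOPWORDS).most_common(1)

print(unfiltered)  # [('the', 3)] -- a function word dominates
print(filtered)    # [('product', 2)] -- the content word surfaces
```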
How does the counter handle "running" vs "ran" vs "runs"?
It treats them as different words by default. For lemma-based grouping (treating all three as "run"), you'd need a lemmatizer like NLTK or spaCy – beyond the scope of a frequency counter.
Can it analyze HTML or markdown?
It strips obvious markup (HTML tags, markdown syntax) by default, so word counts reflect actual prose. If you want to count tags themselves (rare), disable the strip option.
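As an illustration of what "stripping obvious markup" can mean, here is a rough regex-based sketch. This is an assumption about the approach – the tool's exact stripping rules aren't specified here:

```python
import re

def strip_markup(text):
    """Rough markup stripper: remove HTML tags and common markdown syntax
    before counting. A sketch only -- the tool's exact rules may differ."""
    text = re.sub(r"<[^>]+>", " ", text)                   # HTML tags
    text = re.sub(r"\[([^\]]*)\]\([^)]*\)", r"\1", text)   # [text](url) -> text
    text = re.sub(r"[*_`#>]+", " ", text)                  # markdown emphasis/headers
    return text

print(strip_markup("<p>Hello <b>world</b></p>"))   # tags gone, prose kept
print(strip_markup("**bold** [link](http://x)"))   # markdown syntax gone
```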
How is keyword density calculated?
Keyword density = (word count ÷ total words) × 100. So "product" appearing 15 times in a 1,500-word article gives a 1% density. SEO best practice is 1–2% for primary keywords; over 3% risks keyword-stuffing penalties.
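The density formula as a one-liner, with the article example from above worked through:

```python
def keyword_density(count, total_words):
    """Keyword density as a percentage: (count / total words) * 100."""
    return 100 * count / total_words

# "product" appearing 15 times in a 1,500-word article:
print(keyword_density(15, 1500))  # 1.0 -> a 1% density, inside the 1-2% range
```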
Does the order of paste matter?
No β the counter is order-independent. Same words anywhere produce the same frequency table.
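Order independence follows directly from counting into a multiset. A quick sketch:

```python
import re
from collections import Counter

def frequencies(text):
    return Counter(re.findall(r"[a-z']+", text.lower()))

a = frequencies("the cat sat on the mat")
b = frequencies("mat the on sat cat the")  # same words, shuffled
print(a == b)  # True -- word order never affects the frequency table
```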
Tips
- For SEO content reviews, run with stopwords filtered to surface keywords. Run again without filtering to spot stylistic tics.
- A 1-2% target keyword density is a safe SEO range; obsessing over an exact number is overrated.
- Filter words shorter than 3 characters when analyzing prose β they're mostly stopwords or articles.
- Compare frequency tables across drafts to track how your content evolves over revisions.
Try it now
The full Word Frequency Counter runs in your browser at https://ztools.zaions.com/word-frequency – no signup, no upload, no data leaves your device.
Last updated: 2026-05-05 · Author: Ahsan Mahmood