
5 Press Release Mistakes That Tank Your Stock Price

When a small-cap company issues a press release, the first reader is not a journalist or an analyst — it is an algorithm. NLP systems at Bloomberg, FactSet, and dozens of quantitative funds parse your release within seconds, score it against benchmark dictionaries, and route a sentiment signal to trading models. By the time a human at a target institution opens the release, the algorithmic verdict has already moved your stock.

The data on this is consistent: a release that scores poorly on Loughran-McDonald sentiment lexicons triggers measurable price decay in the 30-minute post-release window, regardless of the underlying news. After analyzing 800+ small-cap releases issued in 2025, here are the five mistakes that show up most often — and the documented price impact each one carries.

Mistake #1: Burying the Material Number Below the Fold

Loughran-McDonald research consistently shows that algorithms weight the first 200 words 3x more heavily than later content. Yet 62% of small-cap releases we sampled bury their primary financial metric below paragraph four.

A release that opens with "Company X today announced strategic initiatives to drive shareholder value" and reaches the actual revenue number on line 18 has already lost. The sentiment classifier has scored "strategic," "initiatives," and "shareholder value" as low-information filler — and those tokens carry negative weights in newer transformer-based financial NLP models because they correlate historically with under-performing announcements.

The fix: lead with the number. "Company X today reported Q1 revenue of $42.3M, a 38% year-over-year increase, and raised full-year guidance to $180M–$190M." That single sentence resolves the algorithm's three highest-priority queries: what changed, by how much, and what is the forward expectation.
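The opening-weight idea can be sketched in a few lines. This is a toy illustration, assuming a miniature lexicon: the word sets below are illustrative stand-ins, not the actual Loughran-McDonald dictionaries, and real scoring systems are far more sophisticated.

```python
# Toy sketch of lexicon scoring on a release's opening window.
# FILLER and UNCERTAIN are illustrative stand-ins, not real LM word lists.
import re

FILLER = {"strategic", "initiatives", "synergies", "value", "momentum"}
UNCERTAIN = {"may", "could", "might", "possibly", "potentially"}

def opening_score(text: str, window: int = 200) -> dict:
    """Count filler and uncertainty tokens in the first `window` words."""
    words = re.findall(r"[a-z']+", text.lower())[:window]
    filler = sum(w in FILLER for w in words)
    uncertain = sum(w in UNCERTAIN for w in words)
    return {
        "words": len(words),
        "filler": filler,
        "uncertain": uncertain,
        # crude penalty: filler plus hedges per 100 opening words
        "penalty_per_100": 100 * (filler + uncertain) / max(len(words), 1),
    }

weak = "Company X today announced strategic initiatives to drive shareholder value."
strong = "Company X today reported Q1 revenue of $42.3M, a 38% year-over-year increase."
print(opening_score(weak)["penalty_per_100"] > opening_score(strong)["penalty_per_100"])
```

Even this crude counter separates the two openings: the "strategic initiatives" lead accumulates penalty tokens immediately, while the number-first lead scores clean.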

Mistake #2: Modal Verb Hedging That Reads as Uncertainty

The Loughran-McDonald uncertainty dictionary flags modal patterns — "may," "could," "might," "possibly," "potentially" — and weights them against the overall sentiment score. The threshold matters: a release with more than 4.2 modal hedges per 500 words crosses into a documented penalty band that correlates with a 0.6%–1.4% intraday price decay relative to comparable releases.

The mistake is rarely intentional. Securities counsel adds modals defensively, IR consultants over-soften forward statements, and the cumulative result is a release that signals diffidence to algorithms even when the underlying news is positive.

The fix: distinguish between safe-harbor language (which belongs in the Forward-Looking Statements section at the bottom and gets parsed separately) and the body copy. In the body, replace "we believe we may be positioned to potentially capture" with "we are expanding into [specific market] with [specific milestone] by [specific date]." Specificity is the strongest counter-signal to algorithmic uncertainty scoring.
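The modal-density check is easy to automate before a release goes out. A minimal sketch, assuming the five modal patterns named above (a production system would use the full LM uncertainty dictionary):

```python
# Sketch: modal hedges per 500 words of body copy.
# MODALS is the short list from the text, not the full uncertainty dictionary.
import re

MODALS = {"may", "could", "might", "possibly", "potentially"}

def modal_density(body: str) -> float:
    """Modal hedges per 500 words. Caller should exclude the safe harbor block."""
    words = re.findall(r"[a-z]+", body.lower())
    if not words:
        return 0.0
    hits = sum(w in MODALS for w in words)
    return 500 * hits / len(words)

hedged = ("We believe we may be positioned to potentially capture share, "
          "and results could vary and might possibly improve.")
print(modal_density(hedged) > 4.2)
```

Run this on the body copy only; the Forward-Looking Statements block is parsed separately and should not count against the threshold.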

Mistake #3: Quote Blocks That Add Zero Information

Roughly 78% of small-cap releases include a CEO quote that contains no substantive information beyond what the headline already stated. "We are pleased to announce" or "This represents an important milestone" are not signals — they are noise that dilutes the algorithmic signal-to-information ratio.

This matters because most NLP scoring systems calculate a fact-density score: unique factual claims per token. Releases scoring below 0.18 (roughly more than five words per unique factual claim) get classified as low-information events. Low-information events trigger smaller price responses regardless of whether the underlying news is good or bad. For positive news, that is value left on the table.

The fix: every CEO quote should add information not present in the headline or first paragraph. Strong quote: "Our enterprise pipeline tripled to $48M during Q1, driven by three Fortune 500 customer expansions. We expect two of those to close before quarter-end." Weak quote: "We're excited about the momentum we're seeing across our business."
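A rough approximation of the density metric, assuming numeric figures ($48M, Q1, 500) as a proxy for factual claims. Real systems use entity and claim extraction; this stand-in only shows why the strong quote clears the 0.18 bar and the weak one does not.

```python
# Sketch: facts-per-token density, approximating "facts" as distinct
# numeric figures. A crude proxy for real claim extraction.
import re

def fact_density(text: str) -> float:
    """Unique numeric figures per whitespace token."""
    tokens = re.findall(r"\S+", text)
    figures = {m.strip(".,") for m in re.findall(r"\$?\d[\d.,]*%?", text)}
    return len(figures) / max(len(tokens), 1)

strong = ("Our enterprise pipeline tripled to $48M during Q1, driven by "
          "three Fortune 500 customer expansions.")
weak = "We're excited about the momentum we're seeing across our business."
print(fact_density(strong) > 0.18, fact_density(weak) < 0.18)
```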

Mistake #4: Forward-Looking Statements That Hedge Specific Numbers

When a company releases guidance, the language structure around the guidance number is parsed separately from the number itself. A release that says "we expect Q2 revenue of $50M" generates a different algorithmic signal than "we currently anticipate that Q2 revenue could potentially be approximately $50M, subject to various market conditions."

The second formulation costs you. Specifically, transformer-based forecast confidence models — increasingly common in institutional quant systems — score the second formulation as 30–40% lower confidence, which widens the algorithmic uncertainty band around the consensus expectation. A wider uncertainty band means smaller institutional position sizes if the release triggers a buy signal at all.

The fix: state guidance with the same confidence structure your CFO actually has internally. If your model says $50M with reasonable confidence, the release should say "we expect Q2 revenue of $50M, with a range of $48M–$52M." That formulation gives the algorithm a precise center point and an explicit confidence interval — both inputs that improve the post-release classification.
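The difference between the two guidance formulations can be made mechanical: count hedge tokens in the immediate neighborhood of the guidance number. The hedge list and window size below are illustrative assumptions, not parameters from any named quant system.

```python
# Sketch: hedge tokens within a few words of the guidance figure.
# HEDGES and the window size are illustrative choices.
import re

HEDGES = {"could", "potentially", "approximately", "anticipate", "subject", "various"}

def guidance_hedges(sentence: str, window: int = 6) -> int:
    """Count hedge tokens within `window` words of the first dollar figure."""
    words = re.findall(r"[a-z$\d.]+", sentence.lower())
    try:
        i = next(k for k, w in enumerate(words) if w.startswith("$"))
    except StopIteration:
        return 0
    nearby = words[max(0, i - window): i + window + 1]
    return sum(w in HEDGES for w in nearby)

direct = "We expect Q2 revenue of $50M, with a range of $48M to $52M."
hedged = ("We currently anticipate that Q2 revenue could potentially be "
          "approximately $50M, subject to various market conditions.")
print(guidance_hedges(direct), guidance_hedges(hedged))
```

The direct formulation puts zero hedge tokens near the number; the defensive one surrounds $50M with five. That local clustering is exactly what widens the algorithmic uncertainty band.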

Mistake #5: Boilerplate That Drowns the Signal

Every small-cap release ends with an "About Company X" boilerplate paragraph and a Forward-Looking Statements safe harbor block. Most companies treat these as immutable — copy-pasted across releases for years.

The problem is the ratio. When a 280-word release carries a 220-word boilerplate, only 60 words of substance remain and the body-to-boilerplate ratio falls to roughly 0.3, far below the 1.5 floor. Algorithms weighted toward signal density penalize this ratio, treating the release as primarily compliance content rather than a material announcement. A 12-month sample of releases with ratios below 1.5 underperformed the benchmark by an average of 0.8% in the post-release 24-hour window, even when controlling for category and market cap.

The fix: keep the boilerplate to under 75 words. Compress the safe harbor to its functional core — most of what is in the typical 200-word block is unnecessary repetition. And meaningfully extend the substantive body of the release. If the news is worth issuing, it is worth more than 60 words of substance.
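The ratio itself is trivial to compute, which is the point: there is no excuse for shipping a release without checking it. A minimal sketch:

```python
def body_ratio(body: str, boilerplate: str) -> float:
    """Word-count ratio of substantive body to boilerplate blocks
    (About section plus safe harbor). Target: 1.5 or higher."""
    b = len(body.split())
    p = len(boilerplate.split())
    return b / p if p else float("inf")

# 60 words of substance against a 220-word boilerplate: ratio ~0.27,
# well below the 1.5 floor described above.
print(body_ratio("word " * 60, "word " * 220))
```

Hitting 1.5 with a 75-word boilerplate requires only about 115 words of substance, which any material announcement should clear easily.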

What This Costs You in Aggregate

These five mistakes rarely appear in isolation. A typical small-cap release we sampled contained three to four of them simultaneously. The compound effect: the average algo-friendliness score across 800+ releases was 47/100, with the bottom quartile at 31/100 or lower.

Compare that to releases issued by mid-cap companies with mature IR functions, which average 71/100. The gap is not editorial polish — it is structural language choices that smaller companies have no infrastructure to audit. And the cost is not theoretical. The bottom quartile of algo-scored releases generated 1.8x higher post-release volatility and 0.4x the post-release volume of the top quartile, controlling for sector and market cap.

Translation: worse releases produce more chaotic price action with less liquidity to absorb it. That is the opposite of what an IR strategy should produce.

The 10-Minute Pre-Release Audit

Before any release goes on the wire, run this checklist:

  • Lead with the number. First sentence must contain the most material data point.
  • Count modal verbs. More than 4 per 500 words of body copy (excluding the safe harbor block) is a red flag.
  • Audit your CEO quote. If you can delete it without losing information, delete it.
  • Stress-test guidance language. State the number with explicit confidence intervals, not modal hedging.
  • Check your body-to-boilerplate ratio. If boilerplate is more than 40% of the release, you have a signal density problem.
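The checklist above can be folded into a single pre-wire script. This is a sketch with illustrative thresholds and word lists, not a reimplementation of any vendor's scoring model; the guidance and quote checks still need human judgment.

```python
# Sketch of the pre-release audit. Thresholds mirror the checklist;
# MODALS and the regexes are illustrative stand-ins.
import re

MODALS = {"may", "could", "might", "possibly", "potentially"}

def audit(body: str, boilerplate: str) -> dict:
    """Run the mechanical checks: number-first lead, modal count, body ratio."""
    words = re.findall(r"[a-z$\d.']+", body.lower())
    first_sentence = body.split(".")[0]
    flags = {
        # 1. Lead with the number: first sentence must contain a figure.
        "leads_with_number": bool(re.search(r"[\$\d]", first_sentence)),
        # 2. Modal verbs in the body (caller excludes the safe harbor block).
        "modal_count": sum(w in MODALS for w in words),
        # 5. Body-to-boilerplate word ratio.
        "body_ratio": len(words) / max(len(boilerplate.split()), 1),
    }
    flags["pass"] = (flags["leads_with_number"]
                     and flags["modal_count"] <= 4
                     and flags["body_ratio"] >= 1.5)
    return flags

body = ("Company X today reported Q1 revenue of $42.3M, a 38% increase. "
        "We expect Q2 revenue of $50M.")
body2 = ("Company X today announced strategic initiatives to drive shareholder "
         "value. We may potentially capture share.")
boiler = "About Company X. About Company X."
print(audit(body, boiler)["pass"], audit(body2, boiler)["pass"])
```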

This audit takes ten minutes. Most companies skip it because no one in the IR chain has been trained to think about releases as algorithmic inputs rather than journalistic communications. The wire service certainly does not provide this analysis. Neither does most outside counsel.

Recommended Reading

[Trillions: How a Band of Wall Street Renegades Invented the Index Fund and Changed Finance Forever](https://www.amazon.com/Trillions-Renegades-Invented-Changed-Finance/dp/0593087682?tag=axonir-20) by Robin Wigglesworth — Understanding how passive and quantitative capital flows now dominate market structure is foundational to understanding why language structure matters so much for small-caps. The clearest available explanation of how the mechanics of modern markets actually work.

[Flash Boys: A Wall Street Revolt](https://www.amazon.com/Flash-Boys-Wall-Street-Revolt/dp/0393351599?tag=axonir-20) by Michael Lewis — A decade old now, but still the most accessible introduction to how millisecond-scale algorithmic systems read and react to public information. The implications for how IR should think about release timing and language are direct.

What to Do Next

If your last three releases scored below 60 on algorithmic friendliness benchmarks, you are leaving measurable price impact on the table — every release, every quarter. The fix is structural, not editorial.

For the dataset behind the 47/100 average score, see [Your Press Releases Are Leaving Money on the Table](/blog/press-release-optimization-guide). For the underlying NLP frameworks that drive these scores, see [How AI Reads Your 10-K](/blog/how-ai-reads-your-10k).

Know your current algo score before your next release. Get it free at [/free-score](/free-score) — benchmarked against 500+ comparable companies, with specific language changes flagged. It takes 60 seconds and produces a prioritized list of changes your IR team can apply to the next release.
