Built for performance. Designed for developers.

Markdown rendering
at engine speed.

A Rust-native parser compiled to WASM. Sub-2ms parse time. One API to render, convert, and normalize every Markdown flavor. Coming soon.

Join the Waitlist · Read the Docs
~/api
$ curl -X POST https://api.mdcore.ai/v1/render \
  -H "Authorization: Bearer mc_..." \
  -d '{"markdown": "# Hello\n**Fast.**", "output": "html"}'

// => <h1>Hello</h1><p><strong>Fast.</strong></p>
//    1.8ms · edge-cached

See the output

Raw Markdown in. This comes out.

input.md · RAW
# Quarterly Report

Revenue grew **34%** YoY, driven by
API adoption in the enterprise segment.

| Metric     | Q1     | Q2     |
|------------|--------|--------|
| Revenue    | $1.2M  | $1.6M  |
| API Calls  | 12M    | 31M    |
| Latency    | 4.2ms  | 1.8ms  |

## Code Performance

```rust
pub fn render(input: &str) -> String {
    let arena = Arena::new();
    let root = parse_document(
        &arena, input, &Options::default()
    );
    format_html(root, &Options::default())
}
```

Inline math: $E = mc^2$

> **Note:** All benchmarks measured at p95
> on Cloudflare Workers edge network.
output.html · RENDERED

(The same document rendered as styled HTML: heading, formatted table, syntax-highlighted Rust, KaTeX math, and the callout, with content identical to the raw pane.)

Mermaid · RAW

```mermaid
graph LR
  A[API Request] --> B{Cached?}
  B -->|Yes| C([Edge CDN])
  B -->|No| D[Rust Engine]
  D --> E[Parse AST]
  E --> F[Render HTML]
  F --> C
```

rendered · SVG

(The flowchart above, rendered as an SVG diagram.)
KaTeX · RAW
$E = mc^2$

$$
\int_0^\infty e^{-x^2} dx
= \frac{\sqrt{\pi}}{2}
$$
rendered · HTML

(Both equations above, rendered at display quality by KaTeX.)
Everything that renders
Syntax highlighting: 190+ languages
Math equations: KaTeX, inline & display
Tables: GFM, alignment, striping
Mermaid diagrams: flowchart, sequence, gantt
Task lists: interactive checkboxes
Blockquotes: nested, callout styling
Footnotes: auto-linked references
Auto-links: URLs, emails, @mentions

Try it live on mdfy.cc →

<2ms · engine parse
6.7x · vs remark+rehype
190+ · languages
Rust · native parser

API

Three endpoints. One engine.

POST /v1/render

Render

Markdown to beautiful HTML, PNG, or PDF. The same Rust engine behind mdfy.cc — now as an API.

curl -X POST https://api.mdcore.ai/v1/render \
  -H "Authorization: Bearer mc_..." \
  -H "Content-Type: application/json" \
  -d '{"markdown": "# Hello\n**Bold**.", "output": "html"}'
POST /v1/convert

Convert

HTML, PDF, DOCX, or any URL to clean Markdown. One call to feed your AI pipeline.

curl -X POST https://api.mdcore.ai/v1/convert \
  -H "Authorization: Bearer mc_..." \
  -H "Content-Type: application/json" \
  -d '{"source": "https://example.com", "output": "markdown"}'
POST /v1/normalize

Normalize

Any MD flavor in, consistent output out. GFM, Obsidian, MDX, Pandoc — auto-detected and unified.

curl -X POST https://api.mdcore.ai/v1/normalize \
  -H "Authorization: Bearer mc_..." \
  -H "Content-Type: application/json" \
  -d '{"markdown": "[[wikilink]]", "target": "gfm"}'

How it works

From request to response in <2ms

01

Send a request

Pass Markdown text, a file, or a URL to any endpoint. JSON in, JSON out.
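As a sketch, a request can be assembled like this. The endpoint and body fields mirror the curl examples on this page; the helper name and option handling are illustrative, not an official SDK API.

```javascript
// Sketch: build the JSON request for /v1/render.
// Endpoint and fields follow the curl examples on this page;
// buildRenderRequest itself is a hypothetical helper.
function buildRenderRequest(apiKey, markdown, output = "html") {
  return {
    url: "https://api.mdcore.ai/v1/render",
    init: {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ markdown, output }),
    },
  };
}

const req = buildRenderRequest("mc_demo", "# Hello\n**Fast.**");
// fetch(req.url, req.init) would send it
console.log(JSON.parse(req.init.body).output); // "html"
```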

02

Engine parses

Rust-based comrak engine detects the flavor and builds the AST in microseconds.

03

Post-processing

Syntax highlighting, KaTeX math, Mermaid diagrams — applied server-side.

04

Result delivered

HTML, PNG, PDF, or clean Markdown. Edge-cached, sub-2ms repeat latency.
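The idea behind the repeat-latency claim can be sketched in a few lines: identical (markdown, options) pairs map to one cache key, so a repeat render skips the engine entirely. The real service caches at the CDN edge; this in-memory Map is only illustrative.

```javascript
// Illustrative sketch of repeat-request caching, not the edge implementation.
const cache = new Map();

function cacheKey(markdown, options) {
  // A content-addressed key; production would hash this instead.
  return markdown + "\u0000" + JSON.stringify(options);
}

function renderCached(markdown, options, render) {
  const key = cacheKey(markdown, options);
  if (!cache.has(key)) cache.set(key, render(markdown, options));
  return cache.get(key);
}

let parses = 0;
const fakeRender = (md) => { parses += 1; return `<p>${md}</p>`; };
renderCached("# hello", { output: "html" }, fakeRender);
renderCached("# hello", { output: "html" }, fakeRender);
console.log(parses); // 1 (the second call hit the cache)
```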

Performance

Rust-native. Not another JS wrapper.

Parse + Render benchmark

mdcore (Rust/WASM): 1.8ms
marked: 6ms
markdown-it: 8ms
remark + rehype: 12ms

10KB document · GFM + math + code blocks

Architecture

Parser: comrak (Rust)

Same parser used by GitLab, Reddit, crates.io

Compile target: wasm32 + native

One codebase → browser, server, CLI, edge

Highlight: highlight.js

190+ languages, server-side applied

Math: KaTeX

LaTeX-quality rendering, no MathJax overhead

Diagrams: Mermaid

Flowcharts, sequences, gantt — SVG output

Rust parser, JS post-processing

The core parse → AST → HTML pipeline runs in compiled WASM via comrak. Post-processing (highlight.js, KaTeX, Mermaid) runs in JS — the best tool for each job.
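The post-processing stage described above can be sketched as a chain of transforms over the parser's HTML output. The transforms below are stand-in string rewrites, not the real highlight.js / KaTeX / Mermaid integrations.

```javascript
// Sketch: run JS post-processors in sequence over WASM-parsed HTML.
// highlight and mathPass are hypothetical stand-ins for the real passes.
function postProcess(html, transforms) {
  return transforms.reduce((out, fn) => fn(out), html);
}

const highlight = (html) => html.replace("<code>", '<code class="hljs">');
const mathPass = (html) => html; // KaTeX would rewrite $...$ spans here

const out = postProcess(
  "<pre><code>fn main() {}</code></pre>",
  [highlight, mathPass]
);
console.log(out); // <pre><code class="hljs">fn main() {}</code></pre>
```

Keeping each pass a plain function makes the order explicit and lets any one of them be swapped or skipped per request.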

Edge-first deployment

WASM binary runs on Cloudflare Workers, Vercel Edge, Deno Deploy. Your Markdown renders at the edge closest to your users, not in a central Node.js server.

Drop-in replacement

Same output as remark + rehype + shiki + katex + mermaid combined — Rust parser with JS post-processing, zero config, one API instead of five packages with conflicting versions.

Use Cases

Real problems. One API call to fix each.

AI Products

Your AI chatbot looks like raw text

You built a chatbot with Claude or GPT. It returns Markdown with tables, code, math. Your frontend shows broken formatting or plain text. Users think your product is broken.

Solution

One API call renders the LLM output as production-quality HTML — syntax-highlighted code, rendered LaTeX, live Mermaid diagrams. Ship it as-is to your frontend.

implementation
// Your chatbot response handler
const stream = await anthropic.messages.stream({ ... })
const markdown = await stream.finalText()

// Before: dangerouslySetInnerHTML with broken formatting
// After: one call
const html = await md.render(markdown)
res.json({ html }) // production-ready HTML
RAG / LLM Infra

RAG retrieval quality is terrible

You're chunking PDFs and web pages for your RAG pipeline. Raw HTML has noise — navbars, footers, ads, scripts. Your embeddings are polluted. Retrieval precision drops.

Solution

Convert any URL or PDF to clean, structured Markdown first. Headings become natural chunk boundaries. Tables stay intact. Code blocks preserve formatting. Your embeddings get signal, not noise.

implementation
// Before: messy HTML chunks with nav, footer, ads
// After: clean Markdown with semantic structure
const markdown = await md.convert(url)

// Split by headings — natural semantic boundaries
const chunks = markdown.split(/^## /gm)
for (const chunk of chunks) {
  await pinecone.upsert(embed(chunk))
}
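The heading-boundary idea can also be done while keeping each heading attached to its chunk (the inline split above drops the "## " prefix). A minimal sketch, with an illustrative helper name:

```javascript
// Sketch: split Markdown into chunks at H2 boundaries,
// keeping each "## " heading with the text that follows it.
function chunkByHeadings(markdown) {
  const lines = markdown.split("\n");
  const chunks = [];
  let current = [];
  for (const line of lines) {
    if (line.startsWith("## ") && current.length) {
      chunks.push(current.join("\n").trim());
      current = [];
    }
    current.push(line);
  }
  if (current.length) chunks.push(current.join("\n").trim());
  return chunks.filter(Boolean);
}

const doc = "intro text\n\n## Alpha\na body\n\n## Beta\nb body";
console.log(chunkByHeadings(doc).length); // 3
```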
Developer Tools

5 dependencies to render a README

Your docs site needs remark + rehype + shiki + katex + mermaid. Five packages, five version cycles, five configs. Shiki alone is 2MB. A KaTeX update breaks your math. Mermaid conflicts with SSR.

Solution

Replace all five with one API call. Same output quality, zero config, no version conflicts. Your CI build drops from 45s to 12s because you're not bundling five parsers.

implementation
// Before: 5 packages, 200 lines of pipeline config
// import remarkGfm from 'remark-gfm'
// import rehypeShiki from 'rehype-shiki'
// import rehypeKatex from 'rehype-katex'
// ... 15 more imports and plugins

// After: one line
const html = await md.render(content)
AI Agents

Customer sends a PDF, agent can't read it

Your support agent receives PDFs, DOCX files, and URLs from customers. The LLM needs Markdown to reason about them. You're stitching together pdf-parse, mammoth, and cheerio. Each breaks differently.

Solution

One endpoint handles all formats. PDF, DOCX, HTML, URL — auto-detected, converted to clean Markdown. Your agent gets structured text it can actually reason about.

implementation
// Customer uploads a contract PDF
const markdown = await md.convert(file, {
  format: "auto" // detects PDF, DOCX, HTML
})

// Feed to your agent with full structure preserved
const analysis = await agent.run(
  `Analyze this contract:\n\n${markdown}`
)
SaaS Products

Obsidian users break your Markdown input

Your app accepts Markdown input. Users paste from Obsidian (wikilinks), Notion (custom blocks), MDX (JSX components), and GitHub (task lists). Half the syntax doesn't render. Users file bugs.

Solution

Auto-detect the source flavor and normalize to standard GFM. Wikilinks become regular links. MDX components get stripped or rendered. Every flavor works, zero user friction.

implementation
// User pastes Obsidian-flavored Markdown
const input = "See [[Project Plan]] and ~~old text~~"

// Auto-detect flavor, normalize to GFM
const clean = await md.normalize(input, {
  source_flavor: "auto",
  target: "gfm"
})
// => "See [Project Plan](project-plan) and ~~old text~~"
Automation

Weekly reports take 2 hours to format

Your team writes weekly reports in Markdown. Converting to PDF for stakeholders means fighting with Pandoc, tweaking LaTeX templates, fixing page breaks. Every week, 2 hours lost.

Solution

Markdown in, branded PDF out. Code blocks are highlighted, tables are formatted, charts render from Mermaid. Automate it in your CI — push to main, PDF appears in Slack.

implementation
// GitHub Action: auto-generate PDF on push
const report = fs.readFileSync("reports/week-12.md")
const pdf = await md.render(report, {
  output: "pdf",
  theme: "corporate"
})
await slack.upload(pdf, "#team-reports")
Content Migration

Notion export is a mess of HTML

You're migrating 500 pages from Notion to your new docs platform. Notion's export gives you mangled HTML with inline styles, empty divs, and broken links. Manual cleanup would take weeks.

Solution

Batch convert Notion HTML exports to clean Markdown. Structure preserved, links fixed, formatting intact. 500 pages in minutes, not weeks.

implementation
// Batch convert Notion export
const files = glob("notion-export/**/*.html")

for (const file of files) {
  const html = fs.readFileSync(file)
  const markdown = await md.convert(html, {
    format: "html"
  })
  fs.writeFileSync(file.replace(".html", ".md"), markdown)
}
// 500 pages → clean Markdown in 3 minutes
Multi-platform

LLM output renders differently everywhere

The same Markdown from Claude renders differently in your web app, mobile app, email, and Slack bot. Four surfaces, four rendering stacks, four sets of bugs. Users see inconsistencies.

Solution

One engine, consistent output everywhere. Render once through mdcore, get identical HTML for web, mobile WebView, email, and Slack. Same AST, same styles, same result.

implementation
// Same engine, every surface
const markdown = agent.response

// Web app
const webHtml = await md.render(markdown)
// Email
const emailHtml = await md.render(markdown, { theme: "email" })
// Slack
const slackMrkdwn = await md.render(markdown, { output: "slack" })

// Identical rendering logic. Zero drift.

SDKs

JavaScript / TypeScript
import { MdCore } from "@mdcore/sdk"

const md = new MdCore("mc_...")
const html = await md.render("# Hello")
const markdown = await md.convert(url)
Python
import mdcore

client = mdcore.Client("mc_...")
html = client.render("# Hello")
markdown = client.convert(url)

Planned Pricing

Start free. Scale when you're ready.

Free

$0

1,000 calls/mo

  • HTML output
  • Watermark
  • Community support
Join Waitlist

Starter

$19/mo

10,000 calls/mo

  • HTML + PNG + PDF
  • No watermark
  • Custom themes
  • Email support
Join Waitlist
POPULAR

Growth

$49/mo

100K calls/mo

  • All outputs
  • Custom themes
  • Priority support
  • Analytics
Join Waitlist

Scale

$199/mo

1M calls/mo

  • Everything in Growth
  • SLA 99.9%
  • Dedicated support
  • Self-hosted
Join Waitlist

Pricing is subject to change. $0.001 per additional call beyond quota on all paid tiers.
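The overage formula works out as base price plus $0.001 per call beyond quota; a worked example, using the tier numbers listed above:

```javascript
// Worked example of the overage formula: base + $0.001/call over quota.
function monthlyCost(basePrice, quota, calls, overageRate = 0.001) {
  const overage = Math.max(0, calls - quota);
  return basePrice + overage * overageRate;
}

// Growth tier ($49, 100K calls) at 150,000 calls:
console.log(monthlyCost(49, 100_000, 150_000).toFixed(2)); // "99.00"
// Under quota, you pay only the base price:
console.log(monthlyCost(19, 10_000, 5_000).toFixed(2)); // "19.00"
```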

EARLY ACCESS

The engine is live. The API is coming.

mdcore already powers mdfy.cc in production — rendering Markdown with Rust + WASM in the browser. The hosted API is next. Join the waitlist for early access.

hi@raymind.ai

Join the Waitlist

PLAYGROUND

Try the engine now — it's already live.

The same Rust + WASM engine that will power the API is running client-side on mdfy.cc right now. Paste any Markdown and see the output.

Open mdfy.cc

The engine is already live on mdfy.cc. The API is next.

Be first to ship with mdcore.

Join the waitlist for early API access and developer preview.

Join the Waitlist

Try the engine live on mdfy.cc →