A Rust-native parser compiled to WASM. Sub-2ms parse time. One API to render, convert, and normalize every Markdown flavor. Coming soon.
$ curl -X POST https://api.mdcore.ai/v1/render \
    -H "Authorization: Bearer mc_..." \
    -d '{"markdown": "# Hello\n**Fast.**", "output": "html"}'
// => <h1>Hello</h1><p><strong>Fast.</strong></p>
// 1.8ms · edge-cached
See the output
# Quarterly Report
Revenue grew **34%** YoY, driven by
API adoption in the enterprise segment.
| Metric | Q1 | Q2 |
|------------|--------|--------|
| Revenue | $1.2M | $1.6M |
| API Calls | 12M | 31M |
| Latency | 4.2ms | 1.8ms |
## Code Performance
```rust
pub fn render(input: &str) -> String {
    let arena = Arena::new();
    let root = parse_document(
        &arena, input, &Options::default()
    );
    format_html(root, &Options::default())
}
```
Inline math: $E = mc^2$
> **Note:** All benchmarks measured at p95
> on Cloudflare Workers edge network.
```mermaid
graph LR
    A[API Request] --> B{Cached?}
    B -->|Yes| C([Edge CDN])
    B -->|No| D[Rust Engine]
    D --> E[Parse AST]
    E --> F[Render HTML]
    F --> C
```
$$
\int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}
$$
<2ms · engine parse
6.7x · vs remark+rehype
190+ · languages
Rust · native parser
API
Markdown to beautiful HTML, PNG, or PDF. The same Rust engine behind mdfy.cc — now as an API.
curl -X POST https://api.mdcore.ai/v1/render \
-H "Authorization: Bearer mc_..." \
-H "Content-Type: application/json" \
-d '{"markdown": "# Hello\n**Bold**.", "output": "html"}'
HTML, PDF, DOCX, or any URL to clean Markdown. One call to feed your AI pipeline.
curl -X POST https://api.mdcore.ai/v1/convert \
-H "Authorization: Bearer mc_..." \
-H "Content-Type: application/json" \
-d '{"source": "https://example.com", "output": "markdown"}'
Any MD flavor in, consistent output out. GFM, Obsidian, MDX, Pandoc — auto-detected and unified.
curl -X POST https://api.mdcore.ai/v1/normalize \
-H "Authorization: Bearer mc_..." \
-H "Content-Type: application/json" \
-d '{"markdown": "[[wikilink]]", "target": "gfm"}'
How it works
Send a request
Pass Markdown text, a file, or a URL to any endpoint. JSON in, JSON out.
Engine parses
Rust-based comrak engine detects the flavor and builds the AST in microseconds.
Post-processing
Syntax highlighting, KaTeX math, Mermaid diagrams — applied server-side.
Result delivered
HTML, PNG, PDF, or clean Markdown. Edge-cached, sub-2ms repeat latency.
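The flavor-detection step above can be pictured as a set of syntax heuristics. Here is a minimal sketch; the engine's actual rules aren't public, so the patterns below are illustrative assumptions, not mdcore's implementation:

```javascript
// Illustrative flavor heuristics — NOT the actual comrak/mdcore detection logic.
function detectFlavor(markdown) {
  // Obsidian: [[wikilinks]]
  if (/\[\[[^\]]+\]\]/.test(markdown)) return "obsidian";
  // MDX: ESM imports or JSX components
  if (/^import\s.+from\s+['"]/m.test(markdown) || /<[A-Z]\w*/.test(markdown)) return "mdx";
  // GFM: task lists or pipe tables
  if (/^[-*] \[[ x]\] /m.test(markdown) || /^\|.*\|$/m.test(markdown)) return "gfm";
  return "commonmark";
}
```

In practice a real detector would weigh multiple signals per document rather than returning on the first match, but the first-match version keeps the idea visible.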
Performance
10KB document · GFM + math + code blocks
Same parser used by GitLab, Reddit, crates.io
One codebase → browser, server, CLI, edge
190+ languages, applied server-side
LaTeX-quality rendering, no MathJax overhead
Flowcharts, sequences, gantt — SVG output
The core parse → AST → HTML pipeline runs in compiled WASM via comrak. Post-processing (highlight.js, KaTeX, Mermaid) runs in JS — the best tool for each job.
WASM binary runs on Cloudflare Workers, Vercel Edge, Deno Deploy. Your Markdown renders at the edge closest to your users, not in a central Node.js server.
Same output as remark + rehype + shiki + katex + mermaid combined — Rust parser with JS post-processing, zero config, one API instead of five packages with conflicting versions.
Use Cases
You built a chatbot with Claude or GPT. It returns Markdown with tables, code, math. Your frontend shows broken formatting or plain text. Users think your product is broken.
Solution
One API call renders the LLM output as production-quality HTML — syntax-highlighted code, rendered LaTeX, live Mermaid diagrams. Ship it as-is to your frontend.
// Your chatbot response handler
const stream = await anthropic.messages.stream({ ... })
const markdown = await stream.finalText()
// Before: dangerouslySetInnerHTML with broken formatting
// After: one call
const html = await md.render(markdown)
res.json({ html }) // production-ready HTML
You're chunking PDFs and web pages for your RAG pipeline. Raw HTML has noise — navbars, footers, ads, scripts. Your embeddings are polluted. Retrieval precision drops.
Solution
Convert any URL or PDF to clean, structured Markdown first. Headings become natural chunk boundaries. Tables stay intact. Code blocks preserve formatting. Your embeddings get signal, not noise.
// Before: messy HTML chunks with nav, footer, ads
// After: clean Markdown with semantic structure
const markdown = await md.convert(url)
// Split by headings — natural semantic boundaries
const chunks = markdown.split(/^## /gm)
for (const chunk of chunks) {
  await pinecone.upsert(embed(chunk))
}
Your docs site needs remark + rehype + shiki + katex + mermaid. Five packages, five version cycles, five configs. Shiki alone is 2MB. A KaTeX update breaks your math. Mermaid conflicts with SSR.
Solution
Replace all five with one API call. Same output quality, zero config, no version conflicts. Your CI build drops from 45s to 12s because you're not bundling five parsers.
// Before: 5 packages, 200 lines of pipeline config
// import remarkGfm from 'remark-gfm'
// import rehypeShiki from 'rehype-shiki'
// import rehypeKatex from 'rehype-katex'
// ... 15 more imports and plugins
// After: one line
const html = await md.render(content)
Your support agent receives PDFs, DOCX files, and URLs from customers. The LLM needs Markdown to reason about them. You're stitching together pdf-parse, mammoth, and cheerio. Each breaks differently.
Solution
One endpoint handles all formats. PDF, DOCX, HTML, URL — auto-detected, converted to clean Markdown. Your agent gets structured text it can actually reason about.
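Format auto-detection of this kind usually comes down to checking file signatures. A rough sketch of what `format: "auto"` could look for — the `sniffFormat` function is hypothetical, though the signatures themselves are standard:

```javascript
// Hypothetical sketch: sniff upload format by magic bytes / markup cues.
// Only the signatures are standard; the mdcore API may detect differently.
function sniffFormat(input) {
  const head = input.slice(0, 512);
  if (head.startsWith("%PDF-")) return "pdf";   // PDF file header
  if (head.startsWith("PK")) return "docx";     // DOCX is a ZIP container (as are other OOXML files)
  if (/^\s*(<!doctype html|<html)/i.test(head)) return "html";
  return "markdown";                            // fall back to treating it as text
}
```

A production detector would also disambiguate ZIP-based formats (DOCX vs. XLSX vs. plain ZIP) by inspecting the archive contents.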
// Customer uploads a contract PDF
const markdown = await md.convert(file, {
  format: "auto" // detects PDF, DOCX, HTML
})
// Feed to your agent with full structure preserved
const analysis = await agent.run(
  `Analyze this contract:\n\n${markdown}`
)
Your app accepts Markdown input. Users paste from Obsidian (wikilinks), Notion (custom blocks), MDX (JSX components), and GitHub (task lists). Half the syntax doesn't render. Users file bugs.
Solution
Auto-detect the source flavor and normalize to standard GFM. Wikilinks become regular links. MDX components get stripped or rendered. Every flavor works, zero user friction.
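The wikilink rewrite itself can be sketched as a single regex pass. The slug rule below (lowercase, spaces to hyphens) is an assumption inferred from the example output on this page, not mdcore's documented behavior:

```javascript
// Illustrative wikilink → GFM link rewrite; the slugging rule is assumed.
function wikilinksToGfm(markdown) {
  // Matches [[Target]] and [[Target|Label]]
  return markdown.replace(/\[\[([^\]|]+)(?:\|([^\]]+))?\]\]/g, (_, target, label) => {
    const slug = target.trim().toLowerCase().replace(/\s+/g, "-");
    return `[${label ?? target}](${slug})`;
  });
}
```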
// User pastes Obsidian-flavored Markdown
const input = "See [[Project Plan]] and ~~old text~~"
// Auto-detect flavor, normalize to GFM
const clean = await md.normalize(input, {
  source_flavor: "auto",
  target: "gfm"
})
// => "See [Project Plan](project-plan) and ~~old text~~"
Your team writes weekly reports in Markdown. Converting to PDF for stakeholders means fighting with Pandoc, tweaking LaTeX templates, fixing page breaks. Every week, 2 hours lost.
Solution
Markdown in, branded PDF out. Code blocks are highlighted, tables are formatted, charts render from Mermaid. Automate it in your CI — push to main, PDF appears in Slack.
// GitHub Action: auto-generate PDF on push
const report = fs.readFileSync("reports/week-12.md")
const pdf = await md.render(report, {
  output: "pdf",
  theme: "corporate"
})
await slack.upload(pdf, "#team-reports")
You're migrating 500 pages from Notion to your new docs platform. Notion's export gives you mangled HTML with inline styles, empty divs, and broken links. Manual cleanup would take weeks.
Solution
Batch convert Notion HTML exports to clean Markdown. Structure preserved, links fixed, formatting intact. 500 pages in minutes, not weeks.
// Batch convert Notion export
const files = glob("notion-export/**/*.html")
for (const file of files) {
  const html = fs.readFileSync(file)
  const markdown = await md.convert(html, {
    format: "html"
  })
  fs.writeFileSync(file.replace(".html", ".md"), markdown)
}
// 500 pages → clean Markdown in 3 minutes
The same Markdown from Claude renders differently in your web app, mobile app, email, and Slack bot. Four surfaces, four rendering stacks, four sets of bugs. Users see inconsistencies.
Solution
One engine, consistent output everywhere. Render once through mdcore, get identical HTML for web, mobile WebView, email, and Slack. Same AST, same styles, same result.
// Same engine, every surface
const markdown = agent.response
// Web app
const webHtml = await md.render(markdown)
// Email
const emailHtml = await md.render(markdown, { theme: "email" })
// Slack
const slackMrkdwn = await md.render(markdown, { output: "slack" })
// Identical rendering logic. Zero drift.
SDKs
import { MdCore } from "@mdcore/sdk"
const md = new MdCore("mc_...")
const html = await md.render("# Hello")
const markdown = await md.convert(url)
import mdcore
client = mdcore.Client("mc_...")
html = client.render("# Hello")
markdown = client.convert(url)
Planned Pricing
Pricing is subject to change. $0.001 per additional call beyond quota on all paid tiers.
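As a worked example of the overage rate — only the $0.001-per-call figure comes from this page; quota sizes are placeholders, since tier quotas aren't listed yet:

```javascript
// Overage at $0.001 per call beyond quota. The rate is from this page;
// the 1M-call quota below is a placeholder, not a published tier limit.
function overageCost(calls, quota, ratePerCall = 0.001) {
  return Math.max(0, calls - quota) * ratePerCall;
}

// 1.5M calls against a hypothetical 1M-call quota:
// 500,000 extra calls × $0.001 ≈ $500 on top of the base price.
```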
EARLY ACCESS
mdcore already powers mdfy.cc in production — rendering Markdown with Rust + WASM in the browser. The hosted API is next. Join the waitlist for early access.
hi@raymind.ai
Join the Waitlist
PLAYGROUND
The same Rust + WASM engine that will power the API is running client-side on mdfy.cc right now. Paste any Markdown and see the output.
Open mdfy.cc
The engine is already live on mdfy.cc. The API is next.
Join the waitlist for early API access and developer preview.
Join the Waitlist