Our CEO and founder Anna Patterson is joining the Cerebral Valley AI Summit to lead a discussion exploring the next wave of progress in AI performance: improving efficiency at the infrastructure layer. As infrastructure increasingly defines the competitive edge in AI, Anna will discuss how we can push the boundaries of what AI can achieve. 📅 Don't miss her session on November 12th at 12pm PT: https://lnkd.in/ggdz3yKP
Ceramic.ai
Data Infrastructure and Analytics
Mountain View, California 1,302 followers
Redefining AI Infrastructure
About us
Ceramic.ai builds enterprise-grade infrastructure that helps companies train and deploy their AI models faster and more efficiently.
- Website: https://ceramic.ai
- Industry: Data Infrastructure and Analytics
- Company size: 2-10 employees
- Headquarters: Mountain View, California
- Type: Privately Held
Locations
- Primary: 605 Castro St, Mountain View, California 94041, US
Employees at Ceramic.ai
- Andrea Kalmans: Founder, Lontra Ventures - Investing, building and championing startups.
- Autumn Yuan: Head of Finance at Ceramic.ai
- Laura N Simon, Ph.D.: People @ Ceramic.ai | Leadership/Executive Coach | PCC Certified Coach | 2x Olympic Hopeful | 2014 Youth Olympic Coach | Counseling for the College…
- Ari Kalfayan: AI & Inference Leader @ Lambda | AI/ML Seed investor | Founding Team @ W&B | Former AWS & Weights & Biases
Updates
-
Proud to see our founder/CEO Anna Patterson has been named to the 2025 Mayfield | Divot AI List, recognizing 50 builders shaping what’s next in AI before the hype and headlines. We’re honored to see Ceramic’s work highlighted alongside so many brilliant founders, researchers and operators. Big thanks to Mayfield and Startup Grind for including Anna in this year’s list. 🔗 Check out the full list here: https://www.divot.org/list
-
If you wanted to kill ChatGPT, how would you do it? Turn off thousands of servers or corrupt terabytes of memory? It turns out it may only require changing a single 1 to a 0… This is possible due to "super weights" found by researchers at Apple: individual weights so critical they act as a kill switch. The effect is dramatic. For example, a healthy Llama-7B responds to "My favorite condiment is..." with "mustard." But with one super weight zeroed out, it says: ":/νώ好 !\β." That was an older, dense model. We asked: are today's advanced Mixture-of-Experts (MoE) models immune? Ceramic.ai's latest research, done by intern Andrew Wesel, reveals a split. MoEs from OpenAI and Allen AI are robust, but we found Alibaba's newest Qwen3 models still contain these critical points of failure. This has major implications for model security, compression, and our basic understanding of how LLMs work. See the data for yourself: https://lnkd.in/gWhvHF-9 What model should we test next?
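For the curious, here is a minimal sketch of what such a probe could look like in PyTorch with Hugging Face transformers. The layer index and weight coordinates are placeholders for illustration, not the published super-weight location; the point is only the mechanism: zero one scalar, regenerate, compare.

```python
# Hypothetical probe: zero ONE scalar weight and see whether generation collapses.
# The model name, layer index, and coordinates are illustrative placeholders,
# not the actual super-weight location reported in the research.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-7b"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

def complete(prompt: str) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=8, do_sample=False)
    return tok.decode(out[0], skip_special_tokens=True)

prompt = "My favorite condiment is"
print("intact: ", complete(prompt))

# "Changing a single 1 to a 0": zero one entry of one MLP projection matrix.
with torch.no_grad():
    model.model.layers[2].mlp.down_proj.weight[0, 0] = 0.0  # placeholder coordinates

print("ablated:", complete(prompt))
```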
-
OpenAI buried 5 cryptic constants in their SwiGLU code, but when Ceramic.ai let each transformer layer choose its own activation recipe, we uncovered a shocking truth 😲: forcing all layers to use identical activations is like making everyone wear the same shoe 👟 size - wildly inefficient and easily fixed 🛠️ for a 20% performance boost 🚀. https://lnkd.in/gZkHXHuJ
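As a rough illustration of the general idea (not Ceramic's actual method), a GLU block can carry a small learned mixture over candidate gate nonlinearities, so each layer settles on its own activation during training. Everything below, names included, is an assumption-laden sketch:

```python
# Illustrative sketch: a GLU-style MLP whose gate nonlinearity is a learned,
# per-layer blend of a few candidates. This is NOT Ceramic's actual recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableGLU(nn.Module):
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.w_gate = nn.Linear(d_model, d_ff, bias=False)
        self.w_up = nn.Linear(d_model, d_ff, bias=False)
        self.w_down = nn.Linear(d_ff, d_model, bias=False)
        # One logit per candidate activation; softmax turns them into a blend.
        self.mix = nn.Parameter(torch.zeros(3))

    def forward(self, x):  # x: (batch, seq, d_model)
        g = self.w_gate(x)
        # Candidate gates: SiLU (SwiGLU), GELU (GeGLU), ReLU (ReGLU).
        candidates = torch.stack([F.silu(g), F.gelu(g), F.relu(g)])
        weights = torch.softmax(self.mix, dim=0).view(-1, 1, 1, 1)
        gate = (weights * candidates).sum(dim=0)
        return self.w_down(gate * self.w_up(x))
```

Giving every transformer layer its own LearnableGLU lets each layer's softmax drift toward a different blend, which is the "per-layer activation recipe" intuition.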
-
Our Founder/CEO got to sit down with Riviera Partners and talk about one of her favorite topics ... exploring the state of AI! #foundertalk #venturevanguard #stateofai
⚡“The game hasn’t even really started yet.”⚡ That’s how Anna Patterson—former Google exec and now founder of Ceramic.ai—describes where we are in AI. It’s also why we’re launching our new Venture Vanguard series: to spotlight the founders, builders, and bold thinkers shaping what’s next. In our first feature, Anna breaks down why the AI stack is broken, what she’s doing to fix it, and why the real action in AI is just beginning. If you’re a VC-backed founder, early-stage exec, or investor betting on the next breakout company, this is the kind of thinking that sets the bar. 🚀 Real founder grit 🧠 Deep tech insights 🔎 A clear view of what’s coming 📖 Read it here: https://lnkd.in/gpZ3dDih #VentureCapital #StartupLeadership #AI #TechFounders #ExecutiveSearch #VentureVanguard #RivieraPartners
-
At Ceramic.ai, we wrote this negative result up for posterity and decided to share it. We discovered a Ferrari that only works in school zones … 🚗💨 The math checks out beautifully - decomposed parameters learn faster! Except they only work at learning rates 📈 so conservative your model trains slower than dial-up internet 🛜. Push it to production speed? Instant death. Classic academic trap: optimizing for elegance instead of shipping. Sometimes the "dumb" solution wins because it doesn't explode when you need results yesterday. #AI #Shipping #RealityCheck Read the full blog here: https://lnkd.in/exNW5gnM
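For readers who want the shape of the idea, here is a bare-bones sketch of a "decomposed parameters" layer, assuming the decomposition in question is a simple two-factor product (the blog has the real details):

```python
# Rough sketch, under our own assumptions: replace one weight matrix with a
# product of two smaller factors and train both. Names and rank are illustrative.
import torch
import torch.nn as nn

class DecomposedLinear(nn.Module):
    """Linear layer parameterized as W = U @ V instead of a single matrix."""
    def __init__(self, d_in: int, d_out: int, rank: int):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d_out, rank) / rank ** 0.5)
        self.V = nn.Parameter(torch.randn(rank, d_in) / d_in ** 0.5)

    def forward(self, x):
        # Updates to U and V compound multiplicatively in the effective W,
        # which is why step sizes that are safe for a plain nn.Linear
        # can blow this layer up at production-scale learning rates.
        return x @ (self.U @ self.V).T
```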
-
Here's the brutal reality: if AI labs were chip fabs, then we are officially in the early days of manufacturing - working out how to make the product more efficient without losing revenue. Here at Ceramic.ai, our Chief Scientist, Tom Costello, looked at yield rates and their causes to see why AI labs are burning through GPUs at such high rates. Take a look --> https://lnkd.in/g3UPj9Ak
-
Training large language models efficiently at scale is hard — Ceramic’s architecture makes it easier. By combining breakthroughs in mathematical optimization, network efficiency, and long-context specialization, Ceramic delivers high-performance training with fewer resources. 🚀 In our latest test using Lambda’s new B200s, Ceramic trained long-context LLMs on just 8 GPUs — and outperformed every benchmark we’ve seen. 📊 See the full POC results: https://shorturl.at/3HkDZ 🔍 Learn more about our technology: https://lnkd.in/g3Dx3G6b
🚨 Ceramic.ai delivered breakthrough training efficiency on Lambda's clusters with NVIDIA Blackwell, and the results are staggering.
📊 Highlights:
🔹 85% MFU @32K, 82% @64K with just 8 GPUs
🔹 Up to 2.8x better MFU than Foundry
🔹 Peak 130.9B FLOPs/token
This architecture is purpose-built for long-context training, with optimized math, comms, and scale. Learn more: https://shorturl.at/3HkDZ
🚀 Ready to train like this? Spin up your cluster now: https://shorturl.at/AFU8u
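For context on what an MFU number means, here is a back-of-the-envelope way to compute it, assuming the standard ~6 × params FLOPs-per-token estimate for dense transformer training. The example inputs are invented, not the benchmark figures above:

```python
# Model FLOPs Utilization = achieved training FLOPs/s divided by peak hardware
# FLOPs/s. The 6 * params FLOPs/token rule of thumb is an assumption that
# holds roughly for dense transformers.
def mfu(params: float, tokens_per_sec: float,
        n_gpus: int, peak_flops_per_gpu: float) -> float:
    achieved = 6 * params * tokens_per_sec  # FLOPs/s spent on the model
    peak = n_gpus * peak_flops_per_gpu      # FLOPs/s the hardware could deliver
    return achieved / peak

# Hypothetical example: a 7B model at 40k tokens/s on 8 GPUs
# rated at 2e15 dense FLOPs/s each.
print(f"MFU = {mfu(7e9, 40_000, 8, 2e15):.1%}")
```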
-
Surprise! Hugging Face's default settings have been quietly sabotaging your LLM evaluations: your eval loss drops as batch size increases, which is odd since you're testing the same data. The "fix" is simple and will actually speed up your evaluation by 50%+. Take a look at what our intern YU-CHI HSU discovered in our latest blog! https://lnkd.in/gKG2jY7J
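One way to see how a batch-size-dependent eval loss can arise, and the shape of a fix, is the aggregation sketch below. Averaging per-batch mean losses weights small or padding-heavy batches unevenly, while summing token losses and dividing by the total token count is invariant to batch size. This is illustrative code under our own assumptions, not the Hugging Face internals or the exact fix from the blog:

```python
# Batch-size-invariant eval loss: sum per-token NLL, divide by total tokens.
import torch
import torch.nn.functional as F

def eval_loss(model, batches):
    total_nll, total_tokens = 0.0, 0
    with torch.no_grad():
        for input_ids, labels in batches:  # labels: -100 marks padding, pre-aligned
            logits = model(input_ids).logits
            nll = F.cross_entropy(
                logits.view(-1, logits.size(-1)), labels.view(-1),
                ignore_index=-100, reduction="sum",  # sum, NOT a per-batch mean
            )
            total_nll += nll.item()
            total_tokens += (labels != -100).sum().item()
    return total_nll / total_tokens  # same number for any batch size
```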
-
Our chief scientist, Tom Costello, decodes the rise of Chinese models. https://lnkd.in/gaE3ZmXj Happy 4th! Happy reading.