We’re building some exciting new products at Fastino. But before releasing them, we want your input. We invite you to join a small group of professionals in our user testing cohort.
✅ Join 3 short 30-minute sessions over 6 weeks
✅ Get early access to Fastino’s latest products
✅ Receive a gift card for participating
We’re especially looking for folks who are:
- Using AI tools in their work
- Curious about building or testing AI/LLM-powered tools
- Currently working in a full-time role
If that sounds like you, we’d love to hear from you.
👉 Apply in under 2 minutes here: https://lnkd.in/gWvVzuDJ
We’re keeping the cohort small, so if you’re interested, please apply soon!
About us
Fastino powers enterprise AI developers with high-performance, task-optimized language models built to scale. Unlike generic LLMs, Fastino’s models are engineered for accuracy, speed, and security, delivering near-instant CPU inference and flexible deployment across environments.
- Website: https://fastino.ai
- Industry: Research Services
- Company size: 11-50 employees
- Type: Privately Held
Updates
Thanks to PwC for hosting the Private Equity Digital Executive Forum event in NYC and assembling such an impressive lineup of speakers! Our founder George Hurn-Maloney joined a State of AI panel alongside Scott Likens (PwC, Global Chief AI Engineer) and Shaown Nandi (Amazon Web Services (AWS), Director of Technology). We explored how foundation models are evolving to meet real enterprise needs, particularly for agentic tasks like function calling, summarization, and data structuring. Special thanks to Mahera (Walia) Mayer for moderating the thought-provoking session!
Huge thanks to the MLOps community for the shoutout! Come find us and other great teams at the AI Agent Builders Summit in San Francisco next week 👇
📍 May 28 | The Hibernia | 4:30 PM
🔗 https://lnkd.in/gt7d_-H5
Big ups to our friends at Fastino 💥
🚀 They build Task-specific Language Models (TLMs). Think of it this way: instead of using a giant general-purpose AI for every little thing, Fastino crafts smaller, highly focused AI models.
🚀 These are like precision instruments, expertly trained for particular jobs, whether it’s quickly summarizing text, pulling out specific data, or even translating instructions into code. This means they often perform these specific tasks faster, more accurately, and usually with a more predictable price tag than their larger counterparts. Pretty cool, no?
🤘 Come meet them and a bunch of other cool teams next Wednesday at the Hibernia :) This one’s for builders who care about speed and scale.
🔗 https://lnkd.in/gt7d_-H5
What an incredible evening in New York! 🌇✨ A huge thanks to everyone who joined us and Insight Partners for our rooftop happy hour in NYC. We loved connecting with so many talented AI developers, builders, and investors.
Special shoutout to Michael Spiro from Insight Partners for co-hosting with our founder, George Hurn-Maloney, and for sparking some great conversations about the future of AI and task-specific language models.
Thank you for making this event memorable. See you next time, Big Apple!
If you missed us this time, fill out this form for first dibs on our next event: https://lnkd.in/e9ZsFC27
"Fastino is reportedly using less than $100,000 (USD) worth of gaming graphics cards to handle its training needs… This stands out against more commercial operations like Elon Musk’s giant xAI, which has seriously intense power requirements.” By leveraging affordable, off-the-shelf GPUs and focusing on task-specific language models, Fastino delivers accuracy, speed, and security, without the massive infrastructure costs of legacy LLMs. It’s a smarter, more sustainable path for enterprise AI. Curious how Fastino’s approach is changing the game? Dive into the full story in our feature here: Link to article: https://lnkd.in/gw9xmmQQ Tom's Hardware
📣 Calling all NYC AI builders and investors! Join us on Tuesday, May 20 for a casual AI & Agents Happy Hour hosted by Fastino 🍸 Come meet George Hurn-Maloney (Founder, Fastino), Michael Spiro (Investor, Insight Partners), and other team members.
✅ Great convos + community
✅ Real talk on scaling LLMs
✅ Drinks, views, and a peek at Fastino’s TLMs in action
🗓️ May 20 | 🕠 5:30–7:30 PM | 📍 Bryant Park, NYC
🔗 Register via the Luma link below to get the full location: bit.ly/fastino-nyc
#NYCtech #AI #LLMs #AgentOps #GenAI #Networking
We just dropped our first deep dive on Fastino's TLMs, which are purpose-built to outperform generalist LLMs like GPT-4o on high-scale enterprise tasks.
🦊 Millisecond latency
🦊 Benchmarked against real-world use cases
🦊 Inference on CPU and low-end GPU
Read the full launch blog here ⬇️ https://lnkd.in/e_HNk4H2
"It’s no secret that the future of generative AI for enterprise is likely in smaller, more focused language models." We couldn’t agree more. In a world of massive, general-purpose LLMs, Fastino’s Task-Specific Language Models (TLMs) are purpose-built for accuracy, speed, and security – outperforming even GPT-4o on specific tasks. Dive into our TechCrunch feature for the full story. Learn more here: https://bit.ly/4m5uaQL
Fastino trains AI models on cheap gaming GPUs and just raised $17.5M led by Khosla
Fastino reposted this
Thrilled to join you on this adventure, Fastino! 🦊 Jon Chu https://lnkd.in/gtdTQx8n
Fastino reposted this
Big news from the #AIInfrastructure world — and a proud moment for us at M12! 🌟
M12 portfolio company Fastino just raised $17.5M in a round led by Khosla Ventures, with participation from Insight Partners and Valor Equity Partners.
We backed Fastino at the pre-seed stage because we believe in their bold vision:
⚡️ Smaller, faster models
💻 Trained on consumer-grade GPUs
🏢 Built for real-world enterprise use
This approach is a major step toward accessible, cost-efficient AI that meets the needs of real businesses — not just big tech.
Huge congrats to the Fastino team!
📖 Read more via TechCrunch: https://lnkd.in/gtdTQx8n