Feeling inspired to write your first TDS post? We’re always open to contributions from new authors.
It’s become more or less conventional wisdom that most machine learning projects don’t make it into production, and of those that do, many fail to deliver on their promise.
We should always take sweeping claims like these with a grain of salt, as accurate stats are hard to collect (and interpret), and some of the organizations circulating them have a stake in convincing practitioners that their solution is the key to all the AI-integration challenges they’re facing. Still, it’s hard to dismiss so many voices—from many different corners of our community—acknowledging that reaping the benefits of this emerging technology is harder than it might seem at first.
Our weekly highlights zoom in on the practical aspects of choosing, adopting, and making the most of AI-powered products and workflows. There’s never going to be a one-size-fits-all solution to the problem of integrating promising-yet-complex tools into a business, but we think that exploring these articles can frame the conversation in more useful and pragmatic terms. Let’s get to it.
- Carving Out Your Competitive Advantage with AI
What benefits can businesses actually reap by using AI? Dr. Janna Lipenkova expands on the mental model you can adopt to make smarter design and product decisions that will allow you to find the “sweet spot” for AI in your organization—going beyond automation to open up the space for more creativity and innovation.
- Integrating Multimodal Data into a Large Language Model
Umair Ali Khan presents a detailed, hands-on introduction to a cutting-edge approach that builds on recent work on contextual retrieval and makes it possible to include not just textual data in your RAG pipelines, but visual media as well. From receipts to charts and tables, ML workflows can now become more robust with the use of richer, multimodal data.
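(For a bare-bones sketch of the general idea behind multimodal retrieval, see the short code snippet after this list.)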
- How to Choose the Best ML Deployment Strategy: Cloud vs. Edge
“As ML adoption grows, there’s a rising demand for scalable and efficient deployment methods, yet specifics often remain unclear.” Vincent Vandenbussche patiently walks us through the various factors ML engineers need to consider as they decide on the best option for their specific projects and use cases.
- A Walkthrough of Nvidia’s Latest Multi-Modal LLM Family
Staying up to date with new developments in AI is undoubtedly important, but the rapid pace at which new models and tools arrive on the scene often makes it difficult for busy data professionals to keep up. Mengliu Zhao’s recent roundup offers a helpful overview of the new suite of multimodal LLMs released by NVIDIA and compares their performance to that of other models (both commercial and open source).
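Before we wrap up, here’s the promised sketch of the multimodal-retrieval idea that Umair’s article explores in depth: text chunks and images live in a single index, each paired with a short contextual description, and a query retrieves the most relevant items regardless of modality. To keep it self-contained, the embedding functions below are hypothetical random stand-ins rather than a real encoder, and none of this reproduces the article’s actual pipeline; treat it as an illustration of the data flow only.

```python
import numpy as np

# Hypothetical embedding helpers: deterministic random vectors standing in
# for a real multimodal encoder (e.g., a CLIP-style model). They only
# illustrate the data flow; similarities between them carry no meaning.
def embed_text(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(512)

def embed_image(image_path: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(image_path)) % (2**32))
    return rng.standard_normal(512)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A toy corpus mixing text chunks and visual media, each stored with a
# short contextual description of the document it comes from.
corpus = [
    {"kind": "text", "content": "Q3 revenue grew 12% year over year.",
     "context": "Excerpt from the 2024 annual report, section 2."},
    {"kind": "image", "content": "charts/q3_revenue_by_region.png",
     "context": "Bar chart of Q3 revenue by region, 2024 annual report."},
]

# Index every item in a shared vector space. Images are represented here by
# an image embedding plus an embedding of their contextual description.
index = []
for item in corpus:
    if item["kind"] == "text":
        vec = embed_text(item["context"] + " " + item["content"])
    else:
        vec = embed_image(item["content"]) + embed_text(item["context"])
    index.append((vec, item))

# Answering starts with retrieval: rank all indexed items, text and images
# alike, by cosine similarity to the query embedding.
query_vec = embed_text("How did revenue develop in Q3 across regions?")
ranked = sorted(index, key=lambda pair: cosine(query_vec, pair[0]), reverse=True)
for _, item in ranked[:2]:
    print(item["kind"], "->", item["content"])
```

The key design choice this sketch tries to surface is the one contextual retrieval emphasizes: every chunk, textual or visual, is indexed together with a brief description of its surrounding document, so that a receipt, chart, or table can be matched to a question even when the question never mentions the file itself.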