
Brad Micklea for KitOps

Originally published at jozu.com

Why enterprise AI projects are moving too slowly

In AI projects, the biggest (and most solvable) source of friction is the handoffs between data scientists, application developers, testers, and infrastructure engineers as the project moves from development to production. This friction exists at companies of every size, in every industry and vertical. Gartner's research shows that AI/ML projects are rarely deployed in under 9 months, despite the availability of ready-to-go large language models (LLMs) like Llama, Mistral, and Falcon.

Why do AI/ML projects move so much slower than other software projects? It’s not for lack of effort or lack of focus - it’s because of the huge amount of friction in the AI/ML development, deployment, and operations life cycle.

AI/ML isn’t just about the code

A big part of the problem is that AI/ML projects aren’t like other software projects. They have a lot of different assets that are held in different locations. Until now, there hasn't been a standard mechanism to package, version, and share these assets in a way that is accessible to data science and software teams alike. Why?

It’s tempting to think of an AI project as “just a model and some data” but it’s far more complex than that:

  • Model code
  • Adapter code
  • Tokenizer code
  • Training code
  • Training data
  • Validation data
  • Configuration files
  • Hyperparameters
  • Model features
  • Serialized models
  • API interface
  • Embedding code
  • Deployment definitions

Parts of this list are small and easily shared (like the code, through Git). But others can be massive (the datasets and serialized models) or difficult for non-data science team members to capture and contextualize (the features and hyperparameters).

Making matters worse, these assets are spread across different storage locations with no cross-artifact versioning:

  • Code in git
  • Datasets in DVC or cloud storage like AWS S3
  • Features and hyperparameters in ML training and experimentation tools
  • Serialized models in a container registry
  • Deployment definitions in separate repos

Keeping track of all these assets (which may be unique to a single model, or shared with many models) is tricky...

Which changes should an application or SRE team be aware of?

How do you track the provenance of each asset and ensure it wasn't accidentally or deliberately tampered with?

How do you control access and guarantee compliance?

How does each team know when to get involved?

It’s almost impossible to have good cross-team coordination and collaboration when people can’t find the project’s assets, don’t know which versions belong together, and aren’t notified of impactful changes.
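To make that concrete, here's a minimal sketch (illustrative only, with hypothetical file paths and version label, not how any particular tool does it) of what answering even the tamper-detection question looks like by hand: hash every artifact and record the digests in one manifest so that "which versions belong together?" has a single answer.

```python
# Illustrative only: hash each project artifact and record the digests in one
# manifest so "which versions belong together?" has a single answer.
# The paths and version label below are hypothetical placeholders.
import hashlib
import json
from pathlib import Path

ARTIFACTS = [
    "src/train.py",
    "data/train.csv",
    "models/model.pkl",
    "deploy/deployment.yaml",
]

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

manifest = {
    "version": "2024-05-01.1",
    "artifacts": {p: sha256_of(p) for p in ARTIFACTS if Path(p).exists()},
}
Path("ai-project-manifest.json").write_text(json.dumps(manifest, indent=2))
```

Now imagine doing that by hand for every asset, in every storage location, every time anything changes, while keeping every team notified.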

I can hear you saying... “but people have been developing models for years...there must be a solution!”

Kind of. Data scientists haven't felt this issue too strongly because they all use Jupyter notebooks. But…

Jupyter notebooks are great...and terrible

Data scientists work in Jupyter notebooks because notebooks are ideal for experimentation.

But you can't easily extract the code or data from a notebook, and it's not clear to a non-data scientist where the features, parameters, weights, and biases live in the notebook. And while a data scientist can run the model in the notebook on their own machine, the notebook doesn't produce a shareable, runnable model that non-data science teams can use.

Notebooks are perfect for early development by data scientists, but they are a walled garden, and one that engineers can’t use.
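To see how walled-off a notebook really is, here's a hedged sketch using the nbformat library (the notebook and output filenames are hypothetical) to pull just the code cells out into a script:

```python
# A minimal sketch, assuming nbformat is installed and "analysis.ipynb" is the
# data scientist's notebook: extract just the code cells into a plain script.
import nbformat

nb = nbformat.read("analysis.ipynb", as_version=4)
code_cells = [cell.source for cell in nb.cells if cell.cell_type == "code"]

with open("extracted_model_code.py", "w") as f:
    f.write("\n\n".join(code_cells))
```

Even this "easy" step recovers only the code; the data, the environment, and a runnable, shareable model are still missing.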

What about containers?

Unfortunately, getting a model that works offline on a data scientist’s machine to run in production isn’t as simple as dropping it into a container.

That’s because the model created by a data science team is best thought of as a prototype. It hasn’t been designed to work in production at scale.

For example, the features it uses may take too long to calculate in production. Or the libraries it uses may be well suited to rapid development iteration but not to the sustained load of production. Even something as simple as matching package versions in production can take hours or days of work.
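One small mitigation for the version-matching problem, sketched here purely as an illustration (it assumes it runs inside the data scientist's environment, and the lock filename is made up): snapshot the exact installed package versions with the standard library so the production team can match them instead of rediscovering them.

```python
# Illustrative sketch: record the exact package versions the model was
# developed against so production can match them instead of rediscovering them.
# Run in the data scientist's environment; the lock filename is made up.
from importlib.metadata import distributions

pins = sorted(
    f"{dist.metadata['Name']}=={dist.version}" for dist in distributions()
)

with open("model-requirements.lock", "w") as f:
    f.write("\n".join(pins) + "\n")
```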

We haven't even touched on the changes that are likely needed for logging and monitoring, continuous training, and deployment pipelines that include a feedback loop mechanism.
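As one concrete example of the kind of change involved, here's a hedged sketch of an inference wrapper that adds the latency logging a notebook prototype typically lacks (the model here is assumed to be any object with a predict() method):

```python
# Illustrative sketch: wrap inference with the latency logging a notebook
# prototype usually lacks. `model` is assumed to be any object exposing a
# .predict() method.
import logging
import time

logger = logging.getLogger("model_serving")

def predict_with_logging(model, features):
    start = time.perf_counter()
    prediction = model.predict(features)
    latency_ms = (time.perf_counter() - start) * 1000
    logger.info("prediction served in %.2f ms for %d features",
                latency_ms, len(features))
    return prediction
```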

Completing the model is only half the job, and if you wait until the model is done to start thinking about operational needs, you'll likely lose weeks and have to redo parts of the model development cycle several times.

Bridging the divide between data science and operations

In my previous roles at Red Hat and Amazon Web Services, I faced a dilemma familiar to many tech organizations: the organizational separation between data science and operations teams.

As much as the data scientists were wizards with data, their understanding of deploying and managing applications in a production environment was limited. Their AI projects lacked crucial production elements like packaging and integration, which led to frequent bottlenecks and frustrations when transitioning from development to deployment.

The solution was not to silo these teams but to integrate them. Embedding data scientists directly into application teams meant they attended the same meetings, shared meals, and naturally understood that they (like their colleagues) were responsible for the AI project's success in production. This made them more proactive in preparing their models for production and gave them a sense of accomplishment each time an AI project was deployed or updated.

Integrating teams not only reduces friction but enhances the effectiveness of both groups. Learning from the DevOps movement, which bridged a similar gap between software developers and IT operations, embedding data scientists within application teams eliminates the "not my problem" mindset and leads to more resilient and efficient workflows.

There’s more...

Today, only a few organizations have experience putting AI projects into production. However, nearly every organization I talk to is developing AI projects, so it's only a matter of time before those projects need to live in production. Sadly, most organizations aren't ready for the problems that day will bring.

I started Jozu to help people avoid an unpleasant experience when their new AI project hits production.

Our first contribution is a free, open source tool called KitOps that packages and versions AI projects into ModelKits. It uses existing standards, so you can store ModelKits in the enterprise registry you already use.

You can read more about how we see ModelKits bridging the Data Science / Software Engineering divide. Or, if you're curious to go deeper, read about how to use versioning and tagging with ModelKits to speed up and de-risk AI projects.

If you've pushed an AI project into production, I'd love to hear what your experiences were and what helped or hindered.

Top comments (6)

nida

Great read! The friction between teams in AI project transitions is a real challenge.

I hadn't realized the role Jupyter notebooks play in slowing down the transition from development to production. It’s eye-opening to see how their limitations could impact the overall pace of AI projects.

Matija Sosic

Nice overview! Yeah, Jupyter notebooks have a special place in every dev's heart :D

Jesse Williams

@matijasos that sounds like experience talking lol

Jesse Williams

I had no idea that it takes almost 9 months to get to prod. But it makes sense now that I think about it.

Brad Micklea

I think what's even more shocking is that only ~50% of AI projects ever make it to production usage. That's a huge amount of wasted effort and cost. I can't imagine any software company that would tolerate that kind of batting average for long...

Jesse Williams

Especially considering that application code goes to prod weekly, and in some cases daily.