AI2’s open source Tulu 3 lets anyone play the AI post-training game | TechCrunch


Ask anyone in the open source AI community, and they will tell you the gap between them and the big private companies is more than just computing power. AI2 is working to fix that, first with fully open source datasets and models, and now with an open and easily adapted post-training regimen to turn “raw” large language models into usable ones.

Contrary to what many think, “foundation” language models don’t come out of the training process ready to put to work. The pre-training process is necessary, of course, but far from sufficient. Some even believe that pre-training may soon no longer be the most important part at all.

That’s because the post-training process is increasingly being shown to be where real value can be created. That’s where the model is molded from a giant, know-it-all network, one that will as readily produce Holocaust denial talking points as it will cookie recipes, into something focused and usable. You generally don’t want the former!

Companies are secretive about their post-training regimens because, while everyone can scrape the web and make a model using state-of-the-art methods, making that model useful to, say, a therapist or research analyst is a completely different challenge.

AI2 (formerly known as the Allen Institute for AI) has spoken out about the lack of openness in ostensibly “open” AI projects, like Meta’s Llama. While the model is indeed free for anyone to use and tweak, the sources and process of making the raw model and the method of training it for general use remain carefully guarded secrets. It’s not bad — but it also isn’t really “open.”

AI2, on the other hand, is committed to being as open as it can possibly be, from exposing its data collection, curation, cleaning, and other pipelines to the exact training methods it used to produce LLMs like OLMo.

But the simple truth is that few developers have the chops to run their own LLMs to begin with, and even fewer can do post-training the way Meta, OpenAI, or Anthropic does — partly because they don’t know, but also because it’s technically complex and time-consuming.

Fortunately, AI2 wants to democratize this aspect of the AI ecosystem as well. That’s where Tulu 3 comes in. It’s a huge improvement over an earlier, more rudimentary post-training process (called, you guessed it, Tulu 2); in the nonprofit’s tests, this resulted in scores on par with the most advanced “open” models out there. It’s based on months of experimentation, reading, and interpreting what the big guys are hinting at, and lots of iterative training runs.

A diagram doesn’t really capture it all, but you can see the general shape of it. Image Credits: AI2

Basically, Tulu 3 covers everything from choosing which topics you want your model to care about — for instance, downplaying multilingual capabilities but dialing up math and coding — to taking it through a long regimen of data curation, reinforcement learning, fine-tuning, and preference tuning, plus tweaking a bunch of other meta-parameters and training processes that I couldn’t adequately describe to you. The result is, hopefully, a far more capable model focused on the skills you need it to have.
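To make the shape of that pipeline concrete, here is a minimal, purely illustrative sketch in Python. The stage names, skill weights, and function signatures are assumptions made up for this example, not AI2's actual Tulu 3 API; the point is only that a recipe like this is an ordered sequence of stages applied to a base model, with the skill emphasis decided up front during data curation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a Tulu-3-style post-training recipe.
# All names and values here are illustrative, not AI2's real interface.
@dataclass
class PostTrainingRecipe:
    # Relative emphasis per skill during data curation (illustrative):
    # e.g. dial up math and coding, downplay multilingual ability.
    skill_weights: dict = field(default_factory=lambda: {
        "math": 1.5, "coding": 1.5, "multilingual": 0.5,
    })
    # Stages applied in order to the base model.
    stages: tuple = (
        "curate_data",
        "supervised_finetune",
        "preference_tune",
        "reinforcement_learn",
    )

def run_recipe(base_model: str, recipe: PostTrainingRecipe) -> list:
    """Apply each stage in sequence; here each stage is only logged."""
    log = [f"base={base_model}"]
    for stage in recipe.stages:
        log.append(stage)  # a real pipeline would train the model here
    return log

steps = run_recipe("llama-3-8b", PostTrainingRecipe())
```

In a real implementation each stage would be a full training run with its own data and hyperparameters; the sketch only shows the ordering that the article describes.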

The real point, though, is taking one more toy out of the private companies’ toybox. Previously, if you wanted to build a custom-trained LLM, it was very hard to avoid using a major company’s resources one way or the other, or hiring a middleman who would do the work for you. That’s not only expensive, but it introduces risks that some companies are loath to take.

For instance, medical research and service companies: sure, you could use OpenAI’s API, or talk to Scale or whoever to customize an in-house model, but both of those options expose sensitive user data to outside companies. If that’s unavoidable, you just have to bite the bullet — but if it isn’t? Like if, for instance, a research organization released a soup-to-nuts pre- and post-training regimen that you could implement on-premises? That may well be a better alternative.

AI2 is using this itself, which is the best endorsement one can give. Even though the test results it’s publishing today use Llama as a foundation model, it’s planning to put out an OLMo-based, Tulu-3-trained model soon that should offer even more improvements over the baseline and also be fully open source, tip to tail.

If you’re curious how the model performs currently, give the live demo a shot.

Source: Techcrunch
