Wednesday, February 28th 2024

ServiceNow, Hugging Face & NVIDIA Release StarCoder2 - a New Open-Access LLM Family

ServiceNow, Hugging Face, and NVIDIA today announced the release of StarCoder2, a family of open-access large language models for code generation that sets new standards for performance, transparency, and cost-effectiveness. StarCoder2 was developed in partnership with the BigCode Community, managed by ServiceNow, the leading digital workflow company making the world work better for everyone, and Hugging Face, the most-used open-source platform, where the machine learning community collaborates on models, datasets, and applications. Trained on 619 programming languages, StarCoder2 can be further trained and embedded in enterprise applications to perform specialized tasks such as application source code generation, workflow generation, text summarization, and more. Developers can use its code completion, advanced code summarization, code snippets retrieval, and other capabilities to accelerate innovation and improve productivity.

StarCoder2 offers three model sizes: a 3-billion-parameter model trained by ServiceNow; a 7-billion-parameter model trained by Hugging Face; and a 15-billion-parameter model built by NVIDIA with NVIDIA NeMo and trained on NVIDIA accelerated infrastructure. The smaller variants deliver strong performance while saving on compute costs, as fewer parameters require less compute during inference. In fact, the new 3-billion-parameter model matches the performance of the original StarCoder 15-billion-parameter model.

"StarCoder2 stands as a testament to the combined power of open scientific collaboration and responsible AI practices with an ethical data supply chain," emphasized Harm de Vries, lead of ServiceNow's StarCoder2 development team and co-lead of BigCode. "The state-of-the-art open-access model improves on prior generative AI performance to increase developer productivity and provides developers equal access to the benefits of code generation AI, which in turn enables organizations of any size to more easily meet their full business potential."

"The joint efforts led by Hugging Face, ServiceNow, and NVIDIA enable the release of powerful base models that empower the community to build a wide range of applications more efficiently with full data and training transparency," said Leandro von Werra, machine learning engineer at Hugging Face and co-lead of BigCode. "StarCoder2 is a testament to the potential of open source and open science as we work toward democratizing responsible AI."

"Since every software ecosystem has a proprietary programming language, code LLMs can drive breakthroughs in efficiency and innovation in every industry," said Jonathan Cohen, vice president of applied research at NVIDIA. "NVIDIA's collaboration with ServiceNow and Hugging Face introduces secure, responsibly developed models and supports broader access to accountable generative AI that we believe will benefit the global community."

StarCoder2 Models Supercharge Custom Application Development
StarCoder2 models share a state-of-the-art architecture and carefully curated data sources from BigCode that prioritize transparency and open governance to enable responsible innovation at scale. StarCoder2 advances the potential of future AI-driven coding applications, including text-to-code and text-to-workflow capabilities. With broader, deeper programming training, it provides repository context, enabling accurate, context-aware predictions. These advancements serve seasoned software engineers and citizen developers alike, accelerating business value and digital transformation.
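To illustrate what context-aware prediction looks like in practice, models in the StarCoder family are typically prompted either with plain code to continue or with a fill-in-the-middle (FIM) prompt built from special infilling tokens. The sketch below assembles such a prompt; the `<fim_prefix>`/`<fim_suffix>`/`<fim_middle>` token names follow the original StarCoder's convention and should be treated as an assumption to verify against the StarCoder2 tokenizer, not an official recipe.

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt from the code before and
    after the cursor, using the special tokens the StarCoder family
    reserves for infilling (assumed token names)."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# Ask the model to fill in a function body, given its signature and the
# code that follows it.
prompt = build_fim_prompt(
    prefix="def average(xs):\n    ",
    suffix="\n\nprint(average([1, 2, 3]))\n",
)
print(prompt)
```

The model's completion would then be the text it generates after `<fim_middle>`, which an editor plugin can splice back between the prefix and suffix.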

The foundation of StarCoder2 is a new code dataset called The Stack v2, which is more than 7x larger than The Stack v1. In addition to the advanced dataset, new training techniques help the model understand low-resource programming languages (such as COBOL), mathematics, and program source code discussions.

Fine-Tuning Advances Capabilities With Business-Specific Data
Users can fine-tune the open-access StarCoder2 models with industry or organization-specific data using open-source tools such as NVIDIA NeMo or Hugging Face TRL. They can create advanced chatbots to handle more complex summarization or classification tasks, develop personalized coding assistants that can quickly and easily complete programming tasks, retrieve relevant code snippets, and enable text-to-workflow capabilities.
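As a rough illustration of that workflow, the sketch below formats organization-specific examples into training text and outlines a supervised fine-tuning run with Hugging Face TRL. The prompt template, dataset expectations, and hyperparameters are illustrative assumptions, and TRL's exact argument names vary between versions; treat this as a sketch, not a drop-in recipe.

```python
def to_training_text(instruction: str, completion: str) -> str:
    """Format one business-specific example into a single training
    string (an illustrative prompt template, not an official one)."""
    return f"### Instruction:\n{instruction}\n### Response:\n{completion}"

def finetune(dataset):
    # Imports are local so the sketch can be read without TRL installed.
    from trl import SFTConfig, SFTTrainer
    trainer = SFTTrainer(
        model="bigcode/starcoder2-3b",  # smallest StarCoder2 variant
        train_dataset=dataset,          # expects a "text" column
        args=SFTConfig(output_dir="starcoder2-finetuned"),
    )
    trainer.train()

example = to_training_text(
    "Summarize this function.",
    "It returns the arithmetic mean of a list.",
)
print(example)
```

A dataset of such strings (for instance, internal code-review comments paired with fixes) is what `finetune` would consume; NVIDIA NeMo offers an analogous path for teams on that stack.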

Organizations have already begun to fine-tune the foundational StarCoder model to create specialized task-specific capabilities for their businesses. ServiceNow's text-to-code Now LLM was purpose-built on a specialized version of the 15-billion-parameter StarCoder LLM, fine-tuned and trained for its workflow patterns, use cases, and processes. Hugging Face has also used the model to create its StarChat assistant.

BigCode Fosters Open Scientific Collaboration in AI
BigCode represents an open scientific collaboration led by Hugging Face and ServiceNow, dedicated to the responsible development of LLMs for code. The BigCode community actively participated in the technical aspects of the StarCoder2 project through working groups and task forces, leveraging ServiceNow's Fast LLM framework to train the 3-billion-parameter model, Hugging Face's nanotron framework for the 7-billion-parameter model, and the NVIDIA NeMo cloud-native framework and NVIDIA TensorRT-LLM software to train and optimize the 15-billion-parameter model.

Fostering responsible innovation is at the core of BigCode's purpose, demonstrated through its open governance, transparent supply chain, use of open-source software, and the ability for developers to opt data out of training. StarCoder2 was built using responsibly sourced data under license from the digital commons of Software Heritage, hosted by Inria. "StarCoder2 is the first code generation AI model developed using the Software Heritage source code archive and built to align with our policies for responsible development of models for code," stated Roberto Di Cosmo, director at Software Heritage. "The collaboration of ServiceNow, Hugging Face, and NVIDIA exemplifies a shared commitment to ethical AI development, advancing technology for the greater good."
StarCoder2, like its predecessor, will be made available under the BigCode Open RAIL-M license, allowing royalty-free access and use. Further fostering transparency and collaboration, the model's supporting code will continue to reside on the BigCode project's GitHub page. All StarCoder2 models will also be available for download from Hugging Face, and the StarCoder2 15-billion-parameter model is available on NVIDIA AI Foundation models for developers to experiment with directly from their browser, or through an API endpoint.
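For developers who download the weights from Hugging Face, loading the model for local code completion might look like the following minimal sketch. It assumes the `transformers` library and the `bigcode/starcoder2-3b` checkpoint name; the generation settings are illustrative, and the larger variants need correspondingly more memory.

```python
def complete_code(prompt: str, model_id: str = "bigcode/starcoder2-3b") -> str:
    """Generate a continuation of `prompt` with a StarCoder2 checkpoint.
    Local imports keep the sketch readable without transformers installed."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Called as `complete_code("def fibonacci(n):")`, this downloads the checkpoint on first use and returns the prompt plus the model's continuation.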

For more information on StarCoder2, visit huggingface.co/bigcode.
Sources: NVIDIA News, France 3 (image source)

2 Comments on ServiceNow, Hugging Face & NVIDIA Release StarCoder2 - a New Open-Access LLM Family

#1
ScaLibBDP
Completed a very quick-and-dirty set of tests of the StarCoder2 model (Big / 15B) on the Hugging Face website.

Some test codes generated from my prompts were absolutely Not related to what I wanted to see. For example:

My prompt: Initialize an array of long double variables of single floating point precision

Code from the model:
long double *ld_array = (long double *)malloc(sizeof(long double));

My comments: I did Not ask to allocate the array (using malloc), I asked to initialize it. Also, my own prompt contained an error, since long double is an extended-precision floating-point data type, not single precision (!).

I think the StarCoder2 model could help speed up ports from one programming language to another, for example from Python to C or C++. That is actually on my To-Do list as a C/C++ Software Engineer: I need to port a small application from Python to C. I hope the StarCoder2 model will help...
#2
Ware
ScaLibBDP said: "Some test codes generated from my prompts were absolutely Not related to what I wanted to see."
Some of these code AIs aren't made to accept English instructions.
I think Code Llama is better for English-to-code.
Even the Mistral and Llama chat models provided with "Chat with RTX" can occasionally write working code that runs without modification, given the right RAG files.

If you prompt StarCoder2 with:
function love.load()
it will proceed to write a complete game (that you will probably have to modify before it runs properly).

I gave it 1 line in pascal:
if Key = 'w' then GLCamera1.Position.Y := GLCamera1.Position.Y + 1;
and it responded with:
if Key = 'q' then GLCamera1.Position.X := GLCamera1.Position.X + 1;
if Key = 'e' then GLCamera1.Position.X := GLCamera1.Position.X - 1;
if Key = 'd' then GLCamera1.Position.Z := GLCamera1.Position.Z - 1;
if Key = 'a' then GLCamera1.Position.Z := GLCamera1.Position.Z + 1;
if Key = 's' then GLCamera1.Position.Y := GLCamera1.Position.Y - 1;

Those are the proper controls to move my camera in the other five directions.
It also wrote about five other versions of the exact same code; some were more complex, using the Shift key as a modifier to reverse direction.

It does get stuck a lot, writes junk, and repeats itself.
I think there is an autocomplete plugin for StarCoder in VS Code.
They all write some wonky stuff, and sometimes you have to carefully rephrase your prompt to get good results.