huggingface_client

Build Status

A server- and browser-based Hugging Face REST API client for the inference and inference endpoint APIs.

The client supports standard query-based inference and inference tasks for Natural Language Processing (NLP), audio and vision. It also supports the inference endpoint API for the creation and control of inference endpoints, as well as the provider endpoint.
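As an illustration of what a query-based inference call looks like at the HTTP level, here is a sketch using Python's standard library. This shows the public Hugging Face inference API conventions (the `/models/<model>` path and the JSON `inputs` field), not this client's own interface; the model id and API key are placeholders.

```python
import json
import urllib.request

# Base path of the hosted Hugging Face inference API.
BASE_PATH = "https://api-inference.huggingface.co"


def build_inference_request(model: str, inputs: str, api_key: str):
    """Build a POST request for a query-based inference task.

    `model` is a repository id; the endpoint URL is formed by
    appending '/models/<model>' to the base path, and the query
    text travels in the JSON 'inputs' field. The API key is sent
    as a Bearer token.
    """
    url = f"{BASE_PATH}/models/{model}"
    body = json.dumps({"inputs": inputs}).encode("utf-8")
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(url, data=body, headers=headers, method="POST")


# No network call is made until the request is actually opened.
req = build_inference_request("distilbert-base-uncased", "Hello world", "hf_xxx")
```

The client wraps this plumbing for you; the sketch is only to show what travels over the wire.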

The bindings for the inference endpoint API are generated from the Hugging Face Inference Endpoint OpenAPI specification using the OpenAPI Generator project.

API documentation generated by the OpenAPI Generator can be found here.

No equivalent OpenAPI specification exists for the inference API.

See the examples.md document in the examples folder for usage examples.

A Hugging Face API key is needed for authentication; see here for instructions on how to obtain one.

Using the inference API with your own inference endpoint is simply a matter of substituting the Hugging Face base path with your inference endpoint URL and setting the model parameter to '', since inference endpoints are created on a per-model (repository) basis; see here for more details. Two examples of this are included in the example directory.
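The substitution can be sketched as follows. This is a hypothetical helper in Python: the `/models/<model>` URL convention is that of the hosted inference API, and the dedicated endpoint URL is an invented placeholder, not a value from this client.

```python
# Base path of the hosted Hugging Face inference API.
HUGGING_FACE_BASE_PATH = "https://api-inference.huggingface.co"


def inference_url(base_path: str, model: str) -> str:
    """Return the URL an inference query is sent to.

    With the hosted API, the model repository id is appended to the
    base path. With a dedicated inference endpoint the URL already
    identifies the model (endpoints are created per model), so the
    model parameter is passed as '' and the base path is used as-is.
    """
    if model:
        return f"{base_path}/models/{model}"
    return base_path


# Hosted inference API: the model is selected per request.
hosted = inference_url(HUGGING_FACE_BASE_PATH, "gpt2")

# Dedicated inference endpoint: created per model, so model is ''.
dedicated = inference_url("https://myendpoint.example.huggingface.cloud", "")
```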

Note that the inference endpoint and provider APIs use the V2 version of the Hugging Face API. The V1 version is deprecated by Hugging Face and should not be used; see here.