Artifact Repositories

Big Picture integrates with artifact repositories to store and serve software installers. Artifacts can be stored in external repositories or Big Picture managed storage. Both approaches work with the same update decision API.


Big Picture supports two artifact storage models:

  1. External references — Artifacts stored in external repositories (S3, GCS, JFrog, etc.)
  2. Managed storage — Artifacts stored in Big Picture managed object storage

Both models provide the same functionality. Clients receive artifact URLs in signed update decisions, regardless of storage location.


External references point to artifacts stored in existing repositories. Big Picture registers artifact metadata (checksum, size, URL) but does not store the artifact itself.

Supported external repositories include:

  • Amazon S3 — S3 buckets with public or signed URLs
  • Google Cloud Storage — GCS buckets with public or signed URLs
  • JFrog Artifactory — Artifactory repositories
  • MinIO — Self-hosted S3-compatible storage
  • GitHub Releases — GitHub release assets
  • Any HTTP/HTTPS URL — Any accessible artifact URL

Register external artifacts by providing URL, checksum, and size:

curl -X POST "${BP_BASE_URL}/v1/artifacts" \
  -H "Authorization: Bearer $BP_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "source_type": "EXTERNAL",
    "sha256": "abc123...",
    "size_bytes": 12345678,
    "external_url": "https://artifacts.example.com/releases/v1.0.0/installer.msi"
  }'
External artifacts must meet these requirements:

  • Artifacts must be accessible via HTTPS
  • URLs must remain stable (they must not expire or change)
  • Checksums must match the registered values
  • Access control must be configured appropriately (public or signed URLs)
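The sha256 and size_bytes values can be computed locally before registration. A minimal sketch using standard Linux tools (the stand-in file and its contents are assumptions for illustration; point the commands at your real artifact):

```shell
# Stand-in artifact for illustration; replace with your real installer.
ARTIFACT=installer.msi
printf 'example artifact content' > "$ARTIFACT"

# SHA-256 checksum: first field of sha256sum output.
SHA256=$(sha256sum "$ARTIFACT" | awk '{print $1}')

# Size in bytes: wc -c reading from stdin avoids the filename in the output.
SIZE_BYTES=$(wc -c < "$ARTIFACT" | tr -d ' ')

echo "sha256=$SHA256 size_bytes=$SIZE_BYTES"
```

These two values go directly into the `sha256` and `size_bytes` fields of the registration request.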

With managed storage, Big Picture stores artifacts in its own object storage. Artifacts are uploaded through the API and served from Big Picture endpoints.

Supported managed storage backends:

  • Amazon S3 — S3 buckets configured for Big Picture
  • Google Cloud Storage — GCS buckets configured for Big Picture
  • MinIO — Self-hosted S3-compatible storage
  • Filesystem — Local filesystem (development only)

Upload artifacts through a three-step process:

  1. Initiate upload — Request upload URL and upload ID
  2. Upload content — PUT artifact content to upload URL
  3. Complete upload — Verify checksum and finalize artifact

# Initiate upload
curl -X POST "${BP_BASE_URL}/v1/artifacts/uploads" \
  -H "Authorization: Bearer $BP_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "size_bytes": 12345678,
    "sha256": "abc123..."
  }'

# Upload artifact (PUT to upload_url)
curl -X PUT "${UPLOAD_URL}" \
  --data-binary @installer.msi \
  -H "Content-Type: application/octet-stream"

# Complete upload
curl -X POST "${BP_BASE_URL}/v1/artifacts/uploads/${UPLOAD_ID}/complete" \
  -H "Authorization: Bearer $BP_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "checksum": "abc123..."
  }'
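Scripting these steps means pulling UPLOAD_URL and UPLOAD_ID out of the initiate response. A minimal sketch; the response field names and sample values below are assumptions for illustration (check your deployment's actual /v1/artifacts/uploads response), and in real scripts jq is the more robust way to parse JSON than the sed used here:

```shell
# Sample initiate response (shape assumed for illustration).
RESPONSE='{"upload_id":"up-123","upload_url":"https://storage.example.com/up-123"}'

# Extract fields with sed; prefer jq in production scripts.
UPLOAD_ID=$(printf '%s' "$RESPONSE" | sed -n 's/.*"upload_id":"\([^"]*\)".*/\1/p')
UPLOAD_URL=$(printf '%s' "$RESPONSE" | sed -n 's/.*"upload_url":"\([^"]*\)".*/\1/p')

echo "upload_id=$UPLOAD_ID"
echo "upload_url=$UPLOAD_URL"
```

With those variables set, the PUT and complete calls above can run unchanged.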

Choose external references when:

  • Existing artifact repositories are already in use
  • Artifacts are large and storage costs matter
  • Repositories provide access control and CDN capabilities
  • Organizations want to maintain control over artifact storage

Choose managed storage when:

  • Simplified artifact management is preferred
  • Artifacts should be co-located with Big Picture
  • Organizations want Big Picture to handle the storage lifecycle
  • External repositories are not available

Configure S3 buckets for external references or managed storage:

storage:
  type: s3
  s3:
    bucket: bigpicture-artifacts
    region: us-east-1
    endpoint: "" # Set for S3-compatible storage such as MinIO

Configure GCS buckets for external references or managed storage:

storage:
  type: gcs
  gcs:
    bucket: bigpicture-artifacts
    project: my-project

Use JFrog repositories as external references. Configure repository URLs and access credentials as needed.


  • Public access — Artifacts can be publicly accessible; Big Picture verifies checksums to ensure integrity.
  • Signed URLs — Artifacts can use signed URLs (S3 presigned URLs, GCS signed URLs) for time-limited access.
  • Authenticated access — Artifacts can require authentication; clients must provide credentials when downloading.


Best practices for artifact management:

  1. Verify checksums — Always compute and verify SHA-256 checksums
  2. Stable URLs — Ensure artifact URLs remain stable and accessible
  3. HTTPS only — Use HTTPS for all artifact URLs
  4. Access control — Configure appropriate access control for artifacts
  5. CDN integration — Use CDN capabilities when available
  6. Lifecycle policies — Configure repository lifecycle policies to expire old artifacts

Common issues:

Artifact not accessible — Verify the URL is correct and reachable from client networks

Checksum mismatch — Verify artifact content matches registered checksum

Upload failures — Check storage backend configuration and credentials

Access denied — Verify access control settings and credentials
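For checksum mismatches, re-computing the artifact's SHA-256 locally and comparing it with the registered value is the quickest check. A minimal sketch; the empty stand-in file (whose SHA-256 is well known) replaces a real download, and EXPECTED_SHA256 would normally come from the artifact registration:

```shell
# Stand-in for the downloaded artifact: an empty file for illustration.
ARTIFACT=installer.msi
: > "$ARTIFACT"

# The checksum registered with Big Picture (here, SHA-256 of empty input).
EXPECTED_SHA256=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

# Re-compute locally and compare.
ACTUAL_SHA256=$(sha256sum "$ARTIFACT" | awk '{print $1}')
if [ "$ACTUAL_SHA256" = "$EXPECTED_SHA256" ]; then
  echo "checksum OK"
else
  echo "checksum mismatch: expected $EXPECTED_SHA256 got $ACTUAL_SHA256" >&2
fi
```

A mismatch usually means the artifact was re-uploaded without re-registering, or the download was truncated or corrupted in transit.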