Blob Storage
Getting Started
- Enable the blob storage in your NuxtHub project by adding the blob property to the hub object in your nuxt.config.ts file.
export default defineNuxtConfig({
hub: {
blob: true
}
})
- Configure your production storage provider in Nitro
export default defineNuxtConfig({
nitro: {
storage: {
BLOB: {
driver: 'vercel-blob',
access: 'public',
/* any additional connector options */
}
}
},
hub: {
blob: true,
},
})
By default, NuxtHub will automatically use the filesystem during local development. You can modify this behaviour by specifying a different storage driver.
export default defineNuxtConfig({
  nitro: {
    // default blob driver during local development
    devStorage: {
      BLOB: {
        driver: 'fs',
        base: './.data/blob'
      }
    }
  },
})
hubBlob()
Server composable that returns a set of methods to manipulate the blob storage.
list()
Returns a paginated list of blobs (metadata only).
export default eventHandler(async () => {
const { blobs } = await hubBlob().list({ limit: 10 })
return blobs
})
Params
- options:
  - limit: the maximum number of blobs to return per request. Defaults to 1000.
  - cursor: the cursor to continue from a previous paginated list.
  - folded: when true, the list will be folded using the / separator and the list of folders will be returned.
Return
Returns a BlobListResult.
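For example, a minimal sketch of a folded listing, assuming the folded option described above:
// group blobs by folder using the `folded` option
const { blobs, folders } = await hubBlob().list({ folded: true })
// `folders` contains the folded prefixes; `blobs` contains the remaining blobs at this level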
Return all blobs
To fetch all blobs, you can use a do...while loop to fetch the next page until the cursor is null.
let blobs = []
let cursor = null
do {
const res = await hubBlob().list({ cursor })
blobs.push(...res.blobs)
cursor = res.cursor
} while (cursor)
serve()
Returns a blob's data and sets the Content-Type, Content-Length and ETag headers.
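// assumed route file: server/routes/images/[...pathname].get.ts (so that /images/my-image.jpg below is served from the blob storage)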
export default eventHandler(async (event) => {
const { pathname } = getRouterParams(event)
return hubBlob().serve(event, pathname)
})
<template>
<img src="/images/my-image.jpg">
</template>
You can also set a Content-Security-Policy header to add an additional layer of security:
export default eventHandler(async (event) => {
const { pathname } = getRouterParams(event)
setHeader(event, 'Content-Security-Policy', 'default-src \'none\';')
return hubBlob().serve(event, pathname)
})
Params
- event: the current H3 event.
- pathname: the name of the blob to serve.
Return
Returns the blob's raw data and sets the Content-Type and Content-Length headers.
head()
Returns a blob's metadata.
const metadata = await hubBlob().head(pathname)
Params
- pathname: the name of the blob.
Return
Returns a BlobObject.
get()
Returns a blob body.
const blob = await hubBlob().get(pathname)
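A sketch of a route that returns a 404 when the blob is missing; the route path and [...pathname] param are assumptions:
export default eventHandler(async (event) => {
  const { pathname } = getRouterParams(event)
  const blob = await hubBlob().get(pathname)
  if (!blob) {
    // get() returns null when the blob is not found
    throw createError({ statusCode: 404, message: 'Blob not found' })
  }
  return blob
})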
Params
- pathname: the name of the blob.
Return
Returns a Blob or null if not found.
put()
Uploads a blob to the storage.
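// assumed route file: server/api/files.post.ts (matches the $fetch('/api/files', { method: 'POST' }) call in the Vue example below)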
export default eventHandler(async (event) => {
const form = await readFormData(event)
const file = form.get('image') as File
if (!file || !file.size) {
throw createError({ statusCode: 400, message: 'No file provided' })
}
ensureBlob(file, {
maxSize: '1MB',
types: ['image']
})
return hubBlob().put(file.name, file, {
addRandomSuffix: false,
prefix: 'images'
})
})
See an example on the Vue side:
<script setup lang="ts">
async function uploadImage (e: Event) {
const form = e.target as HTMLFormElement
await $fetch('/api/files', {
method: 'POST',
body: new FormData(form)
}).catch((err) => alert('Failed to upload image:\n'+ err.data?.message))
form.reset()
}
</script>
<template>
<form @submit.prevent="uploadImage">
<label>Upload an image: <input type="file" name="image"></label>
<button type="submit">
Upload
</button>
</form>
</template>
Params
- pathname: the name of the blob to store.
- body: the blob's data.
- options:
  - addRandomSuffix: when true, a random suffix will be added to the blob's name. Defaults to false.
  - prefix: the prefix to prepend to the blob's pathname.
Return
Returns a BlobObject.
del()
Delete a blob with its pathname.
export default eventHandler(async (event) => {
const { pathname } = getRouterParams(event)
await hubBlob().del(pathname)
return sendNoContent(event)
})
You can also delete multiple blobs at once by providing an array of pathnames:
await hubBlob().del(['images/1.jpg', 'images/2.jpg'])
You can also use the delete() method as an alias of del().
Params
- pathname: the name of the blob to delete, or an array of pathnames.
Return
Returns nothing.
handleUpload()
This is an "all in one" function that validates a Blob by checking its size and type and uploads it to the storage. It is designed to be used with the useUpload() Vue composable and can be used to handle file uploads in API routes.
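// assumed route file: server/api/blob.put.ts (matches useUpload('/api/blob', { method: 'PUT' }) in the component below)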
export default eventHandler(async (event) => {
return hubBlob().handleUpload(event, {
formKey: 'files', // read file or files from the `formKey` field of the request body (the body should be a `FormData` object)
multiple: true, // when `true`, the `formKey` field will be an array of `Blob` objects
ensure: {
types: ['image/jpeg', 'image/png'], // allowed types of the file
},
put: {
addRandomSuffix: true
}
})
})
<script setup lang="ts">
const upload = useUpload('/api/blob', { method: 'PUT' })
async function onFileSelect(event: Event) {
const uploadedFiles = await upload(event.target as HTMLInputElement)
// file uploaded successfully
}
</script>
<template>
<input type="file" name="file" @change="onFileSelect" multiple accept="image/jpeg, image/png" />
</template>
Params
- event: the H3 event containing the form data.
- options:
  - formKey: the form field to read the file(s) from. Defaults to 'files'.
  - multiple: when true, the formKey field will be an array of Blob objects.
  - ensure: see the ensureBlob() options for more details.
  - put: see the put() options for more details.
Return
Returns a BlobObject or an array of BlobObject if multiple is true.
Throws an error if the file doesn't meet the requirements.
handleMultipartUpload()
Handle the request to support multipart upload.
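// assumed route file: server/api/files/multipart/[action]/[...pathname].ts (exposes the [action] and [...pathname] params mentioned below)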
export default eventHandler(async (event) => {
return await hubBlob().handleMultipartUpload(event)
})
Make sure your route catches the [action] and [...pathname] params, as in the sketch above. On the client side, you can use the useMultipartUpload() composable to upload a file in parts.
<script setup lang="ts">
async function uploadFile(file: File) {
const upload = useMultipartUpload('/api/files/multipart')
const { progress, completed, abort } = upload(file)
}
</script>
See useMultipartUpload() for usage details.
Params
- event: the H3 event containing the multipart upload request.
- options:
  - addRandomSuffix: when true, a random suffix will be added to the blob's name. Defaults to false.
createMultipartUpload()
It is recommended to use the handleMultipartUpload() method to handle the multipart upload request. If you want to handle multipart uploads manually using this utility, keep in mind that you cannot use it with Vercel Blob due to the payload size limit of Vercel functions; consider using the Vercel Blob Client SDK instead.
Start a new multipart upload.
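// assumed route file: server/api/files/multipart/[...pathname].post.ts (matches the POST call in the client example below)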
export default eventHandler(async (event) => {
const { pathname } = getRouterParams(event)
const mpu = await hubBlob().createMultipartUpload(pathname)
return {
uploadId: mpu.uploadId,
pathname: mpu.pathname,
}
})
Params
- pathname: the name of the blob to store.
- options:
  - addRandomSuffix: when true, a random suffix will be added to the blob's name. Defaults to true.
Return
Returns a BlobMultipartUpload.
resumeMultipartUpload()
It is recommended to use the handleMultipartUpload() method to handle the multipart upload request.
Continue processing of an unfinished multipart upload.
To upload a part of the multipart upload, you can use the uploadPart() method:
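// assumed route file: server/api/files/multipart/[...pathname].put.ts (matches the PUT call in the client example below)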
export default eventHandler(async (event) => {
  const { pathname } = getRouterParams(event)
  const { uploadId, partNumber } = getQuery(event)
  // read the raw part body as a Buffer
  const body = await readRawBody(event, false)
  const mpu = hubBlob().resumeMultipartUpload(pathname, uploadId as string)
  return await mpu.uploadPart(Number(partNumber), body!)
})
Complete the upload by calling the complete() method:
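// assumed route file: server/api/files/multipart/complete.post.ts (matches the POST call in the client example below)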
export default eventHandler(async (event) => {
const { pathname, uploadId } = getQuery(event)
const { parts } = await readBody(event)
const mpu = hubBlob().resumeMultipartUpload(pathname, uploadId)
return await mpu.complete(parts)
})
If you want to cancel the upload, you need to call the abort() method:
export default eventHandler(async (event) => {
const { pathname } = getRouterParams(event)
const { uploadId } = getQuery(event)
const mpu = hubBlob().resumeMultipartUpload(pathname, uploadId)
await mpu.abort()
return sendNoContent(event)
})
A simple example of a multipart upload on the client side using the routes above:
async function uploadLargeFile(file: File) {
const chunkSize = 10 * 1024 * 1024 // 10MB
const count = Math.ceil(file.size / chunkSize)
const { pathname, uploadId } = await $fetch(
`/api/files/multipart/${file.name}`,
{ method: 'POST' },
)
const uploaded = []
for (let i = 0; i < count; i++) {
const start = i * chunkSize
const end = Math.min(start + chunkSize, file.size)
const partNumber = i + 1
const chunk = file.slice(start, end)
const part = await $fetch(
`/api/files/multipart/${pathname}`,
{
method: 'PUT',
query: { uploadId, partNumber },
body: chunk,
},
)
uploaded.push(part)
}
return await $fetch(
'/api/files/multipart/complete',
{
method: 'POST',
query: { pathname, uploadId },
body: { parts: uploaded },
},
)
}
Params
- pathname: the name of the blob.
- uploadId: the ID of the multipart upload to resume.
Return
Returns a BlobMultipartUpload.
ensureBlob()
ensureBlob() is a handy util to validate a Blob by checking its size and type:
// Will throw an error if the file is not an image or is larger than 1MB
ensureBlob(file, { maxSize: '1MB', types: ['image']})
Params
- file: the file to validate.
- options: at least maxSize or types should be provided.
  - maxSize: the maximum size of the file, as (1 | 2 | 4 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1024) + (B | KB | MB | GB), e.g. '512KB', '1MB', '2GB', etc.
  - types: the allowed types of the file, e.g. ['image/jpeg'].
Return
Returns nothing.
Throws an error if the file doesn't meet the requirements.
Vue Composables
The following composables are meant to be used in your Vue pages and components, not in the server/ directory.
useUpload()
useUpload is used to handle file uploads in your Nuxt application.
<script setup lang="ts">
const upload = useUpload('/api/blob', { method: 'PUT' })
async function onFileSelect({ target }: Event) {
const uploadedFiles = await upload(target as HTMLInputElement)
// file uploaded successfully
}
</script>
<template>
<input
accept="image/jpeg, image/png"
type="file"
name="file"
multiple
@change="onFileSelect"
>
</template>
Params
- apiBase: the API route to upload the file(s) to (e.g. '/api/blob' in the example above).
- options:
  - formKey: the form field to upload the file(s) from. Defaults to 'files'.
  - multiple: allow multiple files to be uploaded. Defaults to true.
Return
Return a MultipartUpload
function that can be used to upload a file in parts.
const { completed, progress, abort } = upload(file)
const data = await completed
useMultipartUpload()
Application composable that creates a multipart upload helper.
export const mpu = useMultipartUpload('/api/files/multipart')
Params
- pathname: the API route that handles the multipart upload with handleMultipartUpload().
- options:
  - partSize: the size of each part. Defaults to 10MB.
  - concurrent: the number of parts uploaded concurrently. Defaults to 1.
  - maxRetry: the maximum number of retries for a failed part upload. Defaults to 3.
  - fetchOptions: its query and headers will be merged with the options provided by the uploader.
Return
Return a MultipartUpload
function that can be used to upload a file in parts.
const { completed, progress, abort } = mpu(file)
const data = await completed
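A sketch of how the returned function fits into a component; the file input handling and watchEffect logging are illustrative assumptions, not part of the composable's API:
<script setup lang="ts">
const mpu = useMultipartUpload('/api/files/multipart')

async function onFileSelect(event: Event) {
  const file = (event.target as HTMLInputElement).files?.[0]
  if (!file) return
  // `abort()` can be called to cancel the upload
  const { completed, progress, abort } = mpu(file)
  // `progress` is a readonly ref that can be watched for UI feedback
  watchEffect(() => console.log('upload progress:', progress.value))
  const blob = await completed
  console.log('uploaded', blob?.pathname)
}
</script>
<template>
  <input type="file" @change="onFileSelect">
</template>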
Storage Providers
NuxtHub supports multiple storage providers for blob storage. In development mode, NuxtHub automatically configures the filesystem (fs
) driver for local development.
Filesystem (fs)
The filesystem driver stores blobs locally on your development machine.
export default defineNuxtConfig({
nitro: {
storage: {
BLOB: {
driver: 'fs',
base: './.data/blob'
}
}
}
})
Vercel Blob
For production deployments on Vercel, use the Vercel Blob driver.
export default defineNuxtConfig({
nitro: {
storage: {
BLOB: {
driver: 'vercel-blob',
access: 'public'
}
}
}
})
Cloudflare R2
For Cloudflare deployments, you can use Cloudflare R2 with either bindings (recommended) or the S3-compatible driver.
Using R2 Bindings (Recommended)
When deploying to Cloudflare Workers, use R2 bindings for optimal performance and integration.
export default defineNuxtConfig({
nitro: {
storage: {
BLOB: {
driver: 'cloudflare-r2',
binding: 'BLOB'
}
}
}
})
Make sure to configure the R2 binding in your wrangler.toml:
[[r2_buckets]]
binding = "BLOB"
bucket_name = "my-bucket"
Using S3-Compatible Driver
Alternatively, you can use the S3-compatible driver with Cloudflare R2. This is useful for deploying your project in different environments while still using Cloudflare R2.
export default defineNuxtConfig({
nitro: {
storage: {
BLOB: {
driver: 's3',
accessKeyId: process.env.CLOUDFLARE_R2_ACCESS_KEY_ID,
secretAccessKey: process.env.CLOUDFLARE_R2_SECRET_ACCESS_KEY,
region: 'auto',
endpoint: `https://${process.env.CLOUDFLARE_ACCOUNT_ID}.r2.cloudflarestorage.com`,
bucket: process.env.CLOUDFLARE_R2_BUCKET_NAME
}
}
}
})
Amazon S3
For AWS S3 storage, use the S3 driver.
export default defineNuxtConfig({
nitro: {
storage: {
BLOB: {
driver: 's3',
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: process.env.AWS_REGION,
bucket: process.env.AWS_S3_BUCKET
}
}
}
})
Types
BlobObject
interface BlobObject {
pathname: string
contentType: string | undefined
size: number
httpEtag: string
uploadedAt: Date
httpMetadata: Record<string, string>
customMetadata: Record<string, string>
url: string | undefined
}
BlobMultipartUpload
export interface BlobMultipartUpload {
pathname: string
uploadId: string
uploadPart(
partNumber: number,
value: string | ReadableStream<any> | ArrayBuffer | ArrayBufferView | Blob
): Promise<BlobUploadedPart>
abort(): Promise<void>
complete(uploadedParts: BlobUploadedPart[]): Promise<BlobObject>
}
BlobUploadedPart
export interface BlobUploadedPart {
partNumber: number;
etag: string;
}
MultipartUploader
export type MultipartUploader = (file: File) => {
completed: Promise<SerializeObject<BlobObject> | undefined>
progress: Readonly<Ref<number>>
abort: () => Promise<void>
}
BlobListResult
interface BlobListResult {
blobs: BlobObject[]
hasMore: boolean
cursor?: string
folders?: string[]
}