Use S3-Compatible Storage Backend During Local Development #540
Closed
AravindPrabhs
started this conversation in General
Replies: 1 comment
-
I realise this might be a more pertinent question for the cli since that is where the docker services get their environment variables (https://github.com/supabase/cli/blob/c859f647344de145a7fb92f62f0282d645c0d063/internal/start/start.go#L843).
-
I am developing an application that manipulates large files as they are uploaded and processed, so I need range requests to be performant. I've found the default file storage backend is not effective for this at all: subsequent range queries take linearly longer to process, as if the whole file is being re-read each time. I see the range request is effectively
fs.createReadStream(file, { start: startRange, end: endRange })
and maybe there is a way to optimise this. However, I realise an alternative approach would be to drop in an S3-compatible store and see if the local environment can connect to that. Would this be possible? I tried starting a MinIO container and pointing the storage service at it, but the file backend is still used.
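For context on the mechanism described above, here is a minimal sketch of serving a byte-range request with `fs.createReadStream`; the HTTP server and file path are illustrative assumptions, not the storage-api's actual handler:

```js
// Minimal sketch: serve one HTTP range request from a local file.
// The server and file path are illustrative; this is not the storage-api's code.
const fs = require('fs');
const http = require('http');

const FILE = '/tmp/large-file.bin'; // hypothetical path

http.createServer((req, res) => {
  const { size } = fs.statSync(FILE);
  const match = /bytes=(\d+)-(\d*)/.exec(req.headers.range || '');
  if (!match) {
    // No Range header: stream the whole file.
    res.writeHead(200, { 'Content-Length': size });
    fs.createReadStream(FILE).pipe(res);
    return;
  }
  const start = Number(match[1]);
  const end = match[2] ? Number(match[2]) : size - 1;
  res.writeHead(206, {
    'Content-Range': `bytes ${start}-${end}/${size}`,
    'Content-Length': end - start + 1,
  });
  // A fresh stream is opened per request; nothing is cached between successive range reads.
  fs.createReadStream(FILE, { start, end }).pipe(res);
}).listen(8080);
```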
Added this to my root .env file:
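The .env snippet itself isn't shown above. As an illustration only, the storage service's S3 backend is typically configured with variables along these lines; the exact names vary between storage-api versions, so treat every key below as an assumption to check against the version you're running:

```
# Illustrative only - variable names depend on the storage-api version in use.
STORAGE_BACKEND=s3                       # switch from the default file backend
GLOBAL_S3_BUCKET=local-bucket            # hypothetical bucket name
GLOBAL_S3_ENDPOINT=http://localhost:9230
GLOBAL_S3_FORCE_PATH_STYLE=true          # MinIO expects path-style addressing
AWS_ACCESS_KEY_ID=minioadmin             # MinIO's default credentials
AWS_SECRET_ACCESS_KEY=minioadmin
AWS_DEFAULT_REGION=us-east-1
```

Note that if the storage service itself runs in a container, `localhost` refers to that container rather than the host, so the endpoint would likely need to be `http://host.docker.internal:9230` or the MinIO container's address on a shared Docker network.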
and ran the following docker command:
docker run -p 9230:9000 -p 9231:9001 quay.io/minio/minio server /data --console-address ":9001"
I am pretty sure the environment variables aren't being passed to the storage service; how do I achieve this?
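One way to check is to inspect the environment the running storage container actually received. A minimal sketch, assuming the container's name contains "storage" (the exact name the tooling assigns is an assumption; use whatever `docker ps` shows):

```sh
# Find the local storage container (the name pattern is an assumption).
docker ps --format '{{.Names}}' | grep -i storage

# Print the environment variables the container was started with,
# to confirm whether the S3 settings made it through.
docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' <storage-container-name>
```

If the S3 variables don't appear in that list, they were never passed to the service, regardless of what the root .env file contains.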