How to mount an S3 bucket as a file system
In this article, we describe the setup and mounting of an S3 bucket in your user space environment with Rclone for Ubuntu 16.04 and newer. This allows you to use the Nine S3 storage as a local file system.
Rclone offers a wide range of features and prevailed in our comparison against s3fs and Goofys as the best option in terms of performance and compatibility.
Requirements
- Access Key and Secret for the S3 bucket (they can be found in your Nine cockpit).
- In order for an unprivileged user to create their own filesystem in user space, the kernel module FUSE (Filesystem in Userspace) must be installed. The module is available on our managed servers; a quick availability check is shown after this list.
- Root Server: FUSE is available in the Ubuntu repository and can be installed via apt: apt-get install fuse
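Whether FUSE is already available can be checked quickly (the paths below assume a standard Ubuntu installation):
:~ $ ls -l /dev/fuse
:~ $ which fusermount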
Installation
Download the latest Linux binary (Intel/AMD, 64-bit) from https://rclone.org/downloads/ and place it in your user space:
:~ $ wget https://downloads.rclone.org/rclone-current-linux-amd64.zip
:~ $ unzip rclone-current-linux-amd64.zip ; rm rclone-current-linux-amd64.zip
:~ $ mkdir ~/bin && mv rclone-v*-linux-amd64/rclone ~/bin/rclone && chmod u+x ~/bin/rclone
:~ $ ~/bin/rclone version
rclone v1.56.1
- os/version: ubuntu 20.04 (64 bit)
- os/kernel: 5.4.0-80-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.16.8
- go/linking: static
- go/tags: none
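Optionally, ~/bin can be added to your PATH so rclone can be invoked without the full path (this sketch assumes a Bash login shell):
:~ $ echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc
:~ $ source ~/.bashrc
The examples below use the full path ~/bin/rclone and work either way.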
Configuration
To be automatically recognized by Rclone, the configuration must be placed in the file ~/.config/rclone/rclone.conf. Create the directory, then add a section like the following to the file, adjusting the values according to the information for your user and bucket. Multiple endpoints and users can be configured as separate [section] blocks.
:~ $ mkdir -p ~/.config/rclone/
[s3-nine]
type = s3
provider = Other
access_key_id = 6aaf50b18357446ab1a25a6c93361569
secret_access_key = fcf2c9c6bc5c4384a4e1dbff99d2cc52
region = nine-cz42
endpoint = https://cz42.objectstorage.nineapis.ch
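To verify that the remote is configured correctly, the buckets on the endpoint can be listed (assuming the credentials above are valid and permit listing):
:~ $ ~/bin/rclone lsd s3-nine: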
Afterwards, the bucket can be set up as a mount point. The mount point can be an existing directory within your current directory structure or a new directory. A new directory can be created using mkdir ~/path.
The following command mounts the S3 bucket:
Note: Older versions of rclone use --vfs-cache-mode write (without "s").
:~ $ ~/bin/rclone mount s3-nine:<Bucketname> ~/<Mountpoint> --vfs-cache-mode writes --use-server-modtime
Note: The Rclone process runs in the foreground in your current shell. You can now open a second shell and check the status. If everything works as desired, you can proceed to the next step and create a systemd service that runs in the background.
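From the second shell, you can check that the mount is active (<Mountpoint> is the placeholder used above):
:~ $ mount | grep rclone
:~ $ df -h ~/<Mountpoint>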
In our tests, --vfs-cache-mode writes has proven to be the most sensible option between compatibility and disk usage.
To improve performance, read operations can also be cached, with some implications: with the cache modes minimal and full, disk usage can be higher or certain operations on the file system won't work. With those options, it may also make sense to adjust the buffer and cache sizes with the parameters --buffer-size and --vfs-cache-max-size. We suggest looking up the official Rclone documentation if you're considering using these modes: https://rclone.org/commands/rclone_mount/#vfs-file-caching
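As a sketch of such a tuned mount (the cache mode and the size values below are illustrative assumptions, not recommendations; adjust them to your workload and the available disk space):
:~ $ ~/bin/rclone mount s3-nine:<Bucketname> ~/<Mountpoint> --vfs-cache-mode full --vfs-cache-max-size 1G --buffer-size 32M --use-server-modtime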
Auto start / monitoring
We use a systemd service unit to automatically start Rclone and mount the bucket after a reboot.
We create the following service unit configuration in ~/.config/systemd/user/rclone.service. Adjust the values marked with < > according to the information for your user, bucket and mount point:
[Unit]
Description=rclone mount
Documentation=http://rclone.org/docs/
Wants=network-online.target
After=network-online.target
StartLimitInterval=500
StartLimitBurst=5
[Service]
Type=notify
Environment=MOUNTPOINT=<MOUNTPOINT>
Environment=REMOTE_NAME=s3-nine
Environment=BUCKETNAME=<BUCKETNAME>
Restart=on-failure
RestartSec=5
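# Unmount a possibly stale mount point before starting; "|| true" ignores
# the error if nothing is mounted there.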
ExecStartPre=/bin/bash -c "/usr/bin/fusermount -uzq ${MOUNTPOINT} || true"
ExecStart=/usr/bin/env "${HOME}/bin/rclone" mount \
--vfs-cache-mode writes \
--use-server-modtime \
${REMOTE_NAME}:${BUCKETNAME} ${MOUNTPOINT}
ExecStop=/bin/fusermount -uzq ${MOUNTPOINT}
[Install]
WantedBy=default.target
The new systemd configuration must then be loaded with the command systemctl --user daemon-reload.
In order for the service to start automatically after a system reboot, the following command must be executed: systemctl --user enable rclone.service.
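Note: user units only start at boot if lingering is enabled for your user; whether this is already the case depends on the server setup. If the mount is missing after a reboot even though the unit is enabled, enabling lingering (which may require elevated privileges) should help:
:~ $ loginctl enable-linger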
The newly created service unit can now be started or the status of the unit can be retrieved with the following commands:
:~ $ systemctl --user start rclone.service
:~ $ systemctl --user status rclone.service
● rclone.service - rclone mount
Loaded: loaded (/home/www-data/.config/systemd/user/rclone.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-09-30 15:45:56 CEST; 1min 2s ago
Docs: http://rclone.org/docs/
Main PID: 12582 (rclone)
Status: "[15:46] vfs cache: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)"
CGroup: /user.slice/user-33.slice/user@33.service/rclone.service
└─12582 /home/www-data/bin/rclone mount --vfs-cache-mode writes --use-server-modtime s3-nine:bucket1 /home/www-data/testmountpoint
Sep 30 15:45:56 server systemd[805]: rclone.service: Service hold-off time over, scheduling restart.
Sep 30 15:45:56 server systemd[805]: Stopped rclone mount.
Sep 30 15:45:56 server systemd[805]: Starting rclone mount...
Sep 30 15:45:56 server systemd[805]: Started rclone mount.
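To follow the log output of the unit, for example while diagnosing mount problems, journalctl can be used:
:~ $ journalctl --user -u rclone.service -f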
Update
Updates of Rclone can be done using the built-in "selfupdate" function. This will download the latest version marked as "stable" and replace the binary that was used before:
:~ $ ~/bin/rclone selfupdate
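After the update, restart the service unit so that the mount runs with the new binary:
:~ $ systemctl --user restart rclone.service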
Troubleshooting
Error:
:~/s3mount $ ls
ls: cannot open directory '.': Transport endpoint is not connected
If Rclone mounted the bucket to the path your current shell session was pointing to, you must change into the directory again (cd; cd -).
Error:
:~ $ ~/bin/rclone mount s3-nine:test-bucket ~/s3mount --vfs-cache-mode writes --use-server-modtime
2021/09/23 14:27:56 Fatal error: Can not open: /home/www-data/s3mount: open /home/www-data/s3mount: transport endpoint is not connected
If the Rclone process terminates unexpectedly, the stale mount point must first be unmounted with the command fusermount -u <mountpoint>. After a restart of the systemd unit, the mount point should be available again: systemctl --user restart rclone.service.
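For example, to recover the mount from the error above (paths taken from the earlier examples):
:~ $ fusermount -u ~/s3mount
:~ $ systemctl --user restart rclone.service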