Amazon Cloud Drive

Are there guides for setting up rclone with Amazon/Plex the same way the other one was?
This is something I would like to do as the next step on my dedi, but I am not nearly skilled enough with Linux to really know what to do.

It seems some of the issues with Amazon Drive have been solved now that it is offered in the UK. But I'm in Canada and live on the border with the States. I've had an account for years and order from it, I have Prime, and they offered a free year of Cloud Drive a while ago, so I am paying for it now. It has all my Canadian info, so they don't really care, at least not yet. The only thing that matters is that to watch content I have to use a US IP.

I have not installed rclone yet, but I will soon. The simplest way I know is the following:

sudo apt-get install unzip -y

cd ~

# download and extract the current build
wget https://downloads.rclone.org/rclone-current-linux-amd64.zip
unzip rclone-current-linux-amd64.zip

cd rclone-current-linux-amd64

#copy binary file
sudo cp rclone /usr/sbin/
sudo chown root:root /usr/sbin/rclone
sudo chmod 755 /usr/sbin/rclone
#install manpage
sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone.1 /usr/local/share/man/man1/
sudo mandb
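A quick sanity check once the copy is done (this assumes the steps above succeeded on your box):

```shell
rclone version   # prints the installed version
man -w rclone    # prints the path of the installed manpage
```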

So here's how my process flows… I just got it working this weekend, so there could be bugs.

  • rtorrent downloads a file into /home/user/torrents/rtorrent/incomplete
  • filebot processes the files and hardlinks them to another directory depending on whether it's a movie or a TV show
  • rclone uploads the media
  • rclone mounts the acd / gdrive directories
  • plex points to rclone mounts

rclone is set up to use remotes 'acd', 'gdrive', 'eacd' and 'egdrive' for normal and then encrypted.
One lesson learned: when you create the crypt remote on top of your ACD or GDrive remote, you need to include a colon. So if my unencrypted remote is 'acd', then when making the crypt one make sure you specify acd: (with the trailing colon).
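To make that concrete, here is roughly what the stanzas in rclone.conf end up looking like (the remote path is an assumption from my layout; the password values are the obscured strings rclone config writes for you):

```ini
[acd]
type = amazon cloud drive
# client_id / token are filled in by rclone config

[eacd]
type = crypt
# note the trailing colon: this wraps the 'acd' remote, not a local path
remote = acd:media
filename_encryption = standard
password = *** ENCRYPTED ***
password2 = *** ENCRYPTED ***
```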

I have a script which uploads to BOTH encrypted ACD and Gdrive…one copies, one moves:

# Encrypted Movies
rclone copy /home/user/torrents/rtorrent/complete/movies/ eacd:media/movies
rclone move /home/user/torrents/rtorrent/complete/movies/ egdrive:media/movies
# Encrypted TV Shows
rclone copy /home/user/torrents/rtorrent/complete/tv_shows/ eacd:media/tv_shows
rclone move /home/user/torrents/rtorrent/complete/tv_shows/ egdrive:media/tv_shows
# needed because rclone doesn't delete empty source directories with the move command
find /home/user/torrents/rtorrent/complete/movies/* -empty -type d -delete
find /home/user/torrents/rtorrent/complete/tv_shows/* -empty -type d -delete
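Since this runs from cron, a simple lock keeps two passes from stepping on each other; a minimal sketch, assuming /tmp/rclone-upload.lock as the lock path (the rclone and find lines above would go where the placeholder comment sits):

```shell
#!/bin/bash
# Take an exclusive, non-blocking lock so overlapping cron runs exit early.
exec 9>/tmp/rclone-upload.lock
if ! flock -n 9; then
    echo "upload already running"
    exit 0
fi
# ... rclone copy/move commands and the find cleanup go here ...
echo "upload pass done"
```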

I have a directory structure like this:

├── acd
│   ├── encrypted
│   │   └── media
│   │       ├── movies 
│   │       └── tv_shows
│   └── plain
├── gdrive
│   ├── encrypted
│   │   └── media
│   │       ├── movies
│   │       └── tv_shows
│   └── plain
└── unraid

Unraid is a sshfs mount back to my house so sonarr/couchpotato/etc can compare what I have at home vs seedbox.
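For reference, that sshfs mount is just a one-liner along these lines (host and paths here are placeholders for my real ones):

```shell
# allow_other so sonarr/couchpotato running as other users can read it;
# reconnect so the mount survives dropped SSH sessions
sshfs user@home-ip:/mnt/user /home/user/cloud/unraid -o allow_other,reconnect
```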

If you use the beta of rclone, it now has fuse options like --allow-other. It will be in 1.34.

This --allow-other allows plex to see the files under the mounts.

mount script:

 rclone mount eacd: /home/user/cloud/acd/encrypted/ --allow-other &
 rclone mount egdrive: /home/user/cloud/gdrive/encrypted/ --allow-other &

unmount script:

fusermount -u /home/user/cloud/acd/encrypted
fusermount -u /home/user/cloud/gdrive/encrypted

You now point plex to /home/user/cloud/acd/encrypted/media/movies and /home/user/cloud/acd/encrypted/media/tv_shows

A comment about why both: backup. I actually have 2 unlimited GDrive accounts and an unlimited Amazon account.
I picked up a cheap lifetime unlimited GDrive on eBay for a $15 one-time fee as a backup GDrive. I'm basically going to duplicate the data on Amazon and this backup GDrive account. The 2nd GDrive account is linked to my own domain and I pay $10/month for it. If my process flow works, then I'll dump the $10/month one. $60/year ($5 for the first year for me) for Amazon is darn cheap, and the $15 is insurance. I don't think I need the $120/year GDrive one.


Very nice, RXwatcher.


Thanks! I just uploaded 250GB… one issue: the mount wouldn't see all the movies until I unmounted and remounted it. There must be a delay or something, and Plex couldn't see them. I unmounted, remounted, and rescanned Plex, and it picked them all up nicely.

I really like this idea because I can just move seedboxes, remount, and I should be back in business.

I think I need to throttle the uploads because it's eating all of my 250Mb/s SYS upload.
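rclone's --bwlimit flag handles that; something like the following (the 20M value is just an example, in MBytes/s):

```shell
# cap rclone at ~20 MBytes/s so the upload doesn't saturate the link
rclone move /home/user/torrents/rtorrent/complete/movies/ egdrive:media/movies --bwlimit 20M
```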

Normal playback from ACD via that mount is perfect. Fast-forwarding or skipping is a challenge, though, and Plex spins for about 30 seconds before it plays. There is a feature request in to allow the rclone mount to seek, which should take care of that.

When I used to use acd_cli I would have to unmount and remount each time I uploaded new content. Not sure if you are going to have to do this all the time as well.

I always had stability issues with acd_cli mounts. I wouldn't touch a thing and my mounts to Amazon would go stale and dead, so obviously Plex wouldn't work. Hopefully this all-in-one solution with rclone proves to be more stable.

Looks like my unmount/remount issue will be fixed in 1.34.


@RXWatcher that's great that you're using all your upload speed when uploading your content. I use acd_cli and get around 8Mbps; it takes an age to upload. Having said that, I've got just under 9TB in my TV folder on ACD. Generally speaking it's all 720p WEB-DL content. So in your experience, you'd say rclone is better for uploading?

@fdisk with acd_cli I find that if I run the command acd_cli sync it will update the drive.

One thing I have found with acd_cli is that sometimes it gives a bad mount, and while it may still be readable by Plex, I can't upload to it. To get around this I manually delete nodes.db, which is located in ~/.cache/acd_cli, and then run the "acd_cli sync" command again.
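In shell terms that reset is just (paths per acd_cli's defaults):

```shell
# blow away acd_cli's local metadata cache and rebuild it from ACD
rm ~/.cache/acd_cli/nodes.db
acd_cli sync
```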

My ACD mount goes down every now and again. This is the Python script I use to check if the mount is down and remount. It’s very basic and no variables are used, so edit and use as you see fit.

# basic script for restarting acd mount on failure and notifying via pushbullet,
# hardcoded and not great but whatever, note this mounts read-only, change to suit your needs
# put this in your crontab to run every 5 minutes, e.g.:
# */5 * * * * python /home/user/acdmount.py > /dev/null   (script name assumed)

from pushbullet import Pushbullet
import os
import subprocess
import logging

logging.basicConfig(filename='acdmount.log', level=logging.INFO, format='%(asctime)s %(levelname)s %(message)s')
logging.info('ACD mount checker started')

DEVNULL = open(os.devnull, 'wb')

pb = Pushbullet('YOUR_PUSHBULLET_KEY')

# If the mount is available, this check will return True.
exists = os.path.exists("/home/kamos/acd/cloud.encrypted/t-sYFZuyBZHfi1XTJr8lcgqP")

if exists is not True:
    logging.warning("ACD mount is down")
    push = pb.push_note("ACD mount down", "attempting restart")
    logging.info("attempting umount")
    subprocess.call(["/usr/local/bin/acd_cli", "umount"], stdin=None, stdout=DEVNULL)
    logging.info("attempting ACD mount")
    subprocess.call(["bash", "/home/kamos/acd/mount.sh"], stdin=None, stdout=DEVNULL)  # mount script name assumed
    logging.info("running ACD sync")
    subprocess.call(["/usr/local/bin/acd_cli", "sync"], stdin=None, stdout=DEVNULL)

    if os.path.exists("/home/kamos/acd/cloud.encrypted/t-sYFZuyBZHfi1XTJr8lcgqP") is not True:
        push = pb.push_note("ACD mount still down", "attempting nodes.db delete")
        logging.warning("ACD mount is still down, attempting nodes.db delete")
        subprocess.call(["rm", "/home/kamos/.cache/acd_cli/nodes.db"], stdin=None, stdout=DEVNULL)
        logging.info("running ACD sync")
        subprocess.call(["/usr/local/bin/acd_cli", "sync"], stdin=None, stdout=DEVNULL)
        logging.info("attempting ACD mount")
        subprocess.call(["bash", "/home/kamos/acd/mount.sh"], stdin=None, stdout=DEVNULL)  # mount script name assumed

    elif os.path.exists("/home/kamos/acd/cloud.encrypted/t-sYFZuyBZHfi1XTJr8lcgqP"):
        push = pb.push_note("ACD mount back up", "restart successful")
        logging.info("ACD mount restart succeeded")

else:
    logging.info('ACD mount is up')

Basic logic here is:
Check if the mount is up.
If not, unmount, remount and sync.
Check if the mount is up again.
If not, delete nodes.db, sync and remount.
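For anyone who wants the same logic without the Python/Pushbullet dependency, a rough shell sketch (the canary path and mount script name are assumptions):

```shell
# If a known file under the mount is missing, assume the mount is stale:
# unmount, remount, and resync.
CANARY=/home/user/acd/cloud.encrypted/known-file
if [ ! -e "$CANARY" ]; then
    acd_cli umount /home/user/acd 2>/dev/null
    bash /home/user/acd/mount.sh
    acd_cli sync
fi
```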

I might move over to rclone if it proves to be more stable.

I don't know if rclone is better for performance, as I haven't used acd_cli+encfs in 6 months or so. I'm sure if I said it was, someone would pipe up with examples where acd_cli+encfs was better. In my opinion it's less hassle to set up because it's all in one app. From what I've read of others' opinions, rclone is the way to go for performance.

What I would recommend is trying it. I wouldn't even mess with the rclone mounts or encryption at first; I would just try it in place of acd_cli for the upload portion.

I do know I can saturate my upload. I should have grabbed screenshots.

One thing discussed on the rclone GitHub site was trying to support encfs in at least a read-only format. I would be concerned about having 9TB of data uploaded in encfs format (you didn't say if yours is encrypted) and then cutting over to something new. I'm in a position where I don't really have anything in there to worry about.

I’m still a little concerned about the encryption right now. It’s still pretty new. What if he changes the format? There was talk of making it even more secure. At least the stuff I have uploaded is expendable. I can trash it and reload without too much difficulty.

Time will tell if this is more stable. One issue I have is the stale mount until it's fixed in 1.34. It's going to require that I unmount/remount in order to pick up the updates so Plex can see them.

All my stuff on ACD isn't encrypted. I did try to get the hang of encrypting everything, but I couldn't, and when I did cobble something together the upload was so slow it wasn't worth it.

I have a similar setup to you, except that I only host Plex for my in-laws and a few friends, so there would never be a great draw on the ACD resources.

I know nothing about rclone and was only able to install acd_cli by following tutorials. I'll try installing rclone locally and mounting the drive to see how I get on. All going well, I'll add it to my server.

The only thing you would need to change is the names: rather than encrypting the whole file, find a way to just change the names so they can't be scanned for by crawlers.

Observation: I have 2 seedboxes right now. One is a beefy box (LT 2016 with 3 SSDs in RAID 0; 1.2GB/s disk I/O). I uploaded to ACD from it last night using the process I laid out, and the speeds sucked. I also have a SYS Canada box; I was working in rclone last night from that server, moving data between ACD and GDrive and moving other things around in the encrypted ACD, and the speeds were WAY better even with SYS's limited upload.

I'm thinking the speeds are better in my case because I'm US-based, so my ACD and GDrive are probably using US-based servers: France->US-based ACD/GDrive vs Canada->US-based ACD/GDrive.

The location could have something to do with it. I have a solid Xeon Hetzner machine and uploads are slow when copying to the mounted drive. However, if I remote-desktop in and upload via the browser, it can max out my speeds. The only problem with that is the 2GB cap on file-size uploads, which can affect some longer 720p episodes and movies.

rclone's author has said he doesn't see the mount being stable enough to use for uploading, as that is very sensitive to timeouts, etc. He recommends using the command-line rclone to upload.

Interesting, I never did try uploading from the command line. I just tried it now with acd_cli and I'm getting really acceptable speeds, around 45MB/s, using the command

acd_cli upload -x 2 /home/smilingheadcase/Downloads/TV/ /TV/

This will be a fresh upload, but I wonder how it will behave with duplicates. Say I run this command every few days: will it only upload the new files?

acd_cli does a hash check on each of the files it's uploading vs what's on ACD already, so it won't upload any duplicate files. Also, ACD has a problem with duplicate file/folder names, so acd_cli will throw an error for duplicates and ignore them when uploading to ACD.
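The skip-by-checksum behaviour described there boils down to comparing hashes before copying; a toy local illustration with md5sum (throwaway paths under /tmp, plain cp standing in for the upload):

```shell
# Set up a "source" and an already-partly-uploaded "destination".
rm -rf /tmp/src /tmp/dst
mkdir -p /tmp/src /tmp/dst
echo "episode one" > /tmp/src/e1.mkv
cp /tmp/src/e1.mkv /tmp/dst/e1.mkv    # e1 is already "uploaded"
echo "episode two" > /tmp/src/e2.mkv  # e2 is new

# Copy only files whose checksum differs from (or is absent at) the destination.
for f in /tmp/src/*; do
    name=$(basename "$f")
    if [ -f "/tmp/dst/$name" ] && \
       [ "$(md5sum < "$f")" = "$(md5sum < "/tmp/dst/$name")" ]; then
        echo "skip $name"    # duplicate: same content already there
    else
        cp "$f" "/tmp/dst/$name"
        echo "copy $name"
    fi
done
# prints: skip e1.mkv, then copy e2.mkv
```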

Just finished the test there and it didn't upload duplicates 🙂

My backups will be a much easier affair from now on.

RXWatcher - have you played further with rclone mounting for Plex streaming? Do you think this is close to being stable and reliable? I'm keeping an eye on this. If it works out, I'll upload all my media, sell off my local server, and rely on a dedicated server at SYS with media on an rclone mount, either ACD or Google Drive. Curious for any further thoughts on this.