Amazon Cloud Drive

Thanks! I just uploaded 250GB… one issue: the mount wouldn't see all the movies until I unmounted and remounted it. There must be a delay or something. Plex couldn't see them. I unmounted, remounted, and rescanned in Plex, and it picked them all up nicely.

I really like this idea because if I move seedboxes, I just have to remount and I should be back in business.

I think I need to throttle the uploads because they're eating all of my 250Mb/s SyS upload.
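For what it's worth, rclone has a --bwlimit flag that should handle this; a minimal sketch, with the remote name and paths being placeholders for your own setup:

# limit the upload to 20 MBytes/s (~160Mb/s), leaving headroom on a 250Mb/s link
rclone copy --bwlimit 20M /home/user/media acd:media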

Normal playback from ACD via that mount is perfect. Fast forwarding or skipping seems to be a challenge, and Plex spins for like 30 seconds before it plays. There is a feature request open to allow the rclone mount to seek, which should take care of that.

When I used to use ACD_CLI, I would have to unmount and remount each time I uploaded new content. Not sure if you are going to have to do this all the time as well.

I always had stability issues with acd_cli mounts. I wouldn't touch a thing and my mounts to Amazon would go stale and dead, so obviously Plex wouldn't work. Hopefully this all-in-one solution with rclone proves to be more stable.

Looks like my unmount/remount issue will be fixed in 1.34: https://github.com/ncw/rclone/issues/680

@RXWatcher That's great that you're using all your upload speed when uploading your content. I use ACD_CLI and get around 8Mbps; it takes an age to upload. Having said that, I've got just under 9TB in my TV folder on ACD, generally speaking all 720p WEB-DL content. So in your experience, you'd say rclone is better for uploading?

@fdisk With ACD_CLI, I find that running the command acd_cli sync will update the drive.

One thing I have found with ACD_CLI is that sometimes it gives a bad mount: while it may still be readable by Plex, I can't upload to it. To get around this, I manually delete nodes.db, which is located in ~/.cache/acd_cli, and then run the "acd_cli sync" command again.
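For reference, that recovery sequence is just the following (assuming the default cache location):

# blow away acd_cli's local node cache, then rebuild it from Amazon
rm ~/.cache/acd_cli/nodes.db
acd_cli sync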

My ACD mount goes down every now and again. This is the Python script I use to check if the mount is down and remount. It’s very basic and no variables are used, so edit and use as you see fit.

# basic script for restarting the acd mount on failure and notifying via Pushbullet;
# hardcoded and not great, but whatever. Note this mounts read-only; change to suit your needs.
# put this in your crontab to run every 5 minutes:
# */5 * * * * python /home/user/acdmountdown.py > /dev/null

from pushbullet import Pushbullet
import os
import subprocess
import logging

logging.basicConfig(filename='acdmount.log', level=logging.INFO, format='%(asctime)s %(levelname)s %(message)s')
logging.info('ACD mount checker started')

# swallow output from the subprocess calls below
DEVNULL = open(os.devnull, 'wb')

pb = Pushbullet('YOUR_PUSHBULLET_KEY')

# If the mount is available, this check will return True.
exists = os.path.exists("/home/kamos/acd/cloud.encrypted/t-sYFZuyBZHfi1XTJr8lcgqP")

if not exists:
    logging.warning("ACD mount is down")
    push = pb.push_note("ACD mount down", "attempting restart")
    logging.info("attempting umount")
    subprocess.call(["/usr/local/bin/acd_cli", "umount"], stdin=None, stdout=DEVNULL)
    logging.info("attempting ACD mount")
    subprocess.call(["bash", "/home/kamos/acd/mount.sh"], stdin=None, stdout=DEVNULL)
    logging.info("running ACD sync")
    subprocess.call(["/usr/local/bin/acd_cli", "sync"], stdin=None, stdout=DEVNULL)

    # re-check the mount after the first restart attempt
    if not os.path.exists("/home/kamos/acd/cloud.encrypted/t-sYFZuyBZHfi1XTJr8lcgqP"):
        push = pb.push_note("ACD mount still down", "attempting nodes.db delete")
        logging.warning("ACD mount is still down, attempting nodes.db delete")
        subprocess.call(["rm", "/home/kamos/.cache/acd_cli/nodes.db"], stdin=None, stdout=DEVNULL)
        logging.info("running ACD sync")
        subprocess.call(["/usr/local/bin/acd_cli", "sync"], stdin=None, stdout=DEVNULL)
        logging.info("attempting ACD mount")
        subprocess.call(["bash", "/home/kamos/acd/mount.sh"], stdin=None, stdout=DEVNULL)

    else:
        push = pb.push_note("ACD mount back up", "restart successful")
        logging.info("ACD mount restart succeeded")

else:
    logging.info('ACD mount is up')

Basic logic here is:
1. Check if the mount is up. If not, unmount, remount, and sync.
2. Check if the mount is up again. If it's still down, delete nodes.db, sync, and remount.

I might move over to rclone if it proves to be more stable.

I don't know if rclone is better for performance, as I haven't used acd_cli+encfs in six months or so. I'm sure if I said it was, someone would pipe up with examples where acd_cli+encfs was better. In my opinion it's less hassle to set up because it's all in one app. From what I've read of others' opinions, rclone is the way to go for performance.

What I would recommend is trying it. I wouldn't even mess with the rclone mounts or encryption; I would just try it in place of acd_cli for the upload portion.
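Roughly, the equivalent commands look like this (the remote name "acd" is a placeholder for whatever you set up with "rclone config", and the paths are examples):

# acd_cli way:
acd_cli upload -x 2 /local/TV/ /TV/
# rclone way:
rclone copy /local/TV acd:TV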

I do know I can saturate my upload. I should have grabbed screenshots.

One thing discussed on the rclone GitHub site was trying to support encfs in at least a read-only format. I would be concerned with having 9TB of data uploaded if it's in encfs format (you didn't say if it was encrypted) and then cutting over to something new. I'm in a position where I don't really have anything in there to worry about.

I’m still a little concerned about the encryption right now. It’s still pretty new. What if he changes the format? There was talk of making it even more secure. At least the stuff I have uploaded is expendable. I can trash it and reload without too much difficulty.

Time will tell if this is more stable. One issue I have is the stale mount until it's fixed in 1.34. It's going to require that I unmount/remount in order to pick up the updates so Plex can see them.
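For now, my workaround is a manual remount cycle, something like this (paths and remote name are from my setup):

# drop the stale FUSE mount, then remount so the new uploads show up
fusermount -u /home/user/acd
rclone mount acd: /home/user/acd &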

All my stuff on ACD isn't encrypted. I did try encrypting everything, but I couldn't get the hang of it, and when I did cobble something together the upload was so slow it wasn't worth it.

I have a similar setup to you, in that I only share Plex with my in-laws and a few friends, so there would never be a great draw on the ACD resources.

I know nothing about rclone and was only able to install ACD_CLI by following tutorials. I'll try installing rclone locally and mounting the drive to see how I get on. All going well, I'll add it to my server.

The only thing you would need to change is the names: rather than encrypting the whole file, find a way to just change the names so that crawlers can't scan for them.
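As far as I know, rclone's crypt remote always encrypts file contents, but it does have a separate filename_encryption setting for scrambling the names. A rough sketch of what the rclone.conf entry looks like (remote names are placeholders, and the passwords get filled in by "rclone config"):

[acd-crypt]
type = crypt
remote = acd:encrypted
# "standard" encrypts the file names; "off" leaves them readable
filename_encryption = standard
# set (and obscured) automatically when you run "rclone config"
password = ***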

Observation: I have two seedboxes right now, a beefy online.net box (LT 2016 with 3 SSDs in RAID 0, 1.2GB/s disk I/O) and a SyS Canada box. I uploaded to ACD from the online.net box using the process I laid out last night and the speeds sucked. From the SyS box I was working in rclone last night, moving data between ACD and GDrive and moving other things around in the encrypted ACD, and the speeds were WAY better even with SyS's limited upload.

I'm thinking the speeds are better in my case because I'm US-based, so my ACD and GDrive are probably on US-based servers: France -> US-based ACD/GDrive vs. Canada -> US-based ACD/GDrive.

Location could have something to do with it. I have a solid Xeon Hetzner machine and uploads are slow when copying to the mounted drive. However, if I remote desktop in and upload via the browser, it can max out my speed. The only problem with that is the 2GB cap on file uploads, which can affect some longer 720p episodes and movies.

rclone's author has said he doesn't see the mount being stable enough to use for uploading, as that is very sensitive to timeouts, etc. He recommends using the command-line rclone to upload.

Interesting, I never did try uploading from the command line. I just tried it now with acd_cli and I'm getting really acceptable speeds, around 45MB/s, using the command

acd_cli upload -x 2 /home/smilingheadcase/Downloads/TV/ /TV/

This will be a fresh upload, but I wonder how it will behave with duplicates. Say I run this command every few days, will it only upload the new files?

acd_cli does a hash check on each of the files it's uploading vs. what's on ACD already, so it won't upload any duplicate files. Also, ACD has a problem with duplicate file/folder names, so acd_cli will throw an error for duplicates and ignore them when uploading to ACD.

Just finished the test there and it didn’t upload duplicates :slight_smile:

My backups will be a much easier affair from now on.

RXWatcher - have you played further with rclone mounting for Plex streaming? Do you think this is close to being stable and reliable? I'm keeping an eye on this. If it works out, I'll upload all my media, sell off my local server, and rely on a dedicated server at SYS with media on an rclone mount, either with ACD or Google Drive. Curious for any further thoughts on this.

I haven't had that many people hit it yet, but yes, it seems reliable. I have 1.2TB in it now. I know people have played back media on web and Rokus. Fast forwarding is an issue.

My issue remains having to remount it after every upload so Plex can see the new media.

I am thinking about moving to only this. I, too, have a large media server at home… 36TB. I'm thinking that if I can replicate it all in ACD/GDrive with rclone, I can cut way down on my home internet usage.

SyS CA still seems like the best connection for me to ACD/GDrive. online.net has been terrible.

Yes, I’ve had good success with SYS CA in the past. I’m in Canada so that helps too. Maybe in 1.34, they will fix the remounting and fast forwarding issues. If that’s the case, I think I’m sold. Just need to slowly upload all my media first.

Or just re-download it. That's how I'm going to handle it. I had a local CouchPotato/Sonarr setup and I've moved those databases to my QuickBox system. That worked. I haven't reconfigured them to point to the new mounts yet, but I will. I will then have them pull fresh copies of everything. There is no way I'm uploading it all from my house :slight_smile:

Yes, I suppose that's an option: go through your media, see what you want to keep, and re-download it. I have 20Mbps upload at my place; it would take about two months to upload it all.