Restore functionality broken?

Running this command:

sudo -u dremio /opt/dremio/bin/dremio-admin restore --backupdir /tmp/backups -v

returns "File /tmp/backups does not exist." followed by this stack trace:
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(
at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(
at org.apache.hadoop.fs.FileSystem.listStatus(
at org.apache.hadoop.fs.FileSystem.listStatus(
at com.dremio.dac.util.BackupRestoreUtil.scanInfoFiles(
at com.dremio.dac.util.BackupRestoreUtil.validateBackupDir(
at com.dremio.dac.cmd.Restore.main(
verify failed File /tmp/backups does not exist.

Any ideas?

So I tried every combination I could think of, and I am almost 100% sure that the restore option does not work when pointing at a local backup folder.

I was able to work around the issue by uploading my local backup folder into hdfs and pointing the restoring option to it.
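For anyone hitting the same wall, the workaround above can be sketched roughly like this. The HDFS target directory is illustrative (not from the original post); only the local path /tmp/backups and the restore command come from the thread:

```shell
# Copy the local backup folder into HDFS (target path is an example)
hdfs dfs -mkdir -p /user/dremio/backups
hdfs dfs -put /tmp/backups/* /user/dremio/backups/

# Point the restore at the HDFS copy instead of the local path
sudo -u dremio /opt/dremio/bin/dremio-admin restore --backupdir /user/dremio/backups -v
```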


BTW, this issue should be easy to replicate and confirm… Let me know if you find yourself in the same situation so we can formally raise the bug

Hi @akikax

Are you trying to restore from 2.1 to 3.0? Backup and restore should be on the same version.


Nah, from 2.1 to 2.1, same version (I read the docs); you should be able to easily replicate the issue. Same behaviour with 3.0 to 3.0.

Hi @akikax

Works as expected for me. Can you please send me the output of the below 2 commands?

ls -ld /tmp/backups
ls -ltrh /tmp/backups

Also, can you please try this:

Change dremio.conf to write local to your home directory, say ~/dremiodb, like below:

local: "/dremiodb"

mkdir -p ~/dremiodb/db

Then try the below command:

sudo -u dremio /opt/dremio/bin/dremio-admin restore --backupdir /tmp/backups -v

@akikax, @balaji.ramaswamy, I've encountered the same issue, but was able to get it to work with the local filesystem by using the file:// scheme prefix in the full path. It seems to be a minor bug in the implementation that causes HDFS to be used by default.
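Concretely, assuming the same paths as earlier in the thread, the restore that failed with a bare local path should work once the path carries an explicit file:// scheme:

```shell
# Fails on a Hadoop edge node: a bare path is resolved against hdfs:// by default
sudo -u dremio /opt/dremio/bin/dremio-admin restore --backupdir /tmp/backups -v

# Works: the file:// scheme forces the local filesystem (note the triple slash,
# two for the scheme plus one for the absolute path)
sudo -u dremio /opt/dremio/bin/dremio-admin restore --backupdir file:///tmp/backups -v
```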

Thanks for the update @savermyas

I was aware of that; it did not occur to me that you might be on a Hadoop edge node, where we default to hdfs://. Great find!