How to connect Dremio to an HDFS cluster when Kerberos is enabled

I am trying to create a source by connecting to an HDFS node where Kerberos is enabled, but it fails with “Failure while configuring source”. The error message is not descriptive, so I can’t tell what went wrong.

Did you follow our Kerberos configuration instructions? https://docs.dremio.com/deployment/yarn-hadoop.html#kerberos

Also, a look at your server.log would help clarify the error.

Yes, I have configured the services.kerberos parameter in the dremio.conf file:

services.kerberos: {
  principal: "dremio@REALM.COM", # principal name must be generic and not tied to any host.
  keytab.file.path: "/path/to/keytab/file"
}

And in /var/log/dremio/server.out the most recent log message is “Dremio Daemon Started as master”. But now I get the error “Unavailable: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]” while trying to create the source.

Thanks for the quick reply.

Hi @Monika_Goel

Can you please make sure the keytab is owned by the user that runs the Dremio process and has permissions 400 on it? Can you also send me the entire server.log from when this error happens?
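
For example, a minimal sketch assuming Dremio runs as a dedicated dremio user and the keytab sits at the path from your dremio.conf (both placeholders):

chown dremio:dremio /path/to/keytab/file
chmod 400 /path/to/keytab/file
ls -l /path/to/keytab/file   # should show -r-------- with the dremio user as owner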

Thanks
@balaji.ramaswamy

Can you describe your Hadoop environment some more? Did you try linking/copying the Hadoop XML files into the Dremio conf directory?
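
For example, assuming the typical locations /etc/hadoop/conf (cluster client configs) and /opt/dremio/conf (Dremio), the links would look like this; adjust the paths to your install:

ln -s /etc/hadoop/conf/core-site.xml /opt/dremio/conf/core-site.xml
ln -s /etc/hadoop/conf/hdfs-site.xml /opt/dremio/conf/hdfs-site.xml
ln -s /etc/hadoop/conf/yarn-site.xml /opt/dremio/conf/yarn-site.xml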

I am getting an “Unable to obtain password from user” error while starting Dremio (server.log attached for further reference):
log.zip (2.4 KB)

I linked core-site.xml, yarn-site.xml, and hdfs-site.xml into the Dremio classpath.

I am able to kinit as the specified principal using the keytab file mapped in dremio.conf.
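
The check was along these lines, with the principal and keytab path taken from the dremio.conf snippet above:

kinit -kt /path/to/keytab/file dremio@REALM.COM   # obtain a TGT non-interactively from the keytab
klist                                             # verify a ticket for dremio@REALM.COM is now cached
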
Any help is appreciated.

Thanks

How does Dremio locate krb5.conf? Will it look for the KRB5_CONFIG environment variable, or do we need to set some other property in dremio.conf to point to the krb5.conf file?

Dremio does not refer to krb5.conf explicitly; resolving it is part of the underlying Kerberos authentication process.
Could you please check this link: Keytab auth issues
My impression is that there is a discrepancy between the user that starts Dremio and the user for which you have the principal/keytab.
Also, I am not sure “root” is a good choice of user here.
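
For what it’s worth, the JVM’s Kerberos support resolves krb5.conf on its own: the java.security.krb5.conf system property if set, otherwise the JRE’s bundled defaults, otherwise the OS default (/etc/krb5.conf on Linux). If you ever need to point Dremio at a non-default location, one option, assuming your conf/dremio-env honors DREMIO_JAVA_EXTRA_OPTS, is the stock JVM property:

# in conf/dremio-env; only needed when krb5.conf is not at the OS default (path below is an example)
DREMIO_JAVA_EXTRA_OPTS="-Djava.security.krb5.conf=/path/to/krb5.conf"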

Can you share your dremio.conf?

Hi Anthony,

The Kerberos issue is resolved; it was a permission issue on the keytab file.
But the HDFS connection now fails with:

Failure while configuring source [HDFS]. Info: Unavailable: Failed on local exception: java.io.IOException: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; Host Details : local host is: "<Dremio_server>"; destination host is: "<name_node_ip>":50070;

As per https://community.pivotal.io/s/article/Hadoop-NameNode-Stuck-in-Safe-Mode-because-of-Error-Requested-Data-Length-is-Longer-than-Maximum-Configured-RPC-Length, adding ipc.maximum.data.length = 134217728 to the HDFS core-site.xml can resolve this, but is there any way to resolve it without changing cluster properties?

Any suggestions?

Can you try adding ipc.maximum.data.length = 134217728 as a property under "Properties" in the HDFS source connection details?

I tried the above suggestion; still the same error.

Instead of linking core-site.xml into the Dremio conf directory, you can copy it and change/add the ipc.maximum.data.length setting. Can you try that?
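
For illustration, the entry added to the copied core-site.xml would look something like this (value taken from the article linked above):

<property>
  <name>ipc.maximum.data.length</name>
  <value>134217728</value>
</property>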

Yes, I have tried that: I copied core-site.xml onto the Dremio classpath and added the ipc.maximum.data.length property. Still no luck.

Changing this on the client side would not help; I am pretty sure it needs to be changed on the NameNode.
Frankly, though, I don’t think this is the real issue. Could you take a look at server.log?

That’s the first thing I checked; the same error message appears in the log as well. Attached for your reference:

2018-08-16 12:47:30,309 [qtp1671544551-116] INFO c.d.exec.catalog.CatalogServiceImpl - User Error Occurred [ErrorId: da387110-776e-45b6-b680-ff8f42f04848]
com.dremio.common.exceptions.UserException: Failure while configuring source [HDFS]. Info: Unavailable: Failed on local exception: java.io.IOException: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length;

Thanks.

Looks like I missed your NameNode configuration earlier. You specified port 50070 (the HTTP web UI port), while it should be the IPC port: the standard is 8020, though I am not sure what it is set to in your cluster.
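
If you can run the Hadoop client on a cluster node, a quick way to confirm the IPC endpoint is:

hdfs getconf -confKey fs.defaultFS   # typically prints hdfs://<name_node_host>:8020

Use that host and port in the Dremio HDFS source instead of the 50070 web UI port.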

It worked.
Thank you for looking into this. Really appreciate it.