Hive cannot connect after enabling NameNode HA

Caused by: java.net.UnknownHostException: nameservice1 (after enabling NameNode high availability, the Hive data location changed to this nameservice name)

Hello @wjw870907,

Can you provide more context for this error message? What are you trying to do in Dremio when this appears?

Hi Ben,
This happens when I enable NameNode high availability. Here's more context:
2019-03-11 09:59:35,750 [qtp682328059-1794511] INFO c.dremio.exec.catalog.DatasetManager - User Error Occurred [ErrorId: 3d63e608-1cfa-4dd5-a627-5c9eaf41e23e]
com.dremio.common.exceptions.UserException: Failure while attempting to read metadata for hive.base_data_fcl.report_xsfx_ajhd_jj.
at com.dremio.common.exceptions.UserException$Builder.build(UserException.java:746) ~[dremio-common-3.1.1-201901281837360699-30c9d74.jar:3.1.1-201901281837360699-30c9d74]
at com.dremio.exec.catalog.DatasetManager.getTableFromPlugin(DatasetManager.java:316) [dremio-sabot-kernel-3.1.1-201901281837360699-30c9d74.jar:3.1.1-201901281837360699-30c9d74]
at com.dremio.exec.catalog.DatasetManager.getTable(DatasetManager.java:191) [dremio-sabot-kernel-3.1.1-201901281837360699-30c9d74.jar:3.1.1-201901281837360699-30c9d74]
at com.dremio.exec.catalog.CatalogImpl.getTable(CatalogImpl.java:128) [dremio-sabot-kernel-3.1.1-201901281837360699-30c9d74.jar:3.1.1-201901281837360699-30c9d74]
at com.dremio.exec.catalog.DelegatingCatalog.getTable(DelegatingCatalog.java:59) [dremio-sabot-kernel-3.1.1-201901281837360699-30c9d74.jar:3.1.1-201901281837360699-30c9d74]
at com.dremio.exec.catalog.CachingCatalog.getTable(CachingCatalog.java:66) [dremio-sabot-kernel-3.1.1-201901281837360699-30c9d74.jar:3.1.1-201901281837360699-30c9d74]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_131]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_131]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_131]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_131]
at org.glassfish.hk2.utilities.reflection.ReflectionHelper.invoke(ReflectionHelper.java:1287) [hk2-utils-2.5.0-b32.jar:na]
at org.jvnet.hk2.internal.MethodInterceptorImpl.internalInvoke(MethodInterceptorImpl.java:109) [hk2-locator-2.5.0-b32.jar:na]
at org.jvnet.hk2.internal.MethodInterceptorImpl.invoke(MethodInterceptorImpl.java:125) [hk2-locator-2.5.0-b32.jar:na]
at org.jvnet.hk2.internal.MethodInterceptorInvocationHandler.invoke(MethodInterceptorInvocationHandler.java:62) [hk2-locator-2.5.0-b32.jar:na]
at com.sun.proxy.$Proxy117.getTable(Unknown Source) [na:na]
at com.dremio.dac.explore.DatasetsResource.getDatasetSummary(DatasetsResource.java:270) [dremio-dac-backend-3.1.1-201901281837360699-30c9d74.jar:3.1.1-201901281837360699-30c9d74]
at com.dremio.dac.explore.DatasetsResource.newUntitled(DatasetsResource.java:141) [dremio-dac-backend-3.1.1-201901281837360699-30c9d74.jar:3.1.1-201901281837360699-30c9d74]
at com.dremio.dac.explore.DatasetsResource.newUntitledFromParent(DatasetsResource.java:208) [dremio-dac-backend-3.1.1-201901281837360699-30c9d74.jar:3.1.1-201901281837360699-30c9d74]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_131]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_131]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_131]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_131]
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81) [jersey-server-2.25.1.jar:na]
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144) [jersey-server-2.25.1.jar:na]
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161) [jersey-server-2.25.1.jar:na]
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:205) [jersey-server-2.25.1.jar:na]
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99) [jersey-server-2.25.1.jar:na]
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:389) [jersey-server-2.25.1.jar:na]
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:347) [jersey-server-2.25.1.jar:na]
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102) [jersey-server-2.25.1.jar:na]
at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:326) [jersey-server-2.25.1.jar:na]
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271) [jersey-common-2.25.1.jar:na]
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267) [jersey-common-2.25.1.jar:na]
at org.glassfish.jersey.internal.Errors.process(Errors.java:315) [jersey-common-2.25.1.jar:na]
at org.glassfish.jersey.internal.Errors.process(Errors.java:297) [jersey-common-2.25.1.jar:na]
at org.glassfish.jersey.internal.Errors.process(Errors.java:267) [jersey-common-2.25.1.jar:na]
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317) [jersey-common-2.25.1.jar:na]
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305) [jersey-server-2.25.1.jar:na]
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154) [jersey-server-2.25.1.jar:na]
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:473) [jersey-container-servlet-core-2.25.1.jar:na]
at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427) [jersey-container-servlet-core-2.25.1.jar:na]
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388) [jersey-container-servlet-core-2.25.1.jar:na]
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341) [jersey-container-servlet-core-2.25.1.jar:na]
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228) [jersey-container-servlet-core-2.25.1.jar:na]
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812) [jetty-servlet-9.2.26.v20180806.jar:9.2.26.v20180806]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1669) [jetty-servlet-9.2.26.v20180806.jar:9.2.26.v20180806]
at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83) [jetty-servlets-9.2.26.v20180806.jar:9.2.26.v20180806]
at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:301) [jetty-servlets-9.2.26.v20180806.jar:9.2.26.v20180806]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) [jetty-servlet-9.2.26.v20180806.jar:9.2.26.v20180806]
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) [jetty-servlet-9.2.26.v20180806.jar:9.2.26.v20180806]
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) [jetty-server-9.2.26.v20180806.jar:9.2.26.v20180806]
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) [jetty-servlet-9.2.26.v20180806.jar:9.2.26.v20180806]
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061) [jetty-server-9.2.26.v20180806.jar:9.2.26.v20180806]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) [jetty-server-9.2.26.v20180806.jar:9.2.26.v20180806]
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) [jetty-server-9.2.26.v20180806.jar:9.2.26.v20180806]
at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:95) [jetty-server-9.2.26.v20180806.jar:9.2.26.v20180806]
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) [jetty-server-9.2.26.v20180806.jar:9.2.26.v20180806]
at org.eclipse.jetty.server.Server.handle(Server.java:499) [jetty-server-9.2.26.v20180806.jar:9.2.26.v20180806]
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311) [jetty-server-9.2.26.v20180806.jar:9.2.26.v20180806]
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:258) [jetty-server-9.2.26.v20180806.jar:9.2.26.v20180806]
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544) [jetty-io-9.2.26.v20180806.jar:9.2.26.v20180806]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) [jetty-util-9.2.26.v20180806.jar:9.2.26.v20180806]
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) [jetty-util-9.2.26.v20180806.jar:9.2.26.v20180806]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: nameservice1
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:417) ~[hadoop-common-2.8.3.jar:na]
at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:132) ~[hadoop-hdfs-client-2.8.3.jar:na]
at org.apache.hadoop.hdfs.DFSClient.&lt;init&gt;(DFSClient.java:359) ~[hadoop-hdfs-client-2.8.3.jar:na]
at org.apache.hadoop.hdfs.DFSClient.&lt;init&gt;(DFSClient.java:293) ~[hadoop-hdfs-client-2.8.3.jar:na]
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:157) ~[hadoop-hdfs-client-2.8.3.jar:na]
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2812) ~[hadoop-common-2.8.3.jar:na]
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100) ~[hadoop-common-2.8.3.jar:na]
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2849) ~[hadoop-common-2.8.3.jar:na]
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2831) ~[hadoop-common-2.8.3.jar:na]
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389) ~[hadoop-common-2.8.3.jar:na]
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356) ~[hadoop-common-2.8.3.jar:na]
at com.dremio.exec.store.dfs.FileSystemWrapper.get(FileSystemWrapper.java:131) ~[dremio-sabot-kernel-3.1.1-201901281837360699-30c9d74.jar:3.1.1-201901281837360699-30c9d74]
at com.dremio.exec.store.hive.DatasetBuilder.inputPathExists(DatasetBuilder.java:781) ~[dremio-hive-plugin-3.1.1-201901281837360699-30c9d74.jar:3.1.1-201901281837360699-30c9d74]
at com.dremio.exec.store.hive.DatasetBuilder.buildSplits(DatasetBuilder.java:601) ~[dremio-hive-plugin-3.1.1-201901281837360699-30c9d74.jar:3.1.1-201901281837360699-30c9d74]
at com.dremio.exec.store.hive.DatasetBuilder.buildIfNecessary(DatasetBuilder.java:324) ~[dremio-hive-plugin-3.1.1-201901281837360699-30c9d74.jar:3.1.1-201901281837360699-30c9d74]
at com.dremio.exec.store.hive.DatasetBuilder.getDataset(DatasetBuilder.java:246) ~[dremio-hive-plugin-3.1.1-201901281837360699-30c9d74.jar:3.1.1-201901281837360699-30c9d74]
at com.dremio.exec.catalog.DatasetManager.getTableFromPlugin(DatasetManager.java:298) [dremio-sabot-kernel-3.1.1-201901281837360699-30c9d74.jar:3.1.1-201901281837360699-30c9d74]
… 62 common frames omitted
Caused by: java.net.UnknownHostException: nameservice1
… 79 common frames omitted

@wjw870907, does this occur when you add a Hive source or are you seeing this when you try to access a particular Hive table from Dremio?

Hi Ben,
Exactly. It worked when the NameNode was not in HA mode, so I could access a particular Hive table from Dremio until I enabled NameNode HA. My guess is that the data location became hdfs://nameservice1 instead of hdfs://ip:8020 after enabling NN HA. I then removed the data source and added a new Hive source, but I still get the same error.

Can you access Hive from the Dremio coordinator not through Dremio, but rather using a Hive shell like Beeline?

I don't use a distributed setup. But when I try connecting to the Hive server using Beeline, it works.

@ben
Any update on what's going on? I would really appreciate your feedback.

Hi @wjw870907

Is it possible for you to attach the server.log and server.out files so we can check them? Also, can you tell us the order of the steps you followed to put the NameNode in HA mode and add the Hive source? This will help us understand the series of changes.

Please also provide the timestamps of the events so we can correlate the logs.

Thanks
@Venugopal_Menda

@Venugopal_Menda
server.zip (16.6 KB)
Here's the attachment. I enabled NN HA through CDH (Cloudera Manager), then I updated the NameNode address in the Hive metastore to replace the original Hive data locations.

@Venugopal_Menda
How's it going? Any update?

Hi @wjw870907

From the provided logs, it seems the NN HA config is not updated properly in the config files.
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: nameservice1
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:417) ~[hadoop-common-2.8.3.jar:na]
at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:132) ~[hadoop-hdfs-client-2.8.3.jar:na]

Can you check the configurations in the core-site.xml and hdfs-site.xml?

You can download the config files from Cloudera Manager, replace the existing files (keeping the same permissions), and try again.
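For reference, on an HA-enabled cluster the client-side core-site.xml should point fs.defaultFS at the logical nameservice rather than a single NameNode host:port. A minimal sketch of the entry to look for (the nameservice name nameservice1 is taken from the error in this thread):

```xml
<!-- core-site.xml: the default filesystem must reference the logical
     nameservice, not an individual NameNode host:port -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://nameservice1</value>
</property>
```

If this entry still names the old single NameNode, the client will never resolve nameservice1 and will fail with exactly the UnknownHostException shown above.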

Thanks
@Venugopal_Menda

Hi @Venugopal_Menda,
Many thanks. Hmm, I can't find the core-site.xml and hdfs-site.xml configuration files on the Dremio backend. Where are they located? I only know the location of the Cloudera Manager configurations.

Hi @wjw870907

Not in Dremio; I wanted you to check the HDFS configuration directories. They are typically under your Hadoop installation directory, with file paths like “/etc/hadoop/conf/hdfs-site.xml”.

You can download the configs for the HDFS service from Cloudera Manager and check whether the config on the system differs from the Cloudera Manager config.

Thanks
@Venugopal_Menda

Hi @Venugopal_Menda,
Here's the configuration. I can't find anything wrong with it.
config.zip (2.9 KB)

Hi @wjw870907

You have two choices:

Choice 1:
On the edge node, symlink or copy core-site.xml and hdfs-site.xml from the Hadoop conf folder into the conf folder under $DREMIO_HOME.

Choice 2:
Add the NameNode HA-specific parameters under Additional Properties in the Advanced tab of the Hive source. For the list of parameters to pass, see Cloudera NameNode HA Parameters.

The important ones are:

fs.defaultFS
dfs.nameservices (mycluster in this example)
dfs.namenode.rpc-address.mycluster.nn1
dfs.namenode.rpc-address.mycluster.nn2
dfs.ha.namenodes.mycluster (in this example will be nn1, nn2)
dfs.client.failover.proxy.provider.mycluster
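The parameters above map to hdfs-site.xml entries (or Dremio Additional Properties) like the following. This is only a sketch: the nameservice name nameservice1 matches the error in this thread, but the NameNode IDs (nn1, nn2) and hostnames (namenode1.example.com, namenode2.example.com) are placeholders to replace with your cluster's values:

```xml
<!-- NameNode HA client configuration (illustrative values) -->
<property>
  <name>dfs.nameservices</name>
  <value>nameservice1</value>
</property>
<property>
  <name>dfs.ha.namenodes.nameservice1</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.nameservice1.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.nameservice1.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<property>
  <!-- tells the HDFS client how to find the currently active NameNode -->
  <name>dfs.client.failover.proxy.provider.nameservice1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

With these entries in place, the client resolves hdfs://nameservice1 to whichever of nn1/nn2 is active instead of treating it as a hostname.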

Similarly, you can configure YARN RM HA using the Cloudera YARN HA setup.

Kindly let us know if you have any other questions

Thanks
@balaji.ramaswamy

Hi @balaji.ramaswamy,
Many thanks for your help. It works!