Unable to update data source in latest version of Dremio for Windows (3.0.6)

I get the following error:

2019-01-07 09:46:05,971 [catalog-source-synchronization] WARN  c.d.exec.catalog.CatalogServiceImpl -Failure updating source [SourceConfig{id=EntityId{id=42ec989d-070f-41d4-a722-d98798f53e64}, legacySourceTypeEnum=null, name=HDFS, ctime=1541684230534, img=null, description=null, config=<ByteString@354a9c98 size=47>, version=0, accelerationTTL=null, metadataPolicy=MetadataPolicy{namesRefreshMs=3600000, datasetUpdateMode=PREFETCH_QUERIED, datasetDefinitionTtlMs=null, authTtlMs=86400000, datasetDefinitionRefreshAfterMs=3600000, datasetDefinitionExpireAfterMs=10800000, deleteUnavailableDatasets=true, autoPromoteDatasets=false}, lastRefreshDate=null, accelerationRefreshPeriod=3600000, accelerationGracePeriod=10800000, type=HDFS, accelerationNeverExpire=false, accelerationNeverRefresh=false}] during scheduled updates.java.util.ConcurrentModificationException: Source [HDFS] was updated, and the given configuration has older ctime (current: 1546599259246, given: 1541684230534)
at com.dremio.exec.catalog.CatalogServiceImpl.synchronize(CatalogServiceImpl.java:487) ~[dremio-sabot-kernel-3.0.6-201812082352540436-1f684f9.jar:3.0.6-201812082352540436-1f684f9]
at com.dremio.exec.catalog.CatalogServiceImpl.synchronizeSources(CatalogServiceImpl.java:293) ~[dremio-sabot-kernel-3.0.6-201812082352540436-1f684f9.jar:3.0.6-201812082352540436-1f684f9]
at com.dremio.exec.catalog.CatalogServiceImpl$Refresher.run(CatalogServiceImpl.java:193) [dremio-sabot-kernel-3.0.6-201812082352540436-1f684f9.jar:3.0.6-201812082352540436-1f684f9]
at com.dremio.concurrent.RenamingRunnable.run(RenamingRunnable.java:36) [dremio-common-3.0.6-201812082352540436-1f684f9.jar:3.0.6-201812082352540436-1f684f9]
at com.dremio.concurrent.SingletonRunnable.run(SingletonRunnable.java:41) [dremio-common-3.0.6-201812082352540436-1f684f9.jar:3.0.6-201812082352540436-1f684f9]
at com.dremio.concurrent.SafeRunnable.run(SafeRunnable.java:40) [dremio-common-3.0.6-201812082352540436-1f684f9.jar:3.0.6-201812082352540436-1f684f9]
at com.dremio.service.scheduler.LocalSchedulerService$CancellableTask.run(LocalSchedulerService.java:94) [dremio-services-scheduler-3.0.6-201812082352540436-1f684f9.jar:3.0.6-201812082352540436-1f684f9]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_181]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_181]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_181]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_181]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_181]

Any clues on how to fix this?

It prevents me from making any changes to my data source.

I tried creating a new source with the same configuration and got the same error.

Hi @orlandob

What source is this? Can you please send us a screenshot of the source settings and the server.log?

Thanks
@balaji.ramaswamy

Hello @balaji.ramaswamy

It is an HDFS source. It used to work perfectly before, in fact, at least without any noteworthy errors.

At the end of startup I get this message, which indicates the status:

2019-01-08 15:34:25,129 [main] INFO  c.dremio.exec.catalog.PluginsManager - Result of storage plugin startup: 
Localhost: success (172ms). Healthy
__jobResultsStore: success (148ms). Healthy
HIVE: failed (3618ms). Unavailable: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: GSS initiate failed
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:477)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:285)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:210)
at com.dremio.exec.store.hive.HiveClient$1.run(HiveClient.java:155)
at com.dremio.exec.store.hive.HiveClient$1.run(HiveClient.java:152)
at com.dremio.exec.store.hive.HiveClient$9.run(HiveClient.java:328)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at com.dremio.exec.store.hive.HiveClient.doAsCommand(HiveClient.java:325)
at com.dremio.exec.store.hive.HiveClient.connect(HiveClient.java:151)
at com.dremio.exec.store.hive.HiveClient.createClient(HiveClient.java:137)
at com.dremio.exec.store.hive.HiveStoragePlugin.start(HiveStoragePlugin.java:574)
at com.dremio.exec.catalog.ManagedStoragePlugin$1.run(ManagedStoragePlugin.java:246)
at com.dremio.concurrent.RenamingRunnable.run(RenamingRunnable.java:36)
at com.dremio.concurrent.SingletonRunnable.run(SingletonRunnable.java:41)
at com.dremio.concurrent.SafeRunnable.run(SafeRunnable.java:40)
at com.dremio.concurrent.Runnables$1.run(Runnables.java:45))
INFORMATION_SCHEMA: success (0ms). Healthy
__logs: success (1087ms). Healthy
__support: success (128ms). Healthy
HOME : success (290ms). Healthy
__datasetDownload: success (59ms). Healthy
sys: success (0ms). Healthy
$scratch: success (77ms). Healthy
HDFS: failed (1489ms). Unavailable: Unavailable: SIMPLE authentication is not enabled.  Available:[TOKEN, KERBEROS]
__home: success (127ms). Healthy
__accelerator: success (207ms). Healthy
Order: success (1011ms). Healthy

Attached you can find the screenshot and the server.log.

server.log.201900108.zip (8,8 KB)

*.unix.net is our local domain for unix and linux machines.
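(Editor's note: the "SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]" line in the startup log usually means the cluster requires Kerberos while the client-side Hadoop configs the client reads still default to simple authentication. A minimal, illustrative excerpt of the standard core-site.xml security keys, with example values that must be matched to the actual cluster:)

```xml
<!-- Illustrative excerpt only: standard Hadoop security keys in core-site.xml.
     The values are examples, not a drop-in config for this cluster. -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
```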

Hi @orlandob

This looks like a kerberized HDFS cluster. Does Dremio have the right permissions to read the keytab file on the Dremio coordinator? See the doc link below:

Dremio with Kerberos

Thanks
@balaji.ramaswamy
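(Editor's note: one quick way to check the keytab permissions mentioned above is a small readability test run as the account that starts Dremio. The path below is an assumption; substitute the real keytab location.)

```shell
# Sketch of a keytab permission check on the Dremio coordinator,
# to be run as the user that starts the Dremio process.
check_keytab() {
    keytab="$1"
    if [ -r "$keytab" ]; then
        echo "readable: $keytab"
    else
        echo "not readable: $keytab"
    fi
}

# Example (hypothetical path):
# check_keytab /etc/dremio/dremio.keytab
# klist -kt /etc/dremio/dremio.keytab   # also lists the principals in the keytab
```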

Hello @balaji.ramaswamy

Thanks, it worked after following the instructions provided.

I indeed needed to download the client configs (yarn-site.xml, hdfs-site.xml, and core-site.xml) and make the required adjustments.

Then finally, because I am on Windows and using the MIT Kerberos credential manager, I had to make the following change in dremio-env:

DREMIO_JAVA_EXTRA_OPTS="-Dsun.security.krb5.debug=true -Djavax.security.auth.useSubjectCredsOnly=false"
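(Editor's note: for anyone else on the same stack, an annotated sketch of that dremio-env setting. Quoting the value is safer since it contains a space; both flags are standard JVM Kerberos system properties.)

```shell
# -Dsun.security.krb5.debug=true  -> verbose Kerberos negotiation output in the
#    logs; useful while troubleshooting, can be dropped once the source works.
# -Djavax.security.auth.useSubjectCredsOnly=false -> lets the JVM's GSS layer
#    fall back to the native ticket cache (the MIT Kerberos credential manager
#    on Windows) instead of requiring JAAS-supplied credentials.
DREMIO_JAVA_EXTRA_OPTS="-Dsun.security.krb5.debug=true -Djavax.security.auth.useSubjectCredsOnly=false"
```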