Cannot see new data in Elasticsearch dataset

Hi,

I can't see any new data coming in since the day I created the dataset for an Elasticsearch cluster, two days ago. I can see that data in Kibana.

SELECT *
FROM Kibana."systemlog-c-2017.43".systemlog t
WHERE t.timing.started > TIMESTAMPADD(DAY, -2, CURRENT_TIMESTAMP)

Do I have to refresh the dataset in some way?
Thanks

That’s strange. A couple of questions:

  • Did you create any reflections on this dataset?
  • What client application are you using to run the SQL query? Or are you just using the SQL box in Dremio’s user interface?

Thanks @tshiran,

No reflections are set on this dataset; I’m planning to use them later.
Currently I’m just playing with the Dremio SQL editor, selecting from the index.
I’ve created a new dataset with only one node of the cluster and I still have the same issue :frowning:
I’ll reinstall and see what happens.

If you do a count(*), do you see the correct count?
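For example, something along these lines (a sketch, reusing the dataset path from your first query):

SELECT COUNT(*)
FROM Kibana."systemlog-c-2017.43".systemlog

If that count matches what Kibana reports, the data is reaching Dremio and the problem is more likely in metadata caching.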

Can you make sure you’re clicking Run as opposed to Preview? The Preview button may use caching, as it’s designed to make it possible for users to curate/shape large datasets.

Yep, did that as well, no luck :frowning:
I restarted Dremio and got a message asking me to run

ALTER TABLE Kibana."systemlog-c-2017.43".systemlog REFRESH METADATA

After that, I got this error:

Failure handling type. Dremio only supports a path to type mapping across all schema mappings. Dataset path Kibana.“systemlog-c-2017.43”.systemlog. Path to field segments. Declared Type segments::struct<toIata::varchar, marketingCarrier::varchar, timeOfArrival::timestamp, fromIata::varchar, timeOfDeparture::timestamp, operatingCarrier::varchar, flightNumber::varchar>. Observed Type segments::union<struct<toIata::varchar, marketingCarrier::varchar, timeOfArrival::timestamp, fromIata::varchar, timeOfDeparture::timestamp, operatingCarrier::varchar, flightNumber::varchar>, list<struct<toIata::varchar, marketingCarrier::varchar, timeOfArrival::timestamp, fromIata::varchar, timeOfDeparture::timestamp, operatingCarrier::varchar, flightNumber::varchar>>, struct<toIata::varchar, marketingCarrier::varchar, timeOfArrival::timestamp, fromIata::varchar, timeOfDeparture::timestamp, operatingCarrier::varchar, flightNumber::varchar>>

I have created a new dataset with just one node, and I always get the REFRESH METADATA error when running the query, and old data with the preview.

UPDATE
After reinstalling I tested a few queries:

SELECT MAX(“@timestamp”)
Returns the correct date.

Running (not previewing) the query fails with the error above. Could it be caused by a structure change during these days?

Dremio only supports a path to type mapping across all schema mappings.

Does it mean I have a mapping mismatch between nodes? From the error, it looks like segments is declared as a single struct but has been observed both as a struct and as a list of structs across the mappings.
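For reference, this is the kind of probe I have in mind for that field (just a sketch; it uses the path from the error message and may well hit the same metadata error):

SELECT t.segments
FROM Kibana."systemlog-c-2017.43".systemlog t
LIMIT 20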
Thanks

Hi @gbrian,

Let me try to troubleshoot this. Could you upload the following, if possible:

  1. The profile of the failed query
  2. The Elastic data source settings including the “Advanced Settings”
  3. The mapping of the index if that can be shared

Thanks,
@balaji.ramaswamy

Thanks @balaji.ramaswamy,

The profile of the failed query
51adaf0b-5301-4d03-a314-40e6df734b39.zip (4.4 KB)

The Elastic data source settings including the “Advanced Settings”

General
Name: Kibana
Description:

Hosts
Host: X.X.X.X
Port: 9200

Authentication
No Authentication

Pushdown Options
✔ Scripts enabled in Elasticsearch cluster

Advanced Options
Show hidden indices that start with a dot (.)
✔ Use the Painless scripting language when connecting to Elasticsearch 5.0+ (experimental)
✔ Query whitelisted hosts only
✔ Show _id column in elastic tables
Read Timeout (Seconds): 60
Scroll Timeout (Seconds): 300
Scroll Size: 4000

Metadata Caching
Dataset Discovery
Fetch every: 1 Hour(s)
Dataset Details
Fetch mode: Only Queried Datasets
Fetch every: 1 Hour(s)
Expire after: 3 Hour(s)

Connection Options
SSL enabled connecting to Elasticsearch cluster

Acceleration
Refresh Policy
Refresh every: 1 Hour(s)
Expire after: 3 Hour(s)

The mapping of the index if that can be shared
That’s not possible right now. AFAIK it’s giant! :frowning:

Hope this helps.

Hi @gbrian,

Let me look into this, see what’s going on, and get back to you.

Thanks,
@balaji.ramaswamy

Hi @gbrian,

Is "systemlog-c-2017.43".systemlog an alias? Can you also remove and re-attach the source, if that is not a big hassle? Just trying to narrow down the issue.

Thanks,
@balaji.ramaswamy

Hi @balaji.ramaswamy,
It’s not an alias, it’s an index. I have left only one node in the dataset to check whether it was a problem with different configurations between nodes in the cluster, but the problem persists.

What do you mean by “remove and reattach”? The alias?

Thanks in advance

Hi @gbrian

Can you please remove and re-attach the Elasticsearch data source containing the index?

Thanks
@balaji.ramaswamy

Sorry, how can I do this? I did not find a place for deleting datasets, so I created a new one.

@gbrian

Are you still running into metadata issues on the new data source?

Thanks
@balaji.ramaswamy

So you know, if you want to remove a data source, navigate to all data sources, then hover over the source for a gear icon to appear:

Thanks,

Did that and got a new error :frowning:

{"id":{"part1":2736988012257849376,"part2":7450418437535531008},"start":1510228966200,"end":1510228966234,"query":"SELECT * FROM Kibana5.last_3_months.systemlog","foreman":
{"address":"Linux1","userPort":31010,"fabricPort":45678,"roles":
{"sqlQuery":true,"javaExecutor":true},"startTime":1509176198938,"maxDirectMemory":8589934592},"state":4,"totalFragments":0,"finishedFragments":0,"user":"admin","error":"Failure while attempting to read metadata for 
Kibana5.last_3_months.systemlog.","verboseError":"DATA_READ ERROR: Failure while attempting to read metadata for Kibana5.last_3_months.systemlog.\n\nSql Query SELECT * FROM Kibana5.last_3_months.systemlog\n\n\n  
(java.lang.NullPointerException) null\n    com.google.common.base.Preconditions.checkNotNull():210\n    com.google.common.collect.ImmutableCollection$ArrayBasedBuilder.add():339\n    
com.google.common.collect.ImmutableList$Builder.add():652\n    com.google.common.collect.ImmutableList$Builder.add():630\n    com.google.common.collect.ImmutableCollection$Builder.addAll():301\n    com.google.common.collect.ImmutableList$Builder.addAll():691\n    com.google.common.collect.ImmutableList.copyOf():275\n    com.google.common.collect.ImmutableList.copyOf():226\n    com.google.common.collect.FluentIterable.toList():373\n    com.dremio.plugins.elastic.mapping.SchemaMerger$MergeField.toField():280\n    com.dremio.plugins.elastic.mapping.SchemaMerger$MergeField$2.apply():283\n    com.dremio.plugins.elastic.mapping.SchemaMerger$MergeField$2.apply():280\n    com.google.common.collect.Iterators$8.transform():799\n    com.google.common.collect.TransformedIterator.next():48\n    com.google.common.collect.ImmutableCollection$Builder.addAll():301\n    com.google.common.collect.ImmutableList$Builder.addAll():691\n    com.google.common.collect.ImmutableList.copyOf():275\n    com.google.common.collect.ImmutableList.copyOf():226\n    com.google.common.collect.FluentIterable.toList():373\n    com.dremio.plugins.elastic.mapping.SchemaMerger$MergeField.toField():280\n    com.dremio.plugins.elastic.mapping.SchemaMerger$MergeField$2.apply():283\n    com.dremio.plugins.elastic.mapping.SchemaMerger$MergeField$2.apply():280\n    com.google.common.collect.Iterators$8.transform():799\n    com.google.common.collect.TransformedIterator.next():48\n    com.google.common.collect.ImmutableCollection$Builder.addAll():301\n    com.google.common.collect.ImmutableList$Builder.addAll():691\n    com.google.common.collect.ImmutableList.copyOf():275\n    com.google.common.collect.ImmutableList.copyOf():226\n    com.google.common.collect.FluentIterable.toList():373\n    com.dremio.plugins.elastic.mapping.SchemaMerger$MergeField.toField():280\n    com.dremio.plugins.elastic.mapping.SchemaMerger$3.apply():71\n    com.dremio.plugins.elastic.mapping.SchemaMerger$3.apply():68\n    com.google.common.collect.Iterators$8.transform():799\n    com.google.common.collect.TransformedIterator.next():48\n    com.google.common.collect.Iterators$7.computeNext():651\n    com.google.common.collect.AbstractIterator.tryToComputeNext():143\n    com.google.common.collect.AbstractIterator.hasNext():138\n    com.google.common.collect.TransformedIterator.hasNext():43\n    com.google.common.collect.ImmutableCollection$Builder.addAll():300\n    com.google.common.collect.ImmutableList$Builder.addAll():691\n    com.google.common.collect.ImmutableList.copyOf():275\n    com.google.common.collect.ImmutableList.copyOf():226\n    com.google.common.collect.FluentIterable.toList():373\n    com.dremio.plugins.elastic.mapping.SchemaMerger.merge():67\n    com.dremio.plugins.elastic.ElasticTableBuilder.populate():187\n    com.dremio.plugins.elastic.ElasticTableBuilder.buildIfNecessary():161\n    com.dremio.plugins.elastic.ElasticTableBuilder.getDataset():142\n    com.dremio.exec.store.SimpleSchema.getTableFromDataset():332\n    com.dremio.exec.store.SimpleSchema.getTableWithRegistry():295\n    com.dremio.exec.store.SimpleSchema.getTable():415\n    org.apache.calcite.jdbc.SimpleCalciteSchema.getImplicitTable():67\n    org.apache.calcite.jdbc.CalciteSchema.getTable():219\n    org.apache.calcite.prepare.CalciteCatalogReader.getTableFrom():117\n    org.apache.calcite.prepare.CalciteCatalogReader.getTable():106\n    org.apache.calcite.prepare.CalciteCatalogReader.getTable():73\n    org.apache.calcite.sql.validate.EmptyScope.getTableNamespace():71\n    
org.apache.calcite.sql.validate.DelegatingScope.getTableNamespace():189\n    org.apache.calcite.sql.validate.IdentifierNamespace.validateImpl():104\n    org.apache.calcite.sql.validate.AbstractNamespace.validate():84\n    org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace():910\n    org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery():891\n    org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom():2859\n    org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom():2844\n    org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect():3077\n    org.apache.calcite.sql.validate.SelectNamespace.validateImpl():60\n    org.apache.calcite.sql.validate.AbstractNamespace.validate():84\n    org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace():910\n    org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery():891\n    org.apache.calcite.sql.SqlSelect.validate():208\n    org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression():866\n    org.apache.calcite.sql.validate.SqlValidatorImpl.validate():577\n    com.dremio.exec.planner.sql.SqlConverter.validate():188\n    com.dremio.exec.planner.sql.handlers.PrelTransformer.validateNode():167\n    com.dremio.exec.planner.sql.handlers.PrelTransformer.validateAndConvert():155\n    com.dremio.exec.planner.sql.handlers.query.NormalHandler.getPlan():43\n    com.dremio.exec.planner.sql.handlers.commands.HandlerToExec.plan():66\n    com.dremio.exec.work.foreman.AttemptManager.run():282\n    java.util.concurrent.ThreadPoolExecutor.runWorker():1152\n    java.util.concurrent.ThreadPoolExecutor$Worker.run():622\n    java.lang.Thread.run():748\n","errorId":"aa1f1d0b-f9ac-4e0b-8466-4e216cd7ca6b","errorNode":"Linux1:31010","planningStart":1510228966203,"planningEnd":0,"clientInfo":{"name":"Dremio Java local client","version":"1.2.2-201710100154510864-d40e31c","majorVersion":1,"minorVersion":2,"patchVersion":2,"application":"20186@Linux1","buildNumber":2,"versionQualifier":"201710100154510864-d40e31c"},"planPhases":[{"phaseName":"Kibana5.last_3_months.systemlog: PERMISSION_CACHE_HIT","durationMillis":0}],"accelerationProfile":{"accelerated":false,"numSubstitutions":0,"millisTakenGettingMaterializations":0,"millisTakenNormalizing":0,"millisTakenSubstituting":0}}

Hi @gbrian

I am not able to see the error fully. Can you please send the profile again (of the new error)?

Thanks,
@balaji.ramaswamy

Instructions on how to share a query profile: https://www.dremio.com/tutorials/share-query-profile/

@balaji.ramaswamy, @kelly
Please find attached the query profile: c6a9041e-1185-4e7f-b292-fea3ed57efe2.zip (4.4 KB)

Thanks @gbrian,

Will take a look and get back to you ASAP