Dremio Tech Question

Hello Team,

I have a few questions that came up while I was evaluating the product:

  1. I can’t find a way to connect to data sources not listed in Dremio using a JDBC driver. I am not looking for a JDBC driver for Dremio, but a generic JDBC driver that can connect to any source.

  2. I did not understand the point made on one of the forums about Impala being replaced by Apache Arrow.

  3. I would also like to understand how row-level security is maintained in Dremio. Reading through the docs, I see Dremio uses Ranger, but in my use case I have Sentry implemented on a Cloudera stack, which does not support Ranger. How would this model work?

  4. Can we restrict an administrator from running queries on, or seeing, highly sensitive data?

  5. When you say Dremio provides Data-as-a-Service, can I publish my jobs as web services? If not, what do you mean by Data-as-a-Service, and how do I achieve it?

  6. Which technology does Dremio use to execute its jobs under the YARN resource manager? Is it MapReduce or Spark?

  7. Finally, if I want to replace my existing ETL tool (Informatica), how easy would it be to build a complex ETL replacement in Dremio?

See answers inline …

  1. I can’t find a way to connect to data sources not listed in Dremio using a JDBC driver. I am not looking for a JDBC driver for Dremio, but a generic JDBC driver that can connect to any source.

This does not exist. We are working on making an SDK available that will allow you to build your own connectors for sources that Dremio does not yet support.

  2. I did not understand the point made on one of the forums about Impala being replaced by Apache Arrow.

The two projects are not comparable. Apache Arrow is a columnar in-memory data format, together with libraries for processing that format efficiently. Impala is a SQL engine. Read more about Arrow here: https://www.dremio.com/apache-arrow-explained/

  3. I would also like to understand how row-level security is maintained in Dremio. Reading through the docs, I see Dremio uses Ranger, but in my use case I have Sentry implemented on a Cloudera stack, which does not support Ranger. How would this model work?

Dremio Enterprise Edition provides row- and column-level access control, as well as data masking, integrated with LDAP/AD group membership. This functionality is independent of Ranger and Sentry. If you are using Ranger, we also provide an integration there. We do not support Sentry.

  4. Can we restrict an administrator from running queries on, or seeing, highly sensitive data?

See previous answer.

  5. When you say Dremio provides Data-as-a-Service, can I publish my jobs as web services? If not, what do you mean by Data-as-a-Service, and how do I achieve it?

You can provision new datasets as web services, yes. See more on Data-as-a-Service here: https://www.dremio.com/what-is-data-as-a-service/

  6. Which technology does Dremio use to execute its jobs under the YARN resource manager? Is it MapReduce or Spark?

Dremio provides its own engine, based on Apache Arrow. When deployed in a Hadoop cluster as a YARN application, Dremio runs as a long-running process that is allocated resources based on its YARN queue. Most of our cloud customers deploy Dremio independent of a Hadoop cluster, using Kubernetes. In short, MapReduce and Spark do not provide the speed or resource management features necessary, so we developed our own SQL engine.

  7. Finally, if I want to replace my existing ETL tool (Informatica), how easy would it be to build a complex ETL replacement in Dremio?

Long-running ETL should be run in an ETL tool, or via Hive or Spark. Interactive ETL, or what we call “last mile” ETL, is a good fit for Dremio’s virtual datasets (https://docs.dremio.com/working-with-datasets/virtual-datasets.html). Instead of making copies of the data, Dremio manages these transformations using standard SQL and applies them at query time, each time the data is accessed. This makes it easy to give every user the exact version of the data they need, without creating copies that add security and governance risk. In addition, Dremio automatically tracks the lineage and provenance of the data using our Data Graph: https://www.dremio.com/solutions/data-lineage
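To make the virtual-dataset idea concrete, here is a hypothetical sketch of building the JSON body that Dremio's Catalog REST API (`POST /api/v3/catalog`) accepts for creating a virtual dataset. The space name, dataset name, and SQL below are made-up placeholders, not values from this thread:

```python
# Hypothetical sketch: request body for creating a virtual dataset via
# Dremio's Catalog API (POST /api/v3/catalog). The path and SQL are
# placeholders -- a virtual dataset is just a named SQL definition that
# Dremio applies at query time instead of copying data.
import json

def vds_payload(path, sql):
    """Return the JSON body describing a virtual dataset.

    `path` is a list like ["MySpace", "trimmed_orders"] naming where the
    dataset lives in the catalog; `sql` is standard SQL.
    """
    return {
        "entityType": "dataset",
        "type": "VIRTUAL_DATASET",
        "path": path,
        "sql": sql,
    }

# Example body one might POST (names are illustrative only):
body = json.dumps(vds_payload(
    ["MySpace", "trimmed_orders"],
    "SELECT order_id, amount FROM source.orders WHERE amount > 100",
))
```

Because the definition is plain SQL, each team can layer its own virtual dataset over the same physical data without duplicating it.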

Hi,

I want to know if I can manage and publish APIs from Dremio?

Hi @Deepak_Mishra,

 I want to know if I can manage and publish APIs from Dremio?

The Dremio REST API that we publish is stable so you can rely on those endpoints for whatever product you’re building on top of Dremio services.

But how do I publish views/tables in Dremio as a REST API?

You can submit a job for Dremio to execute as described here:
https://docs.dremio.com/rest-api/sql/post-sql.html

Then you can get the results of a job as described here:
https://docs.dremio.com/rest-api/jobs/get-job.html