Connections to Dremio

Hello all, I am new to Dremio. I want to connect to our Dremio instance from Python, ideally from an AWS Glue job. From a local PySpark Docker environment I can connect to Dremio over JDBC without any issue, but when I apply the same logic in Glue I get this exception:
java.sql.SQLException: cfjd.org.apache.arrow.flight.FlightRuntimeException: UNAVAILABLE: unresolved address.
The logic I used in both cases (local Docker and AWS Glue):

table_data_dev = spark.read.format("jdbc") \
    .option("user", user_dev) \
    .option("password", password_dev) \
    .option("driver", "org.apache.arrow.driver.jdbc.ArrowFlightJdbcDriver") \
    .option("url", jdbc_uri_dev) \
    .option("disableCertificateVerification", "true") \
    .option("threadPoolSize", 100) \
    .option("query", 'SELECT * FROM INFORMATION_SCHEMA."TABLES"') \
    .load()

Then I followed the example from the link below on my local dev machine:

This time I got another exception:
pyarrow._flight.FlightUnavailableError: Flight returned unavailable error, with message: DNS resolution failed for https:: C-ares status is not ARES_SUCCESS qtype=AAAA name=https is_balancer=0: Domain name not found

Our goal is to fetch data from Dremio and store it in our S3 bucket. A Glue job is the top choice, with Python logic on EC2 as the second.

The exceptions above suggest some network- or certificate-related configuration has been missed on our side. There are limited resources covering these two errors and we are blocked by them. Could someone help with this? Thanks in advance.

@zac Are you able to add the S3 bucket as an S3 source in Dremio? Once that is done, you can write a SQL query that fetches the data you need, then add a “Create Table <table_name> as Select …” on top of it. This should write the results as Iceberg or Parquet to the path you specify in the CTAS command. Have you tried that?
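For readers following along, the CTAS statement the reply describes would have roughly this shape; the source and folder names below are hypothetical placeholders, and the statement itself is run inside Dremio (e.g. from the SQL editor or any connected client):

```python
# All source/path names here are hypothetical placeholders.
s3_source = "my_s3_source"  # the S3 bucket registered as a Dremio source
target_path = f'{s3_source}.exports."daily_extract"'

# CTAS: Dremio writes the query results under the target path
# (Parquet, or Iceberg depending on version/configuration).
ctas = (
    f"CREATE TABLE {target_path} AS "
    'SELECT * FROM INFORMATION_SCHEMA."TABLES"'
)
print(ctas)
```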

Hello,
Thanks for the reply. Yes, our team fixed that issue and we are able to get data from Dremio.

Thanks

Thanks for the update @zac

Hello Zac,

We have been encountering exactly the same issue on our end. Can you share any details on how your team resolved it?

Thanks and best regards,
Julian