Dremio persists Data Reflections as Parquet, and its Parquet reader is highly optimized, reading the data directly into Arrow in-memory buffers.
Your approach makes sense if you’re trying to ‘retire’ the legacy system. Data Reflections aren’t intended to be a long-term persistence strategy; they exist to optimize query performance. For example, you could have Dremio maintain one or more Data Reflections that are sorted, partitioned, or aggregated in different ways to optimize different query patterns. Users never need to know these Data Reflections exist: the query planner automatically picks the best one for a given query.
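As a rough sketch of what that looks like, the snippet below issues reflection DDL to Dremio over ODBC. The dataset and column names are made up, and the exact `CREATE RAW/AGGREGATE REFLECTION` syntax is an assumption that varies by Dremio version (in some versions reflections are managed only through the UI or REST API):

```python
# Hypothetical sketch: define two reflections on the same dataset so the
# planner can choose whichever best fits a given query. Table and column
# names are made up; DDL syntax varies by Dremio version, and in some
# versions reflections are configured through the UI instead of SQL.
import pyodbc

conn = pyodbc.connect("DSN=Dremio", autocommit=True)  # assumes a configured Dremio DSN
cur = conn.cursor()

# A raw reflection partitioned/sorted for selective scans.
cur.execute("""
    ALTER TABLE sales.transactions
    CREATE RAW REFLECTION txn_by_date
    USING DISPLAY (txn_id, txn_date, region, amount)
    PARTITION BY (region)
    LOCALSORT BY (txn_date)
""")

# An aggregation reflection for rollup-style queries.
cur.execute("""
    ALTER TABLE sales.transactions
    CREATE AGGREGATE REFLECTION txn_rollup
    USING DIMENSIONS (region, txn_date)
    MEASURES (amount (SUM, COUNT))
""")
```

With something like this in place, a query such as `SELECT region, SUM(amount) FROM sales.transactions GROUP BY region` would be satisfied from the aggregation reflection without the user ever referencing it.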
A common use case is that the source isn’t being retired and will remain as the system of record. In this case, Data Reflections 1) optimize analytical processing (most sources aren’t optimized for scan-heavy workloads), and 2) offload the analytical workload from the source, so its SLA isn’t impacted.
You could still use Dremio to generate the Parquet files, even for long-term storage, by following the instructions in this thread: Converting JSON to Parquet
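The gist of that approach is a CTAS statement. Here’s a minimal sketch, with the caveat that the `$scratch` space and the source path are assumptions, and where CTAS output lands depends on how your cluster’s distributed store is configured:

```python
# Hypothetical sketch: materialize a query result as Parquet via Dremio CTAS.
# The $scratch space and source path are assumptions; the output location
# depends on your cluster's distributed store configuration.
import pyodbc

conn = pyodbc.connect("DSN=Dremio", autocommit=True)
cur = conn.cursor()

# Dremio writes CTAS results as Parquet files.
cur.execute("""
    CREATE TABLE "$scratch".events_parquet AS
    SELECT * FROM s3source."logs"."events.json"
""")
```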
You might find this gives you 1) better performance than turbodbc (assuming you have a Dremio cluster), and 2) support for non-relational sources (e.g., JSON).
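For comparison, the turbodbc path looks roughly like this. The DSN and query are placeholders, and `fetchallarrow()` requires a turbodbc build with Arrow support enabled:

```python
# Rough comparison point: fetching a result set into Arrow via turbodbc.
# DSN and query are placeholders; fetchallarrow() requires turbodbc to be
# built with Arrow support.
from turbodbc import connect

conn = connect(dsn="MyDatabase")
cur = conn.cursor()
cur.execute("SELECT txn_id, amount FROM transactions")
arrow_table = cur.fetchallarrow()  # returns a pyarrow.Table
print(arrow_table.num_rows)
```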