Azure Databricks
We've moved! To improve customer experience, the Collibra Data Quality User Guide has moved to the Collibra Documentation Center as part of the Collibra Data Quality 2022.11 release. To ensure a seamless transition, dq-docs.collibra.com will remain accessible, but the DQ User Guide is now maintained exclusively in the Documentation Center.
Read the file by setting the Azure storage account key.
spark.conf.set("fs.azure.account.key.abcCompany.blob.core.windows.net","GBB6Upzj4AxQld7cFv7wBYNoJzIp/WEv/5NslqszY3nAAlsalBNQ==")
val df = spark.read.parquet("wasbs://CONTAINER@abcCompany.blob.core.windows.net/FILE_NAME/20190201_FILE_NAME.parquet")
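A `wasbs://` URL is just the container name, the storage account host, and a relative path joined together; a plain-Scala sketch of assembling one (the container and account names below are illustrative placeholders, not values from this guide):

```scala
// Build a wasbs:// URL from its parts.
// Container and account names here are placeholders for illustration.
def wasbsPath(container: String, account: String, relativePath: String): String =
  s"wasbs://$container@$account.blob.core.windows.net/$relativePath"

val path = wasbsPath("mycontainer", "abcCompany", "FILE_NAME/20190201_FILE_NAME.parquet")
```

The resulting string can be passed straight to `spark.read.parquet`.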
Process the file using Owl
// register the dataset in the Owl Catalog (optional)
val owl = new Owl(df).register
// run a full DQ check
owl.owlCheck()
Additional imports and input options
import com.owl.core._
import com.owl.common._
val props = new Props()
props.dataset = datasetName
props.runId = "2019-03-02" // runId is a string, not an arithmetic expression
props..... // see Props for the many other input options
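The `runId` is typically a `yyyy-MM-dd` date string. Rather than hardcoding it, the value can be derived with the standard `java.time` API (the date format here is an assumption based on the example above, not something mandated by this guide):

```scala
import java.time.LocalDate
import java.time.format.DateTimeFormatter

// Format a date as yyyy-MM-dd for use as a run ID.
// ISO_LOCAL_DATE produces exactly the yyyy-MM-dd shape shown in the example.
val runId: String = LocalDate.of(2019, 3, 2).format(DateTimeFormatter.ISO_LOCAL_DATE)
```

This yields the same `"2019-03-02"` value as the literal assignment, but generalizes to, for example, `LocalDate.now()` for daily scheduled runs.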