Azure Data Factory (ADF) is an Azure service for ingesting, preparing, and transforming data at scale.
Within Azure Data Factory, the most native and maintainable pattern for handling Business Central OData responses (with @odata.* metadata and a nested value array) without Azure Functions is:
- Use REST connector + pagination to land raw JSON "as-is" in file storage
  - Configure a REST dataset for the Business Central API.
  - Use the REST connector's pagination rules so ADF issues multiple requests and writes all pages. The connector supports patterns where the next URL comes from headers or the body, and can stop when an end condition is met.
  - To export the JSON response without caring about schema, use a Copy activity with no mapping defined. This enables schema‑agnostic mode and writes the raw JSON response directly to Blob/ADLS.
  - This step preserves the original OData metadata and `value` array but decouples ingestion from transformation.
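As a sketch, the Copy activity's REST source can follow Business Central's `@odata.nextLink` through pagination rules. The activity and dataset names below are illustrative, and the exact JSONPath for the next-link property should be verified against your actual response:

```json
{
    "name": "CopyBCEntityRaw",
    "type": "Copy",
    "typeProperties": {
        "source": {
            "type": "RestSource",
            "httpRequestTimeout": "00:02:00",
            "paginationRules": {
                "AbsoluteUrl": "$['@odata.nextLink']"
            }
        },
        "sink": {
            "type": "JsonSink"
        }
    },
    "inputs": [ { "referenceName": "ds_BCEntity", "type": "DatasetReference" } ],
    "outputs": [ { "referenceName": "ds_RawJson", "type": "DatasetReference" } ]
}
```

With no `translator` (mapping) defined on the activity, the response body is written to the sink as-is.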
- Store JSON as `arrayOfObjects` where possible
  - When writing JSON via Copy activity, set `filePattern` to `arrayOfObjects` if the response is already an array of records or can be normalized to that pattern.
  - ADF's JSON handling in Copy activity supports `setOfObjects` and `arrayOfObjects`; `arrayOfObjects` aligns with the `value: [ ... ]` structure and is the most natural pattern for downstream processing.
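On the sink side, the `filePattern` is set in the Copy activity's JSON write settings. A sketch, assuming ADLS Gen2 as the target (the store settings type differs for Blob):

```json
"sink": {
    "type": "JsonSink",
    "storeSettings": {
        "type": "AzureBlobFSWriteSettings"
    },
    "formatSettings": {
        "type": "JsonWriteSettings",
        "filePattern": "arrayOfObjects"
    }
}
```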
- Use Mapping Data Flow with JSON source configured for arrays
  - In a Mapping Data Flow, use a JSON source pointing to the landed files.
  - Under JSON settings in the source options, choose the appropriate Document Form:
    - If each file is a JSON array (e.g., the `value` array persisted as the file root), use Array of documents.
    - If each file is a single JSON object containing `value: [ ... ]`, use the default document form and then flatten the `value` array.
  - Mapping Data Flows support JSON as a source and can read from Blob, ADLS Gen1/Gen2, SFTP, and S3.
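In the data flow's definition, the Document Form choice surfaces as `documentForm` on the source script line. A sketch of the resource JSON; `'arrayOfDocuments'` assumes each file's root is the persisted `value` array, and the names are illustrative:

```json
{
    "name": "df_FlattenBC",
    "properties": {
        "type": "MappingDataFlow",
        "typeProperties": {
            "sources": [
                { "name": "SourceJson", "dataset": { "referenceName": "ds_RawJson", "type": "DatasetReference" } }
            ],
            "scriptLines": [
                "source(allowSchemaDrift: true,",
                "     validateSchema: false,",
                "     documentForm: 'arrayOfDocuments') ~> SourceJson"
            ]
        }
    }
}
```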
- Flatten the `value` array in Mapping Data Flow
  - Use the Flatten transformation to denormalize the `value` array into rows.
  - Configure the Flatten transformation to:
    - Set the Unroll by property to the `value` array.
    - Project the fields from the unrolled items as columns.
  - This removes the nested array structure and produces a flat rowset suitable for loading into Azure SQL staging tables.
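The Flatten step corresponds to a `foldDown` with `unroll(value)` in the data flow script. A sketch, assuming the file is a single document with a root-level `value` array; the projected columns (`id`, `displayName`) are hypothetical examples:

```json
"scriptLines": [
    "source(allowSchemaDrift: true,",
    "     documentForm: 'singleDocument') ~> SourceJson",
    "SourceJson foldDown(unroll(value),",
    "     mapColumn(",
    "          id = value.id,",
    "          displayName = value.displayName",
    "     )) ~> FlattenValue"
]
```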
- Handle OData metadata by ignoring it in the schema
  - The OData metadata properties (`@odata.context`, `@odata.nextLink`, etc.) can simply be omitted from the projection in the data flow.
  - Since the Flatten transformation focuses on the `value` array, the metadata fields remain at the root and are not part of the flattened output unless explicitly selected.
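For reference, a typical Business Central list response keeps the metadata at the root while the records sit in `value`. The URLs and field names below are illustrative placeholders:

```json
{
    "@odata.context": "https://api.businesscentral.dynamics.com/v2.0/.../$metadata#customers",
    "@odata.nextLink": "https://api.businesscentral.dynamics.com/v2.0/...?$skiptoken=...",
    "value": [
        { "id": "00000000-0000-0000-0000-000000000000", "displayName": "Example Customer" }
    ]
}
```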
- Dynamic / metadata‑driven pattern considerations
  - For a metadata‑driven framework where multiple entities are ingested:
    - Use a generic REST dataset and parameterize the relative URL, query parameters, and pagination rules per entity.
    - Land each entity's raw JSON into a parameterized folder/path in Blob/ADLS.
    - Use a single generic Mapping Data Flow where:
      - The source dataset path is parameterized (entity name, date, etc.).
      - The Flatten transformation always unrolls the `value` array.
      - Downstream schema mapping to Azure SQL can be driven by metadata (e.g., dynamic sink mapping, or separate per‑entity mappings if needed).
    - While Mapping Data Flow's Import projection is static, the JSON source can still operate generically if the structure of the `value` array is similar across entities, or if the framework is designed to handle a known set of schemas.
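A generic REST dataset for such a framework might parameterize the relative URL per entity. A sketch; the linked service and parameter names are assumptions:

```json
{
    "name": "ds_BCEntity",
    "properties": {
        "type": "RestResource",
        "linkedServiceName": { "referenceName": "ls_BusinessCentral", "type": "LinkedServiceReference" },
        "parameters": {
            "relativeUrl": { "type": "string" }
        },
        "typeProperties": {
            "relativeUrl": { "value": "@dataset().relativeUrl", "type": "Expression" }
        }
    }
}
```

A metadata table (or pipeline array parameter) can then supply the relative URL, landing path, and sink table per entity in a ForEach loop.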
- Avoiding BOM / encoding pitfalls when chaining Copy → Data Flow
  - If a Copy activity is used to generate JSON files that are later consumed by a JSON source in data flows, be aware:
    - Copy activity writes Unicode JSON with a BOM for UTF encodings.
    - The Data Flow JSON source with Single document enabled does not support Unicode with a BOM.
    - Errors such as "JSON parsing error, unsupported encoding or multiline" can occur when using `arrayOfObjects` with a BOM and Single document in Data Flow.
  - To avoid this:
    - Prefer `arrayOfObjects` and configure the JSON source's Document Form accordingly instead of using Single document where a BOM is present.
    - Or adjust encoding settings so the combination of Copy sink and Data Flow source is supported.
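One way to keep the Copy sink and Data Flow source combination consistent is to pin the encoding explicitly on the JSON dataset that both sides share. A sketch with illustrative names; the set of accepted `encodingName` values depends on the service, so verify it against the JSON format reference:

```json
{
    "name": "ds_RawJson",
    "properties": {
        "type": "Json",
        "linkedServiceName": { "referenceName": "ls_DataLake", "type": "LinkedServiceReference" },
        "typeProperties": {
            "location": {
                "type": "AzureBlobFSLocation",
                "fileSystem": "raw",
                "folderPath": "businesscentral"
            },
            "encodingName": "UTF-8"
        }
    }
}
```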
This pattern stays entirely within ADF: REST connector + Copy activity (schema‑agnostic) → JSON files (`arrayOfObjects` or object with `value` array) → Mapping Data Flow JSON source + Flatten transformation → Azure SQL staging. It removes the need for Azure Functions while handling both the `value` array and OData metadata in a dynamic, reusable way.