HOTSPOT

You are building an Azure Data Factory solution to process data that is received from Azure Event Hubs and then ingested into an Azure Data Lake Storage Gen2 container.

The data is ingested from devices every five minutes into JSON files that follow this naming pattern:

/{deviceType}/in/{YYYY}/{MM}/{DD}/{HH}/{deviceID}_{YYYY}{MM}{DD}{HH}{mm}.json

You need to prepare the data for batch processing so that there is one dataset per hour per deviceType. The solution must minimize read times.

How should you configure the sink for the copy activity? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.


Suggested Answer:

Explanation:

Box 1: @trigger().startTime

startTime: A date-time value. For basic schedules, the value of the startTime property applies to the first occurrence. For complex schedules, the trigger starts no sooner than the specified startTime value.
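As a rough sketch of how that value typically feeds the sink path (the pipeline().parameters.deviceType reference below is a hypothetical parameterization, not something given in the question), the trigger start time can be formatted into the dated path with formatDateTime:

    @concat(formatDateTime(trigger().startTime, 'yyyy/MM/dd/HH'), '_', pipeline().parameters.deviceType, '.json')

Here deviceType would come from however the pipeline is parameterized; the key part is deriving the dated folder segments from trigger().startTime.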

Box 2: /{YYYY}/{MM}/{DD}/{HH}_{deviceType}.json

This gives one dataset per hour per deviceType.
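As a worked example (times and names hypothetical): a run whose trigger starts at 2023-02-08T10:00 UTC for deviceType thermostat writes /2023/02/08/10_thermostat.json. An hourly batch job then reads exactly one file per device type instead of scanning the twelve five-minute source files for that hour, which is what minimizes read times.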

Box 3: Flatten hierarchy

- FlattenHierarchy: All files from the source folder are placed in the first level of the target folder, with autogenerated names.
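For reference, a minimal sketch of the sink block in the copy activity JSON, using the ADLS Gen2 write settings (everything not relevant to the boxes is trimmed):

    "sink": {
        "type": "JsonSink",
        "storeSettings": {
            "type": "AzureBlobFSWriteSettings",
            "copyBehavior": "FlattenHierarchy"
        },
        "formatSettings": {
            "type": "JsonWriteSettings"
        }
    }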

 
Posted : 08/02/2023 10:18 am
