How should we handle the fact that the blob input gets updated roughly once per minute (59-62 times per hour) until a rollover file is created?
Doesn't that create duplicate entries in Splunk?
It is documented that NSG flow log files are written as block blobs, appended one block per minute:
[0] {"records":[      (opening block, 12 bytes)
[1] { ... },{ ... }   => after 1 minute
[2] ,{ ... },{ ... }  => after 2 minutes
...
[n] ]}                (closing block, 2 bytes)
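Given that structure, an ingester can avoid duplicates by checkpointing how far into the blob it has already read and parsing only the bytes appended since, ignoring the 2-byte `]}` closer that moves on every update. How the actual Splunk add-on does this is not shown here; the following is a minimal stand-alone sketch of the idea, with all names (`new_records`, the `{"t":...}` sample records) hypothetical:

```python
import json

HEADER = '{"records":['   # 12-byte opening block
FOOTER = ']}'             # 2-byte closing block, moved on every append

def new_records(blob_text: str, checkpoint: int) -> tuple[list, int]:
    """Return the records appended since `checkpoint` (a byte offset into
    the blob, excluding the trailing ']}'), plus the new checkpoint."""
    body_end = len(blob_text) - len(FOOTER)
    if checkpoint >= body_end:
        return [], checkpoint                       # nothing new appended
    # New blocks start with a leading comma; strip it so the fragment
    # can be wrapped in brackets and parsed as a JSON array.
    fragment = blob_text[checkpoint:body_end].lstrip(',')
    return json.loads('[' + fragment + ']'), body_end

# First read: header + one minute's records + footer.
blob = '{"records":[{"t":1},{"t":2}]}'
recs, cp = new_records(blob, len(HEADER))           # [{"t":1},{"t":2}]

# A minute later a new block ',{"t":3}' was inserted before the footer;
# rereading from the checkpoint yields only the new record.
blob = '{"records":[{"t":1},{"t":2},{"t":3}]}'
recs2, cp = new_records(blob, cp)                   # [{"t":3}]
```

With this approach, rereading the whole blob every minute still emits each record exactly once, because everything before the checkpoint is skipped.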