…
Good questions. One of the goals of autonomous vehicles is that they should both communicate and cooperate (or so we're told), so surely yes to the first part, and in fact more so than humans, because their communication doesn't rely only on hands and eyes, and their cooperation doesn't involve egos. However, you just know that someone is going to devise ways to get autonomous vehicles to mimic human egos of various sorts.
As for the hive-mind grudge, in theory the situation should never arise, because of no ego. But then again...
On the first point, the autonomy will be a mix of rules and machine learning. The learning will have to be trained on something. Probably humans.
As for the grudge, imagine an AI spidey sense. It spots a car (or cyclist, wild animal, etc.) behaving erratically, or not following the Highway Code. It needs to adjust its behaviour.
Is it wildly unreasonable to check with the hive mind for similar incidents, to inform a model of threats and likely behaviour?
Is it unreasonable to program it to avoid a situation where a problem car is an ongoing interaction, if there’s a choice?
Maybe they’ll come to regard Audis with the same disdain as we do - on a balance of probabilities?
So, no ego, just a programmed response to increase safety and efficiency.
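That "programmed response" could be sketched as below. This is purely hypothetical: the class and function names (`HiveMind`, `Incident`, `plan_response`), the severity scale, and the threshold are all invented for illustration, not drawn from any real autonomous-vehicle stack.

```python
from dataclasses import dataclass, field

@dataclass
class Incident:
    """One report of problem behaviour, as shared with the hive mind."""
    vehicle_id: str
    behaviour: str   # e.g. "erratic_lane_change"
    severity: float  # 0.0 (benign) .. 1.0 (dangerous), an assumed scale

@dataclass
class HiveMind:
    """Toy shared incident log standing in for the 'hive mind'."""
    incidents: list = field(default_factory=list)

    def report(self, incident: Incident) -> None:
        self.incidents.append(incident)

    def threat_score(self, vehicle_id: str) -> float:
        """Mean severity of prior reports for this vehicle; 0 if unseen."""
        prior = [i.severity for i in self.incidents
                 if i.vehicle_id == vehicle_id]
        return sum(prior) / len(prior) if prior else 0.0

def plan_response(hive: HiveMind, vehicle_id: str,
                  threshold: float = 0.5) -> str:
    """No ego, no grudge: just back off from known problem vehicles."""
    if hive.threat_score(vehicle_id) >= threshold:
        return "increase_following_distance"
    return "proceed_normally"

hive = HiveMind()
hive.report(Incident("CAR-001", "erratic_lane_change", 0.8))
hive.report(Incident("CAR-001", "tailgating", 0.6))

print(plan_response(hive, "CAR-001"))   # increase_following_distance
print(plan_response(hive, "BIKE-042"))  # proceed_normally
```

The key point the sketch captures is that the "disdain" is just an aggregate statistic over reported incidents, weighed on a balance of probabilities, with a fixed safety-oriented response rather than anything resembling a grudge.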