There's an Otter thing that barakta sometimes uses.
The rule of thumb here is that speech recognition software struggles with context in almost exactly the same way that a deaf person does: it will mangle critical nouns, drop 'not' in the worst possible places, and otherwise generate grammatically correct nonsense. That makes it more useful as a memory aid than as a way of following what's going on in real time. If two people are having a conversation, and the speaker can watch the transcript and clarify when it goes wrong, it's a bit more useful than it is for someone giving a presentation.
The gold standard is STTR (speech-to-text reporting) with a human operator, preferably in the same room, though they can work remotely. This is sufficiently expensive that it's in the realm of conference budgets or Access To Work funding, rather than something a Bridge Club is likely to be able to afford.
And regardless of whether you're using speech recognition or a remote human operator, microphone technique is a complicated subject deserving of more than one highlighter pen. If you're feeding the listener room audio from a fondleslab's internal mic, they're going to struggle.