PCs aren't faster; they have more cores, so they can do more at a time, but it takes effort to optimize for parallel work. Also, the form factor keeps getting smaller: more people use laptops now, and you can't cheat thermals.
If review is the "bottleneck", I'd say the code needs to be optimized for review time. Ship small increments of easy-to-understand code, touching as small a surface as possible, and make sure it passes review with no need for corrections and re-review.
Everyone. One person is too lazy to write a message; the others can't be bothered to listen to the whole thing 🤷♂️
The transcription should be attached to the audio recording, so if the sender cares about it being correct they can comment or add a correction.
I think this is a showcase photo for the case, and the floppies look better and more like a generic floppy disk, with the metal slider on the top. It clearly communicates the purpose of the item, and the keys are in the shot to show that it locks with a key and that there's 1 spare key.
I think we can assume it's an NVIDIA H200, which peaks at 700W from what I saw on Google. Multiply that by the turnaround time from your prompt to the full response and you have a ceiling value.
There's probably some queueing and other delays, so in reality the time the GPU spends on your query will be much less. If you use the API, it may include timing information.
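The back-of-the-envelope math above can be sketched like this (a minimal sketch; the 700 W peak and the single-GPU assumption are my guesses from the comment, and real serving batches many requests per GPU, so the true per-query share is far lower):

```python
# Ceiling estimate: one GPU running flat-out at peak power for the whole
# turnaround time. Both PEAK_WATTS and the turnaround are assumptions.
PEAK_WATTS = 700  # assumed NVIDIA H200 peak power draw

def energy_ceiling_wh(turnaround_seconds: float, watts: float = PEAK_WATTS) -> float:
    """Energy in watt-hours if one GPU ran at full power for the whole turnaround."""
    return watts * turnaround_seconds / 3600

# e.g. a hypothetical 20-second prompt-to-full-response turnaround:
print(round(energy_ceiling_wh(20), 2))  # 700 W * 20 s = 14000 J ≈ 3.89 Wh
```

Queueing and batching mean the real number is well below this; the point is just that the ceiling is a few watt-hours, not kilowatt-hours.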
Especially if you're a Python programmer