Keys for trust model where the host computer is not fully trusted

Thanks for the replies :slight_smile: .

The USB Armory Mk II sounds nice, but it would still require me to carry around a keyboard and connect it directly to the Armory if I want to make sure nobody can intercept my PIN code. It would be more convenient to have the keypad embedded directly in the Nitrokey, similar to the OnlyKey.

I don’t think a fingerprint sensor would solve the issue: a fingerprint is actually kind of ‘public’ (we leave fingerprints everywhere we go), and it is easy to obtain by force. A PIN code entered directly on the device, as on the OnlyKey, is much harder to obtain IMO, especially when entering the wrong PIN a few times erases the contents of the token.
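Just to illustrate that last point, here is a minimal toy sketch of what I have in mind (this is not the actual OnlyKey or Nitrokey firmware, just my own made-up model): the PIN check, the retry counter and the wipe all live on the token itself, so the host never has to see the PIN or the stored secrets.

```rust
/// Toy model of an on-device PIN check with a retry counter.
/// Purely illustrative: a real token would keep a salted hash of the PIN
/// in secure storage and wipe keys in hardware, not in a Vec.
struct Token {
    pin: Option<String>, // stored PIN (None once the token has been wiped)
    secrets: Vec<u8>,    // key material erased after too many failures
    retries_left: u8,
}

impl Token {
    fn new(pin: &str, secrets: Vec<u8>) -> Self {
        Token { pin: Some(pin.to_string()), secrets, retries_left: 3 }
    }

    /// Called with the digits typed on the device's own keypad.
    fn verify_pin(&mut self, entered: &str) -> bool {
        let ok = match &self.pin {
            Some(pin) => self.retries_left > 0 && entered == pin.as_str(),
            None => false,
        };
        if ok {
            self.retries_left = 3; // reset the counter on success
        } else {
            self.retries_left = self.retries_left.saturating_sub(1);
            if self.retries_left == 0 {
                self.wipe(); // erase everything after the last failed attempt
            }
        }
        ok
    }

    fn wipe(&mut self) {
        self.secrets.iter_mut().for_each(|b| *b = 0);
        self.secrets.clear();
        self.pin = None;
    }
}

fn main() {
    let mut token = Token::new("1234", vec![0xAA; 32]);
    for attempt in ["0000", "1111", "2222"] {
        println!("attempt {}: ok = {}", attempt, token.verify_pin(attempt));
    }
    // After three wrong PINs the secrets are gone, even for the right PIN.
    println!("correct PIN after wipe: ok = {}", token.verify_pin("1234"));
    println!("secrets left: {} bytes", token.secrets.len());
}
```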

To me, as it is now, the Nitrokey is ‘something you own’, and if we trust the smartcard implementation, a ‘safe something you own’. The PIN code is in theory a great ‘something you know’, but there is still the issue of how it is protected. If the PIN has to be entered through a host computer, which has such a large attack surface that it could plausibly be compromised to intercept the PIN, then it is not a completely safe ‘something you know’. By contrast, if the PIN can be entered directly on the token, which is small enough that I could inspect its whole design and implementation myself, I feel a bit safer that the PIN cannot easily be intercepted and really is a ‘safe something you know’.