So, since a reputation for trustworthiness/integrity has value, it seems that there are a few options:
1. Figure out how to incorporate that value into a consequentialist decision-making framework.
2. Just act with integrity. (Your suggestion, I believe.)
3. Look into Virtue Ethics.
I think you make a good case that a person should do #2. But it's much less clear how that would apply to an A.I. agent. Below, I discuss how #3 could relate to A.I. agents: