Wednesday, 27 September 2017
Why Androids Are More Trustworthy Than Humans
Androids have a bad reputation in science fiction. But in science fact, robot-human hybrids can outperform both pure humans and pure robots. The trick is balancing the flexibility, intuition, and understanding of a human with the efficiency, reliability, and indefatigability of a robot.
At Invisible Technologies, we provide synthetic assistants composed of human teams managed by a robot. With an algorithm at the helm, you can trust that the assistant’s output will match your desires, but with humans executing, you can trust that the assistant won’t get stumped the first time reality doesn’t match its instructions.
With dozens of human agents recording their experiences over hundreds of task instances, the accumulated understanding starts to resemble a rudimentary brain — something capable of memory and learning. The robot brain directs the human arms, which update the robot brain, which then directs the human arms better — and you have yourself a synthetic assistant far more capable than any individual human or robot.
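That feedback loop is easier to see in code than in prose. Here is a minimal sketch of the idea, assuming a toy design of my own invention (the `Brain` class, `run_task`, and the sample tasks are all hypothetical illustrations, not Invisible's actual system):

```python
# A toy version of the loop: the robot brain directs the human arms,
# the human arms update the robot brain, and the next direction is better.

class Brain:
    """The 'robot brain': a growing store of instructions per task type."""

    def __init__(self):
        self.instructions = {}  # task type -> list of learned steps

    def direct(self, task_type):
        # The brain tells the human agent everything known so far.
        return self.instructions.get(task_type, ["use your judgement"])

    def learn(self, task_type, lesson):
        # Each completed task instance updates the brain.
        self.instructions.setdefault(task_type, []).append(lesson)


def run_task(brain, task_type, human_agent):
    steps = brain.direct(task_type)   # brain directs the human
    lesson = human_agent(steps)       # human executes and notes what worked
    brain.learn(task_type, lesson)    # brain gets smarter for next time


brain = Brain()
run_task(brain, "book-flight", lambda steps: "check the client's seat preference")
run_task(brain, "book-flight", lambda steps: "mark the calendar Out of Office")

# The second run already benefits from the first run's lesson.
print(brain.direct("book-flight"))
```

No individual human carries this memory, and no individual robot could have written the lessons: the accumulation is the point.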
What qualities do you look for in a normal human assistant?
You want someone who:
Never forgets a task,
Always knows your personal preferences, and
Never makes mistakes.
How would you find these qualities in a human? You might trust a referral, their relevant experience, or just like the person in question — but they can never meet these promises one hundred percent of the time. Memory is fickle, notes get lost, people get sick, and their judgement is never quite as good as yours. And good luck finding an artificial intelligence that can understand all that!
For a synthetic intelligence (humans plus robots), meeting these criteria is a simple matter of programming.
Build a robot brain to prioritize the above qualities, hire humans to follow the robot’s instructions, and you don’t have to trust either — just the feedback loop itself.
That’s the problem with human brains — you can’t see the mental programming that determines the person’s actions. You don’t know how they remember things, how they learn from mistakes, or how they pay attention to detail — you just see the results of those thoughts.
Meanwhile, synthetic brains are 100% visible. You can peer into them and rewire however you like. Make sure input A creates output B. Take action Y when situation X occurs. Under no circumstances should you Z. It’s all right there!
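To make that concrete, here is what "100% visible" wiring might look like as a toy rule table. Everything here (the rule names, the actions, the `decide` helper) is invented for illustration; the point is only that every rule is an inspectable entry and every hard constraint is checked explicitly:

```python
# Input A creates output B; action Y fires when situation X occurs;
# and under no circumstances Z. All of it is right there to read and rewire.

RULES = {
    "email-from-spouse": "label: Personal",        # input A -> output B
    "flight-booked": "set-status: Out of Office",  # situation X -> action Y
}

FORBIDDEN = {"delete-client-email"}                # never, under any circumstances, Z

def decide(situation):
    action = RULES.get(situation)
    if action in FORBIDDEN:
        raise ValueError(f"Rule violates a hard constraint: {action}")
    return action  # None means: no rule yet, a human agent steps in

print(decide("flight-booked"))   # set-status: Out of Office
print(decide("novel-request"))   # None
```

Contrast that with a human assistant: you cannot print their rule table, and you cannot add a `FORBIDDEN` set to their judgement.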
In the rest of this post, I’m going to offer you a peek inside the synthetic brains that currently power Invisible. Here are the dashboards our agents use to solve for all of the above.
Instances Dashboard: Never Forget a Task
Even a human can solve this one — just write down every task you get. At Invisible, we ensure agents record their task as part of the task itself.
This dashboard lets clients confirm at a glance what their assistant is spending time on, which Capabilities (categories of work) it is prioritizing, and how much time goes to each.
Soon, recording this data will happen automatically via time tracking software, but before we automate anything, we always execute manually to ensure we understand all of the relevant pieces first.
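The roll-up a dashboard like this shows is simple to sketch. The records below are made-up examples (the Capability names and minute counts are mine, not real client data), but the aggregation is the real idea:

```python
# Sum recorded minutes per Capability to answer, at a glance:
# "what is my assistant spending time on?"
from collections import defaultdict

instances = [
    {"capability": "Scheduling", "task": "book flight", "minutes": 25},
    {"capability": "Inbox", "task": "triage email", "minutes": 40},
    {"capability": "Scheduling", "task": "reschedule dinner", "minutes": 10},
]

time_by_capability = defaultdict(int)
for record in instances:
    time_by_capability[record["capability"]] += record["minutes"]

for capability, minutes in sorted(time_by_capability.items()):
    print(f"{capability}: {minutes} min")
# Inbox: 40 min
# Scheduling: 35 min
```

Whether the `minutes` field is typed in by an agent or captured by time-tracking software, the client-facing view is the same.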
Context and Preferences Dashboards: Always Remember Preferences
A common problem with virtual assistants is context transfer — how can you trust that Agent A learns from the work of Agent B? Managing and coordinating humans is a job in itself, as any manager can tell you.
Our Context dashboard stores everything an agent needs to know about the client, whether it's their birthday, the size of a conference room, or how to sort emails from their spouse. It's the Single Source of Truth for everything we know about the client.
Note the robotic commands, written in natural language. It’s easy to say: “If we book a flight, then we should note the client as Out of Office.”
But it’s harder to know which of their 3 recurring family events per week we should block time for. Or whether that email from their co-founder should be labeled as urgent or just an FYI.
This is where human intuition comes in handy. The agent can use the Context they have, in tandem with the commands the algorithm gave them, and make a decision that matches the circumstances.
As long as we have the right process instructions written down, the agent will make the right decision. And if we don’t, the client will tell us, and we’ll record it as a new Preference.
Preferences come from client Feedback or Mistakes (both of which have their own dashboards). They’re essentially edits to the synthetic brain’s instructions — do this instead of that. They might add a new step, tweak an existing one, or reiterate something that wasn’t written before.
By storing all of these idiosyncrasies in one place, the client can trust that as long as they input their desires into the dashboard, then their assistant will act the way they want.
Mistakes Dashboard: Never Make the Same Mistake Twice
Robots are the only ones who can truly promise they’ll never make a mistake. But that also means they can’t innovate or solve problems that aren’t addressed in the initial delegation. Our synthetic assistants can, which means they can promise the next best thing — no mistake made twice.
We track all of our mistakes in the Mistakes dashboard, and every Mistake gets a Preference to match. That way, we can promise we’ll never make the same mistake twice, and stand by it. Every mistake updates the brain so it’s incapable of making it again.
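The promise rests on a simple invariant: logging a mistake is not complete until a matching Preference (an edit to the brain) exists. Here is a minimal sketch of that invariant, assuming hypothetical record shapes of my own choosing:

```python
# Every Mistake gets a Preference to match: no orphan mistakes allowed.

mistakes = []
preferences = []

def record_mistake(description, fix):
    """A mistake and its corrective preference are recorded together."""
    mistake_id = len(mistakes)
    mistakes.append({"id": mistake_id, "description": description})
    preferences.append({"mistake_id": mistake_id, "instruction": fix})

record_mistake(
    "Booked a 6am flight the client hated",
    "Never book departures before 8am without asking first",
)

# The invariant the dashboard enforces: every mistake has a preference.
assert {m["id"] for m in mistakes} == {p["mistake_id"] for p in preferences}
```

Because the fix is written into the brain's instructions rather than held in an agent's memory, the correction survives agent turnover.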
You’ll notice not all mistakes are equal. Some come from human error, some from a systems failure, and some are the cost of trying a new innovation. We categorize those as well, and this informs our product roadmap and agent training procedures.
You can trust your synthetic assistant when we promise it will never make the same mistake twice. Just check its brain!
Learn more at inv.tech