What should the Partnership on AI become?

The Partnership on AI (https://www.partnershiponai.org/), for those who aren't familiar with the group, brings together some of the most influential corporations in the Artificial Intelligence (AI) development business. They are wise enough to foresee that the growth of this technology will have a profound influence on humanity in a variety of ways. They have recently created this non-profit organization to foster the successful and beneficial implementation of AI into society at large. Their Board of Trustees has just published a series of thematic pillars and is currently looking to hire the leadership team for the Partnership.

Using the Partnership's thematic pillars, I take the liberty of creating an action plan on their behalf, as their goals and ForHumanity's goals share many like-minded tenets. In the sections below, my action items (in italics) follow each of the Partnership on AI's thematic pillars, which are copied verbatim from the website.

1. SAFETY-CRITICAL AI

Advances in AI have the potential to improve outcomes, enhance quality, and reduce costs in such safety-critical areas as healthcare and transportation. Effective and careful applications of pattern recognition, automated decision making, and robotic systems show promise for enhancing the quality of life and preventing thousands of needless deaths.

However, where AI tools are used to supplement or replace human decision-making, we must be sure that they are safe, trustworthy, and aligned with the ethics and preferences of people who are influenced by their actions.

We will pursue studies and best practices around the fielding of AI in safety-critical application areas.

This needs to be a primary function of the Partnership. The leadership ought to quickly scour the academic literature and operating AI businesses for best practices on safety and control. A team should be built to work directly with industry and request the transparency needed to identify these best practices, then quickly publish a Partnership guide on AI safety. This guide would be distributed widely and carried directly back to the users of AI for compliance.

In addition, the Partnership should create a SAFEAI "Good Housekeeping Seal of Approval". This seal would be licensed to consumer products once a product has met the Partnership's standards for safety, cybersecurity, privacy, data bias, control, and ethics. This model would give the Partnership a revenue stream of its own. The SAFEAI seal would enhance its service to the masses as well as the Partnership's brand. The seal would become a green light to purchasers of AI consumer products, putting consumers' minds at ease. A strong brand and a source of revenue would enhance the Partnership's stature by giving it independence.

Finally, the Partnership should begin to convene user groups, such as parents, to identify their concerns regarding safety and AI. This would allow the SAFEAI seal to be applied specifically to toys during the early stages of its introduction. SAFEAI then becomes a natural talking point in the mass media, such as the Today Show and GMA. This is another important step toward building the brand and reaching the mass market, educating it about AI and, especially, about SAFEAI.

2. FAIR, TRANSPARENT, AND ACCOUNTABLE AI

AI has the potential to provide societal value by recognizing patterns and drawing inferences from large amounts of data. Data can be harnessed to develop useful diagnostic systems and recommendation engines, and to support people in making breakthroughs in such areas as biomedicine, public health, safety, criminal justice, education, and sustainability.

While such results promise to provide great value, we need to be sensitive to the possibility that there are hidden assumptions and biases in data, and therefore in the systems built from that data. This can lead to actions and recommendations that replicate those biases, and suffer from serious blindspots.

Researchers, officials, and the public should be sensitive to these possibilities and we should seek to develop methods that detect and correct those errors and biases, not replicate them. We also need to work to develop systems that can explain the rationale for inferences.

We will pursue opportunities to develop best practices around the development and fielding of fair, explainable, and accountable AI systems.

The Partnership should adopt IEEE's Ethically Aligned Design (EAD) standards immediately. Where there are areas of doubt or open questions, the Partnership should engage John Havens and the IEEE team to voice those concerns and make changes as appropriate. EAD is an excellent jumping-off point, and the faster consensus is achieved, the easier it will be to create industry standards and secure subsequent compliance.

The Partnership should then create a business team that seeks audit-level transparency from AI firms across their businesses and looks to award the Partnership's SAFEAI seal of approval at the firm level. This audit function would remain confidential: failures would not be published, but the SAFEAI seal could not be used until the firm is compliant. This business model for the non-profit is similar to the credit-rating model for corporate bond issuance, which has proven successful for decades by increasing bond buyers' confidence and thus the acceptance of debt sold by corporations. AI firms today can only imagine the power of such a seal of approval on their ability to transact business, especially in the public realm.

Systems and processes will have to be developed within the Partnership to continually improve its ability to evaluate data bias, cybersecurity, ethics, privacy, control, and standards compliance. Over time, this team would become the de facto expert in AI compliance.

3. COLLABORATIONS BETWEEN PEOPLE AND AI SYSTEMS

A promising area of AI is the design of systems that augment the perception, cognition, and problem-solving abilities of people. Examples include the use of AI technologies to help physicians make more timely and accurate diagnoses and assistance provided to drivers of cars to help them to avoid dangerous situations and crashes.

Opportunities for R&D and for the development of best practices on AI-human collaboration include methods that provide people with clarity about the understandings and confidence that AI systems have about situations, means for coordinating human and AI contributions to problem solving, and enabling AI systems to work with people to resolve uncertainties about human goals.

I think this tenet from the Partnership creates the opportunity for sponsor firms and SAFEAI participants to demonstrate their good faith to humanity at large as they look to commercialize their AI developments.

The Partnership can take a leadership role in providing smart, safe, and responsible AI applications that benefit all of humanity. It is uniquely positioned to advocate for further adoption of AI because of the transparency and independence which member firms should provide. Furthermore, it should be a resource for individuals and businesses who have concerns or questions about safety, ethics, cybersecurity, data bias, privacy, and control issues.

The Partnership should be highly visible, speaking regularly at industry events as well as in the mass media to enhance its brand and become known as the independent resource on AI safety, with the primary goal of advancing AI for the benefit of humanity.

In the implementation of AI, the Partnership is more like a referee, ensuring that the game is played by rules well known to both AI developers and consumers. Games played with referees, where the rules are known to all, are regarded as the fairest by participants and onlookers alike. They also become the games most widely watched and played, so it is truly in everyone's best interest.

4. AI, LABOR, AND THE ECONOMY

AI advances will undoubtedly have multiple influences on the distribution of jobs and nature of work. While advances promise to inject great value into the economy, they can also be the source of disruptions as new kinds of work are created and other types of work become less needed due to automation.

Discussions are rising on the best approaches to minimizing potential disruptions, making sure that the fruits of AI advances are widely shared and competition and innovation is encouraged and not stifled. We seek to study and understand best paths forward, and play a role in this discussion.

I view this as a vital role for the Partnership, both directly with companies and in policy formation. Personally, I am convinced that significant technological unemployment is an unavoidable consequence of the advancement of AI and automation. The Partnership is uniquely positioned to understand the sources of these job dislocations as well as the companies driving them.

I can envision a company-elected program, designed by the Partnership, that helps any company through a multi-step process for managing technological unemployment. One step could be job retraining, both internal and external. The Partnership would be uniquely positioned to see the "new" jobs being created by AI and could use that perch to devise training programs that fill those "new" job descriptions.

Retraining will not solve all of the problems, and there will likely be room for the creation of a Universal Basic Income (or some facsimile), at both the local and federal levels. This could take the form of an insurance program that corporations subscribe to, whereby the pool is drawn from the broader corporate community at large. Should companies choose to forgo a strict bottom-line-only approach, they may opt to protect the employees and communities they directly influence. The insurance pool would pay out to companies that experience technological unemployment, with the resulting "benefit" passed on to the affected employees.

Finally, much research will need to be done on the concept of Universal Basic Income (or a facsimile), and the Partnership should sponsor research to understand the topic, its impact on society, and possible forms of implementation. The Partnership will then be well positioned to advise corporations and legislators as they consider possible solutions to technological unemployment.

5. SOCIAL AND SOCIETAL INFLUENCES OF AI

AI advances will touch people and society in numerous ways, including potential influences on privacy, democracy, criminal justice, and human rights. For example, while technologies that personalize information and that assist people with recommendations can provide people with valuable assistance, they could also inadvertently or deliberately manipulate people and influence opinions.

We seek to promote thoughtful collaboration and open dialogue about the potential subtle and salient influences of AI on people and society.

Here I see the Partnership having to strike a delicate balance between data-driven business and the humanity it serves. The Partnership ought to be a thought leader and an independent auditor of AI systems. Violations of human rights and the law cannot be tolerated. They must be swiftly addressed and rooted out at their core. This can be done in myriad ways, such as shining a light on the problem and/or providing a source of independent and unbiased corrections.

One solution might take the form of an audit or the application of standards and practices; another might take the form of media coverage and a comprehensive forum for open dialogue, designed to reach not only consensus but practical application.

The SAFEAI format should provide a basis for privacy and how privacy is applied. I suspect it will involve choice, or "opting in", but once the standard is determined, I view privacy as one of the applications of SAFEAI.

6. AI AND SOCIAL GOOD

AI offers great potential for promoting the public good, for example in the realms of education, housing, public health, and sustainability. We see great value in collaborating with public and private organizations, including academia, scientific societies, NGOs, social entrepreneurs, and interested private citizens to promote discussions and catalyze efforts to address society’s most pressing challenges.

Some of these projects may address deep societal challenges and will be moonshots — ambitious big bets that could have far-reaching impacts. Others may be creative ideas that could quickly produce positive results by harnessing AI advances.

The spirit of this tenet highlights the altruistic nature of the Board of Trustees at the Partnership on AI. The Partnership can provide robust, independent verification of safety, ethics, standards, cybersecurity, privacy, bias, and control that would give large public users of AI (who have a diverse constituency) the confidence they need to make large-scale decisions to implement AI in their processes.

Where AI impacts public policy, I believe the Partnership should have a view and a voice in the way AI is adopted into legislation. I could see a dedicated advocacy role forming around the principles and research that the Partnership sponsors.

One example might eventually be in the realm of General AI. Prof. Nick Bostrom of Oxford has suggested that, if and when General AI is achieved and unleashed, it should no longer be held by an individual or a single corporation but instead operated on behalf of all, for the common good. I could envisage a future where the Partnership becomes wholly representative of that common good and might even be entrusted with guidance over a General AI system.

This last point highlights the most important skill the Partnership should have: nimbleness. This world is dynamic and requires an agile organization to respond to the ever-changing challenges that AI will present. Nimbleness is a culture that must be cultivated. The Partnership should avoid being rigid in its action plans and instead use data-driven research and active public feedback loops to look for new challenges. It should then use its considerable resources to tackle those challenges swiftly, always striving for practical, applicable solutions.

7. SPECIAL INITIATIVES

Beyond the specified thematic pillars, we also seek to convene and support projects that resonate with the tenets of our organization. We are particularly interested in supporting people and organizations that can benefit from the Partnership’s diverse range of stakeholders.

We are open-minded about the forms that these efforts will take.

I believe there are four additional topics the Partnership should also be prepared to speak about: 1) income inequality, 2) transhumanism, 3) AI and faith, and 4) life extension. Each of these topics, centered on AI, has an enormous impact on the way society functions and on our current version of humanity. The implications of these four topics, and certainly of others yet unknown, will be vital in facilitating the integration of AI throughout humanity in a way that is most beneficial to all, especially since some of these topics may foster deep division in humanity and create new minorities that need protection.

All of this speaks to an organization that is nimble, data-driven, and action-oriented in exploring the boundaries of the Partnership's mandate. It also requires a continual feedback loop between the Partnership staff and stakeholders (both on the Board and for the common good) to explore whether the Partnership should expand its mandate. This means that communication is paramount. The organization needs to create a robust culture of open communication and dialogue with a diverse array of constituents. This type of culture is not created overnight; it will happen only when the leadership team leads by example and rewards open and positive dialogue.

I hope that the Partnership on AI gets the right leadership and becomes everything that it could be. The future promises significant upheaval, and humanity will need strong organizations to provide guidance and remain aligned with its best interests.