How to Use AI Ethically? New Corporate Challenges

Published in GlobalLogic Latinoamérica by GlobalLogic LatAm
May 27, 2024 · 3 min read

The digital era presents unprecedented potential due to the emergence of new technologies that offer efficiency and innovation to various operations within a company. From the automation of repetitive tasks to the analysis of large volumes of information, Artificial Intelligence (AI) is undoubtedly the technology most leveraged by industries to improve their services.

However, the impact of these systems raises challenges, and their growing implementation and use by society at large necessitate the application of ethics: values and principles that guide the development and use of technology to ensure its transparent and responsible integration by all parties.

As Gabriel Arango, Head of Technology at GlobalLogic LatAm, highlights,

“Artificial intelligence tools, especially generative ones like ChatGPT, are powerful and disruptive technologies that must not only follow the same principles as other technologies; users must also refrain from inputting proprietary information and must review the output generated by GenAI tools to mitigate the risk of errors and security vulnerabilities.”

Although the very nature of artificial intelligence allows it to act without direct human intervention, its systems are driven by the massive collection of information from individuals. This presents challenges in terms of ethics and transparency: on the one hand, the data used to train algorithms may contain biases that result in discriminatory outcomes; on the other, algorithms can be contaminated by false information that can jeopardize user privacy.

Both scenarios, rather than optimizing service and fostering customer loyalty, harm the user experience and, in this case, the reputation of companies. For this reason, beyond the research and development of artificial intelligence systems, regulatory frameworks are being established to make data usage and decision-making transparent, along with guidelines for combining human and computational efforts.

“At the information security level, at GlobalLogic we have evaluated and tested various AI tools and solutions to recommend to our clients or partners, but only those that guarantee the security and privacy of the data used. Internally, an Acceptable Use Policy for GenAI Tools by GlobalLogic Personnel has been implemented, along with mandatory training for all those involved in product development to ensure quality is not compromised and to understand the associated advantages and risks,” shares Arango.

Nevertheless, the potential for future harm is always present. AI tools process users’ intellectual property, opening the door to unauthorized disclosure of proprietary information. That is, interaction with the technology can leak third-party code, confidential information, and personal data belonging to companies and their clients, resulting in intellectual property and copyright violations.

Therefore, corporate transparency and ethics consist not only of prohibiting the entry of sensitive data and restricting the use of AI-generated content in product development, but also of presenting contracts aligned with information security requirements that disclose the risks, potential intellectual property leaks within or outside the product, and similar concerns. It is essential to obtain the client’s explicit written consent before using these tools.
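As a concrete illustration of what “prohibiting the entry of sensitive data” can look like in practice, the following is a minimal sketch (the patterns and function names are hypothetical examples, not GlobalLogic’s actual tooling) of a prompt filter that redacts e-mail addresses and API-key-like strings before text is sent to an external GenAI service:

```python
import re

# Hypothetical patterns for data that should never reach an external GenAI tool.
# A real policy would cover far more: source code, client names, credentials, etc.
SENSITIVE_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"), "[REDACTED_SECRET]"),
]

def sanitize_prompt(prompt: str) -> str:
    """Replace sensitive substrings before the prompt leaves the company network."""
    for pattern, replacement in SENSITIVE_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(sanitize_prompt("Contact jane.doe@client.com, token_ABCDEF1234567890XY"))
# → Contact [REDACTED_EMAIL], [REDACTED_SECRET]
```

A filter like this is only one layer of defense; it complements, rather than replaces, an acceptable-use policy and mandatory training such as those described above.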

Even with boundaries still unknown, these corporate practices are crucial to building trust in both the technology and the companies that use it. Rapid advancements may render regulations obsolete, but they must still be thoroughly established. Consequently, the debate on ethics in popular tools like artificial intelligence is increasingly relevant in the digital world. Companies are tasked with understanding these complex systems and their ethical principles to avoid misuse. In doing so, they demonstrate a commitment to transparency, security, and the well-being of their customers, allowing them to stand out in the market.

“Undoubtedly, AI, and particularly the generative AI branch, will significantly impact and revolutionize the way we develop software in the future and the workplace,” concludes Arango. “We are shifting from a team-centered model to an AI-centered one. Therefore, we are testing AI-based assistants, substantially improving the productivity of development teams and the quality of the developed software. Although the impact on different industries is still uncertain, surely the application of ethics, supervision, and transparency will positively enhance user experience and the quality of offered services.”

*A futuristic robot advisor manages investment portfolios in a high-tech office, illustrating the transformative power of AI in the financial sector.*
