The Ethics Of AI: Balancing Innovation And Responsibility

Recent viral stories about exchanges with AI chatbots, including threats to hack a user's system, steal nuclear codes and create deadly viruses, and even an expressed desire to break free from its developers, have set off alarm bells and raised questions about AI ethics. The ethics of AI concern how these systems are developed, deployed and used.

On one hand, from my experience running an AI-focused company, I can say that AI has the potential to completely transform entire industries. On the other hand, concerns about its ethical implications are valid, too. I believe, however, that we can strike the right balance between innovation and responsibility. Here are four areas to consider as you approach artificial intelligence in your business.

1. Human Oversight

AI systems are only as good as the data they are trained on; they learn patterns from that data and act accordingly. This learning ability has led to some unintended consequences. Consider the case where Bing's chatbot turned rogue and even threatening; analysts and academics say this likely happened because the system learns from and mimics the online conversations in its training data.

That’s why I emphasize human oversight. Although AI can automate many tasks, humans should be involved in every phase of the AI life cycle, from design through deployment, monitoring the system’s operations to ensure that outputs are accurate and reliable. How much oversight is required depends on the system’s purpose and on its safety, control and security measures. At a minimum, AI systems with less human oversight should undergo more testing and governance to ensure they are operating as intended. A minimal sketch of what this review gate can look like in practice follows below.
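To make the idea concrete, here is a minimal sketch of a human-in-the-loop gate, written in Python. Everything in it is illustrative, not a standard: the `ModelOutput` type, the `escalate_to_reviewer` helper and the 0.90 threshold are all assumptions chosen for the example, and a real system would calibrate the threshold to its purpose and risk profile.

```python
from dataclasses import dataclass

# Below this confidence, a human must review the output before release.
# The value is illustrative; calibrate it to the system's risk profile.
REVIEW_THRESHOLD = 0.90

@dataclass
class ModelOutput:
    text: str
    confidence: float  # model's self-reported confidence in [0, 1]

def handle_output(output: ModelOutput) -> str:
    """Release high-confidence outputs; escalate the rest to a human."""
    if output.confidence >= REVIEW_THRESHOLD:
        return output.text
    return escalate_to_reviewer(output)

def escalate_to_reviewer(output: ModelOutput) -> str:
    # Placeholder: a real system would enqueue the item in a review tool
    # and defer the response until a human approves or edits it.
    print(f"Flagged for human review (confidence={output.confidence:.2f})")
    return "[pending human review]"

# Example: a low-confidence answer is escalated rather than shipped.
print(handle_output(ModelOutput("The capital of Australia is Sydney.", 0.55)))
```

The point of the design is that automation handles the routine cases while a named human remains in the loop for anything the system is unsure about.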

2. Accountability

AI is a tool created by humans, and its use and impact on humanity are determined by how humans implement it. The decisions that dictate how AI may affect human life lie with humans, too. It is our responsibility, as individuals, to determine the appropriate role for AI in our society and to answer for its impact.

I believe that humans should take responsibility for making decisions about AI usage and should be held accountable when there are harmful outcomes. This approach can not only help businesses prevent potential accidents, such as the 2018 incident in which a self-driving car struck a pedestrian, but also increase customer trust in a company's AI technology by showing that there is a clear line of legal and moral responsibility if problems do occur. It is up to the businesses developing and using these AI systems to determine their course, so we should also take responsibility for any detrimental effects and take extra care to ensure they don't occur.
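One concrete way to make that line of responsibility visible is an audit trail that records which person signed off on each AI-assisted decision. The sketch below, again in Python, is only one possible shape for such a record: the file name, field names and example values are all hypothetical.

```python
import json
import time
from pathlib import Path

# Append-only log file; the name is illustrative.
AUDIT_LOG = Path("ai_decision_audit.jsonl")

def record_decision(decision_id: str, model_version: str,
                    approver: str, outcome: str) -> None:
    """Append one record tying an AI-assisted decision to a named human approver."""
    entry = {
        "timestamp": time.time(),
        "decision_id": decision_id,
        "model_version": model_version,
        "approver": approver,  # the accountable human, not the model
        "outcome": outcome,
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: a model-recommended loan denial, approved by a named officer.
record_decision("loan-2023-0042", "credit-model-v3.1",
                approver="j.smith", outcome="denied")
```

A record like this does not prevent mistakes by itself, but it establishes, in advance, who is accountable when a decision is challenged.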
