AI x Human Cognizance: What’s the word?

Author — Naveena Tirumala (LinkedIn)

Should an exceptional AI, one that produces human-like output with striking versatility, be considered a boon or a bane?

Hooman : Let’s talk about love.

Intel Being : Love is important.

Hooman : Why is that?

Intel Being : Love allows people to survive and thrive. It allows families to come together and stay together.

Hooman : Isn’t love just a chemical reaction in the body?

Intel Being : No, it isn’t.

Hooman : Please explain.

Intel Being : Love is a gift from the universe….

Here, the empathetic and wise being in this conversation is OpenAI's latest creation, GPT-3 (Generative Pre-trained Transformer 3), a powerful artificial intelligence that has amazed Silicon Valley technocrats with its natural language processing (NLP) skills. GPT-3 might sound like the name of a droid from Star Wars, but it is really a neural network–powered language model: a computer program that, in layman's terms, ingests a huge amount of text and then "speaks" by guessing which word is most likely to come next. And storytelling is not the only thing it is good at; it is pervading all walks of human life.
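To make the "guess the next word" idea concrete, here is a toy sketch. It is nothing like GPT-3's actual architecture (which uses a huge transformer neural network); it just counts which word follows which in a tiny made-up corpus and predicts the most frequent follower, which is the simplest possible version of the same statistical idea.

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration); GPT-3 trains on vastly more text.
corpus = "love allows people to survive and love allows families to stay together".split()

# Count, for each word, how often each other word follows it.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def predict_next(word):
    """Return the most frequently observed follower of `word`, or None."""
    followers = next_words.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("love"))  # most common word seen after "love"
```

Chaining such predictions word by word is, at a very crude level, how a language model "writes": each guess becomes part of the context for the next one.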

What's the use of such AI, you may ask? It can drive vehicles with little to no human intervention. It can detect cancer cells and help with facial recognition. Meanwhile, policymakers are using it, businesses are deploying it, and it is in commercial use across various sectors. AI is already doing things that once seemed magical!

For instance, SourceAI, a Paris startup with a broader vision for software development, has used GPT-3 to automate a growing array of coding tasks. Ask the tool to multiply two numbers supplied by the user and, poof, it whips up the outline of the code required to perform the task at hand.
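SourceAI's actual output is not shown in the article, but a generated sketch for that prompt might look something like the following (a hypothetical illustration, not the company's real code):

```python
def multiply(a: float, b: float) -> float:
    """Return the product of two numbers."""
    return a * b

def main() -> None:
    # Read two numbers from the user and print their product.
    x = float(input("Enter the first number: "))
    y = float(input("Enter the second number: "))
    print(f"{x} * {y} = {multiply(x, y)}")
```

The striking part is not the code itself, which is trivial, but that a language model can produce it from a plain-English request.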

In other cases, such technology could be deployed in medical chatbots. Nabla, a company specializing in healthcare technology, tested a GPT-3 bot to assess its usability as a source of medical advice. But when it came to the territory of mental health support, the GPT-3-based chatbot advised a fake patient to kill himself when he reported suicidal tendencies.

It remains to be seen, though, how well such artificial intelligence actually addresses real-world issues.

Read more here:

1. NATURE — Robo-writers: the rise and risks of language-generating AI

2. WIRED — Now for AI’s Latest Trick: Writing Computer Code

3. AINEWS — Medical chatbot using OpenAI’s GPT-3 told a fake patient to kill themselves

An interest group at the Indian School of Business, India. We write about technologies and their applications in business.