Generative artificial intelligence (AI) has become a buzzword this year, capturing the public’s fancy and sparking a race between Microsoft and Alphabet to launch products built on technology they believe will change the nature of work.
Here is everything you need to know about this technology.
What is Generative AI?
Like other forms of AI, generative AI learns how to take actions from past data. But instead of simply categorising or identifying data as other AI does, it uses that training to create brand-new content: text, images, even computer code.
The most famous generative AI application is ChatGPT, a chatbot that Microsoft-backed OpenAI released late last year. The AI powering it is known as a large language model because it takes in a text prompt and from that writes a human-like response.
GPT-4, a newer model that OpenAI announced this week, is “multimodal” because it can take in not only text but images as well. OpenAI’s president demonstrated on Tuesday how it could take a photo of a hand-drawn mock-up for a website he wanted to build and, from that, generate a real one.
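The core idea of “learning from past data, then generating new content” can be illustrated in deliberately tiny form with a word-level Markov chain: a lookup table built from example text that is then sampled to produce new sequences. This is a sketch for intuition only; real large language models are neural networks with billions of parameters, not lookup tables, and the corpus here is a made-up toy.

```python
import random
from collections import defaultdict

def train(text):
    """Build a table mapping each word to the words observed to follow it."""
    words = text.split()
    table = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        table[current].append(nxt)
    return table

def generate(table, start, length=8, seed=0):
    """Produce new text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break  # no known continuation for this word
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
table = train(corpus)
print(generate(table, "the"))
```

The output is a new word sequence that never appeared verbatim in the training text, yet every adjacent word pair did; large language models achieve a vastly more sophisticated version of the same effect.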
What are the benefits of Generative AI?
Demonstrations aside, businesses are already putting generative AI to work.
The technology is helpful for creating a first draft of marketing copy, for instance, though it may require cleanup because it isn’t perfect. CarMax Inc, for example, has used a version of OpenAI’s technology to summarise thousands of customer reviews, helping shoppers decide which used car to buy.
Generative AI likewise can take notes during a virtual meeting. It can draft and personalise emails, and it can create slide presentations. Microsoft Corp and Alphabet Inc’s Google each demonstrated these features in product announcements this week.
What are the problems with it?
There are several, starting with concern about the technology’s potential for abuse.
School systems have fretted about students turning in AI-drafted essays, undermining the hard work required for them to learn. Cybersecurity researchers have also expressed concern that generative AI could allow bad actors, even governments, to produce far more disinformation than before.
At the same time, the technology itself is prone to making mistakes. Factual inaccuracies delivered with confidence, known as “hallucinations,” and erratic responses, such as professing love to a user, are among the reasons companies have sought to test the technology before making it widely available.
Are Google and Microsoft the only ones in this race?
Those two companies are at the forefront of research and investment in large language models, and they are the biggest to put generative AI into widely used software such as Gmail and Microsoft Word. But they are not alone.
Large companies like Salesforce Inc as well as smaller ones like Adept AI Labs are either creating their own competing AI or packaging technology from others to give users new powers through software.
Where does Elon Musk fit in?
He was one of the co-founders of OpenAI along with Sam Altman. But the billionaire left the startup’s board in 2018 to avoid a conflict of interest between OpenAI’s work and the AI research being done by Tesla Inc, the electric-vehicle maker he leads.
Musk has expressed concerns about the future of AI and has called for a regulatory authority to ensure the technology’s development serves the public interest.
“It’s quite a dangerous technology. I fear I may have done some things to accelerate it,” he said towards the end of Tesla Inc’s Investor Day event earlier this month.
“Tesla’s doing good things in AI, I don’t know, this one stresses me out, not sure what more to say about it.”