In our first installment, we’re taking stock of the sensational rise of this new technology: its origins and recent explosion into public awareness, the record-breaking deals funding its ascent, and the regulation that is (or isn’t) governing the space. Plus, read on for our predictions about what comes next.
With 25 billion+ pieces of synthetic content online, it’s clear that generative AI makes content easier than ever to create – even as it produces new obstacles to understanding where an image or a fragment of text is coming from – and whether it’s real or fake.
You might have heard about the challenges creators face in tracing the use of their work by others – and in exercising control over what it is transformed into by AI. That’s not just a problem for the artists, writers, and designers among us; it has implications for everyone. How will we determine what’s real and what’s fake in a world where attribution is impossible? Who – and what – will we trust to authenticate the words, images, sounds, and videos that AI models spit out into the digital ether?
Our reports will span the generative AI space, but we’ll return to questions of authentication from one quarter to the next as we grapple with the future of content as we know it.
“Can machines think?” So mused Alan Turing in the opening of his 1950 paper introducing “the imitation game” now known as the Turing test. More than 70 years since the publication of that groundbreaking work on artificial intelligence, people are still debating the answer. What’s beyond question is that AI has extended our ability to analyze the world around us in the decades since.
AI helps us detect patterns in everyday life, predicting when our packages will arrive or suggesting which video we should watch next on YouTube. This traditional or “analytical” AI supplements our ability to assess things that already exist, but not our capacity to create new content.
The development of latent diffusion – the technology behind many of today’s image generators – flipped the script. Latent diffusion models are machine learning models that learn to compress a dataset’s underlying structure into a much smaller representation known as “latent space” (a 512 x 512 image, for instance, is squeezed into a far smaller grid of numbers). The models then learn to generate new samples in that compressed space and decode them back up to full scale, producing a result we can recognize – a new image, text, or sound that captures the relevant style.
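As a purely illustrative sketch (none of this code comes from the report, and all names are our own), the “forward” noising process that diffusion models learn to reverse can be written in a few lines of Python. In latent diffusion, the array being noised would be the compressed latent version of an image rather than its raw pixels:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_sample(x0, t, num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Return (x_t, alpha_bar): the signal x0 after t steps of Gaussian
    noising, plus the fraction of the original signal still retained."""
    betas = np.linspace(beta_start, beta_end, num_steps)
    alpha_bar = np.prod(1.0 - betas[:t])           # cumulative signal retention
    eps = rng.standard_normal(x0.shape)            # fresh Gaussian noise
    x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return x_t, alpha_bar

x0 = rng.standard_normal((8, 8))                   # stand-in for a latent "image"
x_early, keep_early = noisy_sample(x0, t=10)       # mostly still the original
x_late, keep_late = noisy_sample(x0, t=900)        # almost pure noise
```

A trained model runs this movie in reverse: starting from pure noise, it denoises step by step in latent space, and a decoder turns the final latent array into a full-resolution image.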
In practice, what this means is that a user can enter a plain text prompt, such as “A picture of a horse,” and a picture of a horse is generated.
Now, dozens of companies offer tools capable of creating new, never-before-seen text, images, audio, and video that rival art created by humans in beauty and meaning.
Many creators see this as an existential threat. “Art is dead, dude. It’s over. A.I. won,” one creator told the New York Times this September. The concern is familiar.
Fears of new creative modes making old, human-dependent ones obsolete have a long history. In 1859, poet and art critic Charles Baudelaire fretted about a new way to capture images, writing, “If photography is allowed to supplement art in some of its functions, it will soon have supplanted or corrupted it altogether.”
More recently, musicians saw themselves pitted against nameless, faceless “pirates” making their songs available online to the masses for free.
But of course, today painters still paint, and musicians still make music. Why? Because the creators and the technologists — and their lawyers — were able to hammer out a middle ground. For generative AI, the key is authentication…
The reasons are manifold: people will want to quickly generate content in their own style, for their own use or enjoyment.
Think, for instance, of being able to generate a new song in the style of your favorite bands, on-demand; or writing an entire paper in your voice by typing a few prompt phrases; or a synthetic version of you answering questions at work — and you’ve approved of all of this. The list goes on.
We need quick, easy, and transparent ways to determine the lineage of content and track derivatives so that everyone can be fairly compensated and credited for their work.
At Vermillio, we call these solutions “Authenticated AI.” Major generative AI platforms don’t yet afford users the ability to determine who has created what or where it came from. We believe this level of verification is crucial not only for establishing ownership but also for determining what content is real and what’s fake. The lineage of everything created on our platform is traced on blockchain, enabling – for the very first time – AI-generated images and all their derivatives to be properly attributed and owned. All creations carry a digital signature and a verifiable “public receipt.”
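To make the idea of a “public receipt” concrete, here is a minimal, purely illustrative sketch – not Vermillio’s actual implementation, and every function and field name here is invented for the example. The core ingredients are a content-addressed fingerprint and a link back to the parent work:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Content-addressed ID: the SHA-256 digest of the raw bytes."""
    return hashlib.sha256(content).hexdigest()

def receipt(content: bytes, creator: str, parent_id=None) -> dict:
    """A minimal 'public receipt' linking a creation to its source work.
    parent_id is the fingerprint of the work this one derives from."""
    return {
        "content_id": fingerprint(content),
        "creator": creator,
        "derived_from": parent_id,
    }

original = receipt(b"original artwork bytes", creator="alice")
remix = receipt(b"AI-generated derivative bytes", creator="bob",
                parent_id=original["content_id"])

# Anyone holding the derivative's bytes can recompute its content_id and
# follow derived_from back to the original creator.
```

In a production system these receipts would be signed and anchored to a public ledger so the chain of attribution can’t be quietly rewritten; the sketch shows only the lineage-tracing idea itself.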
Generative AI holds enormous potential to galvanize human creativity. It can be used for everything from image enhancements – people jazzing up their selfies – to the generation of wholly new images – for instance, advertising agencies using text-to-image generation to speed up and enrich the storyboarding process.
Firms are racing to invest in the more than 160 startups launched in this space this year alone. To date, nearly 500 startups have entered the generative AI space.
In the last 12 months, generative AI has swapped places with cryptocurrency in search interest, according to Google Trends. At the CES trade show in January 2023, more than 550 exhibitors were listed in the show’s “Artificial Intelligence” category – more than double the “Metaverse” (176), “Cryptocurrency” (19), and “Blockchain” (55) categories combined. And, according to what we view as a lowball estimate from Grand View Research, the generative AI market is expected to be worth $109.37 billion by 2030.
Perhaps there was no surer sign that generative AI had broken through to the mainstream, though, than the backlash that came alongside general amazement at the technology’s capabilities. In the latter half of 2022, artists, educators, and corporate IP holders like Getty Images took their concerns to the press — and in the case of Getty and several creators, to the courts — meaning that those previously unacquainted with generative AI likely had a negative first impression of the innovation.
No, AI will not replace you, but it will completely change the world.
Nvidia CEO Jensen Huang summed up what comes next: “Over the course of the next 10 years… I believe we’re going to accelerate AI by another million times.” Huang’s right, but he doesn’t go far enough – over the course of the next 10 years, we believe we’re going to accelerate AI by another billion times. We are experiencing a societal shift in which the concept of creation is moving away from that of physical goods to one of digital goods. From Roblox’s undeniable popularity to Epic’s recent announcement of their new digital marketplace, Fab, the creation and ownership of digital goods is becoming overwhelmingly important.
Just a few years ago, generative AI struggled to gain credibility among investors.
Fast-forward to 2022. As the crypto winter set in, investors looking for the next big thing flocked to generative AI after a series of attention-grabbing releases.
According to CB Insights, last year saw a record $2.6 billion of equity investment in generative AI startups across 110 deals, up from $271 million in 2020. Another source estimates the total enterprise value of the generative AI space to be about $48 billion.
Venture capital, of course, played a major role, with investors comparing the emergence of generative AI to the dawn of mobile technology or the internet itself. A publication from Sequoia Capital stated that generative AI had “the potential to generate trillions of dollars of economic value.”
But it’s not just VC firms leading the way – perhaps the most important deals to keep an eye on are the blockbuster acquisitions and partnerships by established tech giants, most notably Microsoft’s extended, $10 billion deal with OpenAI.
And the money can flow out just as quickly as it can flow in. Google, for example, saw a $100 billion decrease in its valuation after its generative AI chatbot, Bard, came up with the wrong answer to a question in an ad.
The rush to pour money into generative AI is reminiscent, of course, of tech investing’s most famous cautionary tale: the dot-com bubble.
Clearly, we’re still in the proliferation phase, with new companies popping up and investors eager to fund them. But this time, things are different. AI touches — or will soon touch — all aspects of human life, because, as researchers like Ajay Agrawal have pointed out, prediction — AI’s fundamental strength — is involved in every decision we make.
In the meantime, financiers would do well to remember that bandwagons seldom offer a free ride; in the end, the cream will rise to the top, and those with anything less will pay the price.
These days, conversations about the future of work usually feature raised voices lamenting the job-displacing effects of generative AI. AI’s critics point to the technology’s uncanny capacity for producing human-sounding text; its ability to mimic the styles of human artists; and its skillful manipulation of old music into new sounds as omens of the coming revolution in creative and white-collar work.
Even proponents are predicting a major revolution in how we allocate work. Microsoft CEO Satya Nadella framed it as a question of support, not replacement. “I see these technologies acting as a co-pilot, helping people do more with less,” he explained.
Anxieties about robots coming for our jobs are hardly new. In the early 19th century, Luddites destroyed machinery in textile factories across England. More recently, Elon Musk predicted that we will one day “exceed a one-to-one ratio of humanoid robots to humans.”
It’s not that these fears are overblown (many Luddites really were made permanently redundant) but that they fail to account for AI’s upsides.
Taking a global view, it’s clear that the technology has the potential to radically level the playing field. The average AI salary is about $75,000 per year in the US but only $19,000 in Kenya. That may change as users without formal programming training are newly equipped to imagine and create entire worlds, from films to video games.
One thing is clear: as generative AI suffuses the global economy we will need reliable ways of authenticating its products.
Last month, CNET published over 70 AI-generated articles rife with inaccuracies, including one story that gave readers the wrong formula for calculating compound interest. Despite the oversight, the publisher laid off about 10% of its writing staff a week later to focus on its “tech stack.”
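For reference, and to underline how basic the botched calculation was, the standard compound interest formula is A = P(1 + r/n)^(nt), where P is the principal, r the annual rate, n the compounding periods per year, and t the years. A minimal sketch (the function name is ours, not from any cited article):

```python
def compound_value(principal, annual_rate, years, periods_per_year=1):
    """Future value with compound interest: A = P * (1 + r/n)**(n*t)."""
    n = periods_per_year
    return principal * (1 + annual_rate / n) ** (n * years)

# $10,000 at 3% compounded annually grows to $10,300 after one year,
# i.e. $300 of interest -- interest earned, not the account balance.
interest = compound_value(10_000, 0.03, 1) - 10_000
```

Confusing the final balance with the interest earned is exactly the kind of slip that reads plausibly in AI-generated copy but fails the moment a reader does the arithmetic.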
The real question remains: how long will it be before generative AI systems get good enough that we can fully trust their outputs without the risk of “hallucinations”? It could happen before the next installment of this report.
In the first quarter of 2023, three lawsuits — one filed by a group of artists, two by media giant Getty Images — made the first attempt at putting the brakes on generative AI. The plaintiffs allege that the tech platforms named in the suits infringe upon copyrights by using creative works to train image-generation algorithms without permission. Generative AI companies, on the other hand, view their use of these datasets as protected by the fair use doctrine and see the suits as hurdles to innovation.
At Vermillio, we’ve spent the past three years having conversations with people on all sides of this debate. What we’ve learned is that balancing the interests of content owners big and small with those of generative AI firms is not only possible — but essential.
Creators should be fairly credited and compensated for their work in the digital world — period.
Recent history has shown that this can be done; consider the series of legal disputes that set a new status quo for the digital music industry in the early 2000s. Though the current state of affairs is not perfect, music piracy was curtailed, and everyone is better for it: streaming platforms pay artists, and music is widely available to listeners at a reasonable cost.
As the legal framework develops, creators and tech platforms will need mechanisms to ensure seamless compliance with laws and ethical standards. In the US, lawmakers have begun to call for regulation at the state and federal level and you can expect to see local legislatures move fast to establish norms protecting a range of uses, including in governance and government intelligence.
Meanwhile, a Supreme Court decision against the use of training data could spell the end not just of AI as a tool for governance but of AI as a driver of everything from your Netflix recommendations to ChatGPT. (We’re already staring down versions of this outcome, with the European Parliament set to vote on new risk-based regulations and Italy having issued a temporary ban on ChatGPT over concerns that the tool could violate the country’s standards on data collection.)
Expect to see even more dots on our Eye on AI chart for next quarter. New launches in the space will continue for the foreseeable future — but at some point the rubber will have to meet the road.
The birth of the internet again provides a useful analogy: in its early days, there were many more web browsers than the handful we see and use today.
So, in generative AI, what will separate the winners from the losers?
Though much remains to be seen, two things are abundantly clear: one, without a way to authenticate AI-generated creations, the adoption of this technology will hit roadblocks – roadblocks that will stifle the creativity generative AI can enhance, not replace.
Two, the appetite for using generative AI to make creations based on the most beloved IP in the world — film and TV franchises, book series, and artists — will only grow.
The ways in which this technology can create new experiences for fans to engage with their favorite content are endless. Imagine if six-year-olds could create new characters with AI that had been trained on their favorite superheroes. Think of a child who could not dance, generating a synthetic version of themself that dances to songs from their favorite movie.
The whirlwind of attention that generative AI has attracted over the past year has given the technology no shortage of both fans and detractors. How tech platforms retain the former and convert the latter hinges on their ability to build trust among everyone involved. The need for solutions that trace, track, and verify the lineage of content and its derivatives cannot be emphasized enough.
Vermillio is the first – and only – generative AI platform built to empower creators and protect their work.
Using our platform, artists, studios, and intellectual property owners can track and authenticate generative AI as well as the use of their work by others in order to receive fair credit and compensation.
Exercising control over their IP even as it is transformed by AI, creators large and small can engage directly with their fans, quickly and easily bringing new and immersive experiences to life in the style of the world’s most beloved IP.