Financial allocation models for an AI world

17 Jan 25

Let’s think about some of the unintended consequences of AI and how to deal with them.

In all the questions about what we’re going to do with LLMs, and what kind of hardware we’re going to use, and how we’re going to find the energy to power all these systems, one question often seems to get lost in the shuffle – how are we going to distribute the financial benefits to the parties building or using these systems?

In fact, however, this has come up in recent conferences and talks, and in discussions I’ve had with industry leaders who are sensitive to these kinds of realities. “Cui bono?” (who benefits?) is a time-tested phrase in the legal world, but we must also apply it to our new business world, a world that really doesn’t look much like it did five years ago.

I was listening to Bill Gross speak recently about these issues. He is the founder of Idealab and creator of Knowledge Adventure, an educational software product. He also pioneered technology that was acquired by Lotus … so he has a strong track record in the field.

I’ll break down some of the key ideas Gross discussed about how to make AI gains more equitable, along with the market context at play.

Oxygen is Free: The Value of Digital Goods

Gross mentioned a variation on this theme early on, talking about what happens when the cost of a given commodity approaches zero. For example, he noted that the Internet made the cost of distributing information close to zero. (I would add that digital photography made the cost of an image close to zero, too, and this is a prime example.)

What else? The cloud, Gross suggested, made the cost of storage close to zero. Now, AI is making the cost of knowledge acquisition close to zero as well.

This is even more important, in my opinion, as we’ve seen things like liquid networks and other advances in underlying models significantly lower token costs. (Disclaimer: I have been involved in consulting work that Liquid AI and the CSAIL lab are doing on liquid models.)

We’ll come back to this financial idea in a minute…

Year of the Rat

Gross also shared some thoughts on emerging AI capabilities, based on the evolution of neural networks.

In the beginning, he pointed out, neural networks had only a few layers. This is comparable to the cognitive sophistication of a fruit fly.

Now, he said, we are almost at the “mouse level,” where neural networks have perhaps 100 to 200 layers, roughly equivalent to the cognitive function of a mouse.

However, he noted that even at this scale, AI systems are capable of passing most Turing tests. So we can communicate with them far better than we can with actual mice.

He also suggested that given the rapid pace of technology, AI will soon move beyond the mouse stage, toward perhaps the equivalent of a dog or a horse.

“Look where we’re going,” he said. “Our brain has roughly 1,000 layers, dwarfing what ChatGPT can do today … everything is growing exponentially. We are getting more and more computing power. Nvidia just announced new chips last week at CES in Las Vegas. It’s really, really fast-paced.”

Unintended consequences of change and evolution

After making these points, Gross went on to talk about how technology can have unintended consequences (and often does).

Oil and gas power generation is one of the main examples, and the one Gross dwelled on. It is perhaps the starkest example of how technology brings us new problems. Right now, we’re past the 1.5° benchmark that was part of global climate science, and we’re seeing wildfires raging across California.

Gross identified the oil and gas boom as a prime example of not understanding what the side effects of technologies will be. What are the side effects of AI? We really don’t know yet…

“I tried to put together a list of the consequences of AI in three categories,” Gross said. “I have listed some of the obvious positives: education and cures, innovation, perhaps climate correction, productivity, ease of tasks, leisure. There are many, many positives of AI. I have listed some of the negatives: copyright theft, misinformation, bias … unintended negative consequences, things you can’t even think about, like rogue artificial intelligence, or the pollution that comes from the power required for AI, obviously propaganda…”

He ended up focusing, he said, on the issue of copyright theft.

Revenue sharing models

I thought that perhaps the biggest idea, and the one Gross spent the most time on, was revenue-sharing models.

A major side effect of AI, he suggested, is that companies are simply taking copyrighted information and using it for free. He pointed out how the heads of these giant companies want to be able to scrape data for free to feed their systems. But then he mentioned platforms like YouTube and Spotify that have adopted revenue-sharing models and found that they work much better.

Specifically, with YouTube, he noted that the platform played this kind of “cat and mouse” game of trying to police copyrighted content, but when it started revenue sharing, everything became much easier.

The basic idea is that when you establish cooperative incentives, so that people’s interests are no longer in conflict, you don’t have to enforce an unenforceable set of behavioral constraints.

“It’s absolutely ridiculous that you have to take other people’s work for free to make your business work,” he said. “My dream would be that we have a planet where anyone with intellectual property in their brain, anyone with an idea, anyone with creativity, can create something, record it, and every time it’s used in AI engines, they will receive a check, a royalty check, every month for the content they create. … there are so many opportunities, and it really is the best time in history to make a positive impact on the world, because almost every aspect of life will be touched by AI.”
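The mechanics Gross envisions resemble the pro-rata pooling that streaming platforms already use: a monthly revenue pool, split among creators in proportion to how often their content was used. Here is a minimal sketch of that idea in Python; the function name, the usage counts, and the pool size are all illustrative assumptions, not any real platform's API.

```python
# Hypothetical sketch of a pro-rata revenue-sharing pool, loosely modeled on
# how streaming platforms split royalties. All names and numbers are
# illustrative assumptions, not a real system.

def split_royalty_pool(pool: float, usage_counts: dict[str, int]) -> dict[str, float]:
    """Split a monthly revenue pool among creators, pro rata by how
    often each creator's content was used by the AI engine."""
    total_uses = sum(usage_counts.values())
    if total_uses == 0:
        # No usage this month: nobody receives a check.
        return {creator: 0.0 for creator in usage_counts}
    return {
        creator: pool * uses / total_uses
        for creator, uses in usage_counts.items()
    }

# Example: a $1,000 monthly pool split among three creators whose work
# was used 600, 300, and 100 times, respectively.
checks = split_royalty_pool(1000.0, {"alice": 600, "bob": 300, "carol": 100})
```

The hard part in practice is not the arithmetic but the metering: attributing each AI output to the training content it drew on, which is an open research problem.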

It was an eye opener. Personally, I don’t think we’ve learned this lesson in AI, and we probably haven’t in ordinary life either. Capitalism and social interaction tend to be the same kind of cat and mouse games, or, to pick another animal analogy, a dog-eat-dog world.

But it doesn’t have to be, and inspired by perspectives like Gross’s, perhaps we can work to make AI have a better impact on what we do together.
