
Technology Does Not Belong to the Technologists

Sam Altman just published a set of principles for OpenAI, in which he asserts, “AI will dwarf what people could do with steam engines or electricity.”

Uh, history would like a word, Sam.

Sam believes that his talkative tool will dwarf powered transportation, powered industry, lighting, electronic communication, amplification, even computation. This is the hubris of the present tense.

What follows in his principles is the kind of sophomoric banality only an LLM could produce.

He speaks of democratization. That occurs through the institutions of government and the vote, not companies. He leaps to the conclusions that he will build the mythical AGI and that it will yield “universal prosperity,” which demands “huge infrastructure” to get there.

And what does this even mean? “While we are quite confident that universal prosperity will remain really important, we can imagine periods in the future where we have to trade off some empowerment for more resilience.” In other words, he’ll hold onto power because he knows best.

I think AI’s amazing. Hell, I cohost two podcasts and I’m editing a book series about it. But for God’s sake, history teaches us that technologies — especially the two Sam so glibly dismisses — bring unforeseen consequences. Gutenberg didn’t foresee the Reformation and couldn’t have controlled it.

Granted, Altman’s bête noire and courtroom opponent, Musk, says even more ridiculous things, namely that saving for retirement is irrelevant because AI is going to create a world of abundance: “It won’t matter.” Yeesh.

A key lesson of technological history that the technologists forget — one I write about in my books — is that once a technology becomes familiar as a tool in many hands, both the technologist and the technology fade into the background, and what matters is what others, the rest of us, make with it.

Technology does not belong to the technologists.

Announcing ‘Intelligence: AI and Humanity’

Bloomsbury Academic is announcing the launch of a new book series: Intelligence: AI and Humanity. I’m humbled, delighted, and honestly amazed to say that I will be the series editor.

Intelligence is a venue for writers from a wide array of fields and areas of expertise to reflect on artificial intelligence as a mirror to society and culture. Books in this series will not be technical — not about artificial intelligence as technology. Instead, they will examine AI’s meaning to our lives and collective humanity. AI’s entrance into public discourse as a literate machine challenges us to reexamine our views of intelligence, creativity, language, learning, authority, humanity. The intended audience is broad, both academic and trade: anyone with an interest in AI and its profound implications for us all.

The first three books and authors we’re announcing represent the range of perspectives we wish to offer. 

  • Dr. Rumman Chowdhury, CEO and cofounder of Humane Intelligence and a pioneer in the field of applied algorithmic ethics, asks the first and fundamental question raised by AI: What is intelligence?
  • Dr. Charlton McIlwain, Vice Provost and Professor of Media, Culture, and Communication at NYU and author of Black Software, will examine whether and how Black Americans could use the opportunity of AI to overcome years of white technological oppression. 
  • Dr. Matthew Kirschenbaum, the Commonwealth Professor of Artificial Intelligence and English at the University of Virginia, warns of the coming Textpocalypse, altering our relationship with text forever. 

I hope to see authors proposing books to reflect on fundamental questions raised by AI and to explore how AI in turn reflects on society, for AI replays to us the collective notions, misapprehensions, clichés, and biases of those who have had the power and privilege to publish in the past. I want to see books that challenge presumptions about AI and power, creativity, education, democracy, sustainability, religion, history, artistry, collaboration, and countless topics I’ve yet to imagine. 

Featuring scholars, public intellectuals, journalists, and professionals, books in Intelligence will be written by authors from many fields — history, psychology, anthropology, sociology, philosophy, communication, community studies, linguistics, literature, religion, classics, economics, law, government, and the arts — and from diverse and global perspectives.

Almost seven decades ago, Sputnik overthrew the humanities in favor of science, technology, and mathematics in American education, policy, and culture. But now that the machine can speak our languages, the CEOs of some AI companies say schools should stop training computer scientists in favor of developing domain expertise. Could this, then, be the revenge of the liberal arts major?

The humanities and social sciences have been largely left out of deliberation about technology and its impact on society. Intelligence will provide them their place at the table, to bring their perspective, expertise, and inquiry to critical discussion of this technology and the opportunities, perils, and questions it presents.

Print required capital to control. Electronics required expertise to operate. AI is different in that its tools are designed for anyone to use. All one needs is human language and a phone or a keyboard or geeky eyeglasses to seek, organize, and query information or to command a computer to create text, image, sound, or code. 

That potential for broad and fast adoption of these tools is why Bloomsbury Academic and I believe this series is needed, providing space for writers to stand apart, to observe, to ask key questions, and most of all to challenge readers to understand and undertake their roles in the future of these technologies and society. 

The series is the brainchild of Haaris Naqvi, Director of Scholarly and Student Publishing for Bloomsbury US and Global Editorial Director of Bloomsbury Academic. Haaris has been the wise, supportive, and patient editor and publisher of three of my own books. One day, Haaris called and asked whether I thought a book series on AI was a good idea — and whether I would like to edit it. Well, of course. We compared our hopes and plans for the series and found ourselves in quick kismet.

So now here we are. We plan to publish three to five books a year, each an independent work through which we hope readers will be led to more books in the series. Prospective authors may submit proposals — emailing intelligencebloomsbury@gmail.com — to be reviewed by us, outside reviewers, and the Bloomsbury board. The decisions will not be mine alone. I will be eager to hear suggestions for both subjects and authors.

We also plan to hold a series of events featuring writers and ideas covered in the series. Watch this space and listen to the AI podcasts I cohost — Intelligent Machines and AI Inside — for announcements and updates.

Rethinking intelligence

Here is an excellent paper that clearly explains the philosophy that guides Yann LeCun’s research in AI and his new company, AMI Labs. It also perfectly expresses my complaints about the trope of artificial general intelligence — AGI, or BS for short.

LeCun et al. reject the idée fixe that obsesses the Promethean dreams of too many of the AI boys: the belief that they nearly have the power to surpass human intelligence in every way; thus, “general.” The paper argues instead that human intelligence itself is not general: each of us is good at some things, incompetent at others.

To set the goal for AI development in anthropomorphic and ultimately hubristic terms is a mistake. How much better, they argue, to build systems that are specialized (as humans are), concentrating scarce resources on efficiently advancing one skill or another, not all. “Given finite energy, an approach that directs available energy towards learning a finite set of tasks will reasonably outperform an approach that distributed the finite energy over an infinite amount of tasks.” Or, in the paper’s pithy conceit: “The AI that folds our proteins should not be the AI that folds our clothes!”
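The arithmetic behind that claim is simple enough to make explicit. Here is a minimal sketch of the intuition in my own notation (E and n are my labels, not the paper’s):

```latex
% Minimal sketch, my notation: fix a finite energy budget E and spread it
% evenly over n tasks; the per-task budget is E/n, which vanishes as the
% task list grows without bound.
E_{\text{per task}} = \frac{E}{n}, \qquad \lim_{n \to \infty} \frac{E}{n} = 0
% A specialist devoting all of E to one task keeps the full budget,
% outspending the even-spread generalist on that task by a factor of n.
```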

LeCun also believes that embracing specialization will enable a system’s creators to limit its function, thus its power, and ensure its safety. The other AI boys think they will create the God machine whose fury even they cannot contain. LeCun has the more mature view that machines, even intelligent ones, are still machines with plugs to pull.

The paper indirectly illuminates LeCun’s devotion to world models over large language models’ text prediction. Or as the company’s homepage puts it: “We share one belief: real intelligence does not start in language. It starts in the world.” LeCun himself pioneered thinking that helped lead to LLMs, but he believes text can take the technology only so far. He aims to build systems that can adapt to reality because they are trained on reality rather than on text as tokens or pixels next to pixels: machines able to train themselves to understand the laws of nature that toddlers and cats discern without language.

The paper is written by LeCun, Judah Goldfeder, Philippe Wyder, and Ravid Shwartz-Ziv.

Demote the doomsters

This paper in Science on “managing extreme AI risks amid rapid progress,” with 25 co-authors (Harari?), is getting quick attention. The paper leans heavily toward AI doom, warning of “an irreversible loss of human control over AI systems” that “could autonomously deploy a variety of weapons, including biological ones,” leading if unchecked to “a large-scale loss of life and the biosphere, and the marginalization or extinction of humanity.”

Deep breath.

Such doomsaying is itself a perilous mix of technological determinism and moral panic. There are real, present-tense risks associated with AI — as the Stochastic Parrots paper by Timnit Gebru, Margaret Mitchell, Emily Bender, and Angelina McMillan-Major carefully laid out — involving bias of input and output, anthropomorphization and fraud (just listen to ChatGPT-4o’s saccharine voice), harm to the human workers who clean data, and harm to the environment. The Science paper, on the other hand, glosses over those current concerns to cry doom.

That doomsaying makes many assumptions.

It concentrates on the technology over the human use of it. Have we learned nothing from the internet? The problems with it have everything to do with human misuse.

It engages in the third-person effect and the hypodermic theory of media, brought to AI: it assumes that AI will have some mystical ability to “gain human trust, acquire resources, and influence key decision-makers.” This is part and parcel of the doomsters’ belief that their machine will be smarter than everybody (except perhaps them). It is condescending and paternalistic in the extreme.

It imagines that technology is the solution to the problems technology poses — its own form of technological determinism — in the belief that systems can be “aligned” with human values.

Now here’s the actual bad news. Any general machine can be misused by any malign actor with ill intent. The pursuit of failsafe guardrails in AI will prove futile, for it is impossible to predict every bad use that anyone could make of a machine that can be asked to do anything. That is to say, it is impossible to build foolproof guardrails against us, for there are too many fools among us. 

AI is, like the printing press, a general machine. Gutenberg could not design movable type to prevent its use in promoting propaganda or witch hunts. The analogy is apt, for at the beginning of any technology, the technologists are held liable — in the case of print, printers were beheaded, behanded, and burned at the stake for what came off their presses. Today, the Science paper and many an AI panelist say that the makers of AI models should be held responsible for everything that could ever be done with them. At best, that further empowers the already rich companies that can afford liability insurance. At worst, it distracts from the real work to be done and the responsibility that also lies with users.

All this is why we must move past discussions of AI led by AI people and instead hear from other disciplines — the humanities and social sciences — which study human beings.

It is becoming impossible to untangle the (male, white, and wealthy) human ego from much of the AI boys’ discussion of AI safety: “See how powerful I am. I am become death and the machine I build will destroy worlds. So invest in me. And let me write the laws that will govern what I do.”

Take the coverage of “safety” at OpenAI. The entire company is filled with true believers in the BS of AGI and so-called x-risk (presumptions also apparently swallowed by the Science paper’s authors). The “safety” team at OpenAI counted among the most fervent believers in doom, but everyone there seems to be in the same cult. They — the humans — are the ones I worry about. Yet in stories about the “safety” team’s departure, reporters take the word “safety” at face value and refuse to do their homework on the faux philosophies of #TESCREAL (Google it) and how they guide the chest-thumping of the doomsters.

The doomsaying in this paper is cloaked in niceties but it is all of a type.

All this is why I wrote my next book (I won’t turn this post into a plug for it) and why I am hoping to develop academic programs that bring other disciplines into this discussion. It is time to demote the geeks and the doomsters.

In the echo chamber

Well, that was surreal. I testified in a hearing about AI and the future of journalism held by the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Here is my written testimony and here’s the Reader’s Digest version in my opening remarks:

It was a privilege and honor to be invited to air my views on technology and the news. I went in knowing I had a role to play, as the odd man out. The other witnesses were lobbyists for the newspaper/magazine and broadcast industries and the CEO of a major magazine company. The staff knew I would present an alternative perspective. My fellow panelists noted before we sat down — nicely — that they disagreed with my written testimony. Job done. There was little opportunity to disagree in the hearing, for one speaks only when spoken to.

What struck me about the experience is not surprising: They call the internet an echo chamber. But, of course, there’s no greater echo chamber than Congress: lobbyists and legislators agreeing with each other about the laws they write and promote together. That’s what I witnessed in the hearing in a few key areas:

Licensing: The industry people and the politicians all took as gospel the idea that AI companies should have to license and pay for every bit of media content they use. 

I disagree. I draw the analogy to what happened when radio started. Newspapers tried everything to keep radio out of news. In the end, to this day, radio rips and reads newspapers, taking in and repurposing information. That’s to the benefit of an informed society.

Why shouldn’t AI have the same right? I ask. Some have objected to my metaphor: Yes, I know, AI is a program and the machine doesn’t read or learn or have rights any more than a broadcast tower can listen and speak and vote. I spoke metaphorically, for if I had instead argued that, say, Google or Meta has a right to read and learn, that would have opened up a whole can of PR worms. The point is obvious, though: If AI creators were required by law to license *everything* they use, that would grant them lesser rights than media — including journalists, who, let’s be clear, read, learn from, and repurpose information from each other and from sources every day.

I think there is a difference between using content to train a model and producing output with it. It is one thing for large language models to be taught the relationship of, say, the words “White” and “House.” That, I say, is fair and transformative use. But it is a fair discussion to separate out questions of proper acquisition and terms of use when an application’s output quotes copyrighted material from behind a paywall. The magazine executive cleverly conflated training and output, saying *any* use required licensing and payment. I believe that sets a dangerous precedent for news media itself.

If licensing and payment are required for all use of all content, then I say the doctrine of fair use could be eviscerated. The senators argued just the opposite, saying that if fair use is expanded, copyright becomes meaningless. We disagree.

JCPA: The so-called Journalism Competition and Preservation Act is a darling of many members of the committee. Like Canada’s disastrous Bill C-18 and Australia’s corrupt News Media Bargaining Code — which the senators and the lobbyists think are wonderful — the JCPA would allow large news organizations (those that earn more than $100,000 a year, leaving out countless small, local enterprises) to sidestep antitrust and gang together and force platforms to “negotiate” for the right to link to their content. It’s legislated blackmail. I didn’t have the chance to say that. Instead, the lobbyists and legislators all agreed how much they love the bill and can’t wait to try again to pass it. 

Section 230: Members of the committee also want to pass legislation to exclude generative AI from the protections of Section 230, which enables public discourse online by protecting platforms from liability for what users say there while also allowing companies to moderate what is said. The chair said no witness in this series of hearings on AI has disagreed. I had the opportunity to say that he has found his first disagreement.

I always worry about attempts to slice away Section 230’s protections like a deli bologna. But more to the point, I tried to explain that there is nuance in deciding where liability should lie. In the beginning of print, printers were held liable — burned, beheaded, and behanded — for what came off their presses; then booksellers were responsible for what they sold; until ultimately authors were held responsible — which, some say, was the birth of the idea of authorship.

When I attended a World Economic Forum AI governance summit, there was much discussion about these questions in relation to AI. Holding the models liable for everything that could be done with them would, in my view, be like blaming the printing press for what is put on and what comes off it. At the event, some said responsibility should lie at the application level. That could be true if, for example, Michael Cohen was misled by Google when it placed Bard next to search, letting him believe it would act like search and giving him bogus case citations instead. I would say that responsibility generally lies with the user, the person who instructs the program to say something bad or who uses the program’s output without checking it, as Cohen did. There is nuance.

Deep fakery: There was also some discussion of the machine being used to fool people and whether, in the example used, Meta should be held responsible and expected to verify and take down a fake video of someone made with AI — or else be sued. As ever, I caution against legislating official truth.  

The most amusing moment in the hearing was when the senator from Tennessee complained that media are liberal and AI is liberal and for proof she said that if one asks ChatGPT to write a poem praising Donald Trump, it will refuse. But it would write a poem praising Joe Biden and she proceeded to read it to me. I said it was bad poetry. (BTW, she’s right: both ChatGPT and Bard won’t sing the praises of Trump but will say nice things about Biden. I’ll leave the discussion about so-called guardrails to another day.)

It was a fascinating experience. I was honored to be included. 

For the sake of contrast, in the morning before the hearing, I called Sven Størmer Thaulow, chief data and technology officer for Schibsted, the much-admired (and properly so) news and media company of Scandinavia. Last summer, Thaulow called for Norwegian media companies to contribute their content freely to make a Norwegian-language large language model. “The response,” the company said, “was overwhelmingly positive.” I wanted to hear more. 

Thaulow explained that they are examining the opportunities for a native-language LLM in two phases: first research, then commercialization. In the research phase now, working with universities, they want to see whether a native model beats an English-language adaptation, and in their benchmark tests, it does. As a media company, Schibsted has also experimented with using generative AI to allow readers to query its database of gadget reviews in conversation, rather than just searching — something I wish US news organizations would do: Instead of complaining about the technology, use it to explore new opportunities.
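For the curious, here is a minimal sketch of that pattern: retrieve the relevant reviews, then hand them to a language model as context for a conversational answer. This is my illustration of the general technique, not Schibsted’s system; the review data, model choice, and prompt are hypothetical, and the sketch assumes the OpenAI Python client with an API key set in the environment.

```python
# Sketch of conversational search over a review database (hypothetical data,
# not Schibsted's implementation). Pattern: retrieve, then generate.
from openai import OpenAI

# Hypothetical review snippets standing in for a real database.
REVIEWS = [
    {"product": "Acme X200 headphones", "text": "Strong noise cancellation, weak battery life."},
    {"product": "Bolt e-bike", "text": "Smooth ride, but the companion app is unreliable."},
]

def retrieve(query: str, k: int = 3) -> list[dict]:
    """Naive keyword overlap; a production system would use a search index or embeddings."""
    words = query.lower().split()
    scored = []
    for r in REVIEWS:
        haystack = (r["product"] + " " + r["text"]).lower()
        score = sum(w in haystack for w in words)
        if score:
            scored.append((score, r))
    return [r for _, r in sorted(scored, key=lambda pair: -pair[0])[:k]]

def answer(query: str) -> str:
    # Hand the retrieved reviews to the model as grounding context.
    context = "\n".join(f"{r['product']}: {r['text']}" for r in retrieve(query))
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model would do
        messages=[
            {"role": "system", "content": "Answer the reader's question using only these reviews:\n" + context},
            {"role": "user", "content": query},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(answer("Which headphones have good noise cancellation?"))
```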

Media companies contribute their content to the research. A national organization is making a blanket deal, and individual companies are free to opt out. Norway being Norway — sane and smart — 90 percent of its books are already digitized, and the project may test whether adding them will improve the model’s performance. If it does, they and the government will deal with compensation then.

All of this is before the commercial phase. When that comes, they will have to grapple with fair shares of value. 

How much more sensible this approach is than what we see in the US, where technology companies and media companies face off, with Capitol Hill as their field of play, each side trying to play the refs there. The AI companies, to my mind, rushed their services to market without sufficient research about impact and harm, misleading users (like hapless Michael Cohen) about their capabilities. Media companies rushed their lobbyists to Congress to cash in the political capital earned through journalism, seeking protectionism and favors from the politicians their journalists are supposed to cover independently. Politicians use legislation to curry favor in turn with powerful and rich industries.

Why can’t we be more like Norway?