I think that doing any fundamental AI research or development right now is harmful, and not even because of the existential risks of AI itself. The reason is simple and, to me, obvious: AI is a general-purpose accelerator. It is truly remarkable how general-purpose it is, yes, but conceptually, at societal scale, all it does is make the current system go faster. If the current system were headed toward sustainable happiness and prosperity for humanity, I would be all for AI progress; but the current system is headed toward multiple kinds of collapse at once. Pick any mode of collapse you can think of and look into the systems driving it: every one of them is now full of people doing their jobs faster thanks to AI. It's like strapping a jet engine to your bike while you're still learning how to ride. We have to redirect the system, and making it go faster leaves us less time to do that.

Obviously that doesn't mean we shouldn't use AI for good things. Of course we should; it would be ridiculous if we only allowed bad things to be accelerated. But we shouldn't fool ourselves that, just because it can be used for good, it is an overall good thing right now.

I'm fully aware that I'm shouting into the void. If I were working at Google or OpenAI right now, or seriously considering it as a career, I would be incapable of hearing this message, for obvious reasons. So the purpose of this post is to get it out of my system rather than to change anyone's career. But I do hope it changes someone's career, too.

What should you do instead if you still want to be "in AI"? Work on specific beneficial applications, not on the fundamental technology.
I'm struggling with this right now. I work in a marketing capacity in the data market, where AI readiness is a major narrative we're expected to push, but there is acute cognitive dissonance between this and the sustainability initiatives that big tech companies also supposedly care about. It's getting harder to keep quiet about it. How much harm am I perpetuating?

It's a complicated situation. I don't, for instance, believe everyone at these companies only cares about the bottom line with no regard for negative externalities, but I do believe there are enough people, especially higher up the corporate ladder, who hold the view that profit comes before sustainability. With rare exceptions, companies are not democratic, and it seems many people who actually care about this have little to no say in the matter without fear of retribution.

It has never been a goal of mine to work in AI, but this is a way to make a decent living and it's what I could get. I have unfortunately had to abandon or pause pursuits in other fields, simply because continuing them would likely have meant defaulting on my student loans or losing my house. It's a real catch-22, and it's increasingly hard to see a way out of it through conventional means.
This is very insightful! You've framed your fear as AI accelerating us toward collapse. Do you believe there is any possibility AI could accelerate not-collapse? As a futurist, I know there is data showing we've been on track for collapse since the 1960s; just look at the Club of Rome and Limits to Growth. Knowing this information has changed nothing, and I might argue the internet accelerated these collapse trends. Has it? Would disinformation, hackers, identity theft, money laundering, and crypto scams have been a good reason to stop developing the internet? As your post provoked my thinking, I would like to provoke yours: away from a fear-based focus on "imminent collapse" and toward other possibilities that might have more power to emerge than is apparent from your point of view. If that's not possible, what's the point of even living, taking up valuable resources on this beautiful planet?
My long-form thoughts, which were not generated by AI (hehe): in my nonprofit work, various folks are constantly mentioning how ChatGPT and AI could help with this or that, or make tasks easier and quicker. As an ex software engineering manager, and as someone who didn't have an email address until I was in college, I have big issues with AI as the "latest shiny new thing" and with how so many people think it is awesome while looking past the long list of problems. (Same with "tech will solve everything" ideas like self-driving cars.) Is "bigger, easier, faster, more computing power" better overall for our world? If used mostly for sustainability issues, then (maybe) great, but so much of it probably can and will harm our world, and lead humans to think less deeply and meaningfully about their work and calling. I'd prefer more human products and thought, less time on computers, more real connection, and long-term solutions.
People often say that it is just that hard to understand the world. I disagree. If we look closely enough, it's really not that hard to understand the world. AI is a fitting example.

1. The big picture: we need to rename AI from "Artificial Intelligence" to something like "Applied Informatics" or "Aggregated Input". There is nothing artificial about intelligence; even the majority of humanity is incapable of original thought. Don't try to sell me on machines. This marketing must change.

2. Applications: there are worthwhile applications of machine learning (for example, check out the Queen of Hearts app for studying ECGs) and of evolved human-computer interaction. I can get behind these. But at the big-picture level we must challenge the optics and narrative of the "all-hail-AI" conversation.

3. Have you all read Sam Altman's latest declaration, "The Intelligence Age"? It's a blinding commentary. We are all being sold on a dream of "a better world." The right question is: a better world for whom? (URL: https://ia.samaltman.com/)

4. There are no guardrails on AI, and every effort is being made to keep it that way. Full story: https://www.newyorker.com/magazine/2024/10/14/silicon-valley-the-new-lobbying-monster
I think "fundamental research/development" can easily be misread. Is research in self-driving "fundamental"? Yet I think it will lead to the increase of ride sharing, cargo fleet operations efficiency, and therefore reduced footprint. Or, you would counter that with - Jevons paradox will counter-act that? At least for personal transportation, probably not because people probably don't want to spend in cars many more hours every day than they do already. And for cargo/freight, its price is already probably small enough component in the price of consumer delivery that driving it to zero would not cause disproportional increase in demand.
OK, I thought about this more deeply and came to a tentative conclusion: general AI capabilities that differentially favor obtaining and leveraging "interconnected knowledge" are net beneficial. These capabilities include federated learning and inference, privacy-preserving computation, knowledge graph mining and hybrid reasoning on KGs and multi-modal data, semantic and structured search, cross-language retrieval and integration (esp. for low-online-presence languages), and more (a toy sketch of one of these follows below). Details here: https://engineeringideas.substack.com/p/differential-knowledge-interconnection
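To make one of those capabilities concrete, here is a toy sketch of federated averaging (FedAvg): clients train locally and share only model weights, so raw data never leaves each site. The linear model and datasets are hypothetical, and a real deployment would add secure aggregation and weighting by dataset size:

```python
# Toy federated averaging (FedAvg): knowledge is pooled without pooling data.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=20):
    """A few steps of local gradient descent on one client's private data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three clients, each with a private dataset that is never centralized.
true_w = np.array([1.0, -2.0, 0.5])  # hypothetical ground truth
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(3)
for _ in range(10):
    # Each client refines the global model on its own data...
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # ...and the server averages the weights, never seeing the data.
    global_w = np.mean(local_ws, axis=0)

print("recovered weights:", np.round(global_w, 2))  # ~ [ 1.0, -2.0, 0.5]
```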
Super interesting perspective. We can only hope that destructive behaviors and industries are slower to adopt the new tools, while beneficial work like climate tech is disproportionately accelerated. I'm hopeful this is the case, given my experience going through Y Combinator and the brilliant peer founders who were building AI applications to tackle climate change (and the lack of people building things that directly make it worse).
Agree 100%. The missing intelligence component of a sustainable human enterprise has been, and remains, ecological intelligence. AI is poised to extend the most unsustainable aspects of the human enterprise while relentlessly proclaiming its indispensability along the way. Any perceived social benefits will be incidental to its overall destructive consequences. File under "progress trap": https://en.wikipedia.org/wiki/Progress_trap. AI is now an insidious presence in most apps and websites, constantly enticing new users into its energy-intensive enclosure. A good read to reinforce and expand your skepticism: Jonathan Crary's Scorched Earth: https://www.versobooks.com/en-ca/products/214-scorched-earth
There are people out there who have been saying this for a long time; you just have to find them through the noise (grift) 😊
Making sustainability part of everybody's job
Sharing a good resource for people who want a more nuanced understanding of the environmental impacts of AI. In this first wave, we are myopically focused on the resources required to build (steel, concrete, silicon, etc.) and run (land, water, ENERGY) these massive systems. Fair enough: that footprint is staggering and projected to grow at a truly *unsustainable* rate. However, the application side (what the tech is actually being used for, and those environmental impacts) is orders of magnitude larger. There is no clearer example than watching oil and gas companies leverage the tech to keep their business model indefinitely profitable. We are quibbling over the emissions from a butane lighter that's being used to start a global forest fire. For now, the positive sustainability benefits of AI remain largely speculative, while the real-world impacts are intentionally obscured and ignored by the companies building it (and making billions doing so). http://bit.ly/eeatlantic