Peter Slattery, PhD’s Post

Peter Slattery, PhD is an Influencer

Lead at the MIT AI Risk Repository | MIT FutureTech

What is the most important thing that 90% of people are missing about #ChatGPT?

Everyone's talking about ChatGPT and claiming that 90% of us are missing out on the #AI revolution, posting about the 10, 30, or 100 top AI tools that we NEED to know about. Like my attachment (which is just a hook for this message, sorry!). These AI tools are cool, but they are not really important.

What 90% of people are missing, and what is VERY IMPORTANT, is that AI is coming much faster than nearly anyone expected, and that far too few people are working on how to safely develop and deploy it.

If we develop and deploy AI right, we may have much better lives than we could ever have imagined. If we get it a little wrong, it could seriously damage society and marginalise millions of people (amongst other problems). If we get it very wrong, it could be truly terrible: an actual catastrophe.

Unfortunately, I am not at all certain that we will get it right. Surveys of machine learning experts show that they estimate a 1-in-10 chance that AI has bad outcomes and a 1-in-20 to 1-in-50 chance that it has terrible outcomes (1).

With this and similar arguments in mind, many of the smartest people I know have changed careers to focus on reducing risks from AI, out of concern about how badly, or even terribly, it could go. They could all be wrong, but at the very least I recommend learning about what they think and why.

To learn more about AI and its risks, please check out 'The case for taking AI seriously as a threat to humanity' by Kelsey Piper at Vox.

💬 What do you think? Do you agree that the safe development and deployment of AI is important and under-resourced? Please share any relevant experiences, evidence, or research. Please also let me know if you are interested in working in this area and I can connect you to resources 🙏

(1) Zhang, Baobao, et al. "Forecasting AI Progress: Evidence from a Survey of Machine Learning Researchers." arXiv preprint arXiv:2206.04132 (2022).

Renato Lopez

Research Assistant | Data Analysis | Behavioural Analysis | Cognitive Sciences #science #innovation

1y

Hello! Very interesting post. My concern with AI is that it is developing very quickly and does not leave a sufficient window of time to adequately train people in the use of this new technology. I was just interacting with the ChatGPT platform for a post I published today about its uses in psychology. I don't think it is close to replacing humans, but it may lead to bad results if critical thinking and other higher-order skills are not strengthened in university education, since the sources of the information that feeds the AI are, as far as I am aware, not easy to access; we are left either to trust the output or to analyse it critically if we go one step further. For example, I asked it to retrieve some references and then Googled them, but I couldn't find the sources; and when I asked it to generate items for scales, it wasn't clear whether they were an original production or based on specific sources. Lastly, I wonder what role psychology and the behavioural sciences will have in this revolution.

Kirsten Bradley

Founder @ ShuffleStuff | Startup Community Coordinator | Economics + Sustainability + Data

1y

Great post Peter. Thanks for a more nuanced perspective than the hype that comes from my data science feed! I'm overwhelmingly positive about the potential of AI, but that enthusiasm is balanced by the knowledge that there *will* be unintended consequences, and the range of possibilities is vast. Something I've been pondering in relation to AI is the impact it will have on the proliferation of mis- and dis-information. LLMs like ChatGPT can slip past our critical-thinking defences, because they give incorrect answers with so much confidence that they are believable. This effect compounds when paired with a population that (in my experience at least) doesn't understand the limitations of the technology. I've had some great chats about AI with people outside of tech/data/behavioural science circles, and my overwhelming takeaway from those is that the general public trusts AI far more than is justified. Instead of challenging AI output and cross-checking facts, the people I've spoken with assume the AI output is accurate and accept it at face value. When we add into the mix the potential for bad actors to deploy this technology to serve their own ends, we may end up in a real mess.

Richard Kickbush

Video | Virtual Reality | Behavioural Science | Messaging for Behaviour Change

1y

Interesting that the entire gamut of future AI is being reduced to a binary good/bad impression. Optimism bias also seems prevalent in the term ‘net positive’, where the holder of this belief must assume they themselves won’t be among those disproportionately negatively impacted; that a ‘net positive’ for humanity necessarily equates to a ‘net positive’ in their own life.

Di Rifai

Shaping Tech Ethics in Investment

1y

Hi Peter Slattery, PhD, would be delighted to connect. We have brought together a group of investors/shareholders who are looking to engage with companies in their portfolios, both those developing AI and those utilising it, to ensure that they are doing so responsibly. We need to come at this from many different angles, and we’re working to ensure that the owners of companies are letting their investees know that this is a very important topic to them.

James Teague

Serial Entrepreneur. Founder of My Everything Store, Online Medical Supply Canada, and Canadian Medical Marketplace.

1y

Did you use ChatGPT to write this? Kidding, of course. I see where you are headed here: we are going straight into a full-blown Skynet scenario. The only way that's going to happen is if we do two things: put AI in robots (so it can physically move) and release it online. AI has been used in operating rooms with da Vinci systems for some time, helping save lives. Yes, it's going to take jobs (sorry, copywriters), but it's also going to do some good. Now, are we going to put it in robots and release it online? Of course we are; we are human, and we are inherently stupid beings in a lot of ways.

Annet Hoek, PhD

Nurturing Research Programs and Professionals for Impactful Change ◆ Human Insights ◆ Coach / Mentor ◆ Facilitator ◆ Consultant ◆ Board Director

1y

You had me hooked with your attachment Peter Slattery, PhD :-) I am overall net positive about AI (similar to the survey results in the paper), and grateful for people like yourself who also consider and work on safe implementation of AI. Since the field is evolving quickly, it would be valuable to see recent survey data, as the data in the paper is from a couple of years ago. I'm sure you are working on a similar project!?! Thanks for keeping us up to date with your AI-related posts 🙏

Paolo C.

Senior Cybersecurity Strategic Advisor @ BARE Cybersecurity | Startup Fractional CISO | vCISO | Founder, CTO | Passionately developing teams and organizations @ BARE Elevate.

1y

I wonder what happens to the files that you submit. Do they keep them?

Christiaan Lustig

Intranet and digital workplace consultant • author of Digital employee experience • internal digital communications, services, and collaboration • speaker

1y

Is there a tool that removes all caps from text? 😄

Dr. Cornelia C. Walther

ProSocial AI. Founding Director POZE@ezop. Wharton Fellow.

1y

Fascinating, Peter. Both in terms of the risks, and the unfulfilled potential of using it for a radically different, inclusive society, where maximum quality of life is shared by a maximum of people. Aspirational Algorithms https://www.youtube.com/watch?v=WTuybMlE7J0 are one way to make this happen. Happy to discuss! https://ideas.repec.org/a/spr/ariqol/v17y2022i5d10.1007_s11482-022-10060-0.html
