
Last night I dreamt my phone got hijacked by a mashup of ChatGPT and The New York Times. The home screen was filled with inaccurate news and useless information. No matter how hard I tried, I couldn’t navigate past it.
This reflects my current frustrations with AI writing tools. They have tremendous power to synthesize information, much like The Times does. But they also generate loads of misinformation. I wish they worked better.
I’ve used AI writing tools for months now. I describe them to friends as absorbing huge datasets and spitting out average results. They work okay for very basic documents, but getting quality prose for original pieces is a slog. For example, they do a fair job of drafting an individual donor fundraising letter that I’m going to customize anyway (aside from the exaggerations), but if I want a more original, less-templated document, it takes multiple prompts and edits, often with extremely boring and mediocre results. I keep wishing these tools would improve faster, because they have so much power to synthesize and aggregate, but they are really not there yet when it comes to generating quality prose on the first try.
I recognize these complaints reflect the evolving products, their limitations and costs (the AI tools I use are all free), and my own skills as a user. To improve my inputs, I’ve spent time reading up on AI prompts in the documentation for Claude and ChatGPT, and I also follow articles and newsletters about generative AI, including Marshall Kirkpatrick’s AiTTENTION, Jeremy Caplan’s Wonder Tools, and Ethan Mollick’s One Useful Thing. These are great resources, and my ability to get good results has improved as I’ve learned more about sequencing and refining prompts.
And yet, even as it feels magical, I wish it worked better. There’s a part of me delighted by how fast the AI generates text; I feel like I’ve pulled off a magician’s trick when it spits out an answer in seconds. But then I read the result, and most of the time I’m disappointed. Am I expecting too much? Or are these companies racing to improve so quickly that my goal of efficient AI writing and editing is less than a year away?
I don’t know, but as a committed, curious user, I expect to find out. I remain dedicated to honing my skills with these powerful, ever-evolving tools. But as my dream reflects, I worry about the misinformation such powerful platforms can generate.
P.S. I asked my AI to respond to this post. Here is what Claude told me.