"What if we used AI to think more, not less?"
I hear this line a lot these days in conversations around AI and education, AI and governance, AI and art, AI and business.
This is the hope many have. It's the question around which so many engineers, designers, investors, and entrepreneurs have anchored themselves. Countless apps, wrappers, master prompts, and projects seek to answer this question and grapple with its implications.
For me, more questions always arise in response:
Are narrow or broad systems better suited to harnessing this technology?
Does this kind of tool shape the direction of our thinking in unduly technological, efficiency-focused ways?
Will there be "broccoli" and "junk food" versions of our models, creating a wall between the haves and the have-nots, where some can pay for these tools while others can only play with them?
And, most importantly, is it even possible to use AI in a way that allows for more thinking? Isn't the base concept offloading intelligence to another system? Don't these systems' goals, their designs, and our own intentions all coalesce to work against thought, even for the most well-intentioned user?
It seems to me that a significant portion of the dialogue surrounding these innovations leaves out key questions. As one author put it, "questions shine a light on the work to be done. [They] bring tomorrow forward to today, right here and right now, allowing us to articulate a strategy," a plan to go out into a new era and create flourishing where there was only failure.