What AI Can and Cannot Do
Artificial intelligence has become a catchall term for very different kinds of systems. Treating all of them as if they were the same creates confusion from the start. A tool that writes text, a system that recommends a route, and software that claims to judge a person’s future are not doing the same kind of work, and they should not be trusted in the same way.
One useful distinction separates generative AI from predictive AI. Generative systems create text, images, audio, or code by learning patterns from huge amounts of data. Predictive systems are sold as tools for forecasting human behavior, such as who will be a good employee, who might commit a crime, or which student will struggle in school. Real progress has been strongest in generative AI, while many of the boldest claims about predictive AI fall apart under scrutiny.
Generative AI can be genuinely useful. It can help draft documents, summarize material, suggest code, and support people in many kinds of knowledge work. But it does not understand the world in a human sense. It predicts plausible outputs from patterns in data, which is why it can sound confident while fabricating claims, citations, or even accusations.
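The mechanism behind this confident invention can be seen in miniature. The sketch below is a toy bigram model in Python, built on an invented three-sentence corpus (both are assumptions for illustration, not any real system's implementation). It generates fluent-looking word sequences purely from word-pair statistics, with no representation of whether the resulting claims are true:

```python
# Illustrative toy only: a bigram "language model" that stitches words
# together from pair statistics. The corpus and seed are invented for
# this example.
import random
from collections import defaultdict

corpus = (
    "the study found the drug was effective . "
    "the study found the vaccine was safe . "
    "the trial found the drug was safe ."
).split()

# Record which words tend to follow which in the training text.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n_words, seed=0):
    """Emit up to n_words by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words - 1):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

# Prints a grammatical-sounding claim about a study, drug, or vaccine;
# the model has no notion of whether any such study actually exists.
print(generate("the", 8))
```

Real systems are vastly larger and predict from far richer context, but the basic point carries over: the output is plausible because it resembles the training data, not because anything checked it against the world.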
Predictive AI creates deeper problems because it is often asked to answer questions that are much harder than they appear. Human lives are shaped by luck, changing circumstances, hidden information, and social systems that do not sit still long enough to be modeled cleanly. When a company claims it can measure honesty from a face, infer personality from a short video, or rank people by future worth, it is often wrapping guesswork in technical language.
The consequences are serious because these systems are often deployed in settings where people cannot easily challenge them. Insurance companies, courts, schools, employers, and hospitals may rely on automated scores that appear objective but rest on weak assumptions. Even tools that perform well on narrow technical tasks can become dangerous when they are plugged into institutions that affect liberty, health, income, or reputation.
A clearer public conversation starts with plain language. Some AI systems work well for limited tasks like spam filtering, navigation, transcription, or generating drafts. Skepticism becomes essential when software claims it can peer into the future of a human being and make life-altering decisions on that basis.



