Can the bedroom of an eleven-year-old girl be objectively a “mess”? To a pair of exhausted, exasperated working parents the answer is obvious. But when the girl in question notes that “mess” is a value claim and thus is not a matter of fact but an opinion, the point must be grudgingly conceded -- though allowance may still be withheld.
Pride in your child's growing ability to articulate the difference between fact and opinion is tempered by the realization that it's being turned against you, and that it will soon be deployed in disagreements inevitably more fraught than whether the dirty socks and Taylor Swift t-shirt need to be picked up right now. That my daughter has learned this skill in school on one level validates our decision to enroll her where we did, though on another it suggests continued vigilance is warranted: The Common Core curriculum, under fire from numerous quarters for a number of reasons, is now also getting the attention of moral philosophers, who say it “embeds a misleading distinction between fact and opinion.” From Justin P. McBrayer at The Stone blog of The New York Times:
[O]ur public schools teach students that all claims are either facts or opinions and that all value and moral claims fall into the latter camp. The punchline: there are no moral facts. And if there are no moral facts, then there are no moral truths.
The inconsistency in this curriculum is obvious. For example, at the outset of the school year, my [second-grade] son brought home a list of student rights and responsibilities. Had he already read the lesson on fact vs. opinion, he might have noted that the supposed rights of other students were based on no more than opinions. According to the school’s curriculum, it certainly wasn’t true that his classmates deserved to be treated a particular way — that would make it a fact. Similarly, it wasn’t really true that he had any responsibilities — that would be to make a value claim a truth.
McBrayer says he’d realized many of his college students already don’t believe in moral facts, and that conversations with other philosophy professors suggest “the overwhelming majority of college freshmen … view moral claims as mere opinions that are not true or are true only relative to a culture.” The implications are obvious and relevant to the recent discussion here concerning curricula at Catholic universities. Concerns about moral relativism in academia are long established, though, and it’s too soon to know what effect anything specifically inculcated by Common Core will have. College students were cheating, for example, long before Common Core; so were corporate executives; so were spouses. But it bears watching, of course, given that millions of students in more than forty states are being educated according to the standards -- which themselves might have arisen out of the academic environment McBrayer describes.
Plus, given the pace of technological development, it might one day be not just human beings who need a moral compass.
This may be getting a little out ahead of the immediate issue, but researchers in artificial intelligence, cognizant of the need to keep a theoretical super-intelligent AI from “going rogue and threatening us all,” are thinking about how to create and maintain “friendly AIs.” From Nautilus:
It’s simply not enough … for an AI to understand what moral behavior is; it must also have a preference for it. … [T]he hypothesis is that it won’t be humans who directly create a super-intelligence; instead, we’ll create a human-level AI that then continuously improves on its own design, making itself far more ingenious than anything humans could engineer. Therefore the challenge is in making sure that when it comes specifically to refining its own intelligence, the AI goes about it wisely, making sure that it maintains our moral values even as it rewires its own “brain” ….
As of now, [AI researcher] Nate Soares thinks we’re very far from an adequate theory of how an intelligence beyond ours will think. One of the main hurdles he highlights is programming the AI’s level of self-trust, given that it will never be able to come up with a mathematically certain proof that one decision is superior to another. Program in too much doubt, and it will never decide how to effectively modify itself; program in too much confidence, and it will execute poor decisions rather than searching for better ones. …This is why figuring out how to make machines moral first, perhaps before allowing them to self-modify, “is of critical importance,” writes Soares. “For while all other precautions exist to prevent disaster, it is value learning which could enable success.”
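The self-trust tradeoff Soares describes can be made concrete with a toy decision rule -- this is purely an illustrative sketch, not anything from Soares's actual research: an agent accepts a proposed self-modification only when its pessimistic estimate of the gain (estimate minus uncertainty) clears a required threshold. Set the threshold (or the uncertainty) too high and the agent never modifies itself; too low and it adopts changes it shouldn't.

```python
def accept_modification(estimated_gain: float,
                        uncertainty: float,
                        required_gain: float) -> bool:
    """Toy self-trust rule (illustrative only): accept a self-modification
    only if the pessimistic estimate of its benefit still clears the bar.
    All names and numbers here are invented for the example."""
    return estimated_gain - uncertainty > required_gain


# Too much doubt: even a clearly beneficial change is rejected,
# so the agent can never improve itself.
assert accept_modification(estimated_gain=0.9,
                           uncertainty=0.5,
                           required_gain=0.5) is False

# Too much confidence (uncertainty ignored): a marginal, risky change
# gets waved through.
assert accept_modification(estimated_gain=0.1,
                           uncertainty=0.0,
                           required_gain=0.05) is True
```

The point of the sketch is only that "how much to trust your own reasoning" has to be a parameter somewhere, and that neither extreme setting is workable -- which is why Soares argues the values have to be right before self-modification begins.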
Instilling this in a child “can be maddening,” the article notes, as “every parent knows.” What makes the situation tolerable “is the child’s lack of power to do any real harm in the meantime—which may not be the case at all with a super-intelligence.” Strange to think how the debate over distinguishing fact from opinion can feel theoretical and abstract in our present moment, yet urgent and concrete with regard to our theoretical future.