If you see a product rated 4.8, with 1000 ratings, and you think the rating should be 4, what should you do?
Vote 4 = you believe in an asymptotic approach to truth. This is "absolute truth": a naively trusting view that if everyone reports what they see from their own point of view, things will work out on average.
Vote 1 = you have more immediate influence, but can still be truth-aligned. This is "relative truth".
Someone taking your action out of context would get the wrong idea, but someone looking at the world you create will actually get a more accurate view (accurate relative to your view).
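To make the "immediate influence" point concrete, here is a quick arithmetic sketch of the rating example above. The numbers follow the 4.8-average, 1000-rating setup; everything else (function names, the loop) is just illustration:

```python
# One vote's effect on a running average of n existing ratings.
def new_average(avg, n, vote):
    """Average after adding one vote to n existing ratings."""
    return (avg * n + vote) / (n + 1)

avg, n, target = 4.8, 1000, 4.0

honest = new_average(avg, n, 4)     # "absolute truth" vote
strategic = new_average(avg, n, 1)  # "relative truth" vote

print(f"vote 4 -> {honest:.4f}")     # barely moves the average
print(f"vote 1 -> {strategic:.4f}")  # nearly 5x the movement toward 4.0

# How many strategic 1-votes would it take to drag the average to 4.0?
# Exactly: (4.8*1000 + k) / (1000 + k) = 4.0  =>  k = 800/3, so 267 votes.
k = 0
a = avg
while a > target:
    a = new_average(a, n + k, 1)
    k += 1
print(f"{k} one-star votes reach {target}")
```

The asymmetry is the whole point: the honest vote moves the average by about 0.0008, the strategic vote by about 0.0038, so the "relative truth" voter gets several times the leverage per vote.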
There is also a third type: pure manipulators, who vote strategically for private gain (paid reviewers, for example). The advantage "absolute truth" people have over this third, manipulative type is that they can locally justify their behavior.
They can say "look, I regularly use the product, but didn't recommend it to my friends; it is credible that I think of it as a 4". But if questioned, a "relative truth" voter has a harder time explaining his behavior. Despite rating the product a 1, he didn't return it, and he uses it daily. This is detectably inconsistent. He may offer justifications like "but I'm just trying to meta-adjust the average rating and increase my personal power, because I strongly believe the correct rating is a 4!". But this does not distinguish him from type-3 manipulators who make the same argument while hiding the fact that they're being paid for their ratings.
This system is vulnerable to manipulators. Relative-truth voters' power is overrepresented because they use the full voting range, and tolerating them masks the actions of actual manipulators.
People who are effective have good reason to be more confident in themselves, and are tempted to seek more power and to exaggerate their claims. The ends justify the means.
Evolution also fell for this. "Absolute truth" evolution is reasonable: testing hypotheses, and so on. "Relative truth" evolution just monitors an individual's style and confidence, trying to shortcut the cost of really understanding things and doing experiments.
This works great until it becomes common, at which point you don't have enough people trying to get to the truth, and the market becomes rich for manipulators who exploit the earlier would-be exploiters.
This bias towards meta-evaluation of how facts are presented is inherent even in people who try to be as reasonable as possible. So even for someone leading scientists (to pick an example profession), intentionally overstating your confidence level will assuage the internal doubts raised by their built-in confidence detectors too.
So in this case we observe heavy polarization. True evaluators may range from 40% to 60% confidence, but this logic pushes the 40s to zero and the 60s to 100.
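A toy model of this polarization (the attention function and everything else here is my own illustrative assumption, not something from the argument itself): if audiences weight a voice by how extreme its stated confidence is, each evaluator's best move is to round an internal 40-60% confidence to 0% or 100%.

```python
# Hypothetical model: influence is proportional to distance from 50%,
# so the optimal stated confidence snaps to an extreme.
def stated(internal_pct, attention=lambda s: abs(s - 50)):
    """Pick the statement in 0..100 that maximizes attention while
    staying on the same side of 50 as the internal belief."""
    side = range(0, 50) if internal_pct < 50 else range(51, 101)
    return max(side, key=attention)

print([(c, stated(c)) for c in (40, 45, 55, 60)])
# internal 40s and 45s get stated as 0; 55s and 60s as 100
```

Under this (assumed) incentive, every moderate internal confidence is reported as total certainty one way or the other, which is exactly the bimodal pattern described above.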
The meta-defense is to 1) resist this everywhere, 2) precommit to exposing your doubts, 3) resist the temptation of confidence and certainty, 4) rake in the profits paid to you by reality, and 5) move the foundation of human thought away from exploitable evaluation strategies.
In practice this means being suspicious of people with exceptionally high charisma or confidence. A useful test to disqualify basic manipulators is to press them to admit they have made a mistake; v0 of their strategy will have a hard time. Of course, truly expert manipulators have built-in humility.
Another strategy is to observe "from a distance": quantify things as much as possible and look at the diff between results when a person is involved and when they are not. If you can detect a fault even while everyone around someone is raving about them, it can be a sign of a manipulator.
Media is incentivized not to look too closely at its own system, since more absolute-truth-based systems would have, on average, lower confidence levels and be less attractive to naive audiences. This is why news and punditry have no interest in retroactively evaluating people's claims and motives to see whether they were "relative truth" types or manipulators.
Much of society is about indoctrinating young people into absolute truth, to make them more pliable for the actual manipulators. This happens more where diversity leads to contact between people who literally don't care about each other at all: the psychological manipulation of African slaves' worldviews in the US South was probably more severe than contemporary indoctrination of co-culturalists into beliefs in the North. Actually caring about someone for cultural reasons tends to increase retribution for manipulative behaviors, since the connection is stronger. A similar example: restaurants in a train station are much more likely to be bad deals than ones in small towns with strong reputation-maintenance systems.
Absolute truth is really good for dealing with the real universe. You can trust the numbers of a good physicist who is not power-motivated; but a relative-truth physicist can justify things like "I'm smarter than X, so I will say Y to increase my power, which leads to more truth later on". The scary thing is that defeating people like this requires counter-manipulation (whether they're actually truth-aligned or just pure power manipulators).
But if a group is too good at this, or has too low of a truth value, they will be beaten on the merits.
In poker there is a clear equilibrium: as much as many lifelong absolute-truth believers hate it, correct play involves a precise level of manipulation and misrepresentation. But even poker has a strong meta-control: the rules of the game are fixed, and collusion between dealers and players to distort the game entirely is rare. In the real world there is no such arrangement, so it's very hard to carve out a truly safe area. The only real limitation is individual minds and their genetic/cultural tendencies towards or away from truth, which, while weak, can certainly be imagined to be even weaker in a sci-fi dystopia.
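The "precise level of manipulation" in poker can be made concrete with the standard indifference calculation for a bettor with a polarized range. This is the textbook model (bet B into pot P), not anything specific to the argument above:

```python
# In the standard polarized-betting model, a bettor who bets B into a pot P
# should bluff just often enough that the caller is indifferent to calling.
def indifference_bluff_freq(pot, bet):
    # Caller risks `bet` to win `pot + bet`; calling has zero EV when the
    # bluffing frequency q satisfies q * (pot + bet) = (1 - q) * bet,
    # i.e. q = bet / (pot + 2 * bet).
    return bet / (pot + 2 * bet)

print(indifference_bluff_freq(100, 100))  # pot-sized bet -> bluff 1/3 of the time
```

The point matches the text: equilibrium play prescribes an exact, non-zero rate of misrepresentation; bluffing more or less than this hands money to an observant opponent.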