Rebuilding Society on Meaning
# Ideas
- an interesting modeling approach could be an LM that converses with users, encouraging them to introspect on their values, combined w/ a theory of mind that the model constructs of them and persists between sessions (this could also get quite dark); a minimal sketch follows this list
- I like the idea of trying to align recommenders w/ what users actually value; however, I’m concerned about how the existing incentive structure (advertising / attention economy / surveillance capitalism) will incorporate research in this area. For instance, one solution could involve gathering more data to discern more intimate information about users, which requires additional surveillance and could further drive nefarious attempts to manipulate values (à la how maximizing ad revenue leads to feedback loops that manipulate preferences)
- I’m intrigued by the idea of inferring an agent’s values via their attentional policy, and it inspired some ideas about how this could be accomplished. I am, however, concerned about how this might be developed & deployed given our current incentive structure. As @joe describes, the present profit model drives engagement via a shallow understanding of the user as consumer. Given this paradigm, inferring a latent value system would seem to require more complex modeling of the user, and thus additional data. Though well intentioned, might attempting to infer values under a profit motive incentivize deeper invasions of users’ privacy and further commoditization of their data?
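A minimal sketch of the first idea above: an introspection session where the LM keeps a persisted theory-of-mind profile of the user between sessions. The `chat(system, user) -> str` function, the profile schema, and the `profiles/` directory are all assumptions for illustration, not any particular API:

```python
import json
from pathlib import Path

PROFILE_DIR = Path("profiles")  # hypothetical on-disk store for per-user models

def load_profile(user_id: str) -> dict:
    """Load the persisted theory-of-mind profile, or start an empty one."""
    path = PROFILE_DIR / f"{user_id}.json"
    if path.exists():
        return json.loads(path.read_text())
    return {"values": [], "open_questions": []}

def save_profile(user_id: str, profile: dict) -> None:
    PROFILE_DIR.mkdir(exist_ok=True)
    (PROFILE_DIR / f"{user_id}.json").write_text(json.dumps(profile, indent=2))

def introspection_session(user_id: str, chat) -> None:
    """One conversation turn; `chat(system, user) -> str` is a stand-in
    for whatever LM API is actually used."""
    profile = load_profile(user_id)
    system = (
        "You help the user introspect on their values. "
        f"Current model of the user: {json.dumps(profile)}. "
        "Ask one gentle, open-ended question at a time."
    )
    user_msg = input("> ")
    print(chat(system, user_msg))
    # After the turn, the LM proposes updates to its model of the user.
    update = chat(
        "Given the conversation, return JSON updates to the user model "
        'as {"values": [...], "open_questions": [...]}.',
        user_msg,
    )
    profile.update(json.loads(update))
    save_profile(user_id, profile)  # persists between use sessions
```

The dark side noted above is visible even in this toy: whoever controls `profiles/` controls an increasingly intimate model of the user.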
# Notes
To those in Humane tech, Machine Learning and Economics: 3 subfields to start
- unified ‘humane tech’: great list of related orgs, many of which I’ve been tracking / reaching out to as well
- people develop the virtues needed for the environments and relationships they find themselves in. So, what we want is not a set of virtues. It’s an algorithm: a way for a person or agent to recognize, conceptualize, and start living by a virtue that’s missing or needed in their environment
- First up are alternatives to rational choice theory / expected value (I hear it’s in the works). From there, we probably need to replace game theory wholesale, and there’ll be other serious changes across microeconomics (watch out price theory, information econ, and organizational econ)
Values, Preferences & Meaningful Choice
- central argument: what an agent chooses to attend to leading up to a choice (their attentional policy) reveals more about their values than the choice itself (the revealed preference)
- revealed preferences: simple, universally applicable, robust, and high resolution
- users can be led astray / choose something that doesn’t serve their interests when only revealed preferences / engagement metrics are tracked
- non-introspective agents can still (often) express a preference
- constitutive attentional policies fall short in that they:
- require more introspection
- require more articulacy
- are harder to verify
- Suggested research directions:
- rich, interactive experiences to uncover CAPs
- infer CAPs from data (a toy sketch follows this list)
- visualizations & cryptography to make CAPs understandable, auditable, and legitimate
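A toy sketch of the “infer CAPs from data” direction, assuming attention traces are already available as (choice, aspect, dwell-time) events; the event schema and thresholds are invented for illustration:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class AttentionEvent:
    choice_id: str    # which decision this attention occurred within
    aspect: str       # what was attended to, e.g. "will I regret this?"
    dwell_secs: float

def candidate_caps(events: list[AttentionEvent],
                   min_choices: int = 3,
                   min_share: float = 0.2) -> list[str]:
    """Surface aspects a user reliably attends to across many choices.
    An aspect that repeatedly draws a large share of attention is a
    *candidate* constitutive attentional policy (CAP), to be confirmed
    with the user rather than asserted."""
    # Total dwell time per aspect, within each choice.
    per_choice = defaultdict(lambda: defaultdict(float))
    for e in events:
        per_choice[e.choice_id][e.aspect] += e.dwell_secs
    # Count the choices in which each aspect drew a large attention share.
    counts = defaultdict(int)
    for aspects in per_choice.values():
        total = sum(aspects.values())
        for aspect, secs in aspects.items():
            if secs / total >= min_share:
                counts[aspect] += 1
    return [a for a, n in counts.items() if n >= min_choices]
```

Anything surfaced this way would still need the visualization / auditability layer above before being treated as a value the user actually holds.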
The Gradient: Joe Edelman on Meaning-Aligned AI
- revealed preferences are assumed to be independent variables, but they are in fact dependent variables due to the feedback loop the recommender induces
- feedback loops arise because the recommender provides only a small, biased sample to select from (see the simulation sketch after these notes)
- values as constitutive attentional policies
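A toy simulation of the two points above, with invented numbers: because choices can only come from the biased sample the recommender shows, the “revealed preference” (click counts) tracks exposure, and exposure in turn drifts the user’s actual preferences:

```python
import random

random.seed(0)
true_value = {"deep": 0.7, "clickbait": 0.3}  # what the user actually values
taste = dict(true_value)                       # current (malleable) preferences
clicks = {"deep": 1, "clickbait": 2}           # clickbait got an early boost
EXPOSURE_DRIFT = 0.02                          # mere-exposure effect per view

for _ in range(1000):
    # The recommender shows a small, biased sample: here, just the single
    # item with the most clicks so far (engagement maximization).
    shown = max(clicks, key=clicks.get)
    if random.random() < taste[shown]:
        clicks[shown] += 1                     # the "revealed preference"
    # Exposure nudges taste toward whatever is shown: the feedback loop.
    taste[shown] = min(1.0, taste[shown] + EXPOSURE_DRIFT)

print("clicks:", clicks)  # e.g. clickbait dominates despite lower true value
print("taste:", taste)    # and the user's taste has drifted to match
```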
- Social Programming Considered as a Habitat for Groups