Satisficing and searching for elements of a “core” of ABM

March 28, 2011

I’ve been thinking about what might constitute a mathematical and/or statistical “core” of ABM. On a related note, something people discuss often when talking about agent-based models is employing non-optimizing preferences for agents — the idea being that people don’t optimize in real life, for a number of reasons. I’ve always been curious about alternative models of preferences — prospect theory, for one, or “satisficing behavior.” I just ran across this earlier today, almost entirely by accident:

Very interesting. The article has a slightly harsh tone to it, but it sounds very thorough. I’m looking forward to reading more.

I’ve often wondered whether “satisficing”-style preferences could be formulated by simply building “the correct additional costs” — search costs, deliberation costs, and the like — into an otherwise standard optimization problem.
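To make that idea concrete, here is a toy sketch. It contrasts a satisficer (accept the first option that clears an aspiration level) with an optimizer that pays a per-inspection search cost. Everything here — the function names, the linear cost model, the aspiration threshold — is my own invented illustration, not anything from the article above.

```python
import random

def satisfice(options, aspiration):
    """Return (index, value) of the first option meeting the aspiration level.

    If nothing clears the bar, settle for the best option seen.
    """
    for i, value in enumerate(options):
        if value >= aspiration:
            return i, value
    best = max(range(len(options)), key=lambda i: options[i])
    return best, options[best]

def optimize_with_search_cost(options, cost_per_look):
    """Pick the option maximizing (value - cumulative inspection cost).

    Inspecting option i means having looked at i + 1 options in total,
    each at a fixed cost — a deliberately simple linear cost model.
    """
    best_i, best_net = 0, options[0] - cost_per_look
    for i, value in enumerate(options):
        net = value - (i + 1) * cost_per_look
        if net > best_net:
            best_i, best_net = i, net
    return best_i, best_net

if __name__ == "__main__":
    random.seed(0)
    options = [random.uniform(0, 1) for _ in range(10)]
    print(satisfice(options, aspiration=0.8))
    print(optimize_with_search_cost(options, cost_per_look=0.05))
```

With a zero search cost the second agent is a plain maximizer; as the cost rises, it increasingly favors early, “good enough” options — which is the sense in which satisficing might fall out of optimizing once the right costs are counted.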

This is a subject for another post, but I understand why many people like optimizing (to use the term incredibly loosely!), while at the same time understanding why many people really dislike it. There are good points on both sides. Something that will need to be solved, I think, for anything non-optimizing, is a good understanding of what “non-optimizing” means for agent behavior, and why that choice of actions is a good approximation to reality.

There is an *enormous* amount of “raw understanding” of the mathematical implications of various maximizing-behavior formulations, which is *very good to have* — not because this correctly describes how the world functions, but because any model is necessarily an abstraction from reality (“is false,” to quote Dr. Box), and we must understand how the false things we have created *act* in their possible existence-spaces. If we don’t understand how our creations act in their created space, how will we know when they are deviating from reality in ways we strongly care about?

In my very limited view, the strongest thing “optimizing” has going for it is the thorough understanding we have of such an agent’s “action space.”

Yes, yes, this is all ridiculously vague. Definitely thoughts for future posts…
