
First meeting of CSS Econ group

September 2, 2011

This Friday is the first meeting of the CSS department’s econ-modeling group. Interesting group of people and ideas.

Something Rob said in the CSS department’s “new student welcome” has stuck with me. Roughly:

“More and more today we have big data, big information, and big computers. Computational Social Science is all about trying to incorporate these things into a consistent and rigorous model.”

We’re all talking about what we did this summer, and so far, that fits. I’m looking forward to this semester.


Grad Student Pep Talk

August 29, 2011

I heard a great “pep talk” at the department meeting this last week about “how to succeed as a grad student.” I’d heard bits of it here and there before, but this semester I’m going to make a particular effort to act on it.

Highlights included:
– “Read a paper a day!”
– “Keep track of your papers with an online bibliography!”
– “Use versioning for everything!”
– “Write every day!”

My “five papers external-to-project-or-classes” goal this semester covers the first. I think I’ll use Mendeley for the second, along with this blog. I’ve been hearing great things about Git for a while now; I’ll aim to learn it with my class projects for the third. The fourth is more for students in the dissertation phase, I think, but I’ll try to employ it as I get classwork done.

Updates as I go!

Update and a New Semester

August 29, 2011

I’ve been gone from blogging for a while. My summer was busier than I expected — internship and a project at GMU. Good times!

I now have a backlog of posts I banged out over the summer; I need to get those up.

School starts again tomorrow. I have a goal this semester of reading at least 5 non-class-related papers each week — ideally 7 (one a day!) but we’ll see how the semester goes. Hopefully my new Kindle will make this easier — thanks Mom and Dad! Ideally I’ll post notes on each paper as I read them; this way I can record some thoughts somewhere and hold myself to it.

Papers each week should fall into one or more of these categories:
1. Recommended reading from Rob.
2. Reading of “people whose work I’m interested in.”
3. GMU project reading.
4. “Current Journals/Literature.”
a. Current Journals
b. Current literature “in my field” — ABM-related.
5. My own projects.

1. and 4. roughly correspond to “Classics and current literature,” which I think it is the job of every grad student to learn.

Roughly this is:

  • Learning the “field right now.”
  • “Learning the historical literature.”
  • GMU project reading.
  • “Keeping up with the current literature and research.”


Reading this week:
1. Bullard, Duffy (1998), JEDC: A model of learning and emulation with artificial adaptive agents
2. Howitt, Ozak (2009), WP: Adaptive Consumption Behavior (18pp)
3. Marimon, McGrattan, Sargent (1990), JEDC: Money as a medium of exchange in an economy with artificially intelligent agents (pp. 38–78 — lotta plots)
4. Farmer, Geanakoplos (2009), WP: Hyperbolic discounting is rational: Valuing the far future with uncertain discount rates (18pp)
5. Seppecher (2011, forthcoming), MD: Flexibility of wages and macroeconomic instability in an agent-based computational model with endogenous money
6. Bonus if time: Nourry, Venditti (2010), MD: Endogenous business cycles in OLG economies with multiple consumption goods
7. Bonus if time: Farmer, Lo (1999), Proc. Natl. Acad. Sci.: Frontiers of finance: Evolution and efficient markets

I’ll update as I read this week.

Collection of Links: Folks Doing Multi-Agent Modeling

April 10, 2011

Just a collection of places and resources for multi-agent modeling. Very incomplete.

Websites:
http://www.agent-based-models.com
— see the excellent “researchers” and “resources” sections. I need to spend a lot more time exploring this.


Journals:
Computational and Mathematical Organization Theory
— see the “Call for papers.” Cool.


Conferences:


Centers:


People and Papers:

Scientific Computing, Part 1

April 5, 2011

Today: “Version control for scientific computing,” or, “how not to waste half your research time remembering what you changed in your code.”

As I’ve started to code perhaps-dissertation-worthy projects, I’ve been thinking about how to avoid wasting extensive time in “code-confusion.” If you’ve programmed anything larger than a small one-off project, you’re familiar with this — “is this chart from the new code? I mean the new-new code, not the old new code… shoot…”.

It’s been a while since I considered the “software engineering process” more seriously than opening Vim and banging out code. As I move from one-off projects (i.e. HW and class projects) to longer-term scientific computing (i.e. potential research topics), I know that won’t work anymore. I need careful version control, ideally with some sort of scientific-computing bent, if such a thing exists.

A few Google searches netted some very useful articles, conversations, and blog posts. I don’t have time to do the conversations justice now, but my ridiculously incomplete summary is: (1) USE VERSIONING for your scientific computing, and think about keeping documentation, notes, and thoughts with your code; (2) Git looks really nice for a combination of “gets running fast” and “doesn’t interrupt workflow too much”; (3) I still need to figure out where to host: SourceForge, Google Code, and GitHub are the three main options I’m vaguely thinking about.
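One habit that attacks the “which code made this chart?” problem in point (1): stamp every output file with the hash of the commit that produced it. The helper below is my own hedged sketch (the function name and workflow are assumptions, not from the linked posts); it assumes Git is installed and the code lives in a Git repository.

```python
# Hypothetical helper (my own sketch): rename an output file so it
# carries the short hash of the current Git commit, making every
# chart/table traceable back to the exact code that produced it.
import subprocess
from pathlib import Path

def tag_output(path, repo_dir="."):
    """Rename `path` to include the repository's current short commit hash."""
    sha = subprocess.run(
        ["git", "rev-parse", "--short", "HEAD"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout.strip()
    p = Path(path)
    tagged = p.with_name(f"{p.stem}_{sha}{p.suffix}")
    p.rename(tagged)  # e.g. chart.png -> chart_1a2b3c4.png
    return tagged
```

A variant could write the hash into the chart title or a sidecar notes file instead, which pairs nicely with the “keep notes and docs alongside your code” advice.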

Quickly throwing up articles, in order of my favorites:

  • “Where’s the Real Bottleneck in Scientific Computing?” from the American Scientist, and Software Carpentry
    — This is tied with the next for my favorite discovery. The article is well worth reading, and a quick search turned up the course.
  • “Google Code/Sourceforge/GitHub/SciForge as a Science Repository” blog and discussion
Blog post about using Google Code/etc. as project management: docs and data as well as code. This led me to a very helpful discussion at FriendFeed; both have helpful links. Ideas in action at this post.
  • “Version Control in Scientific Computing”
    — Nice “introductory” blog post with descriptions, discussion of CVS vs Git. Convinced me to try Git first.
  • A voice for CVS-style
Blog post about CVS-style version control. I’ve only read a little of it, but the intro has broadly applicable observations.


The utility of multi-agent modeling

April 1, 2011

One must always ask why one is employing a particular tool for a job. Your affinity for the tool is not a good primary reason to use that tool — it must provide something particularly useful in answering a question. This goes doubly for complex tools — one must always justify not using a simpler tool.

One broad strength of multi-agent models is the ability to add enormous detail to a particular model. Enormous detail is a double-edged sword, though. In my mind, every bit of detail one adds is a potential systematic error introduced into the workings of the system, and this must be given careful thought.

When I say “enormous detail,” I mean both in terms of data and theory. A lot of micro-data exists, and there is a lot of microeconomic theory that I think is a long way from making it into economic models, macroeconomic and otherwise. A related topic is incorporating real geography into an economic model — I think this area is ripe for research.
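To make “every bit of detail is a potential systematic error” concrete, here is a purely hypothetical sketch (the class, field names, and numbers are all mine, not from any real model). Each field added to the agent is one more assumption that has to be defended:

```python
from dataclasses import dataclass, field

@dataclass
class ConsumerAgent:
    # Baseline behavior: one parameter, one assumption.
    income: float
    propensity_to_consume: float = 0.9
    # Each added "detail" below is another assumption to justify:
    location: tuple = (0, 0)            # real geography: which grid cell?
    travel_cost_per_cell: float = 0.1   # assumed-linear travel cost
    price_memory: list = field(default_factory=list)  # remembered past prices

    def consumption(self, distance_to_market: float = 0.0) -> float:
        """Spend a fixed fraction of income, net of travel costs."""
        return max(0.0, self.propensity_to_consume * self.income
                        - self.travel_cost_per_cell * distance_to_market)
```

Every added field widens what the model can express, and also widens what can quietly go systematically wrong — exactly the double-edged sword above.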

Implementing these things well in a multi-agent framework won’t be straightforward, but I think it is feasible. I’ll report some examples as soon as I have a few fleshed out a little more.

Satisficing and searching for elements of a “core” of ABM

March 28, 2011

I’ve been thinking about what might constitute a mathematical and/or statistical “core” of ABM. On a related note, something people often discuss when talking about agent-based models is employing non-optimizing preferences for agents — the idea being that people don’t optimize in real life, for a number of reasons. I’ve always been curious about alternative models of preferences — prospect theory, for one, or “satisficing” behavior. I just ran across this earlier today, almost entirely by accident:

http://www.moshe-online.com/satisficing/

Very interesting. The article has a little bit of a harsh tone to it, but it sounds very thorough. I’m looking forward to reading more.

I’ve often wondered if “satisficing”-style preferences could be formulated by simply considering “the correct additional costs” in some way or another.
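To pin down what I mean, here is a toy formulation (entirely my own sketch, not from the linked article): a satisficer scans options in order and accepts the first whose utility, net of accumulated search costs, clears an aspiration level, while an optimizer evaluates everything.

```python
def optimize(options, utility):
    """Classic optimizer: evaluate every option, take the best."""
    return max(options, key=utility)

def satisfice(options, utility, aspiration, search_cost=0.0):
    """Satisficer: accept the first option whose utility, net of the
    search cost paid so far, meets the aspiration level. If nothing
    qualifies, fall back to the best option seen."""
    best, spent = None, 0.0
    for opt in options:
        spent += search_cost
        if best is None or utility(opt) > utility(best):
            best = opt
        if utility(opt) - spent >= aspiration:
            return opt, spent
    return best, spent
```

With `search_cost > 0`, “good enough, found cheaply” can beat “best, found expensively,” which is one way “the correct additional costs” might enter.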

This is a subject for another post, but I understand why many people like optimizing (to use the term incredibly loosely!), while at the same time understanding why many people really dislike it. There are good points on both sides.

Something that will need to be solved, I think, with anything non-optimizing is a good understanding of how and what non-optimizing means for agent behavior, and why that choice of actions is a good approximation to reality. There is an *enormous* amount of “raw understanding” of the mathematical implications of various maximizing-behavior formulations, which is *very good to have* — not because maximizing correctly describes how the world functions, but because any model is necessarily an abstraction from reality (“is false,” to quote Dr. Box), and we must understand how the false things we have created *act* in their possible existence-spaces. If we don’t understand how our creations act in their created space, how will we know when they are deviating from reality in ways we strongly care about?

In my very limited view, the thing “optimizing” has going for it strongly is the thorough understanding that we have of this agent’s “action space.”

Yes, yes, this is all ridiculously vague. Definitely thoughts for future posts…