It has become common for theoretical positions in cognitive science to be implemented as models that can be simulated on a computer, because of several related benefits:
- A simulation ensures that the conclusions of the theory follow from its assumptions alone, and that its claims contain no contradiction.
- Simulations can concretely show the correspondence between theory and data.
- Model predictions can be made using a computer, rather than the theory in the theorist's head, to which experimenters have limited access.
- Complex interactions between simple components can result in patterns of behavior that can only be discovered by quantitative analyses such as simulations.
Our experience with cognitive modelling is in the area of visual word recognition (VWR) — the processing of the written word — both as producers of models and as evaluators of models. Whilst VWR research has been heavily influenced by the development of models of many empirical phenomena, we have observed problems that lead us to believe that a more effective combination of modelling and data could be achieved.
At present, relatively few researchers have the programming skills to produce computational models. An unfortunate consequence of this is that the ideas that are instantiated in computational models tend to be those favoured by these modelling experts. Modelling novices (and their ideas) are excluded from this process.
Even among experts, it seems that the original developers of a particular model are typically the only ones who can test or extend it. Some models have simply been unavailable for further testing after initial publication, and for other models, the program has been of a form that cannot be modified or extended. As with experiments, there is an important concern that modelling results be replicable — and not merely unverifiable assertions — and that extensions be explored to properly understand the implications of these results.
In principle, a model could first be reconstructed from the published paper before being extended, but the programming time involved is usually prohibitive. Even when the original code is available to modify, doing so is only an option for the few who already model. Even for them, the gap between a conceptual description of the model and the code implementation makes such progress difficult.
The theoretical importance of already collected data and of potential experiments would be clearer if experimenters could more easily examine the predictions of models. Modellers would find it more difficult to ignore inconvenient experiments if they were accompanied by simulation failures.
These problems could be resolved if all models were implemented in a single system that is easy to use (e.g., via a visualization interface) yet highly flexible and powerful. easyNet is such a system.