I have always believed that the study of visualizations and the study of (natural) languages can mutually inform each other; by examining the development of linguistic theory, we may learn something useful for pushing forward theoretical research on visualization.
Noam Chomsky’s Syntactic Structures is undoubtedly a classic of linguistic theory. The philosophy it embodies was the Zeitgeist of its time: the 1960s were the golden age of the “cognitive revolution”, and a central belief was that cognition and intelligence were to be explained as the manipulation and transformation of symbols. Chomsky’s approach is now characterized as “generative grammar”, in contrast to the previously dominant approach, “structuralism”. Before structuralism, as Wikipedia tells us, linguistic theory was mostly about how languages originate and change historically.
The development of linguistic theory through paradigm shifts, to me, is not a continuous effort to refine existing theories. Instead, it reflects a changing notion of which theoretical questions are important, or simply, theories of what? Historical linguistics is concerned with the theoretical question of how natural languages evolve. Structuralists such as Saussure seemed less concerned with such questions; Saussure believed that language could be analyzed as a formal system of differential elements such as phonemes, morphemes, lexical categories, noun phrases, verb phrases, and sentence types.
Chomsky’s approach builds upon structuralism, but he was more interested in a theory that explains the generative mechanisms of language. The structuralist approach, to him, was inadequate because its theories could not produce the full range of diverse sentences in English. Chomsky’s own theory, however, focuses exclusively on syntax and sets aside the problem of meaning. In response, the emerging field of cognitive semantics became more interested in theoretical questions about semantics: What is meaning? How does meaning arise?
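To make the idea of a generative mechanism concrete, it can be sketched as a toy phrase-structure grammar: a small set of rewrite rules that, applied recursively, produce syntactically well-formed sentences. This is only an illustrative sketch in Python; the grammar and vocabulary here are invented for the example, not Chomsky’s actual rules.

```python
import random

# A toy phrase-structure grammar: each rule rewrites a nonterminal
# symbol into one of several sequences of symbols.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["chart"], ["reader"]],
    "V":   [["shows"], ["misleads"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol into a list of terminal words."""
    if symbol not in GRAMMAR:           # terminal symbol: emit as-is
        return [symbol]
    rule = random.choice(GRAMMAR[symbol])
    return [word for part in rule for word in generate(part)]

print(" ".join(generate()))  # e.g. "the chart misleads a reader"
```

The point is not the tiny vocabulary but the mechanism: a finite set of rules generates a well-defined (and, with recursion, potentially infinite) set of grammatical sentences, which is exactly the explanatory move the structuralist inventory of elements could not make on its own.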
Visualization theory research is much less advanced than linguistic theory research, but I seem to notice similar patterns in the development of visualization theory. The pinnacle of visualization theory research so far is represented by Bertin’s Semiology of Graphics, Mackinlay’s APT framework, and Wilkinson’s Grammar of Graphics. Bertin’s work, to me, is inherently structuralist. The APT framework builds upon Bertin’s structuralist work and incorporates generative mechanisms. These two approaches, moreover, do take into consideration the meaning and expressiveness of visualizations. Historical analysis and cognitive analysis in visualization theory have been lacking so far (our lab’s work on Distributed Cognition and mental models may be characterized as cognitive).
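To show what a generative mechanism looks like on the visualization side, here is a sketch in the spirit of APT’s effectiveness criterion: rank visual channels by perceptual effectiveness for each data type, then greedily assign the best free channel to each data field. The rankings are abridged and the function names are my own; this is an illustration of the idea, not APT’s actual algorithm.

```python
# Abridged, illustrative channel rankings per data type, ordered from
# most to least perceptually effective (loosely after Mackinlay).
EFFECTIVENESS = {
    "quantitative": ["position", "length", "angle", "area", "color saturation"],
    "ordinal":      ["position", "density", "color saturation", "length"],
    "nominal":      ["position", "color hue", "texture", "shape"],
}

def assign_channels(fields):
    """Greedily give each field the most effective channel still free.

    `fields` is a list of (name, data_type) pairs, ordered by importance.
    """
    used, assignment = set(), {}
    for name, dtype in fields:
        for channel in EFFECTIVENESS[dtype]:
            if channel not in used:
                used.add(channel)
                assignment[name] = channel
                break
    return assignment

print(assign_channels([("price", "quantitative"),
                       ("weight", "quantitative"),
                       ("brand", "nominal")]))
# {'price': 'position', 'weight': 'length', 'brand': 'color hue'}
```

As with the grammar sketch, a small set of rules generates many concrete designs: feed in different field lists and the procedure enumerates encodings, which is the sense in which APT and the Grammar of Graphics are generative rather than merely taxonomic.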
I do not believe, however, that more recent theoretical paradigms are always better than older ones; there is simply no such thing as an “absolute good or better”. These paradigms are concerned with different theoretical questions, which may be deemed important or unimportant at different times. From a utilitarian perspective, the structuralist and generative paradigms in linguistics do seem to be the most useful: they can be directly applied to build intelligent machines that automatically produce syntactically correct sentences. Similarly, we have seen how the APT framework and the Grammar of Graphics led to sophisticated visual analytics systems like Tableau.