Today I’m picking up where I left off last time.  I briefly explained how the “real” world and the “internet” are very different, no matter what many science fiction stories would have you believe.  My main contention with these hypothetical “meta-worlds” is that they are always rendered in three dimensions.  While many networked three-dimensional worlds exist and are quite popular, a lattice-based two- or three-dimensional embedding is artificial: it has more to do with the expectations of the user or gamer than with the characteristics of the hardware and information systems it runs on.  These information systems, including the internet itself, are all variations on network structures.  Any lattice-based scheme must exist as an (awkward, fragile, or redundant) abstraction on top of their true nature.

The expectations of the user or gamer come, in part, from how our senses are “tuned”.  Our stereoscopic vision is geared to detect and interpret movement in three dimensions, and to determine shape, color, and texture from the light reflected off of various objects.

Because vision is our most “sensitive” and “trained” sense, it is the most viable sense to use as an “interaction pathway” between humans and complex data.  This is of course true of nearly any computer interaction task, but it is especially true in this case.  There is a large amount of interesting work in this field, and I won’t be able to describe everything of interest here.  (However, two people stick out in my mind: Tufte, for his work on data graphics, and MacEachren, for How Maps Work.)

Using vision as the main sensory tool is sort of a no-brainer.  However, it carries a pretty stiff limitation.  You can’t “see” anything in more than three dimensions.  It’s hard for us even to conceive of something existing in more than three dimensions, yet it’s rather trivial for computers to deal with.  Furthermore, for all practical purposes, most representations will be limited to two dimensions, such as those offered by a computer monitor.

Finally, a lot of the information that is useful for describing complex systems does not fit neatly into any “a priori” dimension.  Rather, the most important dimensions of a complex system usually emerge from the relationships between data points.  An example of this is the PageRank score that Google uses to rank webpages.  A node cannot be placed on this dimension without first considering the connections of (pretty much) every other node in the network.
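To make that concrete, here is a minimal sketch of the idea behind PageRank, power iteration over a tiny hypothetical four-page web.  The link structure and damping factor below are illustrative assumptions, not Google’s actual data or implementation:

```python
import numpy as np

# A tiny made-up web: adjacency[i][j] = 1 means page i links to page j.
adjacency = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

damping = 0.85                      # the standard PageRank damping factor
n = adjacency.shape[0]
# Row-normalize so each page splits its "vote" evenly among its out-links.
transition = adjacency / adjacency.sum(axis=1, keepdims=True)

rank = np.full(n, 1.0 / n)          # start from a uniform distribution
for _ in range(100):                # power iteration until convergence
    rank = (1 - damping) / n + damping * rank @ transition

print(rank)                         # pages with more in-links score higher
```

Note that each page’s score depends on the scores of the pages linking to it, which is exactly why the whole network has to be considered at once.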

There is a profound difference between the basic three-dimensional Euclidean scheme that constrains our natural world, and the hyper-dimensional schemes that pervade much of complexity and network theory.  However, there are methods one can use to “summarize” the structure of a network in a lower dimension.  The convenient metaphor for this is a map.  Maps of the world are useful two-dimensional representations of a three-dimensional space.  Nobody lives, works, or moves in the center of the earth, and we only move off the surface in marginal cases (such as in airplanes).  Therefore, we can effectively use spherical coordinates to pinpoint a person’s location in three-dimensional space with only two pieces of information (latitude and longitude).  We all effectively live on a manifold of three (or four, if you really want to be picky) dimensional space.  We can “squash” this manifold into two dimensions to serve as a practical and effective summary of our surroundings.

In this same manner, we can squash the higher dimensionalities of networks into a more digestible visual format.  However, squashing dimensions almost always involves distortion.  Any map of the world you look at will invariably be “wrong” in the proportions of, or distances between, locations.  Certain techniques split the sphere into sections like an orange peel, or distort the northern and southern polar regions.  Luckily, we’ve got a lot of oceans and uninteresting/uninhabitable terrain (Antarctica) that serve as good neutral areas for skewing, stretching, or slicing.

There are two main approaches to summarizing the structure of a network in two (or so) dimensions.  The first is known as a “force-directed” embedding.  These plots treat connections between nodes as attractive or repulsive forces, and then simulate the network dataset as a physical system.  The layout updates continuously until it reaches equilibrium, at which point the general structure of the network is readily apparent.  These plots also make wonderful interactive interfaces.  Many such systems treat user interaction as just another force acting on the network, so a user can drag, stretch, or isolate the parts of the network he or she finds interesting through simple point-and-click interactions.

However, this method is not without its drawbacks.  First, the layout a force-directed algorithm produces is neither unique nor reproducible, and finding a globally optimal layout is intractable in many cases: depending on its starting conditions, the algorithm may settle parts of the network into different, less optimal orientations.  Second, every node must continually update its position based on its attraction to or repulsion from the other nodes.  With a naive all-pairs calculation, this quickly becomes impractical for large datasets on modern computing hardware, especially when the plot is meant to serve as an interactive interface as described above.
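Here is a minimal sketch of the force-directed idea, in the spirit of spring-embedder algorithms such as Fruchterman–Reingold.  The example network, constants, and step sizes are illustrative assumptions, not any particular tool’s implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Edges of a small made-up undirected network (node index pairs).
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)]
n = 5
pos = rng.standard_normal((n, 2))   # random initial 2-D layout

k = 1.0                             # ideal edge length
for _ in range(500):
    disp = np.zeros((n, 2))
    # Repulsion: every pair of nodes pushes apart, with force k^2 / distance.
    for i in range(n):
        delta = pos[i] - pos                     # vectors from every node to i
        dist = np.linalg.norm(delta, axis=1)
        dist[dist == 0] = 1e-9                   # avoid division by zero
        disp[i] += (delta / dist[:, None] * (k**2 / dist)[:, None]).sum(axis=0)
    # Attraction: connected nodes pull together, with force distance^2 / k.
    for i, j in edges:
        delta = pos[i] - pos[j]
        dist = max(np.linalg.norm(delta), 1e-9)
        pull = delta / dist * (dist**2 / k)
        disp[i] -= pull
        disp[j] += pull
    # Cap the per-step movement (a "temperature"), so the system settles.
    length = np.linalg.norm(disp, axis=1, keepdims=True)
    length[length < 1e-9] = 1e-9
    pos += disp / length * np.minimum(length, 0.05)

print(pos)                          # node coordinates at (near) equilibrium
```

The all-pairs repulsion loop is the O(n²)-per-step cost mentioned above, and a different random seed will produce a different (but equally valid) layout, which illustrates the reproducibility problem.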

The other option is to use eigendecomposition (also called spectral decomposition), a strange-sounding method that is used in countless science and engineering applications.  To do this, the network is described as a matrix, where each row/column pair describes the connection between two nodes.  Eigendecomposition itself is a special case of the singular value decomposition (SVD), which is a very simple factorization of a matrix:

(From Wikipedia)
Suppose M is an m-by-n matrix whose entries come from the field K, which is either the field of real numbers or the field of complex numbers. Then there exists a factorization of the form

$M = U \Sigma V^*$

where U is an m-by-m unitary matrix over K, the matrix Σ is m-by-n with nonnegative numbers on the diagonal and zeros off the diagonal, and V* denotes the conjugate transpose of V, an n-by-n unitary matrix over K. Such a factorization is called a singular-value decomposition of M.
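We can verify this factorization numerically on a small real matrix (the matrix values below are just an example; for real matrices, the conjugate transpose V* is simply the transpose):

```python
import numpy as np

# An arbitrary real 2x3 matrix M.
M = np.array([[ 3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])

U, s, Vt = np.linalg.svd(M)           # s holds the diagonal of Sigma
Sigma = np.zeros(M.shape)             # Sigma is m-by-n, like M
Sigma[:len(s), :len(s)] = np.diag(s)  # nonnegative values on the diagonal

reconstructed = U @ Sigma @ Vt
print(np.allclose(M, reconstructed))  # True: the factorization is exact
```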

Eigendecomposition is essentially the same process, performed on square (symmetric, positive semidefinite) matrices.
Essentially, eigendecomposition treats the matrix as a system of equations and solves for the special vectors that the matrix merely scales: each vector v satisfying Mv = λv is an eigenvector, and the scalar λ that accompanies it is its eigenvalue.  The higher an eigenvalue’s magnitude, the “more important” its eigenvector is for describing the overall distribution of information in the matrix.  The eigenvectors corresponding to the most significant eigenvalues can serve as useful two- or three-dimensional approximations of a network in this fashion.
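As a sketch of how this yields coordinates, here is a spectral embedding of a small made-up network using its graph Laplacian, one common choice of matrix.  (One caveat: for the Laplacian specifically, it is the eigenvectors of the *smallest* nonzero eigenvalues, rather than the largest, that carry the layout.)

```python
import numpy as np

# Symmetric adjacency matrix of a small undirected example network.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

# Graph Laplacian: L = D - A (degree matrix minus adjacency matrix).
L = np.diag(A.sum(axis=1)) - A

# For a symmetric matrix, eigh returns real eigenvalues in ascending order.
eigenvalues, eigenvectors = np.linalg.eigh(L)

# Skip the trivial constant eigenvector (eigenvalue 0); the next two
# eigenvectors give each node an (x, y) coordinate in the plane.
coords = eigenvectors[:, 1:3]
print(coords)                       # one 2-D position per node
```

Unlike the force-directed simulation, this layout is computed in one shot and is the same every time.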

I’m glossing over a ton of detail here, such as the definitions of (and differences between) Laplacian and correspondence matrices, the difference between weighted and unweighted networks, and so on.  However, the main point is that eigendecomposition methods are exact, closed-form solutions for a network’s structure, while force-directed methods offer useful, interactive approximations.

So, to summarize, we have a couple of useful tools for turning the hyper-dimensional world of information networks into forms we can digest a little more easily.  This is an important first step towards giving us a means of understanding and navigating complexity.
