
Rotations in N dimensions

Several years ago I was writing a Machine Learning paper that required me to do rotations in an arbitrary number of dimensions. As such I had an entire section of the paper devoted to explaining how that was done before moving on to the actual algorithm. Here I have extracted the portion where I explain N-dimensional rotations, basically rotations in 4-dimensional space or higher, and turned it into its own PDF. I hope it will be of use to some of you in understanding the process. It's a bit math-heavy, but I am, as always, happy to answer any questions.

    [pdf]

Nodal Analysis Tutorial

Nodal Analysis is a technique for circuit analysis where each node (a point where two or more components are connected) is analyzed individually. This analysis can be used to determine the voltage, or any other variable, at any point in the circuit, sometimes as a function of time. It is based on the fact that at any node all the current entering the node is equal to all the current leaving that same node. Using this fact we can reduce each node to an equation, and then the entire circuit to a set of simultaneous equations.
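In symbols, if we count currents flowing into a node as positive and currents flowing out as negative (the sign convention used throughout this tutorial), Kirchhoff's current law at any node reads:

$$ \sum_{k} I_{k} = 0 $$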

Identifying nodes

First you need to learn how to identify nodes in a circuit. A node is simply the junction where two or more components are connected by wire. The following diagram shows a circuit where the nodes are circled in red:

Analyzing a node

The next step is to represent each node in your circuit as an equation. If you are doing an analysis in the time domain then this is usually represented as voltage as a function of time \(V(t)\). This can also be done in the frequency domain using Phasors. For this example we will work in the time domain. Let's use the same circuit as the one above:

First we need to assign arbitrary directions for the current flow in and out of each component in the circuit. These don't need to match the direction the current actually flows; if we get one backwards, the value we compute for the current through that component will simply come out negative. What is important is that if current goes in one side of a component it comes out the other side. Other than that it's completely arbitrary. So let's pick some random directions for the current flow in our diagram:

You probably also noticed the ground point. This is also arbitrary: you can pick any node in the circuit as your ground. Mathematically you treat whatever node you pick as ground as being at 0 volts. The point you pick as ground will affect the voltage values you get for the other nodes, but they will all be the same relative to each other.

Let's assume \(I_{1}\) varies with time. Therefore it will be represented as a mathematical function: \(I_{1}(t)\). Now let's look at each node individually and represent it mathematically. In addition let's assign a variable to each node representing that node's voltage.

First let's look at the upper left node labeled \(V_{1}\), the one connecting \(I_{1}\), \(R_{1}\), and \(C_{1}\). It has three paths for current to flow in/out from each of the three components. We know these three currents will add up to zero because all the current coming into the node (positive values) must equal all the current exiting the node (negative values). With this knowledge we can represent the node mathematically.

The current coming in from \(I_{1}\) is easy: since this is a current source that varies with time, it is simply \(I_{1}(t)\).

We then need to consider the current coming into the node from \(R_{1}\). Ohm's Law, which states \(I=V/R\), lets us represent the current flowing through \(R_{1}\) in these terms. Since \(V_{4}\) is our ground point it will always have a voltage of 0. Therefore the current flowing through \(R_{1}\) is simply:

$$ \frac{0-V_{1}(t)}{R_{1}} $$

Now the current through \(C_{1}\) is a bit more tricky since we need to use differential equations to model a capacitor in the time domain. The differential equation representing the current flowing through a capacitor is:

$$ I=C\frac{dv}{dt} $$

Therefore we represent the current through \(C_{1}\) as:

$$ C_{1}\frac{d}{dt}\left(V_{1}(t)-V_{2}(t)\right) $$

Remembering that the sum of these three currents will equal zero we come up with the equation:

$$ I_{1}(t)+\frac{0-V_{1}(t)}{R_{1}}-C_{1}\frac{d}{dt}\left(V_{1}(t)-V_{2}(t)\right)=0 $$

The final step is to turn this into a function representing the node's voltage as a function of time, in other words \(V_{1}(t)\). To do this we just use some calculus and differential equations to solve for \(V_{1}(t)\) in the above equation. Teaching calculus is outside the scope of this document, so we will leave the details to the reader. Once we solve for \(V_{1}\) we get the following:

$$ V_{1}(t)= \frac { \left( \int\limits_0^t e^{\frac{t}{C_{1}R_{1}}} \left( I_{1}(t)+C_{1} \frac{d}{dt}V_{2}(t) \right) \,dt \right) e^{-\frac{t}{C_{1}R_{1}}} } {C_{1}} $$
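If you want to verify this result yourself, one route (sketched here assuming a zero initial condition on \(V_{1}\)) is to rearrange the node equation into a standard first-order linear differential equation:

$$ \frac{d}{dt}V_{1}(t)+\frac{V_{1}(t)}{C_{1}R_{1}} = \frac{I_{1}(t)}{C_{1}}+\frac{d}{dt}V_{2}(t) $$

Multiplying both sides by the integrating factor \(e^{\frac{t}{C_{1}R_{1}}}\) and integrating from \(0\) to \(t\) then yields the expression above.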

We now know the voltage of this node (\(V_{1}\)) as a function of time. The only problem is that this function still depends on the voltage functions of the other nodes. That's why the next step is to obtain the voltage as a function of time for the remaining nodes in the same way we did for this one. Once we do that we will have a set of simultaneous equations and everything we need to solve for any variable we want.

Let's quickly go through and perform nodal analysis on the other 3 nodes, starting with node \(V_{2}\):

$$ C_{1}\frac{d}{dt}\left(V_{1}(t)-V_{2}(t)\right)+\frac{V_{3}(t)-V_{2}(t)}{R_{2}}=0 $$

Solve for \(V_{2}(t)\) to get:

$$ V_{2}(t)=\frac{\left(\int\limits_0^t e^{\frac{t}{C_{1}R_{2}}}\left(C_{1}R_{2}\frac{d}{dt}V_{1}(t)+V_{3}(t)\right)\,dt\right)e^{-\frac{t}{C_{1}R_{2}}}}{C_{1}R_{2}} $$

The last two nodes will be much simpler since they don't connect to the capacitor, so we don't need to deal with differential equations. For \(V_{3}\) we get:

$$ -\frac{V_{3}(t)-V_{2}(t)}{R_{2}}+\frac{0-V_{3}(t)}{R_{3}}=0 $$

Solve for \(V_{3}(t)\) and you get:

$$ V_{3}(t)=\frac{R_{3}V_{2}(t)}{R_{3}+R_{2}} $$
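As a quick check of the algebra, multiply the node equation through by \(R_{2}R_{3}\) and collect the \(V_{3}\) terms:

$$ -R_{3}\left(V_{3}(t)-V_{2}(t)\right)-R_{2}V_{3}(t)=0 \;\Rightarrow\; \left(R_{2}+R_{3}\right)V_{3}(t)=R_{3}V_{2}(t) $$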

For node \(V_{4}\) we already know its value is 0, so we can skip this node altogether.

We now have 3 simultaneous equations, one for each node of the circuit (except ground). In the next section we will use them to solve for the voltage at any point as a function of time.

Bringing it all together

The rest of the process should be pretty apparent by now if you're familiar with solving simultaneous equations. Since this isn't meant to be a math tutorial we won't go into the details of how to solve them.

There are basically two approaches to solving simultaneous equations: substitution, which is more tedious but doesn't require any advanced math, or linear algebra with matrices. Generally if you aren't familiar with linear algebra and are doing it by hand you're going to want to use substitution, which is the same process used in basic algebra. However if you are familiar with linear algebra and plan to use a calculator to get your answer then matrices are much easier to work with.

The first step is to assign some values to our components (normally this is done for you of course):

$$ C_{1}=\frac{1}{100000} F $$

$$ R_{1}=100 \Omega $$

$$ R_{2}=1000 \Omega $$

$$ R_{3}=10000 \Omega $$

Similarly we need to define the function governing the current source. We will use a simple sinusoidal current source:

$$ I_{1}(t)=\sin(t) $$

Since the scope of this tutorial doesn't include calculus we will show the quick way to do this in Maple. To solve multiple simultaneous equations in Maple simply use the dsolve command like this:

dsolve({<equation 1>,<equation 2>,...<equation n>},[<unknown 1>, <unknown 2>,...<unknown n>])
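For this example, the equations handed to dsolve are just the three node equations from the previous section with the component values substituted, and the unknowns are \(V_{1}(t)\), \(V_{2}(t)\), and \(V_{3}(t)\) (any initial conditions are omitted here for brevity):

$$ \sin(t)-\frac{V_{1}(t)}{100}-\frac{1}{100000}\frac{d}{dt}\left(V_{1}(t)-V_{2}(t)\right)=0 $$

$$ \frac{1}{100000}\frac{d}{dt}\left(V_{1}(t)-V_{2}(t)\right)+\frac{V_{3}(t)-V_{2}(t)}{1000}=0 $$

$$ -\frac{V_{3}(t)-V_{2}(t)}{1000}-\frac{V_{3}(t)}{10000}=0 $$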

Maple will now produce the voltage functions for the three non-ground nodes. When we plug in the values for the components and solve for the various nodes simultaneously we get the following equations:

$$ V_{1}(t)=100 \sin(t)-\frac{100}{111} e^{-\frac{1000}{111}t}\int\limits_0^t \cos(t)e^{\frac{1000}{111}t} \,dt $$

$$ V_{2}(t)=\frac{11000}{111} e^{-\frac{1000}{111}t}\int\limits_0^t \cos(t)e^{\frac{1000}{111}t} \,dt $$

$$ V_{3}(t)=\frac{10000}{111} e^{-\frac{1000}{111}t}\int\limits_0^t \cos(t)e^{\frac{1000}{111}t} \,dt $$

That's all there is to analyzing a circuit in the time domain!

Frequency Domain Circuit Analysis Tutorial

Some time back I started writing a book in an attempt to help explain how to do circuit analysis by hand. Originally it was going to include both time-domain analysis and frequency-domain analysis. The book is incomplete and I never got to cover the time domain, but it has several complete examples showing frequency-domain analysis on many common circuits. I've gotten some compliments over the years, particularly from ham radio operators, on how useful the book has been to them. I'd like to share here what I have so far in case anyone might find it useful.

As a side note, if anyone would like to revive the project and work with me on expanding and completing the book it would be most welcome. Please feel free to contact me. In the meantime here is the compiled book in its current form. I don't currently have the LaTeX source code published anywhere, but if there is any interest I will happily publish it and open-source it on GitHub.

    [pdf]

LaTeX PDF from Markdown

I write a lot of technical documents, and it is important to me that they look nice. In the past I have always resorted to using LaTeX directly. It's an ugly language and hard to maintain, but it has always been the best solution. Microsoft Word completely fails at complex documents, and is tedious and buggy if you want to do anything technical with it. If I have a simpler document without much math or special formatting I've always used Markdown.

Recently however I was able to get the best of both worlds. R-flavored Markdown adds a tremendous amount of power to plain Markdown. The most important feature is that it uses pandoc to compile your Markdown text through a LaTeX template, which is then used to produce a PDF. This means I get all the power of LaTeX with all the simplicity of Markdown. That alone made it a winner for me, but it can also compile into HTML and a number of other formats, which makes it even more useful.

It also has one other very sweet added bonus if you don't mind coding in R-script: you can put R-script code directly into your Markdown in order to produce charts or other output programmatically. R-script is a heavily math-oriented programming language, so it can be extremely powerful when writing math papers.

Getting Started

If you want to get started using R-flavored Markdown to compile into LaTeX and PDF, I have provided my templates on GitHub. I also have another GitHub repository where I provide a complete example of a Markdown document and a makefile that will compile it into a PDF. It's really simple to figure out: just clone the repository and start editing. Make sure, of course, that you have R-script installed along with all the prerequisites you might need. Finally I also have a third GitHub repository with even more templates and examples. Some of the examples go in depth on showing how you can use R-script inside the Markdown to generate plots and graphs or other data. Worth a look: there are templates for CVs/resumes, articles, PowerPoint presentations, syllabi, etc.
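If you'd rather not use the makefile, the same document can also be compiled straight from an R session. The snippet below is just a minimal sketch, assuming the rmarkdown package is installed and using a placeholder filename:

# Compile the R-flavored Markdown file into a PDF using the
# pdf_document settings declared in its front-matter.
library(rmarkdown)
render("example.Rmd")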

R-flavored Markdown files use the extension "Rmd" and use all the conventional Markdown syntax with some additions. All you need to do is add some front-matter to the beginning of the Markdown file, which provides some hints to the LaTeX template to do some formatting. The front-matter looks like the following, taken directly from the example I linked. The rest of the content of the file is just Markdown, very simple. You can see the full file with front-matter and Markdown as used in this example by clicking the link.

output:
  pdf_document:
    citation_package: natbib
    keep_tex: true
    fig_caption: true
    latex_engine: pdflatex
    template: templates/syncleus-white.tex

# Required properties
title: "Conditional Probabilities and Bayes Theorem"
abstract: "A simple explanation of conditional probabilities."
author:
  name: Jeffrey Phillips Freeman
  affiliation: Syncleus
  correspondence: jeffrey.freeman@syncleus.com

# Optional Properties
other-authors:
- name: John Doe
  affiliation: NoTech
tags: Statistics, Math
start-date: "April 6, 2017"
date: "`r format(Sys.time(), '%B %d, %Y')`"
draft: "`r format(Sys.time(), '%B %d, %Y')`"
revision: 1.1
tocdepth: 5
title-color: 0,0,90
# No indentation
indent: 0pt
parskip: 3mm
# Standard Anglo-american indentation
#indent: 15pt
#parskip: 0mm
archive: Pseudoscience Journal of America
geometry: left=2cm, right=2cm, top=2.25cm, bottom=2.25cm, headheight=13.6pt, letterpaper
bibliography: ./probabilities.bib
biblio-style: plainnat
biblio-title: References

Below you will see the output of the compiled PDF using this example; different templates can give different layouts and formatting.

    [pdf]

In the above examples you can see how the date properties are generated dynamically as the current date using R-script. This gives you just a small taste of how you can use R-script directly inside the Markdown files. Most of the properties above are self-explanatory, but let me provide a few descriptions for clarity all the same.

  • title - The title of the document.
  • abstract - A few sentences describing the document, used in the abstract box once rendered.
  • author - Information about the author
  • other-authors - Optional supplemental authors.
  • tags - optional list of keywords.
  • start-date - Date the author began working on the paper. This is used in the copyright information.
  • date - Date the paper was released; this is used in the copyright information.
  • draft - Optional text which will appear in a Draft watermark on the paper. If this property is left out then no draft watermark will be added.
  • revision - The revision version number of the paper, displayed in the upper right corner.
  • tocdepth - The maximum depth displayed in the table of contents for nested sections.
  • title-color - 3 comma separated integers from 0 to 255 representing the RGB color of the title text.
  • indent - The size of the paragraph indentation.
  • parskip - The vertical distance between paragraphs.
  • archive - The name of the journal or archive the paper is to be published in.
  • geometry - Optional parameters that overrides the latex geometry, useful for setting custom margins.
  • bibliography - location of the natbib or BibTeX bibliography file.
  • biblio-style - Optional override for the bibliography style.
  • biblio-title - The title used for the bibliography section.

That's really all there is to it. One last thing: I made sure the properties I picked for my template match those expected by a Middleman blog written in Markdown. This adds the convenience that the same source file can be compiled by either Middleman or R-script.

Hyperassociative Map Explanation

Introduction

Almost 8 years ago, on Aug 15, 2009, I invented a new game-changing algorithm called the Hyperassociative Map algorithm. It was released as part of the dANN v2.x library. The HAM algorithm, as it is often called, has since been used by countless developers and in hundreds of projects. HAM is a graph drawing algorithm that is similar to force-directed algorithms but in a class all its own. Unlike other force-directed algorithms, HAM does not have a sense of momentum or acceleration, which makes it debatable whether it can even still be called force-directed.

Below is a video demonstration of HAM in action. In this 3D visualization the vertices of the graph are displayed as grey spheres; the edges are not rendered. The graph's topology is relatively simple, containing 128 nodes in groups of 16, layered such that each group is fully connected to each adjacent group. This results in 256 edges between each pair of adjacent groups. Since the groups on either end are adjacent to only one other group, the 8 groups form 7 adjacent pairs, for a total of 7 × 256 = 1,792 edges. Despite this the graph aligns quickly and smoothly on a single 1 GHz processor, as demonstrated in the video. It starts with randomized locations for each vertex and then aligns. After each alignment the graph is reset with new random starting positions to show that the same alignment is achieved every time.


What makes HAM so special is that it retains many of the advantages that have made force-directed algorithms so popular while simultaneously addressing their shortcomings. Wikipedia describes the following advantages of force-directed algorithms, all of which hold true for the HAM algorithm.

  • Good-quality results - The layouts obtained usually have very good results based on the following criteria: uniform edge length, uniform vertex distribution and showing symmetry. This last criterion is among the most important ones and is hard to achieve with any other type of algorithm.
  • Flexibility - Force-directed algorithms can be easily adapted and extended to fulfill additional aesthetic criteria. This makes them the most versatile class of graph drawing algorithms. Examples of existing extensions include the ones for directed graphs, 3D graph drawing, cluster graph drawing, constrained graph drawing, and dynamic graph drawing.
  • Intuitive - Since they are based on physical analogies of common objects, like springs, the behavior of the algorithms is relatively easy to predict and understand. This is not the case with other types of graph-drawing algorithms.
  • Simplicity - Typical force-directed algorithms are simple and can be implemented in a few lines of code. Other classes of graph-drawing algorithms, like the ones for orthogonal layouts, are usually much more involved.
  • Interactivity - Another advantage of this class of algorithm is the interactive aspect. By drawing the intermediate stages of the graph, the user can follow how the graph evolves, seeing it unfold from a tangled mess into a good-looking configuration. In some interactive graph drawing tools, the user can pull one or more nodes out of their equilibrium state and watch them migrate back into position. This makes them a preferred choice for dynamic and online graph-drawing systems.
  • Strong theoretical foundations - While simple ad-hoc force-directed algorithms often appear in the literature and in practice (because they are relatively easy to understand), more reasoned approaches are starting to gain traction. Statisticians have been solving similar problems in multidimensional scaling (MDS) since the 1930s, and physicists also have a long history of working with related n-body problems - so extremely mature approaches exist. As an example, the stress majorization approach to metric MDS can be applied to graph drawing as described above. This has been proven to converge monotonically. Monotonic convergence, the property that the algorithm will at each iteration decrease the stress or cost of the layout, is important because it guarantees that the layout will eventually reach a local minimum and stop. Damping schedules cause the algorithm to stop, but cannot guarantee that a true local minimum is reached.

However the two disadvantages commonly attributed to force-directed algorithms, namely high running time and poor local minima, have been corrected in the HAM algorithm. As described earlier HAM is not a true force-directed algorithm because it lacks any sense of momentum. This was intentional, as it ensures there is no need for a damping schedule to eliminate oscillations that arise from the momentum of nodes. This has the added advantage that the algorithm does not prematurely come to rest at a local minimum. It also means fewer processing cycles are wasted on modeling oscillations and vibrations throughout the network.

These properties alone already make HAM a worthwhile algorithm for general study and real-world applications, however it is important to note that HAM was originally designed with a very specific use case in mind. Originally HAM was designed to facilitate the distribution of massive real-time graph processing networks: the sort of scenario where each vertex in a graph has to process some input data and produce some output data, and where each vertex is part of a large interdependent graph working on the data in real time. When distributing the tasks across a cluster of computers it is critical that highly interconnected vertices reside on the same computer in the cluster, and physically close to the computers housing the vertices that will ultimately receive the data, process it, and then carry it throughout the rest of the network. For this purpose HAM was created to model graphs such that each node in a compute cluster takes ownership of the tasks associated with vertices that are spatially close to each other according to HAM's drawing of the compute graph.

In order for HAM to be successful at its job it needed to exhibit a few very specific properties. For starters the interactivity property mentioned earlier was a must. HAM needed to be able to work with a graph that is constantly changing its topology, with new vertices able to be added, removed, or reconfigured in real time. This is ultimately what led the algorithm to be modeled in a way that made it similar to force-directed algorithms.

The other requirement is that the motion of the vertices had to be smooth, without any oscillations, as they align. This was critical because if a vertex oscillated while near the border that distinguishes one compute node from another, each crossing of that border would cause the associated task in the compute cluster to be transferred between nodes. Since this is an expensive operation, it is important that vertices don't jitter across these borders excessively as HAM aligns them.

Finally HAM needed to be able to be parallelized and segmented. That means that it needed to scale well for multi-threading but in such a way that each thread didn't need to be aware of the entire graph in order to process it; instead each thread had to be capable of computing the alignment of HAM on an isolated section of the graph. This is obviously critical because of the distributed nature of the compute graph, particularly if we want something capable of unbounded scaling. I basically wanted an algorithm that could be successful on even massively large graphs.

With almost 8 years of testing it has become evident that HAM is top in its class compared to many graph drawing algorithms. Despite this it is still scarcely understood by those studying graph drawing algorithms. For this reason I wanted to write this article to share some of its internal workings so others can adapt and play with the algorithm for their own projects.

The Algorithm

In this section I want to get into the internal workings of the Hyperassociative Map algorithm, HAM. Below is the pseudocode breakdown explaining the algorithm. Notice I use some math notation here for simplicity. Most notably I use vector notation, where all variables representing vectors have a small arrow above the variable and the norm, or magnitude, of the vector is represented by double vertical bars on either side, for example \(||\vec{p}||\). If you have trouble with vector notation or just want to see a concrete example, the full working Java code can be found at the end of this article for reference.

\begin{algorithm}
\caption{Hyperassociative Map}
\begin{algorithmic}
% equilibrium distance
\REQUIRE $\tilde{\chi} > 0$
%repulsion strength
\REQUIRE $\delta > 1$
% learning rate
\REQUIRE $\eta = 0.05$
% alignment threshold (determines when graph is aligned)
\REQUIRE $\beta = 0.005$
\PROCEDURE{HAM}{Vertex Set \textbf{as} $g$}
  \STATE \CALL{Randomize}{$g$}
  \WHILE{\CALL{AlignAll}{$g$} $> \beta \cdot \tilde{\chi}$}
    \STATE optionally recenter the graph
  \ENDWHILE
\ENDPROCEDURE
\PROCEDURE{AlignAll}{Vertex Set \textbf{as} $g$}
  \STATE $\zeta = 0$
  \FORALL{$v$ \textbf{in} $g$}
    \STATE $\vec{{\scriptsize \triangle} p}  =$ \CALL{Align}{$v$}
    \IF{$||\vec{{\scriptsize \triangle} p}|| > \zeta$}
      \STATE $\zeta = ||\vec{{\scriptsize \triangle} p}||$
    \ENDIF
    \STATE \CALL{Place}{$v$, \CALL{Position}{$v$} $+ \vec{{\scriptsize \triangle} p}$}
  \ENDFOR
  \RETURN $\zeta$
\ENDPROCEDURE
\PROCEDURE{Align}{Vertex \textbf{as} $v$}
  \STATE $\vec{p} =$ \CALL{Position}{$v$}
  \STATE $\vec{{\scriptsize \triangle} p} = 0$
  \FORALL{$m$ \textbf{in} \CALL{Neighbors}{$v$}}
    \STATE $\vec{q} =$ \CALL{Position}{$m$} - $\vec{p}$
    \STATE $\vec{{\scriptsize \triangle} p} = \vec{{\scriptsize \triangle} p} + \vec{q} \cdot \frac{(||\vec{q}|| - \tilde{\chi}) \cdot \eta}{||\vec{q}||}$
  \ENDFOR
  \FORALL{$m$ \textbf{in} \CALL{NotNeighbors}{$v$}}
    \STATE $\vec{q} =$ \CALL{Position}{$m$} - $\vec{p}$
    \STATE $\vec{c} = \vec{q} \cdot \frac{-\eta}{{||\vec{q}||}^{\delta + 1}}$
    \IF{$||\vec{c}|| > \tilde{\chi}$}
      \STATE $\vec{c} = \vec{c} \cdot \frac{\tilde{\chi}}{||\vec{c}||}$
    \ENDIF
    \STATE $\vec{{\scriptsize \triangle} p} = \vec{{\scriptsize \triangle} p} + \vec{c}$
  \ENDFOR
  \RETURN $\vec{{\scriptsize \triangle} p}$
\ENDPROCEDURE
\PROCEDURE{Randomize}{Vertex Set \textbf{as} $g$}
  \STATE randomize the position of every vertex in $g$
\ENDPROCEDURE
\PROCEDURE{Place}{Vertex \textbf{as} $v$, Vector \textbf{as} $\vec{p}$}
  \STATE sets the position of $v$ to $\vec{p}$
\ENDPROCEDURE
\PROCEDURE{Neighbors}{Vertex \textbf{as} $v$}
  \RETURN set of all vertices adjacent to $v$
\ENDPROCEDURE
\PROCEDURE{NotNeighbors}{Vertex \textbf{as} $v$}
  \STATE $s =$ set of all vertices not adjacent to $v$
  \STATE $w =$ set of all vertices whose position is close to that of $v$
  \RETURN $s \cap w$
\ENDPROCEDURE
\PROCEDURE{Position}{Vertex \textbf{as} $v$}
  \RETURN vector representing position of $v$
\ENDPROCEDURE
\end{algorithmic}
\end{algorithm}

Obviously the pseudocode packs a lot of information into only a few lines, so I'll try to explain some of the more important parts so you have an idea of what you're looking at.

Constants

First, let's consider the constants defined at the beginning. The variable \(\tilde{\chi}\) is called the Equilibrium Distance. It defines the ideal distance between two vertices connected by an edge. If a pair of vertices connected by a single edge are the only vertices present then they will align such that they are approximately as far apart as the value of \(\tilde{\chi}\). For simplicity here we have represented \(\tilde{\chi}\) as a single constant, but in practice it is also possible to assign a different value to this constant for each edge, resulting in a graph with different aesthetic qualities. This value must of course always be a positive number greater than \(0\). The default value, and the one used in the demonstration video, is \(1\).

The second constant is called the Repulsion Strength and is represented by the \(\delta\) variable. This constant determines how strong the repulsion is between two unconnected vertices, that is, two vertices not connected by an edge. Lower values for \(\delta\) result in a stronger repulsion and larger values result in a weaker repulsion. The default value is \(2\) and this is the value used in the demonstration video.

Next is the Learning Rate constant, \(\eta\). This is simply a scaling factor applied when the vertices are aligned to ensure the graph is aligned to each node with equivalent effect rather than over-fitting to the last node processed.

The last constant is the Alignment Threshold, \(\beta\), which represents the minimum movement threshold. Once the vertices move less than this value during an alignment cycle it is presumed the graph is sufficiently aligned and the loop ends.

Align Procedure

The algorithm is broken up into three major procedures. The procedure named HAM is the entry point for the algorithm, the procedure named Align calculates the incremental alignment for a single vertex, and the procedure named AlignAll calculates the alignment once for every vertex in the graph.

Let's first explain what is going on in the Align procedure. Here we have a single value being passed in, the vertex to be aligned, \(v\). On line 19 the current position of the vertex is obtained and represented as a euclidean vector, \(\vec{p}\). Next on line 20 a zero vector is initialized and represented as \(\vec{{\scriptsize \triangle} p}\). The vector \(\vec{{\scriptsize \triangle} p}\) is calculated throughout the procedure and represents the desired change to the current position of the vertex, \(\vec{p}\). In other words, once \(\vec{{\scriptsize \triangle} p}\) is calculated it can be added to the current position of the vertex to give the new position for the vertex. Just as when calculating forces, \(\vec{{\scriptsize \triangle} p}\) is the sum of all the composite influences acting on the vertex, so it represents the overall influence exerted on the vertex at any time.

When calculating \(\vec{{\scriptsize \triangle} p}\) the procedure must iterate through all the other vertices that have an effect on the vertex being aligned. There are two types of vertices, each with a different effect: neighbors and non-neighbors. Neighbors are all the vertices connected directly to the current vertex by an edge; non-neighbors are all the other vertices, those not connected by an edge.

First the influence from the neighbor vertices is calculated on lines 21 - 24. The influence two neighbor vertices have on each other depends on how far apart they are. If they are closer than the Equilibrium Distance, \(\tilde{\chi}\), then the effect is repulsive. If they are farther apart than \(\tilde{\chi}\) then the effect is attractive. The calculation for this is represented by line 23. It takes the vector representing the difference between the position of the vertex being aligned and its neighbor and scales its magnitude down to \((||\vec{q}|| - \tilde{\chi}) \cdot \eta\). To look at it another way, if the expression were just \(||\vec{q}|| - \tilde{\chi}\) then the new position of the vertex would land exactly at the Equilibrium Distance, but instead it is scaled to a fraction of this by \(\eta\), which adjusts how quickly the vertex approaches its equilibrium point.
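To make this concrete with the default values \(\tilde{\chi} = 1\) and \(\eta = 0.05\): a neighbor sitting at distance \(||\vec{q}|| = 2\) contributes a pull toward that neighbor of magnitude

$$ (||\vec{q}|| - \tilde{\chi}) \cdot \eta = (2 - 1) \cdot 0.05 = 0.05 $$

while a neighbor at distance \(0.5\) contributes \((0.5 - 1) \cdot 0.05 = -0.025\), that is, a push of magnitude \(0.025\) away from it.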

Next the influence between non-neighbor vertices is calculated on lines 25 - 32. Non-neighbor vertices, that is vertices not connected by an edge, always exhibit a purely repulsive influence. Line 27 calculates this using a similar technique as before. That is, the difference between the positions of the two vertices is represented by \(\vec{q}\) and then its magnitude is scaled. Of course it's also negative to indicate that the force is repulsive. The equation just seems confusing in its simplified and compacted form. Initially it was derived by calculating the new magnitude of \(\vec{q}\) as the following.

$$ \frac{-1}{{\mid\mid\vec{q}\mid\mid}^{\delta}} \cdot \eta $$

This makes a lot more sense, as we know that in nature repulsive forces fall off with the square of the distance, which is exactly what we get with the default \(\delta = 2\). So this accurately represents a repulsive influence that diminishes with distance. Once we actually apply that magnitude to the vector and simplify we arrive at our final equation.

$$ \vec{q} \cdot \frac{\frac{-1}{{\mid\mid\vec{q}\mid\mid}^{\delta}} \cdot \eta}{\mid\mid\vec{q}\mid\mid} \Rightarrow \vec{q} \cdot \frac{-\eta}{{||\vec{q}||}^{\delta + 1}}$$

The only caveat to this is seen in lines 28 to 30, where the distance moved as a result of the repulsive influence is checked. If it is greater than the Equilibrium Distance, \(\tilde{\chi}\), then its magnitude is scaled back to be \(\tilde{\chi}\). This is done because at very close distances the repulsive influence grows extremely quickly and becomes overwhelming; we want to ensure the majority of this influence works at a distance, allowing the graph to spread apart while still letting the other influences dominate the graph.
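As a concrete illustration of why this cap matters, take the defaults \(\delta = 2\), \(\eta = 0.05\) and \(\tilde{\chi} = 1\), and a non-neighbor at distance \(||\vec{q}|| = 0.1\). The uncapped repulsive step would have magnitude

$$ \frac{\eta}{||\vec{q}||^{\delta}} = \frac{0.05}{0.1^{2}} = 5 $$

which is five times the Equilibrium Distance, so lines 28 - 30 scale it back down to \(\tilde{\chi} = 1\).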

At this point the computed change in position for the vertex is simply returned at line 33 for further processing by the AlignAll procedure.

AlignAll Procedure

The AlignAll procedure is extremely straightforward. It is passed the set of all vertices in the graph as \(g\) and iterates over the set, aligning them one at a time. Each vertex gets aligned once per call to the procedure, which means the procedure will usually need to be called multiple times.

On line 8 the Maximum Vertex Movement variable, represented as \(\zeta\), is initialized to \(0\). This variable represents the greatest distance any vertex moved during the alignment; after being calculated its value is returned on line 15. The Maximum Vertex Movement is important for determining when the HAM algorithm has finished processing.

Other than that this procedure doesn't do anything special, the vertex alignment vector is calculated on line 10 and the new position for the vertex is set on line 14.

HAM Procedure

The HAM procedure is another rather straightforward procedure to explain. It starts by assigning initial random coordinates to each vertex in the graph. After that it continually loops, calling AlignAll until the graph is sufficiently aligned.

On line 3 the AlignAll procedure is called in a loop until the Max Vertex Movement returned is less than \(\beta \cdot \tilde{\chi}\). This is just the Alignment Threshold scaled by the Equilibrium Distance constant; with the defaults of \(\beta = 0.005\) and \(\tilde{\chi} = 1\) the loop ends once no vertex moves more than \(0.005\) in a cycle. The Alignment Threshold is sufficiently small that movements below this value can be considered negligible and the alignment can end.

As an optional step after each alignment iteration it may be desired to translate the entire graph so it is centered around the zero vector. There is a small amount of drift as the alignment of the graph is calculated and by doing this it ensures the graph remains in the center of the view when rendered. The drift is usually negligible however so this step is entirely optional. In the full java example below the logic for centering the graph is included.

Appendix: Full Java Code

public class HyperassociativeMap<G extends Graph<N, ?>, N> implements 
        GraphDrawer<G, N> {
    private static final double REPULSIVE_WEAKNESS = 2.0;
    private static final double DEFAULT_LEARNING_RATE = 0.05;
    private static final double EQUILIBRIUM_DISTANCE = 1.0;
    private static final double EQUILIBRIUM_ALIGNMENT_FACTOR = 0.005;

    private final G graph;
    private final int dimensions;
    private Map<N, Vector> coordinates = Collections.synchronizedMap(new 
            HashMap<N, Vector>());
    private static final Random RANDOM = new Random();
    private final boolean useWeights;
    private double equilibriumDistance;
    private double learningRate = DEFAULT_LEARNING_RATE;
    private double maxMovement = 0.0;

    public HyperassociativeMap(final G graph, final int dimensions, final 
    double equilibriumDistance, final boolean useWeights) {
        if (graph == null)
            throw new IllegalArgumentException("Graph can not be null");
        if (dimensions <= 0)
            throw new IllegalArgumentException("dimensions must be 1 or more");

        this.graph = graph;
        this.dimensions = dimensions;
        this.equilibriumDistance = equilibriumDistance;
        this.useWeights = useWeights;

        // refresh all nodes
        for (final N node : this.graph.getNodes()) {
            this.coordinates.put(node, randomCoordinates(this.dimensions));
        }
    }

    @Override
    public G getGraph() {
        return graph;
    }

    public double getEquilibriumDistance() {
        return equilibriumDistance;
    }

    public void setEquilibriumDistance(final double equilibriumDistance) {
        this.equilibriumDistance = equilibriumDistance;
    }

    public void resetLearning() {
        maxMovement = 0.0;
    }

    @Override
    public void reset() {
        resetLearning();
        // randomize all nodes
        for (final N node : coordinates.keySet()) {
            coordinates.put(node, randomCoordinates(dimensions));
        }
    }

    @Override
    public boolean isAlignable() {
        return true;
    }

    @Override
    public boolean isAligned() {
        return isAlignable()
                && (maxMovement < (EQUILIBRIUM_ALIGNMENT_FACTOR * 
                equilibriumDistance))
                && (maxMovement > 0.0);
    }

    @Override
    public void align() {
        // refresh all nodes
        if (!coordinates.keySet().equals(graph.getNodes())) {
            final Map<N, Vector> newCoordinates = new HashMap<N, Vector>();
            for (final N node : graph.getNodes()) {
                if (coordinates.containsKey(node)) {
                    newCoordinates.put(node, coordinates.get(node));
                } else {
                    newCoordinates.put(node, randomCoordinates(dimensions));
                }
            }
            coordinates = Collections.synchronizedMap(newCoordinates);
        }

        maxMovement = 0.0;
        Vector center;

        center = processLocally();

        // divide each coordinate of the sum of all the points by the number of
        // nodes in order to calculate the average point, or center of all the
        // points
        for (int dimensionIndex = 1; dimensionIndex <= dimensions; 
             dimensionIndex++) {
            center = center.setCoordinate(center.getCoordinate
                    (dimensionIndex) / graph.getNodes().size(), dimensionIndex);
        }

        recenterNodes(center);
    }

    @Override
    public int getDimensions() {
        return dimensions;
    }

    @Override
    public Map<N, Vector> getCoordinates() {
        return Collections.unmodifiableMap(coordinates);
    }

    private void recenterNodes(final Vector center) {
        for (final N node : graph.getNodes()) {
            coordinates.put(node, coordinates.get(node).calculateRelativeTo
                    (center));
        }
    }

    public boolean isUsingWeights() {
        return useWeights;
    }

    Map<N, Double> getNeighbors(final N nodeToQuery) {
        final Map<N, Double> neighbors = new HashMap<N, Double>();
        for (final TraversableCloud<N> neighborEdge : graph.getAdjacentEdges
                (nodeToQuery)) {
            final Double currentWeight = (((neighborEdge instanceof Weighted)
                    && useWeights) ? ((Weighted) neighborEdge).getWeight() : 
                    equilibriumDistance);
            for (final N neighbor : neighborEdge.getNodes()) {
                if (!neighbor.equals(nodeToQuery)) {
                    neighbors.put(neighbor, currentWeight);
                }
            }
        }
        return neighbors;
    }

    private Vector align(final N nodeToAlign) {
        // calculate equilibrium with neighbors
        final Vector location = coordinates.get(nodeToAlign);
        final Map<N, Double> neighbors = getNeighbors(nodeToAlign);

        Vector compositeVector = new Vector(location.getDimensions());
        // align with neighbours
        for (final Entry<N, Double> neighborEntry : neighbors.entrySet()) {
            final N neighbor = neighborEntry.getKey();
            final double associationEquilibriumDistance = neighborEntry
                    .getValue();

            Vector neighborVector = coordinates.get(neighbor)
                    .calculateRelativeTo(location);
            double newDistance = Math.abs(neighborVector.getDistance()) - 
                    associationEquilibriumDistance;
            newDistance *= learningRate;
            neighborVector = neighborVector.setDistance(newDistance);
            compositeVector = compositeVector.add(neighborVector);
        }
        // calculate repulsion with all non-neighbors
        for (final N node : graph.getNodes()) {
            if ((!neighbors.containsKey(node)) && (node != nodeToAlign)
                    && (!graph.getAdjacentNodes(node).contains(nodeToAlign))) {
                Vector nodeVector = coordinates.get(node).calculateRelativeTo
                        (location);
                double newDistance = -1.0 / Math.pow
                        (nodeVector.getDistance(), REPULSIVE_WEAKNESS);
                if (Math.abs(newDistance) > Math.abs(equilibriumDistance)) {
                    newDistance = Math.copySign(equilibriumDistance, 
                            newDistance);
                }
                newDistance *= learningRate;
                nodeVector = nodeVector.setDistance(newDistance);
                compositeVector = compositeVector.add(nodeVector);
            }
        }
        Vector newLocation = location.add(compositeVector);
        final Vector oldLocation = coordinates.get(nodeToAlign);
        double moveDistance = Math.abs(newLocation.calculateRelativeTo
                (oldLocation).getDistance());

        if (moveDistance > maxMovement) {
            maxMovement = moveDistance;
        }

        coordinates.put(nodeToAlign, newLocation);
        return newLocation;
    }
    public static Vector randomCoordinates(final int dimensions) {
        final double[] randomCoordinates = new double[dimensions];
        for (int randomCoordinatesIndex = 0; randomCoordinatesIndex < 
                dimensions; randomCoordinatesIndex++) {
            randomCoordinates[randomCoordinatesIndex] = (RANDOM.nextDouble() 
                    * 2.0) - 1.0;
        }

        return new Vector(randomCoordinates);
    }

    private Vector processLocally() {
        Vector pointSum = new Vector(dimensions);
        for (final N node : graph.getNodes()) {
            final Vector newPoint = align(node);
            for (int dimensionIndex = 1; dimensionIndex <= dimensions; 
                 dimensionIndex++) {
                pointSum = pointSum.setCoordinate(pointSum.getCoordinate
                        (dimensionIndex) + newPoint.getCoordinate
                        (dimensionIndex), dimensionIndex);
            }
        }
        return pointSum;
    }
}