timmattson.com | Life = Home + Work + Kayaking + Family
Tim Mattson earned a Ph.D. in chemistry for his work on quantum molecular scattering. This was followed by a post-doc at Caltech, where he ported his molecular scattering software to the Caltech/JPL hypercubes. Since then, he has held a number of commercial and academic positions, with computational science on high-performance computers as the common thread.
Dr. Mattson joined Intel in 1993 to work on a variety of parallel computing problems. This included benchmarking, system performance modeling, and applications research. He was a senior scientist on Intel's ASCI teraFLOPS project: a project that resulted in the first computer to run MPLINPACK in excess of one teraFLOPS.
Currently, he is working in Intel's Computational Software Laboratory. His goal is to develop technologies that will make parallel computing more accessible to the general programmer. This includes OpenMP, cluster computing, and peer to peer computing.
A long, informal biography
In graduate school, I couldn't decide whether to be a chemist, a physicist, a mathematician, or a computer scientist. Enjoying chaos, I chose all four by getting a Ph.D. in chemistry (U.C. Santa Cruz, 1985) for solving a physics problem (quantum scattering) with new numerical methods (approximate potential methods for coupled systems of radial Schroedinger equations) on the primitive computers available in those days (a VAX 750).
My confusion deepened during a Caltech post-doc in Geoffrey Fox's Concurrent Computation project where I took my differential equation solvers and ported them to the Caltech/JPL hypercubes. These machines were painful to use, but being a true masochist, I fell in love with parallel computers and have been using them ever since.
The details of my career are boring, but basically involve industrial experience in radar signal processing, seismic signal processing, numerical analysis, computational chemistry and of course, the use of parallel computers. I emphasize "the use of" parallel computers. I have always measured the value of a computer by how useful it is. Eventually, I ended up at Yale (1991) where my research took me into the depths of many different parallel programming environments (p4, PVM, TCGMSG, Linda, Parlog, CPS, and many others) on many different parallel computers including clusters of workstations.
In 1993, I left Yale and joined Intel's Scalable Systems Division (SSD). That was an exciting time at Intel SSD. The Paragon supercomputer was new, and a huge amount of work was needed to understand how to use this big machine (with the best front panel lights in the industry). My job was to get inside users' heads and make sure our products really worked for their problems. This resulted in a collection of performance models to help guide the design of future parallel computers.
The pinnacle of my work at Intel SSD was the ASCI Option Red supercomputer -- the world's first computer to run the MPLINPACK benchmark in excess of one teraFLOPS (1.34 TFLOPS, to be exact). I was a senior scientist on this project and was involved at every stage: I helped write the proposals, verify the design, and debug the system. I was responsible for communicating technical issues with the customers and had to make sure the initial applications scaled effectively on the machine. When we delivered the system to the customer, I left Intel SSD and moved to Intel's long-range research laboratory -- the Microcomputer Research Lab (MRL).
At MRL, my job was to solve once and for all the software crisis in parallel computing. Even with almost 20 years of research, the parallel computing community hasn't attracted more than a minuscule fraction of programmers. Clearly, we are doing something wrong. My hypothesis is that we can solve this problem, but only if we work from the algorithm down to the hardware -- not the traditional hardware-first mentality. This work in part led to OpenMP (OK, a very small part -- the good people at SGI and KAI played a much larger role in getting OpenMP started).
Once OpenMP got off the ground, I moved to Intel's software products group (the Microcomputer Software Laboratory, or MSL) to support the transition of OpenMP from research to product. With that underway, I've moved on to other applications of concurrency to computing at Intel (clusters, distributed computing, and peer to peer computing).
My research will make parallel computing so easy that the general programmer will use it routinely. People have been trying to pull this off for a few decades, but no one has come close to succeeding. I intend to buck that trend.
My approach is to solve the parallel programming problem at the algorithm design phase by creating a language of design patterns for parallel application programmers. Once that is in place, we will generate an object-oriented framework from the pattern language. This combination of a pattern language, an object-oriented framework, and quality parallel programming environments will solve the problem once and for all. For more information, take a look at my project's web page: www.cise.ufl.edu/research/ParallelPatterns/.
I am also actively involved with the OpenMP shared-memory programming API. I've worked on all the OpenMP specifications "out there" today, and I am CEO of the corporation that "owns" OpenMP (the OpenMP Architecture Review Board).
I am also very active in cluster computing. I am one of the leaders of the Open Cluster Group. We are working to create self-contained packages that include everything you need to easily build and use a cluster for high performance computing.
Finally, since distributed computing is just one extreme of cluster computing, and peer to peer computing is distributed computing carried to extremes, I am involved in fostering initiatives to make different peer to peer systems interoperable. I conduct this work as a member of the steering committee for the Peer-to-Peer Working Group.