Last month, I presented as part of a tutorial session at, and had a paper accepted into, the 38th International Symposium on Computer Architecture (ISCA) in San Jose, CA. This was my sixth conference in computer architecture, and I have been observing a consistent problem throughout the academic research community: a lack of market research. In this post, I describe the current state of market research within the academic computer architecture community, and in a follow-on post, I will suggest ways that we can focus the community on identifying and tackling some of the biggest customer demands of next-next-generation computing systems.

Processors are, by far, the most complex devices manufactured by humans. As a PhD student studying computer architecture, I’ve sifted through mountains of research papers, understanding the opportunities and challenges of mapping computation onto these massive sets of transistors. Optimizing and advancing computing technology is hard. Period. However, identifying and targeting consumers of future computing systems is just as hard… and just as important.

So what challenges is the computer architecture community facing? Right now, the “big companies” (in no particular order: Intel, AMD, IBM, ARM, and NVIDIA) are struggling to overcome a few hard limits on scaling system performance:

  • It’s hard to increase the frequency and performance of current processors without consuming too much power or generating too much heat
  • It’s hard to write high-performance code for large multicore processors
  • It’s hard to communicate data at a high enough rate to meet real-time constraints, such as processing graphics for your streaming video
  • Finally, it’s going to be hard to scale transistors much beyond their already tiny sizes.

All of the big companies face all of these scaling difficulties, which conspire together to slow the progress of computing.

Exceptions to Scaling Difficulties?

Despite these scaling difficulties, numerous companies are building phenomenal products and services that leverage current systems. As an example, consider Watson, a supercomputer designed by IBM and a team of academics. Watson competed against and defeated Jeopardy! champions Ken Jennings and Brad Rutter. The technologies behind this feat promise revolutionary solutions for future call centers, technical support, and medical IT, exciting steps in automation. Google, too, has been pushing the envelope, designing datacenters capable of returning near-real-time query results as you type your search while simultaneously servicing tens or hundreds of thousands of other searches.

The New and Final Jeopardy! Champion?

So what is it that makes IBM’s Watson supercomputer and Google’s datacenters so compelling? Simply put, these products are targeted at solving advanced and complex consumer needs. The designers of these systems were unified around the end goal of improving the life of each and every consumer. For IBM’s Watson, the potential set of applications and consumers is broad and diverse. For Google’s datacenters, hundreds of millions of searches are executed by consumers every day.

My startup sense is tingling… I’ve seen this in the entrepreneurship world: established companies must maintain a fierce focus on their consumers’ future needs so that new, disruptive technologies and startups don’t eat their lunches (a great reference for this is The Innovator’s Dilemma). Conversely, in order for a startup (or even a small company) to eat its competitors’ lunches, it needs keen insight into its consumers’ demands.

Given a fierce consumer focus, product designers can sidestep or delay tough design constraints as long as they know those constraints aren’t vitally important to the consumer. For example, Google knows that its consumers are just as happy even if search rankings aren’t always ordered exactly the same way. In the design of its datacenters, this means that Google can start returning query responses to your browser as soon as it starts generating them. This is how we have real-time, as-you-type search results. Good stuff!
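To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical names; not Google’s actual architecture) of how relaxing a strict-ordering constraint lets a search frontend stream answers as each index shard responds, instead of blocking on a final global sort:

```python
def query_shard(shard_id, query):
    """Stand-in for a network call to one index shard.

    Each shard returns its own locally ranked hits; real systems would
    fan the query out over the network.
    """
    return [f"shard{shard_id}-hit{i}" for i in range(2)]

def stream_results(query, num_shards=3):
    """Yield hits as soon as each shard answers (relaxed global ordering).

    Because the consumer doesn't require one exact global ranking, the
    browser can start rendering these immediately rather than waiting
    for every shard to respond and a final merge/sort to finish.
    """
    for shard_id in range(num_shards):
        for hit in query_shard(shard_id, query):
            yield hit

# First hits are available before later shards have even been queried.
results = list(stream_results("isca 2011"))
print(results[0])  # prints "shard0-hit0"
```

The design choice being illustrated is the trade itself: by dropping a constraint that consumers don’t value (bit-identical result ordering), the system gains a property they value highly (perceived responsiveness).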

The PhD Student:

Now, let’s circle back to the core academic researcher: the PhD student. Unfortunately for this young researcher, it’s nearly impossible to know the exact intended users of their research. The big companies know their consumers, but they guard this competitive knowledge closely; it seems to be prized even above the engineers who work in the industry. While these companies coordinate with academic research groups through faculty members at top-flight research institutions, the PhD student hears little more than broad strokes about future consumer needs. Amplifying this problem is the fact that PhD students aim to identify and tackle technical challenges more than two processor generations into the future. What does a consumer want from their devices eight years from now? Without talking to consumers, how can a PhD student know? This problem is all too similar to the challenge faced by big companies and startups alike: we need to predict the future needs of our consumers.

What’s the effect of this thin flow of knowledge on academic researchers? When research is not properly motivated by consumer needs, the community makes slow progress: PhD students rarely know the market demands they are working to meet, so their research projects forward from existing research solutions. This often results in their work being reviewed as “incremental” or “overly complex”, sadly two of the most common labels in research paper reviews. Instead, researchers should always be aiming for, or supporting, “compelling” progress.

Even when PhD students do have some depth of market understanding, it’s extremely difficult to know the relative importance of market constraints. Take, for example, my academic colleague Marc de Kruijf. Marc submitted a research paper about simplifying a processor core to save power. On first submission, the technique was applied to a high-performance processor core. The reviews came back warm, but not excited enough to accept the paper. Marc then pivoted the research, applying his technique to low-power mobile cores, where the savings are greater relative to the overall complexity and power cost of the processor. With the current excitement surrounding mobile devices, he hopes that reviewers will be keener on the technique.

All of this evidence suggests that the computer architecture community needs to take a more unified approach to understanding and targeting the end consumer needs of future computing systems. I see a few potential solutions to these problems in the academic computer architecture community. Tune in next week for my next post, in which I suggest ways that we, as a research community, can move toward more consumer-centric research thrusts.

* Special thanks to Marc de Kruijf for his unique perspective on this topic.

— typed on my iPhone 4 at SFO airport