Cade M: I'd like to introduce two giants of the tech industry. Marc Andreessen, who is the co-founder of the Mosaic web browser and Netscape. He's now with the Silicon Valley venture capital firm Andreessen Horowitz.
Also, Andy Bechtolsheim, who is one of the co-founders of Sun and is now with a networking firm called Arista.
Marc and I were talking backstage, and there's a certain irony here. Marc is famous in recent years for saying that software is eating the world, and yet here he is speaking at a hardware conference. But, in fact, that makes perfect sense, does it not?
Marc A: I think so, for two reasons. One is, I think that a lot of what is being discussed here and around this project was made possible by software, in particular innovation around open source. The sledgehammer effect that Linux has had on the market makes possible a lot of what we're all talking about, of what's happening with hardware now.
The other thing that I think is so incredibly important is the implication of open source plus open hardware for radical reductions, as I know Mark Zuckerberg talked about yesterday, radical reductions in price and cost for building systems, which then makes it possible for software to do many more things that it wasn't able to do before.
When I think about software eating the world, I think about software becoming much more important in fundamental industries: education, health care, financial services, media, many, many other areas of business. Being able to have very inexpensive hardware at high volume makes it possible to build a lot more software. Software becomes much more important in the world.
Cade M: To take that cycle even further, driving down the cost of hardware feeds this new breed of cloud services, which allows me, if I'm a startup, to get off the ground so much more easily, right? I don't have to buy hardware. I don't have to buy servers and storage gear and networking gear. You think that the future is these cloud services, and that the data center, so to speak, is dead, at least for the smaller player.
Marc A: We see it every day. We see two things every day. One is, and it's become a cliche but it's absolutely true, we see startups come in doing some fundamental new software innovation, again, often in very important areas like education or health care. It's two or three or four kids with their laptops, and their laptops are the entire capex budget for the company for the first two years. Like, that's it. They're deploying entirely onto cloud services.
By the way, their business also is running entirely on the cloud. Everything is. All the new companies are running entirely on systems like Salesforce.com, Box.net, Google Apps. We just backed a new company, Zenefits, which is a comprehensive cloud-based service for HR benefits.
These new companies can basically run entirely on the cloud, both for their own purposes and for all the business apps that they need. As a consequence, the amount of money required to start a new software business is tiny in comparison to any historical precedent. $500,000 can give a company two years of runway to develop its product.
We see it on the other side, though, with the companies that are scaling. Mark talked about this yesterday. Every time I go to a Facebook board meeting, it's like an out-of-body experience, because I look at the capex budget of Facebook and in one sense it's a lot of money, because Facebook is buying a lot of hardware.
On the other hand, had Facebook existed in 1999, the capex budget would be somewhere between 50 and 100 times bigger than it is. Facebook would be spending something like $100 billion a year on capital equipment, which, of course, is impossible, which means, of course, Facebook could not have existed in 1999.
Then, I think, when you project forward and you look out ten years, you basically start to ask this really interesting question which is if services like Facebook and Google are made possible today because of much lower cost, high volume hardware and then you project the curves for hardware over the next ten years, the services that are going to be made possible ten years from now are going to be mind-blowing in their sophistication and in their power.
They're going to make things that we think today are very powerful and compelling look completely trivial in comparison. We're entering, I think, an entirely new era for what software is going to be able to do as a direct result of what's happening in hardware.
Cade M: How about you, Andy? You sell hardware into the data center. Are you of the opinion that the data center is dead?
Andy B: Even Sun used to sell hardware into the data center. Historically, most computers were bought by businesses one at a time. That was a great business for computer companies until about, I don't know, five or ten years ago, let's just say.
These days the computer is the data center. It's no longer the conventional computer; it's the whole thing. What the traditional IT industry missed is that there's a completely different optimization required to build things cost-effectively for the very large-scale data centers than if you sell one at a time. It doesn't matter whether it's Dell or HP or IBM, and IBM sold their hardware business.
At scale, you can optimize for power efficiency and connectivity fabrics in a very different way. At the same time, the spending has actually shifted. If you look at where the growth in the market is for hardware vendors, the traditional IT business is not growing at all. It's basically flat to declining.
All the growth is in the cloud, the cloud capex budgets for Facebook, Google, and so on. If you take the six largest public cloud companies, their spending has been growing 35% year over year. If one extrapolates a couple of years into the future, somebody actually predicted that cloud capex spending by 2020, I believe, will exceed that of all the North American carriers combined.
All the stuff that Verizon and AT&T are putting into the ground or into the wireless stations is actually smaller than what these six dot-coms are going to spend just on enabling software. All the action these days is basically in the cloud, and what my most recent company is doing is focusing on network switching for that segment of the market, which is what we call cloud networking.
Again, even there we had to optimize the equipment to have the capacity and the power efficiency and the cost performance to make sense, where traditional networking equipment was actually off by at least an order of magnitude.
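[Editor's note: Andy's 35% year-over-year figure compounds quickly. A minimal sketch of the arithmetic, using a hypothetical starting figure; the $20B baseline is an assumption for illustration, not from the talk:]

```python
# Sketch of how 35% year-over-year growth compounds.
# The $20B starting capex is a hypothetical figure, not from the talk.
base_capex = 20.0   # assumed cloud capex in $B at the start
growth = 1.35       # 35% year-over-year growth

capex = base_capex
for year in range(1, 8):
    capex *= growth
    print(f"year {year}: ${capex:.1f}B")

# 1.35**7 is roughly 8.2, so a capex line growing at this rate is about
# 8x larger after seven years, which is how cloud spending can overtake
# a much larger but flat carrier capex line by 2020.
print(f"multiplier after 7 years: {1.35 ** 7:.1f}x")
```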
Cade M: What we're also seeing in this networking world is the separation of the hardware and the software. Much as we saw with servers, we're now seeing bare-metal switches that you can put any software you like on. How do you see that world changing the way we do things?
Andy B: Is that the hot seat question here?
Cade M: It is.
Andy B: In networking, I think it makes a lot of sense to standardize network hardware. Basically, companies like Broadcom and Intel have built chipsets that most people are using. There are still a few companies that are building their own chips, but that's a legacy model. If you have a standard chipset, it looks more or less like a standard, say, Intel server, and the hardware isn't actually that different.
The thing about networking is you need a software stack that actually works. Building the software stack is actually a lot harder than the hardware. There's a saying that hardware is easy, software is hard. It's really true. Like, 90% of our engineers are working on the software stack. It's 400 people slaving away on networking software, and a very, very small team working on the hardware.
The hardware is not the difficult part, but it does help to standardize it, that's for sure.
Cade M: Marc, where do you see that world going? Will we see a continued separation there? What might that lead to down the road, in the networking world?
Marc A: Andy has forgotten more about networking than I have learned in my entire life, so I'm going to be very cautious about how far out on a limb I go on this one. I would say we believe deeply in the same structural change that happened in servers now happening in networking.
We have two companies in particular that we've worked with that we think are right at the leading edge of this. Nicira has now become a brand name as a consequence of having been bought by VMware, but we think Nicira is fundamental to this new generation of software-based networking, the so-called software-defined networking.
We think the great rollout of software-defined networking is just starting, and it's going to have enormous implications for the networking industry. The other company we have launched, actually on Monday, and which is here at this conference, is Cumulus Networks, which, of course, is bringing Linux onto the switch. It launched to great excitement this week, including a big partnership with Dell.
We basically see the same thing. If you look at what happened to servers, to Andy's point, if you look at what happened with Linux and with chip standardization around Intel CPUs, we think the same general process is starting to roll through the networking business.
Cade M: First servers, now networking, and then I guess the next step is storage.
Marc A: Yes. We think storage is the next step.
Andy B: Before you go-
Marc A: Yes, just go ahead.
Andy B: There's actually one topic in networking that is a good example of how important Open Compute is and will be in the future. The traditional standards group that has standardized networking is the IEEE. There are like fifty or a hundred people with grey beards traveling around the world, meeting every six weeks to decide what the future of Ethernet is.
The problem is they have a voting rule where unless 70% of the people in the meeting vote for something, there's no change. They have been trying to standardize on cheaper optics now for the last two-plus years. They had dozens of meetings, and in the end they could not carry a vote, because nobody had the 70% majority for any one of the proposals presented.
This is an example where a standards group has failed its own objective, which is to deliver the next standard, to make things cheaper. With Open Compute, this would never have happened, because if somebody had a better idea for how to make cheaper optics, they could just publish the specs and say, here it is, it's available, and it's a standard by virtue of being published, instead of having this gate where 70 out of 100 people in a meeting have to agree to make a change.
I like the Open Compute approach to standardization because it's a very inclusive approach. Microsoft has a better idea for yet another rack server? That's great, let's add it to the list. I think one thing we've learned here is that one doesn't have to narrow it down to one particular proposal. There's a lot of innovation potential on the packaging side, the silicon side, the subsystems, the motherboards, the disk drives. I mean, there's a lot of interesting [inaudible 11:43] here that all needs a venue to get to the public, and except for Open Compute there never was such a venue.
I mean, Intel had specs on motherboard sizes like ATX cards, and the power supply got standardized, but that was it. The industry never had a forum that allowed open hardware standardization until Open Compute. This is a major improvement in how the industry can collaborate and innovate going forward.
Cade M: Right. That's true.
Marc A: In fact, it's much closer to how the Internet got built.
Andy B: People have to be able to get together and say, let's do something, and if it makes sense and it's good for Facebook or Google or Microsoft, that's enough. There's no barrier to innovation like we have seen in some of these standards groups.
Cade M: What we've seen time and again is that Open Compute, as an open source project, then feeds the commercial world. People take these ideas and then take them to all sorts of other customers. You think that will happen with storage as well, I imagine.
Marc A: Yes. We think storage is the next wave, the next wave to get transformed the way that servers have been and the way that networking is being transformed now. I'll bring up a couple of our companies. We have a company called Maxta, which we basically think of as the Nicira of storage, software-defined storage in the same way that Nicira was software-defined networking.
Then Coho Data, which is to bare-metal storage hardware basically what Cumulus is to networking hardware. These are two companies; there will obviously be a whole bunch of others, but we think that exact same kind of innovation is going to happen.
Then, if I keep going, we think there's a fourth cycle that happens after servers, networking, and storage get through it, which previous speakers were alluding to, which is what I think is happening in the long run: the grand unification of the smartphone supply chain and the data center supply chain.
I'm a radical on this. I think that data centers in ten, fifteen, twenty years are going to be running many of the same components that run in smartphones. I think the ARM-based processor is case study number one and flash storage is case study number two.
I think those are coming very quickly, and they will sweep back through servers and ultimately networking and storage.
Cade M: That's interesting. Even here at a conference dedicated to change, you've got a lot of people who will say the opposite, that ARM is not going to happen. You could talk to fifty people who will say one thing, and fifty will say another. Where do you come down? Is this going to happen?
Andy B: ARM's 32-bit architecture was limited in terms of memory addressing. Let's be honest. Now, ARM's 64-bit solves that problem. The other thing is the server world needs a higher clock rate. A cell phone runs at a gigahertz or 1.2. What a server really needs is two, two and a half, three gigahertz; that makes it a lot more interesting.
These things are in flux, and I don't know exactly when they're coming. There were some announcements here earlier today. It's definitely a race toward the lowest power, highest performance, best cost-performance metrics. The big advantage in the cloud is that many people own their own software stack. They're not limited to what's available from commercial third-party ISVs. They can recompile for ARM if and when it makes sense, and it's likely to happen when it makes sense.
Marc A: The reason I'm so bullish on ARM in the data center... I mean, yes, Andy's points are totally valid, of course. AMD announced yesterday the first eight-core 64-bit ARM server chip, which is very exciting.
Andy B: Interesting, yes.
Marc A: Very interesting. Every large-scale Internet service that I'm aware of is incredibly bound by the cost of the data center, and within the data center, from an architecture standpoint, they're I/O bound. We deal with very few Internet applications at scale that are CPU bound as contrasted with I/O bound. Then, of course, the way the economics work, it's all about power, cooling, and efficiency of space in the data center.
With ARM servers, or ultimately with much more power-efficient server chips coming from other vendors including Intel, the opportunity exists for another 5x reduction in data center costs by packing a lot more server chips into the same data center. Especially for these I/O-bound applications, I think the transition will be more seamless than people think it will be.
Andy B: If I can say one more thing in defense of Intel, they've done a remarkably good job advancing technologies like 14 nanometer, with power reduction, more cores, more I/O bandwidth, and so on. Intel is a tough competitor here, needless to say, and ARM would have to do better than what Intel is doing to make its point.
Marc A: Yes, and I think, the new CEO of Intel is highly focused on this.
Cade M: You make a good point. I mean, Intel is going to drive down power as well and offer improved efficiency. Are there other reasons to use ARM? Is it good to have a situation where you have multiple suppliers? Does that drive down your cost?
Andy B: I don't think it's philosophical. It's really, practically, about having competition in the market, right? If there was no competition, Intel might slack off and work more slowly. They're actually not doing that; they're working really hard. The fundamental thing is you need competition to keep innovation going, and Moore's Law, which predicted that transistor counts will keep doubling, is still going on.
At that rate of progress, a data center that costs $1 billion today will actually only cost $10 million ten years from now. It's that much of a cost reduction, a factor of a hundred for the same compute. Of course, what people will do is get 100 times more throughput out of the same investment.
We're looking at a factor-of-a-hundred cost-performance improvement in ten years based on Moore's Law. People want and need that, and that will enable the next generation of applications.
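[Editor's note: Andy's factor-of-a-hundred claim can be checked with one line of arithmetic. A quick sketch; the doubling-period framing is an editorial illustration, not from the talk:]

```python
import math

# A 100x cost-performance gain in 10 years implies a doubling period of
# 10 / log2(100) ≈ 1.5 years, close to the classic 18-month reading of
# Moore's Law.
doubling_period = 10 / math.log2(100)
print(f"implied doubling period: {doubling_period:.2f} years")

# Equivalently, the workload of a $1B data center today would cost
# about $10M ten years out:
cost_today = 1_000_000_000
print(f"projected cost in ten years: ${cost_today // 100:,}")
```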
Cade M: The other question here is, we've seen the big cloud companies, Google, Amazon, and on down, design their own gear. We talked about servers and we talked about storage and networking. Could they, in fact, design their own chips? ARM is different from Intel in that it licenses out its designs, and that would allow Google or Facebook to go even further into the hardware. Is that a possibility?
Andy B: Let's see. If you look at the cost of doing this stuff, defining a rack like the old computer rack took a discrete effort, but it was nothing compared to the cost of making a chip. To make a very advanced 14 nanometer chip, I'm sure it costs more than $100 million. Somebody has to spend that money to make those chips, and they want their margin, and there's a business model attached to that.
What really happens is people are trying to place their chips in the best way along the technology curve and the market curve: what the cost is, how many cores, the power envelope, the clock rate. It's a very complex optimization to get the most mileage out of these investments, and it's getting more expensive at each generation.
Chips basically need super high volume, and even at the volume that Google or Facebook has, it may justify doing a chip, but in the end it's better if the whole industry buys the same kind of chip, because you can amortize that initial investment across a larger base. In the end, you have to make lots and lots of chips to justify the upfront investment.
Cade M: As many of you may know, in addition to backing innovative storage and networking companies, Marc is a big supporter of Bitcoin, the increasingly popular digital currency. The most important question of the day is: how soon until we see an Open Compute Bitcoin miner?
Marc A: I mean, yes, there's tons of innovation going on. It's actually very interesting. Bitcoin mining is the heart of Bitcoin, right? It's all the computation being done to maintain a trust network. The press reports that mining is just a gigantic waste of time. In reality, it's all the proof-of-work computation that actually makes a distributed trust network work.
In the long run, I think there are really good reasons to project forward and say that distributed trust systems like Bitcoin are certainly an alternative, if not an outright replacement, for centralized entities like banks and stock exchanges. There's a very, very big thing going on with mining that we're in the very early stages of.
There's a ton of work going into optimizing Bitcoin mining, which has a very straightforward formula: the cheaper you can do it, the more money you make. There's a ton of work going on in server configurations and data center configurations. We're seeing pitches for Bitcoin-optimized data centers.
Where it gets really interesting is chips: we're now seeing fundamental new chip designs around Bitcoin mining, because the nature of mining presents a big opportunity to apply custom silicon. We're seeing a new wave of really interesting new ideas in chip design around it, which is not something that I would have predicted even a year ago.
In fact, I suspect what's happening in mining right now is flipping the other way from a lot of what's happening in the industry, in that I think the specialized chips are going to be, at least for a five-year period, much more efficient than the general purpose chips, because mining is a very specific thing that you can highly parallelize. I think the custom mining chips are likely to dominate mining for quite a while.
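[Editor's note: the proof-of-work loop Marc describes is simple enough to sketch. A toy version using Bitcoin's double-SHA256 scheme at an artificially low difficulty; the header bytes and difficulty value are made up for illustration:]

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int) -> int:
    """Find a nonce whose double-SHA256 hash falls below a target.
    Toy version: real Bitcoin mining uses the same double-SHA256 idea,
    but at vastly higher difficulty and over real block headers."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(block_header + nonce.to_bytes(8, "little")).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Low difficulty so this finishes instantly; every added difficulty bit
# doubles the expected number of hashes, so the cheaper each hash is,
# the more money a miner makes.
nonce = mine(b"example header", difficulty_bits=16)
print("found nonce:", nonce)
```

The inner loop is nothing but hashing and is embarrassingly parallel, which is exactly the shape of workload where custom silicon beats general purpose chips.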
Cade M: In your mind, and in the minds of many others, this is not a niche market. I mean, Bitcoin is analogous to the Internet twenty years ago. You think that this is going to be the way we do things financially. The possibility of designing chips for Bitcoin, Marc, is no small thing.
Marc A: Bitcoin, and cryptocurrency generally, is the first thing I've seen since the Internet that I would describe as being like the Internet. I've been waiting twenty years to be able to say, aha, this is like the Internet, and this is the first one that I've seen.
The fundamental innovation, which is really critically important, is that cryptocurrency generally, and Bitcoin specifically, is the first practical way for people to do business over the Internet with no prior relationship and with no central hub, no central broker, no central trust authority. You and I want to exchange money, we want to exchange title to a car or a house, we want to exchange digital contracts, we want to exchange a stock or a bond.
We want to do it... We've never met before, but we can do it over the Internet. We can do it in a way where it's a unique digital asset. We can do it where everybody knows the exchange has taken place. We can do it where everybody knows there's no double spending. We can do it where everybody can validate the transaction.
That never existed before. All e-commerce and payments and everything on the Internet up until now has had to run through some central authority. This is the first really distributed way to do that. All you really need to do is add up the number of people in the world, times the number of transactions per year that they do, times the number of crosswise connections, and then look at the gigantic range of industries that exist today in legacy form to support all those transactions.
And say, okay, now there is a better Internet-based way to do all these things. It seems like a very, very, very big opportunity. Therefore, as a consequence, the amount of hardware that's going to get put behind it and the amount of innovation that's going to get put behind it are going to be gigantic.
Cade M: All you hardware makers out there, that's your homework for next year: Open Compute Bitcoin miners.
Andy B: More importantly, it would enable a whole new set of software applications that reduces the infrastructure needed to sell things or transact things.
Marc A: Yes. I'm quite confident we'll be sitting here in five or ten years and this is how contracts will be done. Even digital keys, right? Even services like Airbnb, I'm quite convinced, will in five years use Bitcoin to send the key back and forth to be able to get into the house, or ridesharing services like Lyft, where you'll be able to share cars. The applications run all the way down to the level of an individual lock and all the way up to things like stock exchanges and banks.
Cade M: Fantastic. Thanks for coming. I appreciate it.
Marc A: Thanks, everybody.
Speaker 1: Thanks, Cade. Thanks, Andy. Great. Thanks for being here, man.