There are at least two key problems that could entirely break the Internet, or at least severely limit its future growth, and I'm going to try to lay them out so that people who aren't card-carrying geeks can understand them. For today, let's stick to one of them: the address problem.
First, a bit of background. When you use, for example, a web browser, you enter an address for a page you want to view. You might, for example, ask for www.google.com. The underlying network that connects you to Google (and your own computer, too) has no idea what that set of Roman letters means. There's a system for converting that (mostly) human-readable name into something that a computer can understand. Not surprisingly, that address is in binary; that is, it's a bunch of zeros and ones. Network operators normally look at these addresses in a more human-readable format, so the address for www.google.com might be something like 220.127.116.11.
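For the geeks in the audience, you can see the "dotted" form is just a convenience with a quick sketch. This assumes Python and uses the example address from above; the point is only that the familiar four-number format is really 32 bits under the hood.

```python
import socket
import struct

# The dotted form "220.127.116.11" is just a human-friendly way of
# writing a 32-bit number. inet_aton converts it into the raw
# 4-byte value that the network actually works with.
raw = socket.inet_aton("220.127.116.11")

print(len(raw))  # 4 bytes, i.e. 32 bits

# Unpack those 4 bytes as one big number, then show it as the
# "bunch of zeros and ones" the computer sees.
as_int = struct.unpack("!I", raw)[0]
print(format(as_int, "032b"))
```

Every address your computer uses on today's Internet fits in those 32 bits, which is exactly why the pool is finite.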
So far, so good.
Here's the problem: we're running out of these addresses. The central authority (IANA, the Internet Assigned Numbers Authority) that coordinates these numbers will run out of available space in 2010, and as a result it will become difficult to acquire new addresses around 2012. (There's a certain amount stored up in regional registries, but that's a piece of complexity that's unnecessary for our discussion.)
Fortunately, there is a solution. It's called IPv6, and it provides a vastly larger pool of addresses. Unfortunately, the new addresses are incompatible with the old ones, and that's a real problem.
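To put numbers on "vastly larger": today's addresses are 32 bits, while IPv6 addresses are 128 bits, and the new ones are even written differently (groups of hex digits rather than dotted decimal). Here's a small illustration, using 2001:db8::1, an address reserved in the standards specifically for documentation examples:

```python
import ipaddress

# The entire pool of old-style (IPv4) addresses: 2^32.
print(2 ** 32)    # 4,294,967,296 -- fewer addresses than people on Earth plus their gadgets

# The IPv6 pool: 2^128, roughly 3.4 x 10^38.
print(2 ** 128)

# An IPv6 address is a different size and a different format,
# which is why old equipment can't simply use it.
v6 = ipaddress.ip_address("2001:db8::1")
print(v6.version, v6.max_prefixlen)  # 6 128
```

The two pools don't overlap in format at all, which is the root of the upgrade headache described next.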
For devices like routers (the pieces of the network that decide where to send your data), supporting IPv6 means an upgrade. I don't know how often any of you upgrade your home computers, but service providers don't like to upgrade too often - it's expensive! Content providers (like Google, Amazon, and eBay) have no reason to upgrade yet, because there's no good justification for the cost: almost no one would use the IPv6 version. (The web page would look the same; only the technical goo would be different.)
This is a basic chicken-and-egg problem. Until there are users, there won't be content on this "new" Internet. Symmetrically, there won't be any users until there's some useful content. It's also a bit like the Y2K problem, except there isn't a hard deadline, and the deadline will be different for every organization. Also, hitting the deadline won't cause your network to stop working, although you won't be able to make it bigger.
Given the time required to make the necessary changes, we're already too late. This is going to be a mess, and probably very expensive. You can bet that the cost will get passed on to someone.
Why is this interesting on this trip? I'm presently at the NANOG conference. Check out the link if you're feeling geeky.
I'll talk about the other "death of the Internet" problem sometime soon.