Monday, March 16, 2015

Oldest Registered .COM Domains and where they are now

Today (for a few more minutes) is the 30th anniversary of the creation of the first Internet domain. A number of years ago, while I was at Sun Microsystems, I wrote a related blog post. Unfortunately, Oracle shut down all of the Sun blogs (they would be fun reading now). But thanks to archive.org, I've managed to recover my original posting and am reposting it here.

A number of people (Out of the Woodwork, Sun tied with IBM for 11th oldest .com domain name, and The 100 Oldest Registered .COMs) have already pointed to this web site listing the 100 oldest currently registered .COM domains, noting that Sun's domain was one of the first registered: at the same time as IBM's, before Intel's, and years before Cisco's and Microsoft's. What got my attention were the names at the top of the list. The oldest registered .com domain, registered a full year before Sun's and IBM's, was Symbolics, followed by BBN and Think.
Those names really take me back. BBN, of course, was one of the early contractors on the ARPANet. As a student, I remember the excitement when the first "network interfaces" were rolled into MIT. That's one in the picture! Technology has really improved over the years. So it is not at all surprising that they were one of the very first to register a domain.
But it's the symbolics.com and think.com domains that brought back the most memories. Symbolics was the Lisp Machine company that spun off from MIT. They built $80K workstations that were heavily used by the Artificial Intelligence community, but also by researchers in vision and by the animation industry at the time. (They were about the same size and price as that BBN IMP we used as our ARPANet interface.) On a customer visit to Pixar, one of Symbolics' larger customers, I remember being awed by their "render farm" of Symbolics machines. Symbolics made their own animated film, Stanley and Stella in Breaking the Ice, by using the rendering jobs to burn in machines coming off the manufacturing line.
Symbolics' Lisp implementation of TCP/IP was the reference implementation when it came out. It was the first implemented, and thanks to the debugging and development tools on the Lisp Machine, people had the most confidence in it in those early days. I remember discussions at Symbolics about Sun: how they didn't have any interesting technology, how buggy their networking code was, and how they could never survive. I believe Symbolics had a market cap about twice Sun's when Sun went public. At that time Symbolics was taping out its first custom microprocessor (Ivory) and was developing its second version (Sapphire), while Sun was just using commodity Motorola 68Ks. And of course, Symbolics had the very advanced operating system developed for the Lisp Machine, while Sun was using Unix, which was just an inferior rehash of Multics. How wrong we were!
Somewhere in there, a grad student at MIT and I were involved in an evaluation of Symbolics' Lisp machine implementation and TI's (which also had a lot of custom silicon). The two of us had a great time eating barbecue at the County Line in Austin and fantasizing about the supercomputers we could build by combining hundreds of these machines.
Well, that never happened, but the grad student was pretty successful. He did design some supercomputers (using Symbolics machines) at Thinking Machines (think.com) before they too went out of business, and he (Greg Papadopoulos, you probably guessed) came to Sun along with his friends Dave Douglas, Steve Heller, and others.
A year ago I also joined Sun (and work for Greg). I think we all still believe in those same fantasies we shared overlooking the river in Austin. You can see glimpses of them in Greg's discussions about global computers and in my ramblings on utility computing. The next few years are really going to be fun!
It's amazing how the circle closes.

Monday, March 14, 2011

Some Israeli Music

Here's a quick list of some Israeli music on YouTube that I've liked over the years, and continue to like.

One of the first Israeli rock/pop groups, the High Windows, with Arik Einstein, Shmulik Krauss, and Josie Katz. From their first and only album, The High Windows.
You Can't and First Love

Next for Arik came The Churchills... pretty much a copy of the Beatles, but toned down a bit.
Achinoam doesn't know

From Lul, a classic comedy series with Arik Einstein, Uri Zohar and others around 1972.
I like to Sleep

This is one of my favorites, also from Lul.

Why Do I Take It to Heart, with Arik Einstein and a very young Shalom Hanoch.

A relatively recent group. Great pictures in the video, though.
Ethnix, She won't come back.

Yehuda Poliker songs. He's of Greek heritage; his family history is worth learning about. His early music (with Benzine) was straight-ahead Israeli rock. Later he let the Greek influences out.

I Don't Know

Less but Still Hurts

Of course, there's always Aviv Geffen. I never really got his goth look in the early days (well, I still don't), but with his parentage you knew he could do something like this: Cry for You

Wednesday, July 21, 2010

Finally, A New Posting---Systems

It's been a long time since I last did a blog posting. Partly this is because I've been working with a different company for the last year, and it was hard to post anything interesting that didn't impinge on their proprietary interests. That's still true to some degree, but a lot of what I've seen there with respect to system design is still relevant. Of course, the other motivation is that the company I'm working for is ITA Software, which Google recently announced they were acquiring. So, if all goes well, sometime in the not-too-distant future I'll be a Google employee.

So, let me start with ITA's products, some of their differences, and how not to learn from success. ITA's original product is a shopping and pricing product used on many web sites to find the flight combination for an itinerary with the best price. This is a much harder problem than I expected, largely because airlines are really clever at controlling their prices to get the most money out of travelers on each flight. And as if that weren't hard enough, there are all those extra fees that need to be included (and taxed) in order to get the actual price of the tickets. And international flights can cross multiple borders, each with its own currency, tax structure, and conversion rate. Just figuring out the price to charge for a Coke on a flight is daunting.
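To make the flavor of this concrete, here's a toy sketch in Python (nothing like ITA's actual rules engine, and every number is invented) of how fees, taxes, and currency conversion compound before you ever see a displayed price:

    # Hypothetical illustration only: real fare rules are per-segment and far
    # more intricate; all the numbers here are invented.
    def ticket_price(base_fare_eur, baggage_fee_eur, vat_rate, eur_to_usd):
        # Fees are added to the fare *before* tax is applied, and the total
        # is converted to the purchaser's currency at the end.
        taxable = base_fare_eur + baggage_fee_eur
        total_eur = taxable * (1 + vat_rate)
        return total_eur * eur_to_usd

    # A 200 EUR fare with a 30 EUR bag fee and 19% VAT, quoted in dollars:
    print(f"${ticket_price(200.0, 30.0, 0.19, 1.08):.2f}")  # -> $295.60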

But the ITA people are really smart, they work really hard, and they figured out how to do all of that: how to do it all very fast, accurately, and reliably. This problem, while very complex to get right and make efficient, is at its core very simple. It is a special filtered search in a very, very large graph which is never actually generated. One way to think about the problem is that it is like looking for the way out of a large maze. At each corner you have two or three possible choices, and the complete graph of all possible choices could be enormously large. You can't generate the whole graph, and local optima won't get you out of the maze. Sounds like an Artificial Intelligence problem, doesn't it? Would you be surprised that our CEO comes from the AI Lab at MIT?
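For the curious, here's a minimal sketch of that general technique (emphatically not ITA's actual algorithm): a best-first search in which neighbors are generated on demand by a successor function, so the full graph never exists in memory.

    # Best-first (uniform-cost) search over an implicit graph: the graph is
    # never materialized; successors(state) generates neighbors on demand.
    import heapq

    def best_first_search(start, is_goal, successors):
        frontier = [(0, start, [start])]        # (cost so far, state, path)
        best_cost = {start: 0}
        while frontier:
            cost, state, path = heapq.heappop(frontier)
            if is_goal(state):
                return cost, path
            for nxt, step_cost in successors(state):
                new_cost = cost + step_cost
                if new_cost < best_cost.get(nxt, float("inf")):
                    best_cost[nxt] = new_cost
                    heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
        return None                             # no way out of the maze

    # Toy stand-in for the maze: states are grid corners; a real flight-search
    # state would carry airport, time, fare basis, rule context, and more.
    def grid_successors(state):
        x, y = state
        for nxt in ((x + 1, y), (x, y + 1)):
            if nxt[0] <= 3 and nxt[1] <= 3:
                yield nxt, 1

    print(best_first_search((0, 0), lambda s: s == (3, 3), grid_successors))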

Next time, I'll talk a bit about the project I'm working on, and how it differs.

Tuesday, January 26, 2010

Keep the Candle Burning

I wasn’t at Sun for very long, but like most people who worked there, it made quite an impression on me. Despite decades teaching engineering at universities, consulting, and working in industry, it was at Sun that I learned what it meant to be an engineer and how to celebrate success. Scott summed it up best: “kick butt and have fun”.

So, I’m disappointed in Jonathan’s farewell message as reported in the Register: “Go home, light a candle, and let go of the expectations and assumptions that defined Sun as a workplace. Honor and remember them, but let them go.”

Honor those expectations and assumptions for sure, but never ever let them go. Those characteristics made Sun great and will make other companies and other engineers great as well.

Light that candle and let it show you the way. And keep it burning so others may also benefit.

Thank you Scott.

Thursday, March 26, 2009

Project Caroline

Over the last four years, my former team at Sun has been focused on cloud computing, its value to Sun's potential customers, and the most effective ways to make its value accessible to developers. A significant artifact of that research was Project Caroline, which was demonstrated in April 2008 at the SunLabs Open House.

Here is a pointer to that presentation.

It begins with my introduction and the motivation for the project. At 5:30, John McClain gives a demonstration of the use of Project Caroline to create an Internet service, building a Facebook application. At 34:00, Bob Scheifler gives a developer-oriented view of the resources available in the Project Caroline platform and how those resources are programmatically controlled. At 1:06:00, Vinod Johnson demonstrates how the Project Caroline architecture and APIs facilitate the creation and deployment of horizontally scaled applications that dynamically adjust their resource utilization. This is done using an early version of the GlassFish 3.0 application server that has been modified to work with Project Caroline. Finally, at 1:33:30, John McClain returns to summarize the demos and answer a few questions.

While the project is no longer active at Sun, the Project Caroline web site is still running; there you can find additional technical discussions and download all of the source code for Project Caroline. It's worth noting that the site is itself a Drupal-managed web site running on Project Caroline!

Wednesday, March 4, 2009

The Data Center Layer Cake

In thinking through our customers' needs in cloud computing/utility computing (or whatever else you want to call it), we've found the attached diagram, which maps the technologies in a data center, to be very useful. All data centers contain three different types of technologies: computing, storage, and communications. Depending on the role of the data center, the ratio of investments in these three areas will vary, but all three areas need some form of investment. These three areas are represented by the three columns in the diagram.

Slide1.jpg

The vertical axis corresponds to the level of virtualization or abstraction at which that technology area is expressed. For instance, computation could be expressed as specific hardware (a server or processor), as a virtualized processor using a hypervisor like Xen, VMware, or LDOMs, as a process supported by an operating system, as a language-level VM (like the Java VM), etc. As you can see, the computing community has built a rich array of technology abstractions over the years, of which only a small fraction are illustrated.

The purpose of these virtualization or abstraction points is to provide clean, well-defined boundaries between the developers and those who create and manage the IT infrastructure for the applications and services the developers build. While all developers claim to need complete control of the resources in the data center, and all data center operators claim complete control over the software deployed and managed in the data center, in reality a boundary is usually defined between the developer's domain and the data center operator's.

On the following diagram we have overlaid rough caricatures of some of these boundaries. The higher the line, the more room the DC operator has to innovate (e.g., in choosing types of disks, networking, or processors), and the smaller the developer's job in actually building the desired service or application.

Slide2.jpg

At the bottom of the diagram we have the traditional data center, where the developer specs the hardware and the DC operator merely cables it up. Obviously, this gives the developers the most flexibility to achieve "maximum" performance, but it makes the DC operator's job a nightmare. The next line up corresponds, more or less, to the early days of EC2 or VMware, where developers could launch VMs, but those VMs talked to virtual NICs and virtual SAS disks. It is incredibly challenging to connect a distributed file system or content distribution network to a VM that doesn't even have an OS attached.

The brown line corresponds to what we surmise to be the internal platform used by Google developers. For compute, they develop to the Linux platform; for storage, distributed file systems like GFS and distributed structured storage like Bigtable and Chubby are used. I suspect that most developers don't really interact with TCP/IP but instead use distributed control structures like MapReduce.
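Since GFS, Bigtable, and MapReduce are Google-internal systems, the sketch below illustrates only the programming model described in the public papers: the developer writes a map function and a reduce function, and the framework owns partitioning, shuffling, and fault tolerance. This is a single-process simulation of the model, not Google's API.

    # Single-process simulation of the MapReduce programming model (not
    # Google's API): the framework, not the developer, handles distribution.
    from collections import defaultdict

    def map_reduce(records, mapper, reducer):
        intermediate = defaultdict(list)
        for record in records:                  # "map" phase
            for key, value in mapper(record):
                intermediate[key].append(value)
        # The grouping above is the "shuffle"; "reduce" folds each key's values.
        return {k: reducer(k, vs) for k, vs in intermediate.items()}

    # Classic word count; note the developer never touches sockets or TCP/IP.
    docs = ["the quick brown fox", "the lazy dog"]
    counts = map_reduce(
        docs,
        mapper=lambda doc: ((word, 1) for word in doc.split()),
        reducer=lambda word, ones: sum(ones),
    )
    print(counts)                               # {'the': 2, 'quick': 1, ...}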

My team's Project Caroline is represented by the red line. We used language-specific processes as the compute abstraction, completely hiding the underlying operating system used on the servers. This allowed us to leave the instruction-set choice to the data center operator.
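To give a feel for what that contract looks like, here is a deliberately hypothetical sketch. The names below (Grid, ProcessSpec, create_process) are stand-ins invented for illustration, not the real Project Caroline API; the point is only the shape of the boundary.

    # Hypothetical names throughout -- NOT the real Project Caroline API.
    # The developer requests a language-level process (say, a JVM running a
    # jar), never an OS image or an instruction set; those choices stay
    # with the operator.
    from dataclasses import dataclass, field

    @dataclass
    class ProcessSpec:
        runtime: str                  # e.g. "jvm" -- a language VM, not an OS
        artifact: str                 # the code to run
        args: list = field(default_factory=list)

    class Grid:
        def __init__(self):
            self._processes = []

        def create_process(self, spec):
            # The operator may host this on any hardware that runs the
            # requested runtime; the developer never sees that choice.
            self._processes.append(spec)
            return len(self._processes) - 1     # opaque process handle

    grid = Grid()
    handle = grid.create_process(ProcessSpec("jvm", "service.jar"))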

The green lines correspond to two even higher-level platforms, Google App Engine and Force.com. Note that these platforms make it very easy to build specific types of applications and services, but not all types. Building a general-purpose search engine on Force.com or Google App Engine would be quite difficult, while it would be straightforward with Project Caroline or the internal Google platform.

The key question for developers is: which line to choose? If you are building a highly scalable search/advertising engine, then both the green and blue lines would lead to costly solutions, in development, management, or both. But if you are building a CRM application, then Force.com is the obvious choice. If you have lots of legacy software that requires specific hardware, then the hypervisor approach makes the most sense. However, if you are building a multiplayer online game, then none of these lines might be optimal; instead, the line corresponding to Project Darkstar would probably be the best choice.

Regardless, the definition of this line is usually a good place to begin in a large scale project. It is the basic contract between the developers and the IT infrastructure managers.

Addendum:

The above discussion oversimplifies a number of issues to make the general principles clearer. Many business issues like partner relationships, billing, needs for specialized hardware, etc. impinge on the simplicity of this "layer cake" model.

Also, note that Amazon's EC2 offering hasn't been static; it has grown a number of higher-level abstractions (elastic storage, content distribution, queues, etc.) that allow the developer to choose lines other than the blue hypervisor one and still use AWS. We find the rapid growth and use of these new technologies encouraging, as they flesh out the viable virtualization points in the layer cake.

Wednesday, February 25, 2009

Cloud Computing Value Propositions

To cut through all the hype around cloud computing, I think you need to focus on what value is ultimately delivered to the developer who builds his or her systems on "cloud infrastructures." I say the developer here, rather than the end user, because end users really care about their applications and services, not how those services are implemented.

In a recent paper published by researchers from the RAD Lab at UC Berkeley, Above the Clouds: A Berkeley View of Cloud Computing, three new hardware aspects of Cloud Computing are suggested:

  1. The illusion of infinite computing resources available on demand.

  2. The elimination of an up-front commitment by Cloud users.

  3. The ability to pay for use of computing resources as needed.


But these are all just variants of converting capital expenditure (CAPEX) into operating costs (OPEX). In other contexts, this goes by the slightly jaundiced term "outsourcing." We've been doing this for years in IT, but the refined techniques developed by Amazon, Google, and others have made finer-grained outsourcing practical.
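A back-of-the-envelope calculation makes the conversion concrete. Every number below is invented for illustration, not a quote from any provider:

    # CAPEX vs. OPEX break-even sketch; all prices are invented examples,
    # and the owned cost ignores power, space, and administration.
    server_capex = 3000.0                 # purchase price of one server
    lifetime_hours = 3 * 365 * 24         # assume a 3-year useful life
    owned_per_hour = server_capex / lifetime_hours   # ~ $0.11/hour

    rented_per_hour = 0.40                # hypothetical on-demand price

    # Owning only wins if the machine is busy enough of the time:
    breakeven = owned_per_hour / rented_per_hour
    print(f"Renting is cheaper below {breakeven:.0%} utilization")  # ~ 29%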

While these benefits are valuable and lower the barrier to entry for many startups and small organizations, this model shares the same pitfalls as outsourced manufacturing or design.

I believe there is more value to Cloud Computing than just a new way to outsource. Cloud Computing delivers a far more distinctive value: enabling business agility.

Those who have benefited from Cloud Computing, like Google and Salesforce.com, have done so because their developers have a much richer platform to work with than is traditional. They don't need to configure operating systems or databases, and they don't need to port new packages to their environment and painfully identify conflicting interdependencies. Instead, they have a relatively comprehensive platform to develop to, one that provides a distributed data store, data mining, load balancing, etc., without any development or configuration on their part.

It is this rich platform, spanning computing, storage, and communications, that allows Google to respond rapidly to business changes and market sentiment, and to run more nimbly than its competitors.

The original EC2/S3 offering effectively delivered on the outsourcing business value, but what has made Amazon Web Services compelling today is the rich platform that an AWS developer has at their disposal: load balancing from RightScale, the Hadoop infrastructure from Cloudera, Amazon's content distribution network, etc. So I believe that a real challenge to the success of Cloud Computing is determining the characteristics of a "platform" that provides the greatest value for Cloud Computing developers. Too high a level of abstraction (e.g., Force.com) narrows the application range.

Too low a level, and you are just outsourcing your IT hardware.