If you are a registered voter in Lexington, Massachusetts, I strongly urge you to vote for Bill Hurley and Jessie Steigerwald for School Committee in next week’s election. The election is on Monday (yes, Monday!), March 2nd.
This is a particularly important election. If you are a parent or student, or if you follow developments relating to the school system, you know that we’ve had some unfortunate problems and controversies relating to our outgoing Superintendent of Schools and to the effectiveness of the School Committee. We’ve lost several of our most beloved and respected teachers, due in part to the climate that’s been created by the senior administrators of our schools.
Bill and Jessie are both incumbents, and they have from the start been helping to solve these problems. Both have excellent administrative skills and deep experience, but as important, they understand the needs of our students and our teachers. They know what it takes to support the best teachers, and they will maintain the tradition of great education for which Lexington’s schools are famous. Both of them have been instrumental in helping the town to choose our new superintendent, who will begin work this summer. Indeed, Bill was asked to play a direct role in the negotiations that led to the hiring of the new superintendent.
Bill has many years of experience as a Superintendent of Schools (Sudbury, Marshfield, and interim in Lexington), and also as an educator. Jessie has been extraordinarily effective in her several years on the School Committee, doing a terrific job of ensuring that the perspectives of students, parents, and faculty are well represented. Her background as a lawyer is also very valuable. Both have experience managing budgets, handling changing space requirements, and dealing with other important administrative issues.
The candidates have Web sites (linked below) from which you can learn more about their qualifications and their positions on the issues.
This is a crucial election for the Lexington school system! Please vote on Monday, March 2nd for Bill Hurley and Jessie Steigerwald for the Lexington, Massachusetts School Committee. If you can’t make it to the polls, absentee ballots are now available at Lexington Town Hall.
Here are the links to the candidates’ Web sites:
It’s been way too long since I posted here, but the 50th anniversary of the announcement of the IBM 360 seems like a good excuse. Of course, the 360 was to its era what the Intel architecture is today, and in fact even more. Not only was its instruction set the code of choice for business and much other computing, the 360 also defined the I/O interconnection architecture for a generation of IBM computers. If your disk or tape drive couldn’t talk to the IBM 360 channel, its market was very limited.
What I think too often gets missed is that the 360 still stands up as a remarkable achievement in system design. This is an instruction set and machine architecture that was created in the early 1960s, at a time when fully integrated circuits were too new for IBM to adopt. Cycle times were still measured in microseconds for all but the fastest machines. Today, programs written 50 years ago continue to run on CMOS mainframes with cycle times of under 1 nanosecond. Indeed, assembler code that I wrote for 370 machines in the 1970s is still being used, unmodified, by IBM’s customers today.
The designers of the 360 included Gene Amdahl, Gerrit Blaauw and Fred Brooks, and they achieved something truly remarkable. When I got to Stanford in the late 1970s, everyone was excited about the DEC VAX, a machine with sophisticated instruction formats and a built-in stack. The 360 was seen as old-fashioned. Time went by, and as manufacturers tried to scale their architectures, DEC came to realize that the VAX could not be properly pipelined. The same instructions that were easy targets for compilers were hard to decode in an efficient way. There were probably other problems as well. The VAX was in many ways a lovely machine, especially for programmers, but it didn’t scale for even 20 years. The same was true for many other architectures that came and went. Indeed, one can argue that the only reason Intel has managed to scale the 8086 architecture is that chips now have so many circuits that tremendously complex techniques can be applied.
For anyone who’s interested in computer architecture this is a good time to step back and celebrate the achievements of the team that built the first truly scalable, compatible computer architecture.
BTW: IBM will apparently be live streaming a celebration event tomorrow, April 8, 2014.
Geekpage has a terrific interview with HTTP 2.0 Working Group chair Mark Nottingham on recent developments relating to HTTP 2.0. The newly released IETF draft is at http://tools.ietf.org/html/draft-ietf-httpbis-http2-04.
A few minutes ago, Tim Berners-Lee announced that as of June 1, Daniel Appelquist and Peter Linss will take over as chairs of the W3C Technical Architecture Group. I have been a member of the TAG since 2004 and chair since early 2009. Serving on and chairing the TAG has been one of the most exciting and rewarding things I’ve done, and I’m extraordinarily grateful that Tim and the TAG have given me the opportunity.
My current appointment would have run for a few more months, but the TAG is in the process of resetting its agenda and priorities for the next two years. Bringing in Peter and Dan now gives them the opportunity to help the TAG plan this work as well as deliver it. To free space for our new chairs, I will resign my position on the day they join. I will attend the TAG’s upcoming meeting in London, and I expect to continue to work informally with the TAG and the new chairs over the next few months.
This is a particularly exciting time for the TAG and if we had not needed “my” slot for our new chairs, I would have been happy to continue. I have particularly enjoyed working with and learning from our new members, many of whom play key roles in the development of the most important new Web technologies such as HTML5, AJAX-style Web Apps, mobile Web apps, etc. Nonetheless, I am delighted that this change is happening. My teaching responsibilities are increasingly taking time that would otherwise go to the TAG, and have prevented me from traveling to some overseas W3C meetings. More important, Tim’s choice of Peter and Dan to replace me is in my opinion brilliant. We are very lucky to have them, and now is the perfect time for them to take over.
My warm thanks to all the TAG members past and present who have made my time on the TAG such a pleasure. I look forward to working informally with the W3C and the TAG in the future.
I’m pleased to announce that I have accepted a position as Professor of the Practice in the Tufts University Department of Computer Science. I expect to be teaching a course each term and advising students. As I said in my previous post, I’ve been having a wonderful time at Tufts, and I’m thrilled to have the opportunity to teach there in coming years.
If you’re wondering why I haven’t posted since September, it’s because I’ve been working nonstop on teaching my distributed systems course at Tufts. I’ve had a terrific time, and we had a wonderful group of students. Though he’d probably be surprised to hear it, the original inspiration for this course came from Tim Bray. We were sitting together at some meeting or other, probably 14 years ago, when he said to me: “you know, it would be great if someone taught a distributed systems course based on the Web.” The idea stuck with me, and when the opportunity came to teach at Tufts, I decided to give it a try.
There are always rough edges teaching a new course, but overall I’m really pleased with how it turned out. If you’re interested in what we covered, check out the Lectures Page, and especially the Principles Page, which gives a summary of the key points covered in the course.
The good news for me is that it looks like I will be back at Tufts this spring, and quite possibly in coming years as well. I’m delighted! Details to follow. Thanks to everyone who’s made me so welcome at Tufts!
After spending all summer preparing, my new Tufts course on Internet-Scale Distributed Systems starts this week. I’m excited! There’s still plenty of work to do on the details, but the course Web page is now up, and I expect that course notes, slides, etc. will start appearing there as things unfold.
I’m delighted to announce that I have accepted a position as a Visiting Scholar in the Computer Science Department at Tufts University, and in fall of 2012, I will be teaching a new course titled: Internet-Scale Distributed Systems: Lessons from the World Wide Web.
The course description is:
The World Wide Web, one of the most important developments of our time, is a unique and in many ways innovative distributed system. This course will explore the design decisions that enabled the Web’s success, and from those will derive important and sometimes surprising principles for the design of other modern distributed systems.
We will introduce and draw comparisons with more traditional distributed system designs, including distributed objects, client/server, pub/sub, reliable queuing, etc. We will also study a few (easily understood) research papers and some of the core specifications of the Web. Specific topics to be covered include: global uniform naming; location-independence; layering and leaky abstractions; end-to-end arguments and decentralized innovation; Metcalfe’s law and network effects; extensibility and evolution of distributed systems; declarative vs. procedural languages; Postel’s law; caching; and HTML/XML/JSON document-based computing vs. RPC.
The purpose of this course is not to teach Web site development, but rather to explore lessons in system design that can be applied to any complex software system.
Everyone I’ve met from Tufts, both students and faculty, loves it there, and I’m very excited to have this opportunity to join them!
A while ago I posted an entry recommending a Nice Video about Colossus and Tommy Flowers. I have since been in touch with Capt. Jerry Roberts, who was one of the key members of the codebreaking team at Bletchley Park (B.P.) during World War II. Capt. Roberts contacted me after I made my posting, and although he’s delighted to see the increasing recognition that’s coming to so much of the vital work done at B.P., he did have some concerns about details in the video.
In an e-mail to me, he made the following points (all quotations are directly from him…the “Testery”, mentioned in the quotes below, was the section of B.P. devoted to the daily breaking of messages enciphered in the Tunny code, which was used by the German high command, including Adolf Hitler himself):
- “Colossus was built with one purpose only – to assist the Testery in speeding up one stage (breaking the chi wheel) of the breaking of Tunny, but the rest of the stages were still worked out by hand in the Testery by codebreakers and support staff. Without Bill Tutte breaking the Tunny-system, without the Testery daily breaking of Tunny, there would have been no need for Colossus at all.”
- The video compares the performance achieved using Colossus with the previous approach, which was to do the codebreaking entirely by hand. “They said [in the video] a message took 6 – 8 weeks to break the wheel patterns by hand; in fact we normally broke within 6 to 8 hours (within a shift); rarely, up to 16 hours might have been required. I know, I was a leading codebreaker doing this breaking all that time. Prof. Jack Copeland also quotes it as inside 8 hours, normally.”
- “From mid 1942 to mid 43 for the whole year, the Testery was breaking Tunny messages without any machine help (all by hand), included the messages relating to the Battle of Kursk – biggest tank Battle ever. Even when Colossus came later in spring 1944, still most work was done by hand in the Testery. The Testery broke 90% all the traffic handled on Tunny. The Newmanry (handled machinery – Robinsons and Colossuses) did 20 to 25% of the workload, and Testery did 75% at least. However, there is nothing about these inside information at B.P, as they are more keen to show off the machinery. “
By the way, Capt. Roberts remains very active in helping to promote recognition for all the good work done at Bletchley, and he is a terrific speaker. His belief (and the belief of many others) is that “Enigma decrypts helped Britain not to lose the War in 1941. Tunny decrypts helped shorten the European War by at least 2 years.”
Several years ago, before I realized we had mutual friends, I posted an entry recommending one of Capt. Roberts’ talks at UCL.
Note: this posting was updated on 27 June 2012 with minor corrections and annotations from Capt. Roberts, who was kind enough to review my original version.
IBM and CDC developed some of the most innovative computer architectures of the 1960s. The advanced 360 architectures such as the IBM 360/91 are well known for their pioneering implementations of instruction-level parallelism and register renaming. Before that, Project Stretch was famous for contributing many innovations to computer architecture. Less well known was the Advanced Computing System (ACS), a no-holds-barred effort started in 1961 to build the fastest possible computer. Leading computer designers including Gene Amdahl, as well as my (later to be) friends from IBM, John Cocke, Fran Allen, and Harwood Kolsky, came together to build this machine, which was eventually abandoned in 1969. Amdahl went on to found Amdahl Corporation, and John and Fran later won Turing Awards, John’s for the invention of RISC architecture and Fran’s for her pioneering work on compilers.
There’s a nice Web page up with a brief history of the ACS, and a link to a Computer History Museum video of a Feb. 2010 ACS reunion meeting. ACS pioneered many important features including instruction pre-fetch and dynamic out-of-order execution. The Web page and video are worth checking out if you’re interested in the history of computer architecture.
(Thanks to Lynn Wheeler for the link to the ACS page.)
It’s that time of year again, and audio mastering engineer Ian Davis is reminding us to celebrate “Dynamic Range Day”. This is about improving the sound of the audio recordings that we buy on CD and stream over the Internet. Specifically, it’s about compression.
Google has posted a nice little video about Bletchley Park and the Colossus machine, including warm remembrances of hardware designer Tommy Flowers. There’s also a posting about Tommy Flowers on the Google blog.
I had the opportunity to see the reconstruction of Colossus in operation, breaking actual codes, when I visited the British National Museum of Computing last year. “Lynetter” has posted to YouTube a video introduction to the reconstructed machine and also video of the late Tony Sale showing the reconstruction in operation. Sadly, by the time I visited, Tony had recently died.
One twist on all this struck me as interesting: when Colossus was (partially) declassified in the mid-1970s, books like The Ultra Secret claimed that it was used to break the German Enigma code. Recently, more information has been declassified, and it’s now clear that’s wrong. The British did break Enigma at Bletchley Park, but they used a machine known as the Bombe. The Bombe (not bomb!) was not an electronic digital computer, and it did not use valves (what we Americans call vacuum tubes). It was built from technology similar to the old phone company’s rotary stepping switches and relays. Patch panels encoded information about the keys to be broken, and the switches would rotate through all possible positions until patterns suggesting a possible key match were detected. The switches would then stop in position, and from those final positions candidate keys could be determined.
Enigma was used to encipher field and naval communications. In fact, we now know that Colossus, which was an electronic computer using large numbers of “valves” (along with relays, high-speed paper tape, etc.), was built specifically to crack the much more difficult Lorenz cipher, which was used by the German high command. The British were thus able to decrypt traffic going to and from Adolf Hitler himself.
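The Bombe’s basic strategy, as described above, was an exhaustive search: step through every possible rotor setting and record the “stops” at which the machine’s consistency test passed. As a purely conceptual sketch (the function names and the toy consistency test are mine, and the real Bombe did its test electrically against a known-plaintext “crib”, not in software):

```python
from itertools import product

def bombe_search(consistent, n_positions=26, n_rotors=3):
    """Step through every combination of rotor positions, recording the
    'stops': positions that pass the consistency test against the crib."""
    stops = []
    for pos in product(range(n_positions), repeat=n_rotors):
        if consistent(pos):
            stops.append(pos)  # machine would halt here for inspection
    return stops

# Toy stand-in for the real electrical test: pretend exactly one
# rotor setting is consistent with the known plaintext fragment.
stops = bombe_search(lambda pos: pos == (3, 17, 9))
```

The search space here is only 26³ = 17,576 settings per rotor order, which is why an electromechanical device stepping through them quickly was feasible; the candidate stops were then checked by hand.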
Many people have commented on the pros and cons of the New York Times paywall. Most of these comments debate the effectiveness of the paywall in meeting the Times’ financial goals, discuss ways in which users will circumvent the paywall, etc. Here I’d like to explore a different issue: it seems to me that the paywall, as currently implemented, violates the specifications for the Web’s HTTP protocol. Interestingly, my concern is not with the part of the system that charges readers, it’s with the part that tries to count the 20 free pages allowed per month.
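To make the counting issue concrete, here is a deliberately simplified sketch of what per-reader free-page counting might look like (the class and method names are hypothetical; this is not the Times’ actual implementation). The point to notice is that every page retrieval mutates server-visible state, which sits awkwardly with HTTP’s convention that GET be a “safe” method, and which also complicates caching, since a cached response would bypass the counter:

```python
class PaywallCounter:
    """Toy model of free-article counting (illustrative only)."""

    def __init__(self, free_limit=20):
        self.free_limit = free_limit
        self.views = {}  # reader id -> articles viewed this month

    def handle_get(self, reader_id):
        """Serve a page request, charging it against the monthly quota."""
        count = self.views.get(reader_id, 0)
        if count >= self.free_limit:
            return 402  # "Payment Required": free quota exhausted
        self.views[reader_id] = count + 1  # state change on a mere GET
        return 200

pw = PaywallCounter(free_limit=2)
statuses = [pw.handle_get("alice") for _ in range(3)]  # third GET hits the wall
```

However the counting is actually done (server-side, or client-side in cookies and JavaScript), the tension is the same: the count must change with each retrieval, so pages can’t be treated as plain cacheable GET responses.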
I’m hoping to do more book reviews here, as I’ve been doing quite a bit of reading this year. So, here’s one to get started…
The First War of Physics: The Secret History of the Atom Bomb, 1939-1949 [ISBN: 1605981974] by Jim Baggott is an excellent history of the physics, politics, wartime events and espionage that all contributed to the remarkable history of the bomb. This is a moderately long book, but very readable, even gripping (well, I like this stuff). Very highly recommended for anyone with an interest in 20th century history, military history, or the history of technology. The physics is explained in readable terms, in the few places where it’s important to the story, but no technical background at all is required to appreciate this important book. Again, very highly recommended.
Note that the first 2/3 of the talk is a very interesting exploration of the security characteristics of Bitcoin, also showing how the Bitcoin database can be used as a persistent shared store. The latter third of the talk introduces Dan’s tools for detecting artificial delays introduced by ISPs.