Sunday, March 30, 2008
Experimentation is the theme of the SDForum Business Intelligence SIG so far this year. The March meeting featured Deepak Nadig, a Principal Architect at eBay, talking about "Building Better Products Through Experimentation". Experimentation is an important technique for Business Intelligence, although its first uses were in medicine. In 1747, James Lind, a British naval surgeon, performed a controlled experiment to find a cure for scurvy. In his book "Super Crunchers", Ian Ayres describes how the Food and Drug Administration has used experimentation since the 1940s to determine whether a medical treatment is efficacious.
While eBay has always used experimentation to test and fine-tune its web pages, in recent years the process has been formalized. While anyone can propose an experiment, product managers are the most likely to do so. Deepak took us through the eBay process and discussed the issues with using experimentation. Because the infrastructure is already in place, simple experiments can be set up within a matter of days. eBay usually runs an experiment for at least a week so that it is exposed to a full cycle of user behavior. Simple experiments to test a small feature typically run for a week or so, larger experiments may run for a month or two, and some critical tests run continuously.
For example, eBay is interested in whether it is a good idea to place advertising on its pages. On the one hand, it brings in extra revenue in the short term; on the other hand, it might cannibalize revenue in the long term. Experimentation has shown that advertising is beneficial in some situations, although its use is being monitored by some long-term experiments to ensure that it remains so.
Deepak took us through some of the issues that arise with experimentation. One issue is concurrency: how many experiments can be carried out at the same time. Because eBay has a high-traffic web site, they can get good results with experiments on a small proportion of the users, at most a few percent. As each experiment uses only a small percentage of the users, several experiments can be run in parallel. Another issue is establishing the signal-to-noise ratio of experiments to ensure that they are working and giving valid results. eBay has done some A/B experiments where A and B are exactly the same to establish whether their experimental technique has any biases.
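To make the mechanics concrete, here is a minimal Python sketch of how a site might deterministically bucket users and run an A/A calibration test of the kind described above; the hashing scheme, function names and 1% traffic slice are my own illustrative assumptions, not eBay's implementation.

```python
import hashlib
from collections import Counter

def bucket(user_id: str, experiment: str, traffic_pct: float = 1.0):
    """Deterministically assign a user to variant 'A' or 'B',
    or to None if the user is outside the experiment's slice.

    Hashing the user id together with the experiment name keeps
    assignments stable across visits and independent between
    experiments, which is what lets several experiments run in
    parallel on small slices of traffic.
    """
    h = int(hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest(), 16)
    slot = (h % 10000) / 100.0         # uniform in 0.00 .. 99.99
    if slot >= traffic_pct:
        return None                    # not in this experiment
    return "A" if (h // 10000) % 2 == 0 else "B"

# An A/A calibration run: both variants are identical, so any
# measured difference estimates the noise in the pipeline itself.
counts = Counter(bucket(f"user{i}", "aa-test") for i in range(1_000_000))
print(counts["A"], counts["B"])        # expect the two to be close
```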
Wednesday, March 26, 2008
The Cogwheel Brain
"The Cogwheel Brain" by Doron Swade is the story of Charles Babbage and his quest to build the first computer. The book also details how Doron Swade built a Babbage Difference Engine in time for the 200th anniversary of Babbage's birth in 1991.
Charles Babbage designed three machines. He started with the Difference Engine, which would use the method of finite differences to generate tables such as logarithms and navigation tables. The computing section of his first design was built, although it did not have a printer. Next he conceived and designed the Analytical Engine, a fully functioning computer that was programmed by the same kind of punched cards that were used to run a Jacquard weaving loom. In the course of designing the Analytical Engine he realized that he could improve the design of the Difference Engine to make it faster and use fewer parts. This resulted in the design of Difference Engine 2. Only small demonstration parts of the Analytical Engine were built, and Difference Engine 2 existed only as a set of plans.
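The method of finite differences is worth a short illustration: it lets a machine tabulate any polynomial using nothing but addition, which is exactly what a train of gear wheels can do. Here is a small Python sketch of my own of the tabulation; the polynomial x*x + x + 41 is the one Babbage reportedly used in demonstrations.

```python
def difference_engine(initial_column, steps):
    """Tabulate a polynomial by repeated addition, the way a
    difference engine does: each turn of the machine adds every
    difference column into the column below it.

    initial_column holds f(0) followed by the initial finite
    differences, ending with the constant difference.
    """
    col = list(initial_column)
    values = [col[0]]
    for _ in range(steps):
        for i in range(len(col) - 1):
            col[i] += col[i + 1]   # propagate differences downward
        values.append(col[0])
    return values

# f(x) = x*x + x + 41: f(0) = 41, first difference 2, second
# difference a constant 2.
print(difference_engine([41, 2, 2], 7))
# [41, 43, 47, 53, 61, 71, 83, 97]
```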
I expected the story to be similar to several other computing projects that I have seen and worked with. You know the projects, the ones where the architect keeps jumping to a new idea while the overall project goals get lost and the project overruns for years before it is abandoned. Building the Difference Engine was a lot more disciplined. The core of the first Difference Engine was built and worked, even though it used orders of magnitude more machined parts than any other machine built up to that time. While it did take a long time, the engineering practice of the day required that all the parts be made by a single craftsman in a single workshop.
One thing from the book that surprised me is that during the 19th century other difference engines were built by other engineers. Although these machines were completed, they were never successfully used for any purpose. I think this goes to show that the 19th century was not ready for mechanical computing. The book is easy to read and highly recommended.
Thursday, March 13, 2008
Customer Relationship Intelligence
There is a curious thing about the organization of a typical company. While there is one Vice President in charge of Finance and one Vice President in charge of Operations there can be up to three Vice Presidents facing the customer: a Marketing Vice President, a Sales Vice President, and a Service Vice President. On the one hand, the multiplicity of Vice Presidents and their attendant organizations is a testament to the importance of the customer. On the other hand, multiple organizations mean that no one is in charge of the customer relationship and thus no one takes responsibility for it.
We see this in the metrics that are normally used to measure and reward customer-facing employees. Marketing measure themselves on how well they find leads, regardless of whether sales use the leads. Sales measure themselves on the efficiency of the sales people in making sales, regardless of whether the customer is satisfied. Service, left to pick up the pieces of an overpromised sale, measure themselves on how quickly they answer the phone. Everyone is measuring their own actions and no one is measuring the customer.
Linda Sharp addresses this conundrum head on in her new book "Customer Relationship Intelligence". As Linda explains, a customer relationship is built upon a series of interactions between a business and its customer. For example, the interactions start with acquiring a lead, perhaps through a response to an email or mass mailing, or a clickthrough on a web site. Next, more interactions qualify the lead as a potential customer. Making the sale requires further interactions leading up to the closing. After the sale there are yet more interactions to deliver and install the product, and service interactions to keep it working. Linda's thesis is that each interaction builds the relationship, and that by recording all the interactions and giving them both a value and a cost, the business builds a quantified measure of the value of its customer relationships and how much it has spent to build them.
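As a back-of-the-envelope illustration of this bookkeeping, here is a minimal Python sketch that records each interaction with a value and a cost and sums them per customer; the class names, fields and numbers are my own assumptions, not taken from the book.

```python
from dataclasses import dataclass, field

@dataclass
class Interaction:
    kind: str      # e.g. "clickthrough", "sales call", "install"
    value: float   # estimated contribution to the relationship
    cost: float    # what the business spent on the interaction

@dataclass
class CustomerRelationship:
    customer: str
    interactions: list = field(default_factory=list)

    def record(self, kind, value, cost):
        self.interactions.append(Interaction(kind, value, cost))

    def relationship_value(self):
        return sum(i.value for i in self.interactions)

    def relationship_cost(self):
        return sum(i.cost for i in self.interactions)

# Marketing, sales and service all add to the same running total.
rel = CustomerRelationship("Acme Corp")
rel.record("clickthrough", value=5, cost=1)        # marketing
rel.record("qualifying call", value=50, cost=20)   # sales
rel.record("install visit", value=100, cost=60)    # service
print(rel.relationship_value(), rel.relationship_cost())  # 155 81
```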
Having a value for a customer relationship completely changes the perspective on that relationship. It gives marketing, sales and service an incentive to work together to build the value in the relationship rather than working at cross purposes to build their own empires. Moreover, knowing the cost of having built the relationship suggests the value in continuing the relationship after the sale is made. In the book, Linda takes the whole of the second chapter to discuss customer retention and why that is where the real profit is.
The rest of the book is logically laid out. Chapter Three, “A Comprehensive, Consistent Framework”, creates a unified model of a customer relationship throughout its entire lifecycle, from the first contact by marketing through sales and service to partnership. This lays a firm foundation for Chapter Four, “The Missing Metric: Relationship Value”, which explains the customer relationship metric: the idea that by measuring the interactions that make the relationship, we can give a value to the relationship.
The next two chapters discuss how the metric can be used to drive customer relationship strategy and tactics. The discussion of tactics lays the foundation for Chapter Seven, which shows how the metric is used in the execution of customer relationships. Chapters Six and Seven contain enough concrete examples of how the data can be collected and used to give us a feeling for the metric’s practicality. Chapter Eight compares the customer relationship metric with other metrics and explores the many ways in which it can be used. Finally, Chapter Nine summarizes the value of the Customer Relationship Intelligence approach.
Linda backs up her argument with some wonderful metaphors. One example is the contrast between data mining and the data farming approach that she proposes with her Relationship Value metric. For data mining, we gather a large pile of data and then use advanced mathematical algorithms to determine which parts of the pile may contain some useful nuggets of information. This is like the hunter-gatherer stage of information management. When we advance into the data farming stage, we know which customer relationship metrics are important and collect that data directly.
As the metaphor suggests, we are still in the early days of understanding and developing customer relationship metrics. Until now, these metrics have concentrated on measuring our own performance to see how well we are doing. Linda Sharp’s Relationship Value metric turns this on its head with a new metric that measures our whole relationship with customers. Read the book to discover a new and unified way of thinking about and measuring your customers.
Tuesday, March 04, 2008
Developing on a Cloud
The cloud computer is here and you can have your corner of it for as little as 10 cents an hour. This was the message that author and consultant Chris Richardson offered to the SDForum SAM SIG when he spoke on "Developing on a Cloud: Amazon's revolutionary EC2" at the SIG's February meeting.
As Chris tells it, you go to the Amazon site, sign up with your credit card, go to another screen where you describe how many cloud servers you need, and a couple of minutes later you can SSH to the set of systems and start using them. In practice it is slightly more complicated than this. Firstly, you need to create an operating system configuration with all the software packages that you need installed. Amazon provides standard Linux setups, and you can extend them with your requirements and store the whole thing in the associated Amazon S3 storage service. There goes another 10 cents a month.
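For a flavor of what this looks like from code, here is a minimal sketch using the Python boto library; the region, credentials, AMI id and key pair name are placeholders of my own, not a real configuration.

```python
import boto.ec2

# Connect to EC2; the credentials and region here are placeholders.
conn = boto.ec2.connect_to_region(
    "us-east-1",
    aws_access_key_id="YOUR_KEY",
    aws_secret_access_key="YOUR_SECRET",
)

# Launch one server from a stored machine image (AMI).
reservation = conn.run_instances(
    "ami-12345678",            # placeholder image id
    instance_type="m1.small",
    key_name="my-ssh-keypair",
)
instance = reservation.instances[0]
print(instance.id, instance.state)  # SSH in once it reaches 'running'
```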
Next you need to consider how your cloud servers are going to be used. For example, you could configure a classic three-tier redundant web server system: two cloud servers running web servers, another two cloud servers running Tomcat application servers, one cloud server running the database, and yet another cloud server on database standby. Chris has created a framework for defining such a network called EC2Deploy (geddit?). He has also implemented a Maven plug-in that sits on top of EC2Deploy and that creates a configuration and starts applications on each server. Needless to say, the configuration is defined declaratively through the Maven pom.xml files.
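EC2Deploy's own API is not covered here, so the following is a hedged, generic Python sketch of the same idea: express the topology as data and launch it with a loop. The role names and AMI ids are my own placeholders, not EC2Deploy's interface.

```python
import boto.ec2

# Role -> (machine image, server count) for the three-tier example;
# the AMI ids are placeholders.
TOPOLOGY = {
    "web": ("ami-web00000", 2),  # front-end web servers
    "app": ("ami-app00000", 2),  # Tomcat application servers
    "db":  ("ami-db000000", 2),  # database primary plus standby
}

def deploy(conn, topology):
    """Launch every server in the topology and return them by role."""
    servers = {}
    for role, (ami, count) in topology.items():
        reservation = conn.run_instances(
            ami, min_count=count, max_count=count,
            instance_type="m1.small")
        servers[role] = reservation.instances
    return servers

conn = boto.ec2.connect_to_region("us-east-1")  # ambient credentials
servers = deploy(conn, TOPOLOGY)
```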
So what would you want to use EC2 for? Chris suggested a couple of applications that are particularly interesting for impoverished startups. Firstly, EC2 can be used to do big system tests before a new version of the software is deployed. The startup does not need to buy all the hardware to replicate its production systems so that it can do a full-scale system test. Big system tests are done on EC2, saving considerable resources. Another use is to have a backup solution for scaling should the startup take off in an unexpected manner. Given the unreliability of ISPs these days, having a quickly deployable backup system sounds like a good idea, and the best thing is that it does not cost you anything when you are not using it.
Thursday, February 28, 2008
Develop Smarter Products
When someone asks me about Business Intelligence, I will usually say that it is about analyzing the data that a business already has, and the truth is that businesses collect huge amounts of useful data. However, there are many interesting applications where we go out and collect specific data for analysis. We heard about one such application at the February meeting of the SDForum Business Intelligence SIG where Cameron Turner, CEO of ClickStream Technologies, spoke on "Software Instrumentation: How to Develop Smarter Products with Built-in Customer Intelligence".
ClickStream Technologies has a data collector that captures user interactions with GUI-based user interfaces. That data is loaded into a data warehouse for analysis of the user experience. Contrast the ClickStream method with other techniques for analyzing program usage. The most common method is to analyze the logs generated by a program, but logs are typically recorded some distance from the user interface and tend to capture the end result of what the user did rather than how the user did it. For example, if there are several ways in which a function can be invoked, program logs normally record that the function was invoked, but not how it was invoked. Also, collecting more information involves modifying the program to increase the log data that it produces, which is not always practical. The ClickStream data collector runs as a standalone program and collects its data with minimum intrusion into the running of the program being instrumented. Another technique for gathering user experience data is to have someone stand behind the user and record what they do, but this is labor intensive and does not allow for large scale studies.
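ClickStream's collector is proprietary, but the difference between logging that a function ran and logging how it was invoked is easy to illustrate. Here is a toy Python/Tkinter sketch of my own devising, not ClickStream's implementation, that records the path the user took.

```python
import time
import tkinter as tk

events = []

def record(kind, detail):
    # A real collector would buffer these and ship them off for
    # warehouse loading; here we just keep them in memory.
    events.append({"time": time.time(), "kind": kind, "detail": detail})

root = tk.Tk()

# The same function is reachable two ways: a button and a keyboard
# shortcut. Recording which path was taken is what distinguishes
# UI instrumentation from ordinary program logs.
def save(via):
    record("invoke", "save via " + via)

tk.Button(root, text="Save", command=lambda: save("button")).pack()
root.bind("<Control-s>", lambda e: save("ctrl-s shortcut"))
root.bind_all("<Button-1>", lambda e: record("click", (e.x, e.y)))

root.mainloop()
```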
There are many reasons for evaluating the user experience with a program. Cameron lists them in his presentation, which you can get a copy of by visiting the files area of the Business Intelligence SIG Yahoo Group. The one that is closest to my heart is providing feedback to the designers of a program that their design leaves a lot to be desired. There are many times when I have become frustrated with a program because I cannot find out how to do the simplest and most obvious thing. This is when I wish that someone was recording my problems and feeding them back to the development team.
ClickStream Technologies started off as a consulting company. They are moving their offering towards something that is more standardized, with the idea that eventually customers will be able to use it on a self-service basis. Currently each engagement requires configuring the data collector and writing reports for the analysis. Also, for each engagement they recruit a panel of testers who download the data collector. As such, it is more suitable for medium to large sized companies that want to do large scale studies.
Sunday, January 27, 2008
The OpenSocial API
OpenSocial is a standard API for applications in social networking platforms, sponsored by Google. The API exists to make applications portable between different social networks. On January 22, Patrick Chanezon, OpenSocial Evangelist at Google, spoke to the SDForum Web Services SIG on the topic "OpenSocial Update: On the Slope of Enlightenment".
Social networks have been a big part of Web 2.0 and thousands of them have sprung up. In the future, social networks could become like wikis, with businesses and organizations setting up social networks to allow their employees and members to communicate with one another, so there is the potential for millions of social networks. A standard API makes a social network that supports it more valuable, because applications can be easily ported to it.
When it was first announced, there were great expectations for OpenSocial. Unfortunately, many people assumed that it was either an API for communicating between different social networks or an API for porting member data between social networks. OpenSocial is neither of these things. Social networks regard their member data as their crown jewels, so data portability or interaction between networks is not something they would welcome easily. As Patrick explained, to get the API out quickly, it had to be something uncontroversial, and as all social networks want applications, it was easy to draw them together around a common API.
Because of the great expectations, OpenSocial went through the hype cycle quickly. In a few weeks it hit the Peak of Inflated Expectations and then just as quickly descended into the Trough of Disillusionment. Now Patrick claims that they are on the Slope of Enlightenment and firmly headed towards the Plateau of Productivity. All this for an API that has only reached version 0.6.
APIs are difficult to judge, but this one seems kinda nebulous. There are three parts to the OpenSocial API. The first part is configuration, where the application can find out about its environment; the main issue seems to be coming to agreement on the names for common things in social networks. The second part of the API is a container for persisting the application's own data. Finally, the API has features for handling event streams, which seem to be a common feature of social networks. Ho-hum.
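The three parts are easier to see in a toy model. The real OpenSocial API is JavaScript; the Python sketch below uses hypothetical names of my own to model the same three-part shape, and is not OpenSocial's actual interface.

```python
class ToyContainer:
    """Hypothetical stand-in for a social network hosting an app;
    every name here is my own, not OpenSocial's."""

    def __init__(self):
        self._app_data = {}      # part 2: per-app persistent data
        self._activities = []    # part 3: the event stream

    # Part 1: configuration -- agreed names for common things.
    def environment(self):
        return {"viewer": "alice", "owner": "bob",
                "supports": ["people", "activities", "appdata"]}

    # Part 2: a container for the application's own data.
    def set_app_data(self, key, value):
        self._app_data[key] = value

    def get_app_data(self, key):
        return self._app_data.get(key)

    # Part 3: posting to and reading the event stream.
    def post_activity(self, title):
        self._activities.append(title)

    def activities(self):
        return list(self._activities)

container = ToyContainer()
container.set_app_data("high_score", 42)
container.post_activity("alice played Toy Game")
print(container.environment()["viewer"],
      container.get_app_data("high_score"))
```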
Some other interesting titbits came out of the talk. Security is a big issue with JavaScript and browsers. As I wrote previously, the Facebook approach is to have their APIs use their own language, which is easy to sanitize. The response from the rest of the world seems to be an Open Source project that filters JavaScript programs to effectively sandbox them. Unfortunately, I was not quick enough to record the name of the project.
Sunday, January 20, 2008
Music Business Models
So little time and so many things to write about. My February copy of Wired arrived before I had time to remark on the article in the January edition by David Byrne of Talking Heads on "The Fall and Rise of Music". In the article, Byrne describes six different business models for musicians in the new world of digital music. They range from selling your soul to the record company at one end of the spectrum through to eschewing any other organization so that you can create, sell and distribute your music yourself at the other. Moreover, Byrne gave examples of musicians who are using each of the business models, to show that they are all valid, working models.
From his experience of record deals with big record companies, Byrne advises any young artist to avoid selling their soul to the man, or even taking the standard record company deal where the company simply owns everything that the artist does. Surprisingly, there are many who hold the opposite position. In December, TechCrunch published their list of the 20 most popular posts of 2007. One of them was by Michael Arrington on "The Inevitable March of Recorded Music Towards Free". It is interesting to read the comments in response, particularly the many comments from people who live to sell their soul to the big record companies and their anger that the big record companies are becoming not so big.
I have laid out my opinion on the future of music in several previous posts. It is great to see Byrne explain the alternatives in such a dispassionate way. It is also great to know that there are forward looking people who are building next generation music businesses, for example RCRD LBL.
Sunday, January 13, 2008
Tao of Software Engineering
Scott Rosenberg's latest book "Dreaming in Code" contains a very readable history of Software Engineering. I will review the book in another post. Here, I want to talk about a couple of papers that are not mentioned in Scott Rosenberg's history but that are part of my Tao of Software Engineering.
The first is Melvin Conway's paper from 1968 called "How do Committees Invent?". Fred Brooks references the paper in "The Mythical Man-Month" and gave it the name "Conway's Law"; however, it was only recently that my good friend Roger Shepherd pointed me to the original. The thesis of the paper is: "Any organization that designs a system (defined more broadly here than just information systems) will inevitably produce a design whose structure is a copy of the organization's communication structure."
While Conway's paper gives examples from several different branches of engineering, it is especially applicable to Software Engineering, which is the most plastic of the engineering arts. After all, a building has to have a floor, walls and a roof; a semiconductor chip is laid out on a flat surface; most mechanical devices start with a rotary power source and after that it is a question of packaging. On the other hand, software can take on any structure, so it is absolutely natural that it should take on the structure of the organization producing it, whatever unsightly skew the organization may place on the software's structure.
It is not only the structure of the organization that influences a software architecture, it is the way the project is put together. For example, I have recently been working on a software project that started as a GUI demonstration. Because the GUI came first, it defined the client-server API and the data structures that passed through the API which then defined the database storage structure. We recently had to make significant changes to the API and data structures which has been extremely painful at such a late stage in the project.
This brings me to the second paper, Butler Lampson's "Hints for Computer System Design" from 1983. Like Conway's paper, it is not a difficult read. The paper is a collection of simple aphorisms on computer system design, with examples to back up those aphorisms. While the aphorisms are still relevant to this day, some of the examples are creaky. For example, who remembers PL/I or the SDS 940?
I reread this paper every few years and compare it with my recent experience to see what I have done right and what could be done better next time. For example, one aphorism is "Keep basic interfaces stable". Several years ago, my very first act in a project to add a significant new feature to a large system was to go through the entire code base and change some basic APIs for the new feature. In my most recent project, we have gone through months of pain because we decided to change the API just as the project was coming together.
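To make the aphorism concrete, here is a tiny Python sketch, entirely my own and not from Lampson's paper, of one way to honor "Keep basic interfaces stable": add the new capability beside the old entry point rather than changing it under existing callers.

```python
def fetch_items(limit):
    """Original basic interface: kept stable for existing callers
    by delegating to the richer function with default behavior."""
    return fetch_items_filtered(limit, status=None)

def fetch_items_filtered(limit, status):
    """New capability added alongside, not instead of, the old API."""
    items = [{"id": i, "status": "open" if i % 2 else "closed"}
             for i in range(100)]        # stand-in for real storage
    if status is not None:
        items = [it for it in items if it["status"] == status]
    return items[:limit]

print(len(fetch_items(10)))                       # old callers unchanged
print(len(fetch_items_filtered(10, "open")))      # new callers opt in
```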
Thursday, January 03, 2008
DRM Discordance
Read this article from someone who bought a new high-definition monitor for their Windows Vista Media Center PC and found out that they could no longer use the Netflix Watch Now streaming video facility. In the past I have argued that if we have to put up with DRM, it is better system engineering to put the DRM decoding in a separate box rather than try to bundle it into a general-purpose PC, as Microsoft is trying to do with Windows Vista Media Center.
By coincidence the article appears on the same day that Netflix and LG announced their partnership to integrate Netflix Watch Now streaming video into a future LG device. The TechCrunch take on the Netflix set top box is that it will be a hard sell, but if the alternative is the flaky behavior that we see in Microsoft Vista, maybe the masses will buy the box.
Tuesday, January 01, 2008
42
We know that 42 is the answer to the question, you know, about life, the Universe and everything. Also, 42 is the number of posts that I have made in this blog in each of the last three years. I have always intended to write more posts, and there are many subjects that I want to write about, but for one reason or another, they never get written or never get finished.
This year I did manage to write some more, as I started the Developer Notebook blog to cover specific technical programming topics. The Developer Notebook also has fewer posts than intended. I have several posts in my head that are just waiting for me to find the time and energy to write them down. We will try to do better in 2008.
So as they say in my birthplace "Awrra best furra New Year".