S◊FTWARE L◊ZENGE
Develop Software; Don't Suck!
Tuesday, September 30, 2014
Thursday, April 24, 2014
Compucorp 700 series and OmegaNet
On a CPU World forum, a user has discovered an old computer in his attic and is looking for information about it. It's dead, an inert lump of 30-year-old hardware marked "Compucorp 775". All over the western world, attics and basements are the burial chambers of forgotten-about computers that could blink into life if anyone cared enough to plug them in. But the hopeful inquirer is stuck - there's little or no information about the Compucorp machines on the internet and he doesn't have a boot disk. The 775 is from the early 80's, which is pre-history in internet time. It was the first machine I was paid to program. Let me tell you about it and its era.
After finishing university in the summer of 1985 I left dreary, downmarket Dublin and began work the day after my 21st birthday with Compucorp Ireland, based in Little Island, Co. Cork. It was a mystery to me how this company from sunny Santa Monica ended up in Cork, a place which in the 80's was even more dreary and downmarket than Dublin. I suspect a rather large government incentive was involved. My first task there, set by my sharp team-leader Barbara Nelson, was to write a configuration tool to be used by clients when setting up their Omeganet LANs with Compucorp 700 Series workstations. I called the tool ICONFIGURE - if I'd written that as iConfigure I could have claimed prior art on Apple, dammit.
At that time the new-fangled IBM PCs had no networking capability - in contrast OmegaNet was tremendously impressive: a 500Kbps token-ring LAN with file servers and workstations. The workstations could be diskless (the 745), booting from the server, or have two floppy disk drives (the 775) or a 5 Megabyte Winchester drive (the 785). The file server could be a 785, or a UNIX server known as the OA3200. With a CP/M-like OS called Zebra, the only application the 700s ran was Omega, the company's proprietary word processing suite. They also had a CBASIC interpreter, which is what I used to write ICONFIGURE, with some routines in Z80 assembly language.
Here are PDFs of brochures from that time describing the 700 series and Omeganet.
The 700 series workstations were based on a Z80 microprocessor and had 256 Kbytes of RAM, arranged in 64 Kbyte pages as prescribed by the Z80's memory architecture. The workstations were sold with different amounts of memory and customers were charged for upgrades, but by 1985 they all left the factory with 256 Kbytes installed - a hardware dongle, the System Configuration Module, controlled how much memory was available and upgrading meant just changing this dongle. Abbreviated to SCM, we called it the SCUM. When a 700 series machine failed it was usually the bloody SCUM that was the problem.
ICONFIGURE didn't turn out too badly: I was particularly proud of its user interface, with a simple context-sensitive help that was occasionally helpful. But the tool would be very short-lived. The IBM PC might have had little connectivity but it had popular applications, and Compucorp was in trouble, even though at the time the capabilities of the Omega word-processor and Omeganet far exceeded anything on the PC. We flailed around looking for a way to stay competitive. A port of Omega to C on UNIX was already available but the PC didn't have the horsepower to run that. So we ran UNIX on a single-board computer with an NS32032 processor that we installed in a PC, with drivers in DOS to access the keyboard and screen. Imagine that beast of machinery just to run a word-processor! While fascinating as a proof-of-concept there was, needless to say, no market for it. Compucorp US in Santa Monica also started a port of Omega to the Commodore Amiga, with the exciting name of Omega on Amiga! I recall looking at a demo of this thing with its mouse and pointer and totally not getting it.
With all of this thrashing it was obvious even to me that the company was going nowhere. And I was going nowhere in Cork writing programs in Basic. I quit after six months and joined Ericsson to see the world. Compucorp Ireland closed its doors for good in 1987 when Compucorp US morphed into a new company, Retix.
Compucorp 775
Friday, February 1, 2013
Introducing Agile? Start with a de-tox...
No one comes to agile with an open mind. Not anymore. Anyone who has yet to try agile has already got some preconceived ideas, and some of the ones I've heard are not at all helpful:
"Agile - it means coding without doing any analysis"

"Anything goes in agile, there's no discipline at all"

And if you're working with someone who has only seen bad implementations of agile then you may have real trouble sorting things out.
In one of my previous employers, some of the teams twisted scrum into a tool for micro-managing developers via daily burn-down charts. When senior management began to encourage this approach company-wide I knew it was time to leave.
So how do you (re-)introduce agile to someone with a warped understanding of it?
I think you start by going back to basics and talking about values, behaviours, and nothing else until you can hear that you've been understood. That might take several hours, or longer, but refuse to skip over it. If you start talking about the details of scrum or XP before this level of understanding is established then you will end up with another warped implementation.
So what are those values and behaviours? The important ones are:
- The Agile Manifesto: still the best and most succinct description of the agile values
- Self-organizing teams and the behaviours these imply for team-members and others
Thursday, January 31, 2013
Big Data: Yes we can! Should we?
Martin Fowler has a gift for giving brilliantly simple explanations of complex topics. His info deck on Big Data is a must-read for anyone in the software industry. As he says:
Big data is a term that's generated a lot of hype. But I think it's important to resist our usual aversion to hype in this case - there is a significant change in thinking that's happening.
This shift forces us to change many long-held assumptions about data. It opens up new opportunities, but also calls for new thinking and new skills.
(from "Thinking about Big Data" by Martin Fowler)

I think it's also important to consider the implications of this technology. The mining and correlation of big data may have consequences for our privacy and freedom that we may not like. In this early part of the 21st century, software developers are actually shaping society, for example with social networking technologies. The technology itself may be neutral, simply a fact, but its application has consequences in the wider world. As one of those society-shapers, where do you stand on this?
Facts are simple and facts are straight
Facts are lazy and facts are late
Facts all come with points of view
Facts don't do what I want them to
(from "Crosseyed and Painless" by Talking Heads)

Nicholas Carr is a particularly insightful writer on how software technology is changing the world - I recommend his recent article on Big Data as a digestif after you've devoured Fowler's info deck:
This is the nightmare world of Big Data, where the moment-by-moment behavior of human beings — analog resources — is tracked by sensors and engineered by central authorities to create optimal statistical outcomes.
(from "Max Levchin's dream" by Nicholas Carr)

Is that hyperbole? Well this is what Google's Executive Chairman Eric Schmidt says:
Technology is not really about hardware and software any more. It’s really about the mining and use of this enormous [volume of] data [in order to] make the world a better place.
(from "Google’s Schmidt: ‘Global mind’ offers new opportunities" at MITnews)

Now I kind of like Google. But who put them in charge - and can I vote them out if I don't like their vision of "a better place"? And you, skilled developer of software, how will you use your talent?
Tuesday, January 29, 2013
Software Development and the English Language
My eleven-year-old son has started programming. An avid user, gamer and PowerPoint wiz for years now, he's been increasingly curious about "how computers work". After looking at a few programming environments for beginners we've found that Microsoft's Small Basic is an excellent introduction for the budding computer scientist. For each small step he takes there's the reward of being able to program something interesting and most of the time he can make progress by himself.
Like most beginners, the first lesson he learned was that computers demand very precise instructions. The programmer must be clear about what he wants to achieve and express it precisely, so that the computer "understands". Computers are particularly fussy but any sort of writing can benefit from the same advice: be clear about what you want to say and then express it precisely.
I thought of this yesterday while reading through "Quality System" documentation. Now I'm not the greatest fan of documented quality systems, to put it politely; instead I try to live every day of my professional life according to the agile manifesto, valuing working software over documentation and processes. But sometimes I have to explain our work methods to customers and partners and it helps to have some high-level documentation. Unfortunately, in my experience, most quality documents are poorly written: they use a special insiders' language with terms I find vague like "quality policy", "quality assurance", "change management", "traceability" and on and on. In short, they would be a lot more useful if the writer was clear about what he wanted to say, why he wanted to say it, and then expressed it precisely.
George Orwell's essay "Politics and the English Language" is wonderfully clear and precise in describing the verbosity of politicians and how it might be corrected. As an essay it gives a great example to follow and its recommendations are valuable for any kind of technical writing. It concludes as follows:
- Never use a metaphor, simile, or other figure of speech which you are used to seeing in print.
- Never use a long word where a short one will do.
- If it is possible to cut a word out, always cut it out.
- Never use the passive where you can use the active.
- Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.
- Break any of these rules sooner than say anything outright barbarous.
This is another manifesto I try to follow every day of my professional life. You can tell from this blog how well or how badly I'm doing.
Tuesday, April 17, 2012
Software Platforms: where code meets politics
Sharing and re-using software within a company is a noble and rational goal, but I've seen it lead to some very ignoble and irrational behaviour.
What I now understand is that aligning products and projects around shared software has far reaching implications: it leads directly into organizational design, personal motivations and politics.
Those API's you designed now also serve as organizational interfaces between the application and platform teams. Poorly designed API's will have the teams bickering; application design choices will be limited by directives about re-use; in the worst case the motivation to produce a finely pointed solution may fizzle out, replaced by hacking around with the wrong tool for the job.
It doesn't help that re-use seems so obvious - heck, even a CFO can almost understand it! "Think of our platform as LEGO," the bean counters are told, with a helpful picture of a simple brick.
Oh yeah, re-use is that easy? Here try re-using this LEGO piece to build a house:
The worst Lego piece ever made?
Frankly, the simplicity of the re-use idea is one of its downfalls - a case of an over-simplistic metaphor taking the place of critical thinking. Typically a cute name is devised for the shared software platform - see my list in the appendix below - which allows executives to sound like they know what they want even if they couldn't begin to describe what it is. So a project may be forced to use a platform that doesn't meet its needs because of a corporate directive.
I could go on. I will go on. Here are some other problems I've encountered with shared platforms:
- They stifle innovation, or they're perceived to, which amounts to much the same thing. Most innovators will want to work rapidly outside the platform rather than deal with its constraints: do you really want to slow down work on a prototype in order to make it platform-compliant?
- Platforms are a lightning rod for politics. In my experience there is always a battle between platform centrists and application separatists that becomes a power struggle, sapping energy and diverting attention from what really matters: the utility of the solution.
- They become legacy, instantly. Since new products will begin outside the platform, the platform itself becomes the embodiment of legacy.
- They force compromises in order to come up with shared services. Not bad in itself, but user interfaces come out badly from these sorts of compromise.
- They're costly. Developing features in line with the constraints of a platform is often more expensive than just doing them. However, this should of course be balanced against the features that the platform brings to each product for free.
So is re-use a bad idea? Well no, actually: sharing and re-using software within a company is a noble and rational goal. It's just bloody hard to do right. I've wrestled with this problem for years in a few companies and while I haven't cracked the problem - and it's a slightly different problem every time - there are some lessons I've learned.
So if you do find yourself in the desperately unlucky position of leading the development of a software platform, consider the following:
- Have a plan from the outset for innovation and prototypes. Sponsor new initiatives - don't try to kill them because they don't fit with today's platform.
- Set up the platform development to iterate quickly so that you can respond to the needs of projects. Agile methods with short iterations are the only way to go.
- Nothing is more important than the design of API's and getting the right balance between simplicity and power. There's a world of knowledge out there about how to do this so go and do some research. Be ready to make mistakes and start over.
- Support all products equally. Be ready to leave some products outside the platform if including them would cause too much disruption to the architecture.
- It's impossible to create a good platform by carving it out of an existing product - a platform has to be designed for that purpose, then honed with application experience.
- Keep the platform as small as possible - don't get it involved in every piece of product development. I like the principle of subsidiarity which basically means only centralize when a goal is better achieved that way - otherwise let the projects do what they need to do. (Subsidiarity is supposed to be a feature of the EU but often isn't - that's a future topic for the BigLooLaa I think.)
- Be aware of the political power struggles and see them for what they are. The platform will need Architects and Technical Leaders who are good listeners and consensus builders who can defuse the politics. People who combine these skills with technical nous are rare so treasure them.
- Above all, the platform should be a distilled version of the best of your products. Is the greatest attribute of your products usability, or perhaps scalability? Well, that's what your platform should do best. Often platforms take on a direction that is different from that of the products, and a platform like that can never succeed.
- Be humble, ready to learn and willing to change your software.
Appendix: Cute Names, Big Challenges
ART - Adaptive Runtime Technology
The core platform for all the integration products at IONA Technologies, the name is a marketing way of saying "plug-ins dynamically loaded at run-time" but it gave us a pretty vocabulary for project names: Matisse, Warhol etc. We had lots of heated debates between our teams in Boston and Dublin, sometimes resolved by a shared interest in Guinness.
Oh, that reminds me:
- Develop shared interests amongst users of the platform, especially those interests that lend themselves to a pub setting
JaNetOR - Java Network Oadapter Replacement
CORBA-based O/R technology in Ericsson for a line of network management products. Functionally perfect, but Java circa 1998 was a terrible choice for performance-critical software. See suggestion #8 above.
STRIVE - Synthetic Tactical Real-time Interactive Virtual Environment
CAE's software platform for all of the simulated systems and environments in aircraft simulators. From a developer's perspective STRIVE does the job; for a system integrator, a critical role when building a simulator, it badly lacks tools. Missing a key requirement like that is typical for a platform that was originally designed with one product in mind and then adapted to others.
iTopia - No idea. Perhaps this is where Steve Jobs is, now that he's iDead? (Sorry!)
This is the software platform I work with today. Since I apply everything I've learned, its development is totally smooth and uncontroversial. Cough.
Labels:
architecture,
re-use,
software development
Wednesday, February 15, 2012
Quality Assurance: it doesn't assure quality
Software testing is an honorable profession; if you want to produce a quality product then you need to have smart and committed software testers in your team. But somewhere in the evolution of our industry, this profession morphed into a process and organization called QA, or sometimes Integration and Verification. For the most part, this has not been a good thing.
In many companies, quality is added to a product after it has been coded. The developers write code, then sling it into QA where it is beaten into shape. The results can be quite good: I've seen some reasonable products emerge from this strange process including, alarmingly, quite a bit of the software in aircraft systems. But it's a horribly inefficient way to work: it's impossible to predict how long the software beatings will last, it does nothing for teamwork, and it usually results in a mediocre product. Where's the professional pride of the coders and testers in such a process?
The way to go is to have most testing done as the software is being written, and the best way to accomplish that is in self-organizing teams of coders, testers and analysts. There is still usually a need for a final test before release, as part of a rapid "end-game", but that's a time for polishing, not beating.
The net result is faster progress in development and better products. In my experience I've found that testers have valuable insights that can make products simpler and more usable - a dialogue with the coder and analyst at the time the software is being written draws this out. And removing the perception of a QA "safety-net" is good for coders and analysts - it keeps them concentrated on the product goal.
One of the indicators that all is well with this set-up is when team-members prioritise the product goal over their own tasks. So if testing is falling behind, the developers and analysts roll up their sleeves and help out with the testing effort. At the end of each sprint, it’s better to produce something that works than to have untested code or unused analysis, neither of which are of much value.
QA is about cleaning up a mess that shouldn't have been made in the first place. But when properly deployed, software testing professionals add real value to products.
Careful now!
Thursday, February 2, 2012
Size doesn't matter if you're agile...
I'm a published software guru you know. The January 2007 issue of IEEE Software contained an article written by yours truly, thanks to an invitation from guest editor Dr. Ita Richardson. She saw me present at agile conferences in Ireland and thought I was absolutely the right guy to start a fight. So that's what the article is: a point-counterpoint argument, where I taunt a genuine software guru, Wolfgang Strigel, and he knocks my lights out. Thankfully it's behind the IEEE Software paywall so my ignominious defeat is well hidden.
The point I was arguing - and I'm still right, Wolfgang - was that agile methodologies can be used on both big and small projects, in small companies and in large enterprises. Size should not be a factor in accepting or rejecting agile methods because even the biggest software development project must be broken down into smaller coherent chunks, and developing these in self-organizing teams is almost always the way to go.
But that's too polite. I'm going to remove my gloves and take a bigger swing, like I should have done in that article.
Most very large software projects are disastrous failures: they cost too much, they're late and they don't achieve their goals. This presentation by Roger Sessions explores the relationship between project size and project failure - this slide is an extract.
(Yes, I know the Standish metrics are contentious these days - but even if they're only half-right the conclusion would be the same.)
The traditional methods of managing large projects, all those document-centric processes, don't work very well. My advice: if you're unable to break down a project into agile-sized chunks then either do the company a favour and call a halt to it or run very far away as fast as your skinny little software developer legs can carry you.
Sometimes, Agile is a bad fit for a project but it's not because of size. I can think of two characteristics that would militate against agile:
- A very detailed specification has been contracted with the customer. I don't just mean a description of the features, but a specification that leaves no room to manoeuvre, which is fairly rare, thankfully. (Though see below!)
- The development organization is completely distributed, with few co-located developers. How to manage virtual self-organizing teams is a problem I haven't been able to crack yet, but the co-ordination mechanisms in agile such as stand-up meetings, story-boards and solution white-boarding aren't adapted for this.
(So, you may ask, why didn't I run away from this project like I recommend above? Well dear reader, if I knew then what I know now I would have done things differently. Hence this blog.)
When I'm starting a new project I'll always try to apply an agile development approach, breaking it down into smaller chunks. I avoid projects with either of those nasty characteristics.
Labels:
agile,
project size,
software development
Code Wins!
Back in the early 2000's, the app server team at IONA had a mantra: "Code Wins!". It was a way of resolving arguments about the merits of a design. Instead of debating via emails or PowerPoints, go and prove your point in code with a prototype.
That team was one of the most creative I ever worked with, and the iPortal Application Server evolved at a tremendous pace.
I notice that almost the same phrase, "Code Wins Arguments", shows up in the Facebook registration statement, filed yesterday with the Securities and Exchange Commission. It's a powerful idea. Here's the extract from the statement:
The Hacker Way
As part of building a strong company, we work hard at making Facebook the best place for great people to have a big impact on the world and learn from other great people. We have cultivated a unique culture and management approach that we call the Hacker Way.
The word “hacker” has an unfairly negative connotation from being portrayed in the media as people who break into computers. In reality, hacking just means building something quickly or testing the boundaries of what can be done. Like most things, it can be used for good or bad, but the vast majority of hackers I’ve met tend to be idealistic people who want to have a positive impact on the world.
The Hacker Way is an approach to building that involves continuous improvement and iteration. Hackers believe that something can always be better, and that nothing is ever complete. They just have to go fix it — often in the face of people who say it’s impossible or are content with the status quo.
Hackers try to build the best services over the long term by quickly releasing and learning from smaller iterations rather than trying to get everything right all at once. To support this, we have built a testing framework that at any given time can try out thousands of versions of Facebook. We have the words “Done is better than perfect” painted on our walls to remind ourselves to always keep shipping.
Hacking is also an inherently hands-on and active discipline. Instead of debating for days whether a new idea is possible or what the best way to build something is, hackers would rather just prototype something and see what works. There’s a hacker mantra that you’ll hear a lot around Facebook offices: “Code wins arguments.”
Hacker culture is also extremely open and meritocratic. Hackers believe that the best idea and implementation should always win — not the person who is best at lobbying for an idea or the person who manages the most people.
To encourage this approach, every few months we have a hackathon, where everyone builds prototypes for new ideas they have. At the end, the whole team gets together and looks at everything that has been built. Many of our most successful products came out of hackathons, including Timeline, chat, video, our mobile development framework and some of our most important infrastructure like the HipHop compiler.
To make sure all our engineers share this approach, we require all new engineers — even managers whose primary job will not be to write code — to go through a program called Bootcamp where they learn our codebase, our tools and our approach. There are a lot of folks in the industry who manage engineers and don’t want to code themselves, but the type of hands-on people we’re looking for are willing and able to go through Bootcamp.
Monday, January 2, 2012
The cost of getting what you want
Most enterprise software is a bit crap, I'm sorry to say. You can find nicely designed single user applications, and many mobile apps are a delight, but enterprise software is hard work to use. Usability has traditionally been an afterthought.
This unhappy situation is changing, albeit more slowly than I'd like. With agile methods and interaction design, there's a lot more focus these days on building software for users rather than for process specialists or, worst of all, CIOs. Software packages are starting to show the benefits, as users' expectations are raised by what they see on consumer devices.
But those process specialists and CIOs haven't gone away, you know. They still pop up at contract time, demanding custom modifications that deface your beautifully honed software. Your own sales people are only too happy to go along: give the customer what he wants and charge him $200 an hour for the privilege - what's the problem? But you, software product guy, you must resist, and you must be prepared to fight the good fight in your own organization.
Your sales guys won't understand, so you need to convince your CEO why this is a bad idea. It's more than a question of usability, which is a fluffy concept to your alpha-male CEO anyway.
- Customization projects go wrong, frequently. Your reputation as a software vendor will suffer, not because of a deficiency in your product but because the customization is badly done.
- Many people involved in negotiating contracts with a vendor are naive when it comes to software. They think you can specify and cost everything up front, the poor things. Projects run like this often result in software that meets all the terms of the contract and none of the needs of the users. They'll complain, they'll be unhappy, and the people who actually asked for the customizations will blame you!
- Customized software is notoriously difficult and costly to upgrade. If you want a loyal customer who stays with you through many product releases you'd be best to limit the amount of customization.
- If your software needs to be heavily customized to suit a customer this could mean that either (a) your software is missing features it really needs to have or (b) this customer is not really in your target market. If (a) is true don't customize - build the generic features your customer wants in the next release. If (b) then walk away and put your attention on the market you're trying to build - this customer is diverting you from your goals.
In short, only sell customization if you really, really have to - and be ready to explain to your customer why getting exactly what he wants could be a bad idea. Have them read the article Package Customization by Martin Fowler as a primer on the topic. You may actually find that your customer appreciates the advice - after all, you're the expert on software development in this relationship.
Labels:
agile,
customization,
software development,
usability