I often have thoughts forming in my head about how we can actively encourage serendipity to happen more often. Then I had an opportunity to learn more about how others do it.
At Agile Open Northwest recently, I was practicing mob programming. Conference participants had gathered in a nearby hotel in the evening next to a cozy fireplace and a commandeered TV screen. In my sketchy notes from that evening, I found a book title, The Luck Factor, mentioned by the facilitator, Llewellyn Falco. The title piqued my interest. It might have been related to a discussion about having a bias toward action while mobbing, which makes sense to me now, but perhaps not for the reason Llewellyn had in mind.
I found several different books titled “The Luck Factor.” Llewellyn later told me that he was thinking of the one by Dr. Richard Wiseman. So I bought it. But I couldn’t help also getting one by Max Gunther, which was written much earlier. I read the Gunther version first, and here are my impressions of it. The full title, by the way, is The Luck Factor: Why Some People Are Luckier Than Others and How You Can Become One of Them. It was published in 1977.
More than half of the book is spent debunking things that good luck is mistakenly attributed to, such as paranormal effects and numerology, along with some amusing stories. But it doesn’t really start to deliver on its promises until part IV, “The Luck Adjustment.” Here are the essential parts of the solution, with my paraphrasing:
The Spiderweb Structure – being gregarious and building a broad social network
The Hunching Skill – recognizing when your subconscious is acting on real data and making educated guesses that you should pay attention to (vs. when it’s just wishful thinking)
Fortune Favors the Bold – basically, carpe diem, and make sure to answer the door when opportunity knocks
The Ratchet Effect – when that opportunity turns sour, be just as bold in cutting your losses
The Pessimism Paradox – plan for the worst, because you’ll sometimes need to act on those plans
These are all things I’m pretty familiar with; the one I could stand to improve the most is my “hunching” skill. There are some especially useful ideas in that chapter.
I have Wiseman’s Luck Factor in my to-read pile now. Looking at the table of contents, it seems there may be a good deal of overlap with Gunther’s book. I’ll let you know how it goes.
A reader discovered my article from 2018, “It’s a Wonderful Career,” where I said this about software testing jobs: “Get out!” and “Don’t transition to doing test automation.” He asked if I still feel that way now, and the answer is “yes.” Here’s an update on my thoughts.
I’ve been a programmer for most of my life, but a majority of my career has been clustered around software testing. That shifted in 2017 when I took a job as a developer. Since then I’ve flirted with software quality in a 6-month role coaching some test automation folks who were building their programming skills, but otherwise I’ve stayed on the developer track. In my developer roles, I have worked with production code as well as a wide variety of automated tests, plus documentation and release pipeline automation. There have been no testing specialists involved at all.
In my post “Reflections on Boris Beizer” I briefly mentioned how testing as a specialty role has waxed and waned. It was perhaps the 1980s when the software testing specialist became a common role that was distinct from software developers. Fast forward to 2011, when Alberto Savoia gave his famous “Test is Dead” keynote.
I first wrote about the possible decline of the testing role in 2016 in “The black box tester role may be fading away.” I suggested that testing might be turning into low-pay manual piecework (think Amazon Mechanical Turk), but I don’t see any evidence now that that’s coming true. I mentioned the outsourcing company Rainforest QA. A company by the same name now offers no-code AI-driven test automation instead.
I followed up in “What, us a good example?” where I wrote about companies that aren’t evolving their roles, and yet they’re surviving, but they’re going to have to survive without me. My expectations have evolved.
I know that many people are still gainfully employed as testing and test automation specialists. I can’t fault them for doing what works for them. And I’ll admit that it’s still tempting to return to my comfort zone and focus on testing again. Maybe I can shed some light on why I’m resisting that temptation. It’s pretty simple, really.
As a developer, my salary has grown tremendously. There are multiple factors involved here, including moving to a different company a few years ago. But the organization I’m working in has no testing specialists, so this opportunity wouldn’t have been available to me if I were applying for a job as a tester. I have a sense there are many more jobs open for developers than for testers out there, especially with a salary that I would accept now, and I’m curious if anyone has any numbers to back that up.
I don’t have any issues with people not respecting my contribution to the product. I don’t have to justify myself as overhead – I have a direct impact on a product that’s making money. And I still do all aspects of testing in addition to developing, maintaining, and releasing products.
In “The Resurrection of Software Testing,” James Whittaker recently described the decline of the testing role as being even more precipitous than I thought. And he also says that it needs to make a comeback now because AI needs testing specialists. He’s organizing meet-ups near Seattle to plot a way forward. I don’t have an opinion on whether AI is going to lead to a resurgence in testing jobs. Instead I’m focusing on an upcoming conference where I’m going to hone my development skills.
And that’s really where I stand on a lot of discussions about testing jobs – no opinion, really. I don’t benefit by convincing testing specialists to change their career path, but I’m happy to share my story if it helps anyone navigate their own career.
One thing I do ponder: there are still so many organizations out there that employ testing specialists, and I might end up working as a developer for one of them some day. How strange that would be if it comes to pass.
[credit for feature image: screen shot from the video of Alberto Savoia’s “Test is Dead” presentation at the 2011 Google Test Automation Conference]
In this installment of “Jerry’s Story,” we’ll get some background on the first IBM machine that Jerry Weinberg learned how to operate, even if it still wasn’t the first computer he programmed. See the home page for Jerry’s Story for the other installments.
In Jerry Weinberg’s first week working at IBM in June 1956, he learned how to use a data processing machine, the IBM Type 607 Electronic Calculating Punch. This type of machine is also called a tabulator, unit record equipment, or an electric accounting machine. But what you’re not likely to hear it called is a computer. Permit me, if you will, to explore the roots of this machine.
In the 1880s, Herman Hollerith built the first punch card tabulator. He established a company that would evolve into the International Business Machines Corporation, along with a few other companies that manufactured business machines like time clocks and computing scales. As we follow the evolution of tabulators, we’ll focus on IBM’s innovations, though IBM did have competitors producing tabulators of their own.
The core feature of the first tabulator was essentially one thing: counting. Like a hand-held tally counter, it could add 1 to a count repeatedly. It had forty counters that could each count up to 9,999. The machine could be wired to recognize a certain spot in a punched card and increment one of the counters when a card was presented that had a hole in that spot. When a batch of cards was done, the operator would read the values from the relevant counters and write them down by hand.
Many of the basic components of the first tabulator weren’t new – punch cards had been used for automated looms, and in a similar way, paper tape was used for automated musical instruments like player pianos. Mechanical counters had been used in many different forms, though it’s possible that Hollerith’s design was unique. Cash registers already had printers. We already had mechanical calculators. The innovation was being able to handle a large number of tabulations (and later, more complex calculations) much more quickly than ever before.
In 1886, the first tabulator was put to use in the Baltimore Department of Health. The components were electro-mechanical. A manually operated card reader was built-in. The cards were prepared using a keyboard-based punch. Next to the tabulator was a sorter, where a lid over one of the bins would automatically pop open so the operator could manually place a card in the correct bin.
Customer needs called for improved capabilities, and in 1889, the Hollerith Integrating Tabulator was able to add two numbers as well as tabulate. Subtraction didn’t become a feature until 1928. Multiplication followed quickly in 1931, first on the IBM 601 Multiplying Punch. It took six seconds to multiply two eight-digit whole numbers. Then in 1946, the IBM 602 Calculating Punch could do division.
Among other innovations worth noting was the automatic card feed in 1900, followed the next year by the Hollerith Automatic Horizontal Sorter, allowing a machine to process a batch of cards without needing anyone to move the cards through the machine.
In 1906, control panels (also called “plugboards”) were added that allowed operators to change the way a tabulator works by moving plugs to different sockets on the panel. Before then, changing the way the punch card data was tabulated required re-soldering the wires connected to the counters. Then perhaps as early as 1928, IBM introduced removable control panels, allowing control panels to be prepared while not connected to the machine, which greatly reduced the downtime when changing the function of the machine. Operators could maintain a library of prepared control panels.
By 1920, printers were developed, at first only able to print numbers. This freed operators from having to manually write down the values shown on the many dials. Somewhere around 1933 we had the first alphabetic tabulator, able to print out words in addition to numbers.
In 1946, the evolution of IBM’s 600 series continued with the 603 Electronic Multiplier, which used 300 large vacuum tubes. It’s called the world’s first mass-produced electronic calculator, though only about 20 units were built. Two years later, the IBM 604 Electronic Calculating Punch was a much improved model, with more than 1000 miniature vacuum tubes. It would sell more than 5000 units. Later that year, the IBM 605 Electronic Calculator, a slightly modified 604, was released. There didn’t seem to be a model “606” from IBM.
That brings us up to 1953 and the release of the IBM 607 Electronic Calculating Punch that Jerry would get his hands on a few years later. It was similar to the 604, and it included a memory unit. It was able to read data, but not program instructions, from punch cards. According to the Columbia University Computing History web site, the 607 weighed a little more than 2 tons, occupied 36 square feet of floor space, and had a heat load of 26,000 BTU.
The printer Jerry used with the 607 was an IBM 402 Accounting Machine. The 402 could print 43 letters and numbers on a line, followed by up to 45 numbers on the right side of the line. It was introduced in 1948. This can get a bit confusing, as the function of the 402 overlapped a bit with the 607. Later he used the IBM 407 as a printer.
Jerry described the machines he used with the 607 and how he configured them:
The 402 was a ‘tabulating’ machine. The wiring boards allowed formatting of the input and the output. I could add up numbers from successive cards and print totals, but no other calculations except by tricks. Like, you could multiply by 2 by adding, or take 1 percent of a number by displacing wires by two places.
The keypunch was ‘programmed’ with a code punch card.
The verifier was a fixed function machine that compared a punched card with the supposedly duplicate key strokes.
The sorter had no programming except by what the operator chose to do, which was basically by choosing a card column to sort on and the handling of the card for successive sort runs. You could only sort on one column at a time, and for alpha sorting you had to sort twice on the same column, if I remember correctly. We didn’t do a lot of alpha sorting because it was a PIA, so wherever possible, we used numeric codes.
The others [607 and reproducing punch] were wired program machines.
Following the 607 in 1957 was the IBM 608 Transistor Calculator, fully transistorized with no vacuum tubes. In 1960, the IBM 609 Calculator improved on this by adding core memory. This was the end of the line for the 600 series at IBM. But the IBM 407 wasn’t withdrawn from marketing until 1976.
Coming up next, a look at the history of the machine Jerry ran his first program on, the IBM 650, and programming in general.
References
I’ve chased many squirrels in the last few years trying to produce this installment. I finally realized that this installment was trying to be an entire book of its own. I’m satisfying that itch in a small way by cutting out a great deal of scope, and covering tabulators here, and computers in the next installment, then going back to focusing on Jerry.
Below are some of the resources I’ve used in producing this installment. I apologize for not having good enough notes at this point to be able to footnote all of the facts with the relevant reference.
Herman Hollerith: Forgotten Giant of Information Processing, Geoffrey Austrian, 2016
I know you can’t tell from the looks of it, but I’ve been hanging around here a lot lately, working on a post that wants to turn into a book of its own. I’m making good progress getting it tamed into a reasonable length. But first, inspired by LinkedIn posts from James Bach and from Jon Bach and subsequent comments, I want to explore another idea rattling around in my head.
I’m going to talk about three examples where members of a community were accused of groupthink of some sort. In many cases, some people observing the communities say that they see cult-like behavior. I’d prefer not to use the derogatory term “cult” here for a couple of reasons. One, because cult leaders actively encourage groupthink and blind obedience, and I don’t see that happening in these cases (even if their followers are picking up some of these traits). And also, because real cults have done a lot of damage, such as permanently ripping families apart. Let’s not equate that with what I’m talking about here.
Example 1: I learned a lot from the author and consultant Jerry (Gerald) Weinberg. I am one of his followers. People outside his circle often don’t understand the level of devotion that many of his followers exhibited during his life and afterward. Someone even coined a term for it: “Weinborg”, which many of us have adopted for ourselves. (If you don’t get the reference, look up the fictional “Borg” in the Star Trek universe – we have been assimilated.)
I attended three of Jerry’s week-long workshops. Every time I’ve been through an intense experiential training, it has been a deeply moving experience. That’s true of Jerry’s workshops, plus other experiential trainings I’ve attended (several Boy Scout training sessions come to mind, for example). Once you’ve recovered, you want more. But you can’t effectively explain why it was so moving to someone who wasn’t there. In fact, for many of these trainings, you can’t give away too many of the details, or you may ruin the experience for someone who attends later.
There are likely many other behaviors among Jerry’s followers that looked odd to outsiders. Perhaps we would invoke his laws by name, like “Rudy’s Rutabaga Rule”. Or we might reference “egoless programming” and point to the book where Jerry wrote about it. We might get ourselves into trouble, though, if we recommended that people follow his ideas without being able to explain them ourselves. “Go read his book” isn’t very persuasive if we can’t give the elevator speech ourselves to show the value in an idea.
Early in my career, a wise leader cautioned me to build opinions of my own rather than constantly quoting the experts. That has been a guiding principle for me ever since, and one that I hope has steered me away from groupthink.
Example 2: James Bach is a consultant and trainer who has influenced a lot of people in the software testing community, along with his business partner. I have learned a lot from James, and I continue to check in with him periodically, though I have never chosen to join his community of close followers. Incidentally, he has also been influenced by Jerry Weinberg.
James has grown a community of people who agree on some common terminology, which streamlines their discussions. It gets interesting, though, when someone uses that terminology outside that community without explaining what it means to them. I remember attending a software quality meetup that advertised nothing indicating that it was associated with James Bach or his courses. But then I heard the organizers and some attendees use terminology that I recognized as originating from James. It’s been several years since the meetings I attended, but I think I remember them presenting other ideas that closely align with what James teaches, not always identifying where they came from or why they recommended them. I vaguely remember that I stood up once or twice and told them that I hadn’t accepted some of those ideas, and I don’t recall the discussion going very far.
If a group has an ideology that they expect participants to adopt as a prerequisite for participating, that’s fine, but it needs to be explicit. Otherwise, they need to be prepared to define their terms and defend their ideas.
Example 3: I participate in the “One of the Three” Slack forum and often listen to its associated podcast created by Brent Jensen and Alan Page. They have spoken out about James Bach and his community a few times. At one point, some participants piled on to some negative comments that seemed to have no basis other than “because Alan and Brent said so.” I called them out for groupthink, not unlike the very thing they were complaining about. Fortunately, I think they got the message.
I remember talking about the “luminary effect” with author and consultant Boris Beizer years ago. This is where people hesitate to challenge an expert, especially a recognized luminary in their field, because of their perceived authority on a topic. But in fact, all of the experts I’ve mentioned love for you to give them your well-reasoned challenges to any of their ideas. Granted, the more they love a debate, the more exhausting and intimidating it can be to engage with them. They are smart, after all, and you need to do your homework so you can competently defend your ideas – that’s not asking too much, right? In fact, one of the best ways to get their respect is to challenge them with a good argument. I just hope that a few of my ideas here will survive their scrutiny.
In this post I’m talking about some controversial people and some controversial topics. Where I’ve stayed neutral here, I’ve done so very deliberately, and though I have some further opinions unrelated to the topics I’m discussing, I’m not going to muddy this post with them.
Software developers, have you had this experience? You start to fix a bug or add a feature to some existing code, and you have a hard time working with the code because it’s poorly designed. It might not have decent unit tests. It might be full of code smells like long functions, poor naming, and maybe even misspelled words in names and comments. It’s really difficult not to complain about the state of the code you have to work with. If you’re pairing like I often do, you’ll complain to your pair. Or maybe you’ll whine about it to your whole team.
I’m going to make a case for developers to tone down the whining.
I can find peace when I’m annoyed by software that’s hard to maintain by remembering Boulding’s Backward Bias from Jerry Weinberg’s book The Secrets of Consulting: “Things are the way they are because they got that way.” Jerry attributed this to his mentor, the economist Kenneth Boulding. It was possibly inspired by biologist D’Arcy Wentworth Thompson, who said, “Everything is the way it is because it got that way.”
Boulding’s Backward Bias, in a tautological sort of way, reminds us to consider the potentially complex history that got us to where we are now. Weinberg points out “There were, at the time, good and sufficient reasons for decisions that seem idiotic today.” And, he says, the people who created the problems might still be around, and they might be in a position of authority. This leads to what Weinberg calls Spark’s Law of Problem Solution: “The chances of solving a problem decline the closer you get to finding out who was the cause of the problem.”
So resist the urge to track down who committed the code you’re concerned about. But do try to put yourself in their shoes when they were writing the code. Let’s consider a number of possible factors that could lead to awful code.
Maybe the developer wanted to do better, but they had constraints that prevented them from doing so. Common examples are schedule pressure or not thinking they have permission to write unit tests. Frequently I’ve seen that these constraints are imaginary; management probably wants developers to take the time to do it right the first time, but the developer nevertheless puts pressure on themselves to finish faster.
Maybe the developer was inexperienced at the time they developed the code, and there wasn’t enough technical leadership oversight to notice and correct the problems.
Maybe the developer thoughtfully chose a different design standard than the one that you’re judging the code by.
Maybe the developers who worked with the flawed code after it was initially written didn’t feel empowered to improve the design.
When dealing with organizational issues, you might want to learn about the history of how you got here. But with internal code design decisions, I find that it’s sufficient to understand that there probably were good reasons for them without knowing what the reasons actually were. Granted, if you can’t figure out why a particular feature works the way it does, that may require some historical investigation, and that’s beyond what I’m discussing here.
Complaining wastes time and distracts from getting the work done. What if some of the people who wrote that code hear your complaints? Some people are good-natured about such criticism, but not everyone is likely to appreciate these complaints about their work.
Typically when I start working with problematic code, I’ll grumble about it either out loud or to myself, but then I’ll get to work on it. My approach was heavily influenced by Michael Feathers’ book Working Effectively with Legacy Code. I will do enough refactoring to make sure I can write unit tests to cover the code I’m working with. I might do additional refactoring to make the code more readable. But I have to make tough choices about how deep to go with improving the code, or else I wouldn’t ever get much work done. I think I’ve done pretty well with this.
When I asked about this on Twitter, some of the responses indicated deeper issues than hearing whining about bad code.
There was a report about developers who said the code was too far gone to fix. There was some discussion about code ownership – it’s better to talk about how the team’s code has problems, rather than complaining about the output of one specific person. There was even a mention of a developer who wouldn’t fix the code because it was written by someone else. A few people didn’t think the complaints were much of a problem, and they suggested having a dialog with the original authors to get help improving the code.
Have you or will you ever write code that isn’t perfect? Could your own code be the subject of someone else’s complaints? Surely it will, and my hope is that the team will focus on making whatever improvements are necessary to get the job at hand done effectively without worrying about why the code is harder to work with than they’d like.
In this installment of “Jerry’s Story,” we’ll take a quick look at the computer job market when Jerry Weinberg started his career, plus a peek at his first project at IBM. Refer to the home page for Jerry’s Story to see the other installments.
When Jerry applied for a job at IBM in early 1956, he was answering a job ad in Physics Today. He said this was the first computer job ad he ever saw. I found what was most likely the ad he saw, which he might have seen in either the January or March, 1956 issues of Physics Today. I looked through the 1955 and 1956 Physics Today archives and can give a bit of context around what it may have been like to look for a computer job at the dawn of the computer age.
The term “programmer” was not very common in these job ads in the mid-1950s. Surprisingly, one job ad from an unidentified company in the Gulf South region mentions programmers. It was in the context of a “computer-analyst” who is expected to be able to supervise a team of programmers for a magnetic drum computer. Other job titles that involved directly supporting or using computers included “machine operator,” “draftsman,” “engineer,” “designer,” “mathematician,” “physicist,” “scientist,” and of course, IBM’s “applied science representative.” That same unidentified company, amazingly, didn’t insist on experienced candidates: “Knowledge of digital computer techniques desirable but not essential.” National Cash Register, on the other hand, was looking for a senior electronic engineer with a master’s degree “and minimum of 2 years digital computer experience.”
At least one job ad didn’t make it clear whether they were talking about working with human computers or machines. In the 1950s, it still wasn’t unusual for someone to be employed as a “computer” doing manual calculations (as Jerry had done while he was at the University of Nebraska).
The wording in the ads in this era didn’t necessarily encourage diversity. Melpar was looking for engineers, saying it was “an opportunity for qualified men to join a company that is steadily growing.” IBM, in its applied science representative ad, used phrases like “For the mathematician who’s ahead of his time…” “This man is a pioneer, an educator…” and “You may be the man….” One ad even gave an acceptable maximum age for applicants.
One mystery that remains is why Jerry hadn’t noticed a computer-related job ad sooner. I found ads for data processing and computer jobs in publications such as Scientific American and Popular Science as early as 1952. At least five different companies placed job ads that mentioned computers in Physics Today in 1955 and early 1956. One, in March 1955, was an IBM ad similar to the one that got Jerry’s attention in early 1956. Though Jerry was a voracious reader, he had missed reading about several opportunities to fulfill his dream of working with computers. We can presume that by the time he got to college, he was no longer able to absorb all available information around him like he could when he was sitting at the breakfast table reading everything on the cereal box. I did notice that the frequency of the mentions of computer jobs was much higher in 1956 than even 1955, so the odds of one of them getting noticed were going up over time.
A few ads mentioned both analog and digital computers, including ads from General Electric and Melpar. In June 1956, a Honeywell Aeronautical Division ad said, “Several unusual positions are open in our Aeronautical Research Department… Experience or interest is desirable in digital and analog computing…” Jerry’s first programming project for a client involved writing a program to replace an old analog computer.
There were two room-sized electronic analog computers that were used to analyze hydraulic networks for city water systems in the U.S., one in Oregon and one in New York. They were built using resistors, and they solved systems of non-linear algebraic equations. To use one, you had to travel to one of the two locations and spend several days setting it up for a single calculation, all of which would cost thousands of dollars. IBM was tasked to replace these analog computers with a program that could run on any IBM 650.
Jerry partnered with civil engineer Lyle Hoag on the project. He said the two were essentially doing pair programming and test-first development as they replicated the analog computer’s features. Though the 650 wasn’t appreciably smaller than the analog computer, we can surmise that the program could run much more quickly and cheaply than its predecessor, and it could run anywhere there was an IBM 650 installation.
In his 2009 blog post “My Favorite Bug,” Jerry wrote about how this project produced his first and favorite bug. When the program passed all of the tests they had written, the pair brought in a small real-world problem to solve. After waiting two hours with no result, they were about to abort the program, when finally it started printing the results. This spurred them to make improvements in the program’s usability.
The experience led to Jerry’s first published article: “Pipeline Network Analysis by Electronic Digital Computer” [paywalled] (Lyle N. Hoag and Gerald Weinberg, May 1957, Journal of the American Water Works Association, vol. 49, no. 5). He hadn’t yet decided to use his middle initial in his “author name.” Jerry told me he got some unexpected fame from the article:
I had training in electrical engineering as part of my physics education, so I was familiar with networks and flow equations. As the article points out, the same program (modified) could be used for all sorts of network flow. But most of the civil engineering was provided by my partner, Lyle Hoag.
Many years later, I was way up north in Norway up in the fjords teaching a class in programming or something, and some student came up to me at the first break and said ‘Are you the famous Gerald Weinberg?’ I had published a few books by that time. I asked him, ‘Which book?’ ‘It’s not your book,’ was the answer, ‘it’s your program for hydraulic networks. Civil engineers everywhere use this, and they all know your name.’ It’s the only civil engineering paper I ever wrote. My partner became a famous civil engineer. He’s got quite a reputation; they named a few awards for him.
My team is committed to Test-Driven Development. Therefore, I was struck with remorse recently when I found myself writing some bash code without having any automated unit tests. In this post, I’ll show how we made it right.
Context: this is a small utility written in bash, but it will be used for a fairly important task that needs to work. The task was to parse six-character record locators out of a text file and cancel the associated flight reservations in our test system after the tests had completed. Aside: I was also pair programming at the time, but I take all the blame for our bad choices.
We jumped in doing manual unit testing, and fairly quickly produced this script, cancelpnrs.bash:
#!/usr/bin/env bash
for recordLocator in $(egrep '\|[A-Z]{6}\s*$'|cut -d '|' -f 2)
do
recordLocator=$(echo -n $recordLocator|tr -d '\r')
echo Canceling $recordLocator
curl "http://testdataservice/cancelPnr?recordLocator=$recordLocator"
echo
done
The testing cycles at the command line started with feeding a sample data file to egrep. We tweaked the regular expression until it was finding what it needed and filtering out the rest. Then we added the call to cut to output the record locator from each line, and then put it in a for loop. I like working with bash code because it’s so easy to build and test code incrementally like this.
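The incremental workflow went roughly like this (the sample file name and its contents here are made up for illustration, but the pipe-delimited format mirrors the real input):

```shell
# Hypothetical sample of the input data: a timestamp line, then
# pipe-delimited lines with a six-letter record locator in field 2.
printf 'Thu Apr 02 14:23:45 CDT 2020\nCheckin2Bags_Intl|LZYHNA\n' > sample.txt

# Step 1: tweak the regular expression until it matches only the PNR lines.
egrep '\|[A-Z]{6}\s*$' sample.txt

# Step 2: add cut to pull out just the record locator field.
egrep '\|[A-Z]{6}\s*$' sample.txt | cut -d '|' -f 2

# Step 3: wrap the pipeline in a for loop, echoing instead of
# hitting the real cancellation service.
for recordLocator in $(egrep '\|[A-Z]{6}\s*$' sample.txt | cut -d '|' -f 2)
do
  echo "Would cancel $recordLocator"
done
```

Each step gives immediate feedback at the prompt, which is what makes this style of incremental build-and-test so comfortable in bash.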
After feeling remorse for shirking the ways of TDD, I remembered having some halting successes in the past with writing unit tests for bash code. We installed bats, the Bash Automated Testing System, then wrote a couple of characterization tests as penance:
#!/usr/bin/env bats
# Requires that you run from the same directory as cancelpnrs.bash
load 'test/libs/bats-support/load'
load 'test/libs/bats-assert/load'
scriptToTest=./cancelpnrs.bash
@test "Empty input results in empty output" {
run source "$scriptToTest" </dev/null
assert_equal "$status" 0
assert_output ""
}
@test "PNRs are canceled" {
function curl() {
echo "Successfully canceled: (record locator here)"
}
export -f curl
run source "$scriptToTest" <<EOF
Thu Apr 02 14:23:45 CDT 2020
Checkin2Bags_Intl|LZYHNA
Checkin2Bags_TicketNum|SVUWND
EOF
assert_equal "$status" 0
assert_output --partial "Canceling LZYHNA"
assert_output --partial "Canceling SVUWND"
}
We were pretty pleased with the result. Of course, the test is a good deal more code than the code under test, which is typical of our Java code as well. We installed the optional bats-support and bats-assert libraries so we could have some nice xUnit-style assertions. A few other things to note: when we invoke the code under test with “source”, it runs all of the code in the script. This is something we’ll improve upon shortly. We needed to stub out the call to curl because we don’t want any unit test to hit the network, and that was easy to do by defining a function in bash. The sample input in the second test gives anyone reading it a sense of what the input data looks like.
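The stubbing trick is worth calling out on its own: in bash, a function definition shadows any binary of the same name for the rest of the shell session (and, with export -f, for child bash processes too). A minimal sketch, with a made-up endpoint URL:

```shell
# A function named curl shadows the real curl binary in this shell,
# so nothing here ever touches the network
curl() {
  echo "stub curl called with: $*"
}

# Any call to curl now hits the stub (hypothetical endpoint)
result=$(curl "http://testdataservice/cancelPnr?recordLocator=ABC123")
echo "$result"
```

The same idea works for any external command a script depends on, which makes bash code surprisingly stub-friendly.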
Looking at the code we had, we saw some opportunities for refactoring to make it easier to understand and maintain. First we needed to make the code more testable: we wanted to extract some of the code into functions and test those functions directly. We started by moving all of the cancelpnrs.bash code into one function and adding one line of code to call it. The tests still passed without modification. Then we added some logic to detect whether the script is being invoked directly or sourced into another script, so that it calls the main function only when invoked directly. Now, when the test sources the script, the code does nothing but define functions, yet it still works the same as before when invoked from the command line. We changed the existing tests to call a function rather than expecting all of the code to run when we source the code under test. This transformation would be typical for any kind of script code you want to unit test.
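The sourced-versus-executed detection is the key move here, and the guard from the script can be demonstrated in isolation. This sketch writes a tiny throwaway script (the file name and function body are placeholders) and runs it both ways:

```shell
# Create a tiny script that uses the same guard as cancelpnrs.bash
demo=$(mktemp /tmp/guard_demo.XXXXXX)
cat > "$demo" <<'EOF'
main() { echo "main ran"; }
# Run main only when this file is executed directly, not when sourced:
# when sourced, BASH_SOURCE[0] and $0 refer to different files
if [ "${BASH_SOURCE[0]}" -ef "$0" ]; then
  main
fi
EOF

directOutput=$(bash "$demo")               # executed directly: main runs
sourcedOutput=$(bash -c "source '$demo'")  # sourced: only functions get defined
echo "direct: [$directOutput] sourced: [$sourcedOutput]"
rm -f "$demo"
```

When sourced, the file quietly defines its functions and exits, which is exactly what a test harness wants.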
At this point, following a proper TDD process felt very similar to the development process in any other language. We added a test that called a function we wanted to extract, and fixed bugs in the test code until it failed for the right reason: the function didn’t yet exist. Then we refactored the code under test to get back to “green” in all the tests. Here is the current unit test code with two additional tests:
#!/usr/bin/env bats
# Requires that you run from the same directory as cancelpnrs.bash
load 'test/libs/bats-support/load'
load 'test/libs/bats-assert/load'
scriptToTest=./cancelpnrs.bash
carriageReturn=$(echo -en '\r')
setup() {
source "$scriptToTest"
}
@test "Empty input results in empty output" {
run doCancel </dev/null
assert_equal "$status" 0
assert_output ""
}
@test "PNRs are canceled" {
function curl() {
echo "Successfully canceled: (record locator here)"
}
export -f curl
run doCancel <<EOF
Thu Apr 02 14:23:45 CDT 2020
Checkin2Bags_Intl_RT|LZYHNA
Checkin2Bags_TicketNum_Intl_RT|SVUWND
EOF
assert_equal "$status" 0
assert_output --partial "Canceling LZYHNA"
assert_output --partial "Canceling SVUWND"
}
@test "filterCarriageReturn can filter" {
doTest() {
echo -n "line of text$carriageReturn" | filterCarriageReturn
}
run doTest
assert_output "line of text"
}
@test "identifyRecordLocatorsFromStdin can find record locators" {
doTest() {
echo -n "testName|XXXXXX$carriageReturn" | identifyRecordLocatorsFromStdin
}
run doTest
assert_output $(echo -en "XXXXXX\r\n")
}
You’ll see some code that deals with the line ending characters “\r” (carriage return) and “\n” (newline). Our development platform was Mac OS, but we also ran the tests on Windows because the cancelpnrs.bash script also needs to work in a bash shell on Windows. The script ran fine under git-bash on Windows, but it took some tweaking to get the tests to work on both platforms. There is surely a better solution to make the code more portable.
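The crux of the portability problem is the trailing “\r” that Windows-style lines carry. The filterCarriageReturn function from the script handles it with tr; a quick sketch of the effect:

```shell
# Same one-line function as in cancelpnrs.bash
filterCarriageReturn() {
  tr -d '\r'
}

# A line as it might arrive from a Windows-edited file: note the
# trailing carriage return before the (stripped) newline
windowsLine=$(printf 'Checkin2Bags_Intl|LZYHNA\r')
cleaned=$(printf '%s' "$windowsLine" | filterCarriageReturn)
echo "before: ${#windowsLine} chars, after: ${#cleaned} chars"
```

The cleaned line is one character shorter, and string comparisons in the tests stop failing mysteriously on one platform.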
We installed bats from source and committed it to our source repository, and followed the instructions to install bats-support and bats-assert as git submodules. We’re not really familiar with submodules and not entirely happy with having to do a separate installation of the submodules on every system we clone our repository to (we have to run “git submodule init” and “git submodule update” after cloning, or else remember to add the option “--recurse-submodules” to the clone command).
Running the tests takes a fraction of a second. It looks like this:
$ ./bats test-cancelpnrs.bats
✓ Empty input results in empty output
✓ PNRs are canceled
✓ filterCarriageReturn can filter
✓ identifyRecordLocatorsFromStdin can find record locators
4 tests, 0 failures
Here is the current refactored version of cancelpnrs.bash:
#!/usr/bin/env bash
cancelEndpoint='http://testdataservice/cancelPnr'
doCancel() {
for recordLocator in $(identifyRecordLocatorsFromStdin)
do
recordLocator=$(echo -n $recordLocator | filterCarriageReturn)
echo Canceling $recordLocator
curl -s --data "recordLocator=$recordLocator" "$cancelEndpoint"
echo
done
}
identifyRecordLocatorsFromStdin() {
egrep '\|[A-Z]{6}\s*$' | cut -d '|' -f 2
}
filterCarriageReturn() {
tr -d '\r'
}
if [ "${BASH_SOURCE[0]}" -ef "$0" ]
then
doCancel
fi
There are two lines of code not covered by unit tests. Because the one test that exercises the loop body in doCancel stubs out curl, the actual call to curl is never tested. Also, the doCancel call near the bottom is never exercised by the unit tests. We ran manual system tests with live data as a final validation, and don’t see a need at this point to automate those tests.
It’s been too long. Hello again. I’m still working on the next installment of Jerry’s Story. I’m going to restart it once again, and some day I’m going to get it right. Meanwhile, I dug up a list of my articles, presentations, and other content from 1995 – 2009, both self-published and otherwise, and found working URLs for them (I appreciate archive.org so much!). I’m putting them here mostly for future reference, but if you do delve in, please let me know if any of the links still don’t work.
One bonus if you scroll all the way down–my very first feature article, from 1987.
There are also some posts on my Tejas Software Consulting Newsletter blog that I may republish here. Many of the articles linked below have outdated contact information. I’d love to hear from the modern you either in a comment on this post or via Twitter.
So with no further ado, in reverse order of musty outdatedness, here is my long list of nostalgia.
March 2008 Gray Matters podcast (mp3) Jerry McAllister interviewed me for this podcast, where we talked about the testingfaqs.org Boneyard and the strength of the worldwide test tools market.
Trip Report from a USENIX Moocher Dallas/Fort Worth Unix User’s Group, July 2003, reprinted in the Tejas Software Consulting Newsletter, August/September 2003, v3 #4
Developing Your Professional Network The Career Development column in the January/February 2001 issue of Software Testing and Quality Engineering magazine. References from the article are available on the STQE web site.
Software Defect Isolation Co-authored with Prathibha Tammana. Presented at the High-Performance Computing Users Group, March 1998, and InterWorks, April 1998.
Tester’s Toolbox column: The Shell Game Software QA magazine, pp. 27-29, Vol. 3 No. 4, 1996 Using Unix shell scripts to automate testing. Basic information about the available shells on Unix and other operating systems.
Tester’s Toolbox column: Toward A Standard Test Harness Software QA magazine, pp. 26-27, Vol. 3 No. 2, 1996 The TET test harness and where it fits into the picture.
Tester’s Toolbox column: Testing Interactive Programs Software QA magazine, pp. 29-31, Vol. 3 No. 1, 1996 A concrete example of using expect to automate a test of a stubborn interactive program.
Tester’s Toolbox column: Using Perl Scripts Software QA magazine, pp. 12-14, Vol. 2. No. 3, 1995 The advantages of using the perl programming language in a test environment, help in deciding whether to use perl and which version to use.
Apple Kaleidoscope, Compute! Magazine, pp. 111-112, issue 91, Vol. 9, No. 12, December 1987. I was interviewed about this article for “The Software Update” episode of the Codebreaker podcast, released December 2, 2015.
In this installment of “Jerry’s Story,” we’ll continue the tale of Jerry Weinberg’s education. Refer to the home page for Jerry’s Story to see the other installments.
While he was finishing his undergraduate degree in Lincoln, Nebraska, Jerry decided that he still had much more to learn. He applied to six graduate schools: Harvard University, Princeton University, Stanford University, the University of Chicago, Massachusetts Institute of Technology, and the University of California, Berkeley. All but Stanford accepted him and offered a fellowship. Despite getting accepted to five schools, the rejection by Stanford bothered him for some time – he was still sensitive about the awards he was cheated out of in high school. Later, when he realized how small Stanford was compared to the others, he had a better understanding of why they might not have had room for him.
UC Berkeley was his first choice, because two of his physics professors at the University of Nebraska had strong connections there. They could arrange a job for him to supplement his fellowship, and they could help him get into a program to earn his Masters and PhD simultaneously. He accepted the invitation from Berkeley. Shortly after that, he received an acceptance letter from MIT, also offering a job in their computing lab. He very much wanted to go to MIT so he could work with computers, but he was afraid that someone at Berkeley would tell MIT that he had reneged on his acceptance and then MIT would reject him. On later reflection, he realized that this thinking was naive. But he would become much fonder of the Western region of the U.S. than of the East Coast, so he was probably happier in California than he would have been in Massachusetts. The computers would come soon enough.
Jerry moved to Berkeley, California in 1955 with his wife Pattie. Shortly thereafter, in September, their first child, Chris, was born. They had some typical first-time parent worries. They were in what was essentially a one-room apartment, so Chris slept in a crib not far away from them. The first night they brought him home, they worried all night about whether he was still breathing. They would drift off to sleep, then both of them would wake with a start because they couldn’t hear him breathing. But he was fine – Chris kept breathing, and he slept a lot better than his parents did.
When Chris was two weeks old, they took him to a pediatrician for his first checkup. I’ll share with you the conversation I had with Jerry about how that went –
Jerry: So we go in and we had about thirty pages of handwritten questions for our pediatrician.
Danny: Oh my Lord. Poor doctor.
And they were prioritized.
Thank goodness for that.
We knew we probably couldn’t get to all of them so we had the most important one first. What do you think the first question was? Two weeks.
So many possibilities.
You’ll never guess it.
Well you mentioned breathing so I guess I just have to say, “How do you make sure he keeps breathing without staying up all night?”
No the first question was when should we get him his first pair of shoes.
Wow.
I’ll never forget this, I wish I had a video of this. And he’s this wise old guy and he says, ‘Well that’s an important question. Because you know if your kid gets to high school and he’s barefoot, the other kids are gonna mock him, it’s going to destroy him psychologically.’ I remember the answer, it was just wonderful.
Great answer!
And we just put away the rest of the questions. It was so good, that was one of the great learnings of my life.
Jerry started his coursework in Physics. This included working with a particle accelerator called the “Bevatron” at Lawrence Berkeley National Laboratory, which overlooked the UC Berkeley campus. The Bevatron had only begun operation the previous year. He set up experiments to try to simulate cosmic ray events. About 90% of the work involved stacking lead bricks to build a shelter from the particle beam. The researchers didn’t carry any kind of radiation detectors with them, and Jerry worried later about whether the beam in the accelerator had caused him any harm. Records show that proper shielding may have been installed only later.
The Bevatron was used for some groundbreaking work around this same time, but we don’t know whether Jerry was involved with any of it. In 1955, the existence of the antiproton was proven using the Bevatron, which earned a Nobel Prize for two people. The antineutron was discovered there in 1956. Work on either of these could have overlapped with the 1955–1956 school year when Jerry was working with the Bevatron, and cosmic ray experiments like the ones he was doing may have been relevant to the antiproton work.
Interior of the Bevatron without shielding in place, 1956. Photo credit: Berkeley Lab.
In less than a year, Jerry had passed the necessary exams and finished the experiments for his thesis, which concerned a mysterious bump in a cosmic ray energy graph. But he never finished writing his thesis. Early in 1956, Jerry saw an ad in Physics Today that changed everything for him. Here’s the text of it, in part–
FOR THE MATHEMATICIAN who’s ahead of his time
IBM is looking for a special kind of mathematician, and will pay especially well for his abilities.
This man is a pioneer, an educator—with a major or graduate degree in Mathematics, Physics, or Engineering with Applied Mathematics equivalent.
You may be the man.
If you can qualify, you’ll work as a special representative of IBM’s Applied Science Division, as a top-level consultant to business executives, government officials and scientists. It is an exciting position, crammed with interest, and responsibility.
Employment assignment can probably be made in almost any major U.S. city you choose. Excellent working conditions and employee-benefit program.
Other ads that IBM placed that year were more clear that the job involved computers, but this one did include a picture of a computer room with a caption talking about data processing. You can imagine the appeal – the chance to finally work with computers, a promise of a good salary, and a choice of where to live. He had the right degree. He happened to be male, which the ad strongly implied was an important factor. Jerry applied for the job.
Jerry and Pattie were almost out of money. His fellowship covered his tuition. Wedding gifts and a small amount of savings were covering the rest. They had no health insurance to help pay for Chris’ birth, and now their second child was on the way. Jerry borrowed $400 from his father, the only time in his life Jerry had to borrow from him. Though they were down to their last penny, he would be able to pay it back soon.
Jerry got an offer to start at IBM on June 15. He told the university he was leaving, and his fellowship was terminated. His advisor cried after hearing the news – Jerry needed perhaps only two months more to complete his thesis to earn his doctorate. He did leave UC Berkeley with a master’s degree in Physics as a consolation prize. When I asked Jerry if he had any regrets about leaving, he answered, “only my regret that I’m finite, and can’t do everything I’m interested in.”
He had also applied for an engineering job at Boeing in Seattle and received an offer. That job did not involve working with computers, but the salary was more than twice what IBM was offering. Plus, he could start a few weeks earlier, which mattered, because his fellowship money was gone and he was broke. But the computers were calling him. Jerry told IBM that if he couldn’t start a few weeks earlier, he would go to Boeing instead. IBM said “Yes,” and Jerry accepted their offer.
It’s hard to tell whether Jerry was bluffing about going to Boeing. The part of the decision that was easy for him was leaving the university. He said, “I realized that the PhD would be irrelevant to my life, and I wouldn’t learn anything new completing the thesis. My favorite expression about education I think is by Mark Twain, who said ‘I was always careful never to let my schooling interfere with my education.'” (The Quote Investigator gives compelling reasons for why Grant Allen is more likely the originator of this aphorism.)
Going to college for him was all about what he could learn, and only peripherally about earning a degree. His desire to always be learning extended beyond his schooling. This influenced all of his decisions about how he spent his time, including his decision to walk away from a chance to double his salary at Boeing and work for IBM instead.
If he saw opportunity that didn’t involve learning, he was likely to turn it down. And if he was doing something that didn’t allow him to learn at a sufficient pace, he would tend to stop that activity. But how did he judge whether he was learning fast enough? Jerry told me, “It’s just a feeling. Like how do you know you’re hungry?”
Years later, Jerry did earn a doctorate, but that part of the story will be easier to understand after exploring his role as a programmer.
As I sit here listening to Christmas music, I’m giving myself the gift of extra time to write. I want to respond to something Paul Maxwell-Walters recently tweeted:
If there is such a thing as a Tester’s Mid-Life Crisis, I think I may be in the middle of it….
Paul cited the director of a counseling center who said mid-life crises are likely to happen between age 37 through the 50s. Paul, approaching his 40s, worries that his crisis is here. As I see my 50s getting large on the horizon, I don’t know if my crisis has passed, is still coming, or will never come. I was actually around Paul’s age when my consulting business dried up and I ended my 16-year run in software testing. Four years later, though, I went back to my comfort zone, and had four consecutive short stints in various testing jobs.
That last testing job morphed into a development job. I’m very happy with my current employer for encouraging that path to unfold. Over the years, I have fervently resisted several opportunities to move into development, some of them very early in my career. I had latched onto my identity as a tester and staunch defender of the customer, and I wouldn’t let it go.
Paul wrote:
I have also come across people around my age and older who are greatly dissatisfied or apathetic with testing. They feel that they aren’t getting anywhere in their careers or are tired of the constant learning to stay relevant. They feel that they are being poorly treated or paid much less than their developer colleagues even though they all work in the same teams. They hate the low status of testing compared to other areas of software development. They regret not choosing other employers or doing something else earlier.
That’s surely the story of any tester’s career. Low status, low pay, slow growth. I embraced it, because I loved the work and loved what it stood for. The dissatisfaction seems to be more common now than it used to be, though. My advice, which you will know if you’ve been reading things on my blog like “The black box tester role may be fading away“, is: get out! Don’t transition to doing test automation. Become a developer, or a site reliability engineer, or a product owner, or an agile coach, or anything else that has more of a future. I think being a testing specialist is going to continue to get more depressing as the number of available testing jobs slowly continues to dwindle.
Because I’m writing this on Christmas Eve, I want to put an It’s a Wonderful Life spin on it. What if my testing career had never been born? In fact, what if the test specialist role had never been born?
Allow me to be your Angel 2nd Class and take you back to a time when developers talked about how to do testing. Literature about testing was directed toward developers. What if no one had worried about adding a role that had critical distance from the development process? What if developers had been willing to continue being generalists rather than delegating the study of testing practices to specialists, while shoving unit testing into a no-man’s land no one wanted to visit?
And what if I could have gotten over the absolute delight I got from destroying things and started creating things instead? I’m sure I’d be richer now. I’d have better design skills now. But alas, I’m not actually an Angel 2nd Class, and more to the point, I haven’t dug up enough historical context to really play out this thought experiment. But I’ll try to make a few observations. Within the larger community of developers, I might not have been able to carve out a space to start a successful independent consulting practice, which I dearly loved doing as a tester. Maybe I wouldn’t have developed the appreciation for software quality that I have now. Maybe I wouldn’t have adopted Extreme Programming concepts as readily as I have, which has now put me in a very good position process-wise, even if I’m having to catch up my enterprise design and architecture skills.
How about not having any testers in the first place? Maybe the lack of critical distance would have actually caused major problems. Maybe the lack of a quality watchdog would have allowed more managers to actually execute their bad decisions. And maybe those managers would have been driven out of software management. Would the lack of a safety net have actually improved the state of software management by natural selection, and even allowed some companies with inept executives to die a necessary death? I think I’m hoping for too much here, and perhaps being too brutal on Christmas Eve.
It has been a wonderful career. It could have been a different career, but I’m just glad that it has taken me to where I am now. Paul, I wish you a successful outcome from your mid-career crisis. I realize that my advice to get out is much easier said than done.