Post History

Can "numbers" be good doc performance metrics? Is there a way to meaningfully interpret the quantitative user data we gather?


posted 6y ago by Mark Baker‭  ·  last activity 4y ago by System‭

Answer
#4: Attribution notice removed by user avatar System‭ · 2020-01-03T20:41:56Z (over 4 years ago)
Source: https://writers.stackexchange.com/a/33594
License name: CC BY-SA 3.0
License URL: https://creativecommons.org/licenses/by-sa/3.0/
#3: Attribution notice added by user avatar System‭ · 2019-12-08T08:04:50Z (over 4 years ago)
Source: https://writers.stackexchange.com/a/33594
License name: CC BY-SA 3.0
License URL: https://creativecommons.org/licenses/by-sa/3.0/
#2: Initial revision by user avatar System‭ · 2019-12-08T08:04:50Z (over 4 years ago)
It is extremely difficult to measure the performance of a technical document because it is hard to gather the data and hard to interpret the data when you have it.

Let's start with the aim of technical communication. The aim is to make the user of a product productive by enabling them to use the product confidently and correctly. The logical measure of performance, therefore, is the user's mean time to productivity.

The problem is that measuring the user's mean time to productivity is very difficult, and in many cases virtually impossible. You simply cannot be there to observe users at work, nor can you instrument them, their work, or the docs to gather the relevant data.

The Web does let us measure how often a document is read and how long a reader spends on it. The problem is, neither of these is an indication of document performance.

- A technical document gets read when the problem it describes occurs. This has nothing to do with the quality of the document and everything to do with the quality of the product it describes.

- The amount of time that the reader spends reading the document is no measure of its quality, since a good document could give the reader the information they need quickly, while a bad one might force the reader to read to the end and still not tell them what they need to know. 

Finally, there is the issue of the relative value of a document. If the client's business loses a million dollars a minute when the server goes down, then the topic on how to restore the server after a crash is the most valuable topic in your doc set. But if your product is reliable, it will also be one of the least read topics in your doc set. Other commonly read topics may be worth only a few bucks in revenue each time they are read. They will score a lot higher in your metrics, but they deliver far less value in reality.
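The arithmetic behind that point can be made concrete. This is a hypothetical sketch with invented figures (the topic names, read counts, and dollar values are illustrative assumptions, not real data); it shows how ranking topics by read count and ranking them by delivered value can point in opposite directions:

```python
# Hypothetical topics: (reads per year, value delivered per read, in dollars).
# All figures are invented for illustration.
topics = {
    "restore-server-after-crash": (3, 1_000_000),   # rarely read, enormously valuable
    "change-ui-theme": (50_000, 5),                  # often read, worth a few bucks each
}

for name, (reads, value_per_read) in topics.items():
    total_value = reads * value_per_read
    print(f"{name}: {reads:,} reads, ${total_value:,} total value delivered")

# The popular topic wins on the read-count metric; the rarely read topic
# delivers more than ten times the total value.
```

A read-count dashboard would rank `change-ui-theme` first by a wide margin, even though `restore-server-after-crash` delivers far more value whenever it is needed.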

The best you can really do in many cases is to measure how well your docs adhere to known-good principles of design and rhetoric. It is a very imprecise measure, and there will always be debates about which design principles and rhetorical practices best fit the current circumstances. (This is why answers on this board can never be provable in the way answers on SO are provable.)

A number of people have suggested performance measurements over the years, but they are all either too expensive or too indirect to be trusted. Better than nothing, perhaps, but certainly not definitive, and potentially quite misleading. (The problem with all indirect measurement is that it tempts you to optimize for the metric rather than for actual performance.)

#1: Imported from external source by user avatar System‭ · 2018-01-28T13:06:16Z (over 6 years ago)
Original score: 10