Can "numbers" be good doc performance metrics? Is there a way to meaningfully interpret the quantitative user data we gather?


I work on developer documentation at a tech startup. As of now, we implement the following feedback mechanisms:

  • We have a thumbs-up/down feedback system on each page of the docs site. If a user clicks thumbs-down, we show a pop-up with more granular options about why they found the doc unhelpful and how we can improve it. This is invaluable feedback; however, it is qualitative.
  • We also track the number of visitors to our docs site and the average time they spend. These are more quantitative metrics, and I usually feel better when the number of visitors goes up. However, I am not sure that's always a good thing.

I want to understand the techniques other tech writers use to know whether their docs are serving their purpose. Do you track the number of visitors? Do you map visitor trends to events like product releases? In short, how do you make sense of the numbers?
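For concreteness, this is roughly the per-page summary I can already build from the thumbs data. The page names and the (page, vote) export format are placeholders I made up for illustration:

    from collections import Counter

    # Hypothetical export: one (page, vote) tuple per feedback event,
    # where vote is "up" or "down"; the page names are placeholders.
    feedback = [
        ("install-guide", "up"),
        ("install-guide", "down"),
        ("api-reference", "up"),
        ("api-reference", "up"),
    ]

    counts = Counter(feedback)

    for page in sorted({page for page, _ in feedback}):
        ups = counts[(page, "up")]
        downs = counts[(page, "down")]
        print(f"{page}: {ups} up / {downs} down, helpfulness = {ups / (ups + downs):.0%}")

The ratio is easy to compute; what I can't tell is what a "good" number looks like, which is really the heart of my question.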


This post was sourced from https://writers.stackexchange.com/q/33592. It is licensed under CC BY-SA 3.0.


2 answers



Qualities of the documentation itself, even quantitative ones, usually have little intrinsic value.

However, the quantitative impact of docs on other areas can often be measured precisely and interpreted meaningfully.

Some of the companies I worked with used the following quality metrics for documentation:

  • Number of support tickets. If customer support is overloaded with questions concerning a single topic, then maybe the documentation on this topic is not perfect.

  • Number of failed deployments. If a documented process or task regularly gets done incorrectly, then maybe it is not documented properly.

  • Number of questions on a particular topic inside the team. If newly hired engineers tend to ask the senior staff the same set of questions concerning a single topic, then maybe this topic is not covered well in technical onboarding docs.

The best thing about these metrics is that they can be clearly communicated to the business.

– Our new user documentation has decreased the number of support tickets by X per cent? Great, so we've saved Y dollars we'd otherwise have spent on outsourced technical support!
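To make that pitch concrete, the arithmetic is trivial; the ticket volumes and per-ticket cost below are purely illustrative:

    # Illustrative figures only: assumed ticket volumes and outsourced cost per ticket.
    tickets_before = 400    # tickets per month before the new docs
    tickets_after = 300     # tickets per month after the new docs
    cost_per_ticket = 25.0  # dollars paid to outsourced support per ticket

    reduction_pct = (tickets_before - tickets_after) / tickets_before * 100
    monthly_savings = (tickets_before - tickets_after) * cost_per_ticket

    print(f"Tickets down {reduction_pct:.0f}%, saving ${monthly_savings:,.0f} per month")
    # Tickets down 25%, saving $2,500 per month

Every metric in the list above already has a dollar figure attached to it somewhere in the business, so the translation is straightforward.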


This post was sourced from https://writers.stackexchange.com/a/33596. It is licensed under CC BY-SA 3.0.



It is extremely difficult to measure the performance of a technical document because it is hard to gather the data and hard to interpret the data when you have it.

Let's start with the aim of technical communication: to make the user of a product productive by enabling them to use the product confidently and correctly. The logical measure of performance, therefore, is the user's mean time to productivity.

The problem is, measuring the user's mean time to productivity is very difficult, and in many cases virtually impossible. You simply cannot be there to observe users at work, nor can you instrument them, their work, or the docs to gather the relevant data.

The Web does let us measure how often a document is read and how long a reader spends on it. The problem is, neither of these is an indication of document performance.

  • A technical document gets read when the problem it describes occurs. This has nothing to do with the quality of the document and everything to do with the quality of the product it describes.

  • The amount of time that the reader spends reading the document is no measure of its quality, since a good document could give the reader the information they need quickly, while a bad one might force the reader to read to the end and still not tell them what they need to know.

Finally, there is the issue of the relative value of a document. If the client's business loses a million dollars a minute when the server goes down, then the topic on how to restore the server after a crash is the most valuable topic in your doc set. But if your product is reliable, it will also be one of the least read topics in your doc set. Other commonly read topics may be worth only a few bucks in revenue each time they are read. They will score a lot higher in your metrics, but they deliver far less value in reality.
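To see how misleading raw read counts can be, compare two invented topics (every number here is made up for illustration):

    # Invented figures: a rarely read, high-stakes topic vs. a frequently read, low-stakes one.
    topics = {
        "restore-server-after-crash": {"reads_per_year": 3, "value_per_read": 1_000_000},
        "change-ui-theme": {"reads_per_year": 20_000, "value_per_read": 5},
    }

    for name, t in topics.items():
        value = t["reads_per_year"] * t["value_per_read"]
        print(f"{name}: {t['reads_per_year']:,} reads, ~${value:,} of value delivered")

    # restore-server-after-crash: 3 reads, ~$3,000,000 of value delivered
    # change-ui-theme: 20,000 reads, ~$100,000 of value delivered

The theme topic dominates any page-view dashboard, yet the crash-recovery topic delivers far more value.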

The best you can really do in many cases is to measure how well your docs adhere to known-good principles of design and rhetoric. That is a very imprecise measure, and there will always be debates about which design principles and rhetorical practices best fit the current circumstances. (This is why answers on this board can never be proved in the way answers on SO can be.)

A number of people have suggested performance measurements over the years, but they are all either too expensive or too indirect to be conclusive. Better than nothing, perhaps, but certainly not definitive, and potentially quite misleading. (The problem with any indirect measurement is that it tempts you to optimize for the metric rather than for actual performance.)

