
Best practices for maintaining documented code examples?


A good SDK (software development kit) includes plenty of well-documented examples. It also includes good tutorials and developer guides, which introduce concepts in logical progressions, typically showing only the relevant excerpts from the sample code. (Nobody wants to see a 200-line program inline in a book, but it's important to show that 20-line excerpt that demonstrates a principle right there in the section that talks about that principle.)

This creates a maintenance challenge: over a progression of releases, interfaces or preferred coding patterns change. An IDE (a code-development environment) provides tools for finding the example programs that need to be updated (e.g. find all places where this function is called), but they don't tend to help with references in documentation. So what usually happens, in my experience, is that before a release somebody will page through the documentation looking for suspicious code snippets. This is, obviously, not 100% reliable. (Edit: the full examples are tested regularly, but that doesn't guarantee that the excerpts in the documentation remain in sync with the full example.)

Currently we rely on the technical writers, who we hope remember which examples were excerpted into which parts of the documentation, to react when the example code or the relevant interfaces change. The team is fluent with the bug-tracking and source-control systems, including subscribing to check-ins, but we are still relying on people's personal knowledge, which becomes a problem when people leave the team. (As is often the case, testing of documentation tends to be a low priority for QA.)

I am looking for practices that have been used successfully to improve the ongoing accuracy of code examples in documentation.


4 answers


Best practice is to compile all of the example code automatically and, ideally, verify it against your coding rules as well.


The way we do this at my work is that all of the examples are tagged in documentation, and then during our nightly builds they all get extracted and compiled. So if someone modifies the interfaces, the nightly build will catch this.

To help with that we have some custom scripts. Different kinds of examples are marked differently, since each kind needs different wrapper code to become something that compiles: we have bare snippets, complete functions, complete classes, and some full projects. Each gets compiled, so they are at least up to date with the interfaces and functions.

We currently do not check for specific coding conventions, but once the extraction-and-compile step is working, you can layer automated style checking on top of it. Since we do not do this automatically, we go through the examples by hand every so often.
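A minimal sketch of such an extraction step, assuming a hypothetical `@example`/`@end` markup in the documentation source (the markers, example kinds, and wrapper rules here are illustrative, not the ones this answer's team actually uses):

```python
import re

# Hypothetical markup: examples sit between "@example <kind>" and "@end"
# lines in the documentation source, where <kind> is snippet|function|class.
EXAMPLE = re.compile(r"^@example (snippet|function|class)\n(.*?)^@end$",
                     re.DOTALL | re.MULTILINE)

# Each kind needs different wrapping to become a compilable C file.
WRAPPERS = {
    "snippet":  "int main(void) {{\n{body}    return 0;\n}}\n",
    "function": "{body}\nint main(void) {{ return 0; }}\n",
    "class":    "{body}",  # already a complete translation unit
}

def extract(doc_text):
    """Yield (kind, compilable_source) for each tagged example."""
    for kind, body in EXAMPLE.findall(doc_text):
        yield kind, WRAPPERS[kind].format(body=body)
```

A nightly job would run `extract` over every documentation file, write each result to a temporary source file, and feed it to the compiler, so an interface change breaks the build rather than silently rotting the docs.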


This post was sourced from https://writers.stackexchange.com/a/5304. It is licensed under CC BY-SA 3.0.



Python has a useful module called doctest. It is commonly used to validate tutorial documentation and examples embedded as comments in the code.

The doctest module searches for pieces of text that look like interactive Python sessions, and then executes those sessions to verify that they work exactly as shown. There are several common ways to use doctest:

[...]

To write tutorial documentation for a package, liberally illustrated with input-output examples. Depending on whether the examples or the expository text are emphasized, this has the flavor of “literate testing” or “executable documentation”.
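As a concrete illustration, here is a small, self-contained function whose docstring examples doctest can check (the function itself is invented for this sketch):

```python
def average(values):
    """Return the arithmetic mean of a sequence of numbers.

    >>> average([1, 2, 3])
    2.0
    >>> average([])
    Traceback (most recent call last):
        ...
    ZeroDivisionError: division by zero
    """
    return sum(values) / len(values)

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # re-runs every docstring example; silent when all pass
```

If `average` later changes behavior, the embedded examples fail the next test run, so the documented samples cannot quietly drift out of sync with the code.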

Since the code you're interested in is merely excerpts from the full source, you would extract the excerpts from the tested code based on some metadata. Tools like Doxygen are intended for this purpose.

A language-agnostic approach would be to keep the full source for each excerpt alongside the library source code, or wherever the primary documentation resides. Then, when you build your tutorial or developer guide, you run automated tests on that code using your xUnit equivalent (with doctest-like connector code if necessary) and extract the excerpts with Doxygen.

Regardless of what specific solution you implement, best practice for maintenance is to follow the DRY principle. Do what you can to keep all of the source code in one place. In your case, it sounds like this will require generating your excerpts from the original sample code each time you generate the documentation. There's some discussion on the topic of code sample testing and maintenance in The Pragmatic Programmer on pages 26-29 (DRY principle) and further on pages 100-101. The authors describe vaguely how they accomplished what you need:

[...] using the DRY principle we didn't want to copy and paste lines of code from the tested programs in the book. That would have meant that the code was duplicated, virtually guaranteeing that we'd forget to update an example when the corresponding program was changed. For some examples, we also didn't want to bore you with the framework needed to make our example compile and run. We turned to Perl. A relatively simple script is invoked when we format the book -- it extracts a named segment from a source file, does syntax highlighting, and converts the result into the typesetting language we use.
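A rough Python equivalent of that segment-extraction step might look like the following (the `START:name`/`END:name` comment convention is an assumption sketched from the book's description, not its actual Perl script):

```python
import re

def named_segment(source, name):
    """Extract the region between '// START:name' and '// END:name'
    comment markers in a source file (a hypothetical convention)."""
    pattern = re.compile(
        r"//\s*START:%s\n(.*?)//\s*END:%s" % (re.escape(name), re.escape(name)),
        re.DOTALL)
    match = pattern.search(source)
    if match is None:
        raise KeyError("no segment named %r" % name)
    return match.group(1)
```

The book-formatting pipeline would call this once per named excerpt, then hand the result to syntax highlighting and typesetting, so every printed snippet is cut from a program that still compiles and runs.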


This post was sourced from https://writers.stackexchange.com/a/5300. It is licensed under CC BY-SA 3.0.



Sphinx (the documentation generator behind many ReadTheDocs-hosted projects) has an interesting directive which may provide an answer to this question: literalinclude.

With this directive, one can include code examples from another file within the documentation. The particularly interesting part is that a subset of lines can be extracted from the source file via the :lines: option, meaning that the source file can be a complete (executable, testable) example while only the snippet relevant to the documented section appears in the docs.

Using this directive, it would theoretically be possible to include all examples in the documentation within the project's test suite. I can't say I've ever gone that far myself - but having run into this precise issue more than once I'm now tempted to give it a try!
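A hypothetical usage (the file path and line numbers are invented for illustration) might look like:

```rst
.. literalinclude:: ../../examples/connect_demo.py
   :language: python
   :lines: 10-22
```

Since hard-coded line ranges drift as the source file changes, Sphinx also offers :start-after: and :end-before: options that locate the excerpt by marker text within the file, which tends to be more robust in practice.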


This post was sourced from https://writers.stackexchange.com/a/16511. It is licensed under CC BY-SA 3.0.



I would actually answer this with my graphic-designer hat on: all code should be given a dedicated style in the layout program (a distinct font in particular, but also type size, margins, and justification), and then you simply search for every instance of that style. It's still a manual review, but you won't miss any snippets.


