Hmm.. so let’s see if I’m getting this right.
Anders Hovmöller

You create patches between the unmodified AST unparsed and the mutated AST unparsed?

Yes. astunparse can convert an AST into code. So we take the unmutated AST and the mutated AST, convert them to code, and calculate a diff. We store this diff directly with the rest of the results. As you say, it’s mildly annoying, but when you realize that most diffs are one line and involve maybe 5 characters total, you can see how it’s not too painful to manage in practice. Still, automating it would be ideal.
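To make the idea concrete, here is a minimal sketch of that workflow. It uses the stdlib `ast.unparse` (Python 3.9+; `astunparse.unparse` plays the same role on older versions) and `difflib` to diff the unparsed unmutated tree against the unparsed mutated tree; the `add`/`Sub` mutation is just an illustrative example, not Cosmic Ray's actual operator code:

```python
import ast
import difflib

def mutation_diff(original_source: str, mutated_tree: ast.AST) -> str:
    """Diff the unparsed unmutated AST against the unparsed mutated AST."""
    original_tree = ast.parse(original_source)
    before = ast.unparse(original_tree).splitlines(keepends=True)
    after = ast.unparse(mutated_tree).splitlines(keepends=True)
    return "".join(difflib.unified_diff(before, after, "unmutated", "mutated"))

# Illustrative mutation: flip `+` to `-` in a tiny function.
src = "def add(a, b):\n    return a + b\n"
tree = ast.parse(src)
tree.body[0].body[0].value.op = ast.Sub()  # the mutation

print(mutation_diff(src, tree))
```

As the diff shows, most mutations really do come out as a one-line change of a few characters, which is why storing the diff with the results is manageable.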

What I’m not sure of is how high-fidelity these diffs are against the actual original source (i.e. as opposed to the astunparse-generated source from the unmutated AST). I assume they’re generally not suitable for direct application to the original source, but I just haven’t investigated it much. If something like baron or astroid can give us more perfect diffs, then automating patch application would be more possible.
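The fidelity question can be checked directly: parse the original source, unparse it again, and compare. Since the AST discards comments and formatting, the round trip usually diverges from the original file, which is exactly why these diffs don't apply cleanly to it. A quick sketch:

```python
import ast

def roundtrip_matches(source: str) -> bool:
    """Check whether unparsing the parsed AST reproduces the source exactly.

    Comments, blank lines, and formatting choices are not stored in the
    AST, so this typically fails for real-world code.
    """
    return ast.unparse(ast.parse(source)) == source.rstrip("\n")

print(roundtrip_matches("x = 1"))             # True for trivial code
print(roundtrip_matches("x = 1  # comment"))  # False: the comment is dropped
```

Lossless-syntax-tree libraries (the baron/astroid direction mentioned above) keep that formatting information, which is what would make automatically applying patches to the real source feasible.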

Changing CR to work single threaded with one worker if Celery is unavailable would be pretty great I think.

Yeah, you really got me thinking about this. I think I might be able to pull something together in even a 10-line change, at least as a proof of concept.
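A proof of concept along those lines might look something like this. This is purely a hypothetical sketch, not Cosmic Ray's actual dispatch code: probe for Celery, and if it isn't importable, fall back to running each job inline in a single worker:

```python
def run_jobs(jobs, execute, use_celery=None):
    """Run mutation jobs, falling back to one in-process worker.

    `execute` is whatever callable performs a single mutation-test job.
    When `use_celery` is None, availability is auto-detected.
    """
    if use_celery is None:
        try:
            import celery  # noqa: F401 -- only probing for availability
            use_celery = True
        except ImportError:
            use_celery = False

    if use_celery:
        # The real broker-backed distribution would go here.
        raise NotImplementedError("distribute jobs via Celery")

    # Single-threaded fallback: just run each job in this process.
    return [execute(job) for job in jobs]

print(run_jobs([1, 2, 3], lambda j: j * 2, use_celery=False))  # [2, 4, 6]
```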

On another matter I’ve been thinking of the scalability aspect.

Integrating coverage analysis is definitely an interesting topic, and we’ve had an issue to look into it for quite some time. testmon is too pytest-specific, I think, though we might be able to use it more fruitfully. It’s an open area for investigation, but when I give talks on CR it’s an area I mention when discussing performance improvements.
