I'm super confused about how to use GNU parallel to pass stdin to the job command.
I have what I imagine to be a really common use case. I have some process, xxd, that reads stdin and writes to stdout. I have some way to generate or fetch work from another standard stream, for example seq 3, and I can combine the two into an impromptu power tool like so:
$ seq 3 | while read line; do echo $line | xxd; done
00000000: 310a 1.
00000000: 320a 2.
00000000: 330a 3.

Great. We can clearly see that each invocation of xxd gets one line, and a trailing newline is appended.
This is what piping to parallel does:
$ seq 3 | parallel --pipe --recend="\n" -L 1 xxd
...
00000000: 310a 320a 330a 1.2.3.

parallel --pipe takes all of stdin and sends it to one invocation of xxd, which confuses me because the documented parameters and their defaults seem to contradict this behavior: --recend="\n" (the default) delimits records by newline, and -L 1 (the default) sends a maximum of one line to the command.
Null separators have the same problem; they are also passed through verbatim:
seq 3 | tr '\n' '\0' | parallel --null --pipe xxd
...
00000000: 3100 3200 3300 1.2.3.

An explanation for this behavior would be appreciated, especially since these parameters appear to apply specifically to the --pipe mode of parallel.
2 Answers
You are SO close. -L sets the record size (in lines), but not how many records should be sent to each job; that is controlled by -N. The default --recend is \n, so that flag is not needed, and the default -L is 1, so that is not needed either.
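The underlying reason everything went to one job: with --pipe, GNU parallel reads stdin in blocks (--block, which defaults to 1 MB) and only splits at --recend boundaries within what it has read, so the 6 bytes from seq 3 fit comfortably into a single block. As a rough illustration (the exact grouping of records can vary by version), shrinking the block size forces a split even without -N:

```shell
# seq 3 emits 6 bytes ("1\n2\n3\n"), far below the default --block of 1M,
# so without -N a single job receives the entire block.
# A tiny --block makes parallel cut the input across several jobs:
seq 3 | parallel --pipe --block 2 xxd
```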
seq 3 | parallel --pipe -N1 xxd

--pipe    sends stdin to the job, implies --recend '\n'
-N1       send one captured input to one job

Old Answer:
Warning: this may corrupt your data stream, as echo interprets the payload. It may work for ASCII- and UTF-8-encoded strings, but I wouldn't trust it in general!
Instead of pipe mode, echo (or similar) can convert the arguments back into a per-job stream on stdout:
$ seq 3 | parallel "echo {} | xxd"
00000000: 310a 1.
00000000: 320a 2.
00000000: 330a 3.
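If you do use the argument-passing form, a slightly safer variant (a sketch, not part of the original answer) replaces echo with printf, whose %s format does not interpret backslash escapes or option-like payloads such as -n:

```shell
# printf "%s\n" writes each argument verbatim followed by a newline,
# avoiding echo's shell-dependent handling of -n/-e and backslashes:
seq 3 | parallel 'printf "%s\n" {} | xxd'
```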