Conversation
```scala
object ${Message}Test {
  val scala = TestCases.make${Message}Scala

  val java = protos.${Message}.toJavaProto(scala)
```
There is a potential problem with data preparation here: the Java protobuf is created by conversion from the Scala proto instead of by parsing bytes. As a result, `lazy_fields` affects Java serialization performance:
| Benchmark (serializeJava) | Java |
|---|---|
| LargeStringMessage | 2,584 ns/op |
| LazyFieldsStringMessage (same as LargeStringMessage but lazy_fields: true) | 1,111 ns/op |
I think this is caused by forced bytes writing in `toJavaProto` when `lazy_fields` is enabled (ProtobufGenerator#348L). Either way, it does not look like a clean Java protobuf benchmark.
What about changing this line to `val java = Protos.${Message}.parseFrom(bytes)`?
```scala
    java: Boolean = true
): Unit = {
  ops.mkdir ! ops.pwd / 'results
  val benchmarks0 = if (benchmarks.nonEmpty) benchmarks else testNames
```
I had a problem running this script with a fresh Ammonite version; this odd-looking code made the script run.
Also, the `scalapb` argument value is ignored further down, so I had to hardcode the snapshot version into benchmarks/project/plugins.sbt.
```
@@ -0,0 +1,81 @@
# Agents
```
I can delete this file if it is not needed.
@thesamet Hello! I would be grateful for a review of these changes.
```scala
override def toString: String = value.toString()
// Equality for LazyField[T] is not commutative!
// It is extremely important to use LazyField[T] only with the `-language:strictEquality` enabled for Scala 3 or `-Xfatal-warnings` for Scala 2.
override def equals(other: Any): Boolean = value == other
```
These two functions may force decoding unintentionally, for example if we end up with hashmaps of lazy fields. Can we define `equals` to be true only if the rhs is also a `LazyField` with the same bytes? If the user wants to compare to a string, they should explicitly call `.value`.
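A minimal sketch of the bytes-only equality the reviewer suggests, using `Array[Byte]` as a stand-in for protobuf's `ByteString` (all names here are illustrative, not scalapb's actual API):

```scala
object LazyEq {
  final class LazyField[T](val bytes: Array[Byte], decode: Array[Byte] => T) {
    lazy val value: T = decode(bytes)

    // Equal only to another LazyField with identical bytes; never equal to a
    // plain T, so callers must compare .value explicitly. This keeps equals
    // commutative and never forces decoding.
    override def equals(other: Any): Boolean = other match {
      case that: LazyField[_] => java.util.Arrays.equals(bytes, that.bytes)
      case _                  => false
    }
    override def hashCode(): Int = java.util.Arrays.hashCode(bytes)
  }

  def lazyStr(s: String): LazyField[String] =
    new LazyField(s.getBytes("UTF-8"), b => new String(b, "UTF-8"))
}
```

With this definition, `lazyStr("hi").equals("hi")` is simply false in both directions, so hash-based collections behave consistently regardless of insertion order.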
This is a difficult trade-off between approaching a drop-in replacement and the most explicit behavior.
In my opinion, this optimization is heuristic, so it is acceptable to perform decoding under the hood even if the user doesn't think about it.
I would prefer to strive for the ability to enable this flag with minimal codebase changes, and I would document the heuristic nature and limitations. I'm basing this on Java protobuf, which hides the lazy nature of string fields from the end user.
Anyway, your comment was very helpful: I found and fixed a bug in the equality logic and added more tests. Now `Set[LazyField[String]]` works without unnecessary decoding, and `Set[String]` cannot be used without decoding in any case.
thesamet left a comment
Thanks for this PR — the performance gains are real and the approach of routing lazy string fields through customSingleScalaTypeName / TypeMapper is the right foundation. A few issues to address before this is ready to merge.
```scala
def toByteString: ByteString = bytes
```

```scala
override def toString: String = value.toString()
```
`toString` forces lazy decoding. Any logging, string interpolation (`s"$msg"`), or debug print will silently trigger UTF-8 parsing — defeating the round-trip optimization. Consider `s"LazyField(${bytes.size()} bytes)"` here and letting callers use `.value.toString()` explicitly.
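A sketch of the suggested `toString`, with a decode counter to show that logging no longer forces UTF-8 parsing (again using `Array[Byte]` as a stand-in for `ByteString`; names are illustrative):

```scala
object LazyToString {
  final class LazyField[T](val bytes: Array[Byte], decode: Array[Byte] => T) {
    lazy val value: T = decode(bytes)

    // Report only the byte length, so interpolation and debug prints
    // never trigger decoding; callers who want the payload use .value.
    override def toString: String = s"LazyField(${bytes.length} bytes)"
  }

  var decodes = 0
  val field = new LazyField[String](
    "hello".getBytes("UTF-8"),
    b => { decodes += 1; new String(b, "UTF-8") }
  )
}
```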
```scala
override def toString: String = value.toString()
// Equality for LazyField[T] is not commutative!
// It is extremely important to use LazyField[T] only with the `-language:strictEquality` enabled for Scala 3 or `-Xfatal-warnings` for Scala 2.
override def equals(other: Any): Boolean = other match {
```
The non-commutative equals is the most concerning aspect of this public API. The tests explicitly demonstrate that Set[Any](s, lazyS) and Set[Any](lazyS, s) have different sizes depending on insertion order. This is a silent correctness hazard for code using Any-typed collections or comparing against extracted strings. The comment recommends -language:strictEquality, but that's Scala 3 only and opt-in — Scala 2 users have no protection. Please document this limitation prominently in the scaladoc on the class.
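The hazard can be demonstrated with a self-contained model that deliberately reproduces the non-commutative `equals` (a stand-in for the PR's `LazyField`, not the actual generated code):

```scala
object AsymmetricEq {
  final class LazyString(val bytes: Array[Byte]) {
    lazy val value: String = new String(bytes, "UTF-8")

    override def equals(other: Any): Boolean = other match {
      case that: LazyString => java.util.Arrays.equals(bytes, that.bytes)
      case s: String        => value == s // LazyString == String decodes and matches...
      case _                => false
    }
    // matches String#hashCode so the two collide in hash-based collections
    override def hashCode(): Int = value.hashCode
  }

  val lazyHi: Any  = new LazyString("hi".getBytes("UTF-8"))
  val plainHi: Any = "hi"
}
```

Because `String.equals` knows nothing about `LazyString`, `lazyHi == plainHi` holds while `plainHi == lazyHi` does not, which is exactly why `Any`-typed collections end up order-dependent.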
```scala
  implicit val stringDecoder: LazyDecoder[String] = _.toStringUtf8()
}

trait LazyEncoder[T] {
```
LazyEncoder is defined but not used anywhere in the implementation (the write path goes through the TypeMapper's toBase which calls .toByteString directly). Either use it or remove it to avoid dead public API surface.
LazyEncoder is already used for implicit conversions: example
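A sketch of how a `LazyEncoder` typeclass can back such an implicit conversion from plain values to lazy fields (simplified stand-in types, not the PR's actual definitions):

```scala
import scala.language.implicitConversions

object LazyConversions {
  trait LazyEncoder[T] { def encode(value: T): Array[Byte] }

  final class LazyField[T](val bytes: Array[Byte], compute: () => T) {
    lazy val value: T = compute()
  }

  implicit val stringEncoder: LazyEncoder[String] =
    (s: String) => s.getBytes("UTF-8")

  // Lets users assign a plain value wherever LazyField[T] is expected.
  implicit def fromValue[T](v: T)(implicit enc: LazyEncoder[T]): LazyField[T] =
    new LazyField(enc.encode(v), () => v)

  // The conversion fires here: a String is assigned to a LazyField[String].
  val greeting: LazyField[String] = "hello"
}
```

This keeps the encoder on the write path of the conversion, so it is public API surface that is genuinely exercised, not dead code.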
```scala
case FieldDescriptor.JavaType.STRING => FunctionApplication(s"$d.PString")
case FieldDescriptor.JavaType.ENUM => FunctionApplication(s"$d.PEnum")
case FieldDescriptor.JavaType.MESSAGE => MethodApplication("toPMessage")
case FieldDescriptor.JavaType.STRING if fd.getContainingType.lazyStringFields =>
```
Mapping lazy string fields to PByteString in singleFieldAsPvalue breaks JSON serialization. Libraries like scalapb-json4s and scalapb-circe call getField (which uses this path) to convert messages to JSON. With this change, lazy string fields would be emitted as base64-encoded bytes instead of JSON strings.
The inverse direction is equally broken: generateMessageReads uses baseSingleScalaTypeName = ByteString for lazy fields, so it looks for PByteString in the field map. But JSON parsers produce PString for string fields — so JSON parsing of messages with lazy string fields would silently return default values.
Fix: keep the PString path for singleFieldAsPvalue (call .value on the lazy field to get the string before wrapping), and handle the PString -> LazyField[String] conversion in generateMessageReads.
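The shape of the suggested fix, using minimal stand-ins for scalapb's `PValue` hierarchy (these are not the real descriptor APIs, just a model of the two paths):

```scala
object LazyJson {
  sealed trait PValue
  final case class PString(value: String)          extends PValue
  final case class PByteString(bytes: Array[Byte]) extends PValue

  final class LazyField[T](val bytes: Array[Byte], decode: Array[Byte] => T) {
    lazy val value: T = decode(bytes)
  }

  // getField path: decode before wrapping, so JSON writers see a string,
  // not base64-encoded bytes.
  def singleFieldAsPValue(f: LazyField[String]): PValue = PString(f.value)

  // parse path: JSON parsers produce PString for string fields; accept it
  // and re-wrap lazily instead of looking only for PByteString.
  def readLazyString(p: PValue): LazyField[String] = p match {
    case PString(s)     => new LazyField(s.getBytes("UTF-8"), b => new String(b, "UTF-8"))
    case PByteString(b) => new LazyField(b, x => new String(x, "UTF-8"))
  }
}
```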
```scala
val bytes = original.toByteArray
val parsed = LazyRepeated.parseFrom(bytes)

def f(str: Seq[String]): Int = str.length
```
parsed.items has type Seq[LazyField[String]] (because lazyStringFields is enabled at file level), but f expects Seq[String]. There is no Conversion[Seq[LazyField[String]], Seq[String]] defined — implicit conversions don't apply element-wise to collections. This should fail to compile. Please verify this test actually compiles and passes.
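A self-contained illustration of the point (stand-in `LazyField`; `items`, `f`, and `n` are hypothetical names): a single-element implicit conversion does not lift over `Seq`, so the caller has to decode element-wise.

```scala
object LazySeq {
  final class LazyField[T](val bytes: Array[Byte], decode: Array[Byte] => T) {
    lazy val value: T = decode(bytes)
  }

  def f(str: Seq[String]): Int = str.length

  val items: Seq[LazyField[String]] =
    Seq("a", "bb").map(s => new LazyField(s.getBytes("UTF-8"), b => new String(b, "UTF-8")))

  // f(items) would not compile: Seq[LazyField[String]] is not Seq[String],
  // and implicit conversions are not applied element-wise to collections.
  val n: Int = f(items.map(_.value)) // decode each element explicitly
}
```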
```scala
val bytes = original.toByteArray
val parsed = LazyDictionary.parseFrom(bytes)

parsed.stringToInt should contain theSameElementsAs originalStringToInt
```
With file-level lazy_string_fields: true, the map entry messages for LazyDictionary also inherit the setting (via message.getFile.scalaOptions.getLazyStringFields). This means stringToInt would be typed as Map[LazyField[String], Int] instead of Map[String, Int]. Map lookups by String key would silently fail at runtime because HashMap calls String.equals(LazyField[String]) on the stored keys, which returns false.
Consider whether lazyStringFields should be excluded from map entry key fields, and add a test that does .get("hello") on such a map to catch this.
Lookups already work because of implicit conversions on `.get("hello")` calls. But you are right that this behaviour is error-prone because of the untyped-collection problem.
I excluded map keys from lazy parsing. Now parsing with `lazy_string_fields: true` results in `Map[String, T]` for `map<string, t>` proto dictionaries.
```
@@ -0,0 +1,316 @@
[
```
Please don't commit benchmark result JSON files to the main branch. They bloat the repository (4 files × ~316 lines each) and will create churn every time benchmarks are re-run. Consider a dedicated benchmark-results branch or linking to them from the PR description instead.
Hello!
This is a revival of the work to support lazy fields (previous attempt #1376).
Context
In Java protobuf, string fields are handled using LazyField. This mechanism stores the field data as a `ByteString` and only parses it into a UTF-8 string when the corresponding getter is called. When a message containing such fields is serialized, the raw `ByteString` is written directly, without performing UTF-8 encoding or decoding (source). Unlike Java protobuf, scalapb does not lazily serialize strings. Accordingly, this is an opportunity to reduce the overhead if the following factors coincide:

- … `string` fields;

Such usage patterns are quite common for cloud-native applications.
The essence of the changes
Generating `LazyField[String]` for string fields if `scalapb.options.lazy_fields` is enabled. `LazyField[T]` contains the original `ByteString` and lazily parses the value on demand. Introduces implicit conversions for convenient use of generated case classes.

Example:
How it works with parsing and serialization:
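A minimal model of the round-trip optimization described above, using `Array[Byte]` as a stand-in for `ByteString` (an illustrative sketch of the assumed design, not scalapb's actual generated code): parsing keeps the raw bytes, serialization writes them back, and UTF-8 decoding happens only if the getter is called.

```scala
object LazyRoundTrip {
  final class LazyField[T](val bytes: Array[Byte], decode: Array[Byte] => T) {
    lazy val value: T = decode(bytes)
    def toBytes: Array[Byte] = bytes // serialize without encode/decode
  }

  var decodes = 0
  def parseLazy(wire: Array[Byte]): LazyField[String] =
    new LazyField(wire, b => { decodes += 1; new String(b, "UTF-8") })

  val wire         = "payload".getBytes("UTF-8")
  val field        = parseLazy(wire)
  val reserialized = field.toBytes // round trip: bytes pass through untouched
}
```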
Benchmarks
New benchmarks have been added:
`roundTripScala` and `roundTripJava`. They test the full proto lifecycle: parsing and serialization. I was confused by the fact that transforming data in `object ${Message}Test` using the `toJavaProto` method affects the performance results. In my generated code, this method forces `ByteString` usage during Java proto preparation, so the comparison is not entirely fair. The results have also improved for existing benchmarks, but I wanted a clearer comparison.

Looks great. More than 3x speedup 🚀 Of course, scalapb is faster than java proto even without additional improvements.
Questions
- `toJavaProto` in data preparation for benchmarks (`object ${Message}Test`)?