ssz

Simple Serialize (SSZ)

Package ssz provides a zero-allocation, opinionated toolkit for working with Ethereum's Simple Serialize (SSZ) format in Go. The focus is on code maintainability, only secondarily striving towards raw performance.

Please note, this repository is a work in progress. The API is unstable and breaking changes will regularly be made. Do not depend on this in publicly available modules.

This package was heavily inspired by the code generated by, and contained within, fastssz!

Goals and objectives

  • Elegant API surface: Binary protocols are low level constructs and writing encoders/decoders entails boilerplate and fumbling with details. Code generators can do a good job in achieving performance, but with a too low level API, the generated code becomes impossible for humans to maintain. That isn't an issue, until you throw something at the generator it cannot understand (e.g. multiplexed types), at which point you'll be in deep pain. By defining an API that is elegant from a dev perspective, we can write maintainable code for the special snowflake types, yet still generate it for the rest of the boring types.
  • Reduced redundancies: The API aims to make the common case easy and the less common case possible. Redundancies in user encoding/decoding code are deliberately avoided to remove subtle bugs (even at a slight hit on performance). If the user's types require some asymmetry, explicit encoding and decoding code paths are still supported.
  • Support existing types: Serialization libraries often assume the user is going to define a completely new, isolated type-set for all the things they want to encode. That is simply not the case, and to introduce a new encoding library into a pre-existing codebase, it must play nicely with the existing types. That means common Go typing and aliasing patterns should be supported without annotating everything with new methods.
  • Performant, as meaningful: Encoding/decoding code should be performant, even if we're giving up some of it to cater for the above goals. Language constructs that are known to be slow (e.g. reflection) should be avoided, and code should have performance similar to low level generated ones, including needing 0 allocations. That said, a meaningful application of the library will do something with the encoded data, which will almost certainly take more time than generating/parsing a binary blob.

Expectations

Whilst we aim to become the SSZ encoder of go-ethereum - and more generally, a go-to encoder for all Go applications that need to work with Ethereum data blobs - there is no guarantee that this outcome will occur. At the moment, this package is still in the design and experimentation phase and is not ready for a formal proposal.

There are several possible outcomes from this experiment:

  • We determine that the effort required to implement all SSZ features is not worth it, abandoning this package.
  • All the needed features are shipped, but the package is rejected in favor of some other superior design.
  • The API of this package gets merged into some existing library and this work gets abandoned in its favor.
  • The package turns out simple, safe and performant enough to be added to go-ethereum as a test.
  • Some other unforeseen outcome of the infinite possibilities.

Design

Responsibilities

The ssz package splits the responsibility between user code and library code as follows:

  • Users are responsible for creating Go structs, which are mapped one-to-one to the SSZ container type.
  • The library is responsible for creating all other SSZ types from the fields of the user-defined structs.
  • Some SSZ types require specific types to be used due to robustness and performance reasons.
  • SSZ unions are not implemented as they are an unused (and disliked) feature in Ethereum.

Weird stuff

The Simple Serialize spec has schema definitions for mapping SSZ data to JSON. We believe in separation of concerns. This library does not concern itself with encoding/decoding to or from formats other than SSZ.

How to use

First up, you need to add the package to your project:

go get github.com/karalabe/ssz

Static types

Some types in Ethereum will only contain a handful of statically sized fields. One example is a Withdrawal:

type Address [20]byte

type Withdrawal struct {
    Index     uint64
    Validator uint64
    Address   Address
    Amount    uint64
}

To encode/decode such an object via SSZ, it needs to implement the ssz.StaticObject interface:

type StaticObject interface {
	// SizeSSZ returns the total size of an SSZ object.
	SizeSSZ() uint32

	// DefineSSZ defines how an object would be encoded/decoded.
	DefineSSZ(codec *Codec)
}
  • The SizeSSZ method seems self-explanatory. It returns the total size of the final SSZ, and for static types such as a Withdrawal, you need to calculate this by hand (or with a code generator, more on that later).
  • The DefineSSZ method is more involved. It expects you to define which fields, in what order, and with what types, are going to be encoded. Essentially, it's the serialization format.
func (w *Withdrawal) SizeSSZ() uint32 { return 44 }

func (w *Withdrawal) DefineSSZ(codec *ssz.Codec) {
	ssz.DefineUint64(codec, &w.Index)        // Field (0) - Index          -  8 bytes
	ssz.DefineUint64(codec, &w.Validator)    // Field (1) - ValidatorIndex -  8 bytes
	ssz.DefineStaticBytes(codec, &w.Address) // Field (2) - Address        - 20 bytes
	ssz.DefineUint64(codec, &w.Amount)       // Field (3) - Amount         -  8 bytes
}
  • The DefineXYZ methods should feel self-explanatory. They spill out what fields to encode in what order and into what types. The interesting tidbit is the addressing of the fields. Since this code is used for both encoding and decoding, it needs to be able to instantiate any nil fields during decoding, so pointers are needed.

To encode the above Withdrawal into an SSZ stream, use either ssz.EncodeToStream or ssz.EncodeToBytes. The former will write into a stream directly, whilst the latter will write into a bytes buffer. In both cases you need to supply the output location to avoid GC allocations in the library.

func main() {
	out := new(bytes.Buffer)
	if err := ssz.EncodeToStream(out, new(Withdrawal)); err != nil {
		panic(err)
	}
	fmt.Printf("ssz: %#x\n", blob)
}

To decode an SSZ blob, use ssz.DecodeFromStream and ssz.DecodeFromBytes with the same disclaimers about allocations. Note, decoding requires knowing the size of the SSZ blob in advance. Unfortunately, this is a limitation of the SSZ format.
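
As a concrete illustration, below is a minimal encode/decode round trip (a sketch: it assumes ssz.Size reports an object's encoded size, and that ssz.EncodeToBytes / ssz.DecodeFromBytes take the caller-supplied buffer alongside the object, with the blob's length supplying the decode size):

func main() {
	// Encode into a caller-supplied buffer to keep the library allocation free
	obj := new(Withdrawal)
	blob := make([]byte, ssz.Size(obj))
	if err := ssz.EncodeToBytes(blob, obj); err != nil {
		panic(err)
	}
	// Decode back into a fresh object; len(blob) tells the decoder how much to consume
	dec := new(Withdrawal)
	if err := ssz.DecodeFromBytes(blob, dec); err != nil {
		panic(err)
	}
}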

Dynamic types

Most data types in Ethereum will contain a cool mix of static and dynamic data fields. Encoding those is much more interesting, yet still proudly simple. One such data type would be an ExecutionPayload, as seen below:

type Hash      [32]byte
type LogsBloom [256]byte

type ExecutionPayload struct {
	ParentHash    Hash
	FeeRecipient  Address
	StateRoot     Hash
	ReceiptsRoot  Hash
	LogsBloom     LogsBloom
	PrevRandao    Hash
	BlockNumber   uint64
	GasLimit      uint64
	GasUsed       uint64
	Timestamp     uint64
	ExtraData     []byte
	BaseFeePerGas *uint256.Int
	BlockHash     Hash
	Transactions  [][]byte
	Withdrawals   []*Withdrawal
}

Do note, we've reused the previously defined Address and Withdrawal types. You'll need those too to make this part of the code work. The uint256.Int type is from the github.com/holiman/uint256 package.

To encode/decode such an object via SSZ, it needs to implement the ssz.DynamicObject interface:

type DynamicObject interface {
	// SizeSSZ returns either the static size of the object if fixed == true, or
	// the total size otherwise.
	SizeSSZ(fixed bool) uint32

	// DefineSSZ defines how an object would be encoded/decoded.
	DefineSSZ(codec *Codec)
}

If you look at it more closely, you'll notice that it's almost the same as ssz.StaticObject, except the type of SizeSSZ is different, here taking an extra boolean argument. The method name/type clash is deliberate: it guarantees at compile time that dynamic objects cannot end up in static ssz slots and vice versa.

func (e *ExecutionPayload) SizeSSZ(fixed bool) uint32 {
	// Start out with the static size
	size := uint32(512)
	if fixed {
		return size
	}
	// Append all the dynamic sizes
	size += ssz.SizeDynamicBytes(e.ExtraData)           // Field (10) - ExtraData    - max 32 bytes (not enforced)
	size += ssz.SizeSliceOfDynamicBytes(e.Transactions) // Field (13) - Transactions - max 1048576 items, 1073741824 bytes each (not enforced)
	size += ssz.SizeSliceOfStaticObjects(e.Withdrawals) // Field (14) - Withdrawals  - max 16 items, 44 bytes each (not enforced)

	return size
}

As opposed to the static Withdrawal from the previous section, ExecutionPayload has both static and dynamic fields, so we can't just return a pre-computed literal number.

  • First up, we will still need to know the static size of the object to avoid costly runtime calculations over and over. Just for reference, that would be the size of all the static fields in the object + 4 bytes for each dynamic field (offset encoding). Feel free to verify the number 512 above (see the breakdown after this list).
    • If the caller requested only the static size via the fixed parameter, return early.
  • If the caller, however, requested the total size of the object, we need to iterate over all the dynamic fields and accumulate all their sizes too.
    • For all the usual Go suspects like slices and arrays of bytes, and 2D slices and arrays of bytes (i.e. ExtraData and Transactions above), there are helper methods available in the ssz package.
    • For types implementing ssz.StaticObject / ssz.DynamicObject (e.g. one item of Withdrawals above), there are again helper methods available to use them as single objects, static arrays of objects, or dynamic slices of objects.
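
For reference, the 512 bytes of static size break down as: 32 (ParentHash) + 20 (FeeRecipient) + 32 (StateRoot) + 32 (ReceiptsRoot) + 256 (LogsBloom) + 32 (PrevRandao) + 4 x 8 (the four uint64 fields) + 4 (ExtraData offset) + 32 (BaseFeePerGas) + 32 (BlockHash) + 4 (Transactions offset) + 4 (Withdrawals offset) = 512.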

The codec itself is very similar to the static example before:

func (e *ExecutionPayload) DefineSSZ(codec *ssz.Codec) {
	// Define the static data (fields and dynamic offsets)
	ssz.DefineStaticBytes(codec, &e.ParentHash)                                           // Field  ( 0) - ParentHash    -  32 bytes
	ssz.DefineStaticBytes(codec, &e.FeeRecipient)                                         // Field  ( 1) - FeeRecipient  -  20 bytes
	ssz.DefineStaticBytes(codec, &e.StateRoot)                                            // Field  ( 2) - StateRoot     -  32 bytes
	ssz.DefineStaticBytes(codec, &e.ReceiptsRoot)                                         // Field  ( 3) - ReceiptsRoot  -  32 bytes
	ssz.DefineStaticBytes(codec, &e.LogsBloom)                                            // Field  ( 4) - LogsBloom     - 256 bytes
	ssz.DefineStaticBytes(codec, &e.PrevRandao)                                           // Field  ( 5) - PrevRandao    -  32 bytes
	ssz.DefineUint64(codec, &e.BlockNumber)                                               // Field  ( 6) - BlockNumber   -   8 bytes
	ssz.DefineUint64(codec, &e.GasLimit)                                                  // Field  ( 7) - GasLimit      -   8 bytes
	ssz.DefineUint64(codec, &e.GasUsed)                                                   // Field  ( 8) - GasUsed       -   8 bytes
	ssz.DefineUint64(codec, &e.Timestamp)                                                 // Field  ( 9) - Timestamp     -   8 bytes
	ssz.DefineDynamicBytesOffset(codec, &e.ExtraData, 32)                                 // Offset (10) - ExtraData     -   4 bytes
	ssz.DefineUint256(codec, &e.BaseFeePerGas)                                            // Field  (11) - BaseFeePerGas -  32 bytes
	ssz.DefineStaticBytes(codec, &e.BlockHash)                                            // Field  (12) - BlockHash     -  32 bytes
	ssz.DefineSliceOfDynamicBytesOffset(codec, &e.Transactions, 1_048_576, 1_073_741_824) // Offset (13) - Transactions  -   4 bytes
	ssz.DefineSliceOfStaticObjectsOffset(codec, &e.Withdrawals, 16)                       // Offset (14) - Withdrawals   -   4 bytes

	// Define the dynamic data (fields)
	ssz.DefineDynamicBytesContent(codec, &e.ExtraData, 32)                                 // Field (10) - ExtraData
	ssz.DefineSliceOfDynamicBytesContent(codec, &e.Transactions, 1_048_576, 1_073_741_824) // Field (13) - Transactions
	ssz.DefineSliceOfStaticObjectsContent(codec, &e.Withdrawals, 16)                       // Field (14) - Withdrawals
}

Most of the DefineXYZ methods are similar to before. However, you might spot two distinct sets of method calls, DefineXYZOffset and DefineXYZContent. You'll need to use these for dynamic fields:

  • When SSZ encodes a dynamic object, it encodes it in two steps.
    • A 4-byte offset pointing to the dynamic data is written into the static SSZ area.
    • The dynamic object's actual encoding is written into the dynamic SSZ area.
  • Encoding libraries can take two routes to handle this scenario:
    • Explicitly require the user to give one command to write the object offset, followed by another command later to write the object content. This is fast, but leaks out encoding detail into user code.
    • Require only one command from the user, under the hood writing the object offset immediately, and stashing the object itself away for later serialization when the dynamic area is reached. This keeps the offset notion hidden from users, but entails a GC hit to the encoder.
  • This package was designed to be allocation free, thus the user needs to be aware that they must define the dynamic offset first and the dynamic content later. It's a tradeoff to achieve a 50-100% speed increase.
  • You might also note that dynamic fields also pass in size limits, in two places no less. This is an unfortunate asymmetry in the SSZ spec with regard to encoding and hashing data layouts.
    • During encoding/decoding, dynamic data is placed at the end of the SSZ blob, so limits need to be passed to the DefineXYZContent methods.
    • During hashing, dynamic data is merkleized inline, mixed with static data, so limits need to be passed to the DefineXYZOffset methods.
    • This is a bit unfortunate. Either parameter set could be avoided at the cost of internal tracking, but that would break 0-alloc.

To encode the above ExecutionPayload, do just as we have done with the static Withdrawal object.
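
For completeness, a sketch reusing the same stream encoder as in the static example:

func main() {
	out := new(bytes.Buffer)
	if err := ssz.EncodeToStream(out, new(ExecutionPayload)); err != nil {
		panic(err)
	}
	fmt.Printf("ssz: %#x\n", out.Bytes())
}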

Asymmetric types

For types defined in perfect isolation - dedicated to SSZ - it's easy to define the fields with the perfect types, and perfect sizes, and perfect everything. Generating or writing an elegant encoder for those is easy.

In reality, you'll often need to encode/decode types which already exist in a codebase, and which might not map so cleanly onto the SSZ structure spec you want (e.g. you have one union type of ExecutionPayload that contains all the Bellatrix, Capella, Deneb, etc. fork fields together) and you want to encode/decode them differently based on the context.

Most SSZ libraries will not permit you to do such a thing. Reflection based libraries cannot infer the context in which they should switch encoders, nor can they represent multiple encodings at the same time. Generator based libraries again have no meaningful way to specify optional fields based on different constraints and contexts.

The only way to handle such scenarios is to write the encoders by hand; furthermore, encoding might depend on what's in the struct, whilst decoding might depend on what it's contained within. Completely asymmetric, so our unified codec definition approach from the previous sections cannot work.

For these scenarios, this package has support for asymmetric encoders/decoders, where the caller can independently implement the two paths with their unique quirks.

To avoid having a real-world example's complexity overshadow the point we're trying to make here, we'll just convert the previously demoed Withdrawal encoding/decoding from the unified codec version to a separate encoder and decoder version.

func (w *Withdrawal) DefineSSZ(codec *ssz.Codec) {
	codec.DefineEncoder(func(enc *ssz.Encoder) {
		ssz.EncodeUint64(enc, w.Index)         // Field (0) - Index          -  8 bytes
		ssz.EncodeUint64(enc, w.Validator)     // Field (1) - ValidatorIndex -  8 bytes
		ssz.EncodeStaticBytes(enc, &w.Address) // Field (2) - Address        - 20 bytes
		ssz.EncodeUint64(enc, w.Amount)        // Field (3) - Amount         -  8 bytes
	})
	codec.DefineDecoder(func(dec *ssz.Decoder) {
		ssz.DecodeUint64(dec, &w.Index)        // Field (0) - Index          -  8 bytes
		ssz.DecodeUint64(dec, &w.Validator)    // Field (1) - ValidatorIndex -  8 bytes
		ssz.DecodeStaticBytes(dec, &w.Address) // Field (2) - Address        - 20 bytes
		ssz.DecodeUint64(dec, &w.Amount)       // Field (3) - Amount         -  8 bytes
	})
}
  • As you can see, we piggyback on the already existing ssz.Object's DefineSSZ method, and do not require implementing new functions. This is good because we want to be able to seamlessly use unified or split encoders without having to tell everyone about it.
  • Whereas previously we had a bunch of DefineXYZ methods to enumerate the fields for the unified encoding/decoding, here we replaced them with separate definitions for the encoder and decoder via codec.DefineEncoder and codec.DefineDecoder.
  • The implementation of the encoder and decoder follows the exact same pattern and naming conventions as with the codec, but instead of operating on an ssz.Codec object, we're operating on ssz.Encoder/ssz.Decoder objects; and instead of calling methods named ssz.DefineXYZ, we're calling methods named ssz.EncodeXYZ and ssz.DecodeXYZ.
  • Perhaps also note that the EncodeXYZ methods do not take pointers to everything anymore, since they do not need to instantiate fields during operation. Still, static bytes are passed by pointer to avoid heavy copy overheads for large arrays.

To encode the above Withdrawal into an SSZ stream, you use the same code as before. Everything is seamless.

Checked types

If your types are using strongly typed arrays (e.g. [32]byte, and not []byte) for static lists, the above code works just fine. However, some types might want to use []byte as the field type, but have it still behave as if it was [32]byte. This poses an issue, because if the decoder only sees []byte, it cannot figure out how much data it should decode into it. For those scenarios, we have checked methods.

The previous Withdrawal is a good example. Let's replace the type Address [20]byte alias, with a plain []byte slice (not a [20]byte array, rather an opaque []byte slice).

type Withdrawal struct {
    Index     uint64
    Validator uint64
    Address   []byte
    Amount    uint64
}

The code for the SizeSSZ remains the same. The code for DefineSSZ changes ever so slightly:

func (w *Withdrawal) DefineSSZ(codec *ssz.Codec) {
	ssz.DefineUint64(codec, &w.Index)                   // Field (0) - Index          -  8 bytes
	ssz.DefineUint64(codec, &w.Validator)               // Field (1) - ValidatorIndex -  8 bytes
	ssz.DefineCheckedStaticBytes(codec, &w.Address, 20) // Field (2) - Address        - 20 bytes
	ssz.DefineUint64(codec, &w.Amount)                  // Field (3) - Amount         -  8 bytes
}

Notably, the ssz.DefineStaticBytes call from our old code (which was given a [20]byte array) is replaced with ssz.DefineCheckedStaticBytes. The latter method operates on an opaque []byte slice, so if we want it to behave like a static sized list, we need to tell it how large it needs to be. This will result in a runtime check to ensure that the size is correct before decoding.

Note, checked methods entail a runtime cost. When decoding such opaque slices, we can't blindly fill the fields with data, rather we need to ensure that they are allocated and that they are of the correct size. Ideally only use checked methods for prototyping or for pre-existing types where you just have to run with whatever you have and can't change the field to an array.

Monolithic types

We've seen previously that asymmetric codecs can be used to implement custom serialization logic for types that might encode in a variety of ways depending on their data content.

One very specific subset of that scenario is the Ethereum consensus typeset. Whenever a new fork is released, a number of types are slightly modified, usually by adding new fields to existing structs. In the beacon specs, this usually results in an explosion of types: a new base type for fork X is created (e.g. BeaconBlockBodyBellatrix), but any type including that also needs to be re-created for fork X (e.g. BeaconBlockBellatrix), resulting in cascading type creations. Case in point, there are 79 consensus types in Prysm, most of which are copies of one another with tiny additions.

This design is definitely clean and works well if these containers are used just as data transmission or storage objects. However, operating on hundreds of types storing the same thing in a live codebase is unwieldy. In go-ethereum we've always used monolithic types that encode just right according to the RLP specs of the EL forks, and thus this library aims to provide similar support for the SSZ world too.

We define a monolithic type as a container that can be encoded/decoded differently, based on what fork the codec runs in. To give an example, let's look at the previous ExecutionPayload, but instead of using it to represent a single possible consensus form, let's define all possible fields across all possible forks:

type ExecutionPayloadMonolith struct {
	ParentHash    Hash
	FeeRecipient  Address
	StateRoot     Hash
	ReceiptsRoot  Hash
	LogsBloom     LogsBloom
	PrevRandao    Hash
	BlockNumber   uint64
	GasLimit      uint64
	GasUsed       uint64
	Timestamp     uint64
	ExtraData     []byte
	BaseFeePerGas *uint256.Int
	BlockHash     Hash
	Transactions  [][]byte
	Withdrawals   []*Withdrawal // Appears in the Shanghai fork
	BlobGasUsed   *uint64       // Appears in the Cancun fork
	ExcessBlobGas *uint64       // Appears in the Cancun fork
}

Not much difference versus what we've used previously, but note, the fork-specific fields must all be nil-able (Withdrawals is a slice that can be nil, and the blob gas fields are *uint64, which again can be nil).

Like before, we need to implement the SizeSSZ method:

func (obj *ExecutionPayloadMonolith) SizeSSZ(sizer *ssz.Sizer, fixed bool) uint32 {
	// Start out with the static size
	size := uint32(508)
	if sizer.Fork() >= ssz.ForkShanghai {
		size += 4
	}
	if sizer.Fork() >= ssz.ForkCancun {
		size += 16
	}
	if fixed {
		return size
	}
	// Append all the dynamic sizes
	size += ssz.SizeDynamicBytes(sizer, obj.ExtraData)
	size += ssz.SizeSliceOfDynamicBytes(sizer, obj.Transactions)
	if sizer.Fork() >= ssz.ForkShanghai {
		size += ssz.SizeSliceOfStaticObjects(sizer, obj.Withdrawals)
	}
	return size
}

This time, it was a bit more complex:

  • The static size can change depending on which fork we're encoding into. The base Frontier encoding is 508 bytes, but Shanghai adds the dynamic withdrawals (a 4-byte static offset) and Cancun adds 2 static uint64s (2x8 bytes). You can retrieve what fork we're encoding into via the ssz.Sizer method argument.
  • The dynamic size can change just the same: if we're encoding into Shanghai, we need to account for the withdrawals too. The uint64s are not dynamic, so they don't appear in that section of the size.

Similarly to how SizeSSZ needs to be fork-enabled, DefineSSZ goes through a transformation:

func (obj *ExecutionPayloadMonolith) DefineSSZ(codec *ssz.Codec) {
	// Define the static data (fields and dynamic offsets)
	ssz.DefineStaticBytes(codec, &obj.ParentHash)                                                                    // Field  ( 0) -    ParentHash -  32 bytes
	ssz.DefineStaticBytes(codec, &obj.FeeRecipient)                                                                  // Field  ( 1) -  FeeRecipient -  20 bytes
	ssz.DefineStaticBytes(codec, &obj.StateRoot)                                                                     // Field  ( 2) -     StateRoot -  32 bytes
	ssz.DefineStaticBytes(codec, &obj.ReceiptsRoot)                                                                  // Field  ( 3) -  ReceiptsRoot -  32 bytes
	ssz.DefineStaticBytes(codec, &obj.LogsBloom)                                                                     // Field  ( 4) -     LogsBloom - 256 bytes
	ssz.DefineStaticBytes(codec, &obj.PrevRandao)                                                                    // Field  ( 5) -    PrevRandao -  32 bytes
	ssz.DefineUint64(codec, &obj.BlockNumber)                                                                        // Field  ( 6) -   BlockNumber -   8 bytes
	ssz.DefineUint64(codec, &obj.GasLimit)                                                                           // Field  ( 7) -      GasLimit -   8 bytes
	ssz.DefineUint64(codec, &obj.GasUsed)                                                                            // Field  ( 8) -       GasUsed -   8 bytes
	ssz.DefineUint64(codec, &obj.Timestamp)                                                                          // Field  ( 9) -     Timestamp -   8 bytes
	ssz.DefineDynamicBytesOffset(codec, &obj.ExtraData, 32)                                                          // Offset (10) -     ExtraData -   4 bytes
	ssz.DefineUint256(codec, &obj.BaseFeePerGas)                                                                     // Field  (11) - BaseFeePerGas -  32 bytes
	ssz.DefineStaticBytes(codec, &obj.BlockHash)                                                                     // Field  (12) -     BlockHash -  32 bytes
	ssz.DefineSliceOfDynamicBytesOffset(codec, &obj.Transactions, 1048576, 1073741824)                               // Offset (13) -  Transactions -   4 bytes
	ssz.DefineSliceOfStaticObjectsOffsetOnFork(codec, &obj.Withdrawals, 16, ssz.ForkFilter{Added: ssz.ForkShanghai}) // Offset (14) -   Withdrawals -   4 bytes
	ssz.DefineUint64PointerOnFork(codec, &obj.BlobGasUsed, ssz.ForkFilter{Added: ssz.ForkCancun})                    // Field  (15) -   BlobGasUsed -   8 bytes
	ssz.DefineUint64PointerOnFork(codec, &obj.ExcessBlobGas, ssz.ForkFilter{Added: ssz.ForkCancun})                  // Field  (16) - ExcessBlobGas -   8 bytes

	// Define the dynamic data (fields)
	ssz.DefineDynamicBytesContent(codec, &obj.ExtraData, 32)                                                          // Field  (10) -     ExtraData - ? bytes
	ssz.DefineSliceOfDynamicBytesContent(codec, &obj.Transactions, 1048576, 1073741824)                               // Field  (13) -  Transactions - ? bytes
	ssz.DefineSliceOfStaticObjectsContentOnFork(codec, &obj.Withdrawals, 16, ssz.ForkFilter{Added: ssz.ForkShanghai}) // Field  (14) -   Withdrawals - ? bytes
}

The above code is eerily similar to our previous codec, yet subtly different. Wherever fork specific fields appear, the methods get suffixed with OnFork and are passed a rule as to which fork to apply in (e.g. ssz.ForkFilter{Added: ssz.ForkCancun}). There are good reasons for both:

  • The SizeSSZ method used if clauses to check for forks and behaved differently based on which fork we're in. That is clean, however decoding has a quirk: if we decode into a pre-existing object (with fields set to arbitrary junk), the fields not present in a fork need to be nil-ed out. As such, if clauses within the definitions won't work any more; we need to "define" missing fields too, to ensure they get nil-ed correctly. Thus the OnFork suffix for all fields, always.
  • Of course, calling an OnFork method is kind of pointless without specifying which fork we want a field to be present in. That's the ssz.ForkFilter parameter. By making it a slightly more complex filter type, the SSZ library supports both adding new fields in a fork, and also removing old fields (both cases have happened on the beacon chain). Other operations will be added as needed.
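
For illustration, a field added in Shanghai and dropped again in a later fork might be filtered as below (note, the Removed field name is an assumption based on the removal support described above, not a confirmed part of the API):

ssz.ForkFilter{Added: ssz.ForkShanghai, Removed: ssz.ForkCancun}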

Lastly, to encode the above ExecutionPayloadMonolith into an SSZ stream, we can't use the tried and proven ssz.EncodeToStream, since that will not know what fork we'd like to use. Rather, again, we need to call an OnFork version:

func main() {
	out := new(bytes.Buffer)
	if err := ssz.EncodeToStreamOnFork(out, new(ExecutionPayloadMonolith), ssz.ForkCancun); err != nil {
		panic(err)
	}
	fmt.Printf("ssz: %#x\n", blob)
}

As a side note, although the SSZ library has the Ethereum hard-forks included (e.g. ssz.ForkCancun and ssz.ForkDeneb), there is nothing stopping a user of the library from using their own fork enum (e.g. mypkg.ForkAlice and mypkg.ForkBob); just type it as ssz.Fork and make sure that 0 means some variation of unknown / present-in-all-forks.
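
A minimal sketch of such a custom enum (ForkAlice / ForkBob are illustrative names, not part of the library, and we assume ssz.Fork is integer backed, as the >= comparisons above suggest):

const (
	ForkUnknown ssz.Fork = iota // 0 must mean unknown / present in all forks
	ForkAlice                   // First custom fork of our own protocol
	ForkBob                     // Second custom fork of our own protocol
)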

Generated encoders

More often than not, the Go structs that you'd like to serialize to/from SSZ are simple data containers. Without some particular quirk you'd like to explicitly support, there's little reason to spend precious time counting the bits and digging through a long list of encoder methods to call.

For those scenarios, the library also supports generating the encoding/decoding code via a Go command:

go run github.com/karalabe/ssz/cmd/sszgen --help

Inferred field sizes

Let's go back to our very simple Withdrawal type from way back.

type Withdrawal struct {
    Index     uint64
    Validator uint64
    Address   [20]byte
    Amount    uint64
}

This seems like a fairly simple thing that we should be able to automatically generate a codec for. Let's try:

go run github.com/karalabe/ssz/cmd/sszgen --type Withdrawal

Calling the generator on this type will produce the following (very nice I might say) code:

// Code generated by github.com/karalabe/ssz. DO NOT EDIT.

package main

import "github.com/karalabe/ssz"

// SizeSSZ returns the total size of the static ssz object.
func (obj *Withdrawal) SizeSSZ() uint32 {
	return 8 + 8 + 20 + 8
}

// DefineSSZ defines how an object is encoded/decoded.
func (obj *Withdrawal) DefineSSZ(codec *ssz.Codec) {
	ssz.DefineUint64(codec, &obj.Index)        // Field  (0) -     Index -  8 bytes
	ssz.DefineUint64(codec, &obj.Validator)    // Field  (1) - Validator -  8 bytes
	ssz.DefineStaticBytes(codec, &obj.Address) // Field  (2) -   Address - 20 bytes
	ssz.DefineUint64(codec, &obj.Amount)       // Field  (3) -    Amount -  8 bytes
}

It has everything we would have written ourselves: SizeSSZ and DefineSSZ... and it also has a lot of useful comments we for sure wouldn't have written ourselves. Generator for the win!

Ok, but this was too easy. All the fields of the Withdrawal object were primitive types of known lengths, so there's no heavy lifting involved at all. Let's take a look at a juicier example.

Explicit field sizes

For our complex test, let's pick our dynamic ExecutionPayload type from before, but let's make it as hard as it gets and remove all size information from the Go types (e.g. instead of using [32]byte, we can make it extra hard by using []byte only).

Now, obviously, if we were to write serialization code by hand, we'd take advantage of our knowledge of what each of these fields is semantically, so we could provide the necessary sizes for a decoder to use. If, however, we want to generate the serialization code, we need to share all that "insider knowledge" with the code generator somehow.

The standard way in the Go world is through struct tags. Specifically, in the context of this library, it will be through the ssz-size and ssz-max tags. These follow the convention set previously by other Go SSZ libraries:

  • ssz-size can be used to declare a field having a static size
  • ssz-max can be used to declare a field having a dynamic size with a size cap.
  • Both tags support multiple dimensions via comma-separation and omitting a dimension via ?
type ExecutionPayload struct {
	ParentHash    []byte        `ssz-size:"32"`
	FeeRecipient  []byte        `ssz-size:"20"`
	StateRoot     []byte        `ssz-size:"32"`
	ReceiptsRoot  []byte        `ssz-size:"32"`
	LogsBloom     []byte        `ssz-size:"256"`
	PrevRandao    []byte        `ssz-size:"32"`
	BlockNumber   uint64
	GasLimit      uint64
	GasUsed       uint64
	Timestamp     uint64
	ExtraData     []byte        `ssz-max:"32"`
	BaseFeePerGas *uint256.Int
	BlockHash     []byte        `ssz-size:"32"`
	Transactions  [][]byte      `ssz-max:"1048576,1073741824"`
	Withdrawals   []*Withdrawal `ssz-max:"16"`
}

Calling the generator as before, just with the ExecutionPayload type, yields the below, considerably richer code:

// Code generated by github.com/karalabe/ssz. DO NOT EDIT.

package main

import "github.com/karalabe/ssz"

// SizeSSZ returns either the static size of the object if fixed == true, or
// the total size otherwise.
func (obj *ExecutionPayload) SizeSSZ(fixed bool) uint32 {
	var size = uint32(32 + 20 + 32 + 32 + 256 + 32 + 8 + 8 + 8 + 8 + 4 + 32 + 32 + 4 + 4)
	if fixed {
		return size
	}
	size += ssz.SizeDynamicBytes(obj.ExtraData)
	size += ssz.SizeSliceOfDynamicBytes(obj.Transactions)
	size += ssz.SizeSliceOfStaticObjects(obj.Withdrawals)

	return size
}

// DefineSSZ defines how an object is encoded/decoded.
func (obj *ExecutionPayload) DefineSSZ(codec *ssz.Codec) {
	// Define the static data (fields and dynamic offsets)
	ssz.DefineCheckedStaticBytes(codec, &obj.ParentHash, 32)                           // Field  ( 0) -    ParentHash -  32 bytes
	ssz.DefineCheckedStaticBytes(codec, &obj.FeeRecipient, 32)                         // Field  ( 1) -  FeeRecipient -  32 bytes
	ssz.DefineCheckedStaticBytes(codec, &obj.StateRoot, 20)                            // Field  ( 2) -     StateRoot -  20 bytes
	ssz.DefineCheckedStaticBytes(codec, &obj.ReceiptsRoot, 32)                         // Field  ( 3) -  ReceiptsRoot -  32 bytes
	ssz.DefineCheckedStaticBytes(codec, &obj.LogsBloom, 256)                           // Field  ( 4) -     LogsBloom - 256 bytes
	ssz.DefineCheckedStaticBytes(codec, &obj.PrevRandao, 32)                           // Field  ( 5) -    PrevRandao -  32 bytes
	ssz.DefineUint64(codec, &obj.BlockNumber)                                          // Field  ( 6) -   BlockNumber -   8 bytes
	ssz.DefineUint64(codec, &obj.GasLimit)                                             // Field  ( 7) -      GasLimit -   8 bytes
	ssz.DefineUint64(codec, &obj.GasUsed)                                              // Field  ( 8) -       GasUsed -   8 bytes
	ssz.DefineUint64(codec, &obj.Timestamp)                                            // Field  ( 9) -     Timestamp -   8 bytes
	ssz.DefineDynamicBytesOffset(codec, &obj.ExtraData, 32)                            // Offset (10) -     ExtraData -   4 bytes
	ssz.DefineUint256(codec, &obj.BaseFeePerGas)                                       // Field  (11) - BaseFeePerGas -  32 bytes
	ssz.DefineCheckedStaticBytes(codec, &obj.BlockHash, 32)                            // Field  (12) -     BlockHash -  32 bytes
	ssz.DefineSliceOfDynamicBytesOffset(codec, &obj.Transactions, 1048576, 1073741824) // Offset (13) -  Transactions -   4 bytes
	ssz.DefineSliceOfStaticObjectsOffset(codec, &obj.Withdrawals, 16)                  // Offset (14) -   Withdrawals -   4 bytes

	// Define the dynamic data (fields)
	ssz.DefineDynamicBytesContent(codec, &obj.ExtraData, 32)                            // Field  (10) -     ExtraData - ? bytes
	ssz.DefineSliceOfDynamicBytesContent(codec, &obj.Transactions, 1048576, 1073741824) // Field  (13) -  Transactions - ? bytes
	ssz.DefineSliceOfStaticObjectsContent(codec, &obj.Withdrawals, 16)                  // Field  (14) -   Withdrawals - ? bytes
}

Points of interests to note:

  • The generator realized that this type contains dynamic fields (either through ssz-max tags or via embedded dynamic objects), so it generated an implementation for ssz.DynamicObject (vs. ssz.StaticObject in the previous section).
  • The generator took into consideration all the ssz-size and ssz-max tags to generate serialization calls with different base types and runtime size checks.
    • Note, it is less performant to have runtime size checks like this, so if you know the size of a field, arrays are always preferable to dynamic slices.

Cross-validated field sizes

We've seen that the size of a field can either be deduced automatically, or it can be provided to the generator explicitly. But what happens if we provide an ssz struct tag for a field of known size?

type Withdrawal struct {
    Index     uint64   `ssz-size:"8"`
    Validator uint64   `ssz-size:"8"`
    Address   [20]byte `ssz-size:"32"` // Deliberately wrong tag size
    Amount    uint64   `ssz-size:"8"`
}
go run github.com/karalabe/ssz/cmd/sszgen --type Withdrawal

failed to validate field Withdrawal.Address: array of byte basic type tag conflict: field is 20 bytes, tag wants [32] bytes

The code generator will take into consideration the information in both the field's Go type and the struct tag, and will cross validate them against each other. If there's a size conflict, it will abort the code generation.

This functionality can be very helpful in detecting refactor issues, where the user changes the type of a field, which would result in a different encoding. By having the field tagged with an ssz-size, such an error would be detected.

As such, we'd recommend always tagging all SSZ encoded fields with their sizes. It results in both safer code and self-documenting code.
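
For reference, the corrected tagging for the example above (the Address tag now matching the array's real 20-byte size):

type Withdrawal struct {
    Index     uint64   `ssz-size:"8"`
    Validator uint64   `ssz-size:"8"`
    Address   [20]byte `ssz-size:"20"`
    Amount    uint64   `ssz-size:"8"`
}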

Monolithic types

This library supports monolithic types that encode differently based on what fork the codec is operating in. Naturally, that is a perfect example of something that would be useful to generate, and indeed, the generator can do it.

  • Monolithic type fields can be tagged with an ssz-fork:"name" Go struct tag, which will be picked up by the code generator and mapped from its textual form to pre-declared fork identifiers.
  • The fork names follow the Go build constraint rules:
    • A field can be declared introduced in fork X via ssz-fork:"x".
    • A field can be declared removed in fork X via ssz-fork:"!x".
type ExecutionPayloadMonolith struct {
	ParentHash    Hash
	FeeRecipient  Address
	StateRoot     Hash
	ReceiptsRoot  Hash
	LogsBloom     LogsBloom
	PrevRandao    Hash
	BlockNumber   uint64
	GasLimit      uint64
	GasUsed       uint64
	Timestamp     uint64
	ExtraData     []byte       `ssz-max:"32"`
	BaseFeePerGas *uint256.Int
	BlockHash     Hash
	Transactions  [][]byte      `ssz-max:"1048576,1073741824"`
	Withdrawals   []*Withdrawal `ssz-max:"16" ssz-fork:"shanghai"`
	BlobGasUsed   *uint64       `             ssz-fork:"cancun"`
	ExcessBlobGas *uint64       `             ssz-fork:"cancun"`
}

Calling the generator as before, just with the ExecutionPayloadMonolith yields the below, much more interesting code:

// Code generated by github.com/karalabe/ssz. DO NOT EDIT.

package main

import "github.com/karalabe/ssz"

// SizeSSZ returns either the static size of the object if fixed == true, or
// the total size otherwise.
func (obj *ExecutionPayloadMonolith) SizeSSZ(sizer *ssz.Sizer, fixed bool) (size uint32) {
	size = 32 + 20 + 32 + 32 + 256 + 32 + 8 + 8 + 8 + 8 + 4 + 32 + 32 + 4
	if sizer.Fork() >= ssz.ForkShanghai {
		size += 4
	}
	if sizer.Fork() >= ssz.ForkCancun {
		size += 8 + 8
	}
	if fixed {
		return size
	}
	size += ssz.SizeDynamicBytes(sizer, obj.ExtraData)
	size += ssz.SizeSliceOfDynamicBytes(sizer, obj.Transactions)
	if sizer.Fork() >= ssz.ForkShanghai {
		size += ssz.SizeSliceOfStaticObjects(sizer, obj.Withdrawals)
	}
	return size
}

// DefineSSZ defines how an object is encoded/decoded.
func (obj *ExecutionPayloadMonolith) DefineSSZ(codec *ssz.Codec) {
	// Define the static data (fields and dynamic offsets)
	ssz.DefineStaticBytes(codec, &obj.ParentHash)                                                                    // Field  ( 0) -    ParentHash -  32 bytes
	ssz.DefineStaticBytes(codec, &obj.FeeRecipient)                                                                  // Field  ( 1) -  FeeRecipient -  20 bytes
	ssz.DefineStaticBytes(codec, &obj.StateRoot)                                                                     // Field  ( 2) -     StateRoot -  32 bytes
	ssz.DefineStaticBytes(codec, &obj.ReceiptsRoot)                                                                  // Field  ( 3) -  ReceiptsRoot -  32 bytes
	ssz.DefineStaticBytes(codec, &obj.LogsBloom)                                                                     // Field  ( 4) -     LogsBloom - 256 bytes
	ssz.DefineStaticBytes(codec, &obj.PrevRandao)                                                                    // Field  ( 5) -    PrevRandao -  32 bytes
	ssz.DefineUint64(codec, &obj.BlockNumber)                                                                        // Field  ( 6) -   BlockNumber -   8 bytes
	ssz.DefineUint64(codec, &obj.GasLimit)                                                                           // Field  ( 7) -      GasLimit -   8 bytes
	ssz.DefineUint64(codec, &obj.GasUsed)                                                                            // Field  ( 8) -       GasUsed -   8 bytes
	ssz.DefineUint64(codec, &obj.Timestamp)                                                                          // Field  ( 9) -     Timestamp -   8 bytes
	ssz.DefineDynamicBytesOffset(codec, &obj.ExtraData, 32)                                                          // Offset (10) -     ExtraData -   4 bytes
	ssz.DefineUint256(codec, &obj.BaseFeePerGas)                                                                     // Field  (11) - BaseFeePerGas -  32 bytes
	ssz.DefineStaticBytes(codec, &obj.BlockHash)                                                                     // Field  (12) -     BlockHash -  32 bytes
	ssz.DefineSliceOfDynamicBytesOffset(codec, &obj.Transactions, 1048576, 1073741824)                               // Offset (13) -  Transactions -   4 bytes
	ssz.DefineSliceOfStaticObjectsOffsetOnFork(codec, &obj.Withdrawals, 16, ssz.ForkFilter{Added: ssz.ForkShanghai}) // Offset (14) -   Withdrawals -   4 bytes
	ssz.DefineUint64PointerOnFork(codec, &obj.BlobGasUsed, ssz.ForkFilter{Added: ssz.ForkCancun})                    // Field  (15) -   BlobGasUsed -   8 bytes
	ssz.DefineUint64PointerOnFork(codec, &obj.ExcessBlobGas, ssz.ForkFilter{Added: ssz.ForkCancun})                  // Field  (16) - ExcessBlobGas -   8 bytes

	// Define the dynamic data (fields)
	ssz.DefineDynamicBytesContent(codec, &obj.ExtraData, 32)                                                          // Field  (10) -     ExtraData - ? bytes
	ssz.DefineSliceOfDynamicBytesContent(codec, &obj.Transactions, 1048576, 1073741824)                               // Field  (13) -  Transactions - ? bytes
	ssz.DefineSliceOfStaticObjectsContentOnFork(codec, &obj.Withdrawals, 16, ssz.ForkFilter{Added: ssz.ForkShanghai}) // Field  (14) -   Withdrawals - ? bytes
}

To explicitly highlight, the ssz-fork tags have been extracted from the struct definition and mapped into both an updated SizeSSZ method as well as a new definition style in DefineSSZ.

Do note, this type (or anything embedding it) will require the OnFork versions of ssz.Encode, ssz.Decode, ssz.Hash and ssz.Size to be called, since naturally it relies on a correct fork being set in the codec's context.

Lastly, whilst the library itself supports custom fork enums, there is no support yet for these in the code generator. This will probably be added eventually via a --forks=mypkg or similar CLI flag, but it's a TODO for now.

Go generate

Perhaps just a mention: anyone using the code generator should call it from a go:generate directive. It is much simpler, and once added to the code, it can always be re-run via go generate.
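
For example, placed next to the type definition (reusing the sszgen invocation demoed earlier):

//go:generate go run github.com/karalabe/ssz/cmd/sszgen --type Withdrawal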

Multi-type ordering

When generating code for multiple types at once (with one call or many), there's one ordering issue you need to be aware of.

When the code generator finds a field that is a struct of some sort, it needs to decide if it's a static or a dynamic type. To do that, it relies on checking if the type implements the ssz.StaticObject or ssz.DynamicObject interface. If it doesn't implement either, the generator will error.

This means, however, that if you have a type that's embedded in another type (e.g. in our examples above, Withdrawal was embedded inside ExecutionPayload in a slice), you need to generate the code for the inner type first, and then the outer type. This ensures that when the outer type is resolving the interface of the inner one, that is already generated and available.
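
E.g. generating the embedded Withdrawal before the embedding ExecutionPayload:

go run github.com/karalabe/ssz/cmd/sszgen --type Withdrawal
go run github.com/karalabe/ssz/cmd/sszgen --type ExecutionPayload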

Merkleization

Half the SSZ spec is about encoding/decoding data into a binary format, the other half is about proving the data via Merkle Proofs.

Symmetric API

The same way that encoding/decoding has a "symmetric" and "asymmetric" API, so does merkleization. What's more, the symmetric API is actually exactly the same as for encoding/decoding, with no code changes necessary!

Taking our very simple Withdrawal type and its codec code:

type Address [20]byte

type Withdrawal struct {
    Index     uint64
    Validator uint64
    Address   Address
    Amount    uint64
}

func (w *Withdrawal) SizeSSZ() uint32 { return 44 }
func (w *Withdrawal) DefineSSZ(codec *ssz.Codec) {
	ssz.DefineUint64(codec, &w.Index)        // Field (0) - Index          -  8 bytes
	ssz.DefineUint64(codec, &w.Validator)    // Field (1) - ValidatorIndex -  8 bytes
	ssz.DefineStaticBytes(codec, &w.Address) // Field (2) - Address        - 20 bytes
	ssz.DefineUint64(codec, &w.Amount)       // Field (3) - Amount         -  8 bytes
}

Hashing this works out of the box. To merkleize the above Withdrawal and calculate its Merkle trie root, use either ssz.HashSequential or ssz.HashConcurrent. The former will run on a single thread and use 0 allocations, whereas the latter might run on multiple threads concurrently (if large enough fields are present) and use O(1) memory.

func main() {
	hash := ssz.HashSequential(new(Withdrawal))
	fmt.Printf("hash: %#x\n", hash)
}

Asymmetric API

If for some reason you have a type that requires custom encoders/decoders, there is a high chance that it will also require a custom hasher. For those cases, this library provides an API surface very similar to how the asymmetric encoding/decoding worked:

func (w *Withdrawal) DefineSSZ(codec *ssz.Codec) {
	codec.DefineEncoder(func(enc *ssz.Encoder) {
		ssz.EncodeUint64(enc, w.Index)         // Field (0) - Index          -  8 bytes
		ssz.EncodeUint64(enc, w.Validator)     // Field (1) - ValidatorIndex -  8 bytes
		ssz.EncodeStaticBytes(enc, &w.Address) // Field (2) - Address        - 20 bytes
		ssz.EncodeUint64(enc, w.Amount)        // Field (3) - Amount         -  8 bytes
	})
	codec.DefineDecoder(func(dec *ssz.Decoder) {
		ssz.DecodeUint64(dec, &w.Index)        // Field (0) - Index          -  8 bytes
		ssz.DecodeUint64(dec, &w.Validator)    // Field (1) - ValidatorIndex -  8 bytes
		ssz.DecodeStaticBytes(dec, &w.Address) // Field (2) - Address        - 20 bytes
		ssz.DecodeUint64(dec, &w.Amount)       // Field (3) - Amount         -  8 bytes
	})
	codec.DefineHasher(func(has *ssz.Hasher) {
		ssz.HashUint64(has, w.Index)         // Field (0) - Index          -  8 bytes
		ssz.HashUint64(has, w.Validator)     // Field (1) - ValidatorIndex -  8 bytes
		ssz.HashStaticBytes(has, &w.Address) // Field (2) - Address        - 20 bytes
		ssz.HashUint64(has, w.Amount)        // Field (3) - Amount         -  8 bytes
	})
}

To hash the above Withdrawal into a Merkle trie root, you use the same call as before. Everything is seamless.

Quick reference

The table below is a summary of the methods available for SizeSSZ and DefineSSZ:

  • The Size API is to be used to implement the SizeSSZ method's dynamic parts.
  • The Symmetric API is to be used if the encoding/decoding/hashing doesn't require specialised logic.
  • The Asymmetric API is to be used if encoding or decoding or hashing requires special casing.

If some type you need is missing, please open an issue, so it can be added.

| Type | Size API | Symmetric API | Asymmetric Encoding | Asymmetric Decoding | Asymmetric Hashing |
|------|----------|---------------|---------------------|---------------------|--------------------|
| bool | 1 byte | DefineBool | EncodeBool | DecodeBool | HashBool |
| uint8 | 1 byte | DefineUint8 | EncodeUint8 | DecodeUint8 | HashUint8 |
| uint16 | 2 bytes | DefineUint16 | EncodeUint16 | DecodeUint16 | HashUint16 |
| uint32 | 4 bytes | DefineUint32 | EncodeUint32 | DecodeUint32 | HashUint32 |
| uint64 | 8 bytes | DefineUint64 | EncodeUint64 | DecodeUint64 | HashUint64 |
| [N]byte as bitvector[N] | N bytes | DefineArrayOfBits | EncodeArrayOfBits | DecodeArrayOfBits | HashArrayOfBits |
| bitfield.Bitlist² | SizeSliceOfBits | DefineSliceOfBitsOffset, DefineSliceOfBitsContent | EncodeSliceOfBitsOffset, EncodeSliceOfBitsContent | DecodeSliceOfBitsOffset, DecodeSliceOfBitsContent | HashSliceOfBits |
| [N]uint64 | N * 8 bytes | DefineArrayOfUint64s | EncodeArrayOfUint64s | DecodeArrayOfUint64s | HashArrayOfUint64s |
| []uint64 | SizeSliceOfUint64s | DefineSliceOfUint64sOffset, DefineSliceOfUint64sContent | EncodeSliceOfUint64sOffset, EncodeSliceOfUint64sContent | DecodeSliceOfUint64sOffset, DecodeSliceOfUint64sContent | HashSliceOfUint64s |
| *uint256.Int¹ | 32 bytes | DefineUint256 | EncodeUint256 | DecodeUint256 | HashUint256 |
| *big.Int as uint256 | 32 bytes | DefineUint256BigInt | EncodeUint256BigInt | DecodeUint256BigInt | HashUint256BigInt |
| [N]byte | N bytes | DefineStaticBytes | EncodeStaticBytes | DecodeStaticBytes | HashStaticBytes |
| [N]byte in []byte | N bytes | DefineCheckedStaticBytes | EncodeCheckedStaticBytes | DecodeCheckedStaticBytes | HashCheckedStaticBytes |
| []byte | SizeDynamicBytes | DefineDynamicBytesOffset, DefineDynamicBytesContent | EncodeDynamicBytesOffset, EncodeDynamicBytesContent | DecodeDynamicBytesOffset, DecodeDynamicBytesContent | HashDynamicBytes |
| [M][N]byte | M * N bytes | DefineArrayOfStaticBytes | EncodeArrayOfStaticBytes | DecodeArrayOfStaticBytes | HashArrayOfStaticBytes |
| [M][N]byte in [][N]byte | M * N bytes | DefineCheckedArrayOfStaticBytes | EncodeCheckedArrayOfStaticBytes | DecodeCheckedArrayOfStaticBytes | HashCheckedArrayOfStaticBytes |
| [][N]byte | SizeSliceOfStaticBytes | DefineSliceOfStaticBytesOffset, DefineSliceOfStaticBytesContent | EncodeSliceOfStaticBytesOffset, EncodeSliceOfStaticBytesContent | DecodeSliceOfStaticBytesOffset, DecodeSliceOfStaticBytesContent | HashSliceOfStaticBytes |
| [][]byte | SizeSliceOfDynamicBytes | DefineSliceOfDynamicBytesOffset, DefineSliceOfDynamicBytesContent | EncodeSliceOfDynamicBytesOffset, EncodeSliceOfDynamicBytesContent | DecodeSliceOfDynamicBytesOffset, DecodeSliceOfDynamicBytesContent | HashSliceOfDynamicBytes |
| ssz.StaticObject | Object(nil).SizeSSZ() | DefineStaticObject | EncodeStaticObject | DecodeStaticObject | HashStaticObject |
| []ssz.StaticObject | SizeSliceOfStaticObjects | DefineSliceOfStaticObjectsOffset, DefineSliceOfStaticObjectsContent | EncodeSliceOfStaticObjectsOffset, EncodeSliceOfStaticObjectsContent | DecodeSliceOfStaticObjectsOffset, DecodeSliceOfStaticObjectsContent | HashSliceOfStaticObjects |
| ssz.DynamicObject | SizeDynamicObject | DefineDynamicObjectOffset, DefineDynamicObjectContent | EncodeDynamicObjectOffset, EncodeDynamicObjectContent | DecodeDynamicObjectOffset, DecodeDynamicObjectContent | HashDynamicObject |
| []ssz.DynamicObject | SizeSliceOfDynamicObjects | DefineSliceOfDynamicObjectsOffset, DefineSliceOfDynamicObjectsContent | EncodeSliceOfDynamicObjectsOffset, EncodeSliceOfDynamicObjectsContent | DecodeSliceOfDynamicObjectsOffset, DecodeSliceOfDynamicObjectsContent | HashSliceOfDynamicObjects |

¹Type is from github.com/holiman/uint256.
²Type is from github.com/prysmaticlabs/go-bitfield.

Performance

The goal of this package is to be close in performance to low level generated encoders, without sacrificing maintainability. It should, however, be significantly faster than runtime reflection encoders.

The package includes a set of benchmarks for handling the beacon spec types and test datasets. You can run them with go test ./tests --bench=.. These can be interesting for some baseline numbers, but they are unrealistic with regard to live beacon state data.

If you want to see the performance on a more realistic piece of data, you'll need to provide a beacon state SSZ object and place it in the project root, named state.ssz. You can then run go test --bench=Mainnet ./tests/manual_test.go to explicitly run this one benchmark. A sample output running against a 208MB state export from around June 11, 2024, on a MacBook Pro M2 Max:

go test --bench=Mainnet ./tests/manual_test.go

BenchmarkMainnetState/beacon-state/208757379-bytes/encode-12         	      26	  45164494 ns/op	4622.16 MB/s	      74 B/op	       0 allocs/op
BenchmarkMainnetState/beacon-state/208757379-bytes/decode-12         	      27	  40984980 ns/op	5093.51 MB/s	 8456490 B/op	   54910 allocs/op
BenchmarkMainnetState/beacon-state/208757379-bytes/merkleize-sequential-12     2	 659472250 ns/op	 316.55 MB/s	     904 B/op	       1 allocs/op
BenchmarkMainnetState/beacon-state/208757379-bytes/merkleize-concurrent-12     9	 113414449 ns/op	1840.66 MB/s	   16416 B/op	     108 allocs/op

Documentation

Overview

Package ssz is a simplified SSZ encoder/decoder.

Constants

This section is empty.

Variables

var ErrBadCounterOffset = errors.New("ssz: counter offset not multiple of 4-bytes")

ErrBadCounterOffset is returned when a list of offsets is consumed and the first offset is not a multiple of 4 bytes.

var ErrBadOffsetProgression = errors.New("ssz: offset smaller than previous")

ErrBadOffsetProgression is returned when an offset is parsed, and is smaller than a previously seen offset (meaning negative dynamic data size).

var ErrBufferTooSmall = errors.New("ssz: output buffer too small")

ErrBufferTooSmall is returned from encoding if the provided output byte buffer is too small to hold the encoding of the object.

var ErrDynamicStaticsIndivisible = errors.New("ssz: list of fixed objects not divisible")

ErrDynamicStaticsIndivisible is returned when a list of static objects is to be decoded, but the list's total length is not divisible by the item size.

var ErrFirstOffsetMismatch = errors.New("ssz: first offset mismatch")

ErrFirstOffsetMismatch is returned when parsing dynamic types and the first offset (which is supposed to signal the start of the dynamic area) does not match with the computed fixed area size.

var ErrInvalidBoolean = errors.New("ssz: invalid boolean")

ErrInvalidBoolean is returned from decoding if a boolean slot contains some other byte than 0x00 or 0x01.

var ErrJunkInBitlist = errors.New("ssz: junk in bitlist unused bits")

ErrJunkInBitlist is returned from decoding if the high (unused) bits of a bitlist contain junk, instead of being all 0.

var ErrJunkInBitvector = errors.New("ssz: junk in bitvector unused bits")

ErrJunkInBitvector is returned from decoding if the high (unused) bits of a bitvector contain junk, instead of being all 0.

var ErrMaxItemsExceeded = errors.New("ssz: maximum item count exceeded")

ErrMaxItemsExceeded is returned when the number of items in a dynamic list type is larger than permitted.

var ErrMaxLengthExceeded = errors.New("ssz: maximum item size exceeded")

ErrMaxLengthExceeded is returned when the size calculated for a dynamic type is larger than permitted.

var ErrObjectSlotSizeMismatch = errors.New("ssz: object didn't consume all designated data")

ErrObjectSlotSizeMismatch is returned from decoding if an object's slot in the ssz stream contains more data than the object cares to consume.

var ErrOffsetBeyondCapacity = errors.New("ssz: offset beyond capacity")

ErrOffsetBeyondCapacity is returned when an offset is parsed, and is larger than the total capacity allowed by the decoder (i.e. message size)

var ErrShortCounterOffset = errors.New("ssz: insufficient data for 4-byte counter offset")

ErrShortCounterOffset is returned if a counter offset is attempted to be read but there are fewer bytes available on the stream.

var ErrZeroCounterOffset = errors.New("ssz: counter offset zero")

ErrZeroCounterOffset is returned when a list of offsets is consumed and the first offset is zero, which means the list should not have existed.

var ForkMapping = map[string]Fork{
	"unknown":        ForkUnknown,
	"frontier":       ForkFrontier,
	"homestead":      ForkHomestead,
	"dao":            ForkDAO,
	"tangerine":      ForkTangerine,
	"spurious":       ForkSpurious,
	"byzantium":      ForkByzantium,
	"constantinople": ForkConstantinople,
	"istanbul":       ForkIstanbul,
	"muir":           ForkMuir,
	"phase0":         ForkPhase0,
	"berlin":         ForkBerlin,
	"london":         ForkLondon,
	"altair":         ForkAltair,
	"arrow":          ForkArrow,
	"gray":           ForkGray,
	"bellatrix":      ForkBellatrix,
	"paris":          ForkParis,
	"merge":          ForkMerge,
	"shapella":       ForkShapella,
	"shanghai":       ForkShanghai,
	"capella":        ForkCapella,
	"dencun":         ForkDencun,
	"cancun":         ForkCancun,
	"deneb":          ForkDeneb,
	"pectra":         ForkPectra,
	"prague":         ForkPrague,
	"electra":        ForkElectra,
	"future":         ForkFuture,
}

ForkMapping maps fork names to fork values. This is used internally by the ssz codec generator to convert tags to values.
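
A minimal sketch of resolving a fork name at runtime, mirroring what the codec generator does with struct tags (the "deneb" input is arbitrary):

package main

import (
	"fmt"

	"github.com/karalabe/ssz"
)

func main() {
	// Resolve a textual fork name to its Fork value; unknown names are
	// simply absent from the map.
	fork, ok := ssz.ForkMapping["deneb"]
	if !ok {
		panic("unknown fork name")
	}
	fmt.Println("resolved fork:", fork)
}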

Functions

func DecodeArrayOfBits

func DecodeArrayOfBits[T commonBitsLengths](dec *Decoder, bits *T, size uint64)

DecodeArrayOfBits parses a static array of (packed) bits.

func DecodeArrayOfBitsPointerOnFork added in v0.3.0

func DecodeArrayOfBitsPointerOnFork[T commonBitsLengths](dec *Decoder, bits **T, size uint64, filter ForkFilter)

DecodeArrayOfBitsPointerOnFork parses a static array of (packed) bits if present in a fork. If not, the bit array pointer is set to nil.

func DecodeArrayOfStaticBytes

func DecodeArrayOfStaticBytes[T commonBytesArrayLengths[U], U commonBytesLengths](dec *Decoder, blobs *T)

DecodeArrayOfStaticBytes parses a static array of static binary blobs.

func DecodeArrayOfUint64s

func DecodeArrayOfUint64s[T commonUint64sLengths](dec *Decoder, ns *T)

DecodeArrayOfUint64s parses a static array of uint64s.

func DecodeArrayOfUint64sPointerOnFork added in v0.3.0

func DecodeArrayOfUint64sPointerOnFork[T commonUint64sLengths](dec *Decoder, ns **T, filter ForkFilter)

DecodeArrayOfUint64sPointerOnFork parses a static array of uint64s if present in a fork. If not, the uint64 array pointer is set to nil.

func DecodeBool

func DecodeBool[T ~bool](dec *Decoder, v *T)

DecodeBool parses a boolean.

func DecodeBoolPointerOnFork added in v0.3.0

func DecodeBoolPointerOnFork[T ~bool](dec *Decoder, v **T, filter ForkFilter)

DecodeBoolPointerOnFork parses a boolean if present in a fork. If not, the boolean pointer is set to nil.

This method is similar to DecodeBool, but will also initialize the pointer if it is not allocated yet.

func DecodeCheckedArrayOfStaticBytes

func DecodeCheckedArrayOfStaticBytes[T commonBytesLengths](dec *Decoder, blobs *[]T, size uint64)

DecodeCheckedArrayOfStaticBytes parses a static array of static binary blobs.

func DecodeCheckedStaticBytes

func DecodeCheckedStaticBytes(dec *Decoder, blob *[]byte, size uint64)

DecodeCheckedStaticBytes parses a static binary blob.

func DecodeDynamicBytesContent

func DecodeDynamicBytesContent(dec *Decoder, blob *[]byte, maxSize uint64)

DecodeDynamicBytesContent is the lazy data reader of DecodeDynamicBytesOffset.

func DecodeDynamicBytesContentOnFork added in v0.3.0

func DecodeDynamicBytesContentOnFork(dec *Decoder, blob *[]byte, maxSize uint64, filter ForkFilter)

DecodeDynamicBytesContentOnFork is the lazy data reader of DecodeDynamicBytesOffsetOnFork.

func DecodeDynamicBytesOffset

func DecodeDynamicBytesOffset(dec *Decoder, blob *[]byte)

DecodeDynamicBytesOffset parses the offset of a dynamic binary blob.
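
For custom decoders, the offset and content calls pair up as in the hedged sketch below: offsets are consumed while walking the fixed area, contents afterwards, in the same field order. The Obj type, its fields, and the 1024-byte cap are hypothetical; a real asymmetric type would also define its encoder and hasher paths.

type Obj struct {
	Nonce uint64
	Data  []byte
}

func (o *Obj) DefineSSZ(codec *ssz.Codec) {
	codec.DefineDecoder(func(dec *ssz.Decoder) {
		ssz.DecodeUint64(dec, &o.Nonce)                   // Fixed field
		ssz.DecodeDynamicBytesOffset(dec, &o.Data)        // 4-byte offset in the fixed area
		ssz.DecodeDynamicBytesContent(dec, &o.Data, 1024) // Lazy content read from the dynamic area
	})
}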

func DecodeDynamicBytesOffsetOnFork added in v0.3.0

func DecodeDynamicBytesOffsetOnFork(dec *Decoder, blob *[]byte, filter ForkFilter)

DecodeDynamicBytesOffsetOnFork parses the offset of a dynamic binary blob if present in a fork.

func DecodeDynamicObjectContent

func DecodeDynamicObjectContent[T newableDynamicObject[U], U any](dec *Decoder, obj *T)

DecodeDynamicObjectContent is the lazy data reader of DecodeDynamicObjectOffset.

func DecodeDynamicObjectContentOnFork added in v0.3.0

func DecodeDynamicObjectContentOnFork[T newableDynamicObject[U], U any](dec *Decoder, obj *T, filter ForkFilter)

DecodeDynamicObjectContentOnFork is the lazy data reader of DecodeDynamicObjectOffsetOnFork.

func DecodeDynamicObjectOffset

func DecodeDynamicObjectOffset[T newableDynamicObject[U], U any](dec *Decoder, obj *T)

DecodeDynamicObjectOffset parses a dynamic ssz object.

func DecodeDynamicObjectOffsetOnFork added in v0.3.0

func DecodeDynamicObjectOffsetOnFork[T newableDynamicObject[U], U any](dec *Decoder, obj *T, filter ForkFilter)

DecodeDynamicObjectOffsetOnFork parses a dynamic ssz object if present in a fork.

func DecodeFromBytes

func DecodeFromBytes(blob []byte, obj Object) error

DecodeFromBytes parses a non-monolithic object from a byte buffer. If the type contains fork-specific rules, use DecodeFromBytesOnFork.

Do not use this method if you want to first read the buffer from a stream via some reader, as that would double the memory use for the temporary buffer. For that use case, use DecodeFromStream instead.
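
A minimal decode sketch, reusing the Withdrawal type from the package example further below (the 44 zero bytes match that type's static encoding; imports assumed):

func decodeWithdrawal(blob []byte) (*Withdrawal, error) {
	obj := new(Withdrawal)
	if err := ssz.DecodeFromBytes(blob, obj); err != nil {
		return nil, err
	}
	return obj, nil
}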

func DecodeFromBytesOnFork added in v0.3.0

func DecodeFromBytesOnFork(blob []byte, obj Object, fork Fork) error

DecodeFromBytesOnFork parses a monolithic object from a byte buffer. If the type does not contain fork-specific rules, you can also use DecodeFromBytes.

Do not use this method if you want to first read the buffer from a stream via some reader, as that would double the memory use for the temporary buffer. For that use case, use DecodeFromStreamOnFork instead.

func DecodeFromStream

func DecodeFromStream(r io.Reader, obj Object, size uint32) error

DecodeFromStream parses a non-monolithic object with the given size out of a stream. If the type contains fork-specific rules, use DecodeFromStreamOnFork.

Do not use this method with a bytes.Buffer to read from a []byte slice, as that will double the byte copying. For that use case, use DecodeFromBytes.
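
A hedged stream-decoding sketch, again borrowing the Withdrawal type from the package example further below; the caller must know the SSZ size up front (e.g. from a message framing layer):

func readWithdrawal(r io.Reader) (*Withdrawal, error) {
	obj := new(Withdrawal)
	// 44 bytes is the Withdrawal's static size.
	if err := ssz.DecodeFromStream(r, obj, 44); err != nil {
		return nil, err
	}
	return obj, nil
}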

func DecodeFromStreamOnFork added in v0.3.0

func DecodeFromStreamOnFork(r io.Reader, obj Object, size uint32, fork Fork) error

DecodeFromStreamOnFork parses a monolithic object with the given size out of a stream. If the type does not contain fork-specific rules, you can also use DecodeFromStream.

Do not use this method with a bytes.Buffer to read from a []byte slice, as that will double the byte copying. For that use case, use DecodeFromBytesOnFork.

func DecodeSliceOfBitsContent

func DecodeSliceOfBitsContent(dec *Decoder, bitlist *bitfield.Bitlist, maxBits uint64)

DecodeSliceOfBitsContent is the lazy data reader of DecodeSliceOfBitsOffset.

func DecodeSliceOfBitsContentOnFork added in v0.3.0

func DecodeSliceOfBitsContentOnFork(dec *Decoder, bitlist *bitfield.Bitlist, maxBits uint64, filter ForkFilter)

DecodeSliceOfBitsContentOnFork is the lazy data reader of DecodeSliceOfBitsOffsetOnFork.

func DecodeSliceOfBitsOffset

func DecodeSliceOfBitsOffset(dec *Decoder, bitlist *bitfield.Bitlist)

DecodeSliceOfBitsOffset parses a dynamic slice of (packed) bits.

func DecodeSliceOfBitsOffsetOnFork added in v0.3.0

func DecodeSliceOfBitsOffsetOnFork(dec *Decoder, bitlist *bitfield.Bitlist, filter ForkFilter)

DecodeSliceOfBitsOffsetOnFork parses a dynamic slice of (packed) bits if present in a fork.

func DecodeSliceOfDynamicBytesContent

func DecodeSliceOfDynamicBytesContent(dec *Decoder, blobs *[][]byte, maxItems uint64, maxSize uint64)

DecodeSliceOfDynamicBytesContent is the lazy data reader of DecodeSliceOfDynamicBytesOffset.

func DecodeSliceOfDynamicBytesContentOnFork added in v0.3.0

func DecodeSliceOfDynamicBytesContentOnFork(dec *Decoder, blobs *[][]byte, maxItems uint64, maxSize uint64, filter ForkFilter)

DecodeSliceOfDynamicBytesContentOnFork is the lazy data reader of DecodeSliceOfDynamicBytesOffsetOnFork.

func DecodeSliceOfDynamicBytesOffset

func DecodeSliceOfDynamicBytesOffset(dec *Decoder, blobs *[][]byte)

DecodeSliceOfDynamicBytesOffset parses a dynamic slice of dynamic binary blobs.

func DecodeSliceOfDynamicBytesOffsetOnFork added in v0.3.0

func DecodeSliceOfDynamicBytesOffsetOnFork(dec *Decoder, blobs *[][]byte, filter ForkFilter)

DecodeSliceOfDynamicBytesOffsetOnFork parses a dynamic slice of dynamic binary blobs if present in a fork.

func DecodeSliceOfDynamicObjectsContent

func DecodeSliceOfDynamicObjectsContent[T newableDynamicObject[U], U any](dec *Decoder, objects *[]T, maxItems uint64)

DecodeSliceOfDynamicObjectsContent is the lazy data reader of DecodeSliceOfDynamicObjectsOffset.

func DecodeSliceOfDynamicObjectsContentOnFork added in v0.3.0

func DecodeSliceOfDynamicObjectsContentOnFork[T newableDynamicObject[U], U any](dec *Decoder, objects *[]T, maxItems uint64, filter ForkFilter)

DecodeSliceOfDynamicObjectsContentOnFork is the lazy data reader of DecodeSliceOfDynamicObjectsOffsetOnFork.

func DecodeSliceOfDynamicObjectsOffset

func DecodeSliceOfDynamicObjectsOffset[T newableDynamicObject[U], U any](dec *Decoder, objects *[]T)

DecodeSliceOfDynamicObjectsOffset parses a dynamic slice of dynamic ssz objects.

func DecodeSliceOfDynamicObjectsOffsetOnFork added in v0.3.0

func DecodeSliceOfDynamicObjectsOffsetOnFork[T newableDynamicObject[U], U any](dec *Decoder, objects *[]T, filter ForkFilter)

DecodeSliceOfDynamicObjectsOffsetOnFork parses a dynamic slice of dynamic ssz objects if present in a fork.

func DecodeSliceOfStaticBytesContent

func DecodeSliceOfStaticBytesContent[T commonBytesLengths](dec *Decoder, blobs *[]T, maxItems uint64)

DecodeSliceOfStaticBytesContent is the lazy data reader of DecodeSliceOfStaticBytesOffset.

func DecodeSliceOfStaticBytesContentOnFork added in v0.3.0

func DecodeSliceOfStaticBytesContentOnFork[T commonBytesLengths](dec *Decoder, blobs *[]T, maxItems uint64, filter ForkFilter)

DecodeSliceOfStaticBytesContentOnFork is the lazy data reader of DecodeSliceOfStaticBytesOffsetOnFork.

func DecodeSliceOfStaticBytesOffset

func DecodeSliceOfStaticBytesOffset[T commonBytesLengths](dec *Decoder, blobs *[]T)

DecodeSliceOfStaticBytesOffset parses a dynamic slice of static binary blobs.

func DecodeSliceOfStaticBytesOffsetOnFork added in v0.3.0

func DecodeSliceOfStaticBytesOffsetOnFork[T commonBytesLengths](dec *Decoder, blobs *[]T, filter ForkFilter)

DecodeSliceOfStaticBytesOffsetOnFork parses a dynamic slice of static binary blobs if present in a fork.

func DecodeSliceOfStaticObjectsContent

func DecodeSliceOfStaticObjectsContent[T newableStaticObject[U], U any](dec *Decoder, objects *[]T, maxItems uint64)

DecodeSliceOfStaticObjectsContent is the lazy data reader of DecodeSliceOfStaticObjectsOffset.

func DecodeSliceOfStaticObjectsContentOnFork added in v0.3.0

func DecodeSliceOfStaticObjectsContentOnFork[T newableStaticObject[U], U any](dec *Decoder, objects *[]T, maxItems uint64, filter ForkFilter)

DecodeSliceOfStaticObjectsContentOnFork is the lazy data reader of DecodeSliceOfStaticObjectsOffsetOnFork.

func DecodeSliceOfStaticObjectsOffset

func DecodeSliceOfStaticObjectsOffset[T newableStaticObject[U], U any](dec *Decoder, objects *[]T)

DecodeSliceOfStaticObjectsOffset parses a dynamic slice of static ssz objects.

func DecodeSliceOfStaticObjectsOffsetOnFork added in v0.3.0

func DecodeSliceOfStaticObjectsOffsetOnFork[T newableStaticObject[U], U any](dec *Decoder, objects *[]T, filter ForkFilter)

DecodeSliceOfStaticObjectsOffsetOnFork parses a dynamic slice of static ssz objects if present in a fork.

func DecodeSliceOfUint64sContent

func DecodeSliceOfUint64sContent[T ~uint64](dec *Decoder, ns *[]T, maxItems uint64)

DecodeSliceOfUint64sContent is the lazy data reader of DecodeSliceOfUint64sOffset.

func DecodeSliceOfUint64sContentOnFork added in v0.3.0

func DecodeSliceOfUint64sContentOnFork[T ~uint64](dec *Decoder, ns *[]T, maxItems uint64, filter ForkFilter)

DecodeSliceOfUint64sContentOnFork is the lazy data reader of DecodeSliceOfUint64sOffsetOnFork.

func DecodeSliceOfUint64sOffset

func DecodeSliceOfUint64sOffset[T ~uint64](dec *Decoder, ns *[]T)

DecodeSliceOfUint64sOffset parses a dynamic slice of uint64s.

func DecodeSliceOfUint64sOffsetOnFork added in v0.3.0

func DecodeSliceOfUint64sOffsetOnFork[T ~uint64](dec *Decoder, ns *[]T, filter ForkFilter)

DecodeSliceOfUint64sOffsetOnFork parses a dynamic slice of uint64s if present in a fork.

func DecodeStaticBytes

func DecodeStaticBytes[T commonBytesLengths](dec *Decoder, blob *T)

DecodeStaticBytes parses a static binary blob.

func DecodeStaticBytesPointerOnFork added in v0.3.0

func DecodeStaticBytesPointerOnFork[T commonBytesLengths](dec *Decoder, blob **T, filter ForkFilter)

DecodeStaticBytesPointerOnFork parses a static binary blob if present in a fork. If not, the blob pointer is set to nil.

func DecodeStaticObject

func DecodeStaticObject[T newableStaticObject[U], U any](dec *Decoder, obj *T)

DecodeStaticObject parses a static ssz object.

func DecodeStaticObjectOnFork added in v0.3.0

func DecodeStaticObjectOnFork[T newableStaticObject[U], U any](dec *Decoder, obj *T, filter ForkFilter)

DecodeStaticObjectOnFork parses a static ssz object if present in a fork.

func DecodeUint16 added in v0.3.0

func DecodeUint16[T ~uint16](dec *Decoder, n *T)

DecodeUint16 parses a uint16.

func DecodeUint16PointerOnFork added in v0.3.0

func DecodeUint16PointerOnFork[T ~uint16](dec *Decoder, n **T, filter ForkFilter)

DecodeUint16PointerOnFork parses a uint16 if present in a fork. If not, the uint16 pointer is set to nil.

This method is similar to DecodeUint16, but will also initialize the pointer if it is not allocated yet.

func DecodeUint256

func DecodeUint256(dec *Decoder, n **uint256.Int)

DecodeUint256 parses a uint256.

func DecodeUint256BigInt added in v0.3.0

func DecodeUint256BigInt(dec *Decoder, n **big.Int)

DecodeUint256BigInt parses a uint256 into a big.Int.

func DecodeUint256BigIntOnFork added in v0.3.0

func DecodeUint256BigIntOnFork(dec *Decoder, n **big.Int, filter ForkFilter)

DecodeUint256BigIntOnFork parses a uint256 into a big.Int if present in a fork.

func DecodeUint256OnFork added in v0.3.0

func DecodeUint256OnFork(dec *Decoder, n **uint256.Int, filter ForkFilter)

DecodeUint256OnFork parses a uint256 if present in a fork.

func DecodeUint32 added in v0.3.0

func DecodeUint32[T ~uint32](dec *Decoder, n *T)

DecodeUint32 parses a uint32.

func DecodeUint32PointerOnFork added in v0.3.0

func DecodeUint32PointerOnFork[T ~uint32](dec *Decoder, n **T, filter ForkFilter)

DecodeUint32PointerOnFork parses a uint32 if present in a fork. If not, the uint32 pointer is set to nil.

This method is similar to DecodeUint32, but will also initialize the pointer if it is not allocated yet.

func DecodeUint64

func DecodeUint64[T ~uint64](dec *Decoder, n *T)

DecodeUint64 parses a uint64.

func DecodeUint64PointerOnFork added in v0.3.0

func DecodeUint64PointerOnFork[T ~uint64](dec *Decoder, n **T, filter ForkFilter)

DecodeUint64PointerOnFork parses a uint64 if present in a fork. If not, the uint64 pointer is set to nil.

This method is similar to DecodeUint64, but will also initialize the pointer if it is not allocated yet.

func DecodeUint8 added in v0.3.0

func DecodeUint8[T ~uint8](dec *Decoder, n *T)

DecodeUint8 parses a uint8.

func DecodeUint8PointerOnFork added in v0.3.0

func DecodeUint8PointerOnFork[T ~uint8](dec *Decoder, n **T, filter ForkFilter)

DecodeUint8PointerOnFork parses a uint8 if present in a fork. If not, the uint8 pointer is set to nil.

This method is similar to DecodeUint8, but will also initialize the pointer if it is not allocated yet.

func DecodeUnsafeArrayOfStaticBytes

func DecodeUnsafeArrayOfStaticBytes[T commonBytesLengths](dec *Decoder, blobs []T)

DecodeUnsafeArrayOfStaticBytes parses a static array of static binary blobs.

func DefineArrayOfBits

func DefineArrayOfBits[T commonBitsLengths](c *Codec, bits *T, size uint64)

DefineArrayOfBits defines the next field as a static array of (packed) bits.

func DefineArrayOfBitsPointerOnFork added in v0.3.0

func DefineArrayOfBitsPointerOnFork[T commonBitsLengths](c *Codec, bits **T, size uint64, filter ForkFilter)

DefineArrayOfBitsPointerOnFork defines the next field as a static array of (packed) bits if present in a fork.

func DefineArrayOfStaticBytes

func DefineArrayOfStaticBytes[T commonBytesArrayLengths[U], U commonBytesLengths](c *Codec, blobs *T)

DefineArrayOfStaticBytes defines the next field as a static array of static binary blobs.

func DefineArrayOfUint64s

func DefineArrayOfUint64s[T commonUint64sLengths](c *Codec, ns *T)

DefineArrayOfUint64s defines the next field as a static array of uint64s.

func DefineArrayOfUint64sPointerOnFork added in v0.3.0

func DefineArrayOfUint64sPointerOnFork[T commonUint64sLengths](c *Codec, ns **T, filter ForkFilter)

DefineArrayOfUint64sPointerOnFork defines the next field as a static array of uint64s if present in a fork.

func DefineBool

func DefineBool[T ~bool](c *Codec, v *T)

DefineBool defines the next field as a 1 byte boolean.

func DefineBoolPointerOnFork added in v0.3.0

func DefineBoolPointerOnFork[T ~bool](c *Codec, v **T, filter ForkFilter)

DefineBoolPointerOnFork defines the next field as a 1 byte boolean if present in a fork.

func DefineCheckedArrayOfStaticBytes

func DefineCheckedArrayOfStaticBytes[T commonBytesLengths](c *Codec, blobs *[]T, size uint64)

DefineCheckedArrayOfStaticBytes defines the next field as a static array of static binary blobs. This method can be used for plain slices of byte arrays, which is more expensive since it needs runtime size validation.

func DefineCheckedStaticBytes

func DefineCheckedStaticBytes(c *Codec, blob *[]byte, size uint64)

DefineCheckedStaticBytes defines the next field as static binary blob. This method can be used for plain byte slices, which is more expensive, since it needs runtime size validation.

func DefineDynamicBytesContent

func DefineDynamicBytesContent(c *Codec, blob *[]byte, maxSize uint64)

DefineDynamicBytesContent defines the next field as dynamic binary blob.

func DefineDynamicBytesContentOnFork added in v0.3.0

func DefineDynamicBytesContentOnFork(c *Codec, blob *[]byte, maxSize uint64, filter ForkFilter)

DefineDynamicBytesContentOnFork defines the next field as dynamic binary blob if present in a fork.

func DefineDynamicBytesOffset

func DefineDynamicBytesOffset(c *Codec, blob *[]byte, maxSize uint64)

DefineDynamicBytesOffset defines the next field as dynamic binary blob.

func DefineDynamicBytesOffsetOnFork added in v0.3.0

func DefineDynamicBytesOffsetOnFork(c *Codec, blob *[]byte, maxSize uint64, filter ForkFilter)

DefineDynamicBytesOffsetOnFork defines the next field as dynamic binary blob if present in a fork.
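
The Offset/Content split mirrors SSZ's wire layout: a dynamic field contributes a 4-byte offset to the fixed area and its payload to the dynamic area, so DefineSSZ names it twice. A hedged sketch of a dynamic type, assuming the two-argument SizeSSZ(siz, fixed) form that dynamic objects implement; the Blob type and its 1024-byte cap are hypothetical:

type Blob struct {
	Nonce uint64
	Data  []byte
}

func (b *Blob) SizeSSZ(siz *ssz.Sizer, fixed bool) uint32 {
	size := uint32(8 + 4) // 8-byte nonce + 4-byte offset for Data
	if !fixed {
		size += ssz.SizeDynamicBytes(siz, b.Data)
	}
	return size
}

func (b *Blob) DefineSSZ(codec *ssz.Codec) {
	ssz.DefineUint64(codec, &b.Nonce)                   // Field  (0) - Nonce -  8 bytes
	ssz.DefineDynamicBytesOffset(codec, &b.Data, 1024)  // Offset (1) - Data  -  4 bytes
	ssz.DefineDynamicBytesContent(codec, &b.Data, 1024) // Field  (1) - Data  - dynamic
}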

func DefineDynamicObjectContent

func DefineDynamicObjectContent[T newableDynamicObject[U], U any](c *Codec, obj *T)

DefineDynamicObjectContent defines the next field as a dynamic ssz object.

func DefineDynamicObjectContentOnFork added in v0.3.0

func DefineDynamicObjectContentOnFork[T newableDynamicObject[U], U any](c *Codec, obj *T, filter ForkFilter)

DefineDynamicObjectContentOnFork defines the next field as a dynamic ssz object if present in a fork.

func DefineDynamicObjectOffset

func DefineDynamicObjectOffset[T newableDynamicObject[U], U any](c *Codec, obj *T)

DefineDynamicObjectOffset defines the next field as a dynamic ssz object.

func DefineDynamicObjectOffsetOnFork added in v0.3.0

func DefineDynamicObjectOffsetOnFork[T newableDynamicObject[U], U any](c *Codec, obj *T, filter ForkFilter)

DefineDynamicObjectOffsetOnFork defines the next field as a dynamic ssz object if present in a fork.

func DefineSliceOfBitsContent

func DefineSliceOfBitsContent(c *Codec, bits *bitfield.Bitlist, maxBits uint64)

DefineSliceOfBitsContent defines the next field as a dynamic slice of (packed) bits.

func DefineSliceOfBitsContentOnFork added in v0.3.0

func DefineSliceOfBitsContentOnFork(c *Codec, bits *bitfield.Bitlist, maxBits uint64, filter ForkFilter)

DefineSliceOfBitsContentOnFork defines the next field as a dynamic slice of (packed) bits if present in a fork.

func DefineSliceOfBitsOffset

func DefineSliceOfBitsOffset(c *Codec, bits *bitfield.Bitlist, maxBits uint64)

DefineSliceOfBitsOffset defines the next field as a dynamic slice of (packed) bits.

func DefineSliceOfBitsOffsetOnFork added in v0.3.0

func DefineSliceOfBitsOffsetOnFork(c *Codec, bits *bitfield.Bitlist, maxBits uint64, filter ForkFilter)

DefineSliceOfBitsOffsetOnFork defines the next field as a dynamic slice of (packed) bits if present in a fork.

func DefineSliceOfDynamicBytesContent

func DefineSliceOfDynamicBytesContent(c *Codec, blobs *[][]byte, maxItems uint64, maxSize uint64)

DefineSliceOfDynamicBytesContent defines the next field as a dynamic slice of dynamic binary blobs.

func DefineSliceOfDynamicBytesContentOnFork added in v0.3.0

func DefineSliceOfDynamicBytesContentOnFork(c *Codec, blobs *[][]byte, maxItems uint64, maxSize uint64, filter ForkFilter)

DefineSliceOfDynamicBytesContentOnFork defines the next field as a dynamic slice of dynamic binary blobs if present in a fork.

func DefineSliceOfDynamicBytesOffset

func DefineSliceOfDynamicBytesOffset(c *Codec, blobs *[][]byte, maxItems uint64, maxSize uint64)

DefineSliceOfDynamicBytesOffset defines the next field as a dynamic slice of dynamic binary blobs.

func DefineSliceOfDynamicBytesOffsetOnFork added in v0.3.0

func DefineSliceOfDynamicBytesOffsetOnFork(c *Codec, blobs *[][]byte, maxItems uint64, maxSize uint64, filter ForkFilter)

DefineSliceOfDynamicBytesOffsetOnFork defines the next field as a dynamic slice of dynamic binary blobs if present in a fork.

func DefineSliceOfDynamicObjectsContent

func DefineSliceOfDynamicObjectsContent[T newableDynamicObject[U], U any](c *Codec, objects *[]T, maxItems uint64)

DefineSliceOfDynamicObjectsContent defines the next field as a dynamic slice of dynamic ssz objects.

func DefineSliceOfDynamicObjectsContentOnFork added in v0.3.0

func DefineSliceOfDynamicObjectsContentOnFork[T newableDynamicObject[U], U any](c *Codec, objects *[]T, maxItems uint64, filter ForkFilter)

DefineSliceOfDynamicObjectsContentOnFork defines the next field as a dynamic slice of dynamic ssz objects if present in a fork.

func DefineSliceOfDynamicObjectsOffset

func DefineSliceOfDynamicObjectsOffset[T newableDynamicObject[U], U any](c *Codec, objects *[]T, maxItems uint64)

DefineSliceOfDynamicObjectsOffset defines the next field as a dynamic slice of dynamic ssz objects.

func DefineSliceOfDynamicObjectsOffsetOnFork added in v0.3.0

func DefineSliceOfDynamicObjectsOffsetOnFork[T newableDynamicObject[U], U any](c *Codec, objects *[]T, maxItems uint64, filter ForkFilter)

DefineSliceOfDynamicObjectsOffsetOnFork defines the next field as a dynamic slice of dynamic ssz objects if present in a fork.

func DefineSliceOfStaticBytesContent

func DefineSliceOfStaticBytesContent[T commonBytesLengths](c *Codec, blobs *[]T, maxItems uint64)

DefineSliceOfStaticBytesContent defines the next field as a dynamic slice of static binary blobs.

func DefineSliceOfStaticBytesContentOnFork added in v0.3.0

func DefineSliceOfStaticBytesContentOnFork[T commonBytesLengths](c *Codec, blobs *[]T, maxItems uint64, filter ForkFilter)

DefineSliceOfStaticBytesContentOnFork defines the next field as a dynamic slice of static binary blobs if present in a fork.

func DefineSliceOfStaticBytesOffset

func DefineSliceOfStaticBytesOffset[T commonBytesLengths](c *Codec, bytes *[]T, maxItems uint64)

DefineSliceOfStaticBytesOffset defines the next field as a dynamic slice of static binary blobs.

func DefineSliceOfStaticBytesOffsetOnFork added in v0.3.0

func DefineSliceOfStaticBytesOffsetOnFork[T commonBytesLengths](c *Codec, bytes *[]T, maxItems uint64, filter ForkFilter)

DefineSliceOfStaticBytesOffsetOnFork defines the next field as a dynamic slice of static binary blobs if present in a fork.

func DefineSliceOfStaticObjectsContent

func DefineSliceOfStaticObjectsContent[T newableStaticObject[U], U any](c *Codec, objects *[]T, maxItems uint64)

DefineSliceOfStaticObjectsContent defines the next field as a dynamic slice of static ssz objects.

func DefineSliceOfStaticObjectsContentOnFork added in v0.3.0

func DefineSliceOfStaticObjectsContentOnFork[T newableStaticObject[U], U any](c *Codec, objects *[]T, maxItems uint64, filter ForkFilter)

DefineSliceOfStaticObjectsContentOnFork defines the next field as a dynamic slice of static ssz objects if present in a fork.

func DefineSliceOfStaticObjectsOffset

func DefineSliceOfStaticObjectsOffset[T newableStaticObject[U], U any](c *Codec, objects *[]T, maxItems uint64)

DefineSliceOfStaticObjectsOffset defines the next field as a dynamic slice of static ssz objects.

func DefineSliceOfStaticObjectsOffsetOnFork added in v0.3.0

func DefineSliceOfStaticObjectsOffsetOnFork[T newableStaticObject[U], U any](c *Codec, objects *[]T, maxItems uint64, filter ForkFilter)

DefineSliceOfStaticObjectsOffsetOnFork defines the next field as a dynamic slice of static ssz objects if present in a fork.

func DefineSliceOfUint64sContent

func DefineSliceOfUint64sContent[T ~uint64](c *Codec, ns *[]T, maxItems uint64)

DefineSliceOfUint64sContent defines the next field as a dynamic slice of uint64s.

func DefineSliceOfUint64sContentOnFork added in v0.3.0

func DefineSliceOfUint64sContentOnFork[T ~uint64](c *Codec, ns *[]T, maxItems uint64, filter ForkFilter)

DefineSliceOfUint64sContentOnFork defines the next field as a dynamic slice of uint64s if present in a fork.

func DefineSliceOfUint64sOffset

func DefineSliceOfUint64sOffset[T ~uint64](c *Codec, ns *[]T, maxItems uint64)

DefineSliceOfUint64sOffset defines the next field as a dynamic slice of uint64s.

func DefineSliceOfUint64sOffsetOnFork added in v0.3.0

func DefineSliceOfUint64sOffsetOnFork[T ~uint64](c *Codec, ns *[]T, maxItems uint64, filter ForkFilter)

DefineSliceOfUint64sOffsetOnFork defines the next field as a dynamic slice of uint64s if present in a fork.

func DefineStaticBytes

func DefineStaticBytes[T commonBytesLengths](c *Codec, blob *T)

DefineStaticBytes defines the next field as static binary blob. This method can be used for byte arrays.

func DefineStaticBytesPointerOnFork added in v0.3.0

func DefineStaticBytesPointerOnFork[T commonBytesLengths](c *Codec, blob **T, filter ForkFilter)

DefineStaticBytesPointerOnFork defines the next field as static binary blob if present in a fork. This method can be used for byte arrays.

func DefineStaticObject

func DefineStaticObject[T newableStaticObject[U], U any](c *Codec, obj *T)

DefineStaticObject defines the next field as a static ssz object.

func DefineStaticObjectOnFork added in v0.3.0

func DefineStaticObjectOnFork[T newableStaticObject[U], U any](c *Codec, obj *T, filter ForkFilter)

DefineStaticObjectOnFork defines the next field as a static ssz object if present in a fork.

func DefineUint16 added in v0.3.0

func DefineUint16[T ~uint16](c *Codec, n *T)

DefineUint16 defines the next field as a uint16.

func DefineUint16PointerOnFork added in v0.3.0

func DefineUint16PointerOnFork[T ~uint16](c *Codec, n **T, filter ForkFilter)

DefineUint16PointerOnFork defines the next field as a uint16 if present in a fork.

func DefineUint256

func DefineUint256(c *Codec, n **uint256.Int)

DefineUint256 defines the next field as a uint256.

func DefineUint256BigInt added in v0.3.0

func DefineUint256BigInt(c *Codec, n **big.Int)

DefineUint256BigInt defines the next field as a uint256, backed by a big.Int.

func DefineUint256BigIntOnFork added in v0.3.0

func DefineUint256BigIntOnFork(c *Codec, n **big.Int, filter ForkFilter)

DefineUint256BigIntOnFork defines the next field as a uint256 if present in a fork.

func DefineUint256OnFork added in v0.3.0

func DefineUint256OnFork(c *Codec, n **uint256.Int, filter ForkFilter)

DefineUint256OnFork defines the next field as a uint256 if present in a fork.

func DefineUint32 added in v0.3.0

func DefineUint32[T ~uint32](c *Codec, n *T)

DefineUint32 defines the next field as a uint32.

func DefineUint32PointerOnFork added in v0.3.0

func DefineUint32PointerOnFork[T ~uint32](c *Codec, n **T, filter ForkFilter)

DefineUint32PointerOnFork defines the next field as a uint32 if present in a fork.

func DefineUint64

func DefineUint64[T ~uint64](c *Codec, n *T)

DefineUint64 defines the next field as a uint64.

func DefineUint64PointerOnFork added in v0.3.0

func DefineUint64PointerOnFork[T ~uint64](c *Codec, n **T, filter ForkFilter)

DefineUint64PointerOnFork defines the next field as a uint64 if present in a fork.
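
A hedged sketch of a monolithic type with a field that only exists from a given fork onwards. The Header type is hypothetical, and the ForkFilter literal (an Added lower bound) is an assumption modeled on the codec generator's output; consult the ForkFilter type's docs for the exact fields:

type Header struct {
	Number        uint64
	ExcessBlobGas *uint64 // nil before the fork that introduced it
}

func (h *Header) DefineSSZ(codec *ssz.Codec) {
	ssz.DefineUint64(codec, &h.Number)
	// Assumption: ForkFilter selects all forks from Shanghai onwards via an
	// Added bound; the exact construction may differ.
	ssz.DefineUint64PointerOnFork(codec, &h.ExcessBlobGas, ssz.ForkFilter{Added: ssz.ForkShanghai})
}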

func DefineUint8 added in v0.3.0

func DefineUint8[T ~uint8](c *Codec, n *T)

DefineUint8 defines the next field as a uint8.

func DefineUint8PointerOnFork added in v0.3.0

func DefineUint8PointerOnFork[T ~uint8](c *Codec, n **T, filter ForkFilter)

DefineUint8PointerOnFork defines the next field as a uint8 if present in a fork.

func DefineUnsafeArrayOfStaticBytes

func DefineUnsafeArrayOfStaticBytes[T commonBytesLengths](c *Codec, blobs []T)

DefineUnsafeArrayOfStaticBytes defines the next field as a static array of static binary blobs. This method operates on plain slices of byte arrays and will crash if provided a slice of a non-array. Its purpose is to get around Go's generics limitations in generated code (use DefineArrayOfStaticBytes).

func EncodeArrayOfBits

func EncodeArrayOfBits[T commonBitsLengths](enc *Encoder, bits *T)

EncodeArrayOfBits serializes a static array of (packed) bits.

func EncodeArrayOfBitsPointerOnFork added in v0.3.0

func EncodeArrayOfBitsPointerOnFork[T commonBitsLengths](enc *Encoder, bits *T, filter ForkFilter)

EncodeArrayOfBitsPointerOnFork serializes a static array of (packed) bits if present in a fork.

Note, a nil pointer is serialized as a zero-value bit array.

func EncodeArrayOfStaticBytes

func EncodeArrayOfStaticBytes[T commonBytesArrayLengths[U], U commonBytesLengths](enc *Encoder, blobs *T)

EncodeArrayOfStaticBytes serializes a static array of static binary blobs.

The reason the blobs parameter is passed by pointer and not by value is to prevent it from escaping to the heap (and incurring an allocation) when passing it to the output stream.

func EncodeArrayOfUint64s

func EncodeArrayOfUint64s[T commonUint64sLengths](enc *Encoder, ns *T)

EncodeArrayOfUint64s serializes a static array of uint64s.

The reason the ns parameter is passed by pointer and not by value is to prevent it from escaping to the heap (and incurring an allocation) when passing it to the output stream.

func EncodeArrayOfUint64sPointerOnFork added in v0.3.0

func EncodeArrayOfUint64sPointerOnFork[T commonUint64sLengths](enc *Encoder, ns *T, filter ForkFilter)

EncodeArrayOfUint64sPointerOnFork serializes a static array of uint64s if present in a fork.

Note, a nil pointer is serialized as a uint64 array filled with zeroes.

func EncodeBool

func EncodeBool[T ~bool](enc *Encoder, v T)

EncodeBool serializes a boolean.

func EncodeBoolPointerOnFork added in v0.3.0

func EncodeBoolPointerOnFork[T ~bool](enc *Encoder, v *T, filter ForkFilter)

EncodeBoolPointerOnFork serializes a boolean if present in a fork.

Note, a nil pointer is serialized as false.

func EncodeCheckedArrayOfStaticBytes

func EncodeCheckedArrayOfStaticBytes[T commonBytesLengths](enc *Encoder, blobs []T, size uint64)

EncodeCheckedArrayOfStaticBytes serializes a static array of static binary blobs.

func EncodeCheckedStaticBytes

func EncodeCheckedStaticBytes(enc *Encoder, blob []byte, size uint64)

EncodeCheckedStaticBytes serializes a static binary blob.

func EncodeDynamicBytesContent

func EncodeDynamicBytesContent(enc *Encoder, blob []byte)

EncodeDynamicBytesContent is the lazy data writer for EncodeDynamicBytesOffset.

func EncodeDynamicBytesContentOnFork added in v0.3.0

func EncodeDynamicBytesContentOnFork(enc *Encoder, blob []byte, filter ForkFilter)

EncodeDynamicBytesContentOnFork is the lazy data writer for EncodeDynamicBytesOffsetOnFork.

func EncodeDynamicBytesOffset

func EncodeDynamicBytesOffset(enc *Encoder, blob []byte)

EncodeDynamicBytesOffset serializes a dynamic binary blob.

func EncodeDynamicBytesOffsetOnFork added in v0.3.0

func EncodeDynamicBytesOffsetOnFork(enc *Encoder, blob []byte, filter ForkFilter)

EncodeDynamicBytesOffsetOnFork serializes a dynamic binary blob if present in a fork.

func EncodeDynamicObjectContent

func EncodeDynamicObjectContent[T newableDynamicObject[U], U any](enc *Encoder, obj T)

EncodeDynamicObjectContent is the lazy data writer for EncodeDynamicObjectOffset.

Note, nil will be encoded as a zero-value initialized object.

func EncodeDynamicObjectContentOnFork added in v0.3.0

func EncodeDynamicObjectContentOnFork[T newableDynamicObject[U], U any](enc *Encoder, obj T, filter ForkFilter)

EncodeDynamicObjectContentOnFork is the lazy data writer for EncodeDynamicObjectOffsetOnFork.

Note, nil will be encoded as a zero-value initialized object.

func EncodeDynamicObjectOffset

func EncodeDynamicObjectOffset[T newableDynamicObject[U], U any](enc *Encoder, obj T)

EncodeDynamicObjectOffset serializes a dynamic ssz object.

Note, nil will be encoded as a zero-value initialized object.

func EncodeDynamicObjectOffsetOnFork added in v0.3.0

func EncodeDynamicObjectOffsetOnFork[T newableDynamicObject[U], U any](enc *Encoder, obj T, filter ForkFilter)

EncodeDynamicObjectOffsetOnFork serializes a dynamic ssz object if present in a fork.

Note, nil will be encoded as a zero-value initialized object.

func EncodeSliceOfBitsContent

func EncodeSliceOfBitsContent(enc *Encoder, bits bitfield.Bitlist)

EncodeSliceOfBitsContent is the lazy data writer for EncodeSliceOfBitsOffset.

Note, a nil slice of bits is serialized as an empty bit list.

func EncodeSliceOfBitsContentOnFork added in v0.3.0

func EncodeSliceOfBitsContentOnFork(enc *Encoder, bits bitfield.Bitlist, filter ForkFilter)

EncodeSliceOfBitsContentOnFork is the lazy data writer for EncodeSliceOfBitsOffsetOnFork.

Note, a nil slice of bits is serialized as an empty bit list.

func EncodeSliceOfBitsOffset

func EncodeSliceOfBitsOffset(enc *Encoder, bits bitfield.Bitlist)

EncodeSliceOfBitsOffset serializes a dynamic slice of (packed) bits.

Note, a nil slice of bits is serialized as an empty bit list.

func EncodeSliceOfBitsOffsetOnFork added in v0.3.0

func EncodeSliceOfBitsOffsetOnFork(enc *Encoder, bits bitfield.Bitlist, filter ForkFilter)

EncodeSliceOfBitsOffsetOnFork serializes a dynamic slice of (packed) bits if present in a fork.

Note, a nil slice of bits is serialized as an empty bit list.

func EncodeSliceOfDynamicBytesContent

func EncodeSliceOfDynamicBytesContent(enc *Encoder, blobs [][]byte)

EncodeSliceOfDynamicBytesContent is the lazy data writer for EncodeSliceOfDynamicBytesOffset.

func EncodeSliceOfDynamicBytesContentOnFork added in v0.3.0

func EncodeSliceOfDynamicBytesContentOnFork(enc *Encoder, blobs [][]byte, filter ForkFilter)

EncodeSliceOfDynamicBytesContentOnFork is the lazy data writer for EncodeSliceOfDynamicBytesOffsetOnFork.

func EncodeSliceOfDynamicBytesOffset

func EncodeSliceOfDynamicBytesOffset(enc *Encoder, blobs [][]byte)

EncodeSliceOfDynamicBytesOffset serializes a dynamic slice of dynamic binary blobs.

func EncodeSliceOfDynamicBytesOffsetOnFork added in v0.3.0

func EncodeSliceOfDynamicBytesOffsetOnFork(enc *Encoder, blobs [][]byte, filter ForkFilter)

EncodeSliceOfDynamicBytesOffsetOnFork serializes a dynamic slice of dynamic binary blobs if present in a fork.

func EncodeSliceOfDynamicObjectsContent

func EncodeSliceOfDynamicObjectsContent[T DynamicObject](enc *Encoder, objects []T)

EncodeSliceOfDynamicObjectsContent is the lazy data writer for EncodeSliceOfDynamicObjectsOffset.

func EncodeSliceOfDynamicObjectsContentOnFork added in v0.3.0

func EncodeSliceOfDynamicObjectsContentOnFork[T DynamicObject](enc *Encoder, objects []T, filter ForkFilter)

EncodeSliceOfDynamicObjectsContentOnFork is the lazy data writer for EncodeSliceOfDynamicObjectsOffsetOnFork.

func EncodeSliceOfDynamicObjectsOffset

func EncodeSliceOfDynamicObjectsOffset[T DynamicObject](enc *Encoder, objects []T)

EncodeSliceOfDynamicObjectsOffset serializes a dynamic slice of dynamic ssz objects.

func EncodeSliceOfDynamicObjectsOffsetOnFork added in v0.3.0

func EncodeSliceOfDynamicObjectsOffsetOnFork[T DynamicObject](enc *Encoder, objects []T, filter ForkFilter)

EncodeSliceOfDynamicObjectsOffsetOnFork serializes a dynamic slice of dynamic ssz objects if present in a fork.

func EncodeSliceOfStaticBytesContent

func EncodeSliceOfStaticBytesContent[T commonBytesLengths](enc *Encoder, blobs []T)

EncodeSliceOfStaticBytesContent is the lazy data writer for EncodeSliceOfStaticBytesOffset.

func EncodeSliceOfStaticBytesContentOnFork added in v0.3.0

func EncodeSliceOfStaticBytesContentOnFork[T commonBytesLengths](enc *Encoder, blobs []T, filter ForkFilter)

EncodeSliceOfStaticBytesContentOnFork is the lazy data writer for EncodeSliceOfStaticBytesOffsetOnFork.

func EncodeSliceOfStaticBytesOffset

func EncodeSliceOfStaticBytesOffset[T commonBytesLengths](enc *Encoder, blobs []T)

EncodeSliceOfStaticBytesOffset serializes a dynamic slice of static binary blobs.

func EncodeSliceOfStaticBytesOffsetOnFork added in v0.3.0

func EncodeSliceOfStaticBytesOffsetOnFork[T commonBytesLengths](enc *Encoder, blobs []T, filter ForkFilter)

EncodeSliceOfStaticBytesOffsetOnFork serializes a dynamic slice of static binary blobs if present in a fork.

func EncodeSliceOfStaticObjectsContent

func EncodeSliceOfStaticObjectsContent[T StaticObject](enc *Encoder, objects []T)

EncodeSliceOfStaticObjectsContent is the lazy data writer for EncodeSliceOfStaticObjectsOffset.

func EncodeSliceOfStaticObjectsContentOnFork added in v0.3.0

func EncodeSliceOfStaticObjectsContentOnFork[T StaticObject](enc *Encoder, objects []T, filter ForkFilter)

EncodeSliceOfStaticObjectsContentOnFork is the lazy data writer for EncodeSliceOfStaticObjectsOffsetOnFork.

func EncodeSliceOfStaticObjectsOffset

func EncodeSliceOfStaticObjectsOffset[T StaticObject](enc *Encoder, objects []T)

EncodeSliceOfStaticObjectsOffset serializes a dynamic slice of static ssz objects.

func EncodeSliceOfStaticObjectsOffsetOnFork added in v0.3.0

func EncodeSliceOfStaticObjectsOffsetOnFork[T StaticObject](enc *Encoder, objects []T, filter ForkFilter)

EncodeSliceOfStaticObjectsOffsetOnFork serializes a dynamic slice of static ssz objects if present in a fork.

func EncodeSliceOfUint64sContent

func EncodeSliceOfUint64sContent[T ~uint64](enc *Encoder, ns []T)

EncodeSliceOfUint64sContent is the lazy data writer for EncodeSliceOfUint64sOffset.

func EncodeSliceOfUint64sContentOnFork added in v0.3.0

func EncodeSliceOfUint64sContentOnFork[T ~uint64](enc *Encoder, ns []T, filter ForkFilter)

EncodeSliceOfUint64sContentOnFork is the lazy data writer for EncodeSliceOfUint64sOffsetOnFork.

func EncodeSliceOfUint64sOffset

func EncodeSliceOfUint64sOffset[T ~uint64](enc *Encoder, ns []T)

EncodeSliceOfUint64sOffset serializes a dynamic slice of uint64s.

func EncodeSliceOfUint64sOffsetOnFork added in v0.3.0

func EncodeSliceOfUint64sOffsetOnFork[T ~uint64](enc *Encoder, ns []T, filter ForkFilter)

EncodeSliceOfUint64sOffsetOnFork serializes a dynamic slice of uint64s if present in a fork.

func EncodeStaticBytes

func EncodeStaticBytes[T commonBytesLengths](enc *Encoder, blob *T)

EncodeStaticBytes serializes a static binary blob.

The blob is passed by pointer to avoid high stack copy costs and a potential escape to the heap.

func EncodeStaticBytesPointerOnFork added in v0.3.0

func EncodeStaticBytesPointerOnFork[T commonBytesLengths](enc *Encoder, blob *T, filter ForkFilter)

EncodeStaticBytesPointerOnFork serializes a static binary blob if present in a fork.

Note, a nil pointer is serialized as a zero-value blob.

func EncodeStaticObject

func EncodeStaticObject[T newableStaticObject[U], U any](enc *Encoder, obj T)

EncodeStaticObject serializes a static ssz object.

Note, nil will be encoded as a zero-value initialized object.

Example
// ssz: Go Simple Serialize (SSZ) codec library
// Copyright 2024 ssz Authors
// SPDX-License-Identifier: BSD-3-Clause

package main

import (
	"bytes"
	"fmt"

	"github.com/karalabe/ssz"
)

type Address [20]byte

type Withdrawal struct {
	Index     uint64  `ssz-size:"8"`
	Validator uint64  `ssz-size:"8"`
	Address   Address `ssz-size:"20"`
	Amount    uint64  `ssz-size:"8"`
}

func (w *Withdrawal) SizeSSZ(siz *ssz.Sizer) uint32 { return 44 }

func (w *Withdrawal) DefineSSZ(codec *ssz.Codec) {
	ssz.DefineUint64(codec, &w.Index)        // Field (0) - Index          -  8 bytes
	ssz.DefineUint64(codec, &w.Validator)    // Field (1) - ValidatorIndex -  8 bytes
	ssz.DefineStaticBytes(codec, &w.Address) // Field (2) - Address        - 20 bytes
	ssz.DefineUint64(codec, &w.Amount)       // Field (3) - Amount         -  8 bytes
}

func main() {
	out := new(bytes.Buffer)
	if err := ssz.EncodeToStream(out, new(Withdrawal)); err != nil {
		panic(err)
	}
	hash := ssz.HashSequential(new(Withdrawal))

	fmt.Printf("ssz: %#x\nhash: %#x\n", out, hash)
}
Output:

ssz: 0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
hash: 0xdb56114e00fdd4c1f85c892bf35ac9a89289aaecb1ebd0a96cde606a748b5d71

func EncodeStaticObjectOnFork added in v0.3.0

func EncodeStaticObjectOnFork[T newableStaticObject[U], U any](enc *Encoder, obj T, filter ForkFilter)

EncodeStaticObjectOnFork serializes a static ssz object if present in a fork.

Note, nil will be encoded as a zero-value initialized object.

func EncodeToBytes

func EncodeToBytes(buf []byte, obj Object) error

EncodeToBytes serializes a non-monolithic object into a byte buffer. If the type contains fork-specific rules, use EncodeToBytesOnFork.

Don't use this method if you want to then write the buffer into a stream via some writer, as that would double the memory use for the temporary buffer. For that use case, use EncodeToStream.
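
A minimal sketch: sizing the buffer with ssz.Size up front means the encoder never hits ErrBufferTooSmall (the Withdrawal type is the one from the example above):

func encodeWithdrawal(w *Withdrawal) ([]byte, error) {
	buf := make([]byte, ssz.Size(w)) // exact size avoids ErrBufferTooSmall
	if err := ssz.EncodeToBytes(buf, w); err != nil {
		return nil, err
	}
	return buf, nil
}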

func EncodeToBytesOnFork added in v0.3.0

func EncodeToBytesOnFork(buf []byte, obj Object, fork Fork) error

EncodeToBytesOnFork serializes a monolithic object into a byte buffer. If the type does not contain fork-specific rules, you can also use EncodeToBytes.

Don't use this method if you want to then write the buffer into a stream via some writer, as that would double the memory use for the temporary buffer. For that use case, use EncodeToStreamOnFork.

func EncodeToStream

func EncodeToStream(w io.Writer, obj Object) error

EncodeToStream serializes a non-monolithic object into a data stream. If the type contains fork-specific rules, use EncodeToStreamOnFork.

Do not use this method with a bytes.Buffer to write into a []byte slice, as that will double the byte copying. For that use case, use EncodeToBytes.

func EncodeToStreamOnFork added in v0.3.0

func EncodeToStreamOnFork(w io.Writer, obj Object, fork Fork) error

EncodeToStreamOnFork serializes a monolithic object into a data stream. If the type does not contain fork-specific rules, you can also use EncodeToStream.

Do not use this method with a bytes.Buffer to write into a []byte slice, as that will double the byte copying. For that use case, use EncodeToBytesOnFork.

func EncodeUint16 added in v0.3.0

func EncodeUint16[T ~uint16](enc *Encoder, n T)

EncodeUint16 serializes a uint16.

func EncodeUint16PointerOnFork added in v0.3.0

func EncodeUint16PointerOnFork[T ~uint16](enc *Encoder, n *T, filter ForkFilter)

EncodeUint16PointerOnFork serializes a uint16 if present in a fork.

Note, a nil pointer is serialized as zero.

func EncodeUint256

func EncodeUint256(enc *Encoder, n *uint256.Int)

EncodeUint256 serializes a uint256.

Note, a nil pointer is serialized as zero.

func EncodeUint256BigInt added in v0.3.0

func EncodeUint256BigInt(enc *Encoder, n *big.Int)

EncodeUint256BigInt serializes a big.Int as uint256.

Note, a nil pointer is serialized as zero. Note, an overflow will be silently dropped.

func EncodeUint256BigIntOnFork added in v0.3.0

func EncodeUint256BigIntOnFork(enc *Encoder, n *big.Int, filter ForkFilter)

EncodeUint256BigIntOnFork serializes a big.Int as uint256 if present in a fork.

Note, a nil pointer is serialized as zero. Note, an overflow will be silently dropped.

func EncodeUint256OnFork added in v0.3.0

func EncodeUint256OnFork(enc *Encoder, n *uint256.Int, filter ForkFilter)

EncodeUint256OnFork serializes a uint256 if present in a fork.

Note, a nil pointer is serialized as zero.

func EncodeUint32 added in v0.3.0

func EncodeUint32[T ~uint32](enc *Encoder, n T)

EncodeUint32 serializes a uint32.

func EncodeUint32PointerOnFork added in v0.3.0

func EncodeUint32PointerOnFork[T ~uint32](enc *Encoder, n *T, filter ForkFilter)

EncodeUint32PointerOnFork serializes a uint32 if present in a fork.

Note, a nil pointer is serialized as zero.

func EncodeUint64

func EncodeUint64[T ~uint64](enc *Encoder, n T)

EncodeUint64 serializes a uint64.

func EncodeUint64PointerOnFork added in v0.3.0

func EncodeUint64PointerOnFork[T ~uint64](enc *Encoder, n *T, filter ForkFilter)

EncodeUint64PointerOnFork serializes a uint64 if present in a fork.

Note, a nil pointer is serialized as zero.

func EncodeUint8 added in v0.3.0

func EncodeUint8[T ~uint8](enc *Encoder, n T)

EncodeUint8 serializes a uint8.

func EncodeUint8PointerOnFork added in v0.3.0

func EncodeUint8PointerOnFork[T ~uint8](enc *Encoder, n *T, filter ForkFilter)

EncodeUint8PointerOnFork serializes a uint8 if present in a fork.

Note, a nil pointer is serialized as zero.

func EncodeUnsafeArrayOfStaticBytes

func EncodeUnsafeArrayOfStaticBytes[T commonBytesLengths](enc *Encoder, blobs []T)

EncodeUnsafeArrayOfStaticBytes serializes a static array of static binary blobs.

func HashArrayOfBits added in v0.2.0

func HashArrayOfBits[T commonBitsLengths](h *Hasher, bits *T)

HashArrayOfBits hashes a static array of (packed) bits.

func HashArrayOfBitsPointerOnFork added in v0.3.0

func HashArrayOfBitsPointerOnFork[T commonBitsLengths](h *Hasher, bits *T, filter ForkFilter)

HashArrayOfBitsPointerOnFork hashes a static array of (packed) bits if present in a fork.

func HashArrayOfStaticBytes added in v0.2.0

func HashArrayOfStaticBytes[T commonBytesArrayLengths[U], U commonBytesLengths](h *Hasher, blobs *T)

HashArrayOfStaticBytes hashes a static array of static binary blobs.

The reason the blobs parameter is passed by pointer and not by value is to prevent it from escaping to the heap (and incurring an allocation) when passing it to the hasher.

func HashArrayOfUint64s added in v0.2.0

func HashArrayOfUint64s[T commonUint64sLengths](h *Hasher, ns *T)

HashArrayOfUint64s hashes a static array of uint64s.

The reason the ns parameter is passed by pointer and not by value is to prevent it from escaping to the heap (and incurring an allocation) when passing it to the hasher.

func HashArrayOfUint64sPointerOnFork added in v0.3.0

func HashArrayOfUint64sPointerOnFork[T commonUint64sLengths](h *Hasher, ns *T, filter ForkFilter)

HashArrayOfUint64sPointerOnFork hashes a static array of uint64s if present in a fork.

func HashBool added in v0.2.0

func HashBool[T ~bool](h *Hasher, v T)

HashBool hashes a boolean.

func HashBoolPointerOnFork added in v0.3.0

func HashBoolPointerOnFork[T ~bool](h *Hasher, v *T, filter ForkFilter)

HashBoolPointerOnFork hashes a boolean if present in a fork.

Note, a nil pointer is hashed as zero.

func HashCheckedArrayOfStaticBytes added in v0.2.0

func HashCheckedArrayOfStaticBytes[T commonBytesLengths](h *Hasher, blobs []T)

HashCheckedArrayOfStaticBytes hashes a static array of static binary blobs.

func HashCheckedStaticBytes added in v0.2.0

func HashCheckedStaticBytes(h *Hasher, blob []byte)

HashCheckedStaticBytes hashes a static binary blob.

func HashConcurrent added in v0.2.0

func HashConcurrent(obj Object) [32]byte

HashConcurrent computes the merkle root of a non-monolithic object on potentially multiple concurrent threads (iff some data segments are large enough to be worth it). This is useful for processing large objects, but will place a bigger load on your CPU and GC, and might be more variable timing-wise depending on other load.

If the type contains fork-specific rules, use HashConcurrentOnFork.

func HashConcurrentOnFork added in v0.3.0

func HashConcurrentOnFork(obj Object, fork Fork) [32]byte

HashConcurrentOnFork computes the merkle root of a monolithic object on potentially multiple concurrent threads (iff some data segments are large enough to be worth it). This is useful for processing large objects, but will place a bigger load on your CPU and GC, and might be more variable timing-wise depending on other load.

If the type does not contain fork-specific rules, you can also use HashConcurrent.
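
The trade-off between the two entry points can be captured in a small hedged helper; the large heuristic is the caller's to choose:

func merkleRoot(obj ssz.Object, large bool) [32]byte {
	if large {
		// Fans out across threads when data segments are big enough to be worth it.
		return ssz.HashConcurrent(obj)
	}
	// Stable runtime and O(1) GC guarantees for small objects.
	return ssz.HashSequential(obj)
}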

func HashDynamicBytes added in v0.2.0

func HashDynamicBytes(h *Hasher, blob []byte, maxSize uint64)

HashDynamicBytes hashes a dynamic binary blob.

func HashDynamicBytesOnFork added in v0.3.0

func HashDynamicBytesOnFork(h *Hasher, blob []byte, maxSize uint64, filter ForkFilter)

HashDynamicBytesOnFork hashes a dynamic binary blob if present in a fork.

func HashDynamicObject added in v0.2.0

func HashDynamicObject[T newableDynamicObject[U], U any](h *Hasher, obj T)

HashDynamicObject hashes a dynamic ssz object.

func HashDynamicObjectOnFork added in v0.3.0

func HashDynamicObjectOnFork[T newableDynamicObject[U], U any](h *Hasher, obj T, filter ForkFilter)

HashDynamicObjectOnFork hashes a dynamic ssz object if present in a fork.

func HashSequential added in v0.2.0

func HashSequential(obj Object) [32]byte

HashSequential computes the merkle root of a non-monolithic object on a single thread. This is useful for processing small objects with stable runtime and O(1) GC guarantees.

If the type contains fork-specific rules, use HashSequentialOnFork.

func HashSequentialOnFork added in v0.3.0

func HashSequentialOnFork(obj Object, fork Fork) [32]byte

HashSequentialOnFork computes the merkle root of a monolithic object on a single thread. This is useful for processing small objects with stable runtime and O(1) GC guarantees.

If the type does not contain fork-specific rules, you can also use HashSequential.

func HashSliceOfBits added in v0.2.0

func HashSliceOfBits(h *Hasher, bits bitfield.Bitlist, maxBits uint64)

HashSliceOfBits hashes a dynamic slice of (packed) bits.

Note, a nil slice of bits is serialized as an empty bit list.

func HashSliceOfBitsOnFork added in v0.3.0

func HashSliceOfBitsOnFork(h *Hasher, bits bitfield.Bitlist, maxBits uint64, filter ForkFilter)

HashSliceOfBitsOnFork hashes a dynamic slice of (packed) bits if present in a fork.

Note, a nil slice of bits is serialized as an empty bit list.

func HashSliceOfDynamicBytes added in v0.2.0

func HashSliceOfDynamicBytes(h *Hasher, blobs [][]byte, maxItems uint64, maxSize uint64)

HashSliceOfDynamicBytes hashes a dynamic slice of dynamic binary blobs.

func HashSliceOfDynamicBytesOnFork added in v0.3.0

func HashSliceOfDynamicBytesOnFork(h *Hasher, blobs [][]byte, maxItems uint64, maxSize uint64, filter ForkFilter)

HashSliceOfDynamicBytesOnFork hashes a dynamic slice of dynamic binary blobs if present in a fork.

func HashSliceOfDynamicObjects added in v0.2.0

func HashSliceOfDynamicObjects[T DynamicObject](h *Hasher, objects []T, maxItems uint64)

HashSliceOfDynamicObjects hashes a dynamic slice of dynamic ssz objects.

func HashSliceOfDynamicObjectsOnFork added in v0.3.0

func HashSliceOfDynamicObjectsOnFork[T DynamicObject](h *Hasher, objects []T, maxItems uint64, filter ForkFilter)

HashSliceOfDynamicObjectsOnFork hashes a dynamic slice of dynamic ssz objects if present in a fork.

func HashSliceOfStaticBytes added in v0.2.0

func HashSliceOfStaticBytes[T commonBytesLengths](h *Hasher, blobs []T, maxItems uint64)

HashSliceOfStaticBytes hashes a dynamic slice of static binary blobs.

func HashSliceOfStaticBytesOnFork added in v0.3.0

func HashSliceOfStaticBytesOnFork[T commonBytesLengths](h *Hasher, blobs []T, maxItems uint64, filter ForkFilter)

HashSliceOfStaticBytesOnFork hashes a dynamic slice of static binary blobs if present in a fork.

func HashSliceOfStaticObjects added in v0.2.0

func HashSliceOfStaticObjects[T StaticObject](h *Hasher, objects []T, maxItems uint64)

HashSliceOfStaticObjects hashes a dynamic slice of static ssz objects.

func HashSliceOfStaticObjectsOnFork added in v0.3.0

func HashSliceOfStaticObjectsOnFork[T StaticObject](h *Hasher, objects []T, maxItems uint64, filter ForkFilter)

HashSliceOfStaticObjectsOnFork hashes a dynamic slice of static ssz objects if present in a fork.

func HashSliceOfUint64s added in v0.2.0

func HashSliceOfUint64s[T ~uint64](h *Hasher, ns []T, maxItems uint64)

HashSliceOfUint64s hashes a dynamic slice of uint64s.

func HashSliceOfUint64sOnFork added in v0.3.0

func HashSliceOfUint64sOnFork[T ~uint64](h *Hasher, ns []T, maxItems uint64, filter ForkFilter)

HashSliceOfUint64sOnFork hashes a dynamic slice of uint64s if present in a fork.

func HashStaticBytes added in v0.2.0

func HashStaticBytes[T commonBytesLengths](h *Hasher, blob *T)

HashStaticBytes hashes a static binary blob.

The blob is passed by pointer to avoid high stack copy costs and a potential escape to the heap.

func HashStaticBytesPointerOnFork added in v0.3.0

func HashStaticBytesPointerOnFork[T commonBytesLengths](h *Hasher, blob *T, filter ForkFilter)

HashStaticBytesPointerOnFork hashes a static binary blob if present in a fork.

Note, a nil pointer is hashed as an empty binary blob.

func HashStaticObject added in v0.2.0

func HashStaticObject[T newableStaticObject[U], U any](h *Hasher, obj T)

HashStaticObject hashes a static ssz object.

func HashStaticObjectOnFork added in v0.3.0

func HashStaticObjectOnFork[T newableStaticObject[U], U any](h *Hasher, obj T, filter ForkFilter)

HashStaticObjectOnFork hashes a static ssz object if present in a fork.

func HashUint16 added in v0.3.0

func HashUint16[T ~uint16](h *Hasher, n T)

HashUint16 hashes a uint16.

func HashUint16PointerOnFork added in v0.3.0

func HashUint16PointerOnFork[T ~uint16](h *Hasher, n *T, filter ForkFilter)

HashUint16PointerOnFork hashes a uint16 if present in a fork.

Note, a nil pointer is hashed as zero.

func HashUint256 added in v0.2.0

func HashUint256(h *Hasher, n *uint256.Int)

HashUint256 hashes a uint256.

Note, a nil pointer is hashed as zero.

func HashUint256BigInt added in v0.3.0

func HashUint256BigInt(h *Hasher, n *big.Int)

HashUint256BigInt hashes a big.Int as uint256.

Note, a nil pointer is hashed as zero. Note, an overflow will be silently dropped.

func HashUint256BigIntOnFork added in v0.3.0

func HashUint256BigIntOnFork(h *Hasher, n *big.Int, filter ForkFilter)

HashUint256BigIntOnFork hashes a big.Int as uint256 if present in a fork.

Note, a nil pointer is hashed as zero. Note, an overflow will be silently dropped.

func HashUint256OnFork added in v0.3.0

func HashUint256OnFork(h *Hasher, n *uint256.Int, filter ForkFilter)

HashUint256OnFork hashes a uint256 if present in a fork.

Note, a nil pointer is hashed as zero.

func HashUint32 added in v0.3.0

func HashUint32[T ~uint32](h *Hasher, n T)

HashUint32 hashes a uint32.

func HashUint32PointerOnFork added in v0.3.0

func HashUint32PointerOnFork[T ~uint32](h *Hasher, n *T, filter ForkFilter)

HashUint32PointerOnFork hashes a uint32 if present in a fork.

Note, a nil pointer is hashed as zero.

func HashUint64 added in v0.2.0

func HashUint64[T ~uint64](h *Hasher, n T)

HashUint64 hashes a uint64.

func HashUint64PointerOnFork added in v0.3.0

func HashUint64PointerOnFork[T ~uint64](h *Hasher, n *T, filter ForkFilter)

HashUint64PointerOnFork hashes a uint64 if present in a fork.

Note, a nil pointer is hashed as zero.

func HashUint8 added in v0.3.0

func HashUint8[T ~uint8](h *Hasher, n T)

HashUint8 hashes a uint8.

func HashUint8PointerOnFork added in v0.3.0

func HashUint8PointerOnFork[T ~uint8](h *Hasher, n *T, filter ForkFilter)

HashUint8PointerOnFork hashes a uint8 if present in a fork.

Note, a nil pointer is hashed as zero.

func HashUnsafeArrayOfStaticBytes added in v0.2.0

func HashUnsafeArrayOfStaticBytes[T commonBytesLengths](h *Hasher, blobs []T)

HashUnsafeArrayOfStaticBytes hashes a static array of static binary blobs.

func PrecomputeStaticSizeCache added in v0.3.0

func PrecomputeStaticSizeCache(obj Object) []uint32

PrecomputeStaticSizeCache is a helper to precompute SSZ (static) sizes for a monolith type on different forks.

For non-monolith types that are constant across forks (or are not meant to be used across forks), all the sizes will be the same, so it is simpler to hard-code the size instead.
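
A hedged sketch of caching the per-fork sizes once at startup. Indexing the returned slice by Fork value is an assumption based on the description above, and for a fork-independent type like the example's Withdrawal every entry would be equal:

var withdrawalSizes = ssz.PrecomputeStaticSizeCache(new(Withdrawal))

func staticSizeOn(fork ssz.Fork) uint32 {
	return withdrawalSizes[fork] // assumed: slice indexed by Fork value
}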

func Size

func Size(obj Object) uint32

Size retrieves the size of a non-monolithic object, independent of whether it is static or dynamic. If the type contains fork-specific rules, use SizeOnFork.

func SizeDynamicBytes

func SizeDynamicBytes(siz *Sizer, blobs []byte) uint32

SizeDynamicBytes returns the serialized size of the dynamic part of a dynamic blob.

func SizeDynamicObject

func SizeDynamicObject[T newableDynamicObject[U], U any](siz *Sizer, obj T) uint32

SizeDynamicObject returns the serialized size of the dynamic part of a dynamic object.

func SizeOnFork added in v0.3.0

func SizeOnFork(obj Object, fork Fork) uint32

SizeOnFork retrieves the size of a monolithic object, independent of whether it is static or dynamic. If the type does not contain fork-specific rules, you can also use Size.
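
Sizing typically feeds straight into a fork-aware encode, as in this sketch (the Deneb choice is arbitrary):

func encodeOnDeneb(obj ssz.Object) ([]byte, error) {
	buf := make([]byte, ssz.SizeOnFork(obj, ssz.ForkDeneb))
	if err := ssz.EncodeToBytesOnFork(buf, obj, ssz.ForkDeneb); err != nil {
		return nil, err
	}
	return buf, nil
}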

func SizeSliceOfBits

func SizeSliceOfBits(siz *Sizer, bits bitfield.Bitlist) uint32

SizeSliceOfBits returns the serialized size of the dynamic part of a slice of bits.

Note, a nil slice of bits is sized as an empty bit list.

func SizeSliceOfDynamicBytes

func SizeSliceOfDynamicBytes(siz *Sizer, blobs [][]byte) uint32

SizeSliceOfDynamicBytes returns the serialized size of the dynamic part of a dynamic list of dynamic blobs.

func SizeSliceOfDynamicObjects

func SizeSliceOfDynamicObjects[T DynamicObject](siz *Sizer, objects []T) uint32

SizeSliceOfDynamicObjects returns the serialized size of the dynamic part of a dynamic list of dynamic objects.

func SizeSliceOfStaticBytes

func SizeSliceOfStaticBytes[T commonBytesLengths](siz *Sizer, blobs []T) uint32

SizeSliceOfStaticBytes returns the serialized size of the dynamic part of a dynamic list of static blobs.

func SizeSliceOfStaticObjects

func SizeSliceOfStaticObjects[T StaticObject](siz *Sizer, objects []T) uint32

SizeSliceOfStaticObjects returns the serialized size of the dynamic part of a dynamic list of static objects.

func SizeSliceOfUint64s

func SizeSliceOfUint64s[T ~uint64](siz *Sizer, ns []T) uint32

SizeSliceOfUint64s returns the serialized size of the dynamic part of a dynamic list of uint64s.

Types

type Codec

type Codec struct {
	// contains filtered or unexported fields
}

Codec is a unified SSZ encoder and decoder that allows simple structs to define their schema once and have it drive both operations (at the same speed as typing them out explicitly would, of course).
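
As a minimal sketch of the single-schema idea, consider a hypothetical Withdrawal container with only static fields (the Define* helpers are documented earlier in this reference; [20]byte is assumed to be among the supported common byte lengths):

type Withdrawal struct {
	Index     uint64
	Validator uint64
	Address   [20]byte
	Amount    uint64
}

// SizeSSZ returns the static size: 8 + 8 + 20 + 8 bytes.
func (w *Withdrawal) SizeSSZ(siz *ssz.Sizer) uint32 { return 44 }

// DefineSSZ lays out the schema once; the codec runs it for encoding,
// decoding and hashing alike.
func (w *Withdrawal) DefineSSZ(codec *ssz.Codec) {
	ssz.DefineUint64(codec, &w.Index)
	ssz.DefineUint64(codec, &w.Validator)
	ssz.DefineStaticBytes(codec, &w.Address)
	ssz.DefineUint64(codec, &w.Amount)
}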

func (*Codec) DefineDecoder

func (c *Codec) DefineDecoder(impl func(dec *Decoder))

DefineDecoder uses a dedicated decoder in case the type's SSZ conversion is for some reason asymmetric (e.g. encoding depends on fields, decoding depends on outer context).

In practice, this will be the live code run when the object is being parsed.

func (*Codec) DefineEncoder

func (c *Codec) DefineEncoder(impl func(enc *Encoder))

DefineEncoder uses a dedicated encoder in case the type's SSZ conversion is for some reason asymmetric (e.g. encoding depends on fields, decoding depends on outer context).

In practice, this will be the live code run when the object is being serialized.

func (*Codec) DefineHasher added in v0.2.0

func (c *Codec) DefineHasher(impl func(has *Hasher))

DefineHasher uses a dedicated hasher in case the type's SSZ conversion is for some reason asymmetric (e.g. encoding depends on fields, decoding depends on outer context).

In practice, this will be the live code run when the object is being hashed.
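
Putting the three together, a hedged sketch of an asymmetric type (MyAsymmetricType and its Field are hypothetical; EncodeUint64 and DecodeUint64 are documented earlier in this reference):

func (o *MyAsymmetricType) DefineSSZ(codec *ssz.Codec) {
	codec.DefineEncoder(func(enc *ssz.Encoder) {
		ssz.EncodeUint64(enc, o.Field)
	})
	codec.DefineDecoder(func(dec *ssz.Decoder) {
		ssz.DecodeUint64(dec, &o.Field)
	})
	codec.DefineHasher(func(has *ssz.Hasher) {
		ssz.HashUint64(has, o.Field)
	})
}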

type Decoder

type Decoder struct {
	// contains filtered or unexported fields
}

Decoder is a wrapper around an io.Reader or a []byte buffer to implement SSZ decoding in a streaming or buffered way. It has the following behaviors:

  1. The decoder does not buffer; it simply reads from the wrapped input stream directly. If you need buffering, that is up to you.

  2. The decoder does not return errors hit while reading from the underlying input stream from individual decoding methods. Since there is no expectation (in general) of failure, user code can be denser if error checking is done at the end. Internally, of course, an error will halt all future input operations.

Internally there are a few implementation details that maintainers need to be aware of when modifying the code:

  1. The decoder supports two modes of operation: streaming and buffered. Any high level Go code would achieve that with two decoder types implementing a common interface. Unfortunately, the DecodeXYZ methods use Go's generics, which are not supported on struct/interface *methods*. As such, `Decoder.DecodeUint64s[T ~uint64](ns []T)` style methods cannot be used, only `DecodeUint64s[T ~uint64](dec *Decoder, ns []T)`. The latter form then requires each method to internally do some sort of type cast to handle the different decoder implementations. To avoid runtime type asserts, we've opted for a combo decoder with 2 possible inputs and switching on which one is set. Elegant? No. Fast? Yes.

  2. A lot of code snippets are repeated (e.g. decoding the offset, which is exactly the same for all the different types, yet the code below has them copied verbatim). Unfortunately the Go compiler doesn't inline functions aggressively enough (nor does it allow explicitly directing it to), and in such tight loops, extra calls matter for performance.
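
Callers normally don't touch the Decoder directly, but go through the package-level entry points documented earlier in this reference. A short sketch of both modes (obj is assumed to implement ssz.Object; bytes.NewReader is used purely for illustration):

func parse(blob []byte, obj ssz.Object) error {
	// Buffered mode: decode straight out of a byte slice.
	if err := ssz.DecodeFromBytes(blob, obj); err != nil {
		return err
	}
	// Streaming mode: decode from any io.Reader, given the expected size.
	return ssz.DecodeFromStream(bytes.NewReader(blob), obj, uint32(len(blob)))
}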

type DynamicObject

type DynamicObject interface {
	Object

	// SizeSSZ returns either the static size of the object if fixed == true, or
	// the total size otherwise.
	//
	// Note, StaticObject.SizeSSZ and DynamicObject.SizeSSZ deliberately clash
	// to allow the compiler to detect placing one or the other in reversed data
	// slots in an SSZ container.
	SizeSSZ(siz *Sizer, fixed bool) uint32
}

DynamicObject defines the methods a type needs to implement to be used as an ssz encodable and decodable dynamic object.
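
A hedged sketch of the two-mode SizeSSZ for a hypothetical dynamic type with one uint64 field and one dynamic byte blob:

func (o *MyDynamicType) SizeSSZ(siz *ssz.Sizer, fixed bool) uint32 {
	// Static part: the uint64 field plus the 4-byte offset of the blob.
	size := uint32(8 + 4)
	if fixed {
		return size
	}
	// Dynamic part: the blob's actual content.
	size += ssz.SizeDynamicBytes(siz, o.Blob)
	return size
}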

type Encoder

type Encoder struct {
	// contains filtered or unexported fields
}

Encoder is a wrapper around an io.Writer or a []byte buffer to implement SSZ encoding in a streaming or buffered way. It has the following behaviors:

  1. The encoder does not buffer; it simply writes to the wrapped output stream directly. If you need buffering (and flushing), that is up to you.

  2. The encoder does not return errors hit while writing to the underlying output stream from individual encoding methods. Since there is no expectation (in general) of failure, user code can be denser if error checking is done at the end. Internally, of course, an error will halt all future output operations.

  3. The offsets for dynamic fields are tracked internally by the encoder, so the caller only needs to provide the field whose offset should be written at the allotted slot.

  4. The contents of dynamic fields are not appended automatically; rather, the caller needs to provide them once more at the end of encoding (see the sketch after these implementation notes). This is a design choice to keep the encoder 0-alloc (vs. having to stash away the dynamic fields internally).

  5. The encoder does not enforce defined size limits on the dynamic fields. If the caller provided bad data to encode, it is a programming error and a runtime error will not fix anything.

Internally there are a few implementation details that maintainers need to be aware of when modifying the code:

  1. The encoder supports two modes of operation: streaming and buffered. Any high level Go code would achieve that with two encoder types implementing a common interface. Unfortunately, the EncodeXYZ methods use Go's generics, which are not supported on struct/interface *methods*. As such, `Encoder.EncodeUint64s[T ~uint64](ns []T)` style methods cannot be used, only `EncodeUint64s[T ~uint64](enc *Encoder, ns []T)`. The latter form then requires each method to internally do some sort of type cast to handle the different encoder implementations. To avoid runtime type asserts, we've opted for a combo encoder with 2 possible outputs and switching on which one is set. Elegant? No. Fast? Yes.

  2. A lot of code snippets are repeated (e.g. encoding the offset, which is exactly the same for all the different types, yet the code below has them copied verbatim). Unfortunately the Go compiler doesn't inline functions aggressively enough (nor does it allow explicitly directing it to), and in such tight loops, extra calls matter for performance.
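
Behaviors 3 and 4 above surface in the schema definition: a dynamic field is declared twice, once for its offset in the static section and once for its content at the end. A hedged sketch (the type and the 32-byte extra-data limit are hypothetical; the Define* helpers are documented earlier in this reference):

func (o *MyDynamicType) DefineSSZ(codec *ssz.Codec) {
	ssz.DefineUint64(codec, &o.Nonce)                      // Static field
	ssz.DefineDynamicBytesOffset(codec, &o.ExtraData, 32)  // Offset slot
	ssz.DefineDynamicBytesContent(codec, &o.ExtraData, 32) // Content at the end
}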

type Fork added in v0.3.0

type Fork int

Fork is an enum with all the hard forks that Ethereum mainnet went through, which can be used to multiplex monolith types that can encode/decode across a range of forks, not just one specific fork.

These enums are only meaningful in relation to one another; as absolute numbers they are meaningless. Do not persist them across code versions.

const (
	ForkUnknown Fork = iota // Placeholder if forks haven't been specified (must be index 0)

	ForkFrontier       // https://ethereum.org/en/history/#frontier
	ForkHomestead      // https://ethereum.org/en/history/#homestead
	ForkDAO            // https://ethereum.org/en/history/#dao-fork
	ForkTangerine      // https://ethereum.org/en/history/#tangerine-whistle
	ForkSpurious       // https://ethereum.org/en/history/#spurious-dragon
	ForkByzantium      // https://ethereum.org/en/history/#byzantium
	ForkConstantinople // https://ethereum.org/en/history/#constantinople
	ForkIstanbul       // https://ethereum.org/en/history/#istanbul
	ForkMuir           // https://ethereum.org/en/history/#muir-glacier
	ForkPhase0         // https://ethereum.org/en/history/#beacon-chain-genesis
	ForkBerlin         // https://ethereum.org/en/history/#berlin
	ForkLondon         // https://ethereum.org/en/history/#london
	ForkAltair         // https://ethereum.org/en/history/#altair
	ForkArrow          // https://ethereum.org/en/history/#arrow-glacier
	ForkGray           // https://ethereum.org/en/history/#gray-glacier
	ForkBellatrix      // https://ethereum.org/en/history/#bellatrix
	ForkParis          // https://ethereum.org/en/history/#paris
	ForkShapella       // https://ethereum.org/en/history/#shapella
	ForkDencun         // https://ethereum.org/en/history/#dencun
	ForkPectra         // https://ethereum.org/en/history/#pectra

	ForkFuture // Use this for specifying future features (must be last index, no gaps)

	ForkMerge    = ForkParis    // Common alias for Paris
	ForkShanghai = ForkShapella // EL alias for Shapella
	ForkCapella  = ForkShapella // CL alias for Shapella
	ForkCancun   = ForkDencun   // EL alias for Dencun
	ForkDeneb    = ForkDencun   // CL alias for Dencun
	ForkPrague   = ForkPectra   // EL alias for Pectra
	ForkElectra  = ForkPectra   // CL alias for Pectra
)
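
Since the enum is ordered and gapless, range checks work naturally, and the aliases compare equal to their canonical forks:

// isShapellaWindow reports whether fork falls in the Shapella..Dencun
// window (a hypothetical check for illustration).
func isShapellaWindow(fork ssz.Fork) bool {
	return fork >= ssz.ForkShapella && fork < ssz.ForkDencun
}

// Aliases share the value of their canonical fork:
// ssz.ForkCancun == ssz.ForkDencun holds.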

type ForkFilter added in v0.3.0

type ForkFilter struct {
	Added   Fork
	Removed Fork
}

ForkFilter can be used by the XXXOnFork methods inside monolithic types to define fields that appear only in certain forks.
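
For example, a field added in Dencun and never removed would be hashed with an open-ended filter (the BlobGasUsed field is hypothetical; HashUint64PointerOnFork is documented above):

ssz.HashUint64PointerOnFork(h, o.BlobGasUsed, ssz.ForkFilter{Added: ssz.ForkDencun})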

type Hasher added in v0.2.0

type Hasher struct {
	// contains filtered or unexported fields
}

Hasher is an SSZ Merkle Hash Root computer.

func (*Hasher) Reset added in v0.2.0

func (h *Hasher) Reset()

Reset resets the Hasher object.

type Object

type Object interface {
	// DefineSSZ defines how an object would be encoded/decoded.
	DefineSSZ(codec *Codec)
}

Object defines the methods a type needs to implement to be used as an ssz encodable and decodable object.

type Sizer added in v0.3.0

type Sizer struct {
	// contains filtered or unexported fields
}

Sizer is an SSZ static and dynamic size computer.

func (*Sizer) Fork added in v0.3.0

func (siz *Sizer) Fork() Fork

Fork retrieves the current fork (if any) that the sizer is operating in.
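
This is what lets a monolith type's static size vary per fork; a hedged sketch with hypothetical sizes:

func (o *MyMonolith) SizeSSZ(siz *ssz.Sizer) uint32 {
	if siz.Fork() >= ssz.ForkDencun {
		return 128 // An extra fork-specific field is present
	}
	return 112
}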

type StaticObject

type StaticObject interface {
	Object

	// SizeSSZ returns the total size of the ssz object.
	//
	// Note, StaticObject.SizeSSZ and DynamicObject.SizeSSZ deliberately clash
	// to allow the compiler to detect placing one or the other in reversed data
	// slots in an SSZ container.
	SizeSSZ(siz *Sizer) uint32
}

StaticObject defines the methods a type needs to implement to be used as an ssz encodable and decodable static object.

Directories

Path Synopsis
cmd
sszgen command
tests
