# API Reference

## Arrow.DenseUnion (Type)

An ArrowVector where the type of each element is one of a fixed set of types, meaning its eltype is like a Julia Union{type1, type2, ...}. An Arrow.DenseUnion, in comparison to Arrow.SparseUnion, stores elements in a set of arrays, one array per possible type, plus an "offsets" array, where each offset element is the index into one of the typed arrays. This allows a sort of "compression", where no extra space is used/allocated to store all the elements.

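A minimal sketch of how a dense union column typically arises; the column name and values are illustrative:

```julia
using Arrow

# Vectors with a Union eltype are written with the dense union layout
# by default (see the denseunions keyword argument of Arrow.write)
v = Union{Int64, String}[1, "a", 2, "b"]
io = IOBuffer()
Arrow.write(io, (col = v,))
tbl = Arrow.Table(seekstart(io))
eltype(tbl.col)  # Union{Int64, String}
```
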
## Arrow.DictEncode (Type)
Arrow.DictEncode(::AbstractVector, id::Integer=nothing)

Signals that a column/array should be dictionary encoded when serialized to the arrow streaming/file format. An optional id number may be provided to signal that multiple columns should use the same pool when being dictionary encoded.

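For example, to dict encode a single column when writing (a minimal sketch; the column name is illustrative):

```julia
using Arrow

fruit = ["apple", "banana", "apple", "cherry"]
io = IOBuffer()
# only the wrapped column is dictionary encoded; an optional id may be
# passed to signal that multiple columns should share the same pool
Arrow.write(io, (fruit = Arrow.DictEncode(fruit),))
```
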
## Arrow.DictEncoded (Type)

A dictionary encoded array type (similar to a PooledArray). Behaves just like a normal array in most respects; internally, possible values are stored in the encoding::DictEncoding field, while the indices::Vector{<:Integer} field holds the "codes" of each element for indexing into the encoding pool. Any column/array can be dict encoded when serializing to the arrow format, either by passing the dictencode=true keyword argument to Arrow.write (which causes all columns to be dict encoded), or by wrapping individual columns/arrays in Arrow.DictEncode(x).

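A minimal round-trip sketch (names are illustrative):

```julia
using Arrow

io = IOBuffer()
Arrow.write(io, (x = ["a", "b", "a", "b"],); dictencode=true)
tbl = Arrow.Table(seekstart(io))
tbl.x        # reads back as an Arrow.DictEncoded column
copy(tbl.x)  # materialize as a plain Vector{String}
```
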
## Arrow.DictEncoding (Type)

Represents the "pool" of possible values for a DictEncoded array type. Whether the order of values is significant can be checked by looking at the isOrdered boolean field.

The S type parameter, while not tied directly to any field, is the signed integer "index type" of the parent DictEncoded. We keep track of this in the DictEncoding in order to validate that the length of the pool doesn't exceed the index type's limit. The general workflow of writing arrow data means the initial schema will typically be based on the data in the first record batch, and subsequent record batches need to match that schema exactly. For example, if the dict encoded column of a later record batch were to overflow the DictEncoding pool with new unique values, a fatal error should be thrown.

## Arrow.FixedSizeList (Type)

An ArrowVector where each element is a "fixed size" list of some kind, like an NTuple{N, T}.

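For example, a column of NTuple elements should map to this layout (a minimal sketch):

```julia
using Arrow

io = IOBuffer()
# NTuple{2, Float64} elements have a fixed, known length
Arrow.write(io, (pt = [(1.0, 2.0), (3.0, 4.0)],))
tbl = Arrow.Table(seekstart(io))
tbl.pt[1]  # reads back as a tuple: (1.0, 2.0)
```
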
## Arrow.List (Type)

An ArrowVector where each element is a variable-sized list of some kind, like an AbstractVector or AbstractString.

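A minimal sketch of writing and reading a list column:

```julia
using Arrow

io = IOBuffer()
Arrow.write(io, (x = [[1, 2, 3], [4], Int[]],))
tbl = Arrow.Table(seekstart(io))
tbl.x[1]  # a view into the arrow data, equal to [1, 2, 3]
```
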
## Arrow.Map (Type)

An ArrowVector where each element is a "map" of some kind, like a Dict.

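A minimal sketch with a Dict column (the keys and values are illustrative):

```julia
using Arrow

io = IOBuffer()
Arrow.write(io, (m = [Dict("a" => 1), Dict("b" => 2, "c" => 3)],))
tbl = Arrow.Table(seekstart(io))
tbl.m[1]  # reads back as a Dict-like value
```
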
## Arrow.Primitive (Type)

An ArrowVector where each element is a "fixed size" scalar of some kind, like an integer, float, decimal, or time type.

## Arrow.SparseUnion (Type)

An ArrowVector where the type of each element is one of a fixed set of types, meaning its eltype is like a Julia Union{type1, type2, ...}. An Arrow.SparseUnion, in comparison to Arrow.DenseUnion, stores elements in a set of arrays, one array per possible type, where each typed array has the same length as the full array. This ends up with "wasted" space, since only one slot among the typed arrays is valid per full array element, but it can allow for certain optimizations since each typed array has the same length as the full array.

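A minimal sketch; passing denseunions=false to Arrow.write selects this layout:

```julia
using Arrow

io = IOBuffer()
v = Union{Int64, String}[1, "a", 2]
Arrow.write(io, (col = v,); denseunions=false)
tbl = Arrow.Table(seekstart(io))
eltype(tbl.col)  # Union{Int64, String}
```
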
## Arrow.Stream (Type)
Arrow.Stream(io::IO; convert::Bool=true)
Arrow.Stream(file::String; convert::Bool=true)
Arrow.Stream(bytes::Vector{UInt8}, pos=1, len=nothing; convert::Bool=true)
Arrow.Stream(inputs::Vector; convert::Bool=true)

Start reading an arrow formatted table, from:

• io, bytes will be read all at once via read(io)
• file, bytes will be read via Mmap.mmap(file)
• bytes, a byte vector directly, optionally specifying the starting byte position pos and the length len
• a Vector of any of the above, in which case each input should be an arrow IPC stream or file, and all inputs must have the same schema

Reads the initial schema message from the arrow stream/file, then returns an Arrow.Stream object which will iterate over record batch messages, producing an Arrow.Table on each iteration.

Because iteration produces Arrow.Table objects, Arrow.Stream satisfies the Tables.partitions interface, and as such can be passed to Tables.jl-compatible sink functions.

This allows iterating over extremely large "arrow tables" in chunks represented as record batches.

Supports the convert keyword argument which controls whether certain arrow primitive types will be lazily converted to more friendly Julia defaults; by default, convert=true.

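A minimal sketch of batch-wise iteration (the file path and column name are illustrative):

```julia
using Arrow

# "data.arrows" is a hypothetical file containing multiple record batches
for batch in Arrow.Stream("data.arrows")
    # each batch is an Arrow.Table; process it without loading everything at once
    println(length(batch.col1))  # assumes a column named col1
end
```
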
## Arrow.Struct (Type)

An ArrowVector where each element is a "struct" of some kind with ordered, named fields, like a NamedTuple{names, types} or a regular Julia struct.

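For example, a column of NamedTuple elements maps to this layout (a minimal sketch):

```julia
using Arrow

io = IOBuffer()
Arrow.write(io, (s = [(a = 1, b = "x"), (a = 2, b = "y")],))
tbl = Arrow.Table(seekstart(io))
tbl.s[1]  # reads back as a NamedTuple: (a = 1, b = "x")
```
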
## Arrow.Table (Type)
Arrow.Table(io::IO; convert::Bool=true)
Arrow.Table(file::String; convert::Bool=true)
Arrow.Table(bytes::Vector{UInt8}, pos=1, len=nothing; convert::Bool=true)
Arrow.Table(inputs::Vector; convert::Bool=true)

Read an arrow formatted table, from:

• io, bytes will be read all at once via read(io)
• file, bytes will be read via Mmap.mmap(file)
• bytes, a byte vector directly, optionally specifying the starting byte position pos and the length len
• a Vector of any of the above, in which case each input should be an arrow IPC stream or file, and all inputs must have the same schema

Returns an Arrow.Table object that allows column access via table.col1, table[:col1], or table[1].

NOTE: the columns in an Arrow.Table are views into the original arrow memory, and hence are not easily modifiable (with e.g. push!, append!, etc.). To mutate arrow columns, call copy(x) to materialize the arrow data as a normal Julia array.

Arrow.Table also satisfies the Tables.jl interface, and so can easily be materialized via any supporting sink function: e.g. DataFrame(Arrow.Table(file)), SQLite.load!(db, "table", Arrow.Table(file)), etc.

Supports the convert keyword argument which controls whether certain arrow primitive types will be lazily converted to more friendly Julia defaults; by default, convert=true.

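A minimal sketch of the access patterns described above (column names are illustrative):

```julia
using Arrow

io = IOBuffer()
Arrow.write(io, (col1 = [1, 2, 3], col2 = ["a", "b", "c"]))
tbl = Arrow.Table(seekstart(io))
tbl.col1        # property access
tbl[:col2]      # access by Symbol
tbl[1]          # access by index
copy(tbl.col1)  # materialize a mutable Julia Vector
```
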
## Arrow.ToTimestamp (Type)
Arrow.ToTimestamp(x::AbstractVector{ZonedDateTime})

Wrapper array that provides a more efficient encoding of ZonedDateTime elements to the arrow format. In the arrow format, timestamp columns with timezone information are encoded as the arrow equivalent of a Julia type parameter, meaning an entire column should have elements all with the same timezone. If a ZonedDateTime column is passed to Arrow.write, for correctness, it must scan each element to check its timezone. Arrow.ToTimestamp provides a "bypass" of this process by encoding the timezone of the first element of the AbstractVector{ZonedDateTime}, which in turn allows Arrow.write to avoid costly checking/conversion and encode the ZonedDateTime column as Arrow.Timestamp directly.

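A minimal sketch (requires the TimeZones package; the column name is illustrative):

```julia
using Arrow, TimeZones

zdts = [ZonedDateTime(2021, 1, d, tz"UTC") for d in 1:3]
io = IOBuffer()
# wrapping avoids scanning every element for its timezone on write
Arrow.write(io, (ts = Arrow.ToTimestamp(zdts),))
```
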
## Arrow.ValidityBitmap (Type)

A bit-packed array type where each bit corresponds to an element in an ArrowVector, indicating whether that element is "valid" (bit == 1), or not (bit == 0). Used to indicate element missingness (whether it's null).

If the null count of an array is zero, the ValidityBitmap will be "empty" and all elements are treated as "valid"/non-null.

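The bitmap is an internal detail, but its effect is visible whenever a column contains missing values (a minimal sketch):

```julia
using Arrow

io = IOBuffer()
Arrow.write(io, (x = [1, missing, 3],))
tbl = Arrow.Table(seekstart(io))
eltype(tbl.x)  # Union{Int64, Missing}; nulls are tracked via the validity bitmap
```
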
## Arrow.append (Function)
Arrow.append(io::IO, tbl)
Arrow.append(file::String, tbl)
tbl |> Arrow.append(file)

Append any Tables.jl-compatible tbl to an existing arrow formatted file or IO. The existing arrow data must be in IPC stream format. Note that appending to the "feather formatted file" is not allowed, as this file format doesn't support appending. That means files written like Arrow.write(filename::String, tbl) cannot be appended to; instead, you should write like Arrow.write(filename::String, tbl; file=false).

When an IO object is provided to append to, it must support seeking. For example, a file opened in r+ mode, or an IOBuffer that is readable, writable, and seekable, can be appended to, but a network stream cannot.

Multiple record batches will be written based on the number of Tables.partitions(tbl) that are provided; by default, this is just one for a given table, but some table sources support automatic partitioning. You can turn multiple table objects into partitions by doing Tables.partitioner([tbl1, tbl2, ...]), but note that each table must have the exact same Tables.Schema.

By default, Arrow.append will use multiple threads to write multiple record batches simultaneously (e.g. if julia is started with julia -t 8 or the JULIA_NUM_THREADS environment variable is set).

Supported keyword arguments to Arrow.append include:

• alignment::Int=8: specify the number of bytes to align buffers to when written in messages; strongly recommended to only use alignment values of 8 or 64 for modern memory cache line optimization
• colmetadata=nothing: the metadata that should be written as the table's columns' custom_metadata fields; must either be nothing or an AbstractDict of column_name::Symbol => column_metadata where column_metadata is an iterable of <:AbstractString pairs.
• dictencode::Bool=false: whether all columns should use dictionary encoding when being written; to dict encode specific columns, wrap the column/array in Arrow.DictEncode(col)
• dictencodenested::Bool=false: whether nested data type columns should also dict encode nested arrays/buffers; other language implementations may not support this
• denseunions::Bool=true: whether Julia Vector{<:Union} arrays should be written using the dense union layout; passing false will result in the sparse union layout
• largelists::Bool=false: causes list column types to be written with Int64 offset arrays; mainly for testing purposes; by default, Int64 offsets will be used only if needed
• maxdepth::Int=6: deepest allowed nested serialization level; this is provided by default to prevent accidental infinite recursion with mutually recursive data structures
• metadata=Arrow.getmetadata(tbl): the metadata that should be written as the table's schema's custom_metadata field; must either be nothing or an iterable of <:AbstractString pairs.
• ntasks::Int: number of concurrent threaded tasks to allow while writing input partitions out as arrow record batches; default is no limit; to disable multithreaded writing, pass ntasks=1
• convert::Bool: whether certain arrow primitive types in the schema of the file should be converted to Julia defaults for matching them to the schema of tbl; by default, convert=true.
• file::Bool: applicable when an IO is provided, whether it is a file; by default file=false.
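
A minimal sketch (the file name and tables are illustrative):

```julia
using Arrow

t1 = (a = [1, 2], b = ["x", "y"])
t2 = (a = [3, 4], b = ["z", "w"])

# write in the IPC stream format, since the file format can't be appended to
Arrow.write("data.arrows", t1; file=false)
Arrow.append("data.arrows", t2)
```
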
## Arrow.getmetadata (Method)
Arrow.getmetadata(x)

If x isa Arrow.Table, return a Base.ImmutableDict{String,String} representation of x's Schema custom_metadata, or nothing if no such metadata exists.

If x isa Arrow.ArrowVector, return a Base.ImmutableDict{String,String} representation of x's Field custom_metadata, or nothing if no such metadata exists.

Otherwise, return nothing.

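A minimal round-trip sketch (the metadata key and value are illustrative):

```julia
using Arrow

io = IOBuffer()
Arrow.write(io, (x = [1, 2],); metadata = ["source" => "sensor-1"])
tbl = Arrow.Table(seekstart(io))
Arrow.getmetadata(tbl)  # Base.ImmutableDict("source" => "sensor-1")
```
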
## Arrow.write (Function)
Arrow.write(io::IO, tbl)
Arrow.write(file::String, tbl)
tbl |> Arrow.write(io_or_file)

Write any Tables.jl-compatible tbl out as arrow formatted data. Providing an io::IO argument will cause the data to be written to it in the "streaming" format, unless the file=true keyword argument is passed. Providing a file::String argument will result in the "file" format being written.

Multiple record batches will be written based on the number of Tables.partitions(tbl) that are provided; by default, this is just one for a given table, but some table sources support automatic partitioning. You can turn multiple table objects into partitions by doing Tables.partitioner([tbl1, tbl2, ...]), but note that each table must have the exact same Tables.Schema.

By default, Arrow.write will use multiple threads to write multiple record batches simultaneously (e.g. if julia is started with julia -t 8 or the JULIA_NUM_THREADS environment variable is set).

Supported keyword arguments to Arrow.write include:

• colmetadata=nothing: the metadata that should be written as the table's columns' custom_metadata fields; must either be nothing or an AbstractDict of column_name::Symbol => column_metadata where column_metadata is an iterable of <:AbstractString pairs.
• compress: possible values include :lz4, :zstd, or your own initialized LZ4FrameCompressor or ZstdCompressor objects; will cause all buffers in each record batch to use the respective compression encoding
• alignment::Int=8: specify the number of bytes to align buffers to when written in messages; strongly recommended to only use alignment values of 8 or 64 for modern memory cache line optimization
• dictencode::Bool=false: whether all columns should use dictionary encoding when being written; to dict encode specific columns, wrap the column/array in Arrow.DictEncode(col)
• dictencodenested::Bool=false: whether nested data type columns should also dict encode nested arrays/buffers; other language implementations may not support this
• denseunions::Bool=true: whether Julia Vector{<:Union} arrays should be written using the dense union layout; passing false will result in the sparse union layout
• largelists::Bool=false: causes list column types to be written with Int64 offset arrays; mainly for testing purposes; by default, Int64 offsets will be used only if needed
• maxdepth::Int=6: deepest allowed nested serialization level; this is provided by default to prevent accidental infinite recursion with mutually recursive data structures
• metadata=Arrow.getmetadata(tbl): the metadata that should be written as the table's schema's custom_metadata field; must either be nothing or an iterable of <:AbstractString pairs.
• ntasks::Int: number of concurrent threaded tasks to allow while writing input partitions out as arrow record batches; default is no limit; to disable multithreaded writing, pass ntasks=1
• file::Bool=false: if an io argument is being written to, passing file=true will cause the arrow file format to be written instead of the IPC streaming format
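
A minimal sketch of the common forms (file names and the table are illustrative):

```julia
using Arrow

tbl = (a = [1, 2, 3], b = ["x", "y", "z"])

Arrow.write("data.arrow", tbl)                      # arrow "file" format
Arrow.write("data.arrows", tbl; file=false)         # IPC streaming format
Arrow.write("data.zst.arrow", tbl; compress=:zstd)  # compress record batch buffers
```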