If you want to use the official Parquet Java library implementation,
you’ll quickly see that it brings along Hadoop as a large, cumbersome transitive
dependency. This makes it complicated to use parquet in small systems and simple
use cases.
In this post, I’ll show you how you can eliminate almost all of the Hadoop
dependency. I’m using this technique in production systems that need to export
data with a clean, structured data schema. This works for both reading and writing Parquet files with the official Java implementation.
In one case, I’ve seen an over 85% reduction in shaded jar size by cutting out most of
the Hadoop transitive dependencies.
This technique can be summarized by:
Switch to the non-Hadoop Parquet interfaces.
Remove all Hadoop imports from your code.
Explicitly bring in the transitive dependencies that are still part of Parquet’s import graph, and exclude all other Hadoop dependencies.
Since we also want to write Parquet files out to S3 object storage, we’re going
to use AWS’ Java NIO FileSystem SPI project, which makes it easy to read and
write directly to S3 without having to implement the more intricate S3 I/O
ourselves.
Here are these steps in more detail.
Switch to non-Hadoop Parquet interfaces
Many of the reader and writer interfaces in parquet-java (formerly parquet-mr)
are explicitly coupled to Hadoop classes. If all you want to do is read and
write Parquet files, you’re at a minimum going to interact with ParquetReader,
ParquetWriter, and their subclasses such as AvroParquetWriter or
ProtoParquetWriter. All of these classes have constructors that take in
instances of
org.apache.hadoop.conf.Configuration to configure the writer,
and will also specify input and output paths with a Hadoop filesystem
Path.
You need to ensure that none of your code depends on these interfaces. Specifically:
Replace Hadoop Configuration with an instance of ParquetConfiguration, most likely PlainParquetConfiguration. All parquet readers and writers should have constructor interfaces that take in this configuration object, instead of the Hadoop variant.
Replace all instances of Hadoop Path with either a parquet InputFile for reads, or an OutputFile for writes. If you want to write to your local filesystem, you can use the LocalInputFile or LocalOutputFile implementations.
Concretely, if you were writing out Parquet files using an AvroParquetWriter
to export to a local FileSystem, you would change your code from something like this:
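(The schema, record, and output path below are placeholders for illustration.)

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;

// Placeholder schema and record, just to make the example self-contained.
Schema schema = SchemaBuilder.record("Export").fields()
    .requiredString("name")
    .endRecord();
GenericRecord record = new GenericData.Record(schema);
record.put("name", "example");

// Hadoop-coupled interfaces: Configuration and Path both come from org.apache.hadoop.*
Configuration conf = new Configuration();
Path outputPath = new Path("/tmp/export.parquet");

try (ParquetWriter<GenericRecord> writer =
    AvroParquetWriter.<GenericRecord>builder(outputPath)
        .withSchema(schema)
        .withConf(conf)
        .withCompressionCodec(CompressionCodecName.SNAPPY)
        .build()) {
  writer.write(record);
}
```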
To something like this instead:
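(Same placeholder schema and record as above; only the I/O and configuration types change.)

```java
import java.nio.file.Paths;
import org.apache.avro.generic.GenericRecord;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.conf.PlainParquetConfiguration;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import org.apache.parquet.io.LocalOutputFile;

// No Hadoop types: a java.nio.file.Path wrapped in Parquet's own OutputFile,
// plus PlainParquetConfiguration instead of org.apache.hadoop.conf.Configuration.
java.nio.file.Path outputPath = Paths.get("/tmp/export.parquet");

try (ParquetWriter<GenericRecord> writer =
    AvroParquetWriter.<GenericRecord>builder(new LocalOutputFile(outputPath))
        .withSchema(schema)  // same placeholder Avro schema as before
        .withConf(new PlainParquetConfiguration())
        .withCompressionCodec(CompressionCodecName.SNAPPY)
        .build()) {
  writer.write(record);
}
```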
Notice how we’ve switched to using Java’s built-in NIO interfaces for file I/O.
In a later section, we’ll see how we can use these same interfaces to do direct
I/O to S3 instead of local file system operations.
After you’ve moved over to these interfaces, you should remove all unused
org.apache.hadoop.* imports from your code, before proceeding to the next
step.
Dependency Changes
Even though none of your code references anything from Hadoop anymore, we still
can’t explicitly exclude all transitive Hadoop dependencies. This is because the
core reader / writer classes we’re using still bring in these Hadoop
dependencies as imports. While our technique will not explicitly use any of
these Hadoop code paths, they are still referenced as imports from classes like
ParquetWriter and friends.
I’m guessing that when Parquet moves to 2.0, the Parquet team will remove this explicit
Hadoop coupling, but doing so any sooner would mean breaking interface changes.
Until then, we’ll need to do some transitive dependency surgery ourselves.
We explicitly bring in hadoop-common, and intentionally only bring in
dependencies that are referenced through the import graph. Everything else is
excluded. There might be a better way to do this with Maven, but this has been
working so far.
For parquet-java version 1.14.1, here are the relevant Maven POM exclusions:
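What follows is a sketch of the shape this takes rather than a drop-in snippet; the hadoop-common version and the re-declared artifacts are illustrative, and the exact set you need depends on which Parquet code paths you exercise:

```xml
<dependency>
  <groupId>org.apache.parquet</groupId>
  <artifactId>parquet-avro</artifactId>
  <version>1.14.1</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>3.3.6</version> <!-- illustrative version -->
  <exclusions>
    <!-- Drop everything hadoop-common would otherwise pull in transitively. -->
    <exclusion>
      <groupId>*</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- Re-declare only the transitives still reachable through the import graph,
     for example Hadoop's shaded Guava used by its Configuration classes. -->
<dependency>
  <groupId>org.apache.hadoop.thirdparty</groupId>
  <artifactId>hadoop-shaded-guava</artifactId>
  <version>1.2.0</version> <!-- illustrative version -->
</dependency>
```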
Like I mentioned, cutting out all these transitives, along with hadoop-aws,
took one production shaded jar from ~657MB down to around ~96MB.
If upstream parquet removes all Hadoop interface coupling, we’d be able to get
rid of this ugly maven hack altogether.
S3 Reading and Writing
So now we can, in theory, read and write Parquet files on the local
filesystem using Java’s built-in NIO interfaces, but what about direct I/O to S3?
One of the niceties of using the Hadoop FileSystem abstraction was that we
could read and write parquet files directly from blob storage, and we’d like to
recreate that.
Let’s fix that by leveraging AWS’ Java NIO SPI for S3.
This isn’t strictly necessary, but its implementation already provides a
seekable byte channel that handles I/O buffering for us. If you’d rather
not bring this library into scope, you’re at a minimum going to need to
recreate InputFile and OutputFile implementations that can buffer I/O to and
from S3 to your liking.
If you follow the README directions to configure your credentials for S3, you’re 95% of
the way to being able to just plug in directly to Parquet reading and writing.
Instead of looking up java.nio.file.Path objects from our local
FileSystem, we look them up from the provided S3 filesystem implementation. In
short (both sides are sketched below):
For reading and writing, look up Path objects from the S3FileSystem.
For writing, the upstream LocalOutputFile implementation works just fine with S3 paths.
For reading, you’ll need a different InputFile implementation that works correctly with NIO interfaces and doesn’t fall back to legacy File operations.
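For example, looking up an output path and wrapping it for writes might look like this (the bucket and key are placeholders, and this assumes the S3 NIO provider is on your classpath with credentials configured):

```java
import java.net.URI;
import java.nio.file.Path;
import java.nio.file.Paths;
import org.apache.parquet.io.LocalOutputFile;
import org.apache.parquet.io.OutputFile;

// The S3 NIO provider registers the "s3" scheme, so an s3:// URI resolves to a
// Path backed by its FileSystem. The bucket and key here are placeholders.
Path s3Path = Paths.get(URI.create("s3://my-example-bucket/exports/data.parquet"));

// Per the note above, the upstream LocalOutputFile works fine with S3-backed
// Paths, since it only needs java.nio APIs to write.
OutputFile outputFile = new LocalOutputFile(s3Path);
```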
For the read side, here’s an InputFile implementation that should work with our
S3 NIO FileSystem and depends only on NIO-compatible interfaces:
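One way to build this, leaning on Parquet’s DelegatingSeekableInputStream to supply the buffered read methods:

```java
import java.io.IOException;
import java.nio.channels.Channels;
import java.nio.channels.SeekableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import org.apache.parquet.io.DelegatingSeekableInputStream;
import org.apache.parquet.io.InputFile;
import org.apache.parquet.io.SeekableInputStream;

/**
 * An InputFile backed purely by java.nio interfaces, so it works with any
 * installed FileSystem provider (local, S3, ...) and never touches java.io.File.
 */
public class NioInputFile implements InputFile {

  private final Path path;

  public NioInputFile(Path path) {
    this.path = path;
  }

  @Override
  public long getLength() throws IOException {
    return Files.size(path);
  }

  @Override
  public SeekableInputStream newStream() throws IOException {
    final SeekableByteChannel channel = Files.newByteChannel(path);

    // DelegatingSeekableInputStream supplies all of the read methods by
    // delegating to the wrapped stream; we only add position tracking and
    // seeking on top of the underlying channel.
    return new DelegatingSeekableInputStream(Channels.newInputStream(channel)) {
      @Override
      public long getPos() throws IOException {
        return channel.position();
      }

      @Override
      public void seek(long newPos) throws IOException {
        channel.position(newPos);
      }
    };
  }
}
```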
I’d like to get this implementation pushed upstream, either to replace the
LocalInputFile implementation, or to sit alongside it for use cases like this
where we want to plug in NIO FileSystem implementations that can’t fall
back to legacy File interfaces.
Conclusion
We can now read / write Parquet files, using the official Java implementation,
without any explicit dependencies on Hadoop, while still reading and writing
directly to S3 blob storage. Not only does this reduce our jar sizes, it also
cuts down on classpath dependency sprawl. You can embed Parquet functionality
inside smaller codebases where carrying around a cumbersome Hadoop dependency
would be a complete non-starter.
Parquet is an amazing file format that’s going to be here for a long time,
especially in our current age of cheap blob storage. One of the biggest things
holding parquet-java back from being ubiquitously usable is issues like this,
where the implementation bloats your codebases and deployables. I’m eager to
help however I can to reduce parquet-java’s dependency on Hadoop and bring the
benefits of Parquet to more codebases.
“We need to deliver that feature for the business”, words you might have heard at a daily standup meeting.
or
“The business needs this data analysis so it can make a decision”, as you get a CSV dump of the latest sales transactions and hand it off somewhere.
This othering kind of language gives away your power. Treating “the business” like it’s some other entity “over there” that
you’re beneath diminishes your ability to succeed in your software craft. You’ve put yourself on the wrong side of a dichotomy that exists only in your mind.
As a software engineer, you are the business, just as much as anyone in sales, marketing, or HR at your company. The business
isn’t something that happens somewhere else. It’s happening right now, with you doing your work, with you delivering useful things to customers that
need what you’re making.
Try this language instead:
“We need to deliver that feature for our biggest medical customers”
“Our marketing team needs this data analysis so they can plan our next release launch”
The funny thing is: Once you change your language like this, your mindset and your actions change too. You’re taking responsibility
for the outcomes you’re driving towards. You have a human connection to the recipients of your work. But most importantly,
you’ve recognized your own power, and not given it away to an unnamed “business person” who doesn’t actually exist at
your company.
This post builds on my last post, which covered how to develop your own NixOS tests for custom packages and services.
If you want to contribute to or work on upstream nixpkgs NixOS module tests, here are a few pointers I picked up while experimenting with a flake-based trial-and-error workflow.
Run the test with: nix run '.#nixosTests.$TEST_NAME.driver'
For example, to run the nginx NixOS tests via flakes, run:
$ nix run '.#nixosTests.nginx.driver'
You can also run the tests with an interactive shell via:
$ nix run '.#nixosTests.nginx.driverInteractive'
Nix package derivations will often link a related test suite via a passthru.tests attribute, so you can execute the affected tests when
you update or change a package. For example, the gotosocial package derivation links the tests like this:
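Roughly like so (the exact attribute layout in the nixpkgs derivation may differ slightly):

```nix
# Inside the gotosocial package derivation: expose the related NixOS VM test so
# tooling (and `nix run '.#gotosocial.tests.gotosocial.driver'`) can find it.
passthru.tests = {
  inherit (nixosTests) gotosocial;
};
```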
So another way to run the tests is via the linked attribute in the package derivation like so:
$ nix run '.#gotosocial.tests.gotosocial.driver'
Just like in my previous post, a QEMU VM will boot and execute the test suite and print its results to your terminal window.