Apache Commons Compress supports both file streams (via compressors) and structured content (via archivers).
The library handles both compression and decompression and works with a large number of archive formats, making it one of the best-known solutions of its kind in the Java community.
The library isn't perfect, and support for some archive formats is still under development, but overall, if you need to support many compression formats and don't want to pull in a bulky individual library for each one, the Commons Compress package might be your best answer.
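As a minimal sketch of the compressor side of the API, here is a gzip round-trip through Commons Compress streams (this assumes commons-compress is on the classpath; the sample text is illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream;
import org.apache.commons.compress.compressors.gzip.GzipCompressorOutputStream;

public class GzipRoundTrip {
    public static void main(String[] args) throws IOException {
        byte[] original = "hello, commons compress".getBytes(StandardCharsets.UTF_8);

        // Compress into an in-memory buffer using a compressor stream.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GzipCompressorOutputStream gzOut = new GzipCompressorOutputStream(buffer)) {
            gzOut.write(original);
        }

        // Decompress the buffer back into the original bytes.
        ByteArrayOutputStream restored = new ByteArrayOutputStream();
        try (GzipCompressorInputStream gzIn =
                 new GzipCompressorInputStream(new ByteArrayInputStream(buffer.toByteArray()))) {
            byte[] chunk = new byte[1024];
            int n;
            while ((n = gzIn.read(chunk)) != -1) {
                restored.write(chunk, 0, n);
            }
        }

        System.out.println(restored.toString("UTF-8")); // prints "hello, commons compress"
    }
}
```

The archiver side works analogously but deals in named entries rather than a single byte stream.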
Features:
- Supported archive file formats:
- 7z
- ar
- arj
- bzip2
- cpio
- DEFLATE
- gzip
- lzma
- Pack200
- snappy
- tar
- Unix dump
- XZ
- Zip
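Most of these formats are driven through a common entry-based archiver API. As a sketch, the following builds a one-entry tar archive in memory and reads it back (the entry name and payload are made-up examples; commons-compress is assumed on the classpath):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;

public class TarRoundTrip {
    public static void main(String[] args) throws IOException {
        byte[] content = "tar entry payload".getBytes(StandardCharsets.UTF_8);

        // Write a single-entry tar archive into memory.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (TarArchiveOutputStream tarOut = new TarArchiveOutputStream(buffer)) {
            TarArchiveEntry entry = new TarArchiveEntry("notes.txt");
            entry.setSize(content.length);
            tarOut.putArchiveEntry(entry);
            tarOut.write(content);
            tarOut.closeArchiveEntry();
        }

        // Read the archive back and list its entries.
        try (TarArchiveInputStream tarIn =
                 new TarArchiveInputStream(new ByteArrayInputStream(buffer.toByteArray()))) {
            TarArchiveEntry entry;
            while ((entry = tarIn.getNextTarEntry()) != null) {
                System.out.println(entry.getName() + " (" + entry.getSize() + " bytes)");
            }
        }
    }
}
```

The zip, ar, cpio and 7z formats follow the same put-entry/close-entry pattern with their own entry classes.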
What is new in this release:
- The snappy, ar and tar input streams might fail to read from a non-buffered stream in certain cases.
- IOUtils#skip might skip fewer bytes than requested even though more could be read from the stream.
- ArchiveStreams now validate there is a current entry before reading or writing entry data.
- ArjArchiveInputStream#canReadEntryData tested the current entry of the stream rather than its argument.
- ChangeSet#delete and deleteDir now properly deal with unnamed entries.
- Added a few null checks to improve robustness.
- TarArchiveInputStream failed to read archives with empty gid/uid fields.
- TarArchiveInputStream now again throws an exception when it encounters a truncated archive while reading from the last entry.
- Adapted TarArchiveInputStream#skip to the modified IOUtils#skip method. Thanks to BELUGA BEHR.
What is new in version 1.7:
- Read-Only support for Snappy compression.
- Read-Only support for .Z compressed files.
- ZipFile and ZipArchiveInputStream now support reading entries compressed using the SHRINKING method.
- GzipCompressorOutputStream now supports setting the compression level and the header metadata (filename, comment, modification time, operating system and extra flags)
- ZipFile and ZipArchiveInputStream now support reading entries compressed using the IMPLODE method.
- ZipFile and the 7z file classes now implement Closeable and can be used in try-with-resources constructs.
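The new gzip header support in 1.7 goes through a GzipParameters object passed to the output stream's constructor. A hedged sketch (the file name and comment values are made-up examples):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;
import org.apache.commons.compress.compressors.gzip.GzipCompressorOutputStream;
import org.apache.commons.compress.compressors.gzip.GzipParameters;

public class GzipWithMetadata {
    public static void main(String[] args) throws IOException {
        GzipParameters params = new GzipParameters();
        params.setCompressionLevel(Deflater.BEST_COMPRESSION); // level 9
        params.setFilename("report.txt");                      // stored in the gzip header
        params.setComment("nightly export");
        params.setModificationTime(System.currentTimeMillis());

        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GzipCompressorOutputStream gzOut =
                 new GzipCompressorOutputStream(buffer, params)) {
            gzOut.write("payload".getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("compressed size: " + buffer.size());
    }
}
```

The try-with-resources form shown here also applies directly to the ZipFile and 7z classes now that they implement Closeable.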
What is new in version 1.5:
- CompressorStreamFactory has an option to create decompressing streams that decompress the full input for formats that support multiple concatenated streams.
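A sketch of that option, assuming the setter is CompressorStreamFactory#setDecompressConcatenated as exposed in this release (the two-member gzip input simulates what `cat a.gz b.gz` would produce):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import org.apache.commons.compress.compressors.CompressorInputStream;
import org.apache.commons.compress.compressors.CompressorStreamFactory;
import org.apache.commons.compress.compressors.gzip.GzipCompressorOutputStream;

public class ConcatenatedGzip {
    public static void main(String[] args) throws Exception {
        // Build two gzip members back to back in one buffer.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        for (String part : new String[] {"first ", "second"}) {
            try (GzipCompressorOutputStream gz = new GzipCompressorOutputStream(buffer)) {
                gz.write(part.getBytes(StandardCharsets.UTF_8));
            }
        }

        // With decompressConcatenated enabled, both members are read as one stream;
        // without it, reading would stop after the first member.
        CompressorStreamFactory factory = new CompressorStreamFactory();
        factory.setDecompressConcatenated(true);
        ByteArrayOutputStream restored = new ByteArrayOutputStream();
        try (CompressorInputStream in = factory.createCompressorInputStream(
                CompressorStreamFactory.GZIP,
                new ByteArrayInputStream(buffer.toByteArray()))) {
            byte[] chunk = new byte[512];
            int n;
            while ((n = in.read(chunk)) != -1) {
                restored.write(chunk, 0, n);
            }
        }
        System.out.println(restored.toString("UTF-8")); // prints "first second"
    }
}
```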
What is new in version 1.4:
- Support for the XZ format has been added.
What is new in version 1.3:
- Support for the Pack200 format has been added.
- Read-only support for the format used by the Unix dump(8) tool has been added.
What is new in version 1.2:
- New features:
- ZipArchiveEntry has a new method getRawName that provides the original bytes that made up the name. This may allow user code to detect the encoding.
- TarArchiveEntry provides access to the flags that determine whether it is an archived symbolic link, pipe or other "uncommon" file system object.
- Fixed Bugs:
- ZipArchiveInputStream could fail with a "Truncated ZIP" error message for entries between 2 GByte and 4 GByte in size.
- TarArchiveInputStream now detects sparse entries using the oldgnu format and properly reports it cannot extract their contents.
- The Javadoc for ZipArchiveInputStream#skip now matches the implementation, the code has been made more defensive.
- ArArchiveInputStream fails if entries contain only blanks for userId or groupId.
- ZipFile may leak resources on some JDKs.
- BZip2CompressorInputStream throws IOException if underlying stream returns available() == 0. Removed the check.
- Calling close() on inputStream returned by CompressorStreamFactory.createCompressorInputStream() does not close the underlying input stream.
- TarArchiveOutputStream#finish now writes all buffered data to the stream.
- Changes:
- ZipFile now implements finalize which closes the underlying file.
- Certain tar files were not recognised by ArchiveStreamFactory.
Requirements:
- Java 5 or higher