| commit | 098064b8c94c42e86deb7689cb4648ca39f54b2c |
|---|---|
| author | Josh Stone <jistone@redhat.com>, Mon Nov 13 11:26:47 2017 -0800 |
| committer | Andrew Gallant <jamslam@gmail.com>, Wed Nov 29 16:07:51 2017 -0500 |
| tree | 4dc77daedb1695cfe04f3f12849d9b0ce113fdce |
| parent | fef20557fa42c4f9f3f74ef0df08cf48fe80aaaa |
Fix prop_ext_[u]int_*::native_endian on BE targets

The similar `big_endian` tests were using an offset to read from the end of the written `u64`, but the `native_endian` tests were reading directly, just like the `little_endian` tests. That is, of course, only correct when the target actually is little-endian.

The `big_endian` offset is now sliced directly, instead of cloning into another vector, and this logic is also used in the `native_endian` test, depending on the current `#[cfg(target_endian)]`.

Fixes #102.
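The core issue can be sketched with plain standard-library calls (this is an illustration of the endianness behavior, not the crate's actual test code): when a small value is written into a buffer as a `u64` in native order, its significant bytes sit at the start of the buffer on little-endian targets but at the end on big-endian targets, so the read offset must depend on `#[cfg(target_endian)]`.

```rust
fn main() {
    let n: u16 = 517;
    // Write the value into an 8-byte buffer as a u64 in native byte order.
    let buf = (n as u64).to_ne_bytes();

    // On little-endian targets the value's bytes are at the start of the
    // buffer; on big-endian targets they are at the end. Reading from the
    // start unconditionally is only correct on little-endian targets.
    #[cfg(target_endian = "little")]
    let slice = &buf[..2];
    #[cfg(target_endian = "big")]
    let slice = &buf[6..];

    let back = u16::from_ne_bytes([slice[0], slice[1]]);
    assert_eq!(back, 517);
}
```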
This crate provides convenience methods for encoding and decoding numbers in either big-endian or little-endian order.
Dual-licensed under MIT or the UNLICENSE.
This crate works with Cargo and is on crates.io. Add it to your `Cargo.toml` like so:

```toml
[dependencies]
byteorder = "1"
```
If you want to augment the existing `Read` and `Write` traits, then import the extension methods like so:

```rust
extern crate byteorder;

use byteorder::{ReadBytesExt, WriteBytesExt, BigEndian, LittleEndian};
```
For example:

```rust
use std::io::Cursor;
use byteorder::{BigEndian, ReadBytesExt};

let mut rdr = Cursor::new(vec![2, 5, 3, 0]);
// Note that we use type parameters to indicate which kind of byte order
// we want!
assert_eq!(517, rdr.read_u16::<BigEndian>().unwrap());
assert_eq!(768, rdr.read_u16::<BigEndian>().unwrap());
```
no_std crates

This crate has a feature, `std`, that is enabled by default. To use this crate in a `no_std` context, add the following to your `Cargo.toml`:
```toml
[dependencies]
byteorder = { version = "1", default-features = false }
```