Peter Duniho
11/6/2008 7:33:00 PM
On Thu, 06 Nov 2008 10:55:04 -0800, Jonathan Jones
<JonathanJones@discussions.microsoft.com> wrote:
> [...]
> As a test, I created a simple 6-byte array and passed it to
> BinaryFormatter.Serialize(stream, object) as the object to be
> serialized. The resulting MemoryStream was 35 bytes long! I know this
> is a poor example, because I could have just sent the 6 bytes via a
> direct Write() call, but is that kind of bloat normal? And more
> importantly, does it save time if you are sending 5-6 times the
> amount of data you would normally be sending?
Does it save what kind of time?
Obviously if you are sending five times as much data as was originally
contained in your data structure, then it will take five times as long to
send the data. It's trivial to show that's not "saving time" in terms of
data transmission.
But it certainly saves time in terms of development. Whether it saves
enough time to be worth it to you, I can't say. Sending data across a
network isn't rocket science. But it does take _some_ time to implement
your own serialize/deserialize logic. And if this is code you're going
to have to revisit on a regular basis -- if your message objects might
change, if you have to support different data sources (hardware), etc.
-- then it might be worthwhile to just use the built-in stuff rather
than maintain your own serialization.
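For the record, here's a minimal sketch of the test the original poster
described. The exact serialized length can vary between framework
versions, but the point is that BinaryFormatter wraps the raw bytes in
type metadata, so the output is always noticeably larger than the
payload itself:

```csharp
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

class SerializeOverheadDemo
{
    static void Main()
    {
        byte[] payload = new byte[6];   // the 6-byte array from the question

        using (MemoryStream ms = new MemoryStream())
        {
            BinaryFormatter formatter = new BinaryFormatter();

            // The formatter writes a serialization header and the type
            // information for byte[] in addition to the six data bytes.
            formatter.Serialize(ms, payload);

            // The poster measured 35 bytes here; whatever the exact
            // figure, ms.Length will be well above 6.
            Console.WriteLine("Serialized length: {0}", ms.Length);
        }
    }
}
```

That fixed-ish metadata overhead is per-object, so it matters a lot for
tiny messages and proportionally less for big ones.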
Note that for larger amounts of data (say, 1K or more), you may be able
to use the GZipStream class (in System.IO.Compression) to compress the
data. Yes, it seems kind of silly to bloat the data and then try to
compress it back down again. But again, if that's the more maintainable
approach, maybe it's good enough for your purposes.
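A sketch of the compression step, assuming you already have the
serialized bytes in hand (note the gzip format itself adds a header and
footer -- on the order of 18 bytes -- which is why this only pays off
for larger payloads):

```csharp
using System.IO;
using System.IO.Compression;

static class CompressHelper
{
    // Returns a gzip-compressed copy of 'data'.
    static byte[] Compress(byte[] data)
    {
        using (MemoryStream output = new MemoryStream())
        {
            // CompressionMode.Compress wraps the output stream; bytes
            // written through the GZipStream are deflated before they
            // reach 'output'.
            using (GZipStream gzip =
                new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(data, 0, data.Length);
            }
            // Dispose the GZipStream before reading the result, so the
            // gzip footer gets flushed to the MemoryStream.
            return output.ToArray();
        }
    }
}
```

On the receiving end you'd do the mirror image with
CompressionMode.Decompress before handing the bytes to the
deserializer.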
After all, you're already doing something silly like writing to a
MemoryStream and then passing the generated byte array to a
NetworkStream. :)
Pete