UDP and TCP differ in that UDP is a 'fire and forget' protocol, useful for broadcast functionality (such as a computer announcing on a network that it is alive). No connection between the computers needs to be set up first.
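The fire-and-forget idea can be sketched in a few lines of Python. This is a minimal illustration, not production code; the port and message are arbitrary, and both ends run on loopback just to show that a datagram goes out with no handshake and no acknowledgement:

```python
import socket

# Receiver: bind a UDP socket; the OS picks a free port for us.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
port = recv.getsockname()[1]

# Sender: no connect(), no handshake -- just fire and forget.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"I am alive", ("127.0.0.1", port))

# The datagram arrives (on loopback it reliably will), but UDP itself
# gave the sender no confirmation of that.
data, addr = recv.recvfrom(1024)
print(data.decode())
```

If the receiver were not listening, `sendto` would still succeed from the sender's point of view; that is exactly the trade-off the paragraph above describes.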
TCP, on the other hand, sets up a connection and manages the delivery of the packets. If the TCP application is well written, it will detect when the connection has died and will attempt to re-establish it. Because TCP manages delivery much more closely than UDP, it is a heavier protocol. This is one of the reasons why (for example) a 56k modem connection doesn't deliver the full 56k of data bandwidth. It may be delivering something near that, but a fair amount of the bandwidth is taken up by the TCP headers wrapped around the data packets, by acknowledgements, by packets being re-sent, etc. UDP is a much lighter-weight protocol that can be used when speed is more of an issue than guaranteed delivery.
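The "well-written application re-establishes the connection" point can be sketched like this. It's an illustrative pattern only; the host, port, timeout, and retry count are assumptions for the example, not anything prescribed by TCP itself:

```python
import socket

def send_with_retry(host: str, port: int, payload: bytes, retries: int = 3) -> bool:
    """Try to deliver payload over TCP, reconnecting if the connection fails."""
    for attempt in range(retries):
        try:
            # create_connection performs the TCP handshake (connection setup).
            with socket.create_connection((host, port), timeout=5) as conn:
                # Once connected, TCP itself handles acks and retransmission.
                conn.sendall(payload)
                return True
        except OSError:
            # Refused, reset, or timed out: the app notices the dead
            # connection and simply tries to set up a new one.
            continue
    return False
```

Note the division of labour: TCP guarantees delivery *within* a live connection, but detecting a dead connection and reconnecting is still the application's job.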
At the end of the day most applications don't really care that much which transport protocol is used to send the data packets. I used to have a lot of fun changing the default in Win NT to various formats. The default is UDP, but many firewalls won't allow that through, so I changed a couple of my servers to tunnel over HTTP instead, which got around the problem (as long as the client machine was set up similarly).