Our biggest repository is over 100TB, but volume size has very little impact on client-side performance because volume size doesn't directly correlate with database size (in general, 1,000 1GB files will have less impact on overall performance than 10,000,000 1KB files).
The important thing for maintenance and upkeep is to manage your volumes efficiently. We use logical volumes and point the volume at a new drive when it gets close to 4TB, which keeps backups and migrations manageable. Existing volumes are unaffected when you change the logical volume, so you can keep rolling it over indefinitely to spread data across multiple drives. Keep in mind, though, that pages or electronic documents added to an existing entry go to the same volume as that entry, so leave a little breathing room on each drive.
Document counts, page counts, metadata, and other items that increase the size of the database have a much larger impact on performance, especially on searches (as Chris pointed out, the database has substantially more impact on performance than volume size).
Performance is also affected by the number and types of columns displayed in the clients; more columns mean more data to retrieve, and calculated columns such as page count and total document size are much more expensive.
Another thing to consider is your folder structure; too many items in a single folder can hurt performance regardless of the overall repository size.
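To audit for crowded folders, a sketch like the following could help. Note this walks a filesystem tree as a stand-in; repository folders live in the database, so you would adapt the idea to however you enumerate entries. The 5,000-item limit is a placeholder, not a documented threshold.

```python
import os
import tempfile

# Hypothetical threshold; the point is that very large folders slow the
# client down, whatever the exact number turns out to be for your setup.
MAX_ITEMS_PER_FOLDER = 5000

def crowded_folders(root: str, limit: int = MAX_ITEMS_PER_FOLDER):
    """Yield (path, count) for folders whose direct child count exceeds limit."""
    for dirpath, dirnames, filenames in os.walk(root):
        count = len(dirnames) + len(filenames)
        if count > limit:
            yield dirpath, count

# Demo on a throwaway directory with a deliberately low limit:
with tempfile.TemporaryDirectory() as root:
    for i in range(12):
        open(os.path.join(root, f"doc_{i}.txt"), "w").close()
    print(list(crowded_folders(root, limit=10)))  # root exceeds the limit
```

Flagged folders are candidates for splitting into subfolders (by year, department, and so on) before users start noticing slow browsing.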