Depends how scalable it needs to be.
For 350 items into 10 bins, I wouldn't hesitate to brute-force it.
My approach would be similar to that outlined by Diver300.
I'd maintain a 350×10 array to hold the results.
For each starting point [0..349], I'd store the resulting 10 bin weights so we can do stats on them at the end.
Get the total weight, divide by 10 to get the ideal bin weight.
Choose a starting point.
Add items sequentially till you go over the nominal bin weight.
Then, we have to make a decision.
Include the item that pushed us over the limit, for an overweight bin, or exclude it for an under-weight bin?
Perhaps go with whichever choice leaves the bin weight closest to the target.
Move to the next bin, and repeat.
Once you've got all 10 bins, increment the starting point and do it all again.
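The loop above can be sketched roughly as follows. This is a minimal Python illustration, not a definitive implementation: the function name, the wrap-around ordering, and the "closest to target" tie-break are my assumptions.

```python
import random

def greedy_fill(weights, start, n_bins=10):
    """Fill bins greedily from a given starting index, wrapping around.

    Each bin is filled until the next item would overshoot the nominal
    target (total weight / n_bins); the overshooting item is included
    only if that leaves the bin closer to the target. The last bin
    takes whatever remains, so it accumulates the errors.
    """
    n = len(weights)
    target = sum(weights) / n_bins
    order = [weights[(start + i) % n] for i in range(n)]
    bins = []
    i = 0
    for _ in range(n_bins - 1):
        total = 0.0
        while i < n and total + order[i] <= target:
            total += order[i]
            i += 1
        # decide: include the item that pushes us over, or stop short?
        if i < n and abs(total + order[i] - target) < abs(total - target):
            total += order[i]
            i += 1
        bins.append(total)
    bins.append(sum(order[i:]))  # last bin absorbs the cumulative error
    return bins
```

You'd call this once per starting point and store each result as one row of the array.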
As Diver300 noted, cumulative errors will all end up in the last bin.
In an 'ideal' dataset, the under/over decisions and magnitudes will average out so the cumulative error added to the final bin should be small.
But these are small datasets, and there's not enough data for things to average out 'long term', so it's likely the last bin will be significantly over/under.
My proposed solution to this would be:
Divide the total by 10 and fill the first bin to that target, ±one item per the initial idea. Then, moving on to the second bin, subtract all assigned items from the total to get the remainder, divide that by 9, and fill the second bin to this new target, and so on.
Continue to calculate the remainder, then divide it amongst the remaining bins.
This way, we wash out the errors over the course of the calculations.
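The only change from the basic version is that the target is re-derived from what's left before each bin. A hedged sketch (same assumed names and tie-break as before):

```python
def greedy_fill_rebalanced(weights, start, n_bins=10):
    """Greedy fill, but the target is recomputed per bin as
    (remaining weight) / (remaining bins), so errors wash out
    instead of all piling into the last bin."""
    n = len(weights)
    order = [weights[(start + i) % n] for i in range(n)]
    remaining = sum(order)
    bins = []
    i = 0
    for bins_left in range(n_bins, 1, -1):
        target = remaining / bins_left  # fresh target from the remainder
        total = 0.0
        while i < n and total + order[i] <= target:
            total += order[i]
            i += 1
        # same over/under decision: take whichever is closer to target
        if i < n and abs(total + order[i] - target) < abs(total - target):
            total += order[i]
            i += 1
        bins.append(total)
        remaining -= total
    bins.append(remaining)  # final bin gets the (now small) remainder
    return bins
```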
At the end, once we've iterated over all 350 possible starting points and the array is full, calculate the mean and SD of the 10 bin weights for each starting point, and choose your favourite.
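The final selection step might look like this. One reasonable criterion (my assumption, the post leaves the choice open) is the starting point whose bins have the smallest standard deviation:

```python
import statistics

def best_start(results):
    """Given {starting_point: [10 bin weights]}, return the starting
    point whose bins have the smallest (population) standard deviation.
    The SD criterion is a hypothetical choice of 'favourite'."""
    return min(results, key=lambda s: statistics.pstdev(results[s]))
```

In practice `results` would be the 350×10 array from the loop above, keyed by starting point.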