Isaac Asimov's I, Robot was interesting to me not so much because of the writing or the stories as because of the overall perspective on robots. It's fascinating to read science fiction written 50 years in our past, about events and technology 50 years in our future. If I were writing about Asimov's robots today, I would work on the assumption that they were basically computers made to be as much like humans as possible. That would affect the types of situations they'd get into, as well as the problem-solving skills used by the people around them when trouble happened. The approach in the book, however, seemed to start more from the human end of things. Robots seemed to be essentially living creatures that just happened to be created mechanically. They even have robo-psychologists who try to figure out their problems, rather than programmers who flip on-off switches and run tests on them. Very interesting, but also a bit frustrating for me at times. I kept wanting to remind the humans that the robots they were dealing with were just machines.
The main thing that never quite sat well with me, though, was the idea of the Three Laws of Robotics, which, of course, shaped the entire book. (Robots can't harm humans, they must obey orders, and they try not to get themselves damaged, in that order of priority.) I don't have a good sense of how those laws could be made so unbreakable, regardless of whether you have human- or computer-based robots. It would have been interesting to get some theory behind that. Still, the laws made a good framework for all the deductive reasoning that went on.
It'll be interesting to see the movie when it comes out.