Currently, computers can sort of (under particular viewing conditions) recognize the "basic" expressions: fear, anger, sadness, joy, surprise, disgust. That's because someone has fed in psychology datasets of people in a laboratory making these expressions.
|"Basic" expressions from CMU's Multi-PIE dataset|
But come on, some day Siri is going to look at your face while you're talking to her and she's not going to look for FEAR vs. DISGUST. She's going to look for mild annoyance and confusion and amusement and attention/distraction and concentration and stuff like that. Maybe you'll be able to communicate via subtle expressions to your phone without actually talking out loud. (Or that old-fashioned typing/screen-touching.)
Getting back to my point, I've developed this website (which still needs a lot of work), and I feel simultaneously amazed that I've cobbled together a system that mostly seems to work, and super critical of my own work, convinced that I have no idea what I'm doing.
So it means a whole lot to me that you're a) trying it out, making me laugh, cheering me on and b) giving me feedback, pointers to things that are broken, and clues about what to focus on to make it a better experience.
Maybe it's an awkward and inefficient way to go about this work, but I rather like Facebook being a source of bug reports intermingled with grumpy cat face swaps.
Now I have grand plans to redesign the flow of the site so if you get sucked into swapping faces with Not Bad Obama, you'll be directed to even more excitement and adventure.