Thursday, November 13, 2014

Fitbit Weights

Speaking of extracting data from websites... and of marriage... Adam and I just spent the morning looking at our weight data from our fitbit scale. Surprisingly, our weight changes track each other very closely in terms of absolute pounds, despite Adam weighing about 36% more than me.

Weight over (unix timestamp) time

The swoop down in the beginning is winter settling over Seattle and us going to the gym to lift weights and play squash. We were still going to the gym as our weights trended up again.

Hypothesis: Adam making hella loaves of bread and piles of tortillas made us gain weight.
Observation: False, weight gain happened before bread-making.

Hypothesis: People let themselves go after marriage.
Observation: Effect seems small; however, wedding planning is stressful and seems to lead to more weight gain... although that gain wasn't just fat in our case, it was actually from going to the gym.

Hypothesis: It was muscle mass!
Observation: Looking at the fitbit lean vs. fat charts (not shown here), it was indeed a couple pounds of muscle gained.

Notes about dietary patterns: Before we moved, we were eating out a lot, and now we're eating out about once a week max. The eating out could have been 'stress', but I actually think it was the weather turning sunny and beautiful and us deciding it was our duty to experience as many restaurants within walking distance around Capitol Hill as possible before we moved. We also simply had more daylight hours to spend eating and drinking.

Notes about exercise: It's a lot harder to be active since we moved: there's no walking to/from buses and no access to a gym. I try to run around the neighborhood a couple times a week and we go hiking from time to time.

Anyway, I went into this blaming the bread, but now I see it's not a big effect. Gym + eating out + maybe more daylight seem to be the main causes of the weight gain. Hopefully we can get this chart to trend downward again, or at least stay stable.

The most interesting thing to take away is just how closely our weights follow each other!

How we got the data

From the fitbit website, we looked at the Chrome debugger's network log to see where the data for the weight graph was being loaded from. It comes from a URL like this:

https://www.fitbit.com/graph/getNewGraphData?userId=##USERID##&type=weight&dateFrom=2013-11-14&dateTo=2014-11-13&version=&dataVersion=12226&ts=1415904524033

Privacy-wise, you have to be logged in to your own account to see your own data.
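If you want to script this instead of staring at the network tab, here's a minimal sketch of grabbing the same endpoint with python and requests. It's a guess at the glue, not our actual script: you have to smuggle in your logged-in browser cookies (the cookie name below is a placeholder), and you should eyeball the response before writing any parsing code.

import requests

url = "https://www.fitbit.com/graph/getNewGraphData"
params = {
    "userId": "##USERID##",  # your own user id, copied from the network log
    "type": "weight",
    "dateFrom": "2013-11-14",
    "dateTo": "2014-11-13",
}
# paste your browser's session cookie here so fitbit thinks you're logged in
cookies = {"sessionid": "PASTE_FROM_BROWSER"}

response = requests.get(url, params=params, cookies=cookies)
print response.status_code
print response.text[:500]  # peek at the data before deciding how to parse it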


Monday, November 10, 2014

Bride hacks into website, decodes proprietary 3D model format to save wedding!!

First off, welcome back to blogging, me! So glad you/I could join us, can't wait to see what you/I write and share!!

I got married in July to another human of nerdy-technical tendencies. We used a Kinect to make a 3D print of ourselves to go atop our cake.

Aww, there we are, printed in plastic

The Scanning



We used a scanning program from Shapify.me, which walks you through spinning around in front of a Kinect. The program says "Turn 45 degrees!" and then the Kinect nods at you, scanning you from foot to head. Then it pauses for a second to let you turn again. When it's done with you, it fuses all the scans together and makes a watertight mesh.

It crashed on the first computer we tried, which was running OS X Mavericks, but worked on Snow Leopard. It was kind of tiring to stand still for so long, and to rotate yourself while holding the same pose.

I guess you're not supposed to see the bride/groom in their wedding getup before the special day, but whatever, this was important! We brought the dress and the suit in to school where the Kinect was and did the scanning in the lab. Luckily, it was a weekend around finals and no one was there. I had to pin up my dress so I could spin around. We could have had more dress-train action if we had had someone wave a Kinect around us while we stood in one position, but Richard "Mr. Kinect Fusion" Newcombe was out of town.

The Printing: Color Sandstone 


One of the nice things about Shapify.me is that it integrates directly with a 3D printing service. We paid $70 to get the model we'd just scanned printed in color.


The problem was that the color of the 3D print was kind of icky, due to the lighting in the capture room. The cream color of the dress turned into a dingy gray, especially where the black of the suit bled into it. Our skin had a greenish tinge. My aesthetic bar for this wedding was not that high, but this model did not meet that bar. More photos below.

I needed (also: wanted) to hack something to fix this. Let the headlines read:
"Bride hacks into website, decodes proprietary 3D model format to save wedding!!" 
(Heck, I'm just going to make this the title of the post.)

The Hacking

Shapify.me suggested it would let you download the models if you paid a lot of money to be a partner, or if you paid a lot of money to run your own scanning station (custom built by the company, not just a Kinect). I probably could have asked/begged for the file since I already paid for the print, but like I said, I wanted to be a badass computer bride instead.

Shapify used a web-based 3D model viewer from another company, Sculpteo, that appeared to be written in Javascript and to load some custom binary model format. I looked at the network log to spot when the webpage loaded the big binary model file and grabbed its URL. I poked at their viewer code and around in the Javascript browser console until I got a handle on the data after it had been parsed and put into a Javascript object for rendering. I looked in that object for the model's vertices and polygons, and then dumped those to a text file that I was able to turn into a .ply file readable by Meshlab. My experience with Javascript, Chrome debugging, and reading and rendering 3D models (even on the web, in Canvas and Actionscript) made all this possible.
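For the curious, the very last step (vertices and faces out, .ply in) looks roughly like this. It's a reconstruction of the idea, not the actual dump format, and the input lists are stand-ins for whatever you pry out of the viewer's Javascript object.

def write_ply(filename, vertices, faces):
    # vertices: list of (x, y, z) floats; faces: list of (i, j, k) vertex indices
    with open(filename, 'w') as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write("element vertex %d\n" % len(vertices))
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("element face %d\n" % len(faces))
        f.write("property list uchar int vertex_indices\n")
        f.write("end_header\n")
        for x, y, z in vertices:
            f.write("%f %f %f\n" % (x, y, z))
        for i, j, k in faces:
            f.write("3 %d %d %d\n" % (i, j, k))

# tiny sanity check: a single triangle that Meshlab will happily open
write_ply("topper.ply", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])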

The Printing: White Plastic

Once we had our own copy of the 3D mesh, we took it to Metrix Create Space in Seattle. A super helpful staff lady offered to print it for us, and even insisted on fixing up the mesh where Adam had a very skinny and unnatural foot.

Printing on a MakerBot at Metrix

Here's the inside of the print, with the honeycomb scaffolding. I don't know why this particular print run got canceled.

Our feetsies/inside a 3D print

Printing took many hours, so we left and came back the next day. We saw ourselves sitting out on their display shelf, but no, that was not the model for us! That was another copy they'd printed to keep and show off because they thought it was neat.

Color vs. White

Here you can see all the details. The gloomy color, the pinned up dress, Adam's adorable bowtie, Adam's messed up foot (shiny shoes don't reflect Kinect IR beams well), the slightly larger size of the plastic print.

Funky Foot

Dreary Dress

Bodacious Bustle

So the white topper went on the cake, and the colored topper sat on the side as a neat thing to look at. And then they/we lived happily ever after!

Photo by Jeramie Shoda of ShodaLove


Tuesday, June 10, 2014

In which I battle MTurk External Hits a second time

I wrote a blog post over two years ago about my experiences with external hits on Mechanical Turk. It was rough. The tools were bad, I used them in a way that drove me crazy, and only after much trial and error did I figure things out.

Since then, I've become more learned and wise about many things, especially certain libraries and Amazon tools. This is my attempt to write a more coherent MTurk External Hit tutorial with less swearing. I haven't actually used external hits since then, so this will be a learning adventure for all of us.

Step 1: Get the python library Boto

Boto is "one of the fancy tools built on top of the [crappy Amazon] command line tools" that I referred to in my last post. I've recently been using it to transfer data to/from S3. Get it here! https://github.com/boto/boto (or install with pip)


A wild error appears! 

The specified claims are invalid.   Based on your request, your signature should be generated using the following string: AWSAccessKeyIdXXXXXXXOperationGetAccountBalanceSignatureVersion1Timestamp2014-06-10T22:04:46ZVersion2012-03-25.  Check to make sure your system clock and timezone is not incorrect.  Our current system time: 2014-06-10T22:04:46Z.
I spent at least an hour just now convinced boto was broken... but rather than being related to this bug, it looks like my problem was extra characters typed in the secret access key. I am using boto 2.29.1, which I just installed/upgraded with pip.

Step 2: Try something simple, like checking your account balance

import boto.mturk.connection
 
sandbox_host = 'mechanicalturk.sandbox.amazonaws.com'
real_host = 'mechanicalturk.amazonaws.com'
 
mturk = boto.mturk.connection.MTurkConnection(
    aws_access_key_id = 'XXX',
    aws_secret_access_key = 'XXX',
    host = sandbox_host,
    debug = 1 # debug = 2 prints out all requests.
)
 
print boto.Version # 2.29.1
print mturk.get_account_balance() # [$10,000.00]
(Gist: https://gist.github.com/ktuite/0cdaca2d574f358bdcd3#file-mturk_boto_intro-py)


Step 3: Try to actually post an external hit

An external hit is a webpage that MTurk loads inside of an iframe so that requesters can design custom tasks that don't fit Amazon's provided templates.

To use boto to do this, you just set up a bunch of details about the hit, like the URL, frame height (how tall the iframe on turk will be), title, description, keywords, and amount paid.

url = "https://the-url-of-my-external-hit"
title = "A special hit!"
description = "The more verbose description of the job!"
keywords = ["cats", "dogs", "rabbits"]
frame_height = 500 # the height of the iframe holding the external hit
amount = .05
 
questionform = boto.mturk.question.ExternalQuestion( url, frame_height )
 
create_hit_result = mturk.create_hit(
    title = title,
    description = description,
    keywords = keywords,
    question = questionform,
    reward = boto.mturk.price.Price( amount = amount),
    response_groups = ( 'Minimal', 'HITDetail' ), # I don't know what response groups are
)
(Gist: https://gist.github.com/ktuite/0cdaca2d574f358bdcd3#file-mturk_external_hit-py)

Then you can look at your hit:


  • Go to the requester console (or the sandbox version, requestersandbox.mturk.com)
    • Click Manage
    • Click "Manage HITs Individually" on the upper right
    • Click on the name/title of your hit to expand the detail panel about it
  • You can also log into the worker sandbox and search for your requester name or your job's title to find it "in the wild". 
Managing Mturk HITs in the requester sandbox

Aargg! Another error! 

I did all that, and all I saw was an empty box. Where's my webpage?? So I opened up the developer console in my browser (Chrome: View->Developer->Javascript Console). This time, the error went like so:

[blocked] The page at '[url]' was loaded over HTTPS, but ran insecure content from '[url]': this content should also be loaded over HTTPS.

Good job, Chrome... you caught me not using HTTPS. Luckily, my site is written in Django and is hosted on Heroku, which lets you plop an https in front of your app url. So I don't have to go sign up for my own SSL certificate right now. However, this led to a second error: 

Refused to display '[https-url]' in a frame because it set 'X-Frame-Options' to 'SAMEORIGIN'.

Around the time of this https error, I had brought Adam over to help me out. He's the one who told me Heroku does https. He also said it was probably my server sending that X-Frame-Options thing. This stackoverflow q/a also said it was probably the web server's fault. And indeed it was. To get around this problem, I read this Django documentation and added the '@xframe_options_exempt' decorator to my view method serving up my external hit. Sure enough, this fixed it. 
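For reference, the fix is tiny. This is a sketch with a made-up view name and template, not my actual code, but the important part is real: the xframe_options_exempt decorator tells Django to skip the X-Frame-Options header for that one view, which is what lets MTurk stick the page in an iframe.

from django.shortcuts import render
from django.views.decorators.clickjacking import xframe_options_exempt

@xframe_options_exempt
def external_hit_view(request):
    # MTurk tacks assignmentId, hitId, workerId, and turkSubmitTo onto the URL
    return render(request, "external_hit.html", {
        "assignment_id": request.GET.get("assignmentId", ""),
        "turk_submit_to": request.GET.get("turkSubmitTo", ""),
    })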

Step 4: Posting answers back to Mturk


When the worker is done with the external hit, the external hit needs to phone home/notify Mturk. The submit url is either www.mturk.com/externalSubmit or https://workersandbox.mturk.com/externalSubmit according to this fine documentation. It turns out that Amazon will attach a GET parameter 'turkSubmitTo' to your URL loaded into the frame, so you can look this up without hardcoding whether or not you're using the sandbox. Be sure to add /mturk/externalSubmit to the end of that url.

https://mydomain.com/myHit/?assignmentId=XXX&hitId=YYY&workerId=ZZZ&turkSubmitTo=https%3A%2F%2Fworkersandbox.mturk.com

But what kind of stuff do you submit? Make an HTML form (maybe even with hidden values that your javascript fills in). Check out an example here.

I was getting this error...

There was a problem submitting your results for this HIT. This HIT is still assigned to you. To try this HIT again, click "HITs Assigned To You" in the navigation.

And it turned out I needed to include the assignmentId (passed in through the URL as a GET parameter) in the form I submitted back.
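Putting those pieces together, the submit side looks roughly like this (a sketch, with every field name besides assignmentId made up): build the action URL from turkSubmitTo, and make sure the assignmentId rides along as a hidden field.

def submit_form_html(assignment_id, turk_submit_to, answer):
    # the worker's browser posts this form straight back to MTurk
    submit_url = turk_submit_to + "/mturk/externalSubmit"
    return """
    <form method="POST" action="%s">
      <input type="hidden" name="assignmentId" value="%s">
      <input type="hidden" name="answer" value="%s">
      <input type="submit" value="Submit HIT">
    </form>
    """ % (submit_url, assignment_id, answer)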

Step 5: Including template variables in your hits (or use Javascript to change your task on the fly)


When I wrote my first blog post, the MTurk Command Line Tools (and the web-based HIT creation tool) required you to submit a file with comma-separated parameters that outlined your hit. Figuring out how to use that correctly was a big headache (ultimately a learning experience) (it's all just HTML forms! all the way down). Since then, I've crafted Mturk tasks with other real live humans who seem to use an entirely different pattern.

The pattern: Use Javascript to set up (and possibly randomize) your task

Say you're running some experiment, and you want your worker to experience one of three experimental conditions, but not accidentally be in two different conditions because they clicked on different tasks. The solution is to have your webpage randomly assign the worker to one of the three tasks when it loads... and only let the worker have access to a single task. 

Helpful Note: When you set 'max_assignments' in 'create_hit', that's the number of people who will see your task... and they'll each see it only once. 

*ponder ponder* Maybe that's why those batch files defining your task are important... for the cases when you DO want a worker to have access to a bunch of different variations of your task. At the moment, I'm not sure how to do this with boto, other than just manually creating copies of the task a few more times.
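If you'd rather do the random assignment on the server instead of in Javascript, the same idea looks roughly like this. It's a sketch (the view name, template, and condition labels are all invented), and whichever condition gets picked should also go into a hidden form field so it comes back with the worker's answers.

import random
from django.shortcuts import render
from django.views.decorators.clickjacking import xframe_options_exempt

CONDITIONS = ["control", "variant_a", "variant_b"]

@xframe_options_exempt
def experiment_view(request):
    # pick one of the three conditions when the page loads
    condition = random.choice(CONDITIONS)
    return render(request, "experiment.html", {
        "condition": condition,
        "assignment_id": request.GET.get("assignmentId", ""),
        "turk_submit_to": request.GET.get("turkSubmitTo", ""),
    })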

To conclude...

This was definitely a headache the second time around, too, but a teensy bit less mysterious. Hopefully if you embark on a similar journey and get stuck at similar places, this post can help you get unstuck faster.

A couple little code samples can be found here: https://gist.github.com/ktuite/0cdaca2d574f358bdcd3

Thursday, May 22, 2014

Dropdown-menu-like-things on iOS made from UIPickerView and UITextField

All the code on the internet that shows how to do this is broken and/or doesn't provide enough context.

I wanted something that would serve the same purpose as an HTML drop down menu, without necessarily acting exactly like one. Got some clues and ideas for different ways of handling it from @commanda, and ultimately got this slick and simple way to work.

It's one part understanding UIPickerViews and how to make them go, and one part setting the inputView of a UITextField to reference a particular picker. And yet another part making the picker hide itself when you tap on the background. And then throw in handling multiple pickers at once.

CODE HERE! https://gist.github.com/ktuite/fc23de2f129458630f18

Whee, a picker slides up from the bottom (instead of a keyboard) to fill in the text field

There it is again, not as a vine this time.

My storyboard just has two text fields (which each use a different UIPickerView as an inputView)
I included the "it's complicated" gender option based on the advice/insight of this article: Designing a Better Drop-Down Menu for Gender

Wednesday, February 5, 2014

My Nerd Story

My path to nerddom was pretty direct. My parents are engineers and they bootstrapped the whole thing. My mom got me set up with Geocities and a book on HTML when I was 10. I took a programming class in my junior year of high school and then transferred schools my senior year to take AP Computer Science, which my previous school didn't offer. Once I got my talons embedded in programming and CS, I didn't want to get them out. I went to college and majored in CS (and also math). When I figured out what research in CS looked like, I decided to go to grad school in CS as well. Every new thing I learn (language, framework, technique, idea) builds on top of the whole foundation of what I've learned so far and serves to strengthen the foundation as well as adding new stuff. Programming is my craft.

Because my mom is a technical person (and has a billion other hobbies and is friends with everyone and is generally awesome) I did not lack a strong female role model to show me how to tinker with computers and mechanical things, or to tell me that doing so was perfectly okay. I did not lack friends that were smart and capable and creative and supportive, and who learned HTML with me and had websites "across the street" from me on Geocities. 



I wasn't part of a tech community, though. Instead, I skulked around town with my punk friends and put safety pins in my clothes and went to shows at the YWCA in Palo Alto. Online, I was in an LJ group called "craftgrrl" and I started screen printing my own t-shirts because of a post I saw there. I also made wallets out of colorful duct tape and sold them on Ebay. 

Looking back, it seems obvious that I wasn't immersed in a tech community back then. I was a teenager with a ton of interests, and while tinkering with computers could have been one of them, there weren't enough people like me around doing it. I did a small amount on my own (I maintained my website of photos), and then did non-computer activities with my friends. The closest I got to such a community was a friend offering to get his friend to host my website after I complained about the crapshoot Geocities had become. It wasn't until the tail end of college, meeting a bunch of hackers outside of academia, and going to grad school, that I actually found such communities.

This whole #mynerdstory thing started as a response to Paul Graham saying, "We can't make these women look at the world through hacker eyes and start Facebook because they haven't been hacking for the past 10 years." Most peoples' responses seem to be, "Actually, mister, I am a woman and I've been hacking for X number of years!" 

At this point, I've known how to program for over a decade, but I've been hacking for about 8 years (coding with intent to create and learn, treating it as a craft) and I could probably start a way more interesting company than a competent, yet sheltered, 22-year-old dude.  

Here's where my post turns into a rant, because I can't figure out what I want to say even though I've been thinking about it for weeks. 

0. Later, Paul Graham wrote that he thought access and examples are two important components of getting people interested in computing. I agree with those. I had access to a computer and classes in school that would actually teach me to program, but I didn't have examples of what I could/should do with my specific level of knowledge and set of interests. Instead, I saw examples of girls doing crafty things, so I tried my hand at that. I have a wee bit of resentment/annoyance that I didn't have access to more examples of what to do and try. When I finally learned PHP and CSS late in college, I was like, "my 15 year old self would have been ALL OVER THIS and she actually had free time!"

1. I don't care how long you have been programming, regardless of your gender. I care what you've built, what impact you've made, how you learned it, how you accomplished what you did, what you might make in the future based on your past experiences, and if there's anything I can learn from your experiences. Huh, sounds like what an academic paper should convey.   

2. Because this #mynerdstory is inherently lady-themed, I just want to lament the lack of lady role models. There are so many great humans and great apps/tools/APIs/libraries all around, but the women and specifically the things built by women are harder to find. There are actually a lot of really awesome women that I want to emulate (my mom, my current advisor, other awesome women in my grad program and similar programs, women who make and share things on the internet like the Kittydar face detector, the women in and running the Ada Developers Academy), but I'm greedy and I want more! Moar moar moar! I want to be drowning in these women.

3. Related to my greed and selfishness, I want to read the other #mynerdstories and be INSPIRED! And I get disappointed if I'm not left feeling motivated and uplifted, which is what happens when you just talk about the length of time you've been coding or something. TALK ABOUT THE THINGS THAT YOU MAKE. Even outside of this #nerdstory thing, talk about what you're building. Seeing examples of what other people (women especially) are capable of making (and how/why they made them) makes me believe I can make those things, too. I write all about the things I've made in the rest of this blog because I want to provide the inspiration that I feel is lacking.

Because I saw this in another post... The coolest things I ever built: 
  • The original Facebook Superpoke (no, not the one that sold for millions of dollars, but I did score the URL apps.facebook.com/superpoke at the F8 platform launch)
  • Sketch-a-bit, the collaborative drawing android app (and subsequent paper) that I made with Adam
  • PhotoCity, the 3D reconstruction capture the flag game that was going to literally take over the world... until it was decreed all dried up of new research opportunities?! Oh, do I sound bitter?! 
  • This robot halloween costume
  • This dinosaur shirt  
  • The Big Race game (and spinoffs) with Ben Samuel!! Originally written in HC-11 assembly...
  • Some of the wacky, creepy swaps in the Face Frontier and the secret crowd-driven classifiers that I haven't figured out how to expose in an interesting way

I asked the internet what I could say that might make this post more interesting/inspiring. 

The requests: Infographic! Why did I use different languages? I had an idea for a chart of which languages I learned at which times in my life and which I actually kept using. But I don't feel like making it right now. I learned whatever language my classes dictated until I got a little more fearless and started trying things that I saw other people near me try. Peer pressure! I've used Twisted for a lot of my projects over the past 6 years because Adam mentioned Jeff mentioning it, like, once. There might be newer, more friendly Python-based networking libraries, but Twisted has never let me down. 

What did I dream of building? Who did I show things to and who did I dream of showing them to? When I was a youth, I wanted what today's youth seem to want: to capture and share the adventures my friends and I were having, with my friends. Plus a dash of creativity. There were these dress-up paper doll things on the internet, and I thought they looked dumb, so I drew my own and made clothes and hair for all of my friends. I wanted (still want) to make things that other people appreciate.



These days, I want to provide access to neat technology (like 3D reconstruction and/or facial expression recognition) and to make ecosystems that other people can create things in. The DIY/crafty component of my youth has stuck around and I want to enable that for other people and for research. I want to share what I make with the world and be known for creating something awesome, and/or for facilitating the collaborative construction of something huge and awesome. 

Along the way, I want to talk about the stuff I build (especially when I'm just learning something new) so that someone else might get through a sticking point by seeing how I did it. Or to get feedback on what's cool about what I've made and how I can make something better in the future.

Friday, January 24, 2014

In which I attempt to clone Imgur

I've been working on a crowdsourcing project called the Face Frontier for about a year now. Its primary goal is to amass a new database of facial expressions that spans lots of expressions (beyond the basic six of joy, fear, anger, sadness, surprise, and disgust) as well as many different types of faces and many different lighting conditions. We're building the database from scratch (collecting it through the crowd by asking people to take photos of themselves) so that we can have labels for expressions.

The main component of the Face Frontier is the website: www.facefrontier.com (The other main component is the server side computer vision, which trains expression classifiers and handles the face swapping.)

I have changed the website's design A LOT. In the past week, I redesigned it yet again to make it look a lot like imgur.

One of the things Face Frontier does is let you swap your face with a meme.
It'll also let you swap memes with each other...

So if you swap the original (er, more like, version 5.67) Face Frontier...
Previous face frontier
 With imgur...
Inspiration
 You get...
NEW, imgur-inspired face frontier!
Huzzah!

The basic components I copied from imgur were:

  • grid of interesting and recent and highly rated content
  • being able to interact with that grid and rate stuff and get involved right away
  • clicking on something on the grid to get more information and rating details about it
  • boxes on the side for 
    • adding your own content directly 
    • getting at content from a different axis
  • prev/next navigation with the keyboard arrow keys when viewing a specific piece of content, so it's possible to just browse content forever... 

Short term results:
  • Lots of positive feedback from the friends who pushed me to do this redesign in the first place
  • A little bit of stickiness (30-40 pictures a day), people actually using it, without it being a direct response to me sharing something online

Obviously I need to get those numbers up so that this dataset can be REALLY BIG and REALLY USEFUL. There's still a lot to improve and a lot of features that would probably make the whole thing even more satisfying... but currently, I'm quite happy with it!

Tuesday, January 7, 2014

What's up with Photosynth 2

I don't work at Microsoft or on Photosynth, but a couple years ago, I had a project called PhotoCity that was sort of the awkward academic sister of Photosynth. Both were descended from the Photo Tourism project and made the tech available to anyone online.

These speculations about Photosynth 2 are reconstructed (hah) from my own experience working on similar things and from a talk Blaise gave at CVPR last summer.

Underlying Geometry 

Take a bunch of photos of the same scene from different angles, and you can reconstruct 3D information about the scene. It's hard to reconstruct a clean, correct, complete 3D model, but you can usually get a sparse idea of what's going on, like a point cloud or depth estimates for each image, as well as how the images relate to one another in 3D space.

Some software and apps that are out there: [bundler + pmvs] [visualsfm] [123d catch]

Dense (PMVS) point cloud reconstruction of Hing Hay Park in Seattle, photographed by Erik Andersen

My notes say that Photosynth reconstructs a point cloud (probably dense, something like PMVS) and rough geometry (probably from the point cloud). It doesn't seem to use the point cloud in the rendering. Instead, each image has its own piecewise planar proxy geometry of the scene. As you move between images, that geometry (or your view of the geometry) is interpolated.

View-dependent textures

I am certain that if you looked at the Photosynth 2 geometry without any texture, it would look like crap. You can sometimes tell how bad the geometry is in the transitions.

Screenshot of mid-transition. There are ghosted objects (that moved as if they were separate geometry) and tears and other issues, but they don't actually detract from the experience. View this ship yourself. 

UPDATE: YES YES LOOK AT THOSE CAMERA FRUSTA AND WACKY GEOMETRY... press 'c' on any synth to get this inside look for yourself!

One piece of magic, though, the thing that smooths out all the terrible geometry blemishes, is the fact that you're looking at the beautiful original photos plastered onto the crappy geometry. AND you're looking from an angle similar to the one the photo was actually taken from. In that sense, it doesn't really matter what shape the geometry is because it's just going to look like the original photo.
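Here's a toy sketch of that view-dependent texturing idea (emphatically not Photosynth's actual code): weight each source photo by how closely its camera's viewing direction lines up with the novel view, so whichever photo you're "near" dominates and the proxy geometry's flaws stay hidden.

def view_weights(novel_view_dir, camera_dirs, sharpness=8.0):
    # all directions are unit-length (x, y, z) tuples
    weights = []
    for cam_dir in camera_dirs:
        cos_angle = sum(a * b for a, b in zip(novel_view_dir, cam_dir))
        weights.append(max(0.0, cos_angle) ** sharpness)
    total = sum(weights) or 1.0
    return [w / total for w in weights]

# a novel view halfway between two cameras gets a 50/50 blend of their photos
print view_weights((0.707, 0.0, 0.707), [(1, 0, 0), (0, 0, 1)])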

Sometimes the viewpoint and geometry will be such that you can see past something to a place that's not textured. It looks like they just use some kind of texture hole filling to deal with that.

My friend and colleague Alex Colburn has some really cool work on using view-dependent textures to do cinematic parallax photography and to model and remodel home interiors.

Camera manifolds

Ye olde Photosynth let you take photos all higglety-pigglety and then let you browse them in a similarly haphazard fashion. I think many of the design choices of Photosynth 2 stem directly from trying to address the problems with capture and navigation of the original. (My PhotoCity solution was to show the 3D point cloud instead of the photos or show the point cloud projected on a 2D map.)

Oldschool synth: a sea of photographs. But! Since you're using Photosynth and not PhotoCity, you don't get much feedback about which photos work/fail and you don't get the chance to improve your model except by doing the whole thing again.


The types of manifolds Photosynth 2 supports are (at least):
  1. object: taking photos in a loop looking in at an object
  2. panorama: taking photos in a loop looking out, e.g. around a room or at the vista on top of a mountain
  3. strafe: taking photos in a line moving sideways
  4. walk: taking photos in a line moving forwards

My notes about types of camera manifolds, including object, panorama, strafe, walk, and concepts like "negative radius panorama" (shooting across a room)


These manifolds give the photographer some constraints about how to shoot, probably make the computation easier, and make navigation a hell of a lot easier for viewers (by constraining it).

Navigation UI magic

The camera manifolds make it super easy to navigate. Spin in a circle! Move along a single path! I think the Photosynth 2 people are really proud (and rightfully so) of being able to poke at a synth (via a touch screen) and have it respond in a sensible and pleasing way.

The second piece of navigation magic is how if you stop actively moving around, the synth just drifts to the nearest actual photo. When it's moving, the artifacts kind of blend together and get ignored by your brain. When it stops, you're looking at something free of artifacts, something that looks like a photo because it is a photo. But, unlike a normal photo, you can touch it to get a new view of the scene from a different angle.

A new art medium  

I haven't used Photosynth 2 yet (or even the original because I had my own version) but I can attest to the fact that taking photos for PhotoCity or for any other reconstruction/stitching pipeline deeply changed how I take photos and think about photography. Instead of hiking up a mountain and taking a single photo at the top, I want to take a bunch of photos and reconstruct the whole valley. (But ugh, not enough parallax from one peak!)

I think Photosynth 2 is a little more modest by enabling people to make reaaallly sexy/rich/immersive panorama-like things. Something pretty familiar, but also much enhanced. And on top of that, people will uncover quirks in the medium, like being able to capture dynamic, lively scene action in a stop-motion kind of way. For example, friend/past-colleague/actual Photosynth engineer shot this synth in Hawaii and there are occasional wave splashes!  Like a cinemagraph but in 3D! Compare that to your ghosted/amputated people in normal stitched panoramas.

Sunday, January 5, 2014

Blogpact 2014

I was part of a blogpact last year. A bunch of smart and interesting folks from around the internet (but connected to people I knew) challenged themselves to write a blog post every week. Eventually, it dried up and got down to just 2 or 3 people posting intermittently. That was kind of sad and actually discouraged me from blogging. Thus, I'm starting up a new blogpact with a fresh new batch of people! YOU should totally be part of it!


Yea! Let's blog together!

Steps to be involved: 


  1. Get yourself a blog (I just use blogger) and commit to writing stuff in it on a weekly basis 
  2. Email me (kathleen . tuite AT gmail) the link to your blog's feed (e.g. mine is http://kaflurbaleen.blogspot.com/feeds/posts/default) 
  3. Visit the aggregator website http://blogpact.superfiretruck.com/ where posts will be slurped up and combined on an hourly basis! 

First week: 

Since I'm just setting this up now, the first week will start today, and the weeks will end on Sundays. That is, your first post should come in by next Sunday, January 12, 2014. If you join later, just start whenever makes sense!

Penalties: 

The only penalty for missing a post is that I will eventually add a thing that says how long it's been since each person posted. And, ya know, you'll miss out on the practice of writing something and putting your thoughts out into the world. 

Inspiration:

If you need inspiration for what to write about, I have arbitrarily thought of the following topics:
  • Something you learned recently that was cool, maybe why and how you learned it, and if others should learn it, too
  • Something you created recently, whether it be out of foodstuffs, code, clay, robot parts...
  • An introduction to something you consider yourself an expert in
  • An introduction to something you're not an expert in but would like to learn about, thus getting you to learn a bit about it
  • Books you read in 2013 that stuck with you
  • Anything else you did in 2013 that stuck with you
  • What you imagine the world to possibly be like in 5/10/50/100/1000 years or a sci-fi gadget you wish existed now
  • Someone you admire