But really, what is this all about?
Well, “OOZ” is a reference to the group of crazy people that make up the “One Over Zero” gang (to which I’ve alluded here before) and as for the “Labs” part, this show is based on some of the things we come up with in our labs! (which is to say, Nuno Correia’s awesome garage workshop.)
We’ve been making cool (or sometimes simply fun) things together in Nuno’s garage for a long time now–did you catch our foray into making longboard skates, for example?–and we came up with the idea of creating a show around our “Maker” activities, teaching people (Portuguese-speaking people, more specifically) how to do some of these things and letting them see that, in general, this really isn’t so hard.
So we made a plan to start off with something we were all comfortable building: a quadcopter. We got our own equipment together (we bought very little equipment for this project, and what little we did buy, we mainly used the show as an excuse to buy for ourselves anyway), started making some test shots in the garage and learned a lot from them. Luís put together things like studio lights with some ingenuity and a shoestring budget, we ordered the parts for the quadcopter from HobbyKing and eBay, and we started shooting for real sometime around October.
In mid-December we finished shooting the first season and by then I already had most of the twenty-something videos that comprise that season edited and ready to post.
To me (well, to all of us, really) this is a gigantic project. We shoot in the evenings, after work, and when we’re not shooting I’m editing video and the other guys are preparing stuff for the next shooting session.
So what is my role in all this? I decided to tackle a few things, mainly because I didn’t know how to do them and I wanted to learn something new and challenging. There’s really no better motivation, right?
I edit the videos (and the audio; we record audio separately from the cameras, sometimes with one microphone per person).
I strictly use software I already had available to me, which amounts to iMovie 10 for the video itself and Keynote for the “cards” we need in the videos. No other animation or video editing software is used (apart from QuickTime for the screencasts we had to do later in the series).
We definitely did not want to get mired in the video and audio copyright cesspool, so we needed some original music for the show. We wanted something fresh and upbeat, something that stuck to the ear and was easily recognizable.
With that in mind I whipped out my copy of Garage Band and sketched a few possible theme songs, until we got to the one we’re using now (for the first season, at least).
Fresh, upbeat, catchy, easily recognizable and copyright-free. Well, one out of five is not so bad, now is it?
I also decided to play the producer role in this project. That means making sure things are there when we need them and that people do what they have to do on time, and organizing everything and everyone so the show happens when it must. Not an easy job, but I like to organize things, so it was a natural fit for me.
I present the show–we all do, at least in this first series–because I have actually built a quad, so I do have some relevant knowledge to impart.
I will probably not have much screen time on season 2, but I’ll definitely be all over the screen on season 3. :-)
Then, at the end of the process, I am usually the one that takes care of launching the videos and blog posts and I generally handle the social media side of things (Facebook, Twitter, Google Plus, Instagram and so on), with some help from the guys.
Obviously this being a three-person project, we all end up doing a bit of everything and when push comes to shove, we all get down there on the “studio floor” and hammer away at whatever needs hammering, but broadly speaking each one of us has a few roles he adheres more to, and these are mine.
So far the experience has been really fulfilling.
The learning curve for iMovie–which I’ve used before, but never on a project of this scale–has been steep, but I am now comfortable enough with it to edit a small video in just a few hours.
The producing and organizing side of things has been a real challenge. It is such a chaotic mess just getting things set up to film an episode, let alone all the rest of the stuff that has to happen for the show to go on…
But I’m loving it all and I have already fulfilled at least part of my goals: I have learned a lot with all of this!
A while ago he decided it would be interesting to combine those two passions, and he materialized the idea in the blog My Jazz Festival, where he and some friends talk about their favourite artists/songs/albums.
It was with great pleasure that I accepted Pedro’s invitation to drop by every now and then and talk about some of the things I like, and today I published my first article there.
If you like jazz, whatever kind it may be, drop by, add it to your reading lists and enjoy the festival put on by Pedro and his friends.
Meanwhile, down here on earth…
Tackling a slightly less challenging problem, Lit Motors is trying to build the C-1, a special kind of motorcycle-car hybrid that can stand upright by itself and correct for whatever external forces try to push it over.
They have a partial prototype showing that their gyroscopically stabilized solution works as advertised, but as far as I can tell there is no full prototype yet. You can watch the pitch, along with a rather short clip of the existing prototype, here.
And as for gadgety stuff, prepare for the arrival of Leap Motion’s The Leap.
Apparently this little piece of tech outperforms Microsoft Kinect in many important features, while being smaller and substantially cheaper.
Check out some videos demoing the device and notice its definition and responsiveness.
Evolution, that’s the name of the game here. The Kinect is amazing, but this is clearly one step ahead.
Our friends the flying robots are coming along nicely, now freeing themselves from external motion-tracking systems and starting to fly autonomously using only their on-board sensors and processing hardware. Some of them even do it rather aggressively.
The Opportunity rover has woken up after the Martian winter and gotten back to work. Just to recap its history a little: Opportunity landed on Mars in January 2004 and was supposed to perform a 90-day mission. It is now in its 8th year of service, with over 35 km driven on the Martian surface. Talk about “above and beyond the call of duty”!
Another subject I’ve alluded to before is the legalization of self-driving cars in Nevada, USA. Well, the first license has been issued to Google, and road tests (of the legal kind) have probably already begun. Watch the seminal video of a blind man “driving” one of these cars here. A host of other entities (notably car makers) are also entering the fray, and other states in the USA have made it known that they are working on legalizing these kinds of vehicles, so this is a space where exciting things are bound to happen in the near future.
One good example of such challenges was the famous DARPA Grand Challenge, a long-distance competition for fully autonomous (driverless) cars along desert roads.
The first event was held in 2004, and none of the competing robots managed to finish it.
Then, in the second event, held in 2005, five vehicles (out of twenty-three) completed the course.
Driverless cars are something we’ve been hearing a lot about these days, with some of the existing prototypes having driven hundreds of thousands of miles on real-world roads and some states in the USA even handing out licenses for such robots to drive on their roads. But if we think back to 2004, when the first challenge was held, it was nothing short of miraculous for any car to drive even 100 meters without assistance, let alone the full 240 km of the race.
And yet, just one year later, Stanford University’s Stanley did just that, and did it without a hitch.
Later, in 2007, DARPA ran another competition–the Urban Challenge–where the cars were also expected to drive autonomously for a long distance, under time constraints, but this time the challenge was run on a closed “urban area-style” course, where the cars encountered other traffic (both robotic and human) and had to obey all traffic rules and regulations while navigating their assigned course.
Even if it doesn’t look like it, this was a significantly tougher challenge for the robots, and in the end six teams (out of eleven finalists) finished the course successfully.
Just as an aside, the practice of posing grand challenges to the community is not exclusive to DARPA. I cannot help but briefly mention, by way of example, the X PRIZE Foundation’s space initiatives, which gave us the first private space flight five years ago with its Ansari X Prize and which is currently, together with Google, trying to send a private robot to explore the moon.
Getting back on track, the robotic community has been abuzz in the last few weeks about a new challenge coming from DARPA: the DARPA Robotics Challenge.
This time, the competition revolves around the creation and/or programming of (preferably humanoid) robots to operate in search-and-rescue types of environments.
There will be different “tracks” that teams can compete in, with some of them expected to develop both the hardware and the software and others being given the robots and asked to develop “just” the software.
To give you a taste of what the challenge entails, the robots will be expected to do things like drive a utility vehicle, travel across rubble, remove debris blocking an entryway, open doors and enter buildings, climb an industrial ladder, use tools to break through a wall, close valves and replace components such as a cooling pump.
All of these tasks must be accomplished by the robots in semi-autonomous mode, which means there will be some supervision of their actions, but that supervision must be light and performed by non-expert operators, and the communication links between the robot and the operator will be spotty.
This is a rather tall order given the current state of robotics, but then that’s the whole point of these challenges: to push the boundaries of what we can achieve in a very significant way.
If you want to know more about this fascinating challenge, I would point you towards the Broad Agency Announcement (the official documentation) and IEEE Spectrum’s interview with Dr. Gill Pratt (the creator of the challenge).
Having had a blast recalling what little I’d learned about AI, years ago in university, with Thrun and Norvig’s “Introduction to Artificial Intelligence”, and also exploring the–until then–unknown science of Machine Learning with Ng’s introductory course a few months ago, I’ve now been expanding my knowledge of robotics, specifically self-driving cars (but learning lots of generally applicable stuff along the way), with Udacity’s CS373–Programming a Robotic Car (again by Sebastian Thrun).
The CS373 course is just about to finish, and I now know more than enough to tackle a lot of projects I’ve dreamed up, and also to finally get some other projects I’ve already tried to implement to fully work. Things get a lot easier when you know the maths and the algorithms, as opposed to simply throwing guesses at the problems.
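To give a flavour of what I mean by “knowing the maths”, here is a minimal sketch of a one-dimensional Kalman-style filter, the kind of tool such a course covers. This is my own illustration, not code from the course, and the measurement values are made up:

```python
# Minimal 1-D Kalman filter: track a position from noisy measurements.
# Beliefs are Gaussians, represented by a mean and a variance.

def measurement_update(mean, var, meas, meas_var):
    """Fuse a noisy measurement into the current belief."""
    new_mean = (meas_var * mean + var * meas) / (var + meas_var)
    new_var = 1.0 / (1.0 / var + 1.0 / meas_var)
    return new_mean, new_var

def motion_update(mean, var, motion, motion_var):
    """Predict the new belief after a (noisy) motion."""
    return mean + motion, var + motion_var

mean, var = 0.0, 1000.0  # start with an almost uninformative belief
for z in [5.0, 6.0, 7.0, 9.0, 10.0]:  # hypothetical noisy positions
    mean, var = measurement_update(mean, var, z, meas_var=4.0)
    mean, var = motion_update(mean, var, motion=1.0, motion_var=2.0)

print(round(mean, 2), round(var, 2))
```

Note how the variance quickly collapses from the huge initial value and then settles into a steady state: the filter “knows” how uncertain it is, which is exactly what guessing by hand can’t give you.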
What I lack is the tools, workspace and inclination to delve deep into the hardware side of things, but luckily I have friends who are more of the hardware and tinkering persuasion and so the potential for collaboration is high.
On a slightly different note, I’m also going through Coursera’s Game Theory class (by Jackson and Shoham), just because it seems really interesting and maths is kind of fun.
But getting back to the cool tech mentioned in the title of the post, this week we have…
The MIT Media Lab has been busy developing a camera that can see around corners. There’s a video that explains the process in simple terms, which is actually quite interesting to watch.
It uses lasers! So it’s cool, right?
Also in the “I can see you!” department, Face.com has some rather nifty software that is really good not only at detecting faces and the mood of the person, but now also at guesstimating the person’s age.
And then it gets creepy fast, when Hitachi Kokusai Electric starts selling a system that is capable of analysing 36 million images per second to detect and match people’s faces, with quite a bit of flexibility regarding the person’s pose and the size of the picture.
The system seems to be extremely well thought out: it lets you jump immediately to the relevant point in the video stream being analyzed, and search for multiple possible matches for the same person throughout the timeline. That can be viewed as a bit disturbing.
Watch the demonstration video for a glimpse of what it can do.
By now we should be more than aware that privacy is dead but this still creeps me out a little.
Ever worried about our health, the helpful robots are now ready to try to prove their mettle on the hospital floor.
Of course, these are early days, and this first trial will mostly let the robot’s developers know how much, and in what ways, it gets in the doctors’ way, but it is a necessary step towards taking the drudgery out of the hands of junior doctors, allowing them to focus on more important aspects of their development as professionals.
And now for the really cool and shiny piece of tech of the week: the Mighty morphing hexapod bot is back!
This spherical hexapod has been making the rounds for some time now, learning and gaining new abilities at a nice pace, until it reached what you can watch in this video.
That doesn’t mean the development is done, though; after all, there’s so much potential in this lovely toy!
I’ve been beating the flying-robots-drum quite a bit lately, because that’s where much of the current research is happening (also because these articles are entitled “Assorted Cool Technology”) and because things have been evolving at a rapid pace in that area.
A lot of the most recent videos we’ve been seeing are in fact produced by the very same group, and quite recently their work was featured in a TED Talk by Vijay Kumar, which you should definitely check out. More info on the whole subject can be found in this article from the Singularity Hub.
Also, on the subject of flying things, these beauties are going to be available to us, mere mortals, soon.
And then they became smaller than ever before. Harvard engineers have devised a clever and incredibly elegant way to produce tiny (think millimeter-scale) robots based on origami-like techniques.
Back on the ground, Boston Dynamics continues to impress us with legged robots that can outdo us in many ways. Be afraid: they can now outrun us in the long run. Read more about it at IEEE Spectrum.
So if we’re inexorably going to be left in the dust, what can we do?
Nothing really, but at the very least we can look cool while wearing our tech-based life-enhancers, as is the case for those who wear these glasses to record what happens around them in first-person POV. The demo video and the specs are quite impressive.
One aspect that is often associated with the coming of the Singularity is that it will enable us to live forever (or at least for as long as we want to). This may or may not be desirable, but setting aside the discussion about whether we will want to live forever or even if we can cope with such a thing, at least the notion of substantially extending our current lifespan is very appealing to most people right now.
In fact, and regardless of the Singularity, for quite some time now we’ve been studying the ageing process in animals, with an especially keen eye towards the human species, in the hopes of being able to substantially delay said process, or even to reverse it and rejuvenate an ageing body into a younger, healthier one.
As it turns out, our bodies appear to have a definite expiry date after which, no matter how sound our mind is, they’ll simply shut down, independently of our efforts to keep them healthy. Assuming the following article accurately reflects the state-of-the-art of our knowledge about human ageing, trying to keep this body around for much more than a century is a losing proposition.
In “Your Body Wasn’t Built To Last: A Lesson From Human Mortality Rates” the author explains how
By looking at theories of human mortality that are clearly wrong, we can deduce that our fast-rising mortality is not the result of a dangerous environment, but of a body that has a built-in expiration date.
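As I understand it, the argument rests on the Gompertz law of mortality: the probability of dying within the next year roughly doubles every eight years. A quick, purely illustrative sketch (my numbers, not the article’s) shows why that compounding turns into a hard wall:

```python
# Illustrative sketch of the Gompertz law: the annual death rate
# roughly doubles every 8 years. The base rate here is made up.

def yearly_death_rate(age, base_rate=0.0001, doubling_years=8.0):
    """Annual mortality rate that doubles every `doubling_years`."""
    return base_rate * 2 ** (age / doubling_years)

def survival_probability(to_age):
    """Chance of surviving from birth to `to_age`, year by year."""
    p = 1.0
    for age in range(to_age):
        p *= max(0.0, 1.0 - yearly_death_rate(age))
    return p

for age in (60, 80, 100, 120):
    print(age, round(survival_probability(age), 4))
```

With these toy numbers the death rate is negligible for decades, becomes dominant around the century mark and then exceeds certainty, which is the “built-in expiration date” in a nutshell: no plausible tweak to the base rate pushes the wall out very far, because the doubling always catches up.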
Now, even if we can’t live forever in our own bodies, we’ll still be inside them for quite some time, as the expected life span of the average human being increases (whether or not we do hit a biological limit). So what can we do to be better able to cope, then?
Well, some people are thinking about ways to enhance the human body, in order to make it more adapted to our living conditions here on earth. This could be taken to its natural conclusion in the form of a process of gradually replacing body parts that become defective. This process may have its appeal for some, but it is also a controversial issue with its fair share of hot buttons (“When do I stop being a person and become a machine?”, “Am I less of a person as a Cyborg?”, “Am I the same person I was when I began the process?”, “And if not, when did I become a different person?” and so on and so forth…)
Others are working on ways to preserve and even enhance our brain’s abilities but only at a very small, personal level (no big society-scale jump here).
Personally, I don’t put much stock in the possibility of making our current frail shells last forever. I prefer to bank on another staple of the Singularity concept: the idea that, after it comes to pass, we’ll very soon be able to codify our minds (which are not exactly the same as our brains, but do include them) in a way that will allow us to upload them to a different container, be it a computer, a computer network (living in the clouds, anyone?) or a new, physical, engineered body. And some container that will be; can you imagine our ability to design such things by then?
Now the fun part comes when we try to ponder such a possibility in the light of our current moral standards. If the concept of incremental body enhancement through technology is controversial, what can be said about living as a bodiless entity, or about the concept of “self” when your mind can be uploaded and, thus, gasp, copied?
Fun stuff indeed.
I’ve blogged about another cool application of technology over at OneOverZero: robotic legs. This is a stripped-down version of the full exoskeleton that’s being developed for the military, but for this to have become an actual product for “regular people” is, I think, very important. Technology can improve our lives in myriad ways, but we have to fight our basic urge to resist it and this, because of the “look & feel” of the legs, is probably going to be a very important step towards full acceptance of bionic technologies in our everyday lives.
All hail the cyborg!
So as of now, the playlist of favourites for “the beginning of the end of harmony and start of atonality” is something along the lines of Debussy’s “Prélude à l’après-midi d’un faune”, “Des pas sur la neige” and “La fille aux cheveux de lin” and Satie’s “Le Fils des Etoiles (Preludes)”.
Small list so far, but it will grow.
Also in music-related stuff, I’ve been making public the sketches of songs I’ve been creating (very irregularly) to contribute to the twentieth project.
The songs are over at SoundCloud and, as I said, they’re mere sketches, as they are thought of, written, played, recorded and mixed in less than 4 hours. Yes, I still have that pesky day job to account for the rest of the day.
On a final note, according to last.fm, lately it’s been mostly about Sufjan Stevens (whose “Come On Feel The Illinoise!” album made a great soundtrack to the latest skiing season, by the way), John Grant, Louis Armstrong, The Portico Quartet and Jasper Steverlinck.
That feels about right.