Here at CrowdEmotion one important tech topic is capturing video in the most efficient way: we must be very careful about the quality of the capture process, since part of the accuracy of our analysis depends on it.
On desktop web platforms the choice is essentially between Adobe Flash and, more recently, HTML5.
Why bother looking for an alternative to Flash? I won't add anything to what people far more authoritative than me have written on this topic; the conclusion is simply that HTML5 is the way to go, for a number of very good reasons.
Currently we use a 3rd-party Flash-based solution and we are fairly satisfied with it, but we need to do more and be more flexible. We have already started experimenting with Flash development (with Haxe, to try to work with open tools) and with open source RTMP video servers (Wowza, Red5), but in the end you are still limited by the closed Flash platform.
One simple question before going further into HTML5 capture: why doesn't Google Hangouts run on HTML5 yet, relying instead on a native plugin?
Quick answer: because the HTML5 real-time communication stack, called WebRTC, even if feature-complete, still seems too young to be adopted for a mainstream, strategic product like Hangouts.
The “URL shortener” concept is very simple: take a URL, transform it into another (shorter) URL, then use redirection to get back to the original URL. So what’s the deal?
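The idea can be sketched in a few lines of code. This is only an illustrative toy, not any particular shortener's implementation: the `ALPHABET`, the in-memory tables and the class name are my own assumptions, and a real service would of course use persistent storage instead of Python dicts.

```python
# Minimal URL-shortener sketch: map an auto-incrementing numeric id
# to a short base-62 token, and keep a lookup table for redirection.

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode(n: int) -> str:
    """Convert a numeric id into a short base-62 token."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n > 0:
        n, r = divmod(n, 62)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits))

class Shortener:
    def __init__(self):
        self._urls = []    # id -> long URL
        self._by_url = {}  # long URL -> already-issued token (dedup)

    def shorten(self, url: str) -> str:
        """Return a short token for the URL, reusing it if already seen."""
        if url in self._by_url:
            return self._by_url[url]
        token = encode(len(self._urls))
        self._urls.append(url)
        self._by_url[url] = token
        return token

    def resolve(self, token: str) -> str:
        """Decode the token back to its numeric id and look up the URL."""
        n = 0
        for ch in token:
            n = n * 62 + ALPHABET.index(ch)
        return self._urls[n]
```

An HTTP front-end would then answer each request for a token with a 301 redirect to `resolve(token)`; that redirect is the whole service.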
Everyone knows how hard it is to discover the cause of a bug that appears in only one environment; if that environment is Production, the problem is even bigger, even with a solid error-logging system in place.
Recently we faced exactly this situation, and we had no clues to work from: only that the w3wp process was dying and the ASP.NET session remained locked. After some thought we concluded there was an infinite loop somewhere, and we had a vague idea of the area of code where it was happening, but we couldn't reproduce it in any other environment, even after several hours of testing.
Some weeks ago one of my customers decided that one of its biggest ASP.NET intranet projects needed an architectural revision, mainly to serve its customers better through built-in fault tolerance, but also to decouple development of the various sub-projects through better separation between software modules.
When small software companies get bigger they embark on what can be a bumpy ride of change. One of those changes will probably be to do with the way they tackle the analysis phase of the software development life-cycle (SDL). Just to be clear, when I say “analysis phase”, I mean the part before coding starts i.e. requirements elicitation, analysis and system specification.
Typically (although I am sure that there are plenty of shining examples where this is not the case) small software companies with a handful of developers, where the entire SDL for a project is covered by one or two developers, tend not to have a formalised analysis phase. Why is that?
Recently I stumbled upon a couple of articles [1, 2] and, remembering my experience with EC2, I realised that utility computing was not what I was looking for. I wanted something that helped me without adding complexity, but simple web-hosting offers didn't satisfy me either: I also wanted complete control over my infrastructure, both for the technical freedom I might need and because, when it comes to my customers' data, I trust no one.
The most expensive phase of software construction is coding, and that is because it is the least intuitive: it requires constant attention and reasoning, and errors (logical or otherwise) are difficult to spot because they are buried in text that is often long, spread across more than one file, and not written by us.