Why Tomnod is nothing more than a game in the search for MH370

Like so many people, I have been fascinated by the story of MH370. Not so much by the fact that we can “lose” a 777 (it’s clear that large parts of the ocean are not monitored as closely as the coasts), but by the question of what happened. Was this a perfect storm of mechanical failures that turned the 777 into a ghost plane, flying on until it ran out of fuel? Was there a hijack attempt gone wrong, with the same outcome?

Either way, the worldwide response has been amazing. One of those responses was DigitalGlobe’s crowdsourcing campaign, Tomnod, to locate MH370. Next to DigitalGlobe, Mapbox tried the same, but it soon became obvious that their satellite imagery is not recent enough to be of any use in the search.

At first Tomnod seemed like a good idea, although some important features that would speed things up were missing:

  • Navigation
    You are “flying blind” with Tomnod and can only check the area they guide you to. There is the trick with the API URL (changing “challenge” to “api” in a Tomnod URL, e.g. http://www.tomnod.com/nod/challenge/mh370_indian_ocean/map/140650, will give you the coordinates), but since tile numbers are not sequential, that still only gives you an idea of the location and no way to navigate.
  • Imagery time
    Being able to select the specific time of the imagery (like with Mapbox) would be a huge step forward. Of course a satellite will not always provide good imagery (clouds etc.), but at this point there is no way to select the imagery timing, while I am sure the same location has been photographed multiple times.

These flaws became even more apparent once it was suggested that instead of flying north to some remote location, the plane might have taken the southern route and crashed in the southern Indian Ocean. Once this became the working theory (which it is at this point, since most of the search is concentrated on that area), Tomnod rendered itself completely useless for the following reasons:

  • Navigation
    Now more than ever, it would be a great help to be able to enter coordinates. At this point, multiple debris sightings have been confirmed by both the Australian and Chinese governments. Hundreds of thousands of people sifting through Tomnod could be of real help if they were able to look at that specific area.
  • Timing
    If the plane did indeed crash into the water, the best chance of finding it would be images taken as close as possible to the date of the crash. As it stands now, the imagery of the location where the debris was spotted (on Tomnod anyway) is dated 16/03/2014, 8 days after takeoff. That is 8 days in which debris had the chance to float away or, even worse, sink. These MIGHT be the best images available of the area, but we don’t know.
  • Scale
    While a scale of 1 cm on screen = 20 m at sea might be enough to find a 777 on a remote landing strip somewhere, it is damn near impossible to use these images to look for floating debris in a water mass with waves many times that size, white crests and rough seas.
    Realistically, if MH370 did in fact crash in the ocean, then at this point, more than 14 days later (or even on the images from 8 days after departure), the floating parts of the Boeing still to be found are likely to be relatively small objects. For example, when Air France 447 crashed, apart from the tail fin (which was recovered 8 days in), the larger pieces of debris recovered were 2 to 3 m across. At 1 cm per 20 m, that means looking for something of roughly 1 to 1.5 mm on the current imagery (not to mention that most floating parts, such as passengers’ luggage, would be even smaller).

All of these flaws make Tomnod something to keep the masses busy, but barely useful for actually finding MH370.

Exporting a DataSet to Excel from C#

While trying to export a DataSet to Excel from C#, I kept receiving the error:

{“Old format or invalid type library. (Exception from HRESULT: 0x80028018 (TYPE_E_INVDATAREAD))”}

After a lot of searching, it turns out this is a known issue in the Office interop related to your CultureInfo.

More information can be found here:

http://support.microsoft.com/default.aspx?scid=kb;en-us;320369

To fix it, simply do the following BEFORE adding workbooks:
[sourcecode language="c#"]
System.Globalization.CultureInfo old = System.Threading.Thread.CurrentThread.CurrentCulture;
System.Threading.Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo("en-US");
[/sourcecode]

and reset it to the saved culture after you are done.
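Putting it together, a minimal sketch (the export delegate is just a placeholder for whatever interop code you run, e.g. creating the Excel application and adding workbooks) could look like this, with the original culture restored in a finally block so it is reset even if the export throws:

[sourcecode language="c#"]
using System;
using System.Globalization;
using System.Threading;

public static class ExcelExportCultureFix
{
    public static void RunWithEnUsCulture(Action export)
    {
        // Remember the current culture so it can be restored afterwards.
        CultureInfo old = Thread.CurrentThread.CurrentCulture;
        try
        {
            // The interop bug (see KB320369 above) goes away when the thread runs under en-US.
            Thread.CurrentThread.CurrentCulture = new CultureInfo("en-US");
            export(); // create the Excel application, add workbooks, dump the DataSet, ...
        }
        finally
        {
            // Reset the culture, even if the export throws.
            Thread.CurrentThread.CurrentCulture = old;
        }
    }
}
[/sourcecode]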

Release of log4net ActiveMQAppender 1.0-Alpha

For a while now we have been looking for an easy way to centralize logging and get a real-time view of our applications. Since a large part of our backend is based on ActiveMQ, and we have implemented log4net in most of our applications, we decided to go further down this road and wrote an appender that publishes log4net log entries on ActiveMQ topics.
The source code for this appender can be found on GitHub, together with some basic information about how it works and how to configure it.

http://github.com/Noctris/log4net.Appender.ActiveMQ
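For those who just want the idea without digging through the repository: conceptually the appender is little more than a log4net AppenderSkeleton that renders each event with the configured layout and hands it to an NMS producer. A rough sketch of that idea (the property names and details here are illustrative, not necessarily what the released appender uses):

[sourcecode language="c#"]
using Apache.NMS;
using Apache.NMS.ActiveMQ;
using log4net.Appender;
using log4net.Core;

// Illustrative sketch only - see the GitHub repository above for the real implementation.
public class ActiveMQAppenderSketch : AppenderSkeleton
{
    // Set from the log4net XML configuration, e.g. tcp://localhost:61616 and a topic name.
    public string BrokerUri { get; set; }
    public string Topic { get; set; }

    private IConnection connection;
    private ISession session;
    private IMessageProducer producer;

    public override void ActivateOptions()
    {
        base.ActivateOptions();
        IConnectionFactory factory = new ConnectionFactory(BrokerUri);
        connection = factory.CreateConnection();
        session = connection.CreateSession();
        producer = session.CreateProducer(session.GetTopic(Topic));
        connection.Start();
    }

    protected override void Append(LoggingEvent loggingEvent)
    {
        // Render the event with the configured layout and publish it as a text message on the topic.
        producer.Send(session.CreateTextMessage(RenderLoggingEvent(loggingEvent)));
    }

    protected override void OnClose()
    {
        producer.Dispose();
        session.Dispose();
        connection.Dispose();
        base.OnClose();
    }
}
[/sourcecode]

In the log4net configuration you would then wire it up like any other appender, with the broker URI and topic as appender properties.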

Please bear in mind that this is an alpha release. We have yet to do performance testing in production, but so far we have not noticed any issues. We are currently looking at a simple console app for viewing the incoming events live.

MIT Experimental UI

One of our biggest challenges is presenting a lot of information in an accessible way. This involves a lot of UI design, but sometimes current methods are just not up to the task. And the only thing worse than not doing something is doing it badly. So once in a while we have to advise against the wishes of a customer because the UI would be too confusing for the end user.

That being said, we are always on the lookout for new UI technology and ideas that might come in handy some time in the future. On one of those research “trips” we found this nice video from MIT:

Cannot open more tabs

When researching something, I tend to have A LOT of Internet Explorer / Windows Explorer windows and/or tabs open. I found this “tweak” a long time ago but only remembered it when I upgraded my office desktop to a new machine (yes… I still run Windows XP…) and ran into the problem again: after opening a certain number of windows/tabs, the machine starts to run slowly and/or just refuses to open any more.

When I check Task Manager, the processor is not really that busy and there is still plenty of memory available.

Enter the desktop heap size…

Since I am not an expert on this, I would rather refer to two great articles about it:

Desktop Heap Overview

and

Desktop Heap Overview, Part 2

In short: the desktop heap sets aside a blob of memory to store user interface objects such as windows, tabs, menus and hooks. Since this area is apparently limited to 48 MB in Windows XP Professional x86 (32-bit) (this is just a default value and is not always explicitly set in the registry), it fills up and you cannot create any more UI objects (I am not sure, but I have a feeling that memory is also assigned for UI threads).

Now 48 MB was probably a lot when XP came out, but on machines with 4 to 8 GB of RAM these days it is only a fraction of the total.

Fix:

To do this, you must edit the registry.

WARNING: Editing the registry could potentially harm your operating system and render it in a state of utter not-working-ness. BEWARE.

Open the registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems

Right-click the “Windows” value and choose to modify it.

You will see that the value contains a bunch of stuff:

%SystemRoot%\system32\csrss.exe ObjectDirectory=\Windows SharedSection=1024,3072,512 Windows=On SubSystemType=Windows ServerDll=basesrv,1 ServerDll=winsrv:UserServerDllInitialization,3 ServerDll=winsrv:ConServerDllInitialization,2 ProfileControl=Off MaxRequestThreads=16

It is the SharedSection part of the “Windows” value you want to increase, specifically the second number in SharedSection=1024,3072,512, which is the size (in KB) of the desktop heap for the interactive window station.

IMPORTANT: If your SharedSection does not have the last part, “,512”, you should add it.

I am using a value of 20480 for that second number, which works perfectly.

So now my SharedSection looks like this:

SharedSection=1024,20480,512
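For reference, with that change the complete “Windows” value reads as follows (everything except the SharedSection part is unchanged from the default shown earlier):

%SystemRoot%\system32\csrss.exe ObjectDirectory=\Windows SharedSection=1024,20480,512 Windows=On SubSystemType=Windows ServerDll=basesrv,1 ServerDll=winsrv:UserServerDllInitialization,3 ServerDll=winsrv:ConServerDllInitialization,2 ProfileControl=Off MaxRequestThreads=16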

After a reboot, start opening extra windows; it won’t stop ;-)

To Map or not to Map, that is the question -> Mapping Domain Entities to DTOs… and back

I don’t know about you, but for us mapping entities to DTOs is a very boring and time-consuming task. While building a set of new web services for internal use, I needed to map a bunch of NHibernate business objects to DTOs that could be used with web services (and thus be serializable). In the past we typically used a static class for this and did the mapping manually, a painful process where you need to map lists of objects, lists of DTOs, and then business entity => DTO and DTO => business entity.

However, looking at the new content on one of my favorite webcast sites (http://www.dnrtv.com) I found a library called AutoMapper, an amazing piece of code by Jimmy Bogard.

Watch this webcast for a demo (amazing stuff there): Jimmy Bogard on AutoMapper

Basically, this tool does 95% of the mapping work for you without needing more than a single line of code (as long as you follow some naming conventions). It can flatten your domain model, for example mapping:

Blog.Author.Name from your domain model to:

BlogSummaryDTO.AuthorName without you needing to code anything else but:

[code]Mapper.CreateMap<Blog, BlogSummaryDTO>();[/code]

And you are done! We are currently looking to use AutoMapper in this web services project to see how it goes. If all goes well, it won’t be long before we replace all our manual mapping code with AutoMapper!
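To make the flattening convention concrete, here is a small self-contained sketch using the Blog/Author/BlogSummaryDTO names from above (the classes themselves are made up for illustration):

[sourcecode language="c#"]
using AutoMapper;

public class Author { public string Name { get; set; } }

public class Blog
{
    public string Title { get; set; }
    public Author Author { get; set; }
}

// "AuthorName" follows the Source.Member.Member naming convention,
// so AutoMapper resolves Blog.Author.Name into it without any extra configuration.
public class BlogSummaryDTO
{
    public string Title { get; set; }
    public string AuthorName { get; set; }
}

public static class BlogMappingExample
{
    public static BlogSummaryDTO ToSummary(Blog blog)
    {
        Mapper.CreateMap<Blog, BlogSummaryDTO>(); // normally done once at application startup
        return Mapper.Map<BlogSummaryDTO>(blog);  // flattens blog.Author.Name into AuthorName
    }
}
[/sourcecode]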

Apache NMS 1.2.0.0 is coming (.NET Client for ActiveMQ)

At this point, Apache.NMS is at the Release Candidate 2 stage (Apache.NMS 1.2.0.0-RC2) and it has a bunch of promising features and bug fixes in it!

Since we are very excited about this release (we have filed some bugs, feature requests and tests), I just wanted to list some of the progress that has been made since version 1.1.0.0 (which we use in production).

The failover protocol is a bit more trustworthy:

1.2.0.0 Roadmap

AMQNET-171: TcpTransport still throws errors when used by the failover protocol (FIXED)

AMQNET-159: Failover protocol can now connect async

-> This speeds things up considerably when failing over

Support for JMS StreamMessages has been implemented

We can now monitor connections with two new events (ConnectionInterrupted and ConnectionResumed); see the sketch below

An inactivity monitor has been built in, so connection faults can be detected earlier and handled (this still feels beta-ish though: we have seen some strange behaviour where ConnectionInterrupted does not get fired, although the connection is restored, when a cable is unplugged and plugged back in)
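As a quick illustration of those two events, a minimal consumer-side sketch might look like this (the broker addresses are placeholders, and the event member names and delegate signatures are written the way we understand the 1.2.0.0 RC exposes them, so double-check against the actual release):

[sourcecode language="c#"]
using System;
using Apache.NMS;
using Apache.NMS.ActiveMQ;

class ConnectionMonitorSketch
{
    static void Main()
    {
        // Failover URI: the transport transparently reconnects to one of the listed brokers.
        IConnectionFactory factory =
            new ConnectionFactory("failover:(tcp://broker1:61616,tcp://broker2:61616)");

        using (IConnection connection = factory.CreateConnection())
        {
            // The two new monitoring events mentioned above.
            connection.ConnectionInterruptedListener +=
                () => Console.WriteLine("Connection to the broker was interrupted");
            connection.ConnectionResumedListener +=
                () => Console.WriteLine("Connection to the broker was restored");

            connection.Start();
            Console.WriteLine("Press Enter to quit.");
            Console.ReadLine();
        }
    }
}
[/sourcecode]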

There are more bug fixes and features, but these are the ones we were really looking forward to. Apache.NMS still has a long way to go, but it is getting better fast when the committers put their minds to it. Tim Bish in particular does amazing work for this project.

How to fix the WCF error “The remote server returned an unexpected response: (417) Expectation Failed”

Originally from: http://nahidulkibria.blogspot.com/2009/06/how-to-fix-wcf-error-remote-server.html

A few days ago one of our projects (using WPF and WCF) went live and we started getting lots of weird errors, one of them being 417 “the remote server returned an unexpected response”.
After some investigation we found it only occurs when clients are behind a proxy, in our case Squid (http://www.squid-cache.org/), with a configuration like the following in squid.conf:

# This option makes Squid ignore any Expect: 100-continue header present
# in the request. Note: Enabling this is an HTTP protocol violation, but some
# clients may not handle it well.
# Default:
ignore_expect_100 off

The problem can be solved by changing the Squid configuration to:
ignore_expect_100 on

The following setting in the app.config also solved the problem.
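This is typically the servicePointManager element that stops the client from sending the Expect: 100-continue header (shown here as it would usually appear; the exact snippet in the original post may have differed slightly):

[sourcecode language="xml"]
<configuration>
  <system.net>
    <settings>
      <!-- Do not send "Expect: 100-continue" on outgoing HTTP requests -->
      <servicePointManager expect100Continue="false" />
    </settings>
  </system.net>
</configuration>
[/sourcecode]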

If you have control over your proxy server settings, though, change those instead. If you cannot change the proxy settings and have to handle this by changing app.config, you may run into problems when uploading large amounts of data, since the 100-continue handshake exists precisely so the client can wait for the server’s approval before sending a large request body.

For more information you can check:
http://msdn.microsoft.com/en-us/library/system.net.servicepointmanager.expect100continue.aspx

Browsing Folders on Samba server over VPN (IPSEC)

With our new fiber in place and a couple of extra configs done, it was time to make it our main connection and disconnect the old one. Thinking this would be a relatively straightforward “replug and play” operation, I went off yesterday switching the connection, adjusting IPs in DNS and the VPNs on the firewalls at both our office and the colocation. Some pinging and tracerouting later, all seemed to be well…

However, early this morning I started noticing issues with our nightly copy to the backup server in our colocation. Trying to browse servers on the other end gave me a strange problem:

Opening a server and/or share was fine as long as there were few files/folders in it. If it contained more than about 10, it would fail with “ERROR 65: the network location is no longer available”. That made me embark on a quest which gave me some (strange) insights into the SMB protocol. First of all, our setup might be important:

pfSense 1.2 firewalls everywhere, both as main firewalls and as our “routers”, with IPsec tunnels between them towards our datacenter.
Openfiler boxes both in the main office and in our datacenter.
Simple robocopy batch files for copying the data.

After a lot of forum searching, I found several posts about pfSense not being able to handle fragmented packets over IPsec.

Now, after switching from xDSL to fiber, I had not looked any further than the IP addresses of the firewall and the IPsec tunnels. What I had forgotten was that the MTU for an xDSL line is 1492; for our fiber, I did not know for sure.

Since pfSense by default uses an MTU of 1500 for any interface except PPPoE interfaces (where it automatically sets it to 1492), I started checking, and sure enough, I quickly found out using:

(on Windows)

ping -f -l 1500 xxx.mydomain.com

that my packets would get fragmented.

Looking further into it, I found that a payload of 1472 was the largest that would go through unfragmented, which makes sense: 1472 bytes of ICMP payload plus the 8-byte ICMP header and the 20-byte IP header is exactly 1500. But alas, no luck: after some more searching I realized that although my connection could handle 1472, IPsec adds extra overhead to each packet. And sure enough, doing a

ping -f -l 1472 to one of my internal IPs on the datacenter side resulted in fragmented packets yet again…

So another search for the largest usable value followed, until I found that 1418 was the maximum payload that would pass unfragmented over my IPsec tunnel.

After lowering the MTU accordingly, my Samba shares were browsable like they should be and the scripts could find their paths again.

So kids, if you want to do Samba browsing over a VPN (this does not only apply to IPsec), check your MTU.

Oh, extra info: apparently (although this was not the case for us), Samba does NOT handle NAT well :s

I think I’m going to send a mail to the guys over at pfSense to see if it is possible to set an MTU for the IPsec tunnel only, since right now I am operating below the MTU of my actual connection just to be able to run Samba traffic over it.

Getting started with NHibernate

A couple of weeks ago I wrote about my evaluation of code generation tools. Although I promised to post updates along the way, I didn’t. The simple reason is:

I stopped. After fiddling around with several of them, I thought I was going to settle for CodeSmith. It turns out that now, after a couple of weeks of playing, the templates and “hacks” I have made are so extensive that I only use CodeSmith for generating my object classes (yes, I’m a lazy man), and that is about it!

I did buy the product for this purpose and am still trying to do as much as I can with it. I will update the article with my findings and why I bought CodeSmith, but that is for later…

Because the biggest advantage I got from using CodeSmith is this: I understand NHibernate now. In the forums of (N)Hibernate (<- site down AGAIN as I write this… come on guys, please fix this) and NHForge I see a lot of posts about the steep learning curve of NHibernate. At first I agreed: most of the docs were pretty abstract if you were not familiar with the naming of things. I had run into this before and gave up back then, not having the time to do the learning at that point.

CodeSmith comes with an NHibernate template. Looking back now, it is not really “production ready” (maybe for a small website, sure), but this template, for me anyway, took away the learning curve.

We use a sort of “agile” development method, meaning we do a lot of research on a technology/solution/architecture and, once we are satisfied, we just start on it to see where it gets us. It was the same for NHibernate and CodeSmith, and the only way we could do this was with the template.

After generating my DAL, it took me about a day to run into trouble. But having an “example” (aka the template output) ready, the structure and inner workings of NHibernate became very clear to me, which gave me a fast track into NHibernate.

Now, only a couple of weeks later, we have two services, a WinForms app, an ASP.NET site and some other utilities running NHibernate with message queue (ActiveMQ) middleware and a MySQL database backend. Although it is not working 100% yet, it is looking fine.

We are using NHibernate event listeners to send messages to other applications on object inserts, updates and deletes, and to audit the tables by filling in reccreated, reccreater, recmodified, recmodifier, recdeleted, recdeleter and several other columns (see the sketch below). All of this thanks to the CodeSmith template giving us the start we needed to get going on NHibernate…
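For anyone curious what such a listener looks like, here is a stripped-down sketch of the insert case (the class name and the publishing method are made up for illustration; our real listeners also fill in the audit columns mentioned above):

[sourcecode language="c#"]
using System;
using NHibernate.Cfg;
using NHibernate.Event;

// Illustrative only: notifies other applications whenever an entity has been inserted.
public class InsertNotificationListener : IPostInsertEventListener
{
    public void OnPostInsert(PostInsertEvent @event)
    {
        // @event.Entity is the object that was just written to the database.
        PublishToActiveMQ("inserted", @event.Entity.GetType().Name);
    }

    private void PublishToActiveMQ(string action, string entityName)
    {
        // Placeholder: in our setup this publishes a message on an ActiveMQ topic via Apache.NMS.
        Console.WriteLine("{0}: {1}", action, entityName);
    }
}

public static class ListenerRegistration
{
    public static void Register(Configuration cfg)
    {
        // Install the listener on the NHibernate configuration before building the session factory.
        cfg.EventListeners.PostInsertEventListeners =
            new IPostInsertEventListener[] { new InsertNotificationListener() };
    }
}
[/sourcecode]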