forty-two – Random thoughts filtered through a babelfish.

Goodbye Walmart

I am saying goodbye to Walmart. After years of being a customer, our time together has finally reached the end of the road. Not over some major issue, but over the very basics of customer relations.

The Goal

Earlier, I decided to get a Google Chromecast to use for a repeating presentation in a lobby. I am not 100% sure it will work, but it was at the right price point to give it a try. I did all the normal pre-purchase rituals, specifically comparing Amazon, BestBuy, Target, and Walmart. The three big box stores are only a few miles from home, and I didn’t want to wait the whole two days for the Amazon Prime delivery elves to work their magic. The prices were all within a dollar of each other, so I decided to go to Walmart – I had to be next door for something else anyway. Once more, laziness is the ultimate motivating force.

Before I go any further, let me share the screenshots from the four store sites as they were taken Tuesday, September 8th, 2015, just before 9:00 pm.

The Problem at Walmart

So, I make the trek over to Walmart, find the Chromecast, and go to pay. It rings up at $35.00. I tell them that’s the wrong price. They say, “Nope. That’s the price.” I counter that the website says otherwise, and that’s when they inform me that they do not honor the prices on their own website. Yes. That is right. Walmart will not honor their own prices from their own website. Nothing on the page says anything to that effect. None of their competitors do that either. I offered to buy it online using my phone, and they told me it takes up to four hours for them to see the order. They were polite, but firmly said it was company policy. I chose to shop at the Target 2 miles away.

So let’s recap. A store renowned for efficiency, one that had already captured my business – I was in the store already – gave it away because they wanted me to order online, wait up to 4 hours, then drive back to the store and pick up the item after their employees pulled the same unit from the shelf and brought it to the front of the store. Not only is this a model of inefficiency, it is based on a deceptive website. The website said nothing about it being an online-only price. To make it worse, I told them that none of their competitors do this, and that they were all selling it at the lower price. They told me to go home and print out their website as proof. The website on my phone, or on their own computers, would not count.

On my way out, I stopped at the customer service desk to tell them that I was upset as a customer, and they just apologized and said it was company policy. So, Walmart, if you are listening: your company policy cost you a customer. Not only will I avoid Walmart, I will even try to avoid Sam’s Club. Bottom line: you didn’t lose me over the $6, but over the deceptive practices and poor customer service. Being polite is not an excuse for not addressing my issue.

Wrapping Up

Anyone who goes back through my blog will notice that I have rarely, if ever, commented on a specific store or product. I actually try to avoid commentary like that. Nothing against it, just not my thing. This, however, exhibited what I think of as a huge corporate policy mistake by Walmart. They are competing, in many cases successfully, against a horde of other companies – Amazon amongst them. As a Prime customer for years, I know the value that Amazon brings to the table. I also know the value of the other stores, and they all have their merits. The margins are getting smaller, and the way to stay on top is through low-margin volume and the occasional high-margin item – especially the impulse buy. When I next need a new TV, or computer, or printer, or most anything else – even school supplies – Walmart is now off my list. Will I be missed or even noticed by Walmart? Not a chance. I represent a rounding error on a rounding error to a single store. The amazing thing to me is that I cannot be the only one who noticed this practice. I also cannot be the only one who decided to change his (or her) shopping habits because of it. Those rounding errors add up.

Finally, I will throw another opinion into this. The future of commerce is neither online nor offline, but a hybrid of the two. The ratio will change over time, back and forth, but the winners in the future of commerce will be those that blend the two seamlessly. My shopping experience should not be either/or. It should be totally and completely seamless. I should have a single shopping experience from home to car to office to store. In fact, I should barely even notice the difference. The store that delivers that is the one I think will dominate the next generation of commerce. Having different prices for the online and offline experiences is no different than two salespersons in the same store giving me two different prices.

But then again, what do I know, I am only a consumer.

Does anyone disagree? Have a different opinion? Similar experience? Share in the comments. I am really curious.

Switching from WordPress to Jekyll

Ten years ago – May 14, 2005, to be specific – I started this blog on the WordPress platform. I chose WordPress for a number of reasons, but the primary ones were that it was simple – really simple – for me to get up and going, and that the hosting providers I was considering supported it. I wanted to use the web, but not actually develop for the web. WordPress seemed like the right choice. I did consider other options, and every few years, I would look around at what else was available. I have played with Drupal, Joomla, DasBlog, and others I can’t remember. In the end, I always ended up back on WordPress. Why? Simplicity. It did what I wanted.

Time passes and things change. I don’t have the world’s largest blog, only around 80 posts or so from those ten years, but my WordPress installation was loaded with plug-ins. I had plug-ins for SEO, analytics, Facebook cross-posting, Twitter cross-posting, CAPTCHA, social media, and more. Worse, I was actually starting to have to manage the site. WordPress, plug-ins, and themes would all have different update schedules. This meant that I needed to deal with updates more than just once in a while. It also meant that, fairly often, I would find my home page displaying PHP error messages. In the end, WordPress was no longer simple.

I still stayed, because it was fast, and because my blog writing software of choice (Windows Live Writer) supported it. The problem is that it is no longer fast, and WLW is effectively abandonware. It hasn’t been updated in years, and the last glimmer of hope, Windows Live Writer going Open Source?, seems not to have materialized.

Time continues to pass, and I find myself spending more time with web development and the cloud. So out comes my favorite tool, Google, and I search for WordPress on AWS and Azure. I decide to try an experiment and set up WordPress in Azure. I chose Azure since I happened to have some unused free usage there, and I had already done some playing in the AWS space. I figured it would be a good way to learn more about Azure.

The Azure Marketplace has thousands of prepackaged deployment options, and given that I refused to set up WordPress from the ground up, it was my starting point. A quick search on WordPress later, and I see the following options:

Azure Marketplace

Two real options here – WordPress or Scalable WordPress. I should point out that the same search on the Azure website produced more results. The catch is that only the first two choices above were Web Applications, which meant I would not have to deal with virtual machines. I had no desire to manage a Linux server, a MySQL database, WordPress, and a VM. So Azure’s Web Application model was a requirement. Which one to use? Easy – whichever one is cheaper. So here are some prices.

Scalable WordPress Pricing Tiers

WordPress Pricing Tiers

Given that my current WordPress hosting provider costs me around $15 / month, I wanted to find something similar, and on the cloud that meant a D1 or F1 plan. Either that, or I needed to rethink the economics of my cloud usage. Scott Hanselman blogged about cloud economics in Penny Pinching in the Cloud: When do Azure Websites make sense?. Without going too deep into it, the B1 makes sense only if I can host 4 websites like my blog on it, and the S1 only if I get to 8. The D1 makes sense per website, so long as the shared tier can handle the load, and only until I hit 6 websites. At 6, I should move to a B1; at 8, the S1. However, not much of that matters here: I have a block of free dollars per month that is large enough to cover an S1. While I considered the economics, this was more about finding out how much box I needed to make my site hum along. The one economic choice I did make was to skip the Scalable WordPress option, since it required a minimum of an S1 box.

Starting with WordPress on an F1 was easy enough. I followed the Azure Marketplace work flow and the site was soon created. One catch: WordPress requires MySQL, and there is no option for MariaDB on Azure yet. That meant another $10 / month on top of whatever pricing tier I selected. Given my free dollars, this was not a show stopper, but it did annoy me. MySQL is open source and can be provisioned for free, and I didn’t need my blog running on a mission-critical build out.

Fast forward a few days of tinkering, and I finally get the whole thing set up. The problem is performance. It sucks. No. It is far worse than that. Just trying to configure the plug-ins or load themes was painful. So, employing the beauty that is the cloud, I upped my tier to B1. It improved from horrible to just terrible. I played with options and did some more Googling, but unless I put more horses under the hood, this was the best it was going to get. I should add that the performance was not just horrible in administrative mode; it was bad as a reader of the site, too.

I spent more time on plumbing than I wanted. Time to look elsewhere.

My next stop along the cloudy blogosphere was Ghost. I supported Ghost as a Kickstarter project, and really liked the back-to-basics model it was taking. Once more I searched the net for others breaking this ground before me, and once more, I found a Scott Hanselman article to leverage – How to install the nodejs Ghost blogging software on Azure Websites. A few days of effort pass, and the result – no joy. I just could not get it working right. It was little issues here and there, all fodder for another day’s post. The result is that Ghost wasn’t going to be my platform.

However, the effort with Ghost was not wasted. During my research, I learned about a whole class of static web site generators. I had heard bits and pieces about them before, but never paid any real attention. This time I did, and long story short (too late – I know), I settled on Jekyll. MiddleMan, Pretzel, and Hugo were close contenders. The support for Jekyll on GitHub means there is a lot of information available for someone new, which makes it a great first step into the static web site world. The others I tried were not hard to work with, but Jekyll just had so much more available to learn from. After porting my 80 or so posts, a few pages, and the core functionality over, I can see and feel some of the limits of Jekyll. However, I have been able to address my needs – at least for now.

So that’s the story of how I got to Jekyll as my choice. I plan to blog my journey to make my new blog as functional as my old one was. Specifically, the following features needed to exist for me to be able to claim success:

  • All my posts must be ported, and look reasonably close
  • All my posts must have the same permalink structure so that no ‘search memory’ is lost
  • All of my post comments must be migrated
  • There must be a contact form
  • It must support a Link Post Format
  • Must support analytics
  • Must support source code syntax highlighting
  • Must support social sharing functions (Facebook, Twitter, and Email at the minimum)
  • Must have social follow me functions (RSS/ATOM, Twitter, GitHub)
  • Must have related post functionality
  • Must have paged and previous / next post navigation
  • Must support post to Twitter

Those were all features I used in WordPress, and whatever I moved to next had to support them. I was able to make Jekyll do all of that, plus some new functionality. In the following posts, I plan to document how I solved some of these issues, including a new Jekyll plug-in I had to write to do it.
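
As a small taste of what those posts will cover, the permalink requirement turned out to be a one-line setting. Here is a minimal sketch of the _config.yml entry, assuming the old WordPress blog used a date-based permalink structure (the actual pattern has to match whatever WordPress was emitting):

    # _config.yml – match the old WordPress permalink structure
    # so old links and 'search memory' keep working
    permalink: /:year/:month/:day/:title/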

Have you dealt with similar blog migrations? Do you have opinions on Jekyll or other static site generators? Let me know in the comments below. I am really curious.

Kickstart Uncanny Magazine

Uncanny Logo

I am going to diverge from my normal blog post topic of programming to talk about another passion of mine – fiction. I enjoy writing it, and more importantly – greatly enjoy reading it. One of the authors I have started to read recently is John Scalzi. His Redshirts book was the first of his I read (ok – I listened to it using Audible, but I am still calling it reading), and I was hooked. So I started following his blog, and he recently posted about a Kickstarter project for the Uncanny Magazine.

I can’t predict the future of this magazine or this model of funding it, but I love the idea of what they are trying to do and how they are funding it. This funding model will help release more literature into the world, both good and bad, but more literature is generally a good thing in my opinion. Combine that with the pedigree of the authors behind this project, and I think this has all the potential for success – and more importantly – quality material. Given all of that, I chose to support them (with real money too) and I am encouraging anyone that enjoys quality material to do the same.

If you want to read more about the authors, go to the Uncanny Magazine website, the Uncanny Magazine Kickstarter site, or John’s blog post about the magazine.

If you need a few more or just different reasons to do it, here are some:

  • Just do it!
  • You will support literature
  • I told you so
  • I will call your mother and tell her you hate reading
  • They have a unicorn as a logo – C'MON! It's a frickin' unicorn!!!
  • You will sleep better since you will have done a good deed to offset that bad deed you did in grade school – you know the one I am talking about
  • A year’s subscription is less than a week of Starbucks – a week of burnt caffeine versus a mind sent wandering
  • If you do this, the space aliens that are about to invade will spare you and your dog or cat
  • One more – it's a unicorn!!! Really? You need more than that?
  • Ok. Since you need more – you are supporting the arts. That means you are a Patron of the Arts, and that means you are just like the Patrons that supported da Vinci, Michelangelo, and Shakespeare. I am not saying that this magazine will have the equivalent, but you never know
  • So stop reading my blog – and go support them. Click the unicorn.

Public Service Announcement: No unicorns were hurt during the authoring of this blog post, and the author is in no way obsessed with unicorns – only quality fiction.

Caliburn.Micro.Logging 2.0.3 Released

Just a quick note that with the release of Caliburn.Micro 2.0.0, I have updated the logging libraries to use the latest version and pushed them to NuGet.

For usage instructions, read my post Re-Introducing Caliburn.Micro.Logging.

For those that just want to NuGet and go, the Package Manager Console commands are below.
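
Presumably these are the standard install commands, using the same three package ids named in the version 1.5 post further down:

    Install-Package Caliburn.Micro.Logging
    Install-Package Caliburn.Micro.Logging.NLog
    Install-Package Caliburn.Micro.Logging.log4net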




Getting the Code

As always, the code for Caliburn.Micro.Logging is on github at Caliburn.Micro.Logging.


Not a really big change, but it takes the logging libraries out of the pre-release state. As always, comments, suggestions, and critiques are welcome. You can comment below, add an issue to the issue tracker, or send me an email using the blog comment form.

Great Post on NoSQL Data Modeling Techniques

Nope. It’s not mine. Though I wish it was. It is an older post (from the distant past of 2012) on the Highly Scalable Blog entitled NoSQL Data Modeling Techniques.

This past weekend I found myself looking for methods of storing hierarchical data in a key-value store (a subject for another blog post), and in the spirit of not reinventing the wheel, I broke out my trusty Google search window. A few search refinements later, I discovered this gem of a post. Ilya starts with a pretty simple explanation of the differing NoSQL models. But the real value in this post is when he describes the conceptual techniques, some of their pros and cons, and their applicability to specific NoSQL engine types.

I had been looking at adjacency lists, so I started there, and found this description:

(12) Adjacency Lists

Adjacency Lists are a straightforward way of graph modeling – each node is modeled as an independent record that contains arrays of direct ancestors or descendants. It allows one to search for nodes by identifiers of their parents or children and, of course, to traverse a graph by doing one hop per query. This approach is usually inefficient for getting an entire subtree for a given node, for deep or wide traversals.

Applicability: Key-Value Stores, Document Databases

source: NoSQL Data Modeling Techniques
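
To make that concrete for my key-value case, here is a minimal sketch of the idea, with an in-memory Dictionary standing in for the key-value store; the Node shape and the names are mine, not Ilya’s:

    using System.Collections.Generic;

    public class Node
    {
        public string Id;
        public List<string> ParentIds = new List<string>();
        public List<string> ChildIds = new List<string>();
    }

    public class AdjacencyListStore
    {
        // each node is an independent record, keyed by its id
        private readonly Dictionary<string, Node> store = new Dictionary<string, Node>();

        public void Put(Node node) { store[node.Id] = node; }

        // one hop per lookup: fetch the direct children of a node
        public IEnumerable<Node> Children(string id)
        {
            foreach (string childId in store[id].ChildIds)
                yield return store[childId];
        }
    }

Getting a whole subtree means repeating that hop once per level, which is exactly the inefficiency called out above.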

Ilya’s description was simple and right to the point. Since I like simple and to the point, I started scanning the rest of the post. The immediate predecessor on the page was equally well laid out, as can be seen below.

(11) Tree Aggregation

Trees or even arbitrary graphs (with the aid of denormalization) can be modeled as a single record or document.

  • This technique is efficient when the tree is accessed at once (for example, an entire tree of blog comments is fetched to show a page with a post)
  • Search and arbitrary access to the entries may be problematic
  • Updates are inefficient in most NoSQL implementations (as compared to independent nodes)

Applicability: Key-Value Stores, Document Databases

source: NoSQL Data Modeling Techniques
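
Again, a quick sketch of what that looks like in code, using the blog-comments example from the quote; the class shapes are my own invention:

    using System.Collections.Generic;

    public class Comment
    {
        public string Author;
        public string Text;
        public List<Comment> Replies = new List<Comment>();   // the nesting is the tree
    }

    public class Post
    {
        public string Title;
        public List<Comment> Comments = new List<Comment>();
    }

Serialize a Post and you have a single document holding the entire comment tree: one read renders the whole page, while updating one deeply nested reply means rewriting the whole document – both exactly as described above.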


Bottom line: there is very useful information in Ilya’s post. While I may not agree with all of his conclusions and assumptions, I still learned something, and found information I can use. That’s the hallmark of a great post for me.

Go check out Ilya’s NoSQL Data Modeling Techniques post and let him know what you think, or come back here and tell me – I don’t mind.

Re-Introducing Caliburn.Micro.Logging

It has been a long time since I upgraded this library, and the .NET development world has shifted a bit in that time. One of the key changes is the move to Portable Class Libraries, and Caliburn.Micro is one of the libraries making that change. I have decided to follow suit and make the core Caliburn.Micro.Logging library portable. This, of course, has created some significant breaking changes, so please look over the changes list below.

I am re-introducing this library since my other posts on it are quite old and seem a bit dated. If you are interested, here is the list of related posts:


  • Upgraded from Caliburn.Micro to Caliburn.Micro.Core 2.0.0-beta2
  • Converted Caliburn.Micro.Logging to a Portable Class Library
  • Removed TraceLogger [breaking]
  • Dropped support for Silverlight 4/5 and Windows Phone 7.1 [breaking]
  • Added a strong name
  • Upgraded solution / project to Visual Studio 2013 Update 2

How to Use Caliburn.Micro.Logging


The easiest way to use Caliburn.Micro.Logging is with NuGet, using package id Caliburn.Micro.Logging. However, if you are using the pre-release version, make sure to enable pre-release versions in NuGet. The command for the Package Manager Console is below.
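
Presumably the standard install command, with the switch that allows pre-release packages:

    Install-Package Caliburn.Micro.Logging -IncludePrerelease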

For the more command-line challenged (or GUI enabled), use the Add Package Reference dialog.

Getting the Code

You can find the code for Caliburn.Micro.Logging on github at Caliburn.Micro.Logging.

Configuring Your Code to use Caliburn.Micro.Logging

Once you have added the Caliburn.Micro.Logging NuGet package to your project, just modify the bootstrapper by adding a static constructor that sets the GetLog delegate. In this sample I show the DebugLogger, but it could also be the NLogLogger. You can also do this in the App.cs file.

    static App()
    {
        LogManager.GetLog = type => new DebugLogger(type);
    }

Once that change is made, compile and run. That’s it. You will now see debug output from the Caliburn.Micro framework, and your own log statements.


In the spirit of no good deed goes unpunished, there are some issues with PCL that I ran across, and you may as well. If your app is targeting .NET 4.5 or .NET 4.5.1, you won’t hit these, so if that’s you – move along – nothing to see here. If you are using 4.5.2, then you will get the following error messages:

Don’t Panic! I am not sure why this happens, or what is going on in Visual Studio land to get me to this happy place only for .NET 4.5.2, but the solution is simple. You need to add a reference to System.Runtime. However, it is not listed in the normal assembly references list. I found it by going to C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETPortable\v4.6. Once you add it, everything just works. I am sure this is just growing pains related to Portable Class Libraries.


This is not the world’s biggest revision – especially given the really small code base – but it is a move in the right direction. The use of portable class libraries is a major shift in the .NET world, and a welcome one. It would be nice if the vast majority of code were portable, and only a small portion of what is developed were platform specific. That seems like the holy grail of the development community. We shall see.

In the meantime, if you have suggestions, comments, or critiques, please add a comment below, add an issue to the issue tracker, or send me email using the blog comment form.

Caliburn.Micro.Logging updated to version 1.5

Sometimes time flies way too fast. It has been seven months since I last updated my Caliburn.Micro logging frameworks, and nearly a month since Caliburn.Micro was upgraded. Well, I have finally caught up and upgraded the logging libraries.

Caliburn.Micro.Logging, Caliburn.Micro.Logging.NLog, and Caliburn.Micro.Logging.log4net are now version 1.5 and available on NuGet.

Version Changes

  • Upgraded Caliburn.Micro to 1.4
  • Upgraded solution / project files to VS2012
  • Added support for Windows Phone 8
  • Split Caliburn.Micro.Logging, Caliburn.Micro.Logging.NLog, & Caliburn.Micro.Logging.log4net into their own solutions
  • Changed NuGet packaging to be a project in the solutions
  • Minor changes to the physical directory structure

Please see the following posts for usage:

As always, comments and suggestions are welcome. Source code is available on my github site.

IoC Battle–Revisited


I have been evangelizing the use of IoC for a number of years. Frequently the newly converted ask the following two questions:

1. Which container to use?
2. Which is fastest?

My answer to the first is almost always “the one you choose,” and for the second question I refer them to Martin From’s IoCBattle web page. I won’t get into the reasons for my answer to the first question in this post (I will save that for another day). However, I found that Martin’s web site was missing the TinyIOC container and the Caliburn.Micro.SimpleContainer, and the existing containers needed a version refresh.

Thankfully, Martin checked his code into github, so out came ye olde fork. I have updated the code accordingly, and it is available at IoCBattle on my github page.
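
For anyone new to these shoot-outs, the heart of a benchmark like IoCBattle boils down to timing a large number of resolutions. Here is a minimal sketch of that measurement pattern; the resolve delegate is a container-neutral stand-in of my own, not any specific container's API:

    using System;
    using System.Diagnostics;

    public static class IoCBenchmark
    {
        // times how long 'iterations' resolutions take; pass in a lambda that
        // calls whichever resolve method your container of choice exposes
        public static TimeSpan Measure(Func<object> resolve, int iterations)
        {
            var watch = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
            {
                resolve();
            }
            watch.Stop();
            return watch.Elapsed;
        }
    }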

Enough talk – here are the results.


Test Platform

These tests were performed on an Intel i7-2630QM at 2.00 GHz with 16 GB of memory, running Windows 7 64-bit.

IoC Containers Tested

Cautionary Note

It is important to realize that performance is not the primary criterion to use in selecting an IoC container. It is important in some cases, but overall it is just one of a number of issues, and one that I consider a secondary concern.

Each of these containers has strengths and weaknesses. They offer different features, and sometimes do so in different ways. Finally, they operate in different runtimes and environments.

Bottom line – spend the time to find the container that does what you need and that you are comfortable with.

Source Code

You can download my modified version of the original IoCBattle source code at github.


My thanks go to Martin From for doing this test and for the Dynamo IoC Container.

Gentle Introduction to MEF–Part Three

At the Tampa C# Meetup on August 3rd, I presented this Gentle Introduction to MEF using the same project modified over three steps. This is Part Three, where I complete the application created in Part One and modified to use MEF in Part Two. This part shows MEF composing the application from multiple assemblies into one application at run time.

In Part One I created an application that generates some text, and transforms it based on a selected transformation method. We introduced the following interfaces:

  • IGenerator – implemented by our text generators
  • ITransformer – implemented by our transformers

We also introduced a few implementations of those interfaces:

  • LoremIpsumGenerator – returns a Lorem Ipsum text string
  • LowerCaseTransformer – returns the supplied text converted to lower case
  • UpperCaseTransformer – returns the supplied text converted to upper case

Finally, we introduced a composite class, TransformationEngine, to contain a reference to the single IGenerator and a collection of ITransformers.

All of this was tied together in a simple WPF based UI shown here.

Part Two introduced MEF into the application. We covered the basic elements of MEF: Exports, Imports, Catalogs, and Containers. We decorated our implementations with Export attributes, and built an AssemblyCatalog to find them all. We modified our targets with the Import attributes, and finally used the CompositionContainer to compose our final instance.

Breaking Up Is Not So Hard To Do

To demonstrate how MEF composes across assemblies, we need some assemblies to work with. So let’s partition our current application into three pieces. The first will be the application itself. Let’s cover the three parts, one at a time.

RealMEF Project

This project should be a Windows WPF Application. It should contain two files, App.xaml and MainWindow.xaml (and their corresponding code-behind files).





Code Changes

The changes are minor, and limited to a single line: I changed the AssemblyCatalog to be a DirectoryCatalog. The DirectoryCatalog is used to tell MEF to load all assemblies in a given path. There is a second constructor override that takes the path and a search pattern, and loads all assemblies that match the search pattern in the specified path. However, for this use case, I am telling MEF to load all assemblies (*.dll) in the current directory. That’s the extent of the changes for the main application.
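
The original listing did not survive the move to this format, so here is a sketch of what that composition code looks like, following the names used in this series (placing it in a Compose method on MainWindow is my assumption):

    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;

    public partial class MainWindow
    {
        public TransformationEngine Engine { get; private set; }

        private void Compose()
        {
            // load every *.dll in the current directory, not just this assembly
            var catalog = new DirectoryCatalog(".");
            var container = new CompositionContainer(catalog);
            Engine = new TransformationEngine();
            container.ComposeParts(Engine);   // satisfies the Import / ImportMany properties
        }
    }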

TransformationEngine Project

This class library project contains the IGenerator and ITransformer interfaces, and the TransformationEngine class. Here is the code again:
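
The listing itself was lost along the way, so below is a reconstruction based on the descriptions in this series; the member names (Generate and Transform) are my guesses, but the attributes are the ones described in the text:

    using System.Collections.Generic;
    using System.ComponentModel.Composition;

    [InheritedExport]
    public interface IGenerator
    {
        string Generate();
    }

    [InheritedExport]
    public interface ITransformer
    {
        string Transform(string text);
    }

    public class TransformationEngine
    {
        [Import(typeof(IGenerator))]
        public IGenerator Generator { get; set; }

        [ImportMany]
        public IEnumerable<ITransformer> Transformers { get; set; }
    }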




This project defines the two interfaces that will be used by other types, and eventually imported into the TransformationEngine. Notice that I use the InheritedExport attribute on the interfaces. This attribute allows derived types to be automatically tagged as an exported type, so your classes do not even have to be aware of MEF. Also notice that the Transformers property is decorated with the ImportMany attribute. This allows a collection of those types to be imported rather than just one. I covered both in Part Two, but wanted to highlight them again since they are so important.

Those three types are all that makes up this project. This type of project is sometimes referred to as a Contracts Project, since it just exports (and imports) the contracts (decorated interfaces). Its purpose is to define the types common to both the assembly that uses them (the RealMEF project) and the assembly providing the implementations (the TransformationImplementation project – see below).

TransformationImplementation Project

This project is another class library project that contains the classes moved from the main application. Here is the code again:
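
Again reconstructed from the descriptions in Parts One and Two (the exact Lorem Ipsum text is a placeholder). Because the interfaces carry InheritedExport, these classes need no MEF attributes of their own:

    public class LoremIpsumGenerator : IGenerator
    {
        public string Generate()
        {
            return "Lorem ipsum dolor sit amet, consectetur adipiscing elit.";
        }
    }

    public class LowerCaseTransformer : ITransformer
    {
        public string Transform(string text)
        {
            return text.ToLowerInvariant();
        }
    }

    public class UpperCaseTransformer : ITransformer
    {
        public string Transform(string text)
        {
            return text.ToUpperInvariant();
        }
    }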




These classes are the same as before, just in a new assembly all their own.

Building the Projects


The RealMEF project should have a dependency on the TransformationEngine project, but not on the TransformationImplementation project. The TransformationImplementation project should also have a dependency on the TransformationEngine project, but not on RealMEF. This will allow us to see what MEF can truly do for us.

Build the projects, and copy the TransformationImplementation assembly into the bin\debug or bin\release directory of the RealMEF project. Then run RealMEF. Everything should work – even though there is no defined linkage between the RealMEF project and your implementations.

Taking it to the Next Step

Modify the TransformationEngine class by changing the Import(typeof(IGenerator)) to Import(“StarWars”) and recompile. If you run your program now, it will fail. So, let’s fix it.

Create a new class library project; I called it TransformationEx. Let’s add two new implementations, just for the fun of it.
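
This listing is another casualty of the migration; here is a reconstruction from the description below (the generator text and the reversing logic are my own stand-ins):

    using System;
    using System.ComponentModel.Composition;

    [Export("StarWars", typeof(IGenerator))]
    public class StarWarsTextGenerator : IGenerator
    {
        public string Generate()
        {
            return "A long time ago in a galaxy far, far away...";
        }
    }

    public class ReverseTransformer : ITransformer
    {
        public string Transform(string text)
        {
            char[] chars = text.ToCharArray();
            Array.Reverse(chars);
            return new string(chars);
        }
    }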



The Export attribute on StarWarsTextGenerator is the most interesting piece. I added the Export attribute, and specified a name for this export as well as the type. This will override the InheritedExport on the IGenerator type for this implementation only. You may notice that the name is the same text I added to the Import attribute for the TransformationEngine above.

This project is similar to the TransformationImplementation project, so it depends only on the TransformationEngine project. Build it, and copy the assembly to the RealMEF bin\debug or bin\release directory. Then run it. It all just works. That is the power of MEF. You just enhanced a pre-built project with new functionality, very easily.


Take a look at how the ReverseTransformer implementation is now an available transformer, and it required no import or export changes. The StarWarsTextGenerator was different because I required only one generator, I did not implement any logic to pick the right one, and I wanted to demonstrate named imports / exports. The way I dealt with the ReverseTransformer is more likely how you will deal with adding new functionality to your applications.

Over these three posts (Part One, Part Two, and this one), you have seen how to take a simple application, decompose it into MEF-able components, and further decompose it into late-binding assemblies. While this does not cover the entirety of MEF, it gives you the knowledge needed to use the most commonly used functionality. So go decompose those monolithic applications and use MEF.

If you have done something cool with MEF, post it as a comment and share.