Transaction log is not just any log

For those who are finding this out the hard way, this little post might shed some light.

So if you have somehow deleted or lost the transaction log of a database in SQL Server, this will come in handy:

-- Put the database into EMERGENCY mode and restrict access to a single user
ALTER DATABASE DBName SET EMERGENCY;
GO
ALTER DATABASE DBName SET SINGLE_USER;
GO
-- Repair what can be repaired; the missing log is rebuilt, at the cost of possible data loss
DBCC CHECKDB (DBName, REPAIR_ALLOW_DATA_LOSS)
WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO

Mind the wording here, especially the part that says "allow data loss", because that is exactly what can happen. Still, don't be afraid: most of the transactions are committed anyway, so the loss usually won't be that big.

ALTER DATABASE DBName SET MULTI_USER;

Don't forget to switch back to multi-user mode for regular usage.

Trolled by a banklink

A consulting client approached me today, asking about a problem. It was eyebrow-raising, so to speak.

Apparently banklinks in the UK (specifically Worldpay) let you pay for goods using credit and debit cards, easily. You are just a card number and PIN away from spending whatever amount the card holds. No names, surnames, addresses or anything like that are verified. Hence, people use stolen card details to buy stuff (who would have thought). The service provider then ships the goods or provides the services. A few weeks after that the payment is rolled back, because the theft was reported. The owner got his money back, the thief got the goods, and the provider has neither goods nor money.

So, long story short, my client asks me: how do you deal with this in Lithuania? Wait, what. There are no such problems here, because paying on the internet with cards is only marginally available. Maybe 1% of the shops have it; personally I have never seen it, though I suspect it exists. Everyone uses electronic banking, and a banklink basically fast-forwards you into your bank's own system.

Wow, replied the client, that’s like 5 steps forward already. We only recently stopped using checks by the way.

Checks? Seriously?

Microsoft sadness

Microsoft is doing a lot to improve itself, yet from time to time it gives me an enormous amount of sadness. It comes from small things, once again proving that the devil is in the details.

So, I was trying to use NuGet in my MVC3 application to get some packages: to set up MiniProfiler all you need is 2-3 packages. The main one installed smoothly, and the second one told me that my NuGet was out of date (I was still, somehow, on 1.2).

Fair enough. So I download and install NuGet 1.6.

2013.02.11 10:55:38 – VSIXInstaller.SignatureMismatchException: The signature on the update version of ‘NuGet Package Manager’ does not match the signature on the installed version. Therefore, Extension Manager cannot install the update.

Whatever that is, I google it and come up with two possible solutions:

  1. Remove old NuGet and install fresh
  2. Apply a patch KB2581019

As the first option got me nowhere, I tried doing the second one:

http://connect.microsoft.com/VisualStudio/Downloads/DownloadDetails.aspx?DownloadID=38654 is the download page for that hotfix; it seems the Metro redesign wave skipped this one.

This web page pulls some crazy stuff: it offers to download the patch using File Transfer Manager (which reminds me of Megaupload-style sites that all push their spyware-bundled download managers) and yes, the download button doesn't work in Chrome.

Well, I'm patient: I open IE10 (which is nice, by the way), load the same page and install the Microsoft File Transfer Manager. The first thing it does is offer me this:

IMG_11022013_111710

 

You can guess what my choice was from the dashed line on the Cancel button.

IMG_11022013_111742

The UI is more or less stuck in Windows 98 (see the buttons with the arrows!).

IMG_11022013_112156

After downloading, validating and whatever else, which took around 5 minutes, we also have to install it, and this just hangs there for a few more minutes without any progress. Yet suddenly:

IMG_11022013_112226

Well, OK, while we're at it (or whatever) we can of course fix my Recycle Bin as well.

IMG_11022013_112400

Don't be fooled by this great success: you will also have to restart Visual Studio before using NuGet.

I mean, yeah, this took me about 30 minutes to figure out, which is not that much (this blog post also took me 10-15), but still, I'm having second thoughts about owning Microsoft stock :)


Paying off your technical debt

Yeah, right, another blog post about technical debt. A lot of bloggers (and I mean, maybe, thousands) before me have written about technical debt, a term coined by Ward Cunningham. Heck, it even has a Wikipedia page: http://en.wikipedia.org/wiki/Technical_debt

So after all has been said and done… wait. Yeah, exactly: nothing has been done. Everything was said, in the form of bitchy, angry blog posts, and not much has actually been done in the form of concentrating on the problem and solving it, paying the debt off, doing things right.

The thing is, most of the time developers learn about technical debt when they already have too much of it, or when they jump into a project where a lot of it was acquired before them. Extending the analogy, it is either an oblivious, cocaine-driven business model or an investor who buys a pretty-looking company which is not exactly what the salesman says it is. In the first case, developers were hell-bent on delivering features on time and on budget; cutting corners was an option and every corner was cut without much consideration. Or they were forced to deliver whatever the impossible deadline implied they should deliver. In the other case, you pretty much have that same company five years later: overworked and understaffed, and things are not going well.

By looking over the business data (think: bottom line, turnover, etc.), you might say that sales are down, or investment has stopped, or whatever else business data can tell you. In IT companies it comes down to this: you have a product, and someone is interested in it, buying it, or investing in it. If you have no ability to scale it up, move fast and/or adapt it, it is going to fail very quickly. Considering the number of startups doing a lot of really cool (though sometimes really simple) stuff, it is fair to say that moving as fast as they do (if not faster) is something of an objective. If you have motivated, smart people, this is not a problem. Unless you have a lot of technical debt.

OK, let's go there: agile methodologies do not address this at all. I mean seriously, there is nothing about it in Scrum, nor Kanban, nor DSDM. OK, there is nothing about it in waterfall either, but that's a totally different subject. Basically, it comes down to this: a developer (or the whole team) estimates a task and plans it, but they do not take into account the technical debt the product already carries. A primitive example:

As a sales manager I want to see how much sales a client has generated per month

Good for you, Mr. Sales Manager, although you might not know that we only have one table column about that client, and it only contains the total sum of the sales the client has generated. So to split it we have to produce another structure for sales, with amounts and dates, and it might take a while before you get the right data. And bam, here is the pitfall: where it would be only logical to have the data split by exact dates, someone, some time ago, took a shortcut, and that shortcut is the technical debt we have just run into. No one estimated this, and the client won't be happy when he gets something other than what he asked for: he cannot go back two years (to when he started using and paying for your product) and see his clients' data, because there is no way to present it to him.

So again, coming back to paying it off. It really breaks down to something as simple as paying a real debt (yeah, right, easy):

  1. Know how deep you are. Ask everyone on the team, and anyone working on the product even if they are in another team, what needs to be done to make it right: bug-free, rock solid.
  2. Collect this advice and sort it out. You will need great wisdom to do this, since a lot of developers will push new concepts at you, about how they imagine it would be good to do things, or new technologies they are excited about, while the old ones work just fine and will keep working for the foreseeable future.
  3. When you are very deep in a hole (you didn't expect to have no debt at all, did you?), stop digging. Stop acquiring new debt along the way, which would only get you even deeper.
  4. If you have the resources to pay it off instantly, do it. No more questions asked. If you have 20 man-days of technical debt in your product and 2 developers with nothing very important to do for 10 days, that should be your first priority.
  5. Otherwise, you have to build a plan to get back on top.

Now, this unfamiliar state of having no technical debt should include such basic things as:

  1. You are basically bug-free. There is no software without bugs, but those present in your product are so minor that you don't even bother to report them.
  2. You can release any time you want. Your product is stable, or new features can be stabilised in a day or two. This is achieved by aggressive automated test coverage and overall code quality: decoupling, SOLID principles, etc.
  3. You can move fast and you can implement anything without a lot of deviations from the original estimate.

This state is every manager's dream, and it is the solid foundation on which you can evolve the product. Yet it is unreachable until you deal with your debt. So start paying your dues; the sooner, the better.

DevOps reactions

I'm not exactly in DevOps, but being a frontend developer whose backend is basically a set of WCF services, I can relate to most of these memes:

http://devopsreactions.tumblr.com

Any developer in cloud business will relate as well :)


Performance in the cloud: Javascript overview

JavaScript has become the ultimate technology in the backend of our frontend. Web application UIs are powered by it and become interactive mostly because of it.

A lot of great frameworks (jQuery, knockout.js, backbone.js, prototype) emerged to make things easy. Easy creation draws attention, and more and more things get done in the easy-creation field (if that is disputed, compare the StackOverflow activity on tags like php, javascript and jquery versus t-sql, c, regex, etc.). That is all great until it starts falling apart. Many people write code from samples, copy-pasting and putting things together until it "works". Well yeah, until you have to extend and maintain it. Or, as we sometimes have to do, make it work on that machine. That machine is usually old, with old Windows, old Internet Explorer and so on.

We had this application, a mashup of technologies: jQuery, jQuery UI, some silly treeview plugin, A LOT OF our own code, helpers, implementations, loaders, classes here, functions there. Well, you know, your typical "I'd rather just throw it all out" situation. Things were running OK on quad-core CPUs stuffed with gigabytes of RAM.

And then came the go-live. We had a few busy weeks of late-night refactoring and performance tuning of JavaScript, one thing I thought I would never have to do (because it just works, you know).

So here are some really simple lessons learned and things to take into consideration when dealing with it.

Cache selectors

$('.class1:not(.class2) > input[type=text]').val('because you can do this doesnt mean you should');

$('.class1:not(.class2) > input[type=text]').attr('disabled', true);

This selector is bad. It says a lot about your HTML/CSS structure (namely, that you have none) and about the fuzzy logic going on here. But still, if you need this kind of selector more than once, cache it.

var selector = '.class1:not(.class2) > input[type=text]';

$(selector).val('because you can do this doesnt mean you should');

$(selector).attr('disabled', true);

I shit you not, this was a selector caching idea of one of my senior colleagues (too many coffees and too little sleep I guess).

var $selector = $('.class1:not(.class2) > input[type=text]');

$selector.val('because you can do this doesnt mean you should');

$selector.attr('disabled', true);

I like to prefix jQuery variables with $, so as not to get them confused. Anyway, this simple step not only roughly doubles the performance, it also supports the don't-repeat-yourself principle (yay). A note here: it's OK to use chaining instead (see the sketch after this list), since the selector is still reused, but it's not always possible or good enough:

  1. There are things that you need to do in between the manipulation of selected objects.
  2. Different functions/classes can reuse the same selector, which is cached somewhere at the beginning of page load and never invalidated throughout the page's lifecycle.
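When neither of those applies, chaining gives you the same single DOM lookup without even needing a variable; a minimal sketch of the same two calls as above:

// one selector lookup, both operations chained on the returned jQuery object
$('.class1:not(.class2) > input[type=text]')
    .val('because you can do this doesnt mean you should')
    .attr('disabled', true);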

Another thing to cover here is $(document).ready() or $(window).load() (surprise, surprise). These functions have become the home of all the initializers and basically all the code there is to execute on a page. The important thing to remember is that when you write:

$(document).ready(function () {
    $(".someSelector").on("click", function () { /* ... */ });
});

it doesn't do anything until you click it (yes, except for the selector part). And if there are no elements matching that selector, then binding the click event is more or less useless. Sometimes you include a lot of stuff in the ready/load function even though it is not used on all pages, and doing so hits your performance badly because of the selectors that are executed every time.

$(document).ready(function () {
    $(".communications .field input[type='text']").on("change", function (ev) { /* ... */ });
});

If you have something as complex as the example above, it is wise to check whether there actually are elements with the class "communications" before doing anything else. The selector gets heavier with each and every level you add:

$(document).ready(function () {
    if ($(".communications").length > 0) {
        $(".communications .field input[type='text']").on("change", function (ev) { /* ... */ });
    }
});

Doing something like this helps a lot. The same is true in many other cases: run selectors and logic only if the elements you work with are actually rendered. So adding an extra check is relatively reasonable; relatively, because if your checking selector is as complex as the selector itself, it will do you no good.

This also brings us to another point: keeping most of the stuff in ready/load functions together, or splitting it across several different files. Making multiple browser requests to a non-CDN host can be a performance issue, as can loading one big file with a lot of logic dedicated to different things; maintenance is also an issue with the latter approach. I would advocate making multiple requests and adding an HTML5 manifest cache to the application. This way all the JavaScript files used are pre-loaded into the browser's cache and can be reused with little overhead. The tricky part, of course[1], is cache invalidation, but it can be dealt with by tuning your deployment process to change a comment line in the manifest file, so every new version gets a different cache, yet the cache stays valid per version.
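A minimal sketch of what that could look like (the file names and version comment format are just illustrative, not from our project): the page points at the manifest with <html manifest="site.appcache">, and the manifest lists the scripts plus a version comment that the deployment script rewrites:

CACHE MANIFEST
# version 2013-02-11.1 (bumped by the deployment script, which invalidates the previous cache)

CACHE:
/scripts/jquery.min.js
/scripts/app.js
/scripts/grid.js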

One should use data binding instead of loading partial views. Partial views are a great and easy way to achieve a lot of things in ASP.NET MVC. Unobtrusive Ajax and all the great features adapted to it in the framework make life easy for a more HTML-centric application approach. But once the weight shifts to JavaScript, beware: partial loads containing JavaScript can have a big impact on runtime performance, since the partial view is loaded, the DOM is manipulated and the new JavaScript is evaluated. When working with REST services, or generally with more JavaScript-based applications, it makes sense to switch to JavaScript data binding; and since binding is also common in WPF apps, it doesn't even get much less Microsoft-ish.
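To make the contrast concrete, here is a rough sketch of the two approaches (the URLs, element ID and view model are made up for illustration):

// partial view approach: the server renders HTML, we swap a chunk of the DOM
// and any scripts inside the fragment get re-evaluated
$('#orders').load('/Orders/ListPartial');

// JSON binding approach: the server returns data only, the template already
// in the page is reused and only the bound values change
$.getJSON('/api/orders', function (orders) {
    viewModel.orders(orders); // an observableArray in the page's view model
});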

Lastly, a mention of setTimeout, setInterval and eval is in order. Whenever using setTimeout/setInterval, pass functions instead of strings, because strings invoke the compiler: what you are really doing is evaluating the string, and that hurts, hurts a lot. The same goes for eval itself.

So instead of:

setTimeout("foo()", 1000);

use:

setTimeout(foo, 1000);

A note has to be added about the toolbox that can be used to determine what is performing badly. Every major browser has its developer tools, and performance profiling of scripts is included.

Internet Explorer 10 shows only a basic overview though:

IE10

And Google Chrome has it more in-depth:

chrome

letting you check not only the CPU usage of your JavaScript, but also the quality and impact of CSS selectors, as well as memory usage.

This is the first post in a bigger series; the next one will be about web application performance inspection, so each step takes us deeper into the clouds.


[1] – There are only two hard things in Computer Science: cache invalidation and naming things. — Phil Karlton

Knockout.js to seriously knock everything out of your way with JavaScript

Coming from a "Microsoftish" programming background, while also supporting and updating legacy software, it is often hard to keep up with all the new ways of programming and all the new frameworks and technologies around. So last month we went through our periodic technical-debt pay-off regimen, which came up with some new ways of doing things. New for us, of course; a sort of local innovation.

A while ago there was jQuery Templates, but it was abandoned, so a lot of people moved on to other things. Some tried to reinvent the wheel by making their own template engines; others moved to solutions like unobtrusive Ajax. I myself was a big fan of that technology: you basically do not load JSON, rather you use partial views and render parts of the HTML.

Now, partial views and unobtrusive Ajax are nice technologies if you are making simple websites, any kind of simple stuff:

<form action="/ajax/callback" data-ajax="true" data-ajax-loading="#loading" data-ajax-mode="replace" data-ajax-update="#updateme" method="post" >

Here we see that a form is posted and the result is rendered into the element with the ID "updateme". That is all great, but what if you have to update more than one element? Of course you can add a "data-ajax-success" attribute and execute custom JavaScript there (or a function call, perhaps), roughly like the sketch below. So there you go: it started sucking as soon as you did something outside the intended use, yet still quite common. We used this approach for a while, violating every normal development principle out there. And apparently it worked for some time, until the logic got way more complex. It still works now, but we have all the usual issues of this situation: hard-to-maintain code, unstable code that breaks at one end after a change at the other.
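For illustration, the workaround looked roughly like this (the callback name and the second element are made up):

<form action="/ajax/callback" data-ajax="true" data-ajax-mode="replace" data-ajax-update="#updateme" data-ajax-success="alsoUpdateTheRest" method="post">

// called by unobtrusive ajax after a successful post; everything the single
// data-ajax-update target misses has to be refreshed by hand
function alsoUpdateTheRest() {
    $('#summary').load('/ajax/summary');
}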

So the time came to change things, and we were looking for a templating engine, something to normalise the JavaScript code and make it readable, maintainable and stable. And we came across this awesome framework: Knockout.js.

Simple things that can be learned from the tutorial:

  1. You have templating
  2. You have JSON data binding
  3. You have readable code
  4. It performs well
  5. It is extendable

The first thing I went to was the interactive tutorial, and I can say without doubt that these are the best tutorials I have ever come across; try them yourself and you will see. Even with no prior coding experience, they are crystal clear!

Let's stop for a minute there, because I see myself getting over-excited and I may even have passed some of that excitement on to you. All new technologies are great, but as I mentioned in my last post, they are meant to solve problems, and you have to be sure (or at least fairly positive) that a technology will solve your problems before using it. That is, indeed, a conservative way of thinking. Yet it also correlates a lot with business thinking: there are times when management is too busy to interfere with technology, there is no R&D department, and no one is telling anyone to invest in new stuff. Even more, it seems like everyone is trying to stop innovation from happening, because every time you go to your boss with an idea like "oh hey, let's rewrite our whole frontend with this", you'll get a predictable response.

The manager gets defensive; he knows it will cost money, and the benefit? Well, you mentioned none. So first of all get your arguments straight: business people like it when you offer to solve their problems. Rewriting everything from scratch is not a good idea after all (read http://www.joelonsoftware.com/articles/fog0000000069.html) unless it really, really solves your problems or reduces your technical debt, which must be tracked as well. Not by letting you play with some pieces of new technology, but by covering a list of specific problems (even tasks) that you can clearly write down.

So coming back to knockout.js – let’s define the problems we are solving and see how knockout.js fits here:

  1. We need to have configurable grids in multiple places on our frontend
  2. The grids have to be not only configurable, but also have the basic functionality of grids as we know them: headers, rows, sorting, paging
  3. We need to display action buttons on specific events
  4. We need to remove our mapping overhead and handle DTOs on the client side
  5. We need to expose JSON endpoints for some other parts of our product (mobile apps, office addins, etc.)
  6. We are seeing an impact on rendering performance caused by partial view rendering and need a better solution
  7. We have few bugs that could be solved by implementing all of the above

This list is of course generalised, because no one wants to go deep into our product's problems (trust me, you don't want to).

So, I have been researching it for a few days (days which, of course, spanned the holidays), and here are the results.

We need to have configurable grids in multiple places on our frontend

To solve this we basically have to get a few collections in our JSON data, one with the columns and another with the data, and everything after that goes splendidly with a few foreach loops:

<table border="1">
    <thead>
        <tr data-bind="foreach: gridColumns">
            <th data-bind="text: $data.title"></th>
        </tr>
    </thead>
    <tbody data-bind="foreach: gridItems">
        <tr data-bind="foreach: $root.gridColumns">
            <td data-bind="html: $parent[$data.name()]"></td>
        </tr>
    </tbody>
</table>

You can check out the full working fiddle here: http://jsfiddle.net/povilas/4FXxd/

Note that the columns are intended to be persisted in the backend (a.k.a. the cloud), so with that in mind there is no need for an instant GUI refresh when a column gets hidden and so on; you can just reload the data with a different column configuration. One could change the booleans in the jsonData variable accordingly.
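For reference, here is a minimal view model sketch that the template above can bind against (the data is illustrative, not the exact fiddle code). The column name is an observable, which is why the template reads it as $data.name():

function Column(title, name) {
    this.title = ko.observable(title); // header text
    this.name = ko.observable(name);   // property to read from each row
}

var viewModel = {
    gridColumns: ko.observableArray([
        new Column('Client', 'client'),
        new Column('Total', 'total')
    ]),
    gridItems: ko.observableArray([
        { client: 'Acme Ltd.', total: 1200 },
        { client: 'Initech', total: 950 }
    ])
};

ko.applyBindings(viewModel);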

Now look at that, how many nice points we have covered: this is a normal configurable grid, which can be parametrised and has the smallest JavaScript footprint imaginable. It doesn't even remotely compare to what OOTB grid components try to give you. Sure, you get a lot of prebuilt features, but you have to learn third-party constructs and parameter sets, modify your backend accordingly, and later on maintain all of their possibly crappy implementations and undiscovered bugs. Here you have pure JS + HTML, rendered really fast.

I must confess I haven't done any real performance benchmarking to compare numbers; this time it was based on simple logic: let's not do the same thing we tried the last few times (use third-party grid components). Every time we tried that approach it got us nowhere near what we wanted: on old PCs performance was fine with 20 rows and 5 columns, but when it got to 200 rows with 40 columns it got out of hand. The heavyweight JavaScript libraries didn't play well on slow connections either. Knockout.js weighs merely 40 KB minified and gives us a totally new approach to building our apps.

This first fiddle already solves the 1st and 6th problems and provides a good start on the 2nd. Yet it doesn't demonstrate how we are going to bind different actions (the 3rd problem):

http://jsfiddle.net/povilas/GFEJK/

Here we also use an observableArray and inject it with the object IDs, which can later be used in actions; the idea looks roughly like the sketch below.
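Roughly (the names here are illustrative): row clicks push the object IDs into an observableArray, and the action buttons are bound against it:

var selectedIds = ko.observableArray([]);

function toggleSelection(id) {
    // add the ID on the first click, remove it on the second
    if (selectedIds.indexOf(id) === -1) {
        selectedIds.push(id);
    } else {
        selectedIds.remove(id);
    }
}

// a button bound with data-bind="enable: selectedIds().length > 0, click: deleteSelected"
// only becomes active once something is selected
function deleteSelected() {
    console.log('acting on', selectedIds());
}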

This use of technology, solving well-defined problems, gives the programmer the arguments to invest more time into it, since it is now not only technologically but also economically reasonable. It pays off, and that is what your manager wants to hear.
