Wednesday 20 December 2017

ObjectifyHash Gem - My favorite over-engineered solution for a stupid problem

Years after the initial development, I was finally allowed to bring this little gem to the public as part of my employer's effort to bring forth more open source code - hopefully more of these will come out soon.
ObjectifyHash is a very over-engineered solution for a stupid problem: dealing with nested objects in JSON responses from REST services. I find this especially painful when you need to navigate these representations, search for things in them and act on them; one always ends up with code that looks like:

def find_thing_on_x(things)
  things.detect { |t| t['identification']['name'] == 'thing' }
end
....
res = get 'https://my-service.com/stuff'

my_thing = find_thing_on_x res['data']['stuff']['things']

Not only is this ugly to look at, but the very functional style feels un-Ruby-like, not to mention that on non-English keyboards like mine, brackets hide behind an awkward Alt Gr + number combo.
So, what I wanted was either to get the Stuff JSON mapped to a Stuff class in my namespace, so I can access handy Stuff-related methods, or, if that class doesn't exist, to get a generic object that lets me handle Stuff more easily AND (this is the fun part that other gems at the time didn't cover) repeat that recursively for all nested objects inside.
That means, in the example above, it would create a Data class, a Data::Stuff and a Data::Stuff::Thing - since "things" is an array, it stays an array, but each member becomes a Thing!

The above example becomes much more beautiful with:

stuff = get('https://my-service.com/stuff').to_obj
my_thing = stuff.find_thing
# or
my_thing = stuff.data.stuff.find_things

I usually go with the second option here, as finding things is an action of Stuff and not of Data, but whatever.

So, all JSON objects become Ruby objects of a class named after their key, within the namespace of the previous key - easy, right?
There's also an exception mechanism where you can map specific keys to existing classes in your code; this is how you inject your own methods, such as the "find_things" above.

But that isn't even the part which makes it over-engineered: I opted to dynamically assign methods to the instances with the values they should return. This means there are no setter methods, and it has brought some pains into our work; the funnest of them all are conflicts with namespaces or with methods inherited from basic objects. The supreme offender: {"file": {"id": 123}}, where creating a File with an id method didn't work as expected, so there are a few gotchas here - in that situation expect File_ and _id_ to happen - but overall I've enjoyed working with this tool.
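Just to make the idea concrete, here's a tiny sketch of the general technique - not the gem's actual code (the real thing also creates named classes, honours the custom class mapping and deals with the collision gotchas above), just the recursive hash-to-object trick with dynamically defined readers:

require 'json'

# Sketch only: recursively walk a parsed JSON hash and define a reader
# method per key on a fresh object. Arrays stay arrays, members get converted.
def objectify(value)
  case value
  when Hash
    obj = Object.new
    value.each do |key, val|
      obj.define_singleton_method(key) { objectify(val) }
    end
    obj
  when Array
    value.map { |v| objectify(v) }
  else
    value
  end
end

res = JSON.parse('{"data": {"stuff": {"things": [{"identification": {"name": "thing"}}]}}}')
puts objectify(res).data.stuff.things.first.identification.name # => thing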

Check out the GitHub page for more details, and I hope there are more equally crazy people like me out there to enjoy this gem.

Wednesday 3 August 2016

Things Developers say - But it works for this case

I love my developers, some of the smartest people I've worked with, but every once in a while they say something odd about why a problem isn't a problem.
(I really need to start writing these down when they happen for better posts.)

There's a feature on Skystore where, if you rent a movie but don't play it for 28 days, we'll send you an e-mail reminding you there are only two days left to watch it. The subject of the e-mail is something like: "Only 2 days left to watch [movie name]".
As you can imagine waiting 30 days to test something like this is not feasible, especially for an automation enthusiast such as myself.
Luckily we have ways to trigger these e-mails before the time runs out, which leads to an e-mail with the following subject: "Only 0.0416666666666667 days left to watch".
I didn't bother to look at the code to see how two days turn into less than half a day when you force the service to run, but I got some reassurance that this only happens if you force the service to run early. It's fine, we log a bug on the issue tracker all the same, just not a "fix it now" kind of bug.
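For what it's worth, 0.0416666666666667 is exactly 1/24 - one hour expressed as a fraction of a day - so my guess (and it's only a guess, since I never read the code) is that the remaining time gets divided by seconds-per-day and interpolated straight into the subject, without ever being rounded to whole days:

# Pure guesswork at the kind of calculation behind that subject line.
remaining_seconds = 3600 # forcing the job to run with one hour left on the clock
days_left = remaining_seconds / 86_400.0
puts "Only #{days_left} days left to watch" # prints something like "Only 0.0416666666666667 days left to watch"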

A couple days go by, a dev comes back to me:
- You know, that has to do with the time-left calculation; if we wait the 28 days, the e-mail will look fine.
- So you're telling me it's fine because for the current business requirement that calculation happens to give a whole number?
- Yes.
- This value is configurable, so if someone decides it's supposed to be 3 days instead of 2, we change the config value and our customers start receiving mails telling them they have 2.333333333 days to watch?
*blank stare* - Ok we'll take care of it next sprint.
Yes I did say "3" that many times out loud.
 

Wednesday 29 June 2016

Watir Webdriver / Selenium HTTP Authentication - if it's stupid and it works, it ain't stupid

I've taken some time to write a small sanity test pack to run on the web app that uses the platform my team develops.
Although we have very good and thorough automated test coverage, I'm very paranoid and want to make sure we didn't miss anything that breaks basic functionality. Not that it happens often, or in recent memory - I'm just paranoid; it's a good quality for a QA person, right?
So I went about it the correct way: Page Object Model. I have a base page object with functionality that's relevant across the whole app, like knowing when a page is fully loaded, how to log in, etc. Everything else inherits from this class.
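Roughly, that base class looks something like this (names and selectors here are illustrative, not my actual code):

    require 'watir' # 'watir-webdriver' back when this was written

    # Illustrative sketch of the base page object: shared behaviour like waiting
    # for the page to finish loading lives here, every other page inherits from it.
    class BasePage
      attr_reader :browser

      def initialize(browser)
        @browser = browser
      end

      # Consider the page loaded once the document is ready and the
      # loading spinner (hypothetical selector) has gone away.
      def wait_for_load
        Watir::Wait.until { browser.ready_state == 'complete' }
        Watir::Wait.while { browser.div(class: 'loading-spinner').present? }
      end

      def login(user, pass)
        browser.text_field(name: 'username').set user
        browser.text_field(name: 'password').set pass
        browser.button(type: 'submit').click
      end
    end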
First problem: HTTP authentication. Selenium and Watir still (!!!) don't have a proper way of dealing with this, so I went with the usual "https://username:password@url" method - which, by the way, nobody else tells you this: use URI encoding on the username and password.
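For reference, an inject_auth-style helper (the one you'll see in the snippets below) can be as simple as this - a sketch, with made-up env var names:

    require 'erb'
    require 'uri'

    # URI-encode the credentials and splice them into the URL as userinfo
    # so the browser never shows the HTTP Basic Auth popup.
    def inject_auth(url)
      user = ERB::Util.url_encode(ENV.fetch('SANITY_HTTP_USER'))
      pass = ERB::Util.url_encode(ENV.fetch('SANITY_HTTP_PASS'))
      uri = URI.parse(url)
      uri.userinfo = "#{user}:#{pass}"
      uri.to_s
    end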
But then I got to the playback page of our app (which is kind of an important page if you're an online movie retailer). For reasons not important for this post, on this page there's an https -> http redirect; this redirect causes our username:password deal to be lost and I got stuck with the authentication popup.
Googling how to get around this today still gives about the same answers as a few years ago, including this lovely (yet old) Watirmelon post that suggests using the AutoAuth plugin for Firefox. Usually I'd be against this, since it binds my tests to Firefox, but for my particular sanity-check-for-another-team kind of work I don't really mind. However, AutoAuth no longer works with recent versions of Firefox - it stopped working on v39, and we're currently on v47.
After much diving into documentation and Stack Overflow questions, I gave up. I was ready to ask them to turn off HTTP authentication for that page, until a co-worker casually mentioned:
Try opening the http version with username and password in the URL first, at the start of your test, then go to the normal https start page and do the normal test; the browser will remember when you get redirected.
That's a stupid solution... but it works, so it ain't stupid.
Here's how my code changed:

      [...]
      browser.goto inject_auth(url) if url
      wait_for_load
      [...]

to:

    if url
      browser.goto inject_auth(url)
      wait_for_load
      browser.goto inject_auth(url).sub('https', 'http') + 'playback/fcf7ce62-5695-4d80-be85-662798c9ac7c?token=undefined'
      wait_for_load
      browser.goto inject_auth(url)
    end

Hacky, I know. I have to load the normal page first because I purposely set up the http load to hit a 404, so it wouldn't cause any unintentional side effects, but that also gives me another redirect; only after this little dance do I get to go to the initial page I intended.
Hope this helps other people trying to automate apps with redirects.

The Watirmelon post I mentioned before resolves the problem with NTLM auth, which could be applied to everything; my solution doesn't solve that particular problem, so if you find out how, please let me know.

Monday 18 April 2016

The importance of using proper naming in your tests

The Skystore project, which I'm the lead tester for, still has a lot of unreleased features. While the reasons for this are beyond the scope of what I intend to talk about here, one particular feature brought home an important, albeit obvious, lesson: naming matters.

One feature that was worked on a while ago, but is still unavailable, is called "playnext". It's a simple feature: when giving out video playback information to the client, the server will include some information about what to suggest the user watch next; some time before the end of the video, the client will display a little box showing that information, with an easily accessible play button.
This feature has several tests along with it, as what should be suggested varies according to different conditions, which all have to be tested to make sure the server is giving out the correct info and structure.
As an automation evangelist I've automated the shit out of this feature: every normal, erroneous, corner and edge case was covered, and everything was saved into Git, available to everyone, and runs every day on every integration environment - good times.

After client applications implemented their side of this feature and found it lacking, it was decided that a new feature would be implemented server side. It would behave just like playnext, but with a different structure that would better satisfy the needs of the clients, giving more complete and extendable information - a "playnext 2.0". Since different teams have different priorities and implementation times, it was decided that the two features would run side by side for a while, to preserve those who had already implemented the first version.
When deciding which tests would be needed, I said to my minion responsible for it: "Look at the playnext tests, copy them, adapt them to the new structure, see if there's anything missing or that needs to be removed, and you should be done quickly."

Later the feature was developed, tested, approved and delivered to the client teams. Then they noticed that there were several missing parts and it was mostly unusable.
How did this pass the tests? It was obviously broken! My minion told me: "I did what you told me, I copied all the tests I found - there weren't many, though."

I looked for playnext in the feature files and found only a couple of tests, but I knew I had written a bunch of them - what happened? I looked into one file I knew had a few of the tests and found the problem:

play next... play next play next....

There it was: in the tests I had named the feature "play next" instead of "playnext", so when searching for the correct name, only a couple of tests had it.
That explains why he thought there weren't many tests.

So, when naming and describing your tests and features remember:

  • Describe your tests using the correct feature name.
  • Make sure to use the name consistently (i.e. don't change the name midway).
That's it. After this event the tests were corrected, the missing tests were added, the feature was patched and integrated with, and everybody is happy, hoping it will be well received when it's available to a wider audience.

Saturday 15 August 2015

Using Dropbox to sync saved games (and more) across multiple computers

Over three years ago I made a post on Reddit explaining how to do this. Since then Steam introduced cloud saving, so all saved games were synced without a second thought, and I kind of grew used to it. Long gone is the time when I'd back up my saved games on CDs and DVDs for fear of losing all those game hours to a computer failure.
However I got hit with this problem recently when I bought Banished on GOG; while it's great that they sell all their games DRM-free, it's a shame they don't have cloud saving yet, so I went back to the old technique and decided to share it in a more permanent place than Reddit.
Please note that the instructions cover Windows 7, Linux and OSX, not Windows 8 or 10; honestly, I don't know if the shell link extension works there or if there's an alternative, and as of now, I don't care.
The original post:


I started using this technique to sync saved games so I can continue my massive Civilization games when my gf is sleeping at her house. It's the same one I've used for years to sync my Pidgin settings and .vimrc across my Linux machines, but yeah, using it for gaming:
First of all you need [...] Dropbox (why on earth are you not using this yet anyway?), make an account and stuff. If you're on Ubuntu it's available on the Software Center thingy.
Windows 7 only: Get Shell Link Extension 
(my example pictures are of WoW addons and settings on Windows 7)
Now find what you want to sync and MOVE it to a folder on dropbox:
In this case I'm moving the Addons folder
On Windows, drag the folder back to where it was with the right mouse button and select Drop Here -> Symbolic Link; it will pop up an authorization thingy, but it's ok.
On Mac and Linux pop open a console and use the "ln -s /path/to/dropbox/place/where/you/put/your/stuff /original/path/thing" command.
And you're done! Yey
Bonus: you can share folders with your friends so you can all use each other's computers to play or whatever; for instance, next I'm making sure my gf and I can play WoW on any of our computers with the same addons and addon preferences.

Not that I condone this kind of thing, but if you got your PC games at the special Bay shop and thus don't have Steam, you can use this technique.

Thursday 26 February 2015

Things Developers say

As a QA person I've been told a myriad of rather amusing things by developers, and had funny conversations caused by failed tests and the all too frequent "works on my machine".
I've been meaning to compile them, but I'll just start making posts as they happen / as I remember them.

All names and locations have been removed for privacy of those involved.


$Dev: On $ticket you mention that the $behaviour happens when it shouldn't. However on the requirements it says $behaviour is expected.
$Me: Oh, I must have missed that, where does it say so on requirements?
$Dev: Well, some things are common knowledge and shouldn't need to be written anywhere.

$QA2: This doesn't work
$Dev: Of course it works, look *press button on his laptop* it works, you're testing it wrong.
$QA2: But that's your machine.
$Dev: So? It works.
$QA2: "Works on my machine"
$Dev: It works!
$Me: Great, $Dev we should tell $Boss to arrange $Ops to have your laptop sent to $location.
$Dev: Why?
$Me: Your machine will be the new prod server obviously, since it only works on your machine.

At this point $Dev helped $QA2 to find the root cause of the problem, things were indeed broken.

$Me: This doesn't work when you use $resource_type.
$Dev: Can't reproduce, what arguments did you use? Can you please retest in the RC and master branches?
$Me: I used the arguments of a $resource_type, here's the error stack trace on both branches.
$Dev: I used $other_resource_type and it works, are you putting the arguments correctly? Or are you using dummy/broken data?
$Me: I used $resource_type, if I had used bad data I'd have said so in the first place. Just try $resource_type.
*couple minutes go by*
$Dev: Ok this doesn't work, I'll fix it.

Friday 30 January 2015

Ruby: Custom Errors for Lazy Bums with const_missing

Catching errors is quite a normal thing when writing code; every programming language has some variation of throw/catch, and in Ruby's case it's the elegant begin, rescue and ensure blocks.
At some point in every project you stop catching "other" people's errors and start catching your own.
So you start creating your own error classes, and there are a couple of common approaches I've seen over the years.

  • Create an Errors namespace on each class/module you need and create the errors for that class there.
  • Create a generic Project::Errors with its own file and throw all the error classes in that bag.
I usually go for the second approach, so error classes don't distract me, but that's more of a personal preference anyway.
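For reference, the second approach usually ends up looking something like this (the names are obviously made up):

# One Project::Errors namespace, one file, all the error classes thrown in there.
module Project
  module Errors
    NotFound     = Class.new(StandardError)
    InvalidInput = Class.new(StandardError)
  end
end

raise Project::Errors::NotFound, 'no such record'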
However, I am a lazy bum and I got tired of specifying my error classes, so I made this Gist as an example:


The main idea behind this is letting you freely type your errors as you need them, both where they're being raised and where they're rescued, and you don't need to worry about them existing, since they'll be created at runtime for you.
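The gist of the Gist is a const_missing hook, roughly along these lines (a minimal sketch of the idea, not the exact code):

# Any unknown constant referenced under Errors becomes a StandardError
# subclass on the spot, so you can raise and rescue error classes you
# never bothered to declare.
module Errors
  def self.const_missing(name)
    const_set(name, Class.new(StandardError))
  end
end

begin
  raise Errors::ThingNotFound, 'could not find the thing'
rescue Errors::ThingNotFound => e
  puts e.message # => could not find the thing
end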
This method is probably not very efficient, but I don't care for this case.