After a long time performing manual releases on my own laptop, I decided to jump on the CI/CD bandwagon and automate everything.

I’ve been working professionally with Jenkins and Bamboo for many years, but I never took the time to properly set it up for my open source project, Simple Java Mail. I finally decided I would combine learning a new CI/CD tool with setting up auto-releases to Maven Central.

Now, Simple Java Mail is a multi-modular Maven project, which makes things a little more complicated, so for this blog I’ve created a test project that you can fork and study:


Plan of Attack

Here’s what we want to achieve!

  1. Checkout the source code from the Github repository
  2. Compile, test the project
  3. Manually choose if the release entails a patch, minor or major version
  4. Automatically update the POM with the new release semver based on the previous choice
  5. Build the deployable artifacts (jar, source jar, Javadoc jar)
  6. Sign the artifacts with GPG so OSS Sonatype will accept them
  7. Deploy the artifacts to the staging area, automatically closing and releasing to Maven Central upon successful upload
  8. Commit the updated POM and tag the commit with the new version
  9. Push changes back to the Github repository

Introducing CircleCI

My weapon of choice is CircleCI, because of its intuitive approach to build scripts, its standard integration with Github, its potential for speeding up complex builds, and its standard Docker integration.

Frankly speaking, I was so glad I finally got everything working perfectly that I first wanted to write everything down before attempting the same setup with Azure Devops and Gitlab.

CircleCI (2.1) works with something called “workflows“, which is basically a pipeline of several build jobs that, if defined smartly, can run in parallel. Moreover, one job type is a “manual approval” job, which can be used to force a specific path in a workflow. I use this technique to manually select an automated patch, minor or major release.


Checkout the source code from Github.com

CircleCI seamlessly integrates with public Github repos, so it can import Simple Java Mail automatically. CircleCI manages its own SSH key registration with the repository (with your confirmation) for read access and can check out the code during the build.

Compile, test the project

To compile and test we need a docker image with Maven and, specifically for Simple Java Mail, JDK 8. circleci/openjdk:8u171-jdk will do the trick nicely (complete list here).

Let’s define our initial flow with our selected container, run tests and collect our artifacts:
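
A minimal sketch of such a config.yml (the job name build-and-test is my own, and the exact Maven flags are illustrative):

version: 2.1

jobs:
  build-and-test:
    docker:
      - image: circleci/openjdk:8u171-jdk
    steps:
      - checkout
      # javadoc is produced later by the deploy job, so skip it here to speed things up
      - run: mvn verify -Dmaven.javadoc.skip=true
      - store_test_results:
          path: target/surefire-reports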

Since we have a separate build job for producing the deployable artifacts (because we don’t know the release version yet), we can skip some things here to speed up this job, such as producing javadoc.

Manually select patch, minor or major version release

In Jenkins or Bamboo I would configure target environments to pick up the “shared artifacts” and trigger the right version bump manually, but CircleCI works a bit differently with its “workflow” approach.

Instead of deployment pipelines, CircleCI has a special type of build job that pauses for manual confirmation. Subsequent build jobs will wait until it is approved. This way you can implement multiple deployment pipelines within one workflow. I haven’t seen the way I’m using it here documented anywhere on the web yet, though.

Here’s what the updated CircleCI config looks like:
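
A sketch of the workflow section, with one approval job per release type (all job names are my own):

workflows:
  build-and-release:
    jobs:
      - build-and-test
      - approve-patch-release:
          type: approval
          requires:
            - build-and-test
      - approve-minor-release:
          type: approval
          requires:
            - build-and-test
      - approve-major-release:
          type: approval
          requires:
            - build-and-test
      - deploy-patch-version:
          requires:
            - approve-patch-release
      - deploy-minor-version:
          requires:
            - approve-minor-release
      - deploy-major-version:
          requires:
            - approve-major-release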

The result looks like this in CircleCI:

Auto-update POM with semver based on manual selection

OK, so now that we know, based on the workflow execution path, which version bump we want to perform, how can we actually do the version bump?

There is a somewhat obscure Maven feature that was undocumented for a long time: versions:set combined with build-helper:parse-version. For example, to bump the minor version (i.e. 2.3.4 becomes 2.4.4), you can do the following:
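
A sketch of that command (the parsedVersion property names come from the build-helper plugin; the escaped $-signs are explained below):

mvn build-helper:parse-version versions:set \
  -DnewVersion=\${parsedVersion.majorVersion}.\${parsedVersion.nextMinorVersion}.\${parsedVersion.incrementalVersion} \
  versions:commit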

How it works

What happens is that versions:set performs the actual update to the POM and looks for a property newVersion. We use build-helper:parse-version to produce that property, using values available only to the build-helper plugin. We need to escape the $-signs, because otherwise Bash will try to resolve them before they reach Maven. Finally, versions:commit just gets rid of the POM backups from before the version bump.

Build the deployable artifacts (jar, source jar, javadoc jar)

Build your artifacts as you normally would, but use a custom Maven settings.xml for your build. We’ll need it to configure the GPG and OSS Sonatype login credentials in the next step.

We’ll use it in our deploy in the next step like so:
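
Something like the following, assuming the settings file is kept in the .circleci folder (the path is my choice, not prescribed):

mvn deploy -s .circleci/maven-release-settings.xml -DskipTests -Dspotbugs.skip=true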

Since we have a separate build job for compiling and testing the code, we can skip things like testing, instrumentation, spotbugs/pmd etc. by providing the options -DskipTests and -Dspotbugs.skip=true.

Sign the artifacts with GPG so OSS Sonatype will accept them

Now it gets interesting, because you’ll have to configure some keys and secrets as environment variables so you can refer to them from your build script.

Here’s our checklist:
1. Produce a GPG key pair with a passphrase
2. Distribute the public key to one of the public key servers that OSS Sonatype uses to validate signed artifacts
3. Make the private key available in CircleCI as an environment variable
4. Include the passphrase as an environment variable so you can use the private key for signing the deployable artifacts

Introducing OSS Sonatype

OSS Sonatype is an artifact server that synchronizes to Maven Central when you release a non-SNAPSHOT deploy. It has some rules for the artifacts it accepts: source, javadoc and binary jars should all be present and signed with GPG.

To continue, please first register your project with OSS Sonatype if you haven’t yet, and then complete the steps outlined in Sonatype’s guide to GPG keys, including uploading your public key to one of the public key servers.

From CircleCI to OSS Sonatype

Now that we have an OSS Sonatype project and distributed a public GPG key, we can start signing and releasing artifacts to Maven Central.

Adding the private GPG key to CircleCI

Take your private key in ASCII format, in a file like secring.gpg.asc. If you only have a .gpg file, you need to convert it to ASCII first. This file is sensitive, so throw it away after you’re done adding it to CircleCI:
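
Assuming the key is in your local keyring, exporting it in ASCII-armored form could look like this (the e-mail address is a placeholder):

gpg -a --export-secret-keys your@email.here > secring.gpg.asc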

To get your ASCII key on a single line, you can use sed on Linux with some black-magic regex, or, much simpler, paste it into a base64 converter and convert it to a base64 string. Add this string as an environment variable and also add your GPG passphrase:
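
On Linux, for example (GPG_SECRET_KEY and GPG_PASSPHRASE are the variable names I’ll use in the snippets below):

base64 -w0 secring.gpg.asc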

You can then use these in your CircleCI build script.

Configure Maven to connect to OSS Sonatype

We’ll define a Maven profile for GPG signing that is deactivated by default, so that we don’t have to deal with that when testing things locally on our own laptops. What’s more, OSS Sonatype requires you to define a couple of things before it accepts your artifacts, such as a developer tag:
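
As a sketch, the relevant POM additions could look like this (the profile id and plugin version are illustrative):

<developers>
    <developer>
        <name>Your Name Here</name>
    </developer>
</developers>

<profiles>
    <profile>
        <!-- only activated on the build server via -P deploy -->
        <id>deploy</id>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-gpg-plugin</artifactId>
                    <version>1.6</version>
                    <executions>
                        <execution>
                            <id>sign-artifacts</id>
                            <phase>verify</phase>
                            <goals>
                                <goal>sign</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>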

For the maven-release-settings.xml to work you need to add your OSS Sonatype credentials to CircleCI as well:
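
A sketch of how the settings file could pick those up, assuming you name the environment variables SONATYPE_USERNAME and SONATYPE_PASSWORD (my names); the server id must match the one your deploy plugin uses:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
    <servers>
        <server>
            <id>ossrh</id>
            <username>${env.SONATYPE_USERNAME}</username>
            <password>${env.SONATYPE_PASSWORD}</password>
        </server>
    </servers>
</settings>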

Now that we have configured our deployment plugins to sign artifacts and connect to OSS Sonatype, we can deploy the signed artifacts to the staging area, automatically closing the staging repository and releasing to Maven Central upon successful upload (otherwise you would still need to manually log in to OSS Sonatype to release it):
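
The close-and-release automation is typically handled by Sonatype’s nexus-staging-maven-plugin; a sketch of its configuration (the version is illustrative):

<plugin>
    <groupId>org.sonatype.plugins</groupId>
    <artifactId>nexus-staging-maven-plugin</artifactId>
    <version>1.6.8</version>
    <extensions>true</extensions>
    <configuration>
        <serverId>ossrh</serverId>
        <nexusUrl>https://oss.sonatype.org/</nexusUrl>
        <autoReleaseAfterClose>true</autoReleaseAfterClose>
    </configuration>
</plugin>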

First define a command we can call from our deploy job that will configure GPG by importing our base64 ASCII key into the GPG tool already included in the Docker image:
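
A sketch of such a CircleCI 2.1 command, assuming the key was stored base64-encoded in the GPG_SECRET_KEY environment variable from earlier:

commands:
  configure-gpg:
    steps:
      - run:
          name: Configure GPG private key for signing deployable artifacts
          command: |
            echo $GPG_SECRET_KEY | base64 --decode | gpg --batch --no-tty --import --yes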

Then implement the deploy jobs for the three semver deploy paths:
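
A sketch of one of the three jobs; the minor and major variants would differ only in the parsedVersion expression used (compare the version-bump command shown earlier):

jobs:
  deploy-patch-version:
    docker:
      - image: circleci/openjdk:8u171-jdk
    steps:
      - checkout
      - configure-gpg
      - run:
          name: Bump patch version and deploy to OSS Sonatype
          command: |
            mvn build-helper:parse-version versions:set \
              -DnewVersion=\${parsedVersion.majorVersion}.\${parsedVersion.minorVersion}.\${parsedVersion.nextIncrementalVersion} \
              versions:commit
            mvn deploy -s .circleci/maven-release-settings.xml -P deploy \
              -DskipTests -Dspotbugs.skip=true -Dgpg.passphrase=$GPG_PASSPHRASE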

If everything was configured correctly, your script should now build, test, sign and deploy to Maven Central via OSS Sonatype.

Commit the updated POM and tag the commit with the new version

In order to provide a commit message with the new Maven version, as well as tag with that version, you need Maven to tell you that version first so you can store it in a variable. This is a little tricky, but can be done with command substitution.
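
A sketch using the Maven help plugin (note that the -q -DforceStdout combination requires maven-help-plugin 3.1.0 or newer):

MVN_VERSION=$(mvn help:evaluate -Dexpression=project.version -q -DforceStdout)
git commit -am "release ${MVN_VERSION} [skip ci]"
git tag ${MVN_VERSION}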

Notice the text “[skip ci]”? That’s so CircleCI doesn’t trigger another build for this commit. It’s a convention which is also supported by other vendors (for example TravisCI).

Push changes back to repo

CircleCI set up a read-only SSH key for checking out the repo, but now you need to push something back. This means you need to provide your own SSH key pair that has write access. Moreover, you will need to explicitly acknowledge github.com as a trusted host by providing the server’s fingerprint.

Adding Github.com as a trusted host

Following this StackOverflow answer, here is how you can obtain github.com’s fingerprint as base64 (1st command):
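
A sketch of those commands (the file name is my own):

ssh-keyscan -t rsa github.com > github_rsa_host_key    # fetch github.com's RSA host key
base64 -w0 github_rsa_host_key                         # 1st: the base64 version for CircleCI
ssh-keygen -lf github_rsa_host_key                     # 2nd: print the fingerprint to verify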

Manually verify the fingerprint (2nd command) is the same as the fingerprint Github published, and then add the entire content of the file we just created to CircleCI:

Finally, add this fingerprint to the trusted hosts in your deploy script:
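
For example, assuming the base64-encoded host key is available as GITHUB_RSA_HOST_KEY (my name):

mkdir -p ~/.ssh
echo $GITHUB_RSA_HOST_KEY | base64 --decode >> ~/.ssh/known_hosts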

Configuring GIT to use our SSH key and user

Generate a new key pair (I did so without a password) and save it to ./github_rsa.key (the command will prompt you for the location):
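
Something like:

ssh-keygen -t rsa -b 4096
# when prompted for the file in which to save the key, enter: ./github_rsa.key
# press enter twice to leave the passphrase empty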

Now copy-paste the content of the public key (github_rsa.key.pub) into a new deploy key for your repo on Github, and make sure to check “Allow write access”:

Take the private key and again convert it to base64 and add it to CircleCI environment variables for your project:

Now you can refer to it from your CircleCI deploy script. Let’s take the fingerprint script and combine it with the SSH key config in a new command to keep things tidy:
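
A sketch of such a command, assuming the two base64 environment variables GITHUB_RSA_HOST_KEY and GITHUB_COMMIT_KEY (both names are mine):

commands:
  configure-git:
    steps:
      - run:
          name: Trust github.com and install the read/write SSH key
          command: |
            mkdir -p ~/.ssh
            echo $GITHUB_RSA_HOST_KEY | base64 --decode >> ~/.ssh/known_hosts
            echo $GITHUB_COMMIT_KEY | base64 --decode > ~/.ssh/github_rsa.key
            chmod 600 ~/.ssh/github_rsa.key
            git config user.name "CircleCI"
            git config user.email "ci@example.org"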

Finally, performing the push to repo

With the fingerprint and SSH key in place, we can finally perform the last step in our CI/CD script: push the change and tag back to the repo.

To perform GIT commands with an SSH key, you need to write the commands a little differently:
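
For example, using GIT_SSH_COMMAND (available since git 2.3) to point git at our own key instead of CircleCI’s read-only one:

export GIT_SSH_COMMAND="ssh -i ~/.ssh/github_rsa.key"
git push --set-upstream origin master
git push origin --tags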

The final deploy scripts

To make this work you need github.com’s fingerprint as an environment variable, as well as the OSS Sonatype login credentials, the GPG signing key and passphrase, and the GIT read/write SSH key.

Recently I was required to calculate an expected due date based on office working days. Shockingly, this is not readily available in the JDK or even in a well-known open source library.

Jollyday to the rescue! And some fancy Java Streams.


Introducing Jollyday

Jollyday is a small and slightly outdated library that still does a fine job if you use it in a straightforward way.

It has all kinds of rules with which you can define a dynamically calculated holiday that moves if it falls in the weekend (like Holland’s King’s Day, which moves to Saturday if it falls on a Sunday). It also has validity directives, so you can specify different rules for different years of the same holiday, such as when our Queen’s Day moved when we got a new Queen.

It even includes Easter, which requires insane math!

Let’s calculate the due date!

Configure the weekends

We’ll use JDK date time API for defining the Weekend.
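
A minimal sketch:

import java.time.DayOfWeek;
import java.util.EnumSet;
import java.util.Set;

public static final Set<DayOfWeek> WEEKEND_DAYS = EnumSet.of(DayOfWeek.SATURDAY, DayOfWeek.SUNDAY);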

With the above, you can now do something like WEEKEND_DAYS.contains(myLocalDate.getDayOfWeek()).

There are other ways to determine if a given date lies in the Weekend, but it’s not as readable:

localDate.getDayOfWeek().getValue() >= DayOfWeek.SATURDAY.getValue()

Or using a TemporalQuery:

localDate.query(t -> t.get(ChronoField.DAY_OF_WEEK) >= 6)

Configure the holidays

Either pick a country that’s ready to go or, if you want to configure things further, take one of the country XMLs from the library and create your own. We’ll do just that and create a new holidays config, creatively named “MyList”.
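
A sketch of what such a config could look like, loosely modeled after the bundled country XMLs (check the library’s XSD for the exact schema of your Jollyday version):

<tns:Configuration hierarchy="mylist" description="MyList"
                   xmlns:tns="http://www.example.org/Holiday">
    <tns:Holidays>
        <tns:Fixed month="JANUARY" day="1" descriptionPropertiesKey="NEW_YEAR"/>
        <tns:ChristianHoliday type="EASTER_MONDAY"/>
    </tns:Holidays>
</tns:Configuration>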

Now to get these holidays in Java:
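
A sketch, assuming the file follows Jollyday’s Holidays_MyList.xml naming convention on the classpath (the exact API differs slightly between Jollyday versions):

import de.jollyday.Holiday;
import de.jollyday.HolidayManager;

import java.time.LocalDate;
import java.util.Set;
import java.util.stream.Collectors;

HolidayManager holidayManager = HolidayManager.getInstance("MyList");
Set<LocalDate> holidays = holidayManager.getHolidays(LocalDate.now().getYear()).stream()
        .map(Holiday::getDate)          // Holiday -> plain LocalDate
        .collect(Collectors.toSet());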

Determine due date

With the weekends and holidays set, we can use Java streams to find the due date rather elegantly:
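
A sketch combining the pieces above (WEEKEND_DAYS and the holidays set as defined earlier; Iterables comes from Guava):

import com.google.common.collect.Iterables;
import java.time.LocalDate;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public static LocalDate calculateDueDate(LocalDate startDate, int workingDays, Set<LocalDate> holidays) {
    return Iterables.getLast(Stream.iterate(startDate, date -> date.plusDays(1))
            .filter(date -> !WEEKEND_DAYS.contains(date.getDayOfWeek()))  // skip weekend days
            .filter(date -> !holidays.contains(date))                     // skip holidays
            .limit(workingDays)                                           // count working days
            .collect(Collectors.toList()));
}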

Here’s what happens:

  • Generate an infinite list, starting with a date and adding one day each iteration.
  • Filter out any holiday or weekend days.
  • Keep the other days until the stream limit is reached, indicated by the number of working days we want to count.
  • Using Guava’s Iterables.getLast, return the date found after counting the working days.

Considering the discussions regarding story points keep popping up, I felt the need to comprehensively write down what I feel is the value of a story point. Everything is up for discussion.

What’s in a number

A story point is an arbitrary measure of work used by Scrum teams to express the effort required to implement a story. In simple terms, it is a number that tells the team how hard the story is, where hard is a combination of complexity and effort.

The numbers must be in the Fibonacci range, because as the estimation goes up, so does the error margin. To compensate for our natural false confidence in estimating, the Fibonacci range explicitly doesn’t allow for precise estimation; rather, it goes up exponentially. Another reason why the Fibonacci range works so well is that complexity doesn’t scale linearly, yet humans tend to assume it does.

Story points are not directly related to time at all; it is about ratios more than the numbers themselves. Story B might be twice the size of story A, for example; the stories can then be estimated as 1 and 2, or 3 and 8 (at least double, according to Fibonacci). A team might say: last sprint we did story B, so for the same amount of work we can do stories C and D. There’s no time involved here, just number ratios (or relative estimates).

The best thing about it is that we can have individuals with wildly different skill sets or speed agree on the amount of work for a story. It is a universal estimation technique.

How a story point relates to time

If a story point is not directly related to time, then how can you tell how much work a team can do in a sprint? After all, a sprint is a fixed time box, so we end up converting an amount of work to a timeline; there’s no way around this need for planning, unless you do Kanban.

After a fixed time you can see how many story points have been delivered; the result is called the velocity, like 15 points per week. It’s an indicator of throughput. However, you will never be able to tell how much time a specific story or story point will take. You can try based on past performance, but as the velocity is based on the team’s average, including general external factors, a single story will never be entirely predictable. All you can say is: “for a sprint of 3 weeks, we are pretty confident we can deliver this much work” (but under-promise and over-deliver).

Velocity is never constant

Velocity is always team-based, averaging the individual speeds. As long as the general setup of the team remains the same (1 senior, 2 juniors, 3 devs, 1 tester, etc.), the velocity should be somewhat stable and dependable, extreme external disturbances during a sprint notwithstanding. So if you swap out a medior (mid-level developer) for another medior, the change should not affect the velocity much in a team of 8 members.

Velocity is influenced by many factors, such as developer skills, number of developers, downtime of TEST / Bamboo / SVN, communication issues, changing requirements, meetings, sickness, unexpected extra work, etc. By taking the average velocity of multiple sprints you can get a cautious suggestion of what a team is capable of in a single sprint. However, know that the environment is always changing; the velocity of any given single sprint is really a snapshot and should only be used pessimistically.

Sprint zero

When a team has just formed, to get a feel for how many points it can do in a sprint, the team members don’t plan at all; they just estimate a bunch of work, start, and see how many points are finished at the end. That then becomes the initial velocity and serves as a basis for planning the next sprint.

How to estimate a user story

To be able to assign a number to a story, you need a baseline: a reference story. The numbers in the baseline are not really important, as long as you don’t end up going above 100 with estimations. If a simple task is baselined at 55 story points, then a more complex task ends up being over 100. This should be avoided.

User Stories vs (Technical) Tasks

Often, we poker only once we have an oral consensus on a detailed, technical, on-the-spot analysis. It is only natural: team members, and especially developers, feel more confident about an estimate if they know exactly what they need to do on a technical level. This is why developers often rule the sprint plannings while other roles are present as witnesses rather than first-class participants.

Ideally, however, Scrum tells us we shouldn’t estimate technicalities; rather, we should estimate functional pieces. After all, a User Story represents a functional concept from a customer perspective (“As a user I can…”). These functional stories should be estimated without knowing too much about them technically. This way everybody’s input becomes equally important. After estimating how much functionality we think we can do, the developers split up the stories into (technical) tasks.

I have never seen this approach in the wild, though. Somehow all the teams I have been part of gravitate towards technical analysis sessions, after which a very precise poker planning is done mostly by the developers. My instincts tell me that the ‘pure’ approach only works for small companies, where the technical challenges are smaller and integrations with other systems are fewer. The more complex a system becomes, the more you need to know about the technical details in order to give a realistic estimation of required effort in story points.

The definition of ‘amount of work’ / story point

It turns out the definition of amount of work is a moving target and not fixed at all. A team might say: “creating a new web service is 13 points”, but teams don’t stick to this definition in practice. In my experience teams invariably start to reduce the number of points as the task becomes better understood or easier. This is because the team perceives the work as less work, and often, up to a certain degree, it really is; it’s not uncommon to hear the phrase: “oh, I have done this before, it’s easy now”, or: “oh, we can just copy-paste it from the other project, it’s just 1 point”, or: “we have never done this before, let’s add 3 points”. The problem then is how we estimate using reference stories.

A reference story is a story as if you were doing it from scratch. A reference story is to be used early in a project. Later on, a team’s implemented stories from previous sprints become the new reference stories, and you will have a larger pool to compare against: “we don’t have a reference story for this, but last month we did this for 5 points”.

It’s a false conclusion to say: “we have become more efficient at creating web services using copy/paste, so instead of reducing the number of story points, we state that our velocity is higher”. It is false because the actual work is not the same. Copy-pasting is not the same amount of work as creating a web service from scratch, so you shouldn’t treat it as such. We should estimate the actual work involved, because that is what a story point is about.

Story points vs business value

Often I see story points interpreted as business value. They are not. It is entirely possible to realize 20 story points while offering no business value. The reverse is true as well: you can realize 1 story point while offering serious business value.

Business value should be assigned separately if you want to keep track of it for reporting purposes or quick overviews. During backlog grooming sessions, where the user stories are prioritized in advance, a product owner will often have a good idea of what’s more important, but it can help to have the business value stated explicitly. This is more for the product owner and internal business discussions than for estimating implementation efforts; the team just estimates the amount of work and the product owner distributes it among sprints.

Another widely used system to indicate business value, or priority to be more precise, is a flag indicating minor, major, critical, blocking, etc. Platforms such as Atlassian’s JIRA have this built in.

Efficiency translates to velocity, not estimations

Leaving aside the ‘actual work being reduced because of the copy/paste factor’ class of arguments, estimations should never be modified because we became more efficient at some task. If creating a new web service always requires certain essential steps, it will be the same work every time, regardless of how fast you have become at it. For the same reason you also shouldn’t increase an estimation because “we have never done something like this before”, or: “we have to do some research first”. The amount of work remains the same; we simply don’t know how quickly we can do it. This will affect velocity, but not the story estimation.

Rather than the estimation changing, your team’s velocity will automatically go up as you can do more of the same work in a sprint compared to previous sprints.

Especially when research is needed, you should rather pull it out of the story as a time-boxed spike. You can’t put story points on a spike, because every spike is different. Spikes therefore should not be taken into account when calculating velocity, but it does mean the sprint is effectively shortened by the time box (or assign story points in hindsight, based on the same velocity used to plan the sprint).

Ideally you should always estimate comparing to either reference stories or stories from previous sprints. If you can’t find an exact match that’s fine, just try to relate your story in terms of effort. Try to avoid the automatic switch to gut feelings, because that’s where a team starts to mess up estimates and include invalid value modifiers. If you stick to this, your velocity should naturally go up over time as your team becomes more efficient and experienced.

The reality is: the human brain inevitably relies on gut feeling for estimations

Unfortunately, sticking to reference stories or past stories is easier said than done. So much easier, in fact, that I haven’t seen it followed through even once; teams always tend to forget about reference stories and rely on gut feeling a few sprints in.

The Fibonacci range alleviates some of the impact of off-estimations due to this, but really it indicates an impedance mismatch between how teams operate (and indeed how humans think) and how Scrum experts classically assert estimates should be done. To do this properly, you would need a permanent Scrum coach to act as estimation police, which goes against the very nature of agile development. You don’t want that.

As always, the real point is to be predictable, so for a single sprint, relying on gut feeling might be fine. It does make it difficult to plan large releases many sprints ahead, as many enterprise companies do, since gut feeling unanchored by reference stories is a moving target. It also makes it impossible to compare teams (more on that further down). One reason this occurs is that teams are inherently concerned with the next sprint, not some deadline in a far-off sprint. This is by design, but the company around the team often plans much further ahead and is concerned with this. This is the primary reason why, at the end of six sprints, business at large can be surprised that the estimations were so far off. The Product Owner plays a key role here in keeping business and development on the same page.

External factors influence capacity, not velocity

Some items you cannot estimate beforehand, such as production incidents: these should either be time-boxed or be updated during the sprint to match reality, based on the same velocity used to plan the sprint.

Story points are not about time, but about effort; production incidents take time, but their effort cannot be estimated in advance. The two are incompatible. If you get an incident that paralyzes two developers for a week, does it mean the team has a low velocity and the next sprint they should plan 20 story points less? No! Velocity remains the same, because the team’s capability remains the same. Capacity, however, was reduced. For the next sprint the same velocity applies, but you might want to reconsider the team’s capacity based on external factors.

The reason I keep these two concepts separated is that environmental factors tend to be incidental, yet you want to know structurally what a team is capable of. Averaging multiple sprints alleviates the impact of incidentally reduced capacity, but having a ‘pure’ velocity helps you plan better, in my opinion; it already is such a moving target, let’s not muddle it further by including the effect of sickness, which you can’t plan for.

One way of dealing with incidental team disruptions is to still assign placeholder story points for incidents based on the current velocity (which effectively translates to reserving a time box), but update them during the sprint to match reality. Another way is to reserve an actual time box for incidents and reduce the sprint duration / capacity (plan fewer story points).

An important goal of working with story points and sprints is predictability. It is not a numbers game, nor is it a measurement of performance. The objective is that a team can say with confidence how much work it thinks it can deliver in the next two to three weeks.

To be clear: this is not to say external factors are not part of a team’s core responsibilities. Especially in DevOps, production incidents are obviously part of the job. When it comes to predicting how much a team can do and how much capacity it has, however, velocity should not be influenced by production incidents. For example, a team can become twice as fast compared to when it started, even though it does half the work because there are more production incidents (capacity, however, is down to a quarter due to overwhelming external factors). Discuss!

How to compare teams

Because over time reference stories fade into the background in favor of the stories actually implemented in previous sprints, the actual unit size of a story point within a team starts to shift ever so slightly. This is because teams start to rely more on ‘gut feeling’ rather than methodically comparing against past stories or even the reference stories. In my experience this just happens to all teams. And stories never match exactly, so estimations are always relative. Because of this, comparing teams, even if their reference stories are the same, is like comparing apples to oranges.

Another reason to avoid comparisons between teams is that there are too many external factors in play. An absolute no-no is to expect a certain performance from one team based on the velocity of another team!

Never make a team feel they should do more either, because velocity is easily gamed. I’ve experienced several times that a team’s seemingly low performance was presented as a graph next to a team that performed well, only for the first team to increase estimates and ‘game’ the system. A team’s performance is not bad or good; it just is. Let the team do its thing, and if something is not right, it will come from the team itself (preferably during retros).

Use velocity to gain insight about teams, not to steer teams.

Quick note on how to configure an embedded ActiveMQ server using Spring @Configuration.
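
A minimal sketch using ActiveMQ’s BrokerService (the broker name and settings are illustrative):

import org.apache.activemq.broker.BrokerService;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EmbeddedActiveMQConfiguration {

    // Spring calls start() after construction and stop() on context shutdown
    @Bean(initMethod = "start", destroyMethod = "stop")
    public BrokerService brokerService() throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("embedded");   // clients connect with vm://embedded
        broker.setPersistent(false);        // keep everything in memory
        broker.setUseJmx(false);
        return broker;
    }
}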

 

Here’s a way to quickly generate a private and public key:
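
For example, for a 2048-bit DKIM key pair (the file names match the output listed below):

openssl genrsa -out dkim.pem 2048
openssl rsa -in dkim.pem -pubout -outform der -out dkim.der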

On Windows, openssl needs a file in which to produce random data:
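
Pointing the RANDFILE environment variable at a writable file does the trick (the file name is arbitrary):

set RANDFILE=.rnd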

Running this results in the following CLI output:

openssl cli output

 

And these two files will be generated:

  • dkim.pem
  • dkim.der

Since I work on a Windows PC, I have a very simple batch file that quickly generates a key pair for me:
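
A sketch of such a batch file, combining the commands shown above:

@echo off
set RANDFILE=.rnd
openssl genrsa -out dkim.pem 2048
openssl rsa -in dkim.pem -pubout -outform der -out dkim.der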

Quoting Smashing Magazine

You know how it works: you spend hours trying to find a workaround for a problem that you have encountered, just to realize that it doesn’t quite work in, you know, that browser. Finding little techniques and tricks to help you get to results faster can immensely improve your productivity, so you don’t have to waste time on solutions that will never see the light of day.

I love finding those little useful front-end goodies that make our lives easier. Since technologies emerge and evolve permanently, keeping track of what’s going on is often difficult, especially since specifications change and so does the browser support. For a replacement talk at SmashingConf Oxford last week, I’ve decided to collect some of the useful techniques from various articles, conversations and my workshops in a slide deck — and since it proved to be useful for many front-end developers I’ve spoken to after the talk, I’m very privileged to share it with the entire community as well. To the slides.

So off we go. A few dirty tricks from the dark corners of front-end development. Beware: the things you’ll find in the deck can not be unseen, and I do not take any responsibility for whatever happens next.

You can also download the entire slide deck on SpeakerDeck (PDF). And you’ll find slides from the other SmashingConf talks in an overview of SmashingConf Oxford slides. Happy learning!

From time to time it happens that I’m blocked from a website by a corporate proxy, or that there’s simply a problem with HTTP traffic. Whenever that happens, you have a choice: use your mobile network-connected phone or tablet, wait until the proxy is fixed, or postpone until you’re home. Another option is to redirect the traffic yourself, using a tunnel.

This is a quick guide to setting up an HTTP proxy tunnel.


What is it

Command Query Responsibility Segregation, or CQRS (described by Martin Fowler), is a variation on the Command-query separation principle, or CQS for short. CQS states that every method should be either an action or a query, but not both. It is a separation of responsibility where you clearly divide methods that change state from methods that report on state. CQRS is different in that it specializes in separating actions from queries for data CRUD manipulation specifically.

CQRS is an application architecture pattern or principle that is very simple in essence: split your read and write application paths.

Take a REST service, for example: traditionally, you call the REST service, the service goes to the business layer, then the integration layer, and then all the way back again, synchronously along the same path. The same applies to asynchronous messaging systems such as JMS, because the application reading from a queue or topic and posting back a reply might still handle the business logic and persistence integration in a synchronous fashion using a single path.

Traditional CRUD model

With CQRS, the model is split up. In the case of REST, there would be at least two services now: one for performing the action and one for performing the query of the result (unless you create some sort of synchronous read/write blocking facade in front of the CQRS interface).

CRUD with CQRS
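
As a minimal illustration of the split in Java (the interface and method names are hypothetical, not from any particular framework):

public interface OrderCommands {
    // action: changes state, returns nothing
    void placeOrder(String productId, int quantity);
}

public interface OrderQueries {
    // query: reads state, changes nothing
    String getOrderStatus(String orderId);
}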

Why would you need it

Why would you go out of your way to split that up? To quote Microsoft’s excellent article on the subject:

The most common business benefits that you might gain from applying the CQRS pattern are enhanced scalability, the simplification of a complex aspect of your domain, increased flexibility of your solution, and greater adaptability to changing business requirements.

Because you separate these concerns, the system becomes easier to maintain and understand, and you can scale these concerns separately as well.

When to use it

Although we have outlined some of the reasons why you might decide to apply the CQRS pattern to some of the bounded contexts in your system, it is helpful to have some rules of thumb to help identify the bounded contexts that might benefit from applying the CQRS pattern.

In general, applying the CQRS pattern may provide the most value in those bounded contexts that are collaborative, complex, include ever-changing business rules, and deliver a significant competitive advantage to the business. Analyzing the business requirements, building a useful model, expressing it in code, and implementing it using the CQRS pattern for such a bounded context all take time and cost money. You should expect this investment to pay dividends in the medium to long term. It is probably not worth making this investment if you don’t expect to see returns such as increased adaptability and flexibility in the system, or reduced maintenance costs.

So you may have a complex domain model that is easier to understand if you separate read/write concerns, or you may expect a lot of reads and writes and want to be more flexible in scaling these.

Martin Fowler posts a warning though:

Despite these benefits, you should be very cautious about using CQRS. Many information systems fit well with the notion of an information base that is updated in the same way that it’s read, adding CQRS to such a system can add significant complexity. I’ve certainly seen cases where it’s made a significant drag on productivity, adding an unwarranted amount of risk to the project, even in the hands of a capable team. So while CQRS is a pattern that’s good to have in the toolbox, beware that it is difficult to use well and you can easily chop off important bits if you mishandle it.

CQRS adds complexity to your application, but it removes a lot of complexity in the right situation.

Microsoft’s article quotes some concerns as well:

Although you can list a number of clear benefits to adopting the CQRS pattern in specific scenarios, you may find it difficult in practice to convince your stakeholders that these benefits warrant the additional complexity of the solution.

“In my experience, the most important disadvantage of using CQRS/event sourcing and DDD is the fear of change. This model is different from the well-known three-tier layered architecture that many of our stakeholders are accustomed to.”
—Paweł Wilkosz (Customer Advisory Council)

“The learning curve of developer teams is steep. Developers usually think in terms of relational database development. From my experience, the lack of business, and therefore domain rules and specifications, became the biggest hurdle.”
—José Miguel Torres (Customer Advisory Council)

Adopting the CQRS pattern helps you to focus on the business and build task-oriented UIs.

Things to watch out for

A couple of problems arise from this pattern:

Synchronous read + write behavior becomes complex

If you need to perform an action and read back the result, then you end up working around the pattern to make ends meet… literally. In this case you need a blocking mechanism that initiates the action while asynchronously waiting for the result to come in to report back. The real solution is to provide a published interface that doesn’t return data from actions. A CQRS system is a natural fit in an EDA (event-driven architecture) landscape.

Stale data

As reading from and writing to a system using CQRS becomes an asynchronous process, the problem of stale data arises. Some sort of synchronizing behavior should be present to keep up with the ever-changing state of a CQRS system, be it a blocking write/read mechanism, polling, or a batch job.

JavaScript doesn’t natively support enums; they’re not included in the ECMAScript specification. Here are a couple of ways to emulate them in JavaScript.

What is an enum?

Essentially, an enum is a set of unique named constants, like UP, DOWN, NORTH, WEST, LEVEL1, LEVEL2, etc. This is a great facility, because you can use the enum names as labels or flags or modes or statuses or whatever you like, in a very readable way. Here’s an example from Java:
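
For instance (a minimal sketch, not taken from the original post):

public enum Direction {
    UP, DOWN, LEFT, RIGHT
}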

This is great right? Enums work because the constants get unique values in Java. You never need to know those values, because you equate the enum constants directly, not by their value:
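
Something like this, assuming a rocket object with a Direction field:

if (rocket.getDirection() == Direction.UP) {
    // compared by constant name, not by underlying value
}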

You don’t do if (getDirection() == 0) (although this even works if UP is the first enum in the list): you compare readable names not values.

Enums in Javascript

To achieve this in Javascript, you need to emulate an enum, which is almost the same thing, except you explicitly assign the unique values initially. Observe:
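
A sketch of the classic numeric emulation (the constant names mirror the Java example above):

var Direction = {
    UP: 0,
    DOWN: 1,
    LEFT: 2,
    RIGHT: 3
};

var myRocket = { direction: Direction.UP };
if (myRocket.direction === Direction.UP) { /* reads just like the Java version */ }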

The values don’t matter, as long as they are unique. They can be numbers, decimals, strings, arrays or anything else. Numbers are common, but the following pattern is often used as well:
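
For example:

var Direction = {
    UP: 'UP',
    DOWN: 'DOWN',
    LEFT: 'LEFT',
    RIGHT: 'RIGHT'
};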

The advantage of this is that when you print out myRocket.direction you don’t get a number, but the name of the constant. This is useful for logging and debugging.


Enums in Javascript with Lodash

I keep coming back to the awesome bag of tricks that is Lodash and I keep learning new things about it every day.

Here’s a way to create an enum without explicitly defining the values:
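
A sketch (requires Lodash):

var Direction = _.keyBy(['LEFT', 'RIGHT', 'UP', 'DOWN'], _.identity);
// Direction is now { LEFT: 'LEFT', RIGHT: 'RIGHT', UP: 'UP', DOWN: 'DOWN' }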

What happens here? _.keyBy() creates a { key: value } object for each element in the given array and uses the function argument (second parameter) to produce the keys. _.identity simply returns the value as the key, so you end up with exactly the same as { LEFT: 'LEFT', RIGHT: 'RIGHT', UP: 'UP', DOWN: 'DOWN' }.

What do you do when you want to show someone all your skills ever? You make a tag cloud, of course!

