How does Maven resolve version conflicts and how can we deal with “omitted for conflict with ..” message in pom.xml?


We have this situation (an old screenshot with old Spring 3 versions):

[Screenshot: dependency hierarchy showing the conflicting spring-aop versions 3.0.7 and 3.2.4]

The problem is: my direct dependency spring-security-web depends on version 3.0.7 of spring-aop, so this version of spring-aop is a second-level dependency for us. My other direct dependency spring-webmvc depends on spring-web, which in turn depends on version 3.2.4 of spring-aop; here the newer version is a third-level dependency for us. Since Maven resolves version conflicts with a nearest-wins strategy, spring-aop 3.0.7 wins in our example. That is why we see the warning “omitted for conflict with 3.0.7” beside the 3.2.4 version. If spring-webmvc depended directly on version 3.2.4, both versions would be on the same level, and in that case Maven would resolve the conflict by simply using the one declared earlier in the pom.

In our example, since we cannot have multiple versions of the same library in our dependency hierarchy, spring-webmvc will be using the older version of spring-aop, which may cause incompatibilities. It is actually just luck that it has worked so far. If we had another dependency which depended directly on a far older spring-aop version like 2.0, our spring-webmvc would probably not work seamlessly.
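The conflict can also be inspected on the command line with `mvn dependency:tree -Dverbose`. The output looks roughly like this (the artifact versions shown here are illustrative, standing in for the versions on the screenshot):

```text
$ mvn dependency:tree -Dverbose
[INFO] +- org.springframework.security:spring-security-web:jar:3.1.4.RELEASE:compile
[INFO] |  \- org.springframework:spring-aop:jar:3.0.7.RELEASE:compile
[INFO] \- org.springframework:spring-webmvc:jar:3.2.4.RELEASE:compile
[INFO]    \- org.springframework:spring-web:jar:3.2.4.RELEASE:compile
[INFO]       \- (org.springframework:spring-aop:jar:3.2.4.RELEASE:compile - omitted for conflict with 3.0.7.RELEASE)
```

The parenthesized lines show exactly which versions Maven omitted and why.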

We should be aware of exactly what we are using, consciously decide which version we should use, and resolve this conflict by defining a dependencyManagement section in our pom:

    ...
    <properties>
        <org.springframework.version>3.2.4.RELEASE</org.springframework.version>
    </properties>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-aop</artifactId>
                <version>${org.springframework.version}</version>
            </dependency>
        </dependencies>
    </dependencyManagement>

    <dependencies>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-webmvc</artifactId>
            <version>${org.springframework.version}</version>
        </dependency>
    ...

Now all our dependencies will be using the newer version of spring-aop, which may also (though less likely) cause problems. But as I stated above, this decision should be taken consciously, not accidentally.
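An alternative to a dependencyManagement section, by the way, is to exclude the transitive spring-aop from the dependency that drags in the unwanted version. A sketch (the spring-security-web version here is just a placeholder):

```xml
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-web</artifactId>
    <version>3.1.4.RELEASE</version>
    <exclusions>
        <exclusion>
            <groupId>org.springframework</groupId>
            <artifactId>spring-aop</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```

With the 3.0.7 branch excluded, the 3.2.4 version from spring-web becomes the nearest one and wins.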

Evaluation with Mockito Matchers and ArgumentCaptor


There are many ways of testing a method. I will list here some trivial examples testing my method using Mockito Matchers and ArgumentCaptor; each of them has 100% coverage, but each is more clever than the last. We will again realize how stupid it is to use just code coverage tools to ensure the quality of our tests.

	protected void createObject() {

		MyObject myObject = new MyObject();
		myObject.setCount(1);
		myObjectDao.create(myObject, "x");
	}
import static org.mockito.Matchers.*;
..

@Mock
private MyObjectDao myObjectDaoMock;

@Test
public void test() {
     // Run Test
     x.createObject();

     // Control
     verify(myObjectDaoMock).create(any(Object.class), anyString());
}
     ...

     // Run Test
     x.createObject();

     // Control
     verify(myObjectDaoMock).create(any(MyObject.class), eq("x"));
}

Note: In cases where we have multiple method arguments, as above, if we use a matcher for just one of the arguments, like in this example:

     verify(myObjectDaoMock).create(any(MyObject.class), "x");

we see an InvalidUseOfMatchersException with this error:

This exception may occur if matchers are combined with raw values:
//incorrect:
someMethod(anyObject(), "raw String");
When using matchers, all arguments have to be provided by matchers.
For example:
//correct:
someMethod(anyObject(), eq("String by matcher"));

That’s why, if a matcher is needed for one argument, all arguments must be matchers!

Since the test object (with type MyObject) is created in the test method itself, using ArgumentCaptor is the only way to test whether the create method is invoked with the intended input, and whether the attributes of this input are set properly.

     ...
     ArgumentCaptor<MyObject> argumentCaptorForMyObject = ArgumentCaptor.forClass(MyObject.class);

     // Run Test
     x.createObject();

     // Control
     verify(myObjectDaoMock).create(argumentCaptorForMyObject.capture(), eq("x"));
     MyObject myObject = argumentCaptorForMyObject.getValue();
     assertEquals(1, myObject.getCount());
}

So, when you want to test a void method which creates objects and uses them somehow, you will probably need an ArgumentCaptor.

If one of those created objects is a list of a specific type, you may wonder how to define your ArgumentCaptor, as this won’t work:

     ArgumentCaptor<List<MyObject>> captor = ArgumentCaptor.forClass(List.class);

This generics-problem can be avoided by defining the ArgumentCaptor with the @Captor annotation like this:

import static org.mockito.Matchers.*;
..

@Mock
private MyObjectDao myObjectDaoMock;

@Captor
private ArgumentCaptor<List<MyObject>> captor;

@Test
public void test() {
     ...

     // Control
     verify(myObjectDaoMock).someMethod(captor.capture());
     List<MyObject> listOfMyObjects = captor.getValue();
}

Note: Do not confuse the method getValue with getAllValues. This is especially important if the argument itself is a list, since the return type of getAllValues is also a list. When our mock is invoked multiple times and we want to test whether it was invoked properly each time, it will look like this:

     ...

     // Control
     verify(myObjectDaoMock, atLeastOnce()).someMethod(captor.capture());
     List<List<MyObject>> listOfMyObjects = captor.getAllValues();
}

Do not test something which your method is not responsible for! Love Mockito and use it properly. Also do not overuse it! Happy clever test writing!


Pomodoro (Tomato) Technique


The Pomodoro Technique is a time management method we use at work every day, which can also be applied to many other areas of life like cooking, cleaning, studying etc. It improves productivity in many ways, although it seems stupidly simple. It uses a timer to break work down into intervals/tomatoes (traditionally 25 minutes), separated by short breaks.

Here is its traditional recipe:

1- Decide your task
2- Set the timer to n minutes (traditionally 25)
3- Work on the task until the timer rings
4- Take a short break (3-5 minutes)
5- After 4 tomatoes, take a longer break (15-30 minutes)

At first it seems like basically just taking some breaks while working, but in a more organized way. We all know that frequent breaks improve mental agility, but this technique has more to offer. Since you know you have just 25 minutes, you will always be trying to complete your task within that interval. If you did not manage it, you either did not slice your task well enough for it to be handled within one tomato, or you could not estimate the time required for it. Either way, you will keep getting better:

1- After some time you will realize that you complete your tasks faster.
2- You will be able to break your tasks into small pieces, which can keep you from getting frustrated with your long to-do list. It also improves collaboration in teams, since you can distribute the tasks in a more realistic way.
3- You will estimate the time required for your tasks more precisely, which results in planning your daily work efficiently. You will know approximately how many tomatoes you need for a specific task.

You can also slightly extend this technique depending on your needs. Here is how we use it in meetings, which can really take hours:

You know how a meeting can easily become a waste of time. A common example: you would like to discuss something and decide how to proceed. Since everybody has their own idea, and one topic/one idea opens another and another, the topic easily shifts somewhere else. Only after an hour do you realize that the meeting time is up and you have not made any progress; rather, you now have even more questions in your mind. We use our tomatoes for this problem. When our timer rings (and believe me, at that very moment you will always be surprised by how fast the time has gone by), we ask ourselves these questions:

“Are we on a reasonable path?”
“Did we get any closer to our goal?”
“Should we get rid of distractions?”

And then we focus, start the tomato again and try again. Even while studying or surfing the Internet, you will notice that you always keep focusing on your actual goal, instead of getting lost somewhere.

There are many iOS / Android apps out there which help you do your tomatoes. They mostly produce graphical reports so you can evaluate yourself: how many tomatoes did you do for which tasks? How many times, and why, were you distracted? You can even rate your tomatoes, so in the end you can see at which time of the day / day of the week you were more productive for a specific task.

I want to emphasize it one more time: it seems extremely simple, yet it is very effective.


How to ensure quality of JUnit tests?


Besides code review, code coverage (line and/or branch) is also often considered a means of measuring the quality of unit tests. From my experience, this leads to developers adding some tests (let’s call them useless tests) just to be sure their coverage result is high enough. Even tests which look cool and feel right can be somehow stupid, or not clever enough.

I had always thought, though, that I write reasonable tests which do not just technically cover 100% of my code, but also cover all the functional possibilities; until last week, when I realized just the opposite. As it is always better to explain with an example, here is a dummy part of my method to be tested:

protected void myMethod(MyObject o) {
     o.setComment("comment");
     myObjectDao.updateMyObject(o);
}

Here is a part of my JUnit test. This is, by the way, the best case in our project, where we always try to have 100% coverage; best case in terms of test quality, functional coverage etc.

    @Mock
    private MyObjectDAO myObjectDaoMock;

    @Test
    public void testMyMethod() {
         MyObject o = new MyObject();

         // Run Test
         x.myMethod(o);

         // Control
         assertEquals("comment", o.getComment());
         verify(myObjectDaoMock).updateMyObject(o);
    }

It actually looks promising. We have two lines of code in our method, and the test is trying to test both lines. It could also have been like this, where we also have 100% coverage:

         ...
         // Control
         verify(myObjectDaoMock).updateMyObject(o);
    }

Anyway, the above test is green; we are testing whether the comment attribute of myObject is set, and whether the update method is called with myObject. But not whether the update method is called with the already updated attribute. When we swap the two lines in myMethod (first updating the object, then setting its attribute), our test is still green. Yet a good test should fail with any functional change!

I thought that an ArgumentCaptor could be useful here. So I made this one:

         ...
         // Control
         ArgumentCaptor<MyObject> argumentCaptor = ArgumentCaptor.forClass(MyObject.class);
         verify(myObjectDaoMock).updateMyObject(argumentCaptor.capture());
         MyObject oActual = argumentCaptor.getValue();
         assertEquals("comment", oActual.getComment());
    }

… hoping that the ArgumentCaptor would capture the state of the object with which the update method is called, so I could be sure that the update method is called with the updated comment attribute. The test is green again, but it still does not test cleverly: when we swap the two lines in myMethod again (first updating the object, then setting its attribute), our test is still green.

I understand now that the ArgumentCaptor does not create a copy of the object for itself; argumentCaptor.getValue() returns a reference to the original object. And since Java passes references by value, it makes no difference for the JUnit assertion whether I update the object before or after the dao call, as long as the object identity stays the same.
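This reference behaviour can be shown without Mockito at all. A minimal sketch (the class here is a hypothetical stand-in for the real domain object) in which "capturing" is simply storing the same reference, just as ArgumentCaptor does:

```java
// Hypothetical stand-in for the domain object.
class MyObject {
    private String comment;

    public void setComment(String comment) { this.comment = comment; }
    public String getComment() { return comment; }
}

class CaptorReferenceDemo {
    static String capturedCommentAfterMutation() {
        MyObject o = new MyObject();
        o.setComment("comment");

        // Stand-in for argumentCaptor.getValue(): same reference, no copy.
        MyObject captured = o;

        // Mutating through the original reference...
        o.setComment("xxx");

        // ...is visible through the "captured" reference too.
        return captured.getComment();
    }
}
```

capturedCommentAfterMutation() returns "xxx", not "comment": exactly why the assertEquals in the test above cannot detect a reordering of the two lines.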

So how can I actually test that the updateObject method is called with the updated value of myObject?

There are two solutions we can look at:

1- We can create an inner class in our test, extending the MyObject type, so that we can override its equals and hashCode methods. In the equals method we can add the comment attribute of the object as a requirement for equality. In that case, when our update method is not called with the intended value of comment, Mockito will see a different object while comparing expected vs. actual, and our test will fail with the changes stated above.

This solution was not appropriate for us; firstly because we have many other tests which also need the original equals method of the MyObject type, and secondly because I would simply prefer not to extend a domain object in a test class.
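For completeness, a sketch of what option 1 could look like (class names are hypothetical; MyObject again stands in for the real domain object):

```java
// Hypothetical stand-in for the domain object.
class MyObject {
    private String comment;

    public void setComment(String comment) { this.comment = comment; }
    public String getComment() { return comment; }
}

// Test-local subclass whose equals/hashCode take the comment attribute
// into account, so verify(...) compares by comment as well.
class MyObjectWithCommentEquality extends MyObject {
    @Override
    public boolean equals(Object other) {
        if (!(other instanceof MyObject)) {
            return false;
        }
        MyObject that = (MyObject) other;
        return java.util.Objects.equals(getComment(), that.getComment());
    }

    @Override
    public int hashCode() {
        return java.util.Objects.hashCode(getComment());
    }
}
```

The expected object passed to verify would then be an instance of this subclass, so the comparison fails whenever the dao receives an object with a different comment.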

2- (the one I personally prefer) Mock both MyObject and MyObjectDAO. In that case we can verify whether the setter was called on MyObject, and also verify the order of the calls.

InOrder inOrder = inOrder(myObjectMock, myObjectDaoMock);

The above line, together with the inOrder.verify calls below, ensures that the dao is always verified after the object interaction (getter/setter). So the test for our modified method would fail. With this extension our test would look like this:

    @Mock
    private MyObjectDAO myObjectDaoMock;

    @Mock   // we can also use @Spy here, depending on our needs
    private MyObject myObjectMock;

    @Test
    public void testMyMethod() {
         // Run Test
         x.myMethod(myObjectMock);

         // Control
         InOrder inOrder = inOrder(myObjectMock, myObjectDaoMock);

         inOrder.verify(myObjectMock).setComment("comment");
         inOrder.verify(myObjectDaoMock).updateMyObject(myObjectMock);
    }

We have moved one step further again. The test above looks better since we now ensure the order, the calls to our mocks, and the parameters passed in those calls. Yet we still miss something: the connection. When we think about the cases we covered, one by one, all of them make sense. But we still do not test whether the update method is really called with the intended value “comment”.

Let’s change our method one more time, and see if the test would fail:

protected void myMethod(MyObject o) {
     o.setComment("comment");
     o.setComment("xxx");
     myObjectDao.updateMyObject(o);
}

The above test would still pass, since the ordering is correct, the comment attribute of myObject is set to “comment”, and the dao is called with myObject. But in the end our object will be saved with “xxx” in its comment attribute. Again the same problem: we still do not test whether the update method is really called with the intended value “comment”. We can extend our test one more time:

         ...
         // Control
         InOrder inOrder = inOrder(myObjectMock, myObjectDaoMock);

         inOrder.verify(myObjectMock).setComment("comment");
         verifyNoMoreInteractions(myObjectMock);
         inOrder.verify(myObjectDaoMock).updateMyObject(myObjectMock);
    }

Now this test would fail for the method above. Since verifyNoMoreInteractions forbids any further interactions with myObject, we can be sure that our intended value “comment” is set and then the update method is called. But we still miss something here. As I wrote above, “a good test should fail with any functional change!” (which I think is the most important sentence of this post); but a good test should also not fail with a purely non-functional change! So let me rewrite that sentence: a good test should fail if, and only if, there is a functional change! Let’s modify our method one last time:

protected void myMethod(MyObject o) {
     o.setComment("comment");
     String comment = o.getComment();
     myObjectDao.updateMyObject(o);
}

Here, as we just call a getter, which does not have any effect on the functionality of my method, my test should NOT fail; we always want to be able to modify our methods as our requirements grow, and to update their tests only when really needed. But our test above would fail again, since verifyNoMoreInteractions does not let us touch myObject any more.

As you see, we have just a stupid method with two lines of code, and we still could not cover it 100%. Writing unit tests is really not that simple, and it should not be. Let me put it this way: when we needed less time to write the tests for a specific method than we needed to write the method itself, we should review those tests again.

This is the last modification for our test:

         ...
         // Control
         InOrder inOrder = inOrder(myObjectMock, myObjectDaoMock);

         inOrder.verify(myObjectMock).setComment("comment");
         inOrder.verify(myObjectMock, times(0)).setComment(anyString());
         inOrder.verify(myObjectDaoMock).updateMyObject(myObjectMock);
    }

Here, instead of preventing all calls to myObject, we simply say: “the comment attribute of my object must not be set any more!”. Now we can modify our method as many times as we want. We really test whether our update method is called with the intended value of our object.

I have recently learned that the idea behind my way of thinking actually has a name: Mutation Testing. I think it is an old concept which is not very well known. I may post another entry about this topic someday. Until then, happy clever test writing!
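For those who want to try mutation testing right away: PIT (pitest) is a common mutation testing tool for Java and ships a Maven plugin. A minimal sketch of the plugin entry for the build/plugins section of the pom (the version number is an assumption; check for the current one):

```xml
<plugin>
    <groupId>org.pitest</groupId>
    <artifactId>pitest-maven</artifactId>
    <version>1.1.10</version>
</plugin>
```

Running `mvn org.pitest:pitest-maven:mutationCoverage` then mutates the code (e.g. swapping statements, changing literals) and reports which mutants your tests failed to kill.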

Using Nexus Repository Manager as the only developer in my network?


There are some topics for which I discussed/searched a solution on Stack Overflow over the past years. Occasionally I need some of them again and again; that’s why I wanted to summarize them here, and this is one of them.

My post was:

I recently wanted to integrate Nexus into my home-based Java project, just to learn what it is good for. As I read from posts on the internet and from the Nexus website, its main aim is:

  • Maven Central has more than 200k artifacts. Get a local copy of your artifacts; do not download them every time you need them. When you specify an artifact, Nexus will first be asked whether it already has it: if yes, it will be read from your local cache; if not, Nexus will download it from the central repo. Your build will continue to work regardless of what happens to the original artifact in Central. Speed up your builds, etc.

Now what I don’t understand here is: isn’t that the same as what Maven already does? The first time I define a new artifact in a pom.xml, the jar is downloaded from the central repo and placed in ~/.m2/repository. The next time, it will be read from there as long as a copy exists. Even if I create a new project, the downloaded jars will be used from this repository.

I think I would need Nexus if I had another developer in my network who also needs these jars. Then we would not need to download the jars separately from Central; we would define a network-based repository (Nexus) and the jars would be downloaded into it. The other developers would reach this repo without needing to reach the central repo at all.

In my case, I can’t see any advantage of Nexus. I now have this settings.xml, which Nexus suggests using:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                      http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
    <mirrors>
    <mirror>
      <!--This sends everything else to /public -->
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://localhost:8081/nexus/content/groups/public</url>
    </mirror>
  </mirrors>
  <profiles>
    <profile>
      <id>nexus</id>
      <!--Enable snapshots for the built in central repo to direct -->
      <!--all requests to nexus via the mirror -->
      <repositories>
        <repository>
          <id>central</id>
          <url>http://central</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>true</enabled></snapshots>
        </repository>
      </repositories>
     <pluginRepositories>
        <pluginRepository>
          <id>central</id>
          <url>http://central</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>true</enabled></snapshots>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
  <activeProfiles>
    <!--make the profile active all the time -->
    <activeProfile>nexus</activeProfile>
  </activeProfiles>
  <proxies/>
</settings>

The jars I have in ~/.m2/repository are exactly the same as the jars in http://localhost:8081/nexus/content/groups/public/. What is the advantage of Nexus in my case?

A part of the answer, which I may find useful in the future, is:

  1. You can set up your own hosted repository and deploy your snapshot releases (complete with timestamp in the version name) “for real” – rather than just with -SNAPSHOT as in your .m2 cache
  2. It’s a great way to learn how real Maven repositories work, which is an important skill inside a larger organization (and which you don’t want to learn through trial-and-error on a production Nexus repo).
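For point 1, the pom would need a distributionManagement section pointing at a hosted snapshot repository. A sketch, assuming the old Nexus content-repository URL layout used in the settings.xml above (the repository id and URL are placeholders):

```xml
<distributionManagement>
    <snapshotRepository>
        <id>nexus-snapshots</id>
        <url>http://localhost:8081/nexus/content/repositories/snapshots</url>
    </snapshotRepository>
</distributionManagement>
```

`mvn deploy` would then upload snapshot builds there; the credentials for the `nexus-snapshots` id belong in the `<servers>` section of settings.xml.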

HTML5 void elements


A void element is an element which can have attributes, but no content. Based on the HTML specification current at the time of writing (28 May 2013), these are the void elements:

area, base, br, col, command, embed, hr, img, input, keygen, link, meta, param, source, track, wbr

So we no longer have to close those tags. Example:

<meta charset="utf-8">

instead of

<meta charset="utf-8"/>  <!--(XHTML doctype)-->

If you encounter an error in your HTML files saying “The element type x must be terminated by the matching end-tag x”, you are probably either not using HTML5 as your doctype, or your view resolver does not support HTML5 / its HTML5 mode is not activated.