Saturday, April 30, 2011

A snapshot of Project Coin in JDK 7

Hi! At last I downloaded the JDK and adopted the early IntelliJ IDEA 10.5 release to play with this new version (I really can't adopt NetBeans, really can't :) ).
In order to prepare a small meeting with Oracle evangelists about the new release, I put together some really small tests (yes, I like that!!) to play with the new stuff.
I did not try everything yet, stopping just before embracing the new NIO.2 (this is a tradition of mine: for nearly 15 years I have always discovered the I/O API improvements last on each new JDK, superstition I guess).

I started very easy with the coin project testing the improvement on the switch statement concerning the ability to switch on String.

Driving my code with TDD (forever thank you, Mr. Beck), I designed a small StringSwitch class, tested this way:
@Test
public void switch_WithFoo_ShouldReturnBar() {
    assertThat(new StringSwitch().respond("foo"), is(equalTo("bar")));
}

@Test
public void switch_WithDean_ShouldReturnMartin() {
    assertThat(new StringSwitch().respond("dean"), is(equalTo("martin")));
}

@Test
public void switch_WithUnknown_ShouldReturnUnknown() {
    assertThat(new StringSwitch().respond("anybody"), is(equalTo("unknown")));
}
This drove me to the following code:

public String respond(final String message) {
    final String response;
    switch (message) {
        case "foo":
            response = "bar";
            break;
        case "dean":
            response = "martin";
            break;
        default:
            response = "unknown";
    }
    return response;
}
Easy stuff :)

Still easy (yep, I know): a small enhancement to the syntax of numeric literals. The driving tests looked like:

@Test
public void million_ShouldBe_Divided() {
    assertThat(new Amount(1_000_000).divideByThousand(), is(equalTo(1_000)));
}

@Test
public void evaluateBinary() {
    assertThat(0b1110 >> 1, is(equalTo(0b111)));
}
Elegant and easy to read; a valuable feature when you have been programming in a bank, believe me.
Former C developers should notice that these additional literal styles also work for short and byte constants.
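A quick sketch of what I mean (the class and constants here are mine, just for illustration): underscores and binary notation are plain int-literal syntax, so they apply to byte and short assignments as long as the constant fits the target type.

```java
public class LiteralStyles {
    // Underscores and binary notation are ordinary int-literal syntax,
    // so they also work for byte and short constants, as long as the
    // value fits the target type (a cast is needed otherwise).
    static final byte FLAGS = 0b0101_0001;                    // 81, fits a byte
    static final short MASK = (short) 0b1111_0000_1111_0000;  // needs a cast
    static final int MILLION = 1_000_000;

    public static void main(String[] args) {
        System.out.println(FLAGS);
        System.out.println(MILLION / 1_000);
    }
}
```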

There must be a lot to say about the diamond operator and the multi-catch statement, but I have been lazy on these. I simply verified that it was possible to adopt the less chatty diamond notation, TDD-programming a dumb list wrapper:

@Test
public void newListDiamond_ShouldBeEmpty() {
    assertThat(new ListDiamond().isEmpty(), is(true));
}

@Test
public void newListDiamond_WithOneElement_ShouldNotBeEmpty() {
    assertThat(new ListDiamond().with("string").isEmpty(), is(not(true)));
}
This leads to the following definition of the ListDiamond class:

public final class ListDiamond {
    private final List<String> list;

    public ListDiamond() {
        super();
        list = new ArrayList<>();
    }

    public boolean isEmpty() {
        return withList().isEmpty();
    }

    public ListDiamond with(final String string) {
        withList().add(string);
        return this;
    }

    private List<String> withList() {
        return list;
    }
}

Just imagine how much simpler it will be with more complex maps of lists or lists of lists, whose type declarations currently obfuscate the code.
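Since I was lazy on the multi-catch statement, here is at least a minimal sketch of it (the class and the respond helper are mine, just for illustration): one catch block can now handle several disjoint exception types.

```java
public class MultiCatch {
    // Before JDK 7 you needed one catch block per exception type (or a
    // common superclass); now a single handler can cover several types.
    static String respond(final String raw) {
        try {
            // parseInt throws NumberFormatException on garbage;
            // raw.trim() throws NullPointerException on null.
            return String.valueOf(Integer.parseInt(raw.trim()) * 2);
        } catch (NumberFormatException | NullPointerException e) {
            // 'e' is implicitly final in a multi-catch clause.
            return "unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println(respond("21"));
        System.out.println(respond("foo"));
    }
}
```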

What interested me most, I must confess, was the try-with-resources statement. Basically, this improvement prevents the production of entangled code when we come to the point of closing the resources we have allocated. See what I mean? You know, this piggy code you have to write each time you open streams, JDBC connections, etc.
Let's see an example. Suppose I want to load a file of properties, so that I can store and then modify the lines.

My dumb TDD practice leads me to the following test methods:

@Test
public void newWrapper_ShouldBeEmpty() {
    assertThat(new PropertiesWrapper().isEmpty(), is(true));
}

@Test
public void newWrapper_loadedWithTestFile_ShouldNotBeEmpty() throws IOException {
    assertThat(new PropertiesWrapper().loaded().isEmpty(), is(not(true)));
}

At the beginning the wrapper is empty; then I load it, and the wrapper is not empty anymore.
Using the try-with-resources improvement, this leads to the following code:


public final class PropertiesWrapper {
    private final List<String> list;

    public PropertiesWrapper() {
        super();
        list = new LinkedList<>();
    }

    public boolean isEmpty() {
        return list.isEmpty();
    }

    public PropertiesWrapper loaded() throws IOException {
        load();
        return this;
    }

    private void load() throws IOException {
        try (final BufferedReader br = new BufferedReader(new FileReader("test.properties"))) {
            String line;
            while ((line = br.readLine()) != null) {
                lines().add(line);
            }
        }
    }

    private List<String> lines() {
        return list;
    }
}

This is it: simple, efficient. No entangled code caught in a finally block wrapping yet another unreadable try-catch-finally clause. In JDK 7, BufferedReader implements Closeable, and therefore AutoCloseable; this special try clause binds the open/close life cycle of the managed AutoCloseable, so it gets closed without you having to do anything.
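And this mechanism is not reserved to the JDK classes: any class of your own implementing AutoCloseable gets the same treatment. A minimal sketch, with an imaginary TracedResource of mine that records when close() is invoked:

```java
public class AutoCloseDemo {
    // Any class implementing AutoCloseable can be managed by a
    // try-with-resources clause: close() is invoked automatically
    // when the block exits, even on an exception.
    static final class TracedResource implements AutoCloseable {
        private final StringBuilder log;

        TracedResource(final StringBuilder log) {
            this.log = log;
        }

        void use() {
            log.append("used;");
        }

        @Override
        public void close() {
            log.append("closed;");
        }
    }

    static String run() {
        final StringBuilder log = new StringBuilder();
        try (TracedResource resource = new TracedResource(log)) {
            resource.use();
        }
        // close() has already been called when we reach this line.
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```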

For fun, I conceived another test for an imaginary FileCopyCat tool class (by the way, unnecessary in JDK 7, where NIO.2 provides Files.copy). This class copies a file's content to a destination file.
The main test looked like:

@Test
public void streamCopy_WithExistingFile_ShouldDuplicateFile() throws IOException {
    final File source = folder.newFile("sources.properties");
    final File copy = new File(folder.getRoot().getAbsolutePath() + "\\copy.properties");
    assertThat(copy.exists(), is(not(true)));
    new FileCopyCat().streamCopy(source, copy);
    assertThat(copy.exists(), is(true));
}

The class content I wrote became:

public final class FileCopyCat {
    public FileCopyCat() {
        super();
    }

    void streamCopy(final File source, final File target) throws IOException {
        try (final InputStream fis = new FileInputStream(source);
             final OutputStream fos = new FileOutputStream(target)) {

            byte[] buf = new byte[8192];

            int i;
            while ((i = fis.read(buf)) != -1) {
                fos.write(buf, 0, i);
            }
        }
    }
}


Still no cluttered code; the compiler handles that for you. It seems that this improvement will also be supported in JDBC 4.1.

That's a beginning. I have been working on NIO.2 and will soon propose some funny stuff, I hope. By the way, a tremendous book is being prepared at Manning: The Well-Grounded Java Developer, targeting JDK 7 and JVM languages. I downloaded the MEAP; it sounds promising :)

Be seeing you !!




Tuesday, April 26, 2011

Combinatorial tests in JUnit

A friend of mine (hi Pigelvy, still talking to me?) recently asked about the ability to create your own parameter suppliers for combinatorial tests in JUnit 4-something.
Shame on me, I answered him to check on Google, which is not the correct behaviour for a sharing craftsman apprentice.
So I checked for myself, realised it was not so easy to achieve, and understood better why Pigelvy stopped talking to me.
The fact is that I have the same needs, even in very basic situations, for example when I have to check boolean flag positioning in POJO classes.

When Pigelvy and I came to work together, we chose JUnit theories, because working with @Theory exposes code with better readability than parameterized tests (as a reminder about theories in JUnit, check http://blogs.sun.com/jacobc/entry/junit_theories).

To make it simple, combinatorial testing is used in tandem with the Theories.class runner, flagging each test method with the @Theory annotation. Then, when I had to apply test suites involving boolean flags, I used to declare

@DataPoints
public static boolean[] FLAGS = new boolean[]{true, false};

Really, this sucks when abused. It is ugly because it clutters the code, and repeating this kind of declaration in all your test classes is clearly a violation of the DRY principle.
Moreover, in a test case with multiple @Theory methods sharing the same signature, all the declared value arrays are implicitly shared by all methods when the runner combines the values into argument tuples.
Some tests may not need the same experimental set of values, and this behaviour can become a real pain.
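To make that combination behaviour concrete, here is a plain-Java sketch of the idea (this is not JUnit's actual implementation, just the cross product the runner conceptually builds for a two-argument theory):

```java
import java.util.ArrayList;
import java.util.List;

public class CombinationSketch {
    // Conceptually, the Theories runner calls a two-argument theory once
    // per element of the cross product of the shared datapoint arrays.
    static List<boolean[]> combine(final boolean[] flags) {
        final List<boolean[]> invocations = new ArrayList<boolean[]>();
        for (final boolean first : flags) {
            for (final boolean second : flags) {
                invocations.add(new boolean[]{first, second});
            }
        }
        return invocations;
    }

    public static void main(String[] args) {
        // {true,false} x {true,false} => 4 theory invocations.
        System.out.println(combine(new boolean[]{true, false}).size());
    }
}
```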

JUnit, as far as I can understand, offers an easy way to "supply" sets of parameters to combine. Suppose I need to play with a range of integers (yes, that's a classic).
The test method would be self-explanatory if it looked like:

@Theory
public void number_ShouldAlwaysBePositive(final @IntegerInRange(from = 1, to = 10) int number) {
    assertThat(number, is(greaterThan(0)));
}

Obviously, JUnit is going to execute my method with all the numbers in the interval from 1 to 10. All I need is a special annotation like the following:

@Retention(RUNTIME)
@ParametersSuppliedBy(IntegerRangeSetSupplier.class)
public @interface IntegerInRange {

    int from();

    int to();
}

... and a supplier class. The supplier class is where the stuff (magic? :)) of supplying parameters really happens.

What I think I understood is that you expose the nice DSL in the annotation definition and deal with the plumbing in the parameter supplier.
A parameter supplier extends the ParameterSupplier class. This constrains you by contract to implement the

public List<PotentialAssignment> getValueSources(final ParameterSignature parameterSignature)

method.

By trial and error (as I did not find documentation), I analysed the ParameterSignature parameter, used its utility methods, and built up a list of "potential assignments".

@Override
public List<PotentialAssignment> getValueSources(final ParameterSignature parameterSignature) {
    final IntegerInRange rangeSet = parameterSignature.getAnnotation(IntegerInRange.class);
    final int lowerBound = rangeSet.from();
    final int upperBound = rangeSet.to();

    final List<PotentialAssignment> list = new ArrayList<PotentialAssignment>(upperBound - lowerBound + 1);
    for (int i = lowerBound; i <= upperBound; i++) {
        list.add(PotentialAssignment.forValue(String.valueOf(i), i));
    }
    return list;
}

Blame me, I did not check for a possible null reference in the rangeSet instance, but I hope the code is clear enough.
Using the getAnnotation utility method, I got access to the annotation instance;
I then extracted the boundaries and, using them, created a list of potential assignments for JUnit to deal with.

One should note that I preferred using the PotentialAssignment.forValue factory method to create the PotentialAssignment instances (like Joshua Kerievsky, I prefer creation knowledge confined to some factory).

As I like clarity too, I decided I preferred a one-to-one relationship between an annotation and a supplier.
Back to my simpler, egotistical problem from the beginning (a set of boolean parameters), I immediately created an annotation:

@Retention(RUNTIME)
@ParametersSuppliedBy(BooleanParametersSupplier.class)
public @interface BooleanParameter {}

with the matching getValueSources implementation in the BooleanParametersSupplier class:

@Override
public List<PotentialAssignment> getValueSources(final ParameterSignature parameterSignature) {
    final List<PotentialAssignment> list = new LinkedList<PotentialAssignment>();
    list.add(PotentialAssignment.forValue("true", TRUE));
    list.add(PotentialAssignment.forValue("false", FALSE));
    return list;
}

An example of use being:

@Theory
public void check_WithFlag_ShouldBeBound(final @BooleanParameter boolean check) {...}

This set of combinatorial classes seems to offer far more possibilities that I have not explored yet. For example, I did not succeed in using the description field of the PotentialAssignment class, even by provoking a failure. I have a hunch there are numerous other possibilities and will gladly welcome feedback about different uses.












Monday, April 25, 2011

Working In Scala with both IntelliJ and Maven

As an IntelliJ advocate and Maven lover (yes, it's possible, just compare it to Ant), I faced the problem of practicing small Scala katas in a sober environment deprived of Maven. That was until I discovered a nice Maven plugin for Scala, located here:

http://scala-tools.org/mvnsites/maven-scala-plugin/

So it took me a few seconds to set up a new file template for a fresh POM file targeting work with Scala. Following the how-to on the above website, one just has to include the following repositories:

<repositories>
    <repository>
        <id>scala-tools.org</id>
        <name>Scala-Tools Maven2 Repository</name>
        <url>http://scala-tools.org/repo-releases</url>
    </repository>
</repositories>

<pluginRepositories>
    <pluginRepository>
        <id>scala-tools.org</id>
        <name>Scala-Tools Maven2 Repository</name>
        <url>http://scala-tools.org/repo-releases</url>
    </pluginRepository>
</pluginRepositories>

Then update your dependency management as follows:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>2.8.1</version>
        </dependency>
        ...
    </dependencies>
</dependencyManagement>


Then naturally follows the dependency declaration itself, where you state your dependency on the Scala library:

<dependencies>
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
    </dependency>
    ...
</dependencies>


You can then tune your build to use the scala and resources directories as your source/test directories:

<build>
    <sourceDirectory>src/main/scala</sourceDirectory>
    <testSourceDirectory>src/test/scala</testSourceDirectory>
    <plugins>
        <plugin>
            <groupId>org.scala-tools</groupId>
            <artifactId>maven-scala-plugin</artifactId>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                        <goal>testCompile</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <scalaVersion>2.8.1</scalaVersion>
            </configuration>
        </plugin>
    </plugins>
</build>


Integration with IntelliJ works nicely once you have activated the Scala plugin: just import your freshly created Maven project. IntelliJ automatically flags the scala and resources directories as source/test directories, and proposes the creation of both Scala and Java classes when you hit the magic combination Alt + Insert.

At last

This is my first technical blog. After seeing Uncle Bob Martin's and Tyler Jennings's presentations about software craftsmanship, I committed myself to opening a blog in order to communicate about these first steps into the world of software craftsmanship.
I hope I will find the time to nurture these pages with my own discoveries and code explorations, and of course I strongly hope I will receive advice from my peer craftsmen.

And as good news never comes alone, I signed the Manifesto for Software Craftsmanship a few days ago and got certified today as a Scrum Master after attending Jeff Sutherland's training.

A nice day.

See you :)