Sunday, April 26, 2020

SudokuFX: Migration to Java11 LTS

Better late than never: now I want to migrate SudokuFX to Java11.

Up to version 2020.1.1, SudokuFX needs a Java 8 runtime with JavaFX integrated. Choosing the correct Java version is the first hurdle for the casual internet visitor and very easy to get wrong. Cryptic errors will drive away newcomers and will cost even pros some time until they realize that the wrong platform is being used.

Java != Java

Choosing the wrong Java version is a common problem for beginners, and I must admit that it is becoming more and more complicated.

Every six months a new Java version is released with new features, and several organisations provide builds. This is a good thing: the industry has to evolve the platform, and it is good that there are different sources for Java binaries. Many years of development went into a build process which is now simple enough for the average Bob D. Eveloper to build OpenJDK themselves, which is - considering the details - a huge accomplishment.

For the rest of us, the question is which version of Java to choose. Oracle is still the place to go if you have enough money for their licenses and subscriptions. Alternatives are available from AdoptOpenJDK, Azul and others. To ease migration costs, LTS versions are maintained for a longer period of time - Java 11 is the version the industry is migrating to at this very moment.

And even me with my SudokuFX project.

Still, one has to keep in mind that everybody is advertising their latest release. As a consequence, the newest runtimes and JDKs are often downloaded, and projects written for older incarnations of those technologies may just not work.

If your project has dependencies on libraries which are themselves only written for a specific version of the JDK, the game becomes even more complex. And as a Scala user you have even more trouble, since libraries built for one Scala version don't really work together with a different Scala version ...

I didn't even mention the modularization effort of the JDK itself, which - only very slowly but in a foundational manner - reinvents the whole ecosystem yet again. 

Don't get me wrong, everything happens because of a perfectly sound reason, but it is not trivial to get everything working, considering all those moving targets. There are also not many engineers around who can really cope with those topics to be honest.

Well. Enough motivation to do the move to Java11, without any JavaFX baked into the Java runtime.

SDK! Man!

To easily switch between Java versions (and JVM based applications like sbt, lein, scala and others) there is a tool called sdkman, which is very handy and lets you download and change the Java version in use with a simple command. It's like a package manager, just for specific (Java) applications. Highly recommended!

Here is an example of how it looks if you want to switch from the current Java version to the AdoptOpenJDK build of Java 11:

homemac:sudokufx lad$ sdk list java
================================================================================
Available Java Versions
================================================================================
 Vendor        | Use | Version      | Dist    | Status     | Identifier
--------------------------------------------------------------------------------
 AdoptOpenJDK  |     | 14.0.1.j9    | adpt    |            | 14.0.1.j9-adpt      
               |     | 14.0.1.hs    | adpt    |            | 14.0.1.hs-adpt      
               |     | 14.0.0.j9    | adpt    | local only | 14.0.0.j9-adpt      
               |     | 13.0.2.j9    | adpt    |            | 13.0.2.j9-adpt      
               |     | 13.0.2.hs    | adpt    |            | 13.0.2.hs-adpt      
               |     | 13.0.1.hs    | adpt    | local only | 13.0.1.hs-adpt      
               |     | 12.0.2.j9    | adpt    |            | 12.0.2.j9-adpt      
               |     | 12.0.2.hs    | adpt    |            | 12.0.2.hs-adpt      
               |     | 11.0.7.j9    | adpt    |            | 11.0.7.j9-adpt      
               |     | 11.0.7.hs    | adpt    | installed  | 11.0.7.hs-adpt      
               |     | 8.0.252.j9   | adpt    |            | 8.0.252.j9-adpt     
               |     | 8.0.252.hs   | adpt    |            | 8.0.252.hs-adpt     
 Amazon        |     | 11.0.7       | amzn    |            | 11.0.7-amzn         
               |     | 8.0.252      | amzn    |            | 8.0.252-amzn        
               |     | 8.0.202      | amzn    |            | 8.0.202-amzn        
 Azul Zulu     |     | 14.0.1       | zulu    |            | 14.0.1-zulu         
               |     | 13.0.3       | zulu    |            | 13.0.3-zulu         
               |     | 13.0.3.fx    | zulu    |            | 13.0.3.fx-zulu      
               |     | 12.0.2       | zulu    |            | 12.0.2-zulu         
               |     | 11.0.7       | zulu    |            | 11.0.7-zulu         
               |     | 11.0.7.fx    | zulu    |            | 11.0.7.fx-zulu      
               |     | 11.0.5       | zulu    | local only | 11.0.5-zulu         
               |     | 10.0.2       | zulu    |            | 10.0.2-zulu         
               |     | 9.0.7        | zulu    |            | 9.0.7-zulu          
               |     | 8.0.252      | zulu    |            | 8.0.252-zulu        
               |     | 8.0.252.fx   | zulu    |            | 8.0.252.fx-zulu     
               |     | 8.0.232.fx   | zulu    |            | 8.0.232.fx-zulu     
               |     | 8.0.202      | zulu    |            | 8.0.202-zulu        
               |     | 7.0.262      | zulu    |            | 7.0.262-zulu        
               |     | 7.0.181      | zulu    |            | 7.0.181-zulu        
 BellSoft      |     | 14.0.1.fx    | librca  |            | 14.0.1.fx-librca    
               |     | 14.0.1       | librca  |            | 14.0.1-librca       
               |     | 13.0.2.fx    | librca  |            | 13.0.2.fx-librca    
               |     | 13.0.2       | librca  |            | 13.0.2-librca       
               |     | 12.0.2       | librca  |            | 12.0.2-librca       
               |     | 11.0.7.fx    | librca  |            | 11.0.7.fx-librca    
               |     | 11.0.7       | librca  |            | 11.0.7-librca       
               |     | 8.0.252.fx   | librca  |            | 8.0.252.fx-librca   
               |     | 8.0.252      | librca  |            | 8.0.252-librca      
 GraalVM       |     | 20.0.0.r11   | grl     | installed  | 20.0.0.r11-grl      
               |     | 20.0.0.r8    | grl     |            | 20.0.0.r8-grl       
               |     | 19.3.1.r11   | grl     |            | 19.3.1.r11-grl      
               |     | 19.3.1.r8    | grl     |            | 19.3.1.r8-grl       
               |     | 19.3.0.r11   | grl     |            | 19.3.0.r11-grl      
               |     | 19.3.0.r8    | grl     |            | 19.3.0.r8-grl       
               |     | 19.3.0.2.r11 | grl     |            | 19.3.0.2.r11-grl    
               |     | 19.3.0.2.r8  | grl     |            | 19.3.0.2.r8-grl     
               |     | 19.2.1       | grl     | installed  | 19.2.1-grl          
               |     | 19.1.1       | grl     |            | 19.1.1-grl          
               |     | 19.0.2       | grl     |            | 19.0.2-grl          
               |     | 1.0.0        | grl     |            | 1.0.0-rc-16-grl     
 Java.net      |     | 15.ea.20     | open    |            | 15.ea.20-open       
               |     | 14.0.1       | open    |            | 14.0.1-open         
               |     | 13.0.2       | open    |            | 13.0.2-open         
               |     | 12.0.2       | open    |            | 12.0.2-open         
               |     | 11.0.2       | open    |            | 11.0.2-open         
               |     | 10.0.2       | open    |            | 10.0.2-open         
               |     | 9.0.4        | open    |            | 9.0.4-open          
 SAP           |     | 14.0.1       | sapmchn |            | 14.0.1-sapmchn      
               |     | 13.0.2       | sapmchn |            | 13.0.2-sapmchn      
               |     | 12.0.2       | sapmchn |            | 12.0.2-sapmchn      
               |     | 11.0.7       | sapmchn |            | 11.0.7-sapmchn      
 Unclassified  | >>> | 8.0.201      | none    | local only | 8.0.201-oracle      
================================================================================
Use the Identifier for installation:

    $ sdk install java 11.0.3.hs-adpt
================================================================================
homemac:sudokufx lad$  sdk use java 11.0.7.hs-adpt

Now this shell is configured to use Java11. Neat, isn't it? (You also get a taste of how many different variations of Java distributions exist, and the list is surely not complete ... )
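
One small detail from my own usage (check 'sdk help' for the authoritative list of commands): 'sdk use' only affects the current shell, while 'sdk default' makes the choice stick for every new shell.

sdk use java 11.0.7.hs-adpt       # only for the current shell
sdk default java 11.0.7.hs-adpt   # default for all new shells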

I chose the AdoptOpenJDK build without JavaFX packaging, since I had already installed and experimented with it some time ago.

homemac:sudokufx lad$ java -version
openjdk version "11.0.7" 2020-04-14
OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.7+10)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.7+10, mixed mode)

In IntelliJ I have to change the JDK in use as well, of course.

Nice. But what needs to change for Java11 LTS support?

Here is an enforcer plugin rule to make sure we build with at least Java11 and a certain Maven version:


            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-enforcer-plugin</artifactId>
                <version>3.0.0-M3</version>
                <executions>
                    <execution>
                        <id>enforce-jdk</id>
                        <goals>
                            <goal>enforce</goal>
                        </goals>
                        <configuration>
                            <rules>
                                <requireMavenVersion>
                                    <version>3.6.2</version>
                                </requireMavenVersion>
                                <requireJavaVersion>
                                    <version>11</version>
                                </requireJavaVersion>
                            </rules>
                        </configuration>
                    </execution>
                </executions>
            </plugin>



This already rules out some problems and support tickets, which is nice.

As mentioned above, I most certainly have to change some dependencies in my poms (my hobby).

A valuable resource for JavaFX is of course openjfx.io. There you will find simple tutorials to get you up to speed with basic JavaFX projects, which you can then use as a starting point for your apps. They've also developed a Maven plugin which does almost everything you could ever want concerning JavaFX; I hope I find time to look at it in more detail soon.
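
Just to give an idea of what that plugin looks like, here is a minimal sketch of how it can be configured (the artifact is org.openjfx:javafx-maven-plugin; the version and the main class below are placeholders, see openjfx.io for the current values):

            <plugin>
                <groupId>org.openjfx</groupId>
                <artifactId>javafx-maven-plugin</artifactId>
                <version>0.0.4</version>
                <configuration>
                    <!-- fully qualified main class of your application; placeholder value -->
                    <mainClass>com.example.MyJavaFXApp</mainClass>
                </configuration>
            </plugin>

With something like that in place, 'mvn javafx:run' starts the application without any manual module path fiddling.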

The company behind this is Gluon, the most active contributor to JavaFX. They work very hard to provide a Java based platform for user interfaces on Desktop, embedded and mobile, which is very valuable for the Java community.

Thanks to them - and others - JavaFX can be used as a 'normal' Maven dependency in your project. This is awesome, both from a library developer's point of view and for a user who wants to use it in their next project.

As such, JavaFX got ripped out of the core JRE; like other parts of the JDK it got modularized in the wake of Project Jigsaw. If you want to know more about this, go buy this book, thank me later.
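
Just to illustrate what that means in practice: with a standalone JavaFX SDK lying around on disk, a non-modular JavaFX application can be started by putting those modules on the module path (a generic sketch with placeholder paths, not the way SudokuFX is launched):

java --module-path /path/to/javafx-sdk-11.0.2/lib \
     --add-modules javafx.controls,javafx.fxml \
     -jar myapp.jar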

We are getting closer to what I actually had to change in SudokuFX - namely introducing new dependencies and throwing out old unsupported ones.



            <dependency>
                <groupId>org.openjfx</groupId>
                <artifactId>javafx-base</artifactId>
                <version>11.0.2</version>
            </dependency>
            <dependency>
                <groupId>org.openjfx</groupId>
                <artifactId>javafx-swing</artifactId>
                <version>11.0.2</version>
            </dependency>
            <dependency>
                <groupId>org.openjfx</groupId>
                <artifactId>javafx-fxml</artifactId>
                <version>11.0.2</version>
            </dependency>
            <dependency>
                <groupId>org.openjfx</groupId>
                <artifactId>javafx-controls</artifactId>
                <version>11.0.2</version>
            </dependency>


In my case, I struggled a little bit with the javafx-swing dependency, since it was not obvious (at the time) that it had to be included as well. As you can see, I use four JavaFX Maven dependencies.

Along the way I had to get rid of jfxtras and jfxtras-labs, which don't provide builds for Java11 - at least I found none. I also ran into problems with the Scala compiler plugins and IntelliJ (I introduced them in my last post), but I need them only occasionally, so I commented them out again.

So to sum it up, the migration was quite painless. You can have a look at the result on github - version 2020.2 of SudokuFX ...

Thanks again for reading

Thursday, April 23, 2020

ScalaFix SudokuFX

I released a new version of SudokuFX, this time a maintenance release. I fixed some warnings I got from compiling.

To make this a little bit more interesting, I used scalafix and autofix. What are those things?

Scalafix is a compiler plugin on top of Scalameta which helps with rewriting source code. It also provides a framework for creating your own code rewriting rules. Autofix is an example of that - it encodes migration rules specifically for ScalaTest.

After a little bit of Maven pom fiddling, autofix is configured for my project. All I needed to do was add a compiler plugin and an additional plugin to the plugin section of my root pom.xml:


        <pluginManagement>
            <plugins>
                <plugin>
                    <groupId>net.alchim31.maven</groupId>
                    <artifactId>scala-maven-plugin</artifactId>
                    <version>4.3.1</version>
                    <executions>
                        <execution>
                            <goals>
                                <goal>compile</goal>
                                <goal>testCompile</goal>
                            </goals>
                        </execution>
                    </executions>
                    <configuration>
                        <scalaVersion>${scala.major.version}</scalaVersion>
                        <args>
                        ...
                        </args>
                        <compilerPlugins>
                            <compilerPlugin>
                                <groupId>org.scalameta</groupId>
                                <artifactId>semanticdb-scalac_${scala.full.version}</artifactId>
                                <version>4.3.9</version>
                            </compilerPlugin>
                        </compilerPlugins>
                    </configuration>
                </plugin>
                <plugin>
                    <groupId>io.github.evis</groupId>
                    <artifactId>scalafix-maven-plugin</artifactId>
                    <version>0.1.2_0.9.5</version>
                    <dependencies>
                        <dependency>
                            <groupId>org.scalatest</groupId>
                            <artifactId>autofix_2.12</artifactId>
                            <version>3.1.0.0</version>
                        </dependency>
                    </dependencies>
                </plugin>
            </plugins>
        </pluginManagement>

That's it ... more or less.

What is still missing is a configuration for scalafix, which goes into a file called .scalafix.conf (note the '.' at the beginning of the file name). This file should be placed next to the main pom file.

Here is the file how I configured it:

rules = [
  RemoveUnused,RewriteDeprecatedNames,ProcedureSyntax,NoValInForComprehension,LeakingImplicitClassVal,NoAutoTupling
]

The RewriteDeprecatedNames rule is the autofix rule; it comes with the additional dependency declared above for the scalafix-maven-plugin.

After configuring this, you have to call Maven like this:

mvn clean compile test-compile scalafix:scalafix

After that, have a look at your git status: you'll see that many files have changed, and the plugin did what was promised - it rewrote source code and applied trivial but useful transformations. For example, it can remove unused imports out of the box, rewrite code written in procedure syntax, and many other things.
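
To give a feeling for what such a rewrite looks like, here is roughly what the ProcedureSyntax rule does (a constructed example, not taken from the SudokuFX sources):

// before: deprecated procedure syntax
def logSolution(solution: String) {
  println(solution)
}

// after running scalafix with the ProcedureSyntax rule
def logSolution(solution: String): Unit = {
  println(solution)
}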

Scalafix can be extended with your own plugins, which means that in theory you could implement rules specific to your needs. This enables automatic rewrites which are not only syntactic (which could also be achieved by clever scripting) but semantic: rewrite rules know about the structure of the code. This helps automate tasks which would otherwise bind valuable developer time ...
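
As a rough sketch of how such a custom rule looks (based on my reading of the scalafix v1 rule API - the rule itself is just the canonical 'replace null with None' toy example, not something used in SudokuFX):

import scalafix.v1._
import scala.meta._

// a purely syntactic rule: replace every null literal with None
class NoNulls extends SyntacticRule("NoNulls") {
  override def fix(implicit doc: SyntacticDocument): Patch =
    doc.tree.collect {
      case nullLiteral @ Lit.Null() => Patch.replaceTree(nullLiteral, "None")
    }.asPatch
}

Such a rule then has to be published and registered so the scalafix plugin can pick it up - see the scalafix developer documentation for the details.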

I removed other warnings as well, albeit manually, from the source. By doing that, I got new 'unused imports', which swiftly went away by calling the aforementioned mvn command again, while I gazed with amazement at my editor, watching the computer do the work for me.

Here is a list of available scalafix rules - or at least the closest thing to such a list I could find.


Hope I raised your interest in this amazing tooling, check out how to configure ScalaFix for maven (and try to solve some Sudokus and let me know if it worked ;-))

Sunday, April 19, 2020

SudokuFx revisited

This blog post covers my lazy Sunday afternoon where I tried to get my ancient Sudoku project running again.

screen capture of application
(If you wonder why those numbers dance around - depending on what the application recognises different solutions are calculated and presented ... yes it's not perfect ;-) )

Years ago I played around with the idea of solving Sudokus with an algorithm and used JavaFX and OpenCV to do it. Quite an unusual combination, then as now - but it wasn't a bad experience at all. In my view the Java OpenCV API is quite usable, and as such I wanted to prove, at least to myself, that I could get it running again (albeit it's far from perfect ;-))

I remember spending hours playing around with different filters and effects, always proud of new things I discovered.

Today, I decided to breathe life into this project again; let's see which problems I'll encounter.

First, getting the source code from github still works. My last commit was already 4 years ago, with zero contributors in all those years - that's what a normal OSS project looks like ;-).

I'll try to revive it, but just the desktop version, leaving the android version on my todo list.

The first obstacle I hit is compiling it: OpenCV is missing on my system. OpenCV doesn't have a proper Maven integration; there is an ancient issue about this, and nobody has had the mood or skills to solve it yet.

Update: I learned that there is some kind of Maven integration in the meantime! Great!! Anyway, I built OpenCV from scratch for this article; maybe I'll have a go at the Maven build another time.

Step 1 - download and compile OpenCV 4.3.0


First, I want to mention that there are several package managers which provide prebuilt binaries for OpenCV - homebrew, for example. It may well be that this is the option you prefer if you stumbled on this page by googling (seems rather unlikely, but well).

Ok, in the meantime they've reached version 4.3.0; I download the zip from here, but I'm too dumb to find the build instructions ... there are no obvious links on the main page, nor in the documentation. (They surely exist, I just didn't see them, apparently.)

Finally, I found them.


There are numerous steps to follow: you'll need CMake, quite some time for compilation, and some educated guesses about what to choose - we'll go with the defaults for the first try. Unzip it somewhere; I choose

~/custom-builds/opencv-4.3.0/src/ 

since we have the famous 'out of source' build CMake proposes as best practice. This means that there is a dedicated directory for build artifacts.

As such I choose the following setup:

~/custom-builds/opencv-4.3.0/src/<actual contents of zip>
~/custom-builds/opencv-4.3.0/target/<here build artifacts will be placed>

I jump to this target directory and execute the following command there:

cmake ../src/

... then CMake does its job and checks for all the available goodies found on my system. I pray that the ones I need are among them, otherwise I'll have to configure it and fight the dragons of OpenCV ... again.

... But it seems that I'm lucky: at the end of the cmake output it says something about Java bindings, which is what I'm interested in, so it should compile them like I want.

What I personally don't want is that it installs itself into some system directory, which would definitely pollute my system and break some stuff already fine-tuned there. As such, I have to issue CMake again with a proper parameter:

cmake ../src -DCMAKE_INSTALL_PREFIX=../out

(Pro tip: delete your target directory completely beforehand, thank me later.) This would then put OpenCV into the following directory:

~/custom-builds/opencv-4.3.0/out/

That way I can delete it again without leaving any trace.

I'm quite sure I don't need 98% of what is going to be compiled, and some magic incantation of CMake or its configuration would prevent compiling and linking it, but I don't want to go down that road, not today.

After configuring CMake you have to invoke 'make' as well - and do it with full throttle, meaning with several concurrent threads to speed things up (yes, you can take your coffee break, since it will take a while).

That said, still being in the target directory, now execute

make -j8

which will use 8 threads to compile OpenCV (takes 5min+ on my machine)

In the meantime, I want to mention that it is a huge accomplishment that building from scratch 'just works' for many environments, be it macOS, Windows or Linux ... this is a lot of work and should be praised!

Every time I compile something I'm amazed if it works the first time without much hassle ... I've already spent countless hours resolving build problems; it is a major time drain.

Anyhow, make succeeded, and all build outputs are now lying somewhere in the target dir. But with

make install

everything important for the runtime will be put to the ../out directory.

Specifically, one can find the dylibs in the out/lib and out/share/java/opencv4 directories.

Step 2 - Make OpenCV available to Maven


I tried to make this easy: all you have to do is edit the main pom file, tweak some properties, and point them to your OpenCV installation directories. Here is an example configuration:


    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <scala.major.version>2.12</scala.major.version>
        <scala.full.version>${scala.major.version}.10</scala.full.version>
        <opencv.major.version>4.3</opencv.major.version>
        <opencv.full.version>${opencv.major.version}.0</opencv.full.version>
        <opencv.install.path>/Users/lad/custom-builds/opencv-4.3.0/out/</opencv.install.path>
        <opencv.java.jar>${opencv.install.path}share/java/opencv4/opencv-430.jar</opencv.java.jar>
    </properties>

After you've changed those settings to your liking, enter the following command:

mvn initialize -Pinstall-opencv -N 

Now you should be ready to go, build the project with

mvn package

It shouldn't take too long, and everything should be set up correctly. OpenCV has many dylibs / dlls which are referenced in this project, but via Maven property filtering and some pseudo magic everything should just work.
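
For the curious: stripped of all the property filtering, that 'magic' boils down to loading OpenCV's native library before any OpenCV class is touched. A minimal sketch (assuming the JVM is started with java.library.path pointing to the lib directory of the build above - this is not the actual SudokuFX bootstrap code):

import org.opencv.core.{Core, CvType, Mat}

object OpenCvSmokeTest {
  def main(args: Array[String]): Unit = {
    // assumes e.g. -Djava.library.path=/Users/lad/custom-builds/opencv-4.3.0/out/lib
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME)
    val m = Mat.eye(3, 3, CvType.CV_8UC1)
    println(m.dump()) // prints a 3x3 identity matrix if the native library was found
  }
}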

You can fire it up via (from the main directory of the project):

java -jar sudoku-javafx/target/sudoku-javafx-2020.1-SNAPSHOT-jar-with-dependencies.jar

A JavaFX GUI should appear; on the first invocation you may have to tell your OS that it is ok to give SudokuFX access to the webcam.

I've blogged about this project in several other blogposts, maybe you want to read them as well.

This post is referencing code state from the 2020.1 resurrection release.

To sum up, compiling OpenCV turned out to be much easier now than it was years ago. Only my code was a little bit-rotten - still is - but at least it should be easier to get it running now than it was before this little article.

Thanks for reading!

Thursday, April 16, 2020

Artifact Dashboard: Make it pretty

In this blog post I'll cover how to use webjars in conjunction with sbt. The plan is to download bootstrap and maybe other web libraries via Maven artifacts.

This blog post is a successor to another blog post I wrote only recently; I had the feeling it should have better looks, so here we go.

Motivation

I'm well aware that there are package managers for javascript - at least I've heard of them, not overly good things, but I've heard of them. Chances are high that by the time you finish reading this post, a new one will have been invented.

In my bubble, in the past - well, for many years - I had to fiddle around with the JVM's de facto standard build systems like Maven or Gradle, and if I neglect countless hours of frustration with them, I have to admit they served me well. As such, I'll stay with this ecosystem, although I use it in creative ways. Well, sort of. I'll use sbt to download dependencies and build the example project - I hope I got your head spinning in the meantime.

Reality check: one thing I would advise is to avoid having too many package managers in your corporate setting; it is a maintenance burden which gets multiplied by each new system. Naturally, every build system wants to outmatch the others, either in complexity or in the wealth of features it provides. (One piece of advice: sometimes it pays off to bet on the most conservative option, but I doubt you'll find such a thing in javascript land. 😉)

In some bright moments however, different worlds work together, and this blog post wants to shed a light on how to combine web techniques with tools used for corporate settings.

To put it bluntly, I'll try to download a jar file containing prepackaged static content - a bunch of javascript and other resources - and put it in the right place, on demand.

For example, I want to use bootstrap in my project - so just go to their website, download it, extract the things you are interested in, move on?

Of course this is a valid - and most probably the most used - approach among developer teams, but it has some drawbacks.

First, it is unclear where such frameworks come from. It is also hard to tell 'afterwards' which file belongs to which framework - you are left to best guesses based on directory or zip file names. Another concern might be the licensing situation: how do you tell whether a bunch of code inflicts legal issues on your code? And what if you want to update a certain library? Often you can be glad if you even know which version you currently use.

Of course you can document it, but odds are high that this documentation is not really in sync with your code; at the very least you have to check manually that this is the case. Another anti-pattern which comes to mind is that third party code will be checked into your code repository, no matter how hard you fight it, by some innocent but mindless colleague. The best thing about it is that you'll surely find the same library downloaded and checked in multiple times, maybe also in different versions - open doors for subtle bugs, o my god, in PRODUCTION. Insane! Mind blowing! (Yes I play bass and yes I slap and yes I know davie504)

There has to be a reason why such a zoo of package managers exists - several of them for each programming language, along with their ecosystems, everybody fighting for the attention of the very confused, poor Mr. Bob Developer.

As such, for all of the good reasons cited above, there is an excuse to automate it yet again and write a blog post about it, mainly for you who reads this with intense boredom waiting for the anticlimax which is yet to come.

Implementation

I'll continue to work on the artifact-dashboard project, and enhance version 0.4 by adding necessary commands to the sbt build file to download bootstrap.

Before reading on, maybe you want to inform yourself about webjars.org, whose mission statement is 
WebJars are client-side web libraries (e.g. jQuery & Bootstrap) packaged into JAR (Java Archive) files.
This is exactly what we need - what I tried to convey above with my ranting about the problems you run into if you don't use any sort of package manager. Webjars.org is a really cool project for all JVM engineers who don't want to get their hands too dirty with front end development, who'd rather stay cozy in their own world and not touch those strange javascript package managers at all.

Besides, source code management is trivial, right?

Don't worry, downloading bootstrap via sbt takes far less text than the rant above, but it is maybe more interesting.

To download and unpack it, not much is necessary, witness it here:


enablePlugins(ScalaJSPlugin)

name := "artifact-dashboard"
scalaVersion := "2.12.10"

// This is an application with a main method
scalaJSUseMainModuleInitializer := true

libraryDependencies ++=
  Seq("org.scala-js" %%% "scalajs-dom" % "1.0.0"
    , "org.webjars" % "bootstrap" % "4.4.1")

// defines sbt name filters for unpacking
val bootstrapMinJs: NameFilter = "**/bootstrap.min.js"
val bootstrapMinCss: NameFilter = "**/bootstrap.min.css"
val bootstrapFilters: NameFilter = bootstrapMinCss | bootstrapMinJs

// magic to invoke unpacking stuff in the compile phase
resourceGenerators in Compile += Def.task {
  val jar = (update in Compile).value
    .select(configurationFilter("compile"))
    .filter(_.name.contains("bootstrap"))
    .head
  val to = (target in Compile).value
  unpackjar(jar, to, bootstrapFilters)
  Seq.empty[File]
}.taskValue

// a helper function which unzips files defined in given namefilter
// to a given directory, along with some reporting
def unpackjar(jar: File, to: File, filter: NameFilter): File = {
  val files: Set[File] = IO.unzip(jar, to, filter)
  // print it out so we can see some progress on the sbt console
  println(s"Processing $jar and unzipping to $to")
  files foreach println
  jar
}

This is part of v0.5 of the artifact-dashboard project, and it is the complete sbt file necessary to download a version of bootstrap - at the time of writing the most current one - unpack it to the right place and make it available to the javascript code.

As usual, you only have to provide the correct paths to the javascript and css files in your html file, and you can profit from all the goodies bootstrap has to offer.
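
For illustration only - the exact paths depend on where the unpack task and fastOptJS put their output in your checkout; the ones below merely follow the standard webjar layout and Scala.js naming conventions and are not copied from v0.5 - the references in the html file look something like this:

<link rel="stylesheet" href="META-INF/resources/webjars/bootstrap/4.4.1/css/bootstrap.min.css">
<script type="text/javascript" src="META-INF/resources/webjars/bootstrap/4.4.1/js/bootstrap.min.js"></script>
<script type="text/javascript" src="scala-2.12/artifact-dashboard-fastopt.js"></script>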

As a side note, I also tweaked a little bit how Scala.js is used - have a look at the source code / html file.

I think it is a nice combination of technologies. I don't know if it is popular, but at least it works for me.

Using bootstrap, our download page looks far better; users will surely be more willing to click on a download link than ever before.

Here is a little screenshot of the current state of affairs with v0.5 of artifact dashboard in action for my test instance:

Screenshot of Artifact Dashboard v0.5


Thanks for reading.

Tuesday, April 14, 2020

Artifact Dashboard: Custom Nexus Summary Page

Motivation

In this post I want to discuss a small project called artifact-dashboard, which shows how to use Nexus Repository Manager, version 3, and its REST API.

(This article is a part of a mini series of blog posts, see the second part here.)

I use my preferred technologies (and by doing that I almost successfully circumvent using Javascript entirely), namely Scala and Scala.js, Maven and sbt, powered by severe googling.

Introduced with version 3, Nexus Repo Manager's REST API provides means to query the state of artifacts which are deployed on the server. (Of course, many other things are possible with this API - for example querying the licensing status or health status etc.)

My plan is to create a self-contained website with some 'dynamic' download links, using information retrieved from the Nexus REST API - namely the latest snapshots or 'interesting' artifacts like product packages, installers etc.

In a typical small company setting, this relieves engineering of daily questions like 'where is the latest version of our software?', since everything up to the point of downloading artifacts is automated. (I'm talking about SNAPSHOT / LATEST versions, which potentially change all the time and don't have a stable url to download them from Nexus.)

(Thinking of it, maybe there is a custom workflow directly baked into Nexus Repo Manager; I didn't check recent docs, maybe in the process of writing this I'll learn more about it. UPDATE: I think there is an alternate (or maybe first-class) way to do what I describe by using well configured 'Content Selectors' - a concept I didn't know before starting this post. UPDATE 2: Obviously I'm not the only one who would like such a thing, it was resolved already, but anyway I'll continue with my effort to learn more about it. UPDATE 3: My whole approach reduces itself mainly to a single rest call - which is nice.)

Anyway, it will be a way to learn both about Scala.js and about creating minimalistic UIs with it. My goal is to create a standalone html page which uses Javascript to automatically create correct download urls, just for the fun of it. So let's dive in.

Prerequisites

If you have not already done so, I would recommend installing IntelliJ Community Edition along with its excellent Scala plugin, and checking out the github project which accompanies this post.

LPT: Check out the code and skip the whole blog post!

We need a rest service to query - the whole purpose of this post is to be able to talk to the Nexus REST API. It is available starting with version 3 of this software package (in fact I learned that Nexus provided an API already in the 2.x series, but here I'll cover v3), which can be downloaded from Sonatype's website for the OSS Nexus Repository.

You'll have to register an account, then you can download a version for your platform. You have to do a basic setup and create an administrative account; we'll see what we need along the way.

You'll also need a small Maven project to deploy to your test repository - I would recommend something small and simple without many dependencies or huge build times - or you can deploy your artifacts directly to Nexus via the web gui.

Step 1 - Basic Scala.js project setup

I start from scratch by creating more or less a Scala.js 'Hello world' minimal example.

For convenience, I made some releases in artifact-dashboard which cover different milestones during development and can be downloaded as zip files.

Release v0.1 contains just a raw setup with more or less current versions of Scala.js and friends at the time of writing this post.

Nothing fancy here, just a minimal sbt build definition. On the sbt console you have to issue a 'fastOptJS' command so that Scala.js compiles (later on, do a 'fullOptJS' to obtain the best optimized Javascript available ;-) ).
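
For reference, such a minimal build definition looks roughly like this (a sketch from memory - the exact v0.1 file is in the repository; the sbt-scalajs plugin additionally has to be added in project/plugins.sbt, and its version below is a placeholder):

// build.sbt - a minimal Scala.js setup
enablePlugins(ScalaJSPlugin)

name := "artifact-dashboard"
scalaVersion := "2.12.10"

// we have a main method which should be called on page load
scalaJSUseMainModuleInitializer := true

// DOM access from Scala.js
libraryDependencies += "org.scala-js" %%% "scalajs-dom" % "1.0.0"

// project/plugins.sbt:
// addSbtPlugin("org.scala-js" % "sbt-scalajs" % "1.0.0")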

The most interesting part are the few lines of Scala code shown below:


package net.ladstatt.adash

import org.scalajs.dom.document
import org.scalajs.dom.raw.Element

object ArtifactDashboard {

  def main(args: Array[String]): Unit = {
    document.body.appendChild(p("Paragraph filled via Scala.js"))
  }

  def p(text:String): Element = {
    val p = document.createElement("p")
    p.textContent = text
    p
  }

}

By importing 'org.scalajs.dom' you get access to the page DOM ... and there you go. We don't need much more than that (ignoring all the magic behind the scenes) to create more or less what we want on the html page. Attention: don't forget to add the library dependency in your build.sbt (scalajs-dom), otherwise it won't work.

Side note: it would be more coherent if we only used Scala + Maven (there is a plugin available for this) - sadly, it doesn't support Scala.js compilation. However, there exists another Maven plugin - which I don't use, for no particular reason - that should support Scala.js compilation. Furthermore, I learned that it is possible to execute the Scala.js cross compiler as a command line tool, which would also be a possible approach, but not today. We'll stick to sbt.

Step 2 - setup Nexus for a simple rest call

Ok, by now you should have a running Nexus 3 instance somewhere. If you don't have access to one already, make sure to download a free OSS version from Sonatype's website. To make things easy for me and this blog post, I'll assume from now on that a Nexus is running on localhost.

When you download a fresh one, make sure that you execute Maven once, configured to use this local installation. This is necessary to fill your Nexus instance with some data we can query later.

The easiest way to do that is to change your settings.xml file to point to your local Nexus; for me it worked to configure the 'mirrors' section like shown below:


<?xml version="1.0" encoding="UTF-8"?>
<settings
        xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd"
        xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    <mirrors>
        <mirror>
            <id>local-nexus</id>
            <mirrorOf>*</mirrorOf>
            <url>http://127.0.0.1:8081/repository/maven-public/</url>
        </mirror>
    </mirrors>

</settings>


Now we should have some data in our Nexus, and to my current understanding Nexus should now return some useful results from its API. Luckily, Nexus itself provides a nice testbed for API calls, baked directly into its user interface. The UI for the REST API can be found in the administration console (default address: http://127.0.0.1:8081/#admin/system/api).

Let's try the easiest API call - or at least the first one - '/v1/assets'.

You have to provide the repository to query; just enter 'maven-public' and click on 'execute'.

Do it! Do it now. I'll wait.

Step 3 - implement rest call with Scala.js

At this point you should gaze with amazement at the Nexus administrative console and see a response for the REST API call. The guys from Sonatype even included a directly executable command line invocation for curl, which looks like this in my case:

curl -X GET "http://127.0.0.1:8081/service/rest/v1/assets?repository=maven-public" -H "accept: application/json"

I can copy-paste this into a terminal, and it returns a json list of assets, which proves that at least something is going on - and thus we can continue with implementing a corresponding call in Scala.js.

... and then I rediscovered a thing called 'CORS' ;-) 

TLDR: calling a REST API from an html page doesn't work if the page is not served from the same origin as the API, which means it has to come from the same server. See this article for more information.

So curl works perfectly, but having a website call this REST API is not possible, which is a good thing security-wise. There are ways to work around this problem; the easiest one is to place the final html and javascript in the public folder of the Nexus installation itself, which may not be viable for a variety of reasons.

For educational purposes I'll pursue this approach nevertheless. Keep in mind that the whole point of this blog post is to learn how to work with Scala.js and communicate with webservices, not necessarily with Nexus - maybe create your own little rest service and play with it.

Version v0.2 of the project is tagged at a state where a minimal example for such a webservice call is given. For completeness, and also to show how easy it is, I'll include the source code here as well:


package net.ladstatt.adash

import org.scalajs.dom.document
import org.scalajs.dom.ext.Ajax
import org.scalajs.dom.raw.Element

import scala.concurrent.ExecutionContextExecutor
import scala.scalajs.js
import scala.scalajs.js.JSON
import scala.util.{Failure, Success}

@js.native
trait CheckSum extends js.Object {
  def md5: String

  def sha1: String
}

@js.native
trait Asset extends js.Object {
  def id: String

  def path: String

  def downloadUrl: String

  def repository: String

  def format: String

  def checksum: CheckSum
}

@js.native
trait AssetsResult[T] extends js.Object {
  def items: js.Array[T]

  def continuationToken: String
}


object ArtifactDashboard {

  def main(args: Array[String]): Unit = {
    queryNexus()
  }

  def queryNexus(): Unit = {
    // executioncontext needed for ajax call
    implicit val ec: ExecutionContextExecutor = scala.concurrent.ExecutionContext.global

    Ajax.get("http://127.0.0.1:8081/service/rest/v1/assets?repository=maven-public") onComplete {
      case Success(v) =>
        val assetsResult = JSON.parse(v.responseText).asInstanceOf[AssetsResult[Asset]]
        for (a <- assetsResult.items) {
          document.body.appendChild(p(a.id))
        }
        org.scalajs.dom.window.alert(assetsResult.continuationToken)
      case Failure(e) =>
        e.printStackTrace()
        org.scalajs.dom.window.alert(e.getMessage)
    }
  }

  def p(text: String): Element = {
    val p = document.createElement("p")
    p.textContent = text
    p
  }

}

I didn't yet mention two very cool things up there, right before your eyes in the source code. Firstly, it demonstrates how easy it is to parse a json response in Scala.js. Secondly, with a little more effort than just calling JSON.parse, you get full fledged typed facade traits back from your call, quite effortlessly, by using those magical @js.native annotations (and a little hardcore casting with asInstanceOf ;-))

If you look closely you'll see that the example code uses those facades innocently, as if this were no big deal - but there you have it: a rest call, a json response, a conversion to typed Scala objects.

Don't bother me with details! Nice.

Step 4 - Setup a test repository and deploy test data to it

For convenience, I've created a dedicated Nexus Maven2 hosted repository for my experiments. I used my aforementioned toy project, deployed it to this test repository and checked in the Nexus UI and API that the deployment was successful.

In order to make this happen you have to declare a distribution management section in your pom:

    <distributionManagement>
        <repository>
            <id>testrepo</id>
            <name>testrepo</name>
            <url>http://127.0.0.1:8081/repository/testrepo/</url>
        </repository>
        <snapshotRepository>
            <id>testrepo-snapshot</id>
            <name>testrepo-snapshot</name>
            <url>http://127.0.0.1:8081/repository/testrepo-snapshot/</url>
        </snapshotRepository>
    </distributionManagement>

As you can see above, I've created two repositories in my test instance: one where I can deploy releases, and one where I deploy snapshots. The latter one can now also be queried, and we are done configuring Nexus for our purposes.

Deploying to our testrepo-snapshot repository now only needs a 'mvn deploy' command, which should be issued several times so that more than one snapshot exists on the server to test with (since we want to have the url of the last SNAPSHOT artifact, remember?).

Step 5 - Give the REST call proper parameters

The Nexus REST API provides a search functionality, which is exactly what we need. It can be configured in a way that we need to call it only once per artifact, as shown below:

http://127.0.0.1:8081/service/rest/v1/search/assets?repository=testrepo-snapshot&maven.classifier=jar-with-dependencies&maven.baseVersion=1.0-SNAPSHOT&direction=asc&maven.extension=jar&maven.groupId=net.ladstatt&maven.artifactId=fx-animations&sort=version

Nexus provides a convenient way to experiment with its search API; the url above yields all snapshot versions deployed on Nexus with version 1.0-SNAPSHOT, groupId net.ladstatt and artifactId fx-animations. We just take the first element of the result list and can retrieve the correct url. See version v0.3 for a source code snapshot containing the described webservice call. Note also the usage of Scala's Future and Either constructs, which can be used in Scala.js as well.
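
To sketch what that boils down to in code (reusing the Asset / AssetsResult facades from the snippet above; the helper below is purely illustrative, and the actual v0.3 code is structured differently):

import org.scalajs.dom.ext.Ajax
import scala.concurrent.{ExecutionContextExecutor, Future}
import scala.scalajs.js.JSON

def latestSnapshotDownloadUrl(): Future[Option[String]] = {
  implicit val ec: ExecutionContextExecutor = scala.concurrent.ExecutionContext.global
  val url = "http://127.0.0.1:8081/service/rest/v1/search/assets?repository=testrepo-snapshot" +
    "&maven.groupId=net.ladstatt&maven.artifactId=fx-animations&maven.baseVersion=1.0-SNAPSHOT" +
    "&maven.extension=jar&maven.classifier=jar-with-dependencies&sort=version&direction=asc"
  Ajax.get(url).map { xhr =>
    val result = JSON.parse(xhr.responseText).asInstanceOf[AssetsResult[Asset]]
    // take the first element of the result list, as described above
    result.items.headOption.map(_.downloadUrl)
  }
}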

Step 6 - make it configurable

Lets recall what we have accomplished so far:
  • setup a Scala.js project from scratch
  • setup a Nexus Webserver
  • fought against CORS
  • call a webservice
  • parse json
  • map 'native' javascript objects to Scala case classes
  • build a website's dom 'dynamically'.
  • ...

What is left? The page should get a decent look, but we also need an easy way to configure different assets, maybe groups of assets ... To achieve this we could again use Nexus and json parsing, or directly include the meta information (which artifacts should be displayed) in our source code itself. For simplicity, I'll take the latter route first.

We still have to solve issues like being able to query more than one repository on a Nexus instance. We initially wanted this functionality to get a link to the most current SNAPSHOT version of a Maven artifact, but it is surely nice to include released artifacts as well.

To further elaborate, we want to include artifacts with a given groupId/artifactId/version scheme, with or without an extension or classifier - things that are not possible with version 0.3.
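
Such a configuration could look roughly like the following sketch (a hypothetical data model to illustrate the idea - the case classes in v0.4 are named and shaped differently):

// which artifacts should appear on the dashboard - purely illustrative
case class ArtifactConfig(repository: String,
                          groupId: String,
                          artifactId: String,
                          baseVersion: String,
                          extension: String,
                          classifier: Option[String])

val artifacts: Seq[ArtifactConfig] = Seq(
  ArtifactConfig("testrepo-snapshot", "net.ladstatt", "fx-animations",
    "1.0-SNAPSHOT", "jar", Some("jar-with-dependencies"))
)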

See v0.4 for a version which has more artifacts configured and a case class structure prepared for easy configuration.

I'll stop here - maybe this project is of use to somebody, or motivates you to give Scala.js and / or Nexus a try.

I used this website to highlight some source code snippets.

Thanks for reading.