Wednesday, 30 September 2015

Blog Has Moved > www.adrianmilne.com

This blog was something I originally set up in 2013 for writing technical articles - mainly as a means to record development notes for my own future reference.

These usually related to areas I wasn't working on full time in my contracting work or personal projects. Writing a blog article was a good focus for writing a small application and putting it on GitHub.

It became a sort of unofficial blog for my company, Corsoft Limited. However, it never really sat right with me in that role - it needed better branding and some different content if it was to do that properly.

So after a bit of thought - I've decided this should become my personal blog - largely focussing on technical articles, but with scope to expand into areas it otherwise couldn't if it remained a company blog.

Taking my usual approach to developing personal projects on my own time - not over-thinking things (analysis paralysis) - I've registered my personal domain, set up a WordPress site and migrated the content over.

All future development will happen on my new site (probably). Check me out here: www.adrianmilne.com



Saturday, 15 August 2015

WebSphere Woes - "A composition unit with name 'X' already exists"

I've been using WebSphere (8.5.x) for the first time recently. Not a fan, I have to say - certainly not when using it in a development environment.

This is probably due in no small part to having to run on an underpowered virtual desktop, which struggles when running WebSphere, an IDE and a database client simultaneously. It generally runs pretty slowly, but very occasionally WebSphere becomes unresponsive to the point where I have to kill it at the process level.

This has always occurred during a deployment where I've been updating a WAR - which leaves things in a state where the application I was trying to update has been removed from the Enterprise Applications console.

Any attempt to reinstall the WAR at this point gives the following error (frustratingly, only at the very end of the install process):

"com.ibm.websphere.management.exception.AdminException: A composition unit with name MY_WAR already exists. Select a different application name." (where MY_WAR is your application name)

The only way round this that I have found is to see if any of the following directories exist: 

  • <profile root>/config/cells/cellname/applications/MY_WAR 
  • <profile root>/config/cells/cellname/blas/MY_WAR 
  • <profile root>/config/cells/cellname/cus/MY_WAR

Delete these directories and restart the server. You can then reinstall MY_WAR in the usual way (console or script).
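
For example (a sketch - substitute your own profile root, cell name and application name, and stop the server first):

# remove the orphaned composition unit artifacts, then restart the server
rm -rf <profile root>/config/cells/cellname/applications/MY_WAR
rm -rf <profile root>/config/cells/cellname/blas/MY_WAR
rm -rf <profile root>/config/cells/cellname/cus/MY_WAR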

Wednesday, 14 January 2015

Building a HATEOAS Hypermedia RESTful Record Store Web Service with Spring

Introduction

  • This is a very simple example of developing a hypermedia-driven RESTful web service, using Spring HATEOAS
  • A companion project is available to download on GitHub (using Java, Maven and Spring Boot)

HATEOAS? What is it?

  • HATEOAS ("Hypermedia as the Engine of Application State") is an approach to building RESTful web services, where the client can dynamically discover the actions available to it at runtime from the server. All the client should require to get started is an initial URI, and set of standardised media types. Once it has loaded the initial URI, all future application state transitions will be driven by the client selecting from choices provided by the server. 
  • The choices of available actions may be driven by state (e.g. different actions available to the user based on access control, or on stateful data such as workflow or previous actions), or by available functionality (e.g. as new functionality becomes available, the client can hook into it with no design changes on its part)

Where did it come from?

  • The HATEOAS constraint is an essential part of the "uniform interface" feature of REST, as defined in Roy Fielding's doctoral dissertation (Roy was one of the principal authors of the HTTP specification, and co-founder of the Apache HTTP Server project)
  • In his view, if an API is not being driven by hypertext, then it cannot be RESTful

The Record Store Spring Example




  • The entry point to our API is:
    • http://localhost:8080/albums/
    • This will list all the albums available at our store, along with stock levels


  • It provides links to any client of our API, telling it the URLs it can use to:
  • View an Album's details:
    • http://localhost:8080/album/{id}



  • View details of the Artist:
    • http://localhost:8080/artist/{id}


  • Purchase a copy of the Album:
    • http://localhost:8080/album/purchase/{id}
    • Note: this action is only displayed next to an album when the stock level is > 0 (an example of how the server will control the transitions available to the client based on current application state)
    • Look at the example below - the first call to view album '3' shows it is available to purchase:
      • http://localhost:8080/album/3


      • If you then purchase it using http://localhost:8080/album/purchase/3, the next time you view it there is no longer a purchase option available - see the sketch below
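
For illustration, the album 3 response might look something like this (the album data and link relation names here are made up for the sketch, and depending on configuration the links may render in HAL style under '_links' instead):

{
    "title": "Kind of Blue",
    "stockLevel": 1,
    "links": [
        { "rel": "self", "href": "http://localhost:8080/album/3" },
        { "rel": "artist", "href": "http://localhost:8080/artist/1" },
        { "rel": "album.purchase", "href": "http://localhost:8080/album/purchase/3" }
    ]
}

After the purchase, the stock level drops to 0 and the purchase link disappears from the response.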



The Code


Application.java
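
A minimal sketch of a Spring Boot entry point for a project like this (the package layout is an assumption):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Starts the embedded Tomcat container and wires up the controllers
@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}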


Album.java
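
A plausible shape for the entity, assuming it extends Spring HATEOAS's ResourceSupport so links can be attached (field names are assumptions; Artist.java below follows the same pattern):

import org.springframework.hateoas.ResourceSupport;

public class Album extends ResourceSupport {

    private long albumId;   // ResourceSupport already uses getId() for the self link
    private String title;
    private int stockLevel;

    public Album(long albumId, String title, int stockLevel) {
        this.albumId = albumId;
        this.title = title;
        this.stockLevel = stockLevel;
    }

    public long getAlbumId() { return albumId; }
    public String getTitle() { return title; }
    public int getStockLevel() { return stockLevel; }
    public void setStockLevel(int stockLevel) { this.stockLevel = stockLevel; }
}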

Artist.java

MusicService.java


  • This is just a mock service that simulates a service/data access layer
  • It provides some methods to retrieve Album and Artist data, and also to purchase an Album (which for now will just reduce the stock level by 1)
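
A sketch of the sort of thing this amounts to (method names are assumptions based on the description above):

import java.util.HashMap;
import java.util.Map;

import org.springframework.stereotype.Service;

@Service
public class MusicService {

    // in-memory maps stand in for a real data access layer
    private final Map<Long, Album> albums = new HashMap<Long, Album>();
    private final Map<Long, Artist> artists = new HashMap<Long, Artist>();

    public Album getAlbum(long id) {
        return albums.get(id);
    }

    public Artist getArtist(long id) {
        return artists.get(id);
    }

    public void purchaseAlbum(long id) {
        Album album = albums.get(id);
        if (album != null && album.getStockLevel() > 0) {
            // 'purchasing' just decrements the stock level
            album.setStockLevel(album.getStockLevel() - 1);
        }
    }
}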


ArtistController.java

  • Spring web controller for Artist operations


AlbumController.java

  • Spring web controller for Album operations
  • Note how a check is made on stock levels here to determine whether to offer the purchase link to the client - a sketch of that check follows
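
A hedged sketch of the controller using the 2015-era Spring HATEOAS link builder (handler method names are assumptions; the stock check gates the purchase link):

import static org.springframework.hateoas.mvc.ControllerLinkBuilder.linkTo;
import static org.springframework.hateoas.mvc.ControllerLinkBuilder.methodOn;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class AlbumController {

    @Autowired
    private MusicService musicService;

    @RequestMapping("/album/{id}")
    public Album getAlbum(@PathVariable("id") long id) {
        Album album = musicService.getAlbum(id);
        album.add(linkTo(methodOn(AlbumController.class).getAlbum(id)).withSelfRel());
        // only advertise the purchase transition while stock remains
        if (album.getStockLevel() > 0) {
            album.add(linkTo(methodOn(AlbumController.class).purchaseAlbum(id)).withRel("album.purchase"));
        }
        return album;
    }

    @RequestMapping("/album/purchase/{id}")
    public Album purchaseAlbum(@PathVariable("id") long id) {
        musicService.purchaseAlbum(id);
        return getAlbum(id); // re-render with whichever links now apply
    }
}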



pom.xml
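
The interesting parts are roughly these (a sketch - the Boot version is illustrative for the period, and the parent manages the spring-hateoas version):

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.2.1.RELEASE</version>
</parent>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.hateoas</groupId>
        <artifactId>spring-hateoas</artifactId>
    </dependency>
</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>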


Building and running the example

Building

  • You will require Maven installed and configured (see .... for more details on this)
  • The example project uses Spring Boot to run the example. This basically builds a JAR file and spins up an embedded Tomcat container using a very simple command (so no WAR file is built, and no deployment to an external server is needed). This is great for getting up and running quickly - all you need is Maven and a JDK.

Running

Running with Eclipse STS

  • If you use Eclipse STS - you can just run the project as a Spring Boot App (just right-click the project in the package explorer and click 'Run As .. Spring Boot App')

Running with Maven

  • On the command line, navigate to the project folder root (containing pom.xml), and use 'mvn clean install spring-boot:run'

Either of these will start the Spring Boot deployment, and you should see a screen similar to this:



Changing the Embedded Tomcat Port

  • Just change the 'server.port' setting in application.properties
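
For example, to move it to port 9090:

# src/main/resources/application.properties
server.port=9090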

Further Reading

  • http://en.wikipedia.org/wiki/HATEOAS
  • http://en.wikipedia.org/wiki/Roy_Fielding
  • http://projects.spring.io/spring-hateoas/
  • http://roy.gbiv.com/

Wednesday, 28 May 2014

Querying XML CLOB Data Directly in Oracle

Oracle 9i introduced a new datatype - XMLType - specifically designed for handling XML natively in the database.

However, we may find we have a database that stores XML directly in a CLOB/NCLOB datatype (either because of legacy data, or because we are supporting multiple database platforms).

It can be useful to query this data directly using the XMLType functions available. This post just shows a very simple example of a query to do this.

(Note: there may be better ways to do this - but this worked for me when I needed an ad-hoc query quickly!)

Our Table

Assume we have a table called USER, which has the following columns:
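
For illustration, assume a layout along these lines (column names are hypothetical; USER is a reserved word in Oracle, hence the quoted identifier):

CREATE TABLE "USER" (
    ID        NUMBER PRIMARY KEY,
    NAME      VARCHAR2(100),
    XML_DATA  CLOB            -- the embedded XML we want to query
);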



Our embedded XML (example for Ringo)
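
Assume each user's CLOB holds account details along these lines (element names are hypothetical, but the query below uses the same paths):

<user>
    <name>Ringo</name>
    <account>
        <accountNumber>12345678</accountNumber>
        <sortCode>10-20-30</sortCode>
    </account>
</user>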



Our Query

Say we want to look at account details in the XML - e.g. find users who share the same sort code. A query like the following:
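
Here is a hedged sketch using the hypothetical names above - XMLTYPE() converts the CLOB on the fly so the XMLType functions can be applied to it:

SELECT u.id,
       u.name,
       EXTRACTVALUE(XMLTYPE(u.xml_data), '/user/account/sortCode') AS sort_code
  FROM "USER" u;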



will produce output with the sort code extracted into an ordinary column. Obviously from here you can use all the standard SQL operators to filter, join etc. to get what you need (e.g. grouping on the extracted sort code to find users sharing one):




Friday, 14 March 2014

Data Pumper - Migrating Data from MySQL to PostgreSQL

I recently made the decision to migrate from MySQL to PostgreSQL (the reasons why will be covered in a later post). I thought I'd just jot down my notes on a tool I tried which helped with the migration - SQL Workbench/J Data Pumper (no sniggering at the back..).

Migrating the Database

My MySQL database was running on Amazon RDS. I first created a new PostgreSQL instance on Amazon RDS and created the basic table structures (I'm using JPA/Hibernate in my app - so this was just a case of updating my JPA config, pointing it at the new PostgreSQL instance and letting it create all the tables for me) - the benefits of ORM!

Migrating the Data

I then needed to migrate the data over. I didn't have a huge amount of data (in the region of a few hundred to a few thousand rows across a number of tables).

I was initially going to just use mysqldump - possibly to a CSV, and then import that into Postgres. However, I had a few issues getting the data out (I've used mysqldump before on a semi-regular basis, so I'm not sure exactly what the problem was this time - though it was the first time I'd used it on RDS with MySQL 5.6). As I was moving away from MySQL, I wasn't particularly bothered about getting to the bottom of it - so I changed tack and thought I'd give the SQL Workbench Data Pumper a whirl.

It worked really well - and is a great little tool for jobs like this (undoubtedly not the tool of choice if you have large datasets and remote systems - but for small 'hacky' jobs like this it was ideal).

Just to outline below - this is what I did:

Pre-requisites

  • Download SQL Workbench/J   
    • http://www.sql-workbench.net/ 
    • I did this on Linux - so it was a simple case of extracting the download and running 'chmod +x *.sh' to make the shell scripts executable
    • run 'sqlworkbench' to fire up the app
  • Download the JDBC Database Drivers
  • Configure SQL Workbench with connections to both your databases

  • Try connecting to a database just to confirm it all works as expected


  • Open the 'Data Pumper' (Tools .. Data Pumper)

  • Select Source connection in top left (MySQL in my case), and Target connection in top right (PostgreSQL). Select the Source and Target tables and field mappings, and then click the 'Start' button.

  • Bingo - data transferred! Simple and easy to use. I'd definitely use it again for small jobs like this - useful to have in the toolbox (and supports a wide range of database platforms).

Sunday, 4 August 2013

MongoDB in 3 Easy Steps!

If you're entirely new to MongoDB - this is just a very gentle introduction that gets you up and running with a sample database in just a few easy steps.

It's really easy to get started. The steps are:

1. Install MongoDB
2. Create a Database
3. Query the Database

1. Install MongoDB

  • The mongoDB site has simple, clear instructions for downloading and installing. Just click on the link below and follow the instructions for your particular operating system (much clearer than trying to explain them again here!):
      http://docs.mongodb.org/manual/installation/
  • (Make sure you have started the 'mongod' server process as explained in these instructions - you should have a mongod server up and running before moving on to Step 2)

2. Create A Database

  • Now we have a server up and running - the next step is to create a database and pop some data in. For our example we have a very simple Library database, with a single 'book' table (or 'collection' in mongo-speak).
  • Instead of running SQL against mongo, you use JavaScript - an example build file is included below (you could enter the commands one at a time in the interactive console window - shown later - but for speed and ease of use you can also run them all together from a script file, which is the approach we use here)
  • Copy the JavaScript code from the gist below and save it to a file called 'library_build_script.js' on your machine
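
A sketch of what such a build script looks like (the book data here is illustrative):

// library_build_script.js - creates the 'library' database and 'book' collection
var db = db.getSiblingDB('library');

db.book.drop(); // start clean if re-running the script

db.book.insert({ title: 'The Colour of Magic', author: 'Terry Pratchett', type: 'Fantasy' });
db.book.insert({ title: 'Mort', author: 'Terry Pratchett', type: 'Fantasy' });
db.book.insert({ title: 'Dune', author: 'Frank Herbert', type: 'Sci-Fi' });

// index the fields we expect to query on
db.book.ensureIndex({ title: 1 });
db.book.ensureIndex({ type: 1 });

print('library database built - ' + db.book.count() + ' books loaded');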


  • Open a new console window, navigate to the 'bin' folder in the extracted mongo installation files, and use the mongo command to load the script - just type:
mongo SCRIPT_LOCATION/library_build_script.js

  • (Note: if you set the PATH up on your machine, you can of course run the mongo executables from anywhere)
  • This will run the build script to create the database, collection and indexes, and prepopulate it with some test data.
  • Note - we are using getSiblingDB() to create and work with a sibling database. This is slightly more complex than the default, but if you create multiple databases it helps organise them better (as you build databases in a tree-like structure). This makes it easier to manage/view multiple databases on the same server. Have a play around with different options when you get up and running to see what works best for you.

 

3. Query the Data

  • It's as simple as that to have a populated database up and running on your machine - we can now use the interactive mongo command line tool to query the data
  • Start up the interactive mongo console. Still in the 'bin' folder, type
mongo

  • Find All Books Query - list all books in the database
db.getSiblingDB("library").book.find()


  • Find All Books With Type 'Fantasy' Only
db.getSiblingDB("library").book.find({'type':'Fantasy'})
   


As you can see - it's really easy to get up and running in just a few minutes. Of course there's much more to cover - but this should be a good starting point to start playing.

I'll hopefully be expanding on the Library database in some future posts - exploring more advanced features and functions.

 

Addendum: Using A GUI Client

You can also use a GUI client to explore your local MongoDB database if you prefer that to using the command line.

Check out UMongo - you can download it for free from the website here:

http://edgytech.com/umongo/

Just follow the installation instructions for your OS, start it up and create a connection to your local mongod server.



You can then interact with your local mongo instance using the tools in UMongo




Thursday, 1 August 2013

JPA Searching Using Lucene - A Working Example with Spring and DBUnit

Working Example on Github

 


There's a small, self-contained, mavenised example project over on GitHub to accompany this post - check it out here: https://github.com/corsoft/jpa-lucene-spring-demo


Running the Demo


See the README file over on GitHub for details of running the demo. Essentially - it's just running the unit tests, with the usual Maven build and test results output to the console - example below. This is the result of running the DBUnit test, which inserts Book data into the HSQL database using JPA, and then uses Lucene to query the data, testing that the expected Books are returned (i.e. only those in the SCI-FI category containing the word 'Space', and ensuring that any with 'Space' in the title appear before those with 'Space' only in the description).



The Book Entity


Our simple example stores Books. The Book entity class below is a standard JPA Entity with a few additional annotations to identify it to Lucene:

@Indexed - this identifies that the class will be added to the Lucene index. You can define a specific index by adding the 'index' attribute to the annotation. We're just choosing the simplest, minimal configuration for this example.

In addition to this - you also need to specify which properties on the entity are to be indexed, and how they are to be indexed. For our example we are again going for the default option by just adding an @Field annotation with no extra parameters. We are adding one other annotation to the 'title' field - @Boost - this is just telling Lucene to give more weight to search term matches that appear in this field (than the same term appearing in the description field).
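
A sketch of how those annotations hang together on the entity (the exact field list is an assumption):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.hibernate.search.annotations.Boost;
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;

@Entity
@Indexed    // add this class to the Lucene index
public class Book {

    @Id
    @GeneratedValue
    private Long id;

    @Field
    @Boost(2.0f)    // matches in the title carry more weight than in the description
    private String title;

    @Field
    private String description;

    @Field
    private String category;    // e.g. SCI-FI

    // getters/setters omitted for brevity
}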

This example is purposefully kept minimal in terms of the ins-and-outs of Lucene (I may cover that in a later post) - we're really just concentrating on the integration with JPA and Spring for now.

The Book Manager


The BookManager class acts as a simple service layer for the Book operations - used for adding books and searching books. As you can see, the JPA database resources are autowired in by Spring from the application-context.xml. We are just using an in-memory HSQL database in this example.
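
The search side typically boils down to something like this sketch (class and method names are assumptions; the Hibernate Search calls are the standard JPA-flavoured API):

import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.apache.lucene.search.Query;
import org.hibernate.search.jpa.FullTextEntityManager;
import org.hibernate.search.jpa.Search;

public class BookManager {

    @PersistenceContext
    private EntityManager entityManager;

    @SuppressWarnings("unchecked")
    public List<Book> search(String searchTerm) {
        FullTextEntityManager ftem = Search.getFullTextEntityManager(entityManager);

        // build a Lucene query against the indexed fields
        Query luceneQuery = ftem.getSearchFactory()
                .buildQueryBuilder().forEntity(Book.class).get()
                .keyword().onFields("title", "description")
                .matching(searchTerm)
                .createQuery();

        // wrap it in a JPA query and execute against the index
        return ftem.createFullTextQuery(luceneQuery, Book.class).getResultList();
    }
}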

application-context.xml


This is the Spring configuration file. You can see in the JPA Entity Manager configuration that the key 'hibernate.search.default.indexBase' is added to the jpaPropertyMap to tell Lucene where to create the index. We have also externalised the database login credentials to a properties file (as you may wish to change these for different environments - for example, by updating the propertyConfigurer to look for and use a different external properties file if it finds one on the file system).
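
The relevant fragment looks something like this (bean names and the index path are illustrative):

<bean id="entityManagerFactory"
      class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="dataSource" ref="dataSource"/>
    <property name="jpaPropertyMap">
        <map>
            <!-- where Lucene writes its index files -->
            <entry key="hibernate.search.default.directory_provider" value="filesystem"/>
            <entry key="hibernate.search.default.indexBase" value="./lucene/indexes"/>
        </map>
    </property>
</bean>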


Testing Using DBUnit


The project includes an example of using DBUnit with Spring to test adding and searching against the database - using DBUnit to populate the database with test data, exercising the BookManager search operations, and then cleaning the database down. This is a great way to test database functionality and can be easily integrated into Maven and continuous build environments.

Because DBUnit bypasses the standard JPA insertion calls - the data does not get automatically added to the Lucene index. We have a method exposed on the service interface to update the full text index - 'updateFullTextIndex()' - calling this causes Lucene to update the index with the current data in the database. This can be useful when you are adding search to pre-populated databases, to index the existing content.
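
In Hibernate Search, that sort of method usually delegates to the MassIndexer - a sketch (whether the project does exactly this is an assumption):

public void updateFullTextIndex() throws InterruptedException {
    FullTextEntityManager ftem = Search.getFullTextEntityManager(entityManager);
    // walk the existing rows and (re)build the Lucene index from them
    ftem.createIndexer().startAndWait();
}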

The source data for the test is defined in an XML file.