Wednesday, 3 February 2010

GlassFish v3 and Java EE 6 Sun-Oracle roadshow - key notes

GlassFish Roadshow 2010 - London 03-02-2010

The following are some notes I took during the event.

As a brief summary of GlassFish v3 and Java EE 6, here are some of the key takeaways:

  • GlassFish v3 continues to be developed and supported, as the Java EE 5 & 6 RI app server
  • GlassFish v3 currently has no clustering, but offers OSGi modularization and extensibility
  • Supports and takes advantage of new Java EE 6 specifications
  • Big push on modularity and flexibility in both GlassFish and Java EE 6
  • Java EE 6 supports annotation-based EJBs and RESTful web services
  • Java EE 6 greatly simplifies configuration - web.xml is now optional, etc.
  • Java EE 6 simplifies EJBs to plain single classes and improves the JPA specification
  • Ongoing road-map for GlassFish, details TBA later this year


Java EE 6 (Roberto Chinnici):

Released Dec 10 2009

Key new features: new APIs, Web Profile, pluggability, dependency injection

New technologies:
  • JAX-RS 1.1
  • Bean validation 1.0
  • DI 1.0
  • CDI 1.0
  • Managed beans 1.0
Closed the gap between EJB and POJO programming through unification and annotations

CDI works with POJO and EJB classes

Unification of platform types, more uniform programming model

  • EJB 3.1
  • JPA 2.0
  • Servlet 3.0
  • JSF 2.0
  • Connectors 1.6
  • Interceptors 1.1
  • JAX-WS 2.2
  • JSR-109 1.3
  • JSP 2.2
  • JSR-250 1.1
  • JACC 1.4
  • JASPIC 1.1

Key goal in Java EE 6 - flexibility, pruning, extensibility and web profiles


Profiles:

Bundles of technologies targeting specific classes of applications
Decoupled from each other and the platform
Guarantees compatibility of required technologies, individually and in combination - must satisfy all joint requirements


Web Profile:

Modern Java web application stack
First profile to be defined
Mid-sized, fully functional (expected to cover ~80% of web apps); add additional components via extensibility (such as web services APIs, third-party frameworks)


Web profile contents:

Servlet, JSP/EL, JSTL, JSF, Bean Validation, EJB Lite, JPA, JTA, DI, CDI, Managed Beans, Interceptors, JSR-250


Pruning:

Goal - to address bloat concerns

A 2-step process:
1. Declare components as "proposed optional"
2. Then make them fully optional in the next release

Proposed optional technologies:

JAX-RPC (replaced by JAX-WS)
EJB 2.x Entity Beans (replaced by JPA entity classes)
JAXR (little use)
JSR-88 (deployment/tools API - not used)

* Don't use these - use the replacements/alternatives instead


Pluggability / extensibility:

Focus on web tier
Level playing field for third-party libraries and frameworks
Two major extensibility points: Servlets and CDI
Simplify packaging of web-apps
Zero-configuration!


Modular Web applications:

Libraries can contain web-fragment.xml descriptor
web.xml is now optional
Can serve resources out of jars via /META-INF/resources

e.g.
/WEB-INF/lib/catalog.jar
/META-INF/resources/catalog/books.html

e.g. Dojo jar in resources


Web fragments in servlet 3.0:

META-INF/web-fragment.xml

Same structure as web.xml; settings can be overridden in web.xml
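For illustration, a minimal web-fragment.xml bundled in a library jar might look like this (a sketch; the servlet name and class are made up):

```xml
<web-fragment xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
  <servlet>
    <servlet-name>CatalogServlet</servlet-name>
    <servlet-class>com.example.CatalogServlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>CatalogServlet</servlet-name>
    <url-pattern>/catalog/*</url-pattern>
  </servlet-mapping>
</web-fragment>
```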


Servlet container pluggability:

The ServletContainerInitializer interface is implemented by extensions

@HandlesTypes declares interest in one or more annotation types

The ServletContext API has been extended with methods to dynamically register servlets and filters

Registered using META-INF/services


The onStartup method gets called with the Set of available classes

** Services can only be added at startup, not dynamically once running


Asynchronous HTTP processing:

New programming model for async request processing
e.g. Comet, chat, push apps
Opt-in model
Threads are managed by the container

@WebServlet(asyncSupported=true)
public class MyServlet extends HttpServlet {
    ...
}

Goal - Decouple requests from threads

* This changes the way filters work - the thread is no longer tied to the socket and the response is not written immediately. Filters must be made async-safe, so async support is opt-in via asyncSupported=true
- set this on the servlet and on every filter involved in the chain

Ideal when waiting for some external resource, etc.
Low-level API - not very elegant
The idea is that frameworks will build on it - see for example the Atmosphere framework, which is annotation-based and uses this async API under the hood.


JSF 2.0:

Facelet as a standard view declaration language
Composite components
Ajax (declarative and programmatic) e.g. f:ajax tag
Partial state saving (tracks deltas and sends only the changes in the response; previously the full state was sent even if unchanged!)
System events e.g. f:event tag
Resources
Validation e.g. new f:validateBean tag (better integration with the validation API; can report multiple errors at once rather than the first one at a time!)


EJB 3.1:

@Singleton beans
@Startup beans (invoked at app startup, works well with Singleton pattern)
@Asynchronous invocations (the biggest change; allows non-blocking asynchronous EJB invocations as a first-class call. The method must either be void or return a Future object)
No-interface view (the bean implementation no longer needs an interface - now 1 class = 1 EJB!)
Define EJBs directly inside a web app, inside a war file
New API - EJBContainer API works on Java SE, can bootstrap an EJB container in a Java SE application (ideal for testing/development, could be useful in client applications)


Simplified packaging:

EJB classes can go directly into the war file
Previously an EJB jar had to be built and included


EJB 3.1 lite:

A subset of EJB 3.1
All types of session beans (stateful, stateless, singleton) - other bean types not supported (timer, entity, etc.)
Declarative transactions and security
Interceptors
An ejb-jar.xml descriptor is allowed (optional, probably not useful)

* Class loading, new visibility rules in Java EE spec

Slight differences in class loading rules - if in doubt check the specs


New JNDI Namespaces:

Until now - only java:comp
Now added:
java:module - a module (war, EJB jar)
java:app - an application
java:global - the whole server/cluster

e.g. @Resource(lookup="java:app/CustomerDB") DataSource db;

EJB components have global names now:

e.g. java:global/app1/module2/SomeBean!com.acme.Foo

Helps solve the problem of remote EJB communication within the same app server


Dependency injection (DI):

Combination of DI 1.0 / CDI 1.0
New @Inject annotation
@Inject @LoggedIn User user; (@LoggedIn = "Which one", User = "What")
Beans auto-discovered at start-up
Extensible
Injection metamodel (BeanManager API)
@Resource still available for container resources

* Identified by type and qualifiers (no longer just a string name alone)

* No bean declarations as in Spring - beans are discovered at startup
* Injection errors are all reported at startup, rather than on use at runtime

Beans can be associated with a session - e.g. a logged-in user
Beans can be more ephemeral, i.e. request scoped
Beans can be longer lived, i.e. shopping cart conversation flows
Classes can be added as beans on the fly, via APIs at runtime

Example of DI annotation:

@Inject
CheckoutHandler(@LoggedIn User user,
                @Reliable @PayBy(CREDIT_CARD) PaymentProcessor processor,
                @Default Cart cart) { ... }

* Note that constructor injection is possible
* Note the different scopes: the Cart is probably conversation scoped, the logged-in User is session scoped, and the PaymentProcessor is probably an application-scoped singleton

* Instance and state management handled "for free" by framework/APIs

JAX-RS 1.1:
Already widely adopted
Really a high level HTTP API
Annotation-based programming model
Programmatic API when needed

* think of it as the new HTTP-level API (higher level than HttpServlet, removing low-level detail and tedium)

Jax-RS resource class, example:

Identified by the @Path annotation

@Path("widgets/{id}")
@Produces("application/widgets+xml")
public class WidgetResource {
    public WidgetResource(@PathParam("id") String id) { ... }

    @GET
    Widget getWidget() { ... }
}

Provides higher-level HTTP handling - more declarative, a better match for the conceptual needs of the developer


Bean Validation 1.0:

Integrated with JSF, JPA
Constraints represented by annotations

e.g.

@NotNull
@Size(max=40)
String address;

Fully extensible
@Email
String recipient;

Validation APIs for validating directly, creating a new Validator object, etc.
Validation of object trees is possible (including cycles)


JPA 2.0:

Support for collections of basic types and embeddable objects
JPQL enhancements e.g. CASE WHEN, NULLIF
Pessimistic locking added (annotations added)
Criteria API for dynamic query construction

Criteria API: Uses the canonical metamodel classes

CriteriaBuilder - creates criteria
CriteriaQuery - typed criteria

Strongly typed checking - type-parameterised equals checks, compiler errors if a query does not have the right types, etc. - so queries are robust and safe (compared with, say, constructing SQL strings directly).

Connectors 1.6 added too (not covered in any detail)


Summary overview:

Improved, more powerful, more flexible, more extensible, easier to use

http://java.sun.com/javaee


--

GlassFish V3 (Alexis Moussine-Pouchkine):

Java EE 6 Reference Implementation (RI)

Geographic download map:

http://maps.glassfish.org/server

Healthy increase in downloads and usage over time

GlassFish V1 first shipped 2006, reusing much from Tomcat

V2.1.1 Nov 2009, V3 (Java EE 6) Dec 10th 2009

GlassFish V3 Open Source CDDL, GPL (with 'classpath exception') licensing

Java EE 5 & 6, enterprise quality - full support is available.

Sub projects:
  • Jersey (JAX-RS)
  • Metro (JAX-WS)
  • Grizzly (NIO)
  • Atmosphere (Comet)
  • OpenMQ (JMS)
  • and scripting: JRuby on Rails, Grails, and now Django (Python)

Main difference from Tomcat - Grizzly core (rewritten)

Netbeans 6.8 tooling
Support available in Eclipse too

GlassFish development continues
Support contracts available through to 2017+
Remains the Java EE reference implementation
Now also sold with WebLogic and standalone

Roadmap expected soon, covering the remainder of the year

No longer have to ship as many standard runtime/framework jars as part of the application archive

NetBeans in-place editing of classes, incremental compilation, deploy-on-save; GF v3 preserves sessions across redeployments(!)

Session retention:
A deployment option to maintain stateful sessions across re-deployments!


GlassFish v3 key goals:

Modular and dynamic
Modular: Apache Felix (OSGi)
Extensible: HK2 (100 KB kernel)
Still very fast!

Centralized configuration, modules configured through centralised control


Key Features:

No ejb-jar.xml needed, no web.xml, annotation driven, EJBs as single classes

Declarative annotations:

@Stateless annotation for a single-class stateless session bean
@Schedule annotation for timers

Eclipse - GlassFish tool bundle for eclipse (contains everything!)

Ultra fast auto-deploy of all Java EE and static artefacts

Maven support: mvn gf:run, gf:start, gf:deploy

Containers can be added / removed dynamically


New API for EJB testing (EJBContainer):

Example:

EJBContainer c = EJBContainer.createEJBContainer();
Context ic = c.getContext();
SimpleEjb ejb = (SimpleEjb) ic.lookup("java:global/sample/SimpleEjb");
ejb.sayHello();


GlassFish "Embedded" - allows all features of GlassFish to be automated

org.glassfish.api.embedded.Server server;
Server.Builder builder = new Server.Builder();
server = builder.build();
ContainerBuilder b = server.createConfig(ContainerBuilder.Type.web);
server.addContainer(b);

File archive = new File("hello.war");
server.getDeployer().deploy(archive);

i.e. Ship app server inside an application!


OSGi:

GlassFish runs on top of OSGi (Felix by default)
Also runs unmodified on Knopflerfish and Equinox
GlassFish ships with 200+ bundles
Can run without OSGi (static mode based on HK2)
Can use OSGi management tools (CLI or Web)

Any OSGi bundles will run in GlassFish v3
Drop it in glassfish/modules

Servlets can get hold of OSGi bundles (using @Resource DI)


Update centre:

Graphical tool (GlassFish does not need to be started), available in web admin console
CLI version available
Was in v2.x but not from admin console


RESTful admin API:

JAX-RS/Jersey + Grizzly provide REST interfaces for:
Configuring the runtime (via GET, POST, DELETE)
Invoking commands (restart, stop, deploy, etc.)
Monitoring (GET only)
Log rotation, etc.

e.g. Available from:
localhost:4848/management/domain
localhost:4848/monitoring/domain


Further advantages:

Dynamic language support via modules:
Rails, Grails, Django, Scala/Lift


Comet:
Cometd/Bayeux
Atmosphere

Full support for:
mod_jk
WebDAV, CGI, SSI

OpenMQ 4.4

Web Services Metro 1.4
.NET 3.5 interoperability


v3 Clustering:
Lower priority than Java EE 6 and modularity, so not available yet...
Clustering is not built in as it was in v2
More similar to v1: single instance
You have to roll your own clustering (load balancing, deployment)
See the roadmap for details...


Doing More with GlassFish (Steve Elliott):

GlassFish v3 - Management and monitoring

Management:

User Friendly, pluggable and extensible for administration
Feature rich Admin console (GUI)
Easy to use Command Line Interface (CLI)
RESTful management and monitoring API
Fully documented AMX API (app server management API)
All management features built on AMX API

OSGi, load on demand = fast initial start up

v3 AdminConsole:
Frame-set removed, now Ajax based pages
Pluggable console (admin panes, trees etc loaded on demand)


Monitoring:

Lightweight probe architecture
Ad hoc monitoring in Production
Client-scripting (JavaScript)
DTrace integration on Solaris (similar to OS probes, uniform tracing experience with MySQL etc)
Extensibility / Pluggability

No overhead when there is no monitoring
Allows Monitoring to be turned on in a production environment with minimal impact
Generate and listen to only the events of interest
Turn on monitoring when needed
BTrace integration
Portable and dynamic


Instrumentation:

Modules expose probes
POJO with annotations
XML

Modules register probe listeners

In code, POJO annotations can be used

@ProbeProvider(providername="glassfish", modulename="web")

ProbeProvider XML configuration also possible

ProbeListeners
JMX exposed

@ManagedAttribute(id="jspcount")


OSGi:

big move to OSGi technology
Big move to more modular development approach

Demands and enforces stronger modularity

OSGi is largely under the covers
Visible to GlassFish developers, but not to GlassFish users


Service based architecture:

Core modules loaded on app startup
Rest loaded on demand

Module Management:
add, remove, update installed modules

OSGi as a container


Web services - Metro : JAX-WS / Jersey : JAX-RS

Metro - SOAP-based web services stack
Built into GlassFish
Works with any servlet 2.5 compliant web container
WebLogic, WebSphere, JBoss, Tomcat
Also standalone
Advanced interoperability with .NET 3.x/4.0

Project Tango - focused on interoperability with .NET

JAXB-based XML data binding (XSD, XPath)

SOAP Messaging MTOM etc

Bi-directional interoperability with .NET (Java or .NET as client or server)


Standards:

JCP: JAX-WS 2.2 & JAXB 2.2
W3C: SOAP 1.1/1.2, WSDL 1.1, WS-Addressing
… etc


JAX-RS:

JAX-RS 1.1 is final and part of Java EE 6

Not part of the Web Profile,
but included with the GlassFish v3 Web Profile distribution
Spec: JSR 311

http://www.oracle.com/java
Contains links to GlassFish etc



Sunday, 1 November 2009

Project Euler 12 - Scala Streams, creating triangle numbers

Just a short blog today on Scala Streams and generating numeric progressions, such as a simple arithmetic series

This is a solution to project Euler problem 12: "What is the value of the first triangle number to have over five hundred divisors?"

This is actually a fairly easy problem to solve, provided you use a sufficiently efficient function to count the divisors!

The method I've adopted for this is particularly simple, but performs sufficiently well to solve this problem as stated within a few seconds.

Triangle Numbers:

Just a quick refresher: the triangle numbers are the sums of the integers from 1 to n.

An Arithmetic series is a number series of the form:

a + (a + b) + (a + 2b) + ....

and using this form the Triangle numbers can be written as:

1 + (1 + 1) + (1 + 2) + (1 + 3) + ...

so they clearly are of the form of an Arithmetic series.
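As a quick sanity check (my addition, not part of the original post), the closed form n(n+1)/2 agrees with the additive definition:

```scala
// triangle numbers via the additive definition and via the closed form n(n+1)/2
val additive   = (1 to 10).map(n => (1 to n).sum)
val closedForm = (1 to 10).map(n => n * (n + 1) / 2)

assert(additive == closedForm)
println(additive.take(5).mkString(", ")) // 1, 3, 6, 10, 15
```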


The more interesting aspect of this problem was the generation of the Triangle number sequence. Naturally, wanting to use Scala to solve this problem, the idea of using a Stream to evaluate the triangle number sequence lazily on demand is a natural approach.

The implementation I settled on to produce the Stream of triangle numbers was the iterative version as follows:


val trianglesI = Stream.iterate((1L, 0L)) {
  case (n, sum) => (n + 1L, sum + n) }.map(_._2)


This is a fairly natural solution via the additive definition of the triangle numbers.

What's perhaps more interesting is finding some alternative ways to achieve the same thing, here are some alternatives to the iterative additive stream definition:

Using the multiplicative definition of the Triangle numbers and the map function:



// use a stream map and the multiplicative solution for triangle numbers
val trianglesM = Stream.from(1).map(n => n * (n + 1) / 2)

println("trianglesM take(5)")
trianglesM.take(5).foreach(println)



Using "inits" - inits are a concept that can be found in languages such as Haskell:


// using inits (not efficient)
// inits -> inits(Stream.from(1)) is Stream(Stream(), Stream(1), Stream(1, 2), Stream(1, 2, 3), ...)
// and tails -> tails(Stream.from(1)) is Stream(Stream(1,2,3,...), Stream(2,3,4,...), Stream(3,4,5,...), ...)
def inits[T](xs: Stream[T]) = Stream.from(0).map(xs.take(_))

val trianglesInits = inits(Stream.from(1)).tail.map(_.sum)

println("trianglesInits take(5)")
trianglesInits.take(5).foreach(println)



Using Stream.cons and a helper function zipWith that can zip tuples:


// using zipWith
def zipWith[T1, T2, T3](s1: Stream[T1], s2: Stream[T2])(fn: (T1, T2) => T3) =
  s1.zip(s2).map(Function.tupled(fn))

lazy val trianglesZ: Stream[Int] =
  Stream.cons(1, zipWith(trianglesZ, Stream.from(2))(_ + _))

println("trianglesZ take(5)")
trianglesZ.take(5).foreach(println)




So, whilst problem 12 is not too hard, the production of triangle numbers as lazy lists is quite interesting, given that there are several approaches that can be taken!


The whole program solution to problem 12 (including the alternative triangle number streams):


package projecteuler

object Problem12 {

  def main(args: Array[String]) {
    println("result = " + problem12(500))
  }

  def problem12(target: Int): Option[Long] = {

    // using Stream.iterate to create a lazy stream of triangle numbers
    val trianglesI = Stream.iterate((1L, 0L)) {
      case (n, sum) => (n + 1L, sum + n) }.map(_._2)

    // test it
    //println("trianglesI take(5)")
    //trianglesI.take(5).foreach(println)

    // count the divisors (not massively efficient, but sufficient)
    def numDivisors(n: Long) = {

      var count = 0

      for (i <- 1L to Math.sqrt(n).toInt) {
        if (n % i == 0) {
          count = count + 2
        }
      }
      count
    }

    trianglesI.find(p => numDivisors(p) > target)
  }

  def otherTriangleStreams {

    // use a stream map and the multiplicative solution for triangle numbers
    val trianglesM = Stream.from(1).map(n => n * (n + 1) / 2)

    println("trianglesM take(5)")
    trianglesM.take(5).foreach(println)

    // using inits (not efficient)
    // inits -> inits(Stream.from(1)) is Stream(Stream(), Stream(1), Stream(1, 2), Stream(1, 2, 3), ...)
    // tails -> tails(Stream.from(1)) is Stream(Stream(1,2,3,...), Stream(2,3,4,...), Stream(3,4,5,...), ...)
    def inits[T](xs: Stream[T]) = Stream.from(0).map(xs.take(_))

    val trianglesInits = inits(Stream.from(1)).tail.map(_.sum)

    println("trianglesInits take(5)")
    trianglesInits.take(5).foreach(println)

    // using zipWith
    def zipWith[T1, T2, T3](s1: Stream[T1], s2: Stream[T2])(fn: (T1, T2) => T3) =
      s1.zip(s2).map(Function.tupled(fn))

    lazy val trianglesZ: Stream[Int] = Stream.cons(1, zipWith(trianglesZ, Stream.from(2))(_ + _))

    println("trianglesZ take(5)")
    trianglesZ.take(5).foreach(println)
  }
}




Sunday, 27 September 2009

Scala & GUIs, a simple maze generator and solver applet

In this blog we will touch on using Scala for a simple GUI, in the form of a simple 2D Maze Applet.

An example of this applet can be viewed running from here http://www.chillipower.com/examples/maze/maze1.html to give an illustration of what it does. This uses a maze space of 100x100 cells on a regular grid.

The applet illustrates 2 aspects of GUI programming in Scala, the first is how to interact with basic javax.swing APIs directly and the second is how to go one step further and achieve the same effect using the scala.swing wrapper API instead.

The function of the Applet can be broken down into 3 simple steps:

Step 1: Produce a random maze. This uses a modified version of Prim's algorithm, running backwards from the exit until all of the available space has been covered. This produces a simple path from the start to the exit, which is exactly the path we seek when we later solve the maze from the start. (The standard Prim's algorithm produces a minimum spanning tree of a connected weighted graph by repeatedly adding safe edges to the current tree, starting at a source node.)

Step 2: Use a regular (recursive) DFS algorithm to search the maze, stopping only when the exit is reached (or when all paths have been explored fully, if there is no reachable exit).

Step 3: Remove all backtrack paths and show the single simple path from start to exit as the solution to the maze.

During maze exploration, the parts of the maze explored are marked with coloured breadcrumbs: red indicates a forward search path and blue a backtracking path (which happens when a path reaches a dead end).

In step 2, in order to stop immediately when the exit is found (rather than completing all other possible search branches), the recursive maze-solving function throws an exception to escape out of the call stack immediately.
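The early-exit-via-exception idea can be sketched in isolation (a toy example of mine, not the applet code): a recursive search throws as soon as the target is found, unwinding the whole call stack in one step.

```scala
// toy depth-first search that escapes the whole call stack as soon as the target is found
class Found(val path: List[Int]) extends Exception

def dfs(children: Map[Int, List[Int]], node: Int, target: Int, path: List[Int]): Unit = {
  if (node == target) throw new Found((node :: path).reverse)
  children.getOrElse(node, Nil).foreach(c => dfs(children, c, target, node :: path))
}

// a small tree standing in for the maze
val tree = Map(0 -> List(1, 2), 1 -> List(3), 2 -> List(4, 5))

val result =
  try { dfs(tree, 0, 4, Nil); None }
  catch { case f: Found => Some(f.path) }

println(result) // Some(List(0, 2, 4))
```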

The code given is compatible with the very latest Scala 2.8.x trunk development builds, and with 2.7.6, the latest stable release (some 2.7.6-only versions of functions such as shuffle are commented out in preference to the newer and better 2.8 versions). Note that there are currently some issues with Arrays in 2.8.x causing runtime errors when trying to create a multidimensional array. The solution that still works with current 2.8 builds is to use the Array.tabulate function.

i.e. the following works in 2.7.6 but not currently in 2.8:

var cells : Array[Array[Cell]] = new Array[Array[Cell]](WIDTH, HEIGHT)

and there are similar problems with the new Array.ofDim function too. These issues should be resolved in the 2.8 final release, but arrays in Scala have recently been undergoing refactoring, so stability in this area has not been strong.
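The Array.tabulate workaround can be sketched like this (my illustration using a minimal Cell and the two-argument tabulate overload; the applet code uses nested tabulate calls to the same effect):

```scala
// minimal stand-in for the applet's Cell class
case class Cell(i: Int, j: Int)

val WIDTH = 3
val HEIGHT = 2

// build the 2D array with tabulate instead of new Array[Array[Cell]](WIDTH, HEIGHT)
val cells = Array.tabulate(WIDTH, HEIGHT)((i, j) => Cell(i, j))

assert(cells.length == WIDTH && cells(0).length == HEIGHT)
assert(cells(2)(1) == Cell(2, 1))
```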

One interesting thing to note is the use of the new Scala 2.8+ Stream.iterate function to avoid undesirable vars instead of vals, here is an example of that extracted from the code.

A typical iterative piece of code: a while loop is used to iterate over a code block whilst a particular condition is met for a given variable. For this to work, the variable n needs to be updated in the loop, so it can't be a val and has to be a var instead. This is somewhat paradigmatic of iterative imperative programming, but doesn't sit well with the striving for a more functional style.


var n = cells(exit.x)(exit.y)

while (n != null) {
  n.trail = Forward
  n = n.pi
  update
}



What we can do here is make use of the Stream.iterate function to avoid the unsightly var n, as follows:



Stream.iterate(cells(exit.x)(exit.y))(_.pi).takeWhile(_ != null).foreach {
  n => {
    n.trail = Forward
    update
  }
}


Given that this is new in Scala 2.8, and 2.8 isn't released yet, you can augment existing 2.7.6 code with a helper function to aid the transition and improve 2.7-based code in readiness for 2.8. Here is an example of a util function that defines iterate so the same idea can be used in 2.7-based code:


def iterate[T](x: T)(f: T => T): Stream[T] =
  Stream.cons(x, iterate(f(x))(f))
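To show the helper in action (a self-contained sketch, so the definition is repeated here), it can generate the triangle numbers exactly as Stream.iterate does in 2.8:

```scala
// 2.7-style iterate helper: x, f(x), f(f(x)), ...
def iterate[T](x: T)(f: T => T): Stream[T] =
  Stream.cons(x, iterate(f(x))(f))

// same shape as the trianglesI definition in the post
val triangles = iterate((1, 0)) { case (n, sum) => (n + 1, sum + n) }.map(_._2)

println(triangles.take(6).toList) // List(0, 1, 3, 6, 10, 15)
```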



Another interesting thing to note in the code is the use of a permutation on a list (to create random directions when generating or exploring the maze), here is the code:


def shuffle[T](xs: List[T], r: scala.util.Random) = {
  xs.toStream.zip(Stream.const(r.nextDouble _).map(_())).toList.sort(_._2 < _._2).map(_._1)
}


The code zips the elements with a stream of random real numbers and then permutes by sorting on those numbers, extracting the sorted items to create the random permutation (neat huh!)

The above code is 2.7.x & 2.8+ compatible, but perhaps even better is the following 2.8+ code that makes use of the new Stream.continually function:


def shuffle[T](xs: List[T])(implicit r: scala.util.Random) = {
  xs.zip(Stream.continually(r.nextDouble)).sortWith(_._2 < _._2).map(_._1)
}
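As a quick property check (my addition, repeating the definition so the sketch is self-contained): whatever the random keys, the result should always be a permutation of the input.

```scala
import scala.util.Random

// sort-by-random-keys shuffle, as above
def shuffle[T](xs: List[T])(implicit r: Random): List[T] =
  xs.zip(Stream.continually(r.nextDouble())).sortWith(_._2 < _._2).map(_._1)

implicit val r: Random = new Random(42) // fixed seed just for the sketch

val xs = (1 to 20).toList
val shuffled = shuffle(xs)

assert(shuffled.sorted == xs) // same elements, possibly different order
```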


In terms of the GUI aspects, the application is an applet, and the example code shows 2 ways of achieving this in Scala. The first, using the swing API directly is as follows:


class MazeApplet2 extends JApplet {

  override def init() {
    ...
  }

  override def start() {
  }
}



Which is familiar to any Java programmer, but the more interesting approach is using the scala.swing API instead, whereby we have:



class MazeApplet extends Applet {

  object ui extends UI with Reactor {

    def init() = {
      ...
    }

    override def start() = {
    }
  }
}



Where the abstract ui attribute (of type UI) is implemented using the Reactor mixin trait (http://www.scala-lang.org/docu/files/api/scala/swing/Reactor.html), to provide the framework for event handling.

One thing that's quite clear is the obvious similarity between the function to generate and the function to solve the maze. Using the DSL-like features of Scala, it should be possible to extract the common control structure/algorithm and create a new utility function providing built-in-like support for this, passing in a body function to supply the differing implementation of maze generate vs. solve.
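As a sketch of that idea (my own illustration, not the applet code): the shared "visit unvisited neighbours recursively" skeleton can be extracted into a higher-order function, with the generate/solve behaviour passed in as the per-node body.

```scala
// extract the common recursive-traversal skeleton; the per-node behaviour is passed in
def traverse[N](start: N)(neighbours: N => List[N])(body: N => Unit): Unit = {
  val visited = scala.collection.mutable.Set[N]()
  def go(n: N): Unit = {
    if (!visited(n)) {
      visited += n
      body(n)            // generate-walls / solve-search logic would go here
      neighbours(n).foreach(go)
    }
  }
  go(start)
}

// toy graph standing in for the maze's cell/direction structure
val graph = Map(0 -> List(1, 2), 1 -> List(0, 3), 2 -> List(3), 3 -> List[Int]())

val order = scala.collection.mutable.ListBuffer[Int]()
traverse(0)(graph)(n => order += n)

println(order.toList) // List(0, 1, 3, 2)
```

The maze generator would pass a body that knocks down walls towards unvisited neighbours, and the solver a body that lays breadcrumbs, with everything else shared.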


Full code example:



/**
* Scala 2.8+ Maze Applet v0.1 20-09-2009 Louis Botterill
*/
package maze

import scala.swing.Applet
import scala.swing.Panel
import scala.swing.Reactor

import javax.swing._;
import java.awt.Graphics;
import java.awt.Color;


object Direction extends Enumeration {
type Direction = Value
val North, South, East, West = Value
}

object Breadcrumb extends Enumeration {
type Breadcrumb = Value
val Forward, Backward, Clear = Value
}

import Direction._
import Breadcrumb._

class MazeModel {
val HEIGHT = 100
val WIDTH = 100

// instantiate and initialise the 2d array - there is a direct 2d version but not working currently in scala 2.8.x
var cells = Array.tabulate(WIDTH)(i => Array.tabulate(HEIGHT)(j => Cell(i, j)))

// set everything as not visited and with no trail
def clearVisited() = {

cells.foreach(_.foreach(c => {
c.visited = false;
c.trail = Clear
}))
}

// generate maze
def generateMaze(update : => Unit) {

val exit = new Point(WIDTH - 1, HEIGHT - 1)
val start = new Point(0, 0)

// 1. Start at a particular cell, call it the "exit"
// 2. Mark the current cell as visited, and get a list of all its neighbors.
// For each neighbor, starting with a randomly selected neighbor:
// 1. If that neighbor hasn't been visited,
// remove the wall between this cell and that neighbor,
// and then recurse with that neighbor as the current cell.

// cell 0,0 is the start, open the north wall to highlight this
var s = cells(0)(0)
s.clear(North);

// cell width-1,height-1 is the exit, open the south wall to highlight this
val c = cells(exit.x)(exit.y)
c.clear(South);

// recursively process the next cell
def doNextCell(c : Cell) {

c.visited = true
c.trail = Forward

var dirs = c.getRndDirections()

while (!(dirs isEmpty)) {
val dir = dirs.head
val x = getCell(c, dir)

x match {
case Some(n) =>

if (n.visited != true) {
n.visited = true
c.clear(dir)
n.clear(getInv(dir))
n.trail = Forward
update
doNextCell(n)
n.trail = Clear
update
}

case None =>
}

dirs = dirs.tail

//Thread.sleep(1)
}
}

doNextCell(c)
}

// find the maze solution using dfs from the start node until the end
// node is located
def solveMaze(update : => Unit) {

val start = new Point(0, 0)
val exit = new Point(WIDTH - 1, HEIGHT - 1)

val s = cells(0)(0)
s.clear(North);

def doNextCell(c : Cell) {

c.visited = true
c.trail = Forward

var dirs = c.getDirections()

while (!(dirs isEmpty)) {
val dir = dirs.head
val x = getCell(c, dir)

x match {
case Some(n) =>

if (n.visited != true) {
n.visited = true
n.pi = c // set predecessor node
n.trail = Forward
update
if (n.i == exit.x && n.j == exit.y) throw new Exception("Done")
doNextCell(n)
n.trail = Backward
update
}

case _ =>
}

// take the top item off and use the rest of the list
dirs = dirs.tail

Thread.sleep(1) // add a little delay so we can watch the dfs explore and find the solution
}
}

doNextCell(s)
}

// from the exit recurse over the trail of predecessors,
// marking with a forward trail
def showSolution(update : => Unit) {

val exit = new Point(WIDTH - 1, HEIGHT - 1)

// 2.8 avoid var using Stream
// Stream.iterate(cells(exit.x)(exit.y))(_.pi).takeWhile(_ != null)
Stream.iterate(cells(exit.x)(exit.y))(_.pi).takeWhile(_ != null).foreach{
n => {
n.trail = Forward
update
}
}

// the following works fine but replaced with the above to avoid unnecessary var
/*
var n = cells(exit.x)(exit.y)

while (n != null) {

n.trail = Forward
n = n.pi
update
}
*/
}

// get the cell if possible based on current cell and the given direction
def getCell(c : Cell, dir : Direction) : Option[Cell] = dir match {

case North => if (c.j > 0) Some(cells(c.i)(c.j-1)) else None
case South => if (c.j < HEIGHT - 1) Some(cells(c.i)(c.j+1)) else None
case East => if (c.i < WIDTH - 1) Some(cells(c.i+1)(c.j)) else None
case West => if (c.i > 0) Some(cells(c.i-1)(c.j)) else None
}

// find the inverse direction for a given direction
def getInv(dir : Direction) = dir match {

case North => Direction.South
case South => Direction.North
case East => Direction.West
case West => Direction.East
}
}

// a scala.swing.Panel, override paint(Graphics2D) to paint each of the cells
class MazePanel(m : MazeModel) extends Panel {

override def paint(g : java.awt.Graphics2D) {

m.cells.foreach(_.foreach(_.draw(g)))
}
}

// a javax.swing.JPanel, override paint(Graphics) to paint each of the cells
class MazePanel2(m : MazeModel) extends JPanel {

override def paint(g : Graphics) {

m.cells.foreach(_.foreach(_.draw(g)))
}
}

// a scala.swing.Applet based applet using scala swing wrappers
class MazeApplet extends Applet {
val m = new MazeModel()
lazy val mp = new MazePanel(m)

object ui extends UI with Reactor {

def init() = {
contents = mp
}

override def start() = {
def update = {
this.repaint()
}

m.generateMaze(update)

Thread.sleep(2000)

m.clearVisited

try {
m.solveMaze(update);
}
catch {
case e : Exception =>
}

Thread.sleep(2000)

m.clearVisited

m.showSolution(update)

mp.repaint()
this.repaint()
}
}
}

// standard javax.swing.JApplet of the maze
class MazeApplet2 extends JApplet {

val m = new MazeModel()
lazy val mp = new MazePanel2(m)

def MazeApplet2 = {
}

override def init() {

this.add(mp)
}

override def start() {
def update = {
this.repaint()
}

m.generateMaze(update)

Thread.sleep(2000)

m.clearVisited

try {
m.solveMaze(update);
}
catch {
case e : Exception =>
}

Thread.sleep(2000)

m.clearVisited

m.showSolution(update)

mp.repaint()
this.invalidate()
this.repaint()
}

override def stop() {
}

override def destroy() {
}
}

// standard Java app of the maze using JFrame
object Maze {

def main(args: Array[String]) {

val mazeApplet = new MazeApplet()
val frame = new JFrame()
frame.setBounds(50, 50, 750, 750)
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

frame.add(mazeApplet)

frame.setVisible(true)
mazeApplet.init
mazeApplet.start
}
}

// represent a Cell of the maze
case class Cell(i: Int, j : Int) {

val size = 7; // each cell is 7px by 7px
var north : Boolean = true
var south : Boolean = true
var east : Boolean = true
var west : Boolean = true
var visited : Boolean = false
var pi : Cell = null // predecessor cell
var trail : Breadcrumb = Clear;

def draw(g : Graphics) {

val x = i * size
val y = j * size

trail match {
case Forward => {
g.setColor(Color.RED);
g.fillRect(x + 2, y + 2, size - 4, size - 4)
}
case Backward => {
g.setColor(Color.BLUE);
g.fillRect(x + 2, y + 2, size - 4, size - 4)
}
case _ =>
}

g.setColor(Color.BLACK);
if (north) {
g.drawLine(x, y, x + size, y);
}
if (south) {
g.drawLine(x, y + size, x + size, y + size);
}
if (east) {
g.drawLine(x + size, y, x + size, y + size);
}
if (west) {
g.drawLine(x, y, x, y + size);
}
}

def clear(dir : Direction) = dir match {

case North => north = false
case South => south = false
case East => east = false
case West => west = false
}

def getDirections() : scala.List[Direction] = {

var d = scala.List[Direction]()

if (!north) d = North :: d
if (!south) d = South :: d
if (!east) d = East :: d
if (!west) d = West :: d

d
}

def getRndDirections() : scala.List[Direction] = {

val directions = North :: South :: East :: West :: Nil
implicit val r = Rand.rand
Utils.shuffle(directions)
}
}

// represent a 2d point
case class Point(x : Int, y : Int)

// init a random generator singleton
object Rand {
val rand : scala.util.Random = new scala.util.Random();
}

object Utils {

// ref: http://okmij.org/ftp/Haskell/perfect-shuffle.txt
// scala 2.7+ here's an O(n log n) solution: pair each element with a random
// key, sort on the key and project the elements back out
def shuffle[T](xs: List[T])(implicit r: scala.util.Random): List[T] =
xs.map(x => (x, r.nextDouble)).sort(_._2 < _._2).map(_._1)

// all permutations of a list, O(n!) - for illustration only
def permute[T](xs: List[T]): List[List[T]] = xs match {
case Nil => scala.List(Nil)
case _ => xs.flatMap(x => permute(xs.filter(_ != x)).map(x :: _))
}

// make a stream of x, f(x), f(f(x)), etc.
// in Haskell this is called "iterate". it ought to be in the standard library
// as Stream.iterate. "unfold" should be more general, but nonetheless I'm
// going to call this unfold for the moment...
def unfold[T](x:T)(f:T=>T):Stream[T] =
Stream.cons(x,unfold(f(x))(f))

def iterate[T](x:T)(f:T=>T):Stream[T] =
Stream.cons(x,iterate(f(x))(f))
}




Monday, 31 August 2009

Scala, trees, pattern matching and tree recursion

Hi! Long time no see...

Well after some outage due to Holiday (road trip around Europe and down to the Swiss Alps!), work commitments and some personal things to deal with, I'm back again.

I thought I'd start with some stuff on Tree structures, whilst continuing on with Scala too. The following code illustrates some good examples of case classes and pattern matching, along with extensive use of Some, None and Option (Scala's improved way of dealing with optional types, rather than relying on null everywhere).

The following code defines a simple recursive binary tree definition as a case class, using option to define the left and right children.

case class Tree[+T](value: T, left: Option[Tree[T]], right: Option[Tree[T]])
The above definition defines a tree recursively, exactly as a tree is defined as a data structure in computer science. Each tree node has a value of type T and optional left and right child nodes (a binary tree). The +T makes the Tree type covariant in T, which is a generally useful property for a tree.
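As a small aside (not from the original post), covariance is what lets a Tree[Int] be used anywhere a Tree[Any] is expected:

```scala
case class Tree[+T](value: T, left: Option[Tree[T]], right: Option[Tree[T]])

// a function that accepts any tree at all
def describe(t: Tree[Any]): String = "root = " + t.value

// because Tree is covariant in T, Tree[Int] is a subtype of Tree[Any],
// so passing an Int tree compiles without any conversion
val intTree: Tree[Int] = Tree(1, Some(Tree(2, None, None)), None)
val d = describe(intTree)
```

Had Tree been invariant, the call to describe would have been a type error.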

A few helper functions are used that make life a little easier when working with Option types, by extracting the value or providing a safe default when no value exists. These help keep the code a little tidy and manageable.


/**
* Helper function: For o return f: A => Int if Some or 0 if None
* also could try opt map f getOrElse 0
*/
def zeroIfNone[A](o: Option[A], f: A => Int): Int = o match {
case None => 0
case Some(x) => f(x)
}

/**
* Helper function: Return value of function provided if Some, or 0 otherwise
*/
def zeroIfNoneVal[A](o: Option[A], n: => Int) = zeroIfNone[A](o, _ => n)

/**
* Helper function: Default to provided value if option is None
*/
def defaultIfNone[A](o: Option[A], f: A => Int, d : Int): Int = o match {
case None => d
case Some(x) => f(x)
}


The example code demonstrates preorder, inorder and postorder recursive tree traversal (which are all simple permutations of a basic depth first search), applying the provided unit function to each node as it is visited.

Recall that the preorder, inorder and postorder functions are defined as follows:

  • PreOrder - visit parent node, left and then right child node
  • InOrder - visit left, parent and then right nodes
  • PostOrder - visit left, right and then parent node
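To make the three orders concrete, here is a small self-contained sketch (reusing the Tree definition above, but with traversals that collect values into a List rather than applying a Unit function) run over a three-node tree:

```scala
case class Tree[+T](value: T, left: Option[Tree[T]], right: Option[Tree[T]])

// parent, then left subtree, then right subtree
def preOrder[A](t: Option[Tree[A]]): List[A] = t match {
  case None => Nil
  case Some(x) => x.value :: preOrder(x.left) ::: preOrder(x.right)
}

// left subtree, then parent, then right subtree
def inOrder[A](t: Option[Tree[A]]): List[A] = t match {
  case None => Nil
  case Some(x) => inOrder(x.left) ::: (x.value :: inOrder(x.right))
}

// left subtree, then right subtree, then parent
def postOrder[A](t: Option[Tree[A]]): List[A] = t match {
  case None => Nil
  case Some(x) => postOrder(x.left) ::: postOrder(x.right) ::: List(x.value)
}

//     1
//    / \
//   2   3
val t = Some(Tree(1, Some(Tree(2, None, None)), Some(Tree(3, None, None))))

// preOrder(t)  -> List(1, 2, 3)
// inOrder(t)   -> List(2, 1, 3)
// postOrder(t) -> List(2, 3, 1)
```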


Using recursion and the call stack, depth first search based tree traversal is very simple and concise. The basic DFS inorder traversal looks like:


/**
* Perform an inorder tree traversal using the provided Tree[A] => Unit function
*/
def inOrder[A](t: Option[Tree[A]], f: Tree[A] => Unit) : Unit = t match {

case None =>
case Some(x) =>
if (x.left != None) inOrder(x.left, f)
f(x)
if (x.right != None) inOrder(x.right, f)
}


The breadth first search algorithm is perhaps more interesting. This uses a List as a basic FIFO queue to provide the breadth first search tree traversal:


/**
* Breadth first search on a tree, a list is used as a basic fifo queue to
* give the bfs traversal
*/
def breadthFirstSearch[A](t: Tree[A], f: Tree[A] => Unit): Unit = {
def bfs(ts: List[Tree[A]]): Unit = ts match {
case Nil => // ignore
case _ =>
val children = for{tree <- ts
Some(child) <- List(tree.left, tree.right)}
yield child
ts map f
bfs(children)
}
bfs(List(t))
}


The example also shows a treeFold function. Whether this is a useful function as-is is perhaps debatable, but it makes for an interesting example. The code traverses the tree in a DFS inorder traversal, applies the mapping function f : A -> B, and then applies the join function j : (B, B) -> B to combine the results of the tree map. In this way the function is similar to, say, flatMap, where a list is mapped and then flattened into a single list, or perhaps more closely to foldLeft/foldRight, where the items of a list are joined into a single resulting value.



/**
* Fold up a tree into a single value, using provided map and join functions
*/
def treeFold[A, B](t: Option[Tree[A]],
f: Option[Tree[A]] => Option[B],
j: (Option[B], Option[B]) => Option[B]) : Option[B] = t match {

case None => None
case Some(x) => j(j(f(t), treeFold(x.left, f, j)), treeFold(x.right, f, j))
}
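As a usage example (a sketch assuming the definitions above), a tree can be summed by mapping each node to its value and joining with addition, treating None as 0:

```scala
case class Tree[+T](value: T, left: Option[Tree[T]], right: Option[Tree[T]])

def treeFold[A, B](t: Option[Tree[A]],
                   f: Option[Tree[A]] => Option[B],
                   j: (Option[B], Option[B]) => Option[B]): Option[B] = t match {
  case None => None
  case Some(x) => j(j(f(t), treeFold(x.left, f, j)), treeFold(x.right, f, j))
}

//     1
//    / \
//   2   3
val t = Some(Tree(1, Some(Tree(2, None, None)), Some(Tree(3, None, None))))

// map each node to its value; join two partial results by adding,
// with a missing (None) result counting as 0
val sum = treeFold[Int, Int](t,
  node => node.map(_.value),
  (a, b) => Some(a.getOrElse(0) + b.getOrElse(0)))
// sum == Some(6)
```

Swapping the join function for multiplication (with None defaulting to 1) gives the product instead, which is exactly what the main method below demonstrates.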


Some utility functions are also defined, to return the size and depth of the tree for illustration of how this can be achieved.

In summary, Scala can represent tree data structures quite elegantly using case classes, and operate over them with pattern matching and higher order functions, providing powerful and concise support for trees and the operations performed on them.

Some additional things that could be done here would be to provide an example of a binary search and perhaps insertion, update and delete too. A job for another followup blog perhaps...


The full example code (complete and self-contained) is as follows:


package trees

// Define a recursive Bin Tree structure
case class Tree[+T](value: T, left: Option[Tree[T]], right: Option[Tree[T]])

object TreeObj {

/**
* Helper function: For o return f: A => Int if Some or 0 if None
* also could try opt map f getOrElse 0
*/
def zeroIfNone[A](o: Option[A], f: A => Int): Int = o match {
case None => 0
case Some(x) => f(x)
}

/**
* Helper function: Return value of function provided if Some, or 0 otherwise
*/
def zeroIfNoneVal[A](o: Option[A], n: => Int) = zeroIfNone[A](o, _ => n)

/**
* Helper function: Default to provided value if option is None
*/
def defaultIfNone[A](o: Option[A], f: A => Int, d : Int): Int = o match {
case None => d
case Some(x) => f(x)
}

/**
* Return the size of the tree
*/
def size[A](t : Option[Tree[A]]) : Int = t match {

case None => 0
case Some(x) =>
zeroIfNoneVal(x.left, size(x.left)) + zeroIfNoneVal(x.right, size(x.right)) + 1
//zeroIfNone(x.left, (_:Tree[Any]) => (size(x.left))) +
//zeroIfNone(x.right, (_:Tree[Any]) => (size(x.right))) + 1
}

/**
* Alternative size using nested function
*/
def size2[A](t : Option[Tree[A]]) : Int = {

def sizeB(b : Option[Tree[A]]) : Int = b match {
case None => 0
case Some(x) => size(b);
}

t match {
case None => 0
case Some(x) =>
sizeB(x.left) + sizeB(x.right) + 1
}
}

/**
* Return the max depth of the tree
*/
def depth[A](t : Option[Tree[A]]) : Int = t match {

case None => 0
case Some(x) =>
Math.max(zeroIfNoneVal(x.left, depth(x.left)), zeroIfNoneVal(x.right, depth(x.right))) + 1
}

/**
* Alternative depth: return the max depth of the tree using a nested function
*/
def depth2[A](t : Option[Tree[A]]) : Int = {

def depthB(b : Option[Tree[A]]) : Int = b match {
case None => 0
case Some(x) => depth(b);
}

t match {

case None => 0
case Some(x) =>
Math.max(depthB(x.left), depthB(x.right)) + 1
}
}

/**
* Breadth first search on a tree, a list is used as a basic fifo queue to
* give the bfs traversal
*/
def breadthFirstSearch[A](t: Tree[A], f: Tree[A] => Unit): Unit = {
def bfs(ts: List[Tree[A]]): Unit = ts match {
case Nil => // ignore
case _ =>
val children = for{tree <- ts
Some(child) <- List(tree.left, tree.right)}
yield child
ts map f
bfs(children)
}
bfs(List(t))
}

/**
* Sum up the elements of a numeric tree
*/
def sumTree[A <: Int](t: Option[Tree[A]]) : Int = t match {

case None => 0
case Some(x) =>
sumTree(x.left) + sumTree(x.right) + x.value
}

/**
* Fold up a tree into a single value, using provided map and join functions
*/
def treeFold[A, B](t: Option[Tree[A]],
f: Option[Tree[A]] => Option[B],
j: (Option[B], Option[B]) => Option[B]) : Option[B] = t match {

case None => None
case Some(x) => j(j(f(t), treeFold(x.left, f, j)), treeFold(x.right, f, j))
}

/**
* Perform an inorder tree traversal using the provided Tree[A] => Unit function
*/
def inOrder[A](t: Option[Tree[A]], f: Tree[A] => Unit) : Unit = t match {

case None =>
case Some(x) =>
if (x.left != None) inOrder(x.left, f)
f(x)
if (x.right != None) inOrder(x.right, f)
}

/**
* Perform a preorder tree traversal using the provided Tree[A] => Unit function
*/
def preOrder[A](t: Option[Tree[A]], f: Tree[A] => Unit) : Unit = t match {

case None =>
case Some(x) =>
f(x)
if (x.left != None) preOrder(x.left, f)
if (x.right != None) preOrder(x.right, f)
}

/**
* Perform a postorder tree traversal using the provided Tree[A] => Unit function
*/
def postOrder[A](t: Option[Tree[A]], f: Tree[A] => Unit) : Unit = t match {

case None =>
case Some(x) =>
if (x.left != None) postOrder(x.left, f)
if (x.right != None) postOrder(x.right, f)
f(x)
}

/**
* build tree and run tree functions on it
*/
def main(args : Array[String]) {

/* Test Tree structure

1
/ \
2 3
/ \ / \
4 x 5 6

*/

// build a tree
val myTree = Tree(1,
Some(Tree(2,
Some(Tree(4, None, None)),
None
)
),
Some(Tree(3,
Some(Tree(5, None, None)),
Some(Tree(6, None, None))
)
)
)

printBfsTree(myTree)

println("Size = " + size(Some(myTree)))

println("Depth = " + depth(Some(myTree)))

println("Sum = " + sumTree(Some(myTree)))

var sum = treeFold[Int, Int](
Some(myTree),
(t: Option[Tree[Int]]) => t match { case None => Some(0); case Some(x) => Some(x.value) },
(x: Option[Int], y: Option[Int]) =>
Some(zeroIfNone[Int](x, (x:Int) => x) + zeroIfNone[Int](y, y => y))
)

var mult = treeFold[Int, Int](
Some(myTree),
(t: Option[Tree[Int]]) => t match { case None => Some(0); case Some(x) => Some(x.value) },
(x: Option[Int], y: Option[Int]) =>
Some(defaultIfNone[Int](x, (x:Int) => x, 1) * defaultIfNone[Int](y, y => y, 1))
)

println("Sum Result = " + sum)
println("Mult Result = " + mult)

println("InOrder = ")
inOrder(Some(myTree), (t: Tree[Any]) => println(t.value))

println("PreOrder = ")
preOrder(Some(myTree), (t: Tree[Any]) => println(t.value))

println("PostOrder = ")
postOrder(Some(myTree), (t: Tree[Any]) => println(t.value))
}

/**
* Print a tree using BFS, passing in print unit function
*/
def printBfsTree[A](tree: Tree[A]) {

breadthFirstSearch(tree, (t: Tree[A]) => println(t.value))
}
}



Thursday, 9 July 2009

Notes on Adobe LiveCycle, Flash, Flex and Air

Having recently attended Adobe LiveCycle training at Adobe Uxbridge, here are a few notes and links from it.

Adobe LiveCycle is an integrated Java based SOA and document management server framework. LiveCycle can be hosted on a JEE application server, including JBoss, WebSphere and WebLogic. LiveCycle requires a database; MySQL, Oracle and MS SQL Server are supported.

Currently, LiveCycle offers best performance on Microsoft SQL Server platforms.

LiveCycle is compatible with Windows, Solaris, AIX and Linux

LiveCycle is designed to integrate with RIA technologies, such as Flash, Flex and Air.


Key LiveCycle features include:

  • Forms builder - a PDF based form generation engine for PDF/HTML and Flash presentation
  • Form Guides - based on PDF but with wizard style Flash presentation
  • Reader/Reader extensions - Acrobat/Reader document management including digital signatures, revocation, permissions etc
  • Process Management
  • Data Services - Web Services, support for RESTful webservices in the next version
  • Output - print/production services


  • Flash is the original rich embedded content for multimedia, video, graphics and sound - flash runs as a plugin in the browser (although stand alone players are available too).
  • Flex builds on Flash and adds a rich component model.
  • Air extends Flex technology beyond the browser, allowing the same applications to run transparently as desktop applications.

LiveCycle and Flex based technologies can be scripted with ActionScript, an ECMAScript dialect (related to JavaScript); Flex itself uses an XML format called MXML to define applications and component layout.


Key Tools:

  • FlexBuilder - IDE for Flex/Air development
  • LiveCycle Workbench - for LiveCycle development including forms and process management


Main Development WebSite: http://www.adobe.com/devnet/
Main Download home: http://www.adobe.com/downloads/
Flex Downloads: http://www.adobe.com/products/flex/flexdownloads/

Flex Development center: http://www.adobe.com/devnet/flex/?view=home
TourDeFlex: http://www.adobe.com/devnet/flex/tourdeflex/
TourDeLiveCycle: http://www.adobe.com/devnet/livecycle/tourdelivecycle/
LiveCycle Cafe: http://www.adobe.com/devnet/livecycle/cafe/

LiveDocs (online documentation): http://livedocs.adobe.com/
ActionScript 3: http://livedocs.adobe.com/flash/9.0/ActionScriptLangRefV3/
Flex 3 LiveDocs: http://livedocs.adobe.com/flex/3/

Adobe Partners: https://www.adobe.com/cfusion/partnerportal/index.cfm

FlashDevelop: Getting started http://nuigroup.com/forums/viewthread/1689/



Sunday, 5 July 2009

Scala, the Google App Engine and an iPhone client...

In this Blog I wanted to explore Scala on the Google App Engine. The Google App Engine has supported Java (as well as Python) for a little while now. There have been a few blogs about running Scala on the App engine and I wanted to further demonstrate this by taking a simple Java Servlet and converting it to a Scala based web application suitable for the App Engine.

The Google App Engine home is here http://code.google.com/appengine/ and contains excellent documentation on how to get started. I don't want to repeat the documentation, so I'll just point out the essentials relevant to this blog.

The Getting Started guide can be found here: http://code.google.com/appengine/docs/java/gettingstarted/ for Java, and a Python equivalent is available too.

This example is based on the previous blog, the Sieve of Eratosthenes as a simple prime number generator. The request accepts one parameter n, whose integer value is the upper limit up to which all the primes (from 2) are generated. The response output is JSON encoded to create a simple web service.


Starting with the very basics, the recommended project directory structure is of the form:


|-src
|---META-INF
|---primes
|-war
|---WEB-INF
|-----classes
|-------META-INF
|-------primes
|-----lib



For any Web App for the Google App Engine, one additional GAE specific file is required alongside the standard web.xml deployment descriptor. This file is called appengine-web.xml and contains the name of the Application, for example:


<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
<application>primes-chillipower-com</application>
<version>1</version>
</appengine-web-app>


In this simple GAE deployment descriptor the main thing is that the name of the application is specified, in this example the application was called 'primes-chillipower-com'.

Applications must be created on the App Engine, this can be done here: http://appengine.google.com/

Note that there are some limitations currently: you are allowed a maximum of 10 applications, and applications cannot be deleted once created!


The simple Java version:

In the simple Java version we can place a simple Servlet in the src directory and build using ant.


Example Java servlet:


package primes;

import java.io.IOException;
import javax.servlet.http.*;

public class PrimesServletJ extends HttpServlet {
public void doGet(HttpServletRequest req, HttpServletResponse resp)
throws IOException {
resp.setContentType("text/plain");

int n = 100;

String nStr = req.getParameter("n");

try {

n = Integer.parseInt(nStr);
}
catch (Exception e) {}

int[] primes = primes(n);

resp.getWriter().println("{ \"n\" : \"" + n + "\",");
resp.getWriter().println(" \"primes\" : [ ");

for (int i = 0; i < primes.length; i++) {
String sep = (i < primes.length - 1) ? ", " : "";
resp.getWriter().println(" { \"" + i + "\" : \"" + primes[i] + "\" }" + sep);
}

resp.getWriter().println(" ] }");
}

public static int[] primes(int n) {

// initially assume all integers are prime
boolean[] isPrime = new boolean[n + 1];

for (int i = 2; i <= n; i++) {
isPrime[i] = true;
}

// mark non-primes <= N using Sieve of Eratosthenes
for (int i = 2; i*i <= n; i++) {

// if i is prime, then mark multiples of i as nonprime
// suffices to consider multiples i, i+1, ..., n/i
if (isPrime[i]) {
for (int j = i; i * j <= n; j++) {
isPrime[i * j] = false;
}
}
}

// count primes
int primeCount = 0;
for (int i = 2; i <= n; i++) {
if (isPrime[i]) primeCount++;
}

System.out.println("The number of primes <= " + n + " is " + primeCount);

int[] primes = new int[primeCount];
int idx = 0;
for (int i = 0; i <= n; i++) {
if (isPrime[i] == true) {

primes[idx] = i;
idx++;
}
}

return primes;
}

public static void main(String[] args) {

int n = 100;

if (args.length > 0 && args[0] != null) {
n = Integer.parseInt(args[0]);
}

int[] primes = primes(n);

for (int p : primes) {
System.out.println("Prime: " + p);
}
}
}



The ant build script goes in the root of the project, to start with I based mine on the example on the Google App Engine documentation here http://code.google.com/appengine/docs/java/tools/ant.html#Uploading_and_Other_AppCfg_Tasks

Using the standard ant script the project can be compiled and even run locally using the SDK.

ant compile

ant runserver


If you start the dev app server locally, it will be accessible via a URL of the form:

http://localhost:8080/yourappcontextpath


Where the port 8080 is the default and can be changed in the script.

The App Engine SDK comes with appcfg.sh script in the bin directory, which can be used to perform operations such as uploading your app to the App Engine.


../../../../tools/appengine-java-sdk-1.2.1/bin/appcfg.sh update war


The Scala version:

The main tasks in converting this conventional Java Servlet based web app to a Scala based one are to:
  • Rewrite the Java Servlet in Scala.
  • Reconfigure the ant build.xml to modify the classpath to include the Scala libraries and to run the Scala scalac compiler to create the Java .class files from the .scala source files.

Since the Servlet API is a Java based API, there will of course be some reference to Java based classes in the Scala Servlet.


Example Scala based Servlet:


package primes

import javax.servlet.http._


class PrimesServlet extends HttpServlet {
override def doGet(request: HttpServletRequest, response: HttpServletResponse) = {

var n = 100

try {
val nStr = request.getParameter("n");
n = Integer.parseInt(nStr);
} catch {
case e : Exception => {
println("Exception e " + e.getMessage())
}
}

val ps = primes take n
val out = response.getWriter()

out.println("{ \"n\" : \"" + n + "\", \"primes\" : [")

var i = 0
for (p <- ps) {
val sep = if (i < ps.length - 1) ", " else ""
out.println(" { \"" + i + "\" : \"" + p + "\" }" + sep)
i = i + 1
}

out.println(" ] } ")
}

def primes = {
def sieve(is: Stream[Int]): Stream[Int] = {
val h = is.head
Stream.cons(h, sieve(is filter (_ % h > 0)))
}

sieve(Stream.from(2))
}
}





Example modified build.xml file:


<project>
<property name="scala.home" location="/Users/chillipower_uk/dev/tools/scala/scala-2.7.5.final/" />
<property name="sdk.dir" location="/Users/chillipower_uk/dev/tools/appengine-java-sdk-1.2.1/" />

<import file="${sdk.dir}/config/user/ant-macros.xml" />

<path id="project.classpath">
<pathelement path="war/WEB-INF/classes" />
<fileset dir="war/WEB-INF/lib">
<include name="**/*.jar" />
</fileset>
<fileset dir="${sdk.dir}/lib">
<include name="shared/**/*.jar" />
</fileset>
</path>

<!-- Copy over Scala jars -->
<target name="init">
<property
name="scala-library.jar"
value="${scala.home}/lib/scala-library.jar"
/>
<path id="build.classpath">
<pathelement location="${scala-library.jar}" />

<pathelement location="${build.dir}" />
</path>
<taskdef resource="scala/tools/ant/antlib.xml">
<classpath>
<pathelement location="${scala.home}/lib/scala-compiler.jar" />
<pathelement location="${scala-library.jar}" />
</classpath>
</taskdef>
</target>

<!-- Copy Scala JARs -->
<target name="copyscala"
description="Copies the Scala JARs to the WAR.">
<copy
todir="war/WEB-INF/lib"
flatten="true">
<fileset dir="${scala.home}/lib">
<include name="**/scala-library.jar" />
</fileset>
</copy>
</target>

<!-- Copy App engine jars -->
<target name="copyjars"
description="Copies the App Engine JARs to the WAR.">
<copy
todir="war/WEB-INF/lib"
flatten="true">
<fileset dir="${sdk.dir}/lib/user">
<include name="**/*.jar" />
</fileset>
</copy>
</target>

<target name="compile" depends="copyscala, copyjars, init"
description="Compiles Java source and copies other source files to the WAR.">
<mkdir dir="war/WEB-INF/classes" />
<copy todir="war/WEB-INF/classes">
<fileset dir="src">
<exclude name="**/*.scala" />
<exclude name="**/*.java" />
</fileset>
</copy>
<scalac
srcdir="src"
destdir="war/WEB-INF/classes"
classpathref="project.classpath"
/>

<!--
<copy todir="war/WEB-INF/classes">
<fileset dir="src">
<exclude name="**/*.scala" />
<exclude name="**/*.java" />
</fileset>
</copy> -->
<javac
srcdir="src"
destdir="war/WEB-INF/classes"
classpathref="project.classpath"
debug="on" />

</target>

<target name="datanucleusenhance" depends="compile"
description="Performs JDO enhancement on compiled data classes.">
<enhance_war war="war" />
</target>

<target name="runserver" depends="datanucleusenhance"
description="Starts the development server.">
<dev_appserver war="war" />
</target>

<target name="update" depends="datanucleusenhance"
description="Uploads the application to App Engine.">
<appcfg action="update" war="war" />
</target>

<target name="update_indexes" depends="datanucleusenhance"
description="Uploads just the datastore index configuration to App Engine.">
<appcfg action="update_indexes" war="war" />
</target>

<target name="rollback" depends="datanucleusenhance"
description="Rolls back an interrupted application update.">
<appcfg action="rollback" war="war" />
</target>

<target name="request_logs"
description="Downloads log data from App Engine for the application.">
<appcfg action="request_logs" war="war">
<options>
<arg value="--num_days=5"/>
</options>
<args>
<arg value="logs.txt"/>
</args>
</appcfg>
</target>

</project>




Example request and output:

Request: http://primes-chillipower-com.appspot.com/primes?n=50


{ "n" : "50", "primes" : [ { "0" : "2" }, { "1" : "3" }, { "2" : "5" }, { "3" : "7" }, { "4" : "11" }, { "5" : "13" }, { "6" : "17" }, { "7" : "19" }, { "8" : "23" }, { "9" : "29" }, { "10" : "31" }, { "11" : "37" }, { "12" : "41" }, { "13" : "43" }, { "14" : "47" }, { "15" : "53" }, { "16" : "59" }, { "17" : "61" }, { "18" : "67" }, { "19" : "71" }, { "20" : "73" }, { "21" : "79" }, { "22" : "83" }, { "23" : "89" }, { "24" : "97" }, { "25" : "101" }, { "26" : "103" }, { "27" : "107" }, { "28" : "109" }, { "29" : "113" }, { "30" : "127" }, { "31" : "131" }, { "32" : "137" }, { "33" : "139" }, { "34" : "149" }, { "35" : "151" }, { "36" : "157" }, { "37" : "163" }, { "38" : "167" }, { "39" : "173" }, { "40" : "179" }, { "41" : "181" }, { "42" : "191" }, { "43" : "193" }, { "44" : "197" }, { "45" : "199" }, { "46" : "211" }, { "47" : "223" }, { "48" : "227" }, { "49" : "229" } ] }



The iPhone client:

I'm currently developing using the iPhone 3.1 beta SDK. I'm not going to post all the source code for the simple client, but rather give some essential snippets relevant to invoking the simple JSON web service once deployed on the Google App Engine.

To Consume this JSON in an iPhone app, we can use the JSON Objective-C library, which is available on Google code here http://code.google.com/p/json-framework/wiki/FAQ



- (void)loadData {
[super viewDidLoad];

NSString *urlString = [NSString stringWithFormat:@"http://primes-chillipower-com.appspot.com/primes?n=1000"];
NSURL *url = [NSURL URLWithString:urlString];

NSDictionary *primesResponse = (NSDictionary *)[JsonClientHelper fetchJSONValueForURL:url];

if (primesResponse != nil) {

NSArray *primeEntries = [primesResponse objectForKey:@"primes"];

NSMutableArray *primesTemp = [[NSMutableArray alloc] init];
[primesTemp retain];

for (int i = 0; i < [primeEntries count]; i++) {

NSDictionary *primeEntry = (NSDictionary*)[primeEntries objectAtIndex:i];

NSString *key = [NSString stringWithFormat:@"%d", i];

NSString *p = [primeEntry objectForKey:key];

NSString *msg = [NSString stringWithFormat:@"Prime %d = %@", i, p];

NSLog(msg);

[primesTemp addObject:p];
}

primes = [primesTemp copy];

[primesTemp release];
}
}




The above code makes a request to the JSON web service on the App Engine and decodes the response, which once decoded using the JSON library is contained in Dictionaries (NSDictionary) and Arrays (NSArray).


Where JsonClientHelper contains a basic class helper method fetchJSONValueForURL to fetch and decode a JSON response from the supplied URL.



+ (id)fetchJSONValueForURL:(NSURL *)url
{
NSString *jsonString = [[NSString alloc] initWithContentsOfURL:url encoding:NSUTF8StringEncoding error:nil];

id jsonValue = [jsonString JSONValue];

[jsonString release];

return jsonValue;
}

I hope this Blog has helped give some insight into creating a Scala based Web Application for the Google App Engine, as well as the consumption of JSON in an iPhone Objective-C client.

(I apologise for the formatting of the code in this blog - I'm finding it awkward to get the code into the blog without it being mangled in some way!)


Resources:

Handy JSON Lint tool: http://www.jsonlint.com/

Scala Servlet development: http://blogs.oracle.com/aseembajaj/2008/08/scala_servlet_development.html

Scala on the Google App Engine: http://froth-and-java.blogspot.com/2009/04/scala-on-google-appengine.html

The Google App Engine provides support for JPA and JDO based persistence - a comparison of the two can be found here: http://www.jpox.org/docs/persistence_technology.html Further details of the JDO specification can be found here: http://java.sun.com/jdo/index.jsp and the JDO specification here: http://java.sun.com/javaee/technologies/persistence.jsp

Support for the Google App Engine in Netbeans: http://kenai.com/projects/nbappengine/pages/Home



Friday, 3 July 2009

Scala, lazy evaluation and the Sieve of Eratosthenes

This blog posting is primarily concerned with looking at lazy evaluation in functional programming, what it means and how it can be put to use.

Lazy evaluation means that an expression is not evaluated until its value is actually needed. Conversely, strict refers to the opposite property, where all expressions are evaluated up front when declared.

Some functional languages are described as pure lazy (no, this is not a derogatory term!), reflecting the fact that all evaluation is performed on demand. Haskell is one such language. Scala has aspects of both strict and lazy evaluation, so it couldn't be referred to as 'pure' in either sense.

By means of the simplest example I can think of to illustrate Lazy evaluation, consider the following interaction with the Scala CLI:


scala> lazy val a = b + 1; lazy val b = 1;
a: Int = <lazy>
b: Int = <lazy>

scala> a
res9: Int = 2

scala> b
res10: Int = 1


I think this illustrates lazy evaluation nicely: without defining the vals as lazy, the equivalent strict statement val a = b + 1; val b = 1; couldn't be evaluated, because b would be undefined at the point a is evaluated!

The examples are based on the Sieve of Eratosthenes. Eratosthenes was an ancient Greek mathematician (c. 276 BC – c. 195 BC). The Sieve of Eratosthenes is an ancient algorithm for finding the prime numbers from 2 .. n. Stated simply, this algorithm can be expressed as:


  • Create a list of all the numbers from 2 .. n.
  • Starting at p = 2
  • Strike out all multiples of p up until n from the list
  • Let p now be the first (lowest) number that has not been struck-off the list
  • Repeat the last two steps until p^2 > n
  • All remaining numbers not struck from the list are prime
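As a point of reference before the lazy versions, the steps above translate almost directly into a small imperative Scala function (a minimal sketch, kept separate from the stream-based code below):

```scala
// Sieve of Eratosthenes, following the steps above:
// strike out multiples of each surviving p while p * p <= n,
// then every index still marked true is prime
def sievePrimes(n: Int): List[Int] = {
  val isPrime = Array.fill(n + 1)(true)
  var p = 2
  while (p * p <= n) {
    if (isPrime(p)) {
      var m = p * p // multiples below p * p were struck by smaller primes
      while (m <= n) { isPrime(m) = false; m += p }
    }
    p += 1
  }
  (2 to n).filter(isPrime(_)).toList
}

// sievePrimes(30) gives List(2, 3, 5, 7, 11, 13, 17, 19, 23, 29)
```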

In Haskell this can be expressed very elegantly and succinctly, as follows:

primes = sieve [2..]
sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p > 0]


Note that [2..] creates an infinite stream of all the integers from 2 to infinity. You might think this is impossible and would create an out of memory error on the machine it ran on, but this is perfectly valid in a pure lazy language because none of the numbers are actually generated until they are requested!

In Scala there's not really a way to represent this quite so succinctly, but it is possible to create a lazy stream of numbers using:

var is = Stream from 2

which is essentially equivalent to Haskell's:

[2..]

syntax.


Another important thing to note and be aware of is that Scala's List class is not a lazy list. Therefore, trying to make the equivalent of the Haskell algorithm in Scala using a List is not going to work.

Scala's Stream class, however, is lazy, and so it can form the basis of the equivalent algorithm in Scala. To be specific, Scala's Stream class is strict in its head and lazy in its tail - which is fine, since the head holds at most one element.


The following Scala code shows two ways to implement the Sieve using Scala Lazy streams.


Example 1:


package primes;

/* Haskell...

primes = sieve [2..]
sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p > 0]
*/
object Primes1 {

def primes : Stream[Int] = {

var is = Stream from 2

def sieve(numbers: Stream[Int]): Stream[Int] = {
Stream.cons(
numbers.head,
sieve(for (x <- numbers.tail if x % numbers.head > 0) yield x))
}

sieve(is)
}

def main(args : Array[String]) = {

primes take 100 foreach println
}
}

object Primes2 {
def primes = {
def sieve(is: Stream[Int]): Stream[Int] = {
val h = is.head
Stream.cons(h, sieve(is filter (_ % h > 0)))
}

sieve(Stream.from(2))
}

def main(args: Array[String]) {
println(primes take 100 toList)
}
}



Note the use of the 'take' method to extract the first 100 primes from the stream (so the program terminates!).


This second example shows how we can define a Lazy list like trait and then use it to implement the same, equivalent behavior:


Example 2:


package primes


object PrimesLazyTrait {

sealed trait Lazy[+A] {
def head: A = this match {
case Nil => error("head called on empty")
case Cons(h, _) => h()
}

def tail: Lazy[A] = this match {
case Nil => error("tail on empty")
case Cons(_, t) => t()
}

def filter(p: A => Boolean): Lazy[A] = this match {
case Nil => Nil
case Cons(h, t) => if(p(h())) Cons(h, () => t() filter p) else t() filter p
}

def foreach(f: A => Unit) {
this match {
case Nil =>
case Cons(h, t) => {
f(h())
t() foreach f
}
}
}

def toList: List[A] = this match {
case Nil => scala.Nil
case Cons(h, t) => h() :: t().toList
}
}

final case class Cons[+A](h: () => A, t: () => Lazy[A]) extends Lazy[A]
case object Nil extends Lazy[Nothing]

def from(n: Int): Lazy[Int] = Cons(() => n, () => from(n + 1))

def primes = {
def sieve(is: => Lazy[Int]): Lazy[Int] = {
lazy val h = is.head
Cons(() => h, () => sieve(is filter (_ % h > 0)))
}

sieve(from(2))
}

def main(args: Array[String]) {
primes foreach println
}
}




In this version we see the Lazy trait provides the necessary methods head, tail, filter, foreach and toList.


These little examples help demonstrate some of the power, elegance and conciseness of lazy functional programming - as well as the beauty of this Ancient Greek algorithm for finding primes.


The basic algorithm's time complexity is:

  • O(n log n log log n)

And it has a memory requirement of:

  • O(n)

without any optimization.

--

Additional example - e (Euler's number):

Calculating e as a stream of approximations converging to e

e can be defined as lim n->inf ( 1 + 1/n)^n and therefore expressed directly as a function e of n as follows:

def e(n : Int) = { Math.pow((1.0 + (1.0/n)), n) }

So, how can we turn this into a stream? We can create a stream of numbers from 1 and map them with the e function, written as an anonymous function argument to map:

val es = Stream.from(1).map((n => { Math.pow((1.0 + (1.0/n)), n) }))

Example usage of the e stream:

es take 5 toList

