Thursday, October 14, 2010
Choosing the right Collection
Here is a guide for selecting the proper implementation of a Set, List, or Map. It was compiled for Java 1.4. Many additions have been made to the Collections Framework since then (notably the Queue and Deque interfaces, and various items in java.util.concurrent). These later additions have been omitted here, since this briefer summary should suffice for most cases.
The best general purpose or 'primary' implementations are likely ArrayList, LinkedHashMap, and LinkedHashSet. They are marked below as " * ". Their overall performance is better, and you should use them unless you need a special feature provided by another implementation. That special feature is usually ordering or sorting.
Here, "ordering" refers to the order of items returned by an Iterator, and "sorting" refers to sorting items according to Comparable or Comparator.
Interface | Has duplicates? | Implementations | Historical
Set | no | HashSet ... LinkedHashSet* ... TreeSet |
List | yes | ... ArrayList* ... LinkedList | Vector, Stack
Map | no duplicate keys | HashMap ... LinkedHashMap* ... TreeMap | Hashtable, Properties
Principal features of non-primary implementations:
- HashMap has slightly better performance than LinkedHashMap, but its iteration order is undefined
- HashSet has slightly better performance than LinkedHashSet, but its iteration order is undefined
- TreeSet is ordered and sorted, but slow
- TreeMap is ordered and sorted, but slow
- LinkedList has fast adding to the start of the list, and fast deletion from the interior via iteration
Iteration order for the above implementations:
- HashSet - undefined
- HashMap - undefined
- LinkedHashSet - insertion order
- LinkedHashMap - insertion order of keys (by default), or 'access order'
- ArrayList - insertion order
- LinkedList - insertion order
- TreeSet - ascending order, according to Comparable / Comparator
- TreeMap - ascending order of keys, according to Comparable / Comparator
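To make these iteration-order differences concrete, here is a small sketch (the class name and sample values are illustrative, and it uses current Java syntax rather than the 1.4 style of the original guide):

```java
import java.util.*;

public class IterationOrderDemo {
    public static void main(String[] args) {
        List<String> items = Arrays.asList("banana", "apple", "cherry");

        // LinkedHashSet preserves insertion order.
        Set<String> insertionOrder = new LinkedHashSet<>(items);
        System.out.println(insertionOrder); // [banana, apple, cherry]

        // TreeSet sorts according to Comparable (String's natural order).
        Set<String> sortedOrder = new TreeSet<>(items);
        System.out.println(sortedOrder); // [apple, banana, cherry]

        // HashSet makes no ordering guarantee at all.
        Set<String> undefinedOrder = new HashSet<>(items);
        System.out.println(undefinedOrder.size()); // 3, but in no particular order
    }
}
```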
While being used in a Map or Set, these items must not change state (hence, it is recommended that these items be immutable objects):
- keys of a Map
- items in a Set
For the sorted implementations (TreeSet, TreeMap), sorting requires that either:
- the stored items implement Comparable, or
- a Comparator for the stored objects be defined
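As a quick sketch of the two options, a TreeSet can rely on the elements' natural Comparable order, or take a Comparator at construction (the length-based ordering below is purely illustrative):

```java
import java.util.*;

public class SortedSetDemo {
    public static void main(String[] args) {
        // Option 1: String implements Comparable, so natural (alphabetical) order works.
        Set<String> natural = new TreeSet<>(Arrays.asList("pear", "fig", "apple"));
        System.out.println(natural); // [apple, fig, pear]

        // Option 2: supply a Comparator; here, shortest string first.
        Set<String> byLength = new TreeSet<>(Comparator.comparingInt(String::length));
        byLength.addAll(Arrays.asList("pear", "fig", "apple"));
        System.out.println(byLength); // [fig, pear, apple]
    }
}
```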
Resource
http://www.javapractices.com/topic/TopicAction.do?Id=65
Continuous Integration
Any Agile Project Manager worth his salt should be aware of the term ‘Continuous Integration’ (often shortened to ‘CI’). But what is it, and how is it done?
This series of short blog articles aims to answer these two questions, so you can start your next project, or re-configure an existing project, armed with the necessary understanding about this key practice within agile software delivery.
Background
The basic premise of CI is pretty straightforward. An agile team needs a repeatable and reliable method to create a build of the software under development. Why so? Well, if it’s not already obvious, you may want to revisit the principles behind the Agile Manifesto. Within them you will notice a number of references to ‘working software’, and the foundation of any working software is a stable, tested build.
Recipe for CI
So how does CI help to create this build? Let’s list the essential ingredients that we need:
- Source Code Control – in a typical agile project, developers turn User Stories into source code, in whatever programming language(s) the project is using. Once their work is at an appropriate level of completeness, they check in or commit their work to the source code (a.k.a. version) control system; for example, Subversion.
- Build Tool – if the source code needs to be compiled (e.g. Java or C++) then we will need tooling to support that. Modern Integrated Development Environments (IDEs), such as Eclipse or Visual Studio, are able to perform this task as developers save source code files. But if we want to build the software independently of an IDE in an automated fashion, say on a server environment, we need an additional tool to do this. Examples of this type of tool are Ant, Maven, Rake and Make. These tools can also package a binary output from the build. For example, with Java projects this might be a JAR or WAR file – the deployable unit that represents the application being developed.
- Test Tools – as part of the build process, in addition to compilation and the creation of binary outputs, we should also verify that (at minimum) the unit tests pass. For example, in Java these are often written using the JUnit automated unit testing framework. The build tools above often natively support the running of such tests, so they should always be executed during a build. In addition to unit testing, there are numerous other quality checks we can perform and status reports CI can produce. I’ll cover these in detail in a subsequent part of this series.
- Schedule or Trigger – we might want to create our build according to a schedule (e.g. ‘every afternoon’) or when there is a change in the state of the project source code. In the latter case we can set up a simple rule that triggers a build whenever a developer changes the state of the source code by committing his/her changes, as outlined in the Source Code Control item above. This has the effect of ensuring that your team’s work is continuously integrated to produce a stable build, and, as you may have guessed, is where this practice gets its name from.
- Notifications – the team needs to know when a build fails, so it can respond and fix the issue. There are lots of ways to notify a team these days – instant messaging, Twitter etc. – but the most common by far is still email.
Continuous Integration Recipe
The tool that wires these five elements together is a Continuous Integration Server. It interacts with the source control system to obtain the latest revision of the code, launches the build tool (which also runs the unit tests) and notifies us of any failures. And it does this according to a schedule or a state-change-based trigger. A CI server often also provides a web-based interface that allows a team to review the status, metrics and data associated with each build.
CI Server options
There is a pretty overwhelming choice of available tools in this space. Some are open source, some proprietary. I don’t have time to go into all the available options here unfortunately. However, there is a handy feature comparison matrix available here. Of course, it would be remiss of me not to mention our own hosted service, which allows you to get started with CI in no time at all, without having to be an ‘expert’ user.
Resource --
http://www.theserverside.com/discussions/thread.tss?thread_id=60718
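To make the Build Tool and Test Tools ingredients more concrete, here is a minimal Ant build file that compiles the source and runs the JUnit tests on every build. It is only a sketch: the project name, target names and directory layout are illustrative assumptions, not from the original article.

```xml
<project name="myapp" default="test">
  <!-- Compile all Java sources into the build directory -->
  <target name="compile">
    <mkdir dir="build/classes"/>
    <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
  </target>

  <!-- Run the unit tests as part of every build; fail the build on a test failure -->
  <target name="test" depends="compile">
    <junit haltonfailure="true">
      <classpath path="build/classes"/>
      <batchtest>
        <fileset dir="build/classes" includes="**/*Test.class"/>
      </batchtest>
    </junit>
  </target>
</project>
```

A CI server would then simply invoke the `test` target on each trigger, giving the repeatable, reliable build the article describes.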
Implementing Singleton in cluster environment - Option 3
Clustering and RMI Singletons
Clustering is when J2EE containers running on different VMs talk to each other. Clustering is used to provide load balancing and failover for J2EE clients. The simple/local Singleton as shown is a non-distributed object. Therefore in a clustered environment you will end up with at least one Singleton object on each server. This of course may be OK for the design requirements.
However, if the design is to have one Singleton for the cluster, then a common approach is to implement a "pinned service". This refers to an RMI object that is located on only one container in the cluster. Its stub is then registered on the clustered JNDI tree, making the object available cluster-wide. This of course raises one issue: what happens when the server containing the RMI Singleton crashes?
A Container in the cluster could try to bind a new RMI Singleton if it notices it is missing from the JNDI tree. However, this could cause issues if all the containers try to bind new RMI Singletons at the same time in response to a failure.
At the end of the day, RMI Singletons tend to have the drawback that they end up as single points of failure.
In the following code example a local Singleton is used to act as a wrapper around an RMI object that is bound into the cluster's JNDI tree.
import javax.naming.*;
import javax.rmi.PortableRemoteObject;

public class RMISingletonWrapper {

    // Eagerly create the local wrapper so getInstance() never returns null.
    private static RMISingletonWrapper instance = new RMISingletonWrapper();
    private static final String SINGLETON_JNDI_NAME = "RMISingleton";

    public static RMISingletonWrapper getInstance() {
        return instance;
    }

    // All methods delegate the call to the actual Singleton
    // that lives on the clustered JNDI tree.
    public void delegate() {
        try {
            RMISingleton singleton = getRMISingleton();
            singleton.delegate();
        } catch (Exception e) {
            // Could try to recover
            e.printStackTrace();
        }
    }

    // Locate the true Singleton object in the cluster.
    private RMISingleton getRMISingleton() {
        RMISingleton rmiSingleton = null;
        try {
            Context jndiContext = new InitialContext();
            Object obj = jndiContext.lookup(SINGLETON_JNDI_NAME);
            rmiSingleton = (RMISingleton) PortableRemoteObject.narrow(
                obj, Class.forName("examples.singleton.rmi.RMISingleton"));
        } catch (Exception e) {
            // Could try to recover
            e.printStackTrace();
        }
        return rmiSingleton;
    }
}
Distributed Singleton Caches
One of the most common usages of Singletons is as caches of data. This use has issues for non-RMI Singletons in a clustered environment. Problems happen when you attempt to update the cache. Since a Singleton instance exists on each Container, any update to the cached data by one Singleton will not be replicated to the Singletons on the other Containers.
This issue can be resolved by using the Java Messaging API to send update messages between Containers. In this approach, if an update is made to the cache on one Container, a message is published to a JMS Topic. Each Container has a listener that subscribes to that topic and updates its Singleton cache based on the messages it receives. This approach is still difficult, as you have to make sure that the updates received on each container are handled in a synchronous fashion. JMS messages also take time to process, so the caches may spend some time out of sync.
In the following simplistic implementation of a distributed Cache, a CacheManager Singleton holds a Map of cached items. Items to be cached are placed in a CacheItem object, which implements the ICacheItem interface.
The CacheManager does not make any attempt to remove old items from the Cache based on any criteria like "Last Accessed Time".
import java.util.*;
import javax.jms.*;
import javax.naming.InitialContext;

public class CacheManager implements MessageListener {

    private static CacheManager instance = null;
    private static Map cache = new HashMap();

    private TopicConnectionFactory topicConnectionFactory;
    private TopicConnection topicConnection;
    private TopicSession topicSession;
    private Topic topic;
    private TopicSubscriber topicSubscriber;
    private TopicPublisher topicPublisher;

    private static final String CONNECTION_FACTORY_JNDI_NAME = "ConnectionFactory";
    private static final String TOPIC_NAME = "TopicName";

    public static void initInstance() {
        instance = new CacheManager();
    }

    public static CacheManager getInstance() {
        return instance;
    }

    public synchronized void addCacheItem(ICacheItem cacheItem) {
        cache.put(cacheItem.getId(), cacheItem.getData());
        CacheMessage cacheMessage = new CacheMessage();
        cacheMessage.setMessageType(CacheMessage.ADD);
        cacheMessage.setCacheItem(cacheItem);
        sendMessage(cacheMessage);
    }

    public synchronized void modifyCacheItem(ICacheItem cacheItem) {
        cache.put(cacheItem.getId(), cacheItem.getData());
        CacheMessage cacheMessage = new CacheMessage();
        cacheMessage.setMessageType(CacheMessage.MODIFY);
        cacheMessage.setCacheItem(cacheItem);
        sendMessage(cacheMessage);
    }

    public ICacheItem getCacheItem(String key) {
        return (ICacheItem) cache.get(key);
    }

    private CacheManager() {
        try {
            InitialContext context = new InitialContext();
            topicConnectionFactory = (TopicConnectionFactory)
                context.lookup(CONNECTION_FACTORY_JNDI_NAME);
            topicConnection = topicConnectionFactory.createTopicConnection();
            topicSession = topicConnection.createTopicSession(
                false, Session.AUTO_ACKNOWLEDGE);
            topic = (Topic) context.lookup(TOPIC_NAME);
            topicSubscriber = topicSession.createSubscriber(topic);
            topicSubscriber.setMessageListener(this);
            topicPublisher = topicSession.createPublisher(topic);
            topicConnection.start();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void onMessage(Message message) {
        try {
            if (message instanceof ObjectMessage) {
                ObjectMessage om = (ObjectMessage) message;
                CacheMessage cacheMessage = (CacheMessage) om.getObject();
                interpretCacheMessage(cacheMessage);
            }
        } catch (JMSException jmse) {
            jmse.printStackTrace();
        }
    }

    private void interpretCacheMessage(CacheMessage cacheMessage) {
        ICacheItem cacheItem = cacheMessage.getCacheItem();
        int type = cacheMessage.getMessageType();
        // ADD and MODIFY are handled identically: put the item into the cache.
        if (type == CacheMessage.ADD || type == CacheMessage.MODIFY) {
            synchronized (this) {
                cache.put(cacheItem.getId(), cacheItem.getData());
            }
        }
    }

    private void sendMessage(CacheMessage cacheMessage) {
        try {
            Message message = topicSession.createObjectMessage(cacheMessage);
            topicPublisher.publish(message);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Class Loading
Containers tend to implement their own class loading structures to support hot deployment of J2EE components and class isolation for WAR files. Class isolation in WAR files means that all classes found in a WAR file must be isolated from other deployed WAR files. Each WAR file is therefore loaded by a separate instance of the class loader. The purpose is to allow each WAR file to have its own version of commonly named JSPs like "index.jsp".
If a Singleton class is located in several WAR files, a separate Singleton instance will be created for each WAR file. This may of course be OK for the required design, but it is worth being aware of.
Resource --
http://www.roseindia.net/javatutorials/J2EE_singleton_pattern.shtml
Implementing Singleton in cluster environment - Option 2
This can be achieved by using the initial context and binding the map to it.
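A minimal sketch of this approach, assuming the container exposes a cluster-wide (replicated) JNDI tree; the JNDI name, class name and use of Hashtable as the bound map are all illustrative, not from the original post:

```java
import java.util.Hashtable;
import java.util.Map;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NameAlreadyBoundException;
import javax.naming.NamingException;

public class JndiSingletonMap {

    private static final String JNDI_NAME = "singletonMap"; // illustrative name

    // Bind a shared Map into the cluster-wide JNDI tree; if another
    // node has already bound one, the existing binding wins.
    public static void bind() throws NamingException {
        Context ctx = new InitialContext();
        try {
            ctx.bind(JNDI_NAME, new Hashtable<String, Object>());
        } catch (NameAlreadyBoundException e) {
            // Already bound by another container in the cluster; nothing to do.
        }
    }

    // Every container looks the shared map up through the same name.
    @SuppressWarnings("unchecked")
    public static Map<String, Object> lookup() throws NamingException {
        Context ctx = new InitialContext();
        return (Map<String, Object>) ctx.lookup(JNDI_NAME);
    }
}
```

Note that this only behaves as a single cluster-wide instance if the JNDI tree is actually replicated across the cluster, and running it requires a container-provided JNDI provider, so it cannot be exercised standalone.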
Resource --
http://java.sun.com/blueprints/patterns/ServiceLocator.html
http://www.roseindia.net/javatutorials/J2EE_singleton_pattern.shtml
- Improving performance with the Singleton pattern and caching. The Singleton pattern [GHJV95] ensures that only a single instance of a class exists in an application. The meaning of the term "singleton" is not always clear in a distributed environment; in ServiceLocator it means that only one instance of the class exists per class loader.
The Singleton pattern improves performance because it eliminates unnecessary construction of ServiceLocator objects and JNDI InitialContext objects, and enables caching (see below).
The Web-tier service locator also improves performance by caching the objects it finds. The cache lookup ensures that a JNDI lookup only occurs once for each name. Subsequent lookups come from the cache, which is typically much faster than a JNDI lookup.
The code excerpt below demonstrates how the ServiceLocator improves performance with the Singleton pattern and an object cache.

public class ServiceLocator {
    private InitialContext ic;
    private Map cache;
    private static ServiceLocator me;

    static {
        try {
            me = new ServiceLocator();
        } catch (ServiceLocatorException se) {
            System.err.println(se);
            se.printStackTrace(System.err);
        }
    }

    private ServiceLocator() throws ServiceLocatorException {
        try {
            ic = new InitialContext();
            cache = Collections.synchronizedMap(new HashMap());
        } catch (NamingException ne) {
            throw new ServiceLocatorException(ne);
        }
    }

    static public ServiceLocator getInstance() {
        return me;
    }
    // ... remainder of class omitted in the excerpt
}

A private class variable me contains a reference to the only instance of the ServiceLocator class. It is constructed when the class is initialized in the static initialization block shown. The constructor initializes the instance by creating the JNDI InitialContext and the HashMap that is used as a cache. Note that the no-argument constructor is private: only class ServiceLocator can construct a ServiceLocator. Because only the static initialization block creates the instance, there can be only one instance per class loader.
Classes that use the service locator access the singleton ServiceLocator instance by calling the public method getInstance.
Each object looked up has a JNDI name which, being unique, can be used as a cache HashMap key for the object. Note also that the HashMap used as a cache is synchronized so that it may be safely accessed from multiple threads that share the singleton instance.
Implementing Singleton in cluster environment - Option 1
Step 1: Write a singleton class which will implement the interface weblogic.cluster.singleton.SingletonService. The interface defines two methods:
- public void activate() - this method should obtain any system resources and start any services required for the singleton service to begin processing requests.
- public void deactivate() - this method should release the resources obtained in activate() and stop any services started there.
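A skeleton of such a class might look like the following. The class name and method bodies are illustrative; the interface and its two methods are WebLogic's, so compiling this requires the WebLogic libraries on the classpath:

```java
import weblogic.cluster.singleton.SingletonService;

public class MySingletonServiceImpl implements SingletonService {

    public void activate() {
        // Obtain system resources and start any services the
        // singleton needs before it begins processing requests.
    }

    public void deactivate() {
        // Release the resources obtained in activate() and
        // stop the services started there.
    }
}
```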
Step 2: Package the class in a JAR and copy it to the APP-INF/lib folder so that it is picked up during application initialisation.
Step 3: Add the following entry to the weblogic-application.xml descriptor file.

<weblogic-application>
  ...
  <singleton-service>
    <class-name>mypackage.MySingletonServiceImpl</class-name>
    <name>Appscoped_Singleton_Service</name>
  </singleton-service>
  ...
</weblogic-application>
Resource --
http://download.oracle.com/docs/cd/E11035_01/wls100/cluster/service_migration.html#wp1051471