Thursday, October 14, 2010

OpenLDAP overlays

Resource --


http://www.zytrax.com/books/ldap/ch6/ppolicy.html

Weblogic Scripting Tool WLST

Resource

file:///F:/bea/weblogic92/samples/server/docs/core/index.html

Weblogic Node Manager importance

Resource

http://blogs.oracle.com/jamesbayer/2010/01/weblogic_nodemanager_quick_sta.html

GC Viewer tool to analyze gc logs

Resource --

http://www.javaperformancetuning.com/tools/gcviewer/index.shtml

FreeTTS - Text To Speech library

http://www.linuxfromscratch.org/blfs/view/6.2.0/multimedia/freetts.html

CXF webservice

Resource --

http://www.jroller.com/gmazza/entry/web_service_tutorial

Unit Testing Servlets with Weblogic and Cactus

Resource

http://www.abcseo.com/papers/cactus-wl51.htm

Luntbuild Demo

Link for the Luntbuild demo --

http://demo.pmease.com/

Choosing the right Collection

Here is a guide for selecting the proper implementation of a Set, List, or Map. It was compiled for Java 1.4. Many additions have been made to the Collections Framework since then (notably the Queue and Deque interfaces, and various items in java.util.concurrent). These later additions have been omitted here, since this briefer summary should suffice for most cases.
The best general purpose or 'primary' implementations are likely ArrayList, LinkedHashMap, and LinkedHashSet. They are marked below as " * ". Their overall performance is better, and you should use them unless you need a special feature provided by another implementation. That special feature is usually ordering or sorting.
Here, "ordering" refers to the order of items returned by an Iterator, and "sorting" refers to sorting items according to Comparable or Comparator.
 
Interface   HasDuplicates?      Implementations                           Historical
---------   -----------------   ---------------------------------------   ---------------------
Set         no                  HashSet ... LinkedHashSet* ... TreeSet    --
List        yes                 ArrayList* ... LinkedList                 Vector, Stack
Map         no duplicate keys   HashMap ... LinkedHashMap* ... TreeMap    Hashtable, Properties
Principal features of non-primary implementations :
  • HashMap has slightly better performance than LinkedHashMap, but its iteration order is undefined
  • HashSet has slightly better performance than LinkedHashSet, but its iteration order is undefined
  • TreeSet is ordered and sorted, but slow
  • TreeMap is ordered and sorted, but slow
  • LinkedList has fast adding to the start of the list, and fast deletion from the interior via iteration
Iteration order for above implementations :
  • HashSet - undefined
  • HashMap - undefined
  • LinkedHashSet - insertion order
  • LinkedHashMap - insertion order of keys (by default), or 'access order'
  • ArrayList - insertion order
  • LinkedList - insertion order
  • TreeSet - ascending order, according to Comparable / Comparator
  • TreeMap - ascending order of keys, according to Comparable / Comparator
For LinkedHashSet and LinkedHashMap, the re-insertion of an item does not affect insertion order. For LinkedHashMap, 'access order' is from the least recent access to the most recent access. In this context, only calls to get, put, and putAll constitute an access, and only calls to these methods affect access order.
While being used in a Map or Set, these items must not change state (hence, it is recommended that these items be immutable objects):
  • keys of a Map
  • items in a Set
Sorting requires either that :
  • the stored items implement Comparable, or
  • a Comparator for the stored items is defined
To retain the order of a ResultSet as specified in an ORDER BY clause, insert the records into a List or a LinkedHashMap.
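To make the ordering behaviour described above concrete, here is a small sketch (the class name and sample data are illustrative):

import java.util.*;

public class IterationOrderDemo {
  public static void main(String[] args) {
    List<String> words = Arrays.asList("banana", "apple", "cherry");

    // HashSet: undefined order; LinkedHashSet: insertion order;
    // TreeSet: ascending order, per Comparable.
    System.out.println(new HashSet<String>(words));
    System.out.println(new LinkedHashSet<String>(words));
    System.out.println(new TreeSet<String>(words));

    // LinkedHashMap with accessOrder=true iterates from the least
    // recently accessed entry to the most recently accessed one.
    Map<String, Integer> lru = new LinkedHashMap<String, Integer>(16, 0.75f, true);
    lru.put("a", 1);
    lru.put("b", 2);
    lru.put("c", 3);
    lru.get("a");            // "a" becomes the most recently accessed
    System.out.println(lru); // prints {b=2, c=3, a=1}
  }
}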


Resource

http://www.javapractices.com/topic/TopicAction.do?Id=65

Continuous Integration

Any Agile Project Manager worth his salt should be aware of the term ‘Continuous Integration’ (often shortened to ‘CI’). But what is it, and how is it done?
This series of short blog articles aims to answer these two questions, so you can start your next project, or re-configure an existing project, armed with the necessary understanding about this key practice within agile software delivery.
Background
The basic premise of CI is pretty straightforward. An agile team needs a repeatable and reliable method to create a build of the software under development. Why so? Well, if it's not already obvious, you may want to revisit the principles behind the Agile Manifesto. Within them you will notice a number of references to 'working software', and the foundation of any working software is a stable, tested build.
Recipe for CI
So how does CI help to create this build? Let's list the essential ingredients that we need :
  1. Source Code Control – in a typical agile project, developers turn User Stories into source code, in whatever programming language(s) the project is using. Once their work is at an appropriate level of completeness, they check in or commit their work to the source code (a.k.a. version) control system; for example, Subversion.
  2. Build Tool – if the source code needs to be compiled (e.g. Java or C++) then we will need tooling to support that. Modern Integrated Development Environments (IDEs), such as Eclipse or Visual Studio, are able to perform this task as developers save source code files. But if we want to build the software independently of an IDE in an automated fashion, say on a server environment, we need an additional tool to do this. Examples of this type of tool are Ant, Maven, Rake and Make. These tools can also package a binary output from the build. For example, with Java projects this might be a JAR or WAR file – the deployable unit that represents the application being developed.
  3. Test Tools – as part of the build process, in addition to compilation and the creation of binary outputs, we should also verify that (at minimum) the unit tests pass. For example, in Java these are often written using the JUnit automated unit testing framework. The tools in (2) often natively support the running of such tests, so they should always be executed during a build. In addition to unit testing, there are numerous other quality checks we can perform and status reports CI can produce. I'll cover these in detail in a subsequent part of this series.
  4. Schedule or Trigger – we might want to create our build according to a schedule (e.g. 'every afternoon') or when there is a change in the state of the project source code. In the latter case we can set up a simple rule that triggers a build whenever a developer changes the state of the source code by committing his/her changes, as outlined in (1). This has the effect of ensuring that your team's work is continuously integrated to produce a stable build, and, as you may have guessed, is where this practice gets its name.
  5. Notifications – the team needs to know when a build fails, so it can respond and fix the issue. There are lots of ways to notify a team these days – instant messaging, Twitter, etc. – but the most common by far is still email.
  6. Continuous Integration Server – the tool that wires the other five elements together. It interacts with the source control system to obtain the latest revision of the code, launches the build tool (which also runs the unit tests) and notifies us of any failures. And it does this according to a schedule or a state-change-based trigger. A CI server often also provides a web-based interface that allows a team to review the status, metrics and data associated with each build.
CI Server options
There is a pretty overwhelming choice of available tools in this space. Some are open source, some proprietary. I don't have time to go into all the available options here, unfortunately. However, there is a handy feature comparison matrix available here. Of course, it would be remiss of me not to mention our own hosted service, which allows you to get started with CI in no time at all, without having to be an 'expert' user.

Resource --

http://www.theserverside.com/discussions/thread.tss?thread_id=60718

Implementing Singleton in cluster environment - Option 3

Clustering and RMI Singletons

Clustering means having J2EE containers that run on different VMs and talk to each other. Clustering is used to provide load balancing and failover for J2EE clients.
The simple/local Singleton as shown is a non-distributed object. Therefore, in a clustered environment you will end up with at least one Singleton object on each server. This may of course be fine for the design requirements.
However, if the design calls for one Singleton for the whole cluster, a common approach is to implement a "pinned service". This refers to an RMI object that is located on only one container in the cluster. Its stub is then registered in the clustered JNDI tree, making the object available cluster-wide. This of course raises one issue: what happens when the server containing the RMI Singleton crashes?
A container in the cluster could try to bind a new RMI Singleton if it notices it is missing from the JNDI tree. However, this could cause problems if all the containers try to bind new RMI Singletons at the same time in response to a failure.
At the end of the day, RMI Singletons do tend to have the drawback that they end up as single points of failure.
In the following code example a local Singleton acts as a wrapper around an RMI object that is bound into the cluster's JNDI tree.
import javax.naming.*;
import javax.rmi.PortableRemoteObject;

public class RMISingletonWrapper {
  // Eagerly created local wrapper; the real Singleton lives in JNDI.
  private static RMISingletonWrapper instance = new RMISingletonWrapper();
  private static final String SINGLETON_JNDI_NAME = "RMISingleton";

  public static RMISingletonWrapper getInstance() {
    return instance;
  }

  // All methods delegate the call to the actual
  // Singleton that lives on the clustered JNDI tree.
  public void delegate() {
    try {
      RMISingleton singleton = getRMISingleton();
      singleton.delegate();
    } catch (Exception e) {
      // Could try and recover
      e.printStackTrace();
    }
  }

  // Locate the true Singleton object in the cluster.
  private RMISingleton getRMISingleton() {
    RMISingleton rmiSingleton = null;
    try {
      Context jndiContext = new InitialContext();
      Object obj = jndiContext.lookup(SINGLETON_JNDI_NAME);
      rmiSingleton = (RMISingleton)PortableRemoteObject.narrow(
        obj,
        Class.forName("examples.singleton.rmi.RMISingleton"));
    } catch (Exception e) {
      // Could try and recover
      e.printStackTrace();
    }
    return rmiSingleton;
  }
}
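Callers always go through the local wrapper, so the JNDI lookup and narrowing stay in one place. Assuming the RMISingleton interface and its JNDI binding exist as described above, usage is simply:

// Delegates through the wrapper to the single RMI object
// pinned on one container in the cluster.
RMISingletonWrapper.getInstance().delegate();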

Distributed Singleton Caches

One of the most common usages of Singletons is as caches of data. This usage has problems for non-RMI Singletons in a clustered environment: since a Singleton instance exists on each container, an update to the cached data by one Singleton will not be replicated to the Singletons on the other containers.
This issue can be resolved by using the Java Messaging API to send update messages between containers. In this approach, when an update is made to the cache on one container, a message is published to a JMS Topic. Each container has a listener that subscribes to that topic and updates its Singleton cache based on the messages it receives. This approach is still tricky, as you have to make sure that the updates received on each container are handled in a synchronized fashion. JMS messages also take time to process, so the caches may spend some time out of sync.
In the following simplistic implementation of a distributed cache, a CacheManager Singleton holds a Map of cached items. Items to be cached are placed in a CacheItem object, which implements the ICacheItem interface.
The CacheManager makes no attempt to remove old items from the cache based on criteria like "last accessed time".
import java.util.HashMap;
import java.util.Map;
import javax.jms.*;
import javax.naming.InitialContext;

public class CacheManager implements MessageListener {
  private static CacheManager instance = null;
  private static Map cache = new HashMap();

  private TopicConnectionFactory topicConnectionFactory;
  private TopicConnection topicConnection;
  private TopicSession topicSession;
  private Topic topic;
  private TopicSubscriber topicSubscriber;
  private TopicPublisher topicPublisher;

  private final static String CONNECTION_FACTORY_JNDI_NAME =
    "ConnectionFactory";
  private final static String TOPIC_NAME = "TopicName";

  public static void initInstance() {
    instance = new CacheManager();
  }

  public static CacheManager getInstance() {
    return instance;
  }

  public synchronized void addCacheItem(ICacheItem cacheItem) {
    // Store the item itself so getCacheItem() can cast it back.
    cache.put(cacheItem.getId(), cacheItem);
    CacheMessage cacheMessage = new CacheMessage();
    cacheMessage.setMessageType(CacheMessage.ADD);
    cacheMessage.setCacheItem(cacheItem);
    sendMessage(cacheMessage);
  }

  public synchronized void modifyCacheItem(ICacheItem cacheItem) {
    cache.put(cacheItem.getId(), cacheItem);
    CacheMessage cacheMessage = new CacheMessage();
    cacheMessage.setMessageType(CacheMessage.MODIFY);
    cacheMessage.setCacheItem(cacheItem);
    sendMessage(cacheMessage);
  }

  public ICacheItem getCacheItem(String key) {
    return (ICacheItem)cache.get(key);
  }

  private CacheManager() {
    try {
      InitialContext context = new InitialContext();
      topicConnectionFactory = (TopicConnectionFactory)
        context.lookup(CONNECTION_FACTORY_JNDI_NAME);
      topicConnection = topicConnectionFactory.createTopicConnection();
      topicSession = topicConnection.createTopicSession(
        false, Session.AUTO_ACKNOWLEDGE);
      topic = (Topic) context.lookup(TOPIC_NAME);
      topicSubscriber = topicSession.createSubscriber(topic);
      topicSubscriber.setMessageListener(this);
      topicPublisher = topicSession.createPublisher(topic);
      topicConnection.start();
    } catch (Exception e) {
      e.printStackTrace();
    }
  }

  public void onMessage(Message message) {
    try {
      if (message instanceof ObjectMessage) {
        ObjectMessage om = (ObjectMessage) message;
        CacheMessage cacheMessage = (CacheMessage) om.getObject();
        interpretCacheMessage(cacheMessage);
      }
    } catch (JMSException jmse) {
      jmse.printStackTrace();
    }
  }

  private void interpretCacheMessage(CacheMessage cacheMessage) {
    ICacheItem cacheItem = cacheMessage.getCacheItem();
    int type = cacheMessage.getMessageType();
    if (type == CacheMessage.ADD || type == CacheMessage.MODIFY) {
      synchronized (this) {
        // Store the item itself so getCacheItem() can cast it back.
        cache.put(cacheItem.getId(), cacheItem);
      }
    }
  }

  private void sendMessage(CacheMessage cacheMessage) {
    try {
      Message message = topicSession.createObjectMessage(cacheMessage);
      topicPublisher.publish(message);
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}
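For context, usage might look like the following sketch (CacheItem is a hypothetical ICacheItem implementation; the key and value are illustrative):

// During application startup on each container:
CacheManager.initInstance();

// Any container can then update the cache; the JMS topic fans the
// change out to the CacheManager instances on the other containers.
CacheManager manager = CacheManager.getInstance();
manager.addCacheItem(new CacheItem("user:42", "some data")); // hypothetical CacheItem
ICacheItem cached = manager.getCacheItem("user:42");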

Class Loading

Containers tend to implement their own class loading structures to support hot deployment of J2EE components and class isolation for WAR files.
Class isolation in WAR files means that all classes found in a WAR file must be isolated from other deployed WAR files. Each WAR file is therefore loaded by a separate instance of the class loader. The purpose is to allow each WAR file to have its own version of commonly named JSPs like "index.jsp".
If a Singleton class is located in several WAR files, a separate Singleton instance will be created for each WAR file, as the sketch below demonstrates. This may of course be fine for the required design, but it is worth being aware of.
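The effect can be demonstrated outside a container with two sibling class loaders, which is essentially what separate WAR class loaders are. A minimal sketch (the MySingleton class name and the file path are illustrative):

import java.net.URL;
import java.net.URLClassLoader;

public class ClassLoaderIsolationDemo {
  public static void main(String[] args) throws Exception {
    // Assumes MySingleton.class has been compiled into this directory.
    URL[] path = { new URL("file:/tmp/classes/") };

    // Two loaders with no common parent that knows the class,
    // just like two WAR files in a container.
    ClassLoader loaderA = new URLClassLoader(path, null);
    ClassLoader loaderB = new URLClassLoader(path, null);

    Class<?> a = loaderA.loadClass("MySingleton");
    Class<?> b = loaderB.loadClass("MySingleton");

    // Same class file, but two distinct Class objects, hence two
    // independent sets of static fields, i.e. two "singletons".
    System.out.println(a == b); // prints false
  }
}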




Resource --

http://www.roseindia.net/javatutorials/J2EE_singleton_pattern.shtml

Implementing Singleton in cluster environment - Option 2

This can be achieved by using the initial context and binding the map to it.

  • Improving performance with the Singleton pattern and caching. The Singleton pattern [ GHJV95 ] ensures that only a single instance of a class exists in an application. The meaning of the term "singleton" is not always clear in a distributed environment; in ServiceLocator it means that only one instance of the class exists per class loader.
    The Singleton pattern improves performance because it eliminates unnecessary construction of ServiceLocator objects and JNDI InitialContext objects, and because it enables caching (see below).
    The Web-tier service locator also improves performance by caching the objects it finds. The cache lookup ensures that a JNDI lookup only occurs once for each name. Subsequent lookups come from the cache, which is typically much faster than a JNDI lookup.
    The code excerpt below demonstrates how the ServiceLocator improves performance with the Singleton pattern and an object cache.
    
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class ServiceLocator {

        private InitialContext ic;
        private Map cache;

        private static ServiceLocator me;

        static {
          try {
            me = new ServiceLocator();
          } catch (ServiceLocatorException se) {
            System.err.println(se);
            se.printStackTrace(System.err);
          }
        }

        private ServiceLocator() throws ServiceLocatorException {
          try {
            ic = new InitialContext();
            cache = Collections.synchronizedMap(new HashMap());
          } catch (NamingException ne) {
            throw new ServiceLocatorException(ne);
          }
        }

        static public ServiceLocator getInstance() {
          return me;
        }

        // ... cached lookup methods omitted ...
    }
              
    A private class variable me contains a reference to the only instance of the ServiceLocator class. It is constructed when the class is initialized in the static initialization block shown. The constructor initializes the instance by creating the JNDI InitialContext and the HashMap that is used as a cache. Note that the no-argument constructor is private: only the ServiceLocator class can construct a ServiceLocator. Because only the static initialization block creates the instance, there can be only one instance per class loader.
    Classes that use the service locator access the singleton ServiceLocator instance by calling the public method getInstance.
    Each object looked up has a JNDI name which, being unique, can be used as a cache HashMap key for the object. Note also that the HashMap used as a cache is synchronized, so that it may be safely accessed from multiple threads that share the singleton instance.
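    For illustration, a cached lookup in this style might look like the following sketch (the method name is illustrative, not part of the blueprint excerpt above):

    // Hypothetical helper in the same style as the excerpt: look a JNDI
    // name up once, then serve later requests from the cache.
    public Object lookup(String jndiName) throws ServiceLocatorException {
        Object cached = cache.get(jndiName);
        if (cached != null) {
            return cached;
        }
        try {
            Object found = ic.lookup(jndiName);
            cache.put(jndiName, found);
            return found;
        } catch (NamingException ne) {
            throw new ServiceLocatorException(ne);
        }
    }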


Resource --
http://java.sun.com/blueprints/patterns/ServiceLocator.html
http://www.roseindia.net/javatutorials/J2EE_singleton_pattern.shtml

    Implementing Singleton in cluster environment - Option 1

    Step 1: Write a singleton class which will implement the interface weblogic.cluster.singleton.SingletonService (a sketch follows this list).
    • public void activate()
      This method should obtain any system resources and start any services required for the singleton service to begin processing requests. This method is called in the following cases:
      • When a newly deployed application is started
      • During server start
      • During the activation stage of service migration
    • public void deactivate()
      This method is called during server shutdown and during the deactivation stage of singleton service migration. It should release any resources obtained through the activate() method. Additionally, it should stop any services that should only be available from one member of a cluster.
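    A minimal sketch of such a class (the class name matches the descriptor in Step 3; the log messages are illustrative):

    package mypackage;

    import weblogic.cluster.singleton.SingletonService;

    public class MySingletonServiceImpl implements SingletonService {

        // Called on exactly one cluster member: when a newly deployed
        // application starts, during server start, or on migration.
        public void activate() {
            // Obtain system resources and start cluster-wide services here.
            System.out.println("Singleton service activated on this server");
        }

        // Called during server shutdown or the deactivation stage of
        // singleton service migration.
        public void deactivate() {
            // Release everything obtained in activate().
            System.out.println("Singleton service deactivated");
        }
    }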

    Step 2: Package the singleton class in a JAR and copy it to the APP-INF/lib folder so that it is picked up during application initialization.


    Step 3:
    Add the following entry to the weblogic-application.xml descriptor file.
    <weblogic-application>
    ...
       <singleton-service>
          <class-name>mypackage.MySingletonServiceImpl</class-name>
          <name>Appscoped_Singleton_Service</name>
       </singleton-service>
    ...
    </weblogic-application>
      
     
     
    Resource -- 
    http://download.oracle.com/docs/cd/E11035_01/wls100/cluster/service_migration.html#wp1051471 
     

    Monday, October 11, 2010

    Configuring the Sun ONE Web Server Reverse Proxy Plug-in




    1. magnus.conf

    # ****** Weblogic Proxy plug-in ******
    Init fn="load-modules" funcs="wl_proxy,wl_init" shlib="E:/Sun/WebServer6.1/plugins/weblogic/proxy61.dll"
    Init fn="wl_init"
    # ****** End Weblogic plug-in *****


    2. obj.conf:
    Configuration of the obj.conf varies depending on the intended use. See the Java System Web Server documentation for use and syntax of the obj.conf.
    Example 1
    This configuration will proxy the URI “/example” if it does not exist locally. A local copy of “/example” is preferred to a remote copy:


    <Object name="default">
    # Assign the URI "/example" (and any more specific URIs;
    # /example/foo.html, /example/qwe.jsp, etc) the object name
    # "server.example.com"
    NameTrans fn="assign-name" from="/example(|/*)" name="server.example.com"
    ...
    </Object>
    # Execute these instructions for any resource with the assigned name
    # "server.example.com"
    <Object name="server.example.com">
    # Check to see if a local copy of the requested resource exists. Only
    # proxy the request if there is not a local copy.
    ObjectType fn="check-passthrough" type="magnus-internal/passthrough"
    # Proxy the requested resource to the URL
    # "http://server.example.com:8080" only if the "type" has been set to
    # "magnus-internal/passthrough"
    Service type="magnus-internal/passthrough" fn="service-passthrough" servers="http://server.example.com:8080"
    </Object>


    Example 2
    This configuration will proxy all requests for the URI “/app” without first checking for a local version. The reverse proxy plug-in provides its own credentials via Basic-Auth to the origin server.


    <Object name="default">
    # Assign the URI "/app" (and any more specific URIs;
    # /app/foo.html, /app/qwe.jsp, etc) the object name
    # "server.example.com"
    NameTrans fn="assign-name" from="/app(|/*)" name="server.example.com"
    ...
    </Object>
    # Execute these instructions for any resource with the assigned name
    # "server.example.com"
    <Object name="server.example.com">
    # Proxy the requested resource to the URL
    # "http://server.example.com:8080"
    Service fn="service-passthrough" servers="http://server.example.com:8080" user="blues" password="j4ke&elwOOd"
    </Object>



                   
    The following obj.conf snippet demonstrates the use of auth-passthrough (note that these lines are not indented in a real obj.conf):
                   
    <Object name="default">
    AuthTrans fn="auth-passthrough"
    ...
    </Object>



    check-passthrough:
    The check-passthrough ObjectType SAF checks to see if the requested resource (for example, the HTML document or GIF image) is available on the local server. If the requested resource does not exist locally, check-passthrough sets the type to indicate that the request should be passed to another server for processing by service-passthrough.
    The check-passthrough SAF accepts the following parameters:
    • type — (Optional) The type to use for files that do not exist locally. If not specified, type defaults to magnus-internal/passthrough.

    service-passthrough

    The service-passthrough Service SAF forwards a request to another server for processing.
    The service-passthrough SAF accepts the following parameters:

    servers — A quoted, space-delimited list of servers that receive the forwarded requests. Individual server names may optionally be prefixed with http:// or https:// to indicate the protocol, or suffixed with a colon and integer to indicate the port.

    sticky-cookie — (Optional) The name of a cookie that causes requests from a given client to “stick” to a particular server. Once a request containing a cookie with this name is forwarded to a given server, service-passthrough attempts to forward subsequent requests from that client to the same server by sending a JROUTE header back to the client. If not specified, sticky-cookie defaults to JSESSIONID.

    user — (Optional) The username that service-passthrough uses to authenticate to the remote server via Basic-Auth. Note that ‘user’ requires that ‘password’ also be specified.

    password — (Optional) The password that service-passthrough uses to authenticate to the remote server via Basic-Auth. Note that ‘password’ requires that ‘user’ also be specified.

    client-cert-nickname — (Optional) Nickname of the client certificate that service-passthrough uses to authenticate to the remote server.

    validate-server-cert — (Optional) Boolean that indicates whether service-passthrough should validate the certificate presented by the remote server. If not specified, validate-server-cert defaults to false.
    rewrite-host — (Optional) Boolean that indicates whether service-passthrough should rewrite the Host header sent to remote servers, replacing the local server’s hostname with the remote server’s hostname. If not specified, rewrite-host defaults to false.

    rewrite-location — (Optional) Boolean that indicates whether service-passthrough should rewrite the Location headers returned by a remote server, replacing the remote server’s scheme and hostname with the local server’s scheme and hostname. If not specified, rewrite-location defaults to true.

    ip-header — (Optional) Name of the header that contains the client’s IP address, or "" if the IP address should not be forwarded. If not specified, ip-header defaults to Proxy-ip.

    cipher-header — (Optional) Name of the header that contains the symmetric cipher used to communicate with the client (when SSL/TLS is used), or "" if the symmetric cipher name should not be forwarded. If not specified, cipher-header defaults to Proxy-cipher.

    keysize-header — (Optional) Name of the header that contains the symmetric key size used to communicate with the client (when SSL/TLS is used), or "" if the symmetric key size should not be forwarded. If not specified, keysize-header defaults to Proxy-keysize.

    secret-keysize-header — (Optional) Name of the header that contains the effective symmetric key size used to communicate with the client (when SSL/TLS is used), or "" if the effective symmetric key size should not be forwarded. If not specified, secret-keysize-header defaults to Proxy-secret-keysize.

    ssl-id-header — (Optional) Name of the header that contains the client’s SSL/TLS session ID  (when SSL/TLS is used), or "" if the SSL/TLS session ID should not be forwarded. If not specified, ssl-id-header defaults to Proxy-ssl-id.

    issuer-dn-header — (Optional) Name of the header that contains the client certificate issuer DN (when SSL/TLS is used), or "" if the client certificate issuer DN should not be forwarded. If not specified, issuer-dn-header defaults to Proxy-issuer-dn.

    user-dn-header — (Optional) Name of the header that contains the client certificate user DN (when SSL/TLS is used), or "" if the client certificate user DN should not be forwarded. If not specified, user-dn-header defaults to Proxy-user-dn.

    auth-cert-header — (Optional) Name of the header that contains the DER-encoded client certificate in Base64 encoding (when SSL/TLS is used), or "" if the client certificate should not be forwarded. If not specified, auth-cert-header defaults to Proxy-auth-cert.

    When multiple remote servers are configured, service-passthrough chooses a single remote server from the list on a request-by-request basis. If a remote server cannot be contacted or returns an invalid response, service-passthrough sets the status code to 502 Bad Gateway and returns REQ_ABORTED, which returns an error to the browser. This error can be customized in the Web Server by configuring a customized response for the 502 error code.

    When user and password are specified, service-passthrough uses these credentials to authenticate to the remote server using HTTP basic authentication. When one or more of the servers in the servers parameter are configured with an https:// prefix, client-cert-nickname specifies the nickname of the client certificate service-passthrough uses to authenticate to the remote server.

    Note that service-passthrough generally uses HTTP/1.1 and persistent connections for outbound requests, with the following exceptions:
    • When forwarding a request with a Range header that arrived via HTTP/1.0, service-passthrough issues an HTTP/1.0 request. This is done because the experimental Range semantics expected by Netscape HTTP/1.0 clients differ from the Range semantics defined by the HTTP/1.1 specification.
    • When forwarding a request with a request body (e.g. a POST request), service-passthrough does not reuse an existing persistent connection. This is done because the remote server is free to close a persistent connection at any time, and service-passthrough does not retry requests with a request body.

    In addition, service-passthrough encodes information about the originating client in the headers named by the ip-header, cipher-header, keysize-header, secret-keysize-header, ssl-id-header, issuer-dn-header, user-dn-header, and auth-cert-header parameters (removing any client-supplied headers with the same name) before forwarding the request. Applications running on the remote server may examine these headers to extract information about the originating client.


    Additional Resources
    Sun Java System Web Server: www.sun.com/webserver
    • Downloads: Web Server and Reverse Proxy Plug-in: http://www.sun.com/download/
    • Security and reverse proxy information: http://wwws.sun.com/software/products/web_srvr/security.html

    Friday, October 8, 2010

    Sun One performance configuration parameters




    SJSWS can be used in several ways: as a Servlet/JSP engine, as a static-file server, and/or to run traditional NSAPI plug-ins.
    Unless a lot of caching or a huge Java heap is needed, the 32-bit web server is good for most generic cases.
    Given below are some generic tunings applicable to a server capable of serving all of the above for about 8000 connections.
    magnus.conf
    ---------------------

    ListenQ: 8192
    ConnQueueSize: 8192
    RqThrottle: 128
    ThreadIncrement: 128
    UseNativePoll: 1
    KeepAliveTimeout: 30
    MaxKeepAliveConnections: 8192
    KeepAliveThreads: 2
    KeepAliveQueryMeanTime: 50

    Init fn="cache-init" disable="true"
    Init fn="pool-init" block-size="65536"

    nsfc.conf
    ---------------------

    FileCacheEnable=on
    CacheFileContent=on
    TransmitFile=off
    MaxAge=3600
    MediumFileSizeLimit=1000001
    MediumFileSpace=1
    SmallFileSizeLimit=500000
    SmallFileSpace=1000000000
    MaxFiles=16384
    MaxOpenFiles=16384

    server.xml
    ---------------------

    Make sure to use the following JVM parameters
    <JVMOPTIONS>-server</JVMOPTIONS>
    <JVMOPTIONS>-Xbatch</JVMOPTIONS>
    <JVMOPTIONS>-Xloggc:/tmp/gc.log</JVMOPTIONS>
    <JVMOPTIONS>-Xmx1024m</JVMOPTIONS>
    <JVMOPTIONS>-Xms1024m</JVMOPTIONS>
    <JVMOPTIONS>-XX:ParallelGCThreads=4</JVMOPTIONS>
    <JVMOPTIONS>-XX:+DisableExplicitGC</JVMOPTIONS>
    <JVMOPTIONS>-XX:-BindGCTaskThreadsToCPUs</JVMOPTIONS>
    Replace
    LIBMTMALLOC=/usr/lib/libmtmalloc.so
    with
    LIBMTMALLOC=/usr/lib/libumem.so

    Configure SSL in the WebLogic application server


    1.      Create a Directory C:\MyCertificates

    2.      Go to the above created folder & add a new file – build.xml

    <project name="Generate Keystores" default="all" basedir=".">
    <property name="alias" value="alias" />
    <property name="dname" value="CN=localhost, OU=Customer Support, O=BEA Systems Inc, L=Denver, ST=Colorado, C=US"/>
    <property name="keypass" value="keypass" />
    <property name="identity.jks" value="identity.jks" />
    <property name="storepass" value="storepass" />
    <property name="cert.cer" value="cert.cer" />
    <property name="trust.jks" value="trust.jks" />
    <property name="jdk.home" value="C:/bea/jdk150_06" />
    <target name="all" depends="create-keystores"/>

    <target name="create-keystores">
    <echo>Generating Identity of the Server</echo>
    <exec executable="${jdk.home}/bin/keytool.exe">
    <arg line='-genkey -alias ${alias} -keyalg RSA -keysize 1024 -dname "${dname}" -keypass ${keypass} -keystore ${identity.jks} -storepass ${storepass}' />
    </exec>
    <echo>Self Signing the Certificate</echo>
    <exec executable="${jdk.home}/bin/keytool.exe">
    <arg line='-selfcert -alias ${alias} -dname "${dname}" -keypass ${keypass} -keystore ${identity.jks} -storepass ${storepass}' />
    </exec>
    <echo>Exporting the Server certificate</echo>
    <exec executable="${jdk.home}/bin/keytool.exe">
    <arg line='-export -alias ${alias} -file ${cert.cer} -keystore ${identity.jks} -storepass ${storepass}' />
    </exec>
    <echo>Creating Trust Store</echo>
    <exec executable="${jdk.home}/bin/keytool.exe">
    <arg line='-import -alias ${alias} -file ${cert.cer} -keystore ${trust.jks} -storepass ${storepass} -noprompt' />
    </exec>
    </target>

    </project>

     

    3.      Now open a command/shell prompt and run <bea.home>\weblogic92\server\bin\startWLS.cmd to set up the WebLogic-specific environment details.

    4.      Run <ant.home>/bin/ant to create all the required certificates.

     

    5.      Create a WLST script to configure SSL on WebLogic: copy the content below into a text file and name it ssl.py.

    Note that the environment-specific details (server name, ports, keystore files and passwords) need to be edited to match the target setup.

    cd ("/Servers/" + server_name)
    set ("ListenAddress", "")
    set ("ListenPort", "7001")
    set ("AdministrationPort", server_domain_override_port)
    set ("KeyStores", "CustomIdentityAndCustomTrust")
    enc_pass = encrypt (trustpass, domain_home)
    set ("CustomTrustKeyStorePassPhraseEncrypted", enc_pass)
    set ("CustomTrustKeyStoreType", "JKS")
    set ("CustomIdentityKeyStoreFileName", keystore_file)
    enc_pass = encrypt (keypass, domain_home)
    set ("CustomIdentityKeyStorePassPhraseEncrypted", enc_pass)
    set ("CustomIdentityKeyStoreType", "JKS")
    set ("CustomTrustKeyStoreFileName", truststore_file)
    ###set ("MSIFileReplicationEnabled", "true")

    # Managed Server SSL Settings
    cd ("/Servers/" + server_name + "/SSL/" + server_name)
    set ("Enabled", "true")
    set ("ListenPort", server_ssl_listen_port)
    set ("HostnameVerificationIgnored", "true")
    set ("ServerPrivateKeyAlias", "weblogic-key")
    set ("ServerPrivateKeyPassPhraseEncrypted", enc_pass)

     

    6.      Run the command <bea.home>\weblogic92\common\bin\wlst.cmd ssl.py

    Alternatively, we can configure this manually using the Admin Console by following the steps below.

    7.      Now log in to the Admin Console to configure these certificates:

    Home >Summary of Servers >AdminServer > General
    SSL Listen Port: Enabled (Check)
    SSL Listen Port: 7002

    Home >Summary of Servers >AdminServer > Keystores
    Keystores: Custom Identity Custom Trust
    Identity
    Custom Identity Keystore: <path>/identity.jks
    Custom Identity Keystore Type: JKS
    Custom Identity Keystore Passphrase: storepass
    Confirm Custom Identity Keystore Passphrase: storepass
    Trust
    Custom Trust Keystore:<path>/trust.jks
    Custom Trust Keystore Type: JKS
    Custom Trust Keystore Passphrase: storepass
    Confirm Custom Trust Keystore Passphrase: storepass
    Click SAVE

    Home >Summary of Servers >AdminServer > SSL
    Identity and Trust Locations: Keystores
    Private Key Alias: alias
    Private Key Passphrase: keypass
    Confirm Private Key Passphrase: keypass
    Click SAVE

     

     

     

    Now try to access the Admin Console on the HTTPS port:

    https://localhost:7002/console
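    To verify the setup programmatically, a small client can connect over HTTPS using the trust store created above (paths and passwords match the build.xml in step 2; adjust to your environment):

    import java.net.URL;
    import javax.net.ssl.HttpsURLConnection;

    public class SslCheck {
      public static void main(String[] args) throws Exception {
        // Trust the self-signed server certificate imported into trust.jks.
        System.setProperty("javax.net.ssl.trustStore", "C:/MyCertificates/trust.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "storepass");

        HttpsURLConnection conn = (HttpsURLConnection)
            new URL("https://localhost:7002/console").openConnection();
        System.out.println("HTTP status: " + conn.getResponseCode());
        System.out.println("Cipher suite: " + conn.getCipherSuite());
        conn.disconnect();
      }
    }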

    OpenLDAP - open source LDAP server

    1.     Download the below two files:
           i.      openldap-2.2.29-db-4.3.29-openssl-0.9.8a-win32_Setup.exe
           ii.     openldap-for-windows.msi
    2.     Edit slapd.conf under the C:\Program Files\OpenLDAP location as below:


    # $OpenLDAP: pkg/ldap/servers/slapd/slapd.conf,v 1.23.2.8 2003/05/24 23:19:14 kurt Exp $
    #
    # See slapd.conf(5) for details on configuration options.
    # This file should NOT be world readable.
    #
    ucdata-path     ./ucdata
    #include                      ./schema/core.schema

    ## updated selfcare schemas
    include                        ./schema/selfcare/Attributes.schema
    include                        ./schema/selfcare/ObjClass.schema
    include                        ./schema/selfcare/ppolicy.schema

    #include                      ./schema/cosine.schema
    #include                      ./schema/nis.schema
    #include                      ./schema/inetorgperson.schema
    #include                      ./schema/openldap.schema
    #include                      ./schema/dyngroup.schema
    #include                      ./schema/java.schema
    #include                      ./schema/attribute.schema
    #include                      ./schema/object.schema


    # Load dynamic backend modules:
    # modulepath /usr/lib/openldap # or /usr/lib64/openldap
    # moduleload accesslog.la
    # moduleload auditlog.la
    # moduleload back_sql.la
    # moduleload denyop.la
    # moduleload dyngroup.la
    # moduleload dynlist.la
    # moduleload lastmod.la
    # moduleload pcache.la
    # moduleload ppolicy.la
    # moduleload refint.la
    # moduleload retcode.la
    # moduleload rwm.la
    # moduleload syncprov.la
    # moduleload translucent.la
    # moduleload unique.la
    # moduleload valsort.la


    # Global Definitions

    serverID          1
    password-hash     {SHA}
    threads           20
    concurrency       20
    #gentlehup         on
    #idletimeout       300
    #loglevel          -1
    sizelimit         1000
    #timelimit         3600
    #readonly          off
    lastmod           on
    #schemacheck            on

    # Define global ACLs to disable default read access.

    # Do not enable referrals until AFTER you have a working directory
    # service AND an understanding of referrals.
    #referral          ldap://root.openldap.org

    pidfile              ./run/slapd.pid
    argsfile            ./run/slapd.args

    # Load dynamic backend modules:
    # modulepath  ./libexec/openldap
    # moduleload  back_bdb.la
    # moduleload  back_ldap.la
    # moduleload  back_ldbm.la
    # moduleload  back_passwd.la
    # moduleload  back_shell.la

    # Enable TLS if port is defined for ldaps

    TLSVerifyClient never
    TLSCipherSuite HIGH:MEDIUM:-SSLv2
    TLSCertificateFile ./secure/certs/server.pem
    TLSCertificateKeyFile ./secure/certs/server.pem
    TLSCACertificateFile ./secure/certs/server.pem

    # Sample security restrictions
    #          Require integrity protection (prevent hijacking)
    #          Require 112-bit (3DES or better) encryption for updates
    #          Require 63-bit encryption for simple bind
    # security ssf=1 update_ssf=112 simple_bind=64

    # Sample access control policy:
    #          Root DSE: allow anyone to read it
    #          Subschema (sub)entry DSE: allow anyone to read it
    #          Other DSEs:
    #                      Allow self write access
    #                      Allow authenticated users read access
    #                      Allow anonymous users to authenticate
    #          Directives needed to implement policy:
    # access to dn.base="" by * read
    # access to dn.base="cn=Subschema" by * read
    # access to *
    #          by self write
    #          by users read
    #          by anonymous auth
    #
    # if no access controls are present, the default policy is:
    #          Allow read by all
    #
    # rootdn can always write!

    #######################################################################
    # bdb database definitions
    #######################################################################

    database         bdb
    suffix               "o=Root"
    rootdn              "o=Root"
    # Cleartext passwords, especially for the rootdn, should
    # be avoided. See slappasswd(8) and slapd.conf(5) for details.
    # Use of strong authentication encouraged.
    #rootpw                       secret
    rootpw {SSHA}ZKKuqbEKJfKSXhUbHG3fG8MDn9j1v4QN
    # The database directory MUST exist prior to running slapd AND
    # should only be accessible by the slapd and slap tools.
    # Mode 700 recommended.
    directory ./data
    dirtyread
    searchstack 20
    # Indices to maintain
    index mail pres,eq
    index objectclass pres
    index default eq,sub
    index sn eq,sub,subinitial
    index telephonenumber
    index cn
    index ou
    #index numsubordinates pres



    ##extra
    #pwdFailureCountInterval 1


    3.     Start the LDAP server by running the run.cmd file from
    C:\Program Files\OpenLDAP\run
    4.     Commands

    slapd -d -1 -h ldap://127.0.0.1 -f slapd1.conf
    ldapadd   -h <ip> -p <port> -D "o=Root" -w <password> -f openldap.ldif
    ldapsearch -v -h <ip> -p <port> -D "o=Root" -w <password> -b 'o=Root' '(ou=*)'
    ldapmodify -h <ip> -p <port> -D "o=Root" -w <password> -f <file>
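    Once slapd is up, the directory can also be exercised from Java via JNDI. A minimal sketch (host, port and password are placeholders; the base DN o=Root matches the slapd.conf above):

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.InitialDirContext;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;

    public class LdapSearchExample {
      public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://127.0.0.1:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "o=Root");       // rootdn from slapd.conf
        env.put(Context.SECURITY_CREDENTIALS, "<password>"); // rootpw

        InitialDirContext ctx = new InitialDirContext(env);
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);

        // Same query as the ldapsearch command above: all entries with an ou.
        NamingEnumeration<SearchResult> results = ctx.search("o=Root", "(ou=*)", controls);
        while (results.hasMore()) {
          System.out.println(results.next().getNameInNamespace());
        }
        ctx.close();
      }
    }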