Friday, August 20, 2010

mysql error: Got a packet bigger than 'max_allowed_packet' bytes

I had recently been working on a project where I was taking db dumps (using mysqldump) and having issues importing them back into a fresh install of MySQL:
% mysql -u [username] -p [database] < sqlDump.sql 
Enter password: 
ERROR 1153 (08S01) at line 1191: Got a packet bigger than 'max_allowed_packet' bytes
This seemed to be related mostly to blobs containing larger PDF files (in my case at least). In trying to figure out how to get past this, I found that the default value for 'max_allowed_packet' on the daemon side (mysqld) was too low:
mysql> select @@max_allowed_packet;
+----------------------+
| @@max_allowed_packet |
+----------------------+
|              1048576 | 
+----------------------+
To fix this, I used the optional configuration files located on my local system (OS X, Snow Leopard) at /usr/local/mysql/support-files/my-xxx.cnf. I copied one of these (the 'my-small.cnf' specifically) to /etc/my.cnf, and edited the file to increase the server default to 64M:
# The MySQL server
[mysqld]
port            = 3306
socket          = /tmp/mysql.sock
skip-locking
key_buffer = 16K
max_allowed_packet = 64M
...
This increases the limit for the server globally (since the file is located in /etc/my.cnf and not in ~/.my.cnf). The increased limit can then be seen after a server restart (64*1024*1024 = 67108864):
mysql> select @@max_allowed_packet;
+----------------------+
| @@max_allowed_packet |
+----------------------+
|             67108864 | 
+----------------------+
After trying to re-import my file, I found that I also needed to raise the limit on the client side while importing, since setting the server limit alone did not solve my issue:
%  mysql --max_allowed_packet=64M -u  [username] -p [database] < sqlDump.sql 

This finally worked in getting past the limitation I was hitting.

Maybe this will be a quick fix for someone else running into this problem as well.

Blank lines in JSP output

I had posted some small entries on my old blog at Sun, and wanted to transfer some of those here for reference's sake.


I was having an issue with a JSP outputting blank lines at the top of its output. With the contentType set to text/xml, this caused a parsing error, since the <?xml... declaration was no longer the first line, resulting in the exception:

'XML or text declaration not at start of entity'

Come to find out, this had been addressed in JSP 2.1 - but it was a bit hard to track down.

Adding the line:

<%@page trimDirectiveWhitespaces="true"%>

to the top of your jsp will remove these, thus letting the XML feeds parse correctly.

A small fix - but it has cured some headaches in creating feed proxies.

Wednesday, August 18, 2010

OSGi, JavaMail, and the mailcap issue

When developing some of the components for our application, I have been seeing some issues with ClassLoaders when creating them as OSGi bundles.

One main case that had me curious for a while was using JavaMail inside an OSGi bundle, and having to send a multipart mail.

The issue was this: JavaMail relies on JAF (the activation framework), which houses a file (mailcap) in its META-INF directory. So, if these (JavaMail and JAF) are stored in separate bundles, JavaMail cannot access the configuration file to determine which MIME types it can handle, and throws an UnsupportedDataTypeException:

javax.activation.UnsupportedDataTypeException: no object DCH for MIME type multipart/alternative; 

An UnsupportedDataTypeException usually occurs because JAF cannot find the DataContentHandler (DCH) for a given MIME type by reading the mailcap.

Glassfish 3 actually bundles these together in one bundle (modules/mail.jar), but I was still having the issue described above.

So I went down the path of trying to figure out what in the world I could do to get past this. You can't really export resources the way you export packages in the manifest, so importing into my bnd file didn't work, and even manually forcing new mailcap entries (which seemed to work elsewhere) didn't work:

MailcapCommandMap mc = (MailcapCommandMap) CommandMap.getDefaultCommandMap();
mc.addMailcap("text/plain;; x-java-content-handler=com.sun.mail.handlers.text_plain");
mc.addMailcap("text/html;; x-java-content-handler=com.sun.mail.handlers.text_html");
mc.addMailcap("text/xml;; x-java-content-handler=com.sun.mail.handlers.text_xml");
mc.addMailcap("multipart/*;; x-java-content-handler=com.sun.mail.handlers.multipart_mixed; x-java-fallback-entry=true");
mc.addMailcap("message/rfc822;; x-java-content-handler=com.sun.mail.handlers.message_rfc822");
CommandMap.setDefaultCommandMap(mc);

This is basically just pushing through exactly what is in the mailcap file directly. But - this didn't work either. Odd...

I then went as far as creating a new instance of the specific handler being used, and testing the support for its DataFlavors:

DataContentHandler dhmm = new com.sun.mail.handlers.multipart_mixed();
DataFlavor[] dtf = dhmm.getTransferDataFlavors();
for (DataFlavor tmpdf : dtf) {
 log.debug("   isSupported? " + tmpdf.getMimeType() + ":" + message.getDataHandler().isDataFlavorSupported(tmpdf));
}

And it shows it is supported: isSupported? multipart/mixed:true

Yet - when sending the message, same Exception. Ugh...

Finally, Sahoo (from the Glassfish team) suggested manipulating the ClassLoader when making the calls, saving the current ClassLoader so it could be put back into place afterwards.

In our bundle, we create the session and send the message in two different methods, so this had to be implemented twice, but finally - it worked!

// There is an issue in the OSGi framework preventing the MailCap
// from loading correctly. When getting the session here,
// temporarily set the ClassLoader to the loader inside the bundle
// that houses javax.mail. Reset at the end.
ClassLoader tcl = Thread.currentThread().getContextClassLoader();

try {
    // Set the ClassLoader to the javax.mail bundle loader.
    Thread.currentThread().setContextClassLoader(javax.mail.Session.class.getClassLoader());

    ...
} finally {
    // Reset the ClassLoader where it should be.
    Thread.currentThread().setContextClassLoader(tcl);
}

This is now working fine. I was a bit leery about mucking with the ClassLoaders here - which was also an issue when using JRuby code inside OSGi bundles - but this seems OK, since we change the loader temporarily and immediately change it back.
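
Since the block above elides the actual mail calls, here is a minimal sketch of the full pattern (the class, the method names, and the SMTP host are illustrative assumptions, not our actual code):

import java.util.Properties;

import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMultipart;

public class BundleMailSender {

    public void sendMultipart(String from, String to, String subject, String text) throws Exception {
        // Save the current context ClassLoader so it can be restored.
        ClassLoader tcl = Thread.currentThread().getContextClassLoader();
        try {
            // Use the loader of the bundle that houses javax.mail so JAF can
            // see the mailcap file in its META-INF directory.
            Thread.currentThread().setContextClassLoader(Session.class.getClassLoader());

            Properties props = new Properties();
            props.put("mail.smtp.host", "localhost"); // assumption: local SMTP relay

            Session session = Session.getInstance(props);

            MimeBodyPart body = new MimeBodyPart();
            body.setText(text, "utf-8");

            MimeMultipart multipart = new MimeMultipart("alternative");
            multipart.addBodyPart(body);

            MimeMessage message = new MimeMessage(session);
            message.setFrom(new InternetAddress(from));
            message.setRecipients(Message.RecipientType.TO, InternetAddress.parse(to));
            message.setSubject(subject);
            message.setContent(multipart);

            Transport.send(message);
        } finally {
            // Reset the ClassLoader to where it was.
            Thread.currentThread().setContextClassLoader(tcl);
        }
    }
}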

The Pollers - MDB, Singleton, Glassfish, JRuby

It has been a bit since last posting - has been a whirlwind since then. :)

I ended up utilizing MDBs to get past the issue with the way the app was using pollers to listen to certain events in the application (observer pattern) and process them.

Keeping the original logic in Ruby, I ended up using a Singleton bean to create and store a Rails instance that was shared in the same VM as the rest of the app. I also slated this Singleton to be instantiated at startup (@Startup) so it would be available when the rest of the app was ready:

@Singleton
@Startup
public class EIScriptingContainer {


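Fleshed out a bit, a minimal sketch of this Singleton might look like the following (the field and method names, the context scope, and the path to the Rails environment are assumptions for illustration):

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.Singleton;
import javax.ejb.Startup;

import org.jruby.embed.LocalContextScope;
import org.jruby.embed.ScriptingContainer;

@Singleton
@Startup
public class EIScriptingContainer {

    private ScriptingContainer container;

    @PostConstruct
    public void init() {
        // One shared JRuby runtime for everything running in this VM.
        container = new ScriptingContainer(LocalContextScope.SINGLETHREAD);
        // Boot the Rails environment once, so the pollers don't reload it.
        // (Path is an assumption - point it at your app's config/environment.rb.)
        container.runScriptlet("require '/path/to/app/config/environment'");
    }

    public ScriptingContainer getContainer() {
        return container;
    }

    @PreDestroy
    public void shutdown() {
        container.terminate();
    }
}
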
With this, I created a separate MDB for each Ruby poller, using message selectors to determine which poller to utilize. This gave us the ability to use a central "SystemManager" to send messages to a Topic that the MDBs were listening to; depending on the serviceName, each MDB would know what to do:

@MessageDriven(mappedName = "jms/SysMgrReq", activationConfig = {
    @ActivationConfigProperty(propertyName = "messageSelector", propertyValue = "serviceName='EventNotifier' AND messageAction IS NOT NULL"),
    @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic")
})

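Put together, one of these MDBs might look roughly like the sketch below (the class name, the messageAction values, and the use of the shared EIScriptingContainer are illustrative assumptions):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.EJB;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(mappedName = "jms/SysMgrReq", activationConfig = {
    @ActivationConfigProperty(propertyName = "messageSelector", propertyValue = "serviceName='EventNotifier' AND messageAction IS NOT NULL"),
    @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic")
})
public class EventNotifierMDB implements MessageListener {

    // The startup Singleton holding the pre-loaded Rails ScriptingContainer.
    @EJB
    private EIScriptingContainer scripting;

    @Override
    public void onMessage(Message message) {
        try {
            // messageAction tells this MDB what to do with its poller.
            String action = message.getStringProperty("messageAction");
            if ("start".equals(action)) {
                // Assumption: the poller's Ruby entry point is already loaded
                // into the shared runtime by the Rails environment.
                scripting.getContainer().runScriptlet("EventNotifier.new.run");
            } else if ("stop".equals(action)) {
                scripting.getContainer().runScriptlet("EventNotifier.stop");
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}
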
I then bundled all of these up in a single jar and deployed as an app to GF.

A big issue was that sometimes we needed more than one instance of a poller running, but re-loading the original script (via the JRuby ScriptingContainer) loaded the entire Rails stack each time. That is why the Singleton instance of the ScriptingContainer was created and pre-loaded with the Rails environment; it is used each time when stopping, starting, starting additional instances of, and refreshing these pollers.

Tuesday, April 6, 2010

JRuby, JMS and Pollers

A common way to utilize messaging in Ruby is to use ActiveMessaging; to listen to queues/topics, etc., ActiveMessaging uses "pollers", which run as separate processes and listen/react to the messages.

ActiveMessaging takes these processors, and runs them in the environment (which is started externally) - creating new processes.

In trying to get everything into one box (VM), we needed a way to get around this, as well as around using ActiveMessaging as a whole (since most of the app will be converted to Java eventually).

In doing this, we took the poller processors and turned them into MessageListeners:
class QueueProcessor  < ApplicationProcessor
  include javax.jms.MessageListener

defined them as classes, and made sure they could run on their own and still function much like they did before:

class QueueProcessor < ApplicationProcessor
  include javax.jms.MessageListener

  # JMS / Sun Message Queue (OpenMQ) classes used below
  # (assumes the OpenMQ client classes; adjust if looking these up via JNDI)
  java_import javax.jms.Session
  java_import com.sun.messaging.ConnectionFactory
  java_import com.sun.messaging.Queue

  def initialize
    ...
  end

  def run
    # Instantiate a Sun Message Queue ConnectionFactory
    queueConnFactory = ConnectionFactory.new

    # Create a connection to the Sun Message Queue Message service
    queueConn = queueConnFactory.createConnection()

    # Create a session within the connection 
    queueSess = queueConn.createSession(false, Session::AUTO_ACKNOWLEDGE)

    # Instantiate a System Message Queue Destination
    # ToDo: Need to lookup via JNDI or some sort of aliasing.
    queueQueue = Queue.new("MyQueue")

    # Create a message consumer and listener
    queueMsgConsumer = queueSess.createConsumer(queueQueue)
    queueMsgConsumer.setMessageListener(self);

    # Start the Connection
    queueConn.start()
  end

  def onMessage(message)
    begin
      # Process our message as needed
      ...
    rescue
      @queue_logger.error "QueueProcessor caught #{$!} \n #{$!.backtrace.join("\n")}"
      ...
    end
  end
end

### Make sure to only run this process, and not any others.
if __FILE__ == $0
 queueProc = QueueProcessor.new
 queueProc.run
end

The next step in getting this to run separately, yet still in the same VM? I am thinking OSGi and ScriptEngine - so onto the next step to see if that is possible. :)

JRuby, JMS, OpenMQ, and Serialization

In working towards getting the current application running entirely within the Glassfish context, one of the things I had to do was move the current implementation of the communication to the message queues away from ActiveMessaging/ActiveMQ to JMS/OpenMQ.

ActiveMessaging has a lot of fingers in here, so we will have to do some cleanup, but something that was causing me a few issues (and which we figured out today) was the way objects were being serialized and placed on the queue as TextMessages; this got a bit out of whack when we converted some of ActiveMessaging's publishing models over to sending the messages via JMS.

The serialized payload (we use Marshal dump/load for the Ruby objects) was arriving as an instance of TextMessageImpl (Java::ComSunMessagingJmqJmsclient::TextMessageImpl).

Simply pulling the text out of this on the JRuby side fixed it for us:
message = deserialize(message.getText())
and the deserialization worked ok.

Tuesday, March 30, 2010

Joe Satriani Guitar Clinic

My son and I got to see an incredible guitar clinic put on by Joe Satriani at Sweetwater on March 27th. Was a great time! He played a lot of the Surfing album, and discussed the theory behind the songs.

Some of the videos I was able to get from my cell phone are posted on my YouTube channel.

Converting Ruby/Rails JMS to JRuby/Glassfish/OpenMQ

I have been working lately on trying to get the existing stack - which is a Ruby on Rails app utilizing ActiveMessaging with ActiveMQ via Stomp - and getting it to work completely within a Glassfish JRuby container, using JMS and OpenMQ instead.

It has been challenging for sure, but am making progress little by little. I will be posting any progress I make soon after I get some of the issues ironed out.

Some of the issues I am working on include:
  • Converting current separate daemonized pollers that are the message queue listeners into either jruby pollers or separate jruby classes that run in the same VM as the main app itself (as opposed to running a separate 'jruby' call of some sort).
  • Getting the messaging.rb/broker.yml to use jms and jndi lookups for the ConnectionFactories.
  • How to use/register OSGi bundles in the environment so that they can be queried and monitored.
  • Converting existing component/plugins from a combination of java component (which use sysjava as daemon wrappers) and ruby components into OSGi bundle plugins.
  • Many more... :)
I will update the solutions as they are worked through.

Wednesday, March 17, 2010

JDBC/JNDI Pooling with a JRuby/Rails app

In working on the transition of this rails app over to the JRuby/Glassfish camp, one of the things I needed to take advantage of was using jdbc/jndi database pooling and configurations.

Of course, being a Rails application, it was using ActiveRecord for its DB/ORM interactions, and in researching, the steps needed to get this set up were:
  1. Create the JDBC connection pool
  2. Create the resource with a JNDI name
  3. Update the database.yml
  4. Configure ActiveRecord for disconnects
I also needed to download and put the postgresql jdbc driver in place ($domain/lib/ext/ - using the JDBC version 4 driver).

In looking at the jdbc templates included with Glassfish (glassfish/lib/install/templates/resources/jdbc), I attempted to use the template for my driver (postgresql_type4_datasource.xml). This was causing issues with the 'url' property, so I ended up using asadmin to create the pool, using serverName and databaseName as properties.

Starting up the asadmin interactive utility:


asadmin> create-jdbc-connection-pool
--datasourceclassname org.postgresql.ds.PGConnectionPoolDataSource
--restype javax.sql.ConnectionPoolDataSource
--property user=XXX:password=XXX:serverName=localhost:databaseName=extension_dev extensionDevPool

Command create-jdbc-connection-pool executed successfully.
asadmin> create-jdbc-resource --connectionpoolid extensionDevPool jndiExtensionDev

Command create-jdbc-resource executed successfully.
asadmin> list-jdbc-connection-pools
__TimerPool
DerbyPool
extensionDevPool

Command list-jdbc-connection-pools executed successfully.

asadmin> list-jdbc-resources
jdbc/__TimerPool
jdbc/__default
jndiExtensionDev

Command list-jdbc-resources executed successfully.

asadmin> ping-connection-pool extensionDevPool

Command ping-connection-pool executed successfully.
I created separate pools/resources for dev/test/production - which will be commented accordingly for now in the domain.xml file.

Next came setting up the database.yml file to use JNDI instead of the regular ActiveRecord drivers. There are some excellent resources on the web for getting this done, but I had to do some digging to get this correct.

One of the issues in setting this up correctly was that, especially during development and testing, we use the jruby console (jruby -S script/console) to create and activate objects and events, and the console in turn looks at the database.yml file to get its connection.

This was hurting me because, once database.yml was set to use JNDI, none of these settings were set up correctly from the console, and I would continue to get connection issues as well as the missing JMS class issues.

So, to fix this, the database.yml file not only tests whether we are running under JRuby (JRUBY_VERSION is defined), but also whether we are in a servlet context (a la Glassfish), in order to choose between JNDI-based connections and regular connections.

So, our database.yml file ended up looking like:
defaults: &defaults
  <% jdbc = defined?(JRUBY_VERSION) ? 'jdbc' : '' %>
  <% if defined?($servlet_context) %>
  adapter: jdbc
  driver: org.postgresql.Driver
  <% else %>
  adapter: <%= jdbc %>postgresql
  <% end %>
  username: xxx
  password: xxx
  host: localhost

development:
  <% if defined?($servlet_context) %>
  jndi: jndiExtensionDev
  <% end %>
  database: extension_dev
  <<: *defaults

test:
  <% if defined?($servlet_context) %>
  jndi: jndiExtensionTest
  <% end %>
  database: extension_test
  <<: *defaults

production:
  <% if defined?($servlet_context) %>
  jndi: jndiExtensionProd
  <% end %>
  database: extension_prod
  <<: *defaults

This enabled us to access the db in our app as well as run the console and connect correctly.

We also need to configure ActiveRecord to release its connections after every request - something that was not needed before, since the JDBC pool now manages connection persistence. (See resources below for links to some of the sites that were used to research all of this.)
# config/initializers/close_connections.rb
if defined?($servlet_context)
  require 'action_controller/dispatcher'

  ActionController::Dispatcher.after_dispatch do
    ActiveRecord::Base.clear_active_connections!
  end
end
Some of the excellent resources I used:

RoR App to run under JRuby

After deciding on the platform, one of the big things to get accomplished was getting the current application to run inside a JRuby container inside Glassfish.

This was a bit challenging at first, and it still isn't all quite there, but the main core of the app is now running in there, with a few changes.

We are not deploying via a war file (yet) - since we are just building this, but deploying as a directory (for development) from my git workspace to build up the configuration steps.

There were quite a few challenges (and more to come) - some of which were based on the gem compatibilities between ruby and jruby - more notably libxml and libxslt - which rely on native libraries.

Before I got here, this was tackled a little bit by another engineer here, Rich, who created an xml_lib.rb that wrapped what we needed via the libxml and libxslt libraries to utilize their java counterparts - so that was a big boost for this process.

We will probably replace those with more robust ones if we need in the future, but this library works great for what we are using it for at the moment. There is a port of libxml-ruby called libxml-jruby, written by Dylan Vaughn, which will do what we need - and we will probably pull out the XSLT functions that Rich wrote and separate the lib this way.

A big part of this was getting the correct gems installed and being used for jruby, and adjustments in the configurations for the current app to separate what to load if running under jruby as opposed to ruby.

Example - in 'conf/environment.rb' - the config gems were separated out:
if RUBY_PLATFORM =~ /java/
  config.gem 'jdbc-postgres', :lib => 'jdbc/postgres'
  config.gem 'activerecord-jdbc-adapter', :lib => 'jdbc_adapter'
  config.gem 'activerecord-jdbcpostgresql-adapter',
             :lib => 'active_record/connection_adapters/jdbcpostgresql_adapter'
else
  config.gem 'pg', :version => '0.8.0'
  config.gem 'libxml-ruby', :lib => 'xml/libxml', :version => '1.1.3'
  config.gem 'libxslt-ruby', :lib => 'libxslt', :version => '0.9.2'
end
After installing and setting up Glassfish with a new domain and installing JRuby, I needed to make the jruby container available to Glassfish:
% asadmin create-domain --adminport 4848 extension
...
...
Command create-domain executed successfully.

% asadmin configure-jruby-container --jruby-home=/usr/local/jruby
and then deployed my current app (from the parent dir of the rails app directory):
% asadmin deploy --property jruby.rackEnv=development core/

Starting my domain here, I was able to access the app successfully via the context of the app name (http://localhost:8080/core/). Adding 'context-root="/"' to the <application ... section allowed me to access it without adding '/core/' to my URL.

Notice the setting of:

--property jruby.rackEnv=development

This is basically the equivalent of setting RAILS_ENV=development in your environment.

Next step was getting the db to work with jdbc/jndi pooling.

*Note: One issue I was having was the complaining of missing the class javax.jms.MessageListener:
/usr/local/jruby/lib/ruby/site_ruby/shared/builtin/javasupport/core_ext/object.rb:37:
in `get_proxy_or_package_under_package':
NameError: cannot load Java class javax.jms.MessageListener
Following the directions on http://wiki.glassfish.java.net/Wiki.jsp?page=OpenMQJRuby, I created and moved the appropriate jms/imq jar files into my domain/lib/ext directory, and these errors are no longer an issue. I will have to see why this was the case when I work through the MQ issues (and converting from ActiveMQ to OpenMQ). The jars are:
  1. .../mq/lib/jms.jar
  2. .../mq/lib/imq.jar
  3. .../mq/lib/imqjmsra.jar
imqjmsra.jar is created by extracting it from imqjmsra.rar:
jar xvf imqjmsra.rar imqjmsra.jar

Move these into the domain/lib/ext directory. These will also need to be included in your classpath when using the jruby console.

Deciding on a platform

One of the tasks in this new role was to determine which platform to take this application/appliance to. We knew it was to move towards a Java/JEE platform, but we needed to make sure what we were choosing was right for the long term, and had the most viability for expansion and technology.

I have worked with many app servers/web containers in the past, and definitely had my mind on what I wanted to work with, but used the time to research as many alternatives as I could to make sure the conclusion was the right one.

The way this application works, we are relying on being able to plug in new "interfaces" that the core application can talk to - via JMS messaging, etc. The application is built using RoR (Ruby on Rails) with a Postgresql backing store.

Clustering and HA were a concern as well, but the type of clustering this application requires goes beyond the normal web/http traffic clustering that is solved via mod_jk or mod_cluster type solutions - although those can help for certain aspects of it.

JVM-level clustering - such as Terracotta - was also an option, but that didn't really solve what we were looking for as a turn-key solution either. So the decision here is to create our own custom solution, using what is available as a basis for the type of clustering needed for that traffic: an observer of our JMS cloud(s) that peers for relevant information (via JMX hooks), such as acceptable load thresholds, and speaks to other nodes to open/start new modules as needed to offset the load.

This is going to be a challenge, but a welcome one. :)

I definitely wanted to utilize the OSGi concepts in creating the daemon like plugins that we would be using, as well as using the framework for monitoring activity, so having that capability was a big factor in the decision making. I looked at other modularity solutions, such as JPF (Java Plugin Framework) and Impala, as well as what Project Jigsaw would bring to the table, but decided OSGi would do well for what we needed.

There were many containers that were looked at, including:
  • Glassfish v2 and v3
  • JBoss 5 and JBoss 6 (M2)
  • Geronimo
  • WebSphere
  • Weblogic
  • resin
and others. The final decision for me boiled down to Glassfish v3 and JBoss 6, because of the JEE6 specifications and where they were headed.

So going forward, we will be building this platform using Glassfish v3 - which is exciting to me, because I always thought Glassfish was a great platform, and coming from the NAS world of old, it was great to see where this was going.

Clustering is a bit of an issue that will need to be tackled, but in reading Bill Shannon's roadmap for clustering in v3.1 of Glassfish, as well as needing to build our own, this was not as much of a deal breaker as I was worried about.

The current application also uses ActiveMessaging to communicate with an ActiveMQ server, and with Glassfish's embedded OpenMQ based messaging system, I think the ability to have as much as we can under one container is a big win in this situation.

Thanks to the support of the Glassfish community, and people such as Arun, Alexis, the Glassfish team (blog and twitter) and the others involved, I think using this going forward is going to be a fun and exciting project!


Tuesday, March 16, 2010

New Adventures (so long Sun, hello Extension)!

It has been a long time since I blogged about anything, but with new events that have happened recently, I figured now was as good a time as any!

On January 29th, 2010, I was laid off from Sun Microsystems, where I enjoyed an almost 13-year career. Being bought out by Oracle was going to be an exciting time; I thought we (or I) were going to see a resurgence of the Sun of old. It was a great adventure, and I learned quite a lot, as well as worked with a great number of people, so it was sad to leave.

I did a great number of things while at Sun, including working on the installation environment for Solaris 8 and Solaris 9, creating the installation kiosk for CD0, co-founding the BigAdmin portal (which lived in the installation kiosk, and which stayed with me for 10 years, up until my last day), working on the sysid suite of tools, ereg, the iChange jumpstart application, customized CMS applications and more.

But that being said, I am ready for new challenges and to move onto something new.

On February 22, 2010, I started working for a new company, Extension, Inc., which is developing a health-care communication appliance that has a lot of potential. Sounds very challenging, and involves a lot of research and development which will result in a very compelling product in a field that is very open to new technology right now.

Being in Sun for so many years, I am very used to working under pressure, and getting to develop in some of the latest technologies, and that will continue in my new role (as Sr. Software Engineer), which I am very grateful for.

My new role will involve a lot of Java/JEE development, utilizing Glassfish 3 (awesome!) as well as digging into Ruby/JRuby, and building on the Ubuntu platform to begin with.

So, my first entries here will involve the work I have been doing and will be doing on converting an existing Rails application to work inside of a JRuby container inside of Glassfish 3, the trials involved in that, converting ruby components into OSGi compliant Java modules, and building a solid communication platform for many different devices from within a health-care environment.