Since CXF 3.1 the message logging code lives in a separate module and has gained some new features:
- Auto logging for existing CXF endpoints and clients
- Uses the slf4j MDC to log metadata separately
- Adds metadata for REST calls
- Adds an MD5 message id and an exchange id for correlation
- Simple interface for writing your own appenders
The LoggingFeature can be used with JAX-WS as well as JAX-RS endpoints and clients. It can also be specified using the @Features annotation. The feature should be used instead of adding the LoggingIn/OutInterceptors manually.
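As a sketch, the feature can be attached declaratively with @Features or added programmatically on a client. This assumes the CXF 3.1 org.apache.cxf.ext.logging module; the service class, implementation and address are illustrative names:

```java
import org.apache.cxf.annotations.Features;
import org.apache.cxf.ext.logging.LoggingFeature;
import org.apache.cxf.jaxws.JaxWsProxyFactoryBean;

// Server side: enable message logging declaratively on the endpoint.
// CustomerService / CustomerServiceImpl are hypothetical names.
@javax.jws.WebService
@Features(features = "org.apache.cxf.ext.logging.LoggingFeature")
public class CustomerServiceImpl implements CustomerService {
    // ... service methods ...
}

// Client side: add the feature to the proxy factory.
class CustomerClientFactory {
    static CustomerService create() {
        JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
        factory.setServiceClass(CustomerService.class);
        factory.setAddress("http://localhost:8181/cxf/customer");
        factory.getFeatures().add(new LoggingFeature());
        return (CustomerService) factory.create();
    }
}
```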
The following properties can be set on the LoggingFeature:
| Property | Description |
| --- | --- |
| limit | Size limit at which messages are truncated in the log. The default is 48 * 1024. |
| inMemThreshold | Size limit at which messages are written to disk instead of being kept in memory. The default is -1, which means do not write to disk. |
| prettyLogging | For XML content, turn on pretty printing in the logs. The default is false. |
| logBinary | Log binary payloads. The default is false. |
| logMultipart | Log multipart payloads. The default is true. |
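A minimal sketch of setting these properties programmatically, assuming the standard setters on org.apache.cxf.ext.logging.LoggingFeature:

```java
import org.apache.cxf.ext.logging.LoggingFeature;

public class LoggingConfig {
    public static LoggingFeature configuredFeature() {
        LoggingFeature logging = new LoggingFeature();
        logging.setPrettyLogging(true);  // pretty print XML payloads
        logging.setLimit(64 * 1024);     // truncate log entries at 64 KiB
        logging.setLogBinary(false);     // do not log binary payloads
        logging.setLogMultipart(true);   // log multipart payloads
        return logging;
    }
}
```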
This is the raw logging information you get for a SOAP call:
A lot of the details are in the MDC values, which are normally not displayed in the log file by default. You need to change your pax logging config to make these visible.
Enabling / disabling logging by changing the logger config
The logger name is "<service namespace>.<ServiceName>.<type>". By default the Karaf log file only shows the type, but you can change this.
You can use the logger name to fine-tune which services you want to log this way. For example, set the level to WARN for noisy services to prevent them from being logged.
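In Karaf this can be done in the pax logging configuration. A sketch, with illustrative namespace and service names substituted into the logger name pattern:

```properties
# Fragment of etc/org.ops4j.pax.logging.cfg
# Logger name pattern: <service namespace>.<ServiceName>.<type>
# Raise noisy services to WARN so their messages are not logged:
log4j.logger.com.example.customers.CustomerService.REQ_IN=WARN
log4j.logger.com.example.customers.CustomerService.RESP_OUT=WARN
```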
Message id and exchange id
The messageId makes it possible to uniquely identify messages even when they are collected from several servers. It is also transported over the wire, so a request sent from one machine can be correlated with the request received on another machine.
The exchangeId will be the same for an incoming request and the response sent for it, or, on the other side, for an outgoing request and its response. This makes it possible to correlate requests with responses and so follow the conversations.
Simple interface to write custom appenders
Write a custom LogEventSender and set it on the LoggingFeature to do custom logging. All metadata can be accessed from the LogEvent class.
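A minimal sketch, assuming the LogEventSender interface and LogEvent getters from org.apache.cxf.ext.logging.event and a setSender method on LoggingFeature:

```java
import org.apache.cxf.ext.logging.LoggingFeature;
import org.apache.cxf.ext.logging.event.LogEvent;
import org.apache.cxf.ext.logging.event.LogEventSender;

// Custom appender that writes each log event to stdout.
public class StdoutLogSender implements LogEventSender {
    @Override
    public void send(LogEvent event) {
        System.out.printf("%s message=%s exchange=%s%n%s%n",
                event.getType(),        // e.g. REQ_IN, RESP_OUT
                event.getMessageId(),   // MD5 message id
                event.getExchangeId(),  // same for request and response
                event.getPayload());    // the message body
    }

    public static LoggingFeature feature() {
        LoggingFeature feature = new LoggingFeature();
        feature.setSender(new StdoutLogSender());
        return feature;
    }
}
```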
Auto logging for existing CXF endpoints and clients in Apache Karaf
To use the message logging in Karaf it needs to be installed as a feature. It can then be activated for all endpoints using a config:
feature:repo-add cxf 3.1.0
config:property-set -p org.apache.cxf.features.logging enabled true
Any CXF endpoints installed after the logging feature will automatically be enhanced with the message logging feature.
All SOAP and REST calls will then be logged using slf4j by default, so the logging data is processed by pax logging and ends up in your Karaf log.
A log entry looks like this:
2015-06-08 16:35:54,068 | INFO | qtp1189348109-73 | REQ_IN | 90 - org.apache.cxf.cxf-rt-features-logging - 3.1.0 | <soap:Envelope
This does not look very informative: you only see that it is an incoming request (REQ_IN) and the SOAP message. The logging feature provides a lot more information, though. To leverage it, the pax logging config can be changed to show the relevant MDC values.
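For example (a sketch; the MDC key names below are assumptions, so check which keys your CXF version actually sets):

```properties
# Fragment of etc/org.ops4j.pax.logging.cfg: show MDC values via %X{...}
log4j.appender.out.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %X{exchangeId} | %X{messageId} | %m%n
```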
Karaf Decanter support to write into Elasticsearch
Many people use Elasticsearch for their logging. Fortunately you do not have to write a special LogEventSender for this purpose; the standard CXF logging feature already works.
It works like this:
- CXF sends the messages as slf4j events which are processed by pax logging
- Karaf Decanter LogCollector attaches to pax logging and sends all log events to the Karaf message bus (EventAdmin topics)
- Karaf Decanter ElasticSearchAppender sends the log events to a configurable Elasticsearch instance
As Decanter also provides features for a local Elasticsearch and Kibana instance, you are ready to go in just minutes:
feature:install decanter-collector-log decanter-appender-elasticsearch elasticsearch kibana
After that, open a browser at http://localhost:8181/kibana. Once Decanter is released, Kibana will be fully set up; at the moment you have to add the logstash dashboard and change the index name to [karaf-]YYYY.MM.DD.
Then you should see your CXF messages like this:
Kibana makes it easy to filter for specific services and to correlate requests and responses.
This is just a preview of Decanter. I will write a more detailed post when the first release is out.