Cryptographic Pills Episode 1 - Modern Cryptography

Over the last few days I started looking at cryptography theory again. I think it's an interesting subject for anyone involved in the IT world: for developers and security specialists, understanding some key concepts is essential to handle particular scenarios. Yesterday Google announced the first SHA-1 collision: hash functions will be the topic of one of the articles I currently have in mind. I'll try to write about cryptographic pills in episodes and I hope you'll appreciate the subject. Let's start!

In the beginning

Until the late 20th century cryptography was considered an art form. Constructing good codes, breaking existing ones and understanding the algorithm behind a code relied on creativity and personal skill; there wasn't much theory around the subject. With the spread of cryptography into areas other than the military and defense, the whole picture changed radically: a rich theory and rigorous study turned cryptography into an applied science. Today cryptography is everywhere.

Private-key Encryption

Cryptography was historically concerned with secret communication. In the past we talked about ciphers, today we talk about encryption schemes: the concepts are the same. Two parties want to communicate in a secret manner and they decide to use a particular encryption scheme. An assumption in this scenario is that the communicating parties must have a way of initially sharing a key in a secret manner.

A private-key Encryption scheme is based on three algorithms [1]:

  1. The Key-generation algorithm Gen is a probabilistic algorithm that outputs a key k chosen according to some distribution determined by the scheme.
  2. The Encryption Algorithm Enc takes as input a key k and a plaintext message m and outputs a ciphertext c. We denote the encryption of the plaintext m using the key k in the following way
    $$Enc_k(m)$$
  3. The Decryption Algorithm Dec takes as input a key k and a ciphertext c and outputs a plaintext m. We denote the decryption of the ciphertext c using the key k in the following way
    $$Dec_k(c)$$
  • The set of all possible keys generated by the Gen algorithm is denoted by K and is called the Key Space.
  • The set of all "legal" messages (supported by the Encryption Algorithm Enc) is denoted by M and is called the Message Space.
  • Since any ciphertext is obtained by encrypting some plaintext under some key, the sets K and M together define the set of all possible ciphertexts, denoted by C.
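
To make the three algorithms concrete, here is a minimal Java sketch of the Gen/Enc/Dec triple built on the standard javax.crypto API. The choice of AES in CBC mode is just an illustrative assumption (this episode doesn't discuss any concrete cipher); the point is only the shape of the scheme.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.IvParameterSpec;
    import java.security.SecureRandom;
    import java.util.Arrays;

    public class PrivateKeyScheme {

        // Gen: probabilistic key generation (AES chosen here only as an example)
        static SecretKey gen() throws Exception {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            return kg.generateKey();
        }

        // Enc_k(m): returns a fresh IV followed by the ciphertext
        static byte[] enc(SecretKey k, byte[] m) throws Exception {
            byte[] iv = new byte[16];
            new SecureRandom().nextBytes(iv);
            Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
            c.init(Cipher.ENCRYPT_MODE, k, new IvParameterSpec(iv));
            byte[] ct = c.doFinal(m);
            byte[] out = new byte[iv.length + ct.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ct, 0, out, iv.length, ct.length);
            return out;
        }

        // Dec_k(c): splits the IV from the ciphertext and recovers m
        static byte[] dec(SecretKey k, byte[] c) throws Exception {
            byte[] iv = Arrays.copyOfRange(c, 0, 16);
            byte[] ct = Arrays.copyOfRange(c, 16, c.length);
            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.DECRYPT_MODE, k, new IvParameterSpec(iv));
            return cipher.doFinal(ct);
        }

        public static void main(String[] args) throws Exception {
            SecretKey k = gen();
            byte[] c = enc(k, "attack at dawn".getBytes("UTF-8"));
            System.out.println(new String(dec(k, c), "UTF-8")); // prints "attack at dawn"
        }
    }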

Kerckhoffs's principle

From the definitions above it's clear that if an eavesdropping adversary knows the Dec algorithm and the chosen key k, he will be able to decrypt the messages exchanged between the two communicating parties. The key must be secret and this is a fundamental point, but do we need to keep the Encryption and Decryption algorithms secret too? In the late 19th century Auguste Kerckhoffs gave his opinion:

The cipher method must not be required to be secret, and it must be able to fall into the hands of the enemy without inconvenience

It's something related to Open Source and Open Standards too. Today an encryption scheme is usually required to be made public. The main reasons can be summarized in the following points:

  1. Public scrutiny makes the design stronger.
  2. If flaws exist, they can be found by someone hacking on the scheme.
  3. Public design enables the establishment of standards.

Kerckhoffs' principle seems obvious, but in some cases it was ignored with disastrous results. Only publicly tried and tested algorithms should be used by organizations. Kerckhoffs was one of the first people with an open vision about these things.

What's next

In the next articles I would like to talk about the theory behind the strength of cryptographic schemes. How can we say a scheme is secure? What are the attack scenarios? What is the definition of a secure scheme?

[1] Jonathan Katz, Yehuda Lindell, Introduction to Modern Cryptography, Chapman & Hall/CRC, Chapter 1.

Using Camel-cassandraql on a Kubernetes Cassandra cluster with Fabric8

In the last months I blogged about Testing Camel-cassandraql on a Dockerized cluster. This time I'd like to show you a similar example running on Kubernetes. I've worked on a fabric8 quickstart split into two different projects: Cassandra server and Cassandra client. The server will spin up a Cassandra cluster with three nodes and the client will run a camel-cassandraql route querying the keyspace in the cluster. You'll need a running Kubernetes cluster to follow this example. I decided to use Fabric8 on Minikube, but you can also try to run the example using Fabric8 on Minishift. Once your environment is up and running you can execute the commands in the following sections.

Spinning up an Apache Cassandra cluster on Kubernetes

For this post we will use Kubernetes to spin up our Apache Cassandra Cluster. To start the cluster you can use the Fabric8 IPaas Cassandra app. Since we are using Minikube, you'll need to follow the related instructions.

This project will spin up a Cassandra Cluster with three nodes.

As a first step, in the cassandra folder, we will need to run

mvn clean install

and

mvn fabric8:deploy

Once we're done with these commands we can watch the Kubernetes pods state for our specific app and after a while we should be able to see our pods running:

oscerd@localhost:~/workspace/fabric8-universe/fabric8-ipaas/cassandra$ kubectl get pods -l app=cassandra --watch
NAME                         READY     STATUS    RESTARTS   AGE
cassandra-2459930792-lfh93   1/1       Running   0          15s
cassandra-2459930792-m8c9p   1/1       Running   0          15s
cassandra-2459930792-zxiv3   1/1       Running   0          15s

I usually prefer the command line, but you can also watch the status of your pods on the wonderful Fabric8 Console, and with Minikube/Minishift it is super easy to access the console: simply run the following command:

minikube service fabric8

or on Minishift

minishift service fabric8

In Apache Cassandra, nodetool is one of the most important commands for managing a cluster and executing commands on a single node.

It is interesting to look at the way the Kubernetes team developed the seed discovery in their Cassandra example. The class to look at is KubernetesSeedProvider: it implements the SeedProvider interface from the Apache Cassandra project. The getSeeds method is implemented there: if the KubernetesSeedProvider is unable to find seeds it will fall back to the createDefaultSeeds method, which is exactly the SimpleSeedProvider implementation from the Apache Cassandra project.
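
To make the fallback idea clearer, here is a heavily simplified sketch of that pattern. This is not the real KubernetesSeedProvider and the helper methods are hypothetical: the real class queries the Kubernetes API; the sketch only shows the "try Kubernetes first, otherwise behave like SimpleSeedProvider" structure.

    import java.net.InetAddress;
    import java.util.Collections;
    import java.util.List;

    import org.apache.cassandra.locator.SeedProvider;

    public class FallbackSeedProvider implements SeedProvider {

        @Override
        public List<InetAddress> getSeeds() {
            try {
                // Hypothetical helper: the real class asks the Kubernetes API for
                // the endpoints backing the "cassandra" service.
                List<InetAddress> seeds = discoverSeedsFromKubernetes();
                if (!seeds.isEmpty()) {
                    return seeds;
                }
            } catch (Exception e) {
                // Kubernetes not reachable: fall through to the default behaviour
            }
            // Same idea as SimpleSeedProvider: use the static seeds entry
            // configured in cassandra.yaml.
            return createDefaultSeeds();
        }

        private List<InetAddress> discoverSeedsFromKubernetes() throws Exception {
            return Collections.emptyList(); // sketch only
        }

        private List<InetAddress> createDefaultSeeds() {
            return Collections.emptyList(); // sketch only
        }
    }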

The Kubernetes-cassandra jar is used in the Cassandra Dockerfile from the Google example and it is added to the classpath in the run.sh script.

In the Cassandra fabric8-ipaas project we used the deployment.yml and service.yml configuration. You can find the details in the repository. You can also scale up/down your cluster with this simple command:

kubectl scale --replicas=2 deployment/cassandra

and after a while you'll see the scaled deployment:

oscerd@localhost:~/workspace/fabric8-universe/fabric8-ipaas/cassandra$ kubectl get pods -l app=cassandra --watch
NAME                         READY     STATUS    RESTARTS   AGE
cassandra-2459930792-lfh93   1/1       Running   0          28m
cassandra-2459930792-m8c9p   1/1       Running   0          28m

Our cluster is up and running now and we can run our camel-cassandraql route from Cassandra client.

Running the Cassandra-client Fabric8 Quickstart

In the Fabric8 Quickstarts organization I've added a little Cassandra-client example. The route will do a simple query on the keyspace we've just created on our cluster. This quickstart requires the Camel 2.18.0 release to work fine; now that 2.18.0 is out we can cover the example. The quickstart will first execute a bean, as init-method, to pre-populate the Cassandra keyspace, and it defines a main route. It's a very simple quickstart (single query/print result set), but it shows how easy it is to interact with the Cassandra cluster. You don't have to add the contact points of single nodes to your Cassandra endpoint URI or use a complex configuration: you can simply use the service name ("cassandra") and Kubernetes will do the work for you.
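
As an idea of what such a pre-populating bean can look like, here is a minimal sketch based on the DataStax Java driver. The class and method names are hypothetical and the real quickstart may differ, but it shows the Kubernetes service name "cassandra" used as the only contact point.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class CassandraPopulateBean {

        // Invoked as the bean init-method before the route starts
        public void populate() {
            Cluster cluster = Cluster.builder().addContactPoint("cassandra").build();
            try {
                Session session = cluster.connect();
                session.execute("CREATE KEYSPACE IF NOT EXISTS test WITH replication = "
                    + "{'class':'SimpleStrategy', 'replication_factor':3}");
                session.execute("CREATE TABLE IF NOT EXISTS test.users (id int PRIMARY KEY, name text)");
                session.execute("INSERT INTO test.users (id, name) VALUES (1, 'oscerd')");
            } finally {
                cluster.close();
            }
        }
    }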

You can run the example with:

mvn clean install
mvn fabric8:run

You should be able to see your cassandra-client running:

oscerd@localhost:~/workspace/fabric8-universe/fabric8-quickstarts/cassandra-client$ kubectl get pods
NAME                                      READY     STATUS    RESTARTS   AGE
cassandra-2459930792-lfh93                1/1       Running   0          45m
cassandra-2459930792-m8c9p                1/1       Running   0          45m
cassandra-2459930792-ms265                1/1       Running   0          15m
cassandra-client-3789635050-e1f2g         1/1       Running   0          1m

With the fabric8:run command you'll see the logs of the application in the console directly:

2016-10-14 09:06:15,786 [main           ] INFO  DCAwareRoundRobinPolicy        - Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
2016-10-14 09:06:15,789 [main           ] INFO  Cluster                        - New Cassandra host cassandra/10.0.0.67:9042 added
2016-10-14 09:06:15,790 [main           ] INFO  Cluster                        - New Cassandra host /172.17.0.10:9042 added
2016-10-14 09:06:15,790 [main           ] INFO  Cluster                        - New Cassandra host /172.17.0.12:9042 added
2016-10-14 09:06:20,701 [main           ] INFO  SpringCamelContext             - Apache Camel 2.18.0 (CamelContext: camel-1) is starting
2016-10-14 09:06:20,703 [main           ] INFO  ManagedManagementStrategy      - JMX is enabled
2016-10-14 09:06:20,928 [main           ] INFO  DefaultTypeConverter           - Loaded 190 type converters
2016-10-14 09:06:20,977 [main           ] INFO  DefaultRuntimeEndpointRegistry - Runtime endpoint registry is in extended mode gathering usage statistics of all incoming and outgoing endpoints (cache limit: 1000)
2016-10-14 09:06:21,228 [main           ] INFO  SpringCamelContext             - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
2016-10-14 09:06:21,231 [main           ] INFO  ClockFactory                   - Using native clock to generate timestamps.
2016-10-14 09:06:21,503 [main           ] INFO  DCAwareRoundRobinPolicy        - Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
2016-10-14 09:06:21,548 [main           ] INFO  Cluster                        - New Cassandra host cassandra/10.0.0.67:9042 added
2016-10-14 09:06:21,548 [main           ] INFO  Cluster                        - New Cassandra host /172.17.0.11:9042 added
2016-10-14 09:06:21,548 [main           ] INFO  Cluster                        - New Cassandra host /172.17.0.10:9042 added
2016-10-14 09:06:22,047 [main           ] INFO  SpringCamelContext             - Route: cassandra-route started and consuming from: timer://foo?period=5000
2016-10-14 09:06:22,075 [main           ] INFO  SpringCamelContext             - Total 1 routes, of which 1 are started.
2016-10-14 09:06:22,083 [main           ] INFO  SpringCamelContext             - Apache Camel 2.18.0 (CamelContext: camel-1) started in 1.374 seconds
2016-10-14 09:06:23,133 [0 - timer://foo] INFO  cassandra-route                - Query result set [Row[1, oscerd]]
2016-10-14 09:06:28,092 [0 - timer://foo] INFO  cassandra-route                - Query result set [Row[1, oscerd]]

As you can see everything is straightforward and you can think about more complex architectures, for example an Infinispan cache with a persistent cache store based on an Apache Cassandra cluster, with Java applications, Camel routes or Spring-boot applications interacting with the Infinispan platform.

The route of the example is super simple:

  <camelContext xmlns="http://camel.apache.org/schema/spring" depends-on="populate">
    <route id="cassandra-route">
      <from uri="timer:foo?period=5000"/>
      <to uri="cql://cassandra/test?cql=select * from users;&amp;consistencyLevel=quorum" />
      <log message="Query result set ${body}"/>
    </route>

  </camelContext>

and it completely abstracts away the details of the Cassandra cluster (how many nodes there are, where they are, what their addresses are, and so on).

Conclusions

This example shows only a bit of the power of Kubernetes/Openshift/Fabric8/Microservices world and in the future I would like to create a complex architecture quickstart. Stay tuned!

Apache Camel 2.18.0 release, what is coming

The Camel community is currently in the process of releasing the new Apache Camel 2.18.0. This is a big release with a lot of new features and new components. Let's take a deeper look at what is coming.

  • Java 8

    This is the first release requiring Java 8. We worked hard on this and we are working on Java 8 code style in the codebase.

  • Automatic Documentation

    In Apache Camel 2.18.0 you'll find new documentation. The components, endpoints, dataformats and languages are now documented in a completely automatic way. This is important because in the past the Confluence documentation was often misaligned with the codebase. The documentation is regenerated during the component/endpoint/dataformat/language build whenever a modification has been made. This feature will be the basis for a new website including this material. You can see the generated docs in the camel-website folder in the Apache Camel repository.

  • Spring-boot and Wildfly-Swarm support

    Camel is now present on the Spring-starter site and on the Wildfly-swarm one. All the new features related to Spring-boot can be found in the article Apache Camel meets Spring-boot from Nicola Ferraro's blog. Running Camel on Spring-boot has never been so easy.

  • Hystrix Circuit Breaker and Netflix OSS

    This release will have a circuit breaker implementation using the Netflix Hystrix project (with dashboard too).

  • Distributed message tracing with camel-zipkin component

    This component is used for tracing and timing incoming and outgoing Camel messages using Zipkin. For more information take a look at the documentation or the Asciidoc file in the Apache Camel repository.

  • Service Call

    This provides the feature of calling a remote service in a distributed system where the service is looked up from a service registry of some sort. Camel won't need to know where the service is hosted: it will look the service up from Kubernetes, Openshift, Cloud Foundry, Zuul, Consul, Zookeeper, etc. (see the sketch after this list).

  • New components

    • camel-asterisk - For interacting with Asterisk PBX Servers
    • camel-cm-sms - For sending SMS messages using the CM SMS Gateway.
    • camel-consul - For integrating your application with Consul.
    • camel-ehcache - For interacting with Ehcache 3 cache.
    • camel-flink - Bridges Camel connectors with Apache Flink tasks.
    • camel-lumberjack - For receiving logs over the lumberjack protocol (used by Filebeat for instance)
    • camel-ribbon - To use Netflix Ribbon with the Service Call EIP.
    • camel-servicenow - For cloud management with ServiceNow.
    • camel-telegram - For messaging with Telegram.
    • camel-zipkin - For tracking Camel message flows/timings using zipkin.
    • camel-chronicle - For interacting with OpenHFT's Chronicle-Engine.
  • New Dataformats

    • camel-johnzon - Apache Johnzon is an implementation of JSR-353 (JavaTM API for JSON Processing).
  • New examples

    In this release there will be some new examples. To name a few:

    • camel-example-cdi-kubernetes - An example of the camel-kubernetes component used to retrieve a list of pods from your Kubernetes cluster. The example is CDI-based.
    • camel-example-java8 - Demonstrates the Java DSL with the experimental new Java 8 lambda support for expressions/predicates/processors. We love feedback on this DSL and expect to improve the API over the next couple of releases.
    • camel-example-java8-rx - Demonstrates the Java DSL with the experimental new Java 8 lambda support for typesafe filtering and transforming of messages with Rx-Java. We love feedback on this DSL and expect to improve the API over the next couple of releases.
  • Important changes

    • Karaf 2.4.x is no longer supported. Karaf 4.x is the primary supported OSGi platform.
    • Jetty 8.x is no longer supported and camel-jetty8 has been removed
    • Spring 4.0 is no longer supported and camel-test-spring40 has been removed
    • Spring 3.x is no longer supported
    • Upgraded to Spring 4.3.x and Spring Boot 1.4.x (only spring-dm using spring 3.2.x as part of camel-spring in osgi/karaf is still in use - but spring-dm is deprecated and we recommend using blueprint)
    • Camel-gae has been removed (was not working anyway)
    • MongoDB component is migrated to MongoDB 3.
    • The camel-cache module is deprecated, you should use camel-ehcache instead.
    • The camel-docker module has been removed from Karaf features as it does not work in OSGi
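
As a taste of the Service Call EIP mentioned above, here is a minimal sketch of what a route using it might look like; the service name "hello-service" is just a placeholder and the registry (Kubernetes, Consul, ...) is configured separately.

    import org.apache.camel.builder.RouteBuilder;

    public class ServiceCallRoute extends RouteBuilder {
        @Override
        public void configure() {
            from("direct:hello")
                // "hello-service" is a placeholder: the ServiceCall EIP resolves the
                // actual host and port from the configured service registry at runtime.
                .serviceCall("hello-service")
                .log("${body}");
        }
    }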

For more information about the upcoming release you can read the Camel 2.18.0 Release page. We hope you'll enjoy this release and obviously feedback is more than welcome, as are contributions.

Testing camel-cassandraql on a Dockerized Apache Cassandra cluster

Since Camel 2.15.0 we have a component for the integration with Apache Cassandra. With this post I'd like to show you what you can do with this component on a real Cassandra cluster. We will use the Docker images from my personal account on Docker Hub. As a first step we need to set up our cluster.

Spinning up an Apache Cassandra cluster with Docker

For this post we will use Docker to spin up our Apache Cassandra cluster. When everything is up and running we will run a simple example to show how the Camel-cassandraql component can be used to interact with the cluster.

As a first step we will need to run a single-node cluster:

docker run --name master_node -dt oscerd/cassandra

This command will run a single container with an Apache Cassandra instance (the Cassandra version is the latest of the tick-tock releases, 3.6). We could use a single-node cluster, but that way we wouldn't exploit the Cassandra features. So let's add two other nodes!

docker run --name node1 -d -e SEED="$(docker inspect --format='{{ .NetworkSettings.IPAddress }}' master_node)" oscerd/cassandra
docker run --name node2 -d -e SEED="$(docker inspect --format='{{ .NetworkSettings.IPAddress }}' master_node)" oscerd/cassandra

Now we have two other nodes in our cluster. The environment variable SEED will be used to make the node aware of at least one of the nodes in the cluster. In the cassandra.yaml file there is a seeds entry to track this information; more information can be found in the Cassandra documentation. To be sure everything is up and running we can use the nodetool utility, by running the following command:

docker exec -ti master_node /opt/cassandra/bin/nodetool status

and we should get the Cluster status as output:

Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load       Tokens       Owns (effective)  Host ID                               Rack
UN  172.17.0.3  102.67 KiB  256          65.9%             1a985c48-33a1-44aa-b7e9-f1a3620a6482  rack1
UN  172.17.0.2  107.64 KiB  256          68.2%             da54ce5e-6433-4ea0-b2c3-fbc6c63ea955  rack1
UN  172.17.0.4  15.42 KiB  256          65.8%             0f2ba25a-37b0-4f27-a10a-d9a44655396a  rack1

To run the example we need to create a keyspace and a table. Locally I have downloaded an Apache Cassandra package, version 3.7. Run the cqlsh command:

<LOCAL_CASSANDRA_HOME>/bin/cqlsh $(docker inspect --format='{{ .NetworkSettings.IPAddress }}' master_node)

You should see the cqlsh prompt:

Connected to Test Cluster at 172.17.0.2:9042.
[cqlsh 5.0.1 | Cassandra 3.6 | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.
cqlsh>

Let's create a keyspace test with a table users:

create keyspace test with replication = {'class':'SimpleStrategy', 'replication_factor':3};
use test;
create table users ( id int primary key, name text );
insert into users (id,name) values (1, 'oscerd');
quit;

and check everything is fine by running a simple query:

cqlsh> use test;
cqlsh:test> select * from users;

 id | name
----+--------
  1 | oscerd

(1 rows)
cqlsh:test> 

You can do the same on the other two nodes, to check your cluster is working as you expect. Following these steps we now have a three-node cluster up and running. Let's use a simple camel-cassandraql example.

Run the example from Apache Camel

In Apache Camel I added a little example showing how to query (a select and an insert operation) an Apache Cassandra cluster with the related component; you can find the example here: Camel-example-cdi-cassandraql. You can run the example with:

mvn clean compile
mvn camel:run

and you should have the following output:

2016-07-24 15:33:50,812 [cdi.Main.main()] INFO  Version                        - WELD-000900: 2.3.5 (Final)
Jul 24, 2016 3:33:50 PM org.apache.deltaspike.core.impl.config.EnvironmentPropertyConfigSourceProvider <init>
INFO: Custom config found by DeltaSpike. Name: 'META-INF/apache-deltaspike.properties', URL: 'file:/home/oscerd/workspace/apache-camel/camel/examples/camel-example-cdi-cassandraql/target/classes/META-INF/apache-deltaspike.properties'
Jul 24, 2016 3:33:50 PM org.apache.deltaspike.core.util.ProjectStageProducer initProjectStage
INFO: Computed the following DeltaSpike ProjectStage: Production
2016-07-24 15:33:51,064 [cdi.Main.main()] INFO  Bootstrap                      - WELD-000101: Transactional services not available. Injection of @Inject UserTransaction not available. Transactional observers will be invoked synchronously.
2016-07-24 15:33:51,170 [cdi.Main.main()] INFO  Event                          - WELD-000411: Observer method [BackedAnnotatedMethod] protected org.apache.deltaspike.core.impl.message.MessageBundleExtension.detectInterfaces(@Observes ProcessAnnotatedType) receives events for all annotated types. Consider restricting events using @WithAnnotations or a generic type with bounds.
2016-07-24 15:33:51,174 [cdi.Main.main()] INFO  Event                          - WELD-000411: Observer method [BackedAnnotatedMethod] protected org.apache.deltaspike.core.impl.interceptor.GlobalInterceptorExtension.promoteInterceptors(@Observes ProcessAnnotatedType, BeanManager) receives events for all annotated types. Consider restricting events using @WithAnnotations or a generic type with bounds.
2016-07-24 15:33:51,189 [cdi.Main.main()] INFO  Event                          - WELD-000411: Observer method [BackedAnnotatedMethod] private org.apache.camel.cdi.CdiCamelExtension.processAnnotatedType(@Observes ProcessAnnotatedType<?>) receives events for all annotated types. Consider restricting events using @WithAnnotations or a generic type with bounds.
2016-07-24 15:33:51,195 [cdi.Main.main()] INFO  Event                          - WELD-000411: Observer method [BackedAnnotatedMethod] protected org.apache.deltaspike.core.impl.exclude.extension.ExcludeExtension.vetoBeans(@Observes ProcessAnnotatedType, BeanManager) receives events for all annotated types. Consider restricting events using @WithAnnotations or a generic type with bounds.
2016-07-24 15:33:51,491 [cdi.Main.main()] WARN  Validator                      - WELD-001478: Interceptor class org.apache.deltaspike.core.impl.throttling.ThrottledInterceptor is enabled for the application and for the bean archive /home/oscerd/.m2/repository/org/apache/deltaspike/core/deltaspike-core-impl/1.7.1/deltaspike-core-impl-1.7.1.jar. It will only be invoked in the @Priority part of the chain.
2016-07-24 15:33:51,491 [cdi.Main.main()] WARN  Validator                      - WELD-001478: Interceptor class org.apache.deltaspike.core.impl.lock.LockedInterceptor is enabled for the application and for the bean archive /home/oscerd/.m2/repository/org/apache/deltaspike/core/deltaspike-core-impl/1.7.1/deltaspike-core-impl-1.7.1.jar. It will only be invoked in the @Priority part of the chain.
2016-07-24 15:33:51,491 [cdi.Main.main()] WARN  Validator                      - WELD-001478: Interceptor class org.apache.deltaspike.core.impl.future.FutureableInterceptor is enabled for the application and for the bean archive /home/oscerd/.m2/repository/org/apache/deltaspike/core/deltaspike-core-impl/1.7.1/deltaspike-core-impl-1.7.1.jar. It will only be invoked in the @Priority part of the chain.
2016-07-24 15:33:52,244 [cdi.Main.main()] INFO  CdiCamelExtension              - Camel CDI is starting Camel context [camel-example-cassandraql-cdi]
2016-07-24 15:33:52,245 [cdi.Main.main()] INFO  DefaultCamelContext            - Apache Camel 2.18-SNAPSHOT (CamelContext: camel-example-cassandraql-cdi) is starting
2016-07-24 15:33:52,246 [cdi.Main.main()] INFO  ManagedManagementStrategy      - JMX is enabled
2016-07-24 15:33:52,352 [cdi.Main.main()] INFO  DefaultTypeConverter           - Loaded 189 type converters
2016-07-24 15:33:52,367 [cdi.Main.main()] INFO  DefaultRuntimeEndpointRegistry - Runtime endpoint registry is in extended mode gathering usage statistics of all incoming and outgoing endpoints (cache limit: 1000)
2016-07-24 15:33:52,465 [cdi.Main.main()] INFO  DefaultCamelContext            - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
2016-07-24 15:33:52,547 [cdi.Main.main()] INFO  NettyUtil                      - Did not find Netty's native epoll transport in the classpath, defaulting to NIO.
2016-07-24 15:33:52,789 [cdi.Main.main()] INFO  DCAwareRoundRobinPolicy        - Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
2016-07-24 15:33:52,790 [cdi.Main.main()] INFO  Cluster                        - New Cassandra host /172.17.0.3:9042 added
2016-07-24 15:33:52,791 [cdi.Main.main()] INFO  Cluster                        - New Cassandra host /172.17.0.2:9042 added
2016-07-24 15:33:52,791 [cdi.Main.main()] INFO  Cluster                        - New Cassandra host /172.17.0.4:9042 added
2016-07-24 15:33:52,914 [cdi.Main.main()] INFO  DCAwareRoundRobinPolicy        - Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
2016-07-24 15:33:52,914 [cdi.Main.main()] INFO  Cluster                        - New Cassandra host /172.17.0.3:9042 added
2016-07-24 15:33:52,914 [cdi.Main.main()] INFO  Cluster                        - New Cassandra host /172.17.0.2:9042 added
2016-07-24 15:33:52,914 [cdi.Main.main()] INFO  Cluster                        - New Cassandra host /172.17.0.4:9042 added
2016-07-24 15:33:52,985 [cdi.Main.main()] INFO  DefaultCamelContext            - Route: route1 started and consuming from: timer://stream?repeatCount=1
2016-07-24 15:33:52,986 [cdi.Main.main()] INFO  DefaultCamelContext            - Total 1 routes, of which 1 are started.
2016-07-24 15:33:52,987 [cdi.Main.main()] INFO  DefaultCamelContext            - Apache Camel 2.18-SNAPSHOT (CamelContext: camel-example-cassandraql-cdi) started in 0.742 seconds
2016-07-24 15:33:53,018 [cdi.Main.main()] INFO  Bootstrap                      - WELD-ENV-002003: Weld SE container STATIC_INSTANCE initialized
2016-07-24 15:33:54,041 [ timer://stream] INFO  route1                         - Result from query [Row[1, oscerd]]

Running the query again you should see another entry in the users table:

cqlsh> use test;
cqlsh:test> select * from users;

 id | name
----+-----------
  1 |    oscerd
  2 | davsclaus

(2 rows)
cqlsh:test>

The route of the example is simple:

    @ContextName("camel-example-cassandraql-cdi")
    static class KubernetesRoute extends RouteBuilder {

        @Override
        public void configure() {
            from("timer:stream?repeatCount=1")
                .to("cql://{{cassandra-master-ip}},{{cassandra-node1-ip}},{{cassandra-node2-ip}}/test?cql={{cql-select-query}}&consistencyLevel=quorum")
                .log("Result from query ${body}")
                .process(exchange -> {
                    exchange.getIn().setBody(Arrays.asList("davsclaus"));
                })
                .to("cql://{{cassandra-master-ip}},{{cassandra-node1-ip}},{{cassandra-node2-ip}}/test?cql={{cql-insert-query}}&consistencyLevel=quorum");
        }
    }

As you may see we are using the Quorum consistency level for both the select and the insert operations. You can find more information about it in the Cassandra Consistency Level documentation.

Conclusions

Using Docker to spin up a Cassandra Cluster can be useful during integration and performance testing. In this post we saw a little example of the camel-cassandraql features in a real Apache Cassandra cluster.

Before the camel-cassandraql pull request submission, I was working on a Camel-Cassandra component too. This component is a bit different from the cassandraql one and you can find more information in the Github repository. It is aligned with the latest released Apache Camel version, 2.17.2.

An introduction to Camel Kubernetes component

Since Camel 2.17.0 we have a Kubernetes component to interact with a Kubernetes cluster. The aim of this post is to provide a first introduction to the component: how it works, what you can do with it and a little example of a producer endpoint. The article is based on the Camel-Kubernetes 2.18-SNAPSHOT version.

Main Features

Camel-Kubernetes is based on the popular Kubernetes/Openshift client from the Fabric8 team. The component provides both Producer and Consumer endpoints and it has the following options (taken directly from the brand new automatically generated documentation of Apache Camel 2.18). To better understand the component you may need to take a look at the Kubernetes site.

Name Group Default Java Type Description
masterUrl common String Required Kubernetes Master url
apiVersion common String The Kubernetes API Version to use
category common String Required Kubernetes Producer and Consumer category
dnsDomain common String The dns domain used for ServiceCall EIP
kubernetesClient common DefaultKubernetesClient Default KubernetesClient to use if provided
portName common String The port name used for ServiceCall EIP
bridgeErrorHandler consumer false boolean Allows for bridging the consumer to the Camel routing Error Handler which mean any exceptions occurred while the consumer is trying to pickup incoming messages or the likes will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions that will be logged at WARN/ERROR level and ignored.
labelKey consumer String The Consumer Label key when watching at some resources
labelValue consumer String The Consumer Label value when watching at some resources
namespace consumer String The namespace
poolSize consumer 1 int The Consumer pool size
resourceName consumer String The Consumer Resource Name we would like to watch
exceptionHandler consumer (advanced) ExceptionHandler To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this options is not in use. By default the consumer will deal with exceptions that will be logged at WARN/ERROR level and ignored.
operation producer String Producer operation to do on Kubernetes
exchangePattern advanced InOnly ExchangePattern Sets the default exchange pattern when creating an exchange
synchronous advanced false boolean Sets whether synchronous processing should be strictly used or Camel is allowed to use asynchronous processing (if supported).
caCertData security String The CA Cert Data
caCertFile security String The CA Cert File
clientCertData security String The Client Cert Data
clientCertFile security String The Client Cert File
clientKeyAlgo security String The Key Algorithm used by the client
clientKeyData security String The Client Key data
clientKeyFile security String The Client Key file
clientKeyPassphrase security String The Client Key Passphrase
oauthToken security String The Auth Token
password security String Password to connect to Kubernetes
trustCerts security Boolean Define if the certs we used are trusted anyway or not
username security String Username to connect to Kubernetes

The headers used by the component are the following:

Name Type Description
CamelKubernetesOperation String The Producer operation
CamelKubernetesNamespaceName String The Namespace name
CamelKubernetesNamespaceLabels Map The Namespace Labels
CamelKubernetesServiceLabels Map The Service labels
CamelKubernetesServiceName String The Service name
CamelKubernetesServiceSpec io.fabric8.kubernetes.api.model.ServiceSpec The Spec for a Service
CamelKubernetesReplicationControllersLabels Map Replication controller labels
CamelKubernetesReplicationControllerName String Replication controller name
CamelKubernetesReplicationControllerSpec io.fabric8.kubernetes.api.model.ReplicationControllerSpec The Spec for a Replication Controller
CamelKubernetesReplicationControllerReplicas Integer The number of replicas for a Replication Controller during the Scale operation
CamelKubernetesPodsLabels Map Pod labels
CamelKubernetesPodName String Pod name
CamelKubernetesPodSpec io.fabric8.kubernetes.api.model.PodSpec The Spec for a Pod
CamelKubernetesPersistentVolumesLabels Map Persistent Volume labels
CamelKubernetesPersistentVolumesName String Persistent Volume name
CamelKubernetesPersistentVolumesClaimsLabels Map Persistent Volume Claim labels
CamelKubernetesPersistentVolumesClaimsName String Persistent Volume Claim name
CamelKubernetesPersistentVolumesClaimsSpec io.fabric8.kubernetes.api.model.PersistentVolumeClaimSpec The Spec for a Persistent Volume claim
CamelKubernetesSecretsLabels Map Secret labels
CamelKubernetesSecretsName String Secret name
CamelKubernetesSecret io.fabric8.kubernetes.api.model.Secret A Secret Object
CamelKubernetesResourcesQuotaLabels Map Resource Quota labels
CamelKubernetesResourcesQuotaName String Resource Quota name
CamelKubernetesResourceQuotaSpec io.fabric8.kubernetes.api.model.ResourceQuotaSpec The Spec for a Resource Quota
CamelKubernetesServiceAccountsLabels Map Service Account labels
CamelKubernetesServiceAccountName String Service Account name
CamelKubernetesServiceAccount io.fabric8.kubernetes.api.model.ServiceAccount A Service Account object
CamelKubernetesNodesLabels Map Node labels
CamelKubernetesNodeName String Node name
CamelKubernetesBuildsLabels Map Openshift Build labels
CamelKubernetesBuildName String Openshift Build name
CamelKubernetesBuildConfigsLabels Map Openshift Build Config labels
CamelKubernetesBuildConfigName String Openshift Build Config name
CamelKubernetesEventAction io.fabric8.kubernetes.client.Watcher.Action Action watched by the consumer
CamelKubernetesEventTimestamp String Timestamp of the action watched by the consumer
CamelKubernetesConfigMapName String ConfigMap name
CamelKubernetesConfigMapsLabels Map ConfigMap labels
CamelKubernetesConfigData Map ConfigMap Data

The whole component divides its features into a set of categories:

  • namespaces
  • services
  • replicationControllers
  • pods
  • persistentVolumes
  • persistentVolumesClaims
  • secrets
  • resourcesQuota
  • serviceAccounts
  • nodes
  • configMaps
  • builds
  • buildConfigs

The producer endpoints are based on a set of Operations:

  • Namespaces

    • listNamespaces
    • listNamespacesByLabels
    • getNamespace
    • createNamespace
    • deleteNamespace
  • Services

    • listServices
    • listServicesByLabels
    • getService
    • createService
    • deleteService
  • Replication Controllers

    • listReplicationControllers
    • listReplicationControllersByLabels
    • getReplicationController
    • createReplicationController
    • deleteReplicationController
    • scaleReplicationController
  • Pods

    • listPods
    • listPodsByLabels
    • getPod
    • createPod
    • deletePod
  • Persistent Volumes

    • listPersistentVolumes
    • listPersistentVolumesByLabels
    • getPersistentVolume
  • Persistent Volumes Claims

    • listPersistentVolumesClaims
    • listPersistentVolumesClaimsByLabels
    • getPersistentVolumeClaim
    • createPersistentVolumeClaim
    • deletePersistentVolumeClaim
  • Secrets

    • listSecrets
    • listSecretsByLabels
    • getSecret
    • createSecret
    • deleteSecret
  • Resources quota

    • listResourcesQuota
    • listResourcesQuotaByLabels
    • getResourceQuota
    • createResourceQuota
    • deleteResourceQuota
  • Service Accounts

    • listServiceAccounts
    • listServiceAccountsByLabels
    • getServiceAccount
    • createServiceAccount
    • deleteServiceAccount
  • Nodes

    • listNodes
    • listNodesByLabels
    • getNode
  • Config Maps

    • listConfigMaps
    • listConfigMapsByLabels
    • getConfigMap
    • createConfigMap
    • deleteConfigMap
  • Builds

    • listBuilds
    • listBuildsByLabels
    • getBuild
  • Build Configs

    • listBuildConfigs
    • listBuildConfigsByLabels
    • getBuildConfig

The documentation part is usually a bit boring, so let's see how you can declare a producer or a consumer to interact with a Kubernetes cluster:

    from("direct:list")
        .to("kubernetes://https://localhost:8443?oauthToken=xxxxxxxx&category=pods&operation=listPods")
        .to("mock:result")

In this Producer snippet you ask for the list of Pods in any namespace of your Kubernetes cluster. The operation returns a list of Pod objects (io.fabric8.kubernetes.api.model.Pod).
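
Many producer operations are driven by the headers listed above. As an illustration, here is a minimal sketch that fetches a single pod with the getPod operation; the namespace, pod name, endpoint address and token are placeholders.

    import org.apache.camel.builder.RouteBuilder;

    public class GetPodRoute extends RouteBuilder {
        @Override
        public void configure() {
            from("direct:getPod")
                // Header names come from the table above; the values are placeholders
                .setHeader("CamelKubernetesNamespaceName", constant("default"))
                .setHeader("CamelKubernetesPodName", constant("my-pod"))
                .to("kubernetes://https://localhost:8443?oauthToken=xxxxxxxx&category=pods&operation=getPod")
                .to("mock:result");
        }
    }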

    from("kubernetes://https://localhost:8443?oauthToken=xxxxxxxx&category=pods&namespace=default&labelKey=kind&labelValue=http")
        .to("mock:result")

In this Consumer snippet you will consume events from the Kubernetes cluster related to Pods, in the default namespace, labeled with key kind and value http. The exchange will contain the Pod object (in the body) with all the related metadata, a timestamp (in the CamelKubernetesEventTimestamp header) and an action (in the CamelKubernetesEventAction header).

A Little example

In the last few days I worked on a little example that lists pods from a Kubernetes cluster. I added the example to the Apache Camel examples folder. The Camel CDI Kubernetes example assumes you have a Kubernetes cluster running in your environment. Personally, for testing I prefer to use the Vagrant Openshift image from the Fabric8 team: to run the image and get started you can follow the guide in the Fabric8 documentation.

After the vagrant up command has completed remember to run the following commands:

fabric8-installer/vagrant/openshift$ export KUBERNETES_DOMAIN=vagrant.f8
fabric8-installer/vagrant/openshift$ export DOCKER_HOST=tcp://vagrant.f8:2375

and log in to Openshift

fabric8-installer/vagrant/openshift$ oc login https://172.28.128.4:8443

you will be asked for a user/password (admin/admin). After the login is successful you will be able to get your OAuth token (this will be useful for the Camel-Kubernetes example):

fabric8-installer/vagrant/openshift$ oc whoami --token
XXxnZi2YghQyke-eARXBZ38K-p5zIpco0ShdzO8gK6U

Now we have all the elements to run our example. Edit the apache-deltaspike.properties with the correct OAuth Token from Openshift.

oscerd@localhost:~/workspace/apache-camel/camel/examples/camel-example-cdi-kubernetes$ mvn compile camel:run
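
Before looking at the output, here is a rough sketch of what the route in that example does. The property placeholder names and the processor body are assumptions made for illustration; check the example in the Camel repository for the real code.

    import java.util.List;

    import org.apache.camel.builder.RouteBuilder;

    import io.fabric8.kubernetes.api.model.Pod;

    public class ListPodsRoute extends RouteBuilder {
        @Override
        public void configure() {
            from("timer:stream?repeatCount=3")
                // {{kubernetes-master-url}} and {{kubernetes-oauth-token}} are assumed
                // property placeholders resolved from apache-deltaspike.properties
                .to("kubernetes://{{kubernetes-master-url}}?oauthToken={{kubernetes-oauth-token}}"
                    + "&category=pods&operation=listPods")
                .process(exchange -> {
                    List<Pod> pods = exchange.getIn().getBody(List.class);
                    System.out.println("We currently have " + pods.size() + " pods");
                    for (Pod pod : pods) {
                        System.out.println("Pod name " + pod.getMetadata().getName()
                            + " with status " + pod.getStatus().getPhase());
                    }
                });
        }
    }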

The route is triggered by a timer with a repeatCount option equal to 3. The output of the example will be:

2016-07-12 12:00:42,220 [cdi.Main.main()] INFO  CdiCamelExtension              - Camel CDI is starting Camel context [camel-example-kubernetes-cdi]
2016-07-12 12:00:42,220 [cdi.Main.main()] INFO  DefaultCamelContext            - Apache Camel 2.18-SNAPSHOT (CamelContext: camel-example-kubernetes-cdi) is starting
2016-07-12 12:00:42,222 [cdi.Main.main()] INFO  ManagedManagementStrategy      - JMX is enabled
2016-07-12 12:00:42,355 [cdi.Main.main()] INFO  DefaultTypeConverter           - Loaded 188 type converters
2016-07-12 12:00:42,376 [cdi.Main.main()] INFO  DefaultRuntimeEndpointRegistry - Runtime endpoint registry is in extended mode gathering usage statistics of all incoming and outgoing endpoints (cache limit: 1000)
2016-07-12 12:00:42,468 [cdi.Main.main()] INFO  DefaultCamelContext            - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
2016-07-12 12:00:42,828 [cdi.Main.main()] INFO  DefaultCamelContext            - Route: route1 started and consuming from: timer://stream?repeatCount=3
2016-07-12 12:00:42,831 [cdi.Main.main()] INFO  DefaultCamelContext            - Total 1 routes, of which 1 are started.
2016-07-12 12:00:42,832 [cdi.Main.main()] INFO  DefaultCamelContext            - Apache Camel 2.18-SNAPSHOT (CamelContext: camel-example-kubernetes-cdi) started in 0.611 seconds
2016-07-12 12:00:42,878 [cdi.Main.main()] INFO  Bootstrap                      - WELD-ENV-002003: Weld SE container STATIC_INSTANCE initialized
We currently have 13 pods
Pod name docker-registry-1-c6ie5 with status Running
Pod name fabric8-docker-registry-pgo5y with status Running
Pod name fabric8-forge-wvyw7 with status Running
Pod name fabric8-tr0b9 with status Running
Pod name gogs-2p4mn with status Running
Pod name grafana-754y7 with status Running
Pod name infinispan-client-a7z3k with status Running
Pod name infinispan-client-iubag with status Running
Pod name infinispan-server-wl0in with status Running
Pod name jenkins-cr2ez with status Running
Pod name nexus-aarks with status Running
Pod name prometheus-mp0kr with status Running
Pod name router-1-dkjsb with status Running
We currently have 13 pods
Pod name docker-registry-1-c6ie5 with status Running
Pod name fabric8-docker-registry-pgo5y with status Running
Pod name fabric8-forge-wvyw7 with status Running
Pod name fabric8-tr0b9 with status Running
Pod name gogs-2p4mn with status Running
Pod name grafana-754y7 with status Running
Pod name infinispan-client-a7z3k with status Running
Pod name infinispan-client-iubag with status Running
Pod name infinispan-server-wl0in with status Running
Pod name jenkins-cr2ez with status Running
Pod name nexus-aarks with status Running
Pod name prometheus-mp0kr with status Running
Pod name router-1-dkjsb with status Running
We currently have 13 pods
Pod name docker-registry-1-c6ie5 with status Running
Pod name fabric8-docker-registry-pgo5y with status Running
Pod name fabric8-forge-wvyw7 with status Running
Pod name fabric8-tr0b9 with status Running
Pod name gogs-2p4mn with status Running
Pod name grafana-754y7 with status Running
Pod name infinispan-client-a7z3k with status Running
Pod name infinispan-client-iubag with status Running
Pod name infinispan-server-wl0in with status Running
Pod name jenkins-cr2ez with status Running
Pod name nexus-aarks with status Running
Pod name prometheus-mp0kr with status Running
Pod name router-1-dkjsb with status Running
^C2016-07-12 12:00:50,946 [Thread-1       ] INFO  MainSupport$HangupInterceptor  - Received hang up - stopping the main instance.
2016-07-12 12:00:50,950 [Thread-1       ] INFO  CamelContextProducer           - Camel CDI is stopping Camel context [camel-example-kubernetes-cdi]
2016-07-12 12:00:50,950 [Thread-1       ] INFO  DefaultCamelContext            - Apache Camel 2.18-SNAPSHOT (CamelContext: camel-example-kubernetes-cdi) is shutting down
2016-07-12 12:00:50,951 [Thread-1       ] INFO  DefaultShutdownStrategy        - Starting to graceful shutdown 1 routes (timeout 300 seconds)
2016-07-12 12:00:50,953 [ - ShutdownTask] INFO  DefaultShutdownStrategy        - Route: route1 shutdown complete, was consuming from: timer://stream?repeatCount=3
2016-07-12 12:00:50,953 [Thread-1       ] INFO  DefaultShutdownStrategy        - Graceful shutdown of 1 routes completed in 0 seconds
2016-07-12 12:00:50,965 [Thread-1       ] INFO  MainLifecycleStrategy          - CamelContext: camel-example-kubernetes-cdi has been shutdown, triggering shutdown of the JVM.
2016-07-12 12:00:50,968 [Thread-1       ] INFO  DefaultCamelContext            - Apache Camel 2.18-SNAPSHOT (CamelContext: camel-example-kubernetes-cdi) uptime 8.748 seconds
2016-07-12 12:00:50,968 [Thread-1       ] INFO  DefaultCamelContext            - Apache Camel 2.18-SNAPSHOT (CamelContext: camel-example-kubernetes-cdi) is shutdown in 0.018 seconds
2016-07-12 12:00:50,977 [Thread-1       ] INFO  Bootstrap                      - WELD-ENV-002001: Weld SE container STATIC_INSTANCE shut down

As you can see, 13 pods are returned in my environment and all of them are in Running status.

Conclusions

This post was just a little introduction to what Camel-Kubernetes can do. I think the most interesting part is the consumer side anyway, so I will focus on the different types of consumers in one of the next Camel related posts. Meanwhile, please try the component yourself and if you find something wrong, think something can be improved or think there is something else we can focus on, don't hesitate to post on the Camel Dev mailing list or on the Camel Users mailing list, or, if you are sure there is a bug or you think an improvement is truly needed, raise a JIRA on the Apache Camel JIRA. You can find examples of the different operations you can do with Producer and Consumer endpoints in the component unit tests.

Stay tuned for the next Camel-Kubernetes consumers post.