Apache Tamaya as a Toolkit

I mentioned in my previous post that Apache Tamaya is not a framework, but a flexible toolkit that can be used in a variety of ways. This requires Tamaya to be adaptable on several levels. This blog shows in more detail how Tamaya deals with such requirements. Summarizing, Tamaya provides the following hooks and plugin mechanisms:

  • Tamaya provides global access to a shared Configuration instance via its ConfigurationProvider singleton. This singleton is backed by a ConfigurationProviderSpi, which allows the provider implementation used to be exchanged completely.
  • Instead of sharing a single Configuration instance, you can also define your own lifecycle and configuration scope. To support these scenarios Tamaya provides a ConfigurationContextBuilder, which can be used to create your own, completely independent Configuration instances.
  • By default Tamaya uses the Java ServiceLoader for locating services such as SPI implementations, PropertySources etc. Internally, however, Tamaya indirects these accesses via its own ServiceContext, which is managed by the ServiceContextManager service locator. This makes Tamaya flexible enough to integrate seamlessly with OSGi services or IoC containers.
  • Tamaya comes with a lean core library that has only a few dependencies (refer also to my last blog post here). So integrating Tamaya adds only a small footprint to your application. If you want to support external configuration with Tamaya, but nevertheless don’t want to ship your library or product with Tamaya, you can use the tamaya-optional module, which integrates with Tamaya automatically when it is available on your classpath.
  • Finally, because Tamaya is built in a very modular way, you can add and use exactly the features you want. There is no need to add tens of megabytes of dependencies you do not require in your application.

This blog will discuss the first two concepts in more detail. The other points will follow in subsequent blog posts.

Managing the shared Configuration context

Tamaya provides access to a shared default Configuration instance with its ConfigurationProvider singleton:

Configuration config = ConfigurationProvider.getConfiguration();

The default implementation in tamaya-core actually shares one single instance globally. This is perfectly sufficient for microservice-styled applications, but for more complex environments such as Java EE or OSGi it might not be the best match. Fortunately, you can replace the default implementation by registering your own implementation of org.apache.tamaya.spi.ConfigurationProviderSpi:

public interface ConfigurationProviderSpi {

    Configuration getConfiguration();
    Configuration createConfiguration(ConfigurationContext context);
    ConfigurationContextBuilder getConfigurationContextBuilder();
    void setConfiguration(Configuration config);
    boolean isConfigurationSettable();

    @Deprecated
    default ConfigurationContext getConfigurationContext(){...}
    @Deprecated
    default void setConfigurationContext(ConfigurationContext context){...}
    @Deprecated
    default boolean isConfigurationContextSettable(){...}
}

The listing above already shows the new interface version, which will be shipped with 0.4-incubating. In version 0.3-incubating and earlier you also have to implement the deprecated methods (fortunately the implementation can easily delegate to the other methods of the interface, so this is a no-brainer).

Looking at the methods you see the most important plugin APIs of the Tamaya API:

  • Configuration getConfiguration() defines which Configuration instance is the shared default instance returned by ConfigurationProvider.getConfiguration(). You can return a single shared instance or manage multiple instances related to context data not known to Tamaya.
  • Configuration createConfiguration(ConfigurationContext) defines the logic that implements a Configuration instance based on a given ConfigurationContext (the instance containing the underlying property sources, filters, converters, policies etc., including their ordering and significance). In most cases you will reuse an existing implementation, such as org.apache.tamaya.core.internal.DefaultConfiguration (tamaya-core) or org.apache.tamaya.spisupport.DefaultConfiguration (tamaya-spisupport).
  • ConfigurationContextBuilder getConfigurationContextBuilder() creates a new builder for creating a ConfigurationContext. This builder allows you to assemble your own configuration setup, including the property sources, converters and filters to be used, as well as their ordering and significance. Given a ConfigurationContext, you can use the createConfiguration factory method (see next section) to create a new Configuration instance.
    The builder API will be discussed in more detail later in this post.
  • void setConfiguration(Configuration) allows you to explicitly replace the current shared default instance with an instance of your choice. Since this operation may not be wanted in production scenarios, the active SPI implementation can declare it unavailable via the boolean isConfigurationSettable() method.

Example: ConfigurationProvider for multithreaded testing

To demonstrate the power of this SPI let’s implement a ConfigurationProviderSpi which supports Configuration isolation at the thread level. This allows us to set a Configuration for isolated testing, similar to the following snippet:

public class MyTest{

  // Note: testPropertySource must be initialized before config, since
  // buildConfiguration() references it.
  private PropertySource testPropertySource = new Config();
  private Configuration config = buildConfiguration();
 
  /** Installs the test configuration for this thread. */
  @Before
  public void initConfig(){
    ConfigurationProvider.setConfiguration(config);
  }

  /** Resets the test configuration for this thread. */
  @After
  public void resetConfig(){
    ConfigurationProvider.setConfiguration(null);
  }

  /** Tests ... */
  @Test
  public void testFoo(){
    // execute any tests on any classes that consume the shared configuration:
    System.out.println("Thread: " + Thread.currentThread().getId() 
                       + " - config: " 
                       + ConfigurationProvider.getConfiguration().getProperties());
  }

  /** Builds the temporary configuration used by this test class. */
  private Configuration buildConfiguration(){
    ConfigurationContext context = ConfigurationProvider.getConfigurationContextBuilder()
                  .addDefaultPropertyConverters()
                  .addPropertySource(testPropertySource)
                  .build();
    return ConfigurationProvider.createConfiguration(context);
  }

  /** Configuration for this test. */
  private static final class Config extends BasePropertySource{
    ...
  }
}

The idea is that you use parallel, multi-threaded test execution in your test framework. With the default shared provider instance of Tamaya, tests using different configurations would conflict. Our implementation will allow isolating the configurations at the thread level. To make life easier we directly inherit from the default implementation class and just add an optional ThreadLocal layer on top:

/**
 * Configuration provider that allows to set and reset a configuration
 * different per thread.
 */
public class TestConfigProvider extends DefaultConfigurationProvider{

    private ThreadLocal<Configuration> threadedConfig = new ThreadLocal<>();

    @Override
    public Configuration getConfiguration() {
        Configuration config = threadedConfig.get();
        if(config!=null){
            return config;
        }
        return super.getConfiguration();
    }

    @Override
    public void setConfiguration(Configuration config) {
        if(config==null){
            threadedConfig.remove();
        }else {
            threadedConfig.set(config);
        }
    }
}
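The fallback pattern used by TestConfigProvider can be sketched independently of Tamaya with plain maps standing in for Configuration instances. All names below (ThreadLocalFallback, set, current) are illustrative and not part of the Tamaya API:

```java
import java.util.Collections;
import java.util.Map;

/**
 * Tamaya-independent sketch of the pattern above: a thread-local override
 * shadows a shared default, and setting null resets the override so the
 * shared default becomes visible again.
 */
public class ThreadLocalFallback {

    private static final Map<String, String> SHARED_DEFAULTS =
            Collections.singletonMap("server.port", "80");
    private static final ThreadLocal<Map<String, String>> OVERRIDE = new ThreadLocal<>();

    /** Installs an override for the current thread only (null resets it). */
    public static void set(Map<String, String> config) {
        if (config == null) {
            OVERRIDE.remove(); // fall back to the shared defaults again
        } else {
            OVERRIDE.set(config);
        }
    }

    /** Returns the thread-local override if present, else the shared defaults. */
    public static Map<String, String> current() {
        Map<String, String> local = OVERRIDE.get();
        return local != null ? local : SHARED_DEFAULTS;
    }
}
```

Each test thread sees only its own override, while threads that never call set() keep reading the shared defaults.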

Finally we have to register our new service implementation using the Java ServiceLoader. So let’s add a classpath resource at

   META-INF/services/org.apache.tamaya.spi.ConfigurationProviderSpi

with the following contents:

com.mycompany.test.TestConfigProvider

With this in place, the new provider is picked up by the API on startup and ensures configurations can be set and reset for each running test thread. By default, if no configuration has been explicitly set, the default shared configuration instance is still returned.
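The lookup behind this registration is the generic java.util.ServiceLoader mechanism; a Tamaya-independent sketch (loadOrDefault is an illustrative helper, not a Tamaya API):

```java
import java.util.ServiceLoader;

/**
 * Sketch of the generic ServiceLoader lookup: implementations are discovered
 * via META-INF/services resource files on the classpath; if none is
 * registered, a default implementation is used instead.
 */
public class LoaderDemo {

    /** Returns the first registered implementation of the SPI, or the fallback. */
    public static <T> T loadOrDefault(Class<T> spi, T fallback) {
        for (T impl : ServiceLoader.load(spi)) {
            return impl; // first registered implementation wins
        }
        return fallback;
    }
}
```

With no provider registered on the classpath, the loader yields no instances and the fallback is returned.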

Using the ConfigurationContextBuilder

The ConfigurationContext is the container that manages all required resources to build up a Configuration, such as property sources, converters, filters etc. By default, Tamaya creates a new Configuration based on the services provided through its current ServiceContext (more on that in a subsequent post). Tamaya also comes with a builder, which allows you to assemble your own custom ConfigurationContext. The default bootstrap code gives you a first impression of the capabilities of this API:

ConfigurationContext context = new DefaultConfigurationContextBuilder()
    .addDefaultPropertyConverters()
    .addDefaultPropertyFilters()
    .addDefaultPropertySources()
    .build();

This code snippet actually does the following:

  1. It creates a new ConfigurationContextBuilder instance (this can also be done by use of the public Tamaya API: ConfigurationProvider.getConfigurationContextBuilder()).
  2. Load all available PropertyConverter instances from the ServiceContext and add them to the current chain of converters (one chain for each target type). They are ordered based on their @Priority annotations (if missing, a priority of 0 is assumed).
  3. Load all available PropertyFilter instances from the ServiceContext and add them to the current chain of filters. They are ordered based on their @Priority annotations (if missing, a priority of 0 is assumed).
  4. Load all available PropertySource instances from the ServiceContext and add them to the current chain of property sources. Then load all available PropertySourceProvider instances from the ServiceContext and add all PropertySource instances they provide (obtained by calling PropertySourceProvider.getPropertySources()) to the chain as well.
  5. Order the property sources based on their int getOrdinal() value (API method of PropertySource).
  6. By default the PropertyValueCombinationPolicy used is an overriding policy. This policy ensures that non-null values returned from more significant PropertySources (at higher positions in the chain order) override any existing values for the same key.
  7. Finally the builder instances and lists are rendered into a read-only ConfigurationContext instance.
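The @Priority-based ordering from steps 2 and 3 can be sketched with plain Java; Component, its fields, and PriorityOrdering are illustrative names, not Tamaya types, and the sort direction here (ascending) is just one choice:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

/** Illustrative component carrying an optional priority value. */
class Component {
    final String name;
    final Integer priority; // null stands for "no @Priority annotation present"

    Component(String name, Integer priority) {
        this.name = name;
        this.priority = priority;
    }

    int effectivePriority() {
        return priority == null ? 0 : priority; // missing priority defaults to 0
    }
}

public class PriorityOrdering {
    /** Orders components by their effective priority, ascending. */
    public static List<Component> ordered(List<Component> in) {
        return in.stream()
                 .sorted(Comparator.comparingInt(Component::effectivePriority))
                 .collect(Collectors.toList());
    }
}
```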

Given this context, a Configuration can easily be created by calling the corresponding API method:

Configuration config = ConfigurationProvider.createConfiguration(context);

Given that, it should be obvious how easy it is to apply this logic to different classloaders and reuse Tamaya’s bootstrapping logic across classloader hierarchies. But the builder effectively provides quite a lot of additional functionality to manipulate/adapt a configuration context:

public interface ConfigurationContextBuilder {
 
  ConfigurationContextBuilder setContext(ConfigurationContext context);
  
  // setting up the property source chain
  ConfigurationContextBuilder addPropertySources(PropertySource... propertySources);
  ConfigurationContextBuilder addPropertySources(Collection<PropertySource> propertySources);
  ConfigurationContextBuilder addDefaultPropertySources();
  ConfigurationContextBuilder removePropertySources(PropertySource... propertySources);
  ConfigurationContextBuilder removePropertySources(Collection<PropertySource> propertySources);
  ConfigurationContextBuilder increasePriority(PropertySource propertySource);
  ConfigurationContextBuilder decreasePriority(PropertySource propertySource);
  ConfigurationContextBuilder highestPriority(PropertySource propertySource);
  ConfigurationContextBuilder lowestPriority(PropertySource propertySource);
  ConfigurationContextBuilder sortPropertySources(Comparator<PropertySource> comparator);
  List<PropertySource> getPropertySources();
  
  // Setting up property filters
  ConfigurationContextBuilder addPropertyFilters(PropertyFilter... filters); 
  ConfigurationContextBuilder addPropertyFilters(Collection<PropertyFilter> filters);
  ConfigurationContextBuilder addDefaultPropertyFilters();
  ConfigurationContextBuilder removePropertyFilters(PropertyFilter... filters);
  ConfigurationContextBuilder removePropertyFilters(Collection<PropertyFilter> filters);
  List<PropertyFilter> getPropertyFilters();
  ConfigurationContextBuilder sortPropertyFilter(Comparator<PropertyFilter> comparator);
  
  // Setting up converters
  <T> ConfigurationContextBuilder addPropertyConverters(TypeLiteral<T> typeToConvert, PropertyConverter<T>... propertyConverters);
  <T> ConfigurationContextBuilder addPropertyConverters(TypeLiteral<T> typeToConvert, Collection<PropertyConverter<T>> propertyConverters);
  ConfigurationContextBuilder addDefaultPropertyConverters();
  <T> ConfigurationContextBuilder removePropertyConverters(TypeLiteral<T> typeToConvert, PropertyConverter<T>... propertyConverters);
  <T> ConfigurationContextBuilder removePropertyConverters(TypeLiteral<T> typeToConvert, Collection<PropertyConverter<T>> propertyConverters);
  ConfigurationContextBuilder removePropertyConverters(TypeLiteral<?> typeToConvert);
  Map<TypeLiteral<?>, Collection<PropertyConverter<?>>> getPropertyConverter();
  
  // Setting the combination policy
  ConfigurationContextBuilder setPropertyValueCombinationPolicy(PropertyValueCombinationPolicy policy);
  
  // building a context
  ConfigurationContext build();

}

Summarizing, with the builder you can:

  • load the default instances of property sources, converters and filters, reusing the default loading logic already implemented;
  • add and remove any kind of property sources, converters and filters;
  • reorder property sources programmatically, regardless of their provided getOrdinal() value;
  • reorder property sources and filters based on your own custom comparator implementations;
  • change the combination policy programmatically as well.

Hereby you have full control over what is happening. Unless you explicitly call the sortXXX methods, the builder will preserve exactly the ordering in which artifacts (e.g. property sources) were added to the builder. Given that, you can very easily define your configuration context based on your own requirements. As an example we would like to build a configuration (context) with the following property sources:

  1. Lookup environment properties first.
  2. Check for system properties.
  3. Add CLI parameters as third source.
  4. And let configuration be overridden by a centrally managed etcd server.
  5. For conversion and filtering we are satisfied with the default logic.

In code we can build up a corresponding Context/Configuration instance and initialize our application as follows:

public static void main(String[] args) {
  CLIPropertySource.initMainArgs(args);
  ConfigurationContext context = ConfigurationProvider.getConfigurationContextBuilder()
    .addDefaultPropertyConverters()
    .addDefaultPropertyFilters()
    .addPropertySources(
      new EnvironmentPropertySource(),
      new SystemPropertySource(),
      new CLIPropertySource(),
      new EtcdPropertySource("localhost"))
    .build();
  ConfigurationProvider.setConfiguration(ConfigurationProvider.createConfiguration(context));
  
  // start your application
}

Given this snippet you see that Tamaya is very flexible: it does not tell you how you have to write your applications or which framework you have to use. Nevertheless, to make life easier, Tamaya provides several extensions/integrations, for example with

  • Spring/Spring Boot
  • Vertx
  • CDI/Java EE
  • Microprofile.io (work in progress)
  • OSGI Configuration (work in progress)

This is why we call Tamaya a toolkit for configuration. It should shine with ease of use and flexibility, and not impose any constraints on developers.


Configuration with Apache Tamaya

This is a first post of a series discussing the Apache Tamaya configuration toolkit. So let’s directly have a look at Tamaya and see what it can do for us…

Setting up a minimal project

First we need to download and install Tamaya. Fortunately you don’t need any third-party libraries. So given you have Java 7 and a reasonable build system such as Maven installed, you simply add the following dependency:

<dependency>
  <groupId>org.apache.tamaya</groupId>
  <artifactId>tamaya-core</artifactId>
  <version>0.3-incubating</version>
</dependency>

As a result you will have a small maven project with the following dependencies:

[INFO] org.apache.tamaya.examples:01-minimal:jar:0.3-incubating
[INFO] +- org.apache.tamaya:tamaya-core:jar:0.3-incubating:compile
[INFO] | \- org.apache.tamaya:tamaya-api:jar:0.3-incubating:compile
[INFO] +- org.apache.geronimo.specs:geronimo-annotation_1.2_spec:jar:1.0-alpha-1:compile

Tamaya’s core transitively also requires the Tamaya API and the 1.2 annotations package:

  • tamaya-api contains the API you will see in your code,
  • whereas tamaya-core actually implements the API.
  • Since the service location mechanism implemented by tamaya-core also supports @Priority annotations, the corresponding annotation package is required as well.

Accessing Configuration

So there you go, you are ready to access configuration. Tamaya, by default, maps all system and environment properties, so we can access them easily:

public static void main(String[] args) {
    Configuration cfg = ConfigurationProvider.getConfiguration();

    System.out.println("User Home   :  " + cfg.get("user.home"));
    System.out.println("User Name   :  " + cfg.get("user.name"));
    System.out.println("PATH        :  " + cfg.get("env.PATH"));
}

As expected, the output looks something like this:

User Home   :  /home/atsticks
User Name   :  atsticks
PATH        :  /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin ...

This is great, but we want to see a bit more of what is going on under the hood, so let’s add our Configuration instance to the output using

System.out.println(cfg);

The output gives us plenty of detail about our configuration setup:

Configuration{
 ConfigurationContext{
  Property Sources
  ----------------
  CLASS                         NAME                                  SCANNABLE SIZE STATE ERROR

  CLIPropertySource             CLIPropertySource                     true         0 OK
  EnvironmentPropertySource     environment-properties                true        69 OK
  JavaConfigurationPropertySour resource:META-INF/javaconfiguration.* true         7 OK
  SystemPropertySource          system-properties                     true        52 OK

 Property Filters
 ----------------
 No property filters loaded.

 Property Converters
 -------------------
 CLASS                      TYPE             INFO

 CurrencyConverter          Currency         org.apache.tamaya.core.internal.converters.CurrencyConverter@3fee733d
 LongConverter              Long             org.apache.tamaya.core.internal.converters.LongConverter@5acf9800
 FloatConverter             Float            org.apache.tamaya.core.internal.converters.FloatConverter@4617c264
 BigIntegerConverter        BigInteger       org.apache.tamaya.core.internal.converters.BigIntegerConverter@36baf30c
 NumberConverter            Number           org.apache.tamaya.core.internal.converters.NumberConverter@7a81197d
 URLConverter               URL              org.apache.tamaya.core.internal.converters.URLConverter@5ca881b5
 DoubleConverter            Double           org.apache.tamaya.core.internal.converters.DoubleConverter@24d46ca6
 ClassConverter             Class            org.apache.tamaya.core.internal.converters.ClassConverter@72ea2f77
 IntegerConverter           Integer          org.apache.tamaya.core.internal.converters.IntegerConverter@4517d9a3
 URIConverter               URI              org.apache.tamaya.core.internal.converters.URIConverter@372f7a8d
 ByteConverter              Byte             org.apache.tamaya.core.internal.converters.ByteConverter@2f92e0f4
 ShortConverter             Short            org.apache.tamaya.core.internal.converters.ShortConverter@28a418fc
 BigDecimalConverter        BigDecimal       org.apache.tamaya.core.internal.converters.BigDecimalConverter@5305068a
 PathConverter              Path             org.apache.tamaya.core.internal.converters.PathConverter@1f32e575
 CharConverter              Character        org.apache.tamaya.core.internal.converters.CharConverter@279f2327
 FileConverter              File             org.apache.tamaya.core.internal.converters.FileConverter@2ff4acd0
 BooleanConverter           Boolean          org.apache.tamaya.core.internal.converters.BooleanConverter@54bedef2

 PropertyValueCombinationPolicy: org.apache.tamaya.spi.PropertyValueCombinationPolicy$1
}}

From the output you can easily identify the main abstractions of Tamaya:

  • The Configuration object, which provides the API.
  • The ConfigurationContext, which actually is the container of all artifacts required to provide a Configuration:
    • Property Sources that provide configuration key/value pairs (both as Strings).
    • Property Converters that provide capabilities for converting configuration values to typed instances.
    • Property Filters that allow you to add, extend, remove or update configuration String values after evaluation.
    • The PropertyValueCombinationPolicy defines how entries from less significant property sources are combined with values from more significant property sources into a new value. Adapting this policy also allows combining properties, as needed to support collection types.
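Such a combination policy can be pictured as a two-argument function over String values. The following is a Tamaya-independent sketch of an overriding and a collecting variant (plain BinaryOperators with illustrative names, not the actual Tamaya SPI):

```java
import java.util.function.BinaryOperator;

/**
 * Sketch of two combination policies as plain functions: the default
 * overriding behaviour keeps the more significant non-null value, while a
 * collecting variant concatenates values, as needed for collection types.
 */
public class CombinationPolicies {

    /** The more significant value (second argument) wins unless it is null. */
    public static final BinaryOperator<String> OVERRIDING =
        (current, next) -> next != null ? next : current;

    /** Joins values with a comma, e.g. to accumulate list entries. */
    public static final BinaryOperator<String> COLLECTING =
        (current, next) -> current == null ? next
                         : next == null ? current
                         : current + "," + next;
}
```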

Similarly we see that the following property sources are included in the current configuration:

  • CLI parameters (must be explicitly set, refer to next section)
  • Environment properties as returned by System.getenv().
  • Entries from META-INF/javaconfiguration.* (supported are .properties and .xml property files).
  • System properties as returned by System.getProperties().

The order given also reflects the significance of the different property sources. So system properties override properties from all other property sources. This behaviour can be tweaked in different ways, which will probably be discussed in another blog post.

CLI Properties

Command line parameters cannot be evaluated automatically, so they must be passed explicitly to the CLIPropertySource using its static initializer method:

public static void main(String[] args) {
    CLIPropertySource.initMainArgs(args);
    ...
}

Given that, configuration can also be passed as main arguments:

java ... TestExampleMain -server-port 800

Typed Configuration Access

As mentioned, we can also use typed configuration access; e.g. let’s access the user.home property as a File:

File homeDir = cfg.get("user.home", File.class);
System.out.println("\tUser Home   :  " + homeDir);

Of course the output will not change, but we successfully accessed configuration using a target type. The list of converters shown earlier shows the types that are supported by default. The upcoming version 0.4-incubating will require Java 8, and additional Java 8 specific types will be supported as well:

  • Optional
  • LocalTime
  • LocalDate
  • LocalDateTime
  • OffsetTime
  • OffsetDateTime
  • Instant

Providing Default Values

In many cases you want to provide default values. Of course, this is also possible with Apache Tamaya:

int port = cfg.getOrDefault("server.port", int.class, 80);
System.out.println("\tServer port :  " + port);

As expected, 80 is returned for the configuration value:

Server port : 80

Now let’s define an additional system property: -Dserver.port=889. As expected the value returned has been changed:

Server port : 889
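The semantics of getOrDefault shown above can be sketched independently of Tamaya: the default applies only when the key is missing, otherwise the raw String value is converted to the target type. Defaults is an illustrative name, not the actual Tamaya implementation:

```java
import java.util.Map;

/**
 * Sketch of lookup-convert-default semantics: return the default only when
 * the key is missing; otherwise parse the raw value into the target type.
 */
public class Defaults {
    public static int getOrDefault(Map<String, String> cfg, String key, int defaultValue) {
        String raw = cfg.get(key);
        return raw == null ? defaultValue : Integer.parseInt(raw);
    }
}
```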

Recap

This blog showed you the minimal API of Apache Tamaya. It is fairly easy to use, but in most real-world scenarios you will probably want more features. This is why there is a variety of extensions which provide additional functionality such as:

  • placeholder and resolution patterns,
  • resource location,
  • additional file formats,
  • property sources for various backend technologies,
  • configuration injection with or without CDI,
  • integration with other frameworks such as Spring, Camel and more.

Also, accessing Configuration using a singleton pattern (ConfigurationProvider.getConfiguration()) is known to be rather inflexible. The good news is that Tamaya has several mechanisms to use your own, more flexible implementation of a ConfigurationProvider.

Also, Apache Tamaya is a toolkit. You don’t have to use CDI, Java EE or Spring to use it. Most extensions provided work seamlessly with almost every Java based framework. You can even programmatically assemble your configuration using a builder pattern and manage configuration instances using your own lifecycle models. So you have full freedom in how you use Apache Tamaya. Nevertheless, Tamaya offers you a unified configuration model and API, so you don’t have to adapt your code when changing from one platform to another.

Subsequent posts will discuss the various features. The Apache Tamaya team encourages you to give back any kind of feedback, so they can improve Tamaya even more.


Composing Configuration Sources

Recently there were discussions regarding a new Java EE Configuration JSR. At JavaOne 2016 Oracle announced that they want to relaunch this JSR (the first attempt was about 3 years ago, when I was working at Credit Suisse). Unfortunately, with the latest reschedules and strategic decisions, Oracle decided to postpone this initiative until Java EE 9. But the topic is an important cross-cutting concern, especially when running in distributed cloud environments. Consequently discussions are ongoing, e.g. at microprofile.io. In this blog I would like to discuss my thoughts on how different configuration sources could be assembled into one uniform Configuration instance.

What is Configuration?

To keep things simple I assume Configuration to be something as simple as follows (using Java styled syntax):

public interface Configuration{
  String getProperty(String key);
  Map<String,String> getProperties();

  static Configuration getInstance(){
    ...
  }
}

Similarly I define a Configuration Source, which provides configuration entries as follows:

public interface ConfigSource{
  String getProperty(String key);
  Map<String,String> getProperties();
}

Composing Configuration from multiple Configuration Sources

In general, the composition of a Configuration is a function f that maps all known ConfigSources cs1, …, csx to a Configuration c:

f(cs1, cs2, cs3, …, csx) -> c

Most configuration frameworks currently known simplify this function by repeatedly applying a mapping function fm, which combines an intermediate value with the value of property k from the next configuration source, yielding a final value V. Writing CV(cs, k) for the value of key k in source cs, this can be expressed as follows:

V = fm(…fm(fm(fm(null, CV(cs1, k)), CV(cs2, k)), CV(cs3, k)), …, CV(csx, k))

Except for Apache Tamaya, where fm can be configured (as the so-called PropertyValueCombinationPolicy), most frameworks use a fixed (overriding) mapping function, which can be defined as follows:

fm(v1, v2) ->  v2==null ? v1 : v2
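In code, applying fm repeatedly over an ordered list of sources is a simple fold. Below is a sketch with plain maps standing in for ConfigSources; ConfigFold and resolve are illustrative names, not part of any proposed API:

```java
import java.util.List;
import java.util.Map;
import java.util.function.BinaryOperator;

/**
 * Sketch of the repeated application of fm over ordered sources:
 * V = fm(...fm(fm(null, CV(cs1,k)), CV(cs2,k))...).
 */
public class ConfigFold {
    public static String resolve(List<Map<String, String>> orderedSources,
                                 String key,
                                 BinaryOperator<String> fm) {
        String value = null;
        for (Map<String, String> source : orderedSources) {
            value = fm.apply(value, source.get(key)); // source.get(key) plays CV(cs, k)
        }
        return value;
    }
}
```

With the overriding fm defined above, the last source providing a non-null value wins.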

There are use cases where this strict kind of mapping function is not suitable (e.g. mapping of collection types and type-safe configuration); this is worth discussing in a separate post. For now, given a mapping function fm, obviously the only free dimension is the ordering of configuration sources. So the question is how the sources should be ordered before the evaluation function is applied. This sounds easy, but in fact can get quite tricky, since the sources are not necessarily known at deploy time. This can be the case because:

  • The number of files in a config directory differs between environments.
  • The configuration should be automatically composed based on what is accessible in the current classpath.
  • Configuration is evaluated from different class loaders with different configuration sources visible.

Using Numeric Values as Ordering Criteria

Many configuration systems solve this problem by applying some kind of numeric value, called an ordinal, to each configuration source. So a method returning the ordinal is added to ConfigSource, as illustrated below:

int getOrdinal();

Based on this ordinal value a configuration system can easily determine the order of ConfigSources. Advantages of ordinals are:

  • the concept is simple;
  • it allows comparing multiple property sources, also from different sources/frameworks/modules;
  • the ordinal value can be configured by adding config entries to the owning configuration source.

Nevertheless the concept also has its disadvantages, some of which are:

  • What should be done with multiple config sources that provide the same ordinal value?
  • What is the exact semantic of an ordinal? Is a config source with a ten times bigger ordinal ten times more significant?
  • What to do if an order based on ordinals is not sufficient for your requirements, meaning your configuration assembly policy requires more complex ordering criteria?
  • How to add a config source between subsequent ordinals n and n+1?
  • When using builders to define your configuration, ordinals are confusing, since they are not used if a Configuration is built programmatically.

Given that, let us discuss whether the concept of an ordinal is really needed.

Using Meta-Configuration to define the Config Sources and their corresponding order

In the sandbox part of Apache Tamaya there is a metamodel module, which allows defining a configuration system using an XML formatted meta-configuration file, similar to the one below (in fact there are many more features available like immutability, property-source-level filtering, auto-refresh etc.):

<configuration>
  <property-sources>
    <source type="sys-properties" />
    <source type="env-properties" />
    <source type="etcd" />
  </property-sources>
</configuration>

This is very convenient, easy to understand and does not need any ordinal values at all, since the XML structure defines the ordering through the order of the child elements of the property-sources element.

Using a Builder to define the Config Sources and their corresponding order

As part of the Apache Tamaya core API it is possible to acquire a ConfigurationContextBuilder, which allows configuring the configuration system. The resulting ConfigurationContext can then easily be converted into a Configuration instance:

ConfigurationContextBuilder builder = ConfigurationProvider
                                 .getConfigurationContextBuilder();
builder.addPropertySources(
   new SystemPropsPropertySource(),
   new EnvironmentPropsPropertySource(),
   new EtcdPropertySource());
Configuration config = ConfigurationProvider.createConfiguration(builder.build());

Supporting Ordinals

If someone still wants to use an ordinal based approach, the ordinal can be accessed as a normal property of a ConfigSource:

String ordinal = propertySource.getProperty("_ordinal");

Since ordinals may not be the one and only property defining the ordering of ConfigSources, Tamaya’s ConfigurationContextBuilder supports passing a Comparator to be used for sorting the ConfigSources. As a consequence we only have to add one line of code (the sortPropertySources call) to enable ordinal based sorting:

ConfigurationContextBuilder builder = ConfigurationProvider
                                 .createConfigurationContextBuilder();
builder.addPropertySources(
   new SystemPropsPropertySource(),
   new EnvironmentPropsPropertySource(),
   new EtcdPropertySource()
);
builder.sortPropertySources(
                   DefaultPropertySourceComparator.getInstance());
Configuration config = ConfigurationProvider.
                   createConfiguration(builder.build());
If we want to apply a custom ordering we simply pass our own Comparator instance:
ConfigurationContextBuilder builder = ConfigurationProvider
                                 .createConfigurationContextBuilder();
builder.addPropertySources(
   new SystemPropsPropertySource(),
   new EnvironmentPropsPropertySource(),
   new EtcdPropertySource()
);
builder.sortPropertySources(MyCustomComparator.getInstance());
Configuration config = ConfigurationProvider.
                   createConfiguration(builder.build());
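To illustrate what such a custom Comparator could look like, here is a hedged sketch that orders sources by their "_ordinal" property. Note that PropertySource below is a minimal stand-in for Tamaya's real org.apache.tamaya.spi.PropertySource interface, and OrdinalComparator and MapPropertySource are illustrative names, not part of the Tamaya API:

```java
import java.util.Comparator;
import java.util.Map;

// Minimal stand-in for Tamaya's PropertySource SPI (the real interface
// in org.apache.tamaya.spi has additional methods).
interface PropertySource {
    String getName();
    String getProperty(String key);
}

// Simple map-backed property source, for demonstration only.
class MapPropertySource implements PropertySource {
    private final String name;
    private final Map<String, String> props;

    MapPropertySource(String name, Map<String, String> props) {
        this.name = name;
        this.props = props;
    }

    public String getName() { return name; }
    public String getProperty(String key) { return props.get(key); }
}

// Orders property sources by their "_ordinal" property; sources without
// a (numeric) ordinal default to 0. Higher ordinals sort last, i.e. they
// are the more significant sources.
class OrdinalComparator implements Comparator<PropertySource> {

    private static int ordinalOf(PropertySource ps) {
        String value = ps.getProperty("_ordinal");
        if (value == null) return 0;
        try {
            return Integer.parseInt(value.trim());
        } catch (NumberFormatException e) {
            return 0;
        }
    }

    @Override
    public int compare(PropertySource a, PropertySource b) {
        return Integer.compare(ordinalOf(a), ordinalOf(b));
    }
}
```

With something like this in place, passing the comparator to sortPropertySources establishes the familiar ordinal-based ordering without the ordinal concept leaking into the core API.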
This way the API is simple, but still flexible, and not polluted by an ordinal concept.

Winding up…

Looking at all the disadvantages, and especially at the function f, which renders a set of configuration sources into a Configuration, it is IMO not a good idea to use ordinals as part of the API of a configuration system. Instead:
  • let the configuration system provide a Builder-based approach to build a Configuration, which allows you to define the ordering
    • freely, by adding property sources programmatically in the desired order (the Builder must not change the order of the ConfigSources added), or
    • by applying an arbitrary Comparator to establish the ordering of the configuration sources.
  • let the configuration system allow users to define the function f, which maps the configuration sources to the final configuration. This way any kind of mapping can be implemented by users, and the system does not impose concepts that may be inappropriate for some use cases.
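As a minimal sketch of such a user-defined function f, consider folding an ordered list of property-source snapshots into a single configuration map, where later (more significant) sources override earlier ones. ConfigCombiner is a hypothetical name for illustration; Tamaya's actual extension point for this looks different:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical combination function f: folds an ordered list of
// property-source snapshots (maps) into one configuration map.
// Sources later in the list are more significant and override
// values contributed by earlier sources.
final class ConfigCombiner {

    static Map<String, String> combine(List<Map<String, String>> orderedSources) {
        Map<String, String> result = new HashMap<>();
        for (Map<String, String> source : orderedSources) {
            result.putAll(source); // later sources win
        }
        return result;
    }
}
```

Because f is just a function over an ordered list, users could equally plug in first-wins semantics, key filtering or value transformation without the API ever mentioning ordinals.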

Feel free to add your comments below.


Embracing Microservices…

There are quite some discussions around microservices and monoliths. Some people think microservices (MS) are a new version of SOA; others think MS are the only way to build modern software systems at all. So what are microservices all about? Especially when looking at the reality of daily business, there is in most cases no simple way to build your IT systems around microservices. But microservices can give you hints about how to improve your IT landscape's architecture in the future, and I think many of the ideas are worth embracing. But before doing so, let us have a quick look at what microservices are. One definition I found quite handy is the one from Martin Fowler, which I will try to summarize shortly here:

  • Componentization via Services: hereby a component can be defined as a unit of software that is independently replaceable and upgradeable. Services emphasize the fact that we talk about out-of-process communication (in contrast to in-process communication as with libraries). Services also imply a clear API boundary, because accessing internal state is basically not possible. This is very similar to what SOA tried to achieve, but in the case of MS connectivity is achieved mainly using lean REST-based communication protocols or asynchronous messaging. As a result microservices enforce clear module boundaries, allowing us to individually develop, deploy and scale things. We can have different teams and also easily different programming languages, libraries and even runtime containers. Nevertheless API design is still a crucial discipline. Badly designed APIs still leak into components.
  • Organized around Business Capabilities: IMO this is one of the most disruptive aspects when thinking of microservices. Basically, microservices are designed around business capabilities. This is especially challenging when looking at traditional IT organizations, where UI development, middleware platforms, databases and OS/network operations are separated into different teams (or even organizational units). Also think of Conway's law, which says:

    Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.
    — Melvin Conway, 1967

    As a consequence, to succeed with microservices, you might have to restructure your enterprise IT organization as well as parts of your business teams correspondingly.

  • Products not Projects: This too does not match traditional IT organizations. They are used to organizing everything in projects, which are completed within some time frame and then abandoned. Teams leave the project when it is finished, and operations are handed over to some operations team. Microservices, in contrast, are managed as products, typically with a long-term lifecycle. As long as the corresponding business capability is required, there is a dedicated team dealing with it.
  • Smart endpoints and dumb pipes: Complexity should be located at the endpoints of communication. Communication between components is done directly, without any intermediaries. So there is no complex routing, no intermediary protocol or data conversion or similar happening in between. Consequently there is no enterprise service bus middleware, only dumb pipes and queues.
  • Decentralized Governance: Many enterprise architects tend to standardize everything. Their idea is to avoid complexity. Unfortunately they sometimes achieve exactly the contrary. This is because human beings are not robots; they think. And if standards do not match the problem, people will find ways around them. They will inevitably think of alternatives, and they will implement them (but they will not tell you 😉 ). It is also well known that influence naturally gets weaker with increased organizational and/or physical distance. So, simply said, centralized governance does not really match microservices (to some extent). With microservices you offer choice. Each microservice team can basically decide which programming language, which database and which runtime platform they want to use.
  • Decentralized Data Management: As well as decentralizing governance and tooling, microservices also decentralize data models and storage decisions. Monolithic applications typically prefer a single logical database for persistent data.
  • Infrastructure Automation: Since with microservices we split systems along business capabilities, we end up with many more runtime components that interact with each other. These components can hopefully, also due to their reduced complexity, be automatically tested, deployed and scaled. Fortunately the experience of companies such as Google, Netflix and Amazon has evolved into some common technologies that implement the mechanisms needed. These technologies are preconditions before you should think about jumping on the microservice train.
  • Design for failure: When splitting up systems into multiple independent services, we introduce network communication. Whereas in-process calls within a monolith rarely fail (excluding cases where the whole system goes down as one), network calls may fail, time out, be slow, be incomplete or even never return! This means system failures will inevitably occur! But this is nothing microservice-specific; all modern distributed software systems are affected. Therefore every architect should take a deep look into resilient system design and learn how to embrace failures (something I will probably cover in another post).
  • Evolutionary Design: the idea here is that the enforced decoupling through services and APIs allows for easier evolution of components. The reduced complexity allows you to deliver new features fast and in a reliable way.
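One building block of the design-for-failure point above can be sketched as a bounded retry around a remote call. This is a deliberately simplified illustration; production systems would combine it with timeouts, backoff and circuit breakers (typically via a resilience library), and the Retry class is just an illustrative name, not a reference to any concrete framework:

```java
import java.util.concurrent.Callable;

// Minimal design-for-failure building block: retry a possibly failing
// remote call a bounded number of times before giving up.
final class Retry {

    static <T> T withRetries(Callable<T> call, int maxAttempts) throws Exception {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be >= 1");
        }
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e; // remember the failure and try again
            }
        }
        throw last; // all attempts failed
    }
}
```

The point is not the few lines of code, but that every cross-service call needs an explicit failure strategy like this, instead of assuming the reliability of an in-process call.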

Summarizing, with microservices you get some awesome advantages:

  • you have a clear and enforceable component model
  • you have reduced complexity because of clear component boundaries
  • your services are simpler, since they are dealing typically with only one concern
  • you have a clear mapping of business capabilities, services and teams
  • you can use the persistence technologies that best match your requirements
  • you can have independently operating teams (time-wise, geographic, organizational unit etc)
  • you can react quickly on changes and easily introduce new features and business capabilities

Looking at the features above, the majority are not new. This is what you get if you do componentization and isolation right. And that is a good thing, because it means you can achieve these benefits without going the full microservice way.

But there are more features, which may not be that easy to achieve:

  • you can independently deploy, update and add new features (or fix bugs) to microservices at a fine-grained level
  • you can autoscale as needed, for each business capability
  • you have clear responsibilities for a product's stability, but you must adopt a real DevOps attitude
  • you get a system that (possibly) embraces failures

But here too, looking at the bullets in detail, you don't have to write that logic completely on your own. Transparent service location is implemented by multiple OSS solutions ready for production. But my recommendation is to go one step further and use a common cloud PaaS platform that provides automatic service location, scalability, isolation and DevOps functionality, such as automatic deployments, component monitoring etc.

Given that, we may then, as the ultimate step, also consider aspects of resilient system design. Many of these aspects are also targeted by microservices. Nevertheless, for your microservice landscape to be effectively resilient, you must also decouple communications. But this too is relatively easy to achieve. You can use a platform that comes with a resilient design out of the box, such as Typesafe's Akka platform. Or you can add inbound and outbound message queues for your microservices as a default offering in your PaaS solution and rewrite your services to use asynchronous communication instead of synchronous calls.

So this is the solution we have been looking for for decades. Maybe. But…

  • your current IT and business organization may look completely different
  • development and operations are separated in different business organizations with competing objectives
  • your monoliths are lacking good encapsulation and API design
  • your technical debt is overwhelming
  • you probably lack the IT know-how for dealing with MS, PaaS, IaaS etc.
  • your business capabilities are not well defined or distributed
  • you have huge databases with lots of business code operating on it
  • you do not have efficient IT tooling in place
  • you have complex processes, budgeting issues or no high level management support for introducing microservices
  • you run lots of legacy host systems

The list could be continued. Summarizing, I tend to say that in reality many (if not most) companies are simply not microservices-ready. Nevertheless, and this is my point, I think it is inevitable to adopt the ideas and principles of microservices in the long term:

  1. Educate your people how components should be designed. Enable them to define simple and comprehensive APIs (regardless of the language or technology). Teach them about dependencies, cohesion, coupling and reuse.
  2. Enable your developers to implement modular software components. Support them in using the platforms that best match their problems. Don't force them all to use the same technology; let them tailor their solutions to the requirements they must implement (I don't say here that they should not align with architecture).
  3. Enable IaaS and PaaS cloud services (it is a separate discussion whether you want to run your cloud publicly, privately or hybrid). This also implies support for different runtime containers; e.g. in the Java world Jetty, Tomcat, a full-blown application server, OSGI containers, Spring Boot, Wildfly Swarm and Typesafe Akka are all legitimate solutions (decide on a good selection to be offered). And most importantly, don't try to implement all of this yourself. Evaluate cloud solutions or container products and integrate them with your enterprise infrastructure (if your infrastructure lacks some needed capabilities, improve your infrastructure instead of reducing the service level of your IaaS/PaaS offering).
  4. Educate people to embrace failures. Where useful, also start discussions with your business stakeholders about how to deal with failures.

For IT departments, thinking about how to improve quality and speed based on a cloud architecture is the first step to take. Cloud solutions mostly also ship with working deployment pipelines and other useful services, which companies long had to manage individually, and all of this comes with high quality. These solutions enable your development departments to regain efficiency and to deliver new features fast, at scale and in a unified way. They also allow you to improve your IT skills for building systems using resilient system design principles. But the good news is you don't need a big-bang approach: add this kind of infrastructure to your IT portfolio and start gaining experience by migrating projects onto it.

A working PaaS layer and automated build and deployment processes are IMO preconditions to be met before you start moving towards a product/business-capability/MS-oriented operational model. And for sure it is a long-term effort, requiring good planning, high management attention and long-term budgeting. But looking at the technological evolution currently happening, I think there is no way around it, because the efficiency gains of competitors following this path will sooner or later erode your business competitiveness, so start adopting things now!

If you have any kind of remarks, criticism or interesting links to add, use the comment function to share it with the community.
