Schuberg Philis

Techtalk

We all have in common that we want to come up with solutions for our customers and then help to implement them. That's why we interviewed some of our colleagues: what was 2018 like for them, and what are they looking forward to in the year ahead?

Sebastiaan la Fleur

Data engineering: starting a data platform

What was the most compelling tech you worked with in 2018?

After starting in my team last year, I immediately began writing one of the core Lambdas for the ingestion part of the data platform for our customer. The data platform is fully built on AWS, which allows me to leverage all the tools and services AWS has to offer. The ingestion side of a data platform performs platform-level analysis and controls whether data is accepted. Its primary purpose is to securely store all the data received from the sources, so it must be very robust and trusted. We have a multi-staged ingestion pattern where each stage consists of a landing zone and a persistent S3 bucket. The Lambda I wrote checks whether a file is accepted when it enters the landing zone. If it is, it's moved to the persistent environment.
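
To give a flavor of this, here's a minimal sketch of what such a landing-zone Lambda could look like, assuming an S3 event trigger on the landing-zone bucket. The bucket name and the acceptance check are hypothetical placeholders, not the customer's actual rules.

```python
# Minimal landing-zone Lambda sketch: validate an incoming file and, if
# accepted, move it from the landing zone to the persistent bucket.
import boto3

s3 = boto3.client("s3")
PERSISTENT_BUCKET = "example-persistent-bucket"  # hypothetical name


def is_accepted(bucket: str, key: str) -> bool:
    """Platform-level check; the real rules depend on the data contract."""
    head = s3.head_object(Bucket=bucket, Key=key)
    return head["ContentLength"] > 0  # placeholder acceptance rule


def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        if is_accepted(bucket, key):
            # Accepted: copy to the persistent environment, then clean up
            # the landing zone.
            s3.copy_object(
                Bucket=PERSISTENT_BUCKET,
                Key=key,
                CopySource={"Bucket": bucket, "Key": key},
            )
            s3.delete_object(Bucket=bucket, Key=key)
```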

My second contribution consisted of helping to build a pipeline to standardize CSV and proprietary-formatted flat files into Parquet. Parquet is our chosen standard file format, and it also brings efficient storage and good compression. We first tried to convert the proprietary format with Glue, because it's serverless and provides out-of-the-box scalability. However, Glue turned out to be unsuitable for this transformation, so we opted for a single Lambda execution per file instead. This meant compromising on scalability, as the execution time of each Lambda invocation is tied directly to the file size. The solution doesn't scale infinitely, given that Lambda has a limited execution time.
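
Here's a sketch of the per-file conversion, assuming the function is triggered per file and that pandas and pyarrow are packaged with it. The bucket names are hypothetical, and parsing of the proprietary format is omitted.

```python
# Per-file CSV-to-Parquet Lambda sketch.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")
PARQUET_BUCKET = "example-parquet-bucket"  # hypothetical name


def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # One Lambda invocation converts exactly one file.
        obj = s3.get_object(Bucket=bucket, Key=key)
        df = pd.read_csv(obj["Body"])
        buf = io.BytesIO()
        df.to_parquet(buf, engine="pyarrow", index=False)
        s3.put_object(
            Bucket=PARQUET_BUCKET,
            Key=key.rsplit(".", 1)[0] + ".parquet",
            Body=buf.getvalue(),
        )
```

Because the whole file is converted in one invocation, run time grows with file size, which is exactly the scalability limit mentioned above: a single Lambda execution is capped at 15 minutes.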

Most of the excitement, and the feeling of achievement, comes from conquering the nitty-gritty details of all the AWS services. For example, SNS provides an 'at least once' delivery guarantee, which means a message can be delivered more than once, so a file could end up being ingested twice. That's something we don't want! Quality requires attention. What's more, it's energizing to design a solution that scales to more than 200 data streams.
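
One common way to defuse the at-least-once problem is to make ingestion idempotent. Here's a minimal sketch that records each SNS message ID with a conditional write to DynamoDB, so a redelivered message is ingested only once. The table name and the ingestion stub are hypothetical.

```python
# Idempotent SNS-triggered ingestion sketch using a DynamoDB dedup table.
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")
DEDUP_TABLE = "example-ingestion-dedup"  # hypothetical table name


def seen_before(message_id: str) -> bool:
    try:
        # Conditional write: succeeds only for the first delivery.
        dynamodb.put_item(
            TableName=DEDUP_TABLE,
            Item={"MessageId": {"S": message_id}},
            ConditionExpression="attribute_not_exists(MessageId)",
        )
        return False
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return True  # duplicate delivery: skip ingestion
        raise


def ingest(message: str) -> None:
    print("ingesting", message)  # placeholder for the real ingestion logic


def handler(event, context):
    for record in event["Records"]:
        if not seen_before(record["Sns"]["MessageId"]):
            ingest(record["Sns"]["Message"])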

What's next?

The customer sees value in the data platform, wants to be ahead of the curve, and can't wait to start using the platform company-wide. We need to extend the platform in AWS to cater for all the needs of our customer. Thanks to our experiences, our team has helped to select a set of tools which will be at the core of the data platform. AWS provides significant tooling, but it's still lacking in areas such as scalable data governance and data management. I'm looking forward to finding a solution, together with my team and the customer.

Israel Roldán

Front-end development

What was the most exciting tech you worked with in 2018?

I was really excited to see how we came closer to our customers' customers and brought them more value. One of the most exciting projects I worked on in 2018 was the implementation of a next-generation, real-time, paperless interface for the creation, filing, and validation of digital certificates. This was part of one of our customers' primary-process digitalization efforts. I once heard a ship's captain say: "You have the biggest port in Europe and you still work with paper?" Well, not anymore. This rewarding project was carried out in collaboration with an external partner; we handled the infrastructure and front-end. Both our customer and their end-users were happy. We made a positive impact by providing a better and faster experience.

I'm very excited that we're paying a lot of attention to the experience of our mission-critical systems. I had the opportunity to work with fantastic tools like GraphQL, Nuxt.js, Vue.js, TypeScript, and serverless functions. I've also been re-architecting legacy applications and replacing them with Cloud-Native apps. These technologies are exciting because they can be adopted progressively, letting us change or replace systems in small steps without having to rewrite them completely. It's great for customers, as they know we're not removing their mission-critical systems in one go. What's more, it matches our Agile way of working.
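
That small-steps approach is essentially the strangler-fig pattern: a thin routing layer sends already-migrated routes to the new app, while everything else still hits the legacy system. Here's a minimal sketch, in Python for brevity (the actual stack is Vue.js/Nuxt.js/TypeScript); the upstream URLs and route list are hypothetical.

```python
# Strangler-fig routing shim: migrate one slice of routes at a time.
import requests
from flask import Flask, Response, request

app = Flask(__name__)

LEGACY_URL = "http://legacy.internal"       # hypothetical legacy monolith
CLOUD_NATIVE_URL = "http://new.internal"    # hypothetical Cloud-Native app
MIGRATED_PREFIXES = ("/certificates",)      # slices already re-architected


@app.route("/<path:path>", methods=["GET", "POST"])
def route(path):
    # Migrated paths go to the new app; everything else stays on the
    # legacy system, so it can be replaced piece by piece.
    base = CLOUD_NATIVE_URL if request.path.startswith(MIGRATED_PREFIXES) else LEGACY_URL
    headers = {k: v for k, v in request.headers if k.lower() != "host"}
    upstream = requests.request(
        request.method, base + request.path,
        data=request.get_data(), headers=headers,
    )
    return Response(upstream.content, status=upstream.status_code)
```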

What's next?

I've been hearing and reading a lot about DevOps 2.0, and it sounds like just the kind of thing I'm most passionate about: bringing business goals and user experience into the mix, and working more closely with partners, customers, and Customer Teams. There seems to be an evolution toward mission-critical experience, which goes beyond amazing technical accomplishments. The aim is to provide a 100% experience.

Personally, I'm also very interested in how front-end oriented tools provided by cloud providers, such as AWS Amplify and Google Firebase, will change the way Cloud-Native systems are developed.

Such tools will enable front-end developers to easily integrate more cloud services into their applications from the get-go, which will be a game changer. And did you know that we're quite strong in front-end as well? Most people don't. We're also excited to be more active in the community: in 2019 we sponsored the biggest Vue.js conference in Europe, and we will arrange a front-end meetup later in the year. More to come.

Yvo van Doorn

Serverless and the power of the Public Cloud

What was the most impressive tech you worked with in 2018?

I'm impressed by the rapid developments in cloud services, and what I'm most excited about is the Microsoft Azure ecosystem, especially their work on Azure Functions and Azure DevOps. It's exciting to see software as a service being used to provide solutions. Toolchains are very complicated to create, but with a pragmatic approach these SaaS solutions make it much easier to build and release software. The speed of software delivery has grown exponentially, which gives us the ability to deploy new solutions many times a day. We no longer need to build complex infrastructures that require our SaaS Team to invest many hours in building and maintaining software. The ability to deliver new features to our customers this quickly is incredible: they get business solutions faster, as we can release features and fix bugs faster. The result is delighted and eager customers, so our new challenge will be managing customers' expectations.

Mid-2018, Microsoft finally released the ability to define build pipelines in YAML ("YAML Ain't Markup Language"), enabling engineers to keep the build and release definitions in a file alongside their code. When someone else needs to follow up, they can read the file and pick up the work immediately. The YAML file can be continually adjusted and improved, and Microsoft integrates this seamlessly, both into Azure and as a SaaS solution. To give an example, we built and deployed container images on Azure, which used to be a challenge because everything was manual, and Azure didn't help either. By storing our build and release processes in YAML, we can define in code how the containers should be built, where they need to be stored, how to start them in web services, and finally how to run various canary tests to confirm the availability of the new service. The team I'm part of started using the Azure Pipelines YAML feature the day it was released, and we're now on a third or fourth iteration.
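
As an illustration of that final step, here's a minimal sketch of the kind of canary test a pipeline could run after deployment: poll the freshly deployed service until it reports healthy, and fail the release otherwise. The URL and the health contract are hypothetical.

```python
# Canary smoke test: exit non-zero if the new service never becomes healthy,
# which fails the pipeline and stops the release.
import sys
import time

import requests

SERVICE_URL = "https://example-service.azurewebsites.net/health"  # hypothetical


def canary(url: str, attempts: int = 10, delay: float = 5.0) -> bool:
    for _ in range(attempts):
        try:
            if requests.get(url, timeout=5).status_code == 200:
                return True  # new container answered: release can proceed
        except requests.RequestException:
            pass  # not up yet; retry after a short delay
        time.sleep(delay)
    return False


if __name__ == "__main__":
    sys.exit(0 if canary(SERVICE_URL) else 1)
```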

What's next?

I'm very optimistic about the proliferation of Cloud-Native services, especially those built around data analytics. Customers have been collecting and storing data for years without any idea how to analyze it. Our data initiatives provide fast ingestion and analysis of mass data without the need for costly data warehouse technology or specialist knowledge. Companies are now able to analyze data faster (and at a fraction of the cost) and truly understand the data they have acquired. With the exception of public authorities, the need to run core business in a private cloud or server room is disappearing. The Public Cloud is run and managed worldwide, 24/7, by a vast team of engineers, with unsurpassed security.

We are not an infra provider, but we are excited about the Public Cloud and about designing, building, and running our customers' core business processes in a Public Cloud environment. Companies that are moving some of their workload into the Public Cloud are unlocking new techniques of data analysis and thus discovering new angles on their business. In the year ahead, we will continue to work with one of our customers that has the best engineers – people I respect – and I will be working with them on solving a challenge they couldn't solve before. We are planning to build a data lake containing many years of data about how people use critical business features. We will work with the customer on how to analyze this data, and they will be able to use it to improve their products for future users. All of this work will be done in the Public Cloud.

Otávio Fernandes

Breaking down the monolith

What was the most interesting project you worked on this year?

The most interesting work I did last year was splitting a former monolith into a more agile IT landscape based on microservices, while increasing tracing, instrumentation, and monitoring capabilities. This customer is an online bank whose interactions take place through mobile devices and web applications. About five years ago, we inherited an application called Midlayer, which enables mobile devices and web applications to communicate with the complex ecosystem of backend apps the bank employs. Over time, Midlayer became a monolith, because features kept being added as the business continued to grow. A lot of effort was put into splitting it up into more manageable components. We followed Cloud-Native principles, employing Kubernetes in the orchestrator role, so that more teams could operate in parallel and produce new applications. As a result, the platform requires less traditional operations work and the overall speed of delivery has greatly increased.

A game changer was Kafka, which acts as the systems backbone. But it's not just that: microservices operate as smaller, simplified pieces of software, making development more streamlined and easier to test. The result is a streaming-based ecosystem in which business logic is executed in near real time. Kafka is reliable and performant, and by following its design recommendations we have re-used components to perform roles in different business initiatives.
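
To make "Kafka as the backbone" concrete, here's a minimal sketch using the kafka-python client: one service publishes business events to a topic, and any number of other services consume them in near real time. The broker, topic, and group names are hypothetical.

```python
# Kafka backbone sketch: decoupled producer and consumer services.
import json

from kafka import KafkaConsumer, KafkaProducer

BROKERS = ["broker-1:9092"]        # hypothetical broker address
TOPIC = "payment-events"           # hypothetical topic

# One service publishes business events to the backbone...
producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"event": "payment-initiated", "amount": 42})
producer.flush()

# ...and other services consume them in near real time, each in its own
# consumer group, without the producer having to know about them.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    group_id="fraud-detection",    # hypothetical consumer group
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.value)           # real business logic would run here
```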

Developing a microservices-based platform has its own challenges: we have far more moving parts to manage than before. The team became heavily invested in the observability of those components; each individual instance provides metrics to a time-series database that feeds dashboards and functional alerts. The risk is that you end up with a fragmented IT ecosystem that needs orchestration. That's where Kubernetes came in; we used it to automate simple operational tasks and bring standards to the platform. All this work was designed to improve the way we co-create with the customer, and we're now moving full steam ahead toward a DevOps way of working.
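
For illustration, here's a minimal sketch of such per-instance metrics using the prometheus_client library: each microservice exposes its own metrics endpoint, which a time-series database scrapes for dashboards and alerts. The metric names are hypothetical.

```python
# Per-instance observability sketch: a counter and a latency histogram
# exposed on an HTTP endpoint for scraping.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("midlayer_requests_total", "Requests handled")
LATENCY = Histogram("midlayer_request_seconds", "Request latency")


@LATENCY.time()  # records how long each call takes
def handle_request():
    REQUESTS.inc()
    time.sleep(random.random() / 10)  # stand-in for real work


if __name__ == "__main__":
    start_http_server(8000)  # metrics served on :8000/metrics
    while True:
        handle_request()
```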

What's next?

This work will continue in 2019. Last year there was an exponential increase in the number of microservices, and that growth is forecast to continue this year. We now have a more complex environment to manage, and I would like to take advantage of some of the managed services in Public Clouds so that we can delegate some of the platform complexity and focus on business-related features. Every year new managed services become generally available, and we're keen to jump on this bandwagon. Our Cloud-Native principles make our platform a first-class citizen in any Public Cloud, which extends our capabilities. So the challenge this year will be to provide a seamless experience for development teams working on on-premises and Public Cloud platforms, and thus increase our focus on end-customer features. We will, of course, maintain our standards for uptime, reliability, data protection, security, compliance, and all the other added values that Schuberg Philis offers, such as a fixed price, experts in the lead, innovation first, and fearless learning. The year ahead will be exciting.

Daniele Bonomi

Identity management in the cloud

What is the most satisfying project you worked on in the past year?

In 2018 I worked on the problem of multiple digital identities. As a user of digital services, whether your email or your energy provider's portal, you have to log in with a different set of credentials for each. Having to manage several digital identities and credentials thus becomes a daily hassle for us all.

I used Okta, a solution that gives our customers a single, secure identity for all the services we provide, rather than having to deal with several accounts. A key principle is that we have decentralized the control of those identities. Our Customer Teams now have full control and can even delegate management rights to customers when needed.
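
Okta itself is also scriptable, which is part of what makes that delegation practical. As a minimal illustration, this sketch lists an org's users via Okta's documented REST Users API; the org URL is a hypothetical placeholder.

```python
# List Okta users via the public Users API (GET /api/v1/users),
# authenticated with an SSWS API token from the environment.
import os

import requests

OKTA_ORG = "https://example.okta.com"  # hypothetical org URL
HEADERS = {
    "Authorization": f"SSWS {os.environ['OKTA_API_TOKEN']}",
    "Accept": "application/json",
}


def list_users():
    resp = requests.get(f"{OKTA_ORG}/api/v1/users", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()


for user in list_users():
    print(user["profile"]["login"])
```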

For about eight months, I worked on a blueprint solution that really made people's work easier and now provides a foundation that can be expanded in the future when needed. Together with the SaaS provider, we will be able to plug and play the different services that we offer and have more flexibility when inventing new solutions. It's very satisfying to see that our customers are now asking to integrate more of their own systems in our identity management platform, as it clearly improves the way they work.

What's next?

As a follow-up to last year's project, I will work on providing solutions that help decentralize the control of our platforms, empowering my colleagues to move more safely and faster. For example, by performing more automated checks on our installed base to spot known issues and improve the vulnerability assessment of our infrastructure, as in the sketch below. This will enable engineers to act on the spot and shorten the feedback loop, so that assessments by auditors no longer come as a surprise.
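
Here's a minimal sketch of such an automated check, assuming the installed base can be exported as name/version pairs: flag any package that appears in a feed of known-vulnerable versions, so engineers can act before an audit. Both input files are hypothetical.

```python
# Installed-base check sketch: intersect the inventory with a feed of
# known-vulnerable package versions and report the matches.
import json


def load_pairs(path):
    with open(path) as f:
        return {(p["name"], p["version"]) for p in json.load(f)}


installed = load_pairs("installed_base.json")     # hypothetical inventory dump
vulnerable = load_pairs("known_vulnerable.json")  # hypothetical advisory feed

for name, version in sorted(installed & vulnerable):
    print(f"known issue: {name} {version}")       # act on the spot
```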