Modular Golang Series — why modular?

devops terminal · Oct 12, 2020 · 8 min read

This is the 1st blog of the series — Modular Golang; hence it will focus on some discussion of golang as a language and the challenges of developing apps with it. The coming blogs in this series will focus on use-cases and scenarios showing how to design an app with a modular approach. Since app development is an on-going process (where new ideas and methodologies pop out nearly every day), the points discussed in this series might still be valid after a few months / years; however, do bear in mind that outdated ideas and methodologies should be replaced when needed. It is not just the app’s design itself that is modular; we also think in a modular way :)

golang in brief

Golang is a superb language in terms of speed and efficiency. It is much easier to develop apps with than the traditional C language in several ways, including the following:

  • everything is a function (you can say bye to directives or inline functions~)
  • a rich SDK (the standard library) providing coverage / support for the main use cases (e.g. network and http packages are available by default, hence no extra installation of libraries is necessary); see the sketch after this list
  • missing features can usually be added by integrating a third-party package / library (check out https://pkg.go.dev/ for packages covering your needs)
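
As a quick illustration of that built-in coverage, here is a minimal sketch of an HTTP endpoint using only the standard net/http package; the route and port are arbitrary choices for this example:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// a tiny HTTP server built purely with the standard library;
// no third-party framework or extra installation required
func main() {
	http.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) {
		// reply with a plain-text body
		fmt.Fprintln(w, "pong")
	})

	log.Println("listening on :8080 ...")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```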

Golang also has additional advantages, as below:

  • compiled to a native binary, which speeds up execution (no separate runtime environment needs to be installed)
  • in fact, the golang compiler can cross-compile binaries for different OS / CPU architectures. It is possible to code once and compile to MS Windows, Linux and Mac binaries by setting the GOOS (and GOARCH) environment variables.
  • fewer occasions to handle object pointer(s). There is still a big chance for us to touch pointers eventually; however, in most cases we would be using an object reference directly instead.
  • golang is a statically typed language; hence in most cases our code contains typed variables. Yes, there are scenarios in which a dynamically typed value shows up at runtime, and such values are treated as “unsafe”. You can imagine that casting a dynamic value into a given type / struct is indeed an “unsafe” operation (see the type assertion sketch after this list)… Being strict about types during development removes the pressure of committing mistakes at an early stage (you will know if something is wrong during compilation) and improves our app’s stability (fewer surprises, simply~ unless a lot of logic depends on handling dynamically typed values at runtime).
  • golang also has its own formatter. If necessary, you can re-format your code by applying the formatter (gofmt). The rationale is that every developer has his/her own coding style, which makes the code’s readability and maintainability drop; if a common formatter converts the code into a style-neutral format, then the above issues might be solved. At the moment, applying the golang formatter is not a MUST, so it is really up to you.
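
To illustrate the “unsafe” handling of dynamically typed values mentioned above, here is a minimal sketch using an interface{} value and the “comma ok” form of a type assertion; the variable names are invented for this example:

```go
package main

import "fmt"

func main() {
	// a dynamically typed value: its concrete type is only known at runtime
	var raw interface{} = "hello modular golang"

	// the "comma ok" form of a type assertion fails safely
	// instead of panicking when the type does not match
	if s, ok := raw.(string); ok {
		fmt.Println("got a string:", s)
	} else {
		fmt.Println("not a string, handle the unexpected type here")
	}

	// an unchecked assertion like raw.(int) would panic at this point,
	// which is why such dynamic values deserve the "unsafe" label
}
```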

Development challenges

Now that we know the characteristics of golang, let’s take a look at the challenges of using it for app development.

the “main” package…

When a project starts, we usually code everything under 1 single package named “main”. The reason is that package main contains a source file with the “main()” function, and main() is the entry point of the whole app. As a no-brainer approach, we usually just create a source file under the main package and start coding the logic; and of course compile it and get the app runnable~
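
For reference, this is the minimal shape of such a single-package app; the printed message is just a placeholder:

```go
// everything lives in the one and only "main" package
package main

import "fmt"

// main() is the entry point the Go toolchain looks for
func main() {
	fmt.Println("the whole app starts (and often stays) here")
}
```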

This is actually fine for developing a tiny and simple app. However, when we are building a feature-rich system, this approach starts to give you a headache:

  • lack of packaging and structuring of code makes it hard to maintain eventually. In the good old days of C language development, we usually adopted a similar approach: 1 big folder containing all the source files handling every feature of the system. You might ask why this worked in the past but not nowadays? Think about it: in the past, the features required for a system were far fewer than what we need today (e.g. we didn’t expect to provide http request handling, since most systems were desktop apps in that era and apps might not even communicate with each other; the only common place to share data between users or apps was the database / datastore engine). That is why today (and in the future) we should structure the code into different packages / modules for ease of expandability and maintainability.
  • a 1-package development approach usually works for a small team of developers (fewer than 5). If we just need to deal with code changes among 5 developers, that might still be… ok (setting aside conflicts / brainstorms during the design stage). Today many companies have broken a huge system into parts / modules, and each part is handled by a different team of developers (maybe 5 ~ 10 developers)… imagine 5 parts, 50 developers in total, working on 1 single package… how chaotic would it be??? A more reasonable way to solve this is by introducing a separate package for each logical part of the system (e.g. accounting package — responsible for accounting related features, sales package — responsible for sales features, core package — responsible for core and common features to keep the system running); see the layout sketch after this list.
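
As a rough sketch of that split, and assuming a hypothetical module named example.com/myapp, the project could be laid out as described in the comments below (all names are invented for illustration):

```go
// A hypothetical project layout:
//
//   myapp/
//     go.mod                      // module example.com/myapp
//     main.go                     // package main: wires everything together
//     accounting/accounting.go    // package accounting: accounting features
//     sales/sales.go              // package sales: sales features
//     core/core.go                // package core: shared / common features
//
// main.go would then simply import and connect the packages, e.g.:
//
//   import (
//       "example.com/myapp/accounting"
//       "example.com/myapp/core"
//       "example.com/myapp/sales"
//   )
package main

import "fmt"

func main() {
	fmt.Println("each team owns its own package instead of one giant main")
}
```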

the public API…

It is pretty common nowadays for systems and products to provide features through a public API interface. One of the benefits is the power of control it hands to users. Take an example: a social media platform provides a way to access all posts created by a user. This feature looks boring, but it becomes handy when you need to pull back all your posts’ contents and convert them into a diary-like presentation that is not available on the social media platform itself. Another example is a system providing stock information: reading raw numbers is never as valuable as converting them into charts, and as you can guess, such a feature requires your own implementation — you would need the data through the API.

There are still lots of benefits and reasons for setting up a public API interface; however, a serious concern comes along as well… As a system gains more and more popularity, with more and more features available, so grows its complexity. Several issues with a growing codebase might be the following:

  • the system was built without much design in the beginning; as the codebase gets bigger and bigger, it becomes hard to maintain. Some of these use-cases could be solved by applying simple design patterns.
  • the system’s internal code requires design, and so does the public API interface. The design of the public API interface should be flexible and consistent. Flexibility is crucial because we need to make sure the API specification (e.g. number of parameters, return type) won’t need to change when new features are supported in the future. For example, we could add an optional Map / Object parameter to the public API, so that additional parameters (e.g. an underlying protocol now requires a new mandatory parameter) in future releases could still be handled without breaking the specification; see the sketch after this list. As for consistency, think about how different APIs should look: the naming convention should be similar, the parameter lists should look familiar, and the results of the APIs should be consistent in type and structure as well.
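
A minimal Go sketch of that “optional Map parameter” idea; the struct, field and function names are invented for illustration and do not come from any real API:

```go
package main

import "fmt"

// SendEmailRequest is a hypothetical public API payload.
// Options is the optional map parameter: new, protocol-specific
// parameters can be added later without breaking the specification.
type SendEmailRequest struct {
	To      string            `json:"to"`
	Subject string            `json:"subject"`
	Body    string            `json:"body"`
	Options map[string]string `json:"options,omitempty"`
}

func sendEmail(req SendEmailRequest) error {
	// a later release could start reading a new optional key here,
	// e.g. req.Options["smtp_version"], while old callers keep working
	fmt.Printf("sending %q to %s (options: %v)\n", req.Subject, req.To, req.Options)
	return nil
}

func main() {
	// old-style caller: no options at all
	_ = sendEmail(SendEmailRequest{To: "a@example.com", Subject: "hi", Body: "hello"})

	// new-style caller: passes an extra parameter via the options map
	_ = sendEmail(SendEmailRequest{
		To:      "b@example.com",
		Subject: "hi again",
		Body:    "hello again",
		Options: map[string]string{"smtp_version": "3.x"},
	})
}
```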

Therefore, the design process is inevitable for a system or a serious app; we would dive into the process in the coming blogs.

microservices…

We have probably all heard of “microservices”~ Maybe you haven’t, but trust me, if you have been working as a service-tier developer, in some sense you have already developed some microservices without knowing it :)))

A simple sentence to describe microservices: many apps / servers run in a collaborative way, in which each of them provides a specific service / operation to fulfil part of an API call.

Let’s have an example: a user wants to send an email with some photos attached. In the old days, we would have an all-in-one server providing all features, including the send-email action and many more. In microservices terms, however, the send-email action would be broken into several services instead:

  • htmlEmailFormatterService: a service for encoding the given content in an html format (validation and sanitisation of the html content).
  • emailAttachmentService: a service for handling the email attachment process. Note that emails do not necessarily need an attachment every time, hence this service is optional.
  • smtpConnectorService: a service for making sure the smtp protocol works for email sending.
  • sendEmailService: acting as a connector service linking up the different microservices listed above to send an email (see the sketch after this list).
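
Here is a rough Go sketch of how such an orchestration might look; the interfaces, fake implementations and method names are all invented for illustration, with no real smtp or html handling behind them:

```go
package main

import "fmt"

// hypothetical contracts for the services described above
type HTMLFormatter interface {
	Format(raw string) (string, error) // validate / sanitise and wrap as html
}

type AttachmentHandler interface {
	Attach(files [][]byte) ([][]byte, error) // optional step
}

type SMTPConnector interface {
	Send(to, htmlBody string, attachments [][]byte) error
}

// sendEmailService acts as the connector, linking the other services together
type sendEmailService struct {
	formatter   HTMLFormatter
	attachments AttachmentHandler
	smtp        SMTPConnector
}

func (s sendEmailService) SendEmail(to, raw string, files [][]byte) error {
	body, err := s.formatter.Format(raw)
	if err != nil {
		return fmt.Errorf("format email: %w", err)
	}
	var atts [][]byte
	if len(files) > 0 && s.attachments != nil {
		if atts, err = s.attachments.Attach(files); err != nil {
			return fmt.Errorf("attach files: %w", err)
		}
	}
	return s.smtp.Send(to, body, atts)
}

// fake, in-process implementations just to make the sketch runnable
type fakeFormatter struct{}

func (fakeFormatter) Format(raw string) (string, error) { return "<p>" + raw + "</p>", nil }

type fakeSMTP struct{}

func (fakeSMTP) Send(to, body string, atts [][]byte) error {
	fmt.Printf("sending to %s: %s (%d attachments)\n", to, body, len(atts))
	return nil
}

func main() {
	svc := sendEmailService{formatter: fakeFormatter{}, smtp: fakeSMTP{}}
	_ = svc.SendEmail("user@example.com", "hello with photos attached", nil)
}
```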

Holy… why do we break up a simple send-email function into 3 or 4 microservices?? Looks like a lot of work, doesn’t it? Again, maintainability and flexibility are the answer :) ~

Why is it more flexible? Suppose there is suddenly a change in the smtp protocol (e.g. smtp 3.x is released). In the old days everything was bound to the all-in-one server, so the corresponding code change would be applied to that 1-and-only-1 codebase as well. Remember that software releases are not that straightforward; we can’t just update the smtp handling and suddenly tell the public we have a minor version release~ Since the all-in-one server covers a lot of use-cases and types of users, a minor release would trigger a series of testing not just on the smtp module but on the other modules as well. Not flexible at all~ From the microservices point of view, however, updating the smtp handling would only trigger code change and testing on the smtpConnectorService, leaving the other parts of the process intact (as long as the API specification does not change). Hence the overall process becomes much easier.

What about maintainability? In the old days, updating a piece of software required a shutdown / restart. Think about it… every single feature is bound to the all-in-one server, so even upgrading a simple feature needs a restart. If services are supposed to stay available during the upgrade, a procedure to fail over from the primary to the secondary server must be in place beforehand; and when the upgrade is done, it is time to fail back to the previous primary server. Whew… this cycle might take you a day to finish, not to mention the documentation and procedure rehearsals run earlier… From the microservice point of view, an upgrade can be easier since every service is itself a manageable app. Simply upgrade that tiny app / server and restart it, all done~ If Docker is involved, the upgrade could be even easier: we prepare a new Docker image with the latest version of that app / server, configure it correctly, take down the current version’s container, and finally spin up the latest one~

If we expect to gain the most out of microservices’ benefits, then we must apply some design thinking to the public API interface as well as to the code internals mentioned in the previous point. Since every part of the action is now an app in itself~ we of course must provide a consistent and flexible API layer to prevent confusion and unnecessary code changes in other apps’ interfacing code.

Modular Design

The discussion above has illustrated why design is important. True, we all start coding in the most natural way, and things usually go well until the codebase grows dramatically. Do remember 1 important point: refactor the structure of the code when necessary; it is NEVER too late to make corrections if you are serious about the development. Indeed, a big refactoring is painful (especially when you are working with a big team), hence it is suggested that we spend some time on design at the very beginning of a feature.

In the coming blog, we will start discussing in more detail how an imaginary Fan Club system would be designed. Stay Tuned :)
