The big selling points of Go are its focus on concurrency and its simplicity. Go delivers on those concepts extremely well. There’s no doubt that the designers of Go have done a great job in that direction.
Perhaps curiously, those strengths aren’t my favourite facets of the language. They certainly rank highly, but, far and away, Go’s interfaces are what I enjoy about the language the most.
Before getting to the why, it’s probably easier to start with the what. Interfaces in Go are not unlike Java interfaces: they are constructs that define the set of methods a concrete type must implement in order to be treated as an instance of that interface.
Like Java’s interfaces, and unlike its abstract classes, Go’s interfaces cannot have method implementations attached to them.
This is what an interface definition looks like in Go, with a simple method added.
type Foo interface {
    Stringify(a, b int) string
}
There isn’t much to them. Their strength, however, becomes apparent when someone wants to implement a given interface: Go interfaces differ from those in (say) Java by being implicitly implemented.
That is, Go implementations of a given interface do not need to explicitly say which interface they are implementing, or even that they are implementing an interface at all.
type Milkshake struct{}

func (m *Milkshake) Stringify(a, b int) string {
    return fmt.Sprintf("%d is better than %d by far", a, b)
}
That’s all that’s needed to satisfy the Foo interface (well, there’s a bit more to it: satisfying the interface doesn’t mean you have to do anything in particular, just that the method names, accepted parameters, and return values (their types, order, and number) on your type match what the interface specifies).
This is why I find interfaces to be so useful in Go. The implementation does not need to be coupled in any way to the definition. There is absolutely no need for the implementation to know anything about the interface definition. It only matters when you go to use the type as a concrete implementation of the given interface.
This means that it can be a happy coincidence that a given type implements a given interface.
Further, a type can implement multiple interfaces simultaneously, with no penalty.
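To make that concrete, here’s a rough, self-contained sketch. It restates the Foo and Milkshake definitions from above, and adds a hypothetical String method purely for illustration, so that the same type can be used both as a Foo and as the standard library’s fmt.Stringer, without declaring either:
package main

import "fmt"

type Foo interface {
    Stringify(a, b int) string
}

type Milkshake struct{}

func (m *Milkshake) Stringify(a, b int) string {
    return fmt.Sprintf("%d is better than %d by far", a, b)
}

// String is a hypothetical extra method, added only to show that the same
// type can satisfy fmt.Stringer at the same time as Foo.
func (m *Milkshake) String() string {
    return "a milkshake"
}

func main() {
    var f Foo = &Milkshake{}          // satisfies Foo without ever declaring it
    var s fmt.Stringer = &Milkshake{} // and fmt.Stringer, simultaneously
    fmt.Println(f.Stringify(2, 1))
    fmt.Println(s)
}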
These strengths really come to the fore when used in conjunction with hexagonal architecture.
When a developer creates a package that deals specifically with the business logic, and applies the Dependency Inversion Principle, they ‘protect’ that business logic from depending on services by defining interfaces that the business layer calls instead.
As an aside, the use of interfaces like this allows testing to occur using ‘mocks’ - implementations of the interfaces that only exist for the purposes of testing, and only return specific information, designed to test various parts of the business logic’s code.
By using implicit interfaces, the services that provide the concrete implementation of those interfaces are not dependent upon the business logic. That is, those implementations don’t import anything from the business logic (ignoring layer traversing objects), and are not broken by changes to the interfaces (with the exception that, until they satisfy the new definition, they cannot be used as instances of that interface).
The business logic is informed of which concrete implementation is being used via techniques like Dependency Injection.
To try to make this a little clearer, I’ll talk about a slightly less abstract example.
If the business logic has the following:
package payments

import "fmt"

type Account struct {
    FirstName string
    LastName  string
    Number    string
    Bank      string
}

type Payment struct {
    Payer   Account
    Payee   Account
    Amount  int64
    Storage Datastore
    ID      int64
}

type Datastore interface {
    Create(payer, payee string, amount int64) (int64, error)
    Read(id int64) (payer, payee string, amount int64, err error)
}
func (p *Payment) SavePayment() error {
    if p.Storage == nil {
        return fmt.Errorf("no datastore defined")
    }
    if id, err := p.Storage.Create(p.Payer.Number, p.Payee.Number, p.Amount); err == nil {
        p.ID = id
    } else {
        return err
    }
    return nil
}
func (p *Payment) GetPayment() error {
    // Defensive code
    if p.ID == 0 {
        return fmt.Errorf("cannot get payment with invalid id")
    }
    if p.Storage == nil {
        return fmt.Errorf("no datastore defined")
    }
    if payer, payee, amount, err := p.Storage.Read(p.ID); err == nil {
        p.Payer.Number, p.Payee.Number, p.Amount = payer, payee, amount
    } else {
        return fmt.Errorf("read got error: %w", err)
    }
    return nil
}
Then the business logic is blissfully ignorant of how the datastore it is asked to use does anything. All the business logic ever needs to know is that it has (in this artificial case) a Create and a Read method, how to call those methods, and what to expect in return.
Now, the Software Architect on the project decides that the data should be stored in a SQL database, Postgres for our purposes. All that needs to happen is that a ‘driver’ is created that implements the Create and Read methods, storing and reading data in a Postgres DB.
Something like:
package postgres

import (
    "context"
    "fmt"
    "log"

    "github.com/jackc/pgx/v4"
)

type Postgres struct {
    conn *pgx.Conn
}
// Connection handling omitted for brevity
...
func (p *Postgres) Create(payer, payee string, amount int64) (int64, error) {
    sqlStatement := `
    INSERT INTO payments (payer, payee, amount)
    VALUES ($1, $2, $3)
    RETURNING id`
    var id int64
    err := p.conn.QueryRow(context.Background(), sqlStatement, payer, payee, amount).Scan(&id)
    if err != nil {
        // log the error
        log.Printf("Creation of payment in postgres generated %v", err)
    }
    return id, err
}
func (p *Postgres) Read(id int64) (payer, payee string, amount int64, err error) {
    err = p.conn.QueryRow(context.Background(), "select payer, payee, amount from payments where id=$1", id).Scan(&payer, &payee, &amount)
    if err != nil {
        return payer, payee, amount, fmt.Errorf("QueryRow failed: %w", err)
    }
    return payer, payee, amount, nil
}
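Wiring the two packages together then happens at the edge of the application, via dependency injection. A minimal sketch might look like the following; the module paths and the NewPostgres constructor are assumptions for illustration (the article omits connection handling), not part of the original example:
package main

import (
    "context"
    "log"

    "github.com/jackc/pgx/v4"

    "example.com/app/payments" // assumed module path for the business logic package
    "example.com/app/postgres" // assumed module path for the driver package
)

func main() {
    conn, err := pgx.Connect(context.Background(), "postgres://user:pass@localhost:5432/payments")
    if err != nil {
        log.Fatalf("unable to connect: %v", err)
    }
    defer conn.Close(context.Background())

    // Inject the Postgres driver; payments only ever sees the Datastore interface.
    // NewPostgres is a hypothetical constructor that wraps the connection.
    p := payments.Payment{
        Payer:   payments.Account{Number: "123"},
        Payee:   payments.Account{Number: "456"},
        Amount:  1000,
        Storage: postgres.NewPostgres(conn),
    }
    if err := p.SavePayment(); err != nil {
        log.Fatal(err)
    }
}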
Some time later, a decision is made to store the data in a different SQL database, MySQL, but to read from a NoSQL cache, such as Redis. This means that a ‘driver’ is created that writes to a MySQL database and reads from a Redis cache. (Some other code will be created to synchronise the data between the two stores.)
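Such a driver is outside the scope of the original example, but a purely illustrative sketch might look like this; the package name, table schema, Redis key layout, and client libraries are all assumptions:
package hybrid

import (
    "context"
    "database/sql"
    "fmt"
    "strconv"

    "github.com/go-redis/redis/v8"
    _ "github.com/go-sql-driver/mysql" // MySQL driver for database/sql
)

// Hybrid writes payments to MySQL and reads them back from a Redis cache.
// Keeping the two stores in sync is handled elsewhere, as noted above.
type Hybrid struct {
    db    *sql.DB
    cache *redis.Client
}

func (h *Hybrid) Create(payer, payee string, amount int64) (int64, error) {
    res, err := h.db.Exec(
        "INSERT INTO payments (payer, payee, amount) VALUES (?, ?, ?)",
        payer, payee, amount)
    if err != nil {
        return 0, fmt.Errorf("mysql insert failed: %w", err)
    }
    return res.LastInsertId()
}

func (h *Hybrid) Read(id int64) (payer, payee string, amount int64, err error) {
    // Payments are assumed to be cached as Redis hashes keyed by id.
    fields, err := h.cache.HGetAll(context.Background(), fmt.Sprintf("payment:%d", id)).Result()
    if err != nil {
        return "", "", 0, fmt.Errorf("redis read failed: %w", err)
    }
    amount, err = strconv.ParseInt(fields["amount"], 10, 64)
    return fields["payer"], fields["payee"], amount, err
}
Because it satisfies the same Datastore interface, the business logic never notices the swap.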
During CI/CD testing, unit tests are run with a mock implementation of the interface, so that the tests can create scenarios such as errors being returned from the datastore.
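A mock for the Datastore interface can be as small as a struct whose behaviour each test controls. This is a minimal sketch rather than code from the article, and the import path is an assumption:
package payments_test

import (
    "errors"
    "testing"

    "example.com/app/payments" // assumed module path for the business logic package
)

// mockDatastore satisfies payments.Datastore and lets each test decide
// exactly what the 'datastore' returns.
type mockDatastore struct {
    createErr error
    nextID    int64
}

func (m *mockDatastore) Create(payer, payee string, amount int64) (int64, error) {
    return m.nextID, m.createErr
}

func (m *mockDatastore) Read(id int64) (payer, payee string, amount int64, err error) {
    return "123", "456", 1000, nil
}

func TestSavePaymentPropagatesStorageErrors(t *testing.T) {
    p := payments.Payment{
        Storage: &mockDatastore{createErr: errors.New("boom")},
    }
    if err := p.SavePayment(); err == nil {
        t.Fatal("expected an error from the datastore, got nil")
    }
}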
At no point has the business logic needed to be informed of the changes to the services that are storing the data. Thus the most important piece of code in the system, the business logic, is decoupled from everything. Any changes to the way the business logic behaves, though, may affect downstream services, but this is the way it should be.
Finally, it should be noted that the drivers that have been created are not limited to implementing this business logic’s interface. As long as the method sets don’t clash, the drivers can quite happily implement interfaces from other pieces of business logic as well, allowing the nominated datastore to be used for multiple purposes (although, please be aware that this is not an invitation to share information via the database).
The strengths of Go’s interfaces match so well with the SOLID programming principles that they, in my opinion, encourage Clean Architecture, which is what I love to use when creating applications.