Monday, June 19, 2023

Use functional programming concepts to increase quality when refactoring

Sometimes when you want to implement a new feature in a system, the developer needs to figure out whether this is new functionality or a modification of what already exists. How is it going to fit in, or does it stand on its own? Chances are parts of the functionality are already there, but they're a mess. There are functions being called from other components that do mostly the same thing, just in slightly different ways. This is a great opportunity to refactor that behaviour into one abstract type so you can add some automated tests, fix the design issue (without fixing everything), and leave the code in a better place than you found it.

By using some functional programming basics, your code can read and perform better than when you found it. By recognizing these patterns and the value they bring, you will start to spot this situation in many places, and it becomes easier to do small refactors that don't slow down your current estimate but give you the peace of mind that you will save time down the road.

This post covers (IMHO) the basics of functional programming that benefit any codebase: iterative functions, OOP classes, and any ball of mud you run into.

Immutability

Immutable parameters are not changed inside the function; instead, the function uses the value to create and return a new one. Modifying an input in place is what is referred to as a ‘side effect’, and immutability avoids it.

This is essential when doing concurrent programming. The data you pass into a thread can't hold a hard reference (or any reference) to shared global state.

By default, try to keep inputs immutable. Be efficient at copying the data you need and returning a new set (or a reference to that new set). You can enforce this in Java with the final keyword; in Python, you can pass arguments into functions as tuples instead of lists.
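As a minimal Python sketch (the function name here is just for illustration), passing a tuple forces the function to build and return a new value instead of mutating its input:

    def add_item(items: tuple, new_item: str) -> tuple:
        # Tuples can't be mutated, so the only option is to return a new one
        return items + (new_item,)

    cart = ("apples", "bread")
    updated = add_item(cart, "milk")  # cart is unchanged; updated is a new tuple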

Functional Composition


When you have many functions in a file, the caller has to know all of the internal details to be able to create the desired behaviour by combining the function calls in the correct sequence.

This violates basic encapsulation, and leaving it that way will cause messy coupling between components, with pieces of code calling specific internal functions. It is easier to provide an interface with specific behaviour. There is a parallel here with the Facade pattern in OOP: provide a simplified interface for the behaviour of the component, then refactor all the references to that functionality to use the new interface. This is a great refactoring strategy for adding some functional design to your classes. Use abstract data types or interfaces to enforce the behaviour of the internals.
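As a rough sketch (the helper and function names are hypothetical), a small compose function can turn a required call sequence into one public entry point, so callers no longer need to know the internal order:

    from functools import reduce

    def compose(*funcs):
        # Chain single-argument functions left to right into one callable
        return lambda value: reduce(lambda acc, fn: fn(acc), funcs, value)

    def normalize(text: str) -> str:
        return text.strip().lower()

    def tokenize(text: str) -> list:
        return text.split()

    # One interface for the behaviour, built from the internals
    parse_query = compose(normalize, tokenize)
    tokens = parse_query("  Search The Catalog  ")  # ['search', 'the', 'catalog']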

Higher-order functions


Using higher-order functions in whatever language you are working in is really important. Why? Because you end up writing these functions anyway. So you can create your own versions with many loops and callbacks, or just use the built-in versions that come with many languages, or that are available as a library extending the language's base functionality.

The most common are map, filter, and reduce:

  • Map - apply a function to all elements in the list, returning a new list. The original list won't change, but the values in the new one will
  • Filter - remove elements from the list that don’t satisfy a certain condition
  • Reduce - apply a function to each element, and combine all results into one value
Immutability in these functions.
Map builds a new list of values, so keep the function you pass to it pure and avoid mutating the original elements inside it. Filter and Reduce only read the elements and produce a new output. With these immutable values, you could partition the list and use concurrency to run filter and reduce in parallel.
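A quick Python sketch of all three (the prices and the 13% tax rate are made up for illustration):

    from functools import reduce

    prices = [19.99, 5.00, 120.50, 42.00]

    # map: build a new list with tax applied; the original list is untouched
    with_tax = list(map(lambda p: round(p * 1.13, 2), prices))

    # filter: keep only the prices that satisfy the condition
    small = list(filter(lambda p: p < 50, prices))

    # reduce: combine every element into one value
    total = reduce(lambda acc, p: acc + p, prices, 0)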

Monoid

This has an obscure name, but it's really just a set of very fundamental algebraic rules for designing functional behaviour. Monoids have properties, and it's a good idea to follow them as guidelines when creating the functions that are used/applied in higher-order functions like map, reduce, and filter.

Closure - the type of the inputs is the type of the output. When you pass in integers, you get integers returned.

Identity - depending on the operation, the base value for the type:
  • for addition: 0 (1 + 0 = 1)
  • for multiplication: 1 (1 * 5 = 5)
  • for concatenation: the empty string ("this" + "" = "this")
Associative - grouping doesn't matter:
  • (a + b) + c = a + (b + c)
  • (a * b) * c = a * (b * c)
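A small Python sketch of two everyday monoids being used with reduce, with the identity passed as the initial value:

    from functools import reduce

    # Addition over integers: closed (int + int -> int), identity 0,
    # associative ((a + b) + c == a + (b + c))
    total = reduce(lambda a, b: a + b, [1, 2, 3, 4], 0)          # 10

    # String concatenation: identity is the empty string
    joined = reduce(lambda a, b: a + b, ["func", "tional"], "")  # 'functional'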

Recursion vs Iteration

Recursion can improve readability, but it comes at a cost at execution time: you can exhaust your stack pretty easily when dealing with large datasets. Use iteration when possible for its more predictable, linear execution.
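A minimal Python illustration of the trade-off (summing a list is deliberately trivial here):

    # Recursive version: readable, but every call adds a stack frame, and a large
    # list can blow past Python's recursion limit (about 1000 frames by default)
    def total_recursive(values):
        if not values:
            return 0
        return values[0] + total_recursive(values[1:])

    # Iterative version: one stack frame and predictable, linear execution
    def total_iterative(values):
        total = 0
        for value in values:
            total += value
        return total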


Conclusion

By using some concepts from functional programming, and being able to see these patterns emerge when refactoring code, the quality and consistency of the refactored result will improve. These are tried and true concepts that, like many patterns, will simply emerge from your work and add consistency, so your teammates will be able to read and maintain the code more easily.





Thursday, June 15, 2023

Maintainable and Portable codebases increase developer velocity by reducing complexity

When teams are developing the same codebase, teammates will bring their own opinions on best practices for code formatting and quality. Everyone has a lot of experience and curiosity, so it's good to use that knowledge to implement some simple practices that allow the team to move faster together, with more quality.

Starting with some simple tools that implement Maintainability and Portability checks, we can automate these actions to really enable development productivity. Collaborating on defining and evolving these quality attributes will have a significant effect on the quality and velocity of your projects.


Portability is the ability of your project to work on your local machine and in different testing and production environments.

Maintainability is the quality attribute that helps other people on your team work on the same project and add value without too much overhead in understanding how it all fits together.

Automation
Automating small problems helps the developer concentrate on the bigger tasks. It always helps productivity: automated checks take less time and are more consistent than manual ones.

Maintainability

Common code formatting and structure helps team members understand each other's work. This results in more effective PRs that can be reviewed and merged easily.

For a first version, add a code formatting and lint tool. The output is much easier to review when the formatting doesn't need to be puzzled out. Running a linter on your code always gives you more robust code and reduces complexity.

Type safety

For dynamic languages like Python and JavaScript, a type-checking action is really handy for increasing code quality and making functions and classes more robust. Using type hints, a tool like mypy will check the validity of the variables being used to prevent TypeError exceptions. For JavaScript, we now have TypeScript, which adds a lot of type safety on top of vanilla JS.
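A tiny sketch of what that looks like in Python with type hints (the function is hypothetical); running mypy over it flags the bad call before it can fail at runtime:

    def apply_discount(price: float, percent: float) -> float:
        return round(price * (1 - percent / 100), 2)

    sale_price = apply_discount(100.0, 15)   # fine
    broken = apply_discount("100", 15)       # mypy rejects this: "100" is a str, not a float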

Automation


Python
  • run pre-commit for everything
  • use the same hooks on the main branch of the repository when merging pull requests

Node Typescript
  • Prettier
  • ESLint

Java
  • Prettier
  • Java compilation to byte code performs the static type checks, due to the nature of the language

Using git to run the tasks

git is an amazing piece of software engineering for source control and versioning. It will also run tasks, or 'hooks', with specific functionality before and/or after a commit.
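As a hedged example, a pre-commit hook can be any executable, including a small Python script; this sketch assumes black and flake8 are the team's chosen tools (save it as .git/hooks/pre-commit and make it executable):

    #!/usr/bin/env python3
    import subprocess
    import sys

    # Example checks; swap in whatever formatter and linter your team agreed on
    checks = [
        ["black", "--check", "."],
        ["flake8", "."],
    ]

    for cmd in checks:
        if subprocess.run(cmd).returncode != 0:
            print("pre-commit check failed: " + " ".join(cmd))
            sys.exit(1)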

Maintainable Structure

When packaging application code for development, there is the application, and then there are all the artifacts around the application. Data files, unit tests, and different configurations are needed to build the app, and then there is the app code itself.

YAGNI: Beyond not putting test data into production, it really helps reduce complexity to build the leanest version of the codebase for deployment to production. This helps quality by reducing the complexity of the artifacts used to debug when something goes wrong, or to add a new feature.

Maintainable structure is more about the functional nature of the application. The programming language used to implement the system may bring some common conventions, but it's still an application that can be decomposed into a set of features.

Package by feature

Many applications have a directory structure that reflects the implementation pattern of the system architecture, and not so much the functionality it provides. Directories like "Model, View, Controller" and "Api, Data, Logic", or some variation of whatever the structure of the components is.
This gets complex, as I would have to add and edit files in many different directories to add a feature. A new SearchCatalog feature would need changes to a file in api, or a new file, and so on through the different parts of the pattern. Not all features are implemented with the same pattern, so having a strict structure can cause issues.

Instead, try packaging by the feature itself. In a team this really helps, as each team is operating in its own part of the larger system with a better understanding of the coupling between the parts.

Let's make a feature called "SearchCatalog" and implement your pattern for the data models, views, and logic classes that make the feature work. Understanding the interfaces better makes understanding the dependencies a little easier, and can also result in nice shared libraries that are used by many features.

For an API endpoint that uses some logic to read and write to a datastore, the pattern is a basic robustness pattern:

Responsibilities
  • request/response boundary
    • parameters
    • validate request
    • serialize response
  • logical objects that have behaviour
    • called from boundary object
    • unit test these with objects
  • data entities
    • model and model behaviour
  • workers, caches etc
    • tasks and clients for dependencies

Let's say we make a Python Flask API. Using files called "api.py, search.py, data.py, clients.py" gives the reader an easy time understanding what's in the directory and what it does.
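A minimal sketch of the boundary file, assuming Flask and a hypothetical search module with a find_products function sitting next to it in the feature directory:

    # api.py -- request/response boundary for the SearchCatalog feature
    from flask import Flask, jsonify, request

    import search  # hypothetical logic module in the same feature directory

    app = Flask(__name__)

    @app.get("/catalog/search")
    def search_catalog():
        query = request.args.get("q", "")
        if not query:
            return jsonify({"error": "missing query parameter 'q'"}), 400
        results = search.find_products(query)  # boundary delegates to the logic layer
        return jsonify({"results": results})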

Test

By having a functional test with an HTTP client, you can test the API boundary objects; and for any logical classes and components, the tests become clearer because they test the functionality.
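Building on the hypothetical api.py sketch above, a functional test can go through Flask's test client rather than poking at internals:

    # test_api.py -- functional tests through the HTTP boundary (pytest style)
    import api
    import search  # the hypothetical logic module used by api.py

    def test_search_requires_query():
        client = api.app.test_client()
        assert client.get("/catalog/search").status_code == 400

    def test_search_returns_results(monkeypatch):
        # Stub the logic layer so this test only exercises the boundary behaviour
        monkeypatch.setattr(search, "find_products", lambda q: ["result for " + q])
        client = api.app.test_client()
        response = client.get("/catalog/search?q=lamp")
        assert response.status_code == 200
        assert response.get_json() == {"results": ["result for lamp"]}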


Portability


When developing an application, there is going to be a need to build the app locally and deploy it to your production environment for the users. There could be testing, staging, and other environments along the way. Enabling some good practices for the configuration of the app makes developing and testing a lot easier and reduces the stress of a deployment.

  • A new person should be able to pull the repo, run the tests, and start the app with just .local.env, and hook into secrets in a vault when in the dev, staging, and production environments
  • Keep the environment configs versioned, and use the environment to select a specific file (see the sketch below)
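A minimal sketch of picking the config file from the environment (the APP_ENV variable name and the file layout are just one possible convention):

    import os

    # Default to the local file so a fresh checkout works with no extra setup
    env_name = os.environ.get("APP_ENV", "local")
    config_file = "." + env_name + ".env"   # .local.env, .staging.env, .production.env, ...

    def load_env_file(path: str) -> dict:
        values = {}
        with open(path) as handle:
            for line in handle:
                line = line.strip()
                if line and not line.startswith("#"):
                    key, _, value = line.partition("=")
                    values[key.strip()] = value.strip()
        return values

    config = load_env_file(config_file)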

Dependencies should be mocked when developing locally, or you could end up with side effects like incurring charges for an API every time you run your unit tests!
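A short sketch with unittest.mock, assuming a hypothetical search module that reaches an external catalog service through a catalog_client object:

    from unittest.mock import patch

    import search  # hypothetical module whose find_products call hits a paid external API

    def test_find_products_without_calling_the_real_api():
        # Replace the external client so the test never hits (or bills) the real service
        with patch.object(search, "catalog_client") as fake_client:
            fake_client.search.return_value = ["desk lamp"]
            assert search.find_products("lamp") == ["desk lamp"]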

Automation

Use pipelines in Bitbucket or GitHub to automatically deploy your main branch to a development server. For larger production deployments, Harness and other tools can deploy to many nodes and reduce the complexity of setting up many production instances.

Test it

How do you validate? Use an automated test to add a quality step to your deployment scripts. The deploy is finished when it's validated, not just deployed.
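A small sketch of a post-deploy validation step, assuming the app exposes some kind of health endpoint and the requests library is available:

    import sys

    import requests

    base_url = sys.argv[1] if len(sys.argv) > 1 else "http://localhost:8000"

    # Hypothetical health check; point this at whatever endpoint your app exposes
    response = requests.get(base_url + "/health", timeout=5)
    if response.status_code != 200:
        raise SystemExit("smoke test failed with status " + str(response.status_code))
    print("deployment validated")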

Testing the dependency configuration

No matter how careful you are about the portability of the app, things are going to happen with many people working on the application. Using more than just a development and a production environment will go a long way toward ironing out issues before deploying to production.

To be even safer, it's a good idea to use your load balancer in production to route only to validated instances. Enable this by deploying to a new node and smoke testing the live production instance before adding it to the load balancer.

Conclusion


By automating some good habits and standards, a nice safety net is created that allows a flow of increased productivity. Group culture is positively affected as the standards are agreed upon and evolved over time. More time is spent adding value to the product instead of wrestling with the codebase.




Friday, June 09, 2023

Consistent Software design enables quality and schedule predictability

All forms of technology have an architecture. If that design is intentional, it can enable velocity, as it is understood what is being made. If it is organic and coincidental, the vague plan will result in confusion, unpredictability, and lower-quality outcomes.

Why software design?


A coherent design gives a clear understanding of the structure and behaviour of the system. How complete does this design need to be? In civil engineering, the blueprints required to build a structure are quite detailed. Early software design followed this expectation of detail, but it was found that creating detailed designs before implementing the system really slowed the process. This became known as "Big Design Up Front" and was such an impediment to completing a project that the concepts of design were seen as slow and left behind for the sake of velocity.

Where did that lead us? We ended up making big balls of mud really quickly, and then took twice as long to debug the pile and put it into production.

There is a happy medium here, where Just Enough design provides clarity about the finished product but doesn't get lost in the details. It has become clear that asking for a firm estimate (When will this be done?) is really problematic before knowing What is being made and How it gets built.

But we are in a hurry....

When creating products, you make promises to your customers and to yourself about delivering them. It is natural to imagine you can produce more than the time and capabilities allow. Given the rush to deliver, the ‘status’ of the feature you are delivering becomes the most important question: what's blocking this from getting done? In many cases, the definition of WHAT you are making isn't clear. Until it is, there is still more investigation, experimentation, and conversation that needs to happen before it is known what is being made.

With a clear picture of what is being made, the picture of HOW that thing gets made becomes clearer too. The dependencies get understood and the complexities are known, so the plan for making it is known. At this point, the predictability of the plan becomes a little more solid. This can enable a decent estimation of the work.

What is produced?


An understanding of the behaviour and structure that satisfies the simplest requirement. This works on a whiteboard, and I have built many projects with a few people in a room with a whiteboard. It worked as long as the group was small and the deliverable was the group's alone.

As the product scope and the resulting dependencies increased, this informal process didn't scale well. There was always a need for a meeting to clear things up. This gets even more unwieldy in the world of remote work and remote meetings. The need to write things down and operate from a common source of truth is clear.

By having a standard for design, its consistency will start to enable better decisions, as the context for the implementation becomes clearer.

Formalize software design with a standardized template, but keep it really simple: functional specs and ADRs give the requirements a solid foundation that can be implemented.

How can design be a lightweight process? By never finishing it, or having the expectation that the design is complete and a deliverable on its own. Its purpose is to clarify what the real deliverable is; the implemented feature that has value for the paying customer.

Who is involved?


You need the group to build something significant. Efficient architecture is the result of efficient group communication. Good communication enables velocity when the outputs are recorded and the context is known. Collaborate as a group. Silence probably denotes confusion, not consent or agreement; so this can take a few tries before people start talking. 

So what works?

Architecture meetups on the scrum team level. 
  • This isn't a planning or status meeting, it's a discovery meeting. The group needs to discover the value of this feature and the dependencies around it. How does this new piece of functionality fit into the larger picture?

Working groups on the company level. 
  • For the quality attributes that are cross-cutting concerns (Usability, Performance, Security, Maintainability), it's a very efficient way to formalize some basic concepts and enable some consistency in the technical roadmap.

In general, writing things down and being objective about goals will align the group. Evolve your template for a technical spec, and keep removing as much detail as possible. Complexity in the documentation works for consulting, but it's the enemy of efficient design. Keep it clean and revisit it often.

Conclusion

When building a product and operating with velocity, there can be a tendency to not formalize the design for the sake of moving faster.

This can result in early wins for the demo of the implementation; but if the debt incurred does not get addressed, the complexity will slow the progress of the project.

Communicate and collaborate as a group on What you are making. When this becomes clearer, the process of how it gets built will become more predictable. This predictability makes estimation of effort possible, and the next deadline will be less of a guess and more of a plan.


Thursday, June 08, 2023

Effective Software Architecture meetings

For an effective architecture strategy to develop, there needs to be some alignment when meeting together. A meeting with no agenda or focus can seem like a waste of time. It is. If the meeting invite is “we are going to brainstorm on everything”, then meeting fatigue will set in and participants will have less engagement.

It's also essential to be clear that this isn't a status meeting; the status is on the board, the backlog, and the roadmap. Those are the outcomes of these meetings. The status should be visible without a meeting.

The common goal/output of the meeting is some decision on how to move forward, with alignment among the participants. To set the theme and make the output of a technical meeting effective, it's essential to understand the context of the meeting and the participants.

Problems and Solutions

How do you enable problem solving and know that a valuable problem is being addressed? How do you enable decision making so that there is a solution to work with, one that moves the understanding of the problem and of our technical solution forward? Communication is a lot of listening and focusing on context.

Focus on Context and Outputs

Meetings and the audience

For any meeting you have, take a moment to understand the goal of the meeting, and from that it should be understood what you are working with. It's also nice for participants to know the context, whether they have topics to bring up, want to listen in the background, or won't participate at all.

Discovery

Product discovery meetings are key to understanding the user problems behind features, and to a deeper understanding of the technical constraints for a new feature.

These could come from ideas from business, dev, or product, or be triaged from a customer issue or suggestion; but the rationale for adding to or modifying the architecture should be rooted in something that creates value in the product.

These output to the Roadmap level, product or technical; and sometimes need a prototype or more investigation to qualify the actual goal (and motivation).

Outputs

  1. Valuable problems
  2. Feature value
  3. Market direction
  4. Business Impact

Audience

  • Product
  • Sales
  • Support
  • Development
  • QA

Planning

You have discovered the valuable problems and understood the value that solving them will bring. Now it's time to plan the technical solution: creating the requirements and design for system features.

User story mapping is really helpful at this stage, as it uncovers the quality attributes needed to support the new feature. Trade-offs are also made at this time, so be mindful of any debt you are going to put on the books. Debt is useful, but like any debt it will overwhelm you if not kept in check.
As you go along, you will learn more, so this works well as a regularly scheduled meeting. In agile this is usually "Backlog grooming".

Output

  • Backlog items for functionality
  • Technical spec
  • Architecture Decision Records (ADR)

Audience

  • Development
  • QA
  • Product

Implementation

The backlog has items for the functionality, and the tech spec has the designs and decisions made. Let's build this thing! This is usually called sprint onboarding or a kick-off meeting, and it gets the project people involved. This is when estimation becomes possible, with the details of what is being made now down on paper.

Outputs

  • Implementation Epics with a definition of done
  • interfaces for implementation
  • how requirements build acceptance criteria

Audience

  • Devs
  • QA
  • project managers
  • product managers

Release

Are we releasing work to support an existing feature, or is this a new one? Release meetings can take on many forms, and again it depends on the context. A demo is an internal release, and what is in that demo can be used to form the external release. For the technical architecture, the items can be lost in favour of the functional product notes; but it's important to keep the architecture in step with the release numbers you put on the deliverable.

Version the architecture with the release

Output

  • in the release notes, link to latest requirements that created that version
  • Link docs describing features and acceptance criteria
  • diagrams of structure
  • mockups of UI and flows

Audience

  • product
  • QA
  • devs
  • devops
  • Support
  • Sales

When the meetings happen

There can be a tendency to get lots of people involved, but having the wrong people can really hurt the output. Keep the focus on the group affected. For most architecture decisions, that is usually the team making the feature, when they plan to do the work.
For architecture decisions on cross-cutting quality attributes (Usability, Security, Performance, etc.), it's important to have a working group that focuses on setting the policy (a good checklist) for the functional features to be built on.

Conclusion

When the context is made clear and the relevant participants are involved, architecture meetings can be a reliable way to produce relevant decisions. It's essential to know What we are making so we can figure out How we make it and When it happens.

By taking a moment to understand the context and the problem to be addressed, the actual people who need to participate will become clearer and the chances of a meaningful solution will increase.